I, Robot by Isaac Asimov
This is a relatively small collection of Asimov's robot short stories - as opposed to the more comprehensive collections that have been published. I believe this was his first collection of robot stories, first released as a book around 1950. They are presented in the context of a thin meta-story in which each of the original stories is framed as a pre-retirement reminiscence by Susan Calvin, the robot psychologist who appears in numerous Asimov robot stories. (To me, this thin string tying the stories together added nothing of importance, but it is brief and doesn't do anything to degrade the stories.)
Most or all of these stories were originally written in the 1940s, but there is no reason to fear you will find the immature kind of SF some may associate with that era.
Most of them are presented somewhat like mysteries in that there is some not-yet-understood situation that needs to be examined and logically explored. The culmination of the story is finding the explanation for what happened.
Most of the stories involve consequences related to the Three Laws Of Robotics, although some of these are rather obscure corners of the matter. Although the Laws Of Robotics play a role in the final story, I would say the pivotal consideration is more a question of cause and effect. In any case, these are more thoughtful stories than adventure or good guys vs. bad guys.
The stories are:
1) Robbie
The story is about an early robot designed to be a companion for a child. It deals less with the Laws Of Robotics than with human fears of robots, social pressures, and the dynamics between members of a family.
2) Runaround
A recently arrived robot on Mercury drunkenly walks in circles around a work site. Two humans' lives depend on his getting his work done. What about his task and the Three Laws Of Robotics has put the robot in this state? And what will get him out?
3) Reason
A new model of robot tries to understand the world around him. He concludes it is unreasonable to believe humans could have made him, or that the human explanations of the world could be true. Obviously, what the humans naively call "the energy converter" is The Master that created robots. How will the essential work at the energy converter get done? If its energy beam to Earth drifts even slightly off target, it could destroy areas of Earth.
4) Catch That Rabbit
A new kind of mining robot is tested on an asteroid. There's a supervising robot with 5 linked subservient robots. But there are periodic cases of bizarre (and unproductive) activity. The testers need to figure out what's happening.
5) Liar!
Starting with the implausible premise of a mind-reading robot [that wasn't even designed to read minds], puzzling things happen at the robot company. The employees take advantage of the robot's ability to read other people's minds, but there's something wrong with the results.
6) Little Lost Robot
At a special space colony, physics research is being done. Regular robots keep the experiments from being completed when they "rescue" humans from the risks of the experiments. Robots with a modified First Law Of Robotics are substituted, but problems follow.
7) Escape!
After one computer brain breaks down (presumably because of the Laws Of Robotics) while processing the math to develop a hyperspace ship, the problem is posed more carefully to another computer. It designs a hyperspace ship, but both the ship and the computer's behavior have some oddities. Why did the first computer break down if the ship could be made safe? And if the ship is safe, why the odd behavior?
8) Evidence
A rising politician is accused of being a humanoid robot on an Earth where such robots are outlawed. Along the way we learn there's not much difference between the behavior of a robot following the Three Laws Of Robotics and that of a humane person leading a very just life. We also learn something about what counts as meaningful evidence.
9) The Evitable Conflict
Computers control Earth's economies. Things are going pretty smoothly, but there are a few oddities that make the world leader suspicious. Perhaps anti-robot fanatics are up to mischief; perhaps the computers are starting to malfunction. But the experts keep telling him that all the possible avenues to error are impossible.
On re-reading these stories, I found myself more aware of the poor choices made by the supposedly expert characters - choices that lead to the challenges in the stories. The stories may be internally consistent, and if you eliminated certain poor choices, the stories would lack the challenges that make them interesting (rather than dull tellings of somebody's day at the office). Still, these characters are supposed to be experts. Take the story about the robot who uses logic to conclude he could not have been built by humans. One of the robot's fundamental premises is that a being can't make something closer to perfection than itself, and the robot is closer to perfection than humans. Nobody even attempts to challenge this assumption. Nobody asks the robot, "You mean that no matter how much you studied and tried, it would be impossible for you to build a robot with features you lack, or one more durable, or improved in any other way? You mean extraordinarily bright humans couldn't even accidentally stumble upon some inventions that some other extraordinarily bright human might eventually put together just to see what would happen, and end up with something superior to humans?"
In Little Lost Robot, an important element is a modification made to the First Law Of Robotics. In the story, humans need to do work that the robots interpret as putting humans in danger. Acting under the First Law, the robots prevent the humans from doing the work. So new robots are made without the part of the First Law that says a robot can't let a human come to harm as a result of the robot's inaction. I would think they could simply have amended the First Law to say the robot could not allow a human to come to harm as a result of inaction - unless that specific human ordered the robot to consider him not in danger. But as I said, then there wouldn't have been a story.