This is the second book in what is intended to be a trilogy. The book can be read as a stand-alone novel. Reading book 1 first may make things somewhat clearer, but Watch does include enough background material from book 1 to fill in new readers. Reading Watch may make one curious about what will happen in book 3, but the end of Watch does not leave you truly hanging.
For information on book 1 (Wake), see the review.
Watch continues the story of Caitlin, her family, and Webmind (the emergent AI on the internet). In the first book, Caitlin is blind as a result of a condition in which the eyes and brain are intact, but the nerve signals between them are unusable. A Japanese researcher provides her with a device that corrects the nerve signals, allowing her to see. Because the device is a prototype, the researcher monitors its data over the internet. An emergent AI on the internet intercepts the signal. Caitlin and the AI learn of each other and become friends.
In the second book, we're introduced to US government employees associated with WATCH (an agency that looks for internet activity that the government doesn't like).
Caitlin tells her parents about Webmind. (Initially, her parents assume Caitlin has been fooled by some internet schemer, but Caitlin is so insistent that her physicist father devises a set of questions for Webmind that convinces him. At some points in the book things seem to go too easily for a teenager, but moments like this one feel more believable.)
Caitlin's family also tells Dr. Kuroda (the Japanese researcher). The parents and Dr. Kuroda become involved in communicating with Webmind. Dr. Kuroda provides software to help Webmind access audio and video files on the internet. A microphone is added to Caitlin's "Eyepod" to let Webmind hear as well as see what Caitlin experiences. And Webmind learns how to add text messages to Caitlin's vision data stream so she can get messages from him even when she isn't at a computer.
When WATCH concludes there may be an emergent AI in the internet, they call in a military AI expert. The expert informs them that there is an established protocol which says an emergent AI should at least be isolated from the rest of the world, if not destroyed, in order to ensure it can do no harm to people. No immediate action is taken, for two reasons: (1) the president tells them to wait until he decides what to have them do, and (2) they don't know exactly where the AI is or how to act against it. When WATCH is unsuccessful in identifying what the AI is made of or which computer systems it resides on, they decide to question Caitlin, her family, and Dr. Kuroda. As a result, Webmind learns they are investigating him and presumably want to eliminate him. So when WATCH takes action, it does not come as a surprise.
The conflict raises a question that has been in SF and science for decades: what would an AI do? What can or should we do to protect ourselves from AIs that make choices harmful to humans? Unfortunately, Watch doesn't delve much into this. Webmind is depicted as friendly. Caitlin's parents express some doubts as to whether they can count on Webmind staying friendly, and Caitlin tries to treat Webmind nicely and introduce him to internet data that might encourage friendliness. But mostly, Webmind simply lacks the mental attributes of dangerous individuals; we're never really forced to think seriously about safety issues because he's a nice guy. Nor do the government forces discuss any AI programming safeguards, such as Asimov's Laws of Robotics. Sawyer doesn't offer us any real choices other than befriending emergent AIs or killing them.
A smaller subplot in the series deals with Hobo, a chimpanzee-bonobo hybrid ape at a research institute, who communicates using sign language. In this book he's given a chance to "talk" via computer with another ape at another facility. Sometime after that, Webmind connects to Hobo's computer to "talk" about what kind of life Hobo wants to live. One gets the feeling Hobo's story has a ways to go later in the series.
The book ends with what I'll call after-thoughts, for lack of a better word. These concluding remarks are on the preachy side. During the course of the book, certain concepts come up and are discussed by the characters. Those, too, can leave the impression of the author trying to persuade the reader, though generally not as glaringly as the after-thoughts.
Repeatedly, Sawyer has the characters talk and think about the book "The Origin of Consciousness in the Breakdown of the Bicameral Mind" by Julian Jaynes. It presents the theory that true human consciousness did not emerge until a few thousand years ago. I haven't read Jaynes's book, but Watch doesn't clarify what is meant in this context by full consciousness. Ancient writings suggest people operated on different assumptions and accepted frameworks that may seem illogical to us, but I don't think that constitutes a lack of consciousness. Hobo uses sign language with humans. If conversation (not just answering by rote) is possible without consciousness, it raises many questions. If we traveled to a planet with a hunter-gatherer society and a spoken language, what test would let us know whether its people were "conscious"?