I second slayton's answer.
The transition matrix is simply the table of probabilities that the system will move from one state to another.
A hidden Markov model assumes you can't actually see what the state of the system is (it's hidden). For example, suppose your neighbor has a dog. The dog may be hungry or full; this is the dog's state. You can't ask the dog if it's hungry, and you can't look inside its stomach, so the state is hidden from you (since you only glance outside at the dog briefly each day, you can't keep track of when it runs inside to eat, or how much it ate if it did).
You know, however, that after it eats and becomes full, it will become hungry again after some time (how long depends on how much it ate, but since you don't know that, it might as well be random), and that when it is hungry, it will eventually run inside and eat (though sometimes it will sit outside out of laziness despite being hungry).
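As a minimal sketch, that transition matrix for the dog might look like this; the state names and probabilities are made up purely for illustration:

```python
import numpy as np

# Hidden states of the dog; purely illustrative.
states = ["hungry", "full"]

# Transition matrix: row = today's state, column = tomorrow's state.
# Each row sums to 1. The numbers are invented for the example.
transition = np.array([
    [0.4, 0.6],   # hungry -> hungry (too lazy to go eat), hungry -> full (went in and ate)
    [0.7, 0.3],   # full   -> hungry (got hungry again),   full   -> full (still full)
])
```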
Given this system, you cannot see when the dog is hungry and when it is not. However, you can infer it from the sounds it makes. If it's whining, it's probably hungry. If it's happily barking, it's probably full. But just because it's whining doesn't mean it's hungry (maybe its leg hurts), and just because it's barking doesn't mean it's full (maybe it was hungry but got excited at something). Still, a bark usually comes when it's full, and a whine usually comes when it's hungry. It may also make no sound at all, telling you nothing about its state.
This is what the emission matrix captures: the "hungry" state is more likely to emit a whine, and likewise "full" is more likely to emit a bark. The emission matrix gives the probability of each possible observation in each given state.
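Continuing the sketch (again with invented numbers), an emission matrix for the same two hidden states might look like this:

```python
import numpy as np

# Possible sounds you might hear; probabilities are made up for illustration.
observations = ["whine", "bark", "silent"]

# Emission matrix: row = hidden state (hungry, full), column = observation.
# Each row sums to 1.
emission = np.array([
    [0.6, 0.1, 0.3],   # hungry: mostly whines, rarely barks, sometimes silent
    [0.1, 0.6, 0.3],   # full:   rarely whines, mostly barks, sometimes silent
])
```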
If you use a square identity matrix for your emission matrix, then each state will always emit itself, and you will end up with a non-hidden Markov model (i.e., an ordinary Markov chain).
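For the two dog states, such an identity emission matrix would look like this; each row puts all of its probability on the observation matching the state, so hearing the dog tells you its state exactly:

```python
import numpy as np

# Identity emission matrix: "hungry" always emits "hungry", "full" always
# emits "full", so nothing is hidden and the model is just a Markov chain.
emission_identity = np.eye(2)   # [[1., 0.], [0., 1.]]
```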