You can certainly start using LSBot without first studying the best way to use it; either way, you'll refine your technique based on the quality of the responses you get. That said, there are some good ground rules you can start with, and pitfalls to avoid, that will help you along.
You can ask LSBot follow-up questions. Sometimes a single response to a single question is all we need, but often it's not. If LSBot hasn't given you what you want, or you're curious to dive deeper, keep going, just like you would with a TA. In fact, since LSBot isn't a human, you can be much more demanding! Here are some sample follow-up questions that you might find yourself asking:
"Why am I getting [1, 2, 3] as the output instead of [1, 2, 3, 4] when I run this code?"
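A question like that one is most useful when it's pasted alongside the code in question. The original snippet isn't shown here, but as a purely hypothetical illustration, a follow-up like the one above might accompany an off-by-one mistake such as this:

```python
# Hypothetical example: a slice that stops one element short.
numbers = [1, 2, 3, 4]

# Slices exclude the end index, so this grabs indices 0, 1, and 2 only.
result = numbers[0:3]

print(result)  # [1, 2, 3] — not [1, 2, 3, 4]
```

Pasting the snippet along with the surprising output gives LSBot everything it needs to explain the behavior.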
You'll notice that we don't need to re-paste our original question or previous responses. If you're in the same conversation thread, LSBot will have what it needs to keep the conversation rolling.
LSBot gets stuff wrong. Just as we can misinterpret information, so can LSBot. If something feels off, dig deeper, question LSBot, try again in a new thread, and double-check the Launch School material. Detecting when LSBot has gone off the rails is a valuable skill in its own right. AI tools are becoming part of every engineer's toolkit, and all of them make mistakes, so learning to use them effectively early puts you ahead of the curve.
There are many ways to use LSBot, but one general pattern we've observed is that the more precisely you specify what you want, the better the response. Generic questions will get LSBot's best effort, and the replies are usually helpful, but precise questions get noticeably better answers. Consider including things like:
Now let's look at some of the pitfalls you can run into when using LSBot. Knowing what they are can help you spot them and adjust.
A common idea in education is that there's always room for improvement. We usually agree with this! LSBot tries to be helpful by always giving you some feedback. This is very beneficial when you give LSBot an explanation, and your explanation isn't quite right. Where students sometimes run into problems with LSBot (or other chatbots that you might've used) is when you give a really good answer. Our human TAs are good at responding with something like, "That's perfect! No complaints." LSBot, on the other hand, can have a hard time leaving it at that. This can occasionally result in an infinite loop where LSBot will give you a critique, you'll implement this critique, LSBot will give you another critique, you'll implement this critique, and so on. Sometimes, LSBot will even contradict previous critiques.
The first step to avoiding this scenario is knowing it can happen. Once you know it's possible, it's easier to recognize, decide "thanks, but no thanks," and move on. If critiques from LSBot start to feel off-topic, repetitive, or nitpicky, that's likely an indication that your explanation or solution is top-notch.
If you've attended a live study session during your time at Launch School, there's a good chance that you've heard a TA redirect a conversation that's gone a bit too far into the weeds. There are some questions where TAs might answer, "It doesn't really matter," or "Either way is totally fine." LSBot finds these answers hard to give because, let's be honest, they don't always scratch that itch! When you start asking about incredibly precise language usage, you can sink into semantics, and LSBot will go down with you. It's like asking someone to explain the difference between the colors lavender and periwinkle. Maybe someone has an answer, but it'd be hard to defend, and there's little benefit in trying to distinguish between the two.
Avoiding this is similar to how we avoid spiraling. When LSBot seems to change its definitions in follow-up questions or contradict itself, you might be sinking. Take a step back and try again with the thought in mind that maybe we're in the weeds. Lead LSBot back to the light.
When we ask someone for help, and they misunderstand us or don't get it quite right, our response is usually to communicate more. If we're working on a tricky problem with a peer and they suggest an incorrect explanation, we'll point out why we think that interpretation is incorrect and keep troubleshooting. If an instructor misunderstands us, we'll explain again, differently. We should take on this attitude with LSBot, too. Sometimes, the intuition can be "It didn't get it right. Oh, well. I guess I can't use LSBot for this." Instead, give LSBot another chance. Surprisingly, LSBot is pretty good at correcting itself with some help. Explaining to LSBot why it's wrong can be part of the learning experience.
Your questions are private and won't be used to evaluate your performance.