So I'm working my way through Byron Morgan's Applied Stochastic Modelling (2nd ed.) (which, by the way, I think is great so far -- more later), and I'm trying to work the examples using Inference for R. Usually, when I do this sort of thing, I put all my data in .csv (comma-separated value) files and write one big program with lots of # comments to divide up the sections.

With Inference for R, I placed the dataset into an Excel sheet attached to the container and could reference it directly, without any explicit import commands. I was also able to split the different sections of the problem into separate code blocks, which I could explicitly turn on or off. I don't know yet whether the code blocks can have dependencies, as the sections I was working with are more or less independent.

I also used the debugging facility, which will feel familiar to anyone who has used gdb: you can set breakpoints, continue, and step through a program line by line while monitoring its state (e.g. intermediate variables). I found this immensely helpful for making sure my programs ran correctly.
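For comparison, here's roughly what my usual plain-R workflow looks like -- an explicit read.csv() call plus comment headers marking off each section. (The file and column names below are made up for illustration; the real data come from Morgan's examples.)

    #### Section 1: read the data ####
    # explicit import step -- the part Inference for R handles for you
    dat <- read.csv("recapture_data.csv")   # hypothetical file name

    #### Section 2: summarise ####
    summary(dat)

    #### Section 3: fit a model ####
    # hypothetical columns, just to show the shape of a section
    fit <- glm(recaptured ~ weight, family = binomial, data = dat)
    summary(fit)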
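And on the debugging side: base R does have its own interactive debugger with the same basic moves (break on entry, step, continue, inspect variables), it's just less pleasant to drive than a point-and-click one. A minimal sketch, using a made-up likelihood function:

    # a toy function to debug: a normal negative log-likelihood
    neg_log_lik <- function(theta, x) {
      mu <- theta[1]
      sigma <- exp(theta[2])
      -sum(dnorm(x, mean = mu, sd = sigma, log = TRUE))
    }

    debug(neg_log_lik)               # break on entry, like a breakpoint at line 1
    neg_log_lik(c(0, 0), rnorm(50))  # drops into the Browse[2]> prompt
    # at the prompt: 'n' steps to the next line, 'c' continues, 'Q' quits,
    # and typing a variable name (e.g. sigma) prints its current value
    undebug(neg_log_lik)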
It's a bit of a change in mindset from my normal R development, but I could definitely get used to it, and I think it could make much of my work in R more efficient. More later.