Biostatistics, clinical trial design, critical thinking about drugs and healthcare, skepticism, the scientific process.
Friday, December 25, 2009
Seasonal silliness with my favorite open source statistical package.
Friday, November 20, 2009
My implementation of Berry and Berry's hierarchical Bayes algorithm for adverse events
I've been working on this for quite some time (see here for a little background), so I'm pleased that it looks close to done, at least as far as the core algorithm goes. It uses global variables for now, and I'm sure there are a couple of other bugs lurking, but here it is, after the jump.
Working on a drug safety project
In order to move some of my personal interests along, I have been trying to implement the methodology found in Berry and Berry's article Accounting for Multiplicities in Assessing Drug Safety. This methodology uses the MedDRA hierarchy to improve the power to detect damage to a particular organ system. (The drawback, of course, is that MedDRA system organ classes do not perfectly correspond to the biology.) Apparently some other groups have already implemented it, but those implementations are hiding in paid software or on people's local drives, so doing my own implementation in R is a good learning experience.
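To make the structure concrete, here is a toy sketch of my own (not the article's algorithm, and not my full implementation) that simulates adverse-event counts from a simplified version of the model: body systems, preferred terms within each system, a control log-odds gamma, and a treatment effect theta with a point mass at zero. All the numbers are invented for illustration.
set.seed(1)
simulate_bb <- function(n_sys = 5, n_ae = 4, N_c = 200, N_t = 200) {
  mu_gamma <- rnorm(n_sys, -3, 0.5)   # system-level mean control log-odds
  mu_theta <- rnorm(n_sys,  0, 0.5)   # system-level mean treatment effect
  pi_b     <- rbeta(n_sys, 1, 1)      # chance an AE in the system has no effect
  ae <- expand.grid(sys = seq_len(n_sys), term = seq_len(n_ae))
  ae$gamma <- rnorm(nrow(ae), mu_gamma[ae$sys], 0.3)   # control log-odds per term
  null_ae  <- rbinom(nrow(ae), 1, pi_b[ae$sys])        # point-mass component
  ae$theta <- ifelse(null_ae == 1, 0,
                     rnorm(nrow(ae), mu_theta[ae$sys], 0.3))
  ae$X <- rbinom(nrow(ae), N_c, plogis(ae$gamma))             # control counts
  ae$Y <- rbinom(nrow(ae), N_t, plogis(ae$gamma + ae$theta))  # treatment counts
  ae
}
head(simulate_bb())
Fitting the actual model means putting the hierarchical priors on the gammas and thetas and sampling them with MCMC, which is where the real work (and the bugs) live.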
I've been working on this project for some time now, off and on. Well, I've been making progress, and I'll share the results when I'm done. I'd also like to implement some of the other similar algorithms in this area, including the Poisson model that accounts for multiple occurrences of an adverse event, and a recent methodology that looks for "syndromes" (i.e. occurrences of groups of specific events, all of which arise within a short time) and "constellations" (where the time restrictions are relaxed).
Thursday, October 15, 2009
Free copy of Elements of Statistical Learning
The Stanford school of data mining has posted a free PDF of The Elements of Statistical Learning at the book's website. It's an excellent book, describing (and going a long way toward unifying) everything from regression to smoothing, tree-based methods, clustering, and so forth.
Wednesday, September 23, 2009
Beginning with the end in mind when collecting data: having data vs. using data
One of Stephen Covey's famous phrases is to begin with the end in mind. In my own work planning clinical trials, I have found that this adage is ignored when data collection and databases are planned. Data collection is planned so that we have data, rather than so that we can use it.
I'll give a recent example. I was asked to calculate the number of days a person was supposed to take a drug. We had the start date and end date, so it should have been easy to do end - start + 1. However, to complicate matters, we were asked to account for days when the investigator told the subject to lay off the drug. That information was collected as free text, so, for example, it could show up in any of the following forms (a sketch of the calculation we actually wanted appears after the list):
- 3 days beginning 9/22/2009
- 9/22/2009-9/24/2009
- 9/22-24/2009
- Sept 22, 2009 - Sept 24, 2009
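Had the off-drug days been collected as structured fields, the whole calculation would have been a one-liner. Here is a minimal sketch of what we wanted to compute; the data frame and column names are made up for illustration.
doses <- data.frame(
  start_date = as.Date(c("2009-09-01", "2009-09-10")),
  end_date   = as.Date(c("2009-09-30", "2009-09-20")),
  off_days   = c(3, 0)   # days the investigator told the subject to hold the drug
)
doses$days_on_drug <- as.numeric(doses$end_date - doses$start_date) + 1 - doses$off_days
doses
Instead, someone had to decipher every one of those free-text variations before we could even get to this arithmetic.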
But we should not have had to. One person reviewing the data collection with the knowledge that this data would have to be analyzed would have immediately and strongly recommended that the data be collected in a structured format for the statistician to analyze at the end of the trial.
I note with great interest that this problem is much wider. This blog post suggests a possible reason: problems of the past had to do with hidden information, while modern problems have to do with information hidden within data that is in plain sight (a hypothesis of Malcolm Gladwell and probably many others). That is, in the past, having the data was good enough. We did not have the space to store huge amounts of data, and certainly not the processing power to sift through all of it. Now we have the storage and the processing power, but our paradigm of thinking about data has not kept up. We are still thinking that all we need is to have it, when what we really need is to analyze it, discard what's irrelevant, and correctly interpret what is there.
And that's why Hal Varian regards statistics as the "sexy job" of the next decade.
Wednesday, August 26, 2009
The Bayesian information criterion (BIC) doesn't make sense to me
For fitting a model, several different criteria can be used. The first is the Akaike information criterion (AIC), which is basically -2*loglikelihood + 2*(number of parameters). So if you add a parameter to the model, the penalty increases by 2, and you would need a commensurate decrease in -2*loglikelihood (i.e., the loglikelihood would have to increase by at least 1) to make the addition worth it.
The BIC penalizes the -2*loglikelihood by (number of parameters)*log(sample size). So for large sample sizes, adding a parameter to the model requires improving the loglikelihood by a lot more. But doesn't a richer sample allow you to explore more parameters in the model? So precisely in the case where you are able to explore more parameters, the BIC pushes you to use fewer. That doesn't make a lot of sense to me, although the loglikelihood does take on a larger range of values with a larger sample, since it sums over the sample.
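Here is a small illustration of the point, using nothing beyond base R: one useless extra covariate costs a fixed 2 points of AIC but log(n) points of BIC, so the BIC hurdle grows with the sample size. The simulated data and variable names are purely illustrative.
set.seed(42)
compare_ic <- function(n) {
  x1 <- rnorm(n); x2 <- rnorm(n)
  y  <- 1 + 0.5 * x1 + rnorm(n)          # x2 is pure noise
  small <- lm(y ~ x1)
  big   <- lm(y ~ x1 + x2)               # one extra (useless) parameter
  c(n = n,
    delta_AIC = AIC(big) - AIC(small),   # penalty of 2 per added parameter
    delta_BIC = BIC(big) - BIC(small))   # penalty of log(n) per added parameter
}
round(rbind(compare_ic(50), compare_ic(5000)), 2)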
Thursday, July 30, 2009
Meetings and makers
Paul Graham has a very interesting article on the schedules of the manager and the maker. As someone expected to perform both functions at the same time, I find the tension between the two rather intense. I often end up asking whether I will ever be able to do any of the work I have to meet about.
Bonus: Stephen Dubner's take. He's in a similar position, only serially.
Saturday, July 11, 2009
Causal inference and biostatistics
I've been following the discussion on causal inference over at Gelman's blog with quite a bit of interest. Of course, this is in response to Judea Pearl's latest book on causal inference, which differs quite a bit from the theory that has been put forward by Donald Rubin and his colleagues for the last 35 years or so.
This is a theory that I think deserves more attention in biostatistics. After all, it goes back to the root of why we are studying drugs. Ultimately, we really don't give a damn about whether outcomes are better in the treated group than in the placebo group. Rather, we are interested in whether we are able to benefit individuals by giving them a treatment. In other words, we are interested in the unknowable quantity of what each person's outcome would be if they were treated and what it would be if they were not. If there's an improvement, and it outweighs the (unknowable) risks, the drug is worthwhile. The reason we are interested in the outcomes of a treated group and a placebo group is that the comparison is a surrogate for this unknowable quantity, especially if you use the intention-to-treat principle. However, as mentioned in the linked article and in Rubin's research, the intention-to-treat principle fails to deliver on its promise despite its simplicity and popularity.
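A toy simulation makes the distinction concrete. Each subject has two potential outcomes, only one of which is ever observed; the randomized comparison recovers the average of the individual effects, never any individual's own effect. All of the numbers below are invented.
set.seed(7)
n   <- 1000
y0  <- rnorm(n, mean = 50, sd = 10)   # outcome if untreated
tau <- rnorm(n, mean = 2,  sd = 5)    # each person's individual treatment effect
y1  <- y0 + tau                       # outcome if treated
z   <- rbinom(n, 1, 0.5)              # randomized assignment
y   <- ifelse(z == 1, y1, y0)         # we only ever see one of the two outcomes
c(true_average_effect      = mean(tau),
  observed_group_difference = mean(y[z == 1]) - mean(y[z == 0]))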
Some clinical trials are now being run with causal inference as a central part of the design. Tools such as R and WinBUGS and Bayesian concepts now make this logistically feasible. Further advances in statistical handling of partial compliance to treatment, biological pathways of drugs, and the intention to treat principle itself make causal inference look much more desirable by the day. It's really only inertia caused by the popularity and (apparent) simplicity of intention to treat that makes this concept slower to catch on.
Wednesday, July 1, 2009
PK/PD blogging
Well, it seems that for any topic, there is someone willing to blog about it. In this particular case, that is a very good thing. Via Derek Lowe's excellent blog, I found someone blogging on pharmacokinetics and pharmacodynamics (PK/PD). Drug development is very inefficient as it stands right now, and PK/PD modeling and simulation is one up-and-coming way to make it more efficient. I'll look forward to seeing what the author has to say in the coming weeks.
Sunday, June 21, 2009
Excellence in reporting statistics
Sharon Begley of Newsweek received the Excellence in Statistical Reporting Award - Statistical Modeling, Causal Inference, and Social Science
I have to say, I didn't even know the ASA (a professional organization to which I belong and in which I participate) gives out awards for reporting. With all the bad reporting involving statistics, it's refreshing to see someone make a real effort to help the public make sense of it all. Way to go, Sharon!
Thursday, June 18, 2009
Test post from Inference for R
I am testing out the Inference for R blogging tool.
a <- 3
b <- 3
c <- a + b
print(c)
[1] 6
Pretty neat.
Sunday, June 14, 2009
Graphing the many dimensions of gay rights
Gay Rights are Popular in Many Dimensions - Statistical Modeling, Causal Inference, and Social Science
In addition to having an interesting message, the graph in the article is the best-executed I've seen in a long time. The data-to-ink ratio is extremely high, and the sheer amount of data presented is astounding. Yet the graph is clear and easy to read.
Friday, June 12, 2009
The IN VIVO Blog: Pfizer Deceives, While GSK Shines
Dear Pfizer:
Cut it out. Really. You're making it harder for everybody, including those of us who are honestly trying to get drugs to market for the patients. You're the 800-pound gorilla, and you need to take on your responsibility as a role model, not play the big spoiled kid who tricks the teacher.
Monday, June 1, 2009
The future of data visualization stupidity
Ok, I don't know where to begin. The x-axis, I guess. The really funky scaling on the x-axis distorts the areas in this graph, so even if they meant something you couldn't make any inferences from them. The y-axis doesn't exist, making the areas meaningless even if you could rescale the x-axis. And even if you could find a common y-axis scale, there is no differentiation between historical data and projection, though perhaps you could be astute enough to figure that out (no uncertainty intervals, though).
But there's an even more fundamental problem. The graph is not about information at all, but rather about media; it is about information to roughly the same degree that it contains any.
But you can take it even further. It looks like the poor "local marketplace" finally died out in 1998. But that ignores local marketplaces created by, say, social networks. Television stations have websites, blogs, and many even have social network sites. Newspapers and magazines have websites, blogs, and even social networks. Drawing such a sharp distinction in the graph above misses a fundamental point about information media: the lines are blurring, and will likely stay blurred or blur even more in the future.
All in all, a useless graphic.
Thursday, May 21, 2009
Is it just me, or is Eli Lilly asking for a torcetrapib-like epic fail?
From Lilly's press release:
INDIANAPOLIS, May 21, 2009 /PRNewswire-FirstCall via COMTEX News Network/ -- Eli Lilly and Company (NYSE: LLY) today announced it will begin enrolling patients this month in two separate but identical Phase III clinical trials of solanezumab(i), previously referred to as LY2062430, an anti-amyloid beta monoclonal antibody being investigated as a potential treatment to delay the progression of mild to moderate Alzheimer's disease. The trials, called EXPEDITION and EXPEDITION 2, will each include a treatment period that lasts 18 months and are expected to enroll a total of 2,000 patients age 55 and over from 16 countries.
Ok, fine. I'm sure the Lilly solanezumab team is excited and nervous, as this is their moment in the spotlight. But then later on:
Alzheimer's disease theory suggests that amyloid beta clumps together and eventually kills brain cells. Solanezumab binds specifically to soluble amyloid beta and thereby may draw the peptide away from the brain through the blood. In short-term clinical studies, solanezumab appeared to have dose-dependent effects on amyloid beta in blood and cerebrospinal fluid. The clinical studies to date have been too short to evaluate any potential delay in the progress of Alzheimer's disease.
Ok, hit the brakes. I am acquainted with some of the statisticians over at Lilly, and they are very bright people who have contributed a lot to the field. I can't help but think they are more nervous than the people who made the decision to run with a long-term endpoint in Phase 3 based on a short-term endpoint in Phase 2. I've known a couple of epic fails falling into the same category. On top of that, the theory behind the drug, while still king of the hill, is starting to be thrown into doubt, and Alzheimer's is a graveyard anyway. (Just read this series to see the associated problems.)
It may be standard practice in some circles to run a large trial on a long-term endpoint based on a shorter-term endpoint or a biomarker (or even two), but, while I hope this trial succeeds (we definitely need an effective Alzheimer's treatment), I can't put my money on it at this point. I wish them luck.
Tuesday, May 19, 2009
Deep thought of the day
Biostatisticians should be involved much more than we currently are in forming the data collection strategy for clinical trials. Too often we are left at the margins of case report form design, and so we have to deal with the consequences of decisions made by people who don't live downstream of the data collection process.
I have a suspicion that clinical trials isn't the only place where this principle applies.
Sunday, May 17, 2009
New edition of Jennison and Turnbull
I just found out that there will probably be a second edition of Jennison and Turnbull's book on group sequential designs (with a slightly different title) coming out next year. Rock on.
Tuesday, May 12, 2009
(Pharmaceutical) SAS programmers
I've seen more SAS programmers working on short-term projects than in any other field in the CRO/pharma/biotech industry. I don't know how they do it.
Monday, May 11, 2009
I'm all for studying comparative effectiveness, but ...
Eye on FDA explains very well my concerns with comparative effectiveness research.
Sunday, April 5, 2009
Inference for R: part 2
So I'm working my way through Byron Morgan's Applied Stochastic Modelling (2nd ed) (which, by the way, I think is great so far; more later), and I'm trying to work the examples using the Inference for R studio. Usually, when I do this sort of thing, I put all my data in .csv (comma-separated value) files and write one big program with lots of #'s to divide up the sections. With Inference for R, I placed the dataset into an Excel file attached to the container and was able to call it automatically without any explicit import commands. I was also able to divide the different sections of the problem into different code blocks, which I could explicitly turn on or off. I don't know yet whether code blocks can have dependencies, as the sections I was working with are more or less independent. I also used the debugging function, which should be familiar to anyone who has used gdb: you can set breakpoints, continue, and step line by line through programs while monitoring their state (e.g., intermediate variables). This I found immensely helpful in making sure my programs ran correctly.
It's a bit of a change in mindset from my normal R development, but I could definitely get used to it, and I think it could make much of my R development more efficient. More later.
Thursday, April 2, 2009
Inference for R: first impressions
Inference for R is a new development environment for R. I think it is just for Windows at this point. Not only does it integrate a decent editor, conditional execution of code, and R, but it also improves data management by packaging R functions, datasets (manipulated through Excel 2007 or 2003), and R "code blocks" into one "container." It also, optionally, integrates R with MS Word or Excel (with PowerPoint integration promised "soon"), effectively solving some issues I've been having, albeit not in a particularly lightweight way.
I'll be trying the product out, and will report back here.
Wednesday, March 25, 2009
Challenges in statistical review of clinical trials
The new 2009 ASA Biopharm newsletter is out, and the cover article is important not for the advice it gives to statistical reviewers (I'm assuming the point of view of an industry statistician), but for a glimpse into the mindset of a statistical reviewer at the FDA. Especially interesting is the use of similar results in a trial as a warning sign for potential fraud or misrepresentation of the actual data.
Friday, March 6, 2009
I was wrong: SAS does have decent random number generation
Here and in other places I've been dissing SAS's random number generation capabilities as unsuitable. Well, it turns out that I'm at least half wrong. If you use ranuni or the other ran___ series of random number generators, you still have the generator with period 2^32 - 1. SAS's implementation is top of the line for this class of generators, but the class is unsuitable for anything but the simplest of tasks (or teaching demonstrations). Serious modern clinical trial simulation tasks require a period of at least 2^64.
Enter SAS's RAND function. RAND takes a string argument that identifies the distribution (e.g., uniform or normal), followed by enough numerical parameters to identify the member of that class (e.g., normal takes either 0 or 2 numerical parameters: 0 parameters gives you an N(0,1) distribution, and 2 parameters specify the mean and standard deviation). The special thing about RAND is that it is based on the Mersenne twister algorithm, which has a period of 2^19937 - 1 and very good "randomness" properties.
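For what it's worth, the Mersenne twister is also the default generator in base R, so the analogous setup there looks like this (an R aside, not SAS code):
RNGkind()                     # first element is typically "Mersenne-Twister"
set.seed(20090306)            # reproducible stream from the Mersenne twister
runif(3)                      # uniform draws
rnorm(3, mean = 0, sd = 1)    # normal draws, parameterized by mean and SD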
So I hereby recant my criticism of SAS's PRNG capabilities.
Simplicity is no substitute for correctness, but simplicity has an important role
The test of a good procedure is how well it works, not how well it is understood. -- John Tukey
Perhaps I'm abusing Tukey's quote here, because I'm speaking of situations where the theory of the less understood methodology is fairly well understood, or at least fairly obvious to the statistician from previous theory. I'm also, in some cases, substituting "how correct it is" in place of "how well it works."
John Cook wrote a little the other day on this quote, and I wanted to follow up a bit more. I've run into many situations where a better-understood method was preferred over one that would have, for example, cut the sample size of a clinical trial or made better use of the data that was collected. The sponsor simply wanted to go with the method taught in a first-year statistics course because it was easier to understand. The results were often analysis plans that were less powerful, covered up important issues, or were simply wrong (i.e., an exact answer to the wrong question). It's a delicate balance, especially for someone trained in theoretical statistics who is corresponding with a scientist or clinician in a very applied setting.
Here's how I resolve the issue. I think the simpler methods are great for storytelling. I appreciate Andrew Gelman's tweaks to the simpler methods (and his useful discussion of Tukey as well!), and I think basic graphing and estimation methods serve a useful purpose for presentation and for first-order approximations in data analysis. But in most practical cases, they should not be the last effort.
On a related note, I'm sure most statisticians know by now that they will have the "sexiest job" of the 2010s. The key will be how well we communicate our results. And here is where judicious use of the simpler methods (and creative data visualization) will make the greatest contributions.
Monday, January 5, 2009
Best wishes, Ed!
Ed Silverman, the man behind Pharmalot, is closing up shop. I read his site nearly every day, and often got news from there days before it would appear in other outlets. I wish Ed all the best on his next endeavor, and I'm sure we'll be hearing from him again.
Update: via Derek Lowe, we now know where Ed is going.
Sunday, January 4, 2009
O'Brien-Fleming designs in practice
It seems that the O'Brien-Fleming design is the most popular of all group sequential clinical trial designs. This particular strategy represents over 80% of all adaptive clinical trials I have been involved with, and I've been involved with quite a few. Of course, there are tweaks to this design. For example, you can run a trial as a two-sided O'Brien-Fleming (so that you stop early if you show exceptional benefit or harm) or as an efficacy-futility design. For the efficacy-futility design you can use an "inner wedge" strategy, so that you continue the trial in the case of moderate benefit or harm, or you can use a one-sided efficacy-futility design, so that you continue in the case of moderate benefit but stop in the case of futility or any apparent harm. You can design the trial as a classical O'Brien-Fleming design or as a Lan-DeMets approximation using spending functions.
Most of these decisions, despite the combinatorial explosion of choices, are straightforward because of ethical and logistical constraints. The main choices to be made are the number of interim analyses and, in the case where there are just one or two, the timing of the analyses. Statistically, this design is easy to implement, though there are several "gotchas" that make me want someone experienced in the methodology to at least supervise the statistics. (Running software such as the SAS routines or East, good as those packages are, won't suffice.) After the design is done, the fun (or the hard part, depending on your point of view) begins. Other articles in this blog cover some of the finer details of design. (Just click the tags to find related entries.)
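For the design calculation itself, here is a minimal sketch of the kind of call involved, assuming the gsDesign package in R (my choice for illustration, not the only tool, and the numbers are purely illustrative):
library(gsDesign)
d <- gsDesign(
  k         = 3,       # two interim looks plus the final analysis
  test.type = 2,       # two-sided symmetric: stop early for benefit or harm
  alpha     = 0.025,   # one-sided type I error
  beta      = 0.10,    # 90% power
  sfu       = "OF"     # classical O'Brien-Fleming-style efficacy boundary
)
d$upper$bound          # z-value boundaries at each look
d$n.I                  # sample size at each look, relative to a fixed design
The boundary values are large at the early looks and close to the usual critical value at the final look, which is exactly why the O'Brien-Fleming approach is so palatable to sponsors and regulators.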
The first decision is how to implement the interim analyses, and exactly who should know what information at each interim analysis. In Phase 2 this isn't as big a deal as it is in Phase 3, where regulatory authorities tend to be very picky. If subjects' treatment codes, or even the results of the analysis based on those codes, are revealed publicly at the interim analysis stage, the trial may be compromised. The greatest fear is that the sponsor will make unethical adjustments to the trial based on the interim analysis, but this is not the only concern. If word gets out that the treatment is "successful" or "not successful" at the interim analysis, and the trial is supposed to continue, then potential subjects may refuse to give consent because they want to be guaranteed treatment (if successful) or do not want to receive a futile treatment. Therefore, the most common way to implement the interim analysis is to use an independent board consisting of at least a biostatistician and a clinician who do not otherwise make decisions in the trial. These individuals analyze the unblinded data and make recommendations based on the results. The sponsor receives only the recommendations, which usually amount to continue the study, terminate the study, or modify the study.
Once the interim analysis process is chosen (including what information is released), the clinical trial operations and data management groups also have to prepare for the interim analyses. Clinical study monitors need to go to the clinics and verify all interim data to make sure it is consistent with the investigator's assessment, just as if the interim analysis were the study's final analysis. Data management has to enter and verify the data in the database. Then the independent statistician (i.e., the one whose only job related to the study is running the interim analysis) needs to run and verify the analysis. Electronic data capture (EDC) usually makes this process faster and easier, shortening the lag between the cutoff date for the interim data and the analysis (and recommendations). The minutes from the review of the analysis need to be archived, but sealed until the end of the study. In my experience, the time between data cutoff and the interim analysis is very busy and very hectic, whether EDC is used or not.
Of course, the above logistical considerations apply to any sort of group sequential, adaptive, or sequential Bayesian design. If a sequential trial runs to the maximum sample size, then it is more expensive than the fixed sample size counterpart because of the added planning in the beginning and the additional effort in the middle. These designs can show their strengths when they terminate early, however.
Sequential and adaptive designs have several subtle caveats that I will address in the coming posts.
Thursday, January 1, 2009
New year's resolution
A minor resolution is to be a little more attentive to this blog.
Happy New Year!