Tuesday, August 30, 2016

Top down proteomics on a CE-Orbitrap!


Lots going on in the world of capillary electrophoresis (CE)-electrospray these days! (Sometimes this is referred to as capillary zone electrophoresis (CZE), as in this new paper from Yimeng Zhao et al.)

In this study, this group fractionates an intact yeast proteome via RPLC, couples their own CE source to a QE HF, and sees how many proteoforms they can get.

Their in-house CE system has some serious run time capabilities, and it looks like they were doing 40-minute runs. They end up detecting over 3,000 individual proteoforms at the MS1 level and are able to get MS/MS confirmation on 580 of them.

You could argue that is a lot of run time for 580 proteoforms. That is 15 hours or so, but it shows the potential that optimized CE has for top-down proteomics!

Monday, August 29, 2016

2D-gels show there are two different mechanisms of skin aging


It is no secret that exposure to UV increases the appearance of skin aging, but don't tell Tanning Mom (video link).

Jia-You Fang et al. decided to use proteomics to take the process apart, and it looks like we're looking at two distinct processes that just happen to look alike from the surface.

They started with an animal model and compared the protein levels induced by natural aging versus those induced by UV exposure using a 2D-gel based system with silver staining. The differential spots were determined by fancy-sounding spot-analysis software, and the interesting spots were cut out and digested for MALDI-TOF peptide mapping.

They observed some key differences, including the involvement of ubiquitin in the natural aging process, where it didn't play a role in the UV-induced one. Now that I think of it, it seems naive to assume that both would run on the same mechanisms -- but I found the abstract surprising enough to read this short paper.


Saturday, August 27, 2016

hEIDI -- downstream processing for massive Mascot-processed datasets.


Does your data processing revolve around a central and powerful Mascot server, but you want to compile massive datasets in a streamlined manner?

Maybe you should check out hEIDI. It is described in this new paper in JPR from A.M. Hesse et al.


To be perfectly honest, at first glance it definitely looks like you'll need a good bioinformatician to put this together, even from the downloadable tools here. (I tried, but very rapidly got to a window that seems to assume a lot of pre-existing knowledge. I'm gonna assume I'm dumb and not that the authors of the instruction manual left out several critical pieces of information necessary to get going. Honestly, the safer assumption!) But the output seems pretty powerful, and it kicks everything out in a format that automatically meets the MIAPE standards.


Fading plastic


What's up with that LCQ...or that 3200? Was it always yellow/brown?

Turns out it is a consequence of the plastic that was in use. Check out how extreme the difference is in these Super Nintendos!


There is a cool article on this here.

Friday, August 26, 2016

Ummm...gene name errors are widespread in the scientific literature....

Okay....so you've probably run into this before. Honestly, I would be really surprised if you haven't, but if this paper is correct, Excel autoformatting is a much bigger problem than I ever guessed...

Ever sorted your output list and ended up with something weird like this?


Umm...why are there a bunch of dates in September at the top of my output sheet? And why do they have quantification values?

Cause if Excel sees the gene identifier for Septin 3, SEP3, it's gonna assume you were just lazy and didn't feel like writing the date out correctly, and it'll fix it for you. Which is all well and good, cause septins are super boring and don't really do anything...wait...they're GTPases! Okay, no problem. We'll just put an apostrophe in front of it, 'SEP3', and everything is okay. Everybody else do that and we're fine (except...if you have to convert it to text and back, then it does it again).

This isn't the only one. Alexis knew several off the top of her head when ABRF Tweeted this paper yesterday, so it's affecting the nucleotide bioinformaticians as well.

The title is flashy!


Sounds alarmist, right? How bad could it be? Shockingly, surprisingly bad!

They pulled thousands of published papers and supplemental files and looked for genes with annotation mistakes that could be directly attributed to Excel autocorrects (autocowrecks?). They found hundreds of affected supplemental files per year in the relatively small list of journals they looked at.

The journal average is about 20% or so: about 20% of the supplemental data found in leading genomics journals in the last 10 years had some sort of Excel-linked mess-up in it. The sample size was smaller and the number of supplemental tables is always bigger in the biggest journals, but the journal with the highest percentage of spreadsheet autoconversion mistakes? Something called "Nature" -- with over 30% of files showing these issues.
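If you want to screen your own supplemental tables the way the authors did, it's easy enough to script. Here's a minimal sketch in Python (the file and column names are hypothetical; the real-world conversions look like SEPT2 -> 2-Sep and MARCH1 -> 1-Mar):

```python
import csv
import re

# Patterns Excel produces when it "fixes" gene symbols:
# "2-Sep", "Sep-2", "1-Mar", or full dates like "9/2/2016".
DATE_LIKE = re.compile(
    r"^(\d{1,2}[-/](Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)"
    r"|(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[-/]\d{1,2}"
    r"|\d{1,4}[-/]\d{1,2}[-/]\d{1,4})$",
    re.IGNORECASE,
)

def flag_mangled_genes(path, gene_column="Gene"):
    """Print any gene identifier that looks like an Excel date."""
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle, delimiter="\t"):
            value = (row.get(gene_column) or "").strip()
            if DATE_LIKE.match(value):
                print(f"Suspicious gene name: {value!r}")

flag_mangled_genes("supplemental_table_1.txt")  # hypothetical file
```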

The solution these authors suggest after all that searching? Databases!

No thanks! I'll turn off all the autocorrect and autofill functions, like this!

Go to File; Options is at the bottom, and this window will pop up.

Important: Change your UserName to something funny so that people will see it as the author of any spreadsheet you send them (they'll see it rarely enough that it will stay funny for a while). Then go to the Proofing menu!


Then open the autocorrect options and turn a bunch of stuff off!



If you don't need it, turn it off!

Shoutout to the ABRF forum that turned this up and @ABRF for Tweeting the paper link!

Thursday, August 25, 2016

A researcher's guide to mass-spec based proteomics!


I can say with absolute certainty that I'm not the only person who considers the guy in the picture above to be kind of a personal hero. And not just because other people in that category occasionally send me pics like the one above late at night.

This new paper (Epub ahead of print as of early this AM) does an even better job of explaining why. While the title might seem kinda dull, the topic of this paper centers on probably the most common conversation I have during my day job (3 times yesterday, and twice with people with really impressive titles). That question is -- how do we translate the brilliant stuff mass spectrometrists are doing into what biologists care about?

Honestly, my entire career has been based on being kinda okay at both mass spec and biology and talking a LOT, so this is dear to my heart.

How does this paper help?

WILEY, DON'T SUE ME!! Please see disclaimer! I'll take it down, but I'm directing traffic to you for free.

By breaking this stuff down to basics as shown in the picture above. How clear is that!??!

Is it the most original paper ever written? Nah... But it sure is approachable, extremely well-written, and breaks a lot of this stuff down into more palatable little blocks without leaning heavily on math that maybe us biologists weren't required to take. I highly recommend downloading this paper!

While I'm on this topic, I'm going to self-promote a little. I've prepared a seminar on something similar. You could call it an "Orbitrap physics for biologists day". It originally started as a way to help collaborators and core lab customers at JHU ask better questions, but has grown into something a little larger.


If you are on campus at the NIH, this is 2 metro stops up at Twinbrook Lane. We're working on having the meeting recorded/telecast and made available, but we don't have details yet.

If you are local and want to attend you can register for this meeting here. 

You bet I'll be stealing borrowing and fully citing some of the beautiful and concise illustrations and explanations from this new paper as improvements to this workshop material!




IMSC


Thanks to Twitter, I've been able to follow along a little here and there on what is going on at IMSC in Toronto. Though...no thanks to @theIMSF, cause they haven't tweeted in over 4 years.... but #IMSC2016 and @ScientistSaba have been great for conference highlights.

This reminded me to check out the IMSF website, which has some great resources including direct links to at least 30 other international mass spectrometry groups. This is super useful for seeing if you can leverage your 2017 conference budget to help realize your life-long dream of celebrating Halloween in Switzerland...for example... ;)


Wednesday, August 24, 2016

Convert iTRAQ labeled spectral libraries to TMT (or reverse it!)


Yeah! Been waiting to talk about this one!

Paper first: It is from (Jane) Zheng Zhang et al., out of Steve Stein's lab (and some guy named Markey was involved as well). It's in this month's JPR.

Spectral libraries are faster than general search algorithms and they give great levels of confidence. This is because you aren't looking at theoretical spectra generated on the fly. You are either looking at real observed MS/MS spectra or you are looking at spectra that are informed by real-life MS/MS spectra. (There are fully in silico developed spectral libraries out there, too, but those don't count here.)

Downside? You have to have a darned spectral library that is relevant to your experiment. For example, you know the hundreds or thousands of iTRAQ-labeled cancer cell line runs people have put in repositories? Great data to pull spectral libraries from -- if you are doing iTRAQ. If you want to use TMT so you can have more channels, you're out of luck, cause you can't search that data against any of those libraries...

Until now!

What this study shows is that -- HEY! -- the peptide backbone fragments the same way whether it's TMT or iTRAQ labeled. And if you just pull the spectral library and convert (clean) it -- keep the peptide ions that will stay (and are experimentally observed), and mass-shift the ones that will change thanks to the tag -- it works almost as well as running a sample with the correct reagent and original library!
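The arithmetic behind that shift is simple enough to sketch. This is just the idea, not the NIST implementation -- every ion moves by the tag-mass difference times the number of labels it carries (tag masses below are the standard monoisotopic values):

```python
ITRAQ4 = 144.10207    # iTRAQ 4-plex tag, monoisotopic Da
TMT = 229.16293       # TMT 6/10-plex tag, monoisotopic Da
DELTA = TMT - ITRAQ4  # +85.06086 Da per labeled site

def shift_mz(mz, n_labels, charge=1):
    """Shift an m/z value for the iTRAQ -> TMT tag swap."""
    return mz + n_labels * DELTA / charge

# Example: a doubly charged precursor carrying an N-terminal tag plus one
# labeled lysine (two labels total). Values are made up for illustration.
print(round(shift_mz(651.3700, n_labels=2, charge=2), 4))  # 736.4309
```

The real cleanup is in the details the sketch skips: the reporter region itself has to be replaced, and tag-specific fragments that don't transfer between reagents get dropped.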

Work is going on at NIST to take libraries and convert them for direct searching. They also state in the paper that spectral library searching that can utilize PTMs is in the works! Yeah!



Tuesday, August 23, 2016

Double laser dissociation in the HCD cell of a Q Exactive!


All summer, at virtually the same time every single morning the same two questions have occurred to me:

1) How many other people are reading JASMS in their bathroom right now? Guess it depends on the fiber intake of the average mass spectrometrist -- and where they have JASMS delivered. This is probably a question that could be answered with some effort, but I doubt I'll ever make it a priority to find out. A SurveyMonkey and some epidemiology stats could shed some light, but a tracking App of some kind would probably be necessary to get real numbers. Is it worth the time? Or is it better for it to be one of those deeper philosophical mysteries that we shouldn't ever try to investigate because it would reveal more about us than we should ever really know?

2) Why doesn't anyone I know have a mass spec equipped with lasers?  This is the important one.

Cause...guess what?!?!....the cover of JASMS is another group with a....


...in their HCD cell.  I know...I promised not to write about any more of these, but this one raised the bar. They put 2 kinds of lasers in their HCD cell and used it to analyze intact ubiquitin.

One laser is high energy and the other is low energy, but it appears they can use them both simultaneously.

They read out the fragments in the Orbitrap at 140k resolution with 3 microscans, and they sum 50 scans for their fragment analysis. This is, arguably, an awfully long time to accumulate fragments of an 8.5 kDa protein. But this is a fragmentation method development paper, so we don't really mind.
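For a back-of-the-napkin sense of just how long: assuming roughly a 512 ms transient per microscan at 140k resolution on a Q Exactive (my assumption, not a number from the paper):

```python
# Rough acquisition-time estimate for the summed fragment scans above.
# Assumes ~512 ms per transient at 140k (typical QE figure); fill and
# overhead time are ignored.
transient_s = 0.512
microscans = 3
summed_scans = 50

total_s = transient_s * microscans * summed_scans
print(f"{total_s:.0f} s (~{total_s/60:.1f} min) of transient time alone")
# -> 77 s (~1.3 min) per summed fragment spectrum
```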

How's it do? 84% sequence coverage of the +13 charge state!  Not so bad!

Now...when is someone in Maryland getting a laser HCD cell?!?!

Monday, August 22, 2016

SPECTRUM ANALYZER!!! Pull 43 metrics out of any .RAW file!!!


Is this old news? Maybe it is, but I've never seen it till this weekend. AND...I'm always hearing people say, "I wish RAWMeat still worked for these new instruments..." While many programs will extract the same data out of RAW files that RAWMeat does/did, most of them provide it only after you have processed your data.

SpectrumAnalyzer is the opposite. It is a tiny and fast piece of software that pulls the scan header information from any of your RAW files and makes a handy TXT file out of it.

As a warning, it does do this for EVERY SINGLE SPECTRUM, so the output looks a little daunting, but you could easily make a little matrix document that bins the results into histograms and produces a pretty output.
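For example, here's a quick sketch of that binning in Python with pandas. I'm guessing at the column name -- check your own TXT output and adjust:

```python
import pandas as pd

# SpectrumAnalyzer writes one row of scan-header values per scan.
df = pd.read_csv("my_run_headers.txt", sep="\t")  # hypothetical file name

# Bin the ion injection times into a quick histogram-style summary.
col = "Ion Injection Time (ms)"  # hypothetical column name
bins = pd.cut(df[col], bins=[0, 10, 25, 50, 100, 250, 1000])
print(bins.value_counts().sort_index())
```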

You can get this nice little tool at the IMP page here (go to Tools!)

Sunday, August 21, 2016

Proteomics and phosphoproteomics of Scott Syndrome!


Not the right Scott at all! And much more serious.

Scott Syndrome (I just learned this today) is an extremely rare bleeding disorder. How rare? Well, there is one guy in the UK who is confirmed to have it and is a registered blood donor. They call him ScottUK.

Some work has gone into studying the disease and conclusions have been drawn regarding how Ca2+ channels are involved, but more work needs to be done.

Enter Fiorella Solari et al., and an incredibly thorough proteomic analysis of platelets from unaffected people and ScottUK.

iTRAQ is used
Phosphoproteomics is used
HighPH reverse phase fractionation is used
A Q Exactive does the heavy lifting and
PD 1.4 plus a publicly available Excel macro from Karl Mechtler's lab for simplifying modified-peptide output reports (find it under software tools!) is used to find the cool stuff
AND
PRMs on an Orbitrap Fusion are used to confirm what they found.

See, thorough!

They did this with activated and unactivated platelets, AND they also use open searching techniques (with quantification) to study protein cleavage events.

Honestly, I'm unqualified to assess the biology on this, but if you have a woefully under-analyzed disease you wanted to know more about -- this is a great study to show you how to go from little information to tons of information about that disease!

Saturday, August 20, 2016

What MS1 resolution and mass accuracy do you need to be as precise as an SRM?


I have wondered about this for years. I even put together a really poorly planned idea to try and test it myself that was hindered by 1) my lack of access to a triple quad, 2) my lack of free time, 3) my lack of motivation to spend some of my little free time trying to get access to a triple quad, figure out how to run it, and then...run some stuff on it, and 4) the fact that the poorly planned idea was written on a napkin, in even worse handwriting, with ink diluted by something that one might guess was a solution of some reasonably high concentration of EtOH buffer. I KNOW I'm not the only person who has a manila envelope full of napkins from late meeting hangouts with brilliant people.

What was this about again? OH YEAH!!!

As resolution and mass accuracy increase in an MS1 scan, your certainty that the ion you are looking at is the one you want increases. For example, if I shoot an ion through a single quad and I see a peak that elutes at 10.5 minutes at 312.1 +/- 0.5 Da, and I'm looking for an ion that is 311.8413 -- is that my ion? If the matrix is simple enough and the concentration is high enough, SURE! It probably is. But what if that is a HeLa peptide digest? If you submit that as quantification of your ion, that paper is coming back to you. The complexity there (>1M peptides?) is way too high for you to be sure.

What if you are running a super high end TOF instrument that can get true 20,000 resolution? If that TOF comes back with 311.92, is that your ion? Your certainty is definitely going up, and your reviewers might actually take that.

SRM/MRMs are the gold standard. You isolate your MS1 mass +/- 0.5 Da, you fragment your ion, and you collect your single transition at +/- 0.5 Da. Historically, we've considered that pretty certain evidence you are looking at the right ion. Anyone who has done years of them in complex matrices (ugh...) can tell you that....well....you really hope you get 2 or 3 really good transitions, and with peptides you typically do. Chances are you can tease out which peak is the real one and which is coisolation. Still, though, this is the gold standard. (Also the gold standard because they all use the same expired or rapidly expiring licensed technologies, which makes them 1) all the same -- 85% of your ions hit the detector -- and 2) suuuper cheap today, in mass spec terms.)

SO. BACK ON TOPIC. Here is the question: what mass accuracy and resolution do you need in an MS1 before it is as good as or better than a QQQ???? After years of wondering about this, I was reading a review before a meeting about something completely unrelated and BOOM!! these guys did it like 2 years ago!!

It's 2016? 3 years ago!!

I think I know all these people....stalked them on LinkedIn just now...I totally worked with this team around 2013 on something completely different. They had the answer to this question the whole time. I've got to work on saying less and listening more. But not today!

Wow. This is rambling Saturday....

What did they do? They took multiple mixtures of increasing complexity and spiked in peptides that didn't belong. The mixtures started real simple and went all the way up to a HeLa digest separated on a short gradient. As complex as you can get without getting into the waking nightmare that is plant proteomics.

The peptides were spiked in at varying concentrations. A nice QQQ was used to quantify the peptides as they went. An Orbitrap Velos and an Orbitrap Elite were used for the MS1 scans.

They did some really imposing math stuff and worked out the certainty.

And they found?!?!

(unenthusiastic drumroll, please!!!!!!!)

60k resolution and extracting the exact mass at +/- 4 ppm is roughly equivalent to an MRM when the same retention time extraction windows are used.
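To put that ±4 ppm window next to the quad's ±0.5 Da isolation, the arithmetic (using the 311.8413 ion from earlier as the example) looks like this:

```python
def window_da(mz, ppm):
    """Half-width of an extraction window, in Da, for a ppm tolerance."""
    return mz * ppm / 1e6

mz = 311.8413
orbitrap = window_da(mz, 4)  # high-res MS1 extraction at +/-4 ppm
quad = 0.5                   # typical QQQ isolation half-width, Da

print(f"+/-{orbitrap:.5f} Da at 4 ppm vs +/-{quad} Da on a quad")
print(f"The quad window is ~{quad / orbitrap:.0f}x wider")
# -> +/-0.00125 Da vs +/-0.5 Da; the quad window is ~401x wider
```

(The SRM earns back its specificity with the fragmentation step, of course; that's the whole point of the comparison in the paper.)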

Could this blog post have been 4 sentences and a link to this nice Open Access paper? Yeah....

EDIT: 9/11/16 -- Almost the same exact findings from a separate group around this time:

Differences? They used an original Exactive. They found that 50,000 resolution would give them the same specificity for quantification of drugs IN PATIENT PLASMA as an SRM.

Friday, August 19, 2016

The Colorectal cancer atlas!


Is this the summer of the Atlas? It might be.

This one is, right now, a little more style than substance, but it is a work in progress. The CRCA is an effort by David Chisanga et al. to pull together all the information out there in the colon cancer world. The paper was just released, and the web portal went online only recently.

If this is your field, you'll probably want to bookmark it. As all the resources come online this'll probably be something you'll want to come back to. Just give 'em a little bit to work out the bugs. You can visit the graphically pretty interface directly here. 

Thursday, August 18, 2016

Nanospray single run analysis of peptidoglycan!



WHOA!

I'd wondered how long it would be till someone pulled this off! Bacterial cell walls are sugar chains connected by peptide crossbridges. To get to them, you use enzymes that destroy all the DNA and protein, and you're left with the cell wall. You chop up what is left with lysozyme, reduce the sugars so they don't all elute in a single peak, and then use LC-LC-MS to quantify the compounds and figure out what they are to reassemble a picture of the cell wall.

Marshall Bern (wait, I know him!) et al. show in this new study that they can ditch the first LC part. Just run the digested, reduced peptidoglycan out on a single-dimension LC gradient, use ETD and HCD, and BOOM! you're done. (That sentence is too long.)

By simplifying it, they get to see some new, low-abundance muropeptides (glycopeptides) in C. difficile that they hadn't seen before.

The paper points out that the traditional method is slooow. You use a phosphate buffer system to get really clean separation of your compounds and fraction collect them. The phosphate buffer system is flushed out, and then the remainder is analyzed with reverse phase. Yeah. It's slow. But this is the reason they've been doing it this way:

(From a Dave Popham paper. They all look like this.)

Using the phosphate buffer system, you can quantify it like it's a NIST standard. Can you get a 4% CV? Heck yeah, you can.

Just shooting muropeptides on reverse phase alone as they did in this study?


Great for finding new muropeptides!  Ummm....maybe not so great for quantifying them and ultimately elucidating the peptidoglycan 3D structure? Hmmm....if only there was a quantification technique that didn't care about peak shape...then you'd have the whole package.

But for straight-up discovery of new muropeptides? Single shot 1D ETD/HCD with Byonic processing looks like the solution!

Wednesday, August 17, 2016

In-depth study of protein inference!


August and September are crazy time for my day job, so the blog is probably going to be worse than usual for a while...so if I write about something during this time, it's cause I really, really like it.

Case in point:


(Direct link here)

These guys totally had a protein inference party! Protein inference is the problem that I think is really well described in the image at the top (this was supposedly taken from the wall in a 1st grade class....this is definitely a better school than the one I went to....).

We KNOW the peptide spectral match (probably...). Our search engines are great at that: "This MS1 mass and the MS/MS fragmentation of the area around that MS1 mass match this peptide from our FASTA index." The tricky part is figuring out which protein is present.
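If you've never stared this problem down, a toy example helps. The simplest family of solutions is parsimony -- keep the fewest proteins that explain all the peptides -- which you can cartoon as a greedy set cover (the algorithms in this paper are far more sophisticated, probabilistic beasts):

```python
# Cartoon parsimony inference: greedily keep the fewest proteins that
# explain every identified peptide. Made-up peptide/protein names.
peptide_to_proteins = {
    "PEPTIDEA": {"P1", "P2"},
    "PEPTIDEB": {"P1"},
    "PEPTIDEC": {"P2", "P3"},
}

unexplained = set(peptide_to_proteins)
inferred = []
while unexplained:
    # Pick the protein covering the most still-unexplained peptides.
    best = max(
        {p for prots in peptide_to_proteins.values() for p in prots},
        key=lambda p: sum(p in peptide_to_proteins[pep] for pep in unexplained),
    )
    inferred.append(best)
    unexplained -= {pep for pep in unexplained if best in peptide_to_proteins[pep]}

print(sorted(inferred))  # ['P1', 'P2'] -- P3 is never needed to explain the data
```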

So this group of slackers did what everyone else would do...who had the technical capability of taking most of the protein inference algorithms and putting them into the same operating environment. They used KNIME, an open-source workflow-building environment. To get everything working together, they assembled an OpenMS workflow within this environment. Of course, they made it all available to download on GitHub here (under the really cool name KNIME-OMICS).

Once they got everything operating under the same technical conditions and parameters... Wait. I should describe everything first:

They used the search engines: Mascot, X!Tandem and MS-GF+
And then the inference algorithms: FIDO, PIA, ProteinProphet, ProteinLP and MSBayesPro (I don't know the last 2. No time to investigate)

They picked 4 datasets of varying levels of complexity from public repositories. They range from a yeast digest all the way up to a lung cancer analysis. Then they go to work. I'd like to mention that the paper is really well written. No guesswork regarding what settings they used for which algorithm. Every one I've had the concentration (on this little sleep) to really look at seems clearly detailed.

Good news for us non-programmers in the Proteome Discoverer world, cause FIDO seems to perform really well in these studies, providing, of all the inference algorithms, the highest number of unique proteins inferred as the number of search engines increases. Go FIDO!



Probably the coolest conclusion is something we've probably all observed a little -- increasing the number of algorithms doesn't always increase the number of proteins inferred at the end. But it does increase the number of peptides, which increases the strength and accuracy of our inferences.

Despite the fact that we didn't see SEQUEST or Comet employed, I think we can infer from the other 3 search engines and the strength of the observations that what they show here would reproduce well with the most-used search algorithms as well. This is the most thorough and best-controlled study I've ever seen on protein inference, so I'll definitely take it!


Tuesday, August 9, 2016

Under 60 days left to watch the talks you missed at ASMS!


This summer is flying! September 30th seemed really far off. But considering there were 387(!!!) talks filmed at ASMS that you might want to watch...you might want to put logging in and watching a few in your queue. Maybe pause Agents of Shield where you are, knock out Bojack Horseman, and then go back to see what the Avengers sidekicks are up to?

Wait. What? 

The videos are a little tricky to find via Google for some reason.

The site is: www.pathlms.com/asms

Direct link here. 

Monday, August 8, 2016

Proteomics indicates a strikingly strong pathway in an Alzheimer's mouse model


I'm going to admit right off the bat that I didn't read this whole paper. Mouse neuro- stuff is something that I'm not entirely comfortable thinking about, so if you mention in your methods that you did "contextual fear conditioning as described previously," I'm not following that link for the details.

What I do know: this study used a mouse Alzheimer's model and found a differentially regulated pathway that lit up like an airport runway in the Midwest.

I'm going to start by assuming their model makes sense so I don't have to read any of those parts. In general, though, they took brains from their normal-ish super-inbred mice and brains from their disease-model inbred mice and extracted the hippocampal region. Having some experience in this sample prep technique, it's worth noting that once you get past the vivid daylight hallucinations of the mice politely requesting their brains back, you're over the hump. The nightmares are relatively infrequent over the following 6 years or so.

The hippocampal regions were FASP digested and separated on a 10 cm nanoLC column over a 5 hour gradient on their Orbitrap Velos, as they previously described in a study a few years ago. I find it somewhat remarkable that a 10 cm column could contain enough theoretical plates to adequately make use of a gradient this long, but they again report a striking number of identified peptides/proteins (~2,600 found in 3 of 4 samples).

Relative quantification was performed using spectral counting and normalization; visualization and downstream analysis were done in IPA.
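For the curious: "spectral counting and normalization" covers a family of schemes. A common flavor is NSAF, which I'll sketch here just to show the idea -- I'm not claiming this is the exact scheme the authors used:

```python
# NSAF-style normalization: divide each protein's spectral count by its
# length, then by the sum over all proteins, so runs with different
# total counts stay comparable.
def nsaf(counts, lengths):
    saf = {prot: counts[prot] / lengths[prot] for prot in counts}
    total = sum(saf.values())
    return {prot: value / total for prot, value in saf.items()}

counts = {"Gfap": 120, "Mapt": 45, "App": 9}      # hypothetical spectral counts
lengths = {"Gfap": 430, "Mapt": 733, "App": 770}  # protein lengths, residues
print(nsaf(counts, lengths))
```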

What did they find? 103 statistically significantly dysregulated proteins that all seem to fall right into a canonical pathway. When I mention proteomics in terms of neuro samples, I'm often met with the suggestion that this stuff is incredibly difficult. And I'm not trying to downplay the capabilities of this group. They are clearly experts in this technique, but apart from the sample prep, this is a pretty straightforward method that leads to as clear a set of conclusions as I've seen.

Friday, August 5, 2016

Flu phosphoproteomics reveals almost 300 new phosphorylation sites!


It is seriously too early to be thinking about the flu, right? In a few months I'll be thinking about it all the time, wondering when some gross/adorable niece, nephew or friend's kid is gonna share a well-cultured virus with me.

Sandra Söderholm et al., have clearly been thinking about it for a while, cause they just knocked out a stellar phosphoproteomic analysis of flu-infected human macrophages. You can find it open access here at MCP.

They did some nice biology including mouse work on top of the human macrophages, but I mostly care about the other stuff.

The macrophages +/- virus were lysed, the proteins digested, and the phosphopeptides IMAC enriched.

Wait. Segue here. There are far, far too many ways to enrich phosphopeptides at this point. I think we need some common nomenclature or something. For example -- and this is an extremely, extremely minor criticism of this paper -- if you tell me that you IMAC enriched, and I go get the PDF, find the method section, and all I get is another reference to chase down -- and it turns out to be the Villen and Gygi IMAC method with exactly zero modifications -- just cite Villen and Gygi in the first place!


Oh no! The segue became longer than the serious stuff again!

The IMAC-enriched phosphopeptides were run out on an EasyNanoLC and a Q Exactive (classic, I think). The method looks pretty standard, but I'd have to pull the files from PRIDE (PXD001620) to find out the gradient length and stuff. (Sorry, I know I'm a nerd, but this is seriously what I'm gonna look at first.)

They used Proteome Discoverer 1.3 with Mascot (big human database w/ custom reverse) and PhosphoRS to get their lists. So they have phosphorylation lists for the macrophages, +/- virus.

That stuff goes to PhosFox! What's that? The website is here. They wrote it themselves (paper here) and it looks really cool. It's for comparing +/- phosphorylation stuff. Unfortunately, it may only be available in Perl, but I can't investigate right now; I gotta get to work.

Wrap up. Honestly, I like this paper cause...
1) It is super biology-centric. Here is our model, and we're just doing discovery to see what we can find.
2) The mass spec just looks like another tool in the lab. Here is our standard old IMAC setup, EasyNano, QE, PD 1.3, and our cute little custom program to pull out the neat stuff -- oh, and they use IPA to pull it all together.
3) And guess what?!? They found some new stuff. Phosphorylation sites, identified with high confidence, that no one has reported before and that appear to be involved in our host response to viruses. Not the newest instrument, not some cutting-edge new sample prep method -- solid, proven technology and methodology and some killer biology and BOOM! A bunch of stuff we didn't know before!


(The Pug:Blog word ratio has now been returned to normal.)



Thursday, August 4, 2016

How are isotope ratios on Orbitraps?


Something I occasionally hear is that "Orbitraps don't have proper isotope fidelity" or "TOF instruments have more accurate isotope reporting".

That doesn't seem right to me, cause SILAC and TMT always look great and we bank on those -- but those are relative abundances. Our friends in the small molecule world need absolute abundances and such. A common experiment they demonstrate in San Jose is to dose an organism with a heavy isotope drug and then track the degradation of that drug through the body fluids of the organism by following the weird isotope ratios. (If all goes well, some friends are going to start a study like this here in Baltimore soon!)

That experiment seems like proof that isotope ratios on the Orbitrap are rock solid, but again, that might be a lenient test, right? Maybe resolution and mass accuracy alone are enough to pull it off.

But then there is this App note (warning...this is a direct link to the PDF) from about 10 years ago. In maybe not-the-most-neutral study ever, the anonymous author of the App note from one instrument vendor demonstrated that her/his instrument had better isotopic ratios than the Orbitrap. A more neutral...and...um...somewhat confusingly written paper also seems to have found issues with some Orbitrap instrument. In a methods paper that is a criticism of the capabilities of a technology, it is difficult to tell which technology was employed. (What Orbitrap is it? Definitely an Orbitrap Discovery [translation...oldest] was used. Maybe an Orbitrap XL. Hard to tell. Wait. Does it say "Exactive" in the methods? It totally does. Did they use an Exactive? If so, why is there no description of the device or method used?) Sigh....I seriously need another hobby....

Okay. So we see where this "Orbitrap isotopic fidelity" stuff originates from.

Why am I rambling about it? Cause Edward D. Hoegg et al., decided to go crazy old school and check this issue out for real. And you know what?


YEAH!

What did they do? The ultimate isotope ratio experiment.

The one that this guy would have done the first day his Orbitrap was installed -- ionize some darned uranium and shoot it in!

How'd they ionize some uranium? They used Liquid Sampling-Atmospheric Pressure Glow Discharge Microplasma Ionization. You can tell they aren't proteomics people, cause we totally would have come up with something better than LS-APGD. That isn't even a cool word.

How's it work? No clue. But it makes URANIUM GO INTO AN EXACTIVE!

What is cleaner from an isotope perspective than U235/U238? Nothing. What violates your warranty faster? (Just kidding. I don't know, but I can't imagine your FSE wants much to do with you at that point, lol.)

They mess with all sorts of parameters. Different amounts of uranium. Different injection times, microscans, AGC targets.

And the isotope ratios are rock solid. End of story.
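"Rock solid" here boils down to a simple mean-and-scatter calculation across scans. If you want to run the same sanity check on your own data (ideally with something less warranty-threatening than uranium), it's this easy -- intensities below are made up; natural U235/U238 is ~0.00725:

```python
import statistics

# Per-scan intensities for a light/heavy isotope pair (made-up numbers).
light = [7180.0, 7240.0, 7310.0, 7150.0, 7290.0]  # e.g., 235U signal
heavy = [1.00e6, 1.01e6, 1.02e6, 0.99e6, 1.01e6]  # e.g., 238U signal

ratios = [l / h for l, h in zip(light, heavy)]
mean = statistics.mean(ratios)
rsd = 100 * statistics.stdev(ratios) / mean

print(f"mean ratio = {mean:.5f}, RSD = {rsd:.2f}%")
```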


Wednesday, August 3, 2016

The NIST mAB standard is AVAILABLE!!!!


Ever tried doing an intact mAB analysis? It used to be suuuper hard. On a Q Exactive, it's almost approaching the point of being trivial. Optimize a short gradient, optimize the NCE, deconvolute it with ReSpect --> Boom! Intact antibody mass within a single Da or two of theoretical. Sure, it's harder if you're doing something like an ADC with 30 drugs conjugated to it, but considering where we were just a few years ago? Many instruments still in use can't provide mass accuracy high enough to tell whether an antibody is glycosylated or not (+/- 220 Da), so we've come a suuuper long way!

One big problem for mAB researchers is that they've never had a good standard, especially for Orbitrap-level precision...until now, thanks to the National Institute of Standards and Technology!

NIST decided to create a great mAB standard -- something that would suffer from very little lot-to-lot variability (cause they made literally enough for every lab in the world...for years...). And they characterized the crap out of it.

There have been mass spec standards out there, but they have not been this controlled. Point me toward one if I'm wrong, but the ones I've used have had tremendous problems with lot-to-lot variability, and...the information on the antibody is often wrong or vague. How can I check <10 ppm mass accuracy on the antibody when the vendor rounds the mass of the antibody off to something like 147,100 Da? Ya can't. You can verify that your Q Exactive is not on fire. That is it.
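Run the numbers and you'll see why. 10 ppm at intact-mAB mass is under 1.5 Da, which a value rounded to the nearest 100 Da can't possibly validate:

```python
# Why a rounded vendor mass can't validate 10 ppm accuracy:
mab_mass = 147_100.0  # Da, a rounded vendor-style value
tolerance_da = mab_mass * 10 / 1e6

print(f"10 ppm at {mab_mass:.0f} Da = +/-{tolerance_da:.2f} Da")
# -> +/-1.47 Da, buried inside the ~+/-50 Da rounding uncertainty
```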

Wow. This got rambly. Surprise!!






Tuesday, August 2, 2016

Need to add quan to your Python pipeline? Try PyQuant from JHU!


I just visited this lab yesterday, and we were talking about IonStar. Turns out they have been developing their own in-house quantification pipeline in Python!

This open access paper from Christopher J. Mitchell et al. details these algorithms. There is a lot of quan software out there, but if you're already using a Python-based bioinformatics pipeline, these tools might be exactly what you want to look at integrating.

Among the features included -- label free quan, TMT/iTRAQ, SILAC....and...WAIT FOR IT...

(...obviously unrelated...but too funny to not include. Thank you Google Images)

NeuCode!!  YEAH!!  So if the reagents are ever commercially released you can integrate this into your pipeline.

Wait. I'm not done. The code in PyQuant also has the capabilities for


15N, 13C, and heavy O labeling!!


Read enough and just want this awesome package? You can get it directly here!






Monday, August 1, 2016

Another use for TAILS proteomics -- protein fate studies in pancreatic cancer!!


Check out this paper in Cell. I honestly didn't know this was possible. Partially cause I keep forgetting what the heck TAILS is -- and partially cause this is brilliant.

What is it? An analysis of protein fate. We know tons about how we make proteins through that whole central dogma stuff: DNA --> RNA --> protein. After that, the protein does whatever the protein does, but we've got to get rid of it when it is used up or gets faulty or whatever. Lots of the simulations show that it is naive to consider that a cell has a constant population of complete protein around -- it's all somewhere between made and being un-made.

This is where the title of the paper just hung me up.  "TAILS shows that protein degradation dominates over proteolytic processing..."???? What the heck does that mean!?!?

Wikipedia helps clear up the title!  We're talking about 2 different mechanisms for getting rid of the old proteins!  Proteolytic processing implies that something like a caspase is finding SPECIFIC proteins and protein sites and destroying the proteins in a systematic and planned fashion.

Protein degradation? That implies it's a whole lot less controlled. In cellular systems....I'm going to think that more control = better!

This is where TAILS comes in (image also stolen from the Wikipedia article...how did we work without this resource again? Walk to libraries? Dewey Decimals?).


In TAILS, you start tagging at the protein level! So if you have two systems -- say, one that is as scarily under-understood as pancreatic cancer versus normal cell lines -- and you see a massive drop in the normal N-termini of your proteins in the cancer cell lines, this could be 1) less overall protein or 2) fewer complete proteins! Cool, right?!?

The tricky part is that you're gonna want to verify that your protein abundances aren't generally shifting, but you can do that with regular old TMT/iTRAQ. And they do.
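That two-measurement logic boils down to a simple decision rule. A toy sketch (my own made-up ratios and thresholds, purely to illustrate the reasoning):

```python
# Toy interpretation of paired global-quan and N-terminal (TAILS)
# cancer/normal ratios. The 0.67 cutoff is an arbitrary illustration.
def interpret(global_ratio, nterm_ratio, cutoff=0.67):
    if global_ratio < cutoff:
        return "less total protein (an expression-level change)"
    if nterm_ratio < cutoff:
        return "fewer intact N-termini at stable abundance -> more degradation/processing"
    return "no obvious change in protein fate"

print(interpret(global_ratio=0.95, nterm_ratio=0.40))
```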

And to get the conclusions for this paper, you're going to need to properly understand your "normal" protein degradation pathways -- which proteins they cleave AND where. And they do.

So they're able to conclude that a big problem in their pancreatic cancer cell models is that the proteins are being broken down a whole lot more by the generic degradation machinery than in the normal controls. Which....might lead one to wonder what is messed up upstream, since caspases are tightly regulated by all sorts of things...including post-translational modifications (cool paper on that here).

Obviously this is something that requires further and deeper investigation!