Thursday, March 22, 2018

Enrichment profiles to lead us toward better plasma proteomics assays!

I pretty much have to just leave this cool new study here -- I've nearly shoveled myself out -- I'M SO TIRED OF SHOVELING SNOW!! I'm just catching my breath before I finish shoveling and see how hybrid-car friendly the roads are between here and work.

Big reason I really want to get back to this paper later? Ummm...ever wondered why you couldn't validate something someone else found in plasma samples even though your samples were identical? Did you harvest with EDTA? Did they? Did you heat inactivate? Did they? It can make a huge difference!

By addressing issues like this, these authors start to establish unique profiles that explain a lot of the differences.

As valuable as these authors demonstrate comprehensive profiling can be -- to me it might even be more important that it highlights the value of comprehensive conserved protocols!

BenchSci -- Find antibodies based on published proof that they work!

Antibodies aren't going to go away, but maybe this cool new site can alleviate one of the biggest headaches -- finding one that has been proven to work for your application.

Instead of:
1) Going to a manufacturer website X
2) Seeing if they have an antibody for your protein
3) Checking if that antibody works for immunoprecipitation or whatever
4) Checking if the manufacturer just says it does -- or if anyone has actually published anything on it
5) Checking the paper to see if you believe the author's results...
6) ...sigh.... go back to 1 (rarely, of course! it got through peer review, didn't it?!? I'm just being funny, but still, that's a lot of clicking through webpages!)

This team uses a machine learning algorithm thing to scroll through the literature and find published proof that antibodies exist for a protein and that they work for certain applications.

I talked to one of the developers to:
1) Make sure that this is really what they are doing

2) Make sure that they'd considered scientific literature access stuff because this sounds like a great idea -- as long as they don't end up going to prison for it next week. They are actually working with individual publishers so this is all legit.

3) Suggest they contact all the proteomics journals next (!! and they are !!) so their database can include more MS-compatible antibodies.

To use it, you enter your protein of interest in the search bar at the top, and in the blue bar on the right you can start adding filtering parameters like organism, cell type, application, etc. This will start to narrow down the figures that show proof of the antibody working for your application.

TADAA! You've got the direct link to the peer reviewed evidence of a functional antibody -- then you know what company to go to.

It looks like you need to register for a free account to get some of the info, but as far as I can tell that's the only catch. (Leave me a comment if you find other ones, please.) Right now it looks like all wins -- You get to the right reagent faster and you have your reference for why you selected that antibody in the first place!

Wednesday, March 21, 2018

Doing MS3-based reporter ion phosphoproteomics? MS3-IDQ!

If you are doing MS3-based reporter ion quantification, you should really check out MS3-IDQ!

The paper is available here.

The idea is really simple: if you are going to be getting both MS2- and MS3-based data on your peptide, you might as well use it to increase your identification and localization rates, right!??

I'm sure this isn't the first group to have thought about it -- but implementing it? Umm....yeah..... There are a ton of factors to consider from the instrument side and from the data processing perspective. I, for one, am really glad that they did the work so I don't have to!

Tuesday, March 20, 2018

Immune system response in a 500-year-old mummy!

The overall story here is really sad from the human perspective (spending some time trying to find just the right Scooby Doo gif helped lighten my mood), but the science in this 5-year-old paper highlights capabilities we have that we might never think of.

The article is short so I won't go into it too much, but you know how hard it is to get an extra FFPE slice out of your collaborator? Imagine the material limitations when you're getting it from a museum with some National Geographic people watching your every move. The fact that this Orbitrap XL could identify any peptides at all -- let alone support observations on the immune system response (!!) -- from material this old, limited, and precious is really amazing.

The authors spend a lot of time on the normalization maths. Of course, we always want a control and some biological replicates before trying any sort of quan -- but this is one of the rare cases where I'll cut the team a break. Fancy math on the mass spec data, with PCR to back it up, leads the team to some really interesting conclusions about what might have happened to a couple of kids 500 years ago!

Sunday, March 18, 2018

I love getting reader comments, but this might be my all-time favorite!

From an anonymous author:

Take that, haters.

More on the Beadome!

Yesterday I discovered something everyone else apparently knows all about, and I'm still fascinated with how I might exploit it to give my collaborators better data -- there is SO much out there on this topic. I think I can be doing better experiments by Monday (today, if our IT security people would allow me remote login...not sure why, but my head hurts too much to drive anywhere).

This study has some amazing insight into how much of a problem the beadome truly is! 

In controlled pull-downs, less than 1% of the total peptides identified -- and around 1% of the total ion current (TIC) -- comes from peptides associated with the enrichment! One percent! The rest? Beadome....

The antibody really is trying to just pull down its targets -- it just sucks at it (what a surprise! antibodies being unreliable? How weird...) But this isn't a Ben-hates-antibodies post. There is no question at all that when my collaborators do these pull-downs they enrich their proteins. It's a crude tool, but it works. The important question is: how do I better get to the 1% of the signal that matters!?!?

This may not be the first study, but I'm at least 7 years late to the "Why don't I make a static exclusion list?" party. Check this out!

Unfortunately, it adds some important biological context to everything. Wait. Unfortunate? Oh. It turns out there are a bunch of different ways to do one of these pull-downs. However -- there is tons of potential here for developing lists of your BeadOme junk and eliminating it from fragmentation if you know how the pull-down was done. They go through all sorts of different methods and develop a list for each kind of pull-down.

However, a lot is shared independent of the method -- but it looks like the biggest impact would come from running your experiment with a static exclusion list specific to how your pull-down was done. It bears further investigation for sure!
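If you want to play along at home, the shared-versus-method-specific logic is just set operations. A minimal sketch (the accessions and per-method lists below are made up by me, not taken from the paper):

```python
# Sketch: combine per-method "beadome" background lists into exclusion sets.
# The protein accessions and method names below are placeholders.
methods = {
    "protein_A_beads": {"ALBU_HUMAN", "TRFE_HUMAN", "K2C1_HUMAN", "HSP7C_HUMAN"},
    "protein_G_beads": {"ALBU_HUMAN", "TRFE_HUMAN", "K2C1_HUMAN", "ACTB_HUMAN"},
    "streptavidin":    {"ALBU_HUMAN", "K2C1_HUMAN", "PCCA_HUMAN"},
}

# Background shared by every pull-down chemistry -- safe to exclude anywhere.
shared = set.intersection(*methods.values())

def exclusion_list(method: str) -> set:
    # Full exclusion list for one experiment: the universally shared junk
    # plus the junk particular to that pull-down method.
    return shared | methods[method]

print(sorted(shared))
print(sorted(exclusion_list("streptavidin")))
```

Swap in the real published beadome lists for your pull-down chemistry and you've got a starting exclusion list.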

If we go back to the first paper I linked, the quantification methodology with imputation (random score input imputation) allows them to ignore the beadome impact from the data processing side. This is great -- if you have enough dynamic range to get to that 1% of the signal you really want! But I still think static exclusion will help a lot in getting down to things like PTMs on that 1%....

Saturday, March 17, 2018

The Beadome! (The crap that sticks in every pull-down!)

I'm new to IP pull-downs, affinity purification, affinity enrichment, all this stuff. My background in the lab is global plasma/serum stuff. I never have enough dynamic range, I never have enough peptide coverage and the instrument is never fast enough.

At my new facility, IP is the way of life. Everything is enriched with big proteins taken from mouse or camel blood or whatever and stuck to beads or something. It's a mystery to me how it works and I'm too busy fixing EasyNanos and looking for the coolest PTMs you've ever heard of to ask.

As I'm running these things my first thoughts have been -- wait -- if we're using an antibody shouldn't we pull down like, I don't know, the one thing that the antibody is specific for?  Aren't antibodies for matching to one single protein? WHY ARE THERE 2000 THINGS!?!?!

Some of this isn't my ignorance. Turns out a lot of it is just crap that will ALWAYS pull down. There have been multiple awesome studies over the last 10+ years on the "BEADOME."

This is a great first starting point (2008)!

This review is more recent and has some interesting new information as well (can't take screenshots I have it in hardback only)....

Okay -- so here's the question I'm going to get to really quickly. When I do plasma proteomics the first thing I do is build the biggest static exclusion list that I can. I have exactly ZERO time to waste fragmenting albumin, transferrin, and about 45 other proteins. If you're really interested in albumin, you'd better tell me about it up front because no mass spec I'm in front of for very long will ever detect an albumin peptide.

How consistent is this BeadOme thing? And should the Q Exactive (which has a maximum static exclusion list of 5,000 ions) and the Fusion (which I've yet to run plasma or serum on, so I don't know the upper limit -- gotta be more than 5,000, right? It's got 2 PlayStation 4 CPUs inside it!) be continuously ignoring a lot of stuff to boost my dynamic range?!?!
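While I'm asking: building that static exclusion list is mostly bookkeeping. A minimal sketch of the idea, assuming toy contaminant sequences and the 5,000-entry cap (the helper names and the sequence below are mine, not from any vendor software):

```python
# Sketch: turn contaminant protein sequences into a static exclusion list
# of precursor m/z values, capped at a 5,000-entry instrument limit.
import re

PROTON = 1.007276
WATER = 18.010565
# Monoisotopic amino acid residue masses
RES = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
       "V": 99.06841, "T": 101.04768, "C": 103.00919, "L": 113.08406,
       "I": 113.08406, "N": 114.04293, "D": 115.02694, "Q": 128.05858,
       "K": 128.09496, "E": 129.04259, "M": 131.04049, "H": 137.05891,
       "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931}

def tryptic_peptides(seq):
    # Cleave C-terminal to K/R, not before P; no missed cleavages here.
    return [p for p in re.split(r"(?<=[KR])(?!P)", seq) if len(p) >= 6]

def mz(pep, z):
    return (sum(RES[aa] for aa in pep) + WATER + z * PROTON) / z

def exclusion_mzs(proteins, max_entries=5000):
    mzs = sorted({round(mz(p, z), 4)
                  for seq in proteins
                  for p in tryptic_peptides(seq)
                  for z in (2, 3)})
    return mzs[:max_entries]
```

Feed it the beadome (or plasma high-abundance) sequences and paste the output into the exclusion table.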

Friday, March 16, 2018

TagGraph -- I don't get it, but it looks seriously smart!

Yup. I don't get it. It's Saturday afternoon and my plans are to celebrate my 30-ish percent Irish heritage in the stereotypically tacky way us Americans do just about everything.  It is actually coincidental that these graphs are green.

I'm leaving this here so I can take a look at it later. What I do know is that the search engines I use daily are clearly biased against the PTMs I'm looking for, and TagGraph seems to take a nice hard swing at that by working completely around the problem in a new way! Honestly, I'm not going to trust my understanding enough to share it, but what I think I get -- seems seriously smart.

You can find the preprint article here. If you figure it out before I do and want to give me a call and set me on the right path, today would be a bad day for it, but I'm open to it later. Tomorrow morning might also be off limits. We'll see!

Oh yeah! Here is the link!

Thursday, March 15, 2018

Accurate masses of iTRAQ/TMT reporter ions.

I'm going to turn frustration into something useful!! Take that, universe! I just wanted to find, in a hurry, the accurate masses of the reporter fragments for iTRAQ and TMT. Google Images was nice enough to find me 300 pictures with unit masses.

I'm just putting these here so that they'll pop up in Google image searches in the future.

Of course, this should go without saying, but just to be clear: iTRAQ and TMT are the property of Proteome Sciences and are trademarked to ABI(TM,R) and Thermo Fisher Scientific(TM,R), respectively.

In no particular order here are these poorly drawn images. Go Google Image Crawler go.

EDIT 3/22/18:  TOTALLY WORKED!!  Only image on the front page of Google Images with anything behind the decimal place. 
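And for the text-searchers rather than the image-searchers: the accurate masses can be recomputed from elemental compositions. A sketch for the TMT series (light reporter cation C8H16N+, with heavier channels swapping in 13C/15N) -- I've left iTRAQ out, and you should double-check against your vendor's documentation before trusting my arithmetic:

```python
# Sketch: compute accurate TMT reporter ion m/z values from isotope masses.
# The light reporter cation is C8H16N+; heavier channels swap in 13C/15N.
C12, C13 = 12.0, 13.0033548
H1, N14, N15 = 1.0078250, 14.0030740, 15.0001089
ELECTRON = 0.0005486  # subtract one electron for the cation

def reporter(c12, c13, n14, n15, h=16):
    """m/z of a singly charged reporter cation with the given atom counts."""
    return c12 * C12 + c13 * C13 + n14 * N14 + n15 * N15 + h * H1 - ELECTRON

tmt = {
    "126":  reporter(8, 0, 1, 0),
    "127N": reporter(8, 0, 0, 1),
    "127C": reporter(7, 1, 1, 0),
    "128N": reporter(7, 1, 0, 1),
    "128C": reporter(6, 2, 1, 0),
    "129N": reporter(6, 2, 0, 1),
    "129C": reporter(5, 3, 1, 0),
    "130N": reporter(5, 3, 0, 1),
    "130C": reporter(4, 4, 1, 0),
    "131N": reporter(4, 4, 0, 1),
    "131C": reporter(3, 5, 1, 0),
}
for channel, m in tmt.items():
    print(f"TMT {channel}: {m:.5f}")
```

These come out at 126.12773, 127.12476, 127.13108, and so on -- numbers with something behind the decimal place, as intended.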

Wednesday, March 14, 2018

Benchmarking quantitative strategies for phosphoproteomics!

Shoutout to @PreOmics for tipping me off to this one (great job Google Scholar alerts -- how'd you miss this one?!?)

I can't even get through this one this morning -- daylight saving time is dumb -- but considering we're about to fire up a 10-time-point, 3-replicates-per-time-point phosphoproteomics study on something that sounds super important (I forget what), it couldn't have shown up at a better time!

At 30 samples we're just at the critical junction where TMT 11-plex should be perfect. We'll probably lose something moving from plex to plex, but we'll save so much time that it has to be worth it (and that 11th channel is our great pooled control!). The next decision -- it's phospho -- do we break out MS3 SPS to deal with our ratio compression, knowing full well that the lower speed of the method will mean fewer total phosphopeptides?
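Since I brought up the pooled channel: the way I'd use it to bridge plexes is plain reference-channel ratioing. A minimal sketch with made-up intensities (my sketch, not anything from the paper):

```python
# Sketch: bridging TMT plexes with a pooled reference channel.
# Dividing each sample channel by its plex's pooled channel puts every
# plex on a common ratio scale. Intensities below are made up.
plex1 = {"s1": 2.0e6, "s2": 1.0e6, "pool": 2.0e6}
plex2 = {"s3": 4.0e6, "s4": 1.0e6, "pool": 4.0e6}

def to_ratios(plex):
    ref = plex["pool"]
    return {ch: inten / ref for ch, inten in plex.items() if ch != "pool"}

ratios = {**to_ratios(plex1), **to_ratios(plex2)}
# s1 and s3 come out identical (ratio 1.0) even though their raw
# intensities differ two-fold between plexes
```

That's the whole trick -- everything downstream is ratios to the pool, so plex-to-plex intensity drift washes out.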

THIS TEAM GOES THROUGH ALL OF IT!!! THEY EVEN SPECIFICALLY LOOK AT DNA DAMAGE PROTEINS (which may actually be something like what we're doing -- but...again...I can't remember. You know, it's actually better science, I think. All of my experiments are double-blinded. I can't introduce unintended bias when I can't remember what organism or project I'm working on!)

Speaking of biases -- this study does a pretty nice job of supporting one of mine. TMT MS2 methods look pretty impressive compared to MS3....definitely check it out!

Monday, March 12, 2018

Call the pathologist (or whatever they're called) -- why don't they fix cells with MS-compatible crosslinkers?!?!

Okay -- umm....what is the downside here?!?!

Check this out and tell me if you find one!

We're constantly trying to work around the fact that all these cool tissue repositories -- EVERYWHERE -- are full of fixed tissue. It's formaldehyde, then paraffin, and I'm always super impressed when anyone gets anything out of these samples on re-analysis.

Sure -- it works okay for imaging (with specific antibodies), but we're often stuck with reversing the crosslinking (with varying degrees of success) or ignoring the crosslinked moieties.

Why don't we toss the dangerous formaldehyde junk and use better reagents to preserve the tissues!?!?  This group looks at preserving the tissues with MS-cleavable/compatible crosslinkers - and it works!

Sunday, March 11, 2018

More alkylation discussions!

Last week I put up a post about a simpler method for reduction/alkylation used in some recent studies that I liked. That post is here. I don't think I've ever received (real) comments on a post so quickly, and I suggest you check them out.

One comment will lead you to this great recent paper that takes a deep look at these issues.

These authors take a good hard look at 2-chloroacetamide and find that it comes with its own special annoyances: massive increases in methionine oxidation compared to iodoacetamide, and both single and double oxidation of tryptophan (I'm having trouble imagining where that second oxygen goes...I should get more coffee). They also examine how different buffer conditions lead to increased/decreased off-target alkylation effects.
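If you want to go hunting for these artifacts in your own data, the relevant monoisotopic deltas are worth keeping handy. A quick sketch (standard Unimod-style values; the combined-shift example is mine):

```python
# Sketch: monoisotopic delta masses (Da) for the artifacts discussed above,
# handy as variable modifications when checking your own data.
DELTAS = {
    "carbamidomethyl": 57.02146,  # intended on Cys; off-target elsewhere
    "oxidation": 15.99491,        # e.g. the elevated Met oxidation
    "dioxidation": 31.98983,      # e.g. the double Trp oxidation
}

# A peptide carrying one off-target alkylation plus an oxidized Met
# shifts by the sum of the two deltas:
shift = DELTAS["carbamidomethyl"] + DELTAS["oxidation"]
print(f"{shift:.5f} Da")
```

Add these as variable mods on a subset search and you can see which artifact your own protocol is generating.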

This is obviously a complex issue that requires a lot more consideration than this blogger is willing to do on a Sunday afternoon, but I'm hoping to get to a single standardized protocol for my experiments soon. When I settle on something I'm just going to do it this way for years. I need to be able to go back to historic data and match it to new stuff and any tweaks in sample prep that lead to improved IDs aren't worth sacrificing my ability to easily align and remine old data.

Saturday, March 10, 2018

TMTc+ -- specificity at the level of MS3-based methods with MS2 speed!

I always err toward MS2-based reporter ion experiments. I know the accuracy is better with the MS3-based methods, but my problem is always trying to get to the protein(s) that everyone is so interested in, and I can't take the speed hit.

This relatively simple looking new approach shows that by deconvolution and intelligent use of the TMT complementary ion fragments you can have it all!

The use of the high mass fragments that still carry TMT fragment tags isn't new; these authors described TMTc a few years ago -- and there is a free Proteome Discoverer node from IMP that will allow you to utilize them -- but TMTc+ takes it a couple of steps further. By adjusting conditions to bias toward the formation of the complement ions, and by taking the shape of the ion isolation window into account, they demonstrate massive improvements in this approach.

How massive? Ummm....more quantified proteins than a label-free approach?!? If true, this is paradigm-flipping stuff. Every study I've ever seen has shown that throwing on the reporter ion tag decreases the number of IDs compared to label-free approaches, and I think we all just accept it as the trade-off for being able to multiplex a bunch of samples at once. But...if you actually get more IDs -- or even no loss -- the next question is why wouldn't you use TMT for every experiment (where your n < something crazy, of course)?
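My toy mental model of the deconvolution (emphatically not the authors' code): the overlapped complement-ion cluster is a linear mix of per-channel isotope envelopes, so channel abundances fall out of a least-squares solve. With made-up envelopes:

```python
# Toy sketch of the deconvolution idea: model the observed complement-ion
# cluster as a linear mix of per-channel isotope envelopes, then recover
# channel abundances by least squares.
import numpy as np

# Each column: one channel's predicted complement-ion isotope pattern
# across four adjacent peaks (values made up for illustration).
A = np.array([[0.7, 0.0, 0.0],
              [0.2, 0.7, 0.0],
              [0.1, 0.2, 0.7],
              [0.0, 0.1, 0.3]])

true_abundance = np.array([10.0, 5.0, 2.0])
observed = A @ true_abundance      # the overlapped cluster we'd measure

est, *_ = np.linalg.lstsq(A, observed, rcond=None)
print(np.round(est, 3))
```

The real method obviously has to predict those envelopes from the isolation window shape and isotope chemistry -- that's the hard part they solved.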

This study was just accepted by ACS, so I changed the link above to point directly toward that version of the paper. The preprint is still available at bioRxiv here.

Friday, March 9, 2018

Platelet proteomics of patients with early stage cancer!

What a great idea for a study, and what interesting results!

We're almost always going after the plasma and serum -- and we all know how much that sucks. 11 orders of magnitude dynamic range? And just about everything is glycosylated? Blech.  Some recent studies have shown that there is lots to learn in the cellular blood components -- even in boring old RBCs.

This group goes after another somewhat neglected cell type in our blood stream -- the platelets -- and they study them in people with/without cancer and post tumor removal (!!)

The proteome of these cells still appears pretty complex, so they use 1D gels to simplify the proteins before digesting and running the samples on a nanoESI-Q Exactive. The data is processed in MaxQuant.

A curious decision was made at this point in the data processing. I'm not being critical at all -- the downstream stats and bioinformatics look top notch to me -- but instead of using the XIC extraction and normalization capabilities present in MaxQuant and Perseus, the group appears to work with spectral counting.
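For anyone who hasn't touched spectral counting in a while, the NSAF flavor is about this simple (made-up counts; I don't know exactly which counting variant these authors used):

```python
# Sketch: normalized spectral abundance factor (NSAF), one common way to
# make raw spectral counts comparable across proteins. Counts are made up.
proteins = {          # protein: (spectral counts, length in residues)
    "P1": (50, 500),
    "P2": (10, 100),
    "P3": (40, 200),
}

# SAF: counts per residue, so long proteins don't dominate
saf = {p: counts / length for p, (counts, length) in proteins.items()}
total = sum(saf.values())
# NSAF: SAF normalized to sum to 1 within the run
nsaf = {p: v / total for p, v in saf.items()}
# P3 has twice the counts-per-residue of P1 and P2, so twice the NSAF
```

Crude next to XIC-based quan, but with solid stats downstream it clearly got them where they needed to go.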

It obviously works for these authors! The downstream pipeline is rock solid, and the output heatmaps and pathway representations are as nice as anything I've ever seen.

The platelet proteome is demonstrated to be really complex -- and patients with early stage cancer (n=11!) have some really profound differences in their platelet proteins compared to healthy controls. This looks like an absolute gold mine for potential early cancer detection markers!!

All of the RAW data has been deposited to PRIDE/ProteomeXchange via PXD005921. It hasn't been activated for download yet, but I just pinged them to see if it could be. Slackers...the paper officially publishes...April 15th...2018....

Thursday, March 8, 2018

EncyclopeDIA -- Gas phase fractionation + chromatogram libraries!

I have no intention of waking up enough to get this paper right. Primarily because I've got a huge and awesome day ahead in the lab....

My sleepy readthrough of this new study was something like this.

" this...?"
"Oh..maybe...come on, brain...maybe...?..."
"Nope. No idea what I just read. At all. Did they give a monkey Ritalin and then let it dance on a keyboard? These aren't words! Why is everything capitalized in the wrong places?!?...Is this the weird SpongeBob mocking meme..?...Where is my coffee?"
" is really smart...oh no...I should probably try this...cRAP...I should try this...oh no...that means I'd have to explain it to someone...but it's probably smart enough to be worth it...."
"I need more coffee or whatever they have for breakfast up there..."

What I've got (I'm gonna try. No promises):

Gas phase fractionation is one of those things that we've looked at for years -- it will obviously work -- but, in the end, it always comes off as kind of disappointing considering how much instrument time it takes so we never really do it. I fell for it for the 11th time less than a year ago! 

DIA is really powerful but the sensitivity sucks and background noise makes it one of those things that you hate to show your collaborators. "Yes, your peptide is these 5 fragments. No, ignore the other 175 fragments. They're supposed to be there. QUIT LOOKING AT THEM! (Why did I show you this...)"

And lots of chromatography-centric people -- PNNL comes to mind for some reason -- have been telling us for years that we need to work on the chromatography and that it needs to be an important factor in our peptide ID and quan.

What if the gas phase fractionation DIA was used in conjunction with fantastic retention times to build the libraries for your DIA analysis? Would it then justify the time? Is this the chromatogram library? This is where I'm unclear....

What I do get: the gas phase fractionation reruns in 100 Da windows with narrow-isolation DIA (4 Da?), and the speed of the Q Exactive HF allows the authors to get really deep identifications. They assign the identifications, with some fancy math, to specific retention times (and I think this is the chromatogram library?), and then they can go back and match their "normal" DIA experiments.
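The window math, as I'm reading it (my interpretation of 100 Da fractions tiled with 4 Da windows -- not numbers pulled from the paper):

```python
# Sketch of the window scheme as I'm reading it: each gas-phase fraction
# covers a 100 m/z slice of precursor space, tiled with 4 m/z DIA windows.
def gpf_windows(start=400.0, stop=1000.0, fraction_width=100.0, dia_width=4.0):
    scheme = {}
    lo = start
    while lo < stop:
        hi = lo + fraction_width
        centers, c = [], lo + dia_width / 2
        while c < hi:
            centers.append(c)
            c += dia_width
        scheme[(lo, hi)] = centers   # fraction range -> DIA window centers
        lo = hi
    return scheme

scheme = gpf_windows()
# six injections x 25 narrow windows each = 400-1000 m/z covered
```

So the deep library costs six injections per sample type up front, and then the "normal" wide-window runs get matched against it.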

Critical point here -- it relies so much on retention times that the authors are careful to point out that if you use a home-packed column, this probably won't work.

What do you get out of all of this? What about DIA that is actually more sensitive than DDA? What about more IDs? Like 50% more IDs than the equivalent DDA experiment!!

Major bonus? You can show your collaborators a chromatographic peak from run to run and not a DIA fragmentation window (don't ever show anybody else a DIA fragmentation window!)

And (