Sounds promising. You would need to do a TDI (time delay integration) scan with external trigger. Can Micromanager do that? If not, this would be a good plug-in to add.
Hi to all,
After I posted those comments about the HiSeq components three years ago on the “Illumina GAIIx Teardown” blog at swarthmore.edu, I was surprised how many mails I received over the following months and years from several work groups where people were trying to do new things with old HiSeqs. I provided technical documentation and helped with questions about how to reverse engineer protocols, hardware, and software. My own efforts with my three HiSeqs were delayed by my workload at my institute, so I did not finish reverse engineering the communication with the TDI line scanner. One machine is still complete; the other two I completely emptied to reuse parts, since the institute needed the space.
Then, just three months ago, I was pointed to the documentation you have made here and was even more surprised: this is really a nice community that has formed to collect the needed information! Today I found the updates with the successful scans of transmission and fluorescence images. That’s nice; I think there is no need to reverse engineer the scanner part from scratch anymore.
Older platforms like the ABI SOLiD that I hacked before were cobbled together almost entirely from commercial off-the-shelf components and were very easy to reuse and modify, both in hardware and software. I think time-to-market was the most important thing for the competing companies back then, so the construction from R&D was sold nearly unchanged - with a nice enclosure around it.
The HiSeq has a little more custom electronics/firmware (especially in the FPGA part, and also a little in the ARM controller) but is still open enough to be reused and modified, maybe completely. Only if you need to change filters to adapt to alternative fluorophores, as described in various papers and protocols on the “hacked” Illumina GA IIx, can it be mandatory to readjust parts of the optical bench - e.g. the beam expander.
The NovaSeq is even more customized and closed - I have not yet been able to reverse engineer one of them, but I had my nose deep inside while one was set up in our lab two weeks ago.
I hope I can contribute something to this HiSeq 2000/2500 reverse engineering in the future, maybe by writing control software to replace the Illumina and Hamamatsu software - apart from the API, of course. The limiting factor here is really time, but working together this can go quicker.
Best from Kiel (Germany),
That will be one for the students to look into. Cameras are handled by things called device adapters:
For Hamamatsu cameras this looks like a wrapper around the DCAM API. Unfortunately there’s no source for some of them - including DCAM.
I mentioned there that I am currently available for software and electronics contract work, am interested in scientific equipment, and am very much interested in open-sourcing stuff.
So to respond here, rather than clutter up the other thread: I think a WeMakeIt campaign set up by me would make sense, even if @gaudi is the only one to pledge. But it does seem like there are others here interested who could potentially contribute too.
I’d be happy to travel or even relocate for a while to be near a machine or we could budget in buying another machine or sending one to the UK: whatever makes the most sense.
All the details need to be worked out, of course. I need to be sure I am capable of doing all the work or able to get help. How we could collaborate (@jmarkham) and the exact budget need to be worked out too, but how would people feel in general about contributing towards this work in the form of a WeMakeIt campaign?
Unfortunately I can’t commit money at this point (I will ask around though) but I am keen to co-ordinate. At the start of December two elec-eng students are coming to do an internship subject over their summer holidays (about 12 weeks). The reason I have the hi-seq is that it broke and the fault seems to be in the main controller box, so depending on whether this can be addressed, we may end up using a subset of the hardware. (I haven’t had a chance to look harder. Some back story is that I work at a cancer hospital doing clinical genomics and this is just something that they tolerate me doing on the side.)
In terms of overall goals, if the hi-seq can be driven by micro-manager then this makes it available to a wide range of users in the microscope community and I’d love to be able to contribute to that. My personal interest is to use it to look at cells and their labelled dsDNA molecules on a custom microfluidics chip. At the moment I have the use of an old Zeiss Axiovert 200m for development work but the hi-seq would actually be more suitable if it worked.
Depending on subject area, @tboysen and @gaudi, the possibility exists to host and supervise students at PeterMac or WEHI, should that be of interest. Conversely, if you have any opportunities, I’m happy to make enquiries at this end to facilitate. I’ll be in Europe for Christmas and until 12/1. Are you guys gonna be at work over any of that time?
Hi John (and all others here),
I think the road to controlling an intact (!) HiSeq with alternative software - to use it for microscopy and even quantitative fluorescence measurements - is not that long. I see that most of the needed information is already there. OK, I know, it’s easy to just say this but harder to actually do it - I’m sure obstacles I can’t see at the moment are lurking along the way.
I expected the biggest problems in controlling the line scanner with all its features; it was the last component left to reverse engineer. But it’s possible to reuse the Hamamatsu software components you find in an Illumina installation. However, I have not found the time to test this yet and check how much of the image processing is done in the Hamamatsu and Illumina binaries. And I don’t know if there are licence problems in the end when reusing software that came bundled with the Illumina software - hopefully not if it’s just science we do.
My main problem at the moment, holding me back from really contributing, is time. I’m also working at a clinical genomics institute, both technically and in research, and it’s just too much at the moment. But my technical things and the HiSeq hacks in the basement are tolerated here too, maybe because I build new microfluidic lab machines from the junk I have treasured up from generations of sequencers, from the ALF through Roche and ABI to Illumina.
To @jmarkham: if you can provide me some information about what problems the control box is causing, maybe I can find out where to search and even send you the spare parts to get your project running - I have three sets here, minus one fluidic control board where the ARM controller died while connected to the JTAG adapter. First time that has happened to me; I don’t know what I did wrong. (BTW: the valves and drivers are already prepared to send to you, just drop me the address to ship to via private mail.)
Best from Kiel,
Happy to see this project taking some momentum.
Technically, I am sure you can do it, @kaspar. I have made a working software prototype in Pure Data. I know how to control all the elements in the machine (lasers, filter motors, x/y motors etc.). For the scanning, I know the commands to trigger scanning in the FPGA, and I can take pictures with the Hamamatsu TDI demo program. (Documented here: https://www.hackteria.org/wiki/HiSeq2000_-_Next_Level_Hacking - maybe some things need to be explained more; I am happy to do that.)
So what would need to be done:
A better graphical interface, with some sliders, knobs, speed settings, a preview window, maybe selecting the scanning area by mouse, choosing the camera and filters, etc.
Opening six serial ports (for the different units) and sending the corresponding commands. The commands are text based (see the command list on the wiki) and as easy as “LASER ON” / “LASER OFF”.
Reading the picture from the frame grabber. I currently do this with the Hamamatsu TDI demo software, and it works. However, I think it would be a big improvement to integrate this with the control software: the pictures could be previewed in the interface, and automatic scanning etc. would be possible. For this we would need to interface with the frame grabber via the DCAM interface library. The library is openly available from Hamamatsu and probably well documented.
Post-processing of some (BIG) images. This is kind of optional, as it can be done in ImageJ. It would be nice to stitch scanned lines together into one big picture automatically. This is probably more a question of how to program it properly and of the power of your computer.
Extra features: once the basic software is running there are - as always - endless improvements that could be made, such as auto-focusing, combining fluorescence images with bright-field images, etc.
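The serial-control step above can be sketched quickly in Python. This is only a sketch under assumptions: the port name, baud rate, and line terminator are guesses (check the command list on the hackteria wiki for the real values), and actually talking to the machine needs the third-party pyserial package; the framing helpers themselves are pure Python.

```python
# Sketch of the text-based serial control described above.
# Port name, baud rate and CRLF terminator are ASSUMPTIONS --
# verify against the command list on the hackteria wiki.

def frame_command(cmd: str) -> bytes:
    """Terminate a text command and encode it for the serial port."""
    return (cmd.strip() + "\r\n").encode("ascii")

def parse_reply(raw: bytes) -> str:
    """Decode a reply line and strip the terminator."""
    return raw.decode("ascii", errors="replace").strip()

# Talking to the real machine would use pyserial, one port per unit,
# along these lines (port name is hypothetical):
#
#   import serial
#   laser = serial.Serial("COM4", 9600, timeout=1)
#   laser.write(frame_command("LASER ON"))
#   print(parse_reply(laser.readline()))

if __name__ == "__main__":
    print(frame_command("LASER ON"))   # b'LASER ON\r\n'
    print(parse_reply(b"OK\r\n"))      # OK
```

Keeping the framing separate from the I/O makes the command layer testable without any hardware attached.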
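For the stitching step, a minimal NumPy sketch of the idea, under the assumption that each scanned line delivers an equally tall 16-bit swath and that the swaths simply abut side by side (a real pipeline would first estimate and correct overlap between neighbouring swaths, which is what ImageJ's stitching plugins do):

```python
import numpy as np

def stitch_swaths(swaths):
    """Place equally tall line-scan swaths side by side into one image.

    Assumes the swaths already align perfectly; real data would need
    overlap registration before concatenation.
    """
    heights = {s.shape[0] for s in swaths}
    if len(heights) != 1:
        raise ValueError("all swaths must have the same height")
    return np.concatenate(swaths, axis=1)

if __name__ == "__main__":
    a = np.zeros((4, 3), dtype=np.uint16)
    b = np.ones((4, 2), dtype=np.uint16)
    big = stitch_swaths([a, b])
    print(big.shape, big.dtype)  # (4, 5) uint16
```

Memory is the real constraint here: concatenating full-size swaths doubles peak usage, so for very large scans one would write swaths into a memory-mapped output array instead.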
Integration into Micromanager sounds interesting, if it is possible with the machine we have. (As I said earlier, the TDI scanning must be solved, as it is probably not standard in this software.) Also, we have to consider what is easier (stand-alone or integration) and the flexibility for other uses - I can still imagine doing some sequencing with the open software (by integrating the flushing of the chips with the chemicals between images, so we would need to control some valves and pumps) - or even turning the machine into a flow cytometer…
For the WeMakeIt campaign, I also think it might be an easy way to get some extra money. We need to set up a description and a video - probably us talking and some images from the prototype. Administratively it’s very lean; you basically just get the money. Maybe we need to think of a simple use case to include in the campaign, such as finding micro-plastics in soil (which we already tried). We can then spread the word and hopefully find some more people who have a HiSeq and want to reuse it. (And yes, GaudiLabs would just be a gold sponsor of the project.)
**Have to check - I think this science booster ends soon, maybe already at the end of this year. So let’s do it now!**
As for working on the machine: ask around - these machines are available at sequencing labs and universities, so if you could get one, @kaspar, that would be fun. The crowd-funding campaign would probably even help in getting more machines if we mention that we need some for you. (Institutions paid millions for these machines only some years ago and are sad to see them already sitting unused.)
Then it would probably be good if you could come to GaudiLabs, @kaspar and @jmarkham, so that I can show you the scanning and transfer as much knowledge from my hack to you as possible. Between Christmas and New Year would be good for me (I will skip CCC, or maybe we can even meet there). Best if we already have the money by then.
Yes Urs, seeing this momentum is fun! I hope that with the combination of the different areas of experience (coding, reverse engineering, electronics, lab…) of the people here, this will be a project with a productive outcome soon.
My expertise that could be of help is mainly in electronics, embedded computing and reverse engineering, and I have experience working with laser optics. I will help whenever I can. My problem is time: I have too many projects to do for the institute, so my private ones have slowed to a halt - I have not even finished the project with the ABI SOLiD components that I started before the HiSeq hack.
One of the three HiSeq 2000s I could salvage in 2015 is still complete. The director of the institute is planning to put it in the lobby as a display model - so I already added a window for a direct view of the illuminated optical bench and wrote software for an Atmel controller to recreate the light show (it’s a TLC5947 LED driver behind that LED bar). So I could reactivate that machine; I would have to install a new control PC because the old ones were reused as a small cluster. And I have the components from the other two, so if something breaks I can provide spare parts.
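For anyone else recreating the LED-bar light show: the TLC5947 takes 24 grayscale channels of 12 bits each, shifted in as one 288-bit frame, highest channel first, MSB first (per the TI datasheet). A minimal sketch of that packing in Python - how the HiSeq LED bar is actually wired to the driver outputs is an assumption you would need to verify on the hardware:

```python
def tlc5947_pack(grayscale):
    """Pack 24 x 12-bit grayscale values into the 36-byte TLC5947 frame.

    The chip expects channel 23 first, most significant bit first.
    """
    if len(grayscale) != 24:
        raise ValueError("TLC5947 has exactly 24 channels")
    bits = []
    for value in reversed(grayscale):          # channel 23 is shifted out first
        if not 0 <= value <= 0xFFF:
            raise ValueError("values are 12 bit (0..4095)")
        bits.extend((value >> b) & 1 for b in range(11, -1, -1))
    frame = bytearray(36)                      # 288 bits total
    for i, bit in enumerate(bits):
        frame[i // 8] |= bit << (7 - i % 8)    # MSB-first within each byte
    return bytes(frame)

if __name__ == "__main__":
    frame = tlc5947_pack([0] * 23 + [0xFFF])   # only channel 23 fully on
    print(frame[:2].hex())  # fff0
```

On the microcontroller the resulting bytes are just clocked out over SPI and latched; the same packing logic ports directly to C on an Atmel part.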
As newer sequencers replace the HiSeqs more and more because of running costs, it’s just a question of time until the next HiSeq is available for me “to play with” - I would transport it to you (I think you’re located in southern Germany?) if it’s needed to complete this project. I will ask when the next one is thrown out. How short-lived they are, just crazy…
Good to know - spare parts can be helpful. I once broke a silicone damper on my machine and had a hard time fixing it. Laser and optics knowledge is important too: the fluorescence images that I took are somewhat unevenly lit and have stripes. I’m still not sure where this comes from; maybe we can find out. The big LED bar on the machine can be controlled with simple commands via the FPGA virtual serial port (see the command list).
@kaspar, if you want to start testing with the camera and the DCAM library, I have an extra camera unit that I could send you. If you get one of the HiSeq frame grabber cards you could already start experimenting with it.
I just wanted to say that I am very much interested in helping out in any way I can - and I guess reverse engineering of software and hardware, as well as software engineering, would probably be where I could contribute the most at this time.
I am also eager to get my hands on a HiSeq to play around with and would have space for it in my studio in Berlin, so if you hear of any orphaned HiSeqs out there looking for a new home, let me know.
I’ve been following this thread, it’s off my core focus but is really interesting. I’m more of a phone talker… is anyone willing to have a 30 minute conversation with me about how this works? I’m curious to know if we could use the Our Sci platform to manage communications + output from the HiSeq… that would reduce the overhead of creating input/output software to the device, and create an interesting application for us that we may actually be able to use in our paid work (which means we can contribute more consistently to make it useful for this application).
First simple question - does the data go in and out via USB serial? If so, then I’d love to hear more. If not, maybe not a good fit.
It may not be a good fit which is ok, but I’d like to explore it.
Any takers on a call with me so I can pepper you with questions?
Happy to see you are following the project.
The whole machine can be controlled via one USB port that carries several virtual serial devices, by sending commands and getting back feedback.
See ports and command list:
The pictures however go from the cameras (through an FPGA board) and into the framegrabber card in the computer.
And these pictures can be huge (scanning area 100 x 100 mm, resolution 0.35 µm, 16 bit).
See picture scanned:
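To put “huge” in numbers, a quick back-of-the-envelope calculation under those figures (100 x 100 mm area, 0.35 µm pixels, 16 bit per pixel, one channel, no compression):

```python
# Rough size of one full-area, single-channel scan at the numbers quoted above.
area_mm = 100
pixel_um = 0.35
bytes_per_pixel = 2                                # 16-bit grayscale

pixels_per_axis = int(area_mm * 1000 / pixel_um)   # 285714
total_bytes = pixels_per_axis ** 2 * bytes_per_pixel

print(pixels_per_axis)                             # 285714
print(round(total_bytes / 1e9))                    # ~163 GB
```

So a single full-area scan runs to roughly 160 GB per channel, which is why stitching and post-processing strategy matters as much as the acquisition itself.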
I sent an email to WeMakeIt to find out if we indeed have a hard deadline at the end of the year to take advantage of the fund matching. I looked over their terms and conditions and they do stipulate that the campaign beneficiary should be in Switzerland.
It does seem that re-implementing the Pure Data sketch in a more mainstream programming language (which will make it easier to build on and improve without it becoming wire spaghetti) is a fairly straightforward task. The frame grabber is obviously a little less clear in terms of what is needed. I’ll see about getting the linked card and organizing a used PC to run everything.
We should probably organize the campaign into “stretch goals” depending on the amount of funding we can raise. Something like:
The exact levels would have to be estimated with a bit more care, as I just made these up off the top of my head.
@gaudi, maybe you should also apply for a Mozilla mini grant separately from this campaign? In the meantime I have some reading to do to understand how these machines work and what can be done with them.
If anyone does take you up on that, I’d like to join that call too.
Hi all, it’s fantastic to see the level of interest. Just quickly to answer a few points:
For those who have not used Micro-Manager: it has solved many of the problems that have been raised. Here is a slightly old publication, and I recommend having a look at the User’s Guide to see what it can do. Auto-focus, multi-dimensional acquisition etc. are all implemented. In fact, it has already been used to drive an older GAIIx instrument. It has ImageJ available inside, which can be used to do any required post-processing. I have a particular interest in using ilastik on the fly to dynamically control the experiment in response to what’s going on in the field of view.
Of course this depends on being able to make it control the camera. If this can’t be done as per my previous post (and I’m afraid I haven’t had a chance to look into this), then perhaps it would be possible to write a Micro-Manager driver that just acts as a wrapper for the binaries that Urs is using. I would regard this as a last resort, though.
Oh nice, Micro Manager looks useful (and it’s open source).
@kaspar The goal for the campaign needs to be carefully calculated, as the science booster only doubles the money when you reach the set goal. So if we set 3000 and we reach 3000, we will get 6000… what’s more, I guess they don’t double anything beyond the goal, and no stretch goals either. So I would go for a goal of 2000 plus what we can expect from other contributors. Also, I would not make it too complicated with stretch goals. I would include the “frame grabbing” in the main goal - as this is really needed for a nice implementation. And then maybe one stretch goal.
For the “Swiss based” requirement, you could use the address and account of the International Hackteria Society. We did this for the “Biohack Retreat” and for “Humusapiens”, I guess. Maybe they will start to wonder why there are so many projects from this network - that’s just how it is. Marc also mentioned that there are ways around “local Swiss” - and Marc knows the process well, as he did all the accounting for earlier projects. @dusjagr
! Careful !
they also rake 10% off…
Thanks Kaspar, I threw my availability into the Doodle poll. I would love to learn more. I checked out Micro-Manager - that’s a great piece of software for microscopes! I’m glad to know it exists; I hadn’t seen it before, very cool. I think that level of detailed configuration and adjustment is beyond what would be reasonable to do in Our Sci, but if the use of the HiSeq is more straightforward (send this, get that, send this, choose option a, b, or c, send this, receive data and then visualize) and consistent (like you always do it using method 1, 2, or 3 - NOT “let’s adjust lots of options on the fly” or “here’s 10 configuration screens with 10 buttons each” or something), then the Our Sci platform would likely be useful and save development time. In other words: we’re about making processes that do the same shit over and over in a nice, consistent, and comparable way. Looking forward to learning more!
I had a closer look at the camera issues…
The DCAM-API works with various Hamamatsu hardware, including the Phoenix frame grabbers in the HiSeq.
This is what microscope vendors and others use to support Hamamatsu cameras from within their own software.
Similarly, Micro-Manager also uses it to support Hamamatsu cameras.
It may already support the hiseq’s cameras: “Other cameras with the Phoenix card may also work…”
Easy to check - install it along with the latest DCAM-API, open up the UI and see if it can talk to the camera and whether it knows about TDI.
While Micro-Manager is open source, they can’t publish the code for some drivers due to vendor restrictions. They may be more forthcoming offline - I’ll get in touch and see what they say.
If the existing Hamamatsu device adapter doesn’t do what we need and Micro-Manager are unwilling to let us try to fix it, then there is sufficient documentation and prior art to write another one. On the Micro-Manager side this is described here. On the Hamamatsu side, if you register, you can download the SDK. The SDK contains code and documentation so you can write programs using the DCAM-API; it looks like it has everything required for TDI. From memory, registration just involves sending a camera model and serial number. There are sample programs to control the camera, stream images to disc, and also to interface with CUDA (useful for real-time image processing). If anyone wanting to have a look at the SDK has trouble registering, send me an email.
It looks like everyone is available Friday 6/11 1 am Melbourne time. Shall we lock that in?