I read Facebook's announcement today about its new open hardware modular switch, and something tells me that Facebook has not learned the fragmentation lesson from Google.
The term “open hardware” comes from the idea that no license is needed. If you carefully review the complete article and set aside the “propaganda”, the really serious technical questions raised by this interesting proposal concern its features and compatibility.
Obviously, you have to think in terms of a data center switch architecture, not a normal border switch or intra-office switch design. Keep this in mind, because they designed it for their own data centers using a “top of rack” implementation with Ethernet, but I found nothing about Fibre Channel, FCoE, direct fiber, or iSCSI, and these are a very important part of the data center.
In the opening paragraphs you can read about the restrictions Facebook ran into with current switch architectures and how this proposal solves them. One of those restrictions is scaling beyond today's industrial switches, but how do you grow further with this implementation? By adding more external switches, maybe? Because I only count 128 ports of switch capacity, and there are bigger switches than that on the market.
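Where my count of 128 ports comes from can be sketched with some back-of-the-envelope arithmetic. The figures below (8 externally facing line cards with 16 40G ports each) are my reading of the announcement, not verified specifications:

```python
# Hypothetical capacity check for the 6-pack chassis.
# All figures are assumptions taken from the announcement, not a datasheet.

LINE_CARDS = 8          # externally facing line cards (assumed)
PORTS_PER_CARD = 16     # 40G Ethernet ports per line card (assumed)
PORT_SPEED_GBPS = 40

total_ports = LINE_CARDS * PORTS_PER_CARD
total_capacity_gbps = total_ports * PORT_SPEED_GBPS

print(f"{total_ports} external ports")          # 128 external ports
print(f"{total_capacity_gbps} Gbps aggregate")  # 5120 Gbps aggregate
```

If those assumptions hold, the only way to grow past 128 ports is to add more chassis and interconnect them, which is exactly the kind of external scaling the announcement claims to avoid.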
A very old switch design principle tells me that equipment with distributed management, as in this case, usually has an active backplane so that no slot is lost to a management module, which is one of its great advantages; but that is not the case in this design.
The design flexibility of using SFP+ ports on the line card does not do the cost of the equipment any favors. Remember, we are talking here about 40G Ethernet SFP ports. Why not a fixed solution?
Of course, there are many features and philosophical discussions about the correct equipment and what you should expect from a data center switch: no STP, cut-through over store-and-forward, Layer 3 and Layer 4 switching, priority switching, and so on (for example, in my opinion Ethernet is not a backplane protocol). But the main question for me is what to expect from an “open hardware” project, and a Linux-based one at that. If this kind of project goes further, we can expect big troubleshooting issues with multiple vendor flavors inside one box, each blaming the other for failures, with multiple management consoles and different implementations of “how to do it” according to everyone's own reading of RFC XXXX. Do you think I am exaggerating? Just take a look at Google's Android in the mobile phone industry, or the many Linux flavors on the server and desktop. Keep this in mind when you decide to design your data center around open, undocumented, and unsupported networking hardware.
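The cut-through versus store-and-forward point above is about per-hop latency: a store-and-forward switch must receive the entire frame before it starts transmitting, while a cut-through switch starts forwarding once it has read enough of the header. A minimal sketch of the difference at 40G, assuming an illustrative 1500-byte frame and a 64-byte header lookahead (both numbers are my assumptions, not anything stated in the announcement):

```python
# Illustrative comparison of per-hop serialization delay for
# store-and-forward vs cut-through switching on a 40G link.

def serialization_delay_us(nbytes: int, link_gbps: float) -> float:
    """Time to clock nbytes onto the wire, in microseconds."""
    # Gbps == bits per microsecond, so bits / (Gbps * 1000) gives ns -> /1000 us
    return nbytes * 8 / (link_gbps * 1000)

FRAME_BYTES = 1500   # full-size Ethernet frame (assumed example)
HEADER_BYTES = 64    # bytes read before a cut-through switch forwards (assumed)
LINK_GBPS = 40.0     # 40G Ethernet

# Store-and-forward: whole frame must arrive before forwarding begins.
sf = serialization_delay_us(FRAME_BYTES, LINK_GBPS)
# Cut-through: forwarding starts after the header is parsed.
ct = serialization_delay_us(HEADER_BYTES, LINK_GBPS)

print(f"store-and-forward adds ~{sf:.3f} us per hop")
print(f"cut-through adds ~{ct:.4f} us per hop")
```

The gap multiplies by the number of hops in the fabric, which is why it matters so much more inside a multi-tier data center topology than on a border switch.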
Please review the switch propaganda here: https://code.facebook.com/posts/717010588413497/introducing-6-pack-the-first-open-hardware-modular-switch/