Chris Testa, KB2BMH, taught a class on programming the gate array in the SmartFusion chip, which combines a Linux system and a programmable gate array on a single chip, using MyHDL, the Python-based hardware description language, to implement a software-defined radio transceiver. Watch all four sessions: 1, 2, 3, 4. And get the slides and code. Chris’s Whitebox hardware design, an FCC-legal 50-1000 MHz software-defined transceiver in open hardware and open source, will be available in a few months. Here’s an overview of Whitebox and the HT of the Future. Slashdot readers funded this video and videos of the entire TAPR conference. Thanks!
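For readers who haven’t seen MyHDL before, here is a minimal sketch of the style the class teaches: hardware described as decorated Python functions. This is not code from Whitebox, and all names below are invented for illustration; a phase accumulator (the core of the numerically controlled oscillator an SDR uses for tuning) is just a convenient, domain-flavored example.

    # A hypothetical MyHDL sketch (not Whitebox code): a phase
    # accumulator, the heart of a numerically controlled oscillator.
    from myhdl import block, always, Signal, modbv

    @block
    def phase_accumulator(clk, step, phase):
        # On each clock edge, advance the phase by the tuning step.
        # modbv arithmetic wraps around automatically, which is exactly
        # the behavior an NCO needs.
        @always(clk.posedge)
        def accumulate():
            phase.next = phase + step
        return accumulate

    # Instantiate with 32-bit wrapping signals, then convert to Verilog
    # for synthesis onto the FPGA fabric.
    clk = Signal(bool(0))
    step = Signal(modbv(0)[32:])
    phase = Signal(modbv(0)[32:])
    phase_accumulator(clk, step, phase).convert(hdl='Verilog')

The appeal of the approach is that the same Python description can be simulated, unit-tested with ordinary Python tools, and then converted to Verilog or VHDL for the gate array.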
As you may have heard, the NSA has had some success in cracking Secure Shell (SSH) connections. To respond to these risks, a guide written by Stribika tries to help you make your shell as robust as possible. The two main ideas are to make the crypto harder to break and to make stealing keys impossible. So prepare a cup of coffee and read the tutorial carefully to see what could be improved in your configuration. Stribika also gives some extra security tips: don’t install what you don’t need (since every line of code can introduce a bug), prefer open source code that has actually been reviewed, keep your software up to date, and use exploit mitigation technologies.
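To give a taste of the kind of changes the guide walks through, the excerpt below pins an OpenSSH server to modern key-exchange, cipher, and MAC choices. Treat it as a sketch in the spirit of the guide rather than a copy of its exact recommendations, and test it against your clients before deploying.

    # /etc/ssh/sshd_config (excerpt) -- restrict SSH to modern crypto.
    # Verify that all of your clients support these before rolling out.
    KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
    Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes256-ctr
    MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com

    # Keys instead of passwords: stealing access now requires
    # compromising the client machine, not just guessing a password.
    PasswordAuthentication no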
A wonderful whitepaper entitled "Worst Cases for Correct Rounding of the Elementary Functions in Double Precision" (PDF) has been released. This has prompted Siddhesh Poyarekar from Red Hat to take a professional look into the mathematical functions in glibc (the GNU C library). He has been able to provide an 8x performance improvement in the slowest path of the pow() function. Other transcendental functions got similar improvements, since the fixes were mostly in the generic multiple-precision code. These improvements have already gone into glibc 2.18 upstream. Siddhesh believes that a lot of the low-hanging fruit has now been picked, but that this is definitely not the end of the road for improvements in multiple-precision performance. There are other, more complicated improvements to pursue, such as bounding the worst-case precision needed by the exp() and log() functions, based on the results of the paper.
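To see why a "slow path" exists at all, consider what correct rounding demands: when the true result lands very close to the midpoint between two adjacent doubles, the library must recompute with many extra bits before it can round safely. Here is a small sketch of that situation using the third-party mpmath package as a high-precision reference (an assumption for illustration; glibc’s actual slow path uses its own multiple-precision code):

    # A sketch of the correct-rounding problem, assuming mpmath is
    # installed (pip install mpmath) and Python 3.9+ for math.ulp().
    import math
    from mpmath import mp, mpf

    mp.prec = 200                    # far more than double's 53 bits

    # sqrt(1 + 2**-52) lands almost exactly on a rounding midpoint,
    # so this is a genuinely hard case for correct rounding.
    x, y = 1.0000000000000002, 0.5
    fast = math.pow(x, y)            # ordinary double-precision result
    exact = mpf(x) ** mpf(y)         # high-precision reference

    # Distance of the true result from the returned double, in units
    # in the last place: values near 0.5 ulp force a slow path that
    # recomputes with ever more precision before rounding.
    err_ulp = float(abs(mpf(fast) - exact) / mpf(math.ulp(fast)))
    print(f"pow returned {fast!r}, true result is {err_ulp:.6f} ulp away")

The paper’s contribution is to catalogue such worst cases, which is what makes it possible to bound how much precision the slow path ever actually needs.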
A recent post on Reactive Programming triggered discussions about what is and isn’t considered reactive logic. Many have already discovered that Reactive Programming can help improve quality and transparency, reduce programming time, and decrease maintenance. But for others, it raises questions: How does Reactive differ from conventional event-oriented programming? Isn’t Reactive just another form of triggers? What kind of improvement in coding can you expect from Reactive, and why? To help clear things up, columnist and Espresso Logic CTO Val Huber offers a real-life example that he claims shows the power and long-term advantages Reactive offers. ‘In this scenario, we’ll compare what it takes to implement business logic using Reactive Programming versus two different conventional procedural programming models: Java with Hibernate and MySQL triggers,’ he writes. ‘In conclusion, Reactive appears to be a very promising technology for reducing delivery times while improving system quality. And no doubt this discussion may raise other questions on extensibility and performance for Reactive Programming.’ Do you agree?
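To make the comparison concrete, here is a toy Python sketch of the declarative idea (the Order and LineItem names are invented for this illustration, not Espresso Logic’s API): a derived value is stated once as a rule over its inputs and re-derived on every change, where the procedural alternatives would scatter the same bookkeeping across Java/Hibernate code or MySQL triggers.

    # A toy illustration of reactive-style derivation, not anyone's
    # real API. The rule "total = sum of qty * price" is declared once
    # and re-fires on every change, instead of being re-implemented in
    # each update path the way trigger or DAO code would require.
    class Order:
        def __init__(self):
            self.items = []
            self.total = 0.0

        def recalc(self):
            # the single declarative rule
            self.total = sum(i.qty * i.price for i in self.items)

    class LineItem:
        def __init__(self, order, qty, price):
            self.qty, self.price = qty, price
            order.items.append(self)
            order.recalc()       # any mutation re-derives the total

    order = Order()
    LineItem(order, qty=2, price=10.0)
    LineItem(order, qty=1, price=5.0)
    print(order.total)           # 25.0, kept consistent automatically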
Graph databases like Neo4j are inherently built for managing the complex and growing web of connected data. Neo Technology has put together a few resources about the Internet (Graph) of Things and what this connectivity means for the way we interact with people and devices. Check out their video, “Graphs to Power the Internet of Connected Things”, and download their whitepaper, “The Internet of (Connected) Things”.
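For a sense of how naturally connected devices map onto a graph, here is a small sketch using Neo4j’s official Python driver. The server address, credentials, and device IDs are placeholders invented for this example: devices become nodes, links become relationships, and “what can this gateway reach?” becomes a short path query.

    # A minimal sketch, assuming a Neo4j server on localhost and the
    # official Python driver (pip install neo4j). Names and credentials
    # below are placeholders.
    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687",
                                  auth=("neo4j", "password"))

    with driver.session() as session:
        # Devices are nodes; a physical link is a relationship.
        session.run(
            "MERGE (a:Device {id: $a}) "
            "MERGE (b:Device {id: $b}) "
            "MERGE (a)-[:CONNECTED_TO]->(b)",
            a="thermostat-1", b="gateway-1")

        # Traversal query: everything reachable from the gateway
        # within three hops.
        for record in session.run(
                "MATCH (:Device {id: $id})-[:CONNECTED_TO*1..3]-(d:Device) "
                "RETURN DISTINCT d.id AS id", id="gateway-1"):
            print(record["id"])

    driver.close()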
Infoworld has an absolutely wonderful article up, entitled “13 Essential Programming Tools for the Mobile Web” (a must-read for any mobile web developer). It covers jQuery Mobile, ChocolateChip-UI, Mobl, and many other languages, APIs, and frameworks that greatly ease the work of developing web apps for mobile browsers.
MIT Technology Review has an excellent article summarizing the current state of quantum computing. It focuses on the efforts Microsoft and Alcatel-Lucent’s Bell Labs have made over the past few years to build stable qubits. “In 2012, physicists in the Netherlands announced a discovery in particle physics that started chatter about a Nobel Prize. Inside a tiny rod of semiconductor crystal chilled cooler than outer space, they had caught the first glimpse of a strange particle called the Majorana fermion, finally confirming a prediction made in 1937. It was an advance seemingly unrelated to the challenges of selling office productivity software or competing with Amazon in cloud computing, but Craig Mundie, then heading Microsoft’s technology and research strategy, was delighted. The abstruse discovery, partly underwritten by Microsoft, was crucial to a project at the company aimed at making it possible to build immensely powerful computers that crunch data using quantum physics. ‘It was a pivotal moment,’ says Mundie. ‘This research was guiding us toward a way of realizing one of these systems.’”