
Getac: Rugged mobile technology is stronger than ever, but we’re sticking to computing devices for now


We’ve always admired rugged mobile technology. Faster processors, higher-resolution screens, intuitive operating systems – they’re all essential and impressive. But the build quality and durability of laptops, tablets, and phones capable of surviving unimaginable drops, shocks, and environmental extremes is in a league of its own.

There’s a reason for the growing popularity of the best rugged laptops, tablets, and phones. After all, better specs won’t mean much when the display shatters or when dust and sand carpet the motherboard. According to one survey, the rugged tablet market alone is forecast to be worth over $1.9 billion by 2032, a trend replicated for rugged laptops and phones. One of the companies leading the way in mobile solutions protected against the elements is Getac.


Penning micro-trap for quantum computing


Trapped atomic ions are among the most advanced technologies for realizing quantum computation and quantum simulation, based on a combination of high-fidelity quantum gates [1,2,3] and long coherence times [7]. These have been used to realize small-scale quantum algorithms and quantum error correction protocols. However, scaling the system size to support orders-of-magnitude more qubits [8,9] seems highly challenging [10,11,12,13]. One of the primary paths to scaling is the quantum charge-coupled device (QCCD) architecture, which involves arrays of trapping zones between which ions are shuttled during algorithms [13,14,15,16]. However, challenges arise because of the intrinsic nature of the radio-frequency (rf) fields, which require specialized junctions for two-dimensional (2D) connectivity of different regions of the trap. Although successful demonstrations of junctions have been performed, these require dedicated large-footprint regions of the chip that limit trap density [17,18,19,20,21]. This adds to several other undesirable features of the rf drive that make micro-trap arrays difficult to operate [6], including substantial power dissipation due to the currents flowing in the electrodes, and the need to co-align the rf and static potentials of the trap to minimize micromotion, which affects gate operations [22,23]. Power dissipation is likely to be a very severe constraint in trap arrays of more than 100 sites [5,23].

An alternative to rf electric fields for radial confinement is to use a Penning trap, in which only static electric and magnetic fields are used. This is an extremely attractive feature for scaling because of the lack of power dissipation and of geometrical restrictions on the placement of ions [23,24]. Penning traps are a well-established tool for precision spectroscopy with small numbers of ions [25,26,27,28], and quantum simulations and quantum control have been demonstrated in crystals of more than 100 ions [29,30,31]. However, the single trap site used in these approaches does not provide the flexibility and scalability necessary for large-scale quantum computing.

Invoking the idea of the QCCD architecture, the Penning QCCD can be envisioned as a scalable approach, in which a micro-fabricated electrode structure enables the trapping of ions at many individual trapping sites, which can be actively reconfigured during the algorithm by changing the electric potential. Beyond the static arrays considered in previous work [23,32], here we conceptualize that ions in separated sites are brought close to each other to use the Coulomb interaction for two-qubit gate protocols implemented through applied laser or microwave fields [33,34], before being transported to additional locations for further operations. The main advantage of this approach is that the transport of ions can be performed in three dimensions almost arbitrarily without the need for specialized junctions, enabling flexible and deterministic reconfiguration of the array with low spatial overhead.

In this study, we demonstrate the fundamental building block of such an array by trapping a single ion in a cryogenic micro-fabricated surface-electrode Penning trap. We demonstrate quantum control of its spin and motional degrees of freedom and measure a heating rate lower than in any comparably sized rf trap. We use this system to demonstrate flexible 2D transport of ions above the electrode plane with negligible heating of the motional state. This provides a key ingredient for scaling based on the Penning ion-trap QCCD architecture.

The experimental setup involves a single beryllium (9Be+) ion confined using a static quadrupolar electric potential generated by applying voltages to the electrodes of a surface-electrode trap with geometry shown in Fig. 1a–c. We use a radially symmetric potential \(V(x,y,z)=m{\omega }_{z}^{2}({z}^{2}-({x}^{2}+{y}^{2})/2)/(2e)\), centred at a position 152 μm above the chip surface. Here, m is the mass of the ion, ωz is the axial frequency and e is the elementary charge. The trap is embedded in a homogeneous magnetic field aligned along the z-axis with a magnitude of B ≈ 3 T, supplied by a superconducting magnet. The trap assembly is placed in a cryogenic, ultrahigh vacuum chamber that fits inside the magnet bore, with the aim of reducing background-gas collisions and motional heating. Using a laser at 235 nm, we load the trap by resonance-enhanced multiphoton ionization of neutral atoms produced from either a resistively heated oven or an ablation source [35]. We regularly trap single ions for more than a day, with the primary loss mechanism being related to user interference. Further details about the apparatus can be found in the Methods.

Fig. 1: Surface-electrode Penning trap.

a, Schematic showing the middle section of the micro-fabricated surface-electrode trap. The trap chip is embedded in a uniform magnetic field along the z axis, and the application of d.c. voltages on the electrodes leads to 3D confinement of the ion at a height h ≈ 152 μm above the surface. Electrodes labelled ‘d.c. + rf’ are used for coupling the radial modes during Doppler cooling. b, Micrographic image of the trap chip, with an overlay of the direction of the laser beams (all near 313 nm) and microwave radiation (near ω0 ≈ 2π × 83.2 GHz) required for manipulating the spin and motion of the ion. All laser beams run parallel to the surface of the trap and are switched on or off using acousto-optic modulators, whereas microwave radiation is delivered to the ion by a horn antenna close to the chip. Scale bar, 100 μm. c, Epicyclic motion of the ion in the radial plane (xy) resulting from the sum of the two circular eigenmodes, the cyclotron and the magnetron modes. d, Electronic structure of the 9Be+ ion, with the relevant transitions used for coherent and incoherent operations on the ion. Only the levels with nuclear spin mI = +3/2 are shown. The virtual level (dashed line) used for Raman excitation is detuned ΔR ≈ +2π × 150 GHz from the 2²P3/2 |mI = +3/2, mJ = +3/2⟩ state.

The three-dimensional (3D) motion of an ion in a Penning trap can be described as a sum of three harmonic eigenmodes. The axial motion along z is a simple harmonic oscillator with frequency ωz. The radial motion is composed of modified-cyclotron (ω+) and magnetron (ω−) components, with frequencies ω± = ωc/2 ± Ω, where \(\varOmega =\sqrt{{\omega }_{{\rm{c}}}^{2}-2{\omega }_{z}^{2}}/2\) (ref. 36) and ωc = eB/m ≈ 2π × 5.12 MHz is the bare cyclotron frequency. Voltage control over the d.c. electrodes of the trap enables the axial frequency to be set to any value up to the stability limit, ωz ≤ ωc/\(\sqrt{2}\) ≈ 2π × 3.62 MHz. This corresponds to a range 0 ≤ ω− ≤ 2π × 2.56 MHz and 2π × 2.56 MHz ≤ ω+ ≤ 2π × 5.12 MHz for the magnetron and modified-cyclotron modes, respectively. Doppler cooling of the magnetron mode, which has a negative total energy, is achieved using a weak axialization rf quadrupolar electric field (less than 60 mV peak-to-peak voltage on the electrodes) at the bare cyclotron frequency, which resonantly couples the magnetron and modified-cyclotron motions [37,38]. For the wiring configuration used in this work, the null of the rf field is produced at a height h ≈ 152 μm above the electrode plane. Aligning the null of the d.c. (trapping) field to the rf null is beneficial because it reduces the driven radial motion at the axialization frequency; nevertheless, we find that Doppler cooling works with a relative displacement of tens of micrometres between the d.c. and rf nulls, albeit with lower efficiency. The rf field is required only during Doppler cooling, and not, for instance, during coherent operations on the spin or motion of the ion. All measurements in this work are taken at an axial frequency ωz ≈ 2π × 2.5 MHz, unless stated otherwise. The corresponding radial frequencies are ω+ ≈ 2π × 4.41 MHz and ω− ≈ 2π × 0.71 MHz.
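The quoted mode frequencies follow directly from the two relations above; a short numerical check (plain Python; frequencies given in ordinary-frequency units, since the factors of 2π cancel):

```python
import math

def radial_modes(f_z, f_c):
    """Modified-cyclotron and magnetron frequencies (f+, f-) for axial
    frequency f_z and bare cyclotron frequency f_c, via
    f± = f_c/2 ± sqrt(f_c**2 - 2*f_z**2)/2."""
    Omega = math.sqrt(f_c**2 - 2 * f_z**2) / 2
    return f_c / 2 + Omega, f_c / 2 - Omega

f_c = 5.12e6                      # omega_c / 2pi for 9Be+ at B ~ 3 T
f_z = 2.5e6                       # axial frequency used in this work
f_plus, f_minus = radial_modes(f_z, f_c)
print(f"f+ = {f_plus / 1e6:.2f} MHz, f- = {f_minus / 1e6:.2f} MHz")   # 4.41, 0.71
print(f"stability limit: f_z <= {f_c / math.sqrt(2) / 1e6:.2f} MHz")  # 3.62
```

Note that f+ + f− = f_c for any valid axial frequency, a useful sanity check on trap calibrations.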

Figure 1d shows the electronic structure of the beryllium ion along with the transitions relevant to this work. We use an electron spin qubit (consisting of the \(|\uparrow \rangle \equiv |{m}_{{\rm{I}}}=+\,3/2,{m}_{{\rm{J}}}=+\,1/2\rangle \) and \(|\downarrow \rangle \equiv |{m}_{{\rm{I}}}\,=+\,3/2,{m}_{{\rm{J}}}=-\,1/2\rangle \) eigenstates within the 2²S1/2 ground-state manifold), which in the high field is almost decoupled from the nuclear spin. The qubit frequency is ω0 ≈ 2π × 83.2 GHz. Doppler cooling is performed using the detection laser red-detuned from the (bright) |↑⟩ ↔ 2²P3/2 |mI = +3/2, mJ = +3/2⟩ cycling transition, whereas an additional repump laser optically pumps population from the (dark) |↓⟩ level to the higher-energy |↑⟩ level through the fast-decaying 2²P3/2 |mI = +3/2, mJ = +1/2⟩ excited state. State-dependent fluorescence with the detection laser allows for discrimination between the two qubit states based on photon counts collected on a photomultiplier tube using an imaging system that uses a 0.55-NA Schwarzschild objective. The fluorescence can also be sent to an electron-multiplying CCD (EMCCD) camera.

Coherent operations on the spin and motional degrees of freedom of the ion are performed either using stimulated Raman transitions with a pair of lasers tuned 150 GHz above the 2²P3/2 |mI = +3/2, mJ = +3/2⟩ state or using a microwave field. The former requires two 313 nm lasers phase-locked at the qubit frequency, which we achieve using the method outlined in ref. 39. By choosing different orientations of the Raman laser paths, we can address the radial or axial motions, or implement single-qubit rotations using a co-propagating Raman beam pair.

The qubit transition has a sensitivity of 28 GHz T⁻¹ to the magnetic field, meaning the phase coherence of our qubit is susceptible to temporal fluctuations or spatial gradients of the field across the extent of the motion of the ion. Using Ramsey spectroscopy, we measure a coherence time of 1.9(2) ms with the Raman beams. Similar values are measured with the microwave field, indicating that laser phase noise from beam-path fluctuations or imperfect phase-locking does not significantly contribute to dephasing. The noise appears to be slow on the timescale (about 1 ms to 10 ms) of a single experimental shot consisting of cooling, probing and detection, and the fringe-contrast decay follows a Gaussian curve. We note that the coherence is reduced if vibrations induced by the cryocoolers used to cool the magnet and the vacuum apparatus are not well decoupled from the experimental setup. Further characterization of the magnetic field noise is performed by applying different orders of the Uhrig dynamical decoupling sequence [40,41], with extracted coherence times of 3.2(1) ms, 5.8(3) ms and 8.0(7) ms for orders 1, 3 and 5, respectively. Data on spin dephasing are presented in Extended Data Fig. 1.
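The Uhrig sequence places its π-pulses at non-uniform times given by a standard closed form (this is textbook UDD, not a detail taken from this paper's methods); a quick sketch in plain Python:

```python
import math

def udd_pulse_times(n, T):
    """Pulse times of an n-pulse Uhrig dynamical-decoupling (UDD)
    sequence over total evolution time T:
    t_j = T * sin^2(pi * j / (2n + 2)),  j = 1..n."""
    return [T * math.sin(math.pi * j / (2 * n + 2)) ** 2 for j in range(1, n + 1)]

# Order 1 reduces to an ordinary spin echo: a single pi-pulse near T/2.
for n in (1, 3, 5):   # the orders applied in the experiment
    print(n, [round(t, 3) for t in udd_pulse_times(n, 1.0)])
```

Higher orders crowd the pulses towards the ends of the interval, which is what suppresses the low-frequency noise components that dominate here.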

A combination of the Doppler cooling and repump lasers prepares the ion in the |↑⟩ electronic state and a thermal distribution of motional Fock states. After Doppler cooling using the axialization technique, we measure mean occupations of \(\{{\bar{n}}_{+},{\bar{n}}_{-},{\bar{n}}_{z}\}=\{6.7(4),9.9(6),4.4(1)\}\) using sideband spectroscopy on the first four red and blue sidebands [38]. Pulses of continuous sideband cooling [31,38] are subsequently performed by alternately driving the first and third blue sidebands of a positive-energy motional mode and red sidebands of a negative-energy motional mode while simultaneously repumping the spin state to the bright state. The 3D ground state can be prepared by applying this sequence for each of the three modes in succession. The use of the third sideband is motivated by the high Lamb–Dicke parameters of approximately 0.4 in our system [42,43]. After a total time of 60 ms of cooling, we probe the temperature using sideband spectroscopy on the first blue and red sidebands [44]. Assuming thermal distributions, we measure \(\{{\bar{n}}_{+},{\bar{n}}_{-},{\bar{n}}_{z}\}=\{0.05(1),0.03(2),0.007(3)\}\). We have achieved similar performance of the ground-state cooling at all trap frequencies probed to date. The long duration of the sideband cooling sequence stems from the large (estimated as 80 μm) Gaussian beam radius of the Raman beams, each with power in the range of 2 mW to 6 mW, leading to a Rabi frequency Ω0 ≈ 2π × 8 kHz, which corresponds to π-times of approximately 62 μs, 145 μs and 2,000 μs for the ground-state carrier, first and third sidebands, respectively, at ωz = 2π × 2.5 MHz.

Trapped-ion quantum computing uses the collective motion of the ions for multi-qubit gates and thus requires the motional degree of freedom to retain coherence over the timescale of the operation [33,45]. A contribution to decoherence comes from motional heating due to fluctuations in the electric field at frequencies close to the oscillation frequencies of the ion. We measure this by inserting a variable-length delay twait between the end of sideband cooling and the temperature probe. As shown in Fig. 2, we observe motional heating rates \(\{{\dot{\bar{n}}}_{+},{\dot{\bar{n}}}_{-},{\dot{\bar{n}}}_{z}\}=\{0.49(5)\,{{\rm{s}}}^{-1},3.8(1)\,{{\rm{s}}}^{-1},0.088(9)\,{{\rm{s}}}^{-1}\}\). The corresponding electric-field spectral noise density for the axial mode, \({S}_{{\rm{E}}}=4\hbar m{\omega }_{z}{\dot{\bar{n}}}_{z}/{e}^{2}=3.4(3)\times {10}^{-16}\,{{\rm{V}}}^{2}{{\rm{m}}}^{-2}{{\rm{Hz}}}^{-1}\), is lower than any comparable measurement in a trap of similar size [46,47]. As detailed in the Methods, we can trap ions in our setup with the trap electrodes detached from any external supply voltage except during Doppler cooling, which requires the axialization signal to pass to the trap. Using this method, we measure heating rates \({\dot{\bar{n}}}_{z}=0.10(1)\,{{\rm{s}}}^{-1}\) and \({\dot{\bar{n}}}_{+}=0.58(2)\,{{\rm{s}}}^{-1}\) for the axial and cyclotron modes, respectively, whereas the rate for the lower-frequency magnetron mode drops to \({\dot{\bar{n}}}_{-}=1.8(3)\,{{\rm{s}}}^{-1}\). This reduction suggests that external electrical noise contributes to the higher magnetron heating rate in the earlier measurements.
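The quoted noise density follows directly from the heating rate via the formula above; a quick consistency check in plain Python (CODATA constants; the atomic mass of ⁹Be is used for the ion, neglecting the missing electron):

```python
import math

hbar = 1.054571817e-34    # reduced Planck constant, J s
e = 1.602176634e-19       # elementary charge, C
u = 1.66053907e-27        # atomic mass unit, kg
m = 9.012182 * u          # mass of 9Be+ (atomic mass, electron neglected)

def electric_field_noise(f_z, ndot_z):
    """Axial electric-field noise density S_E = 4*hbar*m*omega_z*ndot/e^2
    for axial frequency f_z (Hz) and heating rate ndot_z (quanta/s)."""
    return 4 * hbar * m * (2 * math.pi * f_z) * ndot_z / e**2

print(f"{electric_field_noise(2.5e6, 0.088):.1e} V^2 m^-2 Hz^-1")   # ~3.4e-16
```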

Fig. 2: Motional coherence.

a, Bright-state population P measured after applying the first red or blue axial sideband probe-pulse to the sideband-cooled ion. As the bright state |↑⟩ has a higher energy than the dark state |↓⟩, the blue sideband cannot be driven when the ion is in the ground state of the axial mode. b, Average phonon number \(\bar{n}\) calculated using the sideband-ratio method [44] for all three modes as a function of increasing twait. The purple and orange points indicate data taken with the trap connected and detached, respectively. The heating rates are extracted from the slopes of the linear fits. c, Motional dephasing of the axial mode observed by Ramsey spectroscopy. The purple points indicate data taken with an echo pulse in the sequence. The orange points indicate data taken with an echo pulse, in which, additionally, the trap was detached between Doppler cooling and the detection pulse. Whereas the dataset with the voltage sources detached is taken at ωz ≈ 2π × 2.5 MHz, the two data series with the trap attached are taken at an axial mode frequency ωz ≈ 2π × 3.1 MHz. The dashed lines show the 1/e line normalized to the Gaussian fits. All error bars indicate the standard error.

Motional-state dephasing was measured using Ramsey spectroscopy, involving setting up a superposition |↑⟩(|0⟩z + |1⟩z)/\(\sqrt{2}\) of the first two Fock states of the axial mode (here ωz ≈ 2π × 3.1 MHz) using a combination of carrier and sideband pulses [48]. Following a variable wait time, we reverse the preparation sequence with a shifted phase. The resulting decay of the Ramsey contrast shown in Fig. 2c is much faster than would be expected from the heating rate. The decay is roughly Gaussian in form with a 1/e coherence time of 66(5) ms. Inserting an echo pulse in the Ramsey sequence extends the coherence time to 240(20) ms, which indicates that low-frequency noise components dominate the bare Ramsey coherence. A further improvement of the echo coherence time to 440(50) ms is observed when the trap electrodes are detached from external voltage sources between the conclusion of Doppler cooling and the start of the detection pulse, in which again the axialization signal is beneficial. The data with the voltage sources detached are taken at ωz ≈ 2π × 2.5 MHz.

An important component of the QCCD architecture [14] is ion transport. We demonstrate that the Penning trap approach enables us to perform this flexibly in two dimensions by adiabatically transporting a single ion and observing it at the new location. The ion is first Doppler-cooled at the original location and then transported in 4 ms to a second desired location along a direct trajectory. We then perform a 500-μs detection pulse without applying axialization and collect the ion fluorescence on an EMCCD camera. The exposure of the camera is limited to the time window defined by the detection pulse. Omitting axialization is important when the ion is sufficiently far from the rf null: it minimizes radial excitation due to micromotion, so the ion produces enough fluorescence during the detection window. The ion is then returned to the initial location. Figure 3 shows a result in which we have drawn the first letters of the ETH Zürich logo. The image quality and maximum canvas size are limited only by the point-spread function and field of view of our imaging system, as well as the spatial extent of the detection laser beam, and not by any property of the transport. Reliable transport to a set location and back has been performed up to 250 μm. By probing ion temperatures after transport using sideband thermometry (Extended Data Fig. 2), we have observed no evidence of motional excitation from transport beyond the natural heating expected over the duration of the transport. This contrasts with earlier non-adiabatic radial transport of ensembles of ions in Penning traps, in which a good fraction of the ions were lost in each transport [49].

Fig. 3: Demonstration of 2D transport.

A single ion is transported adiabatically in the xz plane (normal to the imaging optical axis). The ion is illuminated for 500 μs at a total of 58 positions, here defined by the ETH Zürich logo (see inset for reference image). The red circle indicates the initial position in which the ion is Doppler-cooled. The ion is moved across a region spanning approximately 40 μm and 75 μm along the x (radial) and z (axial) directions, respectively. The sequence is repeated 172 times to accumulate the image.

This work marks a starting point for quantum computing and simulation in micro-scale Penning trap 2D arrays. The next main step is to operate with multiple sites of such an array, which will require optimization of the loading while keeping the ions trapped in shallow potentials. This can be accomplished in the current trap with the appropriate wiring, but notable advantages could be gained by using a trap with a dedicated loading region and shuttling ions into the micro-trap region. Multi-qubit gates could then be implemented following the standard methods demonstrated in rf traps [23,34]. Increased spin-coherence times could be achieved through improvements to the mechanical stability of the magnet or, in the longer term, through the use of decoherence-free subspaces, which were considered in the original QCCD proposals [14,50,51]. For scaling to large numbers of sites, scalable approaches to light delivery will likely be required, which might necessitate switching to an ion species more amenable to integrated optics [52,53,54,55]. The use of advanced standard fabrication methods such as CMOS [56,57] is facilitated, compared with rf traps, by the lack of high-voltage rf signals. Compatibility with these technologies demands an evaluation of how close to the surface ions could be operated for quantum computing and will require in-depth studies of heating; an obvious next step here is to sample electric field noise as a function of ion-electrode distance [47]. Unlike in rf traps, 3D scans of electric field noise are possible in a Penning trap because of the flexibility of ion placement afforded by the uniform magnetic field. This flexibility has advantages in many areas of ion-trap physics, for instance in placing ions at the anti-nodes of optical cavities [58] or in sampling field noise from surfaces of interest [59,60].
We therefore expect that our work will open new avenues in sensing, computation, simulation and networking, enabling ion-trap physics to break out beyond its current constraints.


How WebAssembly is changing scientific computing


In late 2021, midway through the COVID-19 pandemic, George Stagg was preparing to give exams to his mathematics and statistics students at Newcastle University, UK. Some would use laptops; others would opt for tablets or mobile phones. Not all of them would even be able to run the programming language being tested: the statistical language R. “We had no control, really, over what devices those students were using,” says Stagg.

Stagg and his colleagues set up a server so that students could log in, input their code and automatically test it. But with 150 students trying to connect at the same time, the homegrown system ground to a halt. “Things were a little shaky,” he recalls: “It was very, very slow.”

Frustrated, Stagg spent the Christmas holidays devising a solution. R code runs in a piece of software called an interpreter. Instead of having students install the interpreter on their own computers, or execute their code on a remote server, he would have the interpreter run in the students’ web browsers. To do that, Stagg used a tool that is rapidly gaining popularity in scientific computing: WebAssembly.

Code written in any of a few dozen languages, including C, C++ or Rust, can be compiled into the WebAssembly (or Wasm) instruction format, allowing it to run in a software-based environment inside a browser. No external servers are required. All modern browsers support WebAssembly, so code that works on one computer should produce the same result on any other. Best of all, no installation is needed, so scientists who are not authorized to install software — or lack the know-how or desire to do so — can use it.

WebAssembly allows developers to recycle their finely tuned code, so they don’t have to rewrite it in the language of the web: JavaScript. Google Earth, a 3D representation of Earth from Google’s parent company, Alphabet, is built on WebAssembly. So are the web version of Adobe Photoshop and the design tool Figma. Stagg, who is based in Newcastle but is now a senior software engineer at Posit, a software company in Boston, Massachusetts, solved his exam server issues by porting the R interpreter to WebAssembly in the webR package.

Daniel Ji, an undergraduate computer-science student in Niema Moshiri’s laboratory at the University of California, San Diego, used WebAssembly to build browser interfaces for many of his group’s epidemiological resources, including one that identifies evolutionary relationships between viral genomes [1]. Moshiri has used those tools to run analyses on smartphones, game systems and low-powered Chromebook laptops. “You might be able to have people run these tools without even needing a standard desktop or laptop computer,” Moshiri says. “They could actually maybe run it on some low-energy or portable device.”

That being said, porting an application to WebAssembly can be a complicated process full of trial and error — and one that’s right for only select applications.

Reusability and restrictions

Robert Aboukhalil’s journey with WebAssembly began with an application that he created in 2017 for quality control of raw DNA-sequencing data. The necessary algorithms already existed in a tool called Seqtk, but they weren’t written in JavaScript. So Aboukhalil, a software engineer at the Chan Zuckerberg Initiative in Redwood City, California, rewrote them — but his implementations were relatively slow. Retooling his application to use WebAssembly improved performance 20-fold. “It was awesome, because it gave me more features that I didn’t have to write myself. And it happened to make the whole website a lot faster.”

C and C++ code can be ported to WebAssembly using the free Emscripten compiler; Rust programmers can use ‘wasm-pack’, an add-on to Rust’s package manager and build tool, ‘cargo’. Python and R code cannot be compiled into WebAssembly, but there are WebAssembly ports of their interpreters, called Pyodide and webR, which can run scripting code in these languages.

Quarto, a publishing system that allows researchers to embed and execute R, Python and JavaScript code in documents and slide decks, is compatible with WebAssembly, too, using the quarto-webr extension (see our example at go.nature.com/4c1ex). WebAssembly can also be used in Observable computational notebooks, which have uses in data science and visualization and run JavaScript natively. There’s even a version of Jupyter, another computational-notebook platform, called JupyterLite that is built on WebAssembly.
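The quarto-webr pattern looks roughly like the following minimal sketch (the document title and R snippet are illustrative, and the extension is assumed to be installed, for example via `quarto add coatless/quarto-webr`):

````markdown
---
title: "In-browser statistics demo"
filters:
  - webr
---

The chunk below is executed in the reader's browser, not on a server:

```{webr-r}
fit <- lm(mpg ~ wt, data = mtcars)
coef(fit)
```
````

Rendering the document with `quarto render` produces a page whose R cells run on webR the first time the reader interacts with them.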

Aboukhalil has ported more than 30 common computational-biology utilities to WebAssembly. His collection of ‘recipes’ — that is, code changes — that allow the underlying code to be compiled is available at biowasm.com. “Compiling things to WebAssembly, unfortunately, isn’t straightforward,” Aboukhalil explains. “You often have to modify the original code to get around things that WebAssembly doesn’t support.”

For instance, modern operating systems can handle 64-bit numbers. WebAssembly, however, is limited to 32 bits, and can address only 2³² bytes (4 gigabytes) of memory. Furthermore, it cannot directly access a computer’s file system or its open network connections. And it’s not multithreaded; many algorithms depend on this form of parallelization, which allows different parts of a computation to be performed simultaneously. “A lot of older code won’t compile into WebAssembly, because it assumes that it can do things that can’t be done,” Stagg says.

Compounding these challenges, scientific software sits atop a tower of interconnected libraries, all of which must be ported to WebAssembly for the code to run. Jeroen Ooms, a software engineer in Utrecht, the Netherlands, has ported roughly 85% of the R-universe project’s 23,000 open-source R libraries to WebAssembly. But only about half of those actually work, he says, because some underlying libraries have not yet been converted.

Then, there’s the process of web development. Bioinformaticians don’t typically write code in JavaScript, but it is needed to create the web pages in which those tools will run. They also have to manually handle tasks such as shuttling data between the two language systems and freeing any memory they use – tasks that are handled automatically in pure JavaScript.

As a result, WebAssembly is often used to build relatively simple tools or applied to computationally intensive pieces of larger web applications. As a postdoc, bioinformatician Luiz Irber, then at the University of California, Davis, used WebAssembly to make a Rust language tool called Branchwater broadly accessible. Branchwater converts sequence data into numerical representations called hashes, which are used to search databases of microbial DNA sequences. Rather than having users install a conversion tool or upload their data to remote servers, Irber’s WebAssembly implementation allows researchers to convert their files locally.
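The hashing idea behind tools such as Branchwater can be sketched in a few lines; the following is a deliberately simplified illustration (a classic bottom-k MinHash, written from scratch in Python; Branchwater itself uses sourmash's scaled FracMinHash in Rust, and none of these function names come from that project):

```python
import random
import zlib

def kmers(seq, k=21):
    """All overlapping k-mers of a DNA sequence."""
    return (seq[i:i + k] for i in range(len(seq) - k + 1))

def sketch(seq, k=21, size=100):
    """Hash every k-mer and keep the `size` smallest hashes as a
    compact, comparable signature of the sequence."""
    hashes = {zlib.crc32(kmer.encode()) for kmer in kmers(seq, k)}
    return set(sorted(hashes)[:size])

def jaccard_estimate(a, b):
    """Rough similarity estimate between two sketches."""
    return len(a & b) / len(a | b) if a | b else 0.0

random.seed(0)
genome = "".join(random.choice("ACGT") for _ in range(2000))
mutated = genome[:1000] + genome[1000:][::-1]   # rearranged second half

print(jaccard_estimate(sketch(genome), sketch(genome)))   # identical -> 1.0
print(jaccard_estimate(sketch(genome), sketch(mutated)))  # lower: only prefix shared
```

Because a sketch is tiny compared with the raw reads, it is exactly the kind of computation that can run client-side in WebAssembly rather than requiring an upload to a server.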

Bioinformatician Aaron Lun and software engineer Jayaram Kancherla at Genentech in South San Francisco, California, used WebAssembly to implement kana, a browser-based analysis platform for single-cell RNA-sequencing data sets. The goal, Lun and Kancherla say, was to allow researchers to explore their data without a bioinformatician’s help. About 200 researchers use kana each month.

The porting process took “six months, maybe a year’s worth of weekends”, Lun says, and was complicated by the fact that they were starting from C++ libraries glued together with R code. But that was nothing compared with the challenge of crafting a smooth, friendly user experience. “I can see why web developers get paid so much,” he laughs.

Powering up

Developers who need more computing power can supercharge their tools through a related project, WebGPU, which provides access to users’ graphics cards.

Will Usher, a scientific-visualization engineer at the University of Utah in Salt Lake City, and his team used WebGPU and WebAssembly to implement a data-visualization algorithm called ‘Marching Cubes’, with which they manipulated terabyte-scale data sets in a browser [2]. Computer scientist Johanna Beyer’s team at Harvard University in Cambridge, Massachusetts, created a visualization tool for gigabyte-sized whole-slide microscopy data, using an algorithm called ‘Residency Octree’ [3]. And developers at UK firm Oxford Nanopore Technologies built Bonito, a drag-and-drop basecalling tool that translates raw signals into nucleotide sequences, for the company’s sequencing platform.

Chris Seymour, Oxford Nanopore’s vice-president of platform development, says the company’s aim was to make its tools accessible to scientists who lack the skills to install software or are barred from doing so. Installation can be “a barrier to entry for certain users”, he explains. But WebAssembly is “a zero-install solution”: “They just hit the URL, and they’re good to go.”

There are other benefits, too. Data are never transferred to external servers, alleviating privacy concerns. And because the browser isolates the environment in which WebAssembly code can be executed, it is unlikely to harm the user’s system.

Perhaps most importantly, WebAssembly allows researchers to explore software and data with minimal friction, thus enabling development of educational applications. Aboukhalil has created a series of tutorials at sandbox.bio, with which users can test-drive bioinformatics tools in an in-browser text console. Statistician Eric Nantz at pharmaceuticals company Eli Lilly in Indianapolis, Indiana, is part of a pilot project to use webR to share clinical-trial data with the US Food and Drug Administration — a process that would otherwise require each scientist to install custom computational dashboards. Using WebAssembly, he says, “will minimize, from the reviewer’s perspective, many of the steps that they had to take to get the application running on their machines”.

WebAssembly, says Moshiri, “bridges that gap that we have in bioinformatics, where bio people are the users, computer-science people are the developers, and how do we translate [between them]?”

Still, brace yourself for complications. “WebAssembly is a great technology, but it’s also a niche technology,” Aboukhalil says. “There’s a small subset of applications where it makes sense to [use it], but when it does make sense it can be very powerful. It’s just a matter of figuring out which use cases those are.”

[ad_2]

Source Article Link


Quantum computing: NVIDIA partners with Pawsey

NVIDIA partners with Pawsey Supercomputing for Quantum computing exploration

In a significant move for the field of quantum computing, NVIDIA has joined forces with Australia’s Pawsey Supercomputing Centre. This collaboration is poised to make a substantial impact on research by combining NVIDIA’s cutting-edge CUDA Quantum platform with the deployment of the powerful Grace Hopper Superchips. These advancements are expected to propel computational capabilities at the Centre’s National Supercomputing and Quantum Computing Innovation Hub to new heights.

At the heart of this partnership is NVIDIA’s CUDA Quantum platform, which is designed to facilitate hybrid quantum computing research. This approach blends classical and quantum computing to solve complex problems more efficiently. Researchers at Pawsey will leverage this platform to advance quantum algorithm development, optimize quantum device designs, and improve techniques for quantum error correction, calibration, and control.
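To see what "blends classical and quantum computing" means in practice, consider a variational loop: a quantum device evaluates an expectation value, and a classical optimizer updates the circuit parameter. The sketch below is not the actual CUDA Quantum API; a two-amplitude NumPy vector stands in for the QPU, and the gradient uses the parameter-shift rule, which needs only two extra circuit evaluations:

```python
import numpy as np

def expectation_z(theta):
    """'Quantum' half of the loop: prepare RY(theta)|0> and return <Z>.
    Here a simulated single-qubit state stands in for real hardware."""
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    pauli_z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return float(state @ pauli_z @ state)

# Classical half of the loop: gradient descent on the circuit parameter.
theta, lr = 0.3, 0.4
for _ in range(100):
    # Parameter-shift rule: the exact gradient from two shifted evaluations
    grad = (expectation_z(theta + np.pi / 2)
            - expectation_z(theta - np.pi / 2)) / 2
    theta -= lr * grad
print(expectation_z(theta))  # approaches -1: the optimizer found the |1> state
```

In a real hybrid workflow, `expectation_z` would dispatch a circuit to a QPU or GPU-accelerated simulator while the surrounding optimization loop stays on the CPU.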

NVIDIA Quantum Computing

Equally important to this initiative is the introduction of the NVIDIA Grace Hopper Superchip. This superchip, which combines the Grace CPU with the Hopper GPU, is specifically engineered for high-precision quantum simulations. These simulations are essential for enhancing our understanding of quantum systems and for developing applications that span a variety of industries.

The economic implications of this venture are substantial. The Australian national science agency, CSIRO, estimates that quantum computing could add $2.5 billion to the economy annually and create 10,000 jobs by 2040. The anticipated growth is expected to stem from quantum computing’s potential to improve areas such as astronomy, life sciences, medicine, and finance.

Researchers at Pawsey are preparing to delve into quantum machine learning, which merges quantum computing with artificial intelligence to process information in new ways. They will also simulate chemical interactions, process radio astronomy images, analyze complex financial systems, and push forward bioinformatics for medical research.

To support these ambitious projects, the NVIDIA Grace Hopper Superchip nodes will be built using NVIDIA’s MGX modular architecture. This architecture is known for its high bandwidth and performance, which are essential for tackling the intricate challenges presented by quantum computing.

The partnership also has a goal of fostering an inclusive environment by providing the Australian quantum community and international collaborators with access to the NVIDIA Grace Hopper platform. This open access is expected to spur discovery and innovation, representing a transformative step for researchers and industries alike.

The collaboration between NVIDIA and the Pawsey Supercomputing Centre is set to drive significant advancements in quantum computing research. By providing researchers with advanced tools and resources, this partnership not only strengthens Australia’s position in the global quantum field but also holds promise for the scientific and economic benefits that this emerging technology is expected to yield.

Filed Under: Technology News, Top News





Latest timeswonderful Deals

Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, timeswonderful may earn an affiliate commission. Learn about our Disclosure Policy.


IBM watsonx Korea Quantum Computing (KQC) deal sealed

Korea Quantum Computing Signs IBM watsonx Deal

IBM has teamed up with Korea Quantum Computing (KQC) in a strategic partnership aimed at advancing AI and quantum computing. This alliance is more than a handshake between two companies; it pairs IBM’s AI software and quantum computing services with KQC’s ambition to push the boundaries of technology.

“We are excited to work with KQC to deploy AI and quantum systems to drive innovation across Korean industries. With this engagement, KQC clients will have the ability to train, fine-tune, and deploy advanced AI models, using IBM watsonx and advanced AI infrastructure. Additionally, by having the opportunity to access IBM quantum systems over the cloud, today—and a next-generation quantum system in the coming years—KQC members will be able to combine the power of AI and quantum to develop new applications to address their industries’ toughest problems,” said Darío Gil, IBM Senior Vice President and Director of Research.

This collaboration includes an investment in infrastructure to support the development and deployment of generative AI. Plans for the AI-optimized infrastructure include advanced GPUs and IBM’s Artificial Intelligence Unit (AIU), managed with Red Hat OpenShift to provide a cloud-native environment. Together, the GPU system and AIU combination is being engineered to offer members state-of-the-art hardware to power AI research and business opportunities.

Quantum Computing

Mainstream quantum computing is the vision KQC is chasing, and by 2028 the company plans to bring it to life by installing an IBM Quantum System Two at its Busan site. This isn’t just about getting its hands on new hardware; it’s about weaving quantum computing into the very fabric of mainstream applications. To make this a reality, KQC is already on the move, beefing up its infrastructure with the latest GPUs and IBM’s AI Unit, all fine-tuned for the AI applications that will redefine what’s possible.

But what’s advanced technology without a solid foundation? That’s where Red Hat OpenShift comes into play. It’s the backbone that will ensure this complex infrastructure stands strong, offering the scalable cloud services that KQC needs to manage its high-tech setup. And it doesn’t stop there. KQC is also adopting Red Hat OpenShift AI for management and runtime, and exploring the frontiers of generative AI with the watsonx platform. These are the tools that will fuel the next wave of innovation and efficiency in AI.

Now, let’s talk about the ripple effect. This partnership isn’t just about KQC and IBM; it’s about sparking a fire of innovation across entire industries. Korean companies in finance, healthcare, and pharmaceuticals are joining the fray, eager to collaborate on research that leverages AI and quantum computing. The goal? To craft new applications that will catapult these industries into a new era of technological prowess.

The KQC-IBM partnership is more than a milestone for Korea’s tech landscape; it signals a new dawn in the application of AI and quantum computing. With the integration of Red Hat OpenShift and the watsonx platform, KQC is not just boosting its capabilities; it’s setting the stage for groundbreaking research and innovation. This collaboration is a testament to the power of partnership and a shared commitment to shaping the future of industries with the finest technology at our fingertips.

Filed Under: Technology News, Top News







Quantum Computing Hype vs Reality explained


Quantum computing is a term that has been generating a lot of excitement in the tech world. This cutting-edge field is different from the computing most of us are familiar with, which uses bits to process information. Quantum computers use something called qubits, which allow them to perform complex calculations much faster than current computers. While quantum computing is still in its early stages and not yet part of our everyday lives, it’s showing great potential for specialized uses.
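The difference between bits and qubits can be made concrete in a few lines of NumPy: a qubit is a unit vector of two complex amplitudes, gates are unitary matrices, and measurement probabilities come from the squared amplitudes. This is a toy simulation, not code for real quantum hardware:

```python
import numpy as np

# A classical bit is 0 or 1. A qubit is a unit vector of two amplitudes:
# a|0> + b|1>, with |a|^2 + |b|^2 = 1.
ket0 = np.array([1.0, 0.0])

# Gates are unitary matrices; the Hadamard gate creates a superposition.
hadamard = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
psi = hadamard @ ket0

# Born rule: measurement outcome probabilities are the squared amplitudes.
probs = np.abs(psi) ** 2
print(probs)  # [0.5 0.5]: equal chance of measuring 0 or 1

# An n-qubit state needs 2**n amplitudes, which is why classically simulating
# even modest quantum systems becomes expensive so quickly.
print(2 ** 30)  # 30 qubits already require over a billion amplitudes
```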

One of the leaders in this field is Google Quantum AI, which has developed one of the most sophisticated quantum processors so far. The team’s work is a testament to its researchers’ commitment to advancing the field. However, quantum computing is still largely in the research phase, and it will likely be several years before it becomes more mainstream.

Experts in the industry believe that it could take a decade or more before we have quantum computers that are fully functional and error-free, capable of handling practical tasks. This timeline is similar to the development of classical computers, which gradually became more powerful and useful over time.

Google Research Quantum Computing

Google Research explains the hype and the reality of this cutting-edge technology, which is still under development. As quantum computing continues to advance, we’re starting to see more applications for it. Quantum systems are expected to enhance, rather than replace, traditional computers, increasing our overall computing capabilities.


The potential for quantum computing to transform various industries is immense. It could greatly improve research in fusion energy by making simulations more efficient and reducing the amount of computation needed. In healthcare, it could speed up the process of modeling new drugs. Quantum computing might also lead to better battery technology by optimizing electrochemical simulations, which could result in more effective energy storage solutions and help produce more environmentally friendly fertilizers.

Hype vs Reality

History has shown us that new technologies often lead to applications that we didn’t anticipate. As quantum computing technology continues to evolve, its full potential will become clearer. Quantum computing represents a significant shift in computational capabilities, promising to solve problems intractable for classical computers. However, the field is in its nascent stages, and there’s often a gap between public perception (hype) and the current state of technology (reality). Here’s a comprehensive explanation, distinguishing between the hype and reality of quantum computing:

Quantum Computer Hype:

  • Instant Problem Solving: A common misconception is that quantum computers can instantly solve extremely complex problems, like breaking encryption or solving intricate scientific issues, which traditional computers cannot.
  • Universal Application: There’s a belief that quantum computers will replace classical computers for all tasks, offering superior performance in every computing aspect.
  • Imminent Revolution: The public often perceives that quantum computing is just around the corner, ready to revolutionize industries in the immediate future.
  • Unlimited Capabilities: The hype often implies that there are no theoretical or practical limits to what quantum computing can achieve.

Quantum Computing Reality:

  • Specialized Problem Solving: Quantum computers excel at specific types of problems, such as factorization (useful in cryptography) or simulation of quantum systems. They are not universally superior for all computational tasks.
  • Niche Applications: Currently, quantum computers are suited for particular niches where they can leverage quantum mechanics to outperform classical computers. This includes areas like cryptography, materials science, and complex system modeling.
  • Developmental Stage: As of now, quantum computing is in a developmental phase. Key challenges like error correction, coherence time, and qubit scalability need to be addressed before widespread practical application.
  • Physical and Theoretical Limits: Quantum computers face significant physical and engineering challenges. These include maintaining qubit stability (decoherence) and managing error rates, which grow with the number of qubits and operations.
  • Quantum Supremacy vs. Quantum Advantage: While quantum supremacy (a quantum computer solving a problem faster than a classical computer could, regardless of practical utility) has been claimed, the more crucial milestone of quantum advantage (practical and significant computational improvements in real-world problems) is still a work in progress.
  • Hybrid Systems: The foreseeable future likely involves hybrid systems where quantum and classical computers work in tandem, leveraging the strengths of each for different components of complex problems.
  • Investment and Research: Significant investment and research are ongoing, with breakthroughs happening at a steady pace. However, it’s a field marked by incremental progress rather than sudden leaps.
  • Ethical and Security Implications: The rise of quantum computing brings ethical considerations, particularly in cybersecurity (e.g., breaking current encryption methods) and data privacy. It necessitates the development of new cryptographic methods (quantum cryptography).

The excitement around quantum computing is not without merit. Each new discovery moves us closer to what once seemed like the stuff of science fiction. The progress made by Google Quantum AI and others in this field is a strong sign of the transformative power of quantum computing.

Quantum computing is still in its infancy, but the advancements made by Google and other pioneers are steadily paving the way for a future that includes quantum computation. Although the current state of quantum computing may not live up to the high expectations some have for it, the potential applications and ongoing research suggest that it could indeed live up to its promise in the years to come.

Filed Under: Technology News, Top News







High-Speed computing with SOT-MRAM array chip from ITRI & TSMC research

High-Speed computing with SOT-MRAM array chip

The semiconductor industry is on the brink of a significant advancement with the latest collaboration between the Industrial Technology Research Institute (ITRI) and Taiwan Semiconductor Manufacturing Company (TSMC). These two powerhouses have joined forces to create a new type of memory chip that promises to make high-speed computing even faster and more efficient. The spin-orbit-torque magnetic random-access memory (SOT-MRAM) array chip they’ve developed is a marvel of modern engineering, designed to use a fraction of the power of current memory chips while delivering lightning-fast performance.

Imagine a memory chip that operates with just 1% of the power required by the chips we use today. That’s exactly what ITRI and TSMC have achieved with their SOT-MRAM chip. Despite this dramatic reduction in power usage, the chip’s speed is not compromised. It boasts access speeds of up to 10 nanoseconds, which is incredibly fast and exactly what’s needed for the demanding applications of today’s high-performance computing, artificial intelligence, and automotive industries.

The unveiling of the SOT-MRAM array chip at the IEEE International Electron Devices Meeting (IEDM 2023) was a moment of pride for the developers. This event is where the brightest minds in the industry gather to discuss and explore the latest technological innovations. The SOT-MRAM chip stood out as a highlight, drawing attention to its potential to become a key component in future memory solutions.

High-Speed Computing with SOT-MRAM

The implications of this technology are vast. In high-performance computing, where speed and efficiency are paramount, SOT-MRAM could be a game-changer. Artificial intelligence systems, which require rapid data processing and energy efficiency, stand to benefit greatly from this technology. The automotive industry, too, with its increasing reliance on advanced electronics, could see a significant boost in performance and reliability from automotive chips that incorporate SOT-MRAM technology.

As we move into an era where artificial intelligence, 5G, and the Artificial Intelligence of Things (AIoT) are becoming more prevalent, the demand for advanced memory solutions is growing. These technologies need to handle large volumes of data while keeping energy consumption to a minimum. The SOT-MRAM chip is a promising solution to this challenge, offering powerful performance without the high energy costs.

This technological breakthrough also reinforces Taiwan’s status as a leader in the semiconductor industry. The country is already recognized as a global hub for semiconductor manufacturing, and the successful partnership between ITRI and TSMC is a testament to Taiwan’s commitment to pushing the boundaries of innovation and maintaining its competitive edge.

The SOT-MRAM array chip is a testament to the power of collaboration and innovation in the field of memory technology. It’s set to make a significant impact on high-speed computing across various sectors. With its advanced architecture, reduced power consumption, and rapid operation, the chip is a shining example of what can be achieved when industry leaders come together to tackle the challenges of modern computing. As the world continues to demand faster and more efficient technology, the SOT-MRAM chip is poised to play a crucial role in meeting those needs.


Filed Under: Technology News, Top News







Artificial Intelligence vs Quantum Computing

Artificial Intelligence vs Quantum Computers

In the ever-evolving world of technology, two titans are making strides that could transform how we tackle some of the most challenging issues facing our society, including the pressing matter of climate change. Artificial intelligence (AI) and quantum computers stand at the forefront of this technological revolution, each with its own set of strengths and weaknesses. But in the contest between artificial intelligence and quantum computing, will the two technologies combine, or will one prove to be the more cost-effective way to solve the problems facing planet Earth?

Quantum computing is a fascinating concept that has intrigued many with its potential to surpass traditional computing methods. It operates on the principles of quantum mechanics, utilizing qubits that can represent numerous states simultaneously, which could allow it to solve certain problems at speeds we’ve never seen before. However, the technology is still in its infancy, and it faces significant hurdles, such as error correction, which can undermine the very speed it promises.

Meanwhile, AI is making waves by enhancing the capabilities of classical computers. These advancements are enabling computers to become smarter and more efficient, capable of handling complex tasks with relative ease. As AI continues to evolve, it pushes the threshold at which quantum computing would become superior even further into the future, making classical computing a tough competitor to beat.

Artificial Intelligence vs Quantum Computing


Despite the challenges that quantum computing currently faces, its theoretical potential is immense. The unique abilities of qubits might one day allow quantum computers to process information in ways that classical computers cannot, offering solutions to problems that are currently unsolvable. However, at this point in time, AI-driven classical computing is the more viable option for solving real-world problems.

The progress in AI is remarkable, with algorithms becoming increasingly sophisticated. These advancements are empowering classical computers to learn and adapt, solving problems with an efficiency that is difficult to surpass. This rapid growth in AI technology presents a significant hurdle for quantum computing to demonstrate its worth.

Quantum computers

For those interested in the finer details of quantum computing, there are educational resources available, such as courses on brilliant.org, that provide a deeper understanding of the subject. These courses explain complex concepts like interference, superpositions, and entanglement in a way that lays the foundation for a greater appreciation of what quantum technology could one day achieve.

While quantum computing offers an exciting look into the future of problem-solving, its practicality in the present day remains uncertain. AI, on the other hand, continues to expand the capabilities of classical computers, ensuring their place as a vital component in our current technological arsenal. The race between AI and quantum computing is far from over, but for now, AI is leading the way with its practicality and efficiency.

Future technologies

As we look to the future, it’s clear that both AI and quantum computing will play critical roles in advancing our technological capabilities. The question is not whether one will ultimately prove more valuable than the other, but how they will work together to address the complex challenges we face. The potential for AI to enhance quantum computing, and vice versa, suggests that the most effective solutions may come from a synergy of these two powerful technologies.

The journey toward fully realizing the capabilities of quantum computing is a long one, and it’s fraught with technical obstacles that researchers are diligently working to overcome. The quest for stable qubits, effective error correction methods, and scalable quantum systems is ongoing, and each breakthrough brings us closer to harnessing the true power of quantum computing.
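The intuition behind error correction is easiest to see in the classical three-bit repetition code: copy the bit, let noise act, take a majority vote. Real quantum codes are far subtler, since qubits cannot simply be copied or measured directly, but this plain-Python sketch (with a hypothetical error rate) shows why redundancy helps, turning a physical error rate p into a logical rate of roughly 3p²:

```python
import random

def encode(bit):
    return [bit, bit, bit]                 # one logical bit -> three copies

def apply_noise(copies, p):
    return [b ^ int(random.random() < p) for b in copies]  # flip each w.p. p

def decode(copies):
    return int(sum(copies) >= 2)           # majority vote

random.seed(0)
p = 0.1                                    # hypothetical physical error rate
trials = 100_000
failures = sum(decode(apply_noise(encode(0), p)) for _ in range(trials))
# Two or more flips are needed to fool the vote, so the logical error
# rate is about 3*p**2*(1-p) + p**3 = 0.028, well below the physical 0.1.
print(failures / trials)
```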

AI algorithms

In the meantime, AI is not standing still. It’s being integrated into various industries, revolutionizing fields such as healthcare, finance, and transportation. AI algorithms are becoming more autonomous, learning from data in ways that mimic human cognition, and in some cases, even surpassing it.

The interplay between AI and quantum computing is a testament to the incredible ingenuity of scientists and engineers who are pushing the boundaries of what’s possible. As we continue to explore these technologies, we can expect to see a landscape of problem-solving that is more sophisticated, more efficient, and more capable of addressing the needs of a rapidly changing world.

Ultimately, the future of problem-solving lies in the hands of these two technological giants. Whether it’s through the sheer computational might of quantum computing or the intelligent adaptability of AI, the solutions to some of our most pressing problems may be closer than we think. As we stand on the cusp of these advancements in artificial intelligence and quantum computing, it’s an exciting time to be part of the journey toward a smarter, more capable future.

Filed Under: Guides, Top News







Advancing Quantum Classical Computing with CUDA Quantum 0.5


Quantum-classical computing applications are rapidly evolving, with the CUDA Quantum platform playing an instrumental role in this development. The open-source platform is designed to facilitate the building of quantum-classical computing applications, compatible with quantum processor units (QPUs), GPUs, and CPUs. The latest release, CUDA Quantum 0.5, introduces a host of new features and improvements, making it a crucial tool in the realm of heterogeneous computing.

CUDA Quantum accelerates workflows in quantum simulation, quantum machine learning, and quantum chemistry by harnessing the power of GPUs. This acceleration is essential as it allows for more efficient and faster computations, enabling researchers and developers to solve complex problems more quickly. With the latest release, CUDA Quantum 0.5, the platform has broadened its scope, introducing more QPUs backends, more simulators, and other enhancements to streamline quantum-classical computing applications development.

CUDA Quantum 0.5

One of the key improvements is the platform’s support for running adaptive quantum kernels, a specification from the Quantum Integrated Runtime (QIR) alliance. This is a significant step towards integrated quantum-classical programming, a concept that combines classical and quantum computing principles to solve complex problems more efficiently.

CUDA Quantum 0.5 also introduces new kernels for quantum chemistry simulations, including Fermionic and Givens rotation and fermionic SWAP kernels. These kernels are instrumental in performing intricate calculations and simulations in the field of quantum chemistry. Furthermore, the platform now supports exponentials of Pauli matrices, which are useful for quantum simulations of physical systems and for developing quantum algorithms for optimization problems.
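What "exponentials of Pauli matrices" means can be shown with NumPy (an illustration of the mathematics, not the CUDA Quantum API): because every Pauli matrix P satisfies P² = I, the series for exp(-iθ/2·P) collapses to cos(θ/2)·I - i·sin(θ/2)·P, which for P = X is exactly the single-qubit RX gate:

```python
import numpy as np

pauli_x = np.array([[0, 1], [1, 0]], dtype=complex)
identity = np.eye(2, dtype=complex)

def pauli_exp(theta, pauli):
    """exp(-i*theta/2 * P). Since P @ P = I for any Pauli matrix, the
    power series collapses to cos(theta/2)*I - i*sin(theta/2)*P."""
    return np.cos(theta / 2) * identity - 1j * np.sin(theta / 2) * pauli

theta = 0.7
rx = pauli_exp(theta, pauli_x)  # identical to the single-qubit RX(theta) gate

# Sanity check: the result is unitary, as any quantum gate must be.
print(np.allclose(rx @ rx.conj().T, identity))  # True
```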

Quantum Computers

In terms of data handling, CUDA Quantum 0.5 has improved its support for std::vector and (C style) arrays. This enhanced support allows for more flexible and efficient data management, crucial for handling large data sets in quantum computing applications. The platform also now supports execution of for-and while-loops of known lengths on quantum hardware backends, a feature that enhances the efficiency of loop execution in quantum algorithms.

The new release of CUDA Quantum also expands its compatibility with different quantum hardware backends. IQM and Oxford Quantum Circuits (OQC) quantum computers are now supported as QPU backends in CUDA Quantum, joining the already supported quantum computers from Quantinuum and IonQ. This wider range of supported hardware opens up more possibilities for developers and researchers to run their quantum algorithms on different quantum hardware platforms.

Getting started with Quantum Classical Computing

Finally, CUDA Quantum 0.5 has also made significant strides in the area of quantum simulators. The platform has improved its tensor network-based simulators, which are suitable for large-scale simulation of certain classes of quantum circuits. Furthermore, a matrix product state (MPS) simulator has been added to CUDA Quantum. MPS simulators can handle a large number of qubits and more gate depth for certain classes of quantum circuits on a relatively small memory footprint, making them a valuable tool for quantum computing simulations.

The latest release of CUDA Quantum, with its host of new features and improvements, is a significant milestone in the development of quantum-classical computing applications. By providing a platform that supports a variety of quantum hardware, offers advanced kernels for quantum simulations, and improves data handling and simulation capabilities, CUDA Quantum 0.5 is paving the way for the future of quantum-classical computing.

If you would like to get started with CUDA Quantum, NVIDIA has created an introductory guide that walks you step by step through Python and C++ examples, providing a quick learning path to the platform’s capabilities.

Filed Under: Technology News, Top News







Simply NUC Bloodhound Intel mini PC for IoT and Edge computing

Simply NUC Bloodhound mini PC

Simply NUC’s Bloodhound is a compact yet powerful computing solution built for IoT and edge workloads. This Intel-based mini PC is a robust, all-in-one system that packs a punch in a small package. It is designed to meet the demanding needs of IoT and edge computing, making it an ideal choice for a variety of industrial and commercial applications.

What is Edge computing?

Edge computing is a technology that processes data near the location where it’s being collected, instead of sending it far away to a central data center. This approach helps to make things work faster and more efficiently because it reduces the amount of data that needs to travel over long distances. It’s like having a small computer right where the data is, which can quickly analyze and use the information without needing to send it elsewhere first. This is particularly useful in situations where speed is important, like for smart devices in homes, self-driving cars, or local servers in a factory.
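The pattern is easy to sketch in code: instead of streaming every raw reading to a data center, the edge device keeps the raw data local and ships only a compact summary upstream. The Python below is a hypothetical illustration; the readings, threshold, and report fields are invented for the example:

```python
# Edge pattern in miniature: keep raw readings local, ship only a summary.
def summarize(readings, threshold):
    """Aggregate a batch of sensor readings into a compact report."""
    anomalies = [r for r in readings if r > threshold]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "anomalies": anomalies,  # only the interesting points travel upstream
    }

# Hypothetical temperature samples collected on the device itself
readings = [21.0, 21.3, 20.9, 35.2, 21.1]
report = summarize(readings, threshold=30.0)
print(report)  # one small dict crosses the network instead of every sample
```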

Under the hood, the Bloodhound is powered by an Intel Celeron N5105 CPU, a quad-core processor that delivers reliable performance for most computing tasks. Alongside this, it supports up to 32GB of RAM, providing ample memory for multitasking and running complex applications. This combination of CPU and RAM ensures that the Bloodhound can handle demanding computing tasks with ease.

Storage is another area where the Bloodhound shines. It offers up to 8TB of storage, a massive amount for a device of its size. This vast storage capacity allows for the storage of large amounts of data, making it ideal for applications that require high-volume data processing, such as video surveillance, data analytics, and machine learning.


“Bloodhound mini PC from Simply NUC is the next key piece of your IT solution, from network security and redundancy to edge computing and analysis, Bloodhound can do it all. Loaded with an Intel Celeron N5105 CPU, and paired with up to 16GB of RAM to enable faster operations, and up to 8TB of storage for bulk data or storing local CCTV footage, Bloodhound has all your rugged computing needs covered.”

Rugged Design

The Intel-powered Bloodhound mini PC is not just powerful; it is also built to last. It boasts a rugged, fanless, IP53-rated design, making it suitable for demanding environments such as warehouses and outdoor installations. The fanless design eliminates the risk of fan failure, while the IP53 rating ensures the device is protected against dust and water spray. Furthermore, the Bloodhound is tested to operate 24/7 and can withstand temperatures from 0°C to 60°C, further attesting to its durability and reliability.

Connectivity

Network security, redundancy, and edge computing capabilities are key features of the Bloodhound. It comes with three 2.5Gb LAN ports, providing the flexibility to connect multiple IP devices and additional networking capabilities. The primary LAN port includes built-in PoE+ (Power over Ethernet), reducing the need for extensive cabling and simplifying installation.

VESA Mount

The Bloodhound mini PC’s slim form factor and included VESA mount make it easy to integrate into hard-to-reach places. Despite its compact size, it can serve a variety of rugged deployments, such as an edge firewall, an edge server for IP cameras, or an IoT gateway. This versatility makes the Bloodhound a valuable addition to any IT infrastructure, offering a range of solutions in a single, compact device.

The Simply NUC Bloodhound Intel mini PC for IoT and Edge is a powerful, durable, and versatile device that is well-suited to a wide range of applications. Its robust performance, extensive storage, rugged design, and comprehensive networking capabilities make it a compelling choice for businesses seeking a compact but powerful computing solution. For more details and full specifications jump over to the Simply NUC official website.

Filed Under: Hardware, Top News




