Categories
Active Components, Electronic Components, Semiconductor, Technology

Thermal management of semiconductors

Too hot to handle

Every electronic device or circuit creates heat when in use, and it’s important to manage this. If the thermal output isn’t carefully controlled, it can end up damaging, or even destroying, the circuit.

This is especially an issue in the area of power electronics, where circuits reaching high temperatures are inevitable.

Passive dissipation through the device package alone can only do so much. Heat sinks can be added to a circuit to dissipate the heat safely and efficiently, and fans or air- and water-cooling systems can be used as well.

Feelin’ hot, hot, hot!

Thermistors can be used to reliably track component temperatures and, when set up correctly, to trigger a cooling device at a designated temperature.

When it comes to choosing a thermistor, there is a choice between negative temperature coefficient (NTC) thermistors and positive temperature coefficient (PTC) thermistors. For over-temperature protection, PTCs are usually the better fit, since their resistance rises sharply as the temperature does.

Thermistors can be connected in series to monitor several potential hotspots simultaneously. If any hotspot reaches or exceeds the specified temperature, the chain switches into a high-ohmic state, which can be used to trip a protection circuit.
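To make that concrete, here is a minimal sketch of the idea in Python. It estimates a PTC thermistor’s resistance from a voltage-divider reading and switches a fan on above a trip point. The supply voltage, resistor values, trip threshold and the ADC/fan helpers are hypothetical placeholders, not values from any particular design.

# Minimal sketch: estimate a PTC thermistor's resistance from a voltage-divider
# reading and switch a cooling fan when it passes a trip threshold.
V_SUPPLY = 3.3        # divider supply voltage (V) - illustrative
R_FIXED = 10_000.0    # fixed resistor in series with the PTC (ohms) - illustrative
ADC_MAX = 4095        # full-scale count of a 12-bit ADC
R_TRIP = 30_000.0     # PTC resistance taken to mean "too hot" - illustrative

def set_fan(on: bool) -> None:
    # Stand-in for real GPIO or fan-controller code.
    print("fan on" if on else "fan off")

def ptc_resistance(adc_count: int) -> float:
    # Thermistor on the low side of the divider: Vout = Vs * Rptc / (Rptc + Rfixed).
    v_out = V_SUPPLY * adc_count / ADC_MAX
    return R_FIXED * v_out / max(V_SUPPLY - v_out, 1e-6)

def check_hotspot(adc_count: int) -> None:
    set_fan(ptc_resistance(adc_count) >= R_TRIP)

check_hotspot(3100)   # example reading from a hypothetical ADC channel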

I got the power!

Power electronics can also suffer mechanical damage, because different components have different coefficients of thermal expansion (CTE). If stacked components expand at different rates, the solder joints between them can be damaged.

After enough temperature changes, caused by thermal cycling, the degradation starts to become visible.

If power is applied only in short bursts, the thermal damage tends to appear in the wiring instead. The wire expands and contracts with temperature, and since both ends of the wire are fixed in place, this will eventually cause the bonds to fail.

The heat is on

So we’ve established that temperature changes can cause some pretty severe damage, but how do we stop them? Well, you can’t really, but you can use components like heat sinks to dissipate the heat more efficiently.

Heat sinks work by drawing heat away from critical components and spreading it across a larger surface area. They usually consist of many strips of metal, called fins, which help to distribute the heat. Some even use a fan or a cooling fluid to cool the components more quickly.

The disadvantage of heat sinks is the amount of space they need. If you are trying to keep a circuit small, adding a heat sink will compromise this, so identify the temperature limits of your devices and choose the size of heat sink accordingly.

Most manufacturers publish the temperature limits of their devices in the datasheet, so matching them to a suitable heat sink should be straightforward.
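As a rough guide to what “choosing the size accordingly” means in practice, the sketch below works through the standard thermal-resistance chain, Tj = Ta + P x (Rth junction-case + Rth case-sink + Rth sink-ambient). The example figures are purely illustrative; in a real design they come from the device datasheet.

# How small the heat sink's thermal resistance (sink-to-ambient, in C/W) must be
# to keep the junction below its limit. Example numbers are illustrative only.
def max_sink_resistance(tj_max, t_ambient, power, rth_jc, rth_cs):
    return (tj_max - t_ambient) / power - rth_jc - rth_cs

# e.g. 20 W dissipated, Tj(max) of 125 C, 40 C ambient,
# 1.0 C/W junction-to-case and 0.5 C/W case-to-sink (thermal pad):
print(max_sink_resistance(125, 40, 20, 1.0, 0.5))   # 2.75 -> pick a sink under ~2.75 C/W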

Hot ‘n’ cold

When putting together a circuit or device, the temperature limits should be identified, and measures put in place to avoid unnecessary damage.

Heat sinks may not be the best choice for every design, so examine your options carefully; fan-based and liquid-based cooling systems are alternatives.

Cyclops Electronics can supply both electronic components and the heat sinks to protect them. If you’re looking for everyday or obsolete components, contact Cyclops today and see what we can do for you.

Categories
Active Components, Electronic Components, Passive Components, Semiconductor

Superconductivity

Superconductivity is the complete absence of electrical resistance in certain materials below a specific low temperature. As a starting point this is pretty vague, so let’s define it a bit more clearly.

The benefit of a superconductor is that it can sustain a current indefinitely, without the drawback of resistance. This means it won’t lose any energy over time, as long as the material stays in its superconducting state.

Uses

Superconductors are used in some magnetic devices, like medical imaging devices and energy-storage systems. They can also be used in motors, generators and transformers, or devices for measuring magnetic fields, voltages, or currents.

Their low power dissipation, high-speed operation and high sensitivity make superconductors an attractive prospect. However, because of the extremely low temperatures required to keep the material in a superconducting state, they aren’t widely used.

Effect of temperature

Most conventional superconductors have transition temperatures of around -253⁰C (20 Kelvin) or below. High-temperature superconductors also exist and have transition temperatures of around -193⁰C (80K).

This so-called transition temperature is not easily achieved under normal circumstances, which is why you don’t hear about superconductors that often. Currently, superconductors are mostly used in industrial and scientific settings, where they can be kept at low temperatures more efficiently.

Type I and Type II

You can sort superconductors into two types depending on their magnetic behaviour. Type I materials remain superconducting only until a single critical magnetic field is reached, at which point they abruptly stop superconducting.

Type II superconducting materials have two critical magnetic fields. Above the first critical field the superconductor moves into a ‘mixed state’: magnetic flux penetrates parts of the material, which revert to normal conduction, while the rest of the material continues to superconduct. When the second critical magnetic field is reached, the entire material reverts to regular conducting behaviour.

This mixed state of type II superconductors has made it possible to develop magnets for use in high magnetic fields, like in particle accelerators.
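As a simple illustration of the difference, this short sketch classifies a material’s state from the applied magnetic field and its critical fields. The field values in the example are invented purely for illustration.

# Type I: one critical field Bc. Type II: mixed state between Bc1 and Bc2.
def type_one_state(b, bc):
    return "superconducting" if b < bc else "normal"

def type_two_state(b, bc1, bc2):
    if b < bc1:
        return "superconducting"   # full Meissner state, flux excluded
    if b < bc2:
        return "mixed"             # flux penetrates in places, the rest still superconducts
    return "normal"

print(type_two_state(5.0, 0.02, 10.0))   # hypothetical fields in tesla -> "mixed"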

The materials

There are 27 metallic elements that are superconductors in their usual crystallographic forms at low temperatures and at normal atmospheric pressure. These include well-known materials such as aluminium, tin and lead.

Another 11 elements that are metals, semimetals or semiconductors can also become superconductors at low temperatures, but only under high pressure. There are also elements that are not usually superconducting but can be made so if prepared in a highly disordered form.

Categories
Active Components, Electronic Components, Technology

Electronic Components of a hearing aid

Hearing aids are essential devices that help those with hearing loss to experience sound. They come in analogue and digital formats, both of which use electronic components to amplify sound for the user.

Main components

Both types of hearing aid, analogue and digital, contain semiconductors that convert sound waves into electrical signals and then back into amplified sound.

The main components of a hearing aid are the battery, microphone, amplifier, receiver, and digital signal processor or mini-chip.

The battery, unsurprisingly, is the power source of the device. Depending on the type of hearing aid it can be a disposable one or a rechargeable one.

The microphone can be directional, meaning it only picks up sound from a certain direction, usually in front of the user. The alternative, an omnidirectional microphone, can detect sound coming from all angles.

The amplifier receives signals from the microphone and amplifies them to different levels depending on the user’s hearing.

The receiver gets signals from the amplifier and converts them back into sound signals.

The digital signal processor, also called a mini-chip, is what’s responsible for all of the processes within the hearing aid. The heart of your hearing, if you will.

Chip shortages

As with all industries, hearing aids were affected by the chip shortages caused by the pandemic and increased demand for chips.

US manufacturers were also negatively impacted by Storm Ida in 2021, and other manufacturers globally reported that orders would take longer to fulfil than in previous years.

However, despite the obstacles the hearing aid industry faced thanks to covid, it has done a remarkable job of recovering compared to some industries, which are still struggling to meet demand even now.

Digital hearing aid advantages

As technology has improved over the years, traditional analogue hearing aids have slowly been replaced by digital versions. Analogue devices convert sound waves into electrical signals, which are then amplified and transmitted to the user. This type of hearing aid, while great for its time, did not give users the most authentic hearing experience.

The newer digital hearing aid instead converts the signals into numerical codes before amplifying them to different levels and pitches, depending on the information attached to the numerical signals.
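As a rough illustration of that idea, the sketch below applies different gains to different frequency bands of a signal using NumPy. The band edges and gain values are invented for the example and are not taken from any real hearing aid.

import numpy as np

def amplify_by_band(samples, sample_rate, bands):
    # bands: list of (low_hz, high_hz, gain) applied in the frequency domain.
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    for low, high, gain in bands:
        spectrum[(freqs >= low) & (freqs < high)] *= gain
    return np.fft.irfft(spectrum, n=len(samples))

rate = 16_000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 440 * t) + 0.2 * np.sin(2 * np.pi * 3000 * t)
# Boost the higher band more than the lower one, as a user's hearing profile might require.
out = amplify_by_band(tone, rate, [(0, 1000, 1.5), (1000, 8000, 4.0)])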

Digital aids can also be adjusted more closely to a user’s needs, because the digital processing offers more flexibility. Many offer Bluetooth connectivity too, allowing them to connect to phones and TVs. There is, however, an additional cost that comes with the increased complexity and range of abilities.

Categories
Electronic Components, Future, Semiconductor, Technology, Transistors

The Angstrom Era of Electronics

Angstrom is a unit of measurement that is most commonly used for extremely small particles or atoms in the fields of physics and chemistry.

However, nanometres are becoming almost too big for new electronic components, and in the not-so-distant future the angstrom may be used to measure the size of semiconductors.

It could happen soon

Some large firms have already announced their future plans to move to angstrom within the next decade, which is a huge step in terms of technological advancement.

The most advanced components at the moment are already below 10nm in size, with an average chip being around 14nm. Seeing as 1nm is equal to 10Å it is the logical next step to move to the angstrom.

The size of an atom

The unit (Å) is used to measure atoms and ionic radii. 1Å is roughly the diameter of a single atom. Certain elements, namely chlorine, sulfur and phosphorus, have covalent radii of about 1Å, while hydrogen’s atomic radius is approximately 0.5Å.

As such, angstrom is mostly used in solid-state physics, chemistry and crystallography.

The origin of the Angstrom

The name of the unit came courtesy of Anders Jonas Ångström, who used the measurement in 1868 to chart the wavelengths of electromagnetic radiation in sunlight.

Using this new unit meant that the wavelengths of light could be measured without the decimals or fractions, and the chart was used by people in the fields of solar physics and atomic spectroscopy after its creation.

Will silicon survive?

It’s been quite a while since Moore’s Law held reliably. The observation, made by Gordon Moore in 1965, was that the number of transistors in an integrated circuit (IC) would roughly double every two years while the cost per transistor fell. The principle held for decades, but as components approach atomic dimensions it has become ever harder to keep up the pace.

Silicon, the material used for most semiconductors, has an atomic diameter of roughly 0.2nm (2Å), and current transistors are built on processes named around 14nm. Even as some firms promise to push the capabilities of silicon semiconductors further, you have to wonder whether the material will soon need a successor.
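As a back-of-the-envelope illustration (bearing in mind that modern node names like “14nm” are labels rather than literal feature sizes), here is roughly how many silicon atoms would fit across features of a few different widths:

# Rough arithmetic only: silicon's atomic diameter is about twice its
# covalent radius of ~0.11 nm.
SI_ATOM_DIAMETER_NM = 0.22

for width_nm in (14, 5, 1.4):   # 1.4 nm is 14 angstrom
    print(width_nm, "nm is roughly", round(width_nm / SI_ATOM_DIAMETER_NM), "atoms across")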

Graphene, silicon carbide and gallium nitride have all been thrown into the ring as potential replacements for silicon, but none are developed enough at this stage for production to be widespread. That said, all three of these and several others have received research and development funding in recent years.

How it all measures up

The conversion of nanometres to angstrom may not seem noteworthy in itself, but the change and advancement it signals is phenomenal. It’s exciting to think about what kind of technology could be developed with electronics this size. So, let’s size up the angstrom era and see what the future holds.

Categories
Electronic Components, Future, Semiconductor

What are GaN and SiC?

Silicon will eventually go out of fashion, and companies are currently investing heavily in finding its successor. Gallium Nitride (GaN) and Silicon Carbide (SiC) are two semiconductors marked as possible replacements.

Compound semiconductors

Both materials contain more than one element, so they are given the name compound semiconductors. They are also both wide bandgap semiconductors, which means they are more durable and capable of higher performance than their predecessor Silicon (Si).

Could they replace Silicon?

SiC and GaN both have some properties that are superior to Si, and they’re more durable when it comes to higher voltages.

The bandgap of GaN is around 3.4eV and that of SiC around 3.3eV, compared to Si’s bandgap of only 1.1eV. This gives the two compounds an advantage at higher voltages, though it is less of a benefit in low-voltage applications.

Both GaN and SiC also have a breakdown field strength roughly ten times greater than that of Si, the current semiconductor staple. The electron mobilities of the two materials, however, differ markedly both from each other and from silicon.
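To put some rough numbers on that comparison, the snippet below prints commonly quoted approximate figures. Exact values vary by source and by SiC polytype, so treat them as ballpark rather than definitive.

# Approximate room-temperature properties, for comparison only.
#            bandgap (eV), breakdown field (MV/cm), electron mobility (cm^2/Vs)
properties = {
    "Si":     (1.1, 0.3, 1400),
    "4H-SiC": (3.3, 2.8, 900),
    "GaN":    (3.4, 3.3, 1500),
}

for name, (eg, e_br, mu) in properties.items():
    print(f"{name:7s} Eg = {eg} eV, breakdown ~ {e_br} MV/cm, mobility ~ {mu} cm2/Vs")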

Main advantages of GaN

GaN can be grown by depositing gaseous raw materials onto a substrate, and one common substrate is silicon. Because the equipment and processes for handling silicon already exist, this avoids much of the need for new specialist manufacturing equipment.

The electron mobility of GaN is higher than that of both SiC and Si, and GaN-on-silicon devices can be manufactured at relatively low cost, so it yields transistors and integrated circuits with faster switching speeds and lower resistance.

There is always a downside, though, and GaN’s is the low thermal conductivity. GaN can only reach around 60% of SiC’s thermal conductivity which, although still excellent, could end up being a problem for designers.

Is SiC better?

As we’ve just mentioned, SiC has a higher thermal conductivity than its counterpart, which means it copes better than GaN at higher temperatures.

SiC is also more versatile than GaN in what type of semiconductor it can become. SiC can be doped with phosphorus or nitrogen to make an N-type semiconductor, or with aluminium to make a P-type semiconductor.

SiC is considered further ahead in terms of material quality, and SiC wafers have been produced in larger sizes than GaN. SiC-on-SiC wafers also beat GaN-on-SiC wafers on cost.

SiC is mainly used for Schottky diodes and FET or MOSFET transistors to make converters, inverters, power supplies, battery chargers and motor control systems.

Categories
Electronic Components, Future, Semiconductor, Technology

Semiconductors in Space

A post about semiconductors being used in space travel would be the perfect place to make dozens of space-themed puns, but let’s stay down to earth on this one.

There are around 2,000 chips used in the manufacture of a single electric vehicle. Imagine, then, how many chips might be used in the International Space Station or a rocket.

Despite the recent decline in the space semiconductor market, a significant increase in revenue looks likely over the next few years.

What effect did the pandemic have?

The industry was not exempt from the impact of the shortage and supply chain issues caused by covid. Sales decreased and demand fell by 14.5% in 2020, compared to the year-on-year growth in the years previous.

Due to the shortages, many companies within the industry delayed launches and there was markedly less investment and progress in research and development. However, two years on, the scheduled dates for those postponed launches are fast approaching.

Despite that decline, the market is expected to grow over the next five years, from an estimated $2.10 billion in 2021 to $3.34 billion in 2028. This is a compound annual growth rate (CAGR) of roughly 6.89%.
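For anyone who wants to check the arithmetic, the quoted growth rate follows from the standard CAGR formula, (end / start) raised to the power of (1 / years), minus one:

# CAGR sanity check for the figures above.
start, end, years = 2.10, 3.34, 7          # $bn, 2021 to 2028
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.2%}")                        # prints 6.85%, close to the quoted 6.89%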

What is being tested for the future

In the hopes of ever improving the circuitry of spaceships there are several different newer technologies currently being tested for use in space travel.

Some component options are already being tested on board spacecraft, both to emulate operating conditions and to take advantage of the vast vacuum of outer space. The low-pressure conditions can emulate a clean room, with less risk of particles contaminating the components being manufactured.

Graphene is one of the materials being considered for future space semiconductors. The one-atom-thick material is being tested by a team of students and companies to see how it reacts to the conditions of space, with a view to possibly using it to improve the accuracy of sensors in the future.

Two teams from the National Aeronautics and Space Administration (NASA) are also currently looking at the use of Gallium Nitride (GaN) in space. This and other wide-bandgap semiconductors show promise because of their performance at high temperatures and high levels of radiation. They also have the potential to be smaller and lighter than their silicon predecessors.

GaN on Silicon Carbide (GaN on SiC) is also being researched as a technology for amplifiers that let satellites transmit at high radio frequencies to Earth. Funnily enough, it’s actually easier to make this material in space, since the ‘clean room’ vacuum effect makes the process of epitaxy, depositing a crystalline layer on top of a substrate, much more straightforward.

To infinity and beyond!

With the global market looking up for the next five years, there will be a high chance of progress in the development of space-specialised electronic components. With so many possible advancements in the industry, it’s highly likely it won’t be long before we see pioneering tech in space.

To bring us back down to Earth, if you’re looking for electronic components contact Cyclops today to see what they can do for you. Email us at sales@cyclops-electronics.com or use the rapid enquiry form on our website.

Categories
Semiconductor, Supply Chain, Technology

Making silicon semiconductors

As the global shortage of silicon semiconductors (also called chips) continues, what better time is there to read up on how these intricate, tiny components are made?

One of the reasons the industry can’t catch up with the heightened demand for chips is that creating them takes huge amounts of time and precision. From the starting point of refining quartz sand, to the end product of a tiny chip that can hold billions of components, let’s have a quick walkthrough of it all:

Silicon Ingots

Silicon is the most common semiconductor material currently used, and is normally refined from the naturally-occurring material silicon dioxide (SiO₂) or, as you might know it, quartz.

Once the silicon is refined and becomes hyper pure, it is heated to 1420˚C which is above its melting point. Then a single crystal, called the seed, is dipped into the molten mixture and slowly pulled out as the liquid silicon forms a perfect crystalline structure around it. This is the start of our wafers.

Slicing and Cleaning

The large cylinder of silicon is then cut into very fine slices with a diamond saw, and the slices are polished to the precise thickness needed for integrated circuits (ICs). This polishing is done in a clean room, where workers wear full-body suits that do not shed particles. Even a single speck of dirt could ruin the wafers, so the clean room allows no more than 100 particles per cubic foot of air.

Photolithography

In this stage the silicon is coated with a layer of material called photoresist and exposed to UV light through a mask to create the pattern of circuits on the wafer. The unwanted areas of photoresist are then washed away by a developer solution, and the remaining photoresist protects the silicon beneath it as the pattern is transferred.

Fun fact – the yellow light often seen in pictures of semiconductor fabs is in the lithography rooms. The photoresist is sensitive to high-frequency light, which is why UV is used to expose it. To avoid accidentally exposing the resist, low-frequency yellow light is used for the room lighting.

The process of photolithography can be repeated many times to create the required outlines on each wafer, and it is at this stage that the outline of each individual rectangular chip is printed onto the wafer too.

Layering

Each wafer is built up layer by layer, with as many as 30 or more patterned layers deposited and etched in sequence to form the finished ICs. The individual rectangular chips are then cut from the wafer and packaged individually to protect them.

The final product

Because of this convoluted, delicate process, manufacturing a single semiconductor is estimated to take up to four months. This, together with the specialist facilities needed for production, means an extreme amount of care has to be taken throughout fabrication.

If you’re struggling to source electronic components during this shortage, look no further than Cyclops Electronics. Cyclops specialises in both regular and hard-to-find components. Get in touch now to see how easy finding stock should be, at sales@cyclops-electronics.com.

Categories
Active Components, Electronic Components, Semiconductor, Technology, Transistors

The History of Transistors

Transistors are a vital, ubiquitous electronic component. Their main function is to switch or amplify the electrical current in a circuit, and a modern device like a smartphone can contain between 2 and 4 billion transistors.

So that’s some modern context, but have you ever wondered when the transistor was invented? Or what it looked like?

Pre-transistor technology

Going way back to when Ohm’s Law was first formulated in the 1820s, people had been aware of circuits and the flow of current, and, as an extension of this, of conductors.

Following on from this, semiconductors accompanied the birth of the AC-DC (alternating current – direct current) conversion device, the rectifier, in 1874.

Two patents were filed in the 1920s and 1930s for devices that would have been transistors had they ever progressed past the theoretical stage. In 1925 Julius Lilienfeld of Austria-Hungary filed a patent, but he never published any papers on his field-effect transistor research, and so his discoveries went largely unnoticed.

Similarly, in 1934 the German physicist Oskar Heil patented a device that could control the current in a circuit by applying an electric field. This too remained theoretical and never became the first field-effect transistor.

The invention of transistors

The official invention of a working transistor was in 1947, and the device was announced a year later in 1948. The inventors were three physicists working at Bell Telephone Laboratories in New Jersey, USA. William Shockley, John Bardeen and Walter Brattain were part of a semiconductor research subgroup working out of the labs.

One of the first attempts they made at a transistor was Shockley’s semiconductor triode, which was made up of three electrodes, an emitter, a collector and a large low-resistance contact placed on a block of germanium. However, the semiconductor surface trapped electrons, which blocked the main channel from the effect of the external field.

Despite this initial idea not working out, the surface issue was solved in 1946. After spending some time looking into three-layer structures featuring a reverse-biased and a forward-biased junction, they returned to their work on field-effect devices in 1947. At the end of that year, they found that two very closely spaced contact junctions, one forward biased and one reverse biased, produced a slight gain.

The first working transistor featured a strip of gold over a triangle of plastic, finely cut with a razor at the tip to create two contact points with a hair’s breadth between them and placed on top of a block of germanium.

The device was announced in June of 1948 as the transistor – a mix of the words ‘transconductance’, ‘transfer’ and ‘varistor’.

The French connection

At the same time over the water in France, two German physicists working for Compagnie des Freins et Signaux were at a similar stage in the development of a point contact device, which they went on to call the ‘transistron’ when it was released.  

Herbert Mataré and Heinrich Welker released the transistron a few months after the Bell Labs transistor was announced, but it was engineered entirely independently of its American counterpart because of the secrecy surrounding the Bell project.

Where we are now

The first germanium transistors were used in computers as replacements for their predecessors, vacuum tubes, and transistor car radios were in production within only six years of the invention.

The first transistors were made with germanium but, since the material can’t withstand temperatures of more than about 180˚F (82.2˚C), Bell Labs built the first silicon transistor in 1954. Later that year Texas Instruments began mass-producing silicon transistors commercially.

Six years later, in 1960, the first direct ancestor of modern transistors was made, again by Bell Labs: the metal-oxide-semiconductor field-effect transistor (MOSFET).

Between then and now, most transistor technology has been based on the MOSFET, with feature sizes shrinking from around 40 micrometres when the device was first invented to around 14 nanometres today.

The latest development in transistor technology is called the RibbonFET. Announced by Intel in 2021, it is a transistor whose gate surrounds the channel. The technology is due to come into use in 2024, when Intel moves from nanometre-based process names to the even smaller angstrom.

There is also other tech that is being developed as the years march on, including research into the use of 2D materials like graphene.

If you’re looking for electronic components, Cyclops are here to help. Contact us at sales@cyclops-electronics.com to order hard-to-find or obsolete electronic components. You can also use the rapid enquiry form on our website https://www.cyclops-electronics.com/

Categories
Component Shortage, Electronic Components, Future, Semiconductor, Supply Chain, Technology

Ukraine – Russia conflict may increase global electronics shortage

Due to the conflict between Russia and Ukraine, both of which produce materials essential to chip fabrication, the global electronic component shortage may worsen.

Ukraine produces approximately half of the global supply of neon gas, which is used in the photolithography stage of chip production. Russia is responsible for about 44% of the world’s palladium, which is used in the chip plating process.

The two leading Ukrainian suppliers of neon, Ingas and Cryoin, have halted production and said they would be unable to fill orders until the fighting has stopped.

Ingas has customers in Taiwan, Korea, the US and Germany. The company is headquartered in Mariupol, which has been a conflict zone since late February; according to Reuters, the company could not be contacted directly because internet and phone connections in the city were down.

Cryoin said it had been shut since February 24th to keep its staff safe, and would be unable to fulfil March orders. The company said it would only be able to stay afloat for three months if the plant stayed closed, and would be even less likely to survive financially if any equipment or facilities were damaged.

Many manufacturers fear that neon gas, a by-product of Russian steel manufacturing, will see a price spike in the coming months. In 2014 during the annexing of Crimea, the price of neon rose by 600%.

Larger chip fabricators will no doubt see smaller losses due to their stockpiling and buying power, while smaller companies are more likely to suffer as a result of the material shortage.

It is further predicted that shipping costs will rise due to an increase in closed borders and sanctions, and there will be a rise in crude oil and auto fuel prices.

The losses could be mitigated in part by providing alternatives for neon and palladium, some of which can be produced by the UK or the USA. Gases with a chlorine or fluoride base could be used in place of neon, while palladium can be sourced from some countries in the west.

Neon could also be supplied by China, but the shortages mean that the prices are rising quickly and could be inaccessible to many smaller manufacturers.

Neon consumption worldwide for chip production was around 540 metric tons last year, and if companies began neon production now it would take between nine months and two years to reach steady levels.

Categories
Electronic Components, Future, Semiconductor, Technology

What is the Internet of Things?

EveryThing

In terms of IoT, a ‘Thing’ is anything that can transfer data over a network and have its own IP address. Things are most often ‘smart’ devices that use processors or sensors to gather and send data.

These devices have little-to-no need for human interaction, except in cases where the smart device is controlled by a remote control or something similar. Due to the low cost of electronic components and wireless networks being readily available, it’s possible for most things to become, well, Things.

Technically, larger items like computers, aeroplanes and even phones are not themselves considered IoT devices, but they normally contain a huge number of smart devices within them. Smaller devices, however, such as wearables, smart meters and smart lightbulbs, all count as IoT items.

There are already more connected IoT devices than there are people in the world, and as more Things are produced this growth shows no sign of slowing.
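To show just how little it takes for something to become a ‘Thing’, here is a minimal sketch of a device reading a (simulated) sensor and publishing the value over MQTT with the paho-mqtt library. The broker hostname and topic are placeholders, and the random reading stands in for a real sensor driver.

import random
import paho.mqtt.publish as publish   # pip install paho-mqtt

# A simulated temperature reading stands in for a real sensor.
reading = round(random.uniform(18.0, 24.0), 1)

# Publish the reading over MQTT, a lightweight messaging protocol widely used in IoT.
publish.single(
    topic="home/livingroom/temperature",   # placeholder topic
    payload=str(reading),
    hostname="broker.example.com",         # placeholder broker address
)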

Applications of IoT

The automation and smart learning of IoT devices has endless uses and can be implemented in many industries. The medical industry can use IoT to remotely monitor patients using smart devices that can track blood pressure, heart rate and glucose levels, and can check if patients are sticking to treatment plans or physiotherapy routines.

Smart farming has garnered attention in recent years for its possibly life-saving applications. The use of IoT devices in the agricultural industry can enable the monitoring of moisture levels, fertiliser quantities and soil analysis. Not only would these functions lower the labour costs for farmers substantially but could also be implemented in countries where there is a desperate need for agriculture.

The industrial and automotive industries also stand to benefit from the development of IoT. Road safety can be improved by fast transfer of vehicle-health and location data, and maintenance could be performed before issues begin to affect driving. Alongside the implementation of AI, smart vehicles and autonomous cars could eventually drive, brake and park without human error.

What’s next?

The scope of possibilities for IoT will only grow as technology and electronics become more and more accessible. An even greater number of devices will become ‘smart’, and alongside the implementation of AI we will likely have the opportunity to make our lives fully automated. We already have smart toothbrushes and smart lightbulbs; what more could be possible in the future?

To make IoT sustainable and cost-effective, greater security measures and device standardisation need to be implemented to reduce the risk of hacking. The UK government released guidelines in 2018 on how to keep IoT devices secure, and a further bill to improve cyber security entered into law in 2021.

If you’re looking for chips, processors, sensors, or any other electronic component, get in touch with Cyclops Electronics today. We are specialists in day-to-day and obsolete components and can supply you where other stockists cannot.

Contact Cyclops today at sales@cyclops-electronics.com. Or use the rapid enquiry form on our website to get fast results.
