Mitin Novators Europe

Focus on Entrepreneurship and High Tech Industry

The different types of cloud hosting available


The Public Cloud is a solution that consists of hosting data on industrial data centers that are easily accessible and administered by private companies such as Microsoft, Google, Amazon Web Services, IBM, OVH or Salesforce.

These servers are publicly accessible, in self-service, from any terminal connected to the Internet. Thus, all resources (servers, power, active network elements…) are shared among all users. 

The main advantage of the Public Cloud lies in the simplicity and speed of its implementation. A credit card and a few clicks are enough to store your data on a public cloud service. For a company, it is nevertheless necessary to supervise and professionalize this approach, in particular to avoid Shadow IT. This is why more and more IT departments are using the services of IT outsourcing companies.

The strong reactivity and adaptability of the Public Cloud make it a very popular solution.

The implementation of a Public Cloud solution is the best way to benefit from a very fast operational and efficient information system, especially for “turnkey” services such as office suites, CRM, HR suites… 

1 Private Cloud

The Private Cloud is a solution consisting of hosting data on servers dedicated to the needs of a single company. These servers can be physically installed in the company’s premises or in a specialized data center. 

By definition, the Private Cloud is a tailor-made solution that guarantees flexibility and scalability. The CIO and operational teams can build their architecture with far fewer constraints than with a public cloud provider. 

The other strong point of the Private Cloud is its high level of security. Since the data is physically isolated, it is logically less exposed. This solution is particularly suitable for companies with high confidentiality or data protection requirements that want to control the security chain themselves. 

Large enterprises often build their private clouds in their own data centers, to offer their employees and partners the same flexibility, responsiveness and scalability as public clouds, but in a completely controlled environment. Generally speaking, the simplest solution is still to opt for an external private cloud, which allows the company to free itself from hardware investment issues. 

2 On Premise

The On Premise solution (“On prem” in the language of IT departments) consists of physically hosting the servers and therefore all of the company’s data on its own premises. It involves the company investing in its own technical equipment. 

On Premise can be an alternative to the Cloud for infrastructure (IaaS: Infrastructure as a Service) but also for software and licenses (SaaS: Software as a Service). The company acquires its equipment for an unlimited period. 

The main advantage of On Premise is to have total control over its data. The CIO relies only on himself and his teams and is not dependent on an external provider. Since the data is hosted within the company, it is always accessible regardless of the quality of the Internet connection. Finally, the level of security and confidentiality is logically maximal.

3 Outsourced (virtualized) hosting 

Outsourced hosting consists of having your servers hosted in a professional data center, managed and operated by a digital service provider. 

Although some providers will offer to host physical servers, in this white paper we will deal only with virtualized hosting, which offers more flexibility and scalability than hosting physical machines.

Technically, virtualization consists of splitting a single physical server into several completely independent virtual servers. To do this, a virtualization software layer is used, generally called a hypervisor.
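As a rough illustration of this partitioning, here is a toy model in Python. It only tracks CPU and RAM counts; real hypervisors such as KVM, Hyper-V or ESXi manage far more (and often allow controlled oversubscription), so treat this purely as a sketch of the idea.

```python
class Hypervisor:
    """Toy model of a hypervisor carving one physical server
    into several independent virtual servers."""

    def __init__(self, cpus: int, ram_gb: int):
        self.free_cpus = cpus      # physical CPUs not yet allocated
        self.free_ram_gb = ram_gb  # physical RAM not yet allocated
        self.vms = {}

    def create_vm(self, name: str, cpus: int, ram_gb: int) -> None:
        # In this toy model, refuse to oversubscribe the physical machine.
        if cpus > self.free_cpus or ram_gb > self.free_ram_gb:
            raise RuntimeError("insufficient physical resources")
        self.free_cpus -= cpus
        self.free_ram_gb -= ram_gb
        self.vms[name] = {"cpus": cpus, "ram_gb": ram_gb}

# One physical server, split into two independent virtual servers.
host = Hypervisor(cpus=16, ram_gb=64)
host.create_vm("web", cpus=4, ram_gb=16)
host.create_vm("db", cpus=8, ram_gb=32)
```

Each virtual server gets its own slice of the physical resources, which is what lets them run completely independently of one another.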

Outsourced Hosting allows the company to benefit from the advantages of a dedicated environment, without having to manage all the tasks of maintaining this infrastructure in operational conditions.

All maintenance operations on the cooling system, active network elements and security are managed by the provider. In most cases, the provider will also take on a number of maintenance tasks on the server itself, leaving the CIO free to focus on higher-value projects.

Cloud infrastructures are also used in the crypto industry to run blockchains.

Tesla Self Driving Cars Explained


The technology behind Tesla self-driving cars is a complex one. You can learn about Autopilot, Full Self-Driving, Autosteer, and Deep learning in this article. These features are designed to make driving safer and easier. The technology is constantly improving, and as such, it’s important to understand the different features and the technology behind them.


Autopilot

The Autopilot in Tesla self-driving cars uses radar, cameras and ultrasonic sensors to keep track of the road. The company’s CEO, Elon Musk, told engineers the Autopilot system should be capable of driving the car without human input. The system’s performance varied, but it generally worked well on two-lane roads and highways. It activates instantly when the driver pulls the stalk twice, whereas other systems take several seconds to lock onto lane lines. To deactivate Autopilot, the driver must either apply pressure to the brakes or knock the stalk upwards.

Tesla started selling the “Full Self-Driving” package in 2017, which allows cars to respond to traffic signals and change lanes without driver input. This option can cost upwards of $10,000. However, the Tesla self-driving system has been plagued by problems. Earlier this year, the company recalled nearly 12,000 vehicles because of a software update that could cause the emergency braking system to activate without the driver’s approval.


Full Self-Driving

Tesla Motors is attempting to start testing “full self-driving” cars, but it is unclear how the technology will fare. A study by the Dawn Project found that Full Self-Driving cars malfunctioned about every eight minutes in city driving, suggesting that these vehicles do not yet have enough safeguards to be fully autonomous. Tesla has also not yet obtained a permit from the California Department of Motor Vehicles, which regulates autonomous vehicles. The company says it is starting slowly, but hopes to release full self-driving cars to the general public soon.

Tesla’s latest software update also shows that Full Self-Driving cars may intentionally roll through stop signs: the update allows them to do so only in areas with speed limits below 30 mph. The National Highway Traffic Safety Administration has opened an investigation into how well the system handles such edge cases.


Autosteer

Tesla self-driving cars feature an assistive steering system called Autosteer. To activate it, drivers must move the drive stalk down twice in quick succession. Once activated, Autosteer displays a message on the touchscreen reminding the driver to keep their hands on the steering wheel and pay attention.

It can detect emergency vehicles as well. If it detects an oncoming emergency vehicle at night, for example, the touchscreen will display a warning to slow down and a reminder to keep your hands on the steering wheel. Once the lights pass, the car will resume cruising speed. However, drivers can override the feature temporarily by pressing the accelerator pedal. The car also warns the driver if it is likely to run a red light or a stop sign.

Tesla has released updates to Autopilot and its software, including a limit on the maximum cruising speed on certain roads. Autosteer restricts the cruising speed to 45 mph on these roads unless the driver manually intervenes. It can also detect the presence of hands by sensing slight turns of the steering wheel, with or without force.

Deep learning

Deep learning is a technique for training neural networks to identify objects in a scene. Tesla uses it to improve the accuracy of its self-driving cars. The AI chips Tesla uses are specially designed to run neural networks in self-driving cars, and the company maintains a huge database used to store data and train the networks.

The performance of deep neural networks depends on several factors, including the training data and the optimization algorithm used. The larger the data set, the more accurate the model tends to be. Tesla’s self-driving car program is one of the most promising in the world for developing autonomous cars.
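The training loop behind such networks can be sketched at its simplest: below, a single-neuron “network” learns a decision boundary by gradient descent on a toy dataset. This is a minimal NumPy illustration of the principle, not Tesla’s actual stack.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "training data": 2-D points labeled by whether x + y > 1.
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)

# One neuron: weights w, bias b, sigmoid activation.
w = np.zeros(2)
b = 0.0
lr = 0.5  # learning rate

for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    # Gradient of the cross-entropy loss w.r.t. w and b.
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

pred = (X @ w + b > 0).astype(float)
accuracy = np.mean(pred == y)
```

The same loop, scaled up to millions of parameters and images instead of 2-D points, is what “training data and the optimization algorithm” refer to above: more and better-labeled data gives the gradient steps more to learn from.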


Although Tesla’s Autopilot has set new safety standards, its use is not fully regulated by the federal government. The federal road safety authority, the NHTSA, has not issued specific regulations or performance standards for the technology. It is also unclear whether carmakers can stop people from abusing the system, and whether they should have to provide their customers with steering wheels and other human controls to operate the vehicle.

A new study by NHTSA has found that traffic fatalities have reached a 16-year high, with 42,915 fatalities recorded last year. Despite these findings, NHTSA has remained neutral in its investigation of Tesla’s Autopilot program. An earlier NHTSA study of Autopilot showed that crash rates decreased 40 percent in Autopilot-equipped vehicles, a finding Tesla cited for marketing purposes; however, that study was later found to be flawed.

Quantum computing: HPE and Samsung invest in start-up Classiq


The Israeli start-up Classiq, which has developed a platform for designing and optimizing quantum circuits, has just raised $33 million from several investors, including HPE and Samsung funds. The company is planning to open offices worldwide and to hire new staff.

Investments in quantum computing continue in a strategic market to which IT suppliers and governments are committed, including France, which announced in early January a national platform for hybrid quantum computing. This time, HPE and Samsung have just participated in a $33M Series B round of financing for the Israeli software company Classiq.

Classiq is developing a QAD (Quantum Algorithms Design) solution for the design and optimization of quantum circuits and algorithms. The technology “helps quantum teams automate the process of converting high-level functional models into optimized quantum circuits,” the company says on its website, adding that its platform thus enables the design of quantum algorithms that would be impossible to create otherwise.

Users can combine built-in quantum modules with modules they have defined themselves, then specify constraints such as the number of gates, circuit depth and entanglement levels. The Classiq platform then synthesizes an optimized circuit that would have taken weeks to build manually, the vendor says.
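For intuition about one of those constraints, circuit depth, here is a small illustrative Python helper (not Classiq’s API): it schedules each gate one layer after the latest gate touching any of its qubits, so gates acting on disjoint qubits count as running in parallel.

```python
def circuit_depth(gates):
    """Depth of a quantum circuit given as (name, qubits) tuples.

    Each gate is placed one layer after the latest layer that
    touches any of its qubits (ASAP scheduling); the depth is
    the index of the deepest layer used."""
    last_layer = {}  # qubit -> layer of the last gate on that qubit
    depth = 0
    for _name, qubits in gates:
        layer = 1 + max(last_layer.get(q, 0) for q in qubits)
        for q in qubits:
            last_layer[q] = layer
        depth = max(depth, layer)
    return depth

# A 3-qubit GHZ-style circuit: H on q0, then a chain of CNOTs.
ghz = [("h", (0,)), ("cx", (0, 1)), ("cx", (1, 2))]
print(len(ghz), circuit_depth(ghz))  # 3 gates, depth 3
```

A synthesis tool optimizing under a depth constraint would search for an equivalent circuit whose `circuit_depth` stays below the user’s bound while still computing the same function.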

HPE and Samsung invested through their respective funds, Hewlett Packard Pathfinder and Samsung Next, alongside other entrants in the company such as Phoenix (insurance group), Spike Ventures and private investors Lip-Bu Tan (former CEO of Cadence Design Systems) and Harvey Jones (former CEO of Synopsys). They joined the funds already present in the capital, Wing VC, Team8, Entrée Capital, Sumitomo Corp (via IN Venture) and OurCrowd.

A hardware agnostic platform

Since its creation in Tel Aviv 20 months ago, Classiq has raised $50M. The company co-founded by Nir Minerbi, its CEO, Amir Naveh, head of algorithms, and Yehuda Naveh, CTO, brings together experts in quantum computing, theoretical physics, CAD technologies and IT. “This new funding comes at a pivotal moment; the quantum industry is now moving from consulting services to products and from prototype to production,” comments Nir Minerbi in a statement. With the funding, Classiq will open additional offices around the world and strengthen its team of engineers and researchers to continue its development and file patents for quantum algorithm design.

Quantum stack

Hewlett Packard Pathfinder says it is impressed with Classiq’s engine, which automates the creation of quantum circuits and “is leading to a significant lowering of the barriers to entry into quantum computing,” says Paul Glaser, vice president and head of the HPE fund. For his part, Boaz Morris, an investor at Phoenix, points out that while quantum hardware has made impressive progress, the software used to operate it remains woefully inadequate. “Classiq’s hardware-agnostic platform allows companies to develop sophisticated quantum software faster and better than any other method,” he says in a statement. Finally, the Israeli company has attracted the interest of independent investors Harvey Jones and Lip-Bu Tan, both from the electronic design industry: the companies they used to run, Synopsys and Cadence, do for electronic design what Classiq now offers for quantum software design.

A global market of $500M in 2021

The global quantum market is still in its infancy. It has just been estimated at $500M in 2021 in a recent study by Hyperion Research on behalf of QED-C and QC Ware with the support of the European Quantum Industry Consortium and the Quantum Industry of Canada. Its value is expected to grow at an annual rate of nearly 25% through 2024 to reach $880M. Quantum software revenue, including applications and middleware, accounts for 35% of the total, hardware (on-premises and cloud-accessible) accounts for 21%, user access to quantum cloud services accounts for 17% and professional services accounts for 15%.

In France, last month, Pasqal, a company that designs and produces quantum simulators, acquired Dutch quantum software company Qu&Co to merge their projects. Pasqal designs and produces quantum processors based on neutral atom technology. Among the recent mergers in the market, Honeywell formed Quantinuum with Cambridge Quantum last December.

To democratize its Metaverse, Facebook plans to open physical stores


The project, in gestation since 2020, is not guaranteed to succeed.

In a curious irony, documents obtained by the New York Times reveal that Facebook is considering opening physical stores in order to spark consumers’ “curiosity and proximity” toward the Metaverse.

Facebook wants to convince Metaverse skeptics

Until now, Facebook, which has since become Meta, has never operated permanent physical stores; at most, short-lived shops in airports or in Manhattan have welcomed the public. Since 2020, however, the social network has been seriously considering the possibility.

These stores would be designed to showcase products from Facebook’s Reality Labs division, which is at the heart of the company’s Metaverse strategy. According to internal documents, sleekly styled stores, with branding subtly scattered around, would display Oculus virtual reality headsets, Ray-Ban augmented reality glasses and various other products.

Meta is well aware of the skepticism surrounding the Metaverse. The Oculus Quest 2, its entry-level headset, has failed to take virtual reality beyond a niche audience, and the virtual universe Mark Zuckerberg envisions is regularly compared to Second Life, the now almost forgotten video game of the early 2000s.

An uncertain realization?

Ironically, Facebook hopes to convince a wider audience via its stores. The project seems already advanced. A demonstration store is planned in Burlingame, California, near the offices of Reality Labs.

Several names have been considered, says the New York Times: Facebook Hub, Facebook Commons, Facebook Reality Store and From Facebook, with a sober “Facebook Store” seeming to find the most favor. The name could still change to reflect Meta, the firm’s new identity.

A company spokesperson said these plans were not confirmed, and the project may never see the light of day. In the meantime, Meta’s future virtual reality headsets will continue to try to expand their audience through the company’s retail partners.

Cannabis Driving: A New Canadian Testing Technology


Engineers at York University in Toronto have developed a new device to detect cannabis in the body.

The technology uses a laser and a smartphone-sized infrared camera to measure the level of Tetrahydrocannabinol (THC), the primary psychoactive molecule in cannabis, in an individual’s saliva.

According to the team that developed the technology, this new way of screening cannabis use can provide test results in less than 10 minutes, faster and more accurately than what is currently on the market.

“Our laboratory evaluations of the system have shown that it gives a minimal number of false-positive or false-negative results,” says the professor at the Lassonde School of Engineering, which he believes gives the test greater credibility.

Health Canada’s 2019 Canadian Cannabis Survey found that among respondents who had used cannabis in the past 12 months, 26 percent reported driving a vehicle within two hours of smoking or vaporizing cannabis.

According to Mothers Against Drunk Driving (MADD) data, there were 138 cannabis-related deaths in Ontario in 2014.

A better tool than what Canada’s police use?

The Ontario Provincial Police (OPP) currently uses a tool called Dräger at their roadside checks, which detects cannabis residue in the oral cavity.

Dr. Tabatabaei believes his tool outperforms the Dräger test because it can detect much lower THC concentrations, is more convenient to transport, and provides results more quickly.

Considering its potential use in the field by law enforcement, Dr. Tabatabaei’s team designed the screening tool to be smaller and more mobile than the Dräger system, which weighs 4.5 kg and must be installed on a table.

Like the new technology unveiled by the York University researchers, the Dräger tool only detects cannabis in the mouth; it does not directly prove that the individual being tested is intoxicated.

As for the possibility of the OPP adopting the new tool developed by Dr. Tabatabaei’s team, OPP Sergeant Kerry Schmidt said it is up to the government to test and verify the reliability of new devices coming to market: “Until then, we will not use any other devices.”

Traces of cannabis can remain detectable in the human body for quite a long time, according to Ganja Times.

A technology that can be used in the fight against Covid

The York University research team is actively pursuing the commercialization of its discovery. Dr. Tabatabaei believes that the technology can detect a large number of molecules beyond THC.

“Our technology is a platform that allows accurate reading of rapid tests. It can detect different types of molecules: THC is just one, and COVID-19 is another,” says Tabatabaei.

Fingerprint protection is relatively easy to bypass

For years now, fingerprint unlocking has been available on the vast majority of mobile phones, even entry-level models. With the arrival of Touch ID on the iPhone 5S in 2013, hackers began trying to defeat this type of authentication, built into the phone’s home button. At the time, it took them only 48 hours to succeed.

This challenge has become more and more complicated thanks to the security reinforcements added by the various manufacturers. Today, fingerprints are often used for two-factor authentication of accounts on mobile phones, and this is quite useful. It could be said that everyone is now safe from a hacking attempt via these fingerprint security systems. “Everyone,” except perhaps people specifically targeted by hackers with significant resources or state support.

That is the finding of a study just published by Cisco’s Talos security group. With a budget of $2,000 a month, the researchers tested fingerprint authentication systems on mobile devices from Apple, Microsoft, Samsung, Huawei, and the three other major manufacturers of sensors found on electronic devices. In the end, out of 20 attempts with each device, the authentication succeeded in 80% of cases using fake fingerprints very close to the real ones.

0% chance of success with Windows 10

While this hack is possible and quite effective, the team explains that more than 50 impression molds had to be made before one worked at this level, and the experiment lasted several months. In other words, it is not within reach of every attacker: you first have to collect fingerprints from the target and design the print molds. It requires such determination that the target’s device must hold contents of great importance. For this reason, hacking groups backed by well-resourced entities or states would be the only ones likely to carry out the operation.

The figures for the iPhone and other mobiles were quite similar. Some models, such as the Honor 7X or the Samsung Galaxy Note 9, could be systematically unlocked, and overall, the newer the model, the more attempts were required. In any case, this means the probability of accessing a phone’s contents before it locks and demands a passcode is very high. Apart from the mobiles, only two sensor-secured USB sticks resisted all attempts to unlock them with the fake fingerprints. The computer they were securing ran Windows 10, and for the researchers it is the fingerprint-comparison algorithm built into Windows 10 that reinforced the security. But in their view, that does not mean a hack is impossible; it would just take a little more time and resources.

American researchers are alarmed by the lack of regulation around artificial intelligence

The AI Now Institute Research Center from New York University published a report on Thursday, December 12, 2019, on the implications of artificial intelligence (AI) in society. This year, researchers focused on the negative aspects of AI.

The conclusion is clear: this technology must be much more strictly regulated. “It is becoming increasingly clear that in various areas, AI amplifies inequalities, places information and means of control in the hands of those who have power, thereby depriving those who do not already have it,” the document states.

The danger of algorithmic biases

“The AI industry is terribly homogeneous,” the report warns, and this lack of diversity produces biased algorithms. The biases can come from two sources: first, the developers themselves, who embed their own cognitive beliefs and biases in their algorithms; second, the data that feeds the system and on which it is trained.

The report “Algorithms: bias, discrimination and equity” by researchers from Télécom ParisTech and the University of Nanterre, published in March 2019, takes the example of Amazon. In 2015, the e-commerce giant decided to use an automated system to help screen job applicants. The initiative was halted because only men were being selected. “The data entered were completely unbalanced between men and women, with men constituting the overwhelming majority of managers recruited in the past. The algorithm did not give any chance to the newly qualified candidates,” the report explains.

To combat these biases, researchers at the AI Now Institute advocate truly opening engineering positions to women and minorities. They also believe that the creation of algorithms should not remain in the hands of information science researchers alone, but should be open to the social sciences.

Study the risks of facial recognition

Two points are of particular concern to American scientists: facial recognition and algorithmic biases. “Governments and companies should stop using facial recognition in sensitive social and political contexts until the risks are fully investigated, and appropriate regulations are in place,” says the AI Now Institute.

The report notes that this technology gained considerable momentum in 2019, particularly in China, where citizens are forced to have their faces scanned to purchase a telephone package. However, it has been shown that these technologies are far from perfect. For example, in July 2019, the National Institute of Standards and Technology (NIST) in the United States published a study showing that these systems have particular difficulty distinguishing the faces of Black women.

The report advises the legislator to adopt a moratorium imposing transparency requirements that would allow researchers, policymakers, and civil society to decide on the best approach to regulating facial recognition. The public must also be able to understand how these technologies work to form their own opinion.

How to supervise such an industry?

This report is not the first to raise the alarm about the lack of regulation around AI, and states are struggling to adopt legislation on the subject. At the end of August 2019, we learned that the European Commission was working on a text; at the end of November 2019, UNESCO was mandated to draft a “global standard-setting instrument” within 18 months. But this exceptional initiative runs up against the concrete reality of international law: in the vast majority of cases, international sanctions are difficult to make effective, and states can evade them very quickly.

The most iconic high-tech devices from the 90s

Your smartphone now has the power of a computer. You can access millions of tracks via Spotify or watch your movies in 4K. Things have changed a lot. So let yourself be carried away by a wind of nostalgia by (re)discovering some of the most popular technological devices of the 1990s.


Pagers

Long before the advent of mobile phones, the pager family reigned supreme in high-school classes. Kobby, Tatoo, Tam-Tam: all names that accompanied teenagers and adults in the mid-1990s. These small boxes used a radio messaging service to send messages to recipients. The Tatoo let you receive a message (first numeric, then alphanumeric!) prompting you to go to a phone booth and call back the number received.

The models that followed even let you read messages of up to 80 characters, which the sender had previously dictated to an operator. Pagers became scarce when SMS arrived, but the system was still used for a long time by firefighters and hospital staff because of the extensive range of radio transmission.

Floppy disks

Are you worried because your smartphone has only 64 GB of storage? Well, you should know that not so many years ago, we still stored our documents on floppy disks (5.25 or 8 inches). The floppy disk remains the icon of choice for representing the save function.

Invented at the end of the 1960s, the floppy disk could initially hold 80,000 characters, or one day’s typing. It later evolved and accompanied many geeks through the 1990s. It must be said that with 1.44 MB of storage (and sometimes much more if you paid the price), it was already possible to keep many documents. A relic from the past!

The “almighty” Gameboy

The Game Boy is probably the only gadget you could take out on the bus without shame; that is the measure of Nintendo’s extraordinary success. This mythical handheld has stood the test of time and is still considered one of the best portable consoles in the history of video games. Faced with the more powerful Game Gear, this grey plastic block managed to offer hours of happiness to gamers on the move: Super Mario Land 2, Wario Land, Zelda: Link’s Awakening, Metroid II, and many other games.


The VHS cassette

The VHS cassette (for Video Home System) appeared well before the 90s, but the format became massively popular and was still in its prime at that time. Resolutely targeting the family market, the cassettes held a reel of magnetic tape on which a video recorder could read, but also (and especially!) record, video and audio signals.

Coupled with a video recorder, they made it possible to record the programmes and films shown on television. Ease of use, combined with strong support from the film studios, contributed significantly to the format’s success and to our memories.

Peaks And Valleys – A Brief History Of Bitcoin


In 2009, Bitcoin burst onto the scene with the promise that cryptocurrencies would revolutionize the way transactions were performed. No longer would governments control the flow of money, and tax would become a thing of the past. It would be up to the individual, aided by advanced algorithms, to be the master (or mistress) of their destiny in a brave new world of digital currency.

The question still remains in many people’s minds: is Bitcoin a legitimate investment that can live up to the promise of its early days and become ever more widely accepted as a payment method, or is it merely another in a long line of currency-focused scams? The extreme volatility of Bitcoin has also fueled fears that it cannot operate as a bona fide currency without the guiding hand of a central banking system to smooth out extreme swings in value.

What is the truth? As usual, it lies somewhere between the poles: Bitcoin as the broom that sweeps the current deck of fiat currencies clean, or Bitcoin as simply a fad doomed to the dustbin of history.

Let’s take a closer look.

Pundits such as financial consultants and stockbrokers, who have been at the forefront of tracking traditional currency performance for decades, are by no means upbeat about Bitcoin. They write buyers of the cryptocurrency off as incredibly naive, falling for the wiles of modern-day crypto snake-oil salesmen and women.

However, despite some enormous swings in value, Bitcoin, much to the surprise of many of these pundits, is still around. That may be due to the influence of a diverse group of buyers, each of whom invests in the currency for very different reasons.

First, there were the early adopters who could to a certain extent be termed the idealists. They wanted to buy into (literally) the idea of a digital currency that was untraceable, private and not subject to the scrutiny of third parties such as financial institutions. Bitcoin, thanks to its cryptographically secured (public) ledger seemed to tick all the boxes. According to its inventor, Satoshi Nakamoto, it would also have the advantage of eliminating charges on credit cards and reducing transaction fees.

Then there were those who simply longed for less government interference in their day-to-day lives, including their financial affairs; these have been called the “libertarians.” Nakamoto was scathing about central governments’ ability to manipulate the money supply by simply making more of it available. Bitcoin had a hard ceiling: there could only ever be so much of it. For those looking for where to buy it, Bitcoin and other cryptocurrencies are available from many online exchanges.

Then there were the young idealists: savvy youngsters caught up in the idea that technology could transform the world around us for the better. In their minds, tech entrepreneurs were the leaders of a world in which inequality, based on an elite’s access to vast sums of money, would become a relic of the past. Much to their surprise, many of them quickly joined that elite after buying small amounts of Bitcoin and watching the price skyrocket.

Of course, two other groups have influenced the value of Bitcoin: traditional investors, and those who want a hedge against market volatility that might affect the value of their portfolios.

Whatever the future of Bitcoin, it is apparent that the diverse nature of its investors at least protects it from extinction. Volatility in value, however, seems to be something investors will have to live with.

The cloud: a fast-growing technology

You have probably heard of or used the cloud before. Its functions are not limited to storing photos and videos; it also lets you run devices and watch movies and series anywhere, even on your mobile phone.

The cloud: what is this technology used for?

Maybe you don't realize it, but when you watch videos on YouTube or post on Facebook, you are using the cloud. This technology provides software, services and storage space via the Internet, unlike the traditional hard drives built into devices such as computers or smartphones.

The cloud works thanks to servers that host vast amounts of data. For Internet users, they provide remote storage for e-mails, photos, music or films via the Internet. It is thanks to this technology that video and music streaming services (Netflix, Deezer and others) exist.

For business use, many companies rely on the cloud to run their applications and online services, and to analyze or store their files. To do this, they use systems provided by third parties such as Google, Microsoft or Amazon, and pay according to the storage volume and computing power used.

This system is very flexible and less expensive than traditional IT. Moreover, if a company wants to keep full control of its data, it can choose a private cloud designed especially for it, although this option is more expensive.

Most companies need both public and private clouds, and many of them pick from several different cloud providers. This approach is beneficial because it avoids depending on a single cloud that could fall victim to a cyber-criminal attack. The different clouds must, however, be compatible, which explains the acquisitions of open-source (free and editable) software that enables them to interact.

The cloud: a huge market in 2019

In a short time, the cloud has become a high-growth and profitable IT market. The U.S. Department of Defense has issued a call for tenders that could reach $10 billion.

Specialists point out that we live in an era where the cloud will allow new technologies to emerge. According to Gartner, the public cloud market, currently at $287 billion worldwide, could grow to $340 billion by 2022.

Amazon dominates

AWS (Amazon Web Services) is the market leader. Amazon took the lead by starting to provide public cloud services about ten years ago, and it now dominates the market. AWS is highly profitable, with a turnover of $6.7 billion.

Google does not give exact figures, but its non-advertising revenues are $4.6 billion.

The cloud in 2019: multi-cloud, serverless and containers

Three technological trends already occupy people’s minds when it comes to deploying data in the cloud: multi-cloud, serverless and containers.

Even if businesses continue to use them next year, full integration may take time. What is less obvious is the speed at which companies will continue to adopt these technologies.

Based on the "active data" of more than 1,600 customers using the Sumo Logic platform, one report points to several trends in the cloud. First, multi-cloud adoption and deployments doubled, with Amazon Web Services in the lead, followed by Microsoft Azure and Google Cloud Platform. Second, the adoption of serverless architectures continues to grow: one in three companies uses AWS Lambda. Third, one in three companies uses native or managed Kubernetes orchestration solutions. Finally, 28% of companies run Docker containers on AWS.
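"Serverless" here means the cloud provider runs your code on demand and you never manage the machine. As a hedged illustration, the sketch below shows the general shape of an AWS Lambda handler in Python; the `name` field in the event payload is a hypothetical example, and in production AWS itself would invoke the function rather than the local simulation at the bottom.

```python
import json

def handler(event, context):
    # AWS Lambda calls this entry point on each invocation; the developer
    # never provisions or manages a server. `event` is the invocation
    # payload, `context` carries runtime metadata (unused here).
    name = event.get("name", "world")  # hypothetical payload field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local simulation of an invocation (normally done by AWS):
if __name__ == "__main__":
    print(handler({"name": "cloud"}, None))
```

The pay-per-invocation model is part of what makes the "one in three companies uses AWS Lambda" figure plausible: there is no idle server to pay for.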

This data should not come as a surprise given the market's explosive growth, but it is interesting confirmation that companies are rapidly migrating to the cloud. They are moving away from basic storage and IT infrastructure services toward the services used by the most modern companies, for example multi-cloud management, serverless computing and containers. But what does this reveal about next year's deployments?

Growing Adoption, but still inconsistent

First, multi-cloud technologies carry many costs, in addition to the extra complexity that must be managed. Enterprises will nevertheless quickly take the lead, primarily through the use of advanced cloud and multi-cloud management platforms. Second, serverless will become the default for the development of most future cloud services, including databases. Easier to use, it removes third parties from decisions about provisioning additional computing or scaling resources. Moreover, building cloud services on this technology will allow serverless subsystems to be improved and delivered more quickly, putting additional pressure on traditional PaaS systems.

As the container market grows, so does Kubernetes. It has become the standard for anyone who wants to run container clusters at any scale, and the solution is now a heavyweight in the industry. What is less obvious, however, is the speed at which companies will continue to adopt these technologies, which can be inconsistent. A slowdown in integrations is likely in the short term if the technologies and bandwidth needed for deployment become saturated.

Nobody can precisely predict the future of cloud technology, but with constant competition and innovation, we can hope that it changes for the better.

WordPress Hosting Information For Beginners

wordpress hosting solutions

If you want to get WordPress hosting for your website, this article is here to help. You have to make sure you find an excellent hosting service if you want your website to do well. Here is how to find a company that offers WordPress as one of its hosting options.

Managed WP or regular host

You're going to want to know whether you can use WordPress with a hosting service or not. That means looking on their website to see whether they mention that they can host this kind of site. If you can't find any information about it on their website, contact them and ask whether their hosting services include the ability to install and use WordPress. If the company doesn't give you a definite answer, move on and work with a company that will tell you straight up whether they can host your site correctly or not.

Reputation matters

You're going to want to find a host that is well reviewed before you work with them. There are many websites offering quality reviews, such as lists of the best WordPress hosting companies of 2018.

You need to find out whether they are worth your time. Look at what people have been saying recently about the work the hosting service has done. If the reviews show that the company does good work, then it's clearly a good company to work with; if all you find are negative reviews, you know not to waste your time with them.

Pricing is critical

Figure out what you're going to have to pay to host your website. Make sure it's not too expensive: there are plenty of great hosts that will host your website for a few dollars each month. Know that if you sign up for a hosting service for more than a few months at a time, you generally get a bigger discount. You may also be able to find coupon codes to use when signing up, so look into that and see if you can save more on the price of their hosting services.

Many hosting companies have WordPress built into the dashboard you use to control what goes onto your website. Ask them whether you can install WordPress from the dashboard or whether you have to do the work yourself. If you have to do it on your own, that's okay: it's not difficult if you follow the instructions you can find online. Just search for a WordPress installation guide, and you should be presented with results that show you what to do.

WordPress hosting is now something you know a little more about. You want to find a place that is going to host your site for a reasonable price, and that has the features you need. There are plenty of great hosts out there so take your time and look for them.

Huawei Mate 20 Pro: same ultrasonic fingerprint sensor under the screen as Galaxy S10

The Huawei Mate 20 Pro would be equipped with the same ultrasonic fingerprint sensor as the Samsung Galaxy S10. Eager to beat its eternal rival to the punch, the Chinese outsider has signed an exclusive contract with Qualcomm, which designs the fingerprint scanner.

According to our Korean colleagues at ETNews and Biometric Update, the Huawei Mate 20 Pro would be the very first smartphone equipped with an ultrasonic fingerprint sensor. More capable than the optical sensor of the Vivo Nex or Vivo X20 Plus UD, it scans your fingerprints much more precisely and works even if your fingers are wet or a little dirty.

High tech specs

The Huawei Mate 20 Pro and Galaxy S10 will be equipped with the same fingerprint sensor under the screen!

The latest news is that the Galaxy S10 was expected to rely heavily on this new technology, developed primarily by Qualcomm, to showcase itself. Samsung's CEO praised the fingerprint sensor over the technology used by Vivo and its other Chinese competitors. As a reminder, optical sensors are mostly designed by the Chinese firm Goodix, PhoneArena specifies.

Huawei has signed an exclusive contract with Qualcomm running until February 2019 to gain a foothold over Samsung. This strategic deal will enable the Chinese manufacturer, which aims to become the world market leader as of next year, to beat its competitor to market.

In addition to this fingerprint sensor under the screen, the Huawei Mate 20 Pro would be equipped with a triple rear photo sensor, a Kirin 980 SoC and a large borderless AMOLED screen. The Mate 20 Pro should be presented around next November.

Until then, you can watch the following video which is a first presentation and introduction of the new Huawei Mate 20 Pro:

Google Maps replaces its flat world map with a terrestrial globe

google earth

Last week, Google updated the desktop version of its online mapping service by replacing the planisphere system with a globe.

With all due respect to flat-earth supporters, such as Kyrie Irving of the Boston Celtics, last weekend's Google Maps update gave a much more spherical shape to its virtual planet Earth, which until then had been displayed as a flat world map based on the famous Mercator projection.

This modification changes nothing for traditional use of Google Maps; it is when zooming out that the magic of the mapping service works, showing not a flat world map but a 3D terrestrial globe.

A desire to represent the earth more realistically

If Google chose to abandon the flat map, it is because the Mountain View giant wants to offer a far more realistic vision of the Earth than a Mercator-projection planisphere allows.

Adopted as the standard world map thanks to the precision it offered for sea voyages, the Mercator projection has one main flaw: it enormously distorts the areas furthest from the equator, presenting Europe, for example, as larger than South America, when in reality South America is almost twice the size of Europe.

To overcome this problem, Google has opted for a representation of the earth in the form of a globe, which provides a representation of the world much more realistic than a planisphere allows.
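The distortion has a simple mathematical root: the Mercator projection stretches distances by a factor of 1/cos(latitude), so apparent areas are inflated by the square of that factor. A minimal sketch of the arithmetic:

```python
import math

def mercator_area_inflation(latitude_deg):
    # On a Mercator map the linear scale factor at a given latitude is
    # sec(latitude) = 1 / cos(latitude); areas are inflated by its square.
    k = 1.0 / math.cos(math.radians(latitude_deg))
    return k * k

# No inflation at the equator; at 60 degrees of latitude, areas appear
# four times larger than they really are.
print(round(mercator_area_inflation(0), 2))   # 1.0
print(round(mercator_area_inflation(60), 2))  # 4.0
```

This is why Europe, far from the equator, looks larger than South America, which straddles it, and why a globe view avoids the problem entirely.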

Google Maps conquering space

However, the introduction of the globe is not the only novelty that the Mountain View giant has brought to its mapping service. By activating the satellite mode, it is now possible to travel in space.

With this update, Google has doubled down by giving users the possibility to observe live which parts of the Earth are illuminated by the sun, to see the effects of that lighting on the Earth from space, and to visit other planets and natural satellites.

First iPhone X Plus and iPhone 9 photos unveiled

Today is a big day, as it seems that the first pictures of the iPhone X Plus and iPhone 9 have just been leaked. We're letting you know right away, as we know there are quite a few fanboys in our audience 😉

We already had a rough idea of what the 2018 iPhone models would look like, especially from the few realistic leaked iPhone X designs, but a few hours ago things were confirmed with the leak of several photos of the brand's new very high-end iPhone X Plus.

This phone will get a 6.5-inch OLED screen, while the iPhone 9, the entry-level model, should this time integrate a 6.1-inch LCD panel. The two phones appeared on Twitter via the account of the leaker Ben Geskin, who revealed some photos while explaining that they were dummy models, intended for case and accessory makers so that they can get a head start on manufacturing their products.

twitter source

Only dummy models were leaked, but they reveal a little more about the design of the two new iPhones, and in particular confirm the presence of a double photo sensor on the iPhone X Plus.

However, we do not know whether these new Apple products will be able to charge each other. Until now we had mostly seen computer-generated images, but if this new leak proves true, the previously revealed rumours were quite close to the truth. For official confirmation, and hopefully answers to all our questions, we will probably have to wait for the September 2018 keynote.

As usual, let's wait and see, people!

What are your main expectations regarding this new flagship model?

The Galaxy Watch, a Samsung connected watch?

samsung Galaxy Watch

Is this a complete change of direction for Samsung in the connected-watch market? The South Korean company already has a range called Gear, but one cannot say it has been a great success. More and more indications suggest that the company is taking a new course with a Galaxy Watch range.

Farewell Gear?

According to the SamMobile portal, Samsung is about to make a significant strategic change. To support this claim, the site points to a "Samsung Galaxy Watch" logo. A generally well-informed Twitter user goes even further and claims that Samsung would abandon its in-house operating system, Tizen, in the process, returning instead to Wear OS, the system developed by Google: a complete change of direction. According to some rumors, Samsung would also integrate its voice assistant, Bixby.

Range of watches

Reviving a dying market

As for Samsung, there is no confirmation for the moment, and we will probably have to wait until at least the end of August to know more: the period Samsung generally chooses to unveil its new connected watch models, during IFA.

In any case, this information suggests that no big revolution should be expected. Samsung is probably looking to gather its different products under the Galaxy banner, a label that should be more meaningful and more attractive to customers than Gear.

Will it be enough to take Samsung out of this niche market and finally compete with Apple, the primary player? If the Swiss watchmaking giants are unable to do so, the task still looks complicated. We'll have to wait a bit to find out…

Browser: Chrome, Edge, Firefox… Is there a better browser?


We often hear fans of a browser defend it, arguing that it is better than the competition, whether it be Chrome, Edge or Firefox. The question we are therefore entitled to ask ourselves is: which is the best web browser between these three alternatives? Chrome, Firefox or Edge?

Like many people, you used a web browser for a while before getting angry at it, then tried the competitors and finally chose another one. So which is the best web browser for surfing the web serenely and efficiently?

What is the best web browser between Chrome, Firefox, and Edge?

The easiest way to get a more or less precise idea is to submit them to a series of different benchmarks and rank them according to the results.

Here are the 5 benchmarks we selected to see which browser is better:

• HTML5Test, to check efficiency with HTML5 language
• JetStream, to check JavaScript support
• Kraken, also for a little test on the issue of JavaScript
• MotionMark, which studies graphics performance
• Speedometer to get an idea of the overall performance

What came out of these different benchmark tests?

• HTML5Test: Chrome first, ahead of Firefox, then Edge
• JetStream: Edge ahead of Firefox, with Chrome in third place
• Kraken: Firefox ahead of Edge, with Chrome in third place
• MotionMark: Edge takes a considerable lead over Firefox and Chrome, which show equivalent results
• Speedometer: Chrome crushes Firefox and Edge by far
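One simple way to turn these five orderings into an overall ranking is a mean-rank aggregation. The sketch below transcribes the results above into rank tables (where a benchmark left two browsers unordered, both are assigned rank 2 as a simplifying assumption):

```python
# Per-benchmark rank of each browser (1 = best), transcribed from the
# results above; unordered ties both get rank 2 as a simplification.
ranks = {
    "HTML5Test":   {"Chrome": 1, "Firefox": 2, "Edge": 3},
    "JetStream":   {"Edge": 1, "Firefox": 2, "Chrome": 3},
    "Kraken":      {"Firefox": 1, "Edge": 2, "Chrome": 3},
    "MotionMark":  {"Edge": 1, "Firefox": 2, "Chrome": 2},
    "Speedometer": {"Chrome": 1, "Firefox": 2, "Edge": 2},
}

mean_rank = {
    browser: sum(r[browser] for r in ranks.values()) / len(ranks)
    for browser in ("Chrome", "Firefox", "Edge")
}
print(mean_rank)  # {'Chrome': 2.0, 'Firefox': 1.8, 'Edge': 1.8}
```

The three come out nearly tied on this crude aggregate, and note that mean rank ignores margins: a weighted scheme that credited Speedometer's large gap would favour Chrome more.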

Note: it is essential to specify the configuration on which this series of benchmarks was run, because with another machine the results could shift in one direction or another. It is also important to specify which version of each browser was used.

The configuration:

The five benchmarks were run on an Intel Core i5 laptop with a 256 GB SSD and 8 GB of RAM, running Windows 10 (64-bit).

In conclusion, as you will have noticed, there is no big winner in web browsing: the three browsers are globally equal. We can give a slight edge to Chrome this time, but remember that in 2015 the same tests made Edge the winner. The results depend mainly on the optimizations each publisher ships in its new versions before the competition catches up…

And you, which browser do you use every day and why?

Magic Leap One: AR glasses coming this summer

augmented reality in action

Last December, Magic Leap lifted the veil on its augmented reality glasses, which had been making headlines for several years without anyone knowing exactly whether they were another standard AR product or a technological revolution. After a few more long months of silence, the Florida-based company, which managed to raise 2.2 billion dollars (2 billion euros) for this project, finally announced something concrete for the launch of its Magic Leap One glasses: they will be released this summer in the US.

There is no precise date for the moment, but we do know that the product will be distributed exclusively by the operator AT&T, with which an agreement has just been sealed. As for the price, Magic Leap has not given anything more than its initial indication that the glasses should cost at least as much as a high-end smartphone. Based on the prices of an iPhone X or a Samsung Galaxy S9+, the Magic Leap One could cost between 1,000 and 1,500 dollars, at least.

In a long video presentation broadcast on Twitch (and viewable on YouTube), the developers in charge of the project shared some technical details on the hardware and software configuration of their AR glasses. We learn that the device uses an NVIDIA Tegra X2 SoC (system on a chip) incorporating two 64-bit ARM processors. The operating system is a hybrid creation that includes 64-bit Linux as well as borrowings from "other systems" that are not named. On the other hand, we still do not know the battery life of the Magic Leap One, nor how it executes applications (locally or via the Internet?).

What about the actual AR field of view?

The presentation was punctuated by clips demonstrating how the headset works. We discover a game featuring a golem that the user "installs" into the real setting with finger-pinching gestures. Visual cues let you know where you can insert virtual objects. The character integrates into the environment in a credible way, whether on the ground, on a sofa or on a kitchen worktop.

Note that the headset detects not only hand movements but also the user's body movements. When the golem throws a rock at the user, they can avoid it with a side step or block it with a hand. It will be up to video game and application developers to make the most of these capabilities. The demonstration is quite convincing, but remember that it was pre-recorded and we do not know whether it was embellished. We recall that Magic Leap admitted to using special effects in some of its previous demonstrations.

One of the remaining questions concerns the real field of view provided by the Magic Leap One, which will necessarily be restricted to a frame. The same question arose with the demonstrations of Microsoft's HoloLens glasses, which suggested total immersion whereas the vision in real conditions was more limited. We should know more about the Magic Leap One and how it performs as soon as it reaches the market.


The blockchain revolution : how does it work ?

blockchain tech

The blockchain is often associated with Bitcoin, the virtual currency created in 2008. It is a sort of decentralized ledger, controlled collectively on the peer-to-peer principle from a distributed database. It is the blockchain that ensures the security of transactions by distributing trust. The system is reputed to be transparent and tamper-proof, and its scope goes far beyond currency alone.

What is the blockchain, and why is it talked about as a revolution? Behind this system for securing Bitcoin transactions lies a concept whose reliability rests on transparency and mutual trust. Some think the blockchain is destined to play a central role in our lives by replacing trusted centralized third parties such as banks, notaries, insurance companies and more.

The blockchain was born with Bitcoin

The blockchain was born at the same time as the cryptographic currency Bitcoin, which appeared in 2008. Bitcoin is used to purchase goods and services; it can also be exchanged for other currencies. Unlike traditional currencies, Bitcoin is not administered by a single banking authority: it operates in a decentralised fashion through a set of nodes, which form the network through which all transactions are made. A secure public register keeps the history of all these operations and is considered forgery-proof since it is based on the principle of shared trust.

"Mining" forms the nodes of the blockchain

Each transaction is encrypted and stored in a block, which may contain several separate transactions. Each block contains a cryptographic fingerprint (hash) of the previous block that attests to its validity. This marking operation is carried out by volunteer users called "miners", who make their time and the computing power of their machines available to administer the blockchain. The miners form the nodes, or rather the links, of the blockchain.

This operation, called "mining", allows these people to be paid, in Bitcoin of course. The security of the network is maintained by software that adapts the difficulty of the calculations to the number of active miners: the more miners there are, the more complex the calculations become and the safer the blockchain is.
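The linking and mining described above can be sketched in a few lines of Python: each block stores the hash of its predecessor, and "mining" means searching for a nonce whose hash meets a difficulty target. This is an illustrative toy, not Bitcoin's actual block format:

```python
import hashlib

DIFFICULTY = 2  # leading zero hex digits required (toy value; Bitcoin's is far stricter)

def block_hash(prev_hash, transactions, nonce):
    # The fingerprint covers the previous block's hash, so altering any
    # past block invalidates every block that follows it.
    payload = f"{prev_hash}|{transactions}|{nonce}".encode()
    return hashlib.sha256(payload).hexdigest()

def mine(prev_hash, transactions):
    # Proof of work: try nonces until the hash meets the difficulty target.
    nonce = 0
    while True:
        h = block_hash(prev_hash, transactions, nonce)
        if h.startswith("0" * DIFFICULTY):
            return {"prev": prev_hash, "tx": transactions, "nonce": nonce, "hash": h}
        nonce += 1

# A tiny chain: each block commits to the one before it.
genesis = mine("0" * 64, "genesis")
block1 = mine(genesis["hash"], "alice pays bob 1 BTC")

# Tampering with the genesis transactions breaks the link to block1.
assert block_hash(genesis["prev"], genesis["tx"], genesis["nonce"]) == block1["prev"]
assert block_hash(genesis["prev"], "alice pays herself", genesis["nonce"]) != block1["prev"]
```

Raising DIFFICULTY mimics the network's response to more mining power joining: blocks become harder to forge and the chain becomes safer.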

The future of the blockchain beyond Bitcoin

The blockchain has many advantages. First of all, it reduces bank transaction costs and can even eliminate banks as trusted third parties. Beyond digital payments, this technology can be used to transfer other assets, such as securities, bonds, shares, voting rights…

Furthermore, the transparency of the system and its decentralised architecture give it application potential beyond the financial sphere. Since the blockchain is a register, it can be used to establish traceability for all kinds of products and services. It can also be used to enforce smart contracts: programs that automatically execute the terms of a contract.


How OBD Scanners Work in Cars

obd scan tool

If you own a car made from 1996 onward, you might have noticed that the vehicle has some standardized computer systems. These systems monitor emissions and electrical sensors within your car as you drive. This has been a major innovation in the automobile industry over the last two decades!

However, to retrieve the recorded information you will require an On-Board Diagnostics (OBD) tool. OBD scanners can detect problems in your car before you even notice them, so they can be essential. In this article, we focus on how these OBD scanners work and why they matter for your vehicle.

How the Scanners Work

If your car is not working correctly, usually a dashboard warning or the Malfunction Indicator Light (MIL) will illuminate to indicate that something is not right. However, these signals only tell you that something is wrong; they do not say what exactly is wrong with your car. That is where OBD scanners come in.
By plugging the scanner into your car and reading the stored code, you get detailed information on what is wrong. Given how involved today's cars are, in both their electrical and mechanical systems, it can be hard to pinpoint a problem without an OBD scanner.

What are the different types of OBD Scanners?

Two types of OBD scanners are available: "code readers" and "scan tools". Code readers can read and clear basic codes in your car, while scan tools can perform more advanced functions.

It is advisable to get a scan-tool OBD scanner, because with it you can view both real-time and recorded data. What's more, advanced scanners can provide advanced troubleshooting information. Even though most people use OBD scanners to test for emissions, the devices have more uses, including measuring different aspects of your car's performance.

Using a vehicle scanner

The ability of these devices to provide accurate information, however, depends on how well you use them. For instance, once you get a code, you must interpret it correctly. If you are not careful, you might mistake a P0303 code for a P0455: the first means that you have a misfiring cylinder, while the second means that you may not have screwed the gas cap on correctly.
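Under the OBD-II standard, each diagnostic trouble code (DTC) travels as two raw bytes, and the scanner decodes them mechanically: the top two bits of the first byte select the system letter (P, C, B or U) and the remaining bits spell out the digits. Here is a sketch of that decoding, which shows why P0303 and P0455 cannot be confused once the bytes are read correctly:

```python
def decode_dtc(byte1, byte2):
    # Decode a two-byte OBD-II diagnostic trouble code (per SAE J1979/J2012).
    # byte1 bits 7-6: system (00=Powertrain, 01=Chassis, 10=Body, 11=Network);
    # bits 5-4: first digit; bits 3-0 plus byte2: the last three hex digits.
    system = "PCBU"[(byte1 >> 6) & 0b11]
    digit1 = (byte1 >> 4) & 0b11
    digit2 = byte1 & 0b1111
    return f"{system}{digit1}{digit2:X}{byte2:02X}"

# The two codes from the example above, as they arrive on the wire:
print(decode_dtc(0x03, 0x03))  # P0303 - misfire, cylinder 3
print(decode_dtc(0x04, 0x55))  # P0455 - EVAP large leak (often a loose gas cap)
```

The byte layout is standardized, so any compliant scanner decodes the same bytes to the same code; what varies between tools is how much interpretation they offer on top.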

Why Having a car engine scanner is Important

Regardless of how you want to use your scanner, it is essential to keep one in your car. As mentioned, the scanner can be critical for testing your car's performance and troubleshooting possible problems. Moreover, the scanners are especially useful given that cars manufactured in this era are far more complicated.


Cars manufactured from 1996 onward have computerized systems that make the use of OBD scanners possible. The scanners help to measure the car's performance, troubleshoot problems and provide you with detailed results. However, you should be careful when interpreting the codes, or you will end up with the wrong information. An OBD scanner is something you should have in your car at all times, because you can never tell when disaster will strike.