Search results:
No results: try another search
Tech trends
The latest news on emerging technology—AI, blockchain, VR, big data and more.

Exploring Azure Spring Clou...

Introduction Cloud computing is not something new and upcoming. It has been around since 2006 when big companies like Google began using the term to describe the paradigm shift of services and applications hosted on servers on the cloud. It allows organisations to scale on-demand, to leverage on virtualisation, and to reduce costs on maintenance and rentals for server rooms. But more than a decade since then, we still have not yet been able to fully utilise the total capacity of this amazing technology. One of the major impediments behind this is that Cloud setup can require some level of expertise. The reason behind that is the layer of complexity before you get it all working smoothly. There are options and parameters to work with, to get them in sync in order to give you an optimal experience. And THIS is what often makes people warier to adopt this revolutionizing technology. No one wants to put their money on something which they might lack the expertise to deal with fully. There is always some amount of technical risk to, therefore, be considered. And this is where, in my opinion, Azure Spring Cloud steps in. It seems to offer a one-stop solution, a sort of a pre-compiled packaging of settings even, that can get you started quickly. It helps consolidate all that you need to do, in the setup of your application. Also, depending on the demand of your product, it gives you the freedom to make modifications to those settings as necessary. As they say, with great power comes greater responsibility, and if you don’t feel up to the task to take that responsibility just yet, well I guess Azure cloud gets you started in the right direction at least. What makes things complicated So what is Spring Cloud? It is a popular framework for Java microservices. It provides developers the tools to quickly set up complex pre-defined patterns for distributed systems. This way, one can quickly start up services and applications, while implementing those patterns eg. Service Discovery, Routing, Service to Service calls, to name a few. Therefore this works great in typical use cases of the current day business solutions. Sounds great actually. But the reality is, once you go to the documentation page for Spring Cloud, you will notice that in a bid to offer a lot of different services, there are a lot of things to choose from. A lot of things to read through. Depending on how long you have been working with a certain technology, you may or may not feel comfortable going through this very detailed and technical documentation. It is definitely a well-documented plug and play of services, but the complications associated have not been fully abstracted. Also sometimes you are only tackling a simple problem instead of a complex one. In these cases, the effort required to get onboarded with Spring Cloud might not break even with the immediate benefits. Thus, this might often-times leave us with the following problems: 1. A complicated application lifecycle 2. A challenge to manage the Cloud infrastructure 3. And troublesome troubleshooting of issues And this is where Azure Spring Cloud steps in. A joint collaboration of VMware and Microsoft, this Cloud-as-a-Service platform helps to make our lives easier. Value proposition of Azure Spring Cloud Well, it is not news for anyone that Spring Framework and Java are still heavily used in the tech industry. If only, the usage has been growing. 
Frameworks like Spring Cloud, Spring MVC, Spring Boot are super helpful in setting up your application with a lot of ease and quickly. But this process gets even simpler with the introduction of Azure Spring Cloud. Some of the direct advantages of Azure Spring Cloud are: 1. Accelerates development — There are easy one-click methods to switch between different services or different images of the same services. This tends to be especially useful when you are performing parallel development on complex systems, which have shared microservices. For those of us who have been there, regression testing or maintaining multiple versions could be a big challenge in such complex environments. Hence one could leverage on Azure Spring Cloud to even plug and play with different versions of the same services. Also the API Gateway features greatly simplifies the communication between services, making the setup dynamic. Thus it takes away some of the complexities that arise with Microservices, leaving you to reap the benefits of the awesome design pattern. 2. Reduces Deployment Risk — Azure Spring Cloud provides you with a UI tool to switch around the images that you are working with, with just the click of a button. It also greatly assists the Blue-Green Deployment strategy, which can be used a contingency management strategy, of having two easily switchable environments, one production and the other contingency. This in return greatly reduces the downtime as well. Azure Spring Cloud makes it really simple to implement this, and I am sure all of us are happy to offload some of the complications to deal with at the time of deployments. Stability in deployment, definitely in turn paves the way for enabling Continuous Deployment. 3. Simplified Management of Configuration — One of the most exciting things that I feel about Azure Spring Cloud is the Spring Cloud Config Server. It is similar to the configuration files that you would need to generally maintain as part of your Spring Boot application or service. And also a similar way of referencing in the code as well. But the differentiating factor being that it can be maintained online, as part of the Azure Spring Dashboard. It allows for a centralized location for all configurations instead of scattering them across different server boxes for each microservice. This, in turn, allows for an easier rollback or change when needed. Also, in certain ways, it improves the level of security, where you can move sensitive parameters of your application from code to this centralised configuration location. It allows for an externalized configuration and is particularly useful in a distributed system. So one could simply link the configuration file using your code repository url, along with appropriate authentication to access it. 4. Integrated Logging and Reporting — Usually when we are working with distributed services, logging and reporting both tend to be critical features. Because of the possibility of multiple failure points. One of the features which Azure Spring Cloud offers is the ability to view service logs and system reports in the same dashboard on dedicated windows. This allows us to capture matrices about your application and further optimise performance. 5. Ease of use — Well this is a no-brainer, provided we have already gone through the advantages mentioned above. But just to mention once more, that even features like provisioning or scaling of your app, tend to be extremely simplified with the use of the simple UI dashboard to update these. 
It might not be a unique feature here, but it is worth a mention that this is something possible with Azure Spring Cloud as well. One is able to even control things like which Java version to use, from a drop-down list, which in my opinion could be pretty handy. Especially at the time of regression testing during version upgrade activities. Therefore, as clearly enlisted above, the primary value proposition I find for Azure Spring Cloud, is that it greatly enhances one’s ability to make the most out of Cloud and distributed systems, making the experience as smooth and streamlined as possible. It helps to take care of all the plumbing effort which you might have had to perform, in order to have an integrated streamlined solution for your use-case. Final word All in all, I think it is an amazing step ahead by Microsoft and VMware, towards streamlining the app management process and integrating all of the different modules into this single product. This, therefore, has great potential, especially when working with complex systems. If you already have some components of your application with Azure and Microsoft, this might be a great addition to explore. Because this could become the single point of control, and hence greatly reduce the complexity, by getting everything under a single umbrella. Currently, it is only available for Java Spring, but there are plans for upcoming support for the other technologies in the near future, and C# would be one of the next candidates in the list. Therefore this is certainly an avenue to look out for if being technology-agnostic is one of your concerns. All in all sounds like an exciting thing to get your hands in. And if you haven’t yet delved into cloud computing yet, maybe it’s time to check it out.

Azure DevOps: Everyt...

The Internet has transformed the world and its industries, from shopping and entertainment to banking and finance. One of the most significant developments is in software. Software no longer merely supports a business; rather it has become an integral component. Indeed, companies are now interacting with their customers through software delivered as online services or applications and on all sorts of devices. They are also leveraging software to increase operational efficiencies by improving every part of the value chain, such as logistics, communications, and operations. Today, because of how critical it has become, companies have changed how they build and deliver software. In engineering practice, we follow a reliable, secure, and maintainable solution for our software systems. These traits make the software system robust and deliver value to the customers. Back in the day, operation modules were handled by an SRE or maintenance team who would carry out 24/7 monitoring, operation, and deployment. These processes were labor-intensive, cost-inefficient, error-prone, and not adhering to the latest software methodology such as lean processes. Skipping ahead to the present, modern businesses are now leveraging on the latest tool and technologies for these operation modules. This process is gradually being achieved by DevOps tools. Why DevOps Matter? With a DevOps tool, some of these practices are now automated and require less manual intervention. Delegating these tools to perform automation tasks and system testing allows us to focus on other aspects of the software domain, and produce a robust and less error-prone software system. As quoted from Gartner: “DevOps represents a change in IT culture, focusing on rapid IT service delivery through the adoption of agile, lean practices in the context of a system-oriented approach. DevOps emphasizes people (and culture), and seeks to improve collaboration between operations and development teams. DevOps implementations utilize technology — especially automation tools that can leverage an increasingly programmable and dynamic infrastructure from a life cycle perspective.” DevOps is essentially a culture for seamlessly integrating the process of software development with IT operations. More importantly, the meaning of DevOps has broadened to become an umbrella term for the processes, culture, and mindset used to shorten the software development life cycle, using fast feedback loops to deliver features, fixes, and updates more frequently. There are many tools in the market that support the DevOps cycle. Many tech giants have so much to offer in the realm of DevOps like Microsoft, Google, Amazon, and IBM, so how can we pick the best one for our teams? Much like software in general, there are multiple variables. But after discovering the wide variety of solutions offered by Microsoft Azure, I believe they have a competitive edge. In this article, I will delve into its various services and benefits, as well as explain why Microsoft Azure is my ideal choice for orchestrating a DevOps toolchain. What is Azure DevOps? Azure DevOps is a Software as a service (SaaS) all-in-one platform from Microsoft that provides an end-to-end DevOps toolchain for developing and deploying software and integrates with most of the leading tools in the market. It provides features to build web, mobile, or desktop applications and deploy to any cloud platform or on-premises. 
As an Application Lifecycle Management (ALM) system, it helps the entire project team with: - Capturing requirements - Planning agile and traditional projects - Managing work items - Version control - Building, deploying, delivering, and testing Despite being launched in October 2018, Azure DevOps is not the new kid on the DevOps block. Azure DevOps is the evolution of Visual Studio Team System (VSTS) which was launched back in 2006. This is a mature product with a rich feature-set that has over 80,000 internal users at Microsoft. Furthermore, in the open-source community, people use Visual Studio as a development environment that integrates seamlessly with Azure DevOps. The Five Key Services Azure Board A tool for Agile planning, portfolio management, and processes, Azure Board provides the entire team with more visibility of project progress. It helps to: Integrate the business requirement from epics to delivery of business goals Plan, manage, and track work across your entire team Track sprint backlog, user stories, Kanban board, and related work items Increase team collaboration by facilitating discussions Create dashboards and track the status and trends of accomplished work Set and send instant notification alerts when an issue is created, change or modified Azure Repo Azure Repo is a cloud-hosted private Git repository service and serves as a tool for version control of codebases like GitHub and GitLab. It has features to: Create branches and pull requests See commit history, tags, and discussion Support any Git client Integrate webhooks and API Automate with built-in CI/CD Azure Pipeline Azure Pipelines is a cloud service deployment system that connects to any Git repository, automatically-built set, and test code project, making it available to other users. It combines continuous integration (CI) and continuous delivery (CD) to constantly and consistently test and build your code, and ship it to any target which could be Azure cloud or any other cloud provider. You are also able to: Build on any platform or on any language code which increases deployment speed Leverage a wide range of community-generated builds, tests, and deployment tasks Link hundreds of extensions ranging from Slack to SonarCloud Azure Test Plans Azure Test Plans is a manual test management tool that provides solutions for test planning, execution, and capturing data about defects. It is equipped with: End-to-end traceability because tests and defects are automatically linked to the requirements and the build being tested, which also helps you track the quality of the requirements. Three main types of test management artifacts: test plans, test suites, and test cases A browser-based test management solution for exploratory, planned manual, and user acceptance testing Azure Artifacts Azure Artifacts is a fully integrated package management service for continuous integration/continuous delivery (CI/CD) pipelines, where a project can create and share Maven, npm, NuGet, and Python package feeds from public and private sources with teams of any size. It is also a hosting facility for universal packages that seamlessly integrate with the CI/CD pipeline. Extensible, Flexible & Distributed Each of these Azure DevOps services is open and extensible and can be used with all varieties of applications, regardless of the framework, platform, or cloud. 
Built-in cloud-hosted agents are provided for Windows, Mac OS, and Linux, and workflows are enabled for native container support and Kubernetes deployment options, virtual machines, and serverless environments. Moreover, you have the flexibility of adding customized agents depending on the project. You can also take advantage of an integrated suite that provides end-to-end DevOps functionalities. Since they are broken up into separate components, Azure DevOps gives users the flexibility to choose the specific services they want to employ without the need for the full suite. For example, Kubernetes has a standard interface and runs the same way on all cloud providers. Azure Pipelines can be used for deploying to Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), or clusters from any other cloud provider without requiring the other Azure DevOps components. Developers and teams using Azure DevOps are able to work securely from anywhere, in any format, and truly embrace open-source technology. It also addresses the vendor lock-in problem from its earlier version by providing extensive integration with industry and community tools. Furthermore, with the many integrations available, users can log in using SSO tools like Azure AD or communicate with their team via Slack integration while accessing both cloud and on-premises resources such as Atom, CPython, Visual Studio Code, and TypeScript. How Azure DevOps Can Benefit Your Team Planning — Team members can easily manage work with full visibility across products and projects. You can also define, track, and layout work with Kanban boards, backlogs, custom dashboards, and reporting capabilities using Azure Boards. This helps to keep development efforts transparent and on schedule. Developing — Team members can share code and collaborate more effectively with Visual Studio Code. With Azure Pipelines, you can create automatic workflows for automated testing and continuous integration in the cloud. Delivery — Team members can deploy applications to any Azure service automatically and with full control. You can define and spin up multiple cloud environments with Azure Resource Manager or HashiCorp Terraform, and then implement continuous delivery pipelines into these environments using Azure Pipelines or tools such as Jenkins and Spinnaker. Operations — Team members can implement full-stack monitoring, get actionable alerts, and gain insights from logs and telemetry using Azure Monitor. Azure DevOps is the Future The demand for cloud services has undoubtedly skyrocketed with more and more businesses seeking to innovate and increase agility. Amazon was the first company that provided cloud services, followed by Google and Microsoft. While competition is fierce at the top between these three giants, I have observed that the market share for Microsoft is gaining traction, with adoption rates among startup and SME companies slowly increasing. Microsoft has interesting startup services to offer like Azure credits, software tools (Office 365, BizSpark), consultation sessions, and marketing support that makes it appealing for companies to build their innovative software solution and infrastructure. An uptick of hybrid cloud trends will see companies adopt multi-cloud providers for different segments of the business, and Microsoft Azure will predictively be in a better position as it provides greater support. 
Aside from the aforementioned offerings, Microsoft runs three startup solutions: Reactors, ScaleUp, and Ventures. Reactors are locations in major cities where learning, networking, and resource sharing take place. ScaleUp, previously known as Microsoft Accelerator, provides sales, marketing, and technical support for enterprise-ready companies as they market and sell their solution. Ventures is a Microsoft-led funding program for software companies looking for Series A, B, or C funding. While the tiered system reinforces the need for an incubator, accelerator, or venture capitalist backing, the overall program is another step towards helping companies drive innovation. Plan Smarter, Collaborate Better, Ship Faster DevOps brings together people, processes, and technology to provide continuous delivery with high software quality. In the arena of DevOps, Azure DevOps has everything you need. It supports the Agile methodology and has a huge marketplace of tools to support all programming languages and other available products in the market. Most importantly, it integrates seamlessly with other available cloud providers. To simplify and accelerate the continuous delivery of high quality and reliable products, I would highly recommend using Azure DevOps for new projects to improve your planning, development, delivery, and operations to meet your business needs. Happy coding!! If you would like to share your experience of using Azure DevOps with me, I would be glad to have a discussion. In my next article, I will show how we can deploy our frontend and backend coding using Azure DevOps. So stay connected and see you next time.

Blockchain-based ass...

Groundbreaking technologies that are used in the banking industry were oftentimes developed by institutional actors. The arpanet, ancestor of the internet was developed by the US Department of Defense, SHA256 and other cryptographic standards were developed by the NSA, etc. Alternatively, the first blockchain based cryptocurrencies and other decentralised technologies arose from open source communities. Without a doubt, this technology could benefit the banking industry. Indeed, blockchain technologies offer high transparency and high accountability—just what the banking system as a whole is aiming for. So now the question arises. How can these two worlds come together? The genesis of the Bitcoin ideology During the early days of blockchain and cryptocurrency, the community was a mishmash of cypherpunks, grey hats, anarcho capitalists, libertarians and various flavours of ideologies. This group rallied around the first decentralised and electronic version of cash, emerging right after the 2008 financial crisis. People lost their jobs, their houses, their cars, etc. If you’re reading this, you were likely affected in some way too. After such a catastrophe, the world as a whole was looking for answers. The straightforward conclusion that came out of this “people's court” is that we should upend the banking system. This idea rubbed off on a bunch of grey hats and cypherpunks, and Bitcoin was born! At this point, the majority of the blockchain community believed that if the monetary policy was ruled by a self governed, decentralised, code is law protocol (such as Bitcoin) then no more financial crises would be a thing of the past. A more mature approach In retrospect, the banking system itself was both the illness and the cure. While it was responsible for predatory lending, incorrect pricing of risks and many other charges, unconventional monetary policies such as quantitative easing played a huge part in reducing the impact of the crisis. So who’s to blame? Well, as you might expect, several actors carried part of the responsibility, and it would require much more than a single article to cover why this crisis took place. But, if we needed to sum it up concisely : Lack of regulation Lack of transparency Lack of forecasting And what of our current crisis? Currently, the world is facing a major and unprecedented sanitary crisis. Without a doubt, this crisis is strongly impacting our economy. The FED already plans to inject $1.5 trillion to ease things, while France’s president Emmanuel Macron announced a €45 billions emergency plan, with most countries seeming to follow this trend. My question on this subject is how would a self-governed, code is law protocol face such eventualities? Summing up There are two points here to really reiterate: The banking system as a whole is not responsible for the 2008 financial crisis, lack of regulation opacity and bad governance is. We need a supreme authority to regulate the economy in cases where forecasting is pointless, such as epidemics or natural disasters. Once you are able to draw those conclusions (and I say this as a cryptocurrency enthusiast) you might make a U-turn on your anti-system beliefs. The primary goal of decentralised currency was to get rid of financial institutions, and if you read between the lines, it was to avoid financial crises. At some point, “getting rid of financial institutions” and “avoiding financial crises” got mixed up, leading us into a pointless “Bitcoin vs Banks” war. 
But we’re missing the main point—steering clear of financial crises altogether! It doesn’t seem as though a decentralised and self-governed protocol could handle all the eventualities we’re facing in the real world. On the contrary, a fully centralised and opaque banking system will surely lead us to a new crisis down the road. My point of view is that the solution for a stable economy lies somewhere between those extremes and blockchain technology is a good candidate for striking this balance. When two worlds collide We now have in our hands a technology that allows high transparency, high auditability and high accountability. Part of our role as the blockchain community is to provide the banking system with this technology. The good news is La Banque de France, France’s central bank, has just launched a call for applications for central bank digital currency experimentations. The goal of the project is to explore interbank settlements through the use of a digital currency. This is an unprecedented opportunity for the crypto community to use tech as a force for good. The opacity of current systems have led us to financial crises, but blockchain technologies could improve this.  Through the necessary encryption and control mechanisms, blockchain safeguards transparency by storing information in such a way that it  cannot be altered without recording the changes made. Thanks to the ability of the technology to prove to third parties—in a cryptographic way—that data is immutable. It  has the potential to make  payments more transparent and systems more accountable. TL;DR Blockchain technologies were first developed as an anti-system tool. Instead of fighting the system as a whole, blockchain could change the way the systems function, for the greater good. I think it’s high time for the blockchain community to switch from a revolutionary ideology to a reformist one. Blockchain could benefit the banking system in an ethical way because we are all pursuing the same goal—economic stability.

Who’s Responsible Fo...

Modern society is moving towards an increasingly digital world at a rapid speed. Humanity will change more in the next 20 years than in the previous 300 years, and this exponential growth will create a cybersecurity workforce gap. This exposes us to all kinds of digital cyber risks. For organisations to remain secure, the urgency for more cybersecurity talent than what we currently have has become greater than ever. In this blog post, I will be delving into three key components of cyber security: Cyber risk assessment and talent gap The traits of security professionals Roles and responsibilities of the CISO 1. Cyber Risk Assessment and Talent Gap According to McAfee, Cyber-attack is currently the 4th most significant risk to human life in the world. This is even more alarming considering that the top 3 are natural phenomena. Furthermore, there are eight new threat samples found every second. Can you imagine the huge impact they would have on our digital world? At the moment, most job opportunities lie in the area of security operations. It is a high percentage that needs to be filled, especially when Asia-Pacific’s cybersecurity workforce gap is predicted to hit 2.14m. At C-level, there is still a disconcerting apathy towards cybersecurity. 2. The Traits of Security Professionals Believe it or not, being a gamer or gaming as a hobby actually helps you become a successful cybersecurity professional. Research has found that you have an 78% chance of becoming a successful cybersecurity professional. Here are some desirable traits: Be (think like) a gamer Be a nice human being (you might be a “superstar” but that doesn’t mean you need to act like one) Inquisitive Logical and Analytical Creative Fast learner Passionate 3. Roles and Responsibilities of the CISO A Chief Information Security Officer (CISO) is an emerging role in IT. The CISO is the senior-level executive within an organisation responsible for establishing and maintaining the enterprise vision, strategy, and program to ensure information assets and technologies are adequately protected. A typical CISO should hold non-technical certifications (like CISSP & CISM), although a CISO coming from technical roles might have expanded his or her skillset in C-Level areas. In addition, experience or specialised training in other areas can also help the CISO, such as Project Management to manage the Information Security Program, Financial (e.g. holding an accredited MBA) to manage InfoSec budgets, and Soft-Skills to direct heterogeneous teams of Information Security Managers, Directors of Information Security, Security Analysts, Security Engineers, and Technology Risk Managers roles in major corporations and organisations. The CISO acts as a bridge between C-Level and the technical team. They must develop the competency to confidently persuade and explain technical risks and its impact on business. For instance: “This risk (i.e cross-site scripting) will potentially result in a loss of 60 million dollars for the organisation.” It is crucial as a CISO to not only have an understanding of business, but also the ability to influence board members in order to solve company-level problems and address cybersecurity risks. With careful planning and design that aligns with business impact, technology can help you solve a problem. Any ambitious project without a vision tends to fail. Since 2017, there has been a steady increase in financial losses due to cyber attacks. 
In 2018 alone, the FBI IC3 received a total of 351,936 complaintswith losses exceeding $2.7 Billion. This is alarming and truly a cause for concern, and reinforces the need for all businesses to take cyber risks very seriously. Typically, the CISO’s influence should be felt across the entire organisation. Responsibilities may include, but not limited to: Computer emergency response team/computer security incident response team Cybersecurity Disaster recovery and business continuity management Identity and access management Information privacy Information regulatory compliance ( Europe GDPR etc) Information risk management Information security and information assurance Information security operations center (ISOC) Information technology controls for financial and other systems IT investigations, digital forensics, eDiscovery Apart from their responsibilities, some essential qualities that a CISO should possess includes: The art of storytelling Craft engaging stories instead of using technical jargon, which is often difficult to understand by business people. Navigating the CXOS You need to have as many advocates as possible to sustain your influence, especially at C-Level. Understanding the environment Knowing your environment is important so that you can achieve your goals. Personal accountability With great power comes great responsibility. You must take ownership of everything that happens in the organisation regarding security. Dealing with intelligence Instead of waiting for things to happen, you should drive preventive action by finding all types of security issues before anyone else. Red Team & Blue Team approach can help with this. The cybersecurity landscape is undergoing dramatic shifts where you will be challenged by the greatest minds (hackers). However, you will have the opportunity to make the digital world more secure, and that itself is a journey worth taking.

Robotic Process Auto...

“Automation is key for digital transformation for any organizations.” – Deven Samant, Head of Enterprise Data and Cloud Practice at Infostretch Robotic Process Automation (RPA) is an emerging, tech-oriented process that configures and instructs a computer software “robot” to mimic human user interaction over information systems. RPA software robots can execute multiple tasks within multiple workflows, trigger responses, and communicate with other systems to perform a vast variety of repetitive and rule-based processes faster, and with fewer errors than humans. The goal of this kind of automation is to diminish human intervention, allowing people to stay focused on meaningful things, tasks that require reasoning, creativity, and (most importantly) enabling human interaction.  “RPA is more about people and culture”. – Lachlan Colquhoun, Curtin University Benefits of RPA Frees up employee time Reduces risk and ensures compliance Acts as middleware to core systems Reduces costs Eliminates human error through consistency Agile and easily scalable There are three types of RPA software robots... Task Bots A task bot is the core of RPA, and the most widely-used bot in enterprise environments. It is instructed by repeating actions. The advantage of this bot is that it can be easily implemented. One of the cons of this kind of automation is that if some application within the automated environment changes or updates, the whole bot must be reconfigured.  You should implement the bot to automate the following process:  Data migration and entry Data validation Extracting data from PDFs, scanned documents and other formats Meta Bots We can understand meta bots as the building blocks for scaling  into larger and more complex automation initiatives, primarily because of their app-resilient nature.  If some application within the automated environment changes or updates, a meta bot needs only minimal edits adapt. This change is automatically applied to any other process or entity (like a task bot) using that meta bot.  Cognitive Bots In terms of automation, a cognitive bot is the combination of RPA and an intelligent, self-learning system. They are the brains in complex automation strategies. For example, cognitive bots are able to learn from people how to understand and process structured or unstructured, unclear data.   This concept is known as Cognitive Automation, and its goal is to mimic human behavior. The ins and outs of RPA tools Organised according to their functionality there are three main categories of RPA tools: Robotic desktop automation (attended automation) Unattended automation (background processes) Hybrid automation Organised according to their usability there are four main categories: Coding Low coding Drag & drop (no coding) Macro recording (no coding) A basic exercise in RPA ☝️For this exercise you will need some basic technical knowledge in order to  create an online account, download and install the tools that we need on a Windows OS Machine. You will also need a  small PDF file with few lines of text. We will create a very simple task bot using an RPA tool that comes with a Community Edition: UIPath. You can register and download  UIPath Community Edition for this exercise here. First, download and execute: UiPathStudioSetup. The first time UIPath executes, the application will ask you to activate the product. Just click on “Activate Community Edition”to contnue. This will open your default internet browser and  a confirmation message should appear. Ok, we’re ready to start! 
With the help of the following steps you will learn how to instruct and run your bot to extract data from a PDF file and save it into a text file, or show it as a message on screen. Step 1 − Open UiPath Studio and start a new blank project. Step 2 − Copy the PDF file from which you want to extract information in the new project folder. The path should be something like ‘C:\Users\<UserName>\Documents\UiPath.’ We will use a file called readme.pdf  for this exercise. Step 3 − Drag and drop a ‘sequence’ in the ‘designer panel.’ Step 4 − Drag and  drop ‘read PDF text’ activity intothe sequence in the ‘designer panel.’   If the activity is not present it can be located in the ‘activity panel.’ If the activity is not present, you can add it from the link that automatically appears below the search box: When the ‘manage packages’ window appears,  select ‘official,’ from the centre panel choose ‘UiPath.PDF.Activities’ and  then install and save your changes. Then accept the license. Step 5 − Back to the ‘sequence’ that we just added, we need to provide the origin path for the PDF file. Click on the ‘browse’ button, this action will open the location of your current project where you saved the PDF file.  Select the file and the path will be shown in ‘read PDF text.’ Step 6 − In the properties of the ‘read PDF text’ activity, provide the name of output data table as variable. To do this, right click in the data table text box and then click ‘create variable.’ We also need to provide a range of pages to be read. By default the value is set as ‘all,’ but you can give it a pages range or a specific page number. Step 7 − Now, we should write the output in the text file. We need to drag and drop ‘write text file’ activity under ‘read PDF text’ activity. Step 8 − Provide the name of the output file in double quotes. In this case, we write “output.txt”.  Also, in the properties of ‘write text file’ activity provide the name of ‘output text’ as variable (as done in step 6). It must be the same provided in ‘read PDF text’ activities output, i.e. readtextoutput. Step 9 − After getting the output, we will instruct UiPath to display it.  Search and then drag and drop a ‘message box’ under ‘write text file.’ In the text box of ‘message box’ write: readtextoutput. Finally, the project should look like this: Step 11 − Run the project! The bot will extract the PDF information and create a text file named output.txt, which will show the output in a message box. From this point, we can take the information from the PDF file and do what we need with it. The Future of RPA RPA is growing fast and well beyond business processes. Every time it’s implemented and scaled new initiatives are created, notably in the field of cognitive automation.  On an even larger scale, in the coming years we’ll see greater incorporation of AI in advanced decision making, inferencing and prediction. Meanwhile, we're also seeing some interesting initiatives in RPA-oriented test automation and DevOps. So, give RPA tools a try! There are plenty of applications for them out there, many of which can improve both your personal, and organisational processes.

Understand Ethereum ...

In this tutorial, we are going to demystify blockchain technologies and the concept of cryptocurrency by letting you create your own token. Firs off, it's important to distinguish “coins” from “tokens”—a coin is the native currency of a particular blockchain (e.g. Ether is Ethereum’s coin) while tokens are tradable assets living on a host blockchain. Here is a list of tokens by market capitalisation. Tokens don’t have their own blockchain, instead they are emitted and managed by a smart contract on a host blockchain. We will unpack that point later. You don't need any particular skills or knowledge to follow this tutorial, it’s equally useful for novice users, but before we start, we need to define three things: Decentralised: anything that doesn’t rely on a single entity, but rather on a distributed network (see the image below). In the cryptocurrency world, entities are connected to nodes, and nodes are connected between them. Ethereum: You can imagine it as a single “decentralised computer connected and running on the Internet”. Remember the old days when you had to pay to use a computer in a cyber café? Well, it’s kind of the same thing, you can pay (with the Ether cryptocurrency) to use this global computer, which is composed of all the interconnected nodes. Smart contract: The Ethereum global computer is a bit different from the one you’re using to read this article. Meaning that standard programs and apps you run on your computers and smartphones cannot run on the Ethereum computer. Smart contracts are a kind of program or entities written in a specific language that can be executed on the Ethereum network. 1. Set up an Ethereum wallet First, you are going to need an ethereum wallet. In the cryptosphere, a wallet refers to a blockchain client. It allows you to interact with the blockchain and to store coins and tokens. The easy path to get your wallet is to install the browser extension Metamask. Go ahead and install the chrome extension. Once Metamask is installed, you'll be prompted to input an 8+ character password. At some point you'll be prompted with a 12-word pass phrase which lets you create your wallet and automatically add it to your Metamask account. Store it somewhere safe and offline. For a main network wallet, it’s common to write it down on a piece of paper and lock it somewhere safe. For the sake of simplicity and because we will be working with test coins, a simple .txt file works. To make certain you have written down the 12-word pass phrase you will be prompted to input it again. At this point your wallet is fully set up. You might be prompted to buy coin, but just click on ‘get access to my ether wallet’. 2. Get some coins In order to start deploying smart contracts, you will need some coins. Indeed, in order to process transactions on the Ethereum network, you have to pay a small transaction fee. This fee is paid in what is called "Gas", you can think of it as the computational power your transaction will cost on the decentralised Ethereum computer, and therefore the fee you will have to pay to use the network. We will be using the Rinkeby testnet. This network implements the exact same protocol as Ethereum but testcoins are free (thus worthless), it is used to test contracts before uploading them to the main network. You can imagine it as a free "sandbox" for developers. First you will need your Ethereum Rinkeby testnet address: Open Metamask, select Rinkeby testnet. 
Click on "Account 1" to copy your address to the clipboard, you'll need to paste it on the faucet to get testnet Ethereum Go here and follow the procedure, you’ll have to post your Ethereum address to a social network (Twitter, Facebook, Google+). Get back to Metamask and check that your coins have been sent (make sure the Rinkeby network is selected). This process shouldn’t take long. When completed, you'll see your fresh new Ether in Metamask. 3. Create and deploy your ERC20 token In order to start developing smart contracts, you will need a Solidity development environment. Solidity is the native Ethereum smart contract development language. If you are familiar with Javascript, Solidity will seem very familiar. There are plenty of ways to write, compile and deploy Solidity contracts, but the easy way (and the method recommended by the Ethereum community) is remix IDE. Quick tip: Don’t be intimidated by the last paragraph. You can continue this step without having any programming skills. You just need to know that an IDE (integrated development environment) is a tool that lets you write and execute code. Remix is a web based IDE, so you can access it through your browser or download the app. For the sake of simplicity, we will use the web app. Go ahead and access remix IDE. On the left side select ballot.sol and delete the code (if you prefer, like me, a darker interface you can choose the "Dark Theme" in the "Settings" tab). Now add the following code instead (You can read the code if you want to, but it’s not compulsory to understand it to continue): pragma solidity >=0.4.0 <0.6.0; /** * @title ERC20Basic * @dev Simpler version of ERC20 interface * @dev see https://github.com/ethereum/EIPs/issues/179 */ contract ERC20Basic { function totalSupply() public view returns (uint256); function balanceOf(address who) public view returns (uint256); function transfer(address to, uint256 value) public returns (bool); event Transfer(address indexed from, address indexed to, uint256 value); } contract BasicToken is ERC20Basic { mapping(address => uint256) balances; uint256 totalSupply_; /** * @dev total number of tokens in existence * @return totalSupply_ An uint256 representing the total amount of Tokens ever issued. */ function totalSupply() public view returns (uint256) { return totalSupply_; } /** * @dev Gets the balance of the specified address. * @param _owner The address to query the the balance of. * @return An uint256 representing the amount owned by the passed address. */ function balanceOf(address _owner) public view returns (uint256) { return balances[_owner]; } /** * @dev transfer token for a specified address * @param _to The address to transfer to. * @param _value The amount to be transferred. 
*/ function transfer(address _to, uint256 _value) public returns (bool) { require(_to != address(0)); require(_value <= balances[msg.sender]); balances[msg.sender] = balances[msg.sender] - _value; balances[_to] = balances[_to] + _value; emit Transfer(msg.sender, _to, _value); return true; } } /** * @title Jug Token * */ contract PaloToken is BasicToken { string public constant name = "PaloToken"; string public constant symbol = "PITT"; uint8 public constant decimals = 8; uint256 public ethRaised_ = 0; function buyToken() public payable { uint tokens = msg.value * 1000000; balances[msg.sender] = balances[msg.sender] + tokens; ethRaised_ = ethRaised_ + msg.value; totalSupply_ = totalSupply_ + tokens; } constructor() public{ totalSupply_ = 500000000000; balances[msg.sender] = totalSupply_; emit Transfer(address(0), msg.sender, totalSupply_); } } ERC20 is a smart contract standard to implement tokens on Ethereum. If you want more details, check the official wiki page about ERC20 tokens. At line 56, inside the "contract PaloToken is BasicToken", you can change the token name from "PaloToken" and symbol "PITT" to whatever you want. The symbol is your token symbol—a "shorter" name, like a ticker symbol as used on the stock market (e.g. BTC for Bitcoin, FB for Facebook). Now we need to run the code so we can create a smart contract, then we have to deploy it to the test network: Step 1: Select the right compiler. The compiler will “translate” the code into a language computers can understand (only composed of “0” and “1”). On the upper right side of your ‘Remix’ window, click on "Select new compiler version", then choose the "0.5.2+commit1df8f40c" as shown in red in the following screenshot: Step 2: Select the ‘Run’ tab on the upper right side (shown in green). Step 3: Select ‘PaloToken’ on the right side panel instead of ‘BasicToken’ (see the orange arrow in the next screenshot). If you only choose ‘BasicToken’, you will have a non-standard ERC20 token, and you'll have very few functionalities: Step 4 : Click on ‘Deploy’ You should now have a Metamask popup window open. If not you should see a notification on the Metamask icon in your browser (see the red circle). Confirm your transaction by clicking on the ‘confirm’ button: You are now able to see your deployed contract: Expand your contract, and click on the ‘copy’ button shown in green to copy your contract address. Now go to https://rinkeby.etherscan.io/ and paste your contract address in the search form in the upper right: If everything worked fine you'll see something like this: You can now click on your token name next to "Token Tracker". You'll get information like the total supply, the number of token holders, the full transactions record, etc... Open Metamask again. click on the menu on the upper left, then scroll down and choose ‘Add Token’: Then choose ‘Custom Token’ and paste your contract address—the same one that you had to copy in Etherscan to check your token creation process—in the ‘Token Contract Address’ field. It will automatically fill the other fields: Symbol Decimal represents the maximum number of decimals you can divide each coin by (like 0.0001 PTT) Now click ‘Next’ and ‘Add Token’. Wait a moment... Congratulations! You just created a brand new cryptocurrency! You now own 5,000 coins of your own token. 4. Give me some tokens! I deserve it as I helped you creating your first cryptocurrency 😁! Open Metamask again, and select your coin on the menu that appears by clicking the top-left button. Now hit the ‘Send’ button. 
Insert my address on the ‘Recipient address’ field: 0xE9E197Fc393FbBF56A403D2037eB5b9F5c38782e Pay attention to the ‘advanced option’ link on the bottom, we may use it if the transaction fails. Remember we are on a testnet, it is much less stable than the Ethereum Mainnet (the real production network). If everything worked, Metamask will prompt you, saying that your attempt was a success. First, let me thank you for your coins. Now, have a look at the public ledger: https://rinkeby.etherscan.io/address/0xe9e197fc393fbbf56a403d2037eb5b9f5c38782e#tokentxns If it failed, try again, but this time you'll have to hit the ‘Advanced options’ and increase the gas limit (let's say to 10,000,000). This is my ERC20 token wallet. It may be deleted in the near future as tests network often get wiped clean due to stability issues or other reasons. If this happens, follow this link and pick an address randomly from the ‘From’ field, and click on it: Then copy this address (click the button on the right): 5. Bonus: The first step in launching your ICO Go back to Remix and have a look at the right panel. In ‘Value’ input 100,000 wei (that's 0.0000000000001 Ether). Step 1 in red. Hit the ‘buyToken’ button (don't forget to expand your deployed contract interface by clicking on it if it hasn't been done before). Step 2 is shown in green. Accept your transaction (abbreviated as "Tx") in Metamask. Congratulations! You’ve now received some newly generated tokens in exchange them for Ether! This is exactly how things are playing out with ICOs. You send them Bitcoin, Ethereum, or another cryptocurrency and they give you back their own token. Right now you can only do it via Remix or some other simple solution. Your next step would be creating a website with a nice front-end interface so you can easily interact with your contract, and spread it on the Internet. After reading this tutorial, hopefully you’ve realised that creating a personalised cryptocurrency isn't so difficult. Someone with little programming skill can add many functionalities and even start to raise funds.

Blockchain for Enter...

Blockchain for Enterprise is a controversial topic. On one hand, you have the Blockchain “purists” who insist that Blockchain is a tool for decentralisation, and was meant to take power back from the “greedy corporates” and put it back into the hands of the masses. On the other hand, you have people who are strong believers that Blockchain can fix every problem that plagues the enterprise world. My opinion on the matter rests somewhere in the middle. While I do agree with the former when we’re talking about ventures like Bitcoin, I think most of these people seem to forget that Blockchain is so much more than cryptocurrencies. While decentralisation is a fundamental pillar in the cryptocurrency world, it is not applicable to several interesting use cases in the Enterprise world. On the other end of the spectrum, there are people who tout Blockchain as a silver bullet for pretty much everything in an Enterprise context. While I think Blockchain can be used in a large variety of situations, a general rule of thumb for most companies should be: If your problem statement does not include interactions between a large consortium of companies, don’t use blockchain. This is where the benefits of Blockchain are truly realised. Firstly, you reduce cost by getting rid of third parties that might be in the middle of you and other companies you deal with. It also makes for greater transparency, as all transactions are recorded and visible. Security is one of the major benefits too as data is shared across multiple parties, making it significantly harder for hackers to change anything. At PALO IT, we have received multiple requests from big enterprises that want to do POCs on Blockchain. While some of these do fall into the lets-use-blockchain-because-why-not category, there are others who actually could benefit a lot from Blockchain. I think some of the emerging themes that we see are centred around Education (for instance, a network of decentralised education providers that could track certifications of people on the Blockchain), Settlement Systems (between collaborating Banks or global Telco partners), Health (for instance, unified patient medical history across different clinics or hospitals). These use cases are just the tip of the iceberg, and the benefits that you could get in these industries through cheaper and easier sharing of information, are huge. A lot of clients who approach us are mainly just trying to explore Blockchain technology and trying to discover if/how it could help them with certain problems within the company. Unfortunately, exploration is not always very easy for a person who is jumping headfirst into the technology. There is a host of things that a person would need to know before they even reach a place where they could work on their original use case. This is why we decided at PALO IT to create Hash-It, a simple tool for people looking to easily get started with Blockchain. The vision is that from a UI, someone can deploy a new private Blockchain network with the click-of-a-button. Once the network is created, the tool will provide the user with ready-made application templates (smart contracts) that could tackle some of the common themes I mentioned above. Users will be able to easily customise or “code” the smart contracts through the UI and tweak them according to their own requirements. Once done, they can deploy the smart contracts onto the private network and interact with the smart contracts through the UI as well. 
All of this makes it very easy to visualise how a decentralised Blockchain application works. Once we had outlined our vision, we worked on the “How” of this project. The first step was to choose a Blockchain platform that would allow us to build smart contracts. After going through some options, we decided that we would leverage on Ethereum as it met our basic requirements. Ethereum provides out-of-the-box support for Proof-of-Authority private networks which we think are perfect for an Enterprise context. On the infrastructure layer, we worked closely with our friends at Microsoft and decided to use the Azure Blockchain Workbench. Workbench makes it ridiculously easy to create and manage an Ethereum network, while scaling up/down is also very easy. The Hash-It tool itself would be an AngularJS UI, served by a set of Java micro-services running on a Kubernetes cluster. To build the actual tool, we conducted an internal hackathon across all our offices globally over the course of one weekend. The event itself was fun and challenging! A large number of interested Palowans came down over the weekend to help build the tool and work on their areas of interest. Some engineers from our partner, Microsoft, were also present at the hackathon. The end result is something you can see in the demo video below! Overall, we believe this tool would be really valuable to a lot of enterprises keen on exploring the Blockchain world, as it can drastically reduce the time spent understanding Blockchain and developing MVPs. If you’re looking to create a Blockchain application, this tool can help you deploy your own private Ethereum network and help you write, deploy, and interact with your own smart contract in a matter of minutes! It makes experimentation a lot more fun as you don’t have to worry about the details of the underlying infrastructure setup or the deployment details, and can focus solely on achieving your business function. Do get in touch with us if this is something you’re interested in and we can arrange a demo for you!

Comment tirer profit...

De quoi parlons nous ? Nous appelons“logiciel cognitif”un logiciel qui mime les capacités cognitives d’un homme, c’est à dire qui résout un problème comme un homme le ferait. Des technologies variées (machine learning, deep learning, auto-apprentissage…) évoluent très rapidement et savoir en tirer parti rapidement est un réel accélérateur d’innovation et de compétitivité pour l’entreprise. Ce billet de blog propose une approche pour concevoir et exploiter au mieux des logiciels cognitifs. Le modèle SAMR aide à y voir plus clair Le modèle SAMR aide à concevoir le niveau d’impact d’une technologie (en l’occurrence,  l’intelligence artificielle) dans l’entreprise. Il définit quatre niveaux d’adoption d’une technologie sur un Produit ou Service (appelé “PoS” dans la suite de ce document) : Substitution: La technologie est appliquée comme substitut dans le PoS mais il ne change pas son fonctionnellement, Augmentation: La technologie est appliquée dans le PoS et il est amélioré fonctionnellement, Modification: La technologie permet de repenser la destination et les fonctions du PoS, Redéfinition: La technologie fait naître des nouveaux usages, manières de faire et impose une redéfinition totale du PoS et de son usage. Les néophytes s’engouffrent dans SAMR ! Il est souvent difficile pour une entreprise non entraînée d’utiliser l’IA pour concevoir ses usages métier. Le champ des applications est tellement vaste que l’entreprise a du mal à trouver par où commencer. La méthode des néophytes “pas trop pressés” est de se baser sur le cadre SAMR et de progresser sur des cas d’usage de l’IA, en partant de Substitutions pour aller progressivement vers la Redéfinition. Mais ce processus est peu efficace car les substitutions ne génèrent que peu ou pas de valeur : elles lassent donc rapidement le management qui s’intéresse au retour sur investissement. Cette approche est donc lente, coûteuse et finalement peu convaincante. Les “exigeants” font de l’IA Design Il est donc nécessaire de trouver les espaces d’opportunité apportant rapidement beaucoup de valeur quelque soit le niveau SAMR du cas d’usage métier (existant ou imaginé). Chez PALO IT, nous avons conçu une méthode de design qui permet : De naviguer dans le labyrinthe des possibles, De trouver ces opportunités Et de positionner la solution IA dans les processus, le système d’information et l’organisation.   Notre cadre de travail est représenté ci-dessous : En entrée de notre méthode de design : Expertise IA La connaissance approfondie des offres et technologies de l’IA, de leurs capacités et limitations permet d’être créatif. Expertise Métier Les experts métiers sont embarqués dans le processus de design où ils décrivent, imaginent et dévoilent des espaces d’usage et d’opportunité (fonctions, valeurs). Système d’information existant Les architectes du SI décrivent le système et ses capacités d’évolution pour incorporer du logiciel cognitif. En sortie de notre méthode de design, sont produits : Cas d’usage / Fonctions cognitives / SI Le cas d’usage proposé et les fonctions cognitives utilisées et leur positionnement dans le SI. Processus Organisation Transformation L’IA a un impact majeur sur l’organisation et les processus. Cela demande un programme de transformation associé. Cet impact est d’autant plus important que le cas d’usage est proche de la Redéfinition, au sens SAMR. 
The “AI Design” method
It uses the usual methodological tools: research, problem framing, prioritisation, collaborative workshops, prototypes/POCs and so on. However, the functional and interactive capabilities of AI technologies are systematically injected into the thinking at strategic points that maximise value and the identification of what is possible. The method is enriched from project to project, which makes it a particularly relevant tool.

In conclusion
Adopting AI in the enterprise is above all a matter of design method. A holistic view of AI’s impact is essential to find the high-value use cases and the associated keys to transformation. PALO IT has designed and battle-tested its AI Design method precisely to avoid the trap of a slow and unconvincing linear progression through SAMR.

What relationship betwe...

Like any head of business development, I am constantly asking myself what drives growth in our ever-changing world. I have found that the Cloud is a powerful contributor to growth. To make the case, I rely on the study conducted by Salim Ismail in his book “Exponential Organizations”. The book presents 10 practices that lead to ultra-fast growth. These practices, used by the giants of the Internet world (Google, BlaBlaCar, Netflix, Airbnb…), rely on the Cloud. The good news is that they are just as applicable to any company. The book groups the 10 growth practices into two sets: those oriented towards the outside audience (customers, partners…), and those oriented inwards (employees, internal resources).

Externally oriented practices
Knowing how to benefit from the mass effect created by Internet access drives rapid growth. Companies in this situation need very high scalability and elasticity in the parts of their information system exposed to the Internet. For example, by creating and animating open communities around its brand, a company must absorb growing and rapidly fluctuating traffic. Other practices, such as the use of data science (Algorithms), require creating working environments dynamically and benefiting from the latest technologies in the field without spending months of R&D acquiring them (use of PaaS). Leveraging third-party assets is surely the most “cloud-native” practice: the cloud itself carries this idea of building on the assets of cloud providers and turning infrastructure costs into variable costs. Finally, customer and partner engagement functions rely on more complex transactional mechanisms (for example decisions tied to the tagging of web-browsing data), and the services offered to customers must constantly be renewed to sustain their engagement. The cloud enables the rapid deployment of varied and evolving applications that increase customers’ engagement with the brand.

Internally oriented practices
The question then is how these massive interactions with the outside world are digested inside the company. For this, an interface system must be able to condense or route, in real time, the information coming from outside. The cloud and new-generation middleware (for example unstructured databases) provide all the elasticity this kind of function requires. The decisions of a fast-growing company can have a major impact on its results. Fast and reliable feedback on actions comes from building dashboards and adapting them continuously. The cloud, thanks to data-visualisation offerings and the ability to continuously deploy tailored applications, provides this flexibility. The last three practices belong to the concept of the “learning organisation”. On the one hand, the ability to experiment with offers and new ways of working at variable cost, without upfront investment, is largely supported by the cloud. On the other hand, information sharing through networks and collaborative tools is also used massively, and the SaaS offerings in this area provide every possibility.
Conclusion
Fast-growing companies therefore rely on the Cloud, because it offers 5 properties that support their growth practices:
Elasticity: scale the capacity of the IS up and down on demand
Scalability: increase capacity without limit
Pay-per-use: consume assets as a variable cost
On-demand deployments and environments: renew applications quickly
Latest technologies available without R&D: build on the cloud providers’ R&D

AI that answers intell...

Artificial intelligence seems to be creating a major movement of ideas around the world. States such as China, France and others are making it an economic priority. Companies are turning it into a way of communicating with their shareholders, often without an accurate idea of what it actually promises (with rare exceptions). At PALO IT, we do not claim to know everything about artificial intelligence, because the field is vast and varied. But for the past two years we have been working on the understanding of written human language, and in this area we want to convince you that the technology has become industrialisable, and therefore usable. Just as convolutional neural networks brought immense progress in image recognition three to five years ago, Long Short-Term Memory (LSTM) neural networks now open up promising prospects for language understanding.

OBJECTIVES OF THIS POST
Put the AI buzz back in its place as “bla-bla buzz”. Describe in a few words what today’s AI really is. Review the work done, and the difficulties encountered, over the past 30 years on understanding human language. Outline the revolution that is coming, understand its economic contours and, above all, explain its current capabilities and limits.

Artificial intelligence: belief or reality?
The (unfortunate!) term Artificial Intelligence appeared in 1956: Artificial Intelligence (AI) is the science whose goal is to have a machine perform tasks that humans accomplish using their intelligence. This is WRONG. One may prefer the term computational methods capable of simulating human tasks, without presuming what intelligence actually is.
"For 30 years, researchers have been searching to understand human language; researchers who search are easy to find, but researchers who find are still being searched for!"
After the Second World War, wealthy states such as the United States invested a lot of money on the following promise: all you need is a powerful computer and a piece of software that knows all the words and their linking rules in order to understand the grammar of a human language, the way a compiler interprets a program written in Cobol, Java or any other language. Missed: human language does not fall into the category of formal grammars, however complex they may be, contrary to what generations of teachers tried to convince us. You know the kind of messages carried by Twitter, so you know that human language changes with culture and context, even inventing new concepts with every exchange. But at the time there was no Twitter, so let us say that to err is human. The revolution of word embeddings and deep neural networks (deep learning) is a story of people who never let go of their conviction. The result of their work is now available and measurable: the revolution came from the idea that the semantics of a word can only be explained by knowing which neighbouring words may appear around it in every imaginable sentence. This knowledge is shared by you and the people you talk to, which is why you understand each other. A human language thus carries your culture, which is shared by your interlocutor; otherwise you would have to do as the Italians do and talk with your hands, not so simple!
Let us take an example: “un chaperon” (a chaperone): you understand that we are talking about a profession that has (thankfully) disappeared. “Un chaperon rouge” (a red chaperon): you understand that I am talking about the heroine of a Charles Perrault fairy tale, Little Red Riding Hood. A word can change meaning depending on the words that surround it. And yet it still has a meaning when it stands alone; but that meaning, for you, comes from the words that can surround it in your culture. We have moved from the word as one item among others to a representation of the word relative to the other words: a word is no longer a point in a list of words, but a vector in the space in which words are represented. Collobert & Weston, American researchers, found a method to compute these vectors in 2008 using neural networks, at a time when almost everyone had abandoned the subject. The scientific community then broadly improved and applied this vision using neural networks, since deep learning based on neural networks had already revolutionised image processing; the autonomous vehicle is the first industrial result of that technical progress. More specifically, the scientific community applied this vision to hard problems such as answering open questions about a supplied text. Stanford University organised a competition in this field; feel free to visit the SQUAD website, which gives access to a dataset of 100,000 question-and-answer pairs about texts. Each competitor is measured against a test set that nobody knows. The published leaderboard (measured in March 2018) showed prediction quality over the previous 19 months: the red line was the best result achieved by the Stanford students who took the test, and the blue curve the progress of the algorithms over those 19 months. One can see that the computations now seem able to do better than a human who can read. Starting from the researchers’ papers, we rebuilt a model and ran it on this dataset, for training and testing. We reached a score of 79.6. Phew… it works just as described in the researchers’ papers. But how does the machine answer questions about specific, targeted texts? To test this, we chose an English text that is hard to understand: the legal text regulating the use of personal data, known as the GDPR. Here is a result on this body of law:
Question: “Who should infringe the regulation?”
Answers from the computation:
Public authorities
Member States
The supervisory authorities
Above all, the answers become interesting when the article containing the answer is returned alongside them. You can try the GDPR bot --> Here <-- by clicking “Sign Up” to get access. We believe we have shown you that it is now possible to answer open questions about a text provided by a human. The human is the teacher and the creator of the content; the machine learns and repeats intelligently by answering open questions.
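To make the idea concrete, here is a minimal sketch of extractive, SQuAD-style question answering over a supplied text. The article does not say which stack powers the GDPR bot, so this example assumes the open-source Hugging Face transformers library and one of its publicly available models fine-tuned on SQuAD; it illustrates the technique, not the authors’ actual implementation.

```python
# Minimal sketch: extractive question answering over a supplied text,
# in the spirit of the SQuAD task described above.
# Assumes the Hugging Face `transformers` library (pip install transformers);
# this is NOT the tooling actually used for the GDPR bot.
from transformers import pipeline

# A SQuAD-style model extracts the answer span directly from the context.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = (
    "The controller shall implement appropriate technical and organisational "
    "measures to ensure and to be able to demonstrate that processing is "
    "performed in accordance with this Regulation."
)

result = qa(
    question="Who must demonstrate that processing complies with the Regulation?",
    context=context,
)
# The pipeline returns the extracted span and a confidence score,
# e.g. "The controller" with a score between 0 and 1.
print(result["answer"], result["score"])
```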

Blockchain: where do we st...

Introduction
The flagship technology of recent years is without doubt the blockchain. Some describe it as an advance as important as the Internet was at the end of the 20th century. If you have never heard of it, you have probably at least heard of cryptocurrencies such as Bitcoin. The goal of this article is to understand the technological advance the Blockchain brings, and then to look at the new generations of cryptocurrencies. Ten years after the appearance of the first world-famous blockchain, Bitcoin, let us see what progress has been made in this field.

History of the blockchain
To introduce the concept of the blockchain, we need to place ourselves in the context of the banking and financial crisis of autumn 2008. That crisis created a problem of trust in intermediaries, and more precisely in banking institutions. These intermediaries dictate their rules, sometimes abuse their authority and, more importantly, concentrate risk. It is in this climate of mistrust that cryptocurrencies were born. The first, and by far the most famous, is Bitcoin. It was created in 2008 by Satoshi Nakamoto and rests on several principles. The basic idea was to create a currency exchanged directly between users, bypassing banking institutions, so that goods and services could be transferred more freely and independently. This principle, however, raises several problems:
Money is exchanged directly between users
The system must guarantee that the same bitcoin cannot be attributed twice
And the currency must be managed without any higher or central authority
To address these problems, Bitcoin uses blockchain technology, and more precisely the innovative combination of three mechanisms: peer-to-peer networks, asymmetric cryptography and proof of work.

Peer-to-peer networks
A peer-to-peer network is a data-transfer protocol close to the client-server model, except that every client is itself a server. A user of the network, called a node, makes a file available to the other nodes; this file is copied from node to node until the entire network holds a copy. In Bitcoin’s case, users share a ledger, and every time a node makes a change to this ledger, all the nodes of the network update their local copy.

Asymmetric cryptography
Asymmetric cryptography is a method of encrypting messages. It mathematically generates two numbers, called the private key and the public key. The public key can be given to anyone who wants to encrypt a message, while the private key is kept secret. This system can be used to:
Encrypt a message to be sent: imagine Bob using Alice’s public key to encrypt a message. Alice then decrypts the message with her private key, which she alone holds. The message is completely confidential and only Alice can read its content.
Guarantee the authenticity of the sender: the roles of the keys are interchangeable. A private key can also be used to encrypt (sign) a message so that a public key can decrypt (verify) it.
Alice, wishing to authenticate a message, uses her private key to sign it, and anyone holding Alice’s public key can then verify that she is indeed the author of the message. In our case, to rule out the possibility of the same bitcoin being attributed twice, Bitcoin uses sender authenticity based on asymmetric cryptography. If Alice wants to join the Bitcoin network, she generates a private key and a public key. The private key stays in her possession and she distributes the public key to all the users of the network. To make a transaction, she signs it with her private key. In this way, every user of the network can be sure that it was Alice who created the transaction. As long as Alice does not reveal her private key, she will remain the only person able to spend her bitcoins. Now that transactions are secure, how can we know that the sender really owns the bitcoin being transferred?

Proof of work
To make sure that a user really owns the bitcoin being transferred, and that the currency is managed without any higher or central authority, a trick is needed to get all the nodes of the network to agree on the order of transactions. If the order of transactions is the same for every node of the network, we can walk back through the blocks and determine whether a sender really owns a bitcoin. This is the concept of proof of work (Proof-of-Work). All transactions are grouped into blocks, and each block references the previous block. In this way there is only one chain of blocks, replicated across all the nodes of the network. We can therefore state that a transaction found in a block is necessarily later than the transactions contained in all the previous blocks: transactions are ordered chronologically. The question now is how to create new blocks containing the new transactions. Before being included in a block, transactions are said to be pending and must be confirmed. To create a new block, the nodes of the network turn to mining.

Mining
The concept of mining is simple. Each block is identified by a hash, and the goal of the mining nodes is to produce a valid identifier (a hash) for the new block. All pending transactions are then grouped into this new block, which becomes part of the blockchain. To find a valid identifier, the nodes use a hash function: they hash the identifier of the previous block, the identifiers of the transactions of the block being formed, and a random string. A node can create the new block if the hash it has just computed is valid, that is, if it begins with a large enough number of zeros. The node then updates the blockchain and propagates it from node to node through the peer-to-peer network. Since the hash is essentially random, a node needs a huge number of attempts to find a valid one. These attempts represent a great deal of work, hence the name proof of work. In Bitcoin’s case, the probability of finding a valid hash is about 9.18×10^-22, and a block is created roughly every 10 minutes.
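As an illustration only, the toy sketch below replays the two mechanisms just described: Alice signs a transaction with her private key so that any node can verify its origin, and a block identifier is “mined” by brute-forcing a nonce until the hash starts with a few zeros. It uses the Python cryptography and hashlib libraries, with made-up transaction fields and a tiny difficulty; it is in no way Bitcoin’s actual implementation, where block formats, difficulty targets and key handling are far more involved.

```python
# Toy sketch of the two mechanisms described above: transaction signing and
# proof-of-work mining. Educational only; not Bitcoin's real protocol.
import hashlib
import json

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# --- Sender authenticity via asymmetric cryptography --------------------------
alice_private_key = ec.generate_private_key(ec.SECP256K1())
alice_public_key = alice_private_key.public_key()

# A hypothetical transaction, serialised to bytes before signing.
transaction = json.dumps({"from": "Alice", "to": "Bob", "amount": 1}).encode()
signature = alice_private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

# Any node holding Alice's public key can check she authored the transaction;
# verify() raises InvalidSignature if the transaction or signature was altered.
alice_public_key.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))

# --- Proof of work: brute-force a nonce until the hash has leading zeros ------
def mine_block(previous_block_hash, transactions, difficulty=4):
    """Return (nonce, hash) such that the block hash starts with `difficulty` zeros."""
    nonce = 0
    while True:
        payload = previous_block_hash.encode() + b"".join(transactions) + str(nonce).encode()
        block_hash = hashlib.sha256(payload).hexdigest()
        if block_hash.startswith("0" * difficulty):
            return nonce, block_hash
        nonce += 1

nonce, block_hash = mine_block("0" * 64, [transaction])
print(f"Found nonce {nonce} -> block hash {block_hash}")
```

With only 4 leading zeros this runs in a fraction of a second; Bitcoin’s real difficulty is tuned so that the whole network needs about 10 minutes per block, which is what makes rewriting past blocks prohibitively expensive.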
This enormous amount of work is provided by the computing power of the miners’ hardware (servers, GPUs, etc.). To reward them, 12.5 bitcoins are paid out whenever a valid hash is found. In this way, transactions are verified and stored in blocks, and if someone malicious wants to alter a transaction inside a block, they must change that block’s identifier (its hash). Since every block references the previous one, they must also change that reference, and therefore the identifiers of all the following blocks, and so on. Given that finding a single valid block identifier is already extremely expensive, defrauding the system is practically impossible. Peer-to-peer networks, asymmetric cryptography and proof of work (PoW), which makes a distributed consensus possible, are the three mechanisms that make up the blockchain.

Taking a step back
Today, cryptocurrencies are worth hundreds of billions of dollars and demonstrate the reliability of the blockchain every day. The technology is already applied in finance; it is time to see whether it is mature enough to extend to other kinds of information. The blockchain rests on three fundamental ideas described in the previous section:
“Disintermediation”: thanks to the peer-to-peer network we can do without trusted intermediaries, which lowers costs.
Traceability: the use of a ledger makes every transaction traceable, because transactions are public. This traceability provides a digital, non-erasable proof of existence. Note that although transactions are public, they are signed with the users’ private keys.
Distributed consensus: the ledger is kept by all users and transactions are verified permanently, which makes it easier to work together and increases security.
It is in this context that some people asked a simple but crucial question: is it possible to execute code? Bitcoin is an excellent innovation, but is it possible, for example, to schedule a transfer for a given date? From this idea the smart contract was born. This is what I consider to be the second generation of blockchain and cryptocurrency, introduced by the Ethereum technology. A smart contract is a small program, with its own small database, distributed across the network on the blockchain. In this way there can be no dispute: execution of the program is guaranteed by the blockchain. Several applications are already built on smart contracts; they are called dApps, for “decentralized applications”. A dApp runs on the blockchain instead of running locally on a device. dApps nevertheless come with a number of constraints. The first, and to my mind the most dangerous, is that code deployed on a blockchain is immutable: it cannot be modified. Once deployed on the blockchain, it will be there forever. Given the maturity of the technology, it is almost certain that code deployed on a blockchain contains bugs. To fix those bugs, a new contract must be deployed on the blockchain and the data from the previous contract migrated over.
There have been several scandals in which hackers diverted enormous amounts (several million dollars) from dApps by exploiting buggy code. Another constraint is that responses are very slow: when a contract method is called through a transaction, you must wait for that transaction to be verified, i.e. for a block to be mined in order to store it. In Ethereum’s case, a block is mined about every minute.

Third generation
The limits of first- and second-generation blockchains led to the arrival of third-generation blockchains. Their goal is to answer the problems encountered in the previous versions while introducing new innovations. This is the case of Cardano, which in my opinion represents a major turning point in the world of cryptocurrencies. Cardano was launched in September 2017 and presents itself as the third-generation blockchain. This technology takes up the basic ideas of Bitcoin and Ethereum and partially answers the problems highlighted above. According to its creators, it offers a scalable, interoperable and sustainable solution.

Scalability
Scalability covers three problems:
The number of transactions per second,
Since these transactions carry data, questions arise about the network’s capacity and reliability,
And since every transaction is stored on the blockchain, the chain grows every day. The Bitcoin blockchain currently weighs more than 150 GB, so we will run into a data-storage problem, because every node of the network must store the entire ledger in order to verify transactions.
We saw earlier that the first generation of blockchain (Bitcoin) reached a distributed consensus through proof of work (PoW). To address the number of transactions per second, Cardano instead uses a Proof-of-Stake (PoS) algorithm to reach consensus. With proof of work, mining consists of performing mathematical computations in order to create new blocks in the chain and validate transactions. The problem identified earlier is that block creation is very slow (for example 10 minutes for Bitcoin), and therefore so is transaction verification. Without going into details, the protocol used by Cardano makes it possible to generate blocks and verify transactions much faster. Next, in order to handle thousands of transactions per second, it is impossible to maintain a homogeneous network topology, and not every node of the network can examine every transaction, since the network would not be reliable enough. Cardano uses the RINA network architecture (Recursive InternetWork Architecture, http://irati.eu/the-recursive-internetwork-architecture/) to address this network-capacity problem. Finally, not every user necessarily needs all the data to verify their transactions while still keeping a high level of security. Cardano has not yet implemented a solution to reduce the size of the blockchain, but an answer to this problem is under development and should be available in the course of 2018. It combines several techniques: pruning, subscription, compression and data partitioning.
A node of the network will then be able to work with a “light” version of the blockchain in order to validate a transaction or create a block. This is how Cardano answers the scalability problem faced by the first two generations.

Interoperability
The idea of interoperability is the same as in today’s financial world: no single currency can govern the world. There is a multitude of different cryptocurrencies, and the idea here is to be able to use any other currency with Cardano. Each technology, such as Bitcoin, Ethereum or the current banking system, has its own network, but there is no standard allowing them to communicate with one another. Exchange platforms do exist; they are the ones used every day to convert fiat currencies into cryptocurrencies, but they concentrate risk. It must also be admitted that it is contradictory to promote a technology of total decentralisation through such centralised platforms. The goal is to be able to use any currency with Cardano, in other words a way of “linking” the blockchains of different networks. To respect the general idea of the blockchain, these exchanges should happen without a trusted third party (exchange platforms). To answer this problem, Cardano introduces sidechains. The basic concept is that when a transaction goes from one blockchain to a different one, the sidechains must carry a compressed set of information that makes it possible to determine whether the transaction is legitimate. The transaction must follow the same basic rules as the transactions of first-generation blockchains. At this point blockchains could communicate with one another, but it is worth noting that a barrier remains between the banking network and the blockchain network. That barrier exists only because the banking side needs transaction traceability: who sends what to whom, and why? To answer this, Cardano adds a metadata system to transactions, which the user can choose to include or not. Finance can then require a transaction to be transparent. Cardano thus provides a clever blend of the anonymity of cryptocurrency and the traceability required by the world of finance.

Sustainability
For this technology to last, an important question must be answered: how do we pay for things? A business model is needed that creates a treasury to pay the people who contribute to the evolution of the technology, so that it can endure over time. To anticipate this, Cardano plans to set up a treasury system as follows. Previously, cryptocurrency miners received a reward when a block was mined. With Cardano, miners still receive a reward, but part of it is redirected to a treasury, a kind of decentralised bank account. When a user has a project that could move the technology forward, they submit it to the treasury, and all token holders can then vote. If the project receives enough votes, the treasury releases funding so that its author can develop it.
This business model allows the technology to be evolved safely and sustainably over time by the community itself.

Conclusion
The blockchain world is evolving rapidly, but for the moment the results are still too immature. Created in 2008, the technology has already come a long way from its initial stage, but it is still far too marked by scandals and by hackers exploiting bugs. Third-generation blockchains propose solutions to democratise this technology. I think Cardano is one to watch because, in my view, it fully respects the philosophy of the blockchain while partially answering the scalability, interoperability and sustainability problems of the previous generations. Nevertheless, there is still a long way to go, because the technology is still very young. Whatever the outcome of this innovation, total revolution or flop, it is fascinating for a young developer like me to watch this technology evolve in real time and to see its impact on today’s society.

Adopt a cloud soluti...

The very first DevFest conference in Toulouse took place on November 3rd, 2016. The event was a tremendous success and brought together a large community of technology addicts. At this event, you could choose from numerous breakout sessions covering a broad range of technical topics focused on web, mobile and cloud development. I was particularly interested in the sessions presented by Aurélie Vache (Atchik), Didier Girard (Sfeir) and Alain Regnier (Alto Labs) dealing with Big Data implementation using the Google Cloud Platform (GCP), and especially how to facilitate the implementation of Big Data projects using the services of the Google Cloud stack. "The Google Cloud Platform offers an integrated end-to-end Big Data solution." It lets you capture, process, store and analyse your data within a single platform. These services are serverless, which frees you from the need to build, manage and operate complex and costly infrastructure. This is obviously a major benefit in terms of implementation cost for your Big Data projects. "We can particularly thank Didier Girard for his excellent session, where he described the challenges that developers face when dealing with Big Data and how to solve them using the GCP." According to him, the volume of data being generated by today’s applications is increasing at dizzying rates. The real challenge is not to store a large amount of data to analyse at some future date, but to be able to consume a large volume of events and extract meaningful information in real time. He then presented the typical GCP architecture used to build such a solution and did a live demo of an application consuming roughly 100,000 events per second.
Cloud Pub/Sub is a real-time messaging service that allows you to send and receive data between independent applications. The messaging queue supports many-to-many communication, meaning that a Pub/Sub topic can be configured with multiple publishers and subscribers. It is highly scalable, as the service handles 10,000 messages per second by default and up to several million on demand. That makes it a major asset for capturing large volumes of real-time events originating from various sources, such as web analytics or IoT devices. It is also secure, as all data on the wire is encrypted.
Cloud Storage is an object storage solution in the cloud, similar to the well-known Amazon S3. It can be used for live data serving, data analytics or data archiving. Objects stored within Cloud Storage are organized in buckets that can be controlled and secured independently. The service also provides a RESTful API that allows developers to programmatically access the storage and dynamically upload data to be processed, such as batch files.
Cloud Dataflow is a data processing service for both batch and real-time data streaming. It allows you to set up processing pipelines for integrating, preparing and analyzing large data sets. Partly based on the Google frameworks MillWheel and FlumeJava, it is designed for large-scale data ingestion and low-latency processing. Cloud Dataflow can consume data in publish-and-subscribe mode from Google Cloud Pub/Sub or, in batch mode, from any database or file system (such as Cloud Storage). It fully supports SQL queries via Google BigQuery, which makes it the natural solution for bridging Google Pub/Sub and BigQuery. BigQuery is a fully managed data warehouse for large-scale data analytics.
It offers users the ability to manage data using SQL-like queries against very large data sets and get results with excellent response times. BigQuery uses columnar storage and a tree architecture to dispatch queries and aggregate results across multiple machines. As a result, BigQuery is able to run queries on datasets containing terabytes of data within a few seconds. BigQuery also provides a REST API that lets you control and query your data using your favorite client language and libraries. Alternatively, you can access it through command-line tools or the Google online console. "In conclusion, I would say that the Google Cloud Platform is definitely a solution to consider when implementing your Big Data projects." Its serverless and scalable approach represents a major asset for meeting the requirements of today's applications, which generate unprecedented amounts of data from diverse sources. The GCP services are also very well documented, and an infrastructure can be put in place fairly quickly. Lastly, the major benefit is probably the pricing, as the cost associated with using these services is minimal and can be finely controlled through the Google online console (e.g. BigQuery Pricing).
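To give a feel for how little code is needed to get started with the capture-and-analyse flow described above, here is a minimal sketch that publishes an event to Cloud Pub/Sub and runs a SQL query in BigQuery using the official Python client libraries. The project ID, topic name and dataset/table are hypothetical placeholders, and in a full pipeline Cloud Dataflow would normally sit between the two, reading from Pub/Sub and writing to BigQuery.

```python
# Minimal sketch (hypothetical project/topic/table names): publish an event to
# Cloud Pub/Sub, then query aggregated results in BigQuery with the official
# Python clients (pip install google-cloud-pubsub google-cloud-bigquery).
import json

from google.cloud import bigquery, pubsub_v1

PROJECT_ID = "my-gcp-project"  # placeholder project ID

# --- Capture: publish a raw event to a Pub/Sub topic --------------------------
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, "web-events")  # hypothetical topic
event = json.dumps({"page": "/home", "user": "42"}).encode("utf-8")
publisher.publish(topic_path, data=event).result()  # blocks until the message is accepted

# --- Analyse: run a SQL query against a BigQuery table ------------------------
# In a real pipeline, Cloud Dataflow would read from Pub/Sub and write to this table.
bq = bigquery.Client(project=PROJECT_ID)
query = """
    SELECT page, COUNT(*) AS views
    FROM `my-gcp-project.analytics.events`  -- hypothetical dataset and table
    GROUP BY page
    ORDER BY views DESC
    LIMIT 10
"""
for row in bq.query(query).result():
    print(row.page, row.views)
```

Both clients pick up credentials from the environment (for example a service account key), which is what keeps the code itself this short; all capacity planning is handled by the serverless services themselves.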