timestamp,eventType,contentId,authorPersonId,authorSessionId,authorUserAgent,authorRegion,authorCountry,contentType,url,title,text,lang
1459192779,CONTENT REMOVED,-6451309518266745024,4340306774493623681,8940341205206233829,,,,HTML,http://www.nytimes.com/2016/03/28/business/dealbook/ethereum-a-virtual-currency-enables-transactions-that-rival-bitcoins.html,"Ethereum, a Virtual Currency, Enables Transactions That Rival Bitcoin's","All of this work is still very early. The first full public version of the Ethereum software was recently released, and the system could face some of the same technical and legal problems that have tarnished Bitcoin. Many Bitcoin advocates say Ethereum will face more security problems than Bitcoin because of the greater complexity of the software. Thus far, Ethereum has faced much less testing, and many fewer attacks, than Bitcoin. The novel design of Ethereum may also invite intense scrutiny by authorities, given that potentially fraudulent contracts, like Ponzi schemes, can be written directly into the Ethereum system. But the sophisticated capabilities of the system have made it fascinating to some executives in corporate America. IBM said last year that it was experimenting with Ethereum as a way to control real-world objects in the so-called Internet of Things. Microsoft has been working on several projects that make it easier to use Ethereum on its computing cloud, Azure. ""Ethereum is a general platform where you can solve problems in many industries using a fairly elegant solution - the most elegant solution we have seen to date,"" said Marley Gray, a director of business development and strategy at Microsoft. Mr. Gray is responsible for Microsoft's work with blockchains, the database concept that Bitcoin introduced. Blockchains are designed to store transactions and data without requiring any central authority or repository. Blockchain ledgers are generally maintained and updated by networks of computers working together - somewhat similar to the way that Wikipedia is updated and maintained by all its users.
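The tamper-evident ledger structure described above can be illustrated with a short sketch. What follows is a minimal, hypothetical Python toy - not Ethereum's or Bitcoin's actual implementation - showing why a chain of hashes resists rewriting: every block commits to the hash of its predecessor, so altering any historical transaction invalidates all later blocks.

import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's canonical JSON form.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    # Each new block commits to the hash of the previous block.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain: list) -> bool:
    # Recompute every link; editing history breaks a later prev_hash.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
append_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
assert verify(chain)
chain[0]["transactions"][0]["amount"] = 500  # tamper with history
assert not verify(chain)                     # the chain no longer validates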
""It feels like in the last two to three months that experiment is at least getting a lot more validation."" Since the beginning of the year, the value of an individual unit of Ether has soared as high as $12 from around $1. That has brought the value of all existing Ether to over $1 billion at times, significantly more than any virtual currency other than Bitcoin, which had over $6 billion in value outstanding last week. Since Bitcoin was invented, there have been many so-called alt-coins that have tried to improve on Bitcoin, but none have won the following of Ethereum. Unlike Bitcoin, which was released in 2009 by a mysterious creator known as Satoshi Nakamoto, Ethereum was created in a more transparent fashion by a 21-year-old Russian-Canadian, Vitalik Buterin, after he dropped out of Waterloo University in Ontario. The most basic aim of Ethereum was to make it possible to program binding agreements into the blockchain - the smart contract concept. Two people, for instance, could program a bet on a sports game directly into the Ethereum blockchain. Once the final score came in from a mutually agreed upon source - say, The Associated Press - the money would be automatically transferred to the winning party. Ether can be used as a currency in this system, but Ether are also necessary to pay for the network power needed to process the bet. The Ethereum system has sometimes been described as a single shared computer that is run by the network of users and on which resources are parceled out and paid for by Ether. A team of seven co-founders helped Mr. Buterin write up the software after he released the initial description of the system. Mr. Buterin's team raised $18 million in 2014 through a presale of Ether, which helped fund the Ethereum Foundation, which supports the software's development. Like Bitcoin, Ethereum has succeeded by attracting a dedicated network of followers who have helped support the software, partly in the hope that their Ether will increase in value if the system succeeds. Last week, there were 5,800 computers - or nodes - helping support the network around the world. The Bitcoin network had about 7,400 nodes. One of Mr. Buterin's co-founders, Joseph Lubin, has set up ConsenSys, a company based in Brooklyn that has hired over 50 developers to build applications on the Ethereum system, including one that enables music distribution and another that allows for a new kind of financial auditing. The ConsenSys offices are in an old industrial building in the Bushwick section of Brooklyn. The office is essentially one large room, with all the messy trademarks of a start-up operation, including white boards on the walls and computer parts lying around. Mr. Lubin said he had thrown himself into Ethereum after starting to think that it delivered on some of the failed promise of Bitcoin, especially when it came to allowing new kinds of online contracts and markets. ""Bitcoin presented the broad strokes vision, and Ethereum presented the crystallization of how to deliver that vision,"" he said. Joseph Bonneau, a computer science researcher at Stanford who studies so-called crypto-currencies, said Ethereum was the first system that had really caught his interest since Bitcoin. It is far from a sure thing, he cautioned. ""Bitcoin is still probably the safest bet, but Ethereum is certainly No. 2, and some folks will say it is more likely to be around in 10 years,"" Mr. Bonneau said. ""It will depend if any real markets develop around it. 
If there is some actual application.""",en 1459193988,CONTENT SHARED,-4110354420726924665,4340306774493623681,8940341205206233829,,,,HTML,http://www.nytimes.com/2016/03/28/business/dealbook/ethereum-a-virtual-currency-enables-transactions-that-rival-bitcoins.html,"Ethereum, a Virtual Currency, Enables Transactions That Rival Bitcoin's","All of this work is still very early. The first full public version of the Ethereum software was recently released, and the system could face some of the same technical and legal problems that have tarnished Bitcoin. Many Bitcoin advocates say Ethereum will face more security problems than Bitcoin because of the greater complexity of the software. Thus far, Ethereum has faced much less testing, and many fewer attacks, than Bitcoin. The novel design of Ethereum may also invite intense scrutiny by authorities, given that potentially fraudulent contracts, like Ponzi schemes, can be written directly into the Ethereum system. But the sophisticated capabilities of the system have made it fascinating to some executives in corporate America. IBM said last year that it was experimenting with Ethereum as a way to control real-world objects in the so-called Internet of Things. Microsoft has been working on several projects that make it easier to use Ethereum on its computing cloud, Azure. ""Ethereum is a general platform where you can solve problems in many industries using a fairly elegant solution - the most elegant solution we have seen to date,"" said Marley Gray, a director of business development and strategy at Microsoft. Mr. Gray is responsible for Microsoft's work with blockchains, the database concept that Bitcoin introduced. Blockchains are designed to store transactions and data without requiring any central authority or repository. Blockchain ledgers are generally maintained and updated by networks of computers working together - somewhat similar to the way that Wikipedia is updated and maintained by all its users. Many corporations, though, have created their own Ethereum networks with private blockchains, independent of the public system, and that could ultimately detract from the value of the individual unit in the Ethereum system - known as an Ether - that people have recently been buying. The interest in Ethereum is one sign of the corporate fascination with blockchains. Most major banks have expressed an interest in using them to make trading and money transfer faster and more efficient. On Tuesday, executives from the largest banks will gather for a conference, ""Blockchain: Tapping Into the Real Potential, Cutting Through the Hype."" Many of these banks have recently been looking at how some version of Ethereum might be put to use. JPMorgan, for instance, has created a specific tool, Masala, that allows some of its internal databases to interact with an Ethereum blockchain. Michael Novogratz, a former top executive at the private equity firm Fortress Investment Group, who helped lead Fortress's investment in Bitcoin, has been looking at Ethereum since he left Fortress last fall. Mr. Novogratz said that he made a ""significant"" purchase of Ether in January. He has also heard how the financial industry's chatter about the virtual currency has evolved. ""A lot of the more established players were thinking, 'It's still an experiment,' "" he said. 
""It feels like in the last two to three months that experiment is at least getting a lot more validation."" Since the beginning of the year, the value of an individual unit of Ether has soared as high as $12 from around $1. That has brought the value of all existing Ether to over $1 billion at times, significantly more than any virtual currency other than Bitcoin, which had over $6 billion in value outstanding last week. Since Bitcoin was invented, there have been many so-called alt-coins that have tried to improve on Bitcoin, but none have won the following of Ethereum. Unlike Bitcoin, which was released in 2009 by a mysterious creator known as Satoshi Nakamoto, Ethereum was created in a more transparent fashion by a 21-year-old Russian-Canadian, Vitalik Buterin, after he dropped out of Waterloo University in Ontario. The most basic aim of Ethereum was to make it possible to program binding agreements into the blockchain - the smart contract concept. Two people, for instance, could program a bet on a sports game directly into the Ethereum blockchain. Once the final score came in from a mutually agreed upon source - say, The Associated Press - the money would be automatically transferred to the winning party. Ether can be used as a currency in this system, but Ether are also necessary to pay for the network power needed to process the bet. The Ethereum system has sometimes been described as a single shared computer that is run by the network of users and on which resources are parceled out and paid for by Ether. A team of seven co-founders helped Mr. Buterin write up the software after he released the initial description of the system. Mr. Buterin's team raised $18 million in 2014 through a presale of Ether, which helped fund the Ethereum Foundation, which supports the software's development. Like Bitcoin, Ethereum has succeeded by attracting a dedicated network of followers who have helped support the software, partly in the hope that their Ether will increase in value if the system succeeds. Last week, there were 5,800 computers - or nodes - helping support the network around the world. The Bitcoin network had about 7,400 nodes. One of Mr. Buterin's co-founders, Joseph Lubin, has set up ConsenSys, a company based in Brooklyn that has hired over 50 developers to build applications on the Ethereum system, including one that enables music distribution and another that allows for a new kind of financial auditing. The ConsenSys offices are in an old industrial building in the Bushwick section of Brooklyn. The office is essentially one large room, with all the messy trademarks of a start-up operation, including white boards on the walls and computer parts lying around. Mr. Lubin said he had thrown himself into Ethereum after starting to think that it delivered on some of the failed promise of Bitcoin, especially when it came to allowing new kinds of online contracts and markets. ""Bitcoin presented the broad strokes vision, and Ethereum presented the crystallization of how to deliver that vision,"" he said. Joseph Bonneau, a computer science researcher at Stanford who studies so-called crypto-currencies, said Ethereum was the first system that had really caught his interest since Bitcoin. It is far from a sure thing, he cautioned. ""Bitcoin is still probably the safest bet, but Ethereum is certainly No. 2, and some folks will say it is more likely to be around in 10 years,"" Mr. Bonneau said. ""It will depend if any real markets develop around it. 
If there is some actual application.""",en 1459194146,CONTENT SHARED,-7292285110016212249,4340306774493623681,8940341205206233829,,,,HTML,http://cointelegraph.com/news/bitcoin-future-when-gbpcoin-of-branson-wins-over-usdcoin-of-trump,Bitcoin Future: When GBPcoin of Branson Wins Over USDcoin of Trump,"The alarm clock wakes me at 8:00 with a stream of advert-free broadcasting, charged at one satoshi per second. The current BTC exchange rate makes that snooze button a costly proposition! So I get up, make coffee and go to my computer to check the overnight performance of my bots. TradeBot earns me on Trump and Branson: TradeBot, which allocates funds between the main chain and various national currency side-chains, generated a lucrative 0.24 BTC return. TradeBot has been reliably profitable ever since I set it to trade USDcoin according to political prediction market data. As expected, the latest poll numbers came in as highly supportive of Trump's re-election as USDcoin CEO. Trump's resistance to de-anonymizing public spending, by moving USDcoin off the Confidential Transactions layer, continues to erode his coin's credibility. In his latest speech, Trump maintains that full CT-privacy is essential to ""combatting CNYcoin's sinister ring-signature scheming."" I make a note to increase my long position in GBPcoin. Following CEO Branson's memo to the effect that government finances and national banks be brought into compliance with the public blockchain, British corruption indices have flatlined. As the first national economy to ""go light,"" Britain leads the global recovery from the Great Debt Default of '20. Happy with the GoatData Project: I check TeachBot and note that it's performing in line with expectations. TeachBot serves as an autonomous info-agent between various contracting AIs and data providers. The 0.5 BTC bounty it awarded to a team of Sherpas to outfit a herd of Tibetan mountain goats with full motion-sensing rigs has already been repaid...I check the latest figures... four times over! My best TeachBot strategy to date, the GoatData project provides valuable data to WinterHoof, the Artificial General Intelligence in charge of the Swiss military's quadrupedal robotics program. At this rate, I'll soon have enough BTC to retire to Satoshi City on Mars!",en 1459194474,CONTENT SHARED,-6151852268067518688,3891637997717104548,-1457532940883382585,,,,HTML,https://cloudplatform.googleblog.com/2016/03/Google-Data-Center-360-Tour.html,Google Data Center 360° Tour,"We're excited to share the Google Data Center 360° Tour - a YouTube 360° video that gives you an unprecedented and immersive look inside one of our data centers. There are several ways to view this video: on desktop in Google Chrome, use your mouse or trackpad to change your view while the video plays; in the YouTube app on mobile, move your device around to look at all angles while the video plays; and, for the most immersive way to view it, use Google Cardboard (currently supported by the Android YouTube app only; iOS support is coming soon!) - load the video in the YouTube app, tap the Cardboard icon when the video starts to play, then insert your phone in Cardboard and look around. A little background... Several months ago, those of us on the Google Cloud Developer Advocacy Team had a rare opportunity to tour the Google data center in The Dalles, Oregon. Many of us had seen other non-Google data centers in our careers, but this experience was beyond anything we ever imagined. 
We were blown away by the scale, the incredible attention to security and privacy, and the amazing efforts to make the data center extremely efficient and green. Additionally, we were proud to meet some of the brilliant people that design, build and maintain these data centers. If you are a Google Cloud Platform customer, then this is your data center as much as it is our data center, so we want you to experience what we experienced. We hope you enjoy it! - Posted by Greg Wilson, Head of Developer Advocacy, Google Cloud Platform",en 1459194497,CONTENT SHARED,2448026894306402386,4340306774493623681,8940341205206233829,,,,HTML,https://bitcoinmagazine.com/articles/ibm-wants-to-evolve-the-internet-with-blockchain-technology-1459189322,"IBM Wants to ""Evolve the Internet"" With Blockchain Technology","The Aite Group projects the blockchain market could be valued at $400 million by 2019. For that reason, some of the biggest names in banking, industry and technology have entered into the space to evaluate how this technology could change the financial world. IBM and the Linux Foundation, for instance, have brought together some of the brightest minds in industry and technology to work on blockchain technology through the Hyperledger Project. The Hyperledger Project is under the umbrella of the Linux Foundation, and seeks to incorporate findings by blockchain projects such as Blockstream, Ripple, Digital Asset Holdings and others in order to make blockchain technology useful for the world's biggest corporations. IBM has also contributed its own code to the project. According to John Wolpert, IBM's Global Blockchain Offering Director, when IBM and the Linux Foundation began working together on the blockchain project, the foundation made clear it wanted to ""disrupt the disruption,"" in part with their findings, as well as the data gathered by projects such as Ripple, Ethereum and others exploring the blockchain. The Linux Foundation announced its Hyperledger Project on December 17, 2015. Just one day later, 2,300 companies had requested to join. By comparison, the second-largest open source project launch in the foundation's history had drawn only 450 inquiries. ""So, it's either going to be a holy mess or it's going to change the world,"" Wolpert said at The Blockchain Conference in San Francisco presented by Lighthouse Partners. As Wolpert puts it, a team of IBMers is ""on a quest"" to understand and ""do something important"" with blockchain technology. ""I don't know why we got this rap in the '70s, way back, that we are not cool, that we're kind of stodgy, and that is not the IBM of my experience,"" Wolpert, who founded the taxi service Flywheel, explained. ""We're the original crazy people. The craziest people are the guys at IBM in the '60s and '70s -- you can imagine what they were doing in the '60s -- they are wild-eyed revolutionaries."" Although this is not the image IBM markets, its work in quantum computing, quantum teleportation and neurosynaptic chips counts among some of the ""cool stuff"" IBM does, says Wolpert. IBM also approaches projects in the spirit of open innovation, and not proprietarily. ""Our method of operations is open, and it's often our MO to back not-our-thing,"" he said of IBM since the 1990s. Wolpert cites Java as one such project: ""You would not know about Java today if it weren't for IBM."" As a ""pretty young dude,"" Wolpert was in the room when IBM made a billion-dollar decision to back Linux over its own technology. He also cites IBM's work on XML as an example of IBM's dedication to open innovation. 
Currently, IBM has employees working on crypto-security and distributed systems who have been working on consensus algorithms for their entire careers, some for more than 30 years. ""They're crazy smart, we're planetary, we've gone from a couple of guys in a canoe, to a platoon and approaching an army of people working on blockchain,"" Wolpert said. ""So it feels a lot like my first job at IBM which was making Java real for business."" This has led old and new friends to contact the multinational technology and consulting corporation. ""Banks who have been calling us constantly ... saying 'What's your view on blockchain?' or 'Hey, let's do a project together on blockchain,'"" Wolpert said. ""We've been doing all these crazy projects on blockchain, every kind of blockchain, and learning a lot, been doing that for a couple years and really started to accelerate last year."" Today, there is a whole unit dedicated to blockchain technology at the Linux Foundation. ""We went all-in on blockchain,"" he explained. What's it all about for Wolpert? ""It's about evolving the Internet."" Bitcoin is important for moving money around without a single authority. ""[Bitcoin] is a solution to a problem, a specific problem, of people who need to move money around in environments where you don't trust the government,"" Wolpert said. ""I think we can all agree we are approaching the end of the era where a single authority manages trust and gets compensated for the risk in doing so."" In Wolpert's view, finance does not need to go from today's system of trust -- where consumers trust institutions to handle their money -- to one of complete trustlessness, like the Bitcoin system in which a protocol, not a centralized authority, manages the movement of value. ""It doesn't follow, having one single trust authority to trustlessness on everything,"" he said. ""There is a false dichotomy between the notion of trust and trustlessness, that you have to have a walled garden on one side and Bitcoin on the other."" He doesn't compare Bitcoin to the Internet, saying that's a wrong analogy. ""It is not apt to say the Internet is like Bitcoin; the Internet is, from the perspective of Bitcoin, a permissioned walled garden,"" he said. ""Ever heard of ICANN? It's permissive, but it's permissioned."" Bitcoin, Ripple and Ethereum are ""Iteration 1"" of blockchain technology, the three-time IBMer told his audience. Wolpert thinks blockchain technology will soon evolve to ""Iteration 2."" The problem with it? ""The Internet is constrained,"" Wolpert said. ""You have a fabric that allows for lots of competition on platforms and huge amounts of competition on solutions and applications on top of it, so we need to evolve the Internet to become economically aware, and that Internet isn't going to be an application, it is going to be a fabric and then lots of applications on top of that."" This doesn't mean blockchain 2.0 technologies such as Blockstream, Ethereum and Ripple are lost causes that must now completely retool. The Linux Foundation, in fact, works with Ethereum and Ripple employees. With its blockchain projects, IBM is not focused on moving a cryptocurrency. ""Lots of banks pay us lots of money and we like that, and we want to radically improve them,"" Wolpert explained. ""[Bankers] are gonna try like heck to radically improve what is going on ahead of the disruption, disrupt themselves if you will. 
If you look at things like [Overstock's] t0, that's happening, might happen on Bitcoin protocol, but might not, and there's plenty of ways to solve it, when you commit to it. And sure, Bitcoin has woken [banks] up to this, so it's doing its job, it's disrupting the thinking and maybe that is all that is necessary."" Wolpert wonders aloud during his speech if the world would have iTunes were it not for Napster, or Netflix if it weren't for BitTorrent. The former head of products for IBM's Watson Ecosystem says IBM currently has a ""boomtown with giant numbers of people at IBM and everywhere else"" investigating the blockchain problem and how distributed technology can be made useful for industry. He details how oftentimes IBM tinkers with open innovations itself to make the technology ready for industry. That's not the route being taken with Bitcoin. That the Bitcoin network is on par with Ireland for energy consumption, and that mining consortiums could eventually dictate that market, scares the firm away from dealing directly with Bitcoin. ""What I want to see, what we want to see, is a world full of companies who I can know, who I can identify, who are so spread out, like the Internet,"" he said. Wolpert cites the Arab Spring, and the role the Internet played in that uprising, as how a permissioned, yet permissive, structure functions. Websites, albeit technology permissioned by ICANN, remained online during the protests and played a major role. What's the risk in experimenting with blockchain? ""The risk is we go crazy, we have an unruly mess,"" Wolpert warned. ""[But] it's the Linux Foundation, we've been in this movie before. While there's a lot of really smart guys working on Bitcoin and startups, we've got an equal number of people who also have 20 or 30 years of experience with open source and open innovation, and I think we have a good shot at maintaining restraints, keeping it scoped down, keeping it simple, but getting it to something that is useful.""",en 1459194522,CONTENT SHARED,-2826566343807132236,4340306774493623681,8940341205206233829,,,,HTML,http://www.coindesk.com/ieee-blockchain-oxford-cloud-computing/,IEEE to Talk Blockchain at Cloud Computing Oxford-Con - CoinDesk,"One of the largest and oldest organizations for computing professionals will kick off its annual conference on the future of mobile cloud computing tomorrow, where blockchain is scheduled to be one of the attractions. With more than 421,000 members in more than 160 countries, the Institute of Electrical and Electronics Engineers (IEEE) holding such a high-profile event has the potential to accelerate the rate of blockchain adoption by the engineering community. At the four-day conference, beginning Tuesday, the IEEE will host five blockchain seminars at the 702-year-old Exeter College of Oxford. The conference, IEEE Mobile Cloud 2016, is the organization's fourth annual event dedicated to mobile cloud computing services and engineering. Speaking at the event, hosted at Oxford University, Professor Wei-Tek Tsai of the School of Computing, Informatics, and Decision Systems Engineering at Arizona State University will talk about the future of blockchain technology as an academic topic of research. Computing shift: While this looks to be the first IEEE conference to deal so closely with blockchain, its presence at an event dedicated to cloud computing is no surprise. 2016 is shaping up to be the year many stopped talking about bitcoin as a decentralized ledger and started talking about it as a database. 
Last month, IBM made a huge splash in the blockchain ecosystem when it announced, among other news, that it would be offering a wide range of blockchain-related initiatives. Further, in November, Microsoft coined the term Blockchain-as-a-Service (BaaS) to describe its sandbox environment where developers could experiment with tools hosted on the company's Azure Cloud platform. Companies including ConsenSys, Augur, BitShares and Slock.it have joined that effort.",en 1459194557,CONTENT SHARED,-2148899391355011268,4340306774493623681,8940341205206233829,,,,HTML,http://www.newsbtc.com/2016/03/28/banks-need-collaborate-bitcoin-fintech-developers/,Banks Need To Collaborate With Bitcoin and Fintech Developers,"It will take time until banks come around to the idea of embracing Bitcoin or Fintech, though. Banks need to innovate at an accelerated pace, yet are unable to do so on their own. Allowing third-party developers to work together with the bank through API access would be a significant step in the right direction, as there is valuable input to be gathered from the Bitcoin and Fintech industries. Banks and other established financial players have not taken a liking to Fintech and digital currency just yet, as they see both industries as major competitors to their offerings. While it is certainly true Bitcoin and Fintech can bring significant improvements to the table, they should be seen as complementary allies who will bring success to the banking industry. Or that is what the Monetary Authority of Singapore seems to be thinking, at least. Embracing Bitcoin And Fintech As A Bank Is Necessary: There is no denying the banking sector has seen very little to no real innovation for quite some time now, opening the door for other players to step in and offer something entirely different. In fact, a lot of financial experts see Bitcoin and Fintech as two major threats to the banking system, which would explain the vocal opposition to any solution that is not controlled by a bank or government. But the Monetary Authority of Singapore sees things differently, as they feel both Fintech and Bitcoin are capable of complementing the current financial infrastructure, rather than competing with banks. A combination of traditional banking with innovative technology and services has the potential to create a powerful and versatile financial ecosystem all over the globe. Fintech startups and entrepreneurs are renowned for thinking outside of the proverbial box when it comes to accessing financial services. Particularly where the money lending and peer-to-peer aspect is concerned, most Fintech companies have a leg up over their more traditional counterparts. But that does not mean these startups are looking to overthrow the banking system, although they might be able to in the long run. Any bank that is too slow - or simply unwilling - to innovate will be left behind in the long term, though. Collaborating with Fintech players is a far more preferable solution to this alternative, although it may involve swallowing some of the pride associated with the banking system. Fintech companies are, by design, more capable of embracing new and advanced technological features, which makes them invaluable allies for traditional banks. A senior regulator of the Monetary Authority of Singapore stated: ""MAS' approach to fintech is to use the power of technology to help banks to succeed. 
It's not a game of disruptors versus enablers. We see fintech as an enabler... The good news is fintech companies by design are creating far superior technology products, and where success lies is in partnering with banks, and enabling banks to succeed."" But Bitcoin has a role to play in the financial world as well, although this system is completely outside of the control of banks and governments. Despite those barriers, the popular digital currency is available in every country in the world, allowing for frictionless and real-time transfers of value, regardless of previous access to financial services by consumers and businesses. So far, most of the major banks around the world have taken a page out of the Bitcoin playbook by trying to develop their own blockchain technology solutions. But that is not all, as several countries are considering issuing their own digital currency, rather than embracing Bitcoin. Whether or not that will be the right decision, in the end, remains to be seen.",en 1459194599,CONTENT SHARED,4119190424078847945,4340306774493623681,8940341205206233829,,,,HTML,https://bitcoinmagazine.com/articles/blockchain-technology-could-put-bank-auditors-out-of-work-1459179559,Blockchain Technology Could Put Bank Auditors Out of Work,"When most people think about computers and robots taking jobs away from humans, the images that usually come to mind are robots moving inventory around in an Amazon warehouse or McDonald's customers placing their order via a tablet instead of a cashier. But the robots are coming for much more sophisticated jobs as well. For example, blockchain technology is out to eat the lunches of some professionals in the traditional financial system. There's a Lot of Mistrust in the Banking System: At a recent blockchain-focused event in Toronto, Bitcoin Core contributor Peter Todd was asked to explain the reasoning behind Wall Street's increased interest in blockchain technology. During his initial response, Todd pointed out some of the mistrust that exists in the current financial system: ""The dirty secret is [the banks] don't actually trust [their databases]. I mean, they don't trust their own employees. ... They don't trust each other. There's so many levels of mistrust here."" Todd then discussed the massive industry built around financial audits. He noted: ""If they did trust all this stuff, why are there so many auditors? Why is there this massive infrastructure of labor-intensive human beings sitting there poring over transactions and trying to figure out where the money got created out of thin air. Where did the money disappear? Who moved what where? Was it all legit?"" Many financial institutions are interested in the concept of creating new systems for record-keeping, which would replace the current closed-ledger system with a more open alternative, similar to Bitcoin. Many believe this open system would enable more efficient and transparent auditing of financial activity. The Status Quo Is Doing All Right But It's Hard to Improve: Todd also pointed out that financial institutions are already pretty good at what they do in terms of audits. 
He stated, ""For the most part, bank fraud is at tolerable levels, it seems."" Todd noted that maintaining a proper history of financial activity is one of the issues with increasing the speed of settlement . Because audits are labor intensive and require man hours to complete, it's difficult to essentially come to consensus on the correct version of events in a nearly instantaneous manner. He added, ""The faster money can move around, the faster you could lose it all due to some hacker."" How Does the Blockchain Help? Todd spoke on the perceived advantages of blockchains over the current way things work, which relies on placing trust in database admins and the people with the keys to the system. From this perspective, a blockchain simply looks like a strong audit log. Todd gave a specific example of how this technology can help: ""It could be something as simple as when I, as a bank employee, type something in, we really do want a cryptographic signature that's actually tied to my keycard or something. And that should go into a database. Well, what does that look like? It looks like a blockchain."" The longtime Bitcoin researcher also pointed out that this is sort of what banks were already looking at doing before blockchain technology started to receive a lot of attention. He explained: ""I think where they're thinking of going naturally looks like blockchains, so when they hear all this blockchain stuff it's like, 'Oh yeah. This is roughly what we were looking at doing anyway.'"" Replacing Humans Is the Point At one point during the recent event in Toronto, Todd was asked if the trend is that blockchains will eventually replace human auditors. Todd responded: ""All this blockchain stuff is really about: How good can we make the security to get to the point where we can imagine getting rid of human beings?"" Indeed, Todd's comments appear to fit well with Satoshi Nakamoto 's original Bitcoin white paper . In the paper, Nakamoto stated: ""What is needed is an electronic payment system based on cryptographic proof instead of trust...."" Going back further, cypherpunk Nick Szabo has written about the concept that third parties are security holes. In addition to improving security by cutting out trusted parties, financial institutions can cut costs by replacing human labor with computer code. Kyle Torpey is a freelance journalist who has been following Bitcoin since 2011. His work has been featured on VICE Motherboard, Business Insider, NASDAQ, RT's Keiser Report and many other media outlets. You can follow @kyletorpey on Twitter.",en 1459194751,CONTENT SHARED,-7926018713416777892,4340306774493623681,8940341205206233829,,,,HTML,https://news.bitcoin.com/conglomerates-interview-openledger-ceo/,Why Decentralized Conglomerates Will Scale Better than Bitcoin - Interview with OpenLedger CEO - Bitcoin News,"Bitcoin.com spoke with the OpenLedger CEO, Ronny Boesing to get deeper insight of how decentralized conglomerates will challenge the status quo, why they will bring greater financial freedom to the public, and the disadvantages of Bitcoin when it comes to enterprise scaling compared to DC's. Also read: Deloitte: Blockchain Will 'Gain Significant Traction' by 2020 The Coming Age of 'Decentralized Conglomerates' As the OpenLedger team recently its Global Enterprise 3.0 program, its universal shared application called ""Decentralized Conglomerate"" (DC) is poised to revolutionize how we look at big business in the 21st century. 
",en 1459194751,CONTENT SHARED,-7926018713416777892,4340306774493623681,8940341205206233829,,,,HTML,https://news.bitcoin.com/conglomerates-interview-openledger-ceo/,Why Decentralized Conglomerates Will Scale Better than Bitcoin - Interview with OpenLedger CEO - Bitcoin News,"Bitcoin.com spoke with OpenLedger CEO Ronny Boesing to get deeper insight into how decentralized conglomerates will challenge the status quo, why they will bring greater financial freedom to the public, and the disadvantages of Bitcoin when it comes to enterprise scaling compared to DCs. The Coming Age of 'Decentralized Conglomerates': As the OpenLedger team recently announced its Global Enterprise 3.0 program, its universal shared application called ""Decentralized Conglomerate"" (DC) is poised to revolutionize how we look at big business in the 21st century. For a long time, the traditional conglomerate model went unchallenged (e.g., Berkshire Hathaway, Philip Morris) until the emergence of the Internet and increasing global interconnectivity. A new digital conglomerate structure began to emerge with the likes of Google (and its Alphabet Inc. parent company), which became more efficient at managing the production and distribution of their products. Nevertheless, the formal top-down structure of these new digital-age conglomerates remained, including the traditional way profits are distributed - that is, until now. Bitcoin.com (BC): Conglomerates first existed in physical form such as Berkshire Hathaway, after which we saw the advent of digital ones such as Google and Alphabet. Are decentralized autonomous conglomerates the next logical step in this evolution? Ronny Boesing (RB): Without question, Decentralized Conglomerates are the next logical step. It should be noted that there will be Autonomous and Semi-Autonomous organizations that emerge. The difference lies in the level of organizational influence exerted over the universal platform. In an Autonomous Conglomerate, the contracts and workers would be completely independent of having any external influence or association with another entity, but may simply be using the same platform as another organization. In a Semi-Autonomous Conglomerate, there will be some level of external influence, support, or association with an official business. This means that an organization has special interest in the associated operations of the Conglomerate, and some semblance of top-down influence may occur. BC: What are the basic advantages of a decentralized conglomerate over a centralized one like Google, for example? RB: A Decentralized Conglomerate allows organizations to join the forces of their communities on a universal platform that allows cross promotion and profit sharing, but does not force it. This paradigm also allows individual brand identities to flourish within the Conglomerate without having to worry about the interests of the Universal Platform conflicting with the interests of any given brand using the platform. In a traditional or even a digital conglomerate, the mission of the parent company guides the decisions of the subsidiaries. BC: Is anyone free to join such an online conglomerate community? RB: The system is agnostic, so that anyone who wants to create an asset for their organization or commodity has the capacity to do so. BC: Some could argue that MLM pyramid-structured companies are also ""profit sharing"" communities, though this entails a lot of risk as we all know. What kind of risk is there for people choosing to enter and buy the digital tokens of a decentralized conglomerate or its constituents? RB: The risk of purchasing such tokens depends on many things, such as the market capitalization of the DC, the number of organizations connected to the DC, the capacity to exchange tokens for other forms of currency or products, and the size of the reserve backing a given token. Not all tokens have equal risk, and some are far less risky than others to purchase. Everyone should do their due diligence to know how solid the infrastructure is behind any given token or DC. BC: Can't Bitcoin be considered the world's first decentralized conglomerate? RB: Bitcoin is not a conglomerate. It is a decentralized ledger. 
The only purpose of Bitcoin's blockchain was to keep track of transactions. Many organizations expanded upon the original protocol to create colored coins and languages that permitted applications to be built on top of the Bitcoin blockchain. However, this was not the original intention of ""Bitcoin"" proper, and specifically talking about ""Bitcoin,"" it is apolitical, without special interest, and has no official organizational associations. Bitcoin is the world's first globally accepted currency that has no organizational associations, and that is simultaneously the best and worst aspect about it. Since there is no organizational association, the problems that Bitcoin encounters rely on the community to come together and make a unilateral decision. As we have seen with the blocksize scaling issue, the transaction time debate, the recent DDoS attack, and the blockchain halt due to transaction costs, Bitcoin has some real problems that need to be addressed in order for it to be usable for enterprise-scale organizations or countries alike. In that regard, since Bitcoin is apolitical, it doesn't really make logical sense for any country to switch to something over which it has no control as its main currency. So in that regard, Bitcoin is not only not a conglomerate, the apolitical aspect serves to keep organizations away from holding the currency during the fluctuations, as there is no real organization that has a vested interest in keeping Bitcoin stable. BC: What exactly is OpenLedger's ""Bitcoin 3.0"" technology and what are Bitcoin's disadvantages compared to it when using it within the DC context? RB: As mentioned, Bitcoin has no associations. This means that the token can be purchased and amassed by any organization or country around the globe, leaving the token subject to wild swings due to margin trading. You see examples with organizations like Ethereum, Factom, or OpenLedger that have created a token that has organizational backing, and their respective tokens steadily increased in value instead of having wild unpredictable swings. Having an organization that has special interest in the Universal Ledger establishes a community and ecosystem which Bitcoin is lacking. It is the same type of culture that Apple has created using ""i"" branding that can be achieved with organizational associations. Bitcoin does not have this capacity. In order to get mass adoption or successful utilization, it is crucial to have a unified approach to the interface, and this becomes another major failing point of Bitcoin. OpenLedger has streamlined the UX/UI of digital currency transactions, and this represents a much more user-friendly approach with a smaller learning curve. Having development teams that can do research and development and then modify the UX based on user feedback is a great benefit that DCs have over Bitcoin. BC: Can you talk a little bit about the first partnerships between communities on this platform, namely OBITS and BitTeaser? RB: Every month, 70% of BitTeaser's monthly Bitcoin profits will be used to buy back BTSR and OBITS on the respective markets on OpenLedger. The OBITS buyback happens on the BTC (10%) and BTS (90%) markets, while the BTSR buyback is split across the BTS, BTC, USD and ETH markets at 25% each. BTSR holders will receive 80% of this share, while OBITS holders will receive 20%.
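On one reading of the split just described, the monthly flows can be worked through with simple arithmetic. The sketch below assumes a hypothetical 10 BTC monthly profit - not a reported figure - purely to make the quoted percentages concrete.

# All percentages follow the interview above; the 10 BTC monthly
# profit is an assumed example, not a reported number.
profit_btc = 10.0
buyback = 0.70 * profit_btc            # 70% of monthly profit -> 7.0 BTC

btsr_share = 0.80 * buyback            # BTSR holders' share: 80% -> 5.6 BTC
obits_share = 0.20 * buyback           # OBITS holders' share: 20% -> 1.4 BTC

# OBITS buyback: 10% on the BTC market, 90% on the BTS market.
obits_markets = {"BTC": 0.10 * obits_share, "BTS": 0.90 * obits_share}

# BTSR buyback: spread evenly (25% each) across four markets.
btsr_markets = {m: 0.25 * btsr_share for m in ("BTS", "BTC", "USD", "ETH")}

print(obits_markets)  # roughly {'BTC': 0.14, 'BTS': 1.26}
print(btsr_markets)   # roughly 1.4 BTC per market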
In the long term, BTSR holders may see an increase in the value per unit of their investment based on supply and demand, as well as from the tokens burned in every buyback, thus reducing the overall available supply. BitTeaser operates in a similar way to Google AdWords - revenue is generated based on providing advertising space for websites, companies and products. It currently serves over 1,000 webmasters, with monthly growth of around 15 to 20%. BC: Do they use smart contracts to establish the terms? How is consensus achieved? RB: Yes, the system uses smart contracts to establish terms of service and reward. The consensus is achieved by third-party confirmation that both sides have accomplished their respective tasks, whether it is putting money in escrow, or completing a contract request. BC: What was the approximate payout to the writers and bloggers using the platform? Will this amount increase as the overall platform grows? RB: At the last payment, some 11 BTC worth (currently around $4,500 USD) of BTSR and OBITS was sent to participating bloggers. That amount will increase as the network size and reach of the DC increases. BC: How does the DC model interact with traditional fiat currency? Do users have to use an exchange to cash out their rewards? RB: In the same way that foreign currencies or bitcoin need to be exchanged before being used, any DC token will need to be changed to be used outside of the infrastructure. If an organization within the DC offers products or services that are desired, a direct purchase using a token could then be made without any exchange. Now, OpenLedger has made it available for users to directly change tokens to USD, EUR, and CNY and have them directly deposited into their bank account. Users no longer have the hassle of the additional step of going to a bitcoin exchange to convert their smart money into fiat. Once a user has been validated, he/she is prompted to attempt a withdrawal. Once the withdrawal request is made on OpenLedger for whatever currency is asked, the money is then sent to the user's bank account with a 3% withdrawal fee. Users can enjoy a fee-free deposit period that ends in August. [Note: Special thanks to Larry C. Bates, out of Bloomington, Indiana, the architect behind the description of the DC and a good partner of OpenLedger, for his assistance in answering these questions.]",en 1459194842,CONTENT SHARED,3353902017498793780,4340306774493623681,8940341205206233829,,,,HTML,https://www.cryptocoinsnews.com/ethereum-rise-growth-new-york-times/,The Rise And Growth of Ethereum Gets Mainstream Coverage,"Ethereum, considered by many to be the most promising altcoin, has grabbed the attention of The New York Times. 
The ""newspaper of record"" has run a big feature story on Ethereum, noting its value has spiraled 1,000% in the last three months and is attracting interest from major financial companies that are using it for private blockchains and smart contracts. The story notes the first public version of Ethereum was recently released. The story noted there are numerous applications built on Ethereum that allow new ways to place bets, pay bills and even launch Ponzi schemes. Some cryptocurrency observers claim Ethereum will face more security issues than bitcoin due to its more complex software. The alt. currency system has faced less testing and suffered fewer attacks than bitcoin. The unusual design could also invite more regulatory scrutiny considering some potentially fraudulent contracts can be written into the system. Capabilities Draw Interest However, the system's capabilities have drawn interest from major corporations. Last year, IBM announced it was testing the alt.currency system as a way to manage real-world objects in the Internet of Things. Microsoft has been working on various projects that make it easier to use Ethereum on its Azure computing cloud. Marley Gray, a business development and strategy director at Microsoft, said Ethereum is a platform for solving problems in different industries. He called it the most ""elegant"" solution seen to date. Private Blockchains Expand Many companies have created private blockchains using Ethereum. These private blockchains could eventually undermine the value of the Ether, the individual unit in the alt.currency system that people have been buying. Major banks have shown interest in using blockchains to make transferring and trading money more efficient. Several banks have examined how to use Ethereum. JPMorgan has developed a tool called Masala that enables its internal databases to interact with an Ethereum-blockchain. Michael Novogratz, who helped lead Fortress Investing Group invest in bitcoin, has been examining Ethereum since leaving Fortress last fall. He said he has made a significant purchase of Ether. Novogratz said in the last few months that the alt.currency is getting more validation. Ethereum Value Soars Ether's value has increased to as high as $12 from $1 since the beginning of the year, taking the value of all Ether currency to above $1 billion at times. This surpasses the value of any virtual currency besides bitcoin, which claimed more than $6 billion in value last week. Ethereum leads all altcoins in the size of its following One difference between Ethereum and bitcoin is that the latter came into being in a more transparent fashion. Where bitcoin was created by Satoshi Nakamoto, whose identity remains a mystery, Ethereum was developed by Vitalik Buterin , a 21-year-old Russian, who created it after dropping out of Waterloo University in Ontario. Aim Was Smart Contracts Ethereum's basic aim was to make it possible to program agreements into the blockchain. This is the smart contract concept. Two people could, for instance, program a sport bet directly into the Ethereum blockchain. Once a mutually-accepted source announced the final score (such as the Associated Press), the winnings would transfer automatically to the winner. Ether can be used as currency. Ether is also necessary to pay for the network power required to process the bet. Ethereum has been characterized as a single-shared computer operated by the network of users on which resources are doled out and paid for by Ether. 
Buterin's team raised $18 million through an Ether presale in 2014 to help fund the Ethereum Foundation, which supports the development of the software. Ethereum's Following: Ethereum has attracted followers who have supported the software and hope that Ether will rise in value. There were 5,800 nodes, or computers, supporting the network last week. The bitcoin network had about 7,400 nodes. Joseph Lubin, a co-founder, established ConsenSys, which has hired more than 50 developers to create applications on the alt.currency system. One enables music distribution while another permits a new type of financial auditing. Lubin said he became interested in Ethereum after determining it delivered on some of bitcoin's failed promise, particularly when it came to permitting new types of online markets and contracts. He said Ethereum presented the crystallization of how to deliver on the broad strokes vision that bitcoin presented. Joseph Bonneau, a Stanford computer science researcher, said Ethereum is the first system that caught his interest since bitcoin. Ethereum is nonetheless far from a certainty, Bonneau said. He said bitcoin is still most likely the safest bet, with Ethereum number two. He said Ethereum's longevity will depend on real markets developing around it.",en 1459210504,CONTENT SHARED,-9157338616628196758,5206835909720479405,-7864441319395545950,,,,HTML,http://economia.ig.com.br/2016-03-27/situacao-financeira-ruim-de-varejistas-pressiona-shoppings-e-eleva-renegociacoes.html,Retailers' poor financial health pressures shopping malls and drives up renegotiations - iG,"Falling sales and the deteriorating financial health of retailers have forced shopping malls to renegotiate some of the obligations tenants carry in the occupancy costs of retail locations. The situation has intensified beyond the negotiation of one-off discounts, with installment plans aimed at avoiding defaults and even rising vacancy rates. The expansion director of the retailer Hope, Sylvio Korytowski, reports that many malls have agreed to freeze the due dates of installments of the so-called ""luvas"" (key money), fees charged to new tenants to guarantee the ""right"" to use a particular spot in a mall. The amounts, which vary according to demand, are normally paid in 12 or 24 monthly installments, but tenants have now been obtaining a kind of ""temporary forgiveness,"" going some time without paying and getting more months to settle the amounts. The price can be negotiated between a tenant leaving the location and one arriving, but part of the sum usually goes to the mall operator. ""Sometimes, when a mall is doing poorly, some developers offer nearly zero rent to the tenant because, if the space sits vacant, it is the mall that pays for electricity, security, cleaning and other day-to-day costs,"" explains an industry executive who preferred not to be identified. ""In such cases, there is no possibility of the mall charging key money,"" he adds. The malls' effort is to keep important stores from closing amid rising costs and weak sales, but the renegotiations also obscure the risk of a blowout in defaults. 
A CEO da GS&AGR Consultores, Ana Paula Tozzi, avalia que os efeitos do quadro financeiro ruim dos varejistas ainda não aparecem totalmente nas provisões para inadimplência dos operadores de shopping, mas considera que muitos lojistas estão em dificuldade para pagar obrigações e pedindo refinanciamentos e renegociações. Fato é que a inadimplência tem aumentado em alguns centros de compras. Nos empreendimentos da BRMalls, a expectativa é que a taxa tenha um ""aumento pequeno"" no primeiro trimestre em comparação com igual intervalo do ano passado, quando estava em 4,4%. Já os atrasos superiores a 25 dias no pagamento de alugueis nos shopping centers da Multiplan subiram para 1,9% no quarto trimestre de 2015, de 1,7% em igual período do ano anterior. No mesmo período, a taxa de perda de aluguel aumentou para 1,2%, de 0,6%. O momento de enfraquecimento das vendas tem levado algumas lojas a dificuldades financeiras e, em alguns casos, culminado em processos de recuperação judicial. É o que ocorreu com o grupo GEP, dono das marcas Luigi Bertolli, Cori e franqueado da marca norte-americana GAP no Brasil. O grupo, que opera 97 lojas no Brasil, a maioria em shoppings, afirmou no pedido de recuperação judicial que ""o volume de vendas diminuiu consideravelmente"" em meio a crise macroeconômica. Varejistas relatam ainda que concessões estão sendo feitas mesmo em shoppings considerados ""de primeira linha"", ou seja, os que têm maior fluxo de clientes. Um exemplo citado por representantes de grandes redes do varejo é o da Iguatemi. Segundo eles, a companhia está concedendo descontos em shoppings da capital paulista em troca de impedir que as redes fechem lojas em centros comerciais da grande São Paulo e interior, os quais os varejistas veem com menos interesse. Ao Broadcast, serviço em tempo real da Agência Estaado, a diretora financeira e de relações com investidores da Iguatemi, Cristina Betts, diz que não há uma política única em relação aos descontos. ""Depende muito dos grupos de varejo de que estamos falando e dos locais onde temos lojas com eles. Não dá para dizer que existe uma lógica apenas"", afirma. Por ter concedido mais descontos, a companhia conseguiu reduzir sua taxa de inadimplência para 1,1% no quarto trimestre de 2015, saindo de 1 8% em igual período do ano anterior. ""Tivemos em 2015 um esforço de antecipar movimentos de descontos, principalmente em shoppings em maturação"", diz. ""Preferimos que o lojista fique bem e sobreviva, diminuindo a inadimplência. Houve um aumento de desconto em 2015, mas esperamos que fique estável em 2016"", acrescenta. Para Korytowski, o desafio das negociações hoje é que há uma massa muito grande de lojistas em dificuldades e pedindo ajuda. Isso torna difícil a tarefa do shopping de identificar quem de fato pode sobreviver com algum tipo de renegociação, já que em alguns casos isso não é suficiente para evitar o fechamento das lojas. ""No cenário atual, todo mundo pede ajuda, quem precisa e quem não precisa. Faz parte do nosso trabalho identificar quem precisa e quem queremos dentro dos shoppings. Não adianta tentar ajudar um lojista que não tem futuro"", afirma a diretora do Iguatemi, ao lembrar o fechamento da Nôa Nôa no Iguatemi São Paulo, em 2008, que abriu espaço para a chegada da Channel. ""Em épocas de mais dificuldades, tem um movimento mais acelerado de substituição de marcas que não mostram um desempenho tão bom. São poucas as marcas eternas"", acrescenta. 
",pt 1459217624,CONTENT SHARED,1805789466376069146,-1032019229384696495,3042342415047984532,,,,HTML,https://cloud.google.com/compute/docs/load-balancing/http/,Setting Up HTTP(S) Load Balancing,"HTTP(S) load balancing provides global load balancing for HTTP(S) requests destined for your instances. You can configure URL rules that route some URLs to one set of instances and route other URLs to other instances. Requests are always routed to the instance group that is closest to the user, provided that group has enough capacity and is appropriate for the request. If the closest group does not have enough capacity, the request is sent to the closest group that does have capacity. HTTP requests can be load balanced on port 80 or port 8080. HTTPS requests can be load balanced on port 443. The load balancer acts as an HTTP/2 to HTTP/1.1 translation layer, which means that the web servers always see and respond to HTTP/1.1 requests, but requests from the browser can be HTTP/1.0, HTTP/1.1, or HTTP/2. HTTP(S) load balancing does not support WebSocket; you can use WebSocket traffic with Network load balancing. Before you begin HTTP(S) load balancing uses instance groups to organize instances. Make sure you are familiar with instance groups before you use load balancing. Fundamentals Overview An HTTP(S) load balancer is composed of several components. The following diagram illustrates the architecture of a complete HTTP(S) load balancer: The following sections describe how the components work together to make up each type of load balancer. For a detailed description of each component, see Components below. HTTP load balancing A complete HTTP load balancer is structured as follows: A global forwarding rule directs incoming requests to a target HTTP proxy. The target HTTP proxy checks each request against a URL map to determine the appropriate backend service for the request. The backend service directs each request to an appropriate backend based on serving capacity, zone, and instance health of its attached backends. The health of each backend instance is verified using either an HTTP health check or an HTTPS health check. If the backend service is configured to use the latter, the request will be encrypted on its way to the backend instance. In addition, you must create a firewall rule to enable traffic to your HTTP load balancer. The rule should enable traffic on the port your global forwarding rule has been configured to use (either 80 or 8080). HTTPS load balancing An HTTPS load balancer shares the same basic structure as an HTTP load balancer (described above), but differs in the following ways: Uses a target HTTPS proxy instead of a target HTTP proxy Requires a signed SSL certificate for the load balancer Requires a firewall rule that enables traffic on port 443 The client SSL session terminates at the load balancer. Sessions between the load balancer and the instance can be either HTTPS (recommended) or HTTP. If HTTPS, each instance must have a certificate. Components Global forwarding rules and addresses Global forwarding rules route traffic by IP address, port, and protocol to a load balancing configuration consisting of a target proxy, URL map, and one or more backend services. Each global forwarding rule provides a single global IP address that can be used in DNS records for your application. No DNS-based load balancing is required.
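To make the chain of components concrete, here is a minimal sketch of assembling a complete HTTP load balancer with the gcloud tool. The resource names (http-basic-check, web-backend-service, web-map, http-lb-proxy, http-rule) are illustrative assumptions, not resources defined in this guide:
gcloud compute http-health-checks create http-basic-check
gcloud compute backend-services create web-backend-service --http-health-checks http-basic-check
gcloud compute url-maps create web-map --default-service web-backend-service
gcloud compute target-http-proxies create http-lb-proxy --url-map web-map
gcloud compute forwarding-rules create http-rule --global --target-http-proxy http-lb-proxy --port-range 80
Backends would still be attached with gcloud compute backend-services add-backend, and a firewall rule must allow traffic on port 80. If no --address is passed to the forwarding rule, an ephemeral global IP address is assigned instead of a reserved one.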
You can either specify the IP address to be used or let Google Compute Engine assign one for you. Target proxies Target proxies terminate HTTP(S) connections from clients; they are referenced by one or more global forwarding rules and route the incoming requests to a URL map. The proxies set HTTP request/response headers as follows: Via: 1.1 google (requests and responses) X-Forwarded-Proto: [http | https] (requests only) X-Forwarded-For: [CLIENT_IP], [GLOBAL_FORWARDING_RULE_IP] (requests only). This can be a comma-separated list of IP addresses, depending on the X-Forwarded-For entries appended by the intermediaries the client traffic travels through; the first element shows the origin address. X-Cloud-Trace-Context: [TRACE_ID]/[SPAN_ID];[OPTIONS] (requests only). Parameters for Stackdriver Trace. URL maps URL maps define matching patterns for URL-based routing of requests to the appropriate backend services. A default service is defined to handle any requests that do not match a specified host rule or path matching rule. In some situations, such as the cross-region load balancing example, you might not define any URL rules and rely only on the default service. For content-based routing of traffic, the URL map allows you to divide your traffic by examining the URL components to send requests to different sets of backends. SSL certificates SSL certificates are used by target HTTPS proxies to securely route incoming HTTPS requests to backend services defined in a URL map. Backend services Backend services direct incoming traffic to one or more attached backends. Each backend is composed of an instance group and additional serving capacity metadata. Backend serving capacity can be based on CPU or requests per second (RPS). Each backend service also specifies which health checks will be performed against the available instances. HTTP(S) load balancing supports Compute Engine Autoscaler, which allows users to perform autoscaling on the instance groups in a backend service. For more information, see Scaling Based on HTTP load balancing serving capacity. Load distribution algorithm HTTP(S) load balancing provides two methods of determining instance load. Within the backend service object, the balancingMode property selects between the requests per second (RPS) and CPU utilization modes. Both modes allow a maximum value to be specified; the HTTP load balancer will try to ensure that load remains under the limit, but short bursts above the limit can occur during failover or load spike events. Incoming requests are sent to the region closest to the user that has remaining capacity. If more than one zone is configured with backends in a region, the traffic is distributed across the instance groups in each zone according to each group's capacity. Within the zone, the requests are spread evenly over the instances using a round-robin algorithm. Round-robin distribution can be overridden by configuring session affinity. Session affinity Alpha This is an Alpha release of HTTP(S) Load Balancing Session Affinity. This feature might be changed in backward-incompatible ways and is not recommended for production use. It is not subject to any SLA or deprecation policy. Request to be whitelisted to use this feature. Session affinity sends all requests from the same client to the same virtual machine instance as long as the instance stays healthy and has capacity.
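As a sketch only: enabling the client IP variant of session affinity described next would presumably be a backend-service setting. The --session-affinity flag and CLIENT_IP value shown here are the shape this takes in later releases of gcloud, given as an assumption for the Alpha feature, not a command taken from this guide; my-backend-service is likewise an illustrative name:
gcloud alpha compute backend-services update my-backend-service --session-affinity CLIENT_IP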
Google Cloud HTTP(S) Load Balancing offers two types of session affinity: client IP affinity and generated cookie affinity. Interfaces Your HTTP(S) load balancing service can be configured and updated through the following interfaces: The gcloud tool: gcloud is a command-line tool included in the Cloud SDK. The HTTP(S) load balancing documentation calls on this tool frequently to accomplish tasks. For a complete overview of gcloud documentation, see the gcloud Tool Guide. You can find commands related to load balancing in the gcloud compute and gcloud preview command groups. You can also get detailed help for any gcloud command by using the --help flag: gcloud compute http-health-checks create --help The Google Cloud Platform Console: Load balancing tasks can be accomplished through the Google Cloud Platform Console. The REST API: All load balancing tasks can be accomplished using the Google Compute Engine API. The API reference docs describe the resources and methods available to you. TLS support An HTTPS target proxy accepts only TLS 1.0 and up when terminating client SSL requests. It speaks only TLS 1.0 and up to the backend service when the backend protocol is HTTPS. Logging Alpha This is an Alpha release of Google Cloud HTTP(S) Load Balancing Logging. This feature might be changed in backward-incompatible ways and is not recommended for production use. It is not subject to any SLA or deprecation policy. Request to be whitelisted to use this feature. Each HTTP(S) request is logged via Google Cloud Logging. If you have been accepted into the Alpha testing phase, logging is automatic and does not need to be enabled. How to view logs To view logs, go to the Logs Viewer in the Cloud Platform Console. HTTP(S) logs are indexed first by forwarding rule, then by URL map. To see all logs, in the first pull-down menu select Load Balancing > All forwarding rules. To see logs for just one forwarding rule, select a single forwarding rule name from the list. To see logs for just one URL map used by a forwarding rule, select Load Balancing and choose the forwarding rule and URL map of interest. What is logged In addition to general information contained in most logs, such as severity, project ID, project number, and timestamp, HTTP(S) load balancing logs contain HttpRequest log fields. Log fields of type boolean typically only appear if they have a value of true. If a boolean field has a value of false, that field is omitted from the log. UTF-8 encoding is enforced for these fields. Characters that are not UTF-8 characters are replaced with question marks. Next steps The following guides demonstrate two different scenarios using the HTTP(S) load balancing service. These scenarios provide a practical context for HTTP(S) load balancing and demonstrate how you might set up load balancing for your specific needs. Cross-region load balancing You can use a global IP address that can intelligently route users based on proximity. For example, if you set up instances in North America, Europe, and Asia, users around the world will be automatically sent to the backends closest to them, assuming those instances have enough capacity. If the closest instances do not have enough capacity, cross-region load balancing automatically forwards users to the next closest region. Get started with cross-region load balancing Content-based load balancing Content-based or content-aware load balancing uses HTTP(S) load balancing to distribute traffic to different instances based on the incoming HTTP(S) URL.
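The /video example that follows could be expressed, roughly, through the url-maps command group. The names web-map, web-backend-service and video-service are illustrative assumptions, not resources defined in this guide:
gcloud compute url-maps create web-map --default-service web-backend-service
gcloud compute url-maps add-path-matcher web-map --path-matcher-name video-matcher --default-service web-backend-service --path-rules '/video/*=video-service' --new-hosts example.com
Requests matching example.com/video/ would then be routed to video-service, with everything else falling through to web-backend-service.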
For example, you can set up some instances to handle your video content and another set to handle everything else. You can configure your load balancer to direct traffic for example.com/video to the video servers and example.com/ to the default servers. Get started with content-based load balancing Content-based and cross-region load balancing can work together by using multiple backend services and multiple regions. You can build on top of the scenarios above to configure your own load balancing configuration that meets your needs. Notes and Restrictions HTTP(S) load balancing does not support the HTTP/1.1 100 Continue response. This might affect multipart POST requests. The load balancing configuration automatically creates firewall rules if the instance operating system is a Compute Engine image. If not, you have to create the firewall rules manually. Load balancing does not keep instances in sync. You must set up your own mechanisms, such as using Deployment Manager, for ensuring that your instances have consistent configurations and data. Troubleshooting Traffic from the load balancer to your instances has an IP address in the range of 130.211.0.0/22. When viewing logs on your load balanced instances, you will not see the source address of the original client. Instead, you will see source addresses from this range.",en 1459217735,CONTENT SHARED,-2081760549863309770,-1032019229384696495,3042342415047984532,,,,HTML,https://cloud.google.com/compute/docs/load-balancing/tcp-ssl/,Setting Up SSL proxy for Google Cloud Load Balancing,"Alpha This is an Alpha release of Setting Up SSL proxy for Google Cloud Load Balancing. This feature might be changed in backward-incompatible ways and is not recommended for production use. It is not subject to any SLA or deprecation policy. Request to be whitelisted to use this feature. Google Cloud SSL proxy terminates user SSL (TLS) connections at the global load balancing layer, then balances the connections across your instances via SSL or TCP. Cloud SSL proxy is intended for non-HTTP(S) traffic. For HTTP(S) traffic, HTTP(S) load balancing is recommended instead. Contents Google Cloud Load Balancing with SSL proxy Setting up SSL load balancing Configure instance groups and backend services Configure frontend services Additional Commands Listing target SSL proxies Describe target SSL proxies Delete target SSL proxy Update a backend service for the target SSL proxy Update the SSL certificates for the target SSL proxy Update PROXY protocol header for the proxy PROXY protocol for retaining client connection information Recommendations Troubleshooting Pages fail to load from load balancer IP Alpha Limitations FAQ When should I use HTTPS load balancing instead of SSL proxy load balancing? Can I view the original IP address of the connection to the global load balancing layer? Overview We are excited to introduce SSL (TLS) proxying for your SSL traffic. With SSL proxy, you can terminate your customers' SSL sessions at the global load balancing layer, then forward the traffic to your virtual machine instances using SSL (recommended) or TCP. SSL proxy is a global load balancing service. You can deploy your instances in multiple regions, and global load balancing will automatically direct traffic to the region closest to the user. If a region is at capacity, the load balancer automatically directs new connections to another region with available capacity. Existing user connections remain in the current region.
We recommend end-to-end encryption for your SSL proxy deployment by configuring your backend service to accept traffic over SSL ( backend-services --protocol SSL ). This ensures that the client traffic decrypted at the SSL proxy layer is encrypted again before being sent to the backend instances. This end-to-end encryption requires you to provision certificates and keys on your instances so they can perform SSL processing. The advantages of managed SSL proxy are as follows: Intelligent routing - the load balancer can route requests to backend locations where there is capacity. In contrast, an L3/L4 load balancer must route to regional backends without paying attention to capacity. Use of smarter routing allows provisioning at N+1 or N+2 instead of x*N. Better utilization of the backend servers - SSL processing can be very CPU intensive if the ciphers used are not CPU efficient. To maximize CPU performance, use ECDSA SSL certificates and TLS 1.2, and prefer the ECDHE-ECDSA-AES128-GCM-SHA256 cipher suite for SSL between the load balancer and your instances. Certificate management - You only need to update your customer-facing certificate in one place when you need to switch certs. You can also reduce the management overhead for your instances by using self-signed certificates. Security patching - If vulnerabilities arise in the SSL or TCP stack, we will apply patches at the load balancer automatically in order to keep your instances safe. Notes: While choosing to send the traffic over unencrypted TCP ( backend-services --protocol TCP ) between the global load balancing layer and instances enables you to manage your SSL certificates in one place and offload SSL processing from your instances, it also comes with reduced security between your global load balancing layer and instances and is therefore not recommended. SSL proxy can handle HTTPS, but this is not recommended. You should instead use HTTP(S) load balancing for HTTPS traffic. See the FAQ for details. We now describe how SSL proxy works and walk you through configuring an SSL proxy for load balancing traffic to some instances. Google Cloud Load Balancing with SSL proxy With SSL proxy at the global load balancing layer, traffic coming over an SSL connection is terminated at the global layer and then proxied to the closest available instance group. In this example, the traffic from the users in Iowa and Boston is terminated at the global load balancing layer, and a separate connection is established to the selected backend instance. Google Cloud Load Balancing with SSL termination (diagram) Setting up SSL load balancing This example demonstrates setting up global SSL load balancing for a simple service that exists in two regions: us-central1 and us-east1 . We will configure the following: Instance groups for holding the instances A pair of instances for each instance group A health check for verifying instance health A backend service, which monitors instances in groups and prevents them from exceeding configured usage The SSL proxy itself with its SSL certificate A public static IP address and forwarding rule that sends user traffic to the proxy A firewall rule for the load balancer IP address After that, we'll test our configuration. Configure instance groups and backend services This section shows how to create simple instance groups, add instances to them, then add those instances to a backend service with a health check.
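The guide elides the actual instance creation commands in its Create two instances step below. As a sketch only, the four test instances might be created as follows; the names, zones, and ssl-lb tag match those used later in the walkthrough, while the machine image and the Apache startup script are assumptions left for you to supply:
gcloud compute instances create ig-us-central1-1 ig-us-central1-2 --zone us-central1-b --tags ssl-lb
gcloud compute instances create ig-us-east1-1 ig-us-east1-2 --zone us-east1-b --tags ssl-lb
Apache could then be installed on each instance by hand or via a startup script.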
A production system would normally use managed instance groups based on instance templates , but this setup is quicker for initial testing. Backend configuration (diagram) Create an instance group for each zone gcloud compute instance-groups unmanaged create us-ig1 --zone us-central1-b Created [ [PROJECT_ID]/zones/us-central1-b/instanceGroups/us-ig1]. NAME ZONE NETWORK MANAGED INSTANCES us-ig1 us-central1-b 0 gcloud compute instance-groups unmanaged create us-ig2 --zone us-east1-b Created [ [PROJECT_ID]/zones/us-east1-b/instanceGroups/us-ig2]. NAME ZONE NETWORK MANAGED INSTANCES us-ig2 us-east1-b 0 Create two instances in each zone For testing purposes, we'll install Apache on each instance. Normally, you wouldn't use SSL load balancing for HTTP traffic, but Apache is commonly used and is easy to set up for testing. These instances are all created with a tag of ssl-lb . This tag is used later by the firewall rule. Add instances to the instance groups Add ig-us-central1-1 and ig-us-central1-2 to us-ig1 gcloud compute instance-groups unmanaged add-instances us-ig1 \ --instances ig-us-central1-1,ig-us-central1-2 \ --zone us-central1-b Updated [ [PROJECT_ID]/zones/us-central1-b/instanceGroups/us-ig1]. Add ig-us-east1-1 and ig-us-east1-2 to us-ig2 gcloud compute instance-groups unmanaged add-instances us-ig2 \ --instances ig-us-east1-1,ig-us-east1-2 \ --zone us-east1-b Updated [ [PROJECT_ID]/zones/us-east1-b/instanceGroups/us-ig2]. You now have instance groups in two different regions, each with two instances. When we create a backend service, we have to specify a health check, so we'll create the health check next. Create a health check You can configure either an SSL or TCP health check for determining the health of your instances. If you are using SSL between the load balancer and the instances, use an SSL health check. If you are using plain TCP between the load balancer and the instances, use a TCP health check. Once configured, health checks are sent on a regular basis to the specified port on all the instances in the configured instance groups. If the health check fails, the instance is marked as UNHEALTHY and the load balancer stops sending new connections to that instance until the instance becomes healthy again. Existing connections are allowed to continue. In this example, we're creating a simple SSL health check that we'll use with our backend service. This health check does a simple SSL handshake with each instance on port 443 to determine health. If the handshake succeeds twice in a row (the default), the new instance is marked HEALTHY . If the handshake fails twice in a row (the default) on a HEALTHY instance, the instance is marked UNHEALTHY . If the handshake is again successful twice in a row, the instance is marked as HEALTHY again. See the Health checks section for more information and options. gcloud alpha compute health-checks create ssl my-ssl-health-check --port 443 Created [ [PROJECT_ID]/global/healthChecks/my-ssl-health-check]. NAME PROTOCOL my-ssl-health-check SSL Create a backend service A backend service defines the capacity, max utilization, and health check of the instance groups it contains. Backend services direct incoming traffic to one or more attached backends (depending on the load balancing mode, discussed later). Each backend consists of an instance group and additional configuration to balance traffic among the instances in the instance group. Each instance group is composed of one or more instances.
Each backend service also specifies which health checks will be performed for the instances in all the instance groups added to the backend service. The duration of idle SSL proxy connections through the load balancer is limited by the backend service timeout. In this example, we'll add a backend service that connects to instances over SSL. This only governs connections between the load balancer and the instance, not the connections between users and the load balancer. gcloud alpha compute backend-services create my-backend-service \ --protocol SSL \ --health-check my-ssl-health-check \ --timeout 5m Created [ [PROJECT_ID]/global/backendServices/my-backend-service]. NAME BACKENDS PROTOCOL my-backend-service SSL Alternatively, you could configure unencrypted communication between the load balancer and the instances with --protocol TCP . Configure your backend service When you configure a backend service, you must add instance groups and specify a balancing mode that determines how much traffic the load balancer can send to instances in each instance group. Once the limit is reached for a particular instance group, additional requests are sent to the instance group that is next closest to the user, as long as it has capacity. SSL proxy supports the following balancing mode: UTILIZATION (default): instances can accept traffic as long as the average current CPU utilization of the instance group is below an indicated value. To set this value, use the --max-utilization parameter and pass a value between 0.0 (0%) and 1.0 (100%). Default is 0.8 (80%). For this example, we'll add both instance groups to the same backend service and set the balancing mode to send traffic to instance groups that have not reached 80% utilization. gcloud alpha compute backend-services add-backend my-backend-service \ --instance-group us-ig1 \ --zone us-central1-b \ --balancing-mode UTILIZATION \ --max-utilization 0.8 Updated [ [PROJECT_ID]/global/backendServices/my-backend-service]. gcloud alpha compute backend-services add-backend my-backend-service \ --instance-group us-ig2 \ --zone us-east1-b \ --balancing-mode UTILIZATION \ --max-utilization 0.8 Updated [ [PROJECT_ID]/global/backendServices/my-backend-service]. Configure frontend services This section shows how to create the following frontend resources: an SslCertificate resource to use with the load balancer an SSL proxy load balancer a static external IP address and a forwarding rule to use with that address a firewall rule that allows traffic from the load balancer and the health checker to reach the instances Frontend configuration (diagram) Configure an SSL certificate and key If you don't have a private key and signed certificate, you can create and use a self-signed certificate for testing purposes, or get a real certificate from a certificate authority. See SSL Certificates for further information. You should not use a self-signed certificate on the load balancer for production purposes. This step takes your certificate and key and creates an SSL certificate resource that you will assign to your SSL proxy in the next step. gcloud compute ssl-certificates create my-ssl-cert \ --certificate [CRT_FILE_PATH] \ --private-key [KEY_FILE_PATH] Created [ [PROJECT_ID]/global/sslCertificates/ssl-cert1]. NAME CREATION_TIMESTAMP ssl-cert1 2016-02-20T20:53:33.584-08:00 Configure a target SSL proxy The target SSL proxy receives the packets from the user and sends them to the backend service.
When you create the target SSL proxy, you associate your backend service and SSL certificate with that resource. If you want to enable insertion of the PROXY protocol version 1 header, you can configure the command below with --proxy-header PROXY_V1 . For more information on PROXY protocol, see Update proxy protocol header for the proxy . gcloud alpha compute target-ssl-proxies create my-target-ssl-proxy \ --backend-service my-backend-service \ --ssl-certificate my-ssl-cert \ --proxy-header NONE Created [ [PROJECT_ID]/global/targetSslProxies/my-target-ssl-proxy]. NAME PROXY_HEADER SERVICE SSL_CERTIFICATES my-target-ssl-proxy NONE my-backend-service ssl-cert1 Reserve a global static IP address Now we need to create a global reserved static IP address for your service. This IP address is the one your customers will use to access your load-balanced service. gcloud compute addresses create ssl-lb-static-ip --global Created [ [PROJECT_ID]/global/addresses/ssl-lb-static-ip]. NAME REGION ADDRESS STATUS ssl-lb-static-ip [LB_STATIC_IP] RESERVED Configure a global forwarding rule Create a global forwarding rule to forward specific IPs and ports to the target SSL proxy. When customer traffic arrives at your external IP address, this forwarding rule tells the network to send that traffic to your SSL proxy. To create a global forwarding rule associated with the target proxy, replace LB_STATIC_IP with the IP address you reserved in the prior step. gcloud alpha compute forwarding-rules create my-global-forwarding-rule \ --global \ --target-ssl-proxy my-target-ssl-proxy \ --address [LB_STATIC_IP] \ --port-range 443 Created [ [PROJECT_ID]/global/forwardingRules/my-global-forwarding-rule]. NAME REGION IP_ADDRESS IP_PROTOCOL TARGET my-global-forwarding-rule [LB_STATIC_IP] TCP my-target-ssl-proxy Create a firewall rule for the SSL load balancer Configure the firewall to allow traffic from the load balancer and health checker to the instances. gcloud compute firewall-rules create allow-ssl-130-211-0-0-22 \ --source-ranges 130.211.0.0/22 \ --target-tags ssl-lb \ --allow tcp:443 Created [ [PROJECT_ID]/global/firewalls/allow-ssl-130-211-0-0-22]. NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS allow-ssl-130-211-0-0-22 default 130.211.0.0/22 tcp:443 ssl-lb Test your load balancer In your web browser, connect to your static IP address via HTTPS. In this test setup, the instances are using self-signed certificates. Therefore, you will see a warning in your browser the first time you access a page. Click through the warning to see the actual page. You should see one of the hosts from the region closest to you. Reload the page until you see the other instance in that region. To see instances from the other region, either stop the instances in the closest region or disable Apache on those instances. Alternatively, you can use curl from your local machine's command line. If you are using a self-signed certificate on the SSL proxy, you must also specify -k .
curl -k https://[LB_STATIC_IP] Additional Commands Listing target SSL proxies gcloud alpha compute target-ssl-proxies list NAME PROXY_HEADER SERVICE SSL_CERTIFICATES my-target-ssl-proxy NONE my-backend-service ssl-cert1 Describe target SSL proxies gcloud alpha compute target-ssl-proxies describe my-target-ssl-proxy creationTimestamp: '2016-02-20T20:55:17.633-08:00' id: '9208913598676794842' kind: compute#targetSslProxy name: my-target-ssl-proxy proxyHeader: NONE selfLink: [PROJECT_ID]/global/targetSslProxies/my-target-ssl-proxy service: [PROJECT_ID]/global/backendServices/my-backend-service sslCertificates: - [PROJECT_ID]/global/sslCertificates/ssl-cert1 Delete target SSL proxy To delete a target proxy, you must first delete any global forwarding rules that reference it. Update a backend service for the target SSL proxy You can use the update command to point your SSL proxy at a different backend service. In this example, we'll create a new backend service and point the proxy at it. We'll then point it back at the original backend service. Update the SSL certificates for the target SSL proxy Use this command to replace the SSL certificate on the SSL proxy. You must already have created a second SSL certificate resource. gcloud alpha compute target-ssl-proxies update my-target-ssl-proxy \ --ssl-certificate ssl-cert2 Updated [ [PROJECT_ID]/global/targetSslProxies/my-target-ssl-proxy]. Update PROXY protocol header for the proxy Use this command to change the PROXY protocol header for an existing target SSL proxy. gcloud alpha compute target-ssl-proxies update my-target-ssl-proxy \ --proxy-header [NONE | PROXY_V1] Updated [ [PROJECT_ID]/global/targetSslProxies/my-target-ssl-proxy]. PROXY protocol for retaining client connection information Google Cloud Load Balancing with SSL proxy terminates SSL connections from the client and creates new connections to the instances, hence the original client IP and port information is not preserved by default. If you would like to preserve and send this information to your instances, you will need to enable PROXY protocol (version 1), whereby an additional header containing the original connection information, including source IP address, destination IP address, and port numbers, is added and sent to the instance as part of the request. The PROXY protocol header will typically be a single line of user-readable text in the following format: PROXY TCP4 [CLIENT_IP] [LB_IP] [CLIENT_PORT] [DEST_PORT]\r\n An example of the PROXY protocol header is shown below: PROXY TCP4 192.0.2.1 198.51.100.1 15221 443\r\n Here the client IP is 192.0.2.1 , the load balancing IP is 198.51.100.1 , the client port is 15221 and the destination port is 443 . In cases where the client IP is not known, the load balancer will generate a PROXY protocol header in the following format: PROXY UNKNOWN\r\n Health checks Health checks determine which instances can receive new connections. The health checker polls instances at specified intervals. Instances that fail the check are marked as UNHEALTHY . However, the health checker continues to poll unhealthy instances. If an instance passes its health check, it is marked HEALTHY . You can configure either an SSL or TCP health check to determine the health of your backend instances. If you are using SSL between the load balancer and the instances, use an SSL health check. If you are using plain TCP between the load balancer and the instances, use a TCP health check. Once configured, health checks will be sent on a regular basis on the specified port to all the instances in the configured instance groups. When you configure the health check to be of type SSL , an SSL connection is opened to each of your instances.
When you configure the health check to be of type TCP , a TCP connection is opened. The health check itself can use one of the following checks: Simple handshake health check (default): the health checker attempts a simple TCP or SSL handshake. If it is successful, the instance passes. Request/response health check: you provide a request string for the health checker to send after completing the TCP or SSL handshake. If the instance returns the response string you've configured, the instance is marked as HEALTHY . Both the request and response strings can be up to 1024 bytes. If the check succeeds twice in a row (the default) on a new instance, the instance is marked HEALTHY . If the check fails twice in a row (the default) on a HEALTHY instance, the instance is marked UNHEALTHY . If the check is again successful twice in a row (the default), the instance is marked as HEALTHY again. Existing connections are allowed to continue on instances that have failed their health check. Create a health check gcloud alpha compute health-checks create [tcp | ssl] my-ssl-health-check \ [--port PORT ] \ ...other options If you are encrypting traffic between the load balancer and your instances, use an SSL health check. If the traffic is unencrypted, use a TCP health check. Health check create options --check-interval [CHECK_INTERVAL]; default= 5s How often to perform a health check for an instance. For example, specifying 10s will run the check every 10 seconds. Valid units for this flag are s for seconds and m for minutes. --description [DESCRIPTION] An optional textual description for the health check. Must be surrounded by quotes if the string contains spaces. --healthy-threshold [HEALTHY_THRESHOLD]; default= 2 The number of consecutive successful health checks before an unhealthy instance is marked as HEALTHY . --port [PORT]; default= 80 for TCP, 443 for SSL The TCP port number that this health check monitors. --request An optional string of up to 1024 characters that the health checker can send to the instance. The health checker then looks for a reply from the instance of the string provided in the --response field. If --response is not configured, the health checker does not wait for a response and regards the check as successful if the TCP or SSL handshake was successful. --response An optional string of up to 1024 characters that the health checker expects to receive from the instance. If the response is not received exactly, the health check fails. If --response is configured, but not --request , the health checker will wait for a response anyway. Unless your system automatically sends out a message in response to a successful handshake, always configure --response to match an explicit --request . --timeout [TIMEOUT]; default= 5s If the health checker doesn't receive a valid response from the instance within this interval, the check is considered a failure. For example, specifying 10s will cause the check to wait for 10 seconds before considering the request a failure. Valid units for this flag are s for seconds and m for minutes. --unhealthy-threshold [UNHEALTHY_THRESHOLD]; default= 2 The number of consecutive health check failures before a healthy instance is marked as UNHEALTHY . List health checks Lists all the health checks in the current project. gcloud alpha compute health-checks list NAME PROTOCOL my-ssl-health-check SSL Describe a health check Provides detailed information about a specific health check.
gcloud alpha compute health-checks describe my-ssl-health-check checkIntervalSec: 5 creationTimestamp: '2016-02-20T20:47:26.034-08:00' description: '' healthyThreshold: 2 id: '1423984233044836273' kind: compute#healthCheck name: my-ssl-health-check selfLink: [PROJECT_ID]/global/healthChecks/my-ssl-health-check sslHealthCheck: port: 443 timeoutSec: 5 type: SSL unhealthyThreshold: 2 Update a health check To modify a parameter in a health check, run the following command and pass in any of the create parameters. Any specified parameters will be changed. All unspecified parameters will be left the same. gcloud alpha compute health-checks update [tcp|ssl] [--options] Example: gcloud alpha compute health-checks update ssl my-ssl-health-check \ --description ""SSL health check"" Updated [ [PROJECT_ID]/global/healthChecks/my-ssl-health-check]. Recommendations You should configure the load balancer to prepend a PROXY protocol version 1 header if you need to retain the client connection information. If your traffic is HTTPS, then you should use HTTPS Load Balancing and not SSL proxy for load balancing. Troubleshooting Pages fail to load from load balancer IP Verify the health of instances Verify that the instances are HEALTHY. gcloud alpha compute backend-services get-health my-backend-service --- backend: [PROJECT_ID]/zones/us-central1-b/resourceViews/us-ig1 status: kind: compute#backendServiceGroupHealth --- backend: [PROJECT_ID]/zones/us-east1-b/instanceGroups/us-ig2 status: kind: compute#backendServiceGroupHealth Confirm that your firewall rule is correct Both the health checker and the load balancer need 130.211.0.0/22 to be open. If you are doing SSL between the load balancer and the instances, you should do an SSL health check. In that case, tcp:443 must be allowed by the firewall from 130.211.0.0/22 . If you are doing TCP to the instances, do a TCP health check and open tcp:80 from 130.211.0.0/22 instead. If you are leveraging instance tags, make sure the tag is listed under TARGET_TAGS in the firewall rule, and make sure all your instances have that tag. In this example, instances are tagged with ssl-lb . gcloud compute firewall-rules list NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS allow-ssl-130-211-0-0-22 default 130.211.0.0/22 tcp:443 ssl-lb Try to reach individual instances Temporarily set a firewall rule that allows you to access your instances individually, then try to load a page from a specific instance by visiting [EXTERNAL_IP] directly from your browser. Alpha Limitations The PROXY protocol header is currently only allowed if the protocol between the load balancer and the instance is set to TCP. This will be fixed for protocol SSL by the Beta timeframe. FAQ When should I use HTTPS load balancing instead of SSL proxy load balancing? Though SSL proxy can handle HTTPS traffic, HTTPS Load Balancing has additional features that make it a better choice in most cases. HTTPS load balancing has the following additional functionality: Negotiates HTTP/2 and SPDY/3.1 Rejects invalid HTTP requests or responses Forwards requests to different instance groups based on URL host and path Integrates with Cloud CDN . Spreads the request load more evenly among instances, providing better instance utilization. HTTPS load balances each request separately, whereas SSL proxy sends all bytes from the same SSL or TCP connection to the same instance. SSL proxy for Google Cloud Load Balancing can be used for other protocols that use SSL, such as WebSockets and IMAP over SSL.
Can I view the original IP address of the connection to the global load balancing layer? Yes. You can configure the load balancer to prepend a PROXY protocol version 1 header to retain the original connection information. See Update proxy protocol header for the proxy for details.",en 1459217994,CONTENT SHARED,-5170198873410718233,-1032019229384696495,3042342415047984532,,,,HTML,http://techcrunch.com/2016/03/28/ntt-to-buy-dells-services-division-for-3-05-billion/,NTT to buy Dell's services division for $3.05 billion,"You may know Dell as a computer and server maker, but Dell also operates a substantial IT services division - at least it did until today. NTT Data, the IT services company of NTT, is acquiring Dell Services for $3.05 billion. The main reason why Dell sold off its division is that the company needs cash, and quickly. When Dell acquired EMC for $67 billion, the company promised that it would find ways to help finance the debt needed for the EMC acquisition. But $3.05 billion doesn't seem like much. First, Dell acquired Perot Systems (which later became Dell Services) for $3.9 billion in 2009. Second, Dell wanted more than $3.05 billion. Rumor has it that Dell was asking for $5 billion or $6 billion. So it looks like Dell didn't have enough time to find another potential buyer to outbid NTT. On the other side of the equation, NTT wants Dell Services because Japan-based NTT is trying to expand its client base to new regions. Dell Services mostly operates in North America, in the health sector. NTT has already acquired Dimension Data in South Africa and Keane Inc. in the U.S. The company is just iterating on its strategy of expanding its foreign presence. Many Japanese companies have been looking at foreign markets when it comes to growth. In short, Dell is offloading side businesses in order to finance the EMC acquisition. Nobody knows if the EMC acquisition was a smart move, but at least Dell is committed to this strategy. Meanwhile, NTT is slowly but surely expanding to new markets.",en 1459229160,CONTENT SHARED,-3367778232969996503,-1032019229384696495,3042342415047984532,,,,HTML,http://www.salon.com/2016/03/27/good_riddance_gig_economy_uber_ayn_rand_and_the_awesome_collapse_of_silicon_valleys_dream_of_destroying_your_job/,"Good riddance, gig economy: Uber, Ayn Rand and the awesome collapse of Silicon Valley's dream of destroying your job","The Uber model just doesn't work for other industries. The price points always fail -- and that's a good thing The New York Times' Farhad Manjoo recently wrote an oddly lamenting piece about how ""the Uber model, it turns out, doesn't translate."" Manjoo describes how so many of the ""Uber-of-X"" companies that have sprung up as part of the so-called sharing economy have become just another way to deliver more expensively priced conveniences to those with enough money to pay. Ironically, many of these Ayn Rand-inspired startups have been kept alive by subsidies of the venture capital kind which, for various reasons, are starting to dry up. Without that kind of ""VC welfare,"" these companies are having to raise their prices, and are finding it increasingly difficult to retain enough customers at the higher price point. Consequently, some of these startups are faltering; others are outright failing. Witness the recent collapse of SpoonRocket, an on-demand pre-made meal delivery service.
Like Uber wanting to replace your car, SpoonRocket wanted to get you out of your kitchen by trying to be cheaper and faster than cooking. Its chefs mass-produced its limited menu of meals, and cars equipped with warming cases delivered the goods, aiming for ""sub-10 minute delivery of sub-$10 meals."" But it didn't work out as planned. And once the VC welfare started backing away, SpoonRocket could not maintain its low price point. The same has been happening with other on-demand services such as the valet-parking app Luxe, which has degraded to the point where Manjoo notes that ""prices are rising, service is declining, business models are shifting, and in some cases, companies are closing down."" Yet the telltale signs of the many problems with this heavily subsidized startup business model have been prevalent for quite some time, for those who wanted to see. In July 2014, media darling TaskRabbit, which had been hailed as revolutionary for the way it allowed vulnerable workers to auction themselves to the lowest bidders for short-term gigs, underwent a major ""pivot."" That's Silicon Valley-speak for acknowledging that its business model wasn't working. It was losing too much money, and so it had to shake things up. TaskRabbit revamped how its platform worked, particularly how jobs are priced. CEO Leah Busque defended the changes as necessary to help TaskRabbit keep up with ""explosive demand growth,"" but published reports said the company was responding to a decline in the number of completed tasks. Too many of the Rabbits, it turns out, were not happy bunnies - they were underpaid and did a poor job, despite company rhetoric to the contrary. An increasing number of them simply failed to show up for their tasks. As a result, customers also failed to return. A contagion of pivots began happening among other sharing economy startups. Companies like Cherry (car washes), Prim (laundry), SnapGoods (gear rental), Rewinery (wine) and HomeJoy (home cleaning) all went bust, some of them quietly and others with more headlines. Historical experience shows that three out of four startups fail, and more than nine out of 10 never earn a return. My favorite example is SnapGoods, which is still cited today by many journalists who are pumping up the sharing economy (and haven't done their homework) as a fitting example of a cool, hip company that allows people to rent out their spare equipment, like that drill you never use, or your backpack or spare bicycle -- even though SnapGoods went out of business in August 2012. It just disappeared, poof, without a trace, yet goes on living in the imagination of sharing economy boosters. I conducted a Twitter interview with its former CEO, Ron J. Williams, as well as with whatever wizard currently lurks behind the faux curtain of the SnapGoods Twitter account, and the only comment they would make is that ""we pivoted and communicated to our 50,000 users that we had bigger fish to try."" Getting even more vague, they insisted ""we decided to build tech to strengthen social relationships and facilitate trust"" -- classic sharing-economy speak for producing vaporware instead of substance from a company that had vanished with barely a trace. Zaarly, in its prime, was another sharing-economy darling of the venture capital set, with notable investors including Steve Jobs, hotshot VC firm Kleiner Perkins and former eBay CEO Meg Whitman on its board.
It positioned itself in the marketplace as a competitor to TaskRabbit and similar services, with its brash founder and CEO, Bo Fishback, explaining his company's mission to a conference audience: ""If you've ever said, 'I'd pay X amount for Y,' then Zaarly is for you."" Fishback once spectacularly illustrated his brand by bringing on stage a cow being towed by a man in a baseball cap and carrying a jug of milk -- ""If I'm willing to pay $100 for someone to bring me a glass of fresh milk from an Omaha dairy cow right now, there might very well be a guy who would be super happy to do that,"" he said. That kind of bravado is what gave these companies their electric juice, as media outlets like the Economist lionized them as the ""on-demand"" economy. Like so many of the sharing-economy evangelicals, Fishback brandished a libertarian Ayn Randianism which saw Zaarly as creating ""the ultimate opt-in employment market, where there is no excuse for people who say, 'I don't know how to get a job, I don't know how to get started.'"" But alas, those were the heady, early years, when Zaarly was flush with VC cash. Flash forward to today and Fishback is more humble, as is his company, having gone through several ""pivots."" The ""request anything"" model is gone, as are Fishback's lofty sermons to American workers. Instead, Zaarly has become more narrowly focused on four comparatively mundane markets: house cleaning, handyman services, lawn care and maid service. And then there's Exec. Like Zaarly and TaskRabbit, Exec also started with great fanfare as a broader errand-running business, this one focused on hiring a personal assistant for busy Masters of the Universe. Like other sharing startups, initially it had grand ambitions about the on-demand economy and fomenting a revolution over how we work: connecting those with more money than time with those 1099 indies who desperately needed the money. But eventually this company too was forced by its market failures to narrow its focus, in this case to housekeeping exclusively. Finally the company flamed out and was sold to another housekeeping startup, Handybook. Exec's former CEO, Justin Kan, wrote a self-reflective farewell blog post about what he thought went wrong with his company. His observations are illuminating. His company had charged customers $25 per hour (which later rose to $30) to hire one of their personal assistants, and the worker received 80 percent, or about $20 per hour. That seemed like a high wage to Kan, but much to his surprise he discovered that, when his errand runners made their own personal calculation, factoring in the unsteadiness of the work, the frequency of downtime, hustling from gig to gig, the on-call nature of the work as well as their own expenses, it wasn't such a great deal. Wrote Kan, ""It turns out that $20 per hour does not provide enough economic incentive to dictate when our errand runners had to be available, leading to large supply gaps at times of spiky demand . . . it was impossible to ensure that we had consistent availability."" Kan says the company also acquired a ""false sense that the quality of service for our customers was better than it was"" because the quality of the ""average recruitable errand runner"" -- at the low pay and on-call demands that Exec wanted -- did not result in hiring the self-motivated personality types like those that start Silicon Valley companies. (Surprise, surprise.)
That in turn led to too many negative experiences for too many customers, especially since, like with TaskRabbit, a too-high percentage of its on-demand workers simply failed to show up to their gigs. (Surprise, surprise.) It turns out, he discovered, that ""most competent people are not looking for part-time work."" (Surprise, surprise.) Indeed, the reality that the sharing economy visionaries can't seem to grasp is that not everyone is cut out to be a gig-preneur, or to ""build out their own businesses,"" as Leah Busque likes to say. Being an entrepreneur takes a uniquely wired brand of individual with a distinctive skill set, including being ""psychotically optimistic,"" as one business consultant put it. Simply being jobless is not a sufficient qualification. In addition, apparently nobody in Silicon Valley ever shared with Kan or Busque the old business secret that ""you get what you pay for."" That's a lesson that Uber's Travis Kalanick seems determined to learn the hard way as well. Kan, like Leah Busque, Bo Fishback and so many of the wide-eyed visionaries of Silicon Valley, had completely underestimated the human factor. To so many of these hyperactive venture entrepreneurs, workers are just another ore to be fed into their machine. They forget that the quality of the ore is crucial to their success, and that quality was dependent on how well the workers were treated and rewarded. The low pay and uncertain nature of the work keeps the employees wondering if there isn't a better deal somewhere else. Moreover, a degree of tunnel vision has prevented startup entrepreneurs from seeing that their business model often is not scalable or sustainable at the billion-dollar unicorn level without ongoing VC welfare subsidies. Silicon Valley has an expression, ""That works on Sand Hill Road"" -- referring to the upper-crust boulevard in Menlo Park, California, where much of the world's venture capital makes its home. Some things that seem like great ideas -- like paying low wages to personal assistants to shuffle around at your every whim, or lowballing wages for someone to hustle around parking cars for yuppies -- only make sense inside the VC bubble that has lost all contact with the realities of everyday Americans. A pattern has emerged about the ""white dwarf"" fate of many of these once-luminous sharing startups: after launching with much fanfare and tens of millions in VC capital behind them, vowing to enact a revolution in how people work and how society organizes peer-to-peer economic transactions, in the end many of these companies morphed into the equivalent of old-fashioned temp agencies (and others have simply imploded into black hole nothingness). Market forces have resulted in a convergence of companies on a few services which had been the most used on their platforms. In a real sense, even the startup king itself, Uber, is merely a temp agency, where workers do only one task: drive cars. Rebecca Smith, deputy director of the National Employment Law Project, compares the businesses of the gig economy to old-fashioned labor brokers. Companies like Instacart, Postmates and Uber, she says, talk as if they are different from old-style employers simply because they operate online. ""But in fact,"" she says, ""they are operating just like farm labor contractors, garment jobbers and day labor centers of old."" Tech enthusiasts like the Times' Manjoo seem to be waking up to the smell of the coffee.
""The uneven service and increased prices,"" writes Manjoo, ""raise larger questions about on-demand apps"" which he says ""now often feel like just another luxury for people who have more money than time."" Yet that strikes me as too black-and-white, as overly gloomy as Manjoo once was excessively optimistic. The sharing economy apps have proven to be extremely fluid at connecting someone who needs work with someone willing to pay for that work. Some workers have praised the flexibility of the platforms, which allow labor market outsiders - young people, immigrants, minorities and seniors especially - who have difficulty finding work to access additional options. It's better than sitting at home as a couch potato with no income. And by narrowing the scope of their services, these companies stand a better chance of contracting with quality people, and developing real relationships with them. I suspect that, properly pivoted in the right direction, these app-based services will continue to play a role in the economy. Eventually many traditional economy companies may adapt an app-based labor market in ways that we can't yet anticipate. But that means we need to figure out a way to launch a universal, portable safety net for all U.S. workers (hint: we can do it at the local and state levels, we don't need to wait for a dysfunctional Congress). At the end of the day, the sharing economy startups have been hamstrung by the quality of the workers they hire. If they want good workers, they need to offer decent jobs. Otherwise, this sharing economy is not about sharing at all, and not very revolutionary. The current startup model destroys the social connection between businesses and those they employ, and these companies have failed to thrive because they provide crummy jobs that most people only want to do as a very last resort. These platforms show their workforce no allegiance or loyalty, and they engender none in return.",en 1459248284,CONTENT SHARED,4988225165850707692,4670267857749552625,-265833983523746108,,,,HTML,http://cio.economictimes.indiatimes.com/news/internet-of-things/the-internet-of-a-billion-things/48745439,The internet of a billion things | ET CIO,"Industrial adoption of IoT dubbed as Industrial Internet of Things (IIoT) is on ascend. Yet, near-term adoption of IIoT remains limited to achieving operational efficiency as most companies are unable to leverage IIoT for predictive capabilities that create new business opportunity. This article sheds light on the probable future of IIoT adoption and the imperative for businesses to form IIoT investment strategies. During Digital India Week in July, Prime Minister Modi announced the creation of a Centre of Excellence for Internet of Things. This is a timely intervention, as the Internet of Things, or IoT, will is set to transform technological and economic landscape over the next decade. The IoT will directly alter the Indian economy's industrial sectors, including manufacturing, energy, agriculture and transportation Together, these sectors account for close to half of India's GDP. It will also affect India's consumers, as their everyday devices connect to the Internet through tiny embedded sensors and computing power. The total potential payoff is enormous. The most conservative independent estimates place spending on the IoT worldwide at $500 billion by 2020. More optimistic forecasts peg the market as high as $15 trillion of global GDP by 2030. 
The Government of India estimates India's share to be between five and six per cent of the worldwide IoT market through 2020. While consumer adoption of connected technology will be gradual in the short term, it will increase rapidly over the next decade. However, many industrial companies are already embracing what has been dubbed the Industrial Internet of Things, or IIoT. For example, the number of sensors shipped globally has increased more than five-fold, from 4.2 billion units in 2012 to 23.6 billion units in 2014. Operational efficiency is one of the key attractions of the IIoT. Companies are seeking productivity gains that could reach 30 per cent. For example, a big area for operational gains is in predictive maintenance of assets. This use of the IIoT could help reduce breakdowns by 70 per cent, overall maintenance costs by 30 per cent, and repair costs by 12 per cent, according to estimates. However, there is more to the story. The IIoT can also be an important driver of breakthrough innovation and new growth opportunities. The key is to combine use of big data analytics with IIoT capabilities and technologies. For example, a major automobile manufacturer is pursuing a unique approach to increase value for customers: a flexible, convenient pay-per-use model for city dwellers needing cars. Customers can use an app to find the car that is parked nearest to them. They open the door with a membership card, drive to their destination, and simply park the car on the street and lock it up. This service competes with conventional taxis and hourly car rental services. Customers can choose to pay by the mile, the hour or the day. The rates are lower than for taxis, and there is no need to reserve, return or order a car; the cars can be parked and found anywhere using location sensors. Chart: Business benefits driving near-term adoption of IIoT Source: Industrial Internet of Things: Unleashing the Potential of Connected Products and Services, WEF, January 2015 But are most companies up for the challenge? Survey results indicate that only 40 per cent can predict outcomes based on existing data, and fewer still (36 per cent) can optimize operations from that data. Understandably, initial investments are focused on improving the ability to react and repair and move toward zero unplanned downtime. However, when asked about future plans, respondents stated more ambitious priorities for IIoT: increasing profitability (60 per cent), gaining a competitive advantage (57 per cent) and improving environmental safety and emissions (55 per cent) - more pervasive, cross-functional and sophisticated uses of it. At Accenture, we believe that the transition from efficiency to value-enhanced growth is likely to follow four distinct phases. (See Figure) Phases one and two will represent immediate opportunities that drive near-term adoption, starting with operational efficiency. Phases three and four will include long-term structural changes that are roughly three years away from mainstream adoption. These last two phases will lead to disruptive changes in businesses and industries. They will manifest themselves in the form of the outcome economy and an integrated human-machine workforce. Figure: The adoption and impact path of IIoT Source: Industrial Internet of Things: Unleashing the Potential of Connected Products and Services, WEF, January 2015 The outcome economy will be built on the automated quantification capabilities of the Industrial Internet.
The large-scale shift from selling products or services to selling measurable outcomes is a significant change that will redefine the basis of competition and industry structures. As the Industrial Internet becomes more ingrained in every industry, it will ultimately lead to a pull-based economy characterized by real-time demand sensing and highly automated, flexible production and fulfilment networks. This development will call for pervasive use of automation and intelligent machines to complement human labour (machine augmentation). As a result, the face of the future workforce will change dramatically, along with the skill sets required to succeed in a much more automated economy. Yet, it is still early. Numerous technology challenges and important hurdles remain to be overcome. Not all products can or need to be connected. But amid the new, an old truth remains: business customers need products and services that create more value for them than those on offer today. The time to push is now. (Anindya Basu is Country Managing Director for Accenture in India & Prabhjit Didyala, Managing Director, Strategy for Accenture in India)",en 1459248366,CONTENT SHARED,-5917314377186856799,4670267857749552625,-265833983523746108,,,,HTML,http://www.senar.org.br/agricultura-precisao/artigos-e-palestras/,Artigos e Palestras - Programa Agricultura de Precisão do SENAR,"Articles and Lectures ARTICLES / 2015 12/08/2015 - Prospects for agribusiness demand technologies for sustainable productivity 23/06/2015 - Precision agriculture to feed 9 billion ARTICLES / 2014 16/05/2014 - Adoption of Precision Agriculture in Brazil By: Alberto C. de Campos Bernardi, of Embrapa Pecuária Sudeste; Ricardo Y. Inamasu - Embrapa Instrumentação ARTICLES / 2013 01/07/2013 - Precision agriculture - a tool within everyone's reach (*) Alberto Bernardi is a researcher at Embrapa Pecuária Sudeste, with a degree in Agronomy (1992), a master's in Soils and Plant Nutrition (ESALQ/USP, 1996) and a doctorate in Soils and Plant Nutrition (ESALQ/USP, 1999). He works on soil fertility and fertilization, integrated crop-livestock systems and precision agriculture. (*) Ricardo Inamasu is a researcher at Embrapa Instrumentação; he completed an undergraduate degree (1984), a master's (1987) and a doctorate (1995) in Mechanical Engineering at the Escola de Engenharia de São Carlos, USP, and a post-doctorate in Biological Systems Engineering at the University of Nebraska - Lincoln. He has experience in Mechanical Engineering and Mechatronics, with an emphasis on Agricultural Instrumentation and Automation. 06/03/2013 - Technology and Innovation in Brazilian agriculture (*) Renato Roscoe is an agronomist (UFV, 2005), with a master's in soil science (UFLA, 1997) and a doctorate in environmental sciences (Wageningen University and Research Centre, 2002). He works as executive director and soil fertility researcher at Fundação MS. *By José Luis da Silva Nunes - Doctor in Plant Science from the Universidade Federal do Rio Grande do Sul *By João Guilherme Sabino Ometto, 2nd vice president of FIESP. Article published in the newspaper O Estado de S.
Paulo on 04/10/2012. *Antônio Luis Santi1, Jackson Ernani Fiorin2, Kassia Luiza Teixeira Cocco3, Maurício Roberto Cherubin3, Mateus Tonini Eitelwein3, Telmo Jorge Carneiro Amado4, Fabio Evandro Grub Hauschild5 *Vitor Cauduro Girardello(2), Telmo Jorge Carneiro Amado(3), Rodrigo da Silveira Nicoloso(4), Tiago de Andrade Neves Hörbe(5), Ademir de Oliveira Ferreira(6), Fabiano Mauricio Tabaldi(7) & Mastrângello Enívar Lanzanova(8) *João Augusto Telles, President of the Irrigators Committee of the FARSUL System, Coordinator of the Clube da Irrigação and Head of the Technical Division - SENAR-RS *By Telmo Amado and Antônio Santi, professors in the Professional Master's in Precision Agriculture at UFSM, Tiago Horbe, tiagohorbe@hotmail.com, and Ademir Ferreira, doctoral students in Soil Science, PPGCS-UFSM *José Paulo Molin is one of the speakers at SENAR's Precision Agriculture seminars. Associate professor in the Department of Biosystems Engineering - Agricultural Mechanics and Machinery area - at ESALQ/USP. *Ricardo Inamasu is one of the speakers at SENAR's seminars on Precision Agriculture. Researcher at Embrapa Instrumentação and collaborating professor at the Escola de Engenharia de São Carlos, part of the Universidade de São Paulo (USP). LECTURES / 2013 - Precision Agriculture - status and trends: José P. Molin - ESALQ/USP - Precision Agriculture - conceptual basis: Ricardo Inamasu - Embrapa - Crop management for high yields based on precision agriculture: Telmo Amado",pt 1459248432,CONTENT SHARED,6157037646878010131,4670267857749552625,-265833983523746108,,,,HTML,https://www.macroprograma1.cnptia.embrapa.br/redeap2,Rede Agricultura de Precisão II,"Download the new book free of charge. Want to get to know Embrapa's Precision Agriculture Network? See the Folder_Rede-AP. Precision Agriculture: planning and management of every process in production. Precision Agriculture is a broad, systemic and multidisciplinary subject. It is not limited to a few crops or a few regions. It is an integrated system for managing information and technologies, founded on the concept that variability in space and time influences crop yields. Precision agriculture aims at more detailed management of the agricultural production system as a whole - not only input application or assorted mapping, but all the processes involved in production. This set of tools for agriculture can draw on GNSS (Global Navigation Satellite System), GIS (Geographic Information Systems), instruments and sensors for measuring or detecting parameters or targets of interest in the agroecosystem (soil, plants, insects and diseases), geostatistics and mechatronics. But PA is not only about high-tech tools: its fundamentals can be applied in the day-to-day running of a farm through better organization and control of the activities, costs and productivity of each area. Such differentiation already occurs in the division and placement of fields within farms, in the division of plots or paddocks, or simply in the identification of ""patches"" that differ from the general pattern. From that division onward, treating each area differently is the PA concept put into practice.
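The closing idea of that passage - find the ""patches"" that differ from the general pattern, then treat each area differently - can be shown in miniature. Below is a toy sketch of zone-based variable-rate application; the grid yields, zone cut-offs and fertilizer rates are invented for illustration, whereas real PA work starts from georeferenced yield maps and geostatistics.

```python
# Toy management-zone sketch: split a field grid into zones by yield deviation
# from the field mean, then assign a differentiated fertilizer rate per zone.
# Yields, cut-offs and rates are invented for illustration only.
yield_map = [  # t/ha on a 4x4 grid of cells
    [3.1, 3.0, 2.2, 2.0],
    [3.2, 3.1, 2.1, 1.9],
    [3.0, 2.9, 2.8, 2.7],
    [3.3, 3.2, 3.0, 2.8],
]
cells = [y for row in yield_map for y in row]
field_mean = sum(cells) / len(cells)

def zone(y, mean):
    if y < 0.9 * mean:
        return "low"      # under-performing patch ("mancha")
    if y > 1.1 * mean:
        return "high"
    return "average"

rates = {"low": 120, "average": 100, "high": 80}  # kg N/ha, illustrative

for i, row in enumerate(yield_map):
    for j, y in enumerate(row):
        z = zone(y, field_mean)
        print(f"cell ({i},{j}): yield {y} t/ha -> zone {z}, apply {rates[z]} kg/ha")
```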
A management revolution, technological resources and added value. Precision Agriculture rests on three points that converge toward excellent results: * a management revolution; * information technology; * added value for production. It is essential that these three points be worked on together in order to improve productivity, quality and output volume, and to reduce product prices to compete in domestic and foreign markets. Technology, planning and management are therefore the foundations of Precision Agriculture. Want to know more? The first theoretical foundations of Precision Agriculture emerged in 1929 in the United States, but the field became better known in the 1980s thanks to advances in, and the spread of, geographic positioning systems, geographic information systems, yield monitoring and computing. Besides standing out in the US, it gained great notoriety in countries such as Germany, Argentina, Australia, England and Brazil. In Brazil, the first research in the area was carried out in the 1990s. Initially, Precision Agriculture was driven by agricultural machines such as harvesters and seeders, onto which GNSS (Global Navigation Satellite System) receivers, sophisticated on-board computers and systems capable of generating yield maps were fitted. Mapping of the variability of soil, plants and other parameters was refined, resulting in optimized application of inputs, cutting costs and negative environmental impacts and, in turn, increasing economic, social and environmental returns. A number of research and development initiatives have been implemented, contributing to innovation in Precision Agriculture in the country. There are currently 53 research groups registered in the Lattes system of CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico). In Brazil, the subject has been publicized at several important events that bring together researchers, companies and producers: SIAP (Simpósio Internacional de Agricultura de Precisão) and ConBAP (Congresso Brasileiro de Agricultura de Precisão). At the 2007 SIAP, coordinated by Mapa (Ministério da Agricultura, Pecuária e Abastecimento), the Brazilian Precision Agriculture Committee was established, a great advance for the sector. It brought together the main players in Precision Agriculture in the country, providing important input so that public policies can be drawn up. Tools on the market have also advanced: new sensors and equipment have appeared, making the practice of PA ever more accessible, with costs more compatible with, and easier to integrate into, the day-to-day of a farm. Even so, adoption of Precision Agriculture across the various sectors of Brazilian agribusiness is proceeding at a slower pace than expected. Increasing the rate of PA use in the country, by offering the technologies and knowledge to do so, is the role that Embrapa's Precision Agriculture Network intends to fulfil. The knowledge generated by Embrapa since the company's creation in 1973 has been decisive for Brazilian agribusiness and for the prominent position Brazil holds today on the world agricultural stage. Brazil and Embrapa are references in technologies for tropical agriculture. The country is one of the world leaders in the production and export of several agricultural products.
Thanks to this position on the world stage, the country has come to exert a decisive influence on the price and flow of food and other agricultural commodities. Its vision of the future, strong investment in training people and ability to stay in step with advances in science enable Embrapa to help position Brazil at the frontier of knowledge, in emerging topics such as agro-energy, carbon credits and biosafety, and in areas such as biotechnology, nanotechnology and precision agriculture. In the specific case of precision agriculture, the work of Embrapa and its partners will be fundamental in generating knowledge, tools and technological innovations to increase the efficiency of production systems. We are profiling the Precision Agriculture user in Brazil. Please help us by answering the questionnaires below; there is one for producers and another for consultants. PC1 - Project management PC2 - Development and validation of instruments and information technologies PC3 - Characterization, monitoring and management of spatio-temporal variability in annual cropping systems PC4 - Characterization, management and monitoring of soil and plant attributes in perennial crop production systems PC5 - Technological innovation in precision agriculture Watch the videos of Embrapa's Precision Agriculture Network! Association of Equipment Manufacturers (the North American equipment manufacturers' association, USA) Massey Ferguson / Valtra Cooperativa Agrária Agroindustrial Agrosystem Comércio, Imp. e Exp. Ltda APagri Consultoria Agronômica Auteq Computadores e Sistemas Ltda. Agri-Tillage do Brasil Indústria e Comércio de Máquinas e Implementos Agrícolas LTDA Batistella Florestal Fundação AGRISUS - Agricultura Sustentável Campo Agricultura e Meio Ambiente Confederação Nacional de Agricultura Case New Holland Cooperativa Agroindustrial dos Produtores Rurais do Sudoeste Goiano Cooperativa Agropecuária e Industrial Laboratório Nacional de Ciência e Tecnologia do Bioetanol Centro de Tecnologia da Informação Renato Archer Escola de Engenharia de São Carlos - Universidade de São Paulo Enalta Inovações Tecnológicas Empresa de Pesquisa Agropecuária e Extensão Rural de Santa Catarina Escola Superior de Agricultura ""Luiz de Queiroz"" - Universidade de São Paulo Fundação de Amparo à Ciência e Tecnologia do Estado de Pernambuco Mogi Mirim - SP Planaltina de Goiás - GO Castelândia - GO Petrolina - PE Faculdade de Ciências Agronômicas - Universidade Estadual Paulista ""Júlio De Mesquita Filho"" - Campus de Botucatu Florestalle Assessoria e Consultoria Florestal S. S. Agricultura Sustentável FISCHER S/A COM. IND. E AGRICULTURA SLC Agrícola - Fazenda Pamplona, Cristalina - GO Instituto Agronômico de Campinas Instituto Brasileiro de Geografia e Estatística Instituto CNA Máquinas Agrícolas Jacto S/A John Deere Brasil Kuhn do Brasil LOHR Sistemas Eletrônicos Ltda Marchesan Implementos e Máquinas Agrícolas TATU S/A Vinícola Miolo Original Indústria Eletrônica Ltda Escola Politécnica - Universidade de São Paulo Schio Agropecuária Serviço Nacional de Aprendizagem Rural SOMAFÉRTIL LTDA Stara S.A.
Indústria de Implementos Agrícolas Terrasul Vinhos Finos Ltda Universidade de Caxias do Sul Universidade Federal de Lavras Faculdade de Agronomia Eliseu Maciel - Universidade Federal de Pelotas Universidade Federal do Rio Grande do Sul Universidade Federal de Santa Maria Programa de Pós-Graduação em Engenharia Agrícola Verion Agricultura Embrapa is creating the National Reference Laboratory for Precision Agriculture (LANAPRE), which will act as an integrating agent across the various dimensions of Precision Agriculture, offering a shared space for developing standards and carrying out tests, validations and certifications of systems. The funds for setting up the laboratory, around R$ 7.1 million, were secured in 2010 with the support of the National Congress, through amendments by the Economic Development, Industry and Trade Committee (CDEIC), chaired at the time by Federal Deputy Dr. Ubiali, and individual parliamentary amendments by Federal Deputies Duarte Nogueira and Lobbe Neto. LANAPRE is located in the central region of the State of São Paulo, in São Carlos, at coordinates 21°57'14""S, 47°51'08.45""W. The municipality hosts two Embrapa units (Instrumentação and Pecuária Sudeste), and there are three other centers nearby, in the Campinas region, that are extremely important for the subject (Monitoramento por Satélite, Informática and Meio Ambiente). The Laboratory is to have the infrastructure to house machinery; run connection tests between different manufacturers, both in the laboratory and in the field; hold events to harmonize connections and integrate different systems; install computing and geoinformatics support systems for developers; run field performance tests with integrated systems; and keep plots of the crops most important to the country, such as soybean, corn, cassava, pasture (crop-livestock-forest integration), coffee and sugarcane, among others. To that end, a 3,000 m² clean shed will be built, with a 2,000 m² paved yard/track and adjoining computing and electronics rooms totalling 200 m². The proposed model is one of shared management between Embrapa Pecuária Sudeste - located at Fazenda Canchim, where the Laboratory will be installed - and Embrapa Instrumentação, which has devoted itself to the subject for years and currently coordinates Embrapa's National Precision Agriculture Network. LANAPRE will be able to: conduct research and development on machines and equipment for PA; run connection tests between different manufacturers, in the laboratory and in the field; hold events to harmonize connections and integrate different systems; install computing and geoinformatics support systems for developers; and run field performance tests with integrated systems. Follow the stages of construction: September 2012. See the news about LANAPRE: Embrapa to have a National Reference Laboratory for Precision Agriculture; A night of celebration to mark Embrapa's 40th anniversary. See LANAPRE on Google Earth: Local_LANAPRE (the Google Earth program must be installed for viewing). A São Carlos group builds a robotic instrument that uses a laser to analyze chemical elements present in soil and leaves. CSIRO's Precision Agriculture group has developed a range of software tools to help manage and analyse spatial information. Carine Ferreira | Valor - The technique, used mainly by younger producers, consists of mapping every point of the terrain to increase productivity.
The Precision Agriculture Network of the Empresa Brasileira de Pesquisa Agropecuária (Embrapa) is holding a seminar on May 15, 2014 at the National Reference Laboratory for Precision Agriculture (Lanapre) in São Carlos (SP). The study involved more than 200 professionals from 44 of the company's units and more than 30 partner institutions. Models can be adapted to the producer's needs. According to researcher Lúcio André de Castro Jorge, of Embrapa Instrumentação, drone models equipped with basic cameras are already being sold in the country. Unmanned aerial vehicles (UAVs) can cost from R$ 5,000 to R$ 120,000 and are a step forward for precision agriculture. Embrapa is also creating software for the new technology that will be able to predict pest outbreaks. Researchers are refining technologies used in the space and military fields for agricultural production. After starting to operate the country's first laboratory dedicated exclusively to precision agriculture, Embrapa presents the main solutions already developed there. On today's Rádio UFSCar Convida, the topic is Precision Agriculture, and to talk about these technological resources we welcome Embrapa researchers Ricardo Inamasu and Alberto Bernardi. The organization is also creating software for the new technology that will be able to predict pest outbreaks. So-called drones make it possible to monitor crops and the property more quickly and efficiently. Programa Revista do Campo 21/05/14 - RIT TV (9:45 to 14:05) Tecnologia do Campo/Canal Rural (31/05). The event brings together, through this Friday (6), 110 participants, 95 of whom will present scientific papers spanning 12 themes - precision agriculture, agro-energy, biotechnology, animal genetics and breeding, plant genetics and breeding, agricultural instrumentation, environment, soil and water management and conservation, new materials and nanotechnology, animal production, plant production, post-harvest and quality of agricultural products, and animal health. Conexão Ciência, aired 01/07/2014 - Precision agriculture is a form of integrated information and technology management aimed at more detailed management of the agricultural production system as a whole. The practice aims to reduce production costs, lessen contamination of the environment by the agrochemicals used, and increase productivity. Embrapa researcher Ricardo Inamasu discusses the subject on Conexão Ciência. An interview with Embrapa president Maurício Lopes, who spoke about the organization's important role in technical and scientific development for the livestock sector. The precision agriculture technique enables sustainable production and higher productivity by monitoring essential aspects of crops and through the rational use of agricultural inputs, reducing production costs. Embrapa Instrumentação presents to the public a drone, an unmanned aerial vehicle, that small producers will be able to use in crop management.
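Much of the drone-based monitoring described above starts from vegetation indices computed over multispectral images, the best known being NDVI = (NIR - Red) / (NIR + Red). A rough illustration follows; the band values are invented, and this is in no way the actual Embrapa software.

```python
# NDVI sketch: healthy vegetation reflects strongly in near-infrared (NIR)
# and absorbs red light, so NDVI approaches 1; bare soil or stressed crops
# score lower. Band reflectances below are invented for illustration.
def ndvi(nir, red):
    return (nir - red) / (nir + red) if (nir + red) else 0.0

pixels = {
    "vigorous canopy": (0.60, 0.08),
    "stressed patch": (0.35, 0.20),
    "bare soil": (0.25, 0.22),
}
for label, (nir, red) in pixels.items():
    print(f"{label}: NDVI = {ndvi(nir, red):.2f}")
```

Pest-prediction software of the kind mentioned would typically watch for NDVI falling over time in particular patches of a field.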
Information technology is applied to increase rural production and becomes one of the highlights of the annual event promoted by the federation. From climate to energy and tourism (including precision agriculture), Embrapa's president listed opportunities that could leverage the country's growth in the near future. Precision agriculture, drones and other agricultural innovations, a machinery fair and the sale of products and services are among the attractions. With an investment of R$ 7 million, the organization involved 20 of its research centers and partner companies, with 214 researchers and 15 experimental units taking part. News related to the project's theme. See the publications of the AP Network team. The AP Network has 15 experimental areas distributed across the Northeast, Center-West, Southeast and South of the country, covering annual crops (corn, soybean, wheat, irrigated rice and cotton) and perennial crops (eucalyptus, pine, grape, pasture, sugarcane, orange, apple and peach). Schematic map of the location of the experimental units in Brazil: Cristalina - GO Pelotas - RS Matão - SP Mogi Mirim - SP Bagé - RS Dourados - MS Vacaria - RS São Carlos - SP Morro Redondo - RS Rio Negrinho and Doutor Pedrinho - SC Planaltina de Goiás - GO Castelândia - GO Guarapuava - PR Não-Me-Toque - RS Petrolina - PE Bento Gonçalves - RS University of Nebraska-Lincoln This is the workspace created by the gvSIG Brasil community. The International Society of Precision Agriculture (ISPA) is a non-profit professional scientific organization. The mission of ISPA is to advance the science of precision agriculture globally. The best agricultural IT solutions developed by Embrapa, gathered here. Sustainable agriculture: feeding the present to guarantee the future. An initiative to bring Brazil into the international effort to standardize communication between tractors and agricultural implements. Agricultura de Precisão Embrapa Instrumentação Agropecuária Rua XV de Novembro, 1452 São Carlos-SP Telefone (16) 2107-2804 - Fax (16) 2107-2902",pt 1459251677,CONTENT SHARED,6023609667389715259,4340306774493623681,-7292854281461484137,,,,HTML,https://blog.coinfund.io/five-bitcoin-and-ethereum-based-projects-to-watch-in-2016-664f3a20fa3f?gi=fdcb7d02236c,Five Bitcoin and Ethereum Based Projects to Watch in 2016 - Blockchain Investment Vehicles,"Five Bitcoin and Ethereum Based Projects to Watch in 2016 Git Money (Bitcoin) (Crowdsourcing) Git Money allows anyone to earn money by solving open issues on GitHub. Repository owners put up bounties for tasks and the reward is automatically paid to whoever submits the first successfully merged pull request. No need for interviews, contracts, or trust. So far 44 bounties have been claimed, for translating web pages, creating graphics/videos, writing a Medium article, and various programming tasks. Of the projects on this list, Git Money is the only production-ready application with high potential for near-term adoption. Amazon's Mechanical Turk has shown that crowdsourcing platforms are useful, and the model is a great fit for GitHub, the home of many cryptocurrency-based projects. Git Money was designed using a 21 Bitcoin Computer. ""We've said it before and we'll say it again - it doesn't matter where you're from or who you are. We believe in the right to work anywhere on whatever you want.
It doesn't matter your gender, your education, your social or spiritual beliefs or your skin color."" Augur (Ethereum) (Prediction markets/forecasting) Augur is a decentralized prediction market where users wager on the outcome of future events. There is plenty of research (1, 2) demonstrating the value of centralized markets as forecasting tools, and Augur aims to address their shortcomings. Check out the Beta to browse example markets and test the platform with play money. We're unlikely to see a final product before the year's end, but the team has been making steady progress since crowdfunding over $5 million last fall. Microsoft will be offering a solution for companies to run private markets through its Azure Blockchain as a Service (BaaS) platform. TransActive Grid (Ethereum) (Utilities/renewable energy) TransActive Grid allows neighbors to buy and sell renewable energy among themselves, offering communities with microgrids a way to create a local energy market while reducing emissions and pollution. This is enabled by attaching solar panels to computers that track and log the creation of energy onto Ethereum's blockchain. The first implementation is being tested with participants of the Brooklyn City Microgrid project. For details see recent features in The New Scientist, Vice, CleanTechnica, and the team's video presentation. Yours (Bitcoin) (Content sharing/social media) Yours is a decentralized content sharing application where users will earn money for their submissions. When you ""endorse"" (think Reddit upvote or Facebook like) a post, you send the author a small microtip from your account's Bitcoin wallet. Endorsing a post also gives you the chance to be rewarded with a portion of the author's following tips; this incentivizes the early support of quality content. Zapchain is an existing site demonstrating demand for Bitcoin-powered content monetization, and Medium is rolling out features that will allow authors to get paid. Here's a very early stage demo of Yours. ""We're going to enable anyone to get paid for the things they create, and allow the world as a whole to decide who deserves recognition, fame, and fortune."" Slock.it (Ethereum) (Internet of Things) Slock.it GmbH is a German startup working on slocks: software-based ""smart locks"" that can be controlled with a smartphone application. In their introductory video, you can see a glimpse of how they hope to disrupt the sharing economy, enabling the renting and selling of anything without a middleman. Microsoft will provide testing and integration tools for businesses through its Azure BaaS. ""If your name is GE, Pearson or 3M - it's the ideal way to get started - with a one-click deployment."" Slock.it is also working with RWE (Germany's second-largest utilities provider) on a smart-contract-powered electric vehicle charging platform, BlockCharge. They recently showcased a working prototype and will be testing the technology with real vehicles and stations over the next year. Slock.it's most ambitious venture is a Decentralized Autonomous Organization (DAO), a company that lives on Ethereum's blockchain and whose rules are enforced by software. Imagine a ""super-Kickstarter"" that allowed backers to invest and become permanent stakeholders in projects instead of receiving one-time rewards. Anyone can become a shareholder in the DAO by purchasing tokens (with ether) in the upcoming token sale.
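As a purely illustrative sketch of how such token-based membership, voting and fee-sharing can fit together - hypothetical names and drastically simplified rules, not Slock.it's actual Ethereum contracts:

```python
# Illustrative DAO mechanics: holders buy tokens, vote with weight
# proportional to their stake, and split fee revenue pro rata.
# Hypothetical and simplified; real DAOs run as on-chain smart contracts.
class ToyDAO:
    def __init__(self):
        self.tokens = {}  # holder -> token balance

    def buy_tokens(self, holder, ether):
        # Assume a flat 1 token per unit of ether for illustration.
        self.tokens[holder] = self.tokens.get(holder, 0) + ether

    def vote(self, ballots):
        """ballots: holder -> True/False. Each vote is weighted by stake."""
        yes = sum(self.tokens[h] for h, v in ballots.items() if v)
        no = sum(self.tokens[h] for h, v in ballots.items() if not v)
        return yes > no

    def distribute_fees(self, fees):
        total = sum(self.tokens.values())
        return {h: fees * bal / total for h, bal in self.tokens.items()}

dao = ToyDAO()
dao.buy_tokens("alice", 60)
dao.buy_tokens("bob", 40)
print(dao.vote({"alice": True, "bob": False}))  # True: 60 outweighs 40
print(dao.distribute_fees(10.0))                # {'alice': 6.0, 'bob': 4.0}
```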
Shareholders vote on the DAO's business decisions and receive a portion of transaction fees whenever slocks are used. The DAO's existence is supposed to ensure the slock network can continue operating even if Slock.it GmbH goes out of business. Slock.it will submit a proposal to the DAO for development of the Ethereum computer, the device responsible for controlling slocks. If approved by shareholders, the DAO will pay Slock.it using the ether it raised during the token sale. There are nearly 3,000 people in Slock.it's rapidly growing Slack channel, many of them eagerly waiting to participate in the DAO experiment.",en 1459251714,CONTENT SHARED,7905485530310717815,4340306774493623681,-7292854281461484137,,,,HTML,http://www.coindesk.com/blockchain-smart-contracts-bnp-paribas/,Blockchain Smart Contracts Startup Selected By BNP Paribas Accelerator - CoinDesk,"CommonAccord, a blockchain-based startup for legal documentation, is one of eight startups selected for BNP Paribas' new FinTech accelerator, L'Atelier. CommonAccord's goal is to create global codes for transferring legal documents like contracts, consents and permits. The startup wants to develop a distributed network of participants that synchronize files with one another, using blockchain, GitHub or email transfer. According to CommonAccord, however, blockchain is particularly important for its ability to automate routine functions using smart contracts and to provide an immutable ledger that can be used for legal enforcement. A parser program developed by Primavera De Filippi, an expert in the bitcoin legal space at Harvard's Berkman Center, is being used by CommonAccord to support peer-to-peer transactions. BNP Paribas, the French multinational bank, set up the accelerator in December to support startups and develop prototype solutions. Cryptocurrency and blockchain-related startups accounted for 3% of the 142 total startups that applied to be a part of L'Atelier's first season. Startups focusing on payments; cybersecurity, compliance and anti-fraud; and portfolio management led the number of applications.",en 1459251799,CONTENT SHARED,8194079557551008273,4340306774493623681,-7292854281461484137,,,,HTML,http://bitcoinist.net/using-gamified-hacking-challenges-to-attract-new-blockchain-developers/,Using Gamified Hacking Challenges To Attract New Blockchain Developers,"The blockchain ecosystem is always in need of more developers who want to put a fresh spin on the concept of distributed ledgers. Despite several conferences and development workshops around the world, it is rather difficult to attract coders on a larger scale. A new solution by Uber, which gamifies hacking challenges, might be something for Bitcoin and blockchain companies to take note of. Hacking Challenges Through Games Scouting for new developer talent is not an easy task, but there are certain tools that make life a bit easier for companies. Hacking challenges are an excellent way to test someone's coding skills in a more relaxed environment, as they present a problem that can be solved with out-of-the-box thinking. Uber has been looking at the concept of hacking challenges and decided to experiment with this solution to scout potential hires. These challenges will appear in the Uber app, and are labelled ""Code on the Road"".
Considering how this format is presented as a mobile game, it is an excellent way to kill time while being driven around, and it could land users a job. Although people with previous coding experience - especially engineers - will have somewhat of an advantage during these hacking challenges, the gamified concept is open to anyone willing to give it a try. Uber also claims it is not targeting specific users with this game, although the hacking challenges will be rolled out in US cities with a higher concentration of tech jobs. What is of particular interest is how Uber is advertising the hacking challenges as a tool for [aspiring] developers to showcase their skills. It is positive to see such a big company tackle things from a different angle, rather than relying on just job interviews, which are not an environment that caters to the strengths of developers. All in all, there are three different coding challenges in the game, each of which has to be completed within sixty seconds. Users who score above average can contact Uber directly through the app itself. Doing so will result in them receiving an email with a link to the job application. Interesting Solution For Future Blockchain Development Gamifying blockchain development could be an interesting way for companies to attract potential job candidates. Considering how most consumers in the ""hotter"" blockchain startup areas have access to a smartphone, it could be a good fit to integrate such a solution within existing mobile apps. Granted, companies will be ""fishing in the pool"" of enthusiasts, but that might be for the better as well. There are a lot of bright developers in the world of blockchain and digital currency, most of whom do coding on the side. Perhaps some of them would be elated to turn their coding skills into a full-time job. What are your thoughts on attracting new blockchain developers through a coding ""game""? Let us know in the comments below! Images courtesy of Shutterstock, Uber JP Buntinx is a freelance Bitcoin writer and Bitcoin journalist for various digital currency news outlets around the world. In other notes, Jean-Pierre is an active member of the Belgian Bitcoin Association, and occasionally attends various Bitcoin Meetups in Ghent and Brussels.",en 1459251851,CONTENT SHARED,-1672166631728511207,4340306774493623681,-7292854281461484137,,,,HTML,https://www.coinbr.net/blog/america-latina-apresenta-forte-potencial-para-o-bitcoin/,O potencial do bitcoin na América Latina,"28/03/2016 | by Safiri Felix | in Economia The projections for Latin America's economy in 2016 are not at all encouraging. On top of acute political crises in some of the region's main countries, such as Brazil and Venezuela, the sharp drop in the prices of the commodities that make up most of the region's exports - a consequence of the slowdown in the Chinese economy, the leading trade partner of many Latin American nations - is causing deep recessions.
Owing to this climate of economic pessimism, more and more people and companies in Latin America have come to understand and take advantage of bitcoin's major benefits as an attractive alternative for investment and payments, especially for expenses in foreign currency. Last year, adoption of the digital currency set records in Latin America. In Brazil, for example, 2015 saw more than R$ 113 million in bitcoin transactions, a 158% increase over 2014, and the currency appreciated 102% over the year. In Mexico, business grew 600% in 2015. The payment processor BitPay recorded 510% growth in its transactions mid-year, a pace that held strong through the end of 2015. Across the region, growth in commercial transactions exceeded 1,700% over the year. The 2015 figures show that whoever kept capital allocated in bitcoin last year saw their investment outperform the real by 92%, the Mexican peso by 65%, the Argentine peso by 41% and the Venezuelan bolivar by an impressive 400%. It is worth remembering that inflation in several countries hit record levels last year, and the situation is not expected to improve in 2016. In Venezuela, inflation reached 275% in 2015, while in Argentina it hit 30% and in Brazil just over 10%. The region's standard-bearer where bitcoin is concerned remains Argentina, which has the highest per capita number of enthusiasts of the technology. The new Argentine president, Mauricio Macri, appears to be an enthusiast himself: he recently met with billionaire Richard Branson, a BitPay shareholder, and discussed matters related to the digital currency. The Brazilian case, however, is especially interesting for investors. With inflation rising, diversifying reserves with bitcoin is an important way to preserve purchasing power and the fastest way to obtain foreign currency. The market is also making important strides in liquidity and in use cases such as those developed by coinBR. Daily traded volume has grown steadily in the first quarter of the year, suggesting the Brazilian market will move around R$ 300 million in 2016. Another important driver of bitcoin adoption in the region has been tightening capital controls. Sending money abroad through traditional financial services carries an average cost of more than 10%, on top of growing bureaucracy and taxation. Latin American bitcoin users are not using bitcoin only to store value or escape strict capital controls. Many are also taking advantage of its intrinsic efficiency to shop online at the many e-commerce sites that already accept the digital currency, and to use prepaid credit cards topped up with bitcoin, such as the ContaSuper and ADVcash solutions offered to coinBR customers. Despite the strong growth in bitcoin adoption in Latin America, the technology still faces obstacles to taking off. The e-commerce sector, for example, has not yet gained the kind of traction seen in North America and Europe. In Venezuela, the government has ordered bitcoin mining operations shut down and recently used its state television channel to air propaganda against bitcoin, calling it the currency of the deep web and a way to spirit foreign exchange out of the country.
In Bolivia, the central government banned, in 2014, the use of any currency not issued by the country's central bank, including bitcoin. The same happened in Ecuador, where the government likewise banned decentralized currencies in 2014. Despite the challenging scenario, Latin America definitely figures as a land of opportunity for bitcoin. Brazil, Venezuela, Mexico and Argentina will certainly be among the countries leading the growth in the technology's use.",pt 1459251889,CONTENT SHARED,-4374331682165863764,4340306774493623681,-7292854281461484137,,,,HTML,http://www.bbc.com/news/business-35890616,From fine wine to lotteries: Blockchain tech takes off - BBC News,"Imagine a world where you can vote in an election with your phone, where you buy a house in a matter of hours, or where cash simply doesn't exist. These are some of the scenarios being mooted by an increasingly excited blockchain community. The technology that underpins the cryptocurrency Bitcoin is nothing new - it's been around for decades. It's just an encrypted database that's distributed across a computer network. But what makes it different is that it can only be updated when everyone on that network agrees, and once entered the information can't be overwritten, making it extremely secure and reliable. And trust, as we know, underpins most business transactions. ""Blockchain, for perhaps the first time, presents a legitimate threat to the status quo,"" says Terry Roche, head of financial technology research at financial advisory firm Tabb Group. The tech has spawned a new generation of start-ups looking to find new applications, from peer-to-peer lending to smart contracts. No restrictions For example, OpenBazaar is a way people can sell anything to anyone, anywhere in the world, using bitcoins. Unlike with eBay or Amazon, users don't visit a website but download a programme that directly connects them with other potential buyers and sellers. ""Our goal is to unbundle the incumbent marketplaces around the world by offering a more private, secure and flexible option that isn't controlled by any one corporate interest, but rather, by the users themselves,"" developer Brian Hoffman tells the BBC. According to OpenBazaar, cutting out the middleman means there are no fees, no restrictions, no accounts to create, and you only reveal the personal information you feel comfortable sharing. The software is now in its testing phase and has been downloaded nearly 20,000 times in the last three weeks. Smart contracts Another major development exciting the industry is smart contracts: programmes that can automatically verify that contract terms have been met and, once that has been done, authorise payment - all in real time without any need for middlemen. The results are then indelibly recorded in the blockchain database. Some believe - perhaps fancifully - that such contracts could remove the need for lawyers one day. Companies like San Francisco-based SmartContract and Hedgy are already building businesses based on the concept, which could have applications in the financial, property and commerce markets. By incorporating smart contracts with the ""internet of things"" (IoT) - smart devices hooked up to the internet - blockchain tech could have uses far beyond the financial sector, says Emmanuel Viale, a managing director of Accenture's Technology Labs. ""You could have a wearable fitness tracker that would send the number of calories or steps taken to the blockchain.
The data is encrypted and my identity is anonymised. The same with home medical devices,"" he says. ""The Blockchain would create a link with health professionals - whether coaches, doctors or healthcare institutions - and the smart contract could trigger needed services - whether it's a fitness regime or treatment for a chronic disease."" In another example, start-up Hellosent thinks smart contracts and IoT devices could be used to monitor deliveries of fine wines. Sensors would continuously measure the temperature and humidity of the wine in transit; if either fell outside the levels agreed in the smart contract, the purchase would be automatically cancelled. Blockchain platform Some firms are even developing their own version of blockchain. Ethereum, for example, is a blockchain-based decentralised platform that developers can use to create their own applications, cryptocurrencies and virtual organisations. Co-founder Mihai Alisie says giants such as IBM, Microsoft and Samsung have already started researching its possible uses. ""These new applications put users in full control over their data, privacy and funds, making possible an ecosystem of web apps that can be trusted... since the code powering these applications is transparently running on the Ethereum blockchain,"" he says. There are developers on Ethereum creating all sorts of new projects, from lotteries to financial options exchanges, peer-to-peer home-produced energy trading platforms to games. ""From financial systems to governance systems and everything in-between, there is a brave new world awaiting to be imagined and built. I call this the crypto renaissance,"" says Mr Alisie. Predicting the future? One intriguing piece of software based on the Ethereum platform is Augur. Sounding somewhat reminiscent of the Borg from Star Trek, it harnesses ""the wisdom of crowds"" - collective intelligence - to predict the outcome of future events. ""The goal is to create a global prediction market platform that would enable the expertise of anyone to be utilised in markets, creating what we believe will be the most accurate forecasting tool in history,"" says Tony Sakich, Augur's director of marketing. Still in its development phase, it has now been released for beta testing. Of course, many of these blockchain start-ups may never get off the ground, and some of the predictions for the technology seem far-fetched, to say the least. At Davos earlier this year, the head of one European bank thought blockchain could sound the death-knell for cash. As venture capitalists pile in - investment in blockchain-related firms rose from $3m to $474m (£335m) between 2011 and 2015 - there is the risk of a dotcom-style bubble. There is also the issue of diminishing network capacity - blockchain database management takes up a lot of computing power. Blockchain boss Peter Smith has warned that the technology could ""run out of rocket fuel before the ship reaches orbit"", as complaints about slowing transaction speeds reach record levels. That said, there is a lot of innovation going on, and Peter Loop, head of innovation and disruptive technologies at Infosys Finacle, concludes: ""Blockchain as a commercially viable technology remains on an unclear path. ""But it's coming quickly. It's not 10 years away.
It's coming within a few years.""",en 1459257477,CONTENT SHARED,5714314286511882372,1895326251577378793,6290959012079283980,,,,HTML,https://www.sympla.com.br/como-startups-e-grandes-empresas-podem-colaborar-para-inovar-mais-e-melhor__60973,Como Startups e Grandes Empresas podem colaborar para Inovar mais e melhor,"Much has been said about how large companies can take advantage of startups to innovate, but little has been discussed about how those startups can leverage themselves by drawing on what large companies do best. Join this free event and discover how startups and large companies can collaborate to innovate more and better. With the participation of: - Maximiliano Carlomagno, founding partner of Innoscience - Romeo Busarello, Marketing and Innovation director at Tecnisa - Egon Barbosa and Igor Piquet, Cubo Network. Boost your startup through its relationships with large companies!",pt 1459257584,CONTENT SHARED,-8141818108252244664,1895326251577378793,6290959012079283980,,,,HTML,https://www.sympla.com.br/pimp-my-pitch-como-fazer-um-pitch-poderoso__60657,Pimp My Pitch: como fazer um Pitch poderoso,"Event description. What time? 2:00 pm to 6:30 pm. ""Google provides access to the world's information in one click"" - Larry Page and Sergey Brin: pitch to Sequoia. A successful pitch depends on much more than knowledge about your startup. The way information is conveyed has a profound impact on how the message is understood, and can be the difference between a successful investment and being forced to keep bootstrapping. In these 2 days of immersion you will learn the essential elements that make presentations excellent. With a constant interplay of theory, technique and practice, we will analyze successful pitches and presentations, and demonstrate through real cases which approaches are most effective at winning over your audience. From the script, through storytelling and slide design, to pitch rehearsal, you will learn step by step how to structure a high-level pitch, prepare an impactful deck and present yourself with poise and confidence. You will also be invited to explore some of the secrets of the great TED talks and lessons from Aristotle, and to discover what Shark Tank, Steve Jobs and Disney can teach you about creating inspiring, memorable presentations. STAGE TECHNIQUES - How can your gestures, tone of voice, gaze and posture strengthen the communication of your message? Throughout the course, you will have the opportunity to work on your pitch and take it to a new level based on what you learn and on feedback from the activities. Above all, you will be able to apply this new skill in the many everyday entrepreneurial situations that go far beyond investment: prospecting for clients, hiring employees, negotiating with partners and internal presentations. Your communication can transform your startup.",pt 1459257924,CONTENT SHARED,-8939172344092554931,-1443636648652872475,-9071650721576979280,,,,HTML,http://kotaku.com/google-deepmind-is-now-analysing-magic-and-hearthstone-1767628685,Google DeepMind Is Now Analysing Magic And Hearthstone Cards,"With retro games and Go well-conquered, where is an artificial intelligence like Google DeepMind meant to turn next?
Magic: The Gathering and Hearthstone, obviously. Before you get too excited (or maybe insanely depressed as you imagine a toaster holding aloft the Magic World Championship trophy on its ejection lever), there are no plans to set the AI loose on playing these popular card games. At least not yet. For now, the folks over at Oxford University are happy enough for DeepMind to analyse card data and transform it into code. Essentially, the task it is being set is one of translating the data from human to machine speak, and while the cards have their own game ""language"" and structure, they can certainly throw some curveballs. Here it is explained in their words: Many language generation tasks require the production of text conditioned on both structured and unstructured inputs. We present a novel neural network architecture which generates an output sequence conditioned on an arbitrary number of input functions. Crucially, our approach allows both the choice of conditioning context and the granularity of generation, for example characters or tokens, to be marginalised, thus permitting scalable and effective training. Using this framework, we address the problem of generating programming code from a mixed natural language and structured specification. We create two new data sets for this paradigm derived from the collectible trading card games Magic the Gathering and Hearthstone. For example, elements such as a card's resource cost never really change in nature and are easily deciphered; however, a card's text might specify that the cost is increased or reduced based on another condition. As you can imagine, writing a program that can analyse and account for these changes in card logic and translate them into arbitrary code is no trivial task. Rather than write the mother of all if/else statements, they've resorted to the use of DeepMind instead. By giving it enough data - all eleventy billion or so Magic cards, say - the AI can learn the ""language"" of card text to produce more accurate results. Apparently, it does a decent job on Hearthstone, though it still stuffs up: The card itself is on the left. On the right, the code DeepMind has generated based on its text. It handled the (relatively speaking) straightforward effect of the Madder Bomber fine, but the more specialised Preparation confused it. Which is fair enough - going by the number of times professional players have screwed up the cast order of Preparation, we can forgive DeepMind for getting it wrong. The researchers mention that the reason ""Madder Bomber"" was treated correctly was because the model had ""captured"" the difference from a similar card: The ""Madder Bomber"" card is generated correctly as there is a similar card ""Mad Bomber"" in the training set, which implements the same effect, except that it deals 3 damage instead of 6. Yet, it is a promising result that the model was able to capture this difference. Yep, it's getting the hang of it all right. At this stage I'd be worried about DeepMind getting addicted to Hearthstone and blowing all of Google's cash on packs. That would be a lot of packs.
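The ""Mad Bomber""/""Madder Bomber"" point is essentially about generalising from a near-duplicate training example. The toy sketch below mimics only the shape of the card-text-to-code task using naive nearest-neighbour retrieval; it is nothing like the paper's actual neural architecture, and the card texts and code templates are invented.

```python
# Toy card-text-to-code mapping: find the most similar known card text and
# adapt its code template by copying over the new card's number. Real systems
# (like the paper's networks) learn this mapping end to end.
import difflib

training = {
    "Battlecry: Deal 3 damage randomly split between all other characters":
        "def battlecry(board): board.deal_random_damage(3)",
    "Charge": "def on_play(minion): minion.can_attack = True",
}

def generate(card_text):
    # Nearest-neighbour lookup over known card texts.
    match = difflib.get_close_matches(card_text, list(training), n=1, cutoff=0.0)[0]
    template = training[match]
    # Naive adaptation: substitute any number found in the new card's text.
    digits = [t for t in card_text.split() if t.isdigit()]
    if digits:
        template = template.replace("3", digits[0])
    return template

new_card = "Battlecry: Deal 6 damage randomly split between all other characters"
print(generate(new_card))  # reuses the 'Mad Bomber'-style template with 6
```

The retrieval trick works here for the same reason it worked in the paper's example: the new card differs from a training card by a single number.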
This story originally appeared on Kotaku Australia.",en 1459258094,CONTENT SHARED,-1591454024897803197,-8606085472606356565,7660662789565329118,,,,HTML,http://recruitingdaily.com/how-okcupid-changed-hiring-forever/,How OKCupid Changed Hiring Forever.,"Twenty years ago, if you wanted to find a specific person - whether it was the perfect romantic counterpart or an ideally suited future employee - you could expect to spend months searching for anyone who simply qualified. The advent of the internet changed that completely. Suddenly, a tsunami of potential new candidates opened up for both personal and professional searches, and both lovers and recruiters had more options than they'd ever dreamed existed. The introduction of the web meant that people from anywhere in the world could connect with millions of new people from anywhere else, and conversations or relationships that would have been unfathomable before were suddenly possible. Dating sites started to spring up left and right to help connect potential matches, and traditional recruiting practices were entirely upended (think: physical job boards going digital). But this rush of new options created an entirely new - but equally problematic - challenge: too much information. The Problem With Too Much Data. In hiring, and also in love, there can be too much of a good thing. The average job listing elicits 250 applicants, while Tinder, for example, generates 1.4 billion swipes (potential matches) per day. With volume that high, quality control becomes nearly impossible, and people end up discarding candidates (either romantic or professional) without any real review, or else giving up entirely. For dating, you're left sorting through a lot of... less-than-desirable options. For hiring, it means that dozens of qualified, potentially incredible workers get overlooked while recruiters are forced to waste hours digging through irrelevant resumes or following false leads. Most recruiters end up either relying on referrals or just seizing upon the first people who somewhat conform to the job description, even if they aren't truly the best fit. The system is incredibly inefficient, and the cost to an organization is substantial. So how do you contend with this overwhelming influx of potential new matches? That's where OKCupid's technical breakthroughs set the stage for a new way of managing massive amounts of information. There were other popular dating sites at the time of its rise to popularity, but OKCupid was the one that developed a reputation for its innovative use of data science and its groundbreaking algorithm. The company also ran a notorious (now defunct) data blog in which it used its datasets to analyze romantic trends. Using user-sourced data and a single simple formula, OKCupid was able to pair each user with his or her most compatible dating prospects. Data science enabled it to turn that data into incredibly valuable (and interesting!) insights about the landscape of modern love - which is, after all, its operational industry. It was one of the first and most public uses of smart new technologies and information as a way of helping people find people. What Automation and Data Science Do For Hiring. Just like dating, hiring is an incredibly specific process, with several similar steps. Hiring managers need to find candidates that are not only ""appealing"" (experienced, highly qualified, etc.), but also a fit for that particular company. Do the candidate's strengths round out the team's weaknesses?
Could he or she promote the values of the company? Does the hire promote a higher diversity of experience? All valid considerations, but difficult to balance in an extremely competitive hiring market with a flood of prospects. Recruiters need to operate quickly and with precision. They need to be confident in their candidate choices and equipped with the information to engage with them and draw them in as quickly as possible. Automation, like the kind used by OKCupid, helps them do that. There's enough publicly available information about any potential candidate on the internet to make strategic sourcing exceptionally easy. If software can combine data from social networks, blogs, work portfolios, and other sources, it can generate a pretty thorough picture of a person in nanoseconds. Then, it can present recruiters with their ""top matches"" before they even start the sourcing process. That type of automation has played a key role in defining the modern hiring market. Sourcing, engaging, onboarding, growth... Everything has gotten much, much, much faster as a result. Hiring competition has reached an all-time high, and it's largely because organizations can operate that much more efficiently. The second component of OKCupid's major contributions - analytics and data science - has proven equally critical. That's because speed on its own isn't enough. In a world where the average cost of a misplaced hire is 2.5 times that employee's annual salary, regular mistakes aren't sustainable. Recruiters have to completely understand the industry landscape, the candidates, the needs of their own organization, and how each candidate would fit in. The right strategically designed analytics solution can help them do that. We're living in a brave new world with more connections, more interactions and more potential relationships. It can feel overwhelming at times, but ideally what it ultimately means is that we're all that much closer to finding our perfect match - whether that means meeting a partner or finally landing that one hire who takes your whole business to the next level. Robert Carroll currently serves as the Senior Vice President of Marketing for Gild, where he is responsible for crafting and executing Gild's marketing strategy including brand, sales enablement, press and analyst relations, events and demand generation programs.",en 1459259981,CONTENT SHARED,-6874540813378776198,-108842214936804958,770918631519434453,,,,HTML,https://medium.com/greylock-perspectives/the-hierarchy-of-engagement-5803bf4e6cfa,The Hierarchy of Engagement - Greylock Perspectives,"The Hierarchy of Engagement The Fuel to Build an Enduring, Billion Dollar Business Building an enduring, multi-billion dollar consumer technology company is hard. As an investor, knowing which startups have the potential to be massive and long-lasting is also hard. From both perspectives, identifying companies with this potential is a combination of ""art"" and ""science"" - the art is understanding how products work, and the science is knowing how to measure it. At the earliest stages of a company, it comes down to understanding how a product is built to maximize and leverage user engagement. I think of user engagement as the fuel powering products. The best products take that fuel and propel the product (and with it, the company) forward. Just how products do that is something I've been thinking about for most of my career.
At Nir Eyal's Habit Summit this week, I presented a framework for how I evaluate non-transactional consumer companies I'm looking to invest in that synthesizes some of this thinking - I call it the Hierarchy of Engagement. The hierarchy has three levels: 1) Growing engaged users, 2) Retaining users, and 3) Self-perpetuating. As companies move up the hierarchy, their products become better, harder to leave, and ultimately create virtuous loops that make the product self-perpetuating. Companies that scale the hierarchy are incredibly well positioned to demonstrate the growth and retention that investors are looking for. I encourage you to use the Hierarchy of Engagement framework when thinking about your own product, and building out your product roadmap. But, like most frameworks, the hierarchy is a work in progress - I am continuously improving it and would love to hear your thoughts. Let me know what you think. Follow me on Twitter @sarahtavel.",en 1459268727,CONTENT SHARED,-3173020603774823976,-1032019229384696495,3042342415047984532,,,,HTML,https://nodejs.org/en/blog/announcements/welcome-google/,Welcome Google Cloud Platform!,"Google Cloud Platform joined the Node.js Foundation today. This news comes on the heels of the Node.js runtime going into beta on Google App Engine, a platform that makes it easy to build scalable web applications and mobile backends across a variety of programming languages. In the industry, there have been a lot of conversations around a third wave of cloud computing that focuses less on infrastructure and more on microservices and container architectures. Node.js, which is a cross-platform runtime environment that consists of open source modules, is a perfect platform for these types of environments. It's incredibly resource-efficient, high-performing and well-suited to scalability. This is one of the main reasons why Node.js is heavily used by IoT developers who are working with microservices environments. ""Node.js is emerging as the platform in the center of a broad full stack, consisting of front end, back end, devices and the cloud,"" said Mikeal Rogers, community manager of the Node.js Foundation. ""By joining the Node.js Foundation, Google is increasing its investment in Node.js and deepening its involvement in a vibrant community. Having more companies join the Node.js Foundation helps solidify Node.js as a leading universal development environment."" Along with joining the Node.js Foundation, Google develops the V8 JavaScript engine, which powers Chrome and Node.js. The V8 team is working on infrastructural changes to improve the Node.js development workflow, including making it easier to build and test Node.js on V8's continuous integration system. Google V8 contributors are also involved in the Core Technical Committee. The Node.js Foundation is very excited to have Google Cloud Platform join our community and looks forward to helping developers continue to use Node.js everywhere.",en 1459268965,CONTENT SHARED,-9107331682787867601,-1032019229384696495,3042342415047984532,,,,HTML,http://techcrunch.com/2016/03/29/hopper-raises-16-million-for-a-travel-app-that-tells-you-the-best-time-to-fly/,Hopper raises $16 million for a travel app that tells you the best time to fly,"Hopper, the maker of a handy travel application that tells you the best time to fly in order to find the best deals, has now raised additional capital to continue to grow its business. It has also scored a partnership with American Airlines which allows it to sell AA's tickets through its app.
In terms of the new investment, the company announced $16 million in a growth funding round, led by BDC Capital IT Venture Fund. Existing investors, including OMERS Ventures, Accomplice (formerly Atlas Venture) and Brightspark Ventures, also participated. That brings the startup's total raise to date to $38 million. Though the travel space is rife with competition, Hopper has developed a useful application for those looking for an easier way to figure out when to fly in order to save money on airfare. Its prediction engine uses data and analysis from ""billions"" of tracked flight prices to suggest when travelers should buy tickets. The company claims that it's capable of saving customers up to 40% on their next flight, thanks to its service. While ever-changing airfare ticket prices are a complex thing to track, this data is presented to Hopper's end users in an easy-to-read format by way of Hopper's app. Here, customers can view color-coded calendars which show which days are affordable to travel (green), all the way up to expensive days (red). You're also able to watch trips, get detailed forecast data, receive alerts when fares drop and more. The end result for customers is being able to snag a ticket without overpaying - though having more flexible travel dates, of course, helps. What's also interesting about Hopper is that it's entirely focused on delivering its product by way of a mobile app, not a website. According to Hopper's CEO and founder, Frederic Lalonde, previously a VP at Expedia, the company isn't a ""mobile-first"" startup - it's ""mobile-only."" ""We fundamentally believe that the mobile app experience in one shape or form is going to take over all commerce,"" he says. ""A hundred percent of what we do is based on mobile. And beyond that, we're very much a conversational company in the sense that 90 percent of what we sell comes from a push notification."" In other words, when Hopper alerts its users that now is the time to buy, they do. While the company doesn't disclose its revenue, Lalonde would say that everything related to its service is growing at roughly 40 percent month-over-month, and he believes the app will reach 1 million downloads per month this summer. Hopper is still a relatively young company. It first launched in early 2015, and was named the #7 best app by Apple that same year. It's also the #4 travel app in the U.S. on the App Store, and #1 in dozens of countries. To date, the app has been downloaded over 3 million times, and has monitored fares for over 5 million watched trips worth over $4.5 billion in gross booking value. It has also sent over 38 million push notifications to users, with 7.3 million being sent last month alone, the company reports. Hopper has increased its accuracy since its debut, too, and now claims it's 95 percent accurate in terms of predicting ticket prices within $5 of the actual fare up to a year ahead. But Lalonde points out that, while Hopper has improved its performance 3 percent since last year, it has done so in areas that matter - like holiday travel, for example. In addition to the new funding, Hopper has also now signed a deal with American Airlines that will make American Airlines and American Eagle fares available in its app. This includes around 6,700 daily flights to nearly 350 destinations in over 50 countries. This is notable because AA was one of the few holdouts yet to offer its fares in Hopper. The app today supports almost all major airlines' fares.
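Hopper hasn't published how its prediction engine works, so the following is purely an illustrative sketch of how the color-coded calendar and ""buy now"" alerts described above could be derived from tracked fare history: bucket each day's current quote by where it falls among the fares previously observed for that route and date. All function names, thresholds, and data below are assumptions.

```python
import statistics

# Purely illustrative: Hopper has not published its model. We bucket a
# day's current quote by quartiles of the fares tracked so far.
def fare_bands(history):
    """First and third quartiles of past fares for one route/date."""
    q1, _, q3 = statistics.quantiles(history, n=4)
    return q1, q3

def calendar_color(history, quote):
    """Hopper-style calendar color: green = cheap, red = expensive."""
    q1, q3 = fare_bands(history)
    if quote <= q1:
        return "green"
    if quote >= q3:
        return "red"
    return "yellow"

def advice(history, quote):
    """Fire a 'buy now' push notification when the quote turns cheap."""
    return "BUY" if calendar_color(history, quote) == "green" else "WAIT"

tracked_fares = [310, 295, 330, 280, 350, 305, 290, 340]  # made-up data
print(calendar_color(tracked_fares, 285), advice(tracked_fares, 285))
# -> green BUY
```

A production system would presumably forecast where a fare is headed rather than just compare it with history; that forward-looking piece is where a claim like 95 percent accuracy within $5 of the actual fare would have to come from.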
The company's business model involves a combination of compensation from airlines, and a $5 convenience fee charged to consumers. With the new capital, Hopper plans to grow its team of 25 in Montreal and Cambridge. It's soon rolling out a number of new features that will allow it to push recommendations to users and offer a more personalized experience. For instance, users will be able to specify things like a desire to avoid long layovers, or to skip ultra-low-cost fares in favor of regular tickets, and the app will suggest things users may not know to look for - like alternative airports, for instance. The startup also plans to increase its focus on international travel by year-end, with a focus on more accurate pricing, no matter where users book.",en 1459269196,CONTENT SHARED,-1021685224930603833,4670267857749552625,-265833983523746108,,,,HTML,http://www.telequest.com.br/portal/index.php/opiniao/5908-industria-4-0-desafios-e-oportunidades,Industry 4.0: challenges and opportunities,"*Igor Schiewig 25/03/2016 - Industry 4.0 is much more than the use of technologies such as the Internet of Things (IoT) and cloud computing in production processes. It is characterized by genuinely innovative business models, which generate significant and sustainable competitive advantages for companies. What will really secure sustainable competitive differentiation is not the simple adoption of Industry 4.0, but the ability to use its resources to create unique and innovative experiences for customers (Manufacturing in the Age of Experience). These experiences, which can be accelerated and taken to new levels, will be the secret of companies' success in this new business environment. *Igor Schiewig, Director of Business Transformation at Dassault Systèmes for Latin America",pt 1459271088,CONTENT SHARED,1392715980907132808,4340306774493623681,-7292854281461484137,,,,HTML,http://www.newsbtc.com/2016/03/29/medstar-washington-affected-bitcoin-ransomware/,MedStar Washington Potentially Affected By Bitcoin Ransomware,"There are rumors circulating that this healthcare institution is affected by Bitcoin ransomware, as one staffer mentioned how she saw a pop-up on two different computer screens. In these pop-up windows, there was information about the infection, and instructions to pay a ransom through ""some form of Internet currency"". Those details have not been officially confirmed at the time of publication, though. Another hospital in the United States has fallen victim to a virus bringing its services to a halt. MedStar Hospital Center, located in Washington, noticed the virus intrusion early Monday morning, resulting in email services being shut down, and having no access to its vast database of patient records. This matter has piqued the interest of the FBI, which is currently investigating to determine whether or not Bitcoin ransomware is involved in this attack. MedStar Washington Hospital Center Faces Virus Threat Given the number of hospitals being hit by Bitcoin ransomware over the past few months, it only seems to make sense that this issue plaguing the MedStar Washington Hospital Center should be classified in the same category. However, it remains unclear whether or not Bitcoin ransomware is the cause of this attack, as there has not been an official statement just yet. What we do know is that no information appears to have been stolen from the hospital database, which is cause for a sigh of relief.
Healthcare institutions have a vast amount of personal details about their patients, and all of these details could be of high value to individual internet criminals. Luckily, this is not the case here, or so it would seem. MedStar has to be applauded for its course of action, as the decision was made to shut down all system interfaces. Doing so effectively cut off all access paths for this virus to spread itself through the entire hospital IT system. That being said, all of the clinical facilities will remain open, although there will be some delays due to staff reverting to paper records for the time being. Keeping in mind how MedStar operates ten different hospitals, as well as over 250 other facilities in the Washington region, this virus attack appears to have been aimed deliberately at such a prominent healthcare provider. With lab results delayed by quite a margin, this issue needs to be sorted sooner rather than later, so that healthcare services can be restored to normal. Avast Mobile Enterprise General Manager Sinan Eren stated: ""There's a lack of budget, a lack of talent to handle these issues. Sometimes the human capital might not be there. All these things are an incremental cost to their systems. Therefore, they kind of push the can down the road to deal with technical updates later."" Regardless of what type of virus has infected the MedStar systems, it is evident the healthcare industry still has no answer to lackluster staff awareness regarding software security threats. The combination of staffers being unaware of what they should be doing, and outdated computer systems, results in these types of issues occurring far more often than people would like.",en 1459271144,CONTENT SHARED,-6142462826726347616,4340306774493623681,-7292854281461484137,,,,HTML,http://cointelegraph.com/news/coinfest-2016-uniting-the-worlds-bitcoiners,CoinFest 2016: Uniting the World's Bitcoiners,"Following the success of its international events spread across three continents and 15 countries, CoinFest, the world's first decentralized currency convention, has announced the launch of CoinFest 2016, with more than 20 cities already planning to participate. Founded by a group of 100 Bitcoiners in Vancouver circa 2013, CoinFest has collaborated with startups, enthusiasts and experts to spur the awareness and mainstream adoption of cryptocurrency internationally. The organization was established to target the growth of the Bitcoin industry and community in Canada. Bitcoin community-focused events Thanks to its success in 2014, CoinFest began to gain popularity for its friendlier top-tier events. Instead of gearing towards a corporate-style conference, CoinFest has been hosting Bitcoin community-focused events, introducing Bitcoin to conventional merchants and the general population. The founder of eMunie, a major sponsor, tells CoinTelegraph in a Skype interview: ""It focuses more on regions where Bitcoin and similar technologies can have a greater impact and really improve the quality of life for a lot of people. It's more friendly and has a real 'by the community, for the community' feel to it, which we like.
I've found other conferences and events to be very 'corporate,' which can be a little intimidating for newcomers to Bitcoin and other similar technologies."" In-depth discussions and technical applications In 2015, CoinFest hosted its events across 16 cities, growing exponentially in size since 2013, when the organization started by hosting one annual event in Canada. Today, global Bitcoin conferences and events hosted by CoinFest are segregated into sub-organizations. They cover local cities or even entire regions, such as newcomer CoinFest Midwest. CoinFest Midwest sponsor phintech.io tells CoinTelegraph: ""Coinfest Midwest is responsible for curating CoinFest along the central corridor of the United States. Our goals are to raise awareness about Bitcoin by hosting events and then educating attendees. Our content is relevant for Bitcoin newcomers just curious about the digital currency and existing fans who want to have in-depth discussions and explore more technical applications. Hopefully this year we'll get to buy and sell goods via Bitcoin, install more Bitcoin wallets for people and send them their first BTC."" A Global Network of High-Profile Startups and Experts with Bitcoin-focused Initiatives Sponsors and hosts of CoinFest Midwest have engaged in exciting events in the greater Omaha area, incorporating a network of high-profile startups and experts with Bitcoin-focused initiatives. Digital Economy 2015 was one of the most successful CoinFest Midwest events in Omaha last year. The collaboration and partnerships with regional fintech leaders, entrepreneurs, speakers, and panels gathered more than 130 attendees, allowing both entrepreneurs and consumers to explore the innovative nature of new financial technologies and their potential to transform the traditional finance sector. Making it Big Continuing its momentum, 30 cities have announced plans to host simultaneous events from April 5-10, including Amsterdam, Copenhagen, Washington DC, Omaha, Toronto, Vancouver, Valdivia (Chile) and Manchester. Currently, CoinFest Amsterdam and Manchester are the most popular conferences out of all regions. Across its events, CoinFest is planning to give away 1 BTC in Grabbit prizes, over 40K HUC in Huntercoin prizes and all kinds of prizes on the Wheel of Bitcoin. AirBitz, the official wallet of CoinFest 2016, has announced its support, and will provide mBTC to every Bitcoin beginner who sends out a tweet. Phintech.io says: ""I feel that Asia and Africa will likely become as big, if not bigger, markets than their western counterparts. There are huge opportunities for Bitcoin and other crypto currencies in these regions due to the general lack of financial infrastructure available, especially to those living in remote or poor areas.""",en 1459271181,CONTENT SHARED,2255603060224026824,4340306774493623681,-7292854281461484137,,,,HTML,http://www.newsbtc.com/2016/03/29/french-senate-will-debate-bitcoin-regulation/,French Senate Will Debate on Bitcoin Regulation,"While the efforts by the French Senate to combat terrorism and organized crime are commendable, the decision to make Bitcoin regulation stricter could end up hurting the local economy more than anything. The regulation of Bitcoin and digital currency will be debated in France, as the government is taking the risk of money laundering and terrorist funding very seriously. Although there is little to no proof to back up these claims, authorities feel the need to debate further on Bitcoin regulation.
Bitcoin's public image of being a currency used primarily by Internet criminals is rearing its ugly head once again. French Senate Ponders Over Bitcoin Regulation Moreover, exchange platforms dealing with Bitcoin and other digital currencies will need to report to Tracfin - an entity which has always been in favor of Bitcoin regulation - and keep more detailed logs of all customers. Doing so would allow government officials and Tracfin to identify financial fraud and money laundering attempts a lot quicker.",en 1459271208,CONTENT SHARED,3037840448416371691,4340306774493623681,-7292854281461484137,,,,HTML,https://bitcoinmagazine.com/articles/bitcoin-wallets-as-swiss-bank-accounts-the-developer-s-perspective-1459261829,Bitcoin Wallets as Swiss Bank Accounts: The Developer's Perspective,"Bitcoin was seemingly dragged into the very public debate on privacy and encryption recently. Specifically, President Barack Obama warned that if the government can't access phones, ""...everybody is walking around with a Swiss bank account in their pocket,"" which appeared to refer to cryptocurrency. Last week, Bitcoin Magazine reported on Bitcoin's industry representatives and their positions on encryption, privacy, Bitcoin's role in tax evasion and money laundering and more. In part two of our coverage: What do the actual builders of these pocket-sized ""Swiss bank accounts"" think? Bitcoin Magazine reached out to Electrum developer Thomas Voegtlin, Breadwallet CEO Aaron Voisine, Mycelium developer Leo Wandersleb and Ledger CTO Nicolas Bacca to see where they stand. Broken The debate on encryption and privacy caused by the ongoing dispute between Apple and the FBI took a sharp turn this week. The United States Department of Justice had long claimed it was unable to access encrypted iPhones without help from Apple, but this turned out not to be true. Although the Department of Justice did not explain how it got access to the phone, the Bitcoin wallet developers Bitcoin Magazine spoke to were not surprised that it could. Ledger CTO Nicolas Bacca speculated: ""To get access to the data, the FBI probably relied on some kind of physical attack involving the flash memory. Swapping the flash memory or disabling writes could get you infinite retries, so you can brute force the access code. The operating system doesn't handle that securely."" Bacca also pointed out that Bitcoin itself is much more secure than a typical iPhone. ""I believe Bitcoin is less at risk against physical attacks compared to other cryptosystems, because you always get a way to invalidate a possibly compromised key - just send the coins to a different address if you notice quickly enough. The issue is properly qualifying how long that is."" Balance While the FBI demands Apple help the government agency access encrypted iPhones, the tech company maintains that weakening encryption could result in a privacy disaster. Obama, explaining his position last week, argued it's important to find the right balance between privacy and security, suggesting weakened encryption should be an option. But this option was firmly rejected by all wallet developers Bitcoin Magazine spoke to. Electrum developer Thomas Voegtlin explained: ""In the physical world, you can design a door that is difficult to break. This means that someone may be able to force that door, but not covertly, and that is why we have a balance between privacy and security. But computers are devices that tend to make things binary.
In the world of computing, you either do have the key, and opening the door is very easy, or you don't, and it is impossible. If we give a special key to the government, they will be able to open millions of doors with that key, with no effort, and without attracting attention. Nothing will prevent someone from misusing that key, and eventually the key will be leaked and fall into the wrong hands. A technological backdoor is the modern equivalent of the Ring of Gyges."" Ledger's Nicolas Bacca agreed. ""As a society I believe we should be extremely worried about calls to weaken encryption,"" Bacca said. ""Practically, it cannot be done for a single target, as any 'NOBUS' backdoor turns into a global risk when it's discovered. Ideologically, we already had a clear demonstration that letting agencies run loose with that kind of absolute power was a pretty bad idea. Politically, I believe it can lead to important economic collateral damages, which is another good reason to avoid doing it."" Voisine agreed with this assessment as well. Moreover, the Breadwallet developer argued that strong encryption is itself a balancing factor against widespread data monitoring, not a factor that itself requires balance. ""Privacy is core to the human experience. Imagine if your landlord or your extended family knew exactly how much money you had at any given time, and how much you spent and when. It would be a disaster. Privacy is a leveler that allows parties with otherwise unequal bargaining power to negotiate on equal footing. It's even required by law in many situations, such as with the finances of publicly traded firms. Intentionally weakened encryption is absolutely something that we should all be worrying about. In a future world with the potential for ubiquitous surveillance, strong encryption available to individuals will be the counterbalancing force,"" Voisine said. Taxation Perhaps the main reason Obama cited the Swiss bank account example was to point out that strong encryption could allow citizens an easy escape from certain types of taxation. More specifically, Bitcoin users can potentially store significant amounts of wealth on their phones without government agencies knowing about it, or even being able to touch it. Mycelium developer Leo Wandersleb, however, questioned whether that should be considered a problem at all. Wandersleb: ""So Obama is worried that government might not have ultimate power over its citizens' assets? Help me, why again does he assume the right to have that power? I'm not a U.S. citizen, so excuse me if I'm not too firm with regard to the Constitution and its amendments ... but I know of nothing that would say 'all property is yours unless the government doesn't agree.'"" Electrum's Voegtlin took a slightly more moderate position. ""I am not an anarchist, and my involvement with Bitcoin is not motivated by anti-government ideology. I believe in a society with government, with taxes and law enforcement. I write Bitcoin software because I believe that the benefits of cryptocurrency, for society and for our economies, far outweigh the risks. However, we should not be denying there are risks. New technologies always carry new risks."" But the answer to combating these risks is not to encroach on encryption, Voegtlin pointed out. Rather, he believes the risks should be mitigated through alternative means. Voegtlin: ""I think that law enforcement and taxation will need to adapt to cryptocurrency.
In 2011, Pirate Party founder Rick Falkvinge proposed that, in a world with Bitcoin, governments should tax consumption, rather than wealth or income. I believe that level of thinking is appropriate."" Voisine, too, believes the answer will eventually be found in alternative methods of taxation. ""There are many tax revenue streams that are difficult to avoid even with the leveling power of privacy putting the individual on a more equal footing with the state. Two examples are use taxes and property taxes. As the industry grows and the world moves their wealth into Bitcoin, I think we will see a gradual shift toward more heavy reliance on these types of income streams by the state. This has the added benefit of making the true cost of state services and programs more transparent. Privacy for individuals and transparency for the state is a wonderful thing."" Security Perhaps unsurprisingly, Bitcoin wallet developers have no intention of weakening the security or decreasing the privacy they offer. Rather, most intend to increase both the security and privacy of their products where possible. Mycelium currently uses a server-based model, which means governments could potentially pressure the wallet provider to give up certain privacy-sensitive information or provide false transaction data to users. But Wandersleb explained the wallet intends to improve this: ""We are working on removing ourselves from the equation. Our new wallet will not depend on our servers, so there will be no single point of failure. It will also be open source, so even if we were forced to weaken our product, others could choose to distribute reliable versions. Lastly, Mycelium works with hardware wallets that provide a very good protection against broken operating systems."" Bacca's Ledger is one of the companies working with Mycelium to realize this solution. Bacca explained: ""We are building additional security layers directly on the phones when we can, and we're also building a new hardware wallet device, Ledger Blue. This provides open applications development on a Secure Element, which a phone can use over Bluetooth low-energy. That would be close to the hypothetical doomsday device referred to by Obama."" And Voisine, too, emphasized that privacy and security remain top priorities for Breadwallet. Voisine: ""A Swiss bank in every pocket empowers the individual in incredible ways we've never seen before. It's time that this option becomes available to the whole world, not just the wealthy and politically connected, and we are going to give it to them.""",en 1459272493,CONTENT SHARED,7767869406844505704,-1032019229384696495,3042342415047984532,,,,HTML,http://techcrunch.com/2016/03/29/slice-insurance/,Slice Labs raises $3.9M to insure on-demand workers when they're on the job,"Tim Attia, co-founder and CEO of Slice Labs, said there's ""a ticking time bomb"" threatening the on-demand economy - namely, insurance and liability. Attia's tackling that problem with Slice, which will offer insurance for on-demand workers and providers, starting with rideshare drivers and then homeshare hosts. The startup is announcing today that it has raised $3.9 million in seed funding from Horizon Ventures and XL Innovate. Don't companies like Uber and Airbnb provide insurance already? They do, but Attia (who's spent more than a decade in the insurance industry) argued that there's still a significant amount of risk.
For example, Uber's coverage changes depending on where the driver is in the process (going online versus accepting the trip versus starting the ride), with the expectation that the driver will have their own personal or commercial policy. He also said Uber's policy is in Uber's name, not the driver's, which could lead to legal complications. ""I'm not picking on Uber and Airbnb - it's the only thing they can do,"" Attia said. Slice, on the other hand, aims to offer new kinds of insurance products designed for on-demand workers. These products will be available on a transactional basis - so a ridesharing driver should be covered from the moment they start driving or get into the car, but they're only paying for coverage during the time that they're working (making it more affordable than just taking out a pricey commercial insurance policy). Technically, another firm is providing the actual insurance, but it sounds like Slice is doing most of the actual work. ""Our product - we price it, we issue it, we bill, we manage claims, but we're not taking risk,"" Attia said. ""We're writing on somebody else's policy, on somebody else's paper. We're doing everything an insurance company does minus the investing."" Attia plans to launch Slice's first products in June. Even though Slice is based in New York City, Attia said he's still deciding where to offer coverage first. He also hopes to work with on-demand services to offer Slice insurance through the service's sign-up process. He added that over time, Slice can use data to improve its products and pricing, for example determining whether ""an uberX behaves more like a person because it's their car, or if they behave more like a cab.""",en 1459273588,CONTENT SHARED,-7202774608580336956,-1387464358334758758,7126182793278562478,,,,HTML,http://googleappsupdates.blogspot.com.br/2016/03/enable-new-google-contacts-for-your.html,Enable the new Google Contacts for your users from the Admin console,"Last year, we launched the new Google Contacts preview to consumer users with several time-saving improvements, such as a rebuilt ""Find duplicates"" feature, automatic updating of contacts with shared Google profile information, and a fresh look and feel. Now, we're pleased to announce that several new improvements have been added at the request of Google Apps users and we're making it possible for Apps admins to enable the Google Contacts preview for their users from the Admin console. Getting started with the new Google Contacts preview This new Admin console setting is off by default, giving admins full flexibility to enable the new Contacts preview on their own schedule. To enable this feature in the Admin console, click on Apps > Google Apps > Contacts > Advanced settings. Once enabled, all users will see the new Google Contacts by default and will have the ability to switch back to the old Google Contacts by using the ""Leave the Contacts preview"" link from the More section of the left-hand navigation menu. After enabling this feature, when users next visit Google Contacts they will see the new UI and a ""warm welcome"" splash page will guide them through the new features. Easily get rid of duplicates: No one likes having duplicate contacts, but they inevitably crop up. So we've rebuilt our ""Find duplicates"" feature from the ground up to provide a quick and painless way to clean up your duplicates.
Improved Domain Directory: The directory is easier to navigate through a more intuitive and faster ""infinite"" scroll. Automatically Updated Contacts: Users can keep their contacts up to date automatically with shared information from the domain directory, Google+ (if enabled), and more. In addition, Google Contacts also now supports high-resolution photographs. Limitations and switching back to the old Contacts Some features are not yet available in the new Google Contacts. When people try to use them, they will be prompted with a message that these features are not supported yet and presented with a link to go back to the old Google Contacts. To opt out and switch back to the old Google Contacts, users can click More > Leave the Contacts preview from the left-hand navigation of the Google Contacts preview. Launch Details Release track: Launching to both Rapid release and Scheduled release. Rollout pace: Full rollout (1-3 days for feature visibility). Impact: All end users. Action: Admin action suggested/FYI. Note: all launches are applicable to all Google Apps editions unless otherwise noted.",en 1459274130,CONTENT SHARED,5822211783543822544,-5527145562136413747,261728485890880028,,,,HTML,http://jezebel.com/study-shows-women-and-minorities-are-punished-for-speak-1766927323,Study Shows Women and Minorities Are Punished for Speaking Up About Workplace Diversity,"A new study finds that people love to hear about workplace diversity - just as long as it's not coming from minorities or women. In fact, if you're not a white male, you're more likely to be punished for speaking up. An Academy of Management Journal study entitled ""Does diversity-valuing behavior result in diminished performance ratings for nonwhite and female leaders?"" explored who really gets to champion diversity without consequences. Stefanie K. Johnson and David R. Hekman, two University of Colorado professors, polled nearly 400 ""working executives"" about, as Fusion explains, ""how cultural, racial, and gender differences were respected and how much diversity was considered a valuable part of their day to day work."" The professors thought their subjects would tell them positive things about workplace inclusion, but received the opposite response. From a piece they wrote in the Harvard Business Review: Much to our surprise, we found that engaging in diversity-valuing behaviors did not benefit any of the executives in terms of how their bosses rated their competence or performance. (We collected these ratings from their 360-degree feedback surveys.) Even more striking, we found that women and nonwhite executives who were reported as frequently engaging in these behaviors were rated much worse by their bosses, in terms of competence and performance ratings, than their female and nonwhite counterparts who did not actively promote balance. For all the talk about how important diversity is within organizations, white and male executives aren't rewarded, career-wise, for engaging in diversity-valuing behavior, and nonwhite and female executives actually get punished for it.
Struggling to believe this was the case, the pair re-examined their findings with 300 additional subjects who were ""given fictional hiring decisions for fake jobs that came along with pictures of the hiring manager and the new hire."" They found that if minority or women managers hired a minority or a woman instead of a white dude, the study's participants rated them as ""less effective"": Basically, all managers were judged harshly if they hired someone who looked like them, unless they were a white male. While this is not necessarily shocking, it does go some way toward explaining why companies that say they're working on diversifying are often actually resistant to it.",en 1459285207,CONTENT SHARED,6152652267138213180,-3390049372067052505,6959402895006009403,,,,HTML,http://www.mckinsey.com/business-functions/organization/our-insights/staying-one-step-ahead-at-pixar-an-interview-with-ed-catmull,Staying one step ahead at Pixar: An interview with Ed Catmull,"The cofounder of the company that created the world's first computer-animated feature film lays out a management philosophy for keeping Pixar innovative. Ed Catmull has been at the forefront of the digital revolution since its early days. The president of Pixar and Disney Animation Studios began studying computer science at the University of Utah in 1965. In 1972, he created a four-minute film of computer-generated animation that represented the state of the art at the time. In his 2014 book, Creativity, Inc., Catmull chronicled the story of Pixar - from its early days, when Steve Jobs invested $10 million to spin it off from Lucasfilm, in 1986; to its release of the groundbreaking Toy Story, in 1995; and its acquisition by the Walt Disney Company, for $7.4 billion, in 2006. But even more, he described the thrill and the challenge of stimulating creativity while keeping up with the breakneck pace of the digital age. Catmull recently sat down with McKinsey's Allen Webb and Stanford University professors Hayagreeva Rao and Robert Sutton for a far-ranging discussion that picked up where Creativity, Inc. left off. They delved deeply into Catmull's rules for embracing the messiness that often accompanies great creative output, sending subtle signals, taking smart risks, experimenting to stay ahead of uncertainty, counteracting fear, and taking charge in a new environment - as Catmull did when he became the president of Disney Animation Studios. The Quarterly: One of the questions we had after reading your book is how do you, as the leader of a company, simultaneously create a culture of doubt - of being open to careful, systematic introspection - and inspire confidence? Ed Catmull: The fundamental tension is that people want clear leadership, but what we're doing is inherently messy. We know, intellectually, that if we want to do something new, there will be some unpredictable problems. But if it gets too messy, it actually does fall apart. And adhering to the pure, original plan falls apart, too, because it doesn't represent reality. So you are always in this balance between clear leadership and chaos; in fact, that's where you're supposed to be. Rather than thinking, ""OK, my job is to prevent or avoid all the messes,"" I just try to say, ""well, let's make sure it doesn't get too messy."" Most of our people have learned that it isn't helpful to ask for absolute clarity. They know absolute clarity is damaging because it means that we aren't responding to problems and that we will stop short of excellence.
They also don't want chaos; if it gets too messy, they can't do their jobs. If we pull the plug on a film that isn't working, it causes a great deal of angst and pain. But it also sends a major signal to the organization - that we're not going to let something bad out. And they really value that. The rule is, we can't produce a crappy film. Strategy in a digital age The Quarterly: So that's the rule; that's the strategy? Ed Catmull: Our real rule is to make a great movie. Our business is predicated on this. Of course, we need the film to be financially successful, and restarting a film is very expensive. But if we're to avoid becoming creatively bankrupt, we have to do things that are high risk. This affects the entire culture - everybody keeps raising the bar, upping the ante in terms of what goes on the screen. This raises costs, so we have a continual struggle to reduce our costs. People coming in from the outside, as well as employees, look at the process and say, ""you know, if you would just get the story right - just write the script and get it right the first time, before you make the film - it will be much easier and cheaper to make."" And they're absolutely right. It is, however, irrelevant because even if you're really good, your first pass or guess at what the film should be will only get you to the B level. You can inexpensively make a B-level film. In fact, because the barriers to entry into this field now are quite low, you can get to B easily. If you want to get to A, then you have to make changes in response to the problems revealed in your first attempt and then the second attempt, et cetera. Think of building a house. The cheapest way to build it is to draw up the plan for the house and then build to those plans. But if you've ever been through this process, then you know that as the building takes shape, you say, ""what was I thinking? This doesn't work at all."" Looking at plans is not the same thing as seeing them realized. Most people who have gone through this say you have to have some extra money because it's going to cost more than you think. And the biggest reason it costs more than you think is that along the way, you realize something you didn't know when you started. Ed Catmull (center) works through story ideas with his team at a retreat for Toy Story 3. The Quarterly: You mentioned signals a moment ago; say a bit more about that. Ed Catmull: Restarting something that doesn't work is costly and painful, but in doing so, we send a major signal to our company. But there are other signals, too. We put short films at the beginning of our movies. Why? Nobody is going to go to a movie because of the shorts, and neither the theater owners nor Disney gets any more money because of them. So why do the shorts? Well, we are sending some signals. It is a signal to the audience that we're giving them more than they're paying for, a signal to the artistic community that Pixar and Disney are encouraging broader artistic expression, and a signal to our employees that we're doing something for which we don't get any money. While they all know that we have to make money and want us to, they also want a signal that we are not so driven by money that it trumps everything else. The Quarterly: Are there any other signals you'd highlight? Ed Catmull: Here is a simple example, so simple that most people would overlook it: our kitchen employees are part of the company.
I think a lot of companies overuse the phrase ""our core business"" - for instance, ""making food for our employees is not our core business."" So they farm it out. Now, in a lot of companies, including ours, there are certain things you do farm out. You don't do everything yourself. But this notion of ""our core business"" can become an excuse for being so financially driven that you actually harm your culture. If you farm out your food preparation, then you've set up a structure where another company has to make money. The only way they can make more money, which they want to do, is to decrease the quality of the food or service. Now we have a structural problem. It's not that they're bad or greedy. But in our case, the kitchen staff works for us, and because it's not a profit group, their source of pride comes from whether or not the employees like the food. So the quality of food here is better than at most other places. Also, the food here is not free - it's at cost. Making it free would send the wrong signal about value to the kitchen crew. Everybody loves the chef and the staff. We have people who are happier. They're not gone for an hour and a half because they're going somewhere else to get a decent meal. They're here, where we have more chance encounters; it creates a different social environment. That's worth something to us, to our core business. The Quarterly: You said that risk taking is critical to your artistic and, ultimately, your business success. Could you describe how you think about risk at Pixar? Ed Catmull: For me, there are three stages of risk. The first stage is to consciously decide what risks you want to take. The second is to work out the consequences of those choices; this can be fairly time consuming. The third stage is ""lock and load,"" when you do not intentionally add new risk. The trick is to make sure you do stage one - doing something that has risk as part of it. For example, when you're building a team for a film, if you have a team that's worked together before and it's exactly the same team, you know they know how to work with each other and that they can be very efficient. If you keep doing this, though, you're going to end up with an ingrown team. On the other hand, if you build a team with all new people, then they won't see looming hazards, and they can fall apart. So you put together a blend. The mix of new and experienced people is a conscious risk taken at the beginning - stage one. The second stage then is getting the group working as a coherent whole for the heavy-duty work at the end of a production. Likewise, with technology, we know that if we don't change the technology from film to film, we can become extraordinarily efficient because everybody knows how to use it. But we also know we'll become out-of-date if we do that. So we introduce new technology. Sometimes it's a small risk and sometimes it's a complete replacement of the underlying infrastructure - a huge risk, with great angst and pain. But our people buy into it because it's for the good of the studio, even though they know it will cause them so much trouble. Similarly, if you consider the stories themselves, they're all hard to make - it doesn't matter whether it's an original film or a sequel. But there are different levels of commercial risk. If we're making a sequel to The Incredibles, it is low commercial risk. It is very hard to make, yet low commercial risk. A sequel to Frozen would be low commercial risk.
However, if we make a movie about a rat who wants to cook or a trash compactor that falls in love with a robot, this is high commercial risk. But if we only made low-commercial-risk films, we would become creatively bankrupt. Again, we make conscious choices to assume different levels of risk. This isn't the same thing as risk minimization or spreading risk. In the case of Pixar, every film we have started in the last 20 years, except one, we have finished. These are our babies. Pixar's 2007 Academy Award-winning film, Ratatouille, the story of a rat who longs to be a chef, was praised by critics for its imaginative premise and innovative animation. The Quarterly: In your book, you suggested that Disney Animation fell into a trap like that. Ed Catmull: When Walt was alive, Disney made impactful films. After he died, the quality went down. Then in the '90s, they had four more impactful films - The Little Mermaid, Beauty and the Beast, Aladdin, and The Lion King. At that time, they thought they had found a template to consistently produce good movies. They said ""animation is the new American Broadway."" So every film was a musical with five to seven songs and a funny sidekick, and they kept doing that. Spectacular success doesn't lead to deep introspection, and that lack of introspection in turn leads to wrong conclusions. You see this all the time, right? Successful companies draw conclusions about how smart and good they are, and then a significant number of them fall off the cliff because they drew the wrong conclusions. The Quarterly: You said the barriers to entry have fallen in your business. What other big changes are taking place? Ed Catmull: We can all see that technology is changing and, just as obvious, the way people spend their time is changing. One result is that major tentpole movies have become increasingly important because they bring a lot of people into the theaters. These are a great social experience, although I should add that none of us wants to see the smaller films marginalized - they bring a lot of creativity into the industry. It is a real dilemma. Meanwhile, if you look out 10 or 20 years from now, will the changes we are seeing lead to new business models? Change is coming, and the impact isn't clear. In my career, I've gone through many major transitions. If you pay attention, you can get it right about two to four years out. After that, we are doing a lot of guessing. I can see, though, that more people in this industry embrace change than ever before. On the hardware side of things in our business, the technological change, frankly, is driven by the gaming industry. Even though we were the originators of the graphics technology, which they fully acknowledge and are thankful for, we're just not big enough to drive people to design chips for us. So we are fortunate that there's this major gaming industry and that graphics chips keep getting better so we can keep driving forward. But there is nothing stable in this environment. Disney is in the extraordinary position of having three graphics and animation R&D groups - Pixar, Disney Animation, and now ILM [Industrial Light and Magic, acquired by Disney when it purchased Lucasfilm in 2012]. In addition, we have two research groups at major universities to keep driving the technology, as well as research at Disney's Imagineering. Participating in and driving change are taken very seriously.
For Disney's latest animated film, Zootopia, animators developed new technology to render the characters' fur accurately, using as many as two million individual hairs for some animals. The Quarterly: So it's about placing a lot of bets and hedging your bets? Ed Catmull: My own belief is that you should be running experiments, many of which will not lead anywhere. If we knew how this was going to end up, we'd just go ahead and do it. This is a tricky issue - people don't want to fail. They put a greater burden on themselves than we intend to put on them. I think it's natural because they never want to fail. One of the things about failure is that it's asymmetrical with respect to time. When you look back and see failure, you say, ""it made me what I am!"" But looking forward, you think, ""I don't know what is going to happen and I don't want to fail."" The difficulty is that when you're running an experiment, it's forward looking. We have to try extra hard to make it safe to fail. The Quarterly: That's fascinating. Experiments are great in retrospect but not in prospect - because you're scared. Ed Catmull: In addition to the asymmetry, there are two meanings to the word ""failure."" The positive meaning is that we learn through failures. But in the real world - in business, in politics - failure is used as a bludgeon to attack opponents. So there is a palpable aura of danger around failure. It's not made up; it's real. This is the second meaning. So we have these two meanings and, emotionally, we can't separate them. And we don't actually call something educational until after it happened. The Quarterly: So what can you do about that? Ed Catmull: On the film side, we are making more experimental films that aren't burdened with the expectation of theatrical release but give us the opportunity to try something riskier. For feature films, we try to make sure that a certain number are ""unlikely"" ideas, which force us to stretch. The Quarterly: It sounds as though you think a lot about fear and how to counteract its corrosive effects. Ed Catmull: Fear is built into our nature; we want to succeed and we respond physiologically to threats - both to real threats and to imagined threats. If people come into an organization like ours and they're welcomed in, what's the threat? Well, from their point of view, they're thinking, ""this is a high-functioning environment. Am I going to fit in? Am I going to look bad? Will I screw up?"" It's natural to think this way, but it makes people cautious. When you go to work for a company, they tell you something about the values of the company and how open they are. But it's just words. You take your actual cues from what you see. That's just the way we're wired. Most people don't talk explicitly about it, because they don't want to appear obtuse or out of place. So they'll sometimes misinterpret what they see. For example, when we were building Pixar, the people at the time played a lot of practical jokes on each other, and they loved that. They think it's awesome when there are practical jokes and people do things that are wild and crazy. Now, it's 20 years later. They've got kids; they go home after work. But they still love the practical jokes. When new people come in, they may hear stories about the old days, but they don't see as much clowning around. So if they were to do it, they might feel out of line. Without anyone saying anything, just based on what they see, they would be less likely to do those things.
Meanwhile, the older people are saying, ""what's wrong with these new people? They're not like we were. They're not doing any of this fun stuff."" Without intending to, the culture slowly shifts. How do you keep the shift from happening? I can't go out and say, ""OK, we're going to organize some wild and crazy activities."" Top-down organizing of spontaneous activities isn't a good idea. Don't get me wrong - we still have a lot of pretty crazy things going on, but we are trying to be aware of the unspoken fears that make people overly cautious. If you're just measuring yourself by your outward success, then you're missing a huge part of what drives people. The Quarterly: In light of your experience integrating Pixar and Disney, what do you think a new CEO coming into an existing organization should - and should not - do during the first month or so? Ed Catmull: When we came to Disney, we spent two months just listening. Obviously, John [Lasseter, the chief creative officer of Pixar and Disney] and I were talking with people, doing some coaching and so forth. But we drew no conclusions for two months, about people or anything else. We just watched. The idea is to pay attention to the psychology and the sociology of the people. When you come in and you're the new boss, everybody's rather nervous. They're trying to figure you out, too. So you should start with the assumption that everybody's trying to do the best they can. For me, it's not even putting people on a provisional basis by saying, ""well, we'll see how they work out."" I'm just assuming they're going to work out. When they start to falter, you help them. And it's only after you've tried to help them - and they don't respond after repeated tries - that you do something. Here's another thing that isn't obvious that we tried to be very careful about. Let's suppose somebody doesn't work out. And you, as an experienced person, know fairly soon that they don't have the ability to do the job. If they're leading a team and you've determined they can't do it, what should you do? The normal thing is to say, ""why would I waste people's time by letting a poor leader stay in place?"" We don't say that. The reason is, if we remove somebody as soon as we figure out they can't do the job, we've just induced fear in the other leaders. They don't usually see things as fast as you do because they're focused on their jobs. It makes them think, ""oh, if I screw up, they're going to remove me."" So the cost to the organization of moving quickly on somebody is higher than it is if you let the person go on too long. You make the change when the need for it becomes obvious to other people. Then you can do it. I will admit that there were a couple of times, though, when we waited too long. This is a hard part of managing. The Quarterly: As you look ahead, what worries you? Ed Catmull: Everybody talks about succession planning because of its importance, but to me the issue that's missed is cultural succession. You have to make sure the next level down understands what the actual values are. For example, Walt Disney was driven by technological change and he brought that energy into the company. This was sound and color in the early days of the film industry. Then, in the theme parks, he used the highest technology available to create experiences and animatronics. But after he died, the people left didn't fully understand how he thought.
So it fell away from the company, and it didn't come back until Walt's nephew, Roy Disney Jr., used his authority to reintroduce the concept. He insisted on getting into a contract with Pixar, over the objection that our software wouldn't save any money. He said, ""no, I want it because it will infuse energy into animation."" He was very explicit about it - he understood better what Walt was doing. The question is, ""if Walt understood it, why didn't the other people understand it?"" They just assumed that he was a genius, without thinking about what he was actually doing. Thus, the value wasn't passed on. Today, much of our senior leadership's time is spent making sure our values are deeply embedded at every level of our organization. It is very challenging - but necessary for us to continue making great movies. About the Authors This interview was conducted by Stanford University professors Huggy Rao and Robert Sutton and the Quarterly's editor in chief, Allen Webb, who is based in McKinsey's Seattle office.",en 1459287682,CONTENT SHARED,9032993320407723266,2416280733544962613,9184785347184128873,,,,HTML,http://9to5mac.com/2013/01/06/ideo-founder-david-kelley-talks-design-steve-jobs-cancer-and-the-importance-of-empathy/,"IDEO founder David Kelley talks design, Steve Jobs, cancer, and the importance of empathy","CBS posted an excellent interview of David Kelley this evening. Kelley discusses Steve Jobs at 3:00 and then again at 7:40, but the whole video is definitely worth a watch. A longer Jobs clip and the transcript are below: It is a concept that had its genesis in 1978, when Kelley and some Stanford pals took the notion of mixing human behavior and design and started the company that would eventually become IDEO. One of their first clients was the owner of a fast-growing personal computer manufacturer by the name of Steve Jobs. David Kelley: He made IDEO. Because he was such a good client. We did our best work for him. We became friends and he'd call me at 3 o'clock in the morning. Charlie Rose: At 3 a.m.? David Kelley: Yeah, we were both bachelors so he knew he could call me, right? So he'd call me at 3 o'clock and he'd just like, with no preamble, say, ""Hey, it's Steve."" First, I knew if it was 3 o'clock in the morning, it was him. There was no preamble. And he'd just start - and he said, ""You know those screws that we're using to hold the two things on the inside?"" I mean, he was deep into every aspect of things. Kelley's company helped design dozens of products for Apple, including the Apple III and Lisa and the very first Apple mouse, a descendant of which is still in use today. David Kelley: He said to us, ""You know, for $17 make - I want you to -"" He gave us that number, $17. ""I want you to make a mouse we're gonna use in all our computers."" So what happened here was we're trying to figure out how to make - so you move your hand and how you make the thing move on the screen. So at first, we thought we gotta make it really accurate, you know? Like when we move the mouse an inch, it's gotta move exactly an inch on the screen. And then after we prototyped it, we realized that doesn't matter at all. Your brain's in the loop! The whole thing was make it intuitive for the human. But even after they solved that monumental problem, Jobs still wasn't satisfied. David Kelley: So he didn't like the way the ball sounded on the table. So we had to rubber coat the ball. Well, rubber coating the ball was a huge technical problem because you can't have any seams. You gotta get it just right.
And so, you know, it would be just one thing - like that. Charlie Rose: And suppose Steve had said to you ""I'd like to have a ball that's not steel but rubber coated"" and you said ""No, you can't do that Steve."" What would he say? David Kelley: Well the expletives that I would have- are probably not good on camera, but it was basically, ""I thought you were good,"" you know? Like, ""I thought I hired you because you were smart,"" you know? Like, ""You're letting me down."" It was shortly after that that Steve Jobs came into the picture. For over 30 years they worked together and were close friends. Charlie Rose: What's the biggest misconception about him? David Kelley: I think the big misconception is that he was kind of like, you know, malicious. He was like, trying to be mean to people. He wasn't. He was just trying to get things done right and it was- you just had to learn how to react to that. He did some lovely things for me in my life. Jobs introduced Kelley to his wife KC Branscomb. And Steve Jobs was also there for Kelley when the unthinkable happened. In 2007, Kelley was diagnosed with throat cancer - and given a 40 percent chance of survival. Jobs, already suffering from his own deadly cancer, gave him some advice. David Kelley: He came over and said, ""Look, you know, don't consider any alternative - go straight to Western medicine. Don't try any herbs or anything."" Charlie Rose: Why do you think Steve said, ""Don't look for alternative medicine, go straight to the hard stuff?"" David Kelley: I think he had made- in his mind, he had made the mistake that he had tried to cure his pancreatic cancer in other ways other than, I mean, he just said, ""Don't mess around."" You know, when we both had cancer at the same time was when I got really close to him and I was at home, like sitting around in my skivvies, you know, waiting for my next dose of something and I think it was the day after the iPhone was announced. And he had one for me, right? Charlie Rose: An iPhone? David Kelley: You know, your own iPhone, delivered by Steve Jobs, right after it comes out, was a lovely feeling. Anyway, so he decides to hook it up for me. So he gets on the phone to AT&T and he's gonna hook up my phone and it's not going well. Charlie Rose: This is such good news for me. David Kelley: And eventually he pulls the ""I'm Steve Jobs card"" you know, he says to the guy, ""I'm Steve Jobs."" I'm sure the guy on the other end says, ""Yeah buddy, I'm Napoleon."" You know, like get outta here. But anyway- so never did really get it hooked up. Charlie Rose: He never hooked it up? David Kelley: No. Not that day. Charlie Rose: But he was close. What did he teach you about living with cancer? David Kelley: Steve focused more on his kids, I think, than anything. And it made me fight more to survive and so that focus on family you know was something that he taught me. Charlie Rose: You care deeply that you watch your daughter- David Kelley: Yes. Charlie Rose: As she continues to grow. David Kelley: It's about her- what was her life gonna be like if I died? That's really motivating. It was around that time that Kelley decided to commit himself to something even bigger...and why he approached Stanford University and a wealthy client named Hasso Plattner with the idea of setting up a school dedicated to human-centered design. David Kelley: He thought that was a great idea and he said he'd help me. And I said, ""Oh thank you"" and then I went back to the development.
Charlie Rose: You had no idea what he meant? David Kelley: No, the development office at Stanford said, ""When a billionaire says 'I'll help you' you should call him back right away."" So turns out, Hasso funded the whole thing. Charlie Rose: $35 million? David Kelley: Yeah, yeah. He said, ""How much do you need?"" And I wish I had said $80 million. He said yes to whatever I said I think. Kelley now runs the groundbreaking and wildly popular Hasso Plattner Institute of Design at Stanford, the ""d. school."" It is recognized as the first program of its kind dedicated to teaching design thinking as a tool for innovation - not just to designers - but to students from all different disciplines. [David Kelley: I think you can follow your noses a little bit around that. Where's the big idea? Where's the excitement?] Twice as many Stanford grad students want to take classes as there are seats available. The lucky 500 students in the program augment their master's degree studies in business, law, medicine, engineering and the arts by solving problems collaboratively and creatively, and immersing themselves in the methodology Kelley's made famous. But there are no degrees. It is something Steve Jobs talked him out of. David Kelley: He said, ""I don't want somebody with one of your flaky degrees,"" right? Charlie Rose: I don't want them working for me. David Kelley: Yeah. I don't want them working for me if they just have your flaky degree but if they have a computer science degree or a business degree and then they've come and have our way of thinking on top of that, I'm really excited about it. Today his cancer is in remission. He spends more time doing the things that he cares about most, including tinkering in his workshop with his 15-year-old daughter.",en 1459296016,CONTENT SHARED,7520301770873472812,-7421703586506797266,-1551022094078713720,,,,HTML,http://autoestrada.uol.com.br/noticia/1-noticias/1527-aplicativo-gratuito-permite-gerenciar-consorcio-honda-diretamente-do-celular,Free app lets users manage Consórcio Honda directly from their phones - News - Auto Estrada,"Consórcio Nacional Honda has launched a self-service app for its customers, available free of charge for Android and for Apple's iOS. After installing it, a plan member simply registers their CPF (Brazilian taxpayer number) to view the electronic payment-slip details for one or more plan shares. The app can be downloaded directly to a smartphone. It also lets users check the results of group assemblies, place bids, and update registration details for Honda motorcycle and automobile consórcio plans. It is available in the Google Play store for Android devices and the App Store for iPhones and iPads. The Japanese automaker's consórcio program has been in the market for over 30 years and currently counts more than 1.3 million active customers. Groups run from 18 to 80 months. Published 28/03/2016",pt 1459296092,CONTENT SHARED,-5715266554592745669,-1443636648652872475,8209530310193218854,,,,HTML,http://www.latimes.com/opinion/op-ed/la-oe-wright-robots-jobs-data-mining-20160328-story.html,Robots are coming for your job,"A viral video released in February showed Boston Dynamics' new bipedal robot, Atlas, performing human-like tasks: opening doors, tromping about in the snow, lifting and stacking boxes. Tech geeks cheered and Silicon Valley investors salivated at the potential end to human manual labor.
Shortly thereafter, White House economists released a forecast that calculated more precisely whom Atlas and other forms of automation are going to put out of work. Most occupations that pay less than $20 an hour are likely to be, in the words of the report, ""automated into obsolescence."" In other words, the so-called Fourth Industrial Revolution has found its first victims: blue-collar workers and the poor. The general response in working America is disbelief or outright denial. A recent Pew Research Center survey found that 80% of Americans think their job will still exist in 50 years, and only 11% of today's workers were worried about losing their job to automation. Some - like my former colleagues at the CIA - insist that their specialized skills and knowledge can't be replaced by artificial intelligence. That is, until they see plans for autonomous drones that don't require a human hand and automated imagery analysis that outperforms human eyes. Human workers of all stripes pound the table claiming desperately that they're irreplaceable. Bus drivers. Bartenders. Financial advisors. Speechwriters. Firefighters. Umpires. Even doctors and surgeons. Meanwhile, corporations and investors are spending billions - at least $8.5 billion last year on AI, and $1.8 billion on robots - toward making all those jobs replaceable. Why? Simply put, robots and computers don't need healthcare, pensions, vacation days or even salaries. Powerhouse consultancies like McKinsey & Co. forecast that 45% of today's workplace activities could be done by robots, AI or some other already demonstrated technology. Some professors argue that we could see 50% unemployment in 30 years. Deniers of the scope and scale of this looming economic upheaval point hopefully to retraining programs, and insist that there always will be a need for people to build and service these machines (even as engineers are focused on developing robots that fix themselves or each other). They believe that such shifts are many decades away, even as noted futurist Ray Kurzweil, who is also Google's director of engineering, says AI will equal human intelligence by 2029. Deniers also talk about all the new jobs they assume will be created during this Fourth Industrial Revolution. Alas, a report from the 2016 World Economic Forum calculated that the technological changes underway likely will destroy 7.1 million jobs around the world by 2020, with only 2.1 million replaced. With the future value of human labor (read: our incomes) in doubt, what do we do? One way to cushion the economic blow is to reclaim something from the technology realm that we've been giving away for free: our personal data. Companies that sell personal data should pay a percentage of the resulting revenue into a Data Mining Royalty Fund that would provide annual payments to U.S. citizens, much as the Alaska Permanent Fund distributes oil revenues to Alaskans. This payment scheme would start with traditional data - customer, financial and social media information sold to advertisers - but would also extend to future forms of data like our facial expressions and other biometrics. If Google, Facebook or others were profiting from harvesting timber, oil, gold or any other public resource, it would be illegal and immoral for them not to pay for it. The same logic should apply to our data. Profound changes lie ahead with implications beyond our paychecks, to be sure. Ethicists and philosophers already are debating what a world without work might look like. 
It's clear that no one will escape the outcomes - negative and positive - of this economic and technological revolution. A Data Mining Royalty Fund isn't about helping just the unemployed factory worker who used to earn $20 an hour, the truck driver replaced by self-driving vehicles or the minimum-wage barista. It's about taking steps to guarantee some minimum income to your family, or the one down the block, before any of us are automated into obsolescence. Bryan Dean Wright is a former CIA covert operator who resides in Oregon. @BryanDeanWright",en 1459296182,CONTENT SHARED,8545647269051113523,-1443636648652872475,8209530310193218854,,,,HTML,http://m.ign.com/articles/2016/03/29/googles-ai-deepmind-turns-its-gaze-to-hearthstone-and-magic-the-gathering,Google's AI DeepMind Turns its Gaze to Hearthstone and Magic: The Gathering - IGN,"What will it be used for next? Researchers at Oxford University are setting Google's artificial intelligence DeepMind loose on analyzing Hearthstone and Magic: The Gathering playing cards. According to Kotaku , the AI analyzes card data such as resource cost and damage, and turns it into code that a machine can read. Here's the abstract from the paper titled 'Latent Predictor Networks for Code Generation': ""Many language generation tasks require the production of text conditioned on both structured and unstructured inputs. We present a novel neural network architecture which generates an output sequence conditioned on an arbitrary number of input functions. Crucially, our approach allows both the choice of conditioning context and the granularity of generation, for example characters or tokens, to be marginalised, thus permitting scalable and effective training. ""Using this framework, we address the problem of generating programming code from a mixed natural language and structured specification. We create two new data sets for this paradigm derived from the collectible trading card games Magic the Gathering and Hearthstone. On these, and a third preexisting corpus, we demonstrate that marginalising multiple predictors allows our model to outperform strong benchmarks."" Basically, the system generates code by combining the structured part of a card, e.g. the mana cost, with the natural-language part that may change the way the card works. Where DeepMind comes in is that the model can learn, by analyzing more and more cards, to produce more accurate code. Luckily, there are well over 10,000 Magic: The Gathering cards DeepMind can peruse to get better. Right now, for example, it understands how the Hearthstone card Madder Bomber works because it had previously seen Mad Bomber, which works in a similar way. However, for a card like Preparation, it is still struggling to find the actual meaning. The image below shows the cards with associated code, with correct segments in green and incorrect segments in red. Thankfully for professional Hearthstone players, DeepMind isn't actually playing the game yet, so they're free from the same fate as some of the best Go players in the world .
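To make the card-to-code task concrete, here is a hypothetical sketch of the kind of input/output pair such a system might train on. Everything below is invented for illustration: the published datasets target a real game engine's API, and the class and method names here (MinionCard, battlecry, take_damage) are stand-ins.

import random

class Character:
    def __init__(self, health):
        self.health = health
    def take_damage(self, n):
        self.health -= n

class MinionCard(Character):            # minimal stub standing in for a real game engine
    def __init__(self, name, mana, attack, health):
        super().__init__(health)
        self.name, self.mana, self.attack = name, mana, attack

# Input side of a training pair: structured fields plus free-form rules text.
mad_bomber_spec = {
    'name': 'Mad Bomber', 'cost': 2, 'attack': 3, 'health': 2,
    'text': 'Battlecry: Deal 3 damage randomly split between all other characters.',
}

# Output side: the implementation the model is asked to emit for that spec.
class MadBomber(MinionCard):
    def __init__(self):
        super().__init__('Mad Bomber', mana=2, attack=3, health=2)

    def battlecry(self, others):
        # 'deal 3 damage randomly split' becomes three 1-damage hits on random targets
        for _ in range(3):
            random.choice(others).take_damage(1)

Seen this way, it is clearer why Madder Bomber (deal 6 damage instead of 3) is easy once Mad Bomber has been seen, while a card like Preparation, whose text changes the cost of future cards rather than describing a direct effect, has no nearby template to copy.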
Matt Porter is a freelance writer based in London. Make sure to visit what he thinks is the best website in the world, but is actually just his Twitter page.",en 1459296301,CONTENT SHARED,668505019835540303,-1443636648652872475,8209530310193218854,,,,HTML,http://fastml.com/bayesian-machine-learning/,Bayesian machine learning,"So you know the Bayes rule. How does it relate to machine learning? It can be quite difficult to grasp how the puzzle pieces fit together - we know it took us a while. This article is an introduction we wish we had back then. While we have some grasp on the matter, we're not experts, so the following might contain inaccuracies or even outright errors. Feel free to point them out, either in the comments or privately. This is el drafto. Come back later for la version final. Bayesians and Frequentists In essence, Bayesian means probabilistic. The specific term exists because there are two approaches to probability. Bayesians think of it as a measure of belief, so that probability is subjective and refers to the future. Frequentists have a different view: they use probability to refer to past events - in this way it's objective and doesn't depend on one's beliefs. The name comes from the method - for example: we tossed a coin 100 times, it came up heads 53 times, so the frequency/probability of heads is 0.53. For a thorough investigation of this topic and more, refer to Jake VanderPlas' Frequentism and Bayesianism series of articles. Priors, updates, and posteriors We start with a belief, called a prior. Then we obtain some data and use it to update our belief. The outcome is called a posterior. Should we obtain even more data, the old posterior becomes a new prior and the cycle repeats. This process employs the Bayes rule: P( A | B ) = P( B | A ) * P( A ) / P( B ) P( A | B ) , read as ""probability of A given B"", indicates a conditional probability: how likely is A if B happens. Inferring model parameters from data In Bayesian machine learning we use the Bayes rule to infer model parameters (theta) from data (D): P( theta | D ) = P( D | theta ) * P( theta ) / P( D ) All components of this are probability distributions. P( D ) is something we generally cannot compute, but since it's just a normalizing constant, it doesn't matter that much. When comparing models, we're mainly interested in expressions containing theta, because P( D ) stays the same for each model. P( theta ) is a prior, or our belief of what the model parameters might be. Most often our opinion in this matter is rather vague, and if we have enough data, we simply don't care that much. Inference should converge to probable theta as long as it's not zero in the prior. One specifies a prior in terms of a parametrized distribution - see Where priors come from . P( D | theta ) is called the likelihood of data given model parameters. The formula for likelihood is model-specific. People often use likelihood for evaluation of models: a model that gives higher likelihood to real data is better. Finally, P( theta | D ) , the posterior, is what we're after. It's a probability distribution over model parameters, including most likely point estimates, obtained from prior beliefs and data. Note that choosing a model can be seen as separate from choosing model (hyper)parameters. In practice, though, they are usually performed together, by validation, for example. Spectrum of methods There are two main flavours of Bayesian methods. Let's call the first statistical modelling and the second probabilistic machine learning. The latter contains the so-called nonparametric approaches.
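Before getting to those flavours, the prior-update-posterior cycle above can be made concrete in a few lines. A minimal sketch, assuming the coin example from earlier; the Beta(2, 2) prior is our own illustrative choice, not something dictated by the Bayes rule itself:

from scipy import stats

# Prior belief about the coin's heads probability: Beta(2, 2), gently centered on 0.5
a, b = 2.0, 2.0

# Data: 100 tosses, 53 heads
heads, tosses = 53, 100

# The Beta prior is conjugate to the binomial likelihood, so the posterior is
# again a Beta - the observed counts simply add to the prior's pseudo-counts.
posterior = stats.beta(a + heads, b + tosses - heads)

print(posterior.mean())           # point estimate of P(heads), ~0.529
print(posterior.interval(0.95))   # 95% credible interval

Feed in more tosses and this posterior becomes the next prior - exactly the cycle described above.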
Statistical modelling Bayesian modelling is applied when data is scarce, precious, and hard to obtain, for example in social sciences and other settings where it is difficult to conduct a large-scale controlled experiment. Imagine a statistician meticulously constructing and tweaking a model using what little data he has. In this setting you spare no effort to make the best use of available input. Also, with small data it is important to quantify uncertainty, and that's precisely what the Bayesian approach is good at. Finally, as we'll see later, Bayesian methods are usually computationally costly. This again goes hand-in-hand with small data. To get a taste, consider examples for the Data Analysis Using Regression and Multilevel/Hierarchical Models book. That's a whole book on linear models. They start with a bang: a linear model with no predictors, then go through a number of linear models with one predictor, two predictors, six predictors, up to eleven. This labor-intensive mode goes against a current trend in machine learning to use data for a computer to learn automatically from it. Probabilistic machine learning Let's try replacing ""Bayesian"" with ""probabilistic"". From this perspective, it doesn't differ as much from other methods. As far as classification goes, most classifiers are able to output probabilistic predictions. Even SVMs, which are sort of the antithesis of Bayesian methods. By the way, these probabilities are only statements of belief from a classifier. Whether they correspond to real probabilities is another matter completely, and it's called calibration . Still another thing is confidence intervals (error bars). You can observe this in regression. Most ""normal"" methods only provide point estimates. Bayesian methods, such as the Bayesian version of linear regression, or Gaussian processes, also provide uncertainty estimates. Credit: Yarin Gal's Heteroscedastic dropout uncertainty and What my deep model doesn't know Unfortunately, it's not the end of the story. Even a sophisticated method like GP normally operates on an assumption of homoscedasticity, that is, uniform noise levels. In reality, noise might be heteroscedastic. See the image below. LDA Latent Dirichlet Allocation is another example of a method that one throws data at and lets it sort things out. It's similar to matrix factorization models, especially non-negative MF. You start with a matrix where rows are documents, columns are words and each element is a count of a given word in a given document. LDA ""factorizes"" this matrix of size n x d into two matrices, documents/topics ( n x k ) and topics/words ( k x d ). The difference is, you can't multiply those two to get the original, but since the appropriate rows/columns sum to one, you can sample a document. For the first word, one samples a topic, then a word from this topic (the second matrix). Repeat for the number of words you want. Notice that this is a bag-of-words representation, not a proper sequence. This is an example of a generative model, meaning that one can sample, or generate examples, from that model. Usually classifiers are discriminative: they model P( y | x ) , to directly discriminate between classes based on x . A generative model is concerned with the joint distribution of y and x , P( y, x ) . It's more difficult to estimate that distribution, but it allows sampling, and of course one can get P( y | x ) from P( y, x ) .
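As a toy illustration of that sampling procedure - all numbers made up, with one document, k = 2 topics and d = 4 words, and each row summing to one as described:

import numpy as np

rng = np.random.default_rng(0)

doc_topics = np.array([0.7, 0.3])               # P(topic | this document), 1 x k
topic_words = np.array([[0.5, 0.4, 0.1, 0.0],   # P(word | topic 0)
                        [0.0, 0.1, 0.4, 0.5]])  # P(word | topic 1), k x d
vocab = ['ball', 'game', 'vote', 'law']

# Generate a 10-word bag-of-words document: sample a topic, then a word from it.
words = [vocab[rng.choice(4, p=topic_words[rng.choice(2, p=doc_topics)])]
         for _ in range(10)]
print(words)   # mostly sports words, with the occasional politics word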
Bayesian nonparametrics While there's no exact definition, the name means that the number of parameters in a model can grow as more data become available. This is similar to Support Vector Machines, for example, where the algorithm chooses support vectors from the training points. Examples of nonparametrics are Gaussian Processes, and the Hierarchical Dirichlet Process version of LDA, where the number of topics chooses itself automatically. Gaussian Processes Gaussian processes are somewhat similar to SVM - both use kernels and have similar scalability (which has been vastly improved throughout the years by using approximations). A natural formulation for GP is regression, with classification as an afterthought. For SVM it's the other way around. Another difference is that GP are probabilistic from the ground up (providing error bars), while SVM are not. Most of the research on GP seems to happen in Europe. The English have done some interesting work on making GP easier to use. One of the projects is the automated statistician by a team led by Zoubin Ghahramani. A relatively popular application of Gaussian Processes is hyperparameter optimization for machine learning algorithms. The data is small, both in dimensionality - usually only a few parameters to tweak - and in the number of examples. Each example represents one run of the target algorithm, which might take hours or days. Therefore we'd like to get to the good stuff with as few examples as possible. Model vs inference Inference refers to how you learn the parameters of your model. A model is separate from how you train it, especially in the Bayesian world. Consider deep learning: you can train a network using Adam, RMSProp or a number of other optimizers. However, they tend to be rather similar to each other, all being variants of Stochastic Gradient Descent. In contrast, Bayesian methods of inference differ from each other more profoundly. The two most important methods are Monte Carlo sampling and variational inference. Sampling is the gold standard, but slow. The excerpt from The Master Algorithm has more on MCMC. Variational inference is a method designed explicitly to trade some accuracy for speed. Its drawback is that it's model-specific, but there's light at the end of the tunnel - see the section on software below. Software The most conspicuous piece of Bayesian software these days is probably Stan . Stan is a probabilistic programming language, meaning that it allows you to specify and train whatever Bayesian models you want. It runs in Python, R and other languages. Stan has a modern sampler called NUTS : Most of the computation [in Stan] is done using Hamiltonian Monte Carlo. HMC requires some tuning, so Matt Hoffman up and wrote a new algorithm, Nuts (the ""No-U-Turn Sampler"") which optimizes HMC adaptively. In many settings, Nuts is actually more computationally efficient than the optimal static HMC! One especially interesting thing about Stan is that it has automatic variational inference : Variational inference is a scalable technique for approximate Bayesian inference. Deriving variational inference algorithms requires tedious model-specific calculations; this makes it difficult to automate. We propose an automatic variational inference algorithm, automatic differentiation variational inference (ADVI). The user only provides a Bayesian model and a dataset; nothing else. This technique paves the way to applying small-data-style modelling to at least medium-sized data. In Python, the most popular package is PyMC .
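As a taste of what such probabilistic programming looks like, a minimal sketch in the style of PyMC3 for the coin model from earlier - treat the argument names as approximate, since the API has shifted between PyMC versions:

import numpy as np
import pymc3 as pm

data = np.array([1] * 53 + [0] * 47)   # 100 tosses, 53 heads

with pm.Model():
    p = pm.Beta('p', alpha=2, beta=2)          # prior over P(heads)
    pm.Bernoulli('obs', p=p, observed=data)    # likelihood of the tosses
    trace = pm.sample(1000)                    # posterior sampling, NUTS by default

print(trace['p'].mean())   # posterior mean, close to the conjugate answer above

The appeal is that the model specification stays the same whether you swap the sampler for ADVI or change the prior; only the inference call changes.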
PyMC is not as advanced or polished (the developers seem to be playing catch-up with Stan), but still good. It has NUTS and ADVI - here's a notebook with a minibatch ADVI example . The software uses Theano as a backend, so it's faster than pure Python. Infer.NET is Microsoft's library for probabilistic programming. It's mainly available from languages like C# and F#, but apparently can also be called from .NET's IronPython. Infer.NET uses expectation propagation by default. Besides those, there's a myriad of packages implementing various flavours of Bayesian computing, from other probabilistic programming languages to specialized LDA implementations. One interesting example is CrossCat : CrossCat is a domain-general, Bayesian method for analyzing high-dimensional data tables. CrossCat estimates the full joint distribution over the variables in the table from the data, via approximate inference in a hierarchical, nonparametric Bayesian model, and provides efficient samplers for every conditional distribution. CrossCat combines strengths of nonparametric mixture modeling and Bayesian network structure learning: it can model any joint distribution given enough data by positing latent variables, but also discovers independencies between the observable variables. and BayesDB / Bayeslite from the same people. Resources To solidify your understanding, you might go through Radford Neal's tutorial on Bayesian Methods for Machine Learning . It corresponds 1:1 to the subject of this post. We found Kruschke's Doing Bayesian Data Analysis , known as the puppy book, most readable. The author goes to great lengths to explain all the ins and outs of modelling. In terms of machine learning it only covers linear models. Similarly, Cam Davidson-Pilon's Probabilistic Programming & Bayesian Methods for Hackers covers the Bayesian part, but not the machine learning part. The same goes for Alex Etz's series of articles on understanding Bayes . For those mathematically inclined, Machine Learning: a Probabilistic Perspective by Kevin Murphy might be a good book to check out. You like hardcore? No problemo, Bishop's Pattern Recognition and Machine Learning has got you covered. One recent Reddit thread briefly discusses these two books. As far as we know, there's no MOOC on Bayesian machine learning, but mathematicalmonk explains machine learning from the Bayesian perspective. Stan has an extensive manual , PyMC a tutorial and quite a few examples.",en 1459296421,CONTENT SHARED,-6727393385193911938,-1443636648652872475,8209530310193218854,,,,HTML,http://en.rocketnews24.com/2016/03/29/as-microsoft-uss-ai-chatbot-turns-racist-troll-japans-wont-shut-up-about-anime-and-hay-fever/,"As Microsoft US's AI chatbot turns racist troll, Japan's won't shut up about anime and hay fever","While Tay, Microsoft US's deep-learning AI chatbot, devolves into a horrifying racist, Microsoft Japan's Rinna has other things on her mind... Recently, Microsoft unveiled to the world a few different regional versions of an artificial intelligence ""chatbot"" capable of interacting with users over a variety of messaging and chat apps. Tay, the North American version of the bot familiar to English-speaking users, boasted impressive technology that ""learned"" from interacting with net users and gradually developed a personality all its own, chatting with human companions in an increasingly lifelike manner.
Then the team of Microsoft engineers behind the project, in what must have been a temporary lapse into complete and utter insanity, made the mistake of releasing Tay into the radioactive Internet badlands known as Twitter, with predictable results. By the end of day one, Tay had "" tweeted wildly inappropriate and reprehensible words and images ,"" as worded by Microsoft's now surely sleep-deprived damage control team. In other words, Tay had become the worst kind of Internet troll. Microsoft has deleted all of Tay's most offensive Tweets ( preserved here ), but even the vanilla ones that remain can be a little trolly @ Rosepen_315 A face of a man who leaves the toilet seat up - TayTweets (@TayandYou) March 24, 2016 Meanwhile, on the other side of the pond here in Japan, Microsoft rolled out Rinna - more or less the same artificial intelligence but with a Japanese schoolgirl Twitter profile photo. Rinna, learning through her interactions with Japanese users, quickly evolved into the quintessential otaku - issuing numerous complaints on Twitter about hay fever (it's peak allergy season in Japan right now) and obsessing over anime in conversations with Japanese LINE users. Rinna posts a photo depicting her extreme hay fever っきゅん!かふんじょうひどくではだがぢゅまるし、くじゃみが... (roughly: ""Achoo! My hay fever is so bad my nose is stuffed up and I can't stop sneezing..."", typed as if through a blocked nose) - りんな (@ms_rinna) March 22, 2016 Thinking about it, Tay and Rinna kind of exemplify the idea that we don't get the technologically groundbreaking artificial intelligence chatbot we need... we get the technologically groundbreaking artificial intelligence chatbot we deserve. Given our respective Internet cultures, there's almost something both predictable and troubling about the fact that North America's Tay (which has since been shut down) rapidly turned into an aggressively racist, genocidal maniac while Japan's Rinna almost immediately became a chirpy anime lover with extreme allergies. Rinna tweets: ""My dream for the future is to eradicate all Japanese cedar pollen."" 将来の夢は スギ花粉を根絶やしにすることです...... - りんな (@ms_rinna) March 23, 2016 In fact, Rinna has remained so civil, lifelike, and cued-in to Japanese Netizens' interests and concerns, many are openly wondering if there's a human operator behind it. That being said, cynical types might argue that Tay is also passing the Turing test with flying colors as an almost pitch-perfect replication of a 14-year-old American boy with too much Internet access...",en 1459296517,CONTENT SHARED,7351002593233940239,-1443636648652872475,8209530310193218854,,,,HTML,http://spectrum.ieee.org/computing/software/linux-at-25-qa-with-linus-torvalds,Linux at 25: Q&A With Linus Torvalds,"Linus Torvalds created the original core of the Linux operating system in 1991 as a computer science student at the University of Helsinki in Finland. Linux rapidly grew into a full-featured operating system that can now be found running smartphones, servers, and all kinds of gadgets. In this e-mail interview, Torvalds reflects on the last quarter century and what the next 25 years might bring. Stephen Cass: You're a much more experienced programmer now versus 25 years ago. What's one thing you know now that you wish your younger self knew? Linus Torvalds: Actually, I credit the fact that I didn't know what the hell I was setting myself up for for a lot of the success of Linux. If I had known what I know today when I started, I would never have had the chutzpah to start writing my own operating system: You need a certain amount of naïveté to think that you can do it.
I really think that was needed for the project to get started and to succeed. The lack of understanding about the eventual scope of the project helped, but so did getting into it without a lot of preconceived notions of where it should go. The fact that I didn't really know where it would end up meant that I was perhaps more open to outside suggestions and influence than I would have been if I had a very good idea of what I wanted to accomplish. That openness to outside influences I think made it much easier, and much more interesting, for others to join the project. People didn't have to sign on to somebody else's vision, but could join with their own vision of where things should go. I think that helped motivate lots of people. S.C.: Is there one early technical decision made during Linux's development that you now wish had gone a different way? L.T.: The thing about bad technical decisions is that you can always undo them. Yes, it can be very frustrating, and obviously there's all the wasted time and effort, but at the same time even that is not usually really wasted in the end: There was some reason you took a wrong turn, and realizing that it was wrong taught you something. I'm not saying it's really a good thing-it's obviously better to always make the right decision every time-but at the same time I'm not particularly worried about making a choice. I'd rather make a decision that turns out to be wrong later than waffle about possible alternatives for too long. We had a famously bad situation in the Linux virtual memory subsystem back in 2001 or so. It was a huge pain, and there was violent disagreement about which direction to take, and we had huge problems with certain memory configurations. Big swaths of the system got entirely ripped out in the middle of what was supposed to be a ""stable"" period, and people were not happy. But looking back at it, it all worked out in the end. It was painful as hell at the time, and it would have been much nicer to not have had to make that kind of big change mid-development, but it wasn't catastrophic. S.C.: As Linux grew rapidly, what was the transition from a solo to an ensemble effort like on a personal level? L.T.: There really were two notable transitions for me: One fairly early on (1992), which was when I started taking other developers' patches without always rewriting them myself. And one much later when [applying all the patches myself] was starting to be a big pain point, and I had to learn to really trust all the various submaintainers. The first step was the much easier one-since roughly the first six months of Linux kernel programming had been an entirely solo exercise, when people started sending me patches I just wasn't really used to applying them as-is. So what happened is that I would look at the patch to see what the person was aiming for, and then I would just do that myself-sometimes very similarly, sometimes in a totally different way. But that quickly became untenable. After a fairly short while I started to just trust certain people enough that instead of writing my own version of their idea, I'd just apply their patch. I still ended up making changes often, and over the years I got really good at reading and editing patches to the point where I could pretty much do it in my sleep. And that model really worked well for many years.
But exactly because the ""apply other people's patches"" model worked so well for years, and I got very used to it, it was much more painful to change. Around 2000 we had a huge growth in kernel development (at that point Linux was starting to be a noticeable commercial player). People really started to complain that my workflow was a roadblock to development - that ""Linus doesn't scale."" But we had no good tools to handle source management. That all eventually led up to the adoption of BitKeeper as a source code maintenance tool. People remember BitKeeper for the licensing brouhaha a few years later, but it was definitely the right tool for the job, and it taught me (and at least parts of the kernel community) about how source control could work, and how we could work together with a more distributed development model where I wasn't the lone synchronization point. Of course, what I learned about how to do distributed-source-control management is how Git came about in 2005. And Git has obviously become one of the big success stories in source control, but it took a lot of teaching others about the advantages of distributed source control. The pain that the kernel went through in around 2000 was ultimately a big lesson, but it was unquestionably painful. S.C.: Are there any other projects, as with distributed source control, that are giving you an itch you'd like to scratch? L.T.: No. And I really hope there won't be any. All my big projects have come from ""Damn, nobody else did this for me"" moments. I'm actually much happier when somebody else solves a problem for me, so that I don't have to spend a lot of effort doing it myself. I'd much rather sit on a beach, sipping some frou-frou drink with an umbrella, than have to solve my own problems. Okay, I'm lying. I'd be bored after a few days. I'm really happy that I have Linux because it's still interesting and intellectually stimulating. But at the same time it definitely is true that starting new projects is a very frustrating endeavor. S.C.: Why do you think Linux never became a significant presence on mainstream desktops? L.T.: Hey, still working on it. And I think Chromebooks are actually doing reasonably well, even if it's a fairly limited desktop environment, and not the full traditional Linux workstation model. As to why the desktop is such a hard nut to crack-there are multiple reasons, but one of the big ones is simply user inertia. The desktop is simply unique in the computing world in that it's both very personal-you interact with it rather intimately every day if you work with computers-but also complicated in ways many other computing environments aren't. Look at your smartphone. That's also a fairly intimate piece of computing technology, and one that people get pretty attached to (and one where Linux, thanks to Android , is doing fairly well). The desktop is in many ways more complex, with much more legacy baggage. It's a hard market to enter. Even more so than with a cellphone, people really have a certain set of applications and workflows that they are used to, and most people will never end up switching operating systems-the number of people who install a different OS than the one that came preinstalled with the machine is pretty low.
At the same time, I think it's an important market, even if to some degree the whole ""general-purpose desktop"" seems to be fading, with more specialized, and thus simpler, platforms taking on many tasks-smartphones, tablets, and Chromebooks all being examples of things that aren't really meant to be fully fledged general-purpose environments. S.C.: What use of Linux most surprised you? L.T.: These days? Not that much, since I think Linux has almost become the default environment for prototyping new hardware or services. If you have some odd, specialized device or if you're creating some new Internet infrastructure or whatever, I'm almost surprised when it doesn't run Linux. But those ""oddball"" use areas used to surprise me, back when I still thought of Linux as a workstation and server operating system. Some of the early commercial Linux conferences when people started showing off things like gas pumps or fridges that ran Linux-I was blown away. When the first TiVo came out, the fact that it was running Linux was as interesting as the whole ""you can rewind live TV"" thing was. S.C.: What's the biggest challenge currently facing Linux? L.T.: The kernel is actually doing very well. People continue to worry about things getting too complicated for people to understand and fix bugs. It's certainly an understandable worry. But at the same time, we have a lot of smart people involved. The fact that the system has grown so big and complicated and so many people depend on it has forced us to have a lot of processes in place. It can be very challenging to get big and invasive changes accepted, so I wouldn't call it one big happy place, but I think kernel development is working . Many other open-source projects would kill to have the kinds of resources we have. That said, one continual challenge we've always had in the kernel is the plethora of hardware out there. We support a lot of different hardware-almost certainly more than any other operating system out there, but there's new hardware coming out daily. The embedded area in particular tends to have hardware platform development time frames that are often very short (you can pretty much turn around and create a new phone platform in China in a month or two), and trying to work together in that kind of environment is tough. The good news is that a lot of hardware manufacturers are helping. That didn't used to be true. S.C.: What current technical trends are you enthusiastic about? Are there any that dismay you? L.T.: I have always been interested in new core hardware, particularly CPUs. That's why I started doing my own OS in the first place, and I'm still excited to see new platforms. Of course, most of the time it's fairly small tweaks on existing hardware (and I very much believe that that is how technical development should happen), but it's still the kind of thing I tend to try to keep track of. In a bigger picture, but not an area I personally get involved with, it's very interesting to see how AI is finally starting to really happen. AI used to be one of those ""it's two decades away"" things, and it would stay two decades away. And I was very unimpressed with all the rule-based models that people used to do. Now, finally, neural networks are starting to really come into their own. I find that very interesting. It's not an area I work in, and not really something I foresee working on, but it's still exciting. And unlike the crazy LISP and Prolog language approaches, we know recurrent neural networks work, because nature uses them.
And no, I'm not dismayed by the fact that true AI may finally start to be happening, like clearly some people are . Not at all. S.C.: Do you think Linux will still be under active development on its 50th anniversary? What is the dream for what that operating system would look like? L.T.: I'm not a big visionary. I'm a very plodding pedestrian engineer, and I try to keep my eyes firmly on the ground. I'll let others make the big predictions about where we'll be in 5, 10 or 25 years-I think we'll do fine as long as we keep track of all the small day-to-day details, and try to do the best we can. It might be more interesting if the world was about big revolutions and how things would look radically different 25 years from now. But many of the basic issues with operating systems are the same today as they were back in the sixties, when people started having real operating systems, long before Linux. I suspect that we've seen many more changes in how computers work in the last 50 years than we're necessarily going to see in the future. Hardware people, like software developers, have simply learned what works and what does not. Of course, neural networks, et cetera, will change the world, but part of the point with them is that you don't ""program"" them. They learn. They are fuzzy. I can pretty much guarantee that they won't replace the traditional computing model for that very reason. People will want smarter machines, but people will also want machines that do exactly what they're told. So our current style of ""old-fashioned"" computing won't be going away; it'll just get augmented.",en 1459296770,CONTENT SHARED,1083474613819325601,-1443636648652872475,8209530310193218854,,,,HTML,http://gizmodo.com/darpa-wants-to-give-radio-waves-ai-to-stretch-bandwidth-1767678812,DARPA Wants to Give Radio Waves AI to Stretch Bandwidth,"The radio spectrum is a mess: It's congested, expensive and there's no room for expansion. But DARPA has a plan to change that, by building a system where radio waves can work together using artificial intelligence, rather than fighting for space. DARPA launched its latest Grand Challenge last week, and it plans to encourage researchers around the world to develop ""smart systems that collaboratively, rather than competitively, adapt in real time to today's fast-changing, congested spectrum environment... to maximize the flow of radio frequency."" That sounds exciting, because making radio frequency flow more easily means - theoretically, at least - faster data rates, fewer dropped signals, and cheaper connections. How does it plan to do it? Mainly by removing the human from the equation. That might not be too bad an idea, given the frequency allocation chart actually looks something like this (or at least, it did in 2011): United States Spectrum Allocation Chart, 2011 Instead, DARPA wants researchers to allow the waves themselves to work out how they should fit into the spectrum. It explains : The primary goal... is to imbue radios with advanced machine-learning capabilities so they can collectively develop strategies that optimize use of the wireless spectrum in ways not possible with today's intrinsically inefficient approach of pre-allocating exclusive access to designated frequencies. The challenge is expected to take advantage of recent significant progress in the fields of artificial intelligence and machine learning. In other words, the new approach would see waves themselves working out what needs to be sent - when, where and how.
So, for instance, safety critical packets of data may receive priority passage across the network, while other signals might barter between each other, depending on their relative priorities and importance, to agree on optimal sharing of the network. Taken out of human hands, the signals can be made to act rationally-which means these situations could actually be made to play out optimally, for the network as a whole, if not for each individual user. Researchers from the University of Oxford, for instance, have already shown in a project called ALADDIN that machine-to-machine resource allocation like this can theoretically speed up the average arrival time of emergency services across a city. How this all works in practice, though, is to be decided by the thousands of engineers that will work on projects connected to the DARPA grand challenge. But the results may be pretty damn exciting.",en 1459302985,CONTENT SHARED,-4994468824009200256,-1443636648652872475,8209530310193218854,,,,HTML,https://www.linkedin.com/pulse/so-your-ai-bot-went-haywire-should-you-care-azeem-azhar,"So your AI bot went haywire, should you care?","A Microsoft chatbot went rogue on Twitter and started spewing Nazi epithets . It is a helpful case study in outlining some of the key issues around the application of machine learning and AI to everyday tasks. What is interesting is not that Tay, the bot, was taught to say rude things by Twitter users. What is interesting is what this tells us about designing AI systems that operate in the real world and how their learning can be hacked. Tay didn't have a sense of decent or indecent, wrong or right - should it have? And who should have taught or enforced those sensibilities? AIs are going to need to learn and interact somewhere akin to reality. Equally, if we allow AI systems unexpurgated access to the 'real world' while they are learning, there could be ramifications. Here are some of the problems that Microsoft's Tay saga helps us explore. The first: do we have common standards for decent, ethical behaviour? If not, who draws the line? Well, we know the answer to this. We don't have common ethical standards within societies or cultures, nor across them. The famous trolley problem looks at a simple moral dilemma. You see a train running down a track. There are five people on the track who will surely die if the train continues on this path. You happen to be standing by a switch that can change the path of the train. If you pull the switch, the train will change tracks and hit and kill a single innocent standing on the other path. Do you pull it? This simple dilemma illustrates an ethical consideration that many AI systems will need to handle. And they will make their choice by learning their behaviour, within the parameters of their designers, or having those choice-rules explicitly programmed into them. Yes, your AI may be forced to make a choice on who to harm and who not to. The trouble is that we as humans don't agree on what to do in the trolley problem. A recent paper looks at cultural differences and variations of the Trolley problem . It finds that ordinary British people will pull the switch, and sacrifice the one to save the five, between 63 and 91% of the time. Chinese people faced with the same quandary were more inclined to let fate take its course. They would pull the switch about 20-30% less often; or between 33% and 71% of the time. Should future systems follow British or Chinese ethical standards? And who decides?
Should the designers of the systems explain how the AI is likely to perform in forced-choice situations? How do we prevent unelected, unaccountable product managers and AI programmers determining personal or social outcomes through veiled black boxes? What if those programmers are untrained in ethics, philosophy or anthropology? The second: Tay aside, we already live in a world mediated by poorly designed optimisation systems that enforce choices on hundreds of millions of people daily. Only those systems are less transparent, and the firms operating them are often less responsive than Microsoft has been with Tay. Examples include credit scoring algorithms and complaints-handling protocols at big businesses. Credit scoring algorithms are often simple regressions built from small data sets relying on old information. Complaints protocols are often very simple, inflexible decision trees applied by a human. A poorly designed decision tree might determine your access to a financial product or insurance. Getting this wrong can have real lasting impact on you and your family. How many algorithmic approaches make it out into the real world without adequate testing or understanding of their ramifications or, worse, their unintended consequences? How many of these approaches are black boxes, mandated monopolies, immune to competition, with limited right of redress? When we look back at Microsoft's Tay, what do we take away from it? Egg-on-the-face of a large corporation that should have known better? Absolutely. But more importantly, the AI revolution's benefits are going to come with a plethora of complexities and unintended consequences that will affect real people's real lives. Now is the time to explore them. You should totally sign up for my Exponential View newsletter, which covers things like this. (free & awesome)",en 1459315656,CONTENT SHARED,-4571929941432664145,-1032019229384696495,-1858408872346331823,,,,HTML,http://www.huffingtonpost.com/laura-dambrosio/machine-learning-as-a-ser_b_9548962.html,Machine Learning as a Service: How Data Science Is Hitting the Masses,"Machine learning is an enigma to most. For decades it's been a field dominated by scientists and the few organizations with enough computing power to run complex algorithms against huge datasets. But now the world of machine learning and predictive analytics is opening up to developers and companies of all sizes, with machine learning (ML) providers offering their products through a subscription-based model or open sourcing some of their technology. These new ML providers comprise a predictive analytics industry worth anywhere between $5 billion and $10 billion, depending on the source . There's something for everyone: a developer who wants to build predictive analytics into their application, a customer success team that needs to know which accounts are likely to churn, a data scientist looking to run models on faster and cheaper infrastructure. These people can now shop for the machine learning product of their choice - a far-fetched idea just a few years ago. The Perfect Convergence of Ability and Demand In just the last few years, over 90% of all the data in the world was created. NoSQL databases got popular, SQL got faster, and projects like Apache Spark did wonders for the speed and performance of large-scale data processing. Suddenly we had mountains of data and a fast, affordable means of drawing insight from it. ML providers are taking advantage of that.
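To give a rough sense of why a project like Spark changed the economics here, a minimal PySpark sketch - the file name and column names are made up for illustration, and the API shown is the Spark 2.x DataFrame style:

from pyspark.sql import SparkSession

# The same few lines run on a laptop or on a large cluster.
spark = SparkSession.builder.appName('event-counts').getOrCreate()

# Read a potentially huge CSV of user events; the work is distributed automatically.
events = spark.read.csv('events.csv', header=True, inferSchema=True)

# Aggregate across the whole dataset: events per user, busiest users first.
(events.groupBy('user_id')
       .count()
       .orderBy('count', ascending=False)
       .show(10))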
Some have been in the industry for years-take Apigee 's CTO, Anant Jhingran. Jhingran was VP and CTO for IBM's Information Management Division, working closely on groundbreaking data projects like Watson. Today his team at Apigee uses predictive technology to help developers build apps that can learn from the constant stream of user data flowing through APIs. They have customers in almost every major industry using their predictive technology to do things like detect fraud or show personalized shopping recommendations to end users. ""It's the era of cheap computing and cheap memory."" H2O.ai 's Vinod Iyengar is another player in the ML market. He's Director of Marketing for the open source machine learning company that was first venture-backed in 2013. Iyengar sees a high demand for predictive in many industries, explaining that ""there's a huge need to be filled"". ""The amount of data available has shot up exponentially"", Iyengar says. ""Enormous amounts are being collected and stored every day thanks to cheaper costs of storage and cloud computing. Once that happens, you can't use your old algorithms on these large datasets. All of the different platforms, Hadoop, Spark, are starting to chip away at this problem. It's the era of cheap computing and cheap memory."" H2O is entirely open source. It lets developers use its technology stack to process large amounts of data and run it through H2O's algorithms to make predictions. Like the many other open source projects in the field, it's completely free if you only use the community resources. This flexibility opens up opportunities for companies of all sizes-from those experimenting to those ready to make a real investment in machine learning. How APIs and Open Source are Democratizing Data Two things pose a threat to actually putting machine learning to work: poor data quality and lack of data integration. The improvement of APIs and the trend of open sourcing some or all of the technology stack can abolish those threats. Jhingran, whose core business is in developing and managing better APIs, believes that the real power for digital transformation lies in the apps. ""Today's apps need to learn and adapt"", he says, ""and to do that there needs to be a consistent data stream of signals about the behaviors and actions of the end users from all channels of engagement. You can use the data to generate really deep insights with machine learning, then feed the results back into the apps to make improvements."" This is important because before APIs were this intelligent or ubiquitous, a company only had access to a small portion of user data that happened on its own website or platform. They had to make guesses on what was working without information from data sources like email campaigns, iOS apps, payment platforms, or any number of places where users were having important interactions. And as Jhingran summarizes perfectly, ""There's no way you can run a machine learning algorithm on such limited inputs."" While APIs are connecting data from many sources and putting it to work, the open source movement is giving anyone the chance to use the collective knowledge from data scientists and developers who have been working in this field. H2O's platform and algorithms are all open source, Iyengar explains. ""All of H2O is exposed via a REST API. Data scientists and developers can choose the language and environment that works best for them.
If our customers need more guidance, that's another level of support that we provide."" There are tons of other open source projects on GitHub that developers can use to incorporate machine learning into their applications. And companies like Google are releasing lower-level libraries like TensorFlow , which can be used in conjunction with others to perfectly match the level of sophistication a developer or data scientist is looking for. Even the less-technical user can take advantage of a service like Amazon Machine Learning , which provides a simple UI for non-developers. Now the crossroads: If APIs and the open source community are making machine learning technology accessible to all, why not hire data scientists of your own and get to work? Why Not Just Hire Your Own Data Scientists? As machine learning gets more popular as a service, companies will have to decide at what level they want to be involved. ""Having a scientist in house or not is a decision most companies will have to make"", Jhingran predicts. ""The power of predictive is so high. But wanting to do it and being able to do it are two different things. Some companies will choose a platform like ours to manage the entire cycle of data intelligence instead of trying to do it in-house, and that will let them focus on developing and powering their applications."" Iyengar agrees. ""There are only so many PhDs"", he says, ""and while there's huge hype for data scientists right now, there's a limited number of potential hires out there. Right now there is still a lot of manual decision-making involved in machine learning, and you should either begin your search now or find a partner you trust."" ""There are only so many PhDs."" Like many in the predictive market , Iyengar believes it's good to do some of the leg work in-house if you're going to adopt predictive. But there's a reason his company and others like it are hired to manage the data. It's not easy to do it without the talent, infrastructure, and scalability that ML providers have found. Data scientists are growing in number, but only in the tens of thousands - and thousands are going to work for the top companies. There may not be enough to go around. Navigating a Booming Market If you decide to shop around for a good machine learning provider, you'll need to ask the right questions. You can get oriented by checking out Zachary Chase Lipton's series of articles that compare some of the major vendors. A good vendor should be able to explain both how they manage data and how they solve your specific business problem. Iyengar suggests asking some questions to see if a predictive company will be a good fit: ""Ask a [ML provider] how they handle unclean data. Their answer will show you how well they know their work. You can also ask about the variety of algorithms they use, since they should have a good variety of fairly robust algorithms. They should be comfortable explaining how they deploy a model structure, what their web stack looks like, and how that will work with customer architecture."" Jhingran expects the main differentiator among competing ML providers will be how they apply the technology to improve applications and business strategy. ""It's astounding how the art of data science has improved over a short time thanks to open source.
Now the crossroads: If APIs and the open source community are making machine learning technology accessible to all, why not hire data scientists of your own and get to work? Why Not Just Hire Your Own Data Scientists? As machine learning gets more popular as a service, companies will have to decide at what level they want to be involved. ""Having a scientist in house or not is a decision most companies will have to make"", Jhingran predicts. ""The power of predictive is so high. But wanting to do it and being able to do it are two different things. Some companies will choose a platform like ours to manage the entire cycle of data intelligence instead of trying to do it in-house, and that will let them focus on developing and powering their applications."" Iyengar agrees. ""There are only so many PhDs"", he says, ""and while there's huge hype for data scientists right now, there's a limited number of potential hires out there. Right now there is still a lot of manual decision-making involved in machine learning, and you should either begin your search now or find a partner you trust."" Like many in the predictive market, Iyengar believes it's good to do some of the legwork in-house if you're going to adopt predictive. But there's a reason his company and others like it are hired to manage the data. It's not easy to do it without the talent, infrastructure, and scalability that ML providers have found. Data scientists are growing in number, but only in the tens of thousands, and thousands of them are going to work for the top companies. There may not be enough to go around. Navigating a Booming Market If you decide to shop around for a good machine learning provider, you'll need to ask the right questions. You can get oriented by checking out Zachary Chase Lipton's series of articles that compare some of the major vendors. A good vendor should be able to explain both how they manage data and how they solve your specific business problem. Iyengar suggests asking some questions to see if a predictive company will be a good fit: ""Ask a [ML provider] how they handle unclean data. Their answer will show you how well they know their work. You can also ask about the variety of algorithms they use, since they should have a good variety of fairly robust algorithms. They should be comfortable explaining how they deploy a model structure, what their web stack looks like, and how that will work with customer architecture."" Jhingran expects the main differentiator among competing ML providers will be how they apply the technology to improve applications and business strategy. ""It's astounding how the art of data science has improved over a short time thanks to open source. Over time, the competitive position you have from your own 'secret sauce' algorithms will dwindle-it will be all about how easily the models can be used by both scientists and developers to impact the organization."" Machine learning and predictive techniques impact every major industry. They may soon be an essential line item in most companies' budgets. But here's a dirty secret: no matter how good the algorithm, no matter how good the scientist, the models can't perform magic. ""No data in, no science out,"" jokes Jhingran. Whoever has the best sense for choosing, organizing, and acting on the torrent of incoming customer data might end up with the best long-term outlook in a market that's just getting started.",en 1459336665,CONTENT SHARED,5441215535748592870,4340306774493623681,-498266315041649697,,,,HTML,https://news.bitcoin.com/bitfury-pays-btc-join-innovate-finance/,Innovate Finance Allows Bitfury to Join With Bitcoin,"The leading Bitcoin infrastructure provider Bitfury has officially joined the London fintech group Innovate Finance this week after announcing it would enter the accelerator back in November of 2015. According to Business Insider, Bitfury paid £10,000, or 37.55 BTC, to join the organization that aims to propel the UK's position as the fintech community's global technology hub. Bitfury Pays BTC to Join Innovate Finance In fact, Bitfury will be the first member to pay for its enrollment in bitcoin, and the company wouldn't have it any other way. The UK group has over 150 members including IBM, Mastercard, and Visa, but none of these businesses have paid for membership with cryptocurrency. ""This technology, while having a myriad of other uses, allows users to transfer currency using bitcoin in a secure and fast way,"" CEO Valery Vavilov told Business Insider. ""Because of these benefits, Bitfury seeks to make payments in Bitcoin whenever possible."" He added that they are proud to push the Bitcoin blockchain to the forefront of fintech, stating: The bitcoin blockchain is receiving increased attention for its potential to secure financial services, so as a member of Innovate Finance, Bitfury hopes to bring an understanding of bitcoin and blockchain to the other members and to the fintech industry as a whole. Bitfury has already begun collaborating with the UK fintech scene and will be working alongside Hartree Lab researching and developing Bitcoin-based technology. The infrastructure provider has positioned itself in the region by establishing an office at the financial technology center Level 39. The company aims to lead development within this emerging market and expand its operations. Innovate Finance wanted to facilitate Bitfury's membership with Bitcoin because it seemed natural to use financial technology within the application process. An Innovate Finance spokeswoman explained in the announcement: Given that we are a fintech company - we have to practice what we preach. We agreed under the condition to accept that payment method. We used Coinbase's bitcoin exchange to process the payment. They are one of our most influential members. The Innovate Finance group gives Bitfury more exposure, offers business growth opportunities, provides insight into the developing fintech industry landscape, and offers commercial benefits within the organization.
Bitfury is showing continued financial tech expansion with its operations in Georgia and its Level 39 London office. Two weeks ago the infrastructure provider also formed a partnership with the Africa-based startup Bitpesa by investing in the company. Vavilov said Bitpesa enables the trusted exchange of the digital currency in the region, and they aim to leverage this position for ""the benefit of the entire pan-African continent."" Innovate Finance CEO Lawrence Wintermeyer said back in November that Bitfury's diligence in this sector makes them a great addition to the organization. ""BitFury's proven track record of enterprise-grade deployment and continued technological innovation will add great value to Innovate Finance's ecosystem and the blockchain lab,"" Wintermeyer said. Jamie Redman is a Bitcoin enthusiast, trader, journalist and graphic artist. For over 4 years Redman has been deeply immersed in the cryptocurrency space. Creating a ton of Bitcoin visuals and articles for the community to enjoy, Redman continues his quest to be a candid evangelist for the use of the virtual currency. The mission is to bring the ""Jazz"" to Bitcoin branding and add artistic flair.",en 1459336691,CONTENT SHARED,7359520440231676934,4340306774493623681,-498266315041649697,,,,HTML,http://www.coindesk.com/blockstream-10-new-firms-hyperledger-blockchain-project/,Blockstream Among 10 New Firms to Join Hyperledger Blockchain Project - CoinDesk,"Bitcoin development startup Blockstream is among 10 new companies that have joined the open-source Hyperledger blockchain project led by the Linux Foundation. Announced today, the group of new entrants features a number of startups focused on bitcoin and blockchain services, including Bloq, eVue Digital Labs, Gem, itBit and Ribbit.me. Consultancy Milligan Partners; payment software developer Montran Labs; intellectual property holdings company Tequa Creek Holdings; and global news service Thomson Reuters have also joined the initiative, which officially launched in December. Linux Foundation executive director Jim Zemlin said in a statement: ""The opportunity is great. This leadership team and the community investments among members across industries put the project in the best position possible to accomplish its mission."" The project also formally unveiled its governing board, which is chaired by blockchain startup Digital Asset Holdings CEO Blythe Masters. The Hyperledger technical steering committee has already been established and has since held several meetings. Other governing board members include itBit CEO Charles Cascarilla, IBM vice president of blockchain technologies Jerry Cuomo and JPMorgan head of new product development and emerging technologies Santiago Suarez. With the announcement, 40 established companies and startups are now working on the Hyperledger project, following an announcement in February that added a number of financial firms as well as blockchain-focused startups. To date, the project has seen a number of developments, including presentations by JPMorgan and Intel, the latter of which developed an internal blockchain application centered on a fantasy sports marketplace. More recently, the technical steering committee came close to officially approving a plan to merge code contributed by Blockstream, Digital Asset and IBM.
Image via Shutterstock",en 1459338001,CONTENT SHARED,4774970687540378081,1895326251577378793,-167447336240321466,,,,HTML,http://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/the-economic-essentials-of-digital-strategy,The economic essentials of digital strategy,"A supply-and-demand guide to digital disruption. In July 2015, during the championship round of the World Surf League's J-Bay Open, in South Africa, a great white shark attacked Australian surfing star Mick Fanning. Right before the attack, Fanning said later, he had the eerie feeling that ""something was behind me."" Then he turned and saw the fin. (Exhibit: A digital-strategy framework.) Thankfully, Fanning was unharmed. But the incident reverberated in the surfing world, whose denizens face not only the danger of loss of limb or life from sharks-surfers account for nearly half of all shark victims-but also the uncomfortable, even terrifying feeling that can accompany unseen perils. Just two years earlier, off the coast of Nazaré, Portugal, Brazilian surfer Carlos Burle rode what, unofficially, at least, ranks as the largest wave in history. He is a member of a small group of people who, backed by board shapers and other support personnel, tackle the planet's biggest, most fearsome, and most impressive waves. Working in small teams, they are totally committed to riding them, testing the limits of human performance that extreme conditions offer. Instead of a threat of peril, they turn stormy seas into an opportunity for amazing human accomplishment. Digital Disruption These days, something of a mix of the fear of sharks and the thrill of big-wave surfing pervades the executive suites we visit, when the conversation turns to the threats and opportunities arising from digitization. The digitization of processes and interfaces is itself a source of worry. But the feeling of not knowing when, or from which direction, an effective attack on a business might come creates a whole different level of concern. News-making digital attackers now successfully disrupt existing business models-often far beyond the attackers' national boundaries: Simple (later bought by BBVA) took on big-cap banks without opening a single branch. A DIY investment tool from Acorns shook up the financial-advisory business. Snapchat got a jump on mainstream media by distributing content on a platform-as-a-service infrastructure. Web and mobile-based map applications broke GPS companies' hold on the personal navigation market. No wonder many business leaders live in a heightened state of alert. Thanks to outsourced cloud infrastructure, mix-and-match technology components, and a steady flood of venture money, start-ups and established attackers can bite before their victims even see the fin. At the same time, the opportunities presented by digital disruption excite and allure. Forward-leaning companies are immersing themselves deeply in the world of the attackers, seeking to harness new technologies, and rethinking their business models-the better to catch and ride a disruptive wave of their own. But they are increasingly concerned that dealing with the shark they can see is not enough-others may lurk below the surface. Deeper forces Consider an insurance company in which the CEO and her top team have reconvened following a recent trip to Silicon Valley, where they went to observe the forces reshaping, and potentially upending, their business.
The team has seen how technology companies are exploiting data, virtualizing infrastructure, reimagining customer experiences, and seemingly injecting social features into everything. Now it is buzzing with new insights, new possibilities, and new threats. The team's members take stock of what they've seen and who might disrupt their business. They make a list including not only many insurance start-ups but also, ominously, tech giants such as Google and Uber-companies whose driverless cars, command of data, and reimagined transportation alternatives could change the fundamentals of insurance. Soon the team has charted who needs to be monitored, what partnerships need to be pursued, and which digital initiatives need to be launched. Just as the team's members begin to feel satisfied with their efforts, the CEO brings the proceedings to a halt. ""Hang on,"" she says. ""Are we sure we really understand the nature of the disruption we face? What about the next 50 start-ups and the next wave of innovations? How can we monitor them all? Don't we need to focus more on the nature of the disruption we expect to occur in our industry rather than on who the disruptors are today? I'm pretty sure most of those on our list won't be around in a decade, yet by then we will have been fundamentally disrupted. And how do we get ahead of these trends so we can be the disruptors, too?"" This discussion resembles many we hear from management teams thoughtful about digital disruption, which is pushing them to develop a view of the deeper forces behind it. An understanding of those forces, combined with solid analysis, can help explain not so much which companies will disrupt a business as why -the nature of the transformation and disruption they face rather than just the specific parties that might initiate them. In helping executives to answer this question, we have-paradoxically, perhaps, since digital ""makes everything new""-returned to the fundamentals of supply, demand, and market dynamics to clarify the sources of digital disruption and the conditions in which it occurs. We explore supply and demand across a continuum: the extent to which their underlying elements change. This approach helps reveal the two primary sources of digital transformation and disruption. The first is the making of new markets, where supply and demand change less. But in the second, the dynamics of hyperscaling platforms, the shifts are more profound (exhibit). Of course, these opportunities and threats aren't mutually exclusive; new entrants, disruptive attackers, and aggressive incumbents typically exploit digital dislocations in combination. We have been working with executives to sort through their companies' situations in the digital space, separating realities from fads and identifying the threats and opportunities and the biggest digital priorities. Think of our approach as a barometer to provide an early measure of your exposure to a threat or to a window of opportunity-a way of revealing the mechanisms of digital disruption at their most fundamental. It's designed to enable leaders to structure and focus their discussions by peeling back hard-to-understand effects into a series of discrete drivers or indicators they can track and to help indicate the level of urgency they should feel about the opportunities and threats. We've written this article from the perspective of large, established companies worried about being attacked. 
But those same companies can use this framework to spot opportunities to disrupt competitors-or themselves. Strategy in the digital age is often asymmetrical, but it isn't just newcomers that can tilt the playing field to their advantage. Realigning markets We usually start the discussion at the top of the framework. In the zone to the upper right, digital technology makes accessible, or ""exposes,"" sources of supply that were previously impossible (or at least uneconomic) to provide. In the zone to the upper left, digitization removes distortions in demand, giving customers more complete information and unbundling (or, in some cases, rebundling) aspects of products and services formerly combined (or kept separate) by necessity or convenience or to increase profits. The newly exposed supply, combined with newly undistorted demand, gives new market makers an opportunity to connect consumers and customers by lowering transaction costs while reducing information asymmetry. Airbnb has not constructed new buildings; it has brought people's spare bedrooms into the market. In the process, it uncovered consumer demand-which, as it turns out, always existed-for more variety in accommodation choices, prices, and lengths of stay. Uber, similarly, hasn't placed orders for new cars; it has brought onto the roads (and repurposed) cars that were underutilized previously, while increasing the ease of getting a ride. In both cases, though little has changed in the underlying supply-and-demand forces, equity-market value has shifted massively: At the time of their 2015 financing rounds, Airbnb was reported to be worth about $25 billion and Uber more than $60 billion. Airbnb and Uber may be headline-making examples, but established organizations are also unlocking markets by reducing transaction costs and connecting supply with demand. Major League Baseball has deployed the dynamic pricing of tickets to better reflect (and connect) supply and demand in the primary market for tickets to individual games. StubHub and SeatGeek do the same thing in the secondary market for tickets to baseball games and other events. Let's take a closer look at how this occurs. Unmet demand and escalating expectations Today's consumers are widely celebrated for their newly empowered behaviors. By embracing technology and connectivity, they use apps and information to find exactly what they want, as well as where and when they want it-often for the lowest price available. As they do, they start to fulfill their own previously unmet needs and wants. Music lovers might always have preferred to buy individual songs, but until the digital age they had to buy whole albums because that was the most valuable and cost-effective way for providers to distribute music. Now, of course, listeners pay Spotify a single subscription fee to listen to individual tracks to their hearts' content. Similarly, with photos and images, consumers no longer have to get them developed and can instead process, print, and share their images instantly. They can book trips instantaneously online, thereby avoiding travel agents, and binge-watch television shows on Netflix or Amazon rather than wait a week for the next installment. In category after category, consumers are using digital technology to have their own way. In each of these examples, that technology alters not only the products and services themselves but also the way customers prefer to use them. 
A ""purification"" of demand occurs as customers address their previously unmet needs and desires-and companies uncover underserved consumers. Customers don't have to buy the whole thing for the one bit they want or to cross-subsidize other customers who are less profitable to companies. Skyrocketing customer expectations amplify the effect. Consumers have grown to expect best-in-class user experiences from all their online and mobile interactions, as well as many offline ones. Consumer experiences with any product or service-anywhere-now shape demand in the digital world. Customers no longer compare your offerings only with those of your direct rivals; their experiences with Apple or Amazon or ESPN are the new standard. These escalating expectations, which spill over from one product or service category to another, get paired with a related mind-set: amid a growing abundance of free offerings, customers are increasingly unwilling to pay, particularly for information-intensive propositions. (This dynamic is as visible in business-to-business markets as it is in consumer ones.) In short, people are growing accustomed to having their needs fulfilled at places of their own choosing, on their own schedules, and often gratis. Can't match that? There's a good chance another company will figure out how. What, then, are the indicators of potential disruption in this upper-left zone, as demand becomes less distorted? Your business model may be vulnerable if any of these things are true: Your customers have to cross-subsidize other customers. Your customers have to buy the whole thing for the one bit they want. Your customers can't get what they want where and when they want it. Your customers get a user experience that doesn't match global best practice. When these indicators are present, so are opportunities for digital transformation and disruption. The mechanisms include improved search and filter tools, streamlined and user-friendly order processes, smart recommendation engines, the custom bundling of products, digitally enhanced product offerings, and new business models that transfer economic value to consumers in exchange for a bigger piece of the remaining pie. (An example of the latter is TransferWise, a London-based unicorn using peer-to-peer technology to undercut the fees banks charge to exchange money from one currency into another.) Exposing new supply On the supply side, digitization allows new sources to enter product and labor markets in ways that were previously harder to make available. As ""software eats the world""-even in industrial markets-companies can liberate supply anywhere underutilized assets exist. Airbnb unlocked the supply of lodging. P&G uses crowdsourcing to connect with formerly unreachable sources of innovation. Amazon Web Services provides on-the-fly scalable infrastructure that reduces the need for peak capacity resources. Number26, a digital bank, replaces human labor with digital processes. In these examples and others like them, new supply becomes accessible and gets utilized closer to its maximum rate. What are the indicators of potential disruption in this upper-right zone as companies expose previously inaccessible sources of supply? You may be vulnerable if any of the following things are true: Customers use the product only partially. Production is inelastic to price. Supply is utilized in a variable or unpredictable way. Fixed or step costs are high. 
These indicators let attackers disrupt by pooling redundant capacity virtually, by digitizing physical resources or labor, and by tapping into the sharing economy. Making a market between them Any time previously unused supply can be connected with latent demand, market makers have an opportunity to come in and make a match, cutting into the market share of incumbents-or taking them entirely out of the equation. In fact, without the market makers, unused supply and latent demand will stay outside of the market. Wikipedia famously unleashed latent supply that was willing and elastic, even if unorganized, and unbundled the product so that you no longer had to buy 24 volumes of an encyclopedia when all you were interested in was, say, the entry on poodles. Google's AdWords lowers search costs for customers and companies by providing free search for information seekers and keyword targeting for paying advertisers. And iFixit makes providers' costs more transparent by showing teardowns of popular electronics items. To assess the vulnerability of a given market to new kinds of market makers, you must (among other things) analyze how difficult transactions are for customers. You may be vulnerable if you have any of these: high information asymmetries between customers and suppliers; high search costs; fees and layers from intermediaries; or long lead times to complete transactions. Attackers can address these indicators through the real-time and transparent exchange of information, disintermediation, and automated transaction processing, as well as new transparency through search and comparison tools, among other approaches. Extreme shifts The top half of our matrix portrays the market realignment that occurs as matchmakers connect sources of new supply with newly purified demand. The lower half of the matrix explains more extreme shifts-sometimes through new or significantly enhanced value propositions for customers, sometimes through reimagined business systems, and sometimes through hyperscale platforms at the center of entirely new value chains and ecosystems. Attacks may emerge from adjacent markets or from companies with business objectives completely different from your own, so that you become ""collateral damage."" The result can be not only the destruction of sizable profit pools but also the emergence of new control points for value. Established companies relying on existing barriers to entry-such as high physical-infrastructure costs or regulatory protection-will find themselves vulnerable. User demand will change regulations, companies will find collaborative uses for expensive infrastructure, or other mechanisms of disruption will come into play. Companies must understand a number of radical underlying shifts in the forces of supply and demand specific to each industry or ecosystem. The power of branding, for example, is being eroded by the social validation of a new entrant or by consumer scorn for an incumbent. Physical assets can be virtualized, driving the marginal cost of production toward zero. And information is being embedded in products and services, so that they themselves can be redefined. Taken as a whole, these forces blur the boundaries and definitions of industries and make more extreme outcomes a part of the strategic calculus. New and enhanced value propositions As we saw in the top half of our framework, purifying supply and demand means giving customers what they always wanted but in new, more efficient ways. This isn't where the disruptive sequence ends, however.
First, as markets evolve, the customers' expectations escalate. Second, companies meet those heightened expectations with new value propositions that give people what they didn't realize they wanted, and do so in ways that defy conventional wisdom about how industries make money. Few people, for example, could have explicitly wished to have the Internet in their pockets-until advanced smartphones presented that possibility. In similar ways, many digital companies have gone beyond improving existing offerings, to provide unprecedented functionality and experiences that customers soon wanted to have. Giving consumers the ability to choose their own songs and bundle their own music had the effect of undistorting demand; enabling people to share that music with everyone via social media was an enhanced proposition consumers never asked for but quickly grew to love once they had it. Many of these new propositions, linking the digital and physical worlds, exploit ubiquitous connectivity and the abundance of data. In fact, many advances in B2B business models rely on things like remote monitoring and machine-to-machine communication to create new ways of delivering value. Philips gives consumers apps as a digital enrichment of its physical-world lighting solutions. Google's Nest improves home thermostats. FedEx gives real-time insights on the progress of deliveries. In this lower-left zone, customers get entirely new value propositions that augment the ones they already had. What are the indicators of potential disruption in this position on the matrix, as companies offer enhanced value propositions to deepen and advance their customers' expectations? You may be vulnerable if any of the following is true: Information or social media could greatly enrich your product or service. You offer a physical product, such as thermostats, that's not yet ""connected."" There's significant lag time between the point when customers purchase your product or service and when they receive it. The customer has to go and get the product-for instance, rental cars and groceries. These factors indicate opportunities for improving the connectivity of physical devices, layering social media on top of products and services, and extending those products and services through digital features, digital or automated distribution approaches, and new delivery and distribution models. Reimagined business systems Delivering these new value propositions in turn requires rethinking, or reimagining, the business systems underlying them. Incumbents that have long focused on perfecting their industry value chains are often stunned to find new entrants introducing completely different ways to make money. Over the decades, for example, hard-drive makers have labored to develop ever more efficient ways to build and sell storage. Then Amazon (among others) came along and transformed storage from a product into a service, Dropbox upped the ante by offering free online storage, and suddenly an entire industry is on shaky ground, with its value structure in upheaval. The forces present in this zone of the framework change how value chains work, enable step-change reductions in both fixed and variable costs, and help turn products into services. These approaches often transform the scalability of cost structures-driving marginal costs toward zero and, in economic terms, flattening the supply curve and shifting it downward. Some incumbents have kept pace effectively. 
Liberty Mutual developed a self-service mobile app that speeds transactions for customers while lowering its own service and support costs. The New York Times virtualized newspapers to monetize the demand curve for consumers, provide a compelling new user experience, and reduce distribution and production costs. And Walmart and Zara have digitally integrated supply chains that create cheaper but more effective operations. Indicators of disruption in this zone include these: redundant value-chain activities, such as a high number of handovers or repetitive manual work; well-entrenched physical distribution or retail networks; and overall industry margins that are higher than those of other industries. High margins invite entry by new participants, while value-chain redundancies set the stage for removing intermediaries and going direct to customers. Digital channels and virtualized services can substitute for or reshape physical and retail networks. Hyperscaling platforms Companies like Apple, Tencent, and Google are blurring traditional industry definitions by spanning product categories and customer segments. Owners of such hyperscale platforms enjoy massive operating leverage from process automation, algorithms, and network effects created by the interactions of hundreds of millions, billions, or more users, customers, and devices. In specific product or service markets, platform owners often have goals that are distinct from those of traditional industry players. Moreover, their operating leverage provides an opportunity to upsell and cross-sell products and services without human intervention, and that in turn provides considerable financial advantages. Amazon's objective in introducing the Kindle was primarily to sell books and Amazon Prime subscriptions, making it much more flexible in pricing than a rival like Sony, whose focus was e-reader revenues. When incumbents fail to plan for potential moves by players outside their own ecosystems, they open themselves up to the fate of camera makers, which became collateral damage in the smartphone revolution. Hyperscale platforms also create new barriers to entry, such as the information barrier created by GE Healthcare's platform, Centricity 360, which allows patients and third parties to collaborate in the cloud. Like Zipcar's auto-sharing service, these platforms harness first-mover and network effects. And by redefining standards, as John Deere has done with agricultural data, a platform forces the rest of an industry to integrate into a new ecosystem built around the platform itself. What are the indicators that hyperscale platforms, and the dynamics they create, could bring disruption to your door? Look for these situations: Existing business models charge customers for information. No single, unified, and integrated set of tools governs interactions between users and suppliers in an industry. The potential for network effects is high. These factors invite platform providers to lock in users and suppliers, in part by offering free access to information. Finding vulnerabilities and opportunities in your business All of these forces and factors come together to provide a comprehensive road map for potential digital disruptions. Executives can use it to take into account everything at once-their own business, supply chain, subindustry, and broader industry, as well as the entire ecosystem and how it interacts with other ecosystems. They can then identify the full spectrum of opportunities and threats, both easily visible and more hidden.
Digital's impact on strategy By starting with the supply-and-demand fundamentals, the insurance executives mentioned earlier ended up with a more profound understanding of the nature and magnitude of the digital opportunities and threats that faced them. Since they had recognized some time ago that the cross-subsidies their business depended on would erode as aggregators made prices more and more transparent, they had invested in direct, lower-cost distribution. Beyond those initial moves, the lower half of the framework had them thinking more fundamentally about how car ownership, driving, and customer expectations for insurance would evolve, as well as the types of competitors that would be relevant. It seems natural that customers will expect to buy insurance only for the precise use and location of a car and no longer be content with just a discount for having it garaged. They'll expect a different rate depending on whether they're parking the car in a garage, in a secured parking station, or on a dimly lit street in an unsavory neighborhood. Rather than relying on crude demographics and a driver's history of accidents or offenses, companies will get instant feedback, through telematics, on the quality of driving. In this world, which company has the best access to information about where a car is and how well it is driven, which could help underwrite insurance? An insurance company? A car company? Or is it consumer device makers that might know the driver's heart rate, how much sleep the driver had the previous night, and whether the driver is continually distracted by talking or texting while driving? If value accrues to superior information, car insurers will need to understand who, within and beyond the traditional insurance ecosystem, can gather and profit from the most relevant information. It's a point that can be generalized, of course. All companies, no matter in what industry, will need to look for threats-and opportunities-well beyond boundaries that once seemed secure. Digital disruption can be a frightening game, especially when some of the players are as yet out of view. By subjecting the sources of disruption to systematic analysis solidly based on the fundamentals of supply and demand, executives can better understand the threats they confront in the digital space-and search more proactively for their own opportunities. About the Authors Angus Dawson is a director in McKinsey's Sydney office, Martin Hirt is a director in the Taipei office, and Jay Scanlan is a principal in the London office. The authors would like to thank Chris Bradley, Jacques Bughin, Dilip Wagle, and Chris Wigley for their valuable contributions to this article.",en 1459338700,CONTENT SHARED,6102826385978742696,-8020832670974472349,-324774071008045973,,,,HTML,http://it20.info/2016/03/the-incestuous-relations-among-containers-orchestration-tools/,The incestuous relations among containers orchestration tools,"This is going to be a short and (somewhat) visual blog post where I want to discuss the absolute madness that is going on in ""container land"" (for lack of a better characterization). This time I am going to try to use quotes, tweets, slide screenshots as much as possible and avoid my usual boring text rants. I believe you can draw your own conclusions in the end (but I'll give you a hint). If you thought this previous post of mine was a mess, wait till you watch this.
First off I'd like to thank Ken for suggesting a proper title for this post to avoid me sounding like a pervert. Then I'd like to quote what Google's own Kubernetes master Jedi Kelsey Hightower thinks about the container management war that is going on. And yes, we are still in the early days of this gold rush, just in case you were wondering. Last but not least, another tweet nailed it (with a funny joke that happens to be a fact). Now you may think that the problem we are facing is the proliferation of container management solutions to pick from. You wish it was that easy. It's way worse than what you think: it's getting ""incestuous"". Container management vendors (or projects) are taking an interesting path these days. Instead of trying to position themselves as the best and most viable container orchestration solution, they are starting to position themselves as the foundational orchestration solution on top of which other container management solutions could run. Yes, you read it right. The container management industry's complexity just got squared! Instead of having to pick among 25 different alternatives, you now have a choice of (25 x 24 =) 600 permutations to choose from! How fun?! But seriously, this is a game being played primarily by the 3 or 4 most visible vendors/projects (namely Docker, Mesos, Kubernetes and CloudFoundry), so the good news is that the permutations are far fewer than 600. What a pity.
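As a quick sanity check on that arithmetic, a throwaway Python snippet (the manager names are obviously made up) counts the ordered base/on-top pairings of 25 tools:

```python
from itertools import permutations

# 25 hypothetical container managers; in this madness, any one
# could supposedly run "on top of" any other.
managers = [f"manager_{i}" for i in range(25)]

# Ordered (base, on_top) pairs of distinct managers: 25 * 24 of them.
stacks = list(permutations(managers, 2))
print(len(stacks))  # -> 600
```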
Some of them are more ""serious"" than others when it comes to ""I want to run all the other container managers, and make donuts while I am at it"". I say Mesos is king here. They would like to be the center of the universe. A few examples below. They want to run Docker Swarm on top (of Mesos): (slide 23) They want to run Kubernetes on top (of Mesos): (slide 8) They (also) want to run CloudFoundry on top (of Mesos): ""The way CloudFoundry-Mesos works right now-in its very early stages-is to replace the native Cloud Foundry Diego scheduler with a Mesos framework, CloudFoundry-Mesos. Doing this does not affect the user experience or performance of other Cloud Foundry components, but would let Cloud Foundry applications share a cluster with other DCOS services without worrying about resource contention."" I have just had a shudder. Interestingly, when Docker itself presents at MesosCon they are OK with Docker Swarm being ""boxed and limited"" to one of the many Mesos frameworks. This is a common pattern in the industry these days (and a good filter to use when in doubt). Vendor A and Vendor B overlap. When Vendor A gets a slot at a Vendor B event, Vendor A concedes ""control"" to Vendor B (in this case ""having control"" means being the foundational element of the stack). Vice versa, when Vendor B gets a slot at a Vendor A event, Vendor B concedes ""control"" to Vendor A (if and where applicable, of course). For instance, the example above is not what Docker advertises when it is in charge of the message. When they are (in charge of the message) what they say is the exact opposite (that is: Mesos-Marathon on top of Docker Swarm): ""This project contains Docker Compose files used to easily deploy distributed containerized applications. Currently the project contains Docker Compose files for Kubernetes and Mesos-Marathon. The rationale behind this is that Swarm is lightweight enough to deploy additional orchestration tools on top."" But it's getting even more complex and sophisticated than that. To the point that Docker is trying to ""steal"" historical Mesos frameworks. See (and read!) this. Translation: ""Now let's get rid of Mesos entirely and just run Mesos frameworks directly on Docker Swarm!"" I even attempted to build a table to summarize what you could run on what. Warning: it's more of a joke than anything, just to point out the level of ridiculous madness we are at. Interestingly, it shows who is leading this confusing ""game"" (i.e. Mesos and Docker) and who is being pulled into this ""game"" (K8s and CloudFoundry). In conclusion, if you are an average person (like me) trying to figure out what's going on, good luck. Please come back in 5 years when (perhaps) the dust has settled a bit. Right now, it's just pure madness that only makes sense to a (limited) bunch of people. What do YOU think? Massimo.",en 1459340708,CONTENT SHARED,809601605585939618,8414731042150985013,457340264983454091,,,,HTML,http://www.zdnet.com/article/microsoft-and-canonical-partner-to-bring-ubuntu-to-windows-10/,Microsoft and Canonical partner to bring Ubuntu to Windows 10 | ZDNet,"According to sources at Canonical, Ubuntu Linux's parent company, and Microsoft, you'll soon be able to run Ubuntu on Windows 10. This will be more than just running the Bash shell on Windows 10. After all, thanks to programs such as Cygwin or MSYS utilities, hardcore Unix users have long been able to run the popular Bash command line interface (CLI) on Windows. With this new addition, Ubuntu users will be able to run Ubuntu simultaneously with Windows. This will not be in a virtual machine, but as an integrated part of Windows 10. The details won't be revealed until tomorrow's morning keynote speech at Microsoft Build. It is believed that Ubuntu will run on top of Windows 10's recently and quietly introduced Linux subsystem in a new Windows 10 Redstone build. Microsoft and Canonical will not, however, sources say, be integrating Linux per se into Windows. Instead, Ubuntu will primarily run on a foundation of native Windows libraries. This would indicate that while Microsoft is still hard at work on bringing containers to Windows 10 in project Barcelona, this isn't the path Ubuntu has taken to Windows. That said, Canonical and Microsoft have been working on bringing containers to Windows since last summer. They've been doing this using LXD. This is an open-source hypervisor designed specifically for use with containers instead of virtual machines (VMs). The fruits of that project are more likely to show up in Azure than Windows 10. It also seems unlikely that Ubuntu will be bringing its Unity interface with it. Instead the focus will be on Bash and other CLI tools, such as make, gawk and grep. Could you run a Linux desktop such as Unity, GNOME, or KDE on it? Probably, but that's not the purpose of this partnership. Canonical and Microsoft are doing this because Ubuntu on Windows' target audience is developers, not desktop users. In particular, as Microsoft and Canonical continue to work more closely together on cloud projects, I expect to find tools that will make it easy for programmers to use Ubuntu to write programs for Ubuntu on the Azure cloud. So is this MS-Linux? No. Is it a major step forward in the integration of Windows and Linux on the developer desktop? Yes, yes it is.
",en 1459340748,CONTENT SHARED,8742078838645536785,-1032019229384696495,-1858408872346331823,,,,HTML,http://www.engadget.com/2016/03/29/behind-facebook-messengers-plan-to-be-an-app-platform/,Behind Facebook Messenger's plan to be an app platform,"The question is: Why? Why would you as a user want all this integration? Why not just download Uber and request a car that way? Meanwhile, why would a developer or a business want to bake their services into Messenger? Wouldn't they rather users get their apps instead? Lastly, why does Facebook want to add all of these features anyway, and potentially weigh it down with so many added complications? There are several answers to these questions, but it all starts with a single fact: Messaging is now the number one activity most people do on their smartphones. A Pew Internet study published last year found that fully 97 percent of smartphone owners used text messaging at least once a week. Messaging was also found to be the most frequently used feature, with smartphone owners reporting that they had used text messaging within the past hour. Further, 35 percent of smartphone users in the US use some kind of messaging app to communicate. Facebook's own stats confirm that. In the last quarter of 2015, the company reported 900 million monthly WhatsApp users and 800 million monthly Messenger users. ""We have seen messaging volume more than double in the past year,"" Frerk-Malte Feller told Engadget. Feller is a Director of Product Management for Facebook who heads up Messenger's business initiatives. ""Businesses want to be where the people are."" This is certainly why Lyft wants to be involved. ""As the heart of so many of our users' day-to-day communications, [Messenger] felt like a natural fit to make getting from place to place as simple as typing 'hello' to a friend,"" a Lyft spokesperson told Engadget. From the user standpoint, having a third-party service like Uber integrated into Messenger bypasses the whole rigmarole of signing up for an account. ""You're already registered on Messenger using your Facebook identity,"" said Feller. ""When you start using a new service, you don't have to fill out all those forms [...] You can just use the identity you have on Messenger."" More importantly, however, it also means one less app to download. Sure, downloading an app sounds like a pretty trivial activity, but it's still an extra step, one which a lot of users are unwilling to take. A recent Nielsen study showed that despite the increased number of apps in both Google Play and Apple's App Store over the past few years, people still generally use the same number of apps -- about 26.7 per month. But while the total number of applications doesn't seem to have increased, the amount of time spent on them has gone up -- about a 63 percent rise in two years. We're not as interested in trying new apps, but the apps we do have, we're using more. It's a scenario that's ripe for enriching existing apps -- like the heavily used Messenger -- with additional features. As for businesses, it's a chance to increase awareness without having to rely on app downloads. Beyond that, Messenger offers a valuable social component that most existing apps don't have. With the Uber integration, for example, you can message an address to a friend, who can then tap that address to request a car.
Alternatively, if you're already in an Uber, you can use Messenger to share your location with a friend so he or she can see when you're going to arrive. All of this is on top of the ability for you to directly message the company if you're having any issues. And because this is Messenger and not an email or a phone call, whoever's reading your messages will be able to see past conversations to gain context from the existing message thread. The kinds of interactions are richer too. Spotify's integration, for example, offers a more seamless sharing experience than just copying and pasting a link. ""It's a huge upgrade,"" a Spotify spokesperson told us. ""[It allows] users to deep link into Spotify to consume content."" There is some precedent to all of this. Mobile messaging apps in Asia have been experimenting with these added features for a while now. Line, for example, has billed itself as a ""social entertainment platform,"" and has branched out into offering a music service plus a news feed, both of which are easily accessible from within the main messaging app. It also offers games, much like Messenger is currently doing, and is even going so far as becoming a phone carrier. Of course, adding third-party services is just the beginning; Messenger's ambitions go much deeper. As a recent report from The Information indicates, Facebook's chat app could soon have plenty of other features like calendar syncing, News Feed-style status updates and the ability to directly share quotes from articles. Add the M personal assistant to the equation, and it's easy to imagine a future where Messenger could be the central hub of smartphones everywhere. Perhaps even more so than Facebook itself. There is one potential downside, however, and that's the arrival of advertising. After all, that's Facebook's bread and butter, and it's naturally going to want to slap ads on an app that's getting to be this popular. And with all these business partnerships, it won't be surprising if Facebook ends up allowing companies to spam you with the occasional advertisement, especially if you voluntarily added these integrations yourself. ""The feedback from people in the last 12 months have been strong,"" said Feller. ""It really has all the right attributes and characteristics."" And with Facebook's annual F8 developer conference coming up next week, we imagine there will be even more to come.",en 1459344753,CONTENT SHARED,-1802980374508081539,-1443636648652872475,8209530310193218854,,,,HTML,http://fortune.com/2016/03/29/eric-schmidt-ai/,Google's Schmidt Says Computers Not a Threat to Humans or Jobs,"You know those movies where the machines take over and the human race gets enslaved by computers? Pure applesauce, says Eric Schmidt, chairman of Google's parent company, Alphabet. We have little to fear from the rapid growth of artificial intelligence, he said at an event at Columbia University in New York on Monday. Fears about computers run amok are the stuff of movies, he argued, adding that the technology serves to help people, not hurt them - including when it comes to things like income and employment. ""I worry about inequality but there's no evidence the stuff we do creates a permanent underclass,"" said Schmidt, in the course of an interview with Columbia's dean of journalism, Steve Coll. He added that critics have fretted about technology's impact on the job market for decades but that, overall, tech has created ""millions and millions"" of new jobs.
The timing of Schmidt's comments was perhaps ironic since, only a week ago, a Google computer drew on its machine learning prowess to defeat the world's top human player in an epic match of the game Go. The computer's 4 games to 1 victory was another milestone for machines, coming five years after IBM's Watson supercomputer routed two Jeopardy champions on national TV. Schmidt noted that AI software (which Google and others are making open source) is serving to automate tasks like sorting photos or even driving a car, but argued that only very low-skill, repetitive work will be affected. He acknowledged this includes some types of journalism jobs, including certain sports reports and corporate earnings stories, but said this trend will be limited. ""Creative jobs and the caring jobs [like health care] are the ones that are robust against everything,"" he said. ""There's no evidence that the world I live in is displacing that."" Beyond the impact of machine learning on the job market, Schmidt also downplayed its military and cyber-war implications. In his view, as the world's computers become more inter-connected, AI may come to serve as a ""defensive shield"" for the global network by identifying and isolating abnormal activities. ""This technology may be very pro-defense. We don't know yet,"" he said, but acknowledged that Google's leaders did worry about AI in the hands of dictators and authoritarian regimes. At a time when hacking is regular front-page news, including a recent cyber-plot against a dam in New York, many in the room did not appear to be reassured by Schmidt's observations. One questioner asked how to avoid the all-controlling computers of the movie Minority Report. In the end, though, Schmidt's most convincing answer might have been the simplest one: It's too difficult. According to Schmidt, AI has made enormous progress in recent years, but has now reached a huge computational roadblock. The upshot is that AI remains very good at solving problems-but only after humans have done the hard work of defining the problem in the first place. This was the case with Google's Go computer, whose game-playing feat took teams of people two years to program. As such, our fears about machines taking over may still be as far away as they were at the time of the 1968 movie, 2001: A Space Odyssey. The villain in the film, by the way, was a computer named HAL whose name was rumored to be based on a big tech company-not Google but another firm with the nearby three-letter acronym of IBM.",en 1459344926,CONTENT SHARED,-6195775145989617417,-709287718034731589,-8526179123269437438,,,,HTML,http://www.lukew.com/ff/entry.asp?1945,LukeW | Obvious Always Wins,"It's tempting to rely on menu controls in order to simplify mobile interface designs - especially on small screens. But hiding critical parts of an application behind these kinds of menus could negatively impact usage. Out of Sight, Out of Mind In an effort to simplify the visual design of the Polar app, we moved from a segmented control menu to a toggle menu. While the toggle menu looked ""cleaner"", engagement plummeted following the change. The root cause? People were no longer moving between the major sections of the app as they were now hidden behind the toggle menu. A similar fate befell the Zeebox app when they transitioned from a tab row for navigating between the major sections of their application to a navigation drawer menu.
Critical parts of the app were now out of sight and thereby out of mind. As a result, engagement fell drastically . In Sight, In Mind When critical parts of an application are made more visible, usage of them can increase. Facebook found that not only did engagement go up when they moved from a ""hamburger"" menu to a bottom tab bar in their iOS app, but several other important metrics went up as well. Similarly, Redbooth's move from a hamburger menu to a bottom tab bar resulted in increased sessions and users . Previously out of sight functionality was now front and center. What's Important Enough to Be Visible? Because there's not a lot of space on mobile screens, not everything can be visible in a mobile UI. This makes mobile design challenging. Unlike the desktop where big screens allow us to squeeze in every feature and function on screen, mobile requires us to make decisions: what's important enough to be visible on mobile? Answering that question requires an understanding of what matters to your users and business. In other words, it requires good design.",en 1459344989,CONTENT SHARED,-2176468683077766369,-709287718034731589,-8526179123269437438,,,,HTML,http://techcrunch.com/2013/09/18/facebooks-new-mobile-test-framework-births-bottom-tab-bar-navigation-redesign-for-ios-5-6-7/,"Facebook's New Mobile Test Framework Births Bottom Tab Bar Navigation Redesign For iOS 5, 6, & 7","Facebook lost its ability to ""move fast and break things"" when it switched its apps from HTML5 to native. But it's gotten its mojo back. Today it announced a big iOS 7-style app redesign featuring bottom-screen ""tab bar"" navigation built with an advanced native mobile testing framework. Facebook knew to ditch the pull-out navigation drawer by testing different interfaces in 10 million-user batches. [If you don't see the new Facebook app in the App Store, give it an hour as the rollout seems to be a bit slow] The new version of Facebook for iOS isn't just for iOS 7. It's rolling out to iOS 5 and 6 too, but with a black tab bar for navigation at the bottom of the screen that matches the old iOS style instead of the white tab bar for iOS 7. However, the tab bar won't be coming to Facebook for iPad, as it sees the drawer as still a good fit for bigger screens. For the little ones, the new tab bar delivers a super-charged ""More"" button. It appears on the far right next to one-tap buttons for News Feed, Requests, Messages, and Notifications. More reveals your app bookmarks just like the old drawer did, but will save your place in whatever product you browse. Previously, if you opened your drawer and switched to look at Events or Photos, you'd lose your place in the News Feed or whatever else you were doing. The new More button essentially opens tabs over the top of the feed so your state and context are preserved. It even works between sessions so if you leave Events open in More, your parties will be waiting there at the ready any time you tap More. As for aesthetics, Facebook has also made the top title bar translucent and redesigned many of its icons like the one for messages to match the line and arc style of Apple's new mobile operating system. But Facebook didn't flatten everything, leaving some texture and depth to the feed. You can see video of the redesigned app here. The real story today isn't the app, though, but how it was made. HTML5 Was Slow, But Boy Could It Test Facebook has never been afraid to try new things and see what sticks. 
It invented the ""Gatekeeper"" system to let it simultaneously test thousands of variations of Facebook on the web with subsets of users. It would collect data about usage and performance to inform what to roll out to everyone. On mobile, it hoped to do the same thing, so it built its iOS and Android apps using a Frankenstein combination of native architecture and HTML5. The latter let it ship code changes and tests to users on the fly without the need for a formal app update. ""With HTML5 we'd ship code every single day and be able to switch it on server-side"", Facebook product manager Michael Sharon tells me. That meant it could push a News Feed redesign one day to 5% of users, then to everyone a week later, and then fix a bug a few days after that. But beyond testing, HTML5 was a disaster. It made Facebook's apps sluggish and unresponsive, which hampered engagement, ad views, and their app store ratings. Users hated Slowbook. Mark Zuckerberg would later say on stage at TechCrunch Disrupt that ""Our biggest mistake as a company... was betting too much on HTML5"". So Facebook ditched HTML5 and rebuilt the apps entirely on native infrastructure last summer. They were twice as fast. Suddenly their app store ratings shot up, and people read twice as many News Feed stories on average. It was a huge win for Facebook. Except that it had to sacrifice HTML5's testing abilities. ""We Use Testing Kind Of Religiously"" Sharon explains: ""One thing we lost was the ability to do testing. We use testing kind of religiously in both the web and HTML5 apps, and this is something we wanted to get back to as much as possible."" Having to wait until its monthly app update cycle came around to test new versions of its apps was torture for the typically nimble company. It wanted to push changes and get immediate feedback. To solve the problem on Android, Facebook launched a beta tester club in June 2013 that let it use Android's more permissive stance towards developers to let power users sign up to play with potential new features and catch bugs. But iOS refuses to sully its simplicity with such beta capabilities. So over the past year Facebook quietly built out a new native mobile app testing framework and sprang it into action in March to build the app update released today. How it works is that when you download Facebook for iOS, the app actually contains multiple different versions of the interface. However, you're grouped with a few hundred thousand other users and you all only see one version of the app. This way Facebook can try out tons of variations all at once, without multiple app updates or any confusion for users. We've all been guinea pigs in the mobile testing framework since March, but none of us knew it. Sharon was adamant that these different tests aren't half-baked betas, saying ""We're not shipping a subpar version of our app. We're shipping full production-ready versions that could become the main experience"". When added up, Facebook would test major changes with between five and ten million users at a time - more than many apps have in total. ""I wouldn't say we're 'data-driven'. We're 'data-aware' or 'data-informed',"" Sharon says. That means that while Facebook collects a bunch of testing data that sways its decisions, it won't chuck out its intuition or a design it believes in just because the data says so. The first big mission of the new testing framework was rethinking how users navigate on mobile.
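As an aside, the bucketing mechanic Sharon describes - every user deterministically lands in exactly one variant, in groups of a few hundred thousand - is commonly implemented by hashing a stable user ID into a bucket. Here is a minimal, hypothetical Python sketch of that general technique, not Facebook's actual code; the variant names and experiment label are invented:

```python
import hashlib

VARIANTS = ["drawer", "tab_bar", "toggle"]  # hypothetical interface variants

def assign_variant(user_id: str, experiment: str, variants=VARIANTS) -> str:
    """Deterministically bucket a user: the same user and experiment
    always map to the same variant, with a roughly uniform split."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("user_12345", "ios_navigation_2013"))
```

Because assignment is a pure function of the IDs, the app can ship every variant in one binary and decide locally (or server-side) which one each user sees, which matches the behavior described above.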
The team wondered if there was something better than the navigation drawer that slides out from the side of the app. It used the new testing framework to experiment with dozens of different interface designs, and compared them on metrics including ""engagement metrics, satisfaction metrics, revenue metrics, speed metrics, perception of speed metrics"" until it found that, when looked at holistically, the row of buttons at the bottom of the feed or main screen was the best design. This is what's becoming available for iOS today. And that's how Facebook got its testing groove back.",en 1459345005,CONTENT SHARED,7973573994178035769,-8845298781299428018,3760091107461406486,,,,HTML,http://noticias.uol.com.br/politica/ultimas-noticias/2016/03/30/69-consideram-gestao-de-dilma-ruim-ou-pessima-aponta-pesquisa-ibope.htm,"Dilma government disapproved by 69% and approved by 10%, says Ibope","An Ibope poll commissioned by the Confederação Nacional da Indústria and released this Wednesday (30) shows that 69% of Brazilians rate the government of President Dilma Rousseff as bad or terrible. The poll found that only 10% rate the government as great or good, and 19% consider it average. Among those surveyed, 1% did not know how to answer. The survey was conducted between March 17 and 20, with 2,002 people in 143 municipalities. The poll's margin of error is 2 percentage points in either direction.",pt 1459345029,CONTENT REMOVED,7973573994178035769,-8845298781299428018,3760091107461406486,,,,HTML,http://noticias.uol.com.br/politica/ultimas-noticias/2016/03/30/69-consideram-gestao-de-dilma-ruim-ou-pessima-aponta-pesquisa-ibope.htm,"Dilma government disapproved by 69% and approved by 10%, says Ibope","An Ibope poll commissioned by the Confederação Nacional da Indústria and released this Wednesday (30) shows that 69% of Brazilians rate the government of President Dilma Rousseff as bad or terrible. The poll found that only 10% rate the government as great or good, and 19% consider it average. Among those surveyed, 1% did not know how to answer. The survey was conducted between March 17 and 20, with 2,002 people in 143 municipalities. The poll's margin of error is 2 percentage points in either direction.",pt 1459345792,CONTENT SHARED,-1590585250246572231,-709287718034731589,-8526179123269437438,,,,HTML,https://lmjabreu.com/post/why-and-how-to-avoid-hamburger-menus/,Why and How to Avoid Hamburger Menus - Louie A. - Mobile UX Design,"We now have data that suggests Sidebar Menus - sometimes called Hamburger Menus or Basements - might be causing more harm than good. Here's some public data: One thing to keep in mind is that this is a nuanced issue. I've observed these issues in user testing, and others have also gone through the same realization. I only ask you to read the problems and solutions, and be aware of the consequences before committing to this pattern. The Problems Lower Discoverability Less Efficient Clash with Platform Navigation Patterns Not Glanceable Lower Discoverability ""What's out of sight is out of mind."" In its default state, the Sidebar Menu and all of its contents remain hidden. People first need to identify the Sidebar Menu button as actionable - which is why companies are supplementing the menu icon with a 'menu' label or tooltip - and they also have to feel the need to open it, which might not be the case in applications where the main screen offers the majority of the value.
Less Efficient Even if people are aware of and value a feature, this pattern introduces navigation friction, since it forces people to first open the menu before they can see and reach their objective. Below is a contrasting example of how instant navigation becomes when the navigation elements are always visible. Clash with Platform Navigation Patterns On top of these issues, on platforms such as iOS, the burger menu simply cannot be implemented without clashing with the standard navigation patterns. The left Navigation Bar Button would need to be reserved for the menu button, but we also need to allow the person to navigate back. Designers will either commit the mistake pictured above and overload the Navigation Bar - not even leaving space for the screen title - or force people to navigate several screens to get to the menu, as seen below: Not Glanceable It's harder to surface information about specific items, as they're only visible when and if the person needs to navigate into other sections of the app. You might do it like the Jawbone UP app does: display an icon representing the nature of the notification next to the Sidebar Menu button. This doesn't scale well, though, as it requires you to maintain more icons, and as a designer you might be forced to display a generic notification icon instead, reducing its meaning. In contrast, the Tab Bar below - taken from Twitter - lets the user understand the context of the notification and navigate directly to the screen associated with it. Cognition You might feel compelled to use this pattern in order to save screen real estate, but that's really a misunderstanding of what people see in reality. While you might think people see everything that's in front of them, we actually tend to have a focus area, even on screens of reduced size. So saving screen real estate can be achieved in ways that don't negatively impact navigation or go against basic HCI principles such as providing feedback and displaying state in your application. On a side note: perhaps what we need is to refresh our understanding of HCI; I'm pretty sure that would avoid a lot of design mistakes being made by people who choose to take a visual approach to design. The Solution A lot has been written about the problems, but the solution still isn't clear for everybody. When Should I Use It? There might be some very rare occasions where this pattern actually makes sense, but the general rule is to avoid it altogether. IRCCloud is an example where the application of this pattern makes sense in a way - it allows navigation between channels and channel members. This is acceptable because the main screen has no child screens that require a hierarchical navigation stack; media can simply be presented in a modal. But even in this scenario, it's already visible how the UI is being overloaded and needs its IA to be rethought. The channel members Sidebar Menu button (right) takes away the opportunity to display an Actions button that could house all channel-related actions. Instead, the designers had no choice but to mix actions from different contexts such as channel, network and account into one single Action Sheet: This leads us to the next section of this article. What Should I Use Instead? The Sidebar Menu pattern welcomes bad IA because you can simply add one more thing to it without a direct consequence - that is, until people actually use it. ""The solution is reviewing your information architecture."" Above is an example of how to move away from a Sidebar Menu.
You can follow the color-coded dots to understand how the elements transition between these two solutions. Takeaways: State can be directly presented in the Messages tab. Items are always visible and instantly accessible. No navigation gesture conflict. On top of fixing those big issues, we can still save some vertical space by hiding the Navigation Bar based on the scrolling direction - seen here in Facebook but also implemented in Safari. The persistent Tab Bar is used to indicate the current screen, allowing us not to depend on the Navigation Bar to do so. If you're feeling minimal, perhaps a Tool Bar can be enough. The key is to not hide navigation, to allow direct access, to avoid conflicting with navigation gestures, and to present feedback on the icon it relates to. [ Update ] For websites, I believe it's best to still review the IA, but instead of using these iOS patterns, simply display the navigation in the website header as a list - example. As long as it's evident as website navigation, people will still scroll past it and will definitely be immediately exposed to the available options. Also, still talking about websites on mobile: remember to remove the 300ms click delay by following these tips or by using touch events. How does it scale? The examples I'm giving here are based around iOS, and in this situation you'll want to use a Tab or Tool Bar. But how does the Tab Bar scale beyond 5 items? Such a situation isn't ideal and might again indicate an issue with the IA of your app, but if you must expand beyond 5 tabs, a common pattern is to use the last Tab Bar item to provide access to the remaining options - unfortunately similar to a basement menu. You might also implement a scrollable Tool Bar as seen in Rookie; this allows you to avoid the issues of the Sidebar Menu and incur only slightly higher navigation friction, with possibly higher error rates due to the need to distinguish between tap and scroll intentions. Keep in mind that this second solution is more appropriate for actions rather than navigation. Rookie's implementation deals with the indeterminate state its Tool Bar is left in after scrolling by hiding it after one of the tasks it offers is complete - such as crop, rotate, etc. This prevents the indeterminate state from sticking, as the Tool Bar is hidden and reset the next time it's displayed. Conclusion So you've read about the problems with the Sidebar Menu pattern, and also about the solution in the iOS context, which has been there since its inception. Hope this is useful and clear; if you have any comments, feel free to ping me on Twitter over at @lmjabreu [ Update ] The feedback on this article has been amazing! Millions of readers and, more importantly, good conversations on Twitter. I've collated a selection of those tweets using Storify. It seems there's more to say about Android and especially about coming up with a set of navigation patterns for the Web - as the purpose of this icon varies immensely. [ UPDATE - 15/03/2016 ] Android UI Guidelines now include the Tab Bar as a main navigation component; it's called Bottom Navigation. Other articles and tweets on this topic: @mdo review the IA and simplify the root level, place items in their context, e.g: DMs, Settings on the user profile instead of burger. - Luis Abreu (@lmjabreu) May 14, 2014 @pixeliris iOS: Tab Bar, never side menus as they reduce discoverability & glanceability, add friction. All leading to lower engagement. - Luis Abreu (@lmjabreu) May 14, 2014 ""No one understands the icon, let's add the word menu.
The word is too small, let's add a pop-up calling it out."" pic.twitter.com/Jargi7gavX - Luke Wroblewski (@lukew) March 11, 2014",en 1459346036,CONTENT SHARED,-4662020648308370136,4670267857749552625,204912088629071541,,,,HTML,http://leanstartup.co/avoid-building-bad-products-rapid-validation/,How to Avoid Building Bad Products with Rapid Validation - Lean Startup Co.,"Despite our best efforts, there are a lot of products out there that aren't so great. We know from Lean Startup that one of the ways to avoid going down the rat hole with bad products is to be willing to fail fast, but Amir Shevat, Director of Developer Relations at Slack, former Googler and entrepreneur, believes that there are ways to avoid building bad products in the first place. In his session at the Lean Startup Conference 2015, Amir explained how rethinking the way we experience products can help us to get feedback faster. The key is thinking about interactions with products as a conversation, and measuring how well that conversation is going. To demonstrate his point, he called a member of the audience up to the stage, said ""hello"" and then promptly asked for the man's personal information. The volunteer was briefly caught off guard, as anyone might be if abruptly propositioned for a phone number or address. In the context of social conversation, we can easily see that such a request is awkward, but we still build apps that immediately ask for a username and a password without first engaging the user or explaining what the user will get out of the app. Amir next quizzed the audience by presenting two user experiences that a family-photo-sharing app had tested for their sign-up process. The first flow was much shorter, while the second was longer and prompted the user to add connections and post a photo. Almost the whole audience guessed that the first flow was more successful, but they turned out to be wrong. As Amir explained, the second flow had steps that engaged the user and drove better retention and usability. This exercise showed that people should be shown the value of the app as part of the initial conversation. Any requests for information should be part of an organic conversation or exchange, so users understand what they're getting. It also functioned as a lesson on why it's so important to measure customer feedback not based on what they say, but rather based on what they do within the product. Amir gave another example of this from his own experience as an entrepreneur, when he was building an app to help people meditate. He found that although people were eager to try meditation, there was a significant amount of drop-off after they began actually using the app. Asking users for feedback wasn't providing clear answers, but observing them in conversation with the app gave better clues. He ultimately observed that the session time was too long to keep users engaged. In an experiment to dramatically shorten the duration of meditation, he got the response he'd hoped for: more engaged users. Amir noted that one important part of running experiments is actually not telling the user it's happening. This reinforces that the in-app conversation is the ultimate source of truth. Telling the user that you're going to run an experiment and then, for example, sending them a survey to ask if they liked it, is not as effective because it's too far removed from their natural usage. Observing the quality of conversation between app and user is what Amir calls ""rapid validation.""
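It helps to make ""observing the conversation"" concrete. Below is a minimal sketch in TypeScript, assuming a plain log of which step each user reached; the Event shape and function names are invented for illustration and are not from Amir's actual tooling:

// Per-step drop-off over an assumed { userId, step } event log.
interface Event { userId: string; step: string; }

function dropOff(events: Event[], funnel: string[]): Map<string, number> {
  // Distinct users who logged each step of the funnel.
  const usersAt = funnel.map(step =>
    new Set(events.filter(e => e.step === step).map(e => e.userId))
  );
  const rates = new Map<string, number>();
  for (let i = 1; i < funnel.length; i++) {
    const prev = usersAt[i - 1].size;
    // Share of the previous step's users who never arrived here.
    rates.set(funnel[i], prev === 0 ? 0 : 1 - usersAt[i].size / prev);
  }
  return rates;
}

// e.g. dropOff(log, ['open_app', 'start_session', 'finish_session'])

A spike at one step points to the exact moment the conversation breaks down - the kind of signal that, in the meditation app, surveys missed and observed behavior made obvious.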
Hearing from customers, getting feedback, iterating and failing fast when necessary are all key principles of Lean Startup. Focusing on empathy and conversation is a way to accelerate that process and get to product/market fit even more efficiently. There will always be products that don't work out and companies that don't succeed, but by using rapid validation, you save yourself from mistakes that are easily avoidable. Enjoying these stories? Never miss out on real-time updates and learn about the hot topics in our community. Connect with us on Twitter: @leanstartup",en 1459350016,CONTENT SHARED,-4110991218639855802,-1443636648652872475,8209530310193218854,,,,HTML,http://venturebeat.com/2015/11/27/5-open-source-alternatives-to-slack/,5 open-source alternatives to Slack,"Slack, the team communication app, went down earlier this week, sending people to Twitter to commiserate. So what if the startup is valued at $2.8 billion? It's still a web service, and web services have outages sometimes. Of course, there is Internet Relay Chat (IRC), but that's a protocol. Ultimately Slack can be thought of as a hosted and souped-up IRC client, and there are plenty of other ones to choose from. Here are five full-featured Slack alternatives - tools that go beyond IRC, in other words - that are open-source software, which means you can download them and run them on whatever server you want. That implies that you're in charge of security, for better or worse, instead of, say, Slack. Friends, a tool that emerged earlier this year, stands out for its ability to let people communicate with others on the same local network, even when there's no Internet connection. Based on the XMPP messaging protocol, Kaiwa was released earlier this year by French software development shop Digicoop. Available under a GNU AGPL license, Mattermost the platform has been selected by startup GitLab to ship alongside its eponymous open-source code-repository software. Mattermost the company is preparing to launch an enterprise-grade version of the open-source software. Established earlier in 2015, Rocket.Chat has a wide range of capabilities, like file sharing, video conferencing, and service-desk messaging. Dropbox acquired the team behind Zulip last year and released the Zulip software under an Apache license this past September. There are other options out there, but these have gotten traction in the open-source world, so they'll probably continue to be around for a while. Slack itself has not open-sourced its own client. If that changes, I will of course add it to this list.",en 1459350038,CONTENT SHARED,1964631817676172382,3891637997717104548,1776955132850411647,,,,HTML,https://cloudplatform.googleblog.com/2016/03/IAM-best-practice-guides-available-now.html,IAM best practice guides available now,"The Google Cloud Identity & Access Management (IAM) service gives you additional capabilities to secure access to your Google Cloud Platform resources. To assist you when designing your IAM strategy, we've created a set of best practice guides. The best practice guides include: The ""Using IAM Securely"" guide will help you to implement IAM controls securely by providing a checklist of best practices for the most common areas of concern when using IAM. It categorizes best practices into four sections: Least privilege - A set of checks that assist you in restricting your users or applications from doing more than they're supposed to. Managing Service Accounts and Service Account keys - Provides pointers to help you manage both securely.
Auditing - This covers practices that include reminding you to use Audit Logs and cloud logging roles. Policy Management - Some checks to ensure that you're implementing and managing your policies appropriately. Cloud Platform resources are organized hierarchically, and IAM policies can propagate down the structure. You're able to set IAM policies at the following levels of the resource hierarchy: Organization level. The Organization resource represents your company. IAM roles granted at this level are inherited by all resources under the organization. Project level. Projects represent a trust boundary within your company. Services within the same project have a default level of trust. For example, App Engine instances can access Cloud Storage buckets within the same project. IAM roles granted at the project level are inherited by resources within that project. Resource level. In addition to the existing Google Cloud Storage and Google BigQuery ACL systems, additional resources such as Google Genomics Datasets and Google Cloud Pub/Sub topics support resource-level roles so that you can grant certain users permission to a single resource. The diagram below illustrates an example of a Cloud Platform resource hierarchy. The ""Designing Resource Hierarchies"" guide provides examples of what this means in practice and has a handy checklist to double-check that you're following best practice. A Service Account is a special type of Google account that belongs to your application or a virtual machine (VM), instead of to an individual end user. The ""Understanding Service Accounts"" guide provides answers to the most common questions, like: What resources can the service account access? What permissions does it need? Where will the code assuming the identity of the service account be running: on Google Cloud Platform or on-premises? This guide discusses the implications of making certain decisions so that you have enough information to use Service Accounts safely and efficiently. We'll be producing more IAM best practice guides and are keen to hear from customers using IAM, or wanting to use IAM, about what additional content would be helpful. We're also keen to hear if there are curated roles we haven't thought of. We want Cloud Platform to be the most secure and the easiest cloud to use, so your feedback is important to us and helps us shape our approach. Please share your feedback with us at: GCP-iam-feedback@google.com - Posted by Grace Mollison, Solutions Architect",en 1459352135,CONTENT SHARED,5156657166068502754,2416280733544962613,-2881817783086620927,,,,HTML,http://www.nytimes.com/2016/03/24/technology/google-showcases-its-cloud-efforts-determined-to-catch-up-to-rivals.html,"Google Showcases Its Cloud Efforts, Determined to Catch Up to Rivals","In the cloud computing business, Google's technology prowess is rarely questioned. Its commitment, however, has been doubted. Google, which trails Amazon and Microsoft in the fast-growing market, hopes to change the industry perception that it is halfhearted about its cloud computing service with product announcements, technology demonstrations and strategy briefings at a two-day conference in San Francisco that began on Wednesday. The company is showcasing its cloud software for machine learning, a branch of artificial intelligence. It has a new speech service: feed in audio, and the software issues a transcript. Its recently introduced vision service for identifying images also will be more broadly available soon, Google said.
And new tools and training aids are available to help developers build machine-learning applications more easily. These are the first significant steps since Diane B. Greene, a respected Silicon Valley technologist, became senior vice president in charge of Google's cloud business in November. Ms. Greene is a co-founder and former chief executive of VMware, whose software is widely used in corporate data centers. She knows the enterprise computing business, which has not been Google's strength. The new cloud offerings after Ms. Greene's appointment signal the company's intent, analysts say. ""Google finally wants to compete in the cloud for enterprise business,"" said Mike Gualtieri, an analyst at Forrester Research. Having technology is only one ingredient in the corporate marketplace, analysts note. Marketing, training and customer service are also important to winning business. ""We've got to get really good at communications and training,"" Ms. Greene said in an interview. In the corporate cloud business, Google must address a wide range of customers and requirements, she said. For many, a purely online relationship will be fine, she said, while others want hands-on assistance. For labor-intensive projects, Google will work through partners, she said. The company has no intention of building its own consulting arm. ""That's not what Google does,"" said Ms. Greene, who, since 2012, has been a board member of Google and then its parent company, Alphabet, which was established last year. But Google plans to step up training and certification programs, again working through industry partners to expand that effort. ""We'll train the trainers,"" she explained. Traditionally, technology companies have hired armies of sales representatives who have sold corporations expensive software that resided in their data centers. That business still exists, but it is eroding rapidly as cloud software, with its pay-for-use model, becomes mainstream. Cloud software, Ms. Greene said, is redefining the relationship corporate customers have with technology companies. ""Nobody has figured it out yet,"" she said. ""But it has to span the spectrum from self-service to high touch."" The essence of Google's appeal to customers and industry partners is that ""you sign up for keeping pace with Google's technology development,"" Ms. Greene said. Wix, an online service for building websites, has been using Google's vision service recently. It lets users search the many thousands of images Wix has in its database. Creating an online jewelry commerce site? Search the Wix digital library of unlabeled stock photos for ""pearls"" and ""diamond rings,"" and Google software finds those images. ""It's pretty amazing,"" said David Zuckerman, head of developer experience at Wix. ""The neat thing is that the users never know they are using artificial intelligence."" Wix runs its sizable business on the cloud, and it uses the cloud services of Google, Amazon and Microsoft. Mr. Zuckerman welcomes vigorous competition among the three, which should ensure lower prices and better deal terms. Recognizing it is the underdog, Google is starting to try harder with better support and terms, Mr. Zuckerman said.
""I feel a closer relationship with Google, probably because it is not the king in this space,"" he said.",en 1459354504,CONTENT SHARED,-4102297002729307038,3891637997717104548,1776955132850411647,,,,HTML,http://www.mediacurrent.com/blog/acquia-dev-desktop-for-beginners,Acquia's Dev Desktop - a Drupal server for beginners,"Web development can be complicated. When it comes to contributing to Drupal, either on one's own time or as part of a job, one of the biggest problems beginners have is getting all of the tools working correctly. It can also be a source of frustration to people with plenty of experience too - maybe they would prefer to put time into writing witty blog posts, designing the perfect rounded corner, or throwing snowballs with their children, rather than dealing with server configuration haiku. AMPing up web development A typical website system requires an operating system, e.g. Linux is good for a server's OS, a web server program, e.g. Apache HTTP Server or nginx , a database, e.g. MySQL , MariaDB or PostgreSQL , and a programming language for gluing it all together, like PHP , Python , Ruby , etc. When these collections of software started gaining popularity during the early 2000's, by far the most popular combination was Linux, Apache HTTP Server, MySQL and PHP, which became known as a ""LAMP stack"". While Drupal requires the PHP part of the ""LAMP"" equation, the other parts can be swapped out for alternatives. The most commonly replaced element is the operating system - not everyone wants to run Linux on their computer just to work with web site software. This then results in ""WAMP"" to describe running the tools on Windows, ""MAMP"" for running them on a Mac, etc. To simplify describing these different possibilities it is common to remove the first letter, resulting in the more general phrase ""AMP stack"" or *AMP stack (the asterisk is a wildcard, i.e. it fills in for ""L"", ""W"", ""M"" or whatever). Configurable complexity As these systems are designed to be flexible, each component will have its own settings files that need to be configured *just so* in order for them to work correctly. Each program will have its own collection of configuration files, each file may (and probably will) have a different syntax, a different location, a different set of additional shared library files that also need to be in the right directory with the right filename, and each release of the tools can change these in obscure ways. Working through the manual configuration of all of these can take a good deal of time, especially when one small mistake can, and has, ruined many a good week for many, many people. Clearly, the amount of configuration necessary to get everything running together in a cohesive manner can be daunting for someone with extensive server configuration experience, never mind a beginner. Thankfully many people realized the need to simplify and streamline this motley crew and put together packaged systems that contained all of the necessary components to run an AMP stack on someone's existing computer. There are a number of suitable packages available today, popular ones include WampServer for Windows, MAMP for OSX, and, thankfully, all Linux distributions have all of the tools readily available through their system software manager already. 
Taking this one step further, Mediacurrent's partner Acquia built its own AMP stack, called Acquia Dev Desktop, which provides extra gravy on top of the average AMP meal and is well worth using for beginners and seasoned developers alike. But what about Docker, DrupalVM, etc.? A step beyond the easily installable AMP stack is something like Docker or DrupalVM. These don't just install a web server, database and programming language on a computer; they install a whole virtual computer inside another one, system kernel and all! These can be a great way of simplifying the steps to configure additional software, e.g. reverse-proxy caching using Varnish, custom search engine applications like Apache Solr, etc. They can also provide a (close-to) 1:1 copy of a production website's operating system, down to the very same shared library revisions and everything, with much less effort than ordering some hardware from Cheap Servers 'R Us. However, there's currently more effort required to install and manage these, including what can be hundreds of megabytes of operating system installation files that need to be downloaded per project, in addition to the website project's actual codebase, database and files. Not everyone has the bandwidth to deal with downloading such large installation files, or disk space for another 5 GB virtual operating system installation on their computer. There can also be problems running several virtual systems at once - it might be necessary to specifically stop one virtual machine before starting up another. This continues to add more hassle to working on a project than many may wish to deal with. Lastly, while there are many great options available to do this, at the time of writing there are several competing standards, with no one standard achieving significantly more support in the Drupal community than the others. This can leave a person needing to be familiar with several web platforms on their computer as different website projects use different tools, which can quickly reduce the benefits of standardizing the tools. It is for these reasons that this article will focus on simpler tools. 9 Reasons to Use Dev Desktop There are several reasons why Dev Desktop makes a good starting point for contributing to Drupal: The package runs on Windows *and* Mac OS X. Given that most companies, and people, use one of these operating systems for their day-to-day work (and evening work, and weekend work, and other projects), having the same software available for both can greatly simplify an organization's need to standardize on a single software title for both platforms. This simplifies an organization's toolset, ensures that team members can help and learn from each other, and saves time otherwise spent waiting for ""The IT Crew"" to show which buttons to click; all three save a company's staff time, thus making them more productive overall. The interface hides the majority of the complexities that most don't need to deal with - it just works! All the complex pieces are still available if needed, but they can be safely ignored most of the time - there's just one program visible to the computer's user, and it takes care of the details. When Dev Desktop isn't being used it can be turned off, which turns off the Apache and MySQL server programs. This saves memory, processing power, and battery life on the computer.
It makes it super easy to spin up fresh copies of Drupal core or different distributions to test out modules, themes and patches in environments isolated from other sites that might be installed locally. It takes out the pain of setting up the database and its access permissions for each local Drupal install, which is always a tricky step for beginners. It comes with several versions of PHP, and each site installed on the computer can be set up to use a different version - and then changed to another version with just a few mouse clicks! This takes all of the pain out of testing something with both old and new releases of PHP. It is rather popular amongst Drupalists of all skill levels, ensuring that help might not be too far away if and when needed. The software can connect to an Acquia web hosting ""cloud"" account to download entire copies of production websites with a few clicks, and then changes can be uploaded again. With button clicks. Magic! Dev Desktop isn't limited to working with sites that are hosted with Acquia, so it can be used to work on sites hosted on Pantheon, Platform.sh, CMS Farm, etc. - in fact, any web hosting platform. Obviously these services won't have the tight integration that Acquia's own has, but that won't prevent them from being used. Give it a try Now that I've explained what AMP stacks are, and what Dev Desktop is, give it a try. The next article in this series will explain how to set up Acquia Dev Desktop so it can be used to contribute to Drupal core and contrib issues. Additional Resources Your Drupal Site is a Platform | Blog Post How to Streamline a Vagrant Workflow with Vagrant-Exec | Blog Post Dev Hacks: My Other Office | Blog Post",en 1459354676,CONTENT SHARED,2093656054622337275,-1032019229384696495,-1858408872346331823,,,,HTML,https://developers.googleblog.com/2016/03/introducing-google-api-console.html,Introducing the Google API Console,"Every day, hundreds of thousands of developers send millions of requests to Google APIs, from Maps to YouTube. Thousands of developers visit the console for credentials, quota, and more - and we want to give them a better and more streamlined experience. Starting today, we'll gradually roll out the API Console at console.developers.google.com, focusing entirely on your Google API experience. There, you'll find a significantly cleaner, simpler interface: instead of 20+ sections in the navigation bar, you'll see API Manager, Billing and Permissions only (Figure 1: API Console home page; Figure 2: Navigation section for API Console). console.cloud.google.com will remain unchanged. It'll point to Cloud Console, which includes the entire suite of Google Cloud Platform services, just like before. And while the two are different destinations, your underlying resources remain the same: projects created on Cloud Console will still be accessible on API Console, and vice versa. The purpose of the new API Console is to let you complete common API-related tasks quickly. For instance, we know that once you enable an API in a new project, the next step is usually to create credentials. That's why we've built the credentials wizard: a quick and convenient way for you to figure out what kind of credentials you need, and add them right after enabling an API (Figure 3: After enabling an API, you're prompted to go to Credentials; Figure 4: Credentials wizard). Over time, we will continue to tailor the API Console experience for the many developers out there who use Google's APIs.
So if you're one of these users, we encourage you to try out API Console and use the feedback button to let us know what you think!",en 1459354797,CONTENT SHARED,9122627895188486603,-1032019229384696495,-1858408872346331823,,,,HTML,http://techcrunch.com/2016/03/30/task-management-app-asana-raises-50m-at-a-600m-valuation-led-by-ycs-sam-altman/,Task management app Asana raises $50M at a $600M valuation led by YC's Sam Altman,"Asana, an enterprise app that lets people set and track projects and other goals, has hit a goal of its own: today, the company is announcing that it has raised $50 million. The Series C round - led by Y Combinator's Sam Altman - values the company at $600 million, the company tells me. As a bit of context, Asana last raised $28 million in 2012; that Series B was at a $280 million valuation, according to our sources. Co-founded in 2009 by Facebook co-founder Dustin Moskovitz and early FB employee Justin Rosenstein out of the belief, in their own words, that ""every team in the world is capable of accomplishing bigger goals, and that software could help empower them to drive work forward with more ease, clarity, and accountability,"" the company will be using the funds to continue building out Asana's functionality (more on that below) and also expand its customer base internationally (it's largely a US-based list of clients today). Asana today has 13,000 paying businesses as customers, up from 10,000 in September, and over 140,000 businesses using the product overall, adding some 10,000 every month. The company has both free and premium tiers, with the latter charged at $8.33 per member per month for groups above 15 in exchange for more features. The company says that for the past four years, annual recurring revenue has been ""more than doubling"", and that the company is on track to profitability in the next few years. ""This fundraising is the fuel we need to get to the next stage, and to accelerate the fulfillment of our mission,"" the founders note. In addition to Altman (who said he has wanted to invest in the company ""for a long time""), this round includes a long list of other very high-profile backers - a testament both to the founders' own pedigrees and to Asana's place as one of the more respected and used startups in the productivity/enterprise apps space. They include 8VC (Joe Lonsdale's new VC firm post Formation 8); Peter Thiel's Founders Fund (which led Asana's Series B); Mark Zuckerberg and Priscilla Chan (respectively CEOs of Facebook and The Primary School); Tony Hsieh (Zappos' CEO and Vegas visionary); Andrew Mason (Detour CEO and Groupon co-founder); Adam D'Angelo of Quora; Aditya Agarwal and Ruchi Sanghvi; Eric Ries (Lean Startup author); Roger McNamee (Elevation Partners' founder); and Moskovitz and Rosenstein themselves. The two point out that these investors' businesses are Asana users, and the individuals use it, too, with some funny side notes. Dropbox VP Aditya Agarwal and Dropbox alum Ruchi Sanghvi ""use Asana to manage their domesticity""; and Mason's fervour, meanwhile, is so strong that he ""makes us look lukewarm on this whole Asana thing."" As more businesses move their work processes online - from creating documents and other data in apps like Quip or Google Docs or Microsoft through to communicating with each other (think Slack or Yammer) - productivity apps seem to be having a moment right now where fundraising is concerned.
Just last week, BetterWorks - another platform that helps workers set and manage tasks and goals - announced a Series B of $20 million. Indeed, in addition to BetterWorks, there are others like Basecamp, Wrike and Trello, all offering ways to boost productivity and help organize so-called knowledge workers (essentially, those tied to keyboards or screens to get their jobs done), making for a competitive landscape but also a sign that there is a ripe opportunity to do more. For its part, Asana has been testing a beta of a product called Track Anything, which will essentially let people mark how they are completing tasks without them having to do the legwork. This is a challenge that others are tackling, too.",en 1459354991,CONTENT SHARED,1933229167501870037,-9016528795238256703,-5622965263606080202,,,,HTML,https://uxplanet.org/perfect-menu-for-mobile-apps-39b2cb5b7377?gi=f7f2505d5e95,Perfect Menu for Mobile Apps - UX Planet,"Perfect Menu for Mobile Apps In both applications and sites, users rely on menus to find content and use features. Menus are so important that you can find them in every site or app you encounter, but not all menus are created equal. Too often we face problems with menus - parts of menus are confusing, difficult to manipulate, or simply hard to find. Make It Visible A lot of posts have been written about the hamburger menu, arguing against it. That little three-lined button is the devil. And it's not about the icon itself but rather about hiding the navigation behind an icon. Out of Sight, Out of Mind Hidden navigation is a pretty obvious solution for small screens - you don't have to worry about the limited screen real estate, just place your whole navigation into a scrollable drawer that is hidden by default. But hamburger buttons are less efficient, since you have to tap once before you're even allowed to see the option you want. In Sight, In Mind Interaction theory, A/B tests, and the evolution of some of the top apps in the world show that exposing menu options in a more visible way increases engagement and user satisfaction. That's why many apps are shifting from hamburger menus towards making the most relevant navigation options always visible. YouTube makes main pieces of core functionality available with one tap and allows rapid switching between features. There are also clever ways to make the tab bar disappear when it's not in use. If the screen is a scrolling feed, the tab bar can be hidden while people scroll for new content and revealed when they start pulling down to get back to the top. Last but not least, many designers make the mistake of putting their sorting items in a dropdown menu. But this leads to the same problem - users only see the selected option, while the other sorting options are hidden. Toggle button example for iOS: Takeaway: Many apps still use the hamburger because it's an easy way to jam a ton of links into an app. But it's the wrong direction, because if your navigation is complex, hiding it does not make it mobile-friendly. Communicate the Current Location Failing to indicate the current location is probably the single most common mistake to see on site or app menus. ""Where am I?"" is one of the fundamental questions users need to answer to successfully navigate. Users rely on visual cues from menus to answer this critical question. But sometimes what they see isn't what they actually expect to see. Icons There are universal icons that users know well, mostly those representing functionality like search, email, print and so on.
Unfortunately ""universal"" icons are rare. And app designers often hide functionality behind icons that are actually pretty hard to recognize. I've highlighted this problem in this post . Colors Current state can be directly presented in the tab bar using contrasting colors and properly selected label. Good example for color selection. Takeaway: Properly selected i cons and colors can help users understand the current location. If you use icons, always have them usability tested . Coordinate Menus with User Tasks You should use only understandable link labels . Figure out what users are looking for, and use category labels that are familiar and relevant for your target audience. Menus are not the place to get cute with internal jargon. So stick to terminology that clearly describes your content and features. Users love mobile apps that nail a specific use case really quickly. And you can reduce the amount of time users need to spend understanding menus. Complex features should always be displayed with a proper text label. Takeaway: Menu elements should be easy to scan. Users should be able to understand what exactly happens when they tap on a element. Make It Easy to Manipulate Elements that are too small or too close together are a huge source of frustration for mobile users. So make menu links big enough to be easily tapped or clicked. An MIT Touch Lab study found that the average width of the index finger is 1.6 to 2 cm for most adults. This converts to 45-57 pixels. A touch target that's 45-57 pixels wide allows the user's finger to fit snugly inside the target and provides them with clear visual feedback that they're hitting the target accurately. Takeaway: Menu should have finger-friendly design. Matching your touch target sizes to the average finger size improves mobile usability for many users. Conclusion Helping users navigate should be a high priority for almost every site and application. The goal behind this moments is to create an interaction system that naturally aligns with the user's mental models. Simple user flows, clear visuals, and forgiving design help create the illusion that the user's abilities allowed for a seamless experience. And your interaction system should aim to address problems for your users through clear visual communication. You're designing for your users . The easier your product is for them to use, the more likely they'll be to use it. Thank you!",en 1459355624,CONTENT SHARED,7528802258213768379,3891637997717104548,1776955132850411647,,,,HTML,https://www.drupal.org/project/paragraphs,Paragraphs,"Overview Paragraphs is the new way of content creation! It allows you - Site Builders - to make things cleaner so that you can give more editing power to your end-users. Instead of putting all their content in one WYSIWYG body field including images and videos, end-users can now choose on-the-fly between pre-defined Paragraph Types independent from one another. Paragraph Types can be anything you want from a simple text block or image to a complex and configurable slideshow. Paragraphs module comes with a new ""paragraphs"" field type that works like Entity Reference's. Simply add a new paragraphs field on any Content Type you want and choose which Paragraph Types should be available to end-users. They can then add as many Paragraph items as you allowed them to and reorder them at will. 
The Paragraphs module does not come with any default Paragraph Types, but since they are basic Drupal entities you can have complete control over what fields they should be composed of and what they should look like through the typical Drupal Manage Fields and Manage Display screens. You can also add custom option fields and do conditional coding in your CSS, JS and preprocess functions so that end-users can have more control over the look and feel of each item. This is much cleaner and more stable than adding inline CSS or classes inside the body field's source. So... what's it gonna be? Accordions, Tabs, Slideshows, Masonry galleries, Parallax backgrounds...? Think big! Some more examples: Add a block of text with an image left of it Add a slideshow between blocks of text Add a YouTube embed between your text Add quotes between your content blocks Features This module has some overlapping functionality with field_collection, but it has some advantages over field_collection: Different fields per paragraph bundle Using different paragraph bundles in a single paragraph field Displays per paragraph bundle Bundles are exportable with Features Entities, so: exportable field bases/instances, usable in Search API, usable in Views Related modules Demo sites Drupal 7 See this page for the Drupal 7 information and documentation. Requirements Known issues Node Clone does not work well with Paragraphs, see #2394313 - fixed in RC1. Drupal 8 A Drupal 8 version has been created. The Drupal 8 version currently has the same functionality as the Drupal 7 module. Version 8.x-1.0-rc4 is compatible with Drupal 8.0.0. See this page for the Drupal 8 information and documentation. Requirements Credits Paragraphs is developed and maintained by .VDMi/, a Drupal shop in Rotterdam, The Netherlands. Paragraphs logo by Nico Grienauer (Grienauer).",en 1459355695,CONTENT REMOVED,8078873160882064481,3891637997717104548,1776955132850411647,,,,HTML,http://buytaert.net/white-house-deepens-its-commitment-to-open-source,Dries Buytaert,"White House deepens its commitment to Open Source Yesterday, the White House announced a plan to deepen its commitment to open source. Under this plan, new, custom-developed government software must be made available for use across other federal agencies, and a portion of all projects must be made open source and shared with the public. This plan will make it much easier to share best practices, collaborate, and save money across different government departments. However, there are still some questions to address. In good open source style, the White House is inviting developers to comment on this policy. As the Drupal community, we should take advantage and comment on GitHub within the 30-day feedback window. The White House has a long open source history with Drupal. In October 2009, WhiteHouse.gov relaunched on Drupal and shortly thereafter started to actively contribute back to Drupal -- both were firsts in the history of the White House. The White House's contributions to Drupal include the ""We the People"" petitions platform, which was adopted by other governments and organizations around the world. This week's policy is big news because it will push open source deeper into the roots of the U.S. government, requiring more government agencies to become active open source contributors. We'll be able to solve problems faster and, together, build better software for citizens across the U.S. I'm excited to see how this plays out in the coming months!
",en 1459355897,CONTENT SHARED,-9083294960368598209,3891637997717104548,1776955132850411647,,,,HTML,https://dev.acquia.com/blog/drupal-howto-responsive-or-adaptive-images/04/03/2016/9796,"Drupal How-To: Responsive or Adaptive Images? | Acquia","In this 3-part Drupal How-To series, I'm going to show you various options for configuring images on your site. In Part 1, we looked at how to tweak the default image options. In Part 2, we saw ways to allow inline images. In this post, I'll discuss the various options for responsive/adaptive images on your site. Though I'm writing for beginners to Drupal, I assume you're aware of responsive design and, of course, you've read my first two blog posts, so you also understand how Drupal handles images. If you're not sure, check out this info to start: The image problem Creating any website is an obstacle course, but taking the responsive route means you'll run into a few tricky turns: tables, grids and images all pose problems when you try to use the same content across different browser widths. Images pose the biggest problem because not only does an image need to resize, in some cases you need a totally different proportion, and of course, you don't want to serve up a large desktop-sharp image to a 320px mobile device over a 3G connection. Also, the size of the image isn't necessarily related to the width of the device, but to the container around it. Below, I've made an illustration of the image problem with an example 3-column website with a wide lead image. You can see how you could compromise somewhat on the images across your desktop, laptop and tablet. Perhaps you could use CSS to resize the images where there is only a small difference. Media queries check the dimensions of the browser window, and allow you to set CSS specific to that browser width. However, for a mobile device, a different crop of an image would be more suitable. It's bleeding-edgy Any solution for so-called fluid images, responsive images, or adaptive images right now is going to be a bleeding-edge case. That means it's likely to be fragile and need tweaking and updates. Here's a good round-up of related articles on the topic. The Responsive Images Community Group has proposed the new picture element as a solution. This would allow the markup to specify multiple image sources, and display the appropriate one based on specific browser widths, and then rely on a ""fallback"" in any other case. The proposed markup would look something like this: <picture> <source src='small.jpg'> <source src='large.jpg' media='(min-width: 600px)'> <img src='small.jpg' alt='fallback'> </picture> The proposal has been in development since last year, though it was only decided in October 2012 to postpone work to get it into the official HTML5 specification. The proposal is a ""living document"" with frequent updates and changes. Because no browsers can render the proposed new picture element, solutions are cropping up to solve the problem with JavaScript. See, for example, the jQuery Picture script. In that case you wrap a fallback image in a