
This week’s best gaming deals: SNES 3DS XL, Quake Champions, GeForce 1080, and more

Posted on August 15, 2018 in Microsoft News

It’s that time of the week once more – time to take a big ol’ look at this week’s best gaming deals and see what you’ll end up spending your rent money on.

We’ve got deals that’ll work in the UK, deals that’ll work in the US and some deals that will work in both the UK and US, as well as presumably many other places. Let’s get started.

Giveaway!

It’s time once more to offer up the chance for you folks to get your hands on some free games. This time around, GamesPlanet has got together with Jelly Deals to offer up a selection of ten PC titles that you can enter to win. Entry closes at 11:59:59pm on Monday, August 13, so get your entries in while you can.

Win one of ten games from GamesPlanet

UK & US Deals

Summer is truly here when online retailers all showcase their summer sale ranges and digital-only newcomers Voidu are no exception. You can take an extra 18% off your choice of already-discounted game when you enter SUMMER18 at checkout, too.

Summer sale at Voidu (use code SUMMER18) live now

There’s a Devolver Digital sale going on right this very moment at Green Man Gaming, which means everything from Hotline Miami to Marc Ecko’s Getting Up is getting discounts of up to 80%. Get ‘em while you can.

Devolver Digital Sale from Green Man Gaming

Fanatical’s Summer Sale range is now live, offering up a wide variety of PC digital discounts, complete with a set of 24-hour deals that will run through the weekend and a voucher that’ll get you an extra 10% off your purchase. Just enter SUMMER10 at checkout.

Fanatical Summer Sale (use code SUMMER10) live now

Humble is having itself a QuakeCon sale to celebrate, well, QuakeCon, shockingly enough. Until Monday, you can get yourself up to 75% off a range of Bethesda titles, including Quake Champions itself for under £14 / $18.

Up to 75% off with the QuakeCon sale from Humble

It’s hard to argue with a digital bundle that not only pairs two of the best first-person shooters in recent years, but offers them to you at a price that’s cheaper than buying one of the titles individually. Get Doom and Wolfenstein 2 for under £19 / $27 right now, via Xbox Live.

Doom and Wolfenstein 2 bundle on Xbox One for £18.48 / $26.40 from Xbox Live

Now that we’re into August, it’s time for another fresh set of games up for grabs with a Humble Monthly membership. This time around, you can spend £10 / $12 and get instant access to Sniper Elite 4, Tales of Berseria and Staxel. Then, once the month finishes up, you’ll get a stack of other games, too.

Sniper Elite 4, Tales of Berseria and Staxel OR Rise of the Tomb Raider for £10 / $12 from Humble Monthly

UK Deals

Given the release of the Switch and all the mania that’s ensued since then, you’d be forgiven for overlooking Metroid: Samus Returns, the 3DS-exclusive remake of Metroid 2, when it originally launched. If you happened to sleep on a purchase, though, you can grab a copy right now for only £23.

Metroid: Samus Returns on 3DS for £22.99 from Amazon UK

It’s a bit of an odd pairing but Box is currently offering up a discounted ASUS 15.6-inch gaming laptop – equipped with a GeForce 1060 6GB graphics card – along with a free backpack. All for £749.97 while stock lasts.

ASUS 15.6-inch gaming laptop with GTX 1060 6GB and free backpack for £749.97 from Box

If you like driving games and dislike having lots of money in your bank account, you can invest in this Logitech G920 Driving Force Racing Wheel for PC and Xbox One, currently discounted down to £173 from its original £299.

Logitech G920 Driving Force Racing Wheel for PC and Xbox One for £172.80 from Amazon UK

Graphics card prices continue to become more and more reasonable, and it’s nice to see that, every now and then, you can still get yourself a good deal. Currently, this MSI-branded GeForce 1080 8GB card is down to £445 at Ebuyer.

MSI GeForce GTX 1080 8GB graphics card for £444.99 from Ebuyer

Need some quick storage space or a dedicated drive for your computer to boot Windows from? You can get a 240GB Kingston SSD right now for under £40, or double the capacity without doubling the cost and get a 480GB drive for £68.

Kingston 240GB SSD for £39.95 from Amazon UK
Kingston 480GB SSD for £67.99 from Amazon UK

The Switch is fast becoming the easiest way to play time-consuming, story-rich JRPGs, and one such title, Xenoblade Chronicles 2, is now available for just shy of £30 over at Argos, if that’s your kind of thing.

Xenoblade Chronicles 2 on Nintendo Switch for £29.99 from Argos

Remember the dice that showed up in The Last Jedi and then Solo, months later? The ones that are apparently an important part of the Millennium Falcon’s overall style and aesthetic, enough to become an actual plot point in both movies? You can buy a set now, exclusively at Zavvi, for £15, if you like.

Han Solo replica dice for £14.99 from Zavvi

US Deals

Right now, you can head to Amazon and pick up an SNES-themed New Nintendo 3DS XL along with a digital copy of Super Mario Kart, all for $149.99. This deal originally appeared as a Prime Day exclusive but is now available once more, for a limited time.

New Nintendo 3DS XL SNES Edition with Super Mario Kart for $149.99 from Amazon US

Immense JRPG and Miyazaki-like Ni No Kuni 2’s Premium Edition is down to $40.74 this week in one of the game’s rare discounts since launch. Grab it while you can and prepare to lose dozens of hours of your life once it arrives.

Ni No Kuni 2 Premium Edition on PS4 for $40.74 from Amazon US

Whether you want to concentrate on thwarting an increasingly militant cult in America’s midwest or you just want to run around and have fun with your bear sidekick, you can grab Far Cry 5 on consoles for $35 this week.

Far Cry 5 on PS4 for $34.99 from Amazon US
Far Cry 5 on Xbox One for $34.99 from Amazon US

Finally, many long months since its initial announcement, The Art of Metal Gear Solid 1-4 is available to buy right here and now. It’s also discounted to $47.99 for a limited time before the price is set to rise back up to its $80 RRP. If you’re a fan of the series, you may want to take a look while you can save some cash.

The Art of Metal Gear Solid 1-4 (hardcover) for $47.99 from Amazon US

If you’re into giant hardcover compendium-type books that take deep dives into the history and making of classic games – and you have a penchant for JRPGs – you might want to pick up the Final Fantasy Ultimania Archive Volume 1 while it’s discounted to just under $28 this week.

Final Fantasy Ultimania Archive Volume 1 for $27.78 from Amazon US

Set for release sometime in October this year, the Switch port of the tragically under-represented DS action-RPG The World Ends With You is currently discounted by an extra $10 ahead of launch, over at Amazon US.

The World Ends with You on Nintendo Switch for $49.99 from Amazon US

With that, we’re done for another week. Keep in mind that deals, prices and availability can change at the drop of a hat, so apologies if you miss out on something you wanted. I’ll be over at Jelly Deals, scouring the world wide web for more deals. Feel free to visit, or follow us on Twitter and give us a like on Facebook.

Did you know that Jelly Deals has a newsletter? It lets us bring the best deals directly to you each day. Subscribe here, if that seems like your kind of thing.

The post This week’s best gaming deals: SNES 3DS XL, Quake Champions, GeForce 1080, and more appeared first on VG247.

Read more: vg247.com

Deploying Kubernetes on Public Clouds is hard – or is it?

Posted on August 15, 2018 in Microsoft News

Automate your Kubernetes deployments on AWS, Azure, and Google

Recently, there’s been talk about how Kubernetes has become hard to deploy and run on virtual substrates such as those offered by the public clouds. Indeed, the cloud-specific quirks around infrastructure provisioning, including storage, networking assets such as load balancers, and overall access control (IAM), differ from one cloud provider to another. It is safe to assume that they also differ between your on-prem IaaS implementation or virtualized infrastructure and the public cloud APIs.

With all the public Container-as-a-Service (CaaS) offerings available to you, why would you deploy Kubernetes to a generic IaaS substrate anyway? There are many reasons for doing so.

You may…

…require a specific version of Kubernetes that is not available through one of the CaaS services
…have to replicate the on-premises reference architecture exactly
…need full control over the Kubernetes master server
…want to test new configurations of different Kubernetes versions

Whatever your reasons, in order to make this experience easier, our awesome engineers at Canonical have been working hard on an abstraction layer for the most common API calls in the majority of the popular public cloud options such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure. Others are on the roadmap and will be delivered later this year.

Currently, the following API integrations are supported:

Service           Amazon Web Services (AWS)       Google Cloud Platform (GCP)   Microsoft Azure
Load balancing    Elastic Load Balancing (ELB)    GCE Load Balancer             Azure Load Balancer
Block storage     Elastic Block Store (EBS)       GCE Persistent Disk           Azure Disk Storage

This abstraction layer manifests itself as an overlay bundle on the existing CDK bundle, and connects Kubernetes-native concepts, such as the load balancer, to the public cloud-specific ones, such as AWS ELB. To deploy the Canonical Distribution of Kubernetes (CDK) on a public cloud, all you need to do is add the integrator charm to the existing bundle. Now, when running a command such as

$ kubectl expose service web --port=80 --type=LoadBalancer

the public cloud-native API will be used to create that Load Balancer.
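To make the mapping concrete, here is a rough sketch (as a Python dict, with field names following the Kubernetes v1 Service schema) of the Service object that command asks the API server for; the integrator charm’s job is to satisfy the LoadBalancer type with the cloud’s own primitive:

```python
# Rough sketch of the Service object that `kubectl expose ... --type=LoadBalancer`
# creates. With the integrator charm related, the cloud controller sees
# spec.type == "LoadBalancer" and provisions an AWS ELB (or the GCP/Azure
# equivalent) to satisfy it.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web"},
    "spec": {
        "type": "LoadBalancer",       # triggers cloud-specific LB provisioning
        "selector": {"app": "web"},   # pods the balancer routes traffic to
        "ports": [{"port": 80, "targetPort": 80, "protocol": "TCP"}],
    },
}

print(service["spec"]["type"])  # -> LoadBalancer
```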

Using the Juju-as-a-Service (JAAS) SaaS Platform

JAAS provides immediate access to a Juju GUI canvas and allows for quick and simple composition of Juju models based on ready-to-run bundles and charms. CDK is available as a juju bundle, and can be added to the JAAS canvas by clicking the “+” button and selecting the production-grade Kubernetes option. Add your credentials to the aws-integrator charm configuration so it knows how to interact with AWS.

For example, in order to provision this integration on top of Amazon Web Services (AWS), simply add the CDK bundle to your JAAS canvas, click the “+” and search for the aws-integrator charm, then add it to your model.

Add relations to both the kubernetes-master and the kubernetes-worker units, and click “deploy”. You will be asked to enter your credentials for AWS, will optionally be able to import your SSH keys into the deployed machines, and JAAS will take care of the rest for you.

Using the command line

If the command line is more appealing to you or if the deployment of a production-grade Kubernetes cluster is part of your CI/CD pipeline, you can use conjure-up either in guided or in headless mode.

To deploy the Canonical Distribution of Kubernetes (CDK) with conjure-up, enter the following at your shell prompt and follow the steps outlined by the install wizard:

$ conjure-up canonical-kubernetes

You can also check out our tutorial for more in-depth usage instructions for conjure-up, as well as our online documentation.

Integrating conjure-up with your CI/CD pipeline

Another mode to use conjure-up is headless mode. You can trigger this by submitting the destination cloud and region on the command line like so:

$ conjure-up canonical-kubernetes google/us-east1

There are more options available, for example, offloading the juju controller instantiation to JAAS, specifying an existing model you’ve deployed in a different manner, and so on. Review the conjure-up documentation to create many other repeatable deployment scenarios.

Try it today:

$ sudo snap install microk8s --beta --classic
$ microk8s.enable dns dashboard

or find out more at https://microk8s.io

Summary

Deploying the Canonical Distribution of Kubernetes to a public cloud is easy. You can deploy using conjure-up in either headless or guided mode, or you can use the Canonical Juju-as-a-Service (JAAS) web interface. Deploying CDK to the public cloud typically takes less than 20 minutes and is easily integrated into your CI pipeline.

CDK is a complete, highly available and resilient reference architecture for production Kubernetes deployments, offered by Canonical with business-hours, 24×7 and managed-service levels of support. Contact [email protected] for more information.

Have you tried microk8s?

If you develop software designed to run on Kubernetes, the microk8s snap provides the easiest way to get a fully conformant local Kubernetes up and running in under 30 seconds on your laptop or virtual machine for test and software development purposes.

The post Deploying Kubernetes on Public Clouds is hard – or is it? appeared first on Ubuntu Blog.

Read more: insights.ubuntu.com

Angular 5 and ASP.NET Core

Posted on August 15, 2018 in Microsoft News

Microsoft and Google have worked together since Angular 2, rendering ASP.NET Web Forms and MVC Razor obsolete. Nevertheless, while ASP.NET’s front-end tools may be lacking, it is still a great back-end framework.

In this article, Toptal Freelance Angular Developer Pablo Albella teaches us how to create the best architecture for both these worlds.

Read more: toptal.com

Office 365’s design undergoes an overhaul

Posted on August 15, 2018 in Microsoft News


Microsoft recently announced that Office 365 apps, including Excel, Outlook, PowerPoint, and Word, will be going through a design overhaul to boost subscribers’ productivity. New features will roll out over the next few months. Check out what they have in store.

Simplified ribbon

The biggest update is to the ribbon, the command bar at the top of the window. The new design has a simpler, cleaner look and gives users the chance to customize the tools they work with most, simply by pinning apps or files to the Windows taskbar. Even though this new ribbon is designed with simplicity in mind, users who don’t find it helpful can still revert to the regular three-line view.

Some users may already be using this new ribbon in the online version of Word, while Outlook for Windows will receive it sometime this month. However, Microsoft disclosed that they aren’t yet ready to roll it out to PowerPoint, Word, and Excel for Windows.

Improved search option

One of the major changes is to the search option in Microsoft Office apps. The developers improved the search experience using Microsoft Graph, so users now see search recommendations when they move their cursor to the search box. Some have already seen this update take effect, but it won’t be available for Outlook on the web until August.

Better colors and icons

To make the overall design more aesthetically pleasing, the colors and icons of every app have been revamped, too. Microsoft wanted a more modern look, crisp and clean no matter the size of the user’s screen, which is why it employed scalable graphics. It first debuted on Word before appearing on Excel, PowerPoint, and Word for Windows last month. As for Outlook for Windows and Mac, users can expect the update later this summer.

Office 365 is constantly evolving to benefit subscribers. And to make things even more interesting, users will be chosen at random over the next several months to receive the updates, and Microsoft will gather their reviews to make further improvements. Co-creating new features with customers is something Microsoft truly believes in, so this isn’t simply a social media tactic.

So as you hang tight for these coming changes, consider increasing office collaboration by migrating your files to the cloud. Call us today to get started!

Published with permission from TechAdvisory.org. Source.

The post Office 365’s design undergoes an overhaul appeared first on Manhattan Tech Support.

Read more: manhattantechsupport.com

Rethinking the way you build software with serverless

Posted on August 15, 2018 in Microsoft News

The way software is built is constantly changing to meet the ongoing pressure of getting to market faster and keeping up with the competition. The software development industry has gone from waterfall to Agile, from Agile to DevOps, from DevOps to DevSecOps, and from monolithic applications to microservices and containers. Today, a new approach is entering the arena and shifting the paradigm yet again: serverless aims to capitalize on the need for velocity by taking the operational work out.

“Serverless has changed the game on the go-to-market aspect and has compressed out a lot of the steps that people never wanted to do in the first place and now don’t really have to do,” Tim Wagner, general manager for AWS Lambda and Amazon API Gateway, said in an interview with SD Times.

Amazon describes serverless as a way to “build and run applications and services without thinking about servers. Serverless applications don’t require you to provision, scale and manage any servers. You can build [serverless solutions] for nearly any type of application or back-end service, and everything required to run and scale your application with high availability is handled for you,” the company wrote on its website.

The Cloud Native Computing Foundation (CNCF) and its Serverless Working Group define serverless as “the concept of building and running applications that do not require server management. It describes a finer-grained deployment model where applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled and billed in response to the exact demand needed at the moment.”

Despite its name, the CNCF stated that serverless doesn’t mean developers no longer need servers to host and run code, nor that operations teams are no longer necessary. “Rather, it refers to the idea that consumers of serverless computing no longer need to spend time and resources on server provisioning, maintenance, updates, scaling, and capacity planning. Instead, all of these tasks and capabilities are handled by a serverless platform and are completely abstracted away from the developers and IT/operations teams,” the CNCF wrote. This lets development teams concentrate on their code and application business logic, and operations engineers focus on more critical business tasks.
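As a deliberately minimal sketch of that deployment model, a “function” in this sense is just a handler the platform invokes per event, scaling instances with demand and billing per invocation. In Python, for an AWS Lambda-style runtime, it might look like the following (the event shape is an illustrative API-gateway-style payload, not anything from the article):

```python
import json

# Minimal Lambda-style handler: the platform calls this once per event,
# scales instances with demand, and bills only for the invocations.
def handler(event, context=None):
    name = event.get("queryStringParameters", {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation for testing -- no servers involved on the developer side.
resp = handler({"queryStringParameters": {"name": "serverless"}})
print(resp["body"])  # -> {"message": "hello, serverless"}
```

The unit of deployment is the handler itself; everything around it (fleet, scaling, patching) is the platform’s problem, which is exactly the split the CNCF definition describes.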

Wagner explained this is a major benefit of serverless because most companies aren’t in the business of managing or provisioning servers. By being able to abstract the operational tasks, capacity planning, security patching and monitoring away, businesses can focus on providing value that matters to the customers. However, Wagner said that while serverless certainly eases up operational tasks, it doesn’t take operational teams out of the equation entirely. Applications and application logic still require monitoring and observability. “The serverless fleet portion goes away and that is the part that frankly was never a joy for the operation team or DevOps team to deal with. Now they get to focus their activities on business logic, the piece that actually matters to the company,” he said.

How to successfully transition to serverless

One of the first things you hear about when it comes to serverless is the cost savings. Serverless provides reduced operational costs and reduced development and scaling costs because you can outsource work and only pay for the compute you need.

“It allows applications to be built with much lower cost and because of that, enterprises are able to make and spend more time getting the applications they want. They can devote more time to the business value and the user experience than they were traditionally able to in the past,” said Mike Salinger, senior director of engineering for the application development software company Progress.

However, Nate Taggart, CEO of Stackery, the serverless solution provider for teams, said the cost-saving benefits are a bit of a red herring. The main benefit of serverless is velocity.

“Every engineering team in the world is looking for ways to increase the speed in which they can create and release business value,” Taggart said.

Velocity is a major benefit of serverless, but achieving speed becomes difficult when you have multiple functions and try to transition a large monolithic, legacy application to serverless. Serverless, for the most part, has a low barrier for entry. It is really easy for a single developer to get one function up and running, according to Taggart, but it becomes more difficult when you try to use serverless as part of a team or professional setting.

To successfully deploy serverless across an application, Taggart explained, teams need to utilize the microservices pattern. Microservices is an ongoing trend organizations have been leveraging to break their giant monolithic apps out into different services. “You can’t just take an entire monolithic application and lift and shift to serverless. It is not interchangeable. If you have a big monolithic application, chances are you are using VMs and containers, so transitioning to serverless becomes a lot trickier. We see microservices as one of the stepping stones into serverless,” he said.

When transitioning a monolithic application to serverless, Amazon’s Wagner suggested doing it in pieces. An entire application doesn’t have to move to serverless. Take the pieces that would benefit from serverless the most and transition those bits to optimize on cost and business results, he explained. According to Wagner, most enterprises already have systems that are hybrid at some level, so instead of having to decide between serverless, containers and microservices, you can combine the compute paradigms to your benefit.

In addition, professional engineering teams moving to serverless need to provide a consistent and reliable environment. In order to do that, Taggart said organizations need to put company-wide standards in place.

“As an organization, you want to ensure that whoever modifies or ships the application does so in a way that is universal so that you can increase reliability and avoid the ‘it worked on my laptop’ problem. When an individual developer is shipping a serverless application, there’s a sort of default consistency,” he said. “When teams are working on serverless applications, and you have more than one developer involved, consistency and standardization become extremely important.”

At a basic level, consistency and reliability are achieved by having a centralized build process, standard instrumentation, a universal method for rolling back apps, and visibility into the architecture and shared dependencies. More advanced methods include having centrally managed security keys, access roles and policies, and deployment environments, Taggart explained.

Amazon’s Wagner added that it is very important to limit the people who can call functions, and limit the rights and access capabilities to ensure the security of applications.
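A sketch of what limiting those rights can look like in practice, as a hypothetical IAM-style policy scoped to a single function (the ARNs, table and function names are placeholders for illustration):

```python
# Hypothetical least-privilege policy for one function: it may read one
# database table and write its own log streams -- nothing else. The ARNs
# below are placeholders, not real resources.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
        },
        {
            "Effect": "Allow",
            "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
            "Resource": "arn:aws:logs:us-east-1:123456789012:log-group:/aws/lambda/get-order:*",
        },
    ],
}

# Anything not explicitly allowed is denied, so a compromised function's
# blast radius stays small.
actions = [a for s in policy["Statement"] for a in s["Action"]]
print(actions)
```

Each function getting its own narrowly scoped role, rather than a shared account-wide one, is the granularity Wagner is pointing at.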

According to Progress’ Salinger, a best practice for transitioning applications to serverless is working in a way where your application is stateless. “Stateless applications are done in such a way that your components can be scaled up and down at any time. You have to make sure your application isn’t relying on a specific state to occur,” he said.
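A minimal Python sketch of the difference Salinger describes (the plain dict here stands in for an external store such as Redis or DynamoDB; all names are illustrative):

```python
# Anti-pattern for serverless: state kept in the process is lost (or
# duplicated) as instances are created and destroyed by the platform.
class StatefulCounter:
    def __init__(self):
        self.count = 0          # vanishes whenever this instance is recycled
    def hit(self):
        self.count += 1
        return self.count

# Stateless version: the handler owns no state; every request round-trips to
# an external store, so any instance can serve any request and the platform
# is free to scale instances up and down at any time.
external_store = {"count": 0}   # stand-in for Redis/DynamoDB/etc.

def hit(store):
    store["count"] += 1
    return store["count"]

print(hit(external_store), hit(external_store))  # -> 1 2
```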

Another design principle is to develop your business logic and user experience first. A common pitfall is that developers think about building a serverless application instead of thinking about building your app and running a function in a way that it will scale out easily, Salinger noted.

“It is all about focusing on the user experience and the value of the application, and not having to worry about all the side stuff that is repeatable and less valuable for the developer and for their app,” Salinger said.

Solving for serverless security

Serverless is still an “immature” technology, which means that serverless security is even more immature, according to Guy Podjarny, CEO of the open-source security company Snyk.

“The platforms themselves, such as Lambda and Azure Functions, are very secure, but both the tooling and best practices for securing the serverless applications themselves are lacking and poorly adopted,” Podjarny said.

While serverless doesn’t radically change security, some things become inherently difficult, according to Hillel Solow, CTO of the serverless solution provider Protego Labs. The top weaknesses of serverless include unnecessary permissions, vulnerable code, and wrong configurations, according to Solow.

In addition, Red Hat’s senior director of product management Rich Sharples said old application security risks become new again with serverless. Those risks include function event data injection, broken authentication, insecure serverless deployment configuration and inadequate function monitoring and logging.

Serverless security isn’t all complicated, though, Solow explained. For instance, serverless requires teams to turn over ownership of the platform, operating system and runtime to a cloud provider such as Amazon, Microsoft or Google. “The cloud providers are almost always going to do a better job at patching and securing the service, so you don’t have to worry about your team dealing with the things,” he said.

The challenges arise when teams start thinking about how they are going to make sure their application does only what it is supposed to do. Solow explained where you put security and how you put security in place has to change.

In a recent report from Protego Labs, the company found 98 percent of serverless functions are at risk and 16 percent are considered serious. “When we analyze functions, we assign a risk score to each function. This is based on the posture weaknesses discovered, and factors in not only the nature of the weakness, but also the context within which it occurs,” explained Solow. “After scanning tens of thousands of functions in live applications, we found that most serverless applications are simply not being deployed as securely as they need to be to minimize risks.”

According to Podjarny, serverless shuffles security priorities and splits applications into many tiny pieces. “Threats such as unpatched servers and denial of service attacks are practically eliminated as they move to the platform, greatly improving the security posture out of the gate. This reality shifts attacker attention from the servers to the application, and so all aspects of application security increase in importance,” he said. “Each piece creates an attack surface that needs securing, creating a hundred times more opportunities for a weak link in the chain. Furthermore, now that the app is so fragmented, it’s hard to follow app-wide activities as they bounce from function to function, opening an opportunity for security gaps in the cross-function interaction.”

Red Hat’s Sharples added that security teams should think about data in a serverless environment, think about least-privilege controls and fine-grained authorization, practice good software hygiene, and remember that data access is still their responsibility.

To successfully address the serverless security pains, Podjarny suggested good application security practices should be owned and operated by the development team and should be accompanied by heavy automation. In addition, Protego Labs’ Solow suggested embracing a more serverless model for security, which uses security at the places where your resources are.

“The good news is these are all mitigable issues,” said Solow. “Serverless applications enable you to configure security permissions on individual functions. This allows you to achieve more granular control than with traditional applications, significantly mitigating the risk if an attacker is able to get access. Serverless applications require far more policy decisions to be made optimally, which can be challenging without the right tools, but if done accurately, these decisions can make serverless applications far more secure than their non-serverless analogs.”

Other security best practices Solow suggests include:

Mapping your app to see the complete picture and understand the potential risks
Applying perimeter security at the function level
Crafting minimal roles for each function
Securing application dependencies
Staying vigilant against bad code by applying code reviews and monitoring code and configuration
Adding tests for service configuration to CI/CD
Observing the flow of information to ensure it is going to the correct places
Mitigating for Denial-of-Service and Denial-of-Wallet, where hackers can attack your app by “overwhelming” it, causing it to rack up expenses
Considering strategies that limit the lifetime of a function instance
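As an illustration of the “tests for service configuration in CI/CD” item above, a hypothetical check over a function’s deployment config might look like the following (the config shape is invented for the example; in practice you would read it from your deployment template):

```python
# Sketch of a CI gate for function configuration. The config dict below is
# a hypothetical shape, not any particular provider's format.
function_config = {
    "name": "process-upload",
    "timeout_seconds": 30,
    "role": "process-upload-role",   # per-function role, not a shared admin role
    "env": {"BUCKET": "uploads"},
}

def check_config(cfg):
    problems = []
    if cfg["timeout_seconds"] > 60:
        # short timeouts bound Denial-of-Wallet exposure per invocation
        problems.append("timeout too generous")
    if cfg["role"] in ("admin", "root"):
        problems.append("function must not run with a shared admin role")
    if any("SECRET" in key for key in cfg["env"]):
        problems.append("secrets belong in a secrets manager, not plain env vars")
    return problems

print(check_config(function_config))  # -> []
```

Run as part of the pipeline, a non-empty result fails the build before a misconfigured function ever deploys.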

The top use cases for serverless

According to the CNCF, there are 10 top use cases for serverless technology:

Multimedia processing: The implementation of functions that execute a transformational process in response to a file upload
Database changes or change data capture: auditing or ensuring changes meet quality standards
IoT sensor input messages: The ability to respond to messages and scale in response
Stream processing at scale: processing data within a potentially infinite stream of messages
Chat bots: scaling automatically for peak demands
Batch jobs / scheduled tasks: Jobs that require intense parallel computation, IO or network access
HTTP REST APIs and web apps: traditional request and response workloads
Mobile backends: ability to build on the REST API backend workload above the BaaS APIs
Business logic: The orchestration of microservice workloads that execute a series of steps
Continuous integration pipeline: The ability to remove the need for pre-provisioned hosts
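The first of these use cases can be sketched as a function fired by a storage upload notification (the event fields mimic an S3-style notification; the transform itself is stubbed out, since the point here is the trigger shape, not image processing):

```python
# Sketch of the "multimedia processing" use case: a function the platform
# invokes once per upload notification. The event layout mimics an S3-style
# notification; the actual transform is stubbed out.
def on_upload(event):
    outputs = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # ... download the object, transform it (e.g. make a thumbnail),
        # and upload the result ...
        outputs.append(f"{bucket}/thumbnails/{key}")
    return outputs

event = {"Records": [{"s3": {"bucket": {"name": "photos"},
                             "object": {"key": "cat.jpg"}}}]}
print(on_upload(event))  # -> ['photos/thumbnails/cat.jpg']
```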

The three revolutions of serverless

According to a recent report from cloud computing company DigitalOcean, while serverless is gaining traction, a majority of developers still don’t have a clear understanding of what it is. Hillel Solow, CTO of the serverless solution provider Protego Labs, explained that the meaning of serverless can be confusing because it has three different core values: serverless infrastructure, serverless architecture and serverless operations.

Serverless infrastructure refers to how businesses consume and pay for cloud resources, Solow explained. “What are you renting from your cloud provider? This is about ‘scales to zero,’ ‘don’t pay for idle,’ ‘true auto-scaling,’ etc. The serverless infrastructure revolution proposes to stop leasing machines, and start paying for the actual consumption of resources,” he wrote in a post.

Serverless architecture looks at “how software is architected to enable horizontal scaling.” As part of this, Solow says there are key design principles:

Setting up serverless storage as file or data storage so that it can scale based on the application’s needs
Moving all application state to a small number of serverless storages and databases
Making sure compute is event-driven by external events like user input and API calls or internal events like time-based events or storage triggers
Organizing compute into stateless microservices that are responsible for different parts of the application logic
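The principles above can be sketched as small stateless handlers, each bound to the events it cares about (the event names and the tiny router here are illustrative, not any particular framework):

```python
# Illustrative event router: compute organized as small stateless handlers,
# each registered against the external or internal event that triggers it.
handlers = {}

def on(event_type):
    def register(fn):
        handlers[event_type] = fn
        return fn
    return register

@on("user.signup")          # external event: user input via an API call
def send_welcome(event):
    return f"welcome {event['user']}"

@on("timer.nightly")        # internal event: time-based trigger
def rebuild_report(event):
    return "report rebuilt"

def dispatch(event):
    # every event, whatever its origin, routes to the matching handler
    return handlers[event["type"]](event)

print(dispatch({"type": "user.signup", "user": "ada"}))  # -> welcome ada
```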

Serverless operations defines how you deploy and operate software. According to Solow, operations specifically looks at how cloud-native apps are orchestrated, deployed and monitored. “Cloud native means the cloud platform is the new operating system,” he said. “You are writing your application to run on this machine called AWS. Just as most developers don’t give much thought to the exact underlying processor architecture, and how many hyper-threaded cores they run on, when you go cloud native, you really want to stop thinking about the machines and you want to start thinking about the services. That’s how you write software for Android or Windows, and that’s how you should be writing software for the cloud.”

In addition, serverless is often referred to as Functions-as-a-Service or FaaS because it is an easier way to think about it, according to Red Hat’s senior director of product management Rich Sharples. FaaS is actually a subset of the broader term serverless, but it is an important part because it is “the glue that wires all these services together,” he explained.

“FaaS is a programming model that really speaks to having small granularity of deployable units, and the ability that comes from being able to separate and segregate that out as well as separate it from some of the operational pieces,” said Tim Wagner, general manager for AWS Lambda and Amazon API Gateway. “When I think of serverless, I usually mean a functions model, which is operated by a public cloud vendor, and offers the perception of unbounded amounts of scale and automated management.”

Serverless tools and frameworks

Apache OpenWhisk: Apache OpenWhisk is an open-source, serverless cloud platform designed to execute functions in response to events. It is currently undergoing incubation at the Apache Software Foundation.

AWS Lambda: AWS Lambda is one of the earliest and most popular serverless computing platforms on the market. Features include the ability to extend other AWS services with custom logic, build custom back-end services, and use any third-party library. In addition, Amazon explains that developers can run code for any type of app or backend service with zero administration.

Azure Functions: Developed by Microsoft, Azure Functions aims to provide developers with an event-driven, serverless compute experience. It features the ability to manage apps instead of infrastructure, is optimized for business logic, and enables developers to create functions in the programming language of their choice.

CloudEvents: CloudEvents is an ongoing effort to develop a specification for describing event data in a common way. “The lack of a common way of describing events means developers must constantly re-learn how to receive events. This also limits the potential for libraries, tooling and infrastructure to aide the delivery of event data across environments, like SDKs, event routers or tracing systems. The portability and productivity we can achieve from event data is hindered overall,” according to the website. The end goal is to eventually offer the specification to the Cloud Native Computing Foundation.
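As a rough sketch of what such a common envelope looks like, the dictionary below follows the CloudEvents required context attributes (specversion, id, source, type); the source URI, event type, and payload are made-up examples:

```python
import json

# A minimal event using the CloudEvents attribute names:
# 'specversion', 'id', 'source', and 'type' are required by the
# specification; 'data' carries the (hypothetical) event payload.
event = {
    "specversion": "1.0",
    "id": "1234-5678",
    "source": "/orders/service",          # illustrative URI-reference
    "type": "com.example.order.created",  # illustrative reverse-DNS type
    "data": {"orderId": 42},
}

# Serialize for transport between producer and consumer.
serialized = json.dumps(event)
```

Because every producer uses the same attribute names, a router or SDK can dispatch on `event["type"]` without knowing anything else about where the event came from, which is the portability goal the specification describes.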

Cloud Functions: Cloud Functions is an event-driven serverless computing solution from Google Cloud. Key features include no server management, automatic scaling, running code in response to events, and the ability to connect and extend cloud services.

Fission: Fission is an open-source Functions-as-a-Service serverless framework for Kubernetes designed by Platform 9, a hybrid cloud and container orchestration provider. Fission was built as an alternative to AWS Lambda. According to the company, Lambda constrains developers on the size of their deployment package, the amount of memory, and the number of concurrent function executions. Fission is designed to free teams from cloud vendor lock-in. Because it runs on Kubernetes, Fission can run anywhere Kubernetes runs and removes some of the “software plumbing” that the use of containers creates. With Fission, developers don’t have to worry about building containers or managing Docker registries.

IBM Cloud Functions: IBM offers a polyglot Function-as-a-Service programming platform based on Apache OpenWhisk. It is designed to execute code on demand in a scalable serverless environment. Features include access to the OpenWhisk ecosystem, ability to accelerate application development, cognitive services, and pay for use.

Kinvey: Progress Kinvey is a serverless cloud platform for building apps for mobile, web and other digital channels. The platform enables developers to build apps without thinking about servers, so they can focus on the value of their app rather than the infrastructure, backend code, and scaling.

The post Rethinking the way you build software with serverless appeared first on SD Times.

Read more: sdtimes.com

Evaluating Options for Amazon’s HQ2 Using Stack Overflow Data

Posted on August 14, 2018 By In Microsoft News With no comments

Amazon is a technology behemoth, employing half a million people globally and hiring nearly 130,000 people in 2017. Amazon has been headquartered in Seattle since its early days in the 1990s, but in September 2017, the company announced a search for a secondary headquarters elsewhere in North America. Over 200 cities entered bids to be considered, and last month, Amazon announced a list of 20 finalists. What goes into this kind of choice? Amazon says it wants a city with more than one million residents, access to an airport, and decent commutes. Here at Stack Overflow, we can offer a different view on the question.

Software developers in different locations have different average profiles. For example, they use proportionally more or less of different languages and technologies. We have explored these themes on our blog before, whether it was comparing four large cities or digging into what mobile development looks like across the world. We can use our knowledge of what software developers are like across North America to say which of these finalist cities would be the best fit for Amazon. Amazon could choose a city similar to its current Seattle headquarters in terms of software developer tools and experience, or they could choose a city that is different if it wants to grow other parts of its software workforce.

Most similar cities overall

To start with, we can look for the cities most similar to Seattle among the 20 candidate locations. Seattle is an interesting city to examine in an analysis like this because while this metro area is indeed the home of Amazon, it is also the home of another technology giant. In this analysis, I used geolocation (based on IP addresses) of our users to associate them with the 20 finalist cities/regions. I examined how strongly the presence of Microsoft affects the conclusions we can draw by either including or excluding Redmond, WA and the regions right around it from the “Seattle” definition. It turns out it makes quite a difference! I can see arguments for and against including Redmond here, depending on one’s particular analytical purposes. My main focus here is to understand the Seattle tech ecosystem as it impacts Amazon, so for the rest of this analysis I am going to exclude Redmond and environs from the definition of Seattle.

Once we have users associated with the 20 finalist cities, we can calculate a similarity between each city and Seattle. By “similarity” here, I mean a cosine similarity based on the mean percent of traffic to the top 500 Stack Overflow tags in these cities.
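As an illustration of that similarity measure, here is a small cosine-similarity sketch over toy tag-traffic vectors (the numbers are invented, not Stack Overflow data):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length traffic vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy example: percent of traffic to three tags in two cities.
seattle = [0.30, 0.25, 0.45]
candidate = [0.28, 0.27, 0.45]
similarity = cosine_similarity(seattle, candidate)
```

A value near 1 means the two cities' developers split their attention across tags in nearly the same proportions; identical proportions give exactly 1, regardless of total traffic volume.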

This analysis uses registered users only, and uses their traffic in the past year. We find that registered and unregistered users have similar traffic patterns, but we can more easily identify registered users and have higher quality data for them. You can see exactly what kind of data we store for you as a user, as well as opt out of predictions.

We see in this plot that all of the options Amazon has identified as finalist cities are very similar to each other. If we added a city in Russia or India to this plot, we would see a significantly lower cosine similarity compared to these North American tech centers. Northern Virginia and Washington, DC are the most similar to Seattle in terms of the kinds of technologies that developers visit. Developers in Northern VA and Washington, DC visit a mix of technologies at proportions that are the closest to developers in Seattle (at least, the parts of Seattle that are not Redmond). There is another tier that is very close in similarity, and it includes Atlanta, Newark, Philadelphia, and Montgomery County. This is super interesting, but that isn’t all we can learn from this kind of data. We can use statistical analysis to explore more.

Understanding developers using principal component analysis

We can use a statistical technique called principal component analysis to answer these kinds of questions. Developers who come to Stack Overflow don’t visit tags in random combinations; the tags that any individual visits are related in ways that are connected to the kind of work that they do.

Let’s think of each Stack Overflow user as a point in a high-dimensional space with tags as the coordinates. Principal component analysis is a way to project these points (or users, in this case) onto a new, special coordinate system. In the new coordinate system, each coordinate, or principal component, is a weighted sum of tags/technologies. The first principal component has the most variance in users in its direction, the second principal component has the second most variance in users in its direction, and so forth.
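A sketch of that projection using plain NumPy (the user-by-tag matrix below is random placeholder data, not Stack Overflow traffic):

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder: 100 users x 10 tags, each row a user's visit proportions.
X = rng.random((100, 10))

# Center each tag (coordinate) at zero, then take the SVD:
# the rows of Vt are the principal components (weighted sums of tags),
# ordered by how much variance they explain.
X_centered = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

# Project each user onto the first two principal components.
scores = X_centered @ Vt[:2].T

# Fraction of total variance explained by each component.
explained = (S ** 2) / (S ** 2).sum()
```

Inspecting the largest weights in a row of `Vt` shows which tags move together along that component, which is exactly how the front-end-versus-Python contrast below is read off.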

This plot shows the first six components or dimensions from principal component decomposition of registered traffic from the last year to Stack Overflow questions. Notice the combinations of tags that appear together in these different components.

The first principal component, which explains the most variation in Stack Overflow users, contrasts users who visit a lot of front-end technologies (HTML, JavaScript, jQuery) with those who visit a lot of Python and/or low-level technologies like C++. When we look at all of our users, this spectrum from front-end to low-level and Python is what explains the most difference from one user to another.
The second principal component, which explains the second largest amount of variation in Stack Overflow users, is not a contrast between two kinds of things, but instead is focused on one family of technologies: the Microsoft ecosystem of C#, .NET, Visual Studio, and related technologies. The characteristic of developers that explains the second most difference is whether or not they use these Microsoft technologies.
The third principal component focuses on Android and iOS; this component measures to what extent a developer works building mobile apps.
The fourth principal component is another single family, focused on Java, Spring, and Maven.
The fifth principal component is back to a set of contrasts, and measures how much a developer works with C++ and C versus how much they work with SQL, databases, and perhaps some data handling with dataframes.
The sixth principal component returns to iOS development for Apple devices, but instead of being paired with Android as it was before, it is now contrasted with Java tags. This is a lower-rank principal component, so this difference explains less variation in users than the fourth principal component.

There are many principal components, each one less important than the one before in explaining differences between various users. This projection of traffic data into a new coordinate system allows us to draw conclusions about Amazon’s candidate city choices.


There is a lot of information in a plot like this, so let’s talk through some details. The labels on the x-axis and y-axis include what percent variation in the data is explained by each component. Each orange or blue point labeled with a city or region represents the aggregate, average user in that metro area, while the gray points represent real, individual users. The principal component decomposition was calculated using all registered users who visited at least 200 questions in the last year, but these plots show one out of every 10 users, for visual clarity.

The analysis in this blog post uses our total, global traffic (not just North America), so the first conclusion we can draw here is that the similarities among Amazon’s candidate cities are high compared to global variation in developer traffic. Compared to our traffic worldwide, these 20 locations are pretty similar to each other. All 20 North American cities are focused proportionally more on low-level languages and Python (more to the left), and compared to the worldwide distribution they use more Microsoft technologies (more up).

When I ran this analysis but included Redmond and the locations around the Microsoft campus in my definition of what Seattle is, Seattle had a higher contribution from this Microsoft-dominated principal component. Dallas, Columbus, and Indianapolis are furthest in the direction (up) on this plot that indicates more Microsoft technologies; these are cities that have proportionally more developers working with technologies like C#, .NET, and Visual Studio. Depending on how invested Amazon wants to be in the Microsoft tech stack, this might be attractive or a limitation.

What if Amazon wants to invest more in mobile development? (I know I have bought plenty of things on Amazon’s app on my phone.)

The candidates are even closer together in this plot, and far away from areas (up and left) that are associated with lots of mobile development. We find that mobile development happens a lot in countries outside of North America. If Amazon wants to choose a city with proportionally more mobile developers, good choices would be Los Angeles, New York, and Toronto.

What if Amazon wants to invest more in data science and machine learning? All of Amazon’s customers experience how they put data science to work, whether it is the recommendation engine or the natural language processing of the Amazon Echo.

This next plot moves us pretty far down the rank of principal components; notice that these dimensions each account for about 1.5% of variation among our users. Compared to the global distribution, all of Amazon’s candidate cities sit unusually far along these two components, toward negative values of PC17 and positive values of PC18. Let’s check out the technologies that contribute to these dimensions in these directions.

The negative side of principal component 17 involves Hadoop, Spark, Hive, and Scala while the positive side of principal component 18 focuses on R, ggplot2, and statistics. These two components measure how much users are involved in data engineering and data science, respectively, and all of Amazon’s candidate cities have relatively large values for these. If Amazon wants to choose a city with proportionally more developers experienced in these technologies, Raleigh and Columbus would be great choices. It is important to note that often we see statistical analysis technologies like R used proportionally more in cities with high academic, research, and grad student populations. Columbus and Raleigh both have healthy academic centers that are likely contributing here, but Amazon specifically listed proximity to major universities as something they are looking for, so maybe this is good!

Where should Amazon establish a second headquarters?

So after all this analysis, what can we say from analysis of Stack Overflow traffic about Amazon’s options for a second headquarters? If I were asked to offer insights into this choice, what would I recommend?

These large cities and metro areas in the United States are quite similar to each other, especially compared to worldwide variation, and it’s unlikely that any would be a truly bad choice.
The choices that are most similar overall to Seattle in terms of technology ecosystems are Northern VA and Washington, DC. If Amazon wants to go with a city where the developer population feels as familiar as possible, these would be the way to go.
If Amazon wants to choose a city with proportionally more mobile developers, Los Angeles, New York, and Toronto would be the best choices.
If Amazon wants to choose a city with proportionally more developers working in data science and machine learning, Raleigh or Columbus would be excellent choices.

At Stack Overflow, we’re able to explore these kinds of questions because we understand developers, technologies, and how these technologies are related to each other in complex ecosystems. We use this expertise to help companies understand, reach, engage with, and hire developers.

The post Evaluating Options for Amazon’s HQ2 Using Stack Overflow Data appeared first on Stack Overflow Blog.

Read more: stackoverflow.blog

CO, NM & WY Tech Sales | Lewan Technology (A Xerox Company)

Posted on August 14, 2018 By In Microsoft News With no comments


Lewan Technology is a subsidiary of Global Imaging Systems, a Xerox company, and is looking for professionals interested in CO, NM & WY Tech Sales.

At Lewan, we do not fill empty seats… We hire and develop future leaders!

Lewan Technology believes in building strong professional relationships from the inside out as the best way to serve our employees and customers alike. We share a passion for working together in helping our clients by managing their technology so they can focus on managing their business. If you value hard work and accountability, have a collaborative spirit, a passion for serving others, and want to contribute to our cause, call us to discuss an exciting career opportunity in office technology.

Lewan Technology is looking for professionals interested in CO, NM & WY Tech Sales.

Let’s talk benefits…

Our suite of benefits helps you keep a great work/life balance, which we consider one of the best reasons to work at Lewan!

Very competitive base salary.
Lucrative monthly commission structure, incentive and bonus programs.
Full benefit package including medical, dental and vision coverage.
Life and disability insurance plans.
401(k) with company match.
Opportunity to qualify for annual President’s Club trip and other incentive trips.
Continuing education and ongoing training opportunities to support skill growth and career advancement.

As a Lewan Outside Sales Account Executive you will be responsible for gaining new market share while retaining and growing current accounts. Strong people skills, organizational skills, reliability, professionalism and self-motivation are a must.

CLICK HERE to Apply >>

To be successful in this role you will need to demonstrate the following skills and expertise:

Strategic Planning & Business Development

Articulate and position Xerox products, services and solutions to key decision makers
Aggressively pursue competitive accounts; strive to differentiate Xerox from competitors
Leverage a variety of resources to prospect, research and gain new accounts
Stay informed of technology updates and maintain a technical understanding of hardware and solutions in your portfolio
Architect a strategic marketing plan that aligns sales initiatives with customer requirements
Interpret and analyze research and competitive intelligence to understand the market drivers and business opportunities
Craft complex proposals and business solutions with a high degree of confidence and strategic thought

Internal & External Customer Focus

Manage the entire sales cycle across customer accounts
Sustain sales activities: appointments, demos, proposals, cold calls, and database updates
Practice the 360 Selling Process by analyzing the customer’s business communication requirements and developing customized solutions
Serve as the first line of contact with customers, responsible for assisting in the creation and maintenance of accurate paperwork on each sale
Manage your territory by protecting and increasing a profitable revenue stream within current accounts
Participate in planned in-person account reviews

Analytical & Critical Thinking

Propose and close sales that achieve total revenue growth, profit and customer satisfaction plans
Interpret market data and financial reports to inform the overall sales plan
Consolidate and summarize performance data to shape ongoing business development strategies
Demonstrate proficiency in understanding key drivers within appropriate markets
Evaluate current state, customer satisfaction and completeness of strategy implementation, along with next steps in advancing the account
Review leads, pending orders and lease upgrades, developing action plans to progress each cycle
Meet forecasting objectives by keeping timely & accurate forecasts on account assignment

Outside Sales Account Executive Desired Competencies and Experience

BS/BA degree in business or other related field/Or equivalent work in sales experience.
Previous sales experience preferred, but not required. We provide training, mentoring and coaching to all of our new Sales Executives.
Proficiency using Microsoft Office (PowerPoint, Word, Excel and Outlook).
Excellent communication skills (oral, written and presentation).
Competitive drive and self-motivation to achieve.
Ability to work collaboratively and effectively in a team-oriented environment.
Ability to influence, negotiate and gain commitment at all organizational levels.
Demonstrated flexibility and adaptability; willingness to take risks and try new approaches.
Must have valid driver’s license, motor vehicle and auto insurance coverage.


We are a local office technology and managed service provider supporting Colorado, Wyoming, New Mexico and the Rocky Mountain Region since 1972. Our print management solutions and IT services teams deliver extraordinary expertise and offer a single source for our customers’ printing, document workflow, technology solutions and IT support needs. Our unique business model provides the benefits of both local decision making and the backing of Xerox, a Fortune 500 global technology leader.

Our team of sales, service and support staff make us great. We believe work life balance and a positive, fun and encouraging work environment are the keys to employee and company success. Lewan employees are passionate about providing a world class customer experience, challenging themselves and supporting each other.

Lewan Technology is an equal opportunity employer.

CLICK HERE to Apply >>

Don’t see the right job opening for you? Not job seeking quite yet? Join our Talent Network and stay connected with our recruiting team for future opportunities. If you are interested in different sales opportunities, read about other Xerox subsidiaries in other parts of the country.


The post CO, NM & WY Tech Sales | Lewan Technology (A Xerox Company) appeared first on Work It Daily.

Read more: workitdaily.com