<?xml version="1.0" encoding="utf-8" ?>
    <rss
      xmlns:dc="http://purl.org/dc/elements/1.1/"
      xmlns:content="http://purl.org/rss/1.0/modules/content/"
      xmlns:atom="http://www.w3.org/2005/Atom"
      version="2.0"
    >
      <channel>
        <title><![CDATA[Cloudscale News RSS Feed]]></title>
        <description>
          <![CDATA[The latest news about cloudscale and their services.]]>
        </description>
        <link>https://www.cloudscale.ch</link>
        <language>en</language>
        <lastBuildDate>Tue, 31 Mar 2026 00:00:00 GMT</lastBuildDate>
        <atom:link href="https://www.cloudscale.ch/rss-news-en.xml" rel="self" type="application/rss+xml" />
        
        <item>
<title><![CDATA[Volume Snapshots with CSI]]></title>
          <link>https://www.cloudscale.ch/en/news/2026/03/31/volume-snapshots-with-csi</link>
          <pubDate>Tue, 31 Mar 2026 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2026/03/31/volume-snapshots-with-csi</guid>
          <description>
            <![CDATA[<p>Snapshots are really handy: Create a snapshot of a volume so you can later revert to exactly that state or use it as a basis for creating a new volume. This now works directly from within Kubernetes – thanks to our CSI driver, which supports snapshots starting with version 4.0.0, enabling the use of tools like Velero for example.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-snapshots-with-csi.png"/><h3>Snapshots for persistent volumes in Kubernetes</h3>
<p>Persistent volumes are essential in many Kubernetes setups: they are used to store data that needs to be retained permanently and should not be tied to the lifespan of a pod. Using our CSI driver, it is possible to <strong>automatically provision volumes in our cloud infrastructure</strong> based on &quot;Persistent Volume Claims&quot; and always attach them to the virtual server where they are currently needed by the corresponding pod.</p>
<p>With the recently released <a href="https://github.com/cloudscale-ch/csi-cloudscale">CSI driver 4.0.0</a> (which includes an additional sidecar component), you can now manage snapshots of your volumes not only manually via our web-based cloud control panel or via API, but also directly from your Kubernetes setup. To do this, the CSI driver uses the standard Kubernetes VolumeSnapshot API and <strong>interacts with the cloudscale API to manage your snapshots</strong> exactly as your setup requires.</p>
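<p>For illustration, such a snapshot request is a small declarative manifest against the standard Kubernetes VolumeSnapshot API. A rough Python sketch that builds one as a dict (the VolumeSnapshotClass name used here is a placeholder; use the class defined by your CSI driver installation):</p>

```python
import json


def volume_snapshot_manifest(
    name: str,
    pvc_name: str,
    namespace: str = "default",
    snapshot_class: str = "csi-cloudscale-snapclass",  # placeholder name
) -> dict:
    """Build a snapshot.storage.k8s.io/v1 VolumeSnapshot manifest for a PVC."""
    return {
        "apiVersion": "snapshot.storage.k8s.io/v1",
        "kind": "VolumeSnapshot",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "volumeSnapshotClassName": snapshot_class,
            "source": {"persistentVolumeClaimName": pvc_name},
        },
    }


print(json.dumps(volume_snapshot_manifest("db-before-upgrade", "postgres-data"), indent=2))
```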
<br/>
<img src="https://static.cloudscale.ch/img/news-snapshots-with-csi-02f6e4a4e662.png" alt="The cloudscale CSI driver with snapshot support is available from github.com." caption="The cloudscale CSI driver with snapshot support is available from github.com."/>
<h3>Velero: one of countless use cases</h3>
<p>The snapshot support in our CSI driver now makes it even easier for you to create point-in-time copies of persistent volumes in a Kubernetes cluster and restore them should a subsequent operation fail. <strong>You can also create new volumes based on snapshots;</strong> for example, you can clone a production data set for a test system or mount the new volume to selectively access individual prior data points.</p>
<p>In our engineering blog, <a href="https://www.cloudscale.ch/en/engineering-blog/2026/03/31/snapshots-are-not-backups-disaster-recovery-for-k8s">Julian walks you through the process step by step</a>, providing all the necessary configurations, to show you <strong>how to use Velero and our snapshot feature to save the state of a persistent volume</strong> and restore it later. Of course, you can expand on this example as needed and adapt it to your specific use case.</p>
<h3>More than just details – please note</h3>
<p>Please keep in mind – especially with Velero – that the term &quot;backup&quot; can be used in different ways. At cloudscale, we consider volume snapshots to be ideal for serving as a safety net, enabling a quick and easy rollback to a previous state during database migrations or system upgrades. However, since snapshots are based on &quot;copy-on-write&quot; and are stored in the same storage cluster as their original volume, we do not consider them to be a &quot;backup&quot;. For optimal security, we recommend that you <strong>always keep a copy of your data at a different geographic location</strong> – and ideally on third-party infrastructure.</p>
<p>You will need Kubernetes version 1.28 or later, the Kubernetes Snapshot Controller, and the associated CRDs (which are already present in many setups). Otherwise, the basics are the same as what you are already familiar with from snapshots at cloudscale: <strong>Up to 10 snapshots can exist simultaneously per volume,</strong> and they are billed – down to the second for the time they exist – based on the volume&#x27;s size at the time the snapshot was created (at half the per-gigabyte price of a standard NVMe SSD or bulk volume).</p>
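<p>As an illustration of these billing rules, a small calculation; the per-gigabyte day price used here is a made-up placeholder, not an actual price:</p>

```python
def snapshot_cost(size_gb: float, lifetime_seconds: float,
                  volume_price_per_gb_day: float) -> float:
    """Cost of a snapshot: billed per second for the time it exists, based on
    the volume's size at creation, at half the volume's per-GB price."""
    per_gb_second = (volume_price_per_gb_day / 2) / (24 * 60 * 60)
    return size_gb * per_gb_second * lifetime_seconds


# A 100 GB snapshot kept for six hours, at a hypothetical CHF 0.003/GB/day:
print(round(snapshot_cost(100, 6 * 3600, 0.003), 4))  # → 0.0375
```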
<br/>
<p>At cloudscale, we provide the right tools and interfaces to help you manage your Kubernetes deployments. Our CSI driver with snapshot support has been available as a beta version for some time now, and the feedback has been consistently positive. With the release of version 4.0.0, we now recommend that all customers upgrade; this way, you too can <strong>take full advantage of our volume snapshots directly from your Kubernetes setup – using Velero, for example.</strong></p>
<p>Create snapshots using automatic release!<br/>
Your cloudscale team</p>]]></content:encoded>
        </item>
        <item>
<title><![CDATA[Guest Article: Sovereign Enterprise AI with Squirro]]></title>
          <link>https://www.cloudscale.ch/en/news/2026/03/19/guest-article-sovereign-enterprise-ai-with-squirro</link>
          <pubDate>Thu, 19 Mar 2026 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2026/03/19/guest-article-sovereign-enterprise-ai-with-squirro</guid>
          <description>
            <![CDATA[<p>Digital sovereignty doesn&#x27;t have to be a stumbling block for enterprise AI adoption. By pairing Squirro&#x27;s secure, privacy-enabled Enterprise Intelligence platform with cloudscale&#x27;s sovereign infrastructure, organizations get a unified solution that ensures data remains protected within a trusted local environment. The result is a path to innovation that keeps your data fully under your control.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-squirro-orchestrating-enterprise-ai.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-squirro-business-insights.png"/><p>By Matthias Gysi, Presales Engineer, Squirro</p>
<h3>Squirro: The AI platform for regulated industries</h3>
<p>At first glance, AI looks easy. Type a prompt into a browser, and you immediately get an answer. But when you bring it into the enterprise, you quickly realize that it isn&#x27;t that simple. So why are so many businesses hitting a wall as they try to scale experimental PoCs to production? Because most enterprise AI stacks fail to achieve the accuracy, security, and compliance required in real-world, high-stakes deployments.</p>
<p>The Squirro platform provides a <a href="https://squirro.com/generative-ai-security">secure foundation for generative AI</a>. It lets organizations safely delegate everyday knowledge work, such as writing on-brand content, summarizing dense legal documents, or comparing complex compliance data, while mitigating hallucinations through strict factual grounding. By transforming their disparate data sources into a single source of truth, Squirro helps teams uncover hidden insights and agentically automate entire enterprise workflows, transforming AI into a core business utility that meets the demands of highly regulated industries.</p>
<br/>
<img src="https://static.cloudscale.ch/img/news-squirro-orchestrating-enterprise-ai-1dfc23f67e11.png" alt="Orchestrating Enterprise AI with Squirro." caption="Orchestrating Enterprise AI with Squirro."/>
<h3>Countering shadow IT with enhanced RAG</h3>
<p>By now, we&#x27;ve all seen the value of GenAI in terms of efficiency gains. That&#x27;s why it&#x27;s hard to blame those who want to use the technology in their daily activities. But when employees share work-related data with non-approved AI services – so-called shadow IT – they expand the <a href="https://squirro.com/squirro-blog/what-is-genai-attack-surface">digital attack surface</a> and introduce massive security risks. Data breaches, compliance failures, and the exposure of proprietary IP to third-party model trainers are real threats that can cause lasting reputational damage.</p>
<p>The Squirro enterprise AI platform was built specifically to give organizations a secure, accurate, and permissions-enabled way to integrate GenAI into their operations. Because it uses retrieval augmented generation (RAG) to augment user prompts with data stored in an organization&#x27;s enterprise platforms, its performance is far beyond what an off-the-shelf large language model can deliver.</p>
<br/>
<img src="https://static.cloudscale.ch/img/news-squirro-business-insights-973b9a7b82c6.png" alt="Enabling tailored vertical solutions with the Squirro Enterprise GenAI Platform." caption="Enabling tailored vertical solutions with the Squirro Enterprise GenAI Platform."/>
<br/>
<p>But while standard RAG retrieves data from internal and external sources to ground an LLM&#x27;s outputs, Squirro&#x27;s enhanced RAG stack goes several steps further to enhance accuracy and <a href="https://squirro.com/squirro-blog/protecting-customer-data-genai">protect customer data</a>:</p>
<ul>
<li><strong>Knowledge Graphs</strong> provide deeper deterministic grounding to ensure the AI understands relationships between entities, not just word frequencies.</li>
<li><strong>Data Virtualization</strong> enables real-time integration without needing to constantly ingest fast-changing datasets.</li>
<li><strong>AI Guardrails</strong> ensure that GenAI outputs consistently adhere to corporate policy and regulatory requirements.</li>
<li>The <strong>Privacy Layer</strong> automatically masks and scrubs personally identifiable information (PII) before it ever reaches the LLM, enabling the use of state-of-the-art LLMs from US-based hosting providers without violating the GDPR.</li>
<li>The <strong>Agentic Framework</strong> helps users move beyond simple task acceleration to end-to-end workflow execution drawing on your proprietary data.</li>
</ul>
<p>This provides a compliant alternative to shadow IT, allowing users to leverage generative AI within a secure enterprise framework.</p>
<h3>Controlling your destiny with full data sovereignty</h3>
<p>Ask business leaders in the DACH region what is holding back AI adoption in their organization, and you&#x27;ll likely hear a common response: legal anxiety surrounding data privacy. And it goes beyond standard security vulnerabilities. The U.S. Cloud Act, enacted in 2018, creates a jurisdictional gray area by allowing US authorities to request data from any cloud service provider headquartered within its borders, regardless of where that data is physically stored. For the public sector, healthcare, and banking, this isn&#x27;t just a compliance hurdle; it&#x27;s a question of data ownership.</p>
<p>Squirro delivers total data sovereignty by giving organizations absolute control over where their data resides. Whether deployed in a Virtual Private Cloud (VPC) or fully on-premises, Squirro ensures sensitive information never leaves your perimeter. Pairing Squirro with cloudscale allows Swiss organizations to scale their AI operations without relying on providers subject to the U.S. Cloud Act.</p>
<h3>Scalable data access control management</h3>
<p>One of the most overlooked hurdles in Enterprise AI is permissions. If you build a GenAI application that taps into all your corporate data, how do you ensure that a junior analyst doesn&#x27;t accidentally gain access to the CEO&#x27;s salary or a confidential M&amp;A memo?</p>
<p>Managing access control lists (ACLs) becomes increasingly complex as organizations scale. Squirro handles this by assigning access-control metadata to each document chunk at the moment of ingestion. Permissions are embedded into the retrievable unit itself. When a user asks a question, the similarity search only returns authorized chunks. If you don&#x27;t have permission to see the document, the LLM doesn&#x27;t even know the document exists, preventing unauthorized exposure to third parties.</p>
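<p>Purely as an illustration of this idea (not Squirro&#x27;s actual code): chunks carry their ACL as metadata from ingestion, and retrieval filters on it before ranking, so unauthorized chunks never reach the LLM.</p>

```python
from dataclasses import dataclass


@dataclass
class Chunk:
    text: str
    allowed_groups: set[str]  # ACL metadata attached at ingestion time
    score: float              # similarity score, computed elsewhere


def retrieve(chunks: list[Chunk], user_groups: set[str], top_k: int = 3) -> list[Chunk]:
    """Only rank chunks whose ACL intersects the querying user's groups."""
    visible = [c for c in chunks if c.allowed_groups & user_groups]
    return sorted(visible, key=lambda c: c.score, reverse=True)[:top_k]


chunks = [
    Chunk("Q3 revenue summary", {"finance", "board"}, 0.92),
    Chunk("Confidential board memo", {"board"}, 0.97),
    Chunk("Public press release", {"everyone"}, 0.45),
]
# A junior analyst never sees the memo; it is filtered out before ranking:
hits = retrieve(chunks, {"finance", "everyone"})
print([c.text for c in hits])  # → ['Q3 revenue summary', 'Public press release']
```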
<p>We have proven this at scale, managing over 10 TB of index data with over 10,000 user groups for a major central bank – demonstrating that security doesn&#x27;t have to sacrifice performance.</p>
<h3>Future-proofing your AI strategy</h3>
<p>Combining Squirro and cloudscale provides a turnkey sovereign enterprise AI platform that operates independently of major U.S. hyperscalers. This isn&#x27;t just about deploying an industry-hardened technology stack; it&#x27;s about removing the legal and security friction that stalls AI projects in the boardroom.</p>
<p>Ultimately, the choice between AI adoption and sovereignty is a false one. You don&#x27;t have to trade your data privacy for a productivity boost. By deploying Squirro on cloudscale, you can cut the time it takes to adopt sovereign Enterprise AI, building on a foundation that you, and only you, own.</p>
<p>We&#x27;ve successfully installed Squirro on cloudscale to verify the integration; if you&#x27;d like to see how it looks or try a test instance, please <a href="https://squirro.com/contact">get in touch with the Squirro Sales Team</a>.</p>]]></content:encoded>
        </item>
        <item>
<title><![CDATA[New GPUs – Double the VRAM, More Power]]></title>
          <link>https://www.cloudscale.ch/en/news/2026/02/27/new-gpus-double-the-vram-more-power</link>
          <pubDate>Fri, 27 Feb 2026 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2026/02/27/new-gpus-double-the-vram-more-power</guid>
          <description>
            <![CDATA[<p>cloudscale&#x27;s GPU servers power your AI workloads. In addition to dedicated CPU cores, 1 to 4 GPUs per virtual server deliver the performance needed to run even demanding applications. Now we are taking it to the next level: effective immediately, we are offering NVIDIA RTX PRO 6000 Max-Q GPUs instead of the L40S. But we are not stopping at more powerful GPUs: the new &quot;GPU2&quot; flavors also come with more memory – and are even more affordable.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-gpu2-flavors.png"/><h3>The next generation</h3>
<p>The NVIDIA L40S in our previous GPU servers came with 48 GB of VRAM, and up to four of the GPUs could be used in parallel in a virtual server. The new <strong>NVIDIA RTX PRO 6000 Max-Q boasts twice that amount: 96 GB of VRAM per GPU,</strong> and here too, you can tap into the power of up to four GPUs in a single server. The physical GPUs are passed through to the virtual server undivided, so that the full performance is dedicated to your use.</p>
<p>Speaking of performance: The RTX PRO 6000 <strong>not only offers more VRAM, but also significantly more computing power</strong> than the L40S, and we were also impressed by the energy efficiency of the &quot;Max-Q&quot; version. In line with the larger VRAM, we are also equipping our new GPU2 flavors with more memory, which you can combine in various ratios with 16 to 96 dedicated CPU cores. Nothing has changed with the <a href="https://www.cloudscale.ch/en/news/2025/04/15/cloudscale-gpu-servers-for-llm-ai-etc#toc-1">scratch disk</a>: up to 1600 GB of lightning-fast NVMe SSD storage is available locally to minimize latency.</p>
<br/>
<img src="https://static.cloudscale.ch/img/news-gpu2-flavors-9b87102602c7.png" alt="Ready for demanding workloads: GPU servers with up to 4 NVIDIA RTX PRO 6000 Max-Q GPUs, 640 GB RAM, and 96 CPU cores." caption="Ready for demanding workloads: GPU servers with up to 4 NVIDIA RTX PRO 6000 Max-Q GPUs, 640 GB RAM, and 96 CPU cores."/>
<h3>Migrate with care</h3>
<p>To switch from a GPU1 server with L40S to a <strong>GPU2 server with RTX PRO 6000 Max-Q,</strong> in principle, it is sufficient to scale the server to one of the new flavors via the cloud control panel or API. Scaling GPU servers can take a moment, as moving to different physical hardware also requires transferring the contents of your scratch disk.</p>
<p>We recommend, however, that you create a second, new server as a precaution and first <strong>make sure that everything works as desired</strong> (including the <a href="https://developer.nvidia.com/blog/nvidia-transitions-fully-towards-open-source-gpu-kernel-modules">open-source GPU kernel module</a> required for the Blackwell architecture, in Debian/Ubuntu for example the package <code>nvidia-open</code>). You can then migrate the workload to the new server – if you use a Floating IP and/or a load balancer, the IP address will remain the same for your users.</p>
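<p>For illustration, the scaling step itself is a single API call. The sketch below assumes, per our reading of the API documentation, a PATCH on the server resource with the new flavor slug (the server must be stopped for a flavor change), and the flavor slug in any example would be hypothetical; please verify against the current API docs before use:</p>

```python
import json
import urllib.request

API_URL = "https://api.cloudscale.ch/v1"


def scale_request(server_uuid: str, flavor_slug: str) -> urllib.request.Request:
    """Build the PATCH request that changes a server's flavor."""
    return urllib.request.Request(
        f"{API_URL}/servers/{server_uuid}",
        data=json.dumps({"flavor": flavor_slug}).encode(),
        headers={"Content-Type": "application/json"},
        method="PATCH",
    )


def scale_server(api_token: str, server_uuid: str, flavor_slug: str) -> None:
    """Send the flavor change, authenticated with a project API token."""
    request = scale_request(server_uuid, flavor_slug)
    request.add_header("Authorization", f"Bearer {api_token}")
    with urllib.request.urlopen(request) as response:
        assert response.status in (200, 204)
```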
<br/>
<p>The new, and in some cases considerably cheaper, GPU2 flavors with NVIDIA RTX PRO 6000 Max-Q GPUs have been available for a few days now, and early customer feedback on &quot;real&quot; workloads indicates a significant improvement in performance. We are confident that your application will benefit from our GPU servers too – <strong>why not try it out for yourself?</strong></p>
<p>Step it up a notch,<br/>
Your cloudscale team</p>]]></content:encoded>
        </item>
        <item>
<title><![CDATA[All Project Costs at a Glance]]></title>
          <link>https://www.cloudscale.ch/en/news/2026/01/27/all-project-costs-at-a-glance</link>
          <pubDate>Tue, 27 Jan 2026 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2026/01/27/all-project-costs-at-a-glance</guid>
          <description>
            <![CDATA[<p>One advantage of the cloud is that there are no fixed or investment costs; you only pay for what you actually need. The flip side of this is that managing cloud resources – such as creating or scaling servers – often has an impact on costs. With the new &quot;Project Costs&quot; view, everyone involved now has access to a clear overview: all cloud resources are listed individually, organized by type. This gives you a constant overview and allows you to check that everything has been implemented according to plan, for example after automated deployments.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-services-project-costs.png"/><h3>Consolidation of scattered cost data</h3>
<p>A virtual server at cloudscale can have <a href="https://www.cloudscale.ch/en/news/2020/11/23/more-volumes-more-flexible-container-setups">up to 128 volumes</a> – this means that PVs in a Kubernetes setup can be provisioned using CSI exactly where the pods need them. In the cloud control panel, the corresponding storage costs <strong>used to be displayed together with the server to which the volumes were currently attached.</strong> Especially in large setups, however, such a &quot;coupling&quot; did not always seem helpful. In contrast, other costs – such as for <a href="https://www.cloudscale.ch/en/news/2025/06/30/conveniently-on-the-safe-side-with-snapshots">snapshots</a>, Floating IPs, or <a href="https://www.cloudscale.ch/en/news/2025/08/29/these-components-make-up-an-lbaas">load balancers</a> – were only shown for the respective cloud resources.</p>
<p>For projects in your personal account or in an organization where you are a superuser, there has already been a clearer overview: in the &quot;Billing&quot; area of the control panel, all costs for a project are summarized on a single page. The total costs are broken down into compute, storage, and networking costs and listed individually down to the individual cloud resources. <strong>This overview is now also available to all other project participants,</strong> e.g. <a href="https://www.cloudscale.ch/en/news/2021/09/23/collaboration-with-external-accounts">external collaborators</a> or members of a <a href="https://www.cloudscale.ch/en/news/2022/01/27/cross-organizational-collaboration">partner organization</a>; you can find it directly in the &quot;Services&quot; area by selecting &quot;Project Costs&quot; from the menu.</p>
<h3>An overview of the current situation</h3>
<p>The overview under &quot;Project Costs&quot; is not only useful when you need to answer questions from accounting or a customer. For example, take a look at this summary after an automated deployment using <a href="https://github.com/cloudscale-ch/ansible-collection-cloudscale">Ansible</a> or <a href="https://www.terraform.io/docs/providers/cloudscale/index.html">Terraform</a>; you can see <strong>at a glance whether the cloud resources created correspond to what you intended.</strong> As a sanity check, you can also quickly see from the costs if, for example, compute flavors that are significantly too small or too large have sneaked into an &quot;infrastructure-as-code&quot; config.</p>
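<p>Such a sanity check can also be automated against the API. The sketch below works on server data shaped like the response of <code>GET /v1/servers</code> (each server carrying a nested flavor slug; the slugs shown are hypothetical – verify the exact field names against the API documentation) and reports flavor counts that deviate from the plan:</p>

```python
from collections import Counter


def flavor_mismatches(servers: list[dict], expected: dict[str, int]) -> dict[str, int]:
    """Difference between deployed and intended flavor counts (empty means OK)."""
    actual = Counter(s["flavor"]["slug"] for s in servers)
    slugs = set(actual) | set(expected)
    return {slug: actual[slug] - expected.get(slug, 0)
            for slug in slugs if actual[slug] != expected.get(slug, 0)}


# Two small web servers and one big database server were intended:
deployed = [
    {"name": "web-1", "flavor": {"slug": "flex-8-2"}},
    {"name": "web-2", "flavor": {"slug": "flex-8-2"}},
    {"name": "db-1", "flavor": {"slug": "flex-8-2"}},   # oops: too small
]
# Positive values mean too many of that flavor, negative values mean missing:
print(flavor_mismatches(deployed, {"flex-8-2": 2, "flex-32-8": 1}))
```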
<img src="https://static.cloudscale.ch/img/news-services-project-costs-5d4f92d764df.png" alt="In &quot;Project Costs,&quot; all costs for a project are summarized on a single page." caption="In &quot;Project Costs,&quot; all costs for a project are summarized on a single page."/>
<br/>
<p>As usual at cloudscale, costs are shown per day so you can see what the currently present cloud resources would cost if they existed in this form for a full 24 hours. <strong>Of course, we continue to bill by the second:</strong> resources that you just created or will delete on the same day are only billed on a pro-rata basis. Object storage continues to follow a separate logic here: because these costs are calculated from ongoing usage, a point-in-time view is not possible; instead, you are shown the average costs of the last 7 days.</p>
<h3>More useful information for you</h3>
<p>With the consolidation of all cost information in &quot;Project Costs,&quot; we have also taken a fresh look at <strong>other information that would best support our users in their work in the control panel.</strong> For example, you will now find a separate total for NVMe SSD and bulk storage above your server and volume views, or the overall object count for object storage.</p>
<p>We also recently introduced the &quot;Balance History&quot; in the &quot;Billing&quot; area. For your own account and for organizations where you are a superuser, you can <strong>track the development of your credit balance on a daily basis</strong> and see the amount charged for each project. For more information, simply click to view the <a href="https://www.cloudscale.ch/en/news/2024/04/25/detailed-breakdown-of-past-costs">Billing Report</a> for the relevant project and day, where you will find a list of all billed cloud resources with their individual amounts.</p>
<br/>
<p>With cloudscale, you only pay for what you need – and you can clearly see what that is at any time. This not only gives you the <strong>best possible overview,</strong> but also ensures you always have the right answers when communicating with internal and external stakeholders.</p>
<p>Straightforward.<br/>
Your cloudscale team</p>]]></content:encoded>
        </item>
        <item>
<title><![CDATA[Load Balancer "as a Service" With UDP Support]]></title>
          <link>https://www.cloudscale.ch/en/news/2025/12/18/load-balancer-with-udp-support</link>
          <pubDate>Thu, 18 Dec 2025 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2025/12/18/load-balancer-with-udp-support</guid>
          <description>
            <![CDATA[<p>TCP is typically associated with reliability – if individual packets in a connection are lost, this is detected and the packets are resent. However, its use in DNS or VPNs shows that UDP also covers important use cases. Our load balancer now supports both protocols, allowing you to horizontally scale both TCP- and UDP-based services and protect them from failures.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-lbaas-protocol-combinations.png"/><h3>The right protocol for every use case</h3>
<p>Compared to TCP, <strong>UDP saves a certain amount of overhead;</strong> with UDP, for example, data packets can be lost without being retransmitted. In a video stream, this may well be desirable: it is better to have a few pixels missing than to stop everything and wait for the missing data packets. In a VPN tunnel, on the other hand, the inner, encapsulated connection can respond to transmission errors and, if necessary, request that the data be sent again.</p>
<p>This makes it clear that UDP-based services cover a wide range of use cases that are by no means less demanding in terms of availability and server capacity. If you operate such services at cloudscale, be sure to use our &quot;LBaaS&quot; for them as well. With two or more backend servers – ideally in &quot;anti-affinity&quot; – processing requests in parallel, you <strong>not only increase the overall capacity but also the availability of your service as a whole.</strong></p>
<h3>Special characteristics of LBaaS with UDP</h3>
<p>In the case of TCP, our load balancer distributes individual connections to the available backend servers (&quot;pool members&quot;). UDP does not work with connections like this; instead, the load balancer distinguishes between individual data flows, which are identified by their respective combination of source and destination IPs and ports. Packets with matching values are assigned to the same data flow (and thus the same backend) over several minutes, which <strong>already leads to a certain degree of &quot;session stickiness&quot; by default.</strong></p>
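<p>Purely as an illustration of this flow-based distribution (not the load balancer&#x27;s actual algorithm): hashing the address/port 4-tuple deterministically maps every packet of a flow to the same backend, which is where the default &quot;session stickiness&quot; comes from.</p>

```python
import hashlib


def pick_backend(src_ip: str, src_port: int, dst_ip: str, dst_port: int,
                 backends: list[str]) -> str:
    """Map a UDP flow (identified by its address/port 4-tuple) to a backend."""
    flow = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(flow).digest()
    return backends[int.from_bytes(digest[:4], "big") % len(backends)]


backends = ["10.0.0.11", "10.0.0.12"]
# All packets of one flow land on the same backend:
a = pick_backend("198.51.100.7", 40000, "203.0.113.1", 53, backends)
b = pick_backend("198.51.100.7", 40000, "203.0.113.1", 53, backends)
assert a == b
```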
<img src="https://static.cloudscale.ch/img/news-lbaas-protocol-combinations-40faae73d888.png" alt="The API documentation also covers the supported protocol combinations." caption="The API documentation also covers the supported protocol combinations."/>
<p>Our comprehensive <a href="https://www.cloudscale.ch/en/api/v1#load-balancers">API documentation</a> includes tables <strong>showing you the supported protocol combinations.</strong> For example, it is possible for pool members to use (TCP) HTTP status codes to signal to the health monitor whether they are ready to process requests, even if the actual requests are then transmitted via UDP.</p>
<p>Please note that the load balancer <strong>currently supports UDP for IPv4 traffic only.</strong> If your load balancer is accessible from the internet, it will be assigned both an IPv4 and an IPv6 address by default; in this case, simply do not enter any <code>AAAA</code> DNS records for the hostnames on which you (also) operate UDP services.</p>
<br/>
<p>UDP is ubiquitous, not just when it comes to DNS. <strong>Use our load balancer for your UDP-based services too,</strong> to make your setup even more robust and elegantly handle maintenance work, for example. And if you do not have any experience with our LBaaS yet, you will find a good overview to get you started in the post about the <a href="https://www.cloudscale.ch/en/news/2025/08/29/these-components-make-up-an-lbaas">components of a load balancer</a>.</p>
<p>Reliably,<br/>
Your cloudscale team</p>]]></content:encoded>
        </item>
        <item>
<title><![CDATA[Project Audit Logs Available via API]]></title>
          <link>https://www.cloudscale.ch/en/news/2025/12/11/project-audit-logs-available-via-api</link>
          <pubDate>Thu, 11 Dec 2025 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2025/12/11/project-audit-logs-available-via-api</guid>
          <description>
            <![CDATA[<p>At cloudscale, all changes to your cloud resources are recorded in a log. This means that you can also check retrospectively when exactly, for example, a server was scaled or who the best person in your team is to ask for further details. These audit logs are now also available via API, which allows you to archive them in a location of your choice or include them in monitoring.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>One log – numerous advantages</h3>
<p>Have you ever wondered which member of your team set up that server called &quot;test&quot;? Or when you last had to perform a hard reboot of a certain system? <strong>You will find the answer in the audit log.</strong> Any changes to your cloud resources, irrespective of whether they were carried out in the cloud control panel or via API, are neatly listed and traceable.</p>
<p>If you would like not only to look at these logs in the control panel, but also, for example, to <strong>save them on your own log server or search them using a specific tool,</strong> you can now retrieve the audit log via our API, too. In the process, you have the option of setting a start and/or end time in order to narrow down extensive logs to the relevant period. In addition, a special <code>poll_more</code> URL enables you to periodically retrieve precisely those logs that have been added since the last retrieval. This also makes it possible for you to evaluate the logs in a monitoring system and, for example, to automatically designate somebody to perform an additional manual review of certain operations.</p>
<h3>Further details</h3>
<p>In addition to the exact time stamp for every change, you will find <code>action</code> and <code>message</code> fields in the audit log that you retrieve via API. The action field lists what type of change it is (e.g. &quot;server_volume_attach&quot;), while the message field describes the whole process, i.e. also indicates which volume was attached to which server. <strong>You will also see who initiated the change</strong> (usually the e-mail address of an account in the control panel or the API token used) and from which IP address. As always, you will find all the details in our <a href="https://www.cloudscale.ch/en/api/v1#project-logs">API documentation</a>.</p>
<p>Not every change carried out via the control panel or API will take the same amount of time, and actions can run in parallel. As soon as the action has been successfully completed, the log entry is prepared, which determines the time stamp of the log. Shortly afterwards (in a matter of milliseconds), the result of the action and the log entry can be seen. In this short period of time, it is possible for log entries to &quot;overtake&quot; each other, i.e. a log with an earlier time stamp only becomes visible later. When accessing subsequent pages and when &quot;polling&quot;, the <code>cursor</code> <strong>ensures that you do not miss a log entry.</strong> <a href="https://www.cloudscale.ch/en/engineering-blog/2025/10/09/generating-truly-sequential-ids-in-postgresql">Michi has provided a comprehensive insight into this mechanism</a> in our engineering blog.</p>
<h3>Preprepared sample code: start right away</h3>
<p>In order to enable you to start quickly, we have prepared a <strong>ready-to-use, annotated Python script</strong> for you. This will allow you to try retrieving the audit log via API and to become familiar with the approach used. Even if you use different tools and languages, this will provide you with a good overview and a basis for your own implementations.</p>
<p>First of all, prepare a Python virtual environment with the required dependency:</p>
<pre><code class="language-bash">mkdir project-log-api-client
cd project-log-api-client/
python3 -m venv venv
source venv/bin/activate
pip install aiohttp
</code></pre>
<p>Then create the actual <code>api-log-client.py</code> script with the following content:</p>
<pre><code class="language-python">import asyncio
import json
from collections.abc import AsyncIterator
from datetime import UTC
from datetime import datetime
from datetime import timedelta
from typing import Any
from urllib.parse import quote

from aiohttp import ClientSession

API_TOKEN = &quot;INSERT_PROJECT_API_TOKEN&quot;
POLL_INTERVAL_SECONDS = 120


async def stream_logs(session: ClientSession, start: datetime) -&gt; AsyncIterator[Any]:
    poll_url = f&quot;https://api.cloudscale.ch/v1/project-logs?start={quote(start.isoformat())}&quot;

    # The outer loop fetches all logs available at the time,
    # then waits for a defined interval.
    while True:
        current_page = poll_url

        # The inner loop fetches individual pages of available logs
        # until the `next` field in the response is `null`.
        while current_page is not None:
            async with session.get(current_page) as response:
                if not response.ok:
                    # The API did not return with status code 200.
                    raise Exception(f&quot;Error {response.status} from API: {await response.text()}&quot;)

                obj = await response.json()

            # Return all fetched logs to the caller.
            for log in obj[&quot;results&quot;]:
                yield log

            current_page = obj[&quot;next&quot;]
            poll_url = obj[&quot;poll_more&quot;]

        # Wait for a defined interval before polling for new logs.
        await asyncio.sleep(POLL_INTERVAL_SECONDS)


async def main() -&gt; None:
    # Header to authenticate the API access.
    headers = {&quot;Authorization&quot;: f&quot;Bearer {API_TOKEN}&quot;}

    # Retrieve logs from the past hour before streaming new logs.
    start = datetime.now(UTC).astimezone() - timedelta(hours=1)

    print(f&quot;Streaming logs, starting from {start:%F %H:%M:%S}. Use ctrl-C to stop.&quot;)

    async with ClientSession(headers=headers) as session:
        # Iterate over logs returned by the API and print them to the console.
        async for log in stream_logs(session, start):
            print(json.dumps(log, indent=4))


if __name__ == &quot;__main__&quot;:
    asyncio.run(main())
</code></pre>
<p>Now start the script. It will provide you with the audit log of the past 60 minutes on the command line and will then periodically add any new log entries that have been created in the interim.</p>
<pre><code class="language-plaintext">$ python3 api-log-client.py 
Streaming logs, starting from 2025-12-11 12:21:48. Use ctrl-C to stop.
{
    &quot;ip_address&quot;: &quot;172.30.244.1&quot;,
    &quot;action&quot;: &quot;server_create&quot;,
    &quot;message&quot;: &quot;Server &#x27;hello&#x27; has been created&quot;,
    &quot;timestamp&quot;: &quot;2025-12-11T12:23:17.460366Z&quot;,
    &quot;actor&quot;: {
        &quot;user&quot;: {
            &quot;email&quot;: &quot;johanna@example.com&quot;
        }
    }
}
[...]
</code></pre>
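<p>If you are only interested in certain event types, you can filter the streamed entries on the <code>action</code> field before processing them further. A minimal sketch (the <code>filter_logs</code> helper is our own illustration, not part of the API):</p>
<pre><code class="language-python">def filter_logs(logs, actions):
    """Keep only log entries whose action is in the given set."""
    wanted = set(actions)
    return [log for log in logs if log.get("action") in wanted]

sample = [
    {"action": "server_create", "message": "Server 'hello' has been created"},
    {"action": "server_volume_attach", "message": "Volume attached"},
]

print(filter_logs(sample, {"server_create"}))
</code></pre>
<p>In the script above, you could apply such a filter inside the <code>async for</code> loop before printing or archiving each entry.</p>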
<br/>
<p>With the audit logs of your projects at cloudscale you <strong>know at all times who did what when,</strong> which means that you can quickly establish the correct links or approach the right person for further inquiries. You can now also use these logs via API for maximum flexibility when archiving and evaluating.</p>
<p>Reliably committed,<br/>
Your cloudscale team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[For Worst-Case Scenarios: Rescue Mode with Grml
]]></title>
          <link>https://www.cloudscale.ch/en/news/2025/11/28/rescue-mode-with-grml</link>
          <pubDate>Fri, 28 Nov 2025 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2025/11/28/rescue-mode-with-grml</guid>
          <description>
            <![CDATA[<p>For your servers at cloudscale, you can choose from a range of popular Linux distributions, but you have also had the option for a while now of starting other operating systems. The most common reason for booting a system other than the pre-installed one is probably to resolve problems. Here, the new rescue mode for your servers helps you avoid numerous steps so that, in the worst-case scenario, you can get your server back online as quickly as possible.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-rescue-mode-button.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-rescue-mode-boot-screen.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-rescue-mode-boot-options.png"/><h3>Rescue mode as an additional boot option</h3>
<p>You can use the VNC console to <strong>start your servers from a volume other than the root volume,</strong> which will allow you to boot and, if required, install <a href="https://www.cloudscale.ch/en/news/2020/01/14/use-your-own-iso-usb-images">almost any operating system</a>. However, first of all you need to attach an additional volume to your server with the required content for booting (e.g. a USB or ISO image). Although this process offers you complete flexibility, it can cost precious time in an emergency.</p>
<p>Rescue mode now offers you a multifunctional tool that is available at all times in such cases. It takes just two clicks for your server to start a <a href="https://grml.org">Grml</a> image. This Debian-based distribution can probably be considered <strong>the standard live image for tricky situations.</strong></p>
<p>A live system is particularly helpful when normal access to your server is not working for whatever reason. Whether the server does not boot at all (the reason can often be found in the <a href="https://www.cloudscale.ch/en/news/2022/10/18/did-you-know-our-control-panel#toc-9">console log</a>), cannot access the network properly, has an overly restrictive firewall configuration, or you have simply lost the login details: much like the boot diskette of old, Grml offers you a tool to <strong>access a faulty configuration and solve the problem.</strong></p>
<img src="https://static.cloudscale.ch/img/news-rescue-mode-button-6c7d7dcea3d4.png" alt="Using the blue button, you can activate and deactivate rescue mode for a server." caption="Using the blue button, you can activate and deactivate rescue mode for a server."/>
<h3>Activate rescue mode</h3>
<p>You activate rescue mode via the blue &quot;emergency suitcase&quot; button in the cloud control panel. <strong>When you activate rescue mode, your server automatically restarts</strong> or switches on and then boots the Grml live image. You can then use the VNC console to access your server and, if required, open e.g. network access or mount the root volume.</p>
<img src="https://static.cloudscale.ch/img/news-rescue-mode-boot-screen-fdd5853b634a.png" alt="After 30 seconds, Grml starts with the default settings." caption="After 30 seconds, Grml starts with the default settings."/>
<p>Grml offers you a wealth of options from the outset. If you let the 30-second countdown of the boot loader elapse, it starts with the default settings and then <strong>offers you a selection menu via the VNC console</strong> where you can, for example, change the keyboard layout or network configuration. By pressing &quot;q&quot;, you reach the zsh shell and can decide for yourself what to do next.</p>
<h3>More features for faster work</h3>
<p>You can work in a more targeted – and convenient – manner if you <strong>provide the boot loader with information.</strong> Press the Tab key during the countdown and add options to the displayed string, e.g. <code>services=networking,cloud-init-main,cloud-config,ssh</code>, followed by the Enter key. This will enable Grml to activate the network while booting (it obtains the settings via DHCP) and to load the SSH daemon.</p>
<img src="https://static.cloudscale.ch/img/news-rescue-mode-boot-options-a9bb1240c66f.png" alt="Press the Tab key to add options, e.g. &quot;services=networking,cloud-init-main,cloud-config,ssh&quot;." caption="Press the Tab key to add options, e.g. &quot;services=networking,cloud-init-main,cloud-config,ssh&quot;."/>
<p>Most importantly, however, it also starts &quot;<a href="https://www.cloudscale.ch/en/news/2020/06/23/initialize-servers-with-cloud-init">cloud-init</a>&quot;; this fetches from our metadata server, among other things, the SSH public keys that you selected when creating the server, and stores them for the &quot;grml&quot; live user. This means that you can access the server straight away with <code>ssh grml@&lt;IP address&gt;</code> and will have <strong>all the usual SSH options,</strong> including copy/paste from your local system and file transfers.</p>
<p><strong>Grml offers you a wide range of aids and shortcuts.</strong> In order to obtain an overview, use <code>grml-tips &lt;keyword&gt;</code> directly in the Grml zsh on your server and have a look at the <a href="https://grml.org/cheatcodes">cheat codes on the Grml website</a>.</p>
<h3>Back to normal operation</h3>
<p>Once you are ready, you deactivate rescue mode again via the control panel. The server <strong>restarts and boots from its root volume as usual.</strong> If the error was successfully eliminated, you can then access your server as normal, e.g. via SSH with your standard username and SSH key.</p>
<p>While rescue mode is active, you can reboot or switch off the server, e.g. with the <code>shutdown</code> command. Before you do so, be aware that <strong>a switched-off server in rescue mode can only be switched on again by deactivating rescue mode</strong> (the reason for this is a specific trait of OpenStack, which our cloud infrastructure is based on). The server will then once again try to boot from its root volume, although you still have the option of activating rescue mode again.</p>
<br/>
<p>We all know that things sometimes get stuck in IT. The new rescue mode for your servers at cloudscale means that you can <strong>get to where it matters more quickly,</strong> for example to the decisive config that gets everything up and running again.</p>
<p>The right tool when it matters!<br/>
Your cloudscale team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Create New Volumes From Snapshots
]]></title>
          <link>https://www.cloudscale.ch/en/news/2025/10/16/create-new-volumes-from-snapshots</link>
          <pubDate>Thu, 16 Oct 2025 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2025/10/16/create-new-volumes-from-snapshots</guid>
          <description>
            <![CDATA[<p>Volume snapshots provide you with the option of, for example, reversing a failed server upgrade as if it never happened. You can now not only use a snapshot to revert the corresponding original volume to an earlier state, but you can also use it as a basis for creating a new volume. This means that volume snapshots can support you in various new areas of application and, under certain circumstances, save you a great deal of time.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Use snapshots flexibly</h3>
<p><strong>Create a snapshot of a volume in a matter of seconds,</strong> for example before a major upgrade or any other change to your server. If something goes wrong, you can just as quickly revert the server to the snapshot and thus to a previous, functional state. However, what if you simply want to check the content of a snapshot or only require a single file from it, e.g. the previous version of an important config?</p>
<p>A recent development is that you can also <strong>create new volumes from your volume snapshots</strong> where the new volume then contains the exact data that were &quot;frozen&quot; in the snapshot. In the above example, you could then attach the new volume to the corresponding server and mount it, which would enable you to access the required config file specifically.</p>
<p>It goes without saying that updates and their potential pitfalls are not the only use case for this new feature. An obvious example is, for instance, a 1:1 copy of a root volume. Previously you would have <a href="https://www.cloudscale.ch/en/news/2020/01/14/use-your-own-iso-usb-images">started your server from a live system</a> and then copied the volume block by block, e.g. with <code>dd</code>. Now you can simply create a snapshot and then turn it into a separate new volume. <strong>This allows you to subsequently continue working flexibly,</strong> for example by downloading the content of the volume to a local archive. Or you can use it to create an image file for a custom image in &quot;raw&quot; format (however, direct use as a root volume for a new server is not possible at the moment).</p>
<h3>Use via an existing API endpoint</h3>
<p>To create a new volume from a snapshot, use <strong>the same API endpoint that was previously used to create a volume.</strong> Specify the <code>volume_snapshot_uuid</code> as a parameter in order to identify the desired snapshot as the basis for the new volume. In addition, the API will accept a <code>name</code> and, as an option, <code>tags</code> for the new volume; the other properties (such as size or cloud location) are based on the indicated snapshot.</p>
<p>The API request could look like this, for example:</p>
<pre><code class="language-plaintext">curl -i -H &quot;Authorization: Bearer YourApiTokenGoesHere&quot; \
  -F name=&quot;my-volume-name&quot; \
  -F volume_snapshot_uuid=&quot;351d461c-2333-455f-b788-db11bf0b4aa2&quot; \
  https://api.cloudscale.ch/v1/volumes
</code></pre>
<p>As always, you will find all the required details in the extended section <a href="https://www.cloudscale.ch/en/api/v1#create-a-volume">&quot;Create a Volume&quot; in our API documentation</a>.</p>
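<p>The same form-encoded request can, of course, be issued from any language. A minimal sketch in Python using only the standard library – the token and snapshot UUID below are placeholders:</p>
<pre><code class="language-python">from urllib.parse import urlencode
from urllib.request import Request

API_TOKEN = "YourApiTokenGoesHere"  # placeholder

def volume_from_snapshot_request(name, snapshot_uuid):
    """Build the POST request that creates a new volume from a snapshot."""
    payload = urlencode({"name": name, "volume_snapshot_uuid": snapshot_uuid})
    return Request(
        "https://api.cloudscale.ch/v1/volumes",
        data=payload.encode(),
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        method="POST",
    )

req = volume_from_snapshot_request(
    "my-volume-name", "351d461c-2333-455f-b788-db11bf0b4aa2"
)
# Once a valid token is in place, import urllib.request and send the
# request with urllib.request.urlopen(req).
</code></pre>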
<p>Please note that snapshots are crash-consistent, even when creating volumes from them. This means that they contain an image of the original volume at exactly the point in time when the snapshot was created, as if the server had crashed at that moment. Depending on the specific circumstances, it <strong>may make sense to stop sensitive services before creating a snapshot</strong> or to shut down the server completely in order to ensure that e.g. write caches are also contained in the volume and thus in the snapshot. When preparing a custom image, please also consider <a href="https://www.cloudscale.ch/en/news/2020/06/23/initialize-servers-with-cloud-init">including &quot;cloud-init&quot;</a> and keeping the image as small as possible: any required free space is only added when you create a new server from the image.</p>
<h3>Points to note</h3>
<p>Be aware that not just snapshots, but also volumes created from them <strong>are not a backup.</strong> They are not only in the same storage cluster as the original volume and will potentially also be affected in the case of technical issues, but we also use &quot;Copy-On-Write&quot; (COW) for rapid and efficient provision of snapshots and volumes, which means that different volumes and snapshots may depend on the same, shared data fragments. For solid backup, use snapshots and the volumes based on them as an interim step and then copy the data to a separate location, such as an archive that is maintained locally to you.</p>
<p>The COW mechanism that is used in the background does not, however, cause any limitations for handling your volumes and snapshots. Even if you have used a snapshot as the basis for a new volume, you can later delete the snapshot – or even the original volume as a whole – <strong>without this affecting the new volume.</strong></p>
<br/>
<p>Snapshots as the basis of a new volume will also support you when you do not need to revert a whole server. Maintain the content of your root volumes, for example, when you delete virtual servers, create a consistent 1:1 copy as a basis for a download or a custom image, or access previous data versions while leaving everything else in the new state. <strong>Simply start with a snapshot and keep all options open for yourself.</strong></p>
<p>For moments that need to be saved,<br/>
Your cloudscale team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Private Networks Across Project Boundaries
]]></title>
          <link>https://www.cloudscale.ch/en/news/2025/09/23/private-networks-across-project-boundaries</link>
          <pubDate>Tue, 23 Sep 2025 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2025/09/23/private-networks-across-project-boundaries</guid>
          <description>
            <![CDATA[<p>At cloudscale you can group cloud resources that belong together in projects (e.g. to separate your test setup from your prod setup) and thus also allocate graduated access rights to those involved. You can use private networks to connect virtual servers if you do not want individual ones (e.g. a DB backend) to be accessible from the internet. So why not combine these two concepts? Network sharing enables you to share private networks with other projects if required, also across account and organization boundaries.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-network-sharing.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-network-sharing-ports.png"/><h3>Separate projects, joint network</h3>
<p>You can use private networks to &quot;wire up&quot; virtual servers internally in one cloud location, e.g. a web server that accepts requests from clients with the associated database backend. Where individual servers do not need to be accessible from the internet at all, this means that you can minimize the target area for attacks. <strong>You can now also connect your servers and load balancers using a private network if they are not in the same project.</strong></p>
<p>Projects not only provide a clear overview of and bring order to your cloud resources. If different teams within your organization are responsible for the maintenance of, for example, web workers and DB clusters, you can determine their access rights in the control panel for each project separately. With separate projects and a shared private network, you can then <strong>clearly map the responsibilities</strong> and isolate the databases from the internet, while ensuring that they remain accessible from the web workers.</p>
<img src="https://static.cloudscale.ch/img/news-network-sharing-d20e55c63137.png" alt="The private network &quot;db-access&quot; in the &quot;DB Cluster&quot; project can be shared with the &quot;Web Servers&quot; project." caption="The private network &quot;db-access&quot; in the &quot;DB Cluster&quot; project can be shared with the &quot;Web Servers&quot; project."/>
<p>A further example of a separation of this kind could be, for instance, when management of your firewall has been outsourced to an external service provider (and this partner organization has been granted access rights to the firewall project in the control panel). The filtered traffic can then <strong>travel through the private network to the servers in your other projects.</strong> Or you may have a centralized log server that you run in a separate project (key word: segregation of duties). Finally, you can also use this to limit the scope of an API token that you have stored in a tool for automation purposes.</p>
<h3>Make the most of shared networks</h3>
<p>If two projects are to share a private network, first determine <strong>which project should be the owner of the network.</strong> Details such as the name or MTU of the network can only be changed from within this project at a later stage. For the actual sharing (or to change the circle of participating projects later), notify our support team. You will find the link for this in the control panel under &quot;Network &gt; Sharing&quot;.</p>
<p>The properties of the private network are visible to all participating projects, in particular e.g. the name of the network so that it can be identified without doubt when connecting a server. The MAC addresses and – provided they are managed via DHCP – the IP addresses of all devices in this network are also visible under &quot;Network &gt; Ports&quot;. While these technical details are accessible to everyone in the network anyway (e.g. by means of ARP requests), they also help you to avoid collisions and any subsequent errors. <strong>The names of servers and load balancers in other projects, however, are not visible;</strong> devices of this kind are simply displayed as &quot;Other Device&quot;.</p>
<img src="https://static.cloudscale.ch/img/news-network-sharing-ports-e669bf7d5f35.png" alt="Seen from the &quot;DB Cluster&quot; project, the servers in the &quot;Web Servers&quot; project are shown as &quot;Other Device&quot;." caption="Seen from the &quot;DB Cluster&quot; project, the servers in the &quot;Web Servers&quot; project are shown as &quot;Other Device&quot;."/>
<p>With regard to IP addresses, you can use any IPs in your private network, which allows you to consider existing address patterns already used by a participating project. You will find further information on the <strong>configuration of the desired subnet and other DHCP features</strong> in our <a href="https://www.cloudscale.ch/en/news/2020/04/03/mastering-the-private-network-with-managed-dhcp">article on Managed DHCP</a>.</p>
<br/>
<p>Shared private networks allow you to guarantee internal data flow between the correct systems without all of them having to be in the same project (and thus managed by the same people). This ensures that you can structure access rights more precisely and <strong>centralize services that are used by several projects</strong> – both within your own organization and with partner organizations.</p>
<p>Network? Go!<br/>
Your cloudscale team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[These Components Make Up an "LBaaS"
]]></title>
          <link>https://www.cloudscale.ch/en/news/2025/08/29/these-components-make-up-an-lbaas</link>
          <pubDate>Fri, 29 Aug 2025 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2025/08/29/these-components-make-up-an-lbaas</guid>
          <description>
            <![CDATA[<p>cloudscale&#x27;s load balancers are a well thought-out solution: they help you operate highly available setups and take a lot of tedious work off your shoulders; at the same time, they are so flexible that you can use them – with the right settings – in very different scenarios. This article looks at the various logical components of a load balancer and their options – so you can get the most out of them for your specific case.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-lbaas-apirequest.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-lbaas-diagram-en.png"/><h3>The load balancer &quot;as a whole&quot;</h3>
<p>In the background, a cloudscale load balancer consists of a redundant pair of virtual servers that we manage for you. Externally, they share an IP address (the so-called VIP, short for &quot;virtual IP address&quot;), which is active on one of the two systems and – similar to a Floating IP – is automatically and almost seamlessly moved to the other system if a problem is detected. In this way, <strong>we prevent the load balancer itself from becoming the single point of failure,</strong> while you save yourself the effort of having to build and maintain such a setup yourself.</p>
<p>From a logical perspective, the load balancer (or the load balancer object in the API) is like a <strong>bracket that &quot;encloses&quot; the components</strong> described below behind the VIP mentioned.</p>
<p>As always, you can find additional parameters as well as <strong>examples of API requests and responses for all of the objects mentioned</strong> in our comprehensive <a href="https://www.cloudscale.ch/en/api/v1#load-balancers">API documentation</a>, so that you can try everything out in practice right away.</p>
<img src="https://static.cloudscale.ch/img/news-lbaas-apirequest-e6b557a22fd7.png" alt="Example request from the API documentation for creating a load balancer. This can then be further configured with listeners, pools, pool members and health monitors." caption="Example request from the API documentation for creating a load balancer. This can then be further configured with listeners, pools, pool members and health monitors."/>
<h3>The listeners</h3>
<p>A listener is the ear, so to speak, with which your load balancer listens for incoming connections. If you want to use your load balancer for HTTPS traffic, for example, you will typically set up a listener on TCP port 443. Particularly convenient: At this point, you can already <strong>define which clients are allowed to establish a connection at all.</strong> If you enter one or more IP addresses or ranges in <code>allowed_cidrs</code>, then only these, but no other addresses, can connect to your listener.</p>
<p>What happens with the traffic once it has been received is determined by the pool you configure. You will usually specify a separate pool for each listener, but it is also possible to have <strong>several listeners point to the same pool.</strong></p>
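<p>In JSON terms, a listener for the HTTPS example above might be described as follows. This is a sketch only – the pool UUID is a placeholder, and the authoritative field list can be found in the API documentation linked above:</p>
<pre><code class="language-python">import json

# Sketch of a listener restricting access to two address ranges;
# "UUID-OF-YOUR-POOL" is a placeholder for a real pool UUID.
listener = {
    "name": "https-in",
    "pool": "UUID-OF-YOUR-POOL",
    "protocol": "tcp",
    "protocol_port": 443,
    "allowed_cidrs": ["192.0.2.0/24", "2001:db8::/32"],
}

print(json.dumps(listener, indent=4))
</code></pre>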
<h3>The pools and their pool members</h3>
<p>Essentially, a pool collects all incoming connections that can be handled in the same way. First and foremost, this means <strong>distributing the connections to one or more backend servers – the so-called pool members –</strong> which then process the requests. Separately for each pool member, you can configure the IP address and port at which it is ready to accept the connections from this pool. In our example, an HTTPS server needs to be running, which does not, however, need to be configured on port 443, but can be configured on any port individually for each pool member.</p>
<p>Directly for the pool itself, you configure <strong>the scheme according to which the connections are distributed</strong> between two or more pool members. Instead of a simple <code>round_robin</code>, you can use <code>least_connections</code> to route new connections to the pool member that currently has the fewest active connections, or use <code>source_ip</code> to keep routing connections from a specific client to the same pool member, e.g. for persistent sessions on a website.</p>
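<p>The three distribution schemes can be illustrated with a few lines of Python. This is purely illustrative logic to show the idea, not cloudscale&#x27;s actual implementation:</p>
<pre><code class="language-python">from itertools import cycle
from zlib import crc32

members = ["backend-a", "backend-b", "backend-c"]

# round_robin: take the members in turns.
rr = cycle(members)
print([next(rr) for _ in range(4)])  # a, b, c, then a again

# least_connections: pick the member with the fewest active connections.
active = {"backend-a": 12, "backend-b": 3, "backend-c": 7}
print(min(members, key=active.get))  # backend-b

# source_ip: hash the client IP so that the same client keeps
# landing on the same member.
client_ip = "198.51.100.7"
print(members[crc32(client_ip.encode()) % len(members)])
</code></pre>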
<p>Also select the <code>protocol</code> for your pool: With <code>tcp</code>, the pool members &quot;see&quot; or log the IP address of the load balancer as the supposed client, as the payload data is forwarded from the load balancer to the backend, but a new TCP connection is established for this. You can work around this using <code>proxy</code> or <code>proxyv2</code> if your server software supports it (such as nginx): With this protocol, the load balancer can not only pass on the payload data from the original connection to the backend server, but <strong>also include information about the original client IP.</strong></p>
<img src="https://static.cloudscale.ch/img/news-lbaas-diagram-en-5dac9f6bfd7e.png" alt="Diagram illustrating the components of a load balancer." caption="Diagram illustrating the components of a load balancer."/>
<h3>The health monitors</h3>
<p>You can optionally configure a health monitor for each pool. This allows you to define under which circumstances the pool members are considered &quot;healthy&quot; – for example, if they respond to pings or return the expected HTTP status code in response to a configurable HTTP request. Using the health monitor, the load balancer can <strong>periodically check the individual pool members and continuously adjust the balancing</strong> so that incoming connections are only forwarded to functioning backend servers.</p>
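<p>Conceptually, the decision made for each individual check is simple; here is a purely illustrative sketch (not cloudscale&#x27;s implementation) for an HTTP-based health monitor:</p>
<pre><code class="language-python"># Illustrative only: a pool member counts as healthy if the HTTP
# status code it returned is among the expected codes.
def is_healthy(status_code, expected_codes):
    return str(status_code) in expected_codes

print(is_healthy(200, {"200", "204"}))  # True
print(is_healthy(503, {"200", "204"}))  # False
</code></pre>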
<br/>
<p>Last but not least, we would like to point out that a load balancer can either be accessible from the internet or only from one of your private networks, for example for services within a Kubernetes cluster. Publicly accessible load balancers can also be combined with Floating IPs. By the way, a single load balancer can be used for a large number of services/pools, each with its own set of pool members. All in all, our load balancer &quot;as a service&quot; is <strong>not only highly flexible, but also particularly affordable at CHF 1.50 per day</strong> – making it the ideal upgrade for setups where availability matters to you.</p>
<p>Servers: Healthy.<br/>
Your cloudscale team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Guest Article: Deploying a SCION AS in the Cloud
]]></title>
          <link>https://www.cloudscale.ch/en/news/2025/08/19/deploying-a-scion-as-in-the-cloud</link>
          <pubDate>Tue, 19 Aug 2025 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2025/08/19/deploying-a-scion-as-in-the-cloud</guid>
          <description>
            <![CDATA[<p>Recently, our team at ETH Zurich received exciting news about SCION&#x27;s expansion into the cloud. In a joint effort, cloudscale and Cyberlink now offer native SCION connectivity to the production network for all customers. This marks a major milestone: Now, anyone can take advantage of SCION&#x27;s advanced networking features directly from their servers at cloudscale, without being part of a larger consortium.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-deploying-scion-non-managed.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-deploying-scion-managed.png"/><p>By ETH Network Security</p>
<p>The SCION production network has seen significant growth in recent years. Notable examples of its deployment include the <a href="https://www.six-group.com/en/products-services/banking-services/ssfn.html">Secure Swiss Finance Network (SSFN)</a> and the <a href="https://support.hin.ch/de/thema/sshn.cfm">Secure Swiss Health Network (SSHN)</a>, both of which support critical infrastructure and are tailored to members of their respective ecosystems.</p>
<p>Here, we will walk you through the steps we followed to deploy a SCION Autonomous System (AS) in the cloud supported by <a href="https://www.cyberlink.ch/die-scion-cloud">Cyberlink</a> and <a href="https://www.cloudscale.ch">cloudscale</a> – exactly as any customer could.</p>
<p>The first step was to get in touch with cloudscale and Cyberlink, who provided us with detailed information about their SCION offering. At the time of writing, they support two types of SCION connectivity.</p>
<h3>Non-managed SCION access</h3>
<p>This option provides a direct SCION link to Cyberlink&#x27;s SCION Border Router at cloudscale within the requested SCION Isolation Domain (ISD). Customers choosing this approach are responsible for deploying and managing their own SCION AS services within their Virtual Data Center (VDC).</p>
<p>Read the full <a href="https://www.scion.org/deploying-a-non-managed-scion-as-in-the-cloud/">walkthrough for non-managed SCION access</a> at www.scion.org.</p>
<img src="https://static.cloudscale.ch/img/news-deploying-scion-non-managed-4d2d1223ae0c.png" alt="Guide: Deploying a non-managed SCION AS in the cloud." caption=""/>
<h3>Managed SCION access</h3>
<p>In this setup, Cyberlink handles the deployment and management of SCION services on behalf of the customer, offering a more hands-off experience. The managed access makes use of the Anapaya EDGE appliance.</p>
<p>Read the full <a href="https://www.scion.org/deploying-a-managed-scion-as-in-the-cloud/">walkthrough for managed SCION access</a> at www.scion.org.</p>
<img src="https://static.cloudscale.ch/img/news-deploying-scion-managed-e1110d66cbbe.png" alt="Guide: Deploying a managed SCION AS in the cloud." caption=""/>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Fleeting: Scale GitLab Runners Automatically
]]></title>
          <link>https://www.cloudscale.ch/en/news/2025/07/31/fleeting-scale-gitlab-runners-automatically</link>
          <pubDate>Thu, 31 Jul 2025 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2025/07/31/fleeting-scale-gitlab-runners-automatically</guid>
          <description>
            <![CDATA[<p>Cloud computing allows the use of server resources as required. In the case of cloudscale, new virtual servers are available within seconds; if the servers are no longer needed, they can be removed just as quickly again – along with the associated costs. A prime example of this is software development with continuous integration and automated tests. GitLab with Fleeting and the cloudscale plugin ensure that it is easy for you to make the most of these benefits.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-gitlab-fleeting-server.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-gitlab-fleeting-job.png"/><h3>GitLab Fleeting: flexible resources for your tests</h3>
<p>GitLab provides you with a comprehensive platform for software development that goes far beyond pure source code management and that – unlike similar products – you can host yourself. In the process, GitLab Runners carry out your automated tests and can dynamically request the required computing power from e.g. cloudscale. In the past, GitLab used Docker Machine for this purpose; since that was deprecated quite some time ago, GitLab &quot;Fleeting&quot; was developed as a modern replacement, and we provided the appropriate <a href="https://github.com/cloudscale-ch/fleeting-plugin-cloudscale">cloudscale plugin</a>. This means that <strong>the required cloud resources are always available for your tests.</strong></p>
<p>The advantages are obvious: with sufficient computing power, it takes less time to run through (or even get to) your pipelines and <strong>you receive the feedback required for the next stage of work more quickly.</strong> If there is nothing to be tested, the resources can automatically be deleted, thus minimizing costs. By regularly replacing GitLab Runners with new instances, you also reduce the risk of your tests being skewed by accumulated artifacts, configs, and the like.</p>
<h3>Try out autoscaling for yourself</h3>
<p>The cloudscale plugin for GitLab Fleeting was written in cooperation with Puzzle ITC AG and to a significant extent by Yannik Dällenbach. In his <a href="https://www.puzzle.ch/blog/2025/05/28/how-to-write-a-gitlab-fleeting-plugin">blog post at Puzzle</a>, Yannik takes you through the development process step by step. In <a href="https://www.cloudscale.ch/en/engineering-blog/2025/03/27/today-i-learned-gitlab-fleeting-edition">our Engineering Blog</a>, Denis Krienbühl additionally provides you with insights into the packaging of the plugin. If you want to <strong>try it out immediately, you can set up a GitLab server with Runners and plugin in a few steps</strong> using an <a href="https://github.com/cloudscale-ch/gitlab-runner">Ansible playbook and some brief instructions</a>, which will allow you to test autoscaling in a risk-free, hands-on manner.</p>
<img src="https://static.cloudscale.ch/img/news-gitlab-fleeting-server-bf30a550f351.png" alt="The &quot;gitlab&quot; server hosts the GitLab installation, while the &quot;fleeting-*&quot; servers are automatically created and deleted as needed." caption="The &quot;gitlab&quot; server hosts the GitLab installation, while the &quot;fleeting-*&quot; servers are automatically created and deleted as needed."/>
<p>The best way to do this is to create a new project in the cloud control panel (this makes it clear at all times what is part of your test) and to create an API token with write access within this project. You will also need a Python environment from which you initiate the creation of the setup. Depending on your system, you may also need to add some additional software packages; a virtual server with a fresh Ubuntu 24.04 LTS installation, for example, will be missing <code>python3.12-venv</code>. There is no assumption, however, that you are using Ansible already, as <strong>the repository provides everything you need</strong> to automatically set up a GitLab server with autoscaling.</p>
<p>Start by cloning the &quot;gitlab-runner&quot; repository and then follow the few required steps in the Readme. After running the playbook, which may take a while, you will find two new virtual servers in your cloudscale project. As the name implies, your GitLab installation will run on the &quot;gitlab&quot; server. You can access it via an HTTPS-protected web interface, and the access data is provided towards the end of the playbook output. There is also an instance of &quot;GitLab Runner&quot; on this server, which is responsible for the dispatching of CI jobs and the creation/deletion of the virtual servers required for them. In addition, you will find a &quot;fleeting-*&quot; virtual server, which is already available for your first CI jobs. <strong>Further servers of this kind will be automatically created and deleted again as required.</strong></p>
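<p>The preparation described above can be sketched roughly as follows. The commands are illustrative only (in particular, the name of the token variable is an assumption), so treat the repository&#x27;s Readme as the authoritative source:</p>
<pre><code class="language-shell"># Example preparation on a fresh Ubuntu 24.04 LTS system (illustrative only).
sudo apt install -y git python3.12-venv   # venv module is not preinstalled
git clone https://github.com/cloudscale-ch/gitlab-runner.git
cd gitlab-runner
python3 -m venv venv &amp;&amp; . venv/bin/activate   # isolated Python environment
# Provide the write-enabled API token from your dedicated project; the exact
# variable name may differ - check the Readme before running the playbook.
export CLOUDSCALE_API_TOKEN=&quot;YourApiTokenGoesHere&quot;
</code></pre>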
<img src="https://static.cloudscale.ch/img/news-gitlab-fleeting-job-9a012bc55cb9.png" alt="The sample CI job tests whether it can access the shared CI cache." caption="The sample CI job tests whether it can access the shared CI cache."/>
<p>The Readme also contains a sample CI job so that you can see your Runners in action at once. This job tests whether it can <strong>access artifacts in the shared CI cache.</strong> If you activated the <code>s3_cache</code> during setup and provided the access data for a bucket in our object storage, the job should notify you that it has found the cache from the second run onwards. Take a look at the configuration behind the scenes, too: you can, for example, choose which flavor and thus which level of performance the servers use for your CI jobs. You can also determine how much spare capacity needs to be available for immediate use at any time – even based on specific days and times.</p>
<br/>
<p>If you do not yet have a self-hosted GitLab, use our Ansible playbook to create a fully functional setup in a jiffy and start using it at once. If you prefer a manual setup or if you already work with GitLab, have a look at the code of our Fleeting plugin and of the playbook and simply take what is suitable for your use case. <strong>Use the full performance when your CI jobs require it</strong> – and reduce your costs at other times.</p>
<p>Always exactly what you need.<br/>
Your cloudscale team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Conveniently on the Safe Side With Snapshots
]]></title>
          <link>https://www.cloudscale.ch/en/news/2025/06/30/conveniently-on-the-safe-side-with-snapshots</link>
          <pubDate>Mon, 30 Jun 2025 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2025/06/30/conveniently-on-the-safe-side-with-snapshots</guid>
          <description>
<![CDATA[<p>With a snapshot of your volumes, you can turn back time, so to speak: If not everything runs smoothly after a software upgrade, for example, you can simply restore your server to its old, functional state. You can now use this practical feature not only via API, but also via our web-based cloud control panel. Freeze an image of your setup before minor and major changes – it only takes a few clicks for your &quot;Plan B&quot;.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-snapshots.png"/><h3>Safe and flexible with snapshots</h3>
<p>You are always wiser in hindsight. That is why with cloudscale you only decide after the fact whether a change has taken place at all: If everything went well and your server runs as expected, you leave it at that; however, if it turns out that the previous state was better, you restore it from the snapshot and <strong>let the failed change disappear from the timeline, so to speak.</strong> With up to 10 snapshots per volume, you can also &quot;freeze&quot; several intermediate states during an extensive migration and only decide later whether and to which point you want to return.</p>
<p>Snapshots support the security of your servers in several ways: Firstly, you reduce the residual risk associated with changes to practically zero – <strong>you can always fall back on the known-good initial state.</strong> This also lowers the barriers to installing available (security) updates and keeping your systems up to date. Snapshots also help you to run through an update process several times or in variants: Reset your lab system as often as you like and find the best approach for your change before you even touch the productive setup.</p>
<h3>Tips for use</h3>
<p>You can find your volumes with their snapshots <strong>in the <a href="https://control.cloudscale.ch">control panel</a> under &quot;Services &gt; Storage &gt; Volumes&quot;;</strong> the volumes connected to virtual servers are also linked on the servers&#x27; &quot;Storage&quot; tab.</p>
<p>Each snapshot has a name that you can define when you create it and also change later. For example, name your snapshot &quot;After step 3 DB conversion&quot; so that, if necessary, you can <strong>easily find the state you want to return to</strong> and do not have to rely solely on the date and time of the snapshot.</p>
<img src="https://static.cloudscale.ch/img/news-snapshots-b6392aa6369f.png" alt="Create up to 10 snapshots per volume and decide later whether and to which point you want to return." caption="Create up to 10 snapshots per volume and decide later whether and to which point you want to return."/>
<p>Volume snapshots are &quot;crash-consistent&quot;: they <strong>freeze exactly the content of your virtual hard disk that is present when the snapshot is created</strong> (as if the server had &quot;crashed&quot; at that moment). If necessary, clarify how your application will behave if you reset the volume to such a state. It may make sense to shut down services or the entire server to create snapshots, so that caches, for example, are safely written to the volume and are therefore included in the snapshot.</p>
<p>If there are several snapshots of a volume, you can only revert to the most recent one. For an earlier state, simply delete the snapshots you want to skip – <strong>as soon as the desired snapshot is the latest, it will be available for revert.</strong> To roll back, a non-root volume must also be disconnected from the server or, in the case of a root volume, the virtual server must be switched off.</p>
<h3>The fine print: please pay attention</h3>
<p>Be aware that <strong>snapshots are no substitute for a proper backup.</strong> Volumes and the associated snapshots are stored in the same storage cluster, and in the event of a failure, the snapshots are potentially also affected. Furthermore, snapshots can only be used to revert entire volumes; it is not possible to selectively read/restore data.</p>
<br/>
<p>We would like to warmly recommend the use of volume snapshots. Be it via our API – for example as part of your automated server management – or now also via the web-based control panel: <strong>maintain the possibility to try your changes again (and perhaps differently) if necessary or to undo them completely.</strong></p>
<p>Keeps your options open:<br/>
Your cloudscale team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[ISO/IEC 27001:2022 Recertification
]]></title>
          <link>https://www.cloudscale.ch/en/news/2025/05/23/iso-iec-27001-2022-recertification</link>
          <pubDate>Fri, 23 May 2025 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2025/05/23/iso-iec-27001-2022-recertification</guid>
          <description>
            <![CDATA[<p>cloudscale recently once again successfully passed the recertification audit for compliance with ISO/IEC 27001, 27017 and 27018. For the first time, the revised &quot;27001:2022&quot; standard was applied, which provides a more integrated view of &quot;information security management systems&quot; (ISMSs) than the previous version of the standard.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-iso-27001-2022-en.png"/><h3>Certified for another three years</h3>
<p>cloudscale.ch Ltd.&#x27;s information security management system has been certified since 2019. We have always been tested in accordance with the <strong>ISO/IEC 27001 standard, which deals with &quot;information security&quot; in general</strong> and can be implemented by organizations of virtually any size and in virtually any sector, and in accordance with the ISO/IEC 27017 and ISO/IEC 27018 standards, which contain complementary controls for securing cloud services and protecting personal data in public clouds.</p>
<p>Certification is valid for three years and requires annual surveillance audits to maintain its validity. After our initial certification in 2019, we successfully achieved recertification for the first time in 2022 and passed a surveillance audit in each intervening year. This meant that recertification was due again in 2025, and we are delighted to have maintained our status seamlessly. The <strong>new certificate is valid until 2028</strong> and also <a href="https://www.cloudscale.ch/en/iso-27001-27017-and-27018-certificate.pdf">available for download</a>.</p>
<img src="https://static.cloudscale.ch/img/news-iso-27001-2022-en-761af74a5159.png" alt="Our new ISO/IEC 27001:2022 certificate." caption="Our new ISO/IEC 27001:2022 certificate."/>
<h3>The new ISO/IEC 27001:2022 standard</h3>
<p>The recertification audit was even more comprehensive than usual this year. For the first time, we were tested in accordance with the new ISO/IEC 27001:2022 standard, which provides <strong>an even more integrated view of information security</strong> than the previously valid ISO/IEC 27001:2013 standard. The new standard prescribes 93 controls that must be taken into account and – unless there are valid reasons to the contrary – also implemented. Although the total number of controls is lower than before, no requirements have been dropped; rather, existing controls have been merged, reworded, streamlined and newly assigned to the categories &quot;Organizational controls&quot;, &quot;People controls&quot;, &quot;Physical controls&quot; and &quot;Technological controls&quot;.</p>
<p>In addition to the existing, partly reworded controls, completely new ones have been added, for example with regard to &quot;Business continuity&quot; and &quot;Configuration management&quot;. Here, once again, it paid off that <strong>information security has always been part of cloudscale&#x27;s DNA.</strong> We had already covered many of the standard&#x27;s new requirements in our day-to-day work. There were no changes in the two other cloud-specific standards (ISO/IEC 27017:2015 and ISO/IEC 27018:2019) whose controls were also tested in the audit.</p>
<h3>Continuous improvement included</h3>
<p>The unchanged main focus of the new version of the standard – and a given at cloudscale – is continuous improvement. The ISMS processes must be designed in such a way that weaknesses and potentials are identified and improvement measures implemented. This fits perfectly with the way we work at cloudscale and <strong>continuously improve information security</strong> (e.g. by means of automation, monitoring and redundancies). As a consequence, we can confidently look towards the surveillance audits coming our way during the period of validity of the certificate until 2028.</p>
<p>Every year we are also audited for our <a href="https://www.cloudscale.ch/en/news/2022/04/29/isae-3000-report-available">ISAE 3000 report</a>, which is totally separate from &quot;ISO&quot;. This is not a certification, but a test of specific controls that some customers, in particular in regulated sectors, require for their <strong>internal reporting</strong> in the case of outsourced processes. We are happy to make this report available to such customers on request.</p>
<br/>
<p>Audits are a test, which means they are more of an obligation than a pleasure. In this context, we would also like to thank our certification authority, Swiss Safety Center AG, for their support and constructive cooperation since our initial certification. It goes without saying that we are delighted that the seamless renewal of our ISO 27001 certification demonstrates to the outside world what really matters to us all year round: <strong>the security – comprehensively understood as confidentiality, integrity and availability – of your data.</strong></p>
<p>International standards, Swiss care.<br/>
Your cloudscale team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[cloudscale GPU Servers – for LLM, AI, etc.
]]></title>
          <link>https://www.cloudscale.ch/en/news/2025/04/15/cloudscale-gpu-servers-for-llm-ai-etc</link>
          <pubDate>Tue, 15 Apr 2025 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2025/04/15/cloudscale-gpu-servers-for-llm-ai-etc</guid>
          <description>
<![CDATA[<p>Everyone is talking about &quot;AI&quot; technology and the hopes it raises of putting it to use in the most varied areas of life. You no doubt already have ideas of how you can improve everything with intelligent tools. There are many freely available building blocks on the internet, and the new cloudscale GPU servers now provide you with the required computing power to go full throttle with the appropriate model.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-gpu-servers.png"/><h3>The new cloudscale GPU flavors</h3>
<p>With immediate effect, you can use virtual servers with GPUs at cloudscale, too: simply choose one of our GPU flavors when launching a new server. Just as with the existing Flex and Plus flavors, you can choose between various CPU and RAM configurations. In addition, your server will be allocated <strong>one to four physical GPUs</strong> depending on the flavor. A local scratch disk is also included in the GPU flavors; you will find more information on this below.</p>
<p>The new GPU flavors are aimed at maximum performance, which is why they are based on the tried-and-tested Plus flavors, where the selected number of CPU cores are exclusively available to your virtual server and you can <strong>use them to full capacity 24/7.</strong> The same applies to the GPUs, where one or more NVIDIA L40S GPUs supply massive processing power for your workloads and the GPUs are passed to your virtual server &quot;as a whole&quot; as PCI devices.</p>
<h3>A new element: the scratch disk</h3>
<p>From the outset, your servers&#x27; virtual hard drives have been saved in our Ceph-based storage clusters at cloudscale. This means that they are always immediately available, irrespective of the physical machine your virtual server is running on at the time and that these volumes (with the exception of the root volume) can be moved between virtual servers. This comes at the cost of <strong>a certain degree of latency.</strong> Read and write operations run via network connections, which means that – despite 100 Gbps links – they are on the move for considerably longer than in the case of locally installed NVMe disks.</p>
<p>In everyday situations, most requests tend to affect a small section of the data, which can be kept in a cache if required. As LLMs and similar workloads may be different here, <strong>our GPU servers have a local scratch disk.</strong> This storage is located on NVMe disks directly in the physical machine the virtual server is running on, thus providing minimal latency. Data are also stored in duplicate in a RAID 1 array as protection against failure.</p>
<img src="https://static.cloudscale.ch/img/news-gpu-servers-7aea20971387.png" alt="The new GPU servers at cloudscale: dedicated CPU power, 1 to 4 NVIDIA L40S GPUs, and a local scratch disk." caption="The new GPU servers at cloudscale: dedicated CPU power, 1 to 4 NVIDIA L40S GPUs, and a local scratch disk."/>
<p>Operating this setup involves a few particular issues. When moving GPU servers to another physical machine (which is not possible as &quot;live migration&quot; due to the GPUs, but can only occur when the server is switched off), <strong>the content of the scratch disk must also be transferred,</strong> which takes a certain amount of time. Moving your GPU server may be triggered during e.g. scaling or become necessary when maintenance work is due on our part.</p>
<p>In the event of (hardware) problems, GPU servers are restarted on a different physical machine depending on availability. Please assume, however, that you will be given a new, empty scratch disk in the process. For this reason, you should <strong>only use the scratch disk for data where complete loss can be tolerated at any time</strong> and ensure that you regularly copy any interim results to a separate storage location.</p>
<h3>Development insights</h3>
<p>Our GPU servers have been available to selected customers since late February and feedback has been extremely positive. In parallel to gathering initial practical experience, we implemented various improvements, in part also in OpenStack, the open source project our setup is based on. We will, of course, also <strong>give our enhancements back &quot;upstream&quot; to the projects in question,</strong> in as far as this is possible and feasible.</p>
<p>One of these improvements is the possibility of enlarging the scratch disk at a later point in time – <strong>up to 1&#x27;600 GB are available to you locally,</strong> in addition to the usual volumes in our storage clusters. We have also deactivated data compression when moving the scratch disk between physical machines; our internal 100 Gbps network means we can do without this overhead. And with regard to the SSH connection that is opened for the migration, we ensured that the ciphers used can benefit from the AES support of the CPUs.</p>
<h3>Your turn</h3>
<p>When creating a new virtual server in our cloud control panel, you will <strong>find the GPU flavors in the &quot;Dedicated GPUs&quot; tab.</strong> Use the &quot;please contact support&quot; link once and provide us with the key data of your planned use; in addition we will need you to attach a signed copy of the &quot;Addendum for GPU servers&quot;. After a manual check we will enable the GPU flavors for the project you specify.</p>
<p><strong>Update:</strong> Signing the &quot;Addendum for GPU servers&quot; just got easier! You can now conclude it for your account or organization in the control panel under <code>Contracts</code> &gt; <code>GPU Addendum</code> and start a server immediately — no support ticket required.</p>
<p>If you do not yet have a specific use case, but would like to speak to your own chatbot, Lukas has made it easy for you to get started. In our engineering blog, he shows you step by step <a href="https://www.cloudscale.ch/en/engineering-blog/2025/04/14/diy-ai-chatbot">how to install Ollama and DeepSeek-R1 70B at cloudscale</a> and make them accessible via the web. A useful tip: our NVIDIA L40S have 48 GB memory per GPU. To ensure that performance does not collapse, take as many GPUs as needed for <strong>your selected model to fit completely into the memory of the GPUs.</strong></p>
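<p>As a rough sketch of what such a start can look like (the first command is Ollama&#x27;s official installer; the model tag is merely an example and must fit into your GPUs&#x27; combined memory):</p>
<pre><code class="language-shell"># Illustrative only - see Lukas&#x27; blog post linked above for the full guide.
curl -fsSL https://ollama.com/install.sh | sh   # install Ollama
ollama run deepseek-r1:70b                      # downloads the model on first run
</code></pre>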
<br/>
<p>Our new GPU servers with up-to-date NVIDIA L40S GPUs and a local scratch disk provide maximum performance for your LLM and AI workloads. After one-off activation, you can <strong>start, scale and delete GPU servers via the control panel or API at any time using the self-service model.</strong> It goes without saying that, as usual at cloudscale, you benefit from to-the-second billing without fixed costs and from a data location in Switzerland. Note, however, that availability is currently limited and allocated on a first come, first served basis.</p>
<p>Still here for you personally,<br/>
Your cloudscale team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Managed Services on Our Infrastructure
]]></title>
          <link>https://www.cloudscale.ch/en/news/2025/03/13/managed-services-on-our-infrastructure</link>
          <pubDate>Thu, 13 Mar 2025 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2025/03/13/managed-services-on-our-infrastructure</guid>
          <description>
            <![CDATA[<p>We have all come across it in the context of bicycle repairs or gardening: some people like the challenge of doing it themselves, while others prefer to hire a professional. It is no different with the cloud. While our virtual servers offer full root access, you may not want to run software yourself, but &quot;simply&quot; to use it. This is why we have recently added a wide range of managed services to our marketplace, which are offered by our partners and run on the solid cloudscale infrastructure.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-managed-services-en.png"/><h3>The best as part of your team</h3>
<p>If you want to be one of the best, you need a <strong>clear focus.</strong> Here at cloudscale, for example, our focus is on a rock solid infrastructure, which is carefully thought out in all its details and comes with top support for our customers who can rely on us 24/7. For others, their focus may be the development of specialized apps, data analysis, optimizing business processes, or something completely different that they rely on cloud services for.</p>
<p><a href="https://www.cloudscale.ch/en/marketplace">Our marketplace</a> exists because nobody can be one of the best in all areas. This is where you will find managed services by our partners for those components that are important to you, but do not form part of your core business. All the services are run by our partners on the cloudscale infrastructure. This ensures that you have a <strong>provider with appropriate know-how by your side</strong> at every level.</p>
<h3>From password manager to complete container infrastructure</h3>
<p><strong>The providers in our marketplace cover a wide spectrum of services,</strong> which means you can find, for example, solutions relating to personal workspace for you and your team, to collaboration on joint documents, or to a shared password manager. You will also find what you are looking for if you simply want to deploy your containerized applications and need a suitable Kubernetes environment for this (e.g. based on OpenShift or Rancher).</p>
<img src="https://static.cloudscale.ch/img/news-managed-services-en-9aab0c9bb1fd.png" alt="Keycloak and GitLab: Two of the many managed services in our marketplace." caption="Keycloak and GitLab: Two of the many managed services in our marketplace."/>
<p>It goes without saying that you can also <strong>specifically use individual software components as a managed service.</strong> A managed GitLab, for example, is the perfect home for your application&#x27;s source code; your online shop can rely on a managed database with PostgreSQL or MariaDB in the backend; and with user accounts kept in a managed Keycloak, you benefit from single sign-on in compatible applications (including <a href="https://www.cloudscale.ch/en/news/2021/06/18/single-sign-on-using-own-identity-provider">our cloud control panel</a>).</p>
<h3>Suits you</h3>
<p>The available managed services are as different as the providers behind them. Select whoever suits you and your team. Whether advice matters to you or you prefer a high degree of automation, whether you like a broad application spectrum from a single place, or if you are looking for a provider who is local to you, <strong>the perfect match awaits you in our marketplace.</strong></p>
<p>Even if it previously looked as if you had to do everything yourself at cloudscale, appearances can be deceptive! For years we have maintained close and constructive partnerships, and with the new marketplace you will be able to see more clearly that <strong>numerous professionally managed services are available</strong> on cloudscale, too. You can currently find a wealth of the most varied offers. We have already initiated discussions with further providers and will continuously add them to our website.</p>
<br/>
<p>Bring the best into your team! Although you have complete control with the cloudscale infrastructure, this does not mean that you have to handle everything yourself. With the <strong>managed services in our marketplace,</strong> you will find support exactly where you need it from a partner who is just right for you.</p>
<p>Successful together!<br/>
Your cloudscale team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Volume Snapshots Pave the Way Back
]]></title>
          <link>https://www.cloudscale.ch/en/news/2025/02/11/volume-snapshots-pave-the-way-back</link>
          <pubDate>Tue, 11 Feb 2025 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2025/02/11/volume-snapshots-pave-the-way-back</guid>
          <description>
            <![CDATA[<p>Everyone knows what to do: an optimized config here, a new version there, and never skip regular security updates. Although everything normally works out well, it is never possible to exclude problems completely. However, this should not be a reason for you not to keep your systems updated! You can now use our API to create snapshots of your volumes to ensure a return to a functioning system – just in case.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Intentionally &quot;freezing&quot; a state</h3>
<p>Create a snapshot of your volume before undertaking any – potentially risky – change. If things do not work as you would like them to after the change (and if a specific repair is not an option), you can revert the volume to the snapshot, i.e. restore the content of the volume to where it was before the change. A volume snapshot is a <strong>point-in-time image of a volume as a whole</strong> (including all programs and data, as well as boot loader, partitioning, etc.) with the bytes exactly as they are on the virtual hard drive.</p>
<p>It goes without saying that snapshots are not only useful when the horse has already bolted. When testing changes on a lab setup before their productive deployment, you can <strong>use snapshots to run through the process as many times as you would like</strong> in order to optimize e.g. scripts or other parameters.</p>
<p>All it takes to create a snapshot is a simple HTTPS request to our API, e.g.:</p>
<pre><code class="language-plaintext">curl -i -H &quot;Authorization: Bearer YourApiTokenGoesHere&quot; -F name=&quot;pre-dist-upgrade&quot; -F source_volume=&quot;2db69ba3-1864-4608-853a-0771b6885a3a&quot; https://api.cloudscale.ch/v1/volume-snapshots
</code></pre>
<p>As always, you will find all the supported requests and attributes in our detailed <a href="https://www.cloudscale.ch/en/api/v1#volume-snapshots">API documentation</a>.</p>
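<p>To check the result, you can also list the snapshots in your project with a simple GET request – a minimal sketch using the same token as above (see the API documentation for the exact response fields):</p>
<pre><code class="language-shell"># List all volume snapshots visible to this API token.
curl -H &quot;Authorization: Bearer YourApiTokenGoesHere&quot; https://api.cloudscale.ch/v1/volume-snapshots
</code></pre>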
<h3>Technical details and tips</h3>
<p>Snapshots are <strong>crash-consistent:</strong> if you created a snapshot at a point in time X and now revert the volume to it (e.g. after a failed software upgrade), the server will behave as if the volume had been unplugged at time X and only plugged back in again now. Write caches and other data that were not yet on the volume at time X cannot be restored this way. It is worth clarifying whether and how your setup can handle a state of this kind. It may be advisable to stop sensitive services before creating a snapshot, to run a <code>sync</code>, or to shut down the server completely.</p>
<p>Where <strong>more than one volume</strong> is involved, please also note that although each volume snapshot is crash-consistent on its own, the datasets will not refer to exactly the same point in time when two or more volumes are snapshotted during live operation. To avoid such differences, create the snapshots while the server is shut down.</p>
<p>By the way, if you have upscaled the volume since creating a snapshot and then return to the snapshot, the volume will <strong>automatically be reduced to the size it was</strong> when the snapshot was created.</p>
<p>Finally, volumes can only be deleted once no snapshots of them remain. If a snapshot of a root volume still exists, the corresponding server cannot be deleted either. Therefore, if required, <strong>delete any snapshots before you delete volumes or servers.</strong></p>
<h3>Two additional points</h3>
<p>As usual at cloudscale, snapshots are charged to the second. The storage space costs <strong>only half the regular price of NVMe SSD or bulk volumes</strong> (depending on the volume to which the snapshot refers). You will find the costs of your snapshots listed separately in the &quot;Billing&quot; area of the cloud control panel.</p>
<p>Snapshots make it possible to restore one or more volumes of a server to an earlier state. However, please be aware that they <strong>do not replace a proper backup.</strong> On the one hand, snapshots are stored in the same storage cluster as the original volumes here at cloudscale, so a potential failure might affect both. For maximum security and independence, we recommend that you always keep a copy of your important data outside our infrastructure. On the other hand, snapshots are designed to restore a volume as a whole; it is not possible to restore only selected data from a snapshot.</p>
<br/>
<p>Volume snapshots are the perfect solution to ensure that you can <strong>return to an earlier state within the shortest amount of time after changes</strong> – whether for tests, training or as a safety net when upgrading critical servers. Quick to create and inexpensive, a snapshot may be able to save you a great deal of hassle. The &quot;undo button&quot; where it really matters!</p>
<p>Game over? Bonus life!<br/>
Your cloudscale team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Object Storage: Lower Price and Practical Information
]]></title>
          <link>https://www.cloudscale.ch/en/news/2025/01/30/object-storage-lower-price-and-practical-information</link>
          <pubDate>Thu, 30 Jan 2025 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2025/01/30/object-storage-lower-price-and-practical-information</guid>
          <description>
            <![CDATA[<p>cloudscale has provided S3-compatible object storage at each location for some time now. Here, costs are based solely on actual occupancy and use of storage, and will be noticeably reduced from 2025-02-01 onwards. You can also find out about the technical background and optimal use of object storage.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-objects-lower-price.png"/><h3>Cheaper storage space from 2025-02-01</h3>
<p>Costs for object storage are deducted from your account balance once a day shortly after midnight (in the local time zone of Zurich). These costs are <strong>made up of three components:</strong> the number of API requests (CHF 0.005 per 1000 requests), outbound network traffic (CHF 0.02 per GB; inbound traffic is free of charge), and the storage space used (averaged over the day).</p>
<p>The storage space component usually makes up the largest share of the overall costs. From 2025-02-01, storage space used will <strong>only cost CHF 0.001 per GB and day, which is 66% less than previously.</strong> This makes our object storage even more attractive, e.g. for off-site backups of your important data. The three cost components will continue to be calculated individually and precisely, then added together and only rounded up to whole centimes in a final step, before being deducted from your credit.</p>
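<p>The calculation described above can be sketched as follows, using the listed prices as of 2025-02-01 (a simplified illustration; the exact internal calculation may differ in detail):</p>

```python
# Illustration of the billing model: three components priced individually,
# summed, and only then rounded up to whole centimes.
from decimal import Decimal, ROUND_CEILING

def daily_cost(requests_count, egress_gb, avg_storage_gb):
    """Approximate one day's object storage cost in CHF."""
    api = Decimal(requests_count) / 1000 * Decimal("0.005")   # per 1000 requests
    traffic = Decimal(str(egress_gb)) * Decimal("0.02")       # per GB outbound
    storage = Decimal(str(avg_storage_gb)) * Decimal("0.001") # per GB and day
    total = api + traffic + storage
    # Round up to whole centimes only in the final step.
    return total.quantize(Decimal("0.01"), rounding=ROUND_CEILING)

print(daily_cost(20000, 3.5, 500))  # 20k requests, 3.5 GB egress, 500 GB stored
```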
<h3>Insight into technical details</h3>
<p>At cloudscale, we rely on Ceph for our storage clusters: you can access its S3-compatible HTTPS-API directly on <code>*.objects.rma|lpg.cloudscale.ch</code>, e.g. for uploading and downloading objects. As we could not, however, use Ceph features for billing, we developed our own microservice, which is called &quot;rgw-metrics&quot;. <strong>rgw-metrics collects usage data from our Ceph storage clusters,</strong> thus allowing exact, usage-based billing, on the one hand, and the display of current and past usage data in our cloud control panel and via the API, on the other.</p>
<img src="https://static.cloudscale.ch/img/news-objects-lower-price-ca84c89754af.png" alt="Usage data collected by rgw-metrics, displayed in the control panel." caption="Usage data collected by rgw-metrics, displayed in the control panel."/>
<p>rgw-metrics runs independently of the software providing the control panel and API and saves the collected usage data as a time series. <strong>We recently rewrote rgw-metrics:</strong> in addition to switching from Flask to Django, which we also use elsewhere, we have now implemented a containerized setup. A further important aim of this change was improved efficiency. Further <a href="https://www.cloudscale.ch/en/engineering-blog/2025/01/29/improving-metrics-collection-for-object-storage">insights into rgw-metrics are available from Julian</a> in our engineering blog.</p>
<h3>Useful object storage features</h3>
<p>In the most basic case, you create a bucket and then use one of the numerous S3-compatible tools to upload and download objects and to delete them again at the end. However, our object storage is also suitable for demanding setups, given that <strong>Ceph supports a wealth of S3 features</strong> such as ACLs, versioning and policies for individual access rights.</p>
<p><strong>Example 1:</strong> Additional read-only access to your bucket</p>
<p>To set up read-only access to your bucket (e.g. &quot;my-bucket&quot;) for a person or application, create an additional objects user via the control panel or API and make note of the displayed user ID (e.g. &quot;11111111...88888888&quot;). Then create a file as follows and save it locally, e.g. under <code>policy.json</code>:</p>
<pre><code class="language-plaintext">{
  &quot;Version&quot;: &quot;2012-10-17&quot;,
  &quot;Statement&quot;: [{
    &quot;Effect&quot;: &quot;Allow&quot;,
    &quot;Principal&quot;: {&quot;AWS&quot;: &quot;arn:aws:iam:::user/1111111122222222333333334444444455555555666666667777777788888888&quot;},
    &quot;Action&quot;: [&quot;s3:ListBucket&quot;,&quot;s3:GetObject&quot;],
    &quot;Resource&quot;: [
      &quot;arn:aws:s3:::my-bucket&quot;,
      &quot;arn:aws:s3:::my-bucket/*&quot;
    ]
  }]
}
</code></pre>
<p>Now add this policy to your bucket by means of e.g. the <code>s3cmd</code> tool:</p>
<pre><code class="language-plaintext">s3cmd setpolicy policy.json s3://my-bucket
</code></pre>
<p>The additional objects user can now use their own credentials (access key and secret key) to list and read the objects in &quot;my-bucket&quot;, but cannot change or delete them.</p>
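<p>If you manage several buckets or users, it can be handy to generate such policy documents programmatically. A small sketch (the helper name is ours, not part of any official tooling) that produces the same JSON as above:</p>

```python
# Generate the read-only bucket policy shown above for a given
# objects user ID and bucket name. Helper name is illustrative.
import json

def read_only_policy(user_id: str, bucket: str) -> str:
    """Render a read-only (list + get) bucket policy as a JSON string."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam:::user/{user_id}"},
            "Action": ["s3:ListBucket", "s3:GetObject"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
        }],
    }, indent=2)

policy = read_only_policy(
    "1111111122222222333333334444444455555555666666667777777788888888",
    "my-bucket",
)
```

<p>Write the result to <code>policy.json</code> and apply it with <code>s3cmd setpolicy</code> as shown above.</p>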
<p><strong>Example 2:</strong> Configuring CORS headers</p>
<p>You can make objects publicly accessible, e.g. by means of:</p>
<pre><code class="language-plaintext">s3cmd --acl-public setacl s3://my-bucket/my-font.woff2
</code></pre>
<p>The object is then directly available (without access key and secret key authentication) at <code>https://my-bucket.objects.lpg.cloudscale.ch/my-font.woff2</code>.</p>
<p>However, when integrating it into a website – e.g. https://www.example.com – the browser may not load the file due to the same-origin policy. In this case, help is available in the form of CORS (cross-origin resource sharing) headers that you configure on your bucket. To do this, create a file as follows and save it locally, e.g. under <code>cors.xml</code>:</p>
<pre><code class="language-plaintext">&lt;CORSConfiguration&gt;
  &lt;CORSRule&gt;
    &lt;AllowedOrigin&gt;https://www.example.com&lt;/AllowedOrigin&gt;
    &lt;AllowedMethod&gt;GET&lt;/AllowedMethod&gt;
  &lt;/CORSRule&gt;
&lt;/CORSConfiguration&gt;
</code></pre>
<p>Then transfer this configuration to your bucket by means of e.g. s3cmd:</p>
<pre><code class="language-plaintext">s3cmd setcors cors.xml s3://my-bucket
</code></pre>
<p>When visiting https://www.example.com, the browser will fetch https://my-bucket.objects.lpg.cloudscale.ch/my-font.woff2 in CORS mode (for non-simple requests, an additional &quot;preflight&quot; request is sent first) and send the HTTP header</p>
<pre><code class="language-plaintext">Origin: https://www.example.com
</code></pre>
<p>in the request. If the sent origin matches the one configured on the bucket, the object storage adds the headers</p>
<pre><code class="language-plaintext">access-control-allow-origin: https://www.example.com
access-control-allow-methods: GET
</code></pre>
<p>to its response, thus signalling to the browser that the file may be loaded in the context of this website.</p>
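<p>Conceptually, the matching behaves like this toy model (a deliberate simplification; real CORS matching also covers wildcards, multiple rules, and further headers):</p>

```python
# Toy model of the CORS matching described above: a matching Origin
# yields the allow headers, anything else yields none.
def cors_headers(origin, allowed_origins, allowed_methods=("GET",)):
    """Return the CORS response headers for a matching Origin, else none."""
    if origin in allowed_origins:
        return {
            "access-control-allow-origin": origin,
            "access-control-allow-methods": ", ".join(allowed_methods),
        }
    return {}
```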
<br/>
<p>Our object storage covers the most varied use cases, from storing your off-site backups and serving as a storage backend for your applications to sharing files. <strong>From 2025-02-01 you will also benefit from significantly cheaper storage space.</strong> What more could one ask for?</p>
<p>&quot;Object-oriented&quot; in the best possible sense,<br/>
Your cloudscale team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[cloudscale x Cyberlink
]]></title>
          <link>https://www.cloudscale.ch/en/news/2024/12/12/cloudscale-x-cyberlink</link>
          <pubDate>Thu, 12 Dec 2024 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2024/12/12/cloudscale-x-cyberlink</guid>
          <description>
            <![CDATA[<p>Cyberlink Ltd acquires 40 percent of cloudscale.ch Ltd – A strong signal for the future of the Swiss cloud industry.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-thomas-knuesel-manuel-schweizer.webp"/><p>Cyberlink Ltd, a leading Swiss managed service provider for cloud and connectivity solutions, acquires 40 percent of cloudscale.ch Ltd. With this strategic investment, Cyberlink is strengthening its position in the Swiss cloud market and, together with cloudscale, is expanding its offering with innovative, locally rooted cloud solutions.</p>
<p>&quot;The collaboration with cloudscale enables us to strategically expand our cloud services and provide even more comprehensive support to companies with the highest standards of data security and flexibility,&quot; explains Thomas Knüsel, CEO of Cyberlink Ltd. &quot;By storing data in Swiss data centers and using customizable solutions, we meet the high compliance standards of industries such as the financial and healthcare sectors.&quot;</p>
<p>The partnership allows Cyberlink to access cloudscale&#x27;s cloud-native technologies. The cloud infrastructures of cloudscale are based on powerful open source software and offer virtual servers, load balancers and object storage for demanding projects. Customers also benefit from seamless integrations with DevOps tools such as Ansible and Terraform or the ability to manage cloud services directly via a user-friendly control panel or APIs.</p>
<img src="https://static.cloudscale.ch/img/news-thomas-knuesel-manuel-schweizer-1d74bfab3104.webp" alt="Thomas Knüsel (left), CEO of Cyberlink Ltd, and Manuel Schweizer, CEO of cloudscale.ch Ltd." caption="Thomas Knüsel (left), CEO of Cyberlink Ltd, and Manuel Schweizer, CEO of cloudscale.ch Ltd."/>
<p>One example of the technological innovation is the jointly developed SCION Cloud, a revolutionary solution designed specifically for industries with the highest security, availability and compliance requirements. The SCION Cloud combines cloudscale&#x27;s cutting-edge and top-certified cloud platform with Cyberlink&#x27;s highly available SCION Internet for the fastest and easiest access to any common isolation domain in Switzerland.</p>
<p>For cloudscale, the partnership means new growth opportunities. &quot;With Cyberlink as a strong partner, we can further expand our market presence and accelerate the ongoing development of our services,&quot; emphasizes Manuel Schweizer, CEO of cloudscale.ch Ltd. &quot;Together, we are creating solutions that are specifically tailored to the needs of the Swiss market.&quot;</p>
<p>The partnership marks an important step for the future of the Swiss IT sector. By combining technological expertise and local service, both companies are strengthening Switzerland&#x27;s digital infrastructure and creating added value for companies that depend on high-performance and legally compliant cloud solutions.</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[The Engineering Blog – by and for Engineers
]]></title>
          <link>https://www.cloudscale.ch/en/news/2024/11/29/engineering-blog-by-and-for-engineers</link>
          <pubDate>Fri, 29 Nov 2024 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2024/11/29/engineering-blog-by-and-for-engineers</guid>
          <description>
            <![CDATA[<p>If you are interested in technical details and if you sometimes wish you could watch our professionals at work, then our new engineering blog is just the thing for you. This is where members of our cloudscale team share unfiltered insider knowledge from a personal perspective. Find out more about their tricky challenges, their tips and favourite tools as well as general insights into their work in the &quot;engine room&quot; of our cloud.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-fridge.jpg"/><h3>Information to meet every need</h3>
<p><strong>Communication and transparency are important to us here at cloudscale.</strong> This is why, every few weeks, you will find up-to-date information, e.g. the presentation of new features or tips on how to use our service optimally, in the &quot;News&quot; area of our website (and in your inbox, if desired). In addition, we publish upcoming maintenance work and information about any incidents on our <a href="https://www.cloudscale-status.net">https://www.cloudscale-status.net</a> status page. This is also where you can subscribe directly to status notifications on the channel of your choice.</p>
<p>We are now launching the <a href="https://www.cloudscale.ch/en/engineering-blog">engineering blog</a> as a complement to these announcements from a &quot;company perspective&quot;. This is where you can find out first-hand what is on our employees&#x27; minds and what matters to them. <strong>Written by techies for techies,</strong> these posts contain interesting details from our stack, personal evaluations relating to their work, and undoubtedly also valuable inputs for your own projects. The posts are generally available either in German or in English, depending on the language they were written in by our engineers.</p>
<h3>Extensive range of reading material</h3>
<p>It goes without saying that our engineering blog will not start with an empty list. You will be able to find <strong>three very different articles</strong> there from the outset.</p>
<p>Actual data storage in our Ceph storage clusters occurs using a large number of OSDs that occasionally, e.g. in the case of maintenance work, need to be restarted. <a href="https://www.cloudscale.ch/en/engineering-blog/2024/11/27/staggering-restarts-in-ceph">Find out from Denis</a>, whose blog post includes latency graphs and the Python script used, how we have <strong>minimized the performance impact of these restarts with a staggered approach</strong> (in English).</p>
<img src="https://static.cloudscale.ch/img/news-fridge-3b8f3e2080a8.jpg" alt="Our new fridge: not (yet?) a topic in the engineering blog, but for Lukas it inspired an unusual analogy." caption="Our new fridge: not (yet?) a topic in the engineering blog, but for Lukas it inspired an unusual analogy."/>
<p><a href="https://www.cloudscale.ch/en/engineering-blog/2024/11/28/filling-the-fridge-my-onboarding-at-cloudscale">Lukas has provided you</a> with a very different type of insight. He is the newest member of the cloudscale team and has been with us for a few weeks. In an unusual analogy to a recently completed &quot;fridge project&quot;, he describes <strong>how he experienced onboarding with us</strong> as well as the challenges he has faced and successes he has enjoyed so far (in English).</p>
<p>Our cloud is based on established open-source projects such as OpenStack and Ceph. However, we also write and maintain our own software on a scale where it is important to maintain oversight. <strong>Effectively sifting through code and finding specific code locations with precision</strong> is an important ability. In his post, <a href="https://www.cloudscale.ch/en/engineering-blog/2024/11/29/searching-python-code-based-on-its-ast-using-xpath">Michi explains</a> how he uses pyastgrep when regular expressions are no longer effective (in German).</p>
<p>If you have any questions or input on any of the posts, e-mail <a href="mailto:engineering-blog@cloudscale.ch">engineering-blog@cloudscale.ch</a>, <strong>where the engineers behind the posts are looking forward to your feedback.</strong></p>
<br/>
<p>Find out how varied the people and tasks behind our cloud services are. Discover how our engineers experience their work, what inspires them, and how they find solutions. Even if you do not operate your own cloud, you will <strong>find exciting new perspectives in our engineering blog – by techies for techies!</strong></p>
<p>Ready for the deep dive?<br/>
Your cloudscale team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Exactly Your Style – the cloudscale Zip-up Hoodie
]]></title>
          <link>https://www.cloudscale.ch/en/news/2024/10/25/cloudscale-zip-up-hoodie</link>
          <pubDate>Fri, 25 Oct 2024 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2024/10/25/cloudscale-zip-up-hoodie</guid>
          <description>
            <![CDATA[<p>We love cloudscale, and it shows. We were delighted when our new zip-up hoodies were delivered recently – just in time for the cooler weather outside. But of course, our hoodies aren&#x27;t just comfortable and warm: the attention to detail that you know from our cloud services also makes this versatile piece of clothing something very special. Whether for the office, leisure or when you&#x27;re out and about: get yourself some cloudscale to wear and show that you value quality as much as we do.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-zip-hoodie.jpg"/><h3>From idea to reality</h3>
<p>The story of our cloudscale hoodie began by chance: some computer graphics had been produced, meant to illustrate the many ways in which <a href="https://www.cloudscale.ch/en/news/2024/02/01/cloudscale-reloaded-in-best-hands">our new logo</a> could be used. One of the images showed a fictitious <strong>hoodie that immediately sparked enthusiasm</strong> – so much so that we really wanted to have it in reality.</p>
<p>If it says &quot;cloudscale&quot; on it, then of course <strong>our usual quality standards should also be behind it.</strong> Together with the manufacturer, we managed to get the eyelets and cord ends in green and blue to match our logo. For the zipper, we insisted on YKK, the brand that is generally regarded as the gold standard – after all, putting it on should be a pleasure even in the long term. For the hood, too, we were only happy with the fit in the second iteration.</p>
<img src="https://static.cloudscale.ch/img/news-zip-hoodie-f7e47a6f4867.jpg" alt="Our model is 186 cm and wears size XL." caption="Our model is 186 cm and wears size XL."/>
<h3>Now it&#x27;s your turn</h3>
<p><strong>You too can proudly carry a piece of cloudscale with you,</strong> with our &quot;circle&quot; logo on the chest and a subtle print on the left upper arm. Write to us at <a href="mailto:merchandise@cloudscale.ch">merchandise@cloudscale.ch</a> and tell us the quantity and size (S, M, L, XL) of your desired hoodies as well as the delivery address.</p>
<p>Until the end of December 2024, you can benefit from a <strong>special price of CHF 50 instead of 80 per hoodie</strong> (plus postage and packaging, CHF 10 flat for up to 3 hoodies). Shipping only in Switzerland and while stocks last, payable by bank transfer within 10 days. The prices quoted include value added tax.</p>
<br/>
<p>For our fans: well thought-out features, online and offline. <strong>Order your cloudscale zip-up hoodie now</strong> and in just a few days you can slip it on – we suspect you won&#x27;t want to take it off again.</p>
<p>Always well dressed,<br/>
Your cloudscale team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Guest article: The SCION Cloud
]]></title>
          <link>https://www.cloudscale.ch/en/news/2024/10/24/guest-article-the-scion-cloud</link>
          <pubDate>Thu, 24 Oct 2024 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2024/10/24/guest-article-the-scion-cloud</guid>
          <description>
            <![CDATA[<p>Two Swiss tech pioneers offer new solutions for the financial &amp; healthcare sectors. – The digital world never stands still and in Switzerland, two companies are working tirelessly to provide developers and DevOps teams in compliance-sensitive industries with the infrastructure they desire: cloudscale.ch &amp; Cyberlink. With the introduction of the SCION Cloud, the Swiss cloud and connectivity providers set new standards in terms of protection, availability and compliance. But how did this come about and what does it mean for the community? A look behind the scenes with cloudscale.ch CEO Manuel Schweizer, Cyberlink CEO Thomas Knüsel and Cyberlink Lead Network Engineer Matthias Schwarzenbach.</p>]]>
          </description>
          <content:encoded><![CDATA[<p>By Max Wellenhofer, initially published at <a href="https://www.cyberlink.ch/de/news/die-scion-cloud-2-schweizer-tech-pioniere-bieten-neue-losungen-fur-den-finanzsektor-das-gesundheitswesen-623">Cyberlink AG</a></p>
<p>In compliance-sensitive industries, the system environments are often highly complex and in Switzerland we expect ICT integrators not only to comply with formal regulations, but also to implement them consistently. In order to meet the requirements of industries such as the financial sector, healthcare and insurance, many companies turn to international consulting know-how. This often brings large foreign hyperscalers into play, whose solutions are convincing on paper, but whose data processing often takes place at locations outside Switzerland.</p>
<p>cloudscale.ch and Cyberlink offer a Swiss alternative. With the SCION Cloud, they have jointly developed a solution that keeps all data and workloads entirely in Switzerland while meeting the highest security and compliance requirements. The architecture is specially designed to meet the requirements of financial service providers, healthcare providers and insurance companies and is backed exclusively by Swiss infrastructure.</p>
<p>With the SCION Cloud, developers and DevOps teams can run their workloads in Switzerland without compromising on technology or concerns about data transfers. The regulations in this country exist for good reason – with the SCION Cloud, companies can meet them efficiently and reliably.</p>
<h3>Two Swiss pioneers accept the challenge</h3>
<p><strong>cloudscale.ch Ltd.</strong> (<a href="https://www.cloudscale.ch">cloudscale.ch</a>) is an Infrastructure-as-a-Service provider with a strong focus on open source technologies and a self-service platform that enables customers, among other things, to configure data center services via API. The close ties to the DevOps and cloud-native community are no coincidence. Many cloudscale.ch employees used to be consumers of such services themselves. &quot;We focus strongly on the user instead of simply developing products that we ourselves think are good,&quot; emphasizes <strong>Manuel Schweizer</strong>, CEO. Having a background in network technology, he became intensively involved with networks and their optimization early on in his career. As a board member of the Swiss Internet Exchange (SwissIX), he gained valuable experience in the Swiss networking scene. From the very beginning, cloudscale.ch has focused on the needs of customers with high requirements in terms of availability and information security. The company consistently focuses on open source technologies.</p>
<p><strong>Cyberlink Ltd</strong> (<a href="https://www.cyberlink.ch">cyberlink.ch</a>), an innovative service provider in the Swiss ICT market for almost three decades, has established itself as a leading provider of connectivity and cloud services. Under the leadership of <strong>Thomas Knüsel</strong>, CEO, who has been part of the company since 2012, Cyberlink has strengthened its focus on business customers and highly secure network solutions. The implementation of SCION in particular has enabled Cyberlink to further reinforce its role as a pioneer in secure networks. &quot;We emerged as a pioneer in the Internet sector and have specialized in infrastructure services in the cloud and connectivity area. Our goal as a managed service provider is to offer added value by continuously taking care of the infrastructure so that our customers can focus on their core business,&quot; says Knüsel. &quot;In addition, we have always believed that synergies are very relevant, which is how we became aware of cloudscale.ch through SCION. Manuel and our lead engineer in the connectivity area, Matthias Schwarzenbach, worked together to develop a solution that integrates SCION into a modern cloud environment. We were convinced of SCION&#x27;s capabilities right from the start. It offers exactly the security features that many of our customers need, and through our partnership with cloudscale.ch we can also offer this technology in the cloud.&quot;</p>
<p>Both companies share a commitment to staying at the cutting edge and using innovative technologies that enable Swiss companies to meet the increasing demands for protection and compliance. The identification of SCION as the next generation of the Internet was the common denominator that got this partnership started.</p>
<h3>The discovery of SCION: Technical curiosity and SIX</h3>
<p><a href="https://scion-architecture.net/">SCION (Scalability, Control, and Isolation On Next-Generation Networks)</a> represents a fundamental advancement of existing network technologies. Originally developed at ETH Zurich, SCION offers significant advantages over conventional network architectures: SCION is a revolutionary Internet architecture that offers path control, increased protection and improved availability.</p>
<p>Under the umbrella of the SCION Association, the new Internet architecture is being further developed and advanced as an open standard. In practice, the software implementation of Anapaya Systems Ltd – itself an innovative Swiss SME – is used in particular.</p>
<p>For Schweizer, it was personal curiosity to begin with: &quot;What does this thing actually do? Who needs it? As a technician, I wanted to familiarize myself with the technology first.&quot; The key stimulus came when Fritz Steinmann from SIX announced the replacement of the old Finance IPNet with a SCION-based network. &quot;This was a great opportunity for us, as the target group for this technology matches the target group of cloudscale.ch very well,&quot; Schweizer explains. With existing ISO certification and ISAE reports, the first steps had already been taken towards the financial and healthcare sectors. Schweizer: &quot;I saw the potential of integrating SCION into our cloud and thus creating real added value for our customers.&quot;</p>
<p>For companies with high security and compliance requirements, SCION with its benefits is an absolute game changer.</p>
<h3>Technical challenges and the partnership with Cyberlink</h3>
<p>Implementing SCION in a cloud environment was not a trivial task. &quot;I quickly realized that we couldn&#x27;t do it alone,&quot; Schweizer admits. In addition, customers often had different locations with different providers, which required a consistent solution.</p>
<p>&quot;Together with Matthias, the Network Engineering Lead at Cyberlink, we were able to engineer and thoroughly test everything over a period of six months,&quot; Schweizer reports enthusiastically.</p>
<h3>The technical implementation in detail</h3>
<p>Manuel Schweizer describes the challenge of integrating SCION into the cloudscale.ch platform clearly and pragmatically: &quot;We didn&#x27;t want to have to install individual hardware for every single customer, let alone keep hardware in stock that might never be used.&quot; This meant that, instead of dedicated hardware, an efficient and scalable virtual solution had to be created.</p>
<p>In areas that are sensitive from a regulatory perspective, such as the <a href="https://www.cyberlink.ch/scion/ssfn">Secure Swiss Finance Network (SSFN)</a>, meeting the requirement for provider redundancy was one of the biggest challenges. &quot;It would not be enough to simply connect two ISPs to a redundant SCION core cluster,&quot; says Schweizer. Edge provider redundancy is required in order to be admitted to SIC/euroSIC. In a physical world, this would have been implemented with separate connections to two different ISPs. In a virtual world, however, this redundancy for the last mile is provided by the cloud provider of choice.</p>
<p>In our case, Cyberlink and cloudscale.ch jointly guarantee the geo-redundant connection of each virtual edge to the two SCION cores. As these are in turn connected to different ISPs, the entire setup is highly available and therefore also fulfills the requirement for provider redundancy at the virtual level. This entire connection remains within the cloudscale.ch infrastructure before the traffic leaves the core. &quot;With this clear communication and preference, we then went to SIX and pitched,&quot; Schweizer explains.</p>
<p>The discussions with SIX and ultimately with the Swiss National Bank (SNB) led to a crucial point: the SNB, as the supervisor of the financial sector, had the final say on the definitive architecture. The team quickly realized that core cluster redundancy alone was not enough. &quot;We are establishing provider redundancy from the core upwards,&quot; Schweizer explains. The two SCION cores are distributed across different data centers and connected to other SCION participants via different upstream connections as well as the SwissIX Internet Exchange. This architecture corresponds to the best-practice setup, which meets the SNB&#x27;s requirements and ensures the highest compliance standards.</p>
<p>Matthias Schwarzenbach was delighted with the collaboration with Manuel Schweizer: &quot;There were no reservations. Manuel and I sat together in the meeting room for days on end, exchanging ideas and developing solutions. It was this creative, almost informal atmosphere that kept us going.&quot; Thomas Knüsel also remembers: &quot;The two of them virtually outbid each other with ideas and so the solution reached a level that neither Manuel nor Matthias could have achieved on their own.&quot;</p>
<p>Today, cloudscale.ch is able to connect any isolation domains (ISDs) such as the SSFN or the SSHN (Secure Swiss Health Network) to its cloud. New SCION edges can be provisioned in less than 24 hours – and soon in under 2 hours. A virtual machine with the Anapaya image is started and connected to the SCION cores with two separate VLANs. Additional fiber optic connections or layer 2 services are not required. The solution is fully multitenant-capable and as close to a cloud-native solution as is currently technically possible.</p>
<p>As part of the conceptual design with SIX and SNB, the question arose as to whether the two cores should be operated in a cluster setup. &quot;The advantages were clear,&quot; says Schweizer. In the cluster structure, the two cores share the path information. This interconnection increases redundancy and ensures that maintenance work can be carried out without interrupting the connection of the edge. This solution was ultimately approved and has already proven itself in practice.</p>
<p>Schweizer sums up: &quot;We managed to convince the SNB of our solution. The SCION Cloud from cloudscale.ch and Cyberlink meets the highest compliance requirements and offers a fully compliant and high-performance cloud infrastructure.&quot;</p>
<h3>Advantages for developers and DevOps engineers</h3>
<p>The SCION Cloud offers clear advantages for the technical community: engineers can use the familiar features of cloudscale.ch while benefiting from all the advantages of SCION, without having to worry about the network connection.</p>
<ul>
<li><strong>Path control and traffic engineering</strong>: Developers can explicitly control the data path through the network, opening up new possibilities for optimization and security strategies.</li>
<li><strong>Integration with DevOps tools</strong>: The SCION Cloud is fully API-managed and can be seamlessly integrated with tools such as Terraform or Ansible.</li>
<li><strong>Security and compliance</strong>: The highest security standards are met, which is particularly important for applications in regulated industries.</li>
</ul>
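<p>As a sketch of what this integration can look like, the following writes a minimal Terraform configuration using the cloudscale provider; the server name, flavor, image slug and SSH key are placeholder values, not a prescribed SCION setup:</p>
<pre><code class="language-bash"># Sketch with placeholder values; adapt name, flavor, image and key.
cat > main.tf <<'EOF'
terraform {
  required_providers {
    cloudscale = {
      source = "cloudscale-ch/cloudscale"
    }
  }
}

resource "cloudscale_server" "app" {
  name        = "scion-app-1"
  flavor_slug = "flex-8-2"
  image_slug  = "ubuntu-24.04"
  ssh_keys    = ["ssh-ed25519 AAAA... user@example.com"]
}
EOF
</code></pre>
<p>After <code>terraform init</code> and <code>terraform apply</code>, such a configuration can be versioned, reviewed and reused like any other code.</p>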
<p>One specific example is the integration of Kubernetes clusters into a SCION-embedded environment. &quot;From my point of view, we can offer our customers the best technical solution currently available on the market,&quot; Schweizer is convinced.</p>
<h3>Compatible with any isolation domain</h3>
<p>The SCION Cloud is already compatible with every ISD, natively multitenant-capable, flexibly scalable and can be connected as required. The elegant combination with Cyberlink&#x27;s connectivity services enables users to access applications operated at cloudscale.ch via the appropriate ISD either physically or virtually.</p>
<h4>Financial sector: Secure Swiss Finance Network (SSFN)</h4>
<p>Since the announcement that the Finance IPNet would be replaced by a SCION-based network in September 2024, it has been clear where developments in the financial sector were heading. In this young but currently best-established market, the SCION Cloud stands out not least due to its approval by the SNB. Banks and financial service providers must ensure that their networks meet the highest standards while remaining flexible enough to satisfy new requirements. The SCION Cloud offers a comprehensive solution: it allows control over the data path and enables users to utilize an any-to-any architecture. This means that they are no longer reliant on point-to-point connections such as MPLS. With SCION, financial service providers can establish connections that are not only secure but also flexible – they can decide which partners they want to connect to and which paths they want to use for data transmission. As soon as the new technology is used not only to replace old systems, but also to optimize legacy meshes and consistently reduce superfluous leased lines, this flexibility holds enormous potential for cost savings and significantly simplifies the management of the network infrastructure. The SCION Cloud is the ideal choice, especially for fintechs and banks that want to access cloud services or move their own applications to the cloud.</p>
<h4>Healthcare: Secure Swiss Health Network (SSHN)</h4>
<p>Cyberlink and cloudscale.ch also see great potential in the healthcare sector in particular: health insurance software providers and other players in the healthcare sector could be integrated into the SSHN to protect the entire infrastructure stack. The SCION Cloud makes it possible to host healthcare applications in a secure, fully compliant and scalable manner. The focus here is on the protected and reliable networking of medical practices, pharmacies and hospitals. However, a key challenge for the Secure Swiss Health Network (SSHN) was user access to the SCION Cloud. How do you get healthcare providers onto this network within a reasonable period of time? How do you equip all practices and clinics in Switzerland with the necessary hardware to connect to the SSHN? As a combination of the best that cloudscale.ch and Cyberlink have to offer, the SCION Cloud opens up two options here: For larger facilities such as hospitals, Cyberlink&#x27;s Managed SCION Edge is used, which is installed locally and ensures the highest security standards. For smaller practices that may not require the same infrastructure, there is an alternative solution with the &quot;Anapaya Gate&quot;. This gate allows access to the SCION world via existing home networks. While it offers a lower level of security, it remains a viable option for less critical applications. This comprehensive connectivity portfolio offers all participants in the ecosystem secure access to sensitive data and a wide range of healthcare applications.</p>
<h4>Other compliance-sensitive industries and their ISDs</h4>
<p>Other sectors will follow with dedicated ISDs. With its accredited and compliant IT infrastructure, the SCION Cloud is also ready for the strict regulatory requirements of these GRC-sensitive industries, such as the energy and payment sectors, as well as other critical areas. The SCION Cloud ensures that compliance requirements are reliably met while offering the scalability that companies need to grow in the market.</p>
<h3>SCION is the Internet of the future</h3>
<p>Cyberlink and cloudscale.ch have a clear vision: &quot;SCION is the Internet of the future. With cloudscale.ch as a top Swiss cloud provider and Cyberlink with 30 years of experience in connecting Switzerland, we have the best prerequisites to help shape this future.&quot; With a new technology, this team managed to satisfy one of the strictest auditors in Switzerland, and did so on time. This would never have been possible without the right partner. A partnership between two top Swiss providers that you can rely on 100%, even in the most challenging crises.</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[10 Years of cloudscale
]]></title>
          <link>https://www.cloudscale.ch/en/news/2024/09/30/10-years-of-cloudscale</link>
          <pubDate>Mon, 30 Sep 2024 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2024/09/30/10-years-of-cloudscale</guid>
          <description>
            <![CDATA[<p>Let&#x27;s go back to 2014 when there were already clouds, but we were missing a user-friendly option with a data location in Switzerland. This is why, after the idea had been floating around in various people&#x27;s minds for a while, we founded cloudscale.ch AG in 2014. We embarked on our journey properly when we started work in our first office in Zurich Oerlikon at 09:09 h on 09/09 (there was no way we were going to leave such a memorable moment to chance!). A lot has happened since then and, as is fitting for a tenth anniversary, we would like to use this opportunity to review some of the highlights.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-early-tweets.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-kart.jpg"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-office-neugasse.jpg"/><p><strong>2015:</strong> The first weeks after the founding of the company were spent preparing the main tools for our work. Once this was complete, product development itself started. Based on OpenStack and Ceph (open source was important to us from the outset), we built a cloud setup where virtual servers could be created and deleted in a fully automated manner within seconds. We programmed the corresponding control panel ourselves to ensure that using it remained as simple and intuitive as we had imagined – <strong>our beta phase users were impressed.</strong></p>
<p><strong>2016:</strong> The big moment arrived in early January when we celebrated general availability, aka our grand opening. <strong>It was finally possible for anyone to create an account with cloudscale using self-service</strong> and start using our virtual servers immediately. It goes without saying that we did not stop there and with a growing team, we prepared the technical basis of our control panel for upcoming further development and added features such as IPv6, private networks and bulk storage to our cloud.</p>
<img src="https://static.cloudscale.ch/img/news-early-tweets-015ed88caf0d.png" alt="Tweets from the early days: In 2014, on 09.09. at 09:09 AM, cloudscale sets off on its journey. At the beginning of 2016, the cloud is ready for the general public." caption="Tweets from the early days: In 2014, on 09.09. at 09:09 AM, cloudscale sets off on its journey. At the beginning of 2016, the cloud is ready for the general public."/>
<p><strong>2017:</strong> Storage and tooling were two of the main topics of 2017. We progressively added S3-compatible object storage to our cloud offer in order to do justice to a growing need for scalable storage space with purely use-based costs. The switch to NVMe disks resulted in enhanced performance for our SSD volumes. <strong>And users of widespread DevOps tools, such as Ansible and Terraform, were now able to manage their infrastructure &quot;as code&quot;</strong> and thus automatically provision whole server landscapes in a reproducible manner.</p>
<p><strong>2018:</strong> &quot;Meltdown&quot; and &quot;Spectre&quot; made the general public aware of the fact that security vulnerabilities can not only occur in software, but also in hardware. Further CPU vulnerabilities were to follow and at cloudscale, we paid particular attention to always taking the necessary countermeasures quickly. This is why, for example, SMT or Hyper-Threading has been deactivated on all our compute hosts since 2019 in order to pre-empt attacks based on these elements. It goes without saying that <strong>our move to a new, considerably more spacious office location was a true highlight of 2018.</strong></p>
<p><strong>2019:</strong> The highlights continued into 2019, with many of them coming under the heading of &quot;information security&quot;. Among other things, we achieved ISO/IEC 27001, 27017 and 27018 certification, and with the opening of our second cloud location in Lupfig (Canton Aargau), we created the basis for geo-redundant setups. In addition, a good level of power was provided with the introduction of a new generation of compute hosts based on AMD and the launch of our &quot;Plus&quot; flavors, which ensure that <strong>dedicated computing power of up to 112 physical CPU cores is permanently available to a virtual server.</strong></p>
<p><strong>2020:</strong> Although we were already set up for mobile work, it took a while for the pandemic-enforced model of working from home 24/7 to establish itself. Today we are extremely grateful for the hybrid model we developed, and on Mondays we all meet in person in the office, but other than that everyone can decide for themselves. However, offshoring and nearshoring are not considerations for us. <strong>With cloudscale being a completely Swiss cloud provider, you can rely on the fact that the infrastructure is not only located in Switzerland, but that it is also administered from here.</strong> In line with this, we have been a partner of the &quot;swiss hosting&quot; label since it was launched in 2020.</p>
<p><strong>2021:</strong> The comprehensive redesign of our cloud control panel was a long-awaited milestone for everyone who uses their cloud resources collaboratively. Organizations, projects, teams and various collaboration features support work in the cloud, irrespective of how it is &quot;organized&quot;. Single sign-on with GitHub or one&#x27;s own IDP makes logging in simpler and more secure. <strong>The collaboration features have also made cloudscale into an ideal partner for larger organizations, too,</strong> as well as for everyone looking after projects for their own customers in the cloud.</p>
<img src="https://static.cloudscale.ch/img/news-kart-19097a7b04e0.jpg" alt="Not always just at the screen: The cloudscale crew at a team event in Winterthur." caption="Not always just at the screen: The cloudscale crew at a team event in Winterthur."/>
<p><strong>2022:</strong> We could hardly be more central after <strong>our 2022 move into our current office right by the main railway station in Zurich – with a team that has grown to more than ten people.</strong> A lot happened in terms of our cloud offer, too, with e.g. the expansion of the collaboration features, more flexible management of custom images, and many new compute flavors, so that you always have the appropriate infrastructure for CPU- or RAM-heavy workloads. The new, completely linear price model means that the price for a certain quantity of CPU and memory resources does not depend on whether it is a single large virtual server or several small ones. In addition, an ISAE 3000 report has been available every year since 2022, if required.</p>
<p><strong>2023:</strong> In line with our focus on availability, we provided our customers with a load balancer service as a further feature to enable the creation of resilient setups. While our services continue to be billed to the second, we have fundamentally simplified the underlying mechanism. We have also started accepting payments using the popular Swiss TWINT payment solution. And <strong>we prepared for SCION, the network architecture of the next generation</strong> that promises enhanced reliability, confidentiality and control of communication in critical areas such as the finance and healthcare sectors.</p>
<img src="https://static.cloudscale.ch/img/news-office-neugasse-8bea8b79ba27.jpg" alt="Our current office location by the main railway station in Zurich." caption="Our current office location by the main railway station in Zurich."/>
<p><strong>2024:</strong> With enhanced transmission capacity between our cloud locations, increased flexibility for Kubernetes setups thanks to CCM, and an improved overview of costs and private networks, we are continuously improving and expanding the cloud services for our customers in the current year, too. Our new, fresh appearance is particularly eye-catching. We not only completely redesigned and expanded our website, but also subtly adjusted the control panel and API documentation in line with the new design. <strong>The clear lines reflect our unambiguous claim: at cloudscale, you are in best hands.</strong></p>
<br/>
<p><strong>Which further highlights will we encounter on our journey? Let&#x27;s find out</strong> – together with a strong team and, of course, our customers who will be able to continue to count on us as a close and approachable partner in future, too. Speaking of our team: if you think you are a good match for us, please <a href="https://www.cloudscale.ch/de/jobs">let us know</a>!</p>
<p>On the journey with you,<br/>
Your cloudscale team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Combining Firewall with Floating IPs
]]></title>
          <link>https://www.cloudscale.ch/en/news/2024/08/30/combining-firewall-with-floating-ips</link>
          <pubDate>Fri, 30 Aug 2024 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2024/08/30/combining-firewall-with-floating-ips</guid>
          <description>
            <![CDATA[<p>Floating IPs help you to increase the availability of your application and make it easier to manage your setup. They can be moved between virtual servers so that incoming traffic is always routed to the desired server. They are also retained if you want or need to replace a server completely. Use these advantages not only for servers that directly provide a service, but also for your firewalls.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Using firewall distributions with Floating IPs</h3>
<p><a href="https://www.cloudscale.ch/en/news/2019/02/27/firewall-distribution-at-a-mouse-click">Two dedicated firewall distributions</a> are available at cloudscale: OPNsense and pfSense CE. Choose one of these images to set up a virtual server as a firewall between the open Internet and a private network. You can then <strong>configure this firewall conveniently in a web-based administration interface</strong> and adapt it to your requirements.</p>
<p>To ensure that the firewall also processes traffic that arrives via a Floating IP, <strong>the Floating IP must be entered in the administration interface.</strong> You can find the setting in OPNsense under &quot;Interfaces -&gt; Virtual IPs&quot;, in pfSense CE under &quot;Firewall -&gt; Virtual IPs&quot;. Enter the Floating IP and the prefix length (<code>/32</code>) here. In most cases, &quot;Type: IP Alias&quot; and the assignment to the &quot;WAN&quot; interface should be the appropriate setting; more details on the individual options can be found in the <a href="https://docs.opnsense.org/manual/firewall_vip.html">documentation for OPNsense</a> and <a href="https://docs.netgate.com/pfsense/en/latest/firewall/virtual-ip-addresses.html">pfSense CE</a>. By the way: By default, OPNsense and pfSense CE do not respond to pings; add &quot;ICMP Echo request&quot; to the firewall rules to change this if desired.</p>
<p>If you want to migrate an existing server behind your firewall that already provides a service using a Floating IP, you can – once everything is prepared – simply move the Floating IP from the server to the firewall as the last step. If you are not yet using a Floating IP, we recommend adding it to the existing server first and then adjusting the DNS entries: This way, <strong>your service will remain available under the old and new IP addresses in parallel</strong> while the new DNS entries are gradually picked up.</p>
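<p>Moving the Floating IP itself is a single API call. The following sketch reassigns a Floating IP to another server; the token, Floating IP and server UUID are placeholder values (see the API documentation for details):</p>
<pre><code class="language-bash">curl -i \
  -H "Authorization: Bearer 11112222333344445555666677778888" \
  -H "Content-Type: application/json" \
  -X PATCH \
  --data '{"server": "11111111-3333-5555-7777-999999999999"}' \
  https://api.cloudscale.ch/v1/floating-ips/192.0.2.123
</code></pre>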
<h3>Tips for modifying existing setups</h3>
<p><strong>If your existing server should no longer be directly accessible from the Internet after the migration, you can remove the &quot;public&quot; interface from the server.</strong> To do this, you need an API token with &quot;Write access&quot; as well as the UUID of the server and the private network to which it should be connected. You can then issue the necessary API call via the command line as follows:</p>
<pre><code class="language-bash">curl -i -H &quot;Authorization: Bearer 11112222333344445555666677778888&quot; -H &quot;Content-Type: application/json&quot; -X PATCH --data &#x27;{&quot;interfaces&quot;: [{&quot;network&quot;: &quot;11111111-2222-3333-4444-555555555555&quot;}]}&#x27; https://api.cloudscale.ch/v1/servers/11111111-3333-5555-7777-999999999999
</code></pre>
<p>NB: It is also possible to add a public interface to the server again later; in this case, the server will be assigned <strong>a new public IP address.</strong> You can find more information about our API in the <a href="https://api.cloudscale.ch">API documentation</a>.</p>
<p>After making changes to the interfaces, <strong>it is advisable to briefly check the names of the interfaces.</strong> If they are not permanently assigned, they could change after a reboot (e.g. the private network from <code>ens4</code> to <code>ens3</code>) and lead to connectivity issues. The Linux distributions rely on different tools here; keywords are, for example, &quot;netplan&quot; and &quot;udev rules&quot;.</p>
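<p>On distributions using netplan, one possible way to pin interface names is to match on the MAC address; the MAC address and names in this sketch are placeholders for your own values:</p>
<pre><code class="language-bash"># Sketch: pin the private interface name via its MAC address.
# Place the file under /etc/netplan/ and activate it with "netplan apply".
cat > 99-pin-interfaces.yaml <<'EOF'
network:
  version: 2
  ethernets:
    private:
      match:
        macaddress: "aa:bb:cc:dd:ee:ff"
      set-name: ens4
      dhcp4: true
EOF
</code></pre>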
<p>If a server is no longer directly accessible from the Internet, you will also need a new way to access it. <strong>Choose the solution that suits you best, e.g. a VPN or port forwarding</strong> – depending on your firewall strategy. It is also possible to first connect to the firewall via SSH and then continue from its command line to the respective server.</p>
<p>Finally, just in case, we recommend <strong>setting a root password on your server with which you can log in &quot;locally&quot;, but not via SSH.</strong> In the event of boot or connectivity problems, you can then log in to the server via the VNC console in our control panel and resolve the issue. Alternatively (and somewhat more complicated), you can also <a href="https://www.cloudscale.ch/en/news/2020/01/14/use-your-own-iso-usb-images">start the server with a temporarily connected live Linux</a> for troubleshooting purposes.</p>
<br/>
<p>cloudscale offers a range of features relating to the security and availability of your setups. Use and combine these according to your requirements and preferences. Even with existing setups, you remain flexible and can, for example, <strong>replace the direct Internet connection of your servers with a dedicated firewall complete with Floating IP.</strong></p>
<p>Security to suit your taste!<br/>
Your cloudscale team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Securing QCOW2 Image Imports
]]></title>
          <link>https://www.cloudscale.ch/en/news/2024/07/31/securing-qcow2-image-imports</link>
          <pubDate>Wed, 31 Jul 2024 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2024/07/31/securing-qcow2-image-imports</guid>
          <description>
            <![CDATA[<p>In early July, a security vulnerability in OpenStack was disclosed, which could be exploited through custom images in QCOW2 format. In addition to the measures we took immediately, we are now undertaking small changes in the custom image import process in order to best protect security for us and our customers in future, too. Our aim here is to inform you about the background and about what you need to be aware of with automated imports.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Overview and timeline</h3>
<p>At cloudscale, several popular Linux distributions are available for new servers. Thanks to so-called custom images, even more individual setups are possible: as a customer, you upload a hard disk image, which can be used to fill the root volume of your new virtual servers initially, including any required tools and settings. This is where the <strong>security vulnerability was found in OpenStack,</strong> the open source project our cloud offer is based on. Uploading specially crafted images in QCOW2 format made it possible to access data in systems that play a part in our import process of custom images.</p>
<p><strong>As an immediate measure, we had temporarily deactivated the import of QCOW2 images.</strong> Specifically, this means that all images were treated as &quot;raw&quot;. Images that were actually in QCOW2 format could therefore not be used to boot virtual servers. While we were aware that this would cause additional work for some customers, as they need to convert images into raw format before importing them, we decided to implement this temporary step in the short term in the interest of security for us and our customers.</p>
<p>Timeline:</p>
<ul>
<li><strong>December 2020:</strong> Introduction of custom images, support for raw format</li>
<li><strong>May 2022:</strong> Additional support for images in QCOW2 format</li>
<li><strong>February 2023:</strong> Specification of image format no longer required for import</li>
<li><strong>2024-07-02:</strong> <a href="https://security.openstack.org/ossa/OSSA-2024-001.html">OSSA-2024-001</a> / <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-32498">CVE-2024-32498</a> security vulnerability disclosed</li>
<li><strong>2024-07-03:</strong> Import of images in QCOW2 format blocked as an immediate measure</li>
<li><strong>2024-07-31:</strong> Import of images in QCOW2 format possible again, adaptation of API</li>
</ul>
<h3>Background of the vulnerability</h3>
<p>Image files in QCOW2 format have several special features, e.g. the fact that they tend to require less space. In addition, they can not only contain the actual data of a data carrier, but also <strong>references to externally located files.</strong> The core of the OpenStack security vulnerability was inadequate checks in some OpenStack components that processed references of this kind and their target files when processing QCOW2 images. As custom images are always saved in raw format at cloudscale, the vulnerability affected in particular automatic recognition of the format to be imported and the conversion from QCOW2 to raw. However, according to our analysis, there are no indications that the vulnerability was actually exploited at cloudscale.</p>
<p>The image conversion, for which OpenStack uses <code>qemu-img convert</code>, has in the meantime been <strong>patched so that it can no longer be exploited, even using specially crafted images.</strong> This means that we can once again enable the import of QCOW2 images here at cloudscale. However, in the OpenStack developer community, there is not (yet) a reliable solution for automatic format recognition using <code>qemu-img info</code>, which is why at cloudscale we will in future no longer attempt automatic image format recognition.</p>
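<p>For illustration, format recognition hinges on the QCOW2 magic bytes: every QCOW2 file begins with the ASCII characters &quot;QFI&quot; followed by the byte <code>0xfb</code>. The following minimal shell sketch checks a local file in this way (purely illustrative; it is not the check performed at cloudscale):</p>
<pre><code class="language-bash"># Purely illustrative: identify a QCOW2 file by its magic bytes.
printf 'QFI\373' > demo-image.img   # \373 is 0xfb in octal: a QCOW2-style header
if [ "$(head -c 3 demo-image.img)" = "QFI" ]; then
  echo "qcow2"
else
  echo "raw"
fi
</code></pre>
<p>A crafted file can carry such a header while its payload claims otherwise, which is why relying on automatic detection alone is risky.</p>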
<h3>Use of QCOW2 images at cloudscale</h3>
<p><strong>Importing QCOW2 images is possible at cloudscale again effective immediately, but the format has to be indicated during imports.</strong> When <a href="https://www.cloudscale.ch/en/api/v1#import-a-custom-image">importing via API</a>, the attribute <code>source_format</code> is used for this purpose. After being deprecated in February 2023, this attribute has now been officially reintroduced in light of recent events. While the attribute can be left out for images in raw format, <code>source_format</code> is required for QCOW2 images. We are aware that this change in API behavior is not fully backward-compatible, but the chosen solution aims to minimize the required adaptations.</p>
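<p>An import with an explicit format declaration could then look as follows; the token, image name and URL are placeholder values (see the API documentation for all available parameters):</p>
<pre><code class="language-bash">curl -i \
  -H "Authorization: Bearer 11112222333344445555666677778888" \
  -H "Content-Type: application/json" \
  -X POST \
  --data '{"url": "https://example.com/images/my-appliance.qcow2", "name": "my-appliance", "source_format": "qcow2", "user_data_handling": "pass-through", "zones": ["rma1"]}' \
  https://api.cloudscale.ch/v1/custom-images/import
</code></pre>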
<p><strong>The current versions of our modules/plug-ins support the <code>source_format</code> attribute,</strong> which means they can be used to import QCOW2 images. You will find answers to all the important questions relating to this topic in the FAQs below.</p>
<br/>
<p>We are delighted that, after the exclusion of QCOW2, which became necessary at short notice, we can once again enable the import of custom images in this format. Unfortunately, in the process, it was not possible to avoid the reintroduction of an old attribute in the relevant API call. <strong>It was important to us not to leave any known vulnerabilities open,</strong> while causing as little work as possible in terms of adaptation. We would ask for your understanding where adaptations are nonetheless necessary.</p>
<p>Considerate and pragmatic.<br/>
Your cloudscale team</p>
<h3>FAQs</h3>
<p><strong>What will happen to existing custom images?</strong><br/>
Nothing. The change only applies to the import process. Existing images are not affected.</p>
<p><strong>What will change for imports using the web-based control panel (control.cloudscale.ch)?</strong><br/>
You now need to specify the format of the image in the &quot;Source Format&quot; drop-down.</p>
<p><strong>My import is showing the error message &quot;Import could not be processed&quot;, what do I need to do?</strong><br/>
You probably tried to import a file as &quot;QCOW2&quot; that is not a file of this kind. Check the format and the URL.</p>
<p><strong>I only import raw images. Am I affected?</strong><br/>
No. However, ensure that the <code>source_format</code> field is either not transmitted or contains <code>&quot;raw&quot;</code>. An error will occur for other values.</p>
<p><strong>I use Terraform, what do I need to do?</strong><br/>
In order to be able to import QCOW2 images, you need to use the <a href="https://www.terraform.io/docs/providers/cloudscale/index.html">&quot;cloudscale&quot; provider</a> from version v4.4.0 onwards and specify the <code>import_source_format</code> for new images. You can continue to use existing images with no change; the format should not be specified retrospectively.</p>
<p><strong>I use <a href="https://docs.ansible.com/ansible/latest/collections/cloudscale_ch/cloud/index.html#plugins-in-cloudscale-ch-cloud">cloudscale_ch.cloud for Ansible</a>, what do I need to do?</strong><br/>
Ensure that the correct <code>source_format</code> is transmitted. You do not need to update the Ansible collection.</p>
<p><strong>I use <a href="https://github.com/cloudscale-ch/cloudscale-go-sdk">cloudscale-go-sdk</a> in my Go application, what do I need to do?</strong><br/>
If you use it to import custom images, you need to ensure that the correct <code>source_format</code> is transmitted. The dependency does not need to be updated.</p>
<p><strong>I use <a href="https://github.com/cloudscale-ch/cloudscale-python-sdk">cloudscale-python-sdk</a> in my Python application, what do I need to do?</strong><br/>
If you use it to import custom images, you need to ensure that the correct <code>source_format</code> is transmitted. The dependency does not need to be updated.</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[NetBox as a "Source of Truth"
]]></title>
          <link>https://www.cloudscale.ch/en/news/2024/06/28/netbox-as-a-source-of-truth</link>
          <pubDate>Fri, 28 Jun 2024 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2024/06/28/netbox-as-a-source-of-truth</guid>
          <description>
            <![CDATA[<p>A cloud service such as cloudscale is based on the most varied of systems, which means that maintaining an overview is essential. In this context, we have recently started using NetBox, which collects a wealth of information and settings relating to all components in a central location and provides this inventory as a basis for engineering and operations.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Structured data instead of individual text files</h3>
<p>In the open source environment, plain text files, e.g. in a YAML or JSON format, are frequently used for (software) configurations and other data that are <strong>evaluated by means of an automated process.</strong> For a long time, our Ansible setups, which we use to automate a majority of our installation and maintenance processes, obtained their data about our inventory from a large number of such YAML files.</p>
<p>With our recent switch to <a href="https://github.com/netbox-community/netbox">NetBox</a>, connections that were only more or less apparent in these YAML configs are now explicitly represented in the NetBox database, ultimately providing additional protection against errors. You can <strong>navigate through the inventory</strong> via the web-based interface of NetBox and, for example, see the occupancy of server racks or check which devices share a chassis.</p>
<h3>Comprehensive and flexible</h3>
<p>NetBox supports a wealth of information about every device. In addition to location, position in rack, device type and network details, it is also possible to record e.g. whether the cooling air flows from the front of the device to the back or vice versa. For cloudscale, the most relevant recorded data are, in particular, <strong>the role that a device fulfills,</strong> i.e. whether it is a storage server, a switch or a monitoring system, to name but a few.</p>
<p>The tasks that we automated in Ansible then <strong>use the API to obtain the hosts they are to be applied to from NetBox;</strong> the same goes for host-specific configs, such as MAC addresses. The interfaces required for this are provided by the inventory plugin contained in the <a href="https://github.com/netbox-community/ansible_modules">NetBox Ansible Collection</a>. We use <a href="https://docs.ansible.com/ansible/latest/plugins/cache.html">caching</a> to improve the performance for more complex playbooks, too. The required data are collected once at the beginning of the playbook run, maintained for the whole run and then discarded to ensure that the next run also starts with the most up-to-date data.</p>
<p>In our precisely structured Ansible setups, (physical and virtual) hosts are frequently allocated to more than just a single Ansible group, although NetBox is limited to at most one role per device. This is why, in order to be able to represent the existing structure in NetBox, we decided to only allocate the primary child group (lowest group in the inheritance tree) as a role for the device. Any <strong>further child groups are recorded in NetBox using &quot;tags&quot;.</strong> We then transform these special tags into groups by means of an internally developed Ansible plugin and add these as a variable to the host in question at playbook runtime.</p>
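<p>As a sketch, the tag-to-group transformation described above could look like this (all names, including the <code>ansible:</code> tag prefix, are hypothetical and do not reflect our internal plugin):</p>

```python
# Hypothetical sketch: derive Ansible group membership for a device from
# its NetBox role (the primary group) plus specially prefixed tags.
def groups_for_device(device: dict, tag_prefix: str = "ansible:") -> list:
    groups = [device["role"]]  # the primary child group comes from the role
    for tag in device.get("tags", []):
        if tag.startswith(tag_prefix):  # only marked tags become extra groups
            groups.append(tag[len(tag_prefix):])
    return groups

device = {"role": "storage", "tags": ["ansible:backup-target", "front-to-back"]}
print(groups_for_device(device))  # ['storage', 'backup-target']
```

<p>Tags without the special prefix, such as airflow information, are simply ignored by such a transformation.</p>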
<h3>More than networking</h3>
<p>NetBox is officially aimed at network engineers. However, its claim to offer &quot;a cohesive, extensive, and accessible data model for all things networked&quot; makes it clear that <strong>the area of application is considerably broader.</strong> This means that our customers can also represent their virtual servers in NetBox, together with properties, such as the specific cloud location, IP addresses, private networks, etc. In combination with <a href="https://www.cloudscale.ch/en/news/2020/12/21/cloud-orchestration-with-ansible-collections">our Ansible Collection</a>, it is even possible to import existing cloud setups into NetBox.</p>
<p>As a &quot;source of truth&quot;, NetBox can ultimately also take on a managing role, with <strong>new virtual servers being first recorded in NetBox</strong> and then, based on this data, automatically created and configured by Ansible.</p>
<br/>
<p>NetBox represents a powerful tool to further automate tasks surrounding the inventory. Thanks to it being open source, we can also contribute our own improvements back to the community. <strong>With NetBox as a &quot;source of truth&quot;, everyone involved can rely on correct data about the inventory,</strong> both via Ansible playbooks and in the web-based GUI.</p>
<p>A reliable basis.<br/>
Your cloudscale team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Convenient Overview of Private Networks
]]></title>
          <link>https://www.cloudscale.ch/en/news/2024/05/31/convenient-overview-of-private-networks</link>
          <pubDate>Fri, 31 May 2024 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2024/05/31/convenient-overview-of-private-networks</guid>
          <description>
            <![CDATA[<p>Not every server should be accessible directly from the Internet. Segmentation into several networks may make particular sense in terms of security, e.g. to shield database servers or to filter traffic on a central firewall. Our cloud control panel allows you to maintain an overview, even when there are numerous private networks, thus minimizing the risk of configuration errors.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-networks-ports.png"/><h3>Private networks for every requirement</h3>
<p>The first step to using a private network has been kept intentionally simple at cloudscale: it takes just a few clicks during launch to connect virtual servers to an existing or a new private network. The connection to the private network can either be set up in addition to the direct connection to the Internet or can replace the latter. The private network is <strong>exclusively available to you from layer 2,</strong> which allows you, for example, to freely configure the IP addresses on the involved servers. Jumbo frames are supported for enhanced efficiency in the private network, but it goes without saying that you can customize the default MTU of 9000 bytes if required.</p>
<p>In a private network, a DHCP service is available as standard, allocating IP addresses from a randomly selected <code>/24</code> within <code>172.16.0.0/12</code> to requesting servers. <strong>In addition, there are many <a href="https://www.cloudscale.ch/en/news/2020/04/03/mastering-the-private-network-with-managed-dhcp">other configuration options</a> available.</strong> These allow you, for example, to create private networks without a DHCP service or to define any other IP range (at least /24) for the DHCP service. DHCP functionality can also be deactivated for individual servers or the IP address can be allocated in a fixed manner in the private network instead of the DHCP service selecting it at random. Together with the IP address, the DHCP service can also provide servers with a gateway and/or a list of DNS resolvers, thus making local configuration of this information redundant.</p>
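<p>The default subnet selection can be illustrated with Python&#x27;s standard <code>ipaddress</code> module (a minimal sketch, not our actual implementation):</p>

```python
import ipaddress
import random

# Illustration only: choose a random /24 inside 172.16.0.0/12, mirroring
# how a default DHCP subnet for a new private network is picked.
supernet = ipaddress.ip_network("172.16.0.0/12")
candidates = list(supernet.subnets(new_prefix=24))  # all 4096 possible /24s
subnet = random.choice(candidates)

print(len(candidates))  # 4096
print(subnet.subnet_of(supernet), subnet.prefixlen)  # True 24
```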
<h3>Consistently clear layout</h3>
<p>The &quot;Networks&quot; area in our cloud control panel ensures that you always maintain an overview, even when there are several private networks. We have expanded this progressively over recent months so that <strong>all the relevant details of your private networks are clearly summarized</strong> and can, in part, be customized directly.</p>
<p>Under &quot;Settings&quot;, in addition to the freely selectable network name, you will also find those details that <strong>relate directly to the layer 2 network,</strong> in particular the MTU. This setting determines the packet size that is actually possible in the private network; if the DHCP service is activated, it is also communicated to the servers in the DHCP response.</p>
<p>The &quot;Subnets&quot; tab summarizes the information <strong>associated with the DHCP service.</strong> This includes the IP address space selected for the network in CIDR notation and the range from which the DHCP service selects the addresses, provided that you do not specify addresses yourself. The &quot;Gateway&quot; and &quot;DNS Servers&quot; values do not affect the behavior of the DHCP service directly, but are part of the DHCP response used to configure your servers.</p>
<br/>
<img src="https://static.cloudscale.ch/img/news-networks-ports-31811c30b362.png" alt="List of all devices involved in the private network."/>
<p>Finally, under &quot;Ports&quot;, you will see all the <strong>devices involved in your private network,</strong> including their MAC address and the IP address that the DHCP service may have reserved for the device. It also goes without saying that you can use other IPv4 or IPv6 addresses on your virtual servers. In addition to your virtual servers, the two DHCP servers that manage your subnet are listed as well.</p>
<p>Any <a href="https://www.cloudscale.ch/en/news/2023/04/28/load-balancer-as-a-service">load balancers</a> are also included in the list. Due to it being designed for high availability, a <strong>load balancer consists of two individual servers,</strong> which is why it appears with two ports in your private network. The IP addresses visible here are the same as the ones that can be seen in your backend&#x27;s logs, unless you are using the proxy(v2) protocol for the load balancer pool in question, which transmits the actual source IP of the client to your backend along with the plain TCP data. If the VIP address of the load balancer is also located in the private network, it will be listed under the ports, too.</p>
<br/>
<p>Security considerations mean that it often makes sense only to connect those systems that actually need to communicate with each other. And if your setup develops further over time, &quot;Networks&quot; in our cloud control panel will always allow you to <strong>see which networks are available and where which devices are involved.</strong> This helps you reduce the risk of errors and makes your life even easier with the centrally managed DHCP options.</p>
<p>Networks made easy.<br/>
Your cloudscale team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Detailed Breakdown of Past Costs
]]></title>
          <link>https://www.cloudscale.ch/en/news/2024/04/25/detailed-breakdown-of-past-costs</link>
          <pubDate>Thu, 25 Apr 2024 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2024/04/25/detailed-breakdown-of-past-costs</guid>
          <description>
            <![CDATA[<p>At cloudscale, you only book and pay for what you actually use. This means, for example, that you can change the compute flavor of your servers at any time or even extend volumes during live operations. Despite our simple price structure, this may result in &quot;skewed&quot; prices or potentially in project costs that change from day to day. With the new billing report, you can now see a detailed list of the actual to-the-second costs.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-billing-report.png"/><h3>Complete control over costs</h3>
<p>If you want to invoice your customers for the costs of specific cloud projects, compile costs for bookkeeping purposes, or simply maintain an overview, you can use the new billing report in our cloud control panel and pull a <strong>report on the costs for the required time period.</strong> You can select specific days as the start and end dates.</p>
<p>An initial overview shows the overall costs of each project. A <strong>separate page for each project</strong> then shows the total for the individual types of resources (e.g. all servers, all volumes, etc.) and the exact costs of the specific cloud resources (i.e. individual virtual servers, volumes, etc.). Projects and resources that no longer exist are also shown.</p>
<h3>To the second</h3>
<p>The listed cloud resources can be &quot;opened up&quot;: if there was a change during the selected time period, you can see the <strong>individual to-the-second time segments and the associated costs.</strong> Although the most common reason for a change is when servers and volumes are scaled, the price adjustments from the year before last and this year are also shown. For technical reasons, there is also a &quot;cut&quot; in time segments if a project was moved from a personal account <a href="https://www.cloudscale.ch/en/news/2021/05/27/new-control-panel-organizations-projects-teams">to an organization</a>. Time stamps marked in blue/green represent the start or end of the time period shown and do not mean a change relating to service or price.</p>
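<p>The per-segment arithmetic can be sketched as follows (an illustration of to-the-second proration, not our billing code; the prices and dates are invented for the example):</p>

```python
from datetime import datetime

# Hedged illustration of to-the-second proration: a per-24h price is
# charged proportionally to the length of each time segment.
def segment_cost(price_per_24h: float, start: datetime, end: datetime) -> float:
    return price_per_24h * (end - start).total_seconds() / 86_400

# Example: a server scaled from a CHF 2.00 to a CHF 4.00 flavor at noon.
day = segment_cost(2.00, datetime(2024, 4, 1, 0, 0), datetime(2024, 4, 1, 12, 0)) \
    + segment_cost(4.00, datetime(2024, 4, 1, 12, 0), datetime(2024, 4, 2, 0, 0))
print(round(day, 2))  # 3.0
```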
<br/>
<img src="https://static.cloudscale.ch/img/news-billing-report-4b11995db099.png" alt="New billing report with a list of the actual to-the-second costs."/>
<p>All time stamps in the billing report refer to the Europe/Zurich time zone, irrespective of <a href="https://www.cloudscale.ch/en/news/2022/10/18/did-you-know-our-control-panel">your settings</a>. This is due to us switching to <a href="https://www.cloudscale.ch/en/news/2023/08/09/simplified-billing-mechanism">billing per calendar day</a> last August, where costs are collected throughout the day and taken in full from your account or organization credit shortly after midnight (in the local time zone of Zurich). It goes without saying that you can <strong>also select older time periods</strong> in the billing report, but for virtual servers it is not possible in all cases to show the compute flavor that was active at that time.</p>
<h3>Two hints</h3>
<p>All projects and cloud resources in the billing report are <strong>shown using the name they currently have</strong> or last had (if they have been deleted in the interim). If you are uncertain, you can use the UUID, which is also shown, to check which resources these are specifically.</p>
<p>The <strong>&quot;BGP Announcements&quot;</strong> right at the bottom of the billing report are a little-known pro feature: if you already have your own IP space, but do not wish to run your own infrastructure, you can have the space configured as <a href="https://www.cloudscale.ch/en/news/2017/07/06/more-flexibility-with-floating-networks">Floating Networks</a> at cloudscale, thus making virtual servers accessible under your own IPs. Our support team is happy to advise you if required.</p>
<br/>
<p>Irrespective of how small or large your cloud project is, at cloudscale we want not only technological administration, but also pricing to be straightforward. The new billing report now allows you to retrospectively <strong>reconstruct the development and costs of your project in detail</strong> – for the exact time period required and at the level of your choice.</p>
<p>You can count on us.<br/>
Your cloudscale team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[K8s Cloud Controller Manager for cloudscale
]]></title>
          <link>https://www.cloudscale.ch/en/news/2024/03/05/cloud-controller-manager</link>
          <pubDate>Tue, 05 Mar 2024 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2024/03/05/cloud-controller-manager</guid>
          <description>
            <![CDATA[<p>Kubernetes setups at cloudscale can now interact even more closely with our infrastructure: Our Cloud Controller Manager (CCM) enables the enrichment of node metadata with information from our API as well as the automated use of our load balancer product.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Automatic metadata for nodes</h3>
<p>With our CCM, your K8s cluster now knows even more about the nodes involved. <strong>Metadata about the nodes is automatically retrieved from our cloud API,</strong> e.g. the IP address(es) of the respective virtual server or its geographical region/zone. It is also possible to distinguish whether a node has been deleted or simply switched off. The compute flavor is equally available; this makes it possible, for example, to place certain workloads specifically on nodes that use a &quot;Plus&quot; flavor and thus have the selected number of CPU cores available for exclusive use at all times.</p>
<p>Our CCM is available on <a href="https://github.com/cloudscale-ch/cloudscale-cloud-controller-manager">GitHub</a> and <strong>supports the three latest minor releases of Kubernetes.</strong> The corresponding documentation with examples and a helper to try out the CCM in a test cluster can also be found on GitHub.</p>
<h3>K8s services with our LBaaS feature</h3>
<p>In addition to more detailed information about the underlying infrastructure, the CCM also enables the automated management of our <a href="https://www.cloudscale.ch/en/news/2023/04/28/load-balancer-as-a-service">LBaaS feature</a>. To use this, set <code>type: LoadBalancer</code> on a <code>Service</code> in Kubernetes. The load balancer can make your service <strong>accessible either from the Internet or only in one of your private networks.</strong></p>
<p>The load balancer distributes incoming requests in the private network among the nodes of the K8s cluster, which do not require their own &quot;public&quot; interface. The permitted clients can be restricted to the desired IPs or IP ranges already at the load balancer level. In order to be able to <strong>recognize or log the IP addresses of the clients in the backend despite NAT,</strong> use the &quot;proxy&quot; or &quot;proxyv2&quot; protocol, which is supported by nginx, among others.</p>
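<p>A minimal <code>Service</code> manifest of this kind, built here as a plain Python dictionary, could look as follows; load-balancer options such as the pool protocol are configured via annotations, which are documented in the CCM repository and omitted here:</p>

```python
import json

# A plain Kubernetes Service of type LoadBalancer. With the CCM installed,
# applying such a manifest provisions a cloudscale load balancer in front
# of the pods selected by "selector".
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web"},
    "spec": {
        "type": "LoadBalancer",
        "selector": {"app": "web"},
        "ports": [{"port": 80, "targetPort": 8080, "protocol": "TCP"}],
    },
}
print(json.dumps(service, indent=2))
```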
<p>Our load balancer service is designed to support and simplify the operation of highly available services. However, please note that <strong>applying configuration changes via CCM may imply some downtime</strong> – approximate numbers for the different supported cases can also be found in the documentation.</p>
<br/>
<p><strong>Thanks to our CCM, tasks can be automated</strong> since your K8s cluster can rely on additional information about the infrastructure and on our load balancer service. This way, you not only increase the efficiency of your setup, but also the availability for your users.</p>
<p>Everything under control.<br/>
Your cloudscale team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[cloudscale Reloaded: in Best Hands
]]></title>
          <link>https://www.cloudscale.ch/en/news/2024/02/01/cloudscale-reloaded-in-best-hands</link>
          <pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2024/02/01/cloudscale-reloaded-in-best-hands</guid>
          <description>
            <![CDATA[<p>&quot;Simple, yet beautiful&quot; – our aim from the very beginning applies more than ever now that cloudscale has a new look. At first glance, it may seem as if everything has changed, but closer inspection reveals that our fresh appearance is simply an enhanced expression of what we have always been. We love it! How about you?</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Old and new</h3>
<p>Blue and green. Memorable lettering instead of far-fetched graphics. That is how we have always been known and how we want things to remain. <strong>There is, however, a striking difference:</strong> the new color tones look significantly fresher, and the clean lines of the font better reflect our identity.</p>
<p><strong>The &quot;o&quot; is particularly eye-catching:</strong> it has been simplified from a speedometer logo to a color gradient, which not only represents performance, but also stands for simplicity, scalability and dynamics. With its high recognition factor, it is a design element that can be used in manifold ways in other places. (For those with an eye for detail, the point where blue touches green reflects the angle of the letter &quot;c&quot;.)</p>
<h3>A unified whole</h3>
<p>The completely new design of the website is inspired by the colors and shapes of the lettering, thus creating an unmistakable appearance. <strong>The aim is for cloudscale to feel the same,</strong> in terms of first impression and ongoing cooperation alike.</p>
<p>It goes without saying that this also applies to daily use and to the engineering of sophisticated setups. It makes perfect sense for us to have <strong>adapted our cloud control panel and API documentation to the new design.</strong> Needless to say, we have in no way sacrificed the clear overview, and in addition to the new design, we have also further optimized usability.</p>
<h3>As approachable as ever</h3>
<p>Transparency has always been important to us. One aim of our newly designed website is to make relevant information even more accessible, and it goes without saying that we will continue to keep you updated with regard to new developments and our offers. Our contact options are also unchanged, so if you have any unanswered questions, please get in touch for a personal response. In line with our cloud being so uncomplicated, we will from now on simply call ourselves <strong>&quot;cloudscale&quot;</strong> without the &quot;.ch&quot; in everyday use.</p>
<br/>
<p>cloudscale stands for professionalism. Our new, consistent look provides an even better reflection of this standard to the outside, making it clear from the outset that, at cloudscale, your project is not simply in the cloud, but <strong>in best hands.</strong></p>
<p>Even more ourselves now,<br/>
Your cloudscale team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Upgrade to 2x 100 Gbps Between Cloud Regions
]]></title>
          <link>https://www.cloudscale.ch/en/news/2024/01/29/upgrade-200-gbps-between-cloud-regions</link>
          <pubDate>Mon, 29 Jan 2024 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2024/01/29/upgrade-200-gbps-between-cloud-regions</guid>
          <description>
            <![CDATA[<p>In today&#x27;s age of video streaming, increasingly large data quantities need to be transmitted at ever increasing speeds. However, fast connections are also essential for data transfer between servers, which is why we recently significantly upgraded the route-redundant direct connection between our two cloud regions Rümlang (RMA) and Lupfig (LPG).</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Route-redundant direct connection via dark fiber</h3>
<p>Just over four years ago, our second cloud site in Lupfig (Canton Aargau) <a href="https://www.cloudscale.ch/en/news/2019/11/06/geo-redundancy-with-two-cloud-locations">became operational</a> in order to enable geo-redundant setups for our customers. So that virtual servers would be able to communicate directly between the two sites without the data traffic leaving our network, we connected the sites using dark fiber. This means that instead of simply purchasing transmission capacities from a third-party provider, we <strong>rent unused &quot;dark&quot; glass fiber lines</strong> and use our own transceivers to &quot;illuminate&quot; them ourselves at the endpoints.</p>
<p>To achieve greater total capacity, we use coarse wavelength division multiplexing (CWDM), where <strong>laser light with different wavelengths</strong> (also known as &quot;colors&quot; in everyday parlance) is transmitted over one and the same glass fiber line. On one of these channels, we connected the core routers of our two sites with 10 Gbps, while two others, bundled together to 20 Gbps, used to connect the locations at switching fabric level.</p>
<p>For maximum availability, we duplicated the physical connection. The sites in Rümlang and Lupfig are connected by separate dark fiber lines <strong>on different, non-intersecting routes.</strong> This means that the connection remains available even if one of the fiber lines is damaged, e.g. during roadworks. Even within the data centers, we paid attention to detail with our cables entering the buildings via separate entrances, not intersecting after this point either and terminating in different racks. This redundancy allowed us to double the normally available capacity to 2x 10 Gbps between the core routers and to 4x 10 Gbps between the switching fabrics.</p>
<h3>Upgrade to 2x 100 Gbps</h3>
<p>We recently significantly extended the connection between the <a href="https://www.cloudscale.ch/en/news/2020/06/04/cumulus-linux-switch-paid-off">switching fabrics</a>: instead of 2x 10 Gbps there are now 100 Gbps available on each of the separate routes. By adapting the data rate to the one used inside the sites, less buffering is required, which <strong>further reduces latency of data transmission.</strong></p>
<p>The connections between the sites at switching fabric level are used, among other things, <strong>for data traffic between your virtual servers.</strong> This means that you automatically benefit from this upgrade if you, for example, periodically save an archive copy of your data off-site.</p>
<h3>Outlook</h3>
<p>The upgrade of the connections to a total of 2x 100 Gbps will generally benefit all those applications where relatively large quantities of data are exchanged between our cloud sites. In addition to increased &quot;breathing space&quot; for existing setups, the extended capacity also creates a <strong>foundation for future features</strong> such as off-site back-up of our customers&#x27; data.</p>
<p>At cloudscale.ch, we use redundancy at every level to avoid failures whenever possible. However, we nonetheless recommend that our customers make the most of the full potential of both our sites and create their setups in a geo-redundant manner. The recently significantly extended direct connection between our sites has also created a basis for <strong>rapid and reliable replication of your data.</strong></p>
<br/>
<p>A common definition states that the &quot;cloud&quot; is based on the three pillars of compute, storage and network. Although it is often the most inconspicuous of the three, the network is decisive – not only for standard operations, but in particular with regard to geo-redundancy. With our upgrade to 2x 100 Gbps between the two cloud regions Rümlang and Lupfig, you can <strong>build on a solid foundation,</strong> today as well as tomorrow.</p>
<p>Looking ahead, <br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[SCION: Network Architecture of the Next Generation
]]></title>
          <link>https://www.cloudscale.ch/en/news/2023/12/27/scion-network-architecture-of-the-next-generation</link>
          <pubDate>Wed, 27 Dec 2023 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2023/12/27/scion-network-architecture-of-the-next-generation</guid>
          <description>
            <![CDATA[<p>Cyberattacks are a daily topic in news reports and people have in the meantime become used to short-term disruption and interruptions. However, certain applications require a greater degree of availability and reliability than the architecture of today&#x27;s Internet can provide. SCION offers a new approach here: developed in Switzerland, it promises greater reliability, trust and control for the networking of market participants in critical areas, such as the financial and health sectors.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Internet: growth over the decades</h3>
<p>The Internet is omnipresent today and has become essential to almost all areas. It becomes particularly apparent how key this infrastructure is to our lives when cyberattacks and large-scale failures occur. Some such problems are facilitated by the <strong>fundamental architecture of the Internet,</strong> which it inherited from the early days of its 50-year history.</p>
<p>Certain areas have higher requirements in terms of reliability and security of their central communication infrastructure, in particular e.g. the financial and health sectors. <strong><a href="https://www.scion-architecture.net/">SCION</a> was developed with these sectors in mind at ETH Zurich</strong>. The full name of the routing protocol &quot;Scalability, Control, and Isolation On Next-generation networks&quot; already shows how it differs from the conventional Internet.</p>
<h3>Control and isolation with SCION</h3>
<p>The Internet, as a &quot;combination&quot; of many individual networks, really is a net: there are <strong>countless possible data transport paths</strong> between two communication partners A and B. The path the data actually take arises from a number of influencing factors and decisions by the providers involved, with A and B having little impact.</p>
<p>This is where SCION is completely different. Based on a variety of parameters, such as latency, jitter, packet loss and available bandwidth, network participants can <strong>prioritize or exclude certain paths</strong>. Thanks to constantly updated metrics, optimal routing can thus be ensured. Complete path control also allows the exclusion of certain path segments and specific providers.</p>
<p>If a network hub fails, data traffic within the Internet automatically finds a new path. This adaptation, however, requires time and in some cases, it may take several minutes for the affected data to flow again. <strong>SCION consistently relies on modern concepts</strong> to virtually avoid interruptions of this kind. If the currently preferred route fails, immediate diversion to a different route based on the defined specifications takes place.</p>
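<p>Conceptually, policy-based path selection can be sketched as follows (the data structures and policy are invented for illustration and do not correspond to any real SCION library API):</p>

```python
# Conceptual sketch only: select among known paths by metrics, the way
# SCION's path control allows.
paths = [
    {"id": "A", "latency_ms": 12, "loss_pct": 0.0, "via": ["ISP-1"]},
    {"id": "B", "latency_ms": 8,  "loss_pct": 0.0, "via": ["ISP-2"]},
    {"id": "C", "latency_ms": 5,  "loss_pct": 2.5, "via": ["ISP-3"]},
]

def pick_path(paths, excluded_providers=frozenset(), max_loss_pct=1.0):
    allowed = [
        p for p in paths
        if not excluded_providers & set(p["via"]) and p["loss_pct"] <= max_loss_pct
    ]
    # prefer the lowest-latency path among those satisfying the policy
    return min(allowed, key=lambda p: p["latency_ms"])

print(pick_path(paths)["id"])  # 'C' is excluded by the loss limit -> 'B'
```

<p>If the currently selected path becomes unavailable, the same policy can simply be re-evaluated over the remaining paths, which mirrors the immediate failover described above.</p>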
<p>Authentication of network participants is a further important feature of SCION. In completely separate &quot;isolation domains&quot; (ISDs), participants can rely on the fact that they <strong>only receive data traffic from legitimate, verified sources.</strong></p>
<h3>SCION at cloudscale.ch</h3>
<p>Thanks to its focus on reliability and trust, SCION is ideal for the financial sector. SIX, for example, is currently replacing its &quot;Finance IPNet&quot; with the SCION-based &quot;Secure Swiss Finance Network&quot; (SSFN). <strong>SCION is also already in use in the health and energy sectors.</strong></p>
<p>At cloudscale.ch, we are in the process of putting two SCION core routers into operation. This means that in future our customers will be able to <strong>participate in these secure networks directly with their cloud setups.</strong> If you are interested in SCION access, please contact us to discuss the next steps.</p>
<br/>
<p>SCION was developed for applications with requirements that can only be partially fulfilled by today&#x27;s Internet. Complete path control and the available metrics mean that participants regain the upper hand in terms of data flow. They also benefit from <strong>minimal down time and authentication of all communication partners.</strong> This is why we will soon also be enabling our customers to participate in various SCION-based networks or ISDs.</p>
<p>With the best connections, <br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Price Adjustment for Compute Flavors
]]></title>
          <link>https://www.cloudscale.ch/en/news/2023/11/16/price-adjustment-for-compute-flavors</link>
          <pubDate>Thu, 16 Nov 2023 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2023/11/16/price-adjustment-for-compute-flavors</guid>
          <description>
            <![CDATA[<p>After many years of not being an issue for most people, inflation has once again become an ever-present topic. Energy prices, which have been increasing rapidly, not least because of the current international situation, are making particular headlines. cloudscale.ch is also at the mercy of this market environment, which is why some of our prices are being adjusted from 2024-01-01.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-pricing-compute-flavors-en.png"/><h3>Adjustment for compute flavors only</h3>
<p>Despite there being no way around a price increase, our aim is to limit it to what is absolutely necessary and to maintain our principle of simplicity and transparency – also in terms of pricing. Specifically, this means that <strong>the prices of our compute flavors will be subject to a uniform increase of 10%,</strong> i.e. a virtual server with the &quot;Flex-8-2&quot; flavor, for example, will cost CHF 2.20 instead of CHF 2.00 per 24 hours.</p>
<img src="https://static.cloudscale.ch/img/news-pricing-compute-flavors-en-8d9866a2887b.png" alt="Excerpt from the compute flavors as of 2024-01-01."/>
<p>The purely linear price structure will be maintained with this increase. Consequently, costs for memory and CPU resources will still be based solely on overall requirements and not on whether you combine your workloads on a single large server or prefer to distribute them across several smaller servers. <strong>The other prices, e.g. for volumes, object storage and load balancers, will also remain unchanged</strong> and no new price components will be introduced.</p>
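<p>The linearity can be illustrated with a small sketch (the per-unit rates below are invented for the example and are not our actual prices):</p>

```python
# Illustration of a purely linear price structure: the cost depends only
# on the totals, not on how resources are split across servers.
RATE_PER_GIB_24H = 0.25    # hypothetical CHF per GiB of memory per 24 h
RATE_PER_CORE_24H = 0.125  # hypothetical CHF per CPU core per 24 h

def flavor_price(memory_gib: int, cores: int) -> float:
    return memory_gib * RATE_PER_GIB_24H + cores * RATE_PER_CORE_24H

one_large = flavor_price(16, 4)      # one server with 16 GiB / 4 cores
four_small = 4 * flavor_price(4, 1)  # four servers with 4 GiB / 1 core
print(one_large, four_small)  # 4.5 4.5
```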
<h3>Background</h3>
<p>Thanks to favorable purchase conditions, we were able to reduce our prices by up to 33% in July 2022. In the meantime, however, the environment has changed and <strong>energy prices in particular have increased dramatically – not only in Switzerland.</strong> These increased prices have a double effect on the running of a cloud infrastructure: on the one hand, running physical servers uses power directly, and on the other hand, power is essential for the required cooling. In addition, purchasing hardware has also become significantly more expensive.</p>
<p><strong>The new prices will apply from 0:00 h (Swiss time) on 2024-01-01 for the running of existing and newly created servers</strong> and will be visible in our cloud control panel from shortly after midnight. Accordingly, account balances will be debited at the new rates for the first time in the night from 2024-01-01 to 2024-01-02, covering the services used on 2024-01-01.</p>
<br/>
<p>At cloudscale.ch you only pay for what you actually use and it goes without saying that <strong>to-the-second billing will continue.</strong> So even with the current price increase for compute flavors, we are convinced that this flexibility remains worthwhile, especially in an environment where prices are generally increasing.</p>
<p>As transparent as always, <br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Resources in Multiple Projects With Terraform
]]></title>
          <link>https://www.cloudscale.ch/en/news/2023/10/17/multiple-projects-with-terraform</link>
          <pubDate>Tue, 17 Oct 2023 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2023/10/17/multiple-projects-with-terraform</guid>
          <description>
            <![CDATA[<p>You can use our Terraform provider to manage your resources at cloudscale.ch &quot;as code&quot;. By grouping your cloud resources into different projects, you can separate them clearly according to your specific requirements. In the following we would like to introduce a Terraform feature that is easy to overlook, but that you can use to combine these benefits in order to use a single Terraform repository to manage cloud resources in multiple projects.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Terraform: infrastructure as Code</h3>
<p>Terraform enables you to define the required cloud infrastructure in the form of configuration files. <strong>Based on this configuration, Terraform then creates the actual cloud resources via the API.</strong> In day-to-day operations, Terraform can restore the desired state if the real setup has drifted from the configuration, and it can apply any changes made to the configuration to the real setup. Terraform configs are also often managed in a version control repository, such as Git, and used as part of a CI/CD pipeline.</p>
<h3>Projects: order, security and transparency</h3>
<p>At cloudscale.ch, cloud resources such as servers and volumes are created in projects. <strong>Projects enable resources to be grouped,</strong> e.g. for each end client or in order to separate dev and prod environments. If several participants are working with cloud resources, <a href="https://www.cloudscale.ch/en/news/2021/05/27/new-control-panel-organizations-projects-teams">access rights can be determined</a> for individuals and teams on a project-by-project basis. Finally, the costs for each project can be displayed separately and can then be further broken down according to resources.</p>
<p><strong>API tokens for the cloudscale.ch API are also linked to a project</strong> and only allow access to resources within that project. In practice, it may therefore be desirable e.g. to create a backup server in its own separate project. This means that an API token in the primary project that is frequently used in day-to-day operations (e.g. to move a Floating IP between servers) cannot be used to make changes to the backup server.</p>
<h3>Central management in Terraform</h3>
<p>If you want to create all your resources at once with Terraform, you will face the issue of how to use the <a href="https://registry.terraform.io/providers/cloudscale-ch/cloudscale/latest">cloudscale-ch provider</a> with multiple API tokens. The solution consists of creating two <code>provider</code> blocks, i.e. <strong>instantiating the provider twice and adding an <code>alias</code> to one instance.</strong> You can then assign a separate API token to each instance.</p>
<p>In practice, this looks as follows:</p>
<pre><code class="language-hcl">terraform {
  required_providers {
    cloudscale = {
      source = &quot;cloudscale-ch/cloudscale&quot;
    }
  }
}

# Define variables for the API tokens
variable &quot;cloudscale_api_token&quot; {}
variable &quot;cloudscale_backup_api_token&quot; {}

# Define the provider for the default project
provider &quot;cloudscale&quot; {
  token = var.cloudscale_api_token
}

# Define the provider for the second project with an alias
provider &quot;cloudscale&quot; {
  alias = &quot;backup&quot;
  token = var.cloudscale_backup_api_token
}
</code></pre>
<p>You can declare the resources in the first project as usual:</p>
<pre><code class="language-hcl"># Create servers using the default provider
# in the first project
resource &quot;cloudscale_server&quot; &quot;demo-server&quot; {
  name               = &quot;demo-server-${count.index + 1}&quot;
  flavor_slug        = &quot;plus-8-4&quot;
  image_slug         = &quot;ubuntu-22.04&quot;
  ssh_keys           = [file(&quot;~/.ssh/id_ed25519.pub&quot;)]
  zone_slug          = &quot;lpg1&quot;
  count              = 3
}
</code></pre>
<p>The resources that belong to the second project are then additionally given the <code>provider</code> meta-argument.</p>
<pre><code class="language-hcl"># Create a backup server using the aliased provider
# in the second project
resource &quot;cloudscale_server&quot; &quot;backup-server&quot; {
  # Use the aliased provider
  provider           = cloudscale.backup

  name               = &quot;backup-server&quot;
  flavor_slug        = &quot;flex-4-2&quot;
  image_slug         = &quot;ubuntu-22.04&quot;
  ssh_keys           = [file(&quot;~/.ssh/id_ed25519.pub&quot;)]
  zone_slug          = &quot;rma1&quot;
}
</code></pre>
<p>Now all the resources in both projects can be created at once:</p>
<pre><code class="language-shell">terraform apply -var=&quot;cloudscale_api_token=$TOKEN1&quot; \
                -var=&quot;cloudscale_backup_api_token=$TOKEN2&quot;
</code></pre>
<p>Incidentally, as it is a Terraform feature, you can use the <code>alias</code> keyword not only to specify multiple API tokens at cloudscale.ch, but generally whenever you want to <strong>use an otherwise identical <code>provider</code> block with different parameters.</strong></p>
<br/>
<p>Choose the right approach for you to manage your resources at cloudscale.ch, depending on the specific setup, participants and preferred way of working for each case. Even if you spread your cloud resources across multiple projects in the process, you can use <code>alias</code> instances of the <code>cloudscale-ch</code> provider to <strong>bring the threads together in a single consolidated Terraform repository.</strong></p>
<p>One goal, many names: <br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Selected Aspects of the New FADP
]]></title>
          <link>https://www.cloudscale.ch/en/news/2023/09/29/selected-aspects-of-the-new-fadp</link>
          <pubDate>Fri, 29 Sep 2023 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2023/09/29/selected-aspects-of-the-new-fadp</guid>
          <description>
            <![CDATA[<p>Data protection and data security are pivotal when it comes to safeguarding the confidence and rights of people, known as &quot;data subjects&quot;, whose data are being processed. The revised Federal Act on Data Protection means that awareness of this topic has once again come to the forefront, which is also reflected here at cloudscale.ch, where we receive frequent questions about it. In the following, we would like to look at some of the key points and explain how we deal with them, in particular also in terms of the data processing agreement that we make available to our customers.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-fadp-dsg-en.png"/><h3>The new Swiss FADP</h3>
<p><a href="https://www.fedlex.admin.ch/eli/fga/2017/2057/de">According to the Swiss Federal Council</a>, transparency, monitoring options and awareness of responsibility associated with the processing of personal data were three of the goals of the revised <a href="https://www.fedlex.admin.ch/eli/cc/2022/491/en">Federal Act on Data Protection (FADP)</a>. A further aim was to align the Swiss FADP more closely with the EU General Data Protection Regulation (GDPR) to <strong>ensure that the European Commission continues to recognize an adequate level of protection in Switzerland.</strong></p>
<p>Even though the old FADP contained similar regulations, many data protection provisions of recent years were mainly familiar from the GDPR. The revised FADP means that <strong>Swiss data protection has now also become a topical issue.</strong> This is probably not least due to the new threat of fines of up to CHF 250,000 for non-compliant decision-makers. In the following, however, we do not want to focus on the fines, but on the two main constellations relating to day-to-day data protection and on the question of how cloudscale.ch is involved.</p>
<h3>Data subjects and controllers</h3>
<p><strong>The new FADP protects natural persons</strong> (i.e. individuals, rather than legal persons such as companies or associations) when their data are processed. These data subjects do not need to be known by name: if the subjects can be identified using the data, the data are considered as personal data. A company processing personal data of this kind, thereby determining the purposes for which and the means by which the data are processed, is the &quot;controller&quot;. In the context of e.g. an online shop, the owner of the shop is the controller who processes, among other things, the contact details and orders of customers (data subjects).</p>
<p>Certain rights and obligations exist between data subject and controller. This means that, if necessary, data subjects are entitled to have incorrect data that are processed about them rectified or possibly even data completely deleted. The starting point for this is data subjects being informed about the data processing, which is why <strong>the controller has an obligation to provide information and to disclose,</strong> which often takes the form of a privacy policy on the controller&#x27;s website.</p>
<p>Here at cloudscale.ch, we have always been economical and, in our role as controller, do not collect unnecessary data in advance. However, in order to conclude contracts and provide our services, a certain minimum level is necessary, including e.g. contact details of our customers or data for invoicing purposes. We also provide information about our data processing as a controller in a <strong>privacy policy published on our website.</strong></p>
<br/>
<img src="https://static.cloudscale.ch/img/news-fadp-dsg-en-301d54951ba2.png" alt="Data subject, controller, (sub-)processor: An overview."/>
<h3>Data processing</h3>
<p>Controllers often not only process personal data themselves, but <strong>also involve service providers.</strong> In the online shop mentioned as an example above, a credit check could be obtained before orders are shipped to a customer on account. The shop owner is also the controller for this step and remains wholly responsible for data processing. Here, the credit agency is a &quot;processor&quot; because it processes data on behalf of the controller. This kind of data processing is basically permissible, but needs to be regulated in a contract concluded between controller and processor.</p>
<p>As a cloud provider, we here at cloudscale.ch become a processor as soon as our customers use our services to process personal data, e.g. when the afore-mentioned online shop is run on our infrastructure. This is why <strong>we already offer the required contract for data processing, the data processing agreement (DPA),</strong> which can be concluded with just two clicks of the mouse directly in our cloud control panel.</p>
<p>As opposed to the GDPR with its detailed regulations, even the revised version of the Swiss FADP gives barely any instructions on the content of the DPA. We nonetheless made the most of the opportunity and reviewed our DPA, with the revisions coming into effect on 2023-09-01. While there were almost no essential changes, we revised the structure and a lot of the wording to <strong>improve clarity and make things easier to understand,</strong> e.g. in the following places:</p>
<ul>
<li>The document (in German) is now called &quot;Vertrag zur Auftragsverarbeitung&quot; or &quot;AV-Vertrag&quot; (AVV), which are more common terms nowadays than the previous name.</li>
<li>We no longer specifically mention GDPR, but simply use &quot;applicable data protection legislation&quot;. This clearly refers to the fact that other regulations may also apply, e.g. the Swiss Federal Act on Data Protection (FADP).</li>
<li>We have continued to use the terms &quot;verarbeiten&quot; and &quot;personenbezogene Daten&quot;, as is common in the context of GDPR, whereas the Swiss FADP uses different German synonyms. The consistent choice of wording aims to make it easier to read while it in no way opposes the interpretation of the DPA according to Swiss law.</li>
<li>We now specify &quot;documented instructions&quot; (a GDPR term). The aim here is to clearly state that the customer, or possibly a third party, independently and directly (and not via correspondence with us) determines how our IaaS services are used and thus how data are processed.</li>
<li>Previously we referred to other documents for technical and organizational measures (TOMs). On our website, in particular, we provided regular reports on improvements to our security features. Now, a set of TOMs has been explicitly summarized in an Annex to the DPA. NB: the TOMs described represent a point-in-time snapshot. While the level of protection described is binding and may not be undercut, we have the option in future to change and further develop the measures actually taken. The customer will have to assess whether this level is suitable for specifically planned processing. Here at cloudscale.ch, we provide standardized services without getting involved in individual cases.</li>
<li>In addition to the TOMs that we take on our end, we also have a list of security-relevant features relating to our services, which our customers can use for themselves to support the security of data and their processing.</li>
<li>The provision regarding deletion of data at the end of the contract has been specified in more detail, in particular in terms of reference to the fact that customers can move their data to a new location independently.</li>
</ul>
<br/>
<p>Data protection and data security have always been key for us here at cloudscale.ch. In this process, we not only handle personal data responsibly, but also <strong>support our customers as they adhere to the relevant specifications,</strong> e.g. by means of the DPA that they can conclude directly in our cloud control panel.</p>
<p>Committed to data protection,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[New, Simplified Billing Mechanism
]]></title>
          <link>https://www.cloudscale.ch/en/news/2023/08/09/simplified-billing-mechanism</link>
          <pubDate>Wed, 09 Aug 2023 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2023/08/09/simplified-billing-mechanism</guid>
          <description>
            <![CDATA[<p>At cloudscale.ch, there are no fixed costs or minimum terms and as a customer you only pay for what you actually use. And there is no catch: to-the-second billing means that you do not have to worry about adding on services when you need them as you can simply delete them again afterwards. While this benefit has remained unchanged, we have completely revised and simplified the mechanism behind it.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-project-costs.png"/><h3>Until now, cloud resources could be &quot;exchanged&quot;</h3>
<p>If you created a virtual server or other cloud resources, the relevant costs for 24 hours were <strong>taken from your account balance immediately.</strong> When this prepaid period expired, a new fee was taken for the next 24 hours – for as long as you kept the resources.</p>
<p>When cloud resources were deleted, a credit note was issued for any remaining unused, but prepaid time in this 24-hour period. When servers and volumes were scaled, the credit note was issued at the old price and then the new price was charged – again for 24 hours. This means that the cloud resources were <strong>invoiced individually throughout the day</strong> depending on when they were most recently created or scaled.</p>
<h3>Now, only actual costs are charged</h3>
<p>The costs are now recorded in the background and taken from your account balance shortly after midnight (in the local time zone of Zurich). This means that for cloud resources that have already been deleted again, you are <strong>charged the actual costs directly</strong> instead of by means of a &quot;down payment invoice&quot; that is then corrected using a credit note.</p>
<p>While both methods ultimately result in the same costs, the new mechanism is based on what you intuitively expect from to-the-second billing. And in certain constellations this <strong>avoids numerous unnecessary debits and payment reversals,</strong> e.g. if you use <a href="https://www.cloudscale.ch/en/news/2023/05/08/gitlab-runners-in-the-cloud">integration tests with dynamically created GitLab runners</a> or if you are simply trying out <a href="https://www.cloudscale.ch/en/news/2022/07/01/price-reductions-and-new-flavors">various flavors</a> to find the one that is right for you.</p>
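<p>A small sketch illustrates why both mechanisms result in the same costs while the new one needs fewer bookings. The price and duration used here are illustrative assumptions:</p>

```typescript
// Compare the old and the new billing mechanism for a resource that exists
// for only part of a day. PRICE_PER_DAY is an assumed example price in CHF.
const PRICE_PER_DAY = 2.2;
const SECONDS_PER_DAY = 24 * 60 * 60;

// Old mechanism: debit 24 hours upfront, credit the unused remainder
// when the resource is deleted (two bookings per resource).
function oldNetCost(usedSeconds: number): number {
  const debit = PRICE_PER_DAY;
  const credit = PRICE_PER_DAY * (1 - usedSeconds / SECONDS_PER_DAY);
  return debit - credit;
}

// New mechanism: record usage and debit the actual cost once after
// midnight (a single booking).
function newNetCost(usedSeconds: number): number {
  return PRICE_PER_DAY * (usedSeconds / SECONDS_PER_DAY);
}

// For e.g. six hours of use, the net cost is identical either way:
const sixHours = 6 * 60 * 60;
const costsMatch = Math.abs(oldNetCost(sixHours) - newNetCost(sixHours)) < 1e-9;
```

<p>The difference lies only in the bookkeeping: the old mechanism produces a debit and a credit per resource, the new one a single debit for the actual usage.</p>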
<br/>
<img src="https://static.cloudscale.ch/img/news-project-costs-3d74139f8fd2.png" alt="New overview page per project listing all costs."/>
<p>On a new overview page per project, the costs are further broken down.</p>
<h3>Overview of total costs and composition</h3>
<p>As previously, you will see the daily total costs of your individual projects in the &quot;Billing&quot; area of our cloud control panel. <strong>On a new overview page per project, these costs are now further broken down,</strong> which allows you to see at a glance how these costs are distributed across the individual resource types (e.g. compute, storage, etc.) and across the specific resources (individual virtual servers, volumes, etc.). Here, the costs per day are listed for all resources that exist in the project at the time in question.</p>
<p><strong>Our object storage is an exception</strong> as it is not calculated per day but based on actual usage. This is why these costs were always taken from your account balance retrospectively, i.e. shortly after midnight. As there is no fixed price per day here given that it is usage-based, the overview shows the average costs of the past seven days.</p>
<br/>
<p>Flexibility is one of the many advantages of the cloud. <strong>You always book and pay for exactly what you need.</strong> Billing per calendar day (instead of for each individual cloud resource) means that the &quot;bookkeeping&quot; behind this system has been significantly simplified. And we are already working on other tools to provide you with enhanced support in terms of cost control.</p>
<p>No calculator required.<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Mitigation of CVE-2023-20593 (Zenbleed)
]]></title>
          <link>https://www.cloudscale.ch/en/news/2023/07/26/mitigation-cve-2023-20593-zenbleed</link>
          <pubDate>Wed, 26 Jul 2023 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2023/07/26/mitigation-cve-2023-20593-zenbleed</guid>
          <description>
            <![CDATA[<p>On Monday, 2023-07-24, it was announced that researcher Tavis Ormandy had found a <strong>CPU vulnerability in the AMD Zen 2 platform.</strong> Tavis Ormandy <a href="https://lock.cmpxchg8b.com/zenbleed.html">described the hole in detail on his website</a>. As we now mainly rely on AMD CPUs here at cloudscale.ch, we realized immediately that we were affected by this security hole. Using the proof-of-concept code that was published together with the description, we were then able to confirm this.</p>]]>
          </description>
<content:encoded><![CDATA[<p>There was no question for those involved in the escalation process that this vulnerability required immediate mitigation in order to <strong>ensure the security of our customers in the best possible way.</strong></p>
<p>In a next step, three possible mitigation approaches (BIOS update, &quot;chicken bit&quot; and microcode update) were discussed and the latter two were tested in our lab environment. These two problem-solving approaches have the advantage that they can be used during live operations, although the &quot;chicken bit&quot; option would probably have been associated with a considerable loss of performance, which is why we opted for <strong>mitigation by means of a microcode update.</strong></p>
<p>After successful application of the microcode update in our lab, we were able to verify that it was <strong>no longer possible to exploit the vulnerability using the proof-of-concept code</strong> and that operations remained stable.</p>
<p>After our test suite passed successfully, we decided to <strong>roll out the update in batches in the production environment.</strong> Given the urgency, we did not set a two-week notice period for the <a href="https://www.cloudscale-status.net/incidents/78065">maintenance window</a>, as we usually do, but scheduled it with immediate effect – in both cloud locations at once, despite usually scheduling maintenance for two separate days. We initially applied the microcode update on individual compute hosts and then in batches.</p>
<p>The <strong>last compute host was finally patched</strong> at 01:33 (CEST) on Tuesday, 2023-07-25.</p>
<p>Setting priorities:<br/>
Your cloudscale.ch team</p>
<p>PS: In order to be continuously updated about incidents and planned maintenance work, <strong>subscribe to the updates on the channel of your choice</strong> <a href="https://www.cloudscale-status.net">on our status page</a>.</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Type-Safe Mocking of Interfaces in TypeScript
]]></title>
          <link>https://www.cloudscale.ch/en/news/2023/07/13/type-safe-mocking-interface-typescript</link>
          <pubDate>Thu, 13 Jul 2023 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2023/07/13/type-safe-mocking-interface-typescript</guid>
          <description>
            <![CDATA[<p>As we have already mentioned previously, here at cloudscale.ch we love <a href="https://www.cloudscale.ch/en/news/2021/04/27/testing-infrastructure-from-user-perspective">automated testing</a>. We are also great fans of type-safe languages. Since we started <a href="https://www.cloudscale.ch/en/news/2023/06/08/control-panel-stack-highlights">increasingly using TypeScript</a> in our front-end GUI, the question has arisen more frequently about how we can write good tests for our TypeScript code. This is why we want to look in more detail at the world of unit testing and TypeScript and to share with you a code snippet that has made our life significantly easier.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-control-panel-server-view.png"/><p>The code used in the examples in this article is a somewhat simplified version of the actual code that calculates data for the following React component (this view will be familiar to our users):</p>
<br/>
<img src="https://static.cloudscale.ch/img/news-control-panel-server-view-105d21f60dde.png" alt="Summary of existing servers."/>
<p>The data read from the API is used to generate both the server list and the summary.</p>
<p>The aim is to establish the following information for a given list of servers:</p>
<ul>
<li>Number of servers</li>
<li>Total server memory</li>
<li>Total storage of all server volumes</li>
<li>Total daily server costs</li>
</ul>
<p>To start, here is an overview of the corresponding TypeScript code:</p>
<pre><code class="language-typescript">export interface Server {
    name: string
    daily: number
    memory: number
    volumes: Volume[]
}

export interface Volume {
    type: &#x27;ssd&#x27; | &#x27;bulk&#x27;
    size: number
}

export interface ServerSummary {
    count: number
    totalMemory: number
    totalStorage: number
    totalCost: number
}

export const getServerSummary = (servers: Server[]): ServerSummary =&gt; {
    const count = servers.length;
    const totalCost = servers.reduce((accu, s) =&gt; accu + s.daily, 0);
    const totalMemory = servers.reduce((accu, s) =&gt; accu + s.memory, 0);
    const volumes = servers.reduce&lt;Volume[]&gt;((accu, s) =&gt; accu.concat(s.volumes), []);
    const totalStorage = volumes.reduce((accu, v) =&gt; accu + v.size, 0);
    return {count, totalCost, totalMemory, totalStorage};
}
</code></pre>
<br/>
<p>The first two interfaces, <code>Server</code> and <code>Volume</code>, are the data types for the input. Next comes the <code>ServerSummary</code> interface, which contains the four values to be calculated. Lastly, we can see the <code>getServerSummary</code> function, which is our test subject. To calculate the totals, we use <code>Array.reduce()</code> here.</p>
<p>In the next step, we will look at the associated unit test, which is also implemented in TypeScript:</p>
<pre><code class="language-typescript">test(&#x27;test getServerSummary&#x27;, () =&gt; {
    // arrange
    const servers: Server[] = [
        {name: &#x27;server1&#x27;, daily: 1, memory: 4, volumes: [{size: 50, type: &#x27;ssd&#x27;}]},
        {name: &#x27;server2&#x27;, daily: 2, memory: 8, volumes: [{size: 10, type: &#x27;ssd&#x27;}, {size: 200, type: &#x27;bulk&#x27;}]},
    ]

    // act
    const actual = getServerSummary(servers)

    // assert
    const expected: ServerSummary = {
        count: 2,
        totalCost: 3,
        totalMemory: 12,
        totalStorage: 260,
    };
    expect(actual).toEqual(expected)
});
</code></pre>
<br/>
<p>This is a classic unit test in line with the arrange, act and assert (AAA) pattern:</p>
<ul>
<li>Arrange: we define two <code>Server</code> instances with test data.</li>
<li>Act: we call the <code>getServerSummary</code> function.</li>
<li>Assert: we compare the actual result with the expected result.</li>
</ul>
<p>Although this test works perfectly, close observation reveals the following: as we are using TypeScript and the properties <code>Server.name</code> and <code>Volume.type</code> are not optional, we also have to populate them in the test data (see Arrange) even though they are not relevant for this test case. If we remove <code>name</code> and <code>type</code>, the unit test will continue to run, but the TypeScript compiler will issue an error message.</p>
<p>Having to specify test data that are not required might not be a problem in this small example, but it can very quickly become inconvenient with complicated, nested structures.</p>
<p>The following represents an initial naive attempt to solve the problem using <a href="https://www.typescriptlang.org/docs/handbook/2/everyday-types.html#type-assertions">TypeScript Type Assertions</a>:</p>
<pre><code class="language-typescript">const servers: Server[] = [
    {daily: 1, memory: 4, volumes: [{size: 50}]} as unknown as Server,
    {daily: 2, memory: 8, volumes: [{size: 10, type: &#x27;ssd&#x27;}, {size: 200,}]} as unknown as Server,
]
</code></pre>
<br/>
<p>We now no longer need to indicate the superfluous attributes, but we have also sacrificed type safety: if we now mistakenly write a wrong name or type, such as <code>{sizeGb: &#x27;x&#x27;}</code> instead of <code>{size: 50}</code>, we will get no compiler error, just an unintuitive test result.</p>
<p>After several experiments and a range of improvements, we ended up with the following test helper:</p>
<pre><code class="language-typescript">function mockPartially&lt;T extends object&gt;(mockedProperties: Partial&lt;T&gt; = {}): T {
  const handler = {
    get(target: T, prop: keyof T &amp; string) {
      if (prop in mockedProperties) {
        return mockedProperties[prop];
      }
      throw new Error(`Mock does not implement property: ${prop}, but it was accessed.`);
    },
  };
  return new Proxy&lt;T&gt;({} as T, handler);
}
</code></pre>
<br/>
<p><code>mockPartially()</code> sets up a <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy">proxy object</a> for any type <code>T</code>. An object with any subset of properties of <code>T</code> can be passed as <code>mockedProperties</code>. This is made possible by the <code>Partial</code> type constructor. <code>Partial&lt;T&gt;</code> creates a new type where all properties of <code>T</code> are <a href="https://www.typescriptlang.org/docs/handbook/utility-types.html#partialtype">set as optional</a>. Using the proxy object, we can implement any behavior when accessing properties of the object. In our case we throw an error if the property in question was not specified in <code>mockedProperties</code>. If the property has been indicated, its value is returned unchanged.</p>
<p>We import <code>mockPartially</code> as <code>mP</code>, which enables us to define the test data as follows:</p>
<pre><code class="language-typescript">const servers: Server[] = [
    mP&lt;Server&gt;({daily: 1, memory: 4, volumes: [mP&lt;Volume&gt;({size: 50})]}),
    mP&lt;Server&gt;({daily: 2, memory: 8, volumes: [mP&lt;Volume&gt;({size: 10}), mP&lt;Volume&gt;({size: 200})]}),
]
</code></pre>
<br/>
<p>This solution has the following advantages for us:</p>
<ul>
<li>We can omit any unnecessary input data.</li>
<li>If we forget required data, we receive a clear error message, such as: <code>Mock does not implement property: volumes, but it was accessed.</code></li>
<li>In the case of false test data, such as <code>{sizeGb: 50}</code> or <code>{size: &#x27;x&#x27;}</code>, we receive error messages from the TypeScript compiler that are easy to understand.</li>
</ul>
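<p>Putting everything together: the following self-contained sketch combines the interfaces, the <code>getServerSummary</code> function and the <code>mockPartially</code> helper from above, with the test data declared via <code>mP</code>:</p>

```typescript
interface Server {
  name: string;
  daily: number;
  memory: number;
  volumes: Volume[];
}

interface Volume {
  type: "ssd" | "bulk";
  size: number;
}

interface ServerSummary {
  count: number;
  totalMemory: number;
  totalStorage: number;
  totalCost: number;
}

const getServerSummary = (servers: Server[]): ServerSummary => {
  const count = servers.length;
  const totalCost = servers.reduce((accu, s) => accu + s.daily, 0);
  const totalMemory = servers.reduce((accu, s) => accu + s.memory, 0);
  const volumes = servers.reduce<Volume[]>((accu, s) => accu.concat(s.volumes), []);
  const totalStorage = volumes.reduce((accu, v) => accu + v.size, 0);
  return { count, totalCost, totalMemory, totalStorage };
};

// The test helper: a Proxy that returns mocked properties and throws on
// access to anything that was not explicitly provided.
function mockPartially<T extends object>(mockedProperties: Partial<T> = {}): T {
  const handler = {
    get(target: T, prop: keyof T & string) {
      if (prop in mockedProperties) {
        return mockedProperties[prop];
      }
      throw new Error(`Mock does not implement property: ${prop}, but it was accessed.`);
    },
  };
  return new Proxy<T>({} as T, handler);
}

const mP = mockPartially;

// Test data without the irrelevant `name` and `type` properties:
const servers: Server[] = [
  mP<Server>({ daily: 1, memory: 4, volumes: [mP<Volume>({ size: 50 })] }),
  mP<Server>({ daily: 2, memory: 8, volumes: [mP<Volume>({ size: 10 }), mP<Volume>({ size: 200 })] }),
];

const summary = getServerSummary(servers);
// summary: { count: 2, totalCost: 3, totalMemory: 12, totalStorage: 260 }
```

<p>Accessing any property that was not provided, such as <code>name</code>, immediately throws the descriptive error mentioned above instead of silently returning <code>undefined</code>.</p>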
<br/>
<p>We hope you have found this in-depth insight into the day-to-day life of our software developers interesting and that you might be able to use our <code>mockPartially</code> helper yourself.</p>
<p>(Not) mocking!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Our Control Panel: Highlights from the Stack
]]></title>
          <link>https://www.cloudscale.ch/en/news/2023/06/08/control-panel-stack-highlights</link>
          <pubDate>Thu, 08 Jun 2023 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2023/06/08/control-panel-stack-highlights</guid>
          <description>
            <![CDATA[<p>We regularly add new functionalities to our cloud offer and thus to our cloud control panel and the API. In this article, we would like to focus not on individual features but on milestones relating to the technology behind them, from Python and Django to React and to TypeScript. This technology is the foundation that makes it possible for our developers to concentrate on the essential and, at the same time, for you to benefit from a user-friendly web interface and numerous integrations in terms of automation tools.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-control-panel-components.png"/><h3>Django – a solid basis</h3>
<p>Over the years, our cloud control panel has undergone comprehensive development. It was originally written in Lua as a minimum viable product. However, in order to improve maintenance we wanted to use a framework, which is why we decided to <strong>re-implement the control panel in Python and Django.</strong></p>
<p>Django describes itself as &quot;the web framework for perfectionists with deadlines.&quot; It has proved to be reliable, well-tested and high-performance. Django allowed us to <strong>rapidly add new features</strong> and improve existing code.</p>
<h3>Gradual introduction of React</h3>
<p>At the time, the control panel comprised just a few dynamic components. These enable users of our control panel to <strong>continuously receive current information, e.g. the power status of servers,</strong> without having to reload the page. The aim here was to further improve usability with more direct handling of cloud resources. After careful consideration, we decided to introduce React. In order to guarantee a smooth transition, we opted for a gradual implementation.</p>
<p>We started implementing new features and adapting existing functionality with React, at all times prioritizing those cases where the benefit seemed greatest for our customers. This <strong>gradual integration</strong> made it possible for us to keep susceptibility to errors as low as possible and at the same time to make the most of all the advantages of dynamic web components.</p>
<br/>
<img src="https://static.cloudscale.ch/img/news-control-panel-components-a18a71c260a5.png" alt="Dynamic web components in the cloud control panel."/>
<p>The displayed information, e.g. the power status of servers, is up-to-date, even without reloading the page.</p>
<h3>Switch to single-page application</h3>
<p>The switch to a single-page application (SPA) ensued as a logical next step in our increasing use of dynamic web components. <strong>This means it is no longer necessary to reload the page within the control panel,</strong> thus providing an appealing and interactive web application.</p>
<p>Together with this switch, we also largely replaced the generation of HTML elements using Django templates. Instead, an (internal) API now runs on the server side and provides the browser with all the required content. This clear <strong>separation of browser and server technologies</strong> also makes cooperation easier in our software development team.</p>
<h3>End-to-end type checking</h3>
<p>When revising existing software code, in particular, errors are frequently introduced in dynamically typed languages such as JavaScript if it is not consistently clear what types the variables will hold at runtime. This is why we have been using TypeScript for a while now. This language makes it possible to <strong>complement JavaScript code with data types</strong> and to have them checked by the TypeScript compiler.</p>
<p>Communication between browser and server code via our REST API is a central aspect of our control panel. We therefore also want to ensure that both sides are using the same data types. To this end, we developed a special code generator that <strong>automatically generates TypeScript code based on the definitions of our Django REST API.</strong> In the process of our continuous integration (CI), checks are then carried out to ensure that the code generated in this way is correct. This end-to-end type checking approach helps us to improve code quality and to reduce potential errors.</p>
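<p>For illustration, code generated in this way might look as follows – the field names here are invented and do not reflect the actual generator output:</p>

```typescript
// Hypothetical example of generated code: the interface mirrors the
// fields of a Django REST serializer, so the compiler flags any client
// code that expects a field the API does not actually provide.
// (Field names are invented for illustration.)
interface ServerResponse {
  uuid: string;
  name: string;
  status: "running" | "stopped";
}

// Client code written against the generated type is checked end-to-end:
// renaming a serializer field regenerates the type and breaks this
// function at compile time instead of at runtime.
function describeServer(server: ServerResponse): string {
  return `${server.name} (${server.status})`;
}
```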
<br/>
<p>We continuously work at further improving our web application based on feedback and iterative steps. Our goal is to develop a user-friendly application that <strong>does justice to the requirements of our users, while supporting us in its further development.</strong></p>
<p>On the same page!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[GitLab Runners in the Cloud
]]></title>
          <link>https://www.cloudscale.ch/en/news/2023/05/08/gitlab-runners-in-the-cloud</link>
          <pubDate>Mon, 08 May 2023 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2023/05/08/gitlab-runners-in-the-cloud</guid>
          <description>
            <![CDATA[<p>Cloud computing is suitable for many areas of application, and its benefits are particularly discernible when computing power and storage space only need to be used temporarily, such as for software tests and deployments with GitLab. Our Ansible playbook and step-by-step instructions will help you to use GitLab runners on the cloudscale.ch infrastructure while benefiting from maximum performance with minimum costs.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-gitlab-runners.png"/><h3>Periodic peaks due to integration tests</h3>
<p><strong>Automated software tests make sense</strong> and are indispensable in many scenarios. At defined times or for defined events, e.g. when pushing code changes, test suites are started that will, ideally, confirm that all test cases are running successfully and might be directly followed by a productive deployment. Any errors found serve as early and meaningful feedback about where corrections are still required.</p>
<p>More complex test configurations may be fairly resource-intensive and require high-performance infrastructure, especially when engineers need the test results in order to continue with their work. In-between test runs, however, the test infrastructure is idle. This is where the cloud offers a way of avoiding unnecessary costs: when required, resources are made available at extremely short notice; once tests have been completed, the <strong>infrastructure and the associated costs can immediately be reduced again.</strong></p>
<br/>
<img src="https://static.cloudscale.ch/img/news-gitlab-runners-c434d13c00b5.png" alt="Setting up GitLab runners in the cloud via Ansible playbook or manually."/>
<h3>Your own dynamic test setup</h3>
<p><a href="https://github.com/cloudscale-ch/gitlab-runner#gitlab-gitlab-runners-autoscaling-caching---on-cloudscalech">Our Ansible playbook</a> <strong>will support you as you create your own test setup,</strong> which only requires very few resources for day-to-day operation and automatically draws on resources from our cloud during test runs. Even if your tests are extremely demanding and require runners of correspondingly large dimensions, the resources are deleted again after the test run and only incur low costs due to our to-the-second billing.</p>
<p>The Ansible playbook assumes that you want to install the whole setup with all its components from scratch. If you already have, for example, a GitLab instance or a runner or would like to <strong>adjust other installation details individually,</strong> the accompanying <a href="https://github.com/cloudscale-ch/gitlab-runner#-manual-setup">instructions we have provided for a manual setup</a> will help you install the components you want.</p>
<br/>
<p>Thanks to automatic scaling and billing by the second, you can make the most of the cloud with resources that are available exactly when you need them and that you only pay for during this time. Test setups with GitLab runners and their typically short, but intensive workloads benefit particularly here, and you can <strong>install them in a flash with our Ansible playbook and the corresponding instructions.</strong></p>
<p>Test us out!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Load Balancer "as a Service"
]]></title>
          <link>https://www.cloudscale.ch/en/news/2023/04/28/load-balancer-as-a-service</link>
          <pubDate>Fri, 28 Apr 2023 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2023/04/28/load-balancer-as-a-service</guid>
          <description>
            <![CDATA[<p>Ensuring the greatest possible availability of an online service requires measures at various levels. Redundancy – a directly incorporated &quot;plan B&quot;, so to speak – plays a key role here. Instead of engineering everything yourself, you can use our new load balancer service with immediate effect to create a sophisticated setup to optimize the continuous availability of your online service by means of redundancy.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-load-balancer-docs.png"/><h3>From fail-over to load balancing</h3>
<p>At cloudscale.ch, we have always endeavored to maximize the availability of our infrastructure and thus to guarantee interruption-free operation of your virtual servers. Failures are, however, still possible and, in addition, there are planned interruptions, e.g. when you update the software you use. You already have the option of Floating IPs as a mechanism for <strong>keeping your service available from your users&#x27; perspective:</strong> the IP address that users connect to can be moved, either in an automated manner or manually, from one virtual server to another, thus ensuring that requests can be processed while the original server is offline.</p>
<p>Our new load balancer offer goes even further. Rather than merely diverting incoming traffic from one server to another, as a Floating IP does, the load balancer can <strong>continuously distribute incoming connections – and thus the computing load – to two or more virtual servers.</strong> Health checks regularly verify the state of the virtual servers; if one of them does not respond as expected, it is taken out of rotation and incoming traffic is distributed among the remaining correctly functioning servers. Unlike with a Floating IP, it is also possible to configure separate sets of servers for processing requests to different TCP ports.</p>
<h3>Redundancy within the load balancer itself</h3>
<p>To ensure that the load balancer itself does not become a single point of failure, it is actually <strong>a pair of load balancers that run on separate hardware.</strong> The &quot;virtual IP address&quot;, which is visible from the outside, is allocated – in a similar way to a Floating IP – to one of the two load balancers, and switches to the other load balancer if a problem is detected with the first one. While it is already possible to build a setup of this kind on your own with two additional virtual servers and a Floating IP, our load balancer service significantly reduces the effort required. Once it has been configured, the load balancer carries out its work without your having to worry about scripting checks and fail-overs or about maintaining additional servers.</p>
<p>Please note that the virtual IP address (VIP) is linked to a specific load balancer and will be deleted if the load balancer is deleted. So that you can offer your users a service with an IP address that remains the same, we recommend that you use a Floating IP in combination with load balancers. Floating IPs (but not Floating Networks) can also be <strong>moved between virtual servers and load balancers,</strong> which means that you can seamlessly replace an individual server with a load balancer setup.</p>
<br/>
<img src="https://static.cloudscale.ch/img/news-load-balancer-docs-38bb1ae61072.png" alt="Description of the calls in the API documentation."/>
<h3>A few tips</h3>
<p>Creating and configuring load balancers is currently only possible via the API. The extensions to our <a href="https://github.com/cloudscale-ch/cloudscale-go-sdk">Go SDK</a>, <a href="https://www.terraform.io/docs/providers/cloudscale/index.html">Terraform provider</a> and <a href="https://docs.ansible.com/ansible/latest/collections/cloudscale_ch/cloud/index.html#plugins-in-cloudscale-ch-cloud">Ansible collection</a>, which are based on our API, will be published over the next few days. Existing load balancers will also be displayed in our web-based cloud control panel; this method of configuration has been planned for a later date. It goes without saying that the API calls required to use our load balancer service are described in detail in our <a href="https://www.cloudscale.ch/en/api/v1#load-balancers">API documentation</a>. You will also find sample requests and responses for every supported call there. Please note that <strong>the API specification is currently still designated as &quot;beta&quot;,</strong> and we reserve the right to carry out further adjustments that are not fully compatible with the current state.</p>
<p>The virtual servers, which incoming connections are to be distributed between, need to be accessible from the load balancer via a private <a href="https://www.cloudscale.ch/en/news/2020/04/03/mastering-the-private-network-with-managed-dhcp">managed network</a>. <strong>Two different use cases</strong> are supported for the load balancer itself: it can be created with a public IP address (VIP) and thus accept requests from the Internet; or the VIP may already be located in a private network itself, which means that the load balancer can be used e.g. for services within a Kubernetes cluster.</p>
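<p>To sketch what creating a load balancer via the API can look like – the endpoint path, field names and flavor value here should be treated as assumptions; the API documentation is authoritative:</p>

```typescript
// Illustrative sketch of building the request to create a load balancer
// via the REST API. Endpoint, fields and flavor are assumptions; consult
// the API documentation for the authoritative request format.
const API_BASE = "https://api.cloudscale.ch/v1";

function buildCreateLoadBalancerRequest(
  apiToken: string,
  name: string,
  zone: string
) {
  return {
    url: `${API_BASE}/load-balancers`,
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiToken}`,
      "Content-Type": "application/json",
    },
    // The flavor value is assumed for illustration.
    body: JSON.stringify({ name, flavor: "lb-standard", zone }),
  };
}
```

<p>The backend servers and health checks are then configured with further calls, as described in the API documentation.</p>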
<br/>
<p>With load balancers &quot;as a service&quot; you can rely on a tried-and-tested concept with immediate effect without having to worry about the individual components yourself. As incoming traffic is always diverted completely automatically to a functioning system, it is simple for you to optimize the availability of your online services for your users. <strong>Reliability – as a service.</strong></p>
<p>Our engineering for your VIP.<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Our Status Page is Moving
]]></title>
          <link>https://www.cloudscale.ch/en/news/2023/03/17/our-status-page-is-moving</link>
          <pubDate>Fri, 17 Mar 2023 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2023/03/17/our-status-page-is-moving</guid>
          <description>
            <![CDATA[<p>Renew your subscription to stay up-to-date with our latest maintenance and service information in future, too. Once our status page has moved, you will have even greater flexibility in choosing the right medium for you: in addition to email and RSS/Atom feeds, you will also have the option of receiving the latest information directly on one of the supported chat platforms if desired.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-status-page-expanded.png"/><h3>A new home for our status page</h3>
<p>Over the past five years, you have been able to find <strong>information about planned maintenance work and any incidents</strong> on <a href="https://www.cloudscale-status.net">https://www.cloudscale-status.net</a>. You also have the option of subscribing to email or the RSS/Atom feed to receive up-to-date information at all times. To ensure that this channel of communication is not affected by interruptions to our normal cloud infrastructure, we run our status page in a separate data center and use a different domain outside our standard &quot;.ch&quot; TLD.</p>
<p>Although the status page proved to be very successful, it was <strong>time to re-evaluate the setup.</strong> We decided to use a specialist provider to host our status page going forward so that we can not only maintain the service for you but further improve it. We selected the German service provider Statuspal. The switch, which is planned for within the next two weeks, will require status page downtime of about one hour. It goes without saying that this will not affect our cloud services.</p>
<h3>Renew and adapt subscription</h3>
<p>We will include the history of existing status notifications on the new status page so that you can continue to see all the entries relating to past events. As we ran the previous system ourselves and are now going to rely on an external service provider, we will intentionally not be migrating the list of notification subscribers to the new system. If you would like to be notified immediately of new entries in future, too, please <strong>simply subscribe to the desired categories again after the switch.</strong></p>
<p>With the new status page, you also have the <strong>possibility of receiving notifications directly in one of the supported chat systems,</strong> such as Slack and Teams. In addition, further tools can be connected via webhooks, including Mattermost, Rocket.Chat and – with a little engineering – also other tools that support webhook integration. If you already use a chat channel to coordinate things relating to your cloud resources, our up-to-date service information will appear directly where it is needed.</p>
<br/>
<img src="https://static.cloudscale.ch/img/news-status-page-expanded-298bb6062ae3.png" alt="Preview of our new status page."/>
<h3>Same URL, fresh design</h3>
<p>Although the address of our status page remains https://www.cloudscale-status.net and will continue to be linked to from our https://www.cloudscale.ch website, we recommend that you <strong>save the status page separately in your bookmarks</strong> to ensure that you can also find it if our website ever fails.</p>
<p>The service categories from which you select the information that is relevant to you will also remain unchanged. The layout will be even simpler and you will be able to <strong>see intuitively which entries and updates belong together.</strong></p>
<br/>
<p><strong>Make sure not to miss the latest maintenance and service information in future either.</strong> Add &quot;https://www.cloudscale-status.net&quot; to your bookmarks and – after the switch – renew your subscription to the desired notifications and channels of communication.</p>
<p>Status: Operational.<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Working at cloudscale.ch
]]></title>
          <link>https://www.cloudscale.ch/en/news/2023/02/15/working-at-cloudscale-ch</link>
          <pubDate>Wed, 15 Feb 2023 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2023/02/15/working-at-cloudscale-ch</guid>
          <description>
            <![CDATA[<p>Here at cloudscale.ch, we empower customers in Switzerland and around the world. At the press of a button (or even in a completely automated manner), the suitable infrastructure is always ready – as a self-service feature around the clock. Our engineers make sure that this works reliably and smoothly. With this article, we would like to provide an insight into our work at cloudscale.ch.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-office-neugasse.jpg"/><h3>Main focus: technology</h3>
<p>Our engineers have a great deal of freedom in their choice of work tools, in terms of both hardware and software. It goes without saying that <strong>employees also select their own work device.</strong> This typically consists of a notebook (in addition to MacBooks, Linux setups are also a popular choice) and a large monitor. In terms of IDE, for example, options include JetBrains products as well as vim and Visual Studio Code. Shared tools currently include GitLab and Confluence.</p>
<p>Broad-based and in-depth skills are not only required but also fostered at cloudscale.ch. There is a further training budget for every employee that consists of a certain number of workdays and a financial element. Given that people&#x27;s interests vary greatly, <strong>each employee can select whether and where to use this budget,</strong> with no questions asked. They may choose books to help gain in-depth individual knowledge of a topic, but may also opt for conferences and courses run by specialized providers. It goes without saying that a contribution can also be made towards further training that exceeds this budget, with specific details depending on the individual case. In addition, knowledge is exchanged within the team, e.g. through internal training sessions, mutual code reviews or spontaneous day-to-day discussions.</p>
<h3>Open source with a special touch</h3>
<p>At cloudscale.ch we rely on <strong>existing tools and frameworks as well as on our own developments.</strong> Our cloud is based on OpenStack and Ceph, two leading open source projects in their respective fields. While we developed the web-based cloud control panel and the API that our customers use ourselves from scratch, we also use existing elements such as the Django REST framework. Where we feel it is useful, we also contribute to open source projects, e.g. in the form of extensions and bug fixes.</p>
<p>We are committed to quality – in all its facets. In addition to the performance and availability of our cloud services, this also includes the user experience. Our customers&#x27; points of contact with our cloud – from the control panel to the API and to the documentation – are carefully prepared to ensure that, wherever possible, stumbling blocks never even develop. <strong>This includes tests at all levels,</strong> from unit tests to integration tests and to <a href="https://www.cloudscale.ch/en/news/2021/04/27/testing-infrastructure-from-user-perspective">acceptance tests</a>, which we also publish. Further measures comprise code reviews and manual tests. We take the time needed to deliver solid work and to go the metaphorical extra mile if necessary. This is why we do not issue releases according to a fixed schedule but instead when they are ready and have met our quality standards.</p>
<br/>
<img src="https://static.cloudscale.ch/img/news-office-neugasse-8bea8b79ba27.jpg" alt="Right next to Zurich main station: our office."/>
<h3>Autonomy in terms of working hours and workplace</h3>
<p>Many cloudscale.ch employees – parents and non-parents alike – work part-time in order to ensure they have enough time for their personal duties and commitments. We are also flexible in terms of working hours and allow employees as much freedom as possible. With the exception of Mondays, when we all meet at the office, <strong>working from home is a popular option,</strong> which also facilitates the coordination of the various parts of people&#x27;s lives. Using notebooks and the necessary tools, we have everything required to enjoy the flexibility that this well-established model affords us.</p>
<p>On Mondays, when we are all in the office, we have an all-hands meeting for coordination purposes and to share information that may affect everybody. This is then followed, in alternate weeks, by sprint planning or a bi-weekly meeting for technical matters. In addition, there are retrospectives at longer intervals as well as short dailies. Otherwise, we arrange meetings and calls when they are specifically required, e.g. to discuss details of a planned feature, according to the principle of <strong>as much as necessary, as little as possible.</strong></p>
<br/>
<p>We regularly <a href="https://www.cloudscale.ch/en/jobs">look for new people</a> to join our System Engineering and Software Engineering team at cloudscale.ch. If you speak German and English and would like to join the team, please send an email to jobs@cloudscale.ch. The application process may differ on a case-by-case basis, but generally involves <strong>getting to know you personally in an online meeting followed by a technical interview at our offices in Zurich.</strong> You will then have the opportunity to meet the rest of the team over a drink of your choice in an informal setting to see whether the &quot;chemistry&quot; is right. And if everything works out, we will soon be saying:</p>
<p>Welcome on board!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Add Prepaid Credit – Now Also With TWINT
]]></title>
          <link>https://www.cloudscale.ch/en/news/2023/01/16/add-prepaid-credit-with-twint</link>
          <pubDate>Mon, 16 Jan 2023 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2023/01/16/add-prepaid-credit-with-twint</guid>
          <description>
            <![CDATA[<p>At cloudscale.ch, we have always supported several means of payment for topping up your credit balance. This now also includes TWINT, the popular Swiss payment app. However, even with all the existing methods of payment, you will notice an updated look and feel associated with our switch to a new payment service provider.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-twint.png"/><h3>All sorted with prepaid</h3>
<p>Prepaid offers are popular and we all know them from mobile SIM cards and prepaid credit cards. The advantages are obvious: you know exactly where you stand at all times and setting your budget in advance means that there are <strong>no surprises at the end of the month.</strong> Prepaid mode is particularly well suited to services without basic fees, such as those offered by cloudscale.ch.</p>
<p>To use our cloud services, <strong>add an amount of your choice to your account or your organization.</strong> The minimum payment is just CHF 10, which gives users a risk-free way of seeing whether our services are right for their use case. When you use services, the corresponding charges are deducted from your credit every day; otherwise the credit remains in your account or organization and can be used at a later date.</p>
<h3>Increased flexibility, improved usability</h3>
<p>With immediate effect you can now <strong>also select TWINT to load your credit.</strong> This payment system, which is supported by many Swiss banks, can be used in multiple ways both online and offline and has become popular within a very short time. We are delighted to be able to meet a significant need of our Swiss customers with TWINT.</p>
<p>It goes without saying that you can also continue to use the previous payment methods, which consist of Mastercard, Visa, American Express, PostFinance Card and E-Finance as well as PayPal. Our payment service provider is now Datatrans Ltd., which means that the payment process in our cloud control panel has a new look. We particularly like the <strong>clear user guidance and the overlay design,</strong> which integrates almost seamlessly into the control panel and makes loading your credit even more elegant.</p>
<img src="https://static.cloudscale.ch/img/news-twint-1664447c6b09.png" alt="Support for TWINT and an updated look and feel."/>
<h3>Unchanged general conditions</h3>
<p>Nothing has changed in terms of the general conditions pertaining to your credit. With an unchanged maximum balance of CHF 3000, you can decide how long your credit should last. <strong>To-the-second billing also remains unchanged.</strong> When you delete a service, any costs that were charged for 24-hour use are credited back to your balance on a pro-rata basis. And it goes without saying that, for every payment made, you receive an email with a VAT-compliant invoice, which can also be downloaded from the control panel.</p>
<br/>
<p>Usability is a top priority for us. The current switch not only entails cosmetic improvements: with TWINT, we are delighted to offer a <strong>new and popular Swiss means of payment,</strong> which is an ideal complement to established international payment solutions and PostFinance.</p>
<p>Let&#x27;s twint!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Review of 2022 Events
]]></title>
          <link>https://www.cloudscale.ch/en/news/2022/12/22/review-of-2022-events</link>
          <pubDate>Thu, 22 Dec 2022 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2022/12/22/review-of-2022-events</guid>
          <description>
            <![CDATA[<p>Further training and contact with like-minded people – in the open source community and beyond – are integral components of our work at cloudscale.ch. With the end of the year approaching, we are delighted to look back on the many interesting events of 2022 and at the same time to look forward to further rewarding events and encounters in the new year.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>DevOpsDays Zurich</h3>
<p><a href="https://www.devopsdays.ch">DevOpsDays Zurich</a> has now become a fixed date in our calendar and cloudscale.ch was once again there in 2022 as a sponsor. <strong>Prepared and spontaneous topics from the wide field of DevOps</strong> were addressed and expanded in different formats, including classic talks as well as open spaces. <a href="https://devopsdays.org/events/2022-zurich/program/oliver-goetz">Oliver Goetz&#x27;s talk</a> was particularly relevant from today&#x27;s perspective, covering the (further) development of a cloud-based smart metering system that coordinates energy suppliers and users. The presentation included MVPs and a small, physically reconstructed model.</p>
<h3>OpenInfra Summit</h3>
<p>In early June, one of our engineers took the night train to the <a href="https://openinfra.dev/summit/berlin-2022">OpenInfra Summit</a> at the Berlin Congress Center on Alexanderplatz, where the Chaos Communication Congress has also been held in the past. Under the last corona restrictions, about 1000 participants from around the world engaged with <strong>OpenStack, Kubernetes, containerization and developments surrounding CSI.</strong> Insights were also provided into confidential computing and encrypted workloads. It goes without saying that interpersonal relations were not neglected at the three-day conference with plenty of opportunities to develop and maintain contacts.</p>
<h3>OpenStack Ops Meetup</h3>
<p>Following the OpenInfra Summit, the one-day <a href="https://wiki.openstack.org/wiki/Operations/Meetups">OpenStack Ops Meetup</a> also took place in Berlin. The main focus of the moderated sessions was <strong>discussions with other operators of private and public OpenStack clouds,</strong> which centered on, for example, extremely practical matters that are on the agenda everywhere: from applying the regular upgrades, to challenges associated with growth, and to various deployment approaches. Here again, the closing dinner provided further opportunity for personal discussions, where you might suddenly find yourself talking to a Swedish cloud provider.</p>
<h3>Swiss Python Summit</h3>
<p>Python plays a key role at cloudscale.ch, as <strong>the language in which not only most of our own software is written,</strong> but also, for example, OpenStack. We took an appropriately large delegation to the <a href="https://www.python-summit.ch">Swiss Python Summit</a> with its wide-ranging program on the OST Campus in Rapperswil-Jona. Memorable talks included &quot;Automating teaching about automation in Python&quot; by Florian Bruhin and, of course, &quot;<a href="https://www.youtube.com/watch?v=24E4meYni6s">Rust for Python Developers</a>&quot; by our own Dave Halter. In glorious weather conditions we also met many familiar faces, including some from <a href="https://www.coredump.ch">Coredump Hacker- und Makerspace</a> in Rapperswil-Jona.</p>
<h3>Cloud Native Day</h3>
<p>On a beautiful autumn day, Cloud Native Day was held on Bern&#x27;s Mount Gurten, and cloudscale.ch was once again involved as a sponsor. <strong>At the two-track event, Kubernetes and cluster APIs were two of several key topics.</strong> Florian Forster went beyond pure containers in his talk. Taking the example of ZITADEL, an open source identity provider that can also be used for single sign-on with our cloud control panel, he spoke about his practical experience of switching to &quot;serverless&quot;. Cloud Native Day ended with the after-party, which even the onset of rain could not dampen.</p>
<h3>Dataspace Switzerland</h3>
<p>This year, in addition to these events with a more technical focus, cloudscale.ch was also a partner of Dataspace Switzerland, the series of events by swiss made software. <a href="https://www.swissmadesoftware.org/blog/Anderes/rueckblick---3--dataspace-switzerland---den-kopf-nicht-in-den-sand-stecken.html">June</a> was all about the Swiss Confederation&#x27;s National Cyber Security Centre (NCSC), current threats such as ransomware, IoT, the American CLOUD Act and, last but not least, the &quot;human factor&quot;. The main focus of <a href="https://www.youtube.com/watch?v=gjfSFW9GMro">November</a> was <strong>the new data protection law.</strong> It became clear that almost all companies still have things to do before the legislation comes into force on 2023-09-01, and practice-focused talks showed how to make sure the next steps on this journey are successful.</p>
<h3>Ansible Zürich Meetup</h3>
<p>After a year&#x27;s break, it was time again in late November for the Zurich Ansible community to get together for the <a href="https://www.meetup.com/de-DE/ansible-zurich/">meetup</a> not far from our office. In their presentations, three speakers provided an insight into best practices and offered <strong>tips on using Ansible for experienced and less experienced users.</strong> Shop talk then continued over American hot dogs, waffles and drinks, also with some employees of Red Hat, the company that has long been behind the popular automation tool.</p>
<br/>
<p>The wide range of events and the active participation show that IT and cloud are not merely technical matters. Whether you are a user of a complex system or an engineer, personal discussions help you to keep your eye on the ball and to drive technology forwards together. Do you know of an event that should not be missed? Please let us know as <strong>we are already looking forward to interesting discussions in 2023.</strong></p>
<p>See you soon!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Gain Confidence With Cloud Exercises
]]></title>
          <link>https://www.cloudscale.ch/en/news/2022/11/18/gain-confidence-with-cloud-exercises</link>
          <pubDate>Fri, 18 Nov 2022 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2022/11/18/gain-confidence-with-cloud-exercises</guid>
          <description>
            <![CDATA[<p>New cloudscale.ch employees usually bring a great deal of experience with them. However, onboarding as one of our engineers still involves a learning process where we accompany our new team members both in person and with prepared modules. One of these modules consists of &quot;cloud exercises&quot;, which provide an overview of our services and enable new employees to get to know various supported deployment tools. As exercises from a user perspective, they are equally suitable for our customers and partners, which is why we have also published them on GitHub.</p>]]>
          </description>
<content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-cloud-exercises.png"/><h3>Onboarding with a broad and practice-focused perspective</h3>
<p>Not all cloudscale.ch engineers are typical cloud users themselves, which is a good thing. Our teams deal with hardware, virtualization and the ongoing maintenance and further development of our whole infrastructure, among other things. This is what makes it possible for you as a user to access the resources you require within seconds, either at the press of a button or <strong>in a completely automated manner.</strong></p>
<p>Proximity to our customers and their concerns is a top priority for us here at cloudscale.ch. This is why we have compiled a collection of short exercises for engineers who join our company, enabling them to <strong>get to know our offering and its associated features from a customer perspective,</strong> too. The insights gained in this way are useful in many ways – not least when it comes to reproducing our customers&#x27; issues and responding to them competently.</p>
<h3>Learning by doing – for our users, too</h3>
<p>It was clear to us that <strong>guided exercises of this kind, which relate to the features and tools and can be completed independently,</strong> are not only useful for our own engineers, but are also beneficial to users at our customers and partners. This is why we have published this little tour of our services <a href="https://github.com/cloudscale-ch/cloud-exercises">on GitHub</a>.</p>
<img src="https://static.cloudscale.ch/img/news-cloud-exercises-e9d27ca0650c.png" alt="Cloud exercises with introduction and background information."/>
<p>In addition to a short introduction, the cloud exercises also contain links to relevant blog entries for each area. However, there are no actual instructions or a one-size-fits-all solution. <strong>The journey is its own reward.</strong> By spending time on a previously unknown feature or tool, you develop your own understanding piece by piece. And if you find yourself racking your brain in vain, we here at cloudscale.ch are happy to give you a few pointers – irrespective of whether you are one of our new colleagues or one of our customers.</p>
<br/>
<p>The cloud exercises also make it clear that several paths frequently lead to the same destination. Use the exercise examples as an opportunity to <strong>try out various approaches for yourself.</strong> And who knows, maybe you will come across a solution by chance that will soon become indispensable.</p>
<p>Practice makes perfect!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Did You Know...? – Our Control Panel
]]></title>
          <link>https://www.cloudscale.ch/en/news/2022/10/18/did-you-know-our-control-panel</link>
          <pubDate>Tue, 18 Oct 2022 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2022/10/18/did-you-know-our-control-panel</guid>
          <description>
            <![CDATA[<p>From the clear layout of your cloud resources to our transparent, purely linear price structure: it has always been important to us here at cloudscale.ch to have an easy-to-use cloud offer. In this article, we would like to give you a few tips about our cloud control panel to help you achieve your day-to-day goals even more elegantly.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-panel-defaultproject.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-panel-dropdownsearch.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-panel-timezone.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-panel-2fa.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-panel-loginnotification.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-panel-defaultsshkeys.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-panel-servergroups.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-panel-defaultzone.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-panel-reversedns.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-panel-consolelog.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-panel-hostkeys.png"/><h3>Default project</h3>
<p>While many of our users work on several cloud projects at once, these often differ greatly in scope. By <strong>setting your most frequently used project as your &quot;default project&quot; in your account settings,</strong> you can get going immediately after logging in. It goes without saying that you can continue to access your other projects with just two clicks of your mouse.</p>
<img src="https://static.cloudscale.ch/img/news-panel-defaultproject-ae0d19dfc4d1.png" alt="Default project"/>
<h3>Search in the dropdown</h3>
<p>You can move intuitively between your projects by selecting them from the dropdown field. And to make things even faster, simply <strong>type a few letters from the required project name</strong> (even from the middle of the word) into the open field and confirm by pressing Enter. This also works for most other dropdowns in our control panel.</p>
<img src="https://static.cloudscale.ch/img/news-panel-dropdownsearch-dbb6961e6209.png" alt="Search in the dropdown"/>
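<p>As a simplified sketch of this matching behavior (illustrative Python, not the actual control panel code): the query matches any part of an option&#x27;s name, regardless of case.</p>

```python
def dropdown_matches(options, query):
    """Case-insensitive substring filter, matching even mid-word."""
    q = query.lower()
    return [option for option in options if q in option.lower()]

# "round" matches the middle of "playground":
print(dropdown_matches(["staging", "production", "playground"], "round"))
# → ['playground']
```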
<h3>Time zone and format</h3>
<p>You will find dates and times in various places in the control panel, which allow you to see immediately when, for example, a server was created or an API token was last used. It goes without saying that time stamps also play a key role in the various logs. In your account settings, select the <strong>time zone and format that will best support you in your work.</strong></p>
<img src="https://static.cloudscale.ch/img/news-panel-timezone-d6417d97df40.png" alt="Time zone and format"/>
<h3>Two-factor authentication</h3>
<p><strong>Protect your cloudscale.ch account with two-factor authentication (2FA).</strong> With 2FA enabled, when logging in you will be asked not only for your normal password but also for a token (also known as a one-time password or OTP), which you generate with your smartphone, for example. Store the corresponding recovery code in a safe place, too, in case your smartphone is ever not available.</p>
<img src="https://static.cloudscale.ch/img/news-panel-2fa-4f6faf205482.png" alt="Two-factor authentication"/>
<h3>Login notification</h3>
<p>Under the &quot;Sessions&quot; menu item you can indicate whether you want to be <strong>informed of logins to your cloudscale.ch account.</strong> Particularly if you take advantage of <a href="https://www.cloudscale.ch/en/news/2021/06/18/single-sign-on-using-own-identity-provider">SSO with your own identity provider</a> to log into our control panel, it is recommended that you keep an additional password-based &quot;break glass&quot; account with notifications enabled, so that you are alerted if this account is actually used.</p>
<img src="https://static.cloudscale.ch/img/news-panel-loginnotification-54090322978e.png" alt="Login notification"/>
<h3>Default SSH keys</h3>
<p>Upload the public SSH keys of your colleagues into our cloud control panel, too, so that you can grant them access from the outset when creating new servers. You can use the &quot;star&quot; icon to determine <strong>which of the keys should be preselected,</strong> and it goes without saying that you can adapt your selection when you launch each individual server.</p>
<img src="https://static.cloudscale.ch/img/news-panel-defaultsshkeys-02354c4944d7.png" alt="Default SSH keys"/>
<h3>Server groups</h3>
<p>Anti-affinity ensures that correspondingly grouped servers (e.g. three web servers in a load-balancing cluster) run on different physical systems at any given time. This enables you to minimize the impact of a potential hardware problem on your overall setup. &quot;Server groups&quot; provides you with an <strong>overview of your existing groups,</strong> allows you to rename them if required, and to create new groups in advance, e.g. if you then want to reference them in an automated setup.</p>
<img src="https://static.cloudscale.ch/img/news-panel-servergroups-7795bb804d49.png" alt="Server groups"/>
<h3>Default zone</h3>
<p>To prepare for improbable but potentially serious events, such as fire and earthquakes, we recommend <a href="https://www.cloudscale.ch/en/news/2019/11/06/geo-redundancy-with-two-cloud-locations">geo-redundant setups</a> at our two cloud locations. Select the location where you carry out more frequent changes as your default zone for the project in question. <strong>When you create new servers in the web-based control panel this zone is preselected;</strong> when you launch via API, it is applied whenever a zone is not explicitly indicated in the API call.</p>
<img src="https://static.cloudscale.ch/img/news-panel-defaultzone-a0a259d7dc4f.png" alt="Default zone"/>
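<p>For API-based automation, the zone behavior can be pictured with a minimal Python sketch. The <code>zone</code> field corresponds to our server creation endpoint (<code>POST /v1/servers</code>); the concrete server names, flavor and image slugs below are made-up examples.</p>

```python
def server_payload(name, flavor, image, zone=None):
    """Build a request body for creating a server (POST /v1/servers)."""
    payload = {"name": name, "flavor": flavor, "image": image}
    if zone is not None:
        # Only send the zone when explicitly chosen; otherwise the
        # project's default zone applies on the API side.
        payload["zone"] = zone
    return payload

# Pinned to a specific location vs. relying on the project's default zone:
pinned = server_payload("web1", "flex-8-4", "ubuntu-22.04", zone="lpg1")
fallback = server_payload("web2", "flex-8-4", "ubuntu-22.04")
```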
<h3>Automatically set reverse DNS</h3>
<p>If you want to set a specific reverse DNS entry for an IP address, you can make this change at any time via the control panel. If you <strong>specify a &quot;Fully Qualified Domain Name&quot; (FQDN)</strong> as the host name at the time you create a server, this is automatically used for the reverse DNS. You can, of course, change it again at any time in this case, too.</p>
<img src="https://static.cloudscale.ch/img/news-panel-reversedns-43cd88f3b381.png" alt="Automatically set reverse DNS"/>
<h3>Console log</h3>
<p>Nothing is perfect – and this applies in IT at least as much as anywhere else. If a server has stopped responding, a reboot usually helps, which you can trigger via the control panel if necessary. <strong>Have a look at the console log first, though.</strong> You will often find information there that the server wrote to the serial console in connection with the crash, which may help you to resolve the issue.</p>
<img src="https://static.cloudscale.ch/img/news-panel-consolelog-0b495b010ea0.png" alt="Console log"/>
<h3>Host keys</h3>
<p>The public SSH keys of the server are also output to the console log when <a href="https://www.cloudscale.ch/en/news/2020/06/23/initialize-servers-with-cloud-init">cloud-init</a> generates them during the initial boot-up. Our system collects them from there and <strong>displays the corresponding fingerprints in the control panel.</strong> This allows you to verify that you are communicating directly with the correct server at the very first SSH connection – even before it can be checked using your <code>known_hosts</code> file.</p>
<img src="https://static.cloudscale.ch/img/news-panel-hostkeys-2c425c7ca1e6.png" alt="Host keys"/>
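<p>For illustration (a sketch, not our actual implementation): an OpenSSH-style SHA256 fingerprint is simply the base64-encoded SHA-256 digest of the key&#x27;s decoded base64 blob.</p>

```python
import base64
import hashlib

def sha256_fingerprint(pubkey_line):
    """Compute the OpenSSH-style SHA256 fingerprint of a public key line.

    A public key line has the form "<key-type> <base64-blob> [comment]";
    the fingerprint is the base64-encoded SHA-256 digest of the decoded
    blob, with the trailing "=" padding stripped.
    """
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Dummy key material for illustration only ("QUJDRA==" is not a real key):
print(sha256_fingerprint("ssh-ed25519 QUJDRA== demo"))
```

<p>You can compare such a fingerprint against the output of <code>ssh-keygen -lf &lt;pubkey-file&gt;</code> or against what your SSH client shows on the very first connection.</p>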
<br/>
<p>Everyone has an individual working style and, in many cases, <strong>different approaches lead to the desired result.</strong> So, make sure you use the defaults and features in a way that suits you and your projects. For collaborative work on cloud resources, you can find even more guidance in <a href="https://www.cloudscale.ch/en/news/2022/03/04/collaboration-overview#toc-concepts">Collaboration – the key concepts</a>, and we have compiled a few tips for your optimal cloud setup in <a href="https://www.cloudscale.ch/en/news/2021/07/28/optimal-use-of-our-infrastructure">How to Make Optimal Use of Our Infrastructure</a>.</p>
<p>Well thought out from all angles,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Current Assessment of Energy Supply
]]></title>
          <link>https://www.cloudscale.ch/en/news/2022/09/22/current-assessment-of-energy-supply</link>
          <pubDate>Thu, 22 Sep 2022 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2022/09/22/current-assessment-of-energy-supply</guid>
          <description>
            <![CDATA[<p>There are almost daily media reports on a possible energy shortage in Switzerland, which may occur in the latter part of this winter. Many things are still unclear at the moment, from the risk of this actually occurring, to the preventive measures that need to be taken and to potential consequences. As a cloud provider, we know how important reliable server operation is for our customers, which is why we would like to inform you of the facts we are currently aware of.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>The cloud: virtual and physical at the same time</h3>
<p>At cloudscale.ch, we offer our customers IT infrastructure &quot;as a service&quot;. Virtual servers ranging from really small to really large can be created in no time at all and can then also be scaled and deleted again at any time. This saves our customers from having to procure and maintain hardware, operate data center infrastructure, and guarantee connectivity. It goes without saying, however, that computing power, storage space and network connections are still based on <strong>correspondingly dimensioned physical resources</strong> that cannot be moved at the drop of a hat in the case of a power failure.</p>
<h3>Data centers have taken precautions</h3>
<p>This makes it all the more important to have an energy supply that is as reliable as possible. For this reason, cloudscale.ch was extremely intentional from the outset in its choice of the data centers we use and that we also draw power from. The data centers at both cloud locations have two separate power supply lines into the building, two dedicated power supply lines to our hardware, plus <strong>UPS systems and diesel generators</strong> that kick in without delay in the case of a power failure. Franz Grüter, Chairman of the Board of Directors of Green Datacenter AG, which our customers know as the &quot;LPG&quot; location, was <a href="https://www.tagesanzeiger.ch/die-rechenzentren-koennten-200000-haushalte-versorgen-425738191886">quoted in the Tagesanzeiger</a> newspaper as saying: &quot;Our data center will continue to work even in the case of a blackout.&quot;</p>
<p>The data center operators at both cloud locations are continuously monitoring the situation and taking further optimization measures wherever possible, e.g. with an increased diesel reserve and supply contracts. As far as we know, both our locations have already been officially <strong>classified as &quot;critical infrastructure&quot;.</strong> In addition, the sector is committed to raising awareness among decision-makers of the significance of IT infrastructure for all areas of life.</p>
<h3>Risk minimization through redundancy</h3>
<p>It goes without saying that, here at cloudscale.ch, we are committed to minimizing as far as possible the consequences of isolated incidents, which can never be completely ruled out. This means that our Internet connectivity is designed in a redundant manner with <strong>several direct connections to various IP transit providers</strong>, with equipment on their side that is likewise protected by two circuits, UPS and diesel generators.</p>
<p>With two cloud locations that work independently of each other to the greatest degree possible, we also make it possible for our customers to build geo-redundant setups. This allows you to additionally protect yourself against worst-case scenarios, even though we are convinced that, thanks to the measures already in place, there is <strong>no need to fear power failures over the winter.</strong></p>
<br/>
<p>At the moment, nobody knows what will actually happen this winter given that influencing factors range from international energy markets all the way to the weather. The data center operators at both our cloud locations have, however, always taken precautions, which means that, thanks to UPS systems and diesel generators, they can <strong>also cover longer interruptions,</strong> thus guaranteeing continuous operation of our servers.</p>
<p>Long-term preparations will prove their worth!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Scale to Our New Flavors Now
]]></title>
          <link>https://www.cloudscale.ch/en/news/2022/08/09/scale-to-new-flavors-now</link>
          <pubDate>Tue, 09 Aug 2022 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2022/08/09/scale-to-new-flavors-now</guid>
          <description>
            <![CDATA[<p>The new compute flavors that we introduced on 2022-07-01 offer a wide range of advantages. During a transition phase, existing servers and automated setups with old flavors are running unchanged in parallel. Benefit now by selecting the new flavor that best suits your individual use case by 2022-08-31 at the latest.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-scale-to-new-flavors-now-1.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-scale-to-new-flavors-now-2.png"/><h3>Transition phase for old compute flavors – switch now</h3>
<p>The <a href="https://www.cloudscale.ch/en/news/2022/07/01/price-reductions-and-new-flavors">new flavors</a> offer various memory/CPU ratios and thus fit even more precisely for the most varied of workloads. Most of the new flavors are also considerably cheaper than the previous ones with similar specifications. Since 2022-07-01, only the new flavors can be selected in our web-based cloud control panel. The old flavors are, however, still available to a limited extent during a transition phase, with <strong>existing virtual servers continuing to run unchanged</strong> and the old flavors remaining available via API for automated setups.</p>
<p>This <strong>transition phase will run until 2022-08-31</strong>. Scale your existing servers to one of our new flavors by this date. This only takes a few seconds and can be carried out when it suits you, e.g. when you have to restart your servers anyway to install security patches. Any servers that are still using an old flavor can be identified by a yellow icon in the cloud control panel.</p>
<img src="https://static.cloudscale.ch/img/news-scale-to-new-flavors-now-1-2b746a077ebc.png" alt="Servers with old flavors are highlighted in the cloud control panel." caption="Servers with old flavors are highlighted in the cloud control panel."/>
<p><strong>Adapt any automated setups by 2022-08-31, too</strong>, in order to ensure that they are only using our new flavors. In the case of the Docker Machine driver, you can either explicitly specify a new flavor or you can use the <a href="https://github.com/cloudscale-ch/docker-machine-driver-cloudscale">latest release</a>, which uses a new default value. For the Rancher UI driver, please ensure that all node templates under &quot;GLOBAL APPS &gt; Cluster Management &gt; RKE1 Configuration &gt; Node Templates&quot; are using one of the new flavors. The same applies to Terraform configurations and Ansible playbooks.</p>
<h3>Based on current hardware</h3>
<p>A significant advantage of using cloud services lies in the fact that the cloud provider deals with the hardware. In addition to maintenance and troubleshooting, this also includes the <strong>ongoing evaluation and procurement of new hardware</strong>. At cloudscale.ch, we have relied on AMD-based hardware for quite some time now, e.g. due to the large number of PCIe lanes in the case of our <a href="https://www.cloudscale.ch/en/news/2019/07/25/bluestore-encryption-and-nvme-only-storage">storage hosts with NVMe SSDs</a>.</p>
<p>With a view to launching our <a href="https://www.cloudscale.ch/en/news/2019/11/19/even-more-power-thanks-to-plus-flavor">&quot;Plus&quot; flavors</a> with dedicated cores, we introduced AMD CPUs for our compute hosts as well, and their percentage has been increasing ever since. Now we are decommissioning the previous Intel-based compute hosts. Virtual servers that have been running with Intel CPUs (identifiable in e.g. <code>/proc/cpuinfo</code>) will <strong>automatically be migrated to one of our current high-performance AMD compute hosts</strong> as soon as you scale to one of the new flavors.</p>
<h3>Further advantages</h3>
<p>In terms of price, you have already been benefiting automatically: where a new flavor is cheaper than the old flavor with comparable specifications, we have been applying the lower price to existing servers with old flavors since 2022-07-01 as well. There are, however, further reasons to scale now. <strong>The new Flex-4-1, for example, offers double the memory</strong> of the old Flex-2 for the same price, while the new Flex-16-8 comes with 8 instead of the 6 vCPUs of the old Flex-16.</p>
<img src="https://static.cloudscale.ch/img/news-scale-to-new-flavors-now-2-e4772711f11c.png" alt="For scaling, all new Flex and Plus flavors are available, e.g. the new Flex-16-8 with more vCPUs than the previous Flex-16." caption="For scaling, all new Flex and Plus flavors are available, e.g. the new Flex-16-8 with more vCPUs than the previous Flex-16."/>
<p>In the case of numerous workloads, there is additional potential for saving money given that the new &quot;memory-optimized&quot; and &quot;CPU-optimized&quot; flavors allow you to <strong>adapt your servers even more precisely to your specific needs</strong> without having to pay for underused resources. Or you can take the opportunity to test how your application performs if you switch from &quot;Flex&quot; to &quot;Plus&quot; (or vice versa).</p>
<br/>
<p>Please note that we will only be running the old flavors (with just one digit in the name) until 2022-08-31. <strong>Scale existing servers by this date to any one of our new flavors</strong> (with two digits in the name) in order to not only ensure a continued smooth operation, but also benefit from potential improvements to the performance and cost efficiency of your servers. It goes without saying that you will also enjoy the same benefits with automated setups. Simply adapt your tooling to the new compute flavors by 2022-08-31.</p>
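<p>Since old flavor names contain just one number and new ones two, a quick check can flag any old slugs still referenced in your tooling. A hypothetical helper as a sketch:</p>

```python
import re

# Old flavors: one number in the name (e.g. "flex-16");
# new flavors: two numbers, memory and vCPUs/cores (e.g. "flex-16-8").
OLD_FLAVOR = re.compile(r"^(flex|plus)-\d+$", re.IGNORECASE)

def needs_scaling(flavor_slugs):
    """Return the flavor slugs that still use the old naming scheme."""
    return [slug for slug in flavor_slugs if OLD_FLAVOR.match(slug)]

print(needs_scaling(["flex-4-1", "flex-16", "plus-32-8", "plus-8"]))
# → ['flex-16', 'plus-8']
```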
<p>Ready when you are!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Price Reductions and New Flavors
]]></title>
          <link>https://www.cloudscale.ch/en/news/2022/07/01/price-reductions-and-new-flavors</link>
          <pubDate>Fri, 01 Jul 2022 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2022/07/01/price-reductions-and-new-flavors</guid>
          <description>
            <![CDATA[<p>Flexibility is one of the many advantages of using cloud services. You have always had the choice of various &quot;compute flavors&quot; at cloudscale.ch that you can switch between at any time with no notice period. Following numerous customer requests, we are now introducing compute flavors with different memory/CPU ratios effective 2022-07-01. In addition to comparable new flavors to replace the existing ones, there are now also &quot;CPU-optimized&quot; and &quot;memory-optimized&quot; flavors to cover the needs of more special workloads. Also effective 2022-07-01, we are reducing our prices: The prices of the new Plus flavors are lowered by 25% and the larger Flex flavors even by 33%!</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>The right flavor for every use case</h3>
<p>Cloud server use cases are as individual as our customers. This is why, from the outset, we have offered <strong>flavors in various sizes</strong> at cloudscale.ch. And in contrast to your own physical servers, you can scale your virtual servers with us in no time, always matching your current requirements.</p>
<p>With immediate effect, <strong>additional compute flavors with different memory/CPU ratios</strong> are available to you. This means that you can book a lot of CPU performance for compute-intensive workloads (e.g. batch processing) without having to pay for unused memory. For other applications, such as certain database setups or the delivery of static files, it may be worth keeping all required data in memory. In this case, you can select a flavor with a relatively large quantity of memory compared to the number of vCPUs or dedicated cores.</p>
<h3>Details of the new compute flavors</h3>
<p>Starting with the best news: <strong>In most cases, our new compute flavors are considerably cheaper</strong> than the previous flavors with comparable specifications. And thanks to the wider selection of compute flavors that even better covers the requirements of the most varied use cases, costs for your virtual servers can be further optimized.</p>
<p>The familiar flavor names &quot;Flex&quot; (shared vCPUs) and &quot;Plus&quot; (dedicated cores) have been maintained. They are now composed in a uniform manner according to the scheme <code>Flex-&lt;Memory&gt;-&lt;vCPUs&gt;</code> and <code>Plus-&lt;Memory&gt;-&lt;Cores&gt;</code>, which <strong>provides immediate clarity in terms of the specifications of the compute flavor</strong> in question. We have, however, gone one step further and decided on a purely linear price model:</p>
<ul>
<li>Irrespective of whether Flex or Plus, 4 GB of memory now costs CHF 0.5 per day.</li>
<li>In terms of processing performance, Flex flavors cost an additional CHF 0.5 per vCPU per day, and Plus flavors an additional CHF 1 per dedicated core per day.</li>
</ul>
<p>This means that a &quot;Flex-4-2&quot; with 4 GB memory (CHF 0.5) and 2 vCPUs (CHF 1) costs CHF 1.5 per day, while a &quot;Plus-32-4&quot; with 32 GB memory (CHF 4) and 4 dedicated cores (CHF 4) costs CHF 8 per day. To make things easier to understand, we will continue to list prices per day (i.e. 24 hours). It goes without saying that you will, however, still benefit from <strong>to-the-second billing</strong> for your cloud services.</p>
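<p>The linear price model translates directly into code; a small sketch using the figures above (CHF per day):</p>

```python
def daily_price(flavor):
    """Compute the daily price in CHF for a new-style flavor name,
    e.g. "Flex-4-2" or "Plus-32-4", using the linear model:
    CHF 0.5 per 4 GB of memory, plus CHF 0.5 per vCPU (Flex)
    or CHF 1 per dedicated core (Plus)."""
    family, memory_gb, cpus = flavor.split("-")
    memory_price = int(memory_gb) / 4 * 0.5
    cpu_price = int(cpus) * (0.5 if family.lower() == "flex" else 1.0)
    return memory_price + cpu_price

print(daily_price("Flex-4-2"))   # → 1.5
print(daily_price("Plus-32-4"))  # → 8.0
```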
<h3>Simply scale to the new flavors</h3>
<p>Our new compute flavors are available to you with immediate effect in the cloud control panel and via API. When scaling existing servers, please note that <strong>only AMD CPUs</strong> will be used for our new flavors, which means that details visible in your server (e.g. in <code>/proc/cpuinfo</code>) may change.</p>
<p>Where the new flavors are cheaper than the corresponding old flavor with similar specifications, <strong>we already apply the lower price to the old flavor</strong>, which means that you also benefit if you are not immediately able to scale to a new flavor.</p>
<p>During a <strong>transition phase until 2022-08-31</strong>, servers can still be started with old flavors via our API. This means that for the time being automated setups, e.g. in Ansible and Terraform, will continue to work and you can take your time adapting them. However, we decided against simple &quot;renaming&quot; of flavors; if our API suddenly returned &quot;Flex-4-2&quot; instead of &quot;Flex-4&quot;, tools such as Terraform might unexpectedly identify a need for action. Over the coming days, we will publish an update for users of our <a href="https://github.com/cloudscale-ch/docker-machine-driver-cloudscale">Docker Machine Driver</a> and <a href="https://github.com/cloudscale-ch/ui-driver-cloudscale">Rancher UI Driver</a>, which will be adapted to our new flavors.</p>
<br/>
<p>Irrespective of your use case, a cloud server is always suitable as it can be adapted to your current requirements at any time. You can now also enjoy this flexibility at cloudscale.ch if your application is particularly memory- or CPU-heavy as our new flavors enable you to <strong>book exactly what you need</strong>, while also allowing you to benefit from our new and better pricing.</p>
<p>The right flavor for every taste!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Our DNS Setup at cloudscale.ch
]]></title>
          <link>https://www.cloudscale.ch/en/news/2022/06/15/our-dns-setup-at-cloudscale-ch</link>
          <pubDate>Wed, 15 Jun 2022 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2022/06/15/our-dns-setup-at-cloudscale-ch</guid>
          <description>
            <![CDATA[<p>While data make their way through the Internet to their destination using numeric IP addresses, the Domain Name System (&quot;DNS&quot;) ensures that these IPs remain concealed behind user-friendly domain names. Almost unnoticed in day-to-day processes, the DNS translates domain names into IP addresses and vice versa, which is why it is often compared to a telephone directory. Find out how we manage our part of this globally distributed database here at cloudscale.ch.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-dns-setup-en.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-dns-reverse.png"/><h3>External view of our DNS servers</h3>
<p>The authoritative name servers of cloudscale.ch are important for our customers in various situations. They <strong>make it possible, for example, for our own services to be found</strong> and then used. This includes access to our website and the cloud control panel via a browser, as well as sending requests to our API from client-side tools. In addition, the domain names of our Object Storages need to be resolved to IP addresses, including for visitors to third-party websites if static content from our Object Storages is embedded there. Furthermore, our name servers respond to what are known as reverse lookups: our customers can determine a reverse DNS name (PTR record) for their virtual servers and Floating IPs, which is then published via DNS and can be queried from our name servers.</p>
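<p>The query name behind such a reverse lookup can be illustrated with Python's standard library: PTR records are published under the special <code>in-addr.arpa</code> (IPv4) and <code>ip6.arpa</code> (IPv6) zones, with the address written in reverse. The addresses below are documentation addresses (RFC 5737/RFC 3849), not actual cloudscale.ch IPs:</p>

```python
from ipaddress import ip_address

# PTR records live under in-addr.arpa (IPv4, octets reversed) and
# ip6.arpa (IPv6, nibbles reversed); ipaddress builds the query name.
print(ip_address("192.0.2.10").reverse_pointer)
# -> 10.2.0.192.in-addr.arpa

# IPv6 addresses expand to one label per hex nibble under ip6.arpa:
print(ip_address("2001:db8::1").reverse_pointer.endswith(".ip6.arpa"))
# -> True
```

<p>A resolver asks the name servers responsible for that reverse zone for the PTR record at exactly this name.</p>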
<p>If a DNS query is made for one of our IP addresses or domains, the DNS client (starting from the root zone and following the DNS hierarchy) first identifies our name servers and then asks them for the required information. We currently have three public name servers: we run <code>ns1.cloudscale.ch</code> at our &quot;RMA&quot; cloud location in Rümlang (Canton Zurich) and <code>ns2.cloudscale.ch</code> at our &quot;LPG&quot; location in Lupfig (Canton Aargau). Although we have taken numerous measures to protect our cloud locations against failure, including redundancy in terms of Internet connectivity and hardware, we additionally run <code>ns3.cloudscale.ch</code> outside our own infrastructure. <strong>The three name servers are completely independent of each other</strong> and can respond to DNS queries directly without having to rely on a central component such as a joint database.</p>
<h3>Concealed control infrastructure</h3>
<p>The decisive data source for our public name servers is an internal DNS setup that cannot be reached from the Internet. This is also designed in a geo-redundant manner and constantly replicates its dataset between our two cloud locations. Changes to DNS entries are fed into this internal DNS setup in a first step. A special control service then tests, several times a minute, whether new or changed entries are present and initiates zone transfers where necessary, which enables the public name servers to update their copy of the data. Changes to DNS entries – most commonly new reverse DNS entries from the cloud control panel – generally become <strong>visible from the Internet within ten seconds</strong>.</p>
<img src="https://static.cloudscale.ch/img/news-dns-setup-en-254516ecddae.png" alt="Geo-redundant DNS setup."/>
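<p>The decision logic of such a control service can be sketched in a few lines: every zone carries an SOA serial number, and a public name server's copy needs refreshing whenever the hidden internal source has a newer serial. This is a simplified illustration with made-up serial numbers, not our actual implementation:</p>

```python
def needs_zone_transfer(internal_serial: int, public_serial: int) -> bool:
    """True when the internal (hidden primary) zone is newer than the
    public copy. Real DNS software compares serials using RFC 1982
    serial-number arithmetic; plain comparison is enough for a sketch."""
    return internal_serial > public_serial

# Hypothetical serials as the control service might observe them:
print(needs_zone_transfer(2022061502, 2022061501))  # -> True: initiate transfer
print(needs_zone_transfer(2022061501, 2022061501))  # -> False: already in sync
```

<p>Running such a check several times a minute is what keeps the window between a change and its public visibility down to seconds.</p>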
<h3>Tried-and-tested technology</h3>
<p>The DNS protocol itself is already <strong>designed with a certain degree of fault tolerance</strong>. It is common (and in certain cases mandatory) to have two or more authoritative name servers for a zone, such as a domain. If a DNS client does not receive a response to a query, it shortly afterwards automatically tries again with another one of these name servers. However, a delay of this kind may have undesired effects, which is why, for our DNS setup at cloudscale.ch, we not only have a redundantly designed physical infrastructure, but also tried-and-tested software components and configurations to avoid failure wherever possible. And last but not least, the systems involved in the DNS are closely monitored here so that we can intervene in good time if necessary.</p>
<p>By the way, if you specify a &quot;Fully Qualified Domain Name&quot; (FQDN) as the server name when creating a virtual server, it is automatically recorded in our DNS setup as a reverse DNS entry to the IP addresses of this server (IPv4 and possibly IPv6). Floating IPs take over the reverse DNS of the virtual server that they are initially assigned to. <strong>You can adapt the reverse DNS of servers and Floating IPs at any time</strong> in order to ensure that they match the DNS entries in your own domain.</p>
<img src="https://static.cloudscale.ch/img/news-dns-reverse-ec9eb1852637.png" alt="Configurable Reverse DNS."/>
<br/>
<p>Even beyond web and email addresses, the Domain Name System is involved practically everywhere. Here at cloudscale.ch, it is important to us that the <strong>DNS resolution of our domains and IP addresses functions reliably</strong>, as this is essential for our customers to be able to manage their cloud resources via the cloud control panel and API as well as access our Object Storages. With a carefully designed geo-redundant DNS setup without a single point of failure, we help to ensure that using our services is not only simple but also smooth.</p>
<p>This is what our (domain) name stands for!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Successful ISO Recertification]]></title>
          <link>https://www.cloudscale.ch/en/news/2022/05/13/successful-iso-recertification</link>
          <pubDate>Fri, 13 May 2022 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2022/05/13/successful-iso-recertification</guid>
          <description>
            <![CDATA[<p>&quot;Information security&quot; requires permanent work and needs to be taken into account for all activities. It does not simply involve a one-off product purchase that can be crossed off your to-do list. Certification in accordance with, for example, ISO/IEC 27001 is only issued for a limited period of time and is audited regularly. We are delighted to inform you that the certificate for cloudscale.ch has been renewed without interruption following a successful recertification audit.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Information security along the whole supply chain</h3>
<p>Information is more important than ever today. At the same time, there has been an increasing focus on protecting information, which in turn raises the significance of standards such as ISO 27001, which covers various aspects of information security. ISO 27001 certification is important for many of our customers – not only for selected service providers or data centers, but for <strong>the whole data processing supply chain</strong>.</p>
<p>For this reason, we already had cloudscale.ch certified in accordance with ISO 27001 as early as 2019, consequently committing ourselves to regular audits. Following our recent successful &quot;recertification audit&quot;, <strong>our certificate was renewed without interruption</strong>. You will find the <a href="https://www.cloudscale.ch/en/iso-27001-27017-and-27018-certificate.pdf">current certificate, which is valid until 2025</a>, on our <a href="https://www.cloudscale.ch/en/about">website</a> and can download it for your files.</p>
<p>In addition to the universally applicable ISO/IEC 27001:2013 standard, we were also audited in accordance with the ISO/IEC 27017:2015 and 27018:2019 standards. These two standards were developed as <strong>extensions to ISO 27001 and define complementary controls</strong> that are particularly relevant to cloud services and to processing personally identifiable information in public clouds.</p>
<h3>Continuous improvement as an integral component</h3>
<p>We only recently announced that an <a href="https://www.cloudscale.ch/en/news/2022/04/29/isae-3000-report-available">ISAE 3000 report</a>, which also deals with information security, is available if required. Although it is a coincidence that these two announcements are made so close together, it is no coincidence that this topic is consistently relevant for us. While ISO 27001 certificates are valid for three years, the standard mandates <strong>annual audits by the accredited certification body</strong>. This means that once the certificate has been issued, there are two &quot;surveillance audits&quot; and then a more comprehensive &quot;recertification audit&quot; for the renewal of the certificate.</p>
<p>In addition to this, internal audits also need to be performed every year. To achieve certification, it is not, however, enough for the tested processes to meet the requirements of the standard at the time of the audit. Moreover, the processes themselves need to be <strong>continuously further improved</strong>.</p>
<h3>Useful features for enhanced security</h3>
<p>With cloudscale.ch, you are choosing a cloud provider that not only takes data protection and information security extremely seriously, but also monitors this by means of independent audits. As mentioned above, it is essential that information security is implemented at all levels. We therefore provide a range of features that help customers <strong>enhance data security</strong> in their own areas of responsibility.</p>
<p>These include various <a href="https://www.cloudscale.ch/en/news/2022/03/04/collaboration-overview">options for collaboration</a> in our cloud control panel, which mean that you can use cloud services in a corporate context, too, without needing to share accounts and passwords. Graduated access rights for each project and the use of two-factor authentication (2FA) enable you to add extra protection. And if you already use an &quot;OpenID Connect&quot;-compatible identity provider, such as Keycloak or ZITADEL, you can also <a href="https://www.cloudscale.ch/en/news/2021/06/18/single-sign-on-using-own-identity-provider">benefit from single sign-on</a> when logging into our cloud control panel. This means that you not only control the log-in process yourself, but you <strong>make things more convenient</strong> for your employees at the same time.</p>
<br/>
<p>Although standards, processes and audits are sometimes perceived to be a little &quot;dry&quot;, information security is not created on paper, but needs to be <strong>lived out in day-to-day routines</strong>. Our recent recertification audit in accordance with ISO 27001, 27017 and 27018 once again proved that we have our eye on the ball here at cloudscale.ch.</p>
<p>With trust comes responsibility!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[The Right Image for Every Situation]]></title>
          <link>https://www.cloudscale.ch/en/news/2022/05/06/custom-image-management</link>
          <pubDate>Fri, 06 May 2022 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2022/05/06/custom-image-management</guid>
          <description>
            <![CDATA[<p>Whether ready-to-use appliances, specialized distributions or your individual config, with custom images you can create cloud servers that are ideally prepared for their intended purpose from the very first start-up. Managing and using custom images has now become even easier, which means that you can build on the appropriate image in any situation.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-custom-image-management.png"/><h3>Images as the base for new cloud servers</h3>
<p>In order for new cloud servers to be available immediately, the operating system is not installed manually (e.g. from a DVD); instead, the root volume is <strong>prefilled with an image</strong> when a new cloud server is launched, and this installation template is then further configured during the initial boot process.</p>
<p>You have been able to import and use <a href="https://www.cloudscale.ch/en/news/2020/12/09/flexible-and-efficient-thanks-to-custom-images">your own images</a> for a while now at cloudscale.ch. This enables you to create virtual servers in no time at all using an operating system other than those provided by us, or using software and configurations that are <strong>completely adapted to your requirements</strong>. Where several image files are available from distributors or third-party providers, it is often the case that one of them has been specifically optimized for OpenStack environments such as ours at cloudscale.ch.</p>
<h3>Managing custom images in the cloud control panel</h3>
<p>Managing custom images is now <strong>also possible in our web-based cloud control panel</strong>. Even if you are not using our API, the <a href="https://www.cloudscale.ch/en/news/2020/09/15/cloudscale-cli-now-available">cloudscale.ch CLI</a> or the DevOps tools we support, such as <a href="https://www.cloudscale.ch/en/news/2021/12/23/terraform-imports-and-data-sources">Terraform</a> and <a href="https://www.cloudscale.ch/en/news/2020/12/21/cloud-orchestration-with-ansible-collections">Ansible</a>, you can now import your own images at cloudscale.ch in order to start one or more virtual servers on this base. In addition, it is now possible to import images directly in QCOW2 format without needing to convert them to RAW first. It goes without saying that you also benefit from this simplified process if you continue to import images via API.</p>
<img src="https://static.cloudscale.ch/img/news-custom-image-management-e8f4e43d9d14.png" alt="Custom image management in the cloud control panel."/>
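<p>Via the API, an import boils down to a single request. The sketch below builds such a request body; the field names follow the public API documentation, while the image URL and slug are made-up example values, so double-check everything against the current API reference before relying on it:</p>

```python
import json

# Illustrative endpoint for custom image imports (no request is sent here).
IMPORT_URL = "https://api.cloudscale.ch/v1/custom-images/import"

payload = {
    "url": "https://example.com/images/my-appliance.qcow2",  # made-up source URL
    "name": "My Appliance",
    "slug": "my-appliance",
    "source_format": "qcow2",             # QCOW2 no longer needs conversion to RAW
    "user_data_handling": "pass-through", # how user data is handed to the image
    "zones": ["lpg1"],
}
print(json.dumps(payload))
```

<p>Our system fetches the image from the given URL in the background, so even large image files do not need to pass through your own connection.</p>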
<p>We recommend that you use <a href="https://www.cloudscale.ch/en/news/2020/06/23/initialize-servers-with-cloud-init">cloud-init</a> for your own images, too. Using cloud-init, virtual servers can <strong>apply numerous settings automatically</strong> during the boot process, e.g. relating to the name and IP address of the server. You can also hand over many other options to cloud-init as &quot;user data&quot; at the time you create the server. In addition, you can specify for each custom image how our system should handle the user data. This is of particular significance if your image uses Ignition, which serves a similar purpose, rather than cloud-init. You will find further details about this in the <a href="https://www.cloudscale.ch/en/api/v1#integrating-custom-images-with-cloudscalech-infrastructure">documentation</a>.</p>
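<p>As an illustration of how user data is handed over at creation time (the server details are made-up example values; the <code>#cloud-config</code> format itself is standard cloud-init), a minimal example might look like this:</p>

```python
# A minimal "#cloud-config" document; cloud-init applies it on first boot.
user_data = """#cloud-config
package_update: true
packages:
  - nginx
runcmd:
  - systemctl enable --now nginx
"""

# Handed over in the server-creation request alongside name, flavor and image:
server_request = {
    "name": "app-1",              # made-up example values
    "flavor": "flex-4-2",
    "image": "my-custom-image",   # hypothetical custom image slug
    "user_data": user_data,
}
print(server_request["user_data"].startswith("#cloud-config"))  # -> True
```

<p>For images that use Ignition instead of cloud-init, the per-image user data handling setting determines how this payload is passed on.</p>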
<br/>
<p>Whether you are compiling a customized image with your preferred tools and settings or using a comprehensive complete package from a third-party provider, <strong>managing and using your custom images has now become even easier</strong> at cloudscale.ch.</p>
<p>For the right base,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[ISAE 3000 Report Available]]></title>
          <link>https://www.cloudscale.ch/en/news/2022/04/29/isae-3000-report-available</link>
          <pubDate>Fri, 29 Apr 2022 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2022/04/29/isae-3000-report-available</guid>
          <description>
            <![CDATA[<p>Although &quot;IT&quot; and &quot;business management&quot; often seem to be far removed from each other in everyday life, they are actually closely connected. Nowadays, IT infrastructure is the lifeblood of many companies, which means it is just as much a focus for auditors as, for instance, accounts. For this reason and with immediate effect, cloudscale.ch is offering a report based on the ISAE 3000 standard to customers whose audits also cover outsourced IT processes, thus helping them to adhere to internal and external compliance requirements.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Comprehensive reporting – also for IT</h3>
<p>Most people tend to first think of accounts when they hear the term &quot;audit&quot;. However, correct financial statements alone are often not sufficient and other processes, e.g. those relating to IT, may be important for the survival of a company and therefore be <strong>included in audits and business reporting</strong>. This is where a report based on ISAE 3000 comes in.</p>
<p>The abbreviation &quot;ISAE 3000&quot; stands for &quot;International Standard on Assurance Engagements 3000&quot;, which is an international test standard issued by the International Federation of Accountants (IFAC). The standard creates a uniform framework for &quot;assurance engagements other than audits or reviews of historical financial information.&quot; In the process, an <strong>independent auditor</strong> checks the internal control system of a company or specific division and produces a corresponding report.</p>
<p>While ISO 27001 deals specifically with information security and prescribes more than 100 associated controls, ISAE 3000 basically does not include any requirements relating to controls or internal control as such. However, the actual scope of an audit based on ISAE 3000 is <strong>disclosed in detail in the resulting audit report</strong>, which then allows readers to assess whether the tested controls meet their own requirements. By way of comparison: an ISO 27001 certificate documents that the standard has been adhered to across its whole scope, but does not provide further details about its implementation.</p>
<h3>Audit procedure based on ISAE 3000</h3>
<p>A theoretical starting point is a worst-case scenario, e.g. &quot;A burglar publishes secret data.&quot; To achieve the reassuring sense that this worst case will most probably not occur requires several steps. Once the risk has been identified, the next step is to formulate an <strong>objective</strong> to prevent it from happening (&quot;Unauthorized persons have no access to secret data&quot;). To achieve the objective, specific <strong>controls</strong> are in turn defined (&quot;The server room is always locked&quot;, &quot;The data are encrypted&quot;).</p>
<p>In an audit based on ISAE 3000, an auditor assesses three issues: do the controls seem <strong>suitable</strong> for achieving the objective? Were the controls actually <strong>implemented</strong>? Were they also <strong>effective</strong>?</p>
<p>By answering these questions for all the important processes and objectives of a company, an auditor can provide top management with a <strong>statement about whether the objectives have been achieved</strong>. It goes without saying that the auditor is not omniscient, but by using professional judgement and auditing a sufficient quantity of random samples, an adequately high level of certainty can be achieved about whether the assessment corresponds to (an ultimately always unknown) reality.</p>
<h3>Reporting beyond corporate boundaries</h3>
<p>Gaps in audits of one&#x27;s own company inevitably occur wherever a process has been outsourced, e.g. when cloudscale.ch cloud services are used instead of running one&#x27;s own servers and data centers. In this case, it makes sense for the outsourcing partner to be audited separately. This audit report enables auditors to close the gaps in their own audit and, once again, obtain a <strong>complete overview of processes within the company</strong>.</p>
<p>At cloudscale.ch, an <strong>audit report based on the ISAE 3000 standard is available</strong> to our customers with immediate effect. For a contribution towards the cost of producing the report, we will be happy to provide you with a copy if required. The specific nature of reports of this kind makes this particularly relevant for customers who are themselves audited based on a standard such as ISAE 3000 and who would like to or are obliged to close the above-mentioned gaps that exist in purely internal audits.</p>
<h3>No surprises in terms of content</h3>
<p>While a company can define its own objectives and controls for internal processes, a service provider such as cloudscale.ch needs to make a selection. This selection aims to cover the <strong>typical requirements of many customers</strong>, but cannot cater to every individual case. While the format of an ISAE 3000 report was completely new to us here at cloudscale.ch, the selected objectives and controls were tried-and-tested ones. From the outset we have consistently used our customers&#x27; requirements as a guide and taken those measures that we believe to be relevant to our customers.</p>
<p>While these include obvious matters such as limited access to our server systems and continuous training of our employees, they also include features such as <a href="https://www.cloudscale.ch/en/news/2019/07/25/bluestore-encryption-and-nvme-only-storage">at-rest data encryption</a> and the option of <a href="https://www.cloudscale.ch/en/news/2019/10/25/segmentation-with-multiple-private-networks">separate private networks</a>, as we have already reported on. The <strong>independent external audit and formal reporting</strong> are new; the specific measures that we take to contribute to the security and reliability of your cloud resources are, however, long established.</p>
<br/>
<p>Our customers know and appreciate the technical focus that we have always had at cloudscale.ch and that we still maintain today. Preparing our first own ISAE 3000 report was a completely new experience for us. Now that we have committed to it, it feels doubly good that the technical foundation we have developed continuously also stood the test of a business audit. And, far more importantly, having an independent auditor&#x27;s report means that we can help our customers in <strong>complying with their own specifications</strong>.</p>
<p>Objective achieved!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Collaboration that Meets Every Need – an Overview]]></title>
          <link>https://www.cloudscale.ch/en/news/2022/03/04/collaboration-overview</link>
          <pubDate>Fri, 04 Mar 2022 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2022/03/04/collaboration-overview</guid>
          <description>
            <![CDATA[<p>Here at cloudscale.ch, we believe that accounts are something personal: every account belongs to a person and should not be shared. This ensures that it is possible to select truly personal and secure login credentials and, ideally, also two-factor authentication (2FA). Our cloud control panel has a range of features that facilitate collaboration, irrespective of how it is &quot;organized&quot;. We would like to use a fictional example to illustrate the available options and the different aspects of the various approaches.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-collab-myproject.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-collab-billing.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-collab-members.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-collab-teams.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-collab-collaborator.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-collab-partners.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-collab-remove-member.png"/><h3>Use cloudscale.ch – alone or together</h3>
<p>Andrea is a committed open source user. She regularly finds and tests new tools for all kinds of tasks. She uses cloudscale.ch in order to ensure that she always has a fresh system that she can play around with and then delete again. This enables her to start virtual servers with just a few clicks and to choose from a selection of the most popular Linux distributions, depending on the tool she wants to look at.</p>
<img width="345" height="225" src="https://static.cloudscale.ch/img/news-collab-myproject-7042c6b525aa.png" alt="Every account has a &quot;My Project&quot; for virtual servers and other cloud resources by default." caption="Every account has a &quot;My Project&quot; for virtual servers and other cloud resources by default."/>
<p>In her work, Andrea is Head of IT at Enjoy AG, a company that runs an online platform for restaurants and delivery services. So far, the platform has been running on a physical server that Enjoy AG set up with a housing provider. In addition, there is an on-premise server in the office that is used for the CRM among other things. Both servers are actually oversized in terms of performance, but they are getting old and Andrea is concerned that a hardware defect may result in business coming to a standstill for a long period of time. Simply buying new devices is not an option for Andrea as this would not resolve her concerns about the lack of redundancy.</p>
<p>Andrea would rather use the <a href="https://www.cloudscale.ch/en/news/2021/02/09/why-the-cloud">benefits of the cloud</a> and, instead of an individual excessively large server, create a geo-redundant failover setup that can be scaled with the growth of her platform. In order to enable her and her employees to work on the new cloud setup together, Andrea creates a <a href="https://www.cloudscale.ch/en/news/2021/05/27/new-control-panel-organizations-projects-teams">new organization</a> &quot;Enjoy AG&quot; in the cloudscale.ch cloud control panel. This means that with just a single login, she has two completely separate areas: a personal one and a business one.</p>
<img src="https://static.cloudscale.ch/img/news-collab-billing-b1766a1bb33a.png" alt="Account and organization each have their own balance and can be loaded independently of each other." caption="Account and organization each have their own balance and can be loaded independently of each other."/>
<h3>Invite and manage organization members</h3>
<p>Andrea then creates invite links for her colleagues so that they can join the &quot;Enjoy AG&quot; organization with their own personal accounts. It comes as no real surprise that her employees Jeremy and Patricia are also already cloudscale.ch customers; the others quickly set up a new account for free via &quot;Signup&quot;. Under &quot;Members&quot;, Andrea has a constant overview of which colleagues have used the invite link to join the organization. She pays particular attention to the 2FA status and asks those who have not yet activated this feature to do so.</p>
<img src="https://static.cloudscale.ch/img/news-collab-members-b450ce5255a0.png" alt="Existing members of the organization are managed in a clearly laid out manner and new members are invited to join the organization by means of an invite link." caption="Existing members of the organization are managed in a clearly laid out manner and new members are invited to join the organization by means of an invite link."/>
<p>Under &quot;Projects&quot; Andrea first creates three projects called &quot;CRM&quot;, &quot;Test&quot; and &quot;Prod&quot; to group the new cloud servers logically. If further projects arise in future, she can add these at any time. She could now add the appropriate employees for each project as project members and specify whether these people only have read access or also permission to make changes, e.g. to create or delete cloud servers. However, in order to make her life easier, Andrea decides to use &quot;teams&quot;. In keeping with the established structure of Enjoy AG, she creates teams with the appropriate team members and then adds the teams to the projects. In the team &quot;IntOps&quot;, she makes Patricia team leader, which means that Patricia can manage the team members herself in future and, for example, include any colleagues who switch to her team.</p>
<img width="345" height="240" src="https://static.cloudscale.ch/img/news-collab-teams-9986989d443b.png" alt="Organization members can be grouped into teams in order to manage project rights efficiently." caption="Organization members can be grouped into teams in order to manage project rights efficiently."/>
<h3>Include specialists as external collaborators</h3>
<p>Following a request from the marketing department, the &quot;Landing Pages&quot; project is soon added. Susanna is a freelance worker who has been given the task by Enjoy AG of running web pages for various marketing campaigns on a separate cloud server. It is clear to Andrea that Susanna requires full access to the appropriate cloud resources for this, but at the same time, she does not want Susanna to see all the other projects and people involved in the cloud control panel. This is why she generates an invite link for an &quot;<a href="https://www.cloudscale.ch/en/news/2021/09/23/collaboration-with-external-accounts">external collaborator</a>&quot;, which she sends to Susanna. As previously, Andrea could now set up a separate team with Susanna and the internal colleagues as team members. However, in this case, she believes it will be clearer if she allocates the existing &quot;Platform&quot; team plus Susanna to the &quot;Landing Pages&quot; project as project members.</p>
<img width="345" height="300" src="https://static.cloudscale.ch/img/news-collab-collaborator-f8dc2fb5b099.png" alt="External collaborators can access selected projects without seeing the rest of the organization." caption="External collaborators can access selected projects without seeing the rest of the organization."/>
<h3>Partnership between two organizations: clear responsibilities</h3>
<p>Sven is Andrea&#x27;s contact at Coders GmbH. He and his team developed the online platform for Enjoy AG and also help with maintenance and troubleshooting during live operations. On the old server, the developers at Coders already had SSH access, but without being able to see the server &quot;from the outside&quot;, they were sometimes limited in what they could do to help. This will all change with the new cloud setup. Coders GmbH also already uses cloudscale.ch internally and all employees have personal accounts. Andrea does not, however, want to invite every single one of them to be an external collaborator, which is why she and Sven agree on a &quot;<a href="https://www.cloudscale.ch/en/news/2022/01/27/cross-organizational-collaboration">partnership</a>&quot; between the two organizations. This means that she can share her &quot;Prod&quot; project with Coders GmbH, and Sven, as a superuser of the &quot;Coders GmbH&quot; organization, can add its teams and members to this project independently.</p>
<img src="https://static.cloudscale.ch/img/news-collab-partners-cf34c0cfc8eb.png" alt="Projects can be shared with partner organizations; superusers manage the rights for accounts in their own organization independently." caption="Projects can be shared with partner organizations; superusers manage the rights for accounts in their own organization independently."/>
<p>Unfortunately, Oliver leaves Coders GmbH shortly after this. He wants to take some time out in a peaceful mountain village so he can work on a few personal projects that are important to him. As well as for his work at Coders, Oliver also used his cloudscale.ch account for these projects, which is why he would like to keep his account. It is not a problem to make this wish come true as Sven can simply remove Oliver&#x27;s account from the organization and thus also from the Enjoy AG &quot;Prod&quot; project.</p>
<img width="260" height="450" src="https://static.cloudscale.ch/img/news-collab-remove-member-8acbc3396e45.png" alt="Organization members can be removed from the organization at any time. The account per se and personal projects and cloud resources remain in place." caption="Organization members can be removed from the organization at any time. The account per se and personal projects and cloud resources remain in place."/>
<p>With a little luck, Sven finds an almost seamless replacement in Christina. At the same time, he would like to progressively include Thomas, who is completing his training at Coders, in the collaboration with Enjoy AG. For the moment, unlike the other employees, Thomas only needs read access. Sven is delighted that he can implement all these administrative adjustments himself without having to bother Andrea.</p>
<br/>
<p>In the meantime, the new platform release is almost finished and both Enjoy AG and Coders GmbH are looking forward to offering the restaurants and delivery services numerous new features. The landing pages for the new campaign are also ready. As everybody has been granted the access required for their contribution, there is nothing more standing in the way of a successful deployment. And once the work has been completed in the various workplaces, there might even be a face-to-face meeting to toast the successful collaboration.</p>
<p>To successful collaboration!<br/>
Your cloudscale.ch team</p>
<br/>
<p>PS: In order to ensure that administration of the organization does not depend solely on her, Andrea plans to make her colleague Jeremy a superuser, too, so that they can stand in for each other if one of them is absent, for instance. In the cloudscale.ch blog, she found the following overview, which she forwards to Jeremy so that he can get started quickly.</p>
<h3>Collaboration – the key concepts</h3>
<p><strong>Projects:</strong> Cloud resources can be grouped in projects, e.g. for separate test and production environments. Access rights – either read only or with permission to make changes – can be defined separately for each project.</p>
<p><strong>Organization:</strong> An organization represents a company, a department or any other group of people who work together on cloud resources. Organizations have a separate framework agreement and a separate balance, which makes them suitable for the contractual demarcation of cloud resources from other accounts and/or organizations.</p>
<p><strong>Organization member:</strong> Organization members can work on projects of the organization. Irrespective of their rights for individual projects, organization members can – in the same way as employees in a company – see which projects, teams and other members exist within the organization.</p>
<p><strong>External collaborator:</strong> Accounts that need to be able to work on one or several projects without having any insight into the organization beyond this can be invited as external collaborators. They will then only see these projects and not the other projects, teams and members of the organization.</p>
<p><strong>Team:</strong> To simplify rights management, accounts (i.e. organization members and external collaborators) can be grouped into teams. Access rights for the team automatically apply to all team members.</p>
<p><strong>Team leader:</strong> Team leaders can independently add and remove organization members and external collaborators as team members of the relevant team. Team leaders can also make other organization members (but not external collaborators) team leaders within the team and remove this role again.</p>
<p><strong>Superuser:</strong> A superuser can manage all aspects of the organization. This includes managing projects, teams and rights as well as inviting and removing organization members and external collaborators.</p>
<p><strong>Partner organization:</strong> The superusers of two organizations can agree on a partnership and share projects with the other organization. The superusers in the two partner organizations manage the access rights of their own organization members and external collaborators for the joint projects. As is the case for external collaborators, only the shared projects – and not the other projects, teams and members – of the other organization are visible.</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Cross-Organizational Collaboration
]]></title>
          <link>https://www.cloudscale.ch/en/news/2022/01/27/cross-organizational-collaboration</link>
          <pubDate>Thu, 27 Jan 2022 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2022/01/27/cross-organizational-collaboration</guid>
          <description>
            <![CDATA[<p>Cloud projects are frequently based on teamwork, and project boundaries do not always correspond to the boundaries of a company or another organization. The new &quot;partner organizations&quot; feature allows you to replicate these kinds of constellations in our cloud control panel, too. You can grant partner organizations access to selected projects or add members of your own organization to your partners&#x27; projects just as you would assign responsibilities in collaborative projects in the real world.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-partner-organizations-en.png"/><h3>Projects with internal and external participants</h3>
<p>Several people often work together on cloud projects. However, sharing user accounts involves various risks and dangers. This is why many cloudscale.ch users work in what are known as &quot;<a href="https://www.cloudscale.ch/en/news/2021/05/27/new-control-panel-organizations-projects-teams">organizations</a>&quot; that represent, for example, a company or department. <strong>All organization members have personal login credentials</strong> and can enable two-factor authentication (2FA) for their user account. The organization&#x27;s superusers determine which organization members and external collaborators are authorized to access which of the organization&#x27;s projects.</p>
<p>A second company or other organization is often also involved when external participants are associated with a project. This is where the new &quot;partner organizations&quot; feature comes into its own: you can grant a partner organization the right to include the appropriate employees for the project independently, without you having to know details about who exactly has the required know-how or desired availability. Every time you share a project, you decide <strong>whether the partner organization should only have read access or whether it is given change permissions</strong>, too.</p>
<h3>Partner organizations: delegate administrative tasks</h3>
<p><strong>Partner organization management</strong> can be found in the &quot;Organizations&quot; area of our cloud control panel. This is where you can initiate new partnerships as well as check and, if necessary, terminate existing ones. A partnership documents that two organizations know each other; it works both ways and, on its own, does not imply any kind of access rights to the other partner&#x27;s projects or cloud resources.</p>
<p>As easily as you can add organization members, teams and external collaborators, you can now <strong>add partner organizations to your projects</strong>. This means that you allow the superusers of the partner organization in question to grant access rights to these projects to their own organization members and external collaborators. These access rights are limited to the access level you grant to the partner for the project in question. External superusers cannot, however, change (or even see) the rights granted to your organization members and external collaborators; you retain sole control of this.</p>
<p>Conversely, if a partner organization has shared one of its projects with your organization, you can <strong>add your own organization members, teams and external collaborators to this project</strong> as usual. NB: further delegation is not possible. Only the organization that owns the project can add partner organizations to it.</p>
<img src="https://static.cloudscale.ch/img/news-partner-organizations-en-09fdb66393e9.png" alt="Partner Organizations"/>
<h3>Well thought-out details for every scenario</h3>
<p>As is the case for <a href="https://www.cloudscale.ch/en/news/2021/09/23/collaboration-with-external-accounts">external collaborators</a>, the internal details of your organization remain concealed from individuals in your partner organizations. The latter can <strong>only see those projects you share with them</strong>, but not other internal structures such as other projects, teams, or any further partner organizations. It is only in the case of changes that the acting account is visible in the corresponding project log, which ensures adherence to possible compliance specifications. It goes without saying that you can remove partner organizations from your projects at any time; all access rights associated with them will expire with immediate effect. Once you have withdrawn access rights, also check any API tokens in the affected projects as some of them may no longer be required.</p>
<p>The &quot;partner organizations&quot; feature is typically used when you need the active contribution of partners (such as customers or service providers) in your projects or when you would like to increase transparency and grant read access to your end customers. However, <strong>this is not only for corporate settings</strong>, as you can also use organizations and partnerships for associations, with your friends, and anywhere else that you work on a cloud project with different groups.</p>
<p>By the way, in addition to email/password and GitHub, we also support your own &quot;OpenID Connect&quot;-compatible <a href="https://www.cloudscale.ch/en/news/2021/06/18/single-sign-on-using-own-identity-provider">identity provider</a>, such as Keycloak or ZITADEL, for logging into our cloud control panel. The choice is completely yours; it is <strong>not necessary for all users to employ the same login mechanism</strong> for successful collaboration with partner organizations.</p>
<br/>
<p>Partner organizations complete our feature set relating to collaboration. In addition to your own organization members and individual external collaborators, you can now <strong>also replicate partnerships at an institutional level in our cloud control panel</strong>. With every partner determining which of their organization members are the best match in the current phase, your project benefits to the greatest possible degree from the expertise and availability of everyone involved. And at the same time, you also minimize your administrative overheads.</p>
<p>Your partner in every scenario,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Imports and Data Sources in Terraform
]]></title>
          <link>https://www.cloudscale.ch/en/news/2021/12/23/terraform-imports-and-data-sources</link>
          <pubDate>Thu, 23 Dec 2021 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2021/12/23/terraform-imports-and-data-sources</guid>
          <description>
            <![CDATA[<p>&quot;Infrastructure as code&quot; with Terraform normally means that you first define the infrastructure you require in config files and that Terraform then uses your specifications to create the infrastructure in practice. However, thanks to &quot;import&quot; functionality, you can also include existing resources in your Terraform setup for further management. In addition, &quot;data sources&quot; can be utilized to read and use further values that relate, for example, to resources in separate Terraform setups.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Many projects &quot;evolve&quot; rather than following a plan</h3>
<p>IT projects frequently start out small. This approach is often intentional in order to validate certain hypotheses quickly using a process that is as lean as possible and to continuously adapt the project in short iterations. And sometimes supposedly short-term experiments establish themselves and over time grow into an important product or tool. At cloudscale.ch, we make it particularly easy for you: within the shortest amount of time and with just a few clicks of your mouse, your first cloud servers are ready and can be changed <strong>just as quickly and easily</strong> when required.</p>
<p>It goes without saying that we also support the creation and management of your cloud infrastructure with Terraform. However, the things that provide consistency and efficiency in an established project would often represent nothing more than unnecessary overheads if applied in the initial phase. If the project then grows, you may at some point realize that a tool like Terraform would actually have been worth it. Given that in many cases you will not want to start from scratch again, the <strong>import function for existing resources</strong> means that, fortunately, you do not have to do so.</p>
<h3>Import existing resources into Terraform</h3>
<p><strong>The starting point for Terraform is your configuration</strong>, which is where you define all the resources, such as the cloud servers that you need for your project along with their properties. Once these resources have been created, Terraform keeps a &quot;state&quot; in order to remember which resources in the real world belong to which definitions. If your specifications or reality change, Terraform can detect this and recreate the desired state during the next <code>terraform apply</code>.</p>
<p>To include an existing resource in your Terraform setup at cloudscale.ch, simply define it as usual in your Terraform config. Instead of then having it newly created with <code>terraform apply</code>, just use <code>terraform import</code> to <strong>link your definition to the existing resource</strong>, which you specify by means of its unique ID. You will find the details required for each resource in the <a href="https://registry.terraform.io/providers/cloudscale-ch/cloudscale/latest/docs">documentation for our Terraform provider</a>. By then running <code>terraform plan</code>, you can check whether Terraform still detects differences between the definition and the actual resource. Keep adapting the definition, if necessary, until Terraform no longer detects a need for change.</p>
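<p>As a minimal sketch of these steps (the resource name, slugs and UUID placeholder are illustrative, not taken from an actual setup), importing an existing server could look like this:</p>
<pre><code class="language-hcl"># 1. Define the existing server in your Terraform config as usual.
resource &quot;cloudscale_server&quot; &quot;web&quot; {
  name        = &quot;web-1&quot;
  flavor_slug = &quot;flex-8-4&quot;
  image_slug  = &quot;ubuntu-20.04&quot;
  zone_slug   = &quot;lpg1&quot;
}

# 2. Link the definition to the real resource by its UUID
#    (instead of creating it with &quot;terraform apply&quot;):
#    terraform import cloudscale_server.web &lt;server-uuid&gt;

# 3. Check for remaining differences and adapt the definition:
#    terraform plan
</code></pre>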
<p>SSH keys are a special case: they are provided to a newly created cloud server by means of a metadata server and config drive and incorporated by <a href="https://www.cloudscale.ch/en/news/2020/06/23/initialize-servers-with-cloud-init">cloud-init</a> during the first boot. As the handling of SSH keys is limited to this initial configuration, Terraform cannot read them from our API and store them in its state at a later point in time. When importing servers, this means that you must <strong>not specify SSH keys</strong>; otherwise Terraform would detect a need for change every time, which would result in the server being deleted and recreated.</p>
<h3>Additional values with data sources</h3>
<p>At cloudscale.ch, you can import almost all your existing resources into your Terraform setup as described above, with the only current exception being &quot;<a href="https://www.cloudscale.ch/en/news/2020/12/09/flexible-and-efficient-thanks-to-custom-images">custom images</a>&quot;. You may, however, not want an import, for example if the resources in question are already part of another Terraform setup. In order to still be able to <strong>access attributes of such resources</strong> and to use them e.g. in the definition of further resources, you can use data sources.</p>
<p>Data sources make <strong>all the important properties of your cloud resources</strong> available in Terraform. This means that servers managed by Terraform can, for example, adopt the flavor of a server outside your Terraform setup or be defined to have an interface in a manually managed private network.</p>
<p>To access a data source, various &quot;arguments&quot; are available; use one or several of them to filter for the resource you are looking for. The values that can be used as an argument and other &quot;attributes&quot; of the resource are then <strong>available to you for further use in your Terraform setup</strong>. You will also find all the important information relating to the use of data sources in our documentation.</p>
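<p>A brief sketch of how this might look, assuming a manually created private network named &quot;backend&quot; (all names and slugs are illustrative):</p>
<pre><code class="language-hcl"># Look up the existing, manually managed network by name.
data &quot;cloudscale_network&quot; &quot;backend&quot; {
  name = &quot;backend&quot;
}

# Use its attributes when defining a Terraform-managed server.
resource &quot;cloudscale_server&quot; &quot;app&quot; {
  name        = &quot;app-1&quot;
  flavor_slug = &quot;flex-4-2&quot;
  image_slug  = &quot;ubuntu-20.04&quot;
  interfaces {
    type         = &quot;private&quot;
    network_uuid = data.cloudscale_network.backend.id
  }
}
</code></pre>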
<br/>
<p>Terraform differs from conventional configuration management systems in that it allows an automated setup of your infrastructure as a &quot;greenfield project&quot; without you having to prepare servers or networks. In practice, however, many projects do not develop and evolve in the way that hindsight deems best. At cloudscale.ch, you can now, with the help of imports and data sources, <strong>integrate &quot;legacy&quot; resources of this kind into your Terraform setup as if they had always been there</strong>. Avoid having to start from scratch again when &quot;tidying up&quot; evolved setups and instead use your energy to advance your project.</p>
<p>Create order without the waste,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Raw Block Volumes via CSI Driver
]]></title>
          <link>https://www.cloudscale.ch/en/news/2021/11/30/raw-block-volumes-via-csi-driver</link>
          <pubDate>Tue, 30 Nov 2021 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2021/11/30/raw-block-volumes-via-csi-driver</guid>
          <description>
            <![CDATA[<p>If applications are to maintain persistent data in a Kubernetes setup, CSI is what you are looking for. The &quot;container storage interface&quot; makes it possible to automatically provide persistent volumes on the correct node so that they can be mounted into the desired pod. Some applications, however, are unable to store their data in the form of files in a mounted file system, but require direct disk access. Our cloudscale.ch CSI driver now supports so-called &quot;raw block volumes&quot; for such use cases.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Principle and advantages of CSI</h3>
<p>Container setups offer numerous advantages. Among other things, container orchestrators such as Kubernetes can ensure that the required containers are started at any time. If a node leaves the cluster (whether for maintenance work or for other reasons), Kubernetes can start the containers in question elsewhere in the cluster, thus re-establishing the target state. In such a case, CSI allows <strong>the persistent volumes (PVs) to be immediately available again</strong> as well: an appropriate CSI driver not only initially prepares the required volumes on the underlying cloud infrastructure, but afterwards also connects them to the correct node so that the container can access the volume and its data.</p>
<p>To allow a container to use the storage space of a PV in the first place, <a href="https://github.com/cloudscale-ch/csi-cloudscale">our CSI driver</a> normally formats the volume with ext4 at the time of creation and then mounts it in the file system of the node from where it is made available to the container. Some workloads, however, are unable to store their data in a file system, but <strong>only on entire block devices</strong>. Rook is an example of this: In a Kubernetes setup, the Cloud Native Computing Foundation project can install, among other things, a storage cluster with Ceph and provide a storage service for other applications. When operating on physical hardware, Ceph would write directly to the hard disks for actual data storage and use its own optimized &quot;BlueStore&quot; format rather than partitions and file systems. In a Kubernetes setup, the physical disks are replaced by PVs that need to be available as raw block volumes.</p>
<h3>Using CSI with raw block volumes</h3>
<p>Raw block volumes are officially supported in Kubernetes from version 1.18 onwards. We introduced this option with version 3.1.0 of our CSI driver. To create a raw block volume, simply add <code>volumeMode: Block</code> in the persistent volume claim (PVC) (otherwise the default <code>Filesystem</code> will be used), as shown in the following <a href="https://github.com/cloudscale-ch/csi-cloudscale/tree/master/examples/kubernetes/raw-block-volume">example</a>:</p>
<pre><code class="language-yaml">apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pod-pvc-raw-block
spec:
  volumeMode: Block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: cloudscale-volume-ssd
</code></pre>
<p>In the pod definition please indicate the desired device path by using <code>volumeDevices</code> (instead of the mount point with <code>volumeMounts</code>):</p>
<pre><code class="language-yaml">apiVersion: v1
kind: Pod
metadata:
  name: my-csi-app-raw-block
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeDevices:
        - devicePath: /dev/xvda
          name: my-cloudscale-volume
      command: [ &quot;sleep&quot;, &quot;1000000&quot; ]
  volumes:
    - name: my-cloudscale-volume
      persistentVolumeClaim:
        claimName: csi-pod-pvc-raw-block
</code></pre>
<p>The volume, which is thus created on our cloud infrastructure and is also visible in our cloud control panel or via API, is <strong>passed on &quot;one to one&quot; to the relevant pod</strong> and is accessible from the first to the last byte. It goes without saying that it is also possible to encrypt volumes of this kind within the pod, e.g. with LUKS.</p>
<p>As is the case for all volumes at cloudscale.ch, raw block volumes can either be stored on NVMe SSDs or on our bulk storage. You can also, as usual, <strong>scale up such volumes during live operation</strong>. Please be aware, however, that only the size of the block device is increased via the CSI driver and adjustments of partitions and file systems (if any) need to be performed in the relevant pod.</p>
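<p>Scaling up is triggered by simply increasing the requested size in the PVC; a minimal sketch, reusing the claim from the example above (this assumes that the storage class in use allows volume expansion):</p>
<pre><code class="language-yaml">apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pod-pvc-raw-block
spec:
  volumeMode: Block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi   # increased from 5Gi; any resizing inside the pod is up to you
  storageClassName: cloudscale-volume-ssd
</code></pre>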
<h3>Different approaches for different use cases</h3>
<p>Support for raw block volumes means that a Kubernetes/CSI setup is now also suitable for use cases that are not viable with purely file-based persistent storage. If you are interested in Ceph and would like to run trials, the above-mentioned <a href="https://rook.io">Rook</a> is an example of this. There are, of course, also reasons for using Rook with Ceph productively, e.g. if you require a CephFS backend for another application. Make sure you <strong>avoid potential single points of failure</strong> in this case. Where several pods (e.g. &quot;mon&quot; or &quot;osd&quot; in Ceph) guarantee redundancy, place these on Kubernetes nodes that are in <a href="https://www.cloudscale.ch/en/news/2016/10/21/increasing-availability-using-anti-affinity">anti-affinity</a> to one another, as this means that an isolated hardware issue on one of the physical compute servers does not affect several of these pods at the same time.</p>
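<p>At the pod level, such spreading can be expressed with pod anti-affinity; a minimal sketch (the label is illustrative and depends on your deployment), to be combined with placing the nodes themselves in an anti-affinity group on the underlying cloud infrastructure:</p>
<pre><code class="language-yaml"># Pod spec fragment: never schedule two such pods on the same node.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: rook-ceph-mon
      topologyKey: kubernetes.io/hostname
</code></pre>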
<p>Ceph is often used with a replication factor of 3, which is also the case for our own Ceph clusters that the NVMe-SSD and bulk volumes as well as our object storage are based on. If you are running your own Ceph setup on top of this infrastructure, this typically means that every data fragment filed in there is physically stored nine times. Evaluate for your individual use case <strong>how much (additional) redundancy you require</strong> and what level of overhead you are willing to accept. It might be that, for example, an NFS or database server of your own or, as an alternative, our object storage is also suitable as central data storage for your applications.</p>
<br/>
<p>Cloud infrastructure and container setups are the method of choice for an increasing number of use cases as they offer <strong>maximum flexibility and scalability</strong>. Whether for a short test or for a productive HA setup, you can now make the most of these advantages in an even more versatile manner thanks to support for raw block volumes in our CSI driver. If you have any feedback or suggestions for improvement, please contact us <a href="https://github.com/cloudscale-ch/csi-cloudscale">via GitHub</a> or directly.</p>
<p>Volumes to your taste, whether ready-formatted or raw!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Our Monitoring and Alerting Journey
]]></title>
          <link>https://www.cloudscale.ch/en/news/2021/10/27/our-monitoring-and-alerting-journey</link>
          <pubDate>Wed, 27 Oct 2021 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2021/10/27/our-monitoring-and-alerting-journey</guid>
          <description>
            <![CDATA[<p>The cloudscale.ch infrastructure not only forms the basis for services we offer, but also provides the backbone for everything that our customers build on it. This is why our monitoring continuously checks that all our components are &quot;up&quot; and interacting as they should and raises an alert if an intervention is required. Over time, we have increasingly fine-tuned and optimized our monitoring, which means that even problems within the monitoring setup do not remain undetected and that, at the same time, unnecessary alerts are reduced to a minimum.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Our tried-and-tested monitoring basis</h3>
<p>Thanks to redundancy at all levels, our customers are unaffected by most isolated problems. Irrespective of whether a cable, a hard disk or a load balancer fails, <strong>the overall system continues without interruption</strong>. It goes without saying that we need to follow up on such cases and, for example, reinstate redundancy in order to ensure that our cloud remains as reliable as it is. Beyond detecting and reporting defective components, our monitoring also covers performance data and the correct functioning of complete end-to-end processes, which enables us to identify any required action in good time.</p>
<p>From the very beginning, we have relied on Zabbix as the linchpin of our monitoring and alerting at cloudscale.ch. Thanks to its versatility and adaptability, we have been able to use this tool to cover most monitoring requirements, which have increased in number and complexity over time. To complement our internal Zabbix, we also added external monitoring early on. On the one hand, this has allowed us to replicate additional use cases and to <strong>better include the user perspective</strong>. On the other hand, we are able to cover cases where our own monitoring is also affected by a problem and/or the generated alerts cannot be sent out for some reason.</p>
<h3>Wide range of optimizations</h3>
<p>In addition to our internal Zabbix, two external monitoring services now check the most varied aspects of our cloud that are &quot;visible&quot; from the outside, from object storage to API calls. All of this converges on the Opsgenie platform, which makes it possible to store the specified on-call schedules and to pass on alerts to the correct person. If the responsible engineer is ever unable to respond to a report immediately, it is automatically escalated to defined further persons. It goes without saying that the complexity increases with the number of services involved, which is why we use regular automated dummy alerts to test whether alert processing is working correctly and whether <strong>the setup is operational all the way to the mobile of the designated on-call engineer</strong>.</p>
<p>Escalating an identified problem correctly is only one part of the process. We also put a great deal of effort into optimizing the database from which the anomalies are extracted. Starting with an already broad set of values that monitoring systems typically read from a running target system, we added further checks that are in part even more hardware-oriented. This means that our monitoring can, for example, recognize if an NVMe disk has not negotiated the usual data rate on the PCIe bus. At the other end of the spectrum, abstracting from the hardware, we have an increasing number of checks that <strong>monitor the state of whole clusters</strong> without being dependent on a specific host to query. Thanks to a solid baseline of measurements, we can then determine threshold values in such a way that allows problems to be reliably identified without causing a lot of noise.</p>
<h3>Why we can sleep at night</h3>
<p>Although on-call engineers at cloudscale.ch tend to sleep through the night, the redundancy mentioned above and carefully considered threshold values only provide a partial explanation for this. Wherever an analysis during office hours is adequate, we have allocated a low severity level to the checks and set Opsgenie in such a way that nobody is woken up. Consistent follow-up processing is important here: even low-severity events and anomalies that occur during the day are investigated in a timely manner before they develop into a problem that necessitates getting up at night. If something ever has a greater impact, the same principle applies, and we almost always find a way to <strong>identify similar cases earlier or to avoid them completely</strong> in future.</p>
<p>On top of all this, there is one further, less technical aspect. Thanks to separate monitoring for our lab, new engineers can take the time they need to settle in without pressure, which enables them to quickly become aware of the things they need to pay attention to. And when resolving problems which cannot be completely avoided despite constant improvements, <strong>directly linked runbooks provide the required backing</strong>. If on-call engineers are woken at night, which tends to be the exception rather than the rule, it does not take long before they can go back to bed with confidence.</p>
<br/>
<p>The reliable operation of our cloud infrastructure is key for many of our customers. It is based on the fact that we <strong>identify anomalies as early as possible</strong>, which allows us to avoid most problems before they have an impact on our customers. Our monitoring setup, which has grown and been consistently improved over the years, serves as our eyes and ears right into the furthest corners of our systems. At the same time, it contains the intelligence required to support and reduce the pressure on our engineers in their work.</p>
<p>You can sleep well (too)!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Collaboration with External Accounts
]]></title>
          <link>https://www.cloudscale.ch/en/news/2021/09/23/collaboration-with-external-accounts</link>
          <pubDate>Thu, 23 Sep 2021 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2021/09/23/collaboration-with-external-accounts</guid>
          <description>
            <![CDATA[<p>With immediate effect, it is possible to collaborate on a project with accounts outside your organization in our cloud control panel. The new &quot;external collaborators&quot; feature covers a range of scenarios and allows you, for example, to manage cloud resources together with customers, suppliers or partners, and to grant different access rights for this purpose.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Limit what can be seen</h3>
<p>Since the complete <a href="https://www.cloudscale.ch/en/news/2021/05/27/new-control-panel-organizations-projects-teams">overhaul of our cloud control panel</a>, cloud resources can be used for several different projects. Access rights for these projects can be granted either for whole teams or for individual organization members. However, as is usually the case in companies, there is a certain degree of transparency for employees, which means that <strong>all organization members can see which other members, teams and projects are in their organization,</strong> even if they themselves are not part of the team or project in question.</p>
<p>This transparency may not be desired when cooperating with external persons. You can now, therefore, <strong>invite external persons to join your organization as &quot;external collaborators&quot;, which ensures that they will have no insight into the internal structure of your organization.</strong> Managing your projects is as easy as it was: you can add external collaborators to your teams or grant them direct read or change access to the desired projects. External collaborators can then only see the projects they are actually authorized to see and have no access to details that refer to other activities or relationships. Irrespective of whether you would like to work with customers or service providers, the external collaborators feature will enable you to achieve the correct degree of transparency in every scenario.</p>
<h3>Use tried-and-tested concepts</h3>
<p>For maximum security we recommend that you work with personal accounts and do not allow several persons to share login credentials. At the same time, this will ensure that you can use the project logs to trace who performed a certain action if required. You can now also invite external partners to create a personal account at cloudscale.ch and send them <strong>an invite link that will allow them to join your organization as external collaborators.</strong> If any of your external partners uses an &quot;OpenID Connect&quot;-compatible identity provider and would like to <a href="https://www.cloudscale.ch/en/news/2021/06/18/single-sign-on-using-own-identity-provider">benefit from single sign-on</a> at cloudscale.ch as well, our support team will be happy to help.</p>
<p><strong>Granting read and change access to your projects and adding external collaborators to teams follows the same principles as for organization members.</strong> External collaborators do not, however, have access to projects where you have selected &quot;Grant access to all members of the organization&quot;. Instead, in order to grant selected external collaborators access to a project, you need to use the option of teams or individual access.</p>
<p>Incidentally, it goes without saying that just as for members of your organization, personal accounts are free of charge for external collaborators. They are, however, free to use cloud services separately from your organization at their own cost.</p>
<h3>A few tips</h3>
<p>Many companies have internal security guidelines, which often explicitly require the use of two-factor authentication (2FA). In our control panel, you will therefore <strong>not only see the 2FA status for members, but also for the external collaborators in your organization.</strong> Any changes to this are also documented in the organization log. Unlike for members of the organization, however, other account-related actions of an external collaborator (e.g. logging in/out, changing the account password) are not included in the organization log.</p>
<p>The details of your organization are generally not visible to external collaborators. <strong>As soon as you grant access to a project, this also grants access to the corresponding project log,</strong> i.e. information about changes to the project and their initiators. This ensures that external collaborators can actually perform their assigned tasks (e.g. project monitoring or reconstructing a technical problem). External collaborators also see the stored email addresses of your organization (main email address and billing email address, if specified) and can select these as the sender&#x27;s address for support tickets.</p>
<br/>
<p>As a concept, external collaborators are predominantly in line with organization members, which means they can be effortlessly integrated into your existing access scheme. The small difference in the detail, however, opens up completely new areas of application: you can now <strong>also involve your customers or external specialists in the management of your cloud resources</strong> without them finding out about each other or about your internal matters. This means that committed cooperation on your cloud projects can be structured more efficiently than ever.</p>
<p>Better together,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Bulk and Object Storage – Upgrade to "All Flash"
]]></title>
          <link>https://www.cloudscale.ch/en/news/2021/08/30/bulk-object-storage-all-flash</link>
          <pubDate>Mon, 30 Aug 2021 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2021/08/30/bulk-object-storage-all-flash</guid>
          <description>
            <![CDATA[<p>At cloudscale.ch, we place great value on performance – even when it is not the main focus. This is why, during the most recent expansion of our bulk and object storage, we switched to &quot;all flash&quot; in this area, too. Consequently, our customers automatically benefit from improved access times to their bulk volumes and objects at no additional cost.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>What we changed</h3>
<p>From the outset, cloudscale.ch has relied on separate Ceph-based storage clusters. These store data on dedicated storage servers independently of the compute hardware, distributing the data across multiple storage servers and using triple replicated storage. During the expansion of our bulk and object storage, we also decided to replace the existing servers and, going forward, to <strong>exclusively use fast SSDs for the disks</strong>.</p>
<p>The new storage servers not only have faster disks, but also significantly higher-performance AMD CPUs and more RAM compared to the previous generation. This means that Ceph, which deals with data distribution and replication in the cluster, can work to its full advantage. Our system engineers also significantly increased the object storage caches. As we have continued to use particularly high-performance NVMe-SSDs for these caches, this provides an <strong>additional benefit for write operations and frequently read objects</strong>.</p>
<h3>The advantages</h3>
<p>One of the greatest disadvantages of conventional hard drives is their mechanical way of functioning. To write data to a certain location on the drive or to read it from there, the read/write head has to be physically moved to the correct position and the magnetic disk turned until the desired location is underneath the read/write head. For many hard drives, this process takes an average of about 8.5 ms, but when several operations need to access the disk simultaneously, the waiting time for a process further back in the queue can be considerably longer. The switch to &quot;all-flash storage&quot; for our bulk and object storage means that <strong>mechanical latency is no longer an issue</strong> and minimizes the mutual impact on performance when several customers require access at the same time.</p>
<p>A typical advantage of Ceph clusters that store data across multiple storage servers and disks can be seen in the case of simultaneous data access: there is no need for queues and consecutive processing as parallel execution is possible. However, this was not the case previously when two operations had to access the same physical hard drive. Due to the mechanics, only one read or write operation could take place at any one time. Thanks to the SSDs, <strong>parallel access is now possible even at the disk level</strong> and our customers automatically benefit from the higher overall performance of our bulk and object storage clusters.</p>
<h3>A few tips</h3>
<p>You do not have to do anything to benefit from our all-flash clusters for bulk and object storage. The switch took place in May at the Rümlang (RMA) site and in mid-August at the Lupfig (LPG) site, and <strong>included all existing volumes and buckets</strong>. Please note that the rate limit of 500 IOPS remains in place for bulk volumes. This is to prevent excessive use of our clusters by individual customers with particularly disk-intensive applications to the detriment of other customers. For database applications, in particular, we still recommend our NVMe-SSD volumes, which have no limit of this kind and are specifically designed for the highest level of performance.</p>
<p>While the new setup means that the typical access time of rotating hard drives is no longer an issue, there is still a certain degree of latency: as our storage clusters are operated separately from the compute servers on different hardware, all disk access takes place via the network. For this reason, we recommend that you <strong>enable any caching options that the software you use offers</strong> and that you select a flavor with sufficient RAM, which Linux can automatically use as a disk cache.</p>
<br/>
<p>Even if you only rarely require certain data, access to them should still be as fast as possible. By switching our bulk and object storages to &quot;all flash&quot;, we have managed to combine <strong>unchanged reasonable costs with considerably higher performance thanks to SSDs</strong> – even, and in particular, when multiple access is required simultaneously. See for yourself!</p>
<p>Farewell, mechanical limitations – welcome, all flash!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[How to Make Optimal Use of Our Infrastructure
]]></title>
          <link>https://www.cloudscale.ch/en/news/2021/07/28/optimal-use-of-our-infrastructure</link>
          <pubDate>Wed, 28 Jul 2021 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2021/07/28/optimal-use-of-our-infrastructure</guid>
          <description>
<![CDATA[<p>In many ways, an IaaS cloud offering is a drop-in replacement for a physical server, so if you already have experience of administering your own device or a dedicated server, you can apply the same skills to a cloud server, too. However, it is probably worth taking a closer look at the additional features and some of the specific characteristics of the cloud. In this article we have provided an overview of tips to ensure you get the most out of your setup at cloudscale.ch.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Performance: use caches</h3>
<p>In contrast to conventional servers, here at cloudscale.ch we use separate Ceph-based storage clusters. What looks like a single local hard disk to a virtual server is in fact storage capacity that is distributed across numerous disks and servers and that <strong>keeps data in a multiply redundant manner</strong>. One of the many advantages is that, in the event of a hardware defect on a physical compute host, the affected virtual servers can be restarted on a replacement machine within an extremely short period of time. <strong>All data are immediately available again</strong> without requiring an engineer to move the hard disks in question from one server at the data center to another. At the same time, however, as all disk access in this setup takes place via the network, latency is significantly higher than the level that can be achieved with local disks, despite <a href="https://www.cloudscale.ch/en/news/2020/06/04/cumulus-linux-switch-paid-off">dedicated connections of up to 100 Gbit/s</a>. This is why you should use the caching features if the software you use (e.g. a database server) offers this option.</p>
<p><strong>Our tip:</strong> It may be worth selecting a slightly larger flavor, given that Linux automatically maintains a disk cache using RAM that is not being otherwise used.</p>
<h3>Performance: parallelize workloads</h3>
<p>Once data start flowing to or from our storage cluster, a further advantage of this setup becomes discernible: as multiple storage servers and an even greater number of disks <strong>work together and can be addressed in parallel</strong>, the achievable data transfer rate is considerably higher than what would be possible with an individual local SSD. This means that the more your application&#x27;s data access can be parallelized, the more you can <strong>benefit from the usable overall performance</strong> of our storage cluster.</p>
<p><strong>Our tip:</strong> Parallelize your workloads wherever possible as, compared to purely sequential processing, this will significantly increase storage performance.</p>
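<p>As a minimal illustration of this tip, the shell sketch below runs several independent jobs concurrently instead of one after the other; the temporary files and the <code>gzip</code> workload are placeholders for your own tasks:</p>

```shell
#!/bin/sh
# Create a few example files to work on (placeholder data).
mkdir -p /tmp/parallel-demo
for i in 1 2 3 4; do
    dd if=/dev/zero of=/tmp/parallel-demo/data-$i bs=1024 count=64 2>/dev/null
done

# Launch one compression job per file in the background ...
for f in /tmp/parallel-demo/data-[1-4]; do
    gzip -k "$f" &
done

# ... and wait until all of them have finished.
wait
```

<p>Compared to compressing the files strictly sequentially, the four jobs can now issue their read and write requests to the storage cluster in parallel.</p>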
<h3>Performance: select the flavor based on the use case</h3>
<p>In addition to RAM requirements (including a reasonable reserve as a disk cache), processing power requirements play a key role when selecting the appropriate flavor. Beyond the selectable number of vCPUs/cores, <strong>the available &quot;Flex&quot; and &quot;Plus&quot; schemes have been optimized for different use cases</strong>. With Flex flavors, you share processing power with other customers while adhering to moderate average utilization in accordance with fair-use regulations. In addition to the low cost, you also benefit from adequate reserve capacity, which ensures that you are always prepared for peak demand periods. With the <a href="https://www.cloudscale.ch/en/news/2019/11/19/even-more-power-thanks-to-plus-flavor">Plus flavors</a>, on the other hand, the physical CPU cores you have booked are always exclusively available to you. <strong>You can and may use this capacity at all times</strong>; with Plus, there are no bottlenecks caused by overbooking of available processing power.</p>
<p><strong>Our tip:</strong> Your virtual servers can be scaled at any time, including from Flex to Plus and vice versa. Take advantage of this option, for example to perform initial tests with various flavors or whenever your requirements change.</p>
<h3>Costs: appropriate storage for any application</h3>
<p>The first 10 GB of NVMe-SSD root volume storage are included with every virtual server. Containing the base image for the launch of the server, this amount of storage is often already sufficient for its live operation. In addition, <strong>further volumes can be added to and deleted from the server at any time</strong>. Select <a href="https://www.cloudscale.ch/en/news/2019/07/25/bluestore-encryption-and-nvme-only-storage#toc-item1">NVMe-SSD storage</a> here if you require maximum performance, e.g. for operating a database. The economical bulk volumes, which are limited to a maximum of 500 IOPS, on the other hand, are better suited to larger quantities of stored data that are used less intensively. <strong>If you are unsure, try out both options</strong>. While bulk volumes are definitely not appropriate for database use, they may be perfectly adequate for other scenarios, thus supporting cost-effective server operation.</p>
<p><strong>Our tip:</strong> As an alternative to NVMe-SSD and bulk volumes, which you will see as <code>sdX</code> devices in your server, our S3-compatible Object Storage is also available to you. The particularly interesting aspect here is that – in addition to a utilization-based charge for requests and outgoing data transfers – you do not pay for available storage capacity, but only for the capacity you actually use.</p>
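<p>To give a rough idea, the Object Storage can be used with any standard S3 client, for example the generic AWS CLI. The endpoint URL, bucket name and file below are examples only; use the endpoint of your chosen location and the credentials of your Objects User:</p>

```shell
# Example only: create a bucket and upload a file via the S3 API.
# Set AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY to your Objects User
# credentials beforehand; endpoint and bucket name are placeholders.
aws --endpoint-url=https://objects.rma.cloudscale.ch \
    s3 mb s3://my-example-bucket

aws --endpoint-url=https://objects.rma.cloudscale.ch \
    s3 cp backup.tar.gz s3://my-example-bucket/
```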
<h3>Costs: scale volumes as required</h3>
<p>You can increase the root volume as well as additional NVMe-SSD and bulk volumes at any time, <strong>even during live operation</strong>. Provided appropriate tooling and partitioning are in place in the server, the additional capacity can be carried through up to the file system level. For this reason you should calculate your volumes based on the <strong>capacity you actually require</strong>; there is no need to plan for future requirements in advance. Please note that reducing volumes is not supported. Therefore, if you only need additional capacity temporarily, we recommend that you create an <a href="https://www.cloudscale.ch/en/news/2020/11/23/more-volumes-more-flexible-container-setups">additional volume</a>, which you can delete again at any time in future.</p>
<p><strong>Our tip:</strong> If you do not want to use multiple additional volumes as separate mount points, you can combine them, e.g. using LVM, into a continuous area. This way, you still have the option of removing individual PVs from the volume group and deleting the respective volumes at a later time.</p>
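<p>The LVM approach from this tip could look roughly as follows; the device names are examples, the commands must be run as root, and the devices must not yet contain data you need:</p>

```shell
# Sketch: combine two additional volumes into one continuous area using LVM.
pvcreate /dev/sdb /dev/sdc               # initialize both volumes as physical volumes
vgcreate data_vg /dev/sdb /dev/sdc       # combine them into one volume group
lvcreate -l 100%FREE -n data_lv data_vg  # one logical volume spanning both
mkfs.ext4 /dev/data_vg/data_lv
mount /dev/data_vg/data_lv /mnt/data

# To remove one PV from the volume group again at a later time:
# pvmove /dev/sdc && vgreduce data_vg /dev/sdc && pvremove /dev/sdc
```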
<h3>Resilience: copies in various locations</h3>
<p>At cloudscale.ch, we assume that, in their own interest, our customers have <strong>backups of their data on a third-party infrastructure</strong>, with the simplest scenario, for example, being locally on their own work device. If you also have copies with backup character within our cloud infrastructure, we recommend that you always save these in a <strong>different cloud location</strong> to their source data. Our two <a href="https://www.cloudscale.ch/en/news/2019/11/06/geo-redundancy-with-two-cloud-locations">geographically separate cloud regions</a> in Rümlang (Canton Zurich / RMA) and Lupfig (Canton Aargau / LPG) mean that your data are also protected in the best possible way in the case of unlikely, but potentially serious events such as a fire or an earthquake. Incidentally, your data will always take the most direct route possible thanks to our dark fiber ring between the two regions.</p>
<p><strong>Our tip:</strong> As an alternative, there is S3-compatible Object Storage available to you in both cloud locations. If you use this option, make sure that you actually create the bucket in the <a href="https://www.cloudscale.ch/en/news/2020/01/17/object-storage-new-urls">other location</a>.</p>
<h3>Resilience: use anti-affinity</h3>
<p>Scaling &quot;up&quot; in the case of growing resource requirements, i.e. allocating more resources to an existing server, avoids complexity and thus possible sources of errors. It may, however, make sense to scale &quot;out&quot; instead and to use a (redundantly designed) load balancer to <strong>spread the load across several servers</strong>. You can use our <a href="https://www.cloudscale.ch/en/news/2016/10/21/increasing-availability-using-anti-affinity">anti-affinity feature</a> to avoid the issue of several of these servers being simultaneously affected by a potential isolated hardware problem. By placing up to four virtual servers in anti-affinity to each other during their creation, you can ensure that <strong>these servers actually run on separate physical hosts</strong>.</p>
<p><strong>Our tip:</strong> Always combine servers that perform similar tasks and can &quot;stand in&quot; for each other in the case of failure.</p>
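<p>In terms of the API, such a setup could be sketched as follows. The token, names, flavor, image and UUIDs are placeholders; please consult our API documentation for the authoritative endpoints and parameters:</p>

```shell
# Hedged sketch: create an anti-affinity server group, then reference it
# when creating the servers. All values shown are placeholders.
TOKEN="your-api-token"

# Create the server group ...
curl -s -H "Authorization: Bearer $TOKEN" \
     -F name=web-cluster -F type=anti-affinity \
     https://api.cloudscale.ch/v1/server-groups

# ... then pass its UUID when creating each of the servers.
curl -s -H "Authorization: Bearer $TOKEN" \
     -F name=web1 -F flavor=flex-8 -F image=ubuntu-20.04 \
     -F server_groups=<uuid-of-server-group> \
     https://api.cloudscale.ch/v1/servers
```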
<h3>Resilience: use of Floating IPs</h3>
<p>Today, almost everything takes place via domain names, and with a sufficiently low TTL (time to live) for your DNS entries, you can control which IP address and therefore which server your requests are directed to. This, however, still involves a certain delay, and the behavior of various clients is not always totally consistent. <strong>With Floating IPs, you can avoid having to change the IP address</strong>: simply move your <a href="https://www.cloudscale.ch/en/news/2017/04/20/high-availability-using-floating-ips">Floating IPs</a> between your servers, within one region or across regions, in order to direct traffic to the desired server within seconds. <strong>To achieve a failover setup, you can also automate this process via our API</strong>. Here, two servers – e.g. two load balancers – constantly monitor each other so that they can redirect live traffic almost seamlessly to themselves if their counterpart encounters a problem.</p>
<p><strong>Our tip:</strong> Even for simple setups, Floating IPs offer you decisive added value given that, unlike a server&#x27;s IP addresses, Floating IPs are kept when a server is deleted. This means that, from a user&#x27;s point of view, services can be resumed unchanged even if you completely replace the server in question in the background.</p>
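<p>A minimal sketch of moving a Floating IP via our API is shown below; the token, the Floating IP and the server UUID are placeholders, and the exact parameters can be found in our API documentation:</p>

```shell
# Hedged sketch: point a Floating IP at a different server via the API.
TOKEN="your-api-token"
FLOATING_IP="192.0.2.10"       # your Floating IP (placeholder)
TARGET="<uuid-of-new-server>"  # server that should receive the traffic

curl -s -X PATCH \
     -H "Authorization: Bearer $TOKEN" \
     -F next_hop="$TARGET" \
     https://api.cloudscale.ch/v1/floating-ips/$FLOATING_IP
```

<p>In a failover setup, each of the two load balancers would run such a call automatically as soon as it detects that its counterpart is no longer responding.</p>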
<h3>Security: protect backend servers</h3>
<p>In setups with multiple servers, e.g. with a separate web and database server, some of the systems often do not need to be directly accessible from the Internet. Rather, it is actually particularly important to prevent direct access in order to provide optimal protection for these backend systems. At cloudscale.ch, you can connect backend systems of this kind to the frontend servers <strong>via a separate, private network</strong>, which eliminates the need for a direct connection between the backend systems and the Internet. For more complex setups you can also create <a href="https://www.cloudscale.ch/en/news/2019/10/25/segmentation-with-multiple-private-networks">multiple separate private networks</a> or <strong>specify the settings to be distributed in the private network</strong> by <a href="https://www.cloudscale.ch/en/news/2020/04/03/mastering-the-private-network-with-managed-dhcp">the DHCP service</a>.</p>
<p><strong>Our tip:</strong> OPNsense and pfSense CE, two <a href="https://www.cloudscale.ch/en/news/2019/02/27/firewall-distribution-at-a-mouse-click">dedicated firewall distributions</a> available in our cloud control panel, allow you to use a web interface to conveniently manage a firewall server at the interface between public and private networks.</p>
<h3>Security: encryption</h3>
<p>This first part is not actually a tip in the true sense of the word. At cloudscale.ch, all data that our customers store on our storage clusters (i.e. all data on NVMe-SSD and bulk volumes and in our Object Storages) are automatically <a href="https://www.cloudscale.ch/en/news/2019/07/25/bluestore-encryption-and-nvme-only-storage#toc-item3">encrypted &quot;at rest&quot;</a>. This encryption serves as an <strong>additional layer of security</strong>, e.g. in the event that a defective disk can no longer be completely erased when it is decommissioned. It goes without saying that users with the appropriate skills can go one step further. Our Object Storage supports server-side encryption using SSE-C. You are also free to <strong>set up additional encryption</strong> that is solely under your control, e.g. with LUKS, for volumes or individual partitions within your virtual servers. However, please be aware that after an (unplanned) reboot, a manual intervention will normally be required. In addition, the installation and debugging of setups of this kind are not covered by our support.</p>
<p><strong>Our tip:</strong> Our CSI (container storage interface) driver <a href="https://www.cloudscale.ch/en/news/2019/03/15/persistent-volumes-in-kubernetes-with-csi#toc-item3">supports full disk encryption</a> as well. In Kubernetes setups, this can be used to encrypt even persistent volumes for containers with minimal effort using LUKS.</p>
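<p>For a volume within a server, a LUKS setup as mentioned above could be sketched like this; the device name is an example, the commands must be run as root, and they erase any existing data on the device:</p>

```shell
# Sketch: encrypt an additional volume with LUKS under your own control.
cryptsetup luksFormat /dev/sdb         # one-time initialization; sets the passphrase
cryptsetup open /dev/sdb secure_data   # unlock; asks for the passphrase
mkfs.ext4 /dev/mapper/secure_data
mount /dev/mapper/secure_data /mnt/secure

# After an (unplanned) reboot, the volume must be unlocked manually again
# with "cryptsetup open" before it can be mounted.
```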
<h3>Security: certification and compliance</h3>
<p>At cloudscale.ch, in addition to data encryption, you also benefit from the fact that we are <a href="https://www.cloudscale.ch/en/news/2019/05/24/certified-as-per-iso-27001-27017-and-27018">certified as per ISO 27001, 27017 and 27018</a>. The data centers we use are also certified according to <strong>ISO 27001 and other international standards</strong>. Moreover, cloudscale.ch is a hosting partner of the <a href="https://www.cloudscale.ch/en/news/2020/10/22/swiss-hosting-label-launch-partner">&quot;swiss hosting&quot; label</a>, which means that we offer you the certainty that all data are <strong>exclusively stored and processed in Switzerland</strong>. In this way we help you meet your own customers&#x27; compliance requirements as effectively as possible.</p>
<p><strong>Our tip:</strong> If you process data from EU persons in our cloud, we also offer you the option of concluding a DPA in accordance with EU GDPR. You will find this agreement in our cloud control panel and can conclude it there with just two clicks of your mouse.</p>
<br/>
<p>These tips do not claim to be exhaustive, and depending on your specific use case, other topics may also be particularly significant, such as efficient processes based on <a href="https://www.cloudscale.ch/en/news/2020/12/21/cloud-orchestration-with-ansible-collections">the</a> <a href="https://www.cloudscale.ch/en/news/2019/12/23/latest-features-with-terraform">DevOps</a> <a href="https://www.cloudscale.ch/en/news/2019/08/14/docker-machine-and-rancher">tools</a> <a href="https://www.cloudscale.ch/en/news/2021/02/25/manage-kubernetes-clusters-with-okd">we</a> <a href="https://www.cloudscale.ch/en/api/v1#introduction">support</a>. It goes without saying that we are also constantly improving our offer. We will be glad to answer any questions that arise in your day-to-day work directly, and look forward to any feedback that will help us shape our road map further.</p>
<p>Simply sophisticated,<br/>
Your cloudscale.ch team</p>
<p>This post was updated on 2021-08-10.</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Single Sign-On Using Your Own Identity Provider
]]></title>
          <link>https://www.cloudscale.ch/en/news/2021/06/18/single-sign-on-using-own-identity-provider</link>
          <pubDate>Fri, 18 Jun 2021 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2021/06/18/single-sign-on-using-own-identity-provider</guid>
          <description>
            <![CDATA[<p>Security is key when handling data and systems. Many companies have established detailed compliance specifications for this purpose. At the same time, security measures – in particular the increasing number of passwords – are often seen as cumbersome by employees. Linking our cloud control panel to your own identity provider gives you a double advantage: your employees will benefit from increased convenience in their day-to-day work with our cloud thanks to single sign-on, while you can be sure that the security standards you have defined also apply when accessing our control panel.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>More control in day-to-day cloud business</h3>
<p>With the recently introduced &quot;organizations&quot;, companies and other groups can manage cooperation around their cloud resources. As a superuser, you can <strong>invite any cloudscale.ch accounts into your organization</strong> and grant them read-only or full access on a project-by-project basis. People who already have an account at cloudscale.ch – e.g. for private use – can continue to use it and will then additionally see the resources of your organization.</p>
<p>It may be that as a company, you prefer your employees to use a separate account associated with their business email address instead. In this case, you can additionally choose to link your own &quot;OpenID Connect&quot;-compatible identity provider (&quot;IDP&quot;), such as <a href="https://www.keycloak.org">Keycloak</a> or <a href="https://zitadel.ch">ZITADEL</a>, to our cloud control panel. <strong>During a signup or login attempt with an address from your email domain, the user is then redirected to your IDP</strong>. Once authenticated in accordance with the specifications of your IDP, they are returned to our control panel, where they are also logged in.</p>
<h3>Tips on using your IDP at cloudscale.ch</h3>
<ul>
<li>Two-factor authentication (&quot;2FA&quot;) is an important security feature. In our control panel, you can see for each &quot;member&quot; of your organization whether 2FA is enabled for the account in question or not. By logging your members into our control panel via your own IDP, <strong>you can also technically enforce 2FA</strong>, if required, which will save you from having to perform periodic checks and give you the certainty that this additional layer of security is in place at all times.</li>
<li>To simplify onboarding, an email pattern can be stored for every organization on request. Newly created accounts matching this pattern will then be <strong>automatically added as a member of the organization in question</strong>.</li>
<li>It is easy for you to define directly in your IDP whether a certain person is <strong>actually authorized to log into our control panel</strong>. This will also enable you to maintain control when employees join or leave your company.</li>
<li>If you operate your IDP in our cloud, make sure that you can still take the required repair measures in the case of its failure. This is why we recommend that you <strong>include a superuser in your organization</strong> that does not depend on the same IDP. Another option is an IDP setup with automatic failover.</li>
</ul>
<p>Our support team is happy to help if you would like to link your IDP or if you have further questions.</p>
<h3>Convenience even without your own IDP – login with GitHub</h3>
<p>Even if you do not have your own IDP, a further login option is now available. If you would like to <strong>log into our control panel with GitHub rather than a password</strong>, simply click on &quot;Continue with GitHub&quot; when you sign up and during future logins. The primary email address registered at GitHub is associated with your cloudscale.ch account and serves as the contact address for our communication with you. Please note that, in this case, logging into our control panel depends on the availability of GitHub and your account there.</p>
<p>With existing accounts, it is currently not possible to switch between GitHub-based and password-based logins. Our support team is happy to help if you already have an account and would like to move existing projects to a new account with a different login procedure.</p>
<br/>
<p>Single sign-on solutions mean that users require fewer different passwords and have to enter them less frequently. This also reduces the temptation to select weak passwords that increase risk exposure. By linking our cloud control panel to your existing IDP, <strong>you will make work easier for your employees while simultaneously increasing security for your company and customers</strong>.</p>
<p>Creating trust,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[New Control Panel: Organizations, Projects, Teams
]]></title>
          <link>https://www.cloudscale.ch/en/news/2021/05/27/new-control-panel-organizations-projects-teams</link>
          <pubDate>Thu, 27 May 2021 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2021/05/27/new-control-panel-organizations-projects-teams</guid>
          <description>
            <![CDATA[<p>We have done it: the new cloud control panel is live! The most significant overhaul of the control panel since cloudscale.ch was established has not only come up trumps with a completely new user interface, but also with numerous new features, such as organizations, projects, and teams. With projects, for example, you can group exactly those resources that belong together, irrespective of whether you only want to keep test and productive environments separate or whether you want to segregate completely independent setups. Companies and other organizations will also benefit from the new member and rights management. Whether you work alone or in a team, you can be sure that your cloud infrastructure is under control.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-control-panel.png"/><h3>Projects: enhanced order for everyone</h3>
<p>Several tried-and-tested approaches have been used in the past in order to maintain an overview, from a suitable name scheme and tags to the use of widespread DevOps tools. You can go one step further with our new &quot;projects&quot;. Your cloud resources are <strong>not only optically grouped and protected against mix-ups, but are also technically separated</strong> from each other. This stops you from inadvertently linking a test server into the private network of your productive infrastructure and prevents the API token of one cluster causing issues in another. All cloud resources, such as virtual servers, volumes, private networks, Floating IPs, and Objects Users are part of exactly one project.</p>
<p>Projects also mean that you can easily switch between your different &quot;hats&quot;. The clusters of your end customers A and B, your personal experimental setup, and the web server you run as a volunteer are all clearly separated from each other, but can still be reached with just two clicks. <strong>Where you may previously have worked with several accounts, projects are now your method of choice</strong>.</p>
<h3>Work together – in organizations</h3>
<p>There are numerous projects, both in business and private life, that you do not work on alone. The newly introduced &quot;organizations&quot; represent this kind of collaboration. You can <strong>set up organizations for your company, your association, or yourself</strong> whenever you want to collaborate on your cloud resources with other people. As a &quot;superuser&quot; in an organization, you can manage its projects, invite &quot;members&quot; and grant permissions. As organizations are each based on a separate framework agreement, they are also suitable for constellations where cloud resources need to be treated separately for accounting purposes.</p>
<p>The advantages of an organization are obvious: every organization member has their own login credentials and can enable two-factor authentication for additional protection. The traceability of changes in organization and project logs means that you will also meet your end customers&#x27; compliance requirements. For every project of your organization, you can <strong>determine on a case-by-case basis which members and/or teams should have read-only or full access</strong>.</p>
<img src="https://static.cloudscale.ch/img/news-control-panel-9c31214306b3.png" alt="New cloud control panel"/>
<h3>Simple use, smooth transition</h3>
<p>As usability is in the DNA of cloudscale.ch, it goes without saying that we also placed particular emphasis on ease of use in this evolutionary step. In the completely overhauled cloud control panel, which is now based on a single-page application architecture, almost everything is in its usual place and <strong>existing functionality has remained unchanged</strong>. API specifications are also the same, which means that automated processes and tools do not need to be adapted.</p>
<p><strong>Your existing cloud resources are not affected by the switch</strong>, and any API tokens will continue to work. If you have so far shared an account with other people and would like to now run this account as an organization, simply let us know and we will be happy to migrate the account for you. We would, of course, also be delighted to receive any other feedback.</p>
<br/>
<p>With the most significant overhaul of the cloud control panel in our history, we have managed to bridge the gap: at cloudscale.ch, the cloud will remain <strong>simple, fast and intuitive</strong> in future, too, and in addition, you now have access to the appropriate tools to maintain an <strong>overview and control in larger contexts</strong> – both on your own and in teams.</p>
<p>Experience the cloud from a new perspective!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Testing our Infrastructure from a User Perspective
]]></title>
          <link>https://www.cloudscale.ch/en/news/2021/04/27/testing-infrastructure-from-user-perspective</link>
          <pubDate>Tue, 27 Apr 2021 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2021/04/27/testing-infrastructure-from-user-perspective</guid>
          <description>
            <![CDATA[<p>When you have a complex technical solution developed, you want to be sure that you actually receive what was agreed on. This is why, especially in the IT sector, it is common practice to perform acceptance tests when a solution is handed over to a customer to ensure adherence to the specification. Standardized continuous services, such as cloud services, are subject to a constant handover process, which is exactly why, at cloudscale.ch, we have developed an &quot;Acceptance Test Suite&quot; and recently released it on GitHub. This provides you with an insight into part of our quality assurance process and allows you to see for yourself which standards we set for our cloud infrastructure.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-acceptance-tests.png"/><h3>What is tested</h3>
<p>We developed the <a href="https://github.com/cloudscale-ch/acceptance-tests#readme">Acceptance Test Suite</a> to allow us to see, as completely as possible, our cloud infrastructure from a customer perspective. This means that <strong>these end-to-end tests cover almost every aspect of our cloud offer</strong>, including confirmation that a single server really can have <a href="https://www.cloudscale.ch/en/news/2020/11/23/more-volumes-more-flexible-container-setups">up to 128 volumes</a>, that servers can be scaled between Flex and <a href="https://www.cloudscale.ch/en/news/2019/11/19/even-more-power-thanks-to-plus-flavor">Plus flavors</a>, that a <a href="https://www.cloudscale.ch/en/news/2017/04/20/high-availability-using-floating-ips">Floating IP</a> can be moved between servers, and that jumbo frames can be used in a <a href="https://www.cloudscale.ch/en/news/2019/10/25/segmentation-with-multiple-private-networks">private network</a>. During this process, the acceptance tests simulate a power user utilizing all the features of our offer. This provides us with the certainty that our infrastructure works perfectly during intense day-to-day use, too.</p>
<p><strong>The acceptance tests use our public API</strong> and thus the same technical interface as <a href="https://www.cloudscale.ch/en/news/2020/09/15/cloudscale-cli-now-available">our CLI</a> and DevOps tools, such as Ansible and Terraform. As this enables acceptance tests to be fully automated, they can be performed regularly: from GitHub we run them against <a href="https://www.cloudscale.ch/en/news/2019/11/06/geo-redundancy-with-two-cloud-locations">our two cloud locations</a> every day <strong>to ensure that we are constantly aware of what our customers see &quot;from the outside&quot;</strong>. We also use this to test our lab setups every day as well as to perform targeted test runs before and after major updates. As a complement to our manual verification steps in the lab, the acceptance tests provide additional certainty that planned work on productive systems will not have any negative effects for our customers.</p>
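<p>To give an idea of how such scheduled daily runs can be set up, here is a minimal sketch of a GitHub Actions workflow; the file name, job details, and secret name are illustrative assumptions for this sketch, not the actual configuration of our repository:</p>
<pre><code># .github/workflows/acceptance-tests.yml (illustrative sketch only)
name: acceptance-tests
on:
  schedule:
    - cron: "0 4 * * *"    # run the suite once per day
  workflow_dispatch: {}    # allow manual runs, e.g. around maintenance work
jobs:
  run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: "3.9"
      - run: pip install -r requirements.txt
      - run: pytest
        env:
          # API token of a dedicated test account, stored as a repository secret
          CLOUDSCALE_API_TOKEN: ${{ secrets.CLOUDSCALE_API_TOKEN }}
</code></pre>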
<p>Our S3-compatible Object Storage, which is based on Ceph, is the only component not covered by the acceptance tests we developed. Here, we rely on the automated tests that are already publicly available for this open source project.</p>
<h3>What the acceptance tests mean for our customers</h3>
<p>We use redundancy and extensive monitoring in order to minimize the negative effects of isolated events, such as hardware defects, on our customers. With the acceptance tests, which simulate a wide range of use cases, we have extended this monitoring so that it also covers cases where all the &quot;cogs&quot; work in isolation, but for some reason still do not mesh together correctly. Typical examples here are configuration errors or version updates that subtly change system behavior. Our comprehensive acceptance tests mean that we detect many such edge cases while still in the lab; regular testing of the productive systems then confirms to us that <strong>all features are available to our customers as usual over the long term</strong>.</p>
<p>Despite all these precautions, problems can still occur in unexpected places, causing previously correct system behavior to suddenly disappear. Expanding our Acceptance Test Suite is one way to <strong>prevent regressions of this kind in future</strong>. As a project that has grown and continues to grow, the acceptance tests are developing together with our cloud offer. It goes without saying that all customers automatically benefit from this institutionalized learning process.</p>
<img src="https://static.cloudscale.ch/img/news-acceptance-tests-90e44082956e.png" alt="cloudscale.ch acceptance tests on GitHub"/>
<h3>How to see for yourself</h3>
<p>The main aim of our acceptance tests is, of course, for you to be able to rely on the documented features of our infrastructure in your day-to-day work and life. We, however, go an extra step. On GitHub, you can <strong>see the tests we run against our productive cloud infrastructures, <a href="https://github.com/cloudscale-ch/acceptance-tests/actions">including the results</a></strong>. Please remember that, as things sometimes go wrong on the Internet, tests may be repeated one additional time before they are assessed as &quot;failed&quot;.</p>
<p>For everyone who would like to look at this in more detail, we have published the <a href="https://github.com/cloudscale-ch/acceptance-tests">source code of our acceptance tests</a> on GitHub. <strong>This will enable you to reconstruct exactly which tests we perform and how we perform them</strong>. If desired, you can also run the tests against our infrastructure yourself. All you need is a Linux or macOS system with Python (version 3.6 or later) and a cloudscale.ch account (for security reasons, we recommend that you use a separate account where you do not use any productive resources).</p>
<br/>
<p>Our customers depend on their infrastructure working reliably at cloudscale.ch. Regular tests both in our lab and on the productive infrastructure are a fixed component of our quality assurance process. Releasing our acceptance tests on GitHub provides you with a direct insight into this essential tool that we use to measure ourselves against every day.</p>
<p>Performing the acid test,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[New Terms of Service (ToS)
]]></title>
          <link>https://www.cloudscale.ch/en/news/2021/03/26/new-terms-of-service-tos</link>
          <pubDate>Fri, 26 Mar 2021 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2021/03/26/new-terms-of-service-tos</guid>
          <description>
            <![CDATA[<p>We are nearly there and about to launch our new completely updated cloud control panel. The new options for the use of our offer meant that we had to adapt our <a href="https://www.cloudscale.ch/en/tos.pdf">Terms of Service (ToS)</a>. The updated ToS will apply to new customers with immediate effect and to existing customers from 2021-04-26. In the following, we provide an overview of the main changes.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Reason and procedure</h3>
<p>Keeping Infrastructure-as-a-Service (&quot;IaaS&quot;) simple has always been one of our stated aims at cloudscale.ch: an account has an email address and a password, some credit, and as many or as few virtual servers as required at the time. This is not, however, always ideal for a company using our services and some users would also like to group their cloud resources, e.g. according to the purpose of individual servers. In future, our new cloud control panel will address all scenarios optimally:</p>
<ul>
<li><strong>The customer &quot;account&quot; will remain the same</strong> as with the current approach, but with additional options, e.g. the ability to separate cloud resources into different projects. This will allow, for example, the clean separation of test and production environments.</li>
<li><strong>The new &quot;organizations&quot; will be subject to a separate framework agreement</strong> and will also be able to book cloud resources; however, instead of a password for logging in, these will have one or several &quot;members&quot; who may have different roles and rights within the organization.</li>
</ul>
<p>If you would like to find out more about the new options, please make sure you have subscribed to our newsletter or keep an eye on the news section on our website.</p>
<p>The new cloud control panel will go live in the second quarter of 2021. To ensure transparent regulation of the new technical options, we are already publishing our updated ToS, which cover these changes. <strong>These new ToS will apply with immediate effect to newly created customer accounts. For existing customers, the new ToS come into effect on 2021-04-26 after a transition period of 30 days</strong> in accordance with the current regulations. We will not be invoking the provision about changes to ToS being authorized with immediate effect upon booking additional services and this provision will then be deleted without replacement in the new ToS. This means that an automated cloud setup will not affect the lead time available to you.</p>
<p><strong>There is no need for you to do anything.</strong> Any existing services will continue unchanged, also after the new ToS come into effect.</p>
<h3>The most important changes</h3>
<p>With accounts and organizations, the new ToS explicitly cover both options for a customer relationship and thus of a framework agreement. A customer &quot;account&quot;, which is required to log into our cloud control panel, is the typical approach for an individual who would like to use cloudscale.ch services. Once logged in, <strong>&quot;organizations&quot; can be created to conclude additional framework agreements</strong>, registered in the name of one&#x27;s company, for example.</p>
<p>The &quot;Introduction&quot; and &quot;Use&quot; sections of the new ToS define which agreement and thus whose responsibility individual actions, e.g. creating a new server, come under. <strong>In all cases, the account or organization within the scope of which an action is performed is relevant</strong>. As a result, the new option of granting authorization is based on the principles known from e.g. work contracts and proxy assignments.</p>
<p>As cooperation is possible, accounts are no longer always completely independent from each other. The &quot;Transmission of customer data&quot; section now states that <strong>when authorization is accepted or granted, certain data may be or become visible to other people</strong>, e.g. in the logs of an organization you are a member of.</p>
<p>To make it clear that the person reading the ToS is not necessarily being addressed personally, we have changed the wording from &quot;you&quot; to &quot;customer&quot;. Please bear in mind that the English translation of the ToS is provided for the convenience of our non-German-speaking customers, and <strong>only the German version is legally binding</strong>.</p>
<p>In addition to a few minor changes, we have also been able to meet another customer demand. Already in the past, we announced maintenance work in advance wherever possible and now <strong>we have included a corresponding section (&quot;Maintenance work&quot;) in our ToS</strong>. In the interest of our customers, we nonetheless continue to reserve the right to carry out maintenance work at short notice in urgent cases, e.g. security updates.</p>
<h3>Most things will stay the same</h3>
<p>The basic principle behind our ToS remains unchanged. We want to be a close and approachable partner for you even in the &quot;small print&quot;. This starts with the data location, and <strong>we will continue to operate our whole infrastructure exclusively in data centers in Switzerland</strong>. We provide individual support for technical issues. Our services and their prices will not change with the new ToS. And because we are convinced that our &quot;overall package&quot; speaks for itself, <strong>you still do not need to observe any minimum contractual periods or notice periods</strong> at cloudscale.ch.</p>
<br/>
<p>We look forward to introducing our completely updated cloud control panel to you soon and are convinced that, with the updated ToS, we have laid a balanced foundation for the new options. However, should you have any concerns, please contact us directly so we can discuss your questions.</p>
<p>Full flexibility with full transparency,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Manage Kubernetes Clusters with OKD
]]></title>
          <link>https://www.cloudscale.ch/en/news/2021/02/25/manage-kubernetes-clusters-with-okd</link>
          <pubDate>Thu, 25 Feb 2021 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2021/02/25/manage-kubernetes-clusters-with-okd</guid>
          <description>
            <![CDATA[<p>Containers are the talk of the town due to their many benefits. They are just as well suited for short-term use of an application as they are for use as components in CI/CD pipelines or for operating highly available productive services in clusters. There are as many approaches for creating and running container setups as there are areas of application – and cloudscale.ch does not restrict your options. In the following we will provide a brief introduction to the world of containers and then show you how you can start running an &quot;OKD&quot; cluster at cloudscale.ch. OKD is a comprehensive Kubernetes distribution developed as an open source project that also provides the basis for Red Hat OpenShift.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-okd-demo.png"/><h3>Container orchestration: what is it all about?</h3>
<p>Many of our customers already use containers in one form or another. A currently widespread approach for isolating containers is the use of Linux namespaces in combination with cgroups. This has been a mainstream approach since about 2013 following the breakthrough of Docker and the ensuing Open Container Initiative (OCI) standardization process. In containers, applications can be separated from each other in a <strong>particularly resource-efficient</strong> manner as this approach does not use hardware virtualization and, unlike with full virtualization, does not run multiple parallel instances of the operating system either.</p>
<p>While it is possible to manage several containers on a few nodes manually without any problems, container orchestrators, such as <a href="https://kubernetes.io">Kubernetes or &quot;K8s&quot;</a>, come into play once questions of scaling and container lifecycle management arise. Kubernetes ensures, among other things, that the desired number of instances of each container are running and independently determines the appropriate nodes for them. During the deployment of new container versions, it replaces old instances in accordance with <strong>defined deployment strategies</strong>, e.g. to ensure that a service is continuously available as a whole. And if persistent storage is required for certain containers, Kubernetes can automatically provision it to the right node <a href="https://www.cloudscale.ch/en/news/2019/03/15/persistent-volumes-in-kubernetes-with-csi">via CSI</a>.</p>
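<p>As a brief illustration, such automatic provisioning is triggered by an ordinary Persistent Volume Claim; the storage class name below is an assumption for this sketch and needs to match the class offered by the CSI driver in your cluster:</p>
<pre><code># PersistentVolumeClaim that lets the CSI driver provision a volume
# automatically; the storage class name is assumed for illustration.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-example
spec:
  accessModes:
    - ReadWriteOnce              # attached to one node at a time
  storageClassName: cloudscale-volume-ssd
  resources:
    requests:
      storage: 10Gi              # size of the volume to be created
</code></pre>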
<h3>Kubernetes exists in various &quot;flavors&quot;</h3>
<p>At cloudscale.ch, customers have the choice of how to install and run Kubernetes. If you would like to work directly with the &quot;upstream&quot; Kubernetes, you can deploy your K8s cluster with e.g. <a href="https://kubespray.io">Kubespray</a>: this tool is <strong>based on Ansible</strong> and can be used together with our <a href="https://www.cloudscale.ch/en/news/2020/12/21/cloud-orchestration-with-ansible-collections">Ansible collection</a>. This way, it is straightforward for you to set up the cloud infrastructure required for the cluster via our API.</p>
<p><a href="https://rancher.com">Rancher</a> goes one step further as a higher-level administration tool. Rancher is run on a container basis itself and provides a graphic web frontend and APIs that make it possible to set up and manage <strong>complete Kubernetes clusters in just a few steps</strong>. Rancher automatically prepares the cloud resources required for a cluster; the <a href="https://www.cloudscale.ch/en/news/2019/08/14/docker-machine-and-rancher">cloudscale.ch node driver</a> is already pre-installed in current Rancher releases.</p>
<p><a href="https://www.okd.io">OKD</a> is another powerful tool that we will look at in more detail here. This open source project is also the basis for OpenShift that is sold by Red Hat as a complete software, services and support package. Along the lines of the Linux kernel and the Linux distributions built around it, <strong>OKD can be seen as a Kubernetes distribution</strong>. In addition to pure container management, OKD also integrates several tools that, for example, monitor the cluster, perform logging functions and route network traffic to the correct containers. The installation of OKD at cloudscale.ch benefits from various features that we have already reported on, e.g. <a href="https://www.cloudscale.ch/en/news/2019/10/25/segmentation-with-multiple-private-networks">private networks</a> with <a href="https://www.cloudscale.ch/en/news/2020/04/03/mastering-the-private-network-with-managed-dhcp">managed DHCP</a>. For what are known as master and worker nodes, OKD uses Fedora CoreOS, which stems from the former CoreOS and is one of the available options when installing new servers at cloudscale.ch (just like Flatcar Container Linux, by the way, which is a popular CoreOS fork).</p>
<h3>Create your own OKD cluster step by step</h3>
<p>If you are interested in OKD, we have published detailed instructions <a href="https://github.com/cloudscale-ch/okd-demo">on GitHub</a> about how to <strong>create your own OKD cluster</strong>. Relying on Ansible, this tutorial uses a popular DevOps tool to automate the individual steps. In addition, our &quot;how to&quot; guide uses &quot;ocp4-helpernode&quot;, which was developed in the OpenShift setting, to make the procedure even more straightforward, in particular for provisioning HAProxy and DNS. The process essentially consists of the following four steps:</p>
<p>Step 1: The <strong>required tools are installed</strong> on your personal device, e.g. your own laptop or an alternative cloud server, and certain basic variables are defined.</p>
<p>Step 2: The <strong>helper node is installed</strong> and the services that perform a range of key functions in the cluster are configured on it. API connections, for example, will run via the HAProxy, which also makes it possible to reach your applications on the worker nodes from the Internet at a later stage. The DNS server, in turn, enables resolution of cluster-internal domains and IP addresses, while an Apache HTTP server is used for serving static files. Ignition configs, in particular, are stored on the Apache server: in a similar way to <a href="https://www.cloudscale.ch/en/news/2020/06/23/initialize-servers-with-cloud-init">&quot;cloud-init&quot;</a>, Ignition allows individual settings to be applied on newly created servers on the very first start-up.</p>
<img src="https://static.cloudscale.ch/img/news-okd-demo-be17cac30082.png" alt="OKD demo cluster network diagram"/>
<p>Step 3: This is where the <strong>master and worker nodes are created</strong>. Like with the helper node, starting these virtual servers at cloudscale.ch is automated using our Ansible collection. The new nodes fetch their prepared Ignition configs and thus their specific configuration settings from the Apache server from the previous step, which means they can complete their set-up independently.</p>
<p>Step 4: In a final step where the <strong>new nodes are accepted into the cluster</strong>, the relevant CSRs need to be signed. This completes the installation of the OKD cluster.</p>
<br/>
<p>The popularity of container setups continues unabated, not least because of the numerous advantages, such as the <strong>clean separation of individual services</strong> with simultaneous efficient use of resources. There are a wide range of possible solutions in container management, depending on the specific requirements and preferences. Whether OKD, Rancher or a particularly streamlined approach, at cloudscale.ch you will find the components you need to make it work.</p>
<p>For your favorite tools,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Why the Cloud?
]]></title>
          <link>https://www.cloudscale.ch/en/news/2021/02/09/why-the-cloud</link>
          <pubDate>Tue, 09 Feb 2021 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2021/02/09/why-the-cloud</guid>
          <description>
            <![CDATA[<p>&quot;The cloud&quot; is used as standard for many things today: we stream films from the cloud, back up our mobiles to the cloud and, whenever we access a service from a browser, there is a server somewhere in the network behind it. Nevertheless, many companies still run their own physical servers – often without appropriate safety measures in their offices – despite the countless advantages of a cloud infrastructure, even for numerous IT systems where one might not immediately think &quot;cloud&quot;.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>&quot;CapEx vs. OpEx&quot; is not everything</h3>
<p>The most frequently mentioned advantage of &quot;infrastructure as a service&quot; (IaaS) is that if IT resources are not purchased as hardware but sourced as a cloud service, there is no initial investment. Instead, expenses are relatively <strong>evenly distributed over the whole period of use</strong> and unlike a physical system with its fixed dimensions, cloud resources – and therefore costs – can easily be adapted to new requirements even at a later date.</p>
<p>Depending on the constellation, <strong>real savings are possible</strong> in addition to these evenly distributed costs. For most workloads, utilization rates vary significantly in the course of a day, a week or a year. If, for example, a new system is sized for peak demand periods at the end of each quarter, the expensive capacity bought for one&#x27;s own hardware will lie dormant for the rest of the year. By contrast, additional cloud infrastructure can be booked as required and only entails costs for as long as it is actually needed.</p>
<h3>No hardware management – reduced overheads</h3>
<p>However, in addition to purely financial considerations, the <strong>cloud primarily excels in practical terms</strong>. Handling hardware over its entire life cycle is associated with recurring effort: from evaluation to start-up and replacing defective components, all the way to upgrading to the next system. On top of this, you also need to run a designated server room or drive to an external server housing location and ensure a network connection at the site in question.</p>
<p>A cloud infrastructure on the other hand offers the proverbial all-round carefree package. Professional, carefully selected and certified data centers provide physical data security. Thanks to rolling life cycle management that takes place in the background, customers have access to <a href="https://www.cloudscale.ch/en/news/2019/11/19/even-more-power-thanks-to-plus-flavor">up-to-date hardware</a> at all times, while its monitoring and maintenance is guaranteed by specialists at the cloud provider. Redundancy and capacity reserves minimize the impact of hardware problems on customer workloads. In contrast to what is frequently the case for one&#x27;s own servers, cloud systems are also networked via multiple alternative paths in the Internet. This not only guarantees access to your own service in the case of a malfunctioning connection, but often also reduces latency to your own visitors or customers, which helps <strong>improve customer satisfaction</strong>.</p>
<h3>Safer and more agile by the day</h3>
<p>Once the first step has been taken in the cloud, numerous other optimization options become possible. As new servers do not need to be purchased in a complex and expensive process, but are available at the push of a button, upgrades and migrations, for example, are possible with almost no downtime: a new system is installed in the background and productive traffic at time X is <a href="https://www.cloudscale.ch/en/news/2017/04/20/high-availability-using-floating-ips">simply redirected</a>. When several servers are run in parallel, load balancing and failover set-ups are possible, which <strong>further increases reliability</strong>. Thanks to &quot;<a href="https://www.cloudscale.ch/en/news/2016/10/21/increasing-availability-using-anti-affinity">anti-affinity</a>&quot; it can also be guaranteed that these servers are actually deployed on separate physical machines at the cloud provider&#x27;s end. Particularly high standards can be met through <a href="https://www.cloudscale.ch/en/news/2019/11/06/geo-redundancy-with-two-cloud-locations">geo-redundant set-ups</a> with physically separate server or cloud locations.</p>
<p>Day-to-day flexibility is just as important as the stable operation of productive systems. By creating and deleting new servers as required, engineers can quickly <strong>test new tools without risk</strong>, reproduce a problem or do a &quot;dry run&quot; of a tricky procedure. When selling technical solutions and during training, it is useful if customers can play around safely on a standalone demonstration system. Thanks to the integration of cloud APIs into <a href="https://www.cloudscale.ch/en/api/v1">current DevOps tools</a> (such as <a href="https://www.cloudscale.ch/en/news/2020/12/21/cloud-orchestration-with-ansible-collections">Ansible</a> and <a href="https://www.cloudscale.ch/en/news/2019/12/23/latest-features-with-terraform">Terraform</a>), the provision and cleaning of short-lived instances of this kind can be largely automated.</p>
<br/>
<p>Five years ago, the cloudscale.ch IaaS offer went live and has enjoyed continuous growth ever since. Our users regularly confirm to us the advantages of a cloud solution compared to a traditional set-up – not only in terms of contributing to a better overall product for their end customers, but also in terms of making their own work processes easier. Given the many reasons in favor of a cloud infrastructure, it is also clear that its <strong>full potential has by no means been exploited</strong> and that, in future, other and possibly less obvious use cases could benefit from the flexibility of a cloud infrastructure.</p>
<p>Keeping all your options open,<br/>
Your cloudscale.ch team</p>
<br/>
<p>PS: Although we list server prices per 24 hours here at cloudscale.ch, you benefit from <strong>to-the-second billing</strong> and any unused time is credited to your account as soon as you delete a server. This means that you always have the appropriate resources available even for extremely short applications.</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Cloud Orchestration with Ansible Collections
]]></title>
          <link>https://www.cloudscale.ch/en/news/2020/12/21/cloud-orchestration-with-ansible-collections</link>
          <pubDate>Mon, 21 Dec 2020 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2020/12/21/cloud-orchestration-with-ansible-collections</guid>
          <description>
<![CDATA[<p>Ansible is a widely used automation tool for IT infrastructures, and the first integration with our cloud API dates back to Ansible 2.3. Since then we have continuously invested in and expanded its support. This article will use three simple examples to provide you with an insight into the most recent improvements, which have only been added since the launch of Ansible 2.10.0.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>&quot;cloudscale_ch.cloud&quot; Ansible collection</h3>
<p>In addition to <a href="https://www.cloudscale.ch/en/news/2019/12/23/latest-features-with-terraform">Terraform</a>, Ansible is one of the most popular IT orchestration tools. At cloudscale.ch, we use Ansible internally to, for example, <a href="https://www.cloudscale.ch/en/news/2020/07/23/network-automation-onie-ztp-ansible">manage our network components</a> and almost the whole cloud infrastructure. It goes without saying that we also endeavor to make the configuration and operation of cloud resources <strong>as simple as possible</strong> for our customers, too.</p>
<p>With Ansible 2.10, development and maintenance of community plug-ins were outsourced to what are known as &quot;collections&quot;. Many plug-ins found a new home in the &quot;community.general&quot; collection, while for others, such as the cloudscale.ch integration, development was split into separate source code repositories and organizations to allow faster development and individual release cycles. Today, there are already <strong>about 75 self-organized collections</strong> for Ansible 2.10.</p>
<p>Our own <a href="https://galaxy.ansible.com/cloudscale_ch/cloud">&quot;cloudscale_ch.cloud&quot; Ansible collection</a> provides us with <strong>several advantages at once</strong>: on the one hand, we are able to replicate extensions to our API in our plug-ins within a short period of time, and on the other hand, it is easier to test our plug-ins in an automated manner independently of the rest of the Ansible project.</p>
<h3>The suitable version for any requirement</h3>
<p>With releases every three weeks, Ansible 2.10 also contains the latest releases of the collections, which means that the most recent version of &quot;cloudscale_ch.cloud&quot; is always <strong>automatically supplied and installed with the official Ansible package</strong>. If you would like to use a specific feature scope outside this release cycle, you can take matters into your own hands with <code>ansible-galaxy</code> and use the preferred version of our collection with your existing Ansible version.</p>
<p>For reasons of traceability, we recommend that you create a <code>requirements.yml</code> file to <strong>maintain an overview of all external collections and roles</strong>:</p>
<pre><code class="language-yaml">collections:
  - name: cloudscale_ch.cloud
    version: 1.3.0
</code></pre>
<p>The collection can then be installed as usual via <code>ansible-galaxy</code>:</p>
<pre><code class="language-bash">ansible-galaxy collection install -r requirements.yml
</code></pre>
<p>Please note that existing playbooks will <strong>continue to work without adjustments</strong>. It is only essential to use the &quot;fully qualified collection name&quot; (FQCN) for plug-ins that have been newly added since Ansible 2.10. At the same time, there is no reason not to make a general switch to FQCN in existing playbooks, too.</p>
<p>Without FQCN as before:</p>
<pre><code class="language-yaml">- cloudscale_server:
    name: web1.example.com
    ...
</code></pre>
<p>Or with FQCN:</p>
<pre><code class="language-yaml">- cloudscale_ch.cloud.server:
    name: web1.example.com
    ...
</code></pre>
<h3>Example 1: Network management with subnets</h3>
<p>The network API was integrated into version 1.2.0 of our collection. <strong>Subnets can now also be managed</strong> with the most recent version 1.3.0, which allows the creation of new subnets as well as the adjustment of the gateway IP and DNS servers for existing subnets:</p>
<pre><code class="language-yaml">- name: Ensure network exists
  cloudscale_ch.cloud.network:
    name: Private in LPG1
    zone: lpg1
    auto_create_ipv4_subnet: false

- name: Ensure subnet exists
  cloudscale_ch.cloud.subnet:
    cidr: 10.11.0.0/16
    gateway_address: 10.11.0.1
    dns_servers:
      - 10.11.0.2
      - 10.11.0.3
    network:
      name: Private in LPG1
      zone: lpg1
</code></pre>
<h3>Example 2: Objects Users</h3>
<p>The <strong>module for Objects Users management</strong> was already added with version 1.1.0 of our collection (and therefore also in Ansible 2.10.0), thus enabling, for example, the automated creation of a backup configuration using our Object Storage:</p>
<pre><code class="language-yaml">- name: Create a backup user
  cloudscale_ch.cloud.objects_user:
    display_name: backup for ACME
    tags:
      customer: ACME Inc.
  register: res_object_user

- name: Configure S3cfg
  template:
    src: s3cfg.j2
    dest: ~/.s3cfg
</code></pre>
<p>In the <code>s3cfg.j2</code> template, we use the keys returned by the module:</p>
<pre><code class="language-jinja2"># {{ ansible_managed }}
[default]
access_key = {{ res_object_user['keys'][0].access_key }}
secret_key = {{ res_object_user['keys'][0].secret_key }}
check_ssl_certificate = True
guess_mime_type = True
host_base = objects.lpg.cloudscale.ch
host_bucket = objects.lpg.cloudscale.ch
use_https = True
</code></pre>
<h3>Example 3: Floating IPs</h3>
<p>Existing modules have also undergone improvements, which mean that it is now possible to <strong>create Floating IPs idempotently</strong>. This was implemented by means of an additional <code>name</code> parameter. To ensure backwards compatibility, this parameter is currently still optional. It will only become mandatory to specify this parameter from the next major version of our collection onwards.</p>
<pre><code class="language-yaml">- name: Start cloudscale.ch server
  cloudscale_ch.cloud.server:
    name: mx1.example.com
    image: debian-10
    flavor: flex-2
    ssh_keys:
      - ssh-rsa XXXXXXXXXX...XXXX ansible@cloudscale-ch
    zone: lpg1
  register: server

- name: Request Floating IPs for my server
  cloudscale_ch.cloud.floating_ip:
    name: mx1.example.com
    ip_version: &quot;{{ item }}&quot;
    reverse_ptr: mx1.example.com
    server: &quot;{{ server.uuid }}&quot;
  with_items:
    - 4
    - 6
</code></pre>
<br/>
<p>Ansible and our &quot;cloudscale_ch.cloud&quot; collection provide you with a <strong>powerful, yet simple tool</strong> to create and operate a larger-scale cloud infrastructure quickly and transparently. And for everyone who (like us) is committed to open source: the source code of our collection is <a href="https://github.com/cloudscale-ch/ansible-collection-cloudscale">available on GitHub</a> under GPLv3.</p>
<p>Proud to be part of the Ansible universe,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Flexible and Efficient Thanks to Custom Images
]]></title>
          <link>https://www.cloudscale.ch/en/news/2020/12/09/flexible-and-efficient-thanks-to-custom-images</link>
          <pubDate>Wed, 09 Dec 2020 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2020/12/09/flexible-and-efficient-thanks-to-custom-images</guid>
          <description>
            <![CDATA[<p>In addition to the popular, widespread and universally applicable Linux distributions, there are several specialized distributions and &quot;virtual appliances&quot; for the most varied requirements. And even with prevalent distributions, there are times when it makes sense to use an adapted installation as a base for new servers rather than the standard image. Thanks to &quot;custom images&quot; at cloudscale.ch, you can now adapt your virtual servers to your requirements even before the initial start-up, which minimizes subsequent configuration time.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Advantages of custom images</h3>
<p>Thanks to a wide range of ready-made images from popular Linux distributions and <a href="https://www.cloudscale.ch/en/news/2019/02/27/firewall-distribution-at-a-mouse-click">security appliances</a>, your servers at cloudscale.ch are ready for use in no time. In addition, further operating systems can be installed manually via <a href="https://www.cloudscale.ch/en/news/2020/01/14/use-your-own-iso-usb-images">ISO/USB images</a>. Now, you can combine the best of both worlds: using your own individual images makes it quick and easy for you to create new servers with <strong>complete flexibility in terms of pre-installed software and configurations</strong>.</p>
<p>Your own images will enable you to handle numerous use cases even more elegantly. You can now integrate tools that you would install on your servers anyway <strong>directly into your own image</strong>, e.g. packages for monitoring and configuration management, as well as utilities that make your everyday work easier. If you need your server for a specific purpose that already has a specialized distribution or appliance, you can simply import this as a custom image and you are good to go.</p>
<h3>Image management via API</h3>
<p>You can import your own images into your cloudscale.ch account with a <a href="https://www.cloudscale.ch/en/api/v1#custom-images">simple API call</a>. Shortly afterwards you will be able to create new servers on this base, referencing your own image e.g. with its specific UUID, which will remain valid for as long as the image exists in your account. Alternatively, you can enter the &quot;slug&quot; that you determined when you added the image, along with the prefix <code>custom:</code>. This is particularly useful if, in future, you want to include more recent versions as seamlessly as possible: if there are several image versions with e.g. the slug &quot;voip-pbx&quot;, the <strong>most recent image will automatically be used</strong> when starting a new server using <code>custom:voip-pbx</code>.</p>
<p>To import an image into your account, simply indicate the HTTP(S) address where the image is available in the API call. Our system will then <strong>load this image directly from this URL</strong> and make it available to you in the desired cloud locations. If you are creating/editing your image locally and do not have a web server available, you can use our Object Storage. Once you have uploaded the image (e.g. with <code>s3cmd --acl-public put ...</code>) into a bucket of your choice, all you have to indicate in the API call is the resulting URL: <code>https://BUCKET.objects.LOCATION.cloudscale.ch/IMAGE</code>.</p>
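<p>As an illustration, a minimal sketch of such an import call. The token variable, bucket URL, image name and slug are placeholders, and the field names should be checked against the linked API documentation:</p>

```shell
# Sketch only: import a custom image via the cloudscale.ch API.
# CLOUDSCALE_TOKEN, the bucket URL, name and slug are placeholders;
# see the API documentation for all required fields.
PAYLOAD=$(jq -n \
  --arg url "https://my-bucket.objects.lpg.cloudscale.ch/voip-pbx.raw" \
  --arg name "VoIP PBX" --arg slug "voip-pbx" \
  '{url: $url, name: $name, slug: $slug, zones: ["lpg1"], source_format: "raw"}')

# Only send the request if an API token is actually configured:
if [ -n "${CLOUDSCALE_TOKEN:-}" ]; then
  curl -sS -H "Authorization: Bearer $CLOUDSCALE_TOKEN" \
    -H "Content-Type: application/json" -d "$PAYLOAD" \
    https://api.cloudscale.ch/v1/custom-images/import
fi
```

<p>A new server can then be created from this image by referencing either <code>custom:voip-pbx</code> or the image UUID returned by the call.</p>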
<h3>Preparation tips</h3>
<p>In the simplest case, your desired distribution makes an image file available that you can import one-to-one into cloudscale.ch. If multiple image files are available, a name containing &quot;OpenStack&quot; or &quot;cloud&quot; usually indicates the right version. Please be aware that the <strong>image file needs to be in &quot;raw&quot; format</strong>; if required, you can convert the image using e.g. <code>qemu-img</code> (often in the <code>qemu-utils</code> or <code>qemu-img-2</code> package):</p>
<pre><code class="language-bash">qemu-img convert -f qcow2 -O raw my-image.qcow2 my-image.raw
</code></pre>
<p>You will, however, gain the greatest benefit by adapting your image specifically to your requirements and pre-installing frequently used tools. One way of doing this is by means of a virtual server, e.g. based on QEMU/KVM, that you <strong>install locally and where you put together your sample installation</strong>. The file that serves as a virtual hard drive can then be imported as a custom image (after converting it to &quot;raw&quot;, if necessary) for your future cloud servers. If you maintain numerous images or update them frequently, this process can be automated with specialized tools such as <a href="https://www.packer.io">&quot;Packer&quot;</a>.</p>
<p>When creating your images, please ensure that they will also boot up in a slightly changed environment, as every cloud server you create based on your image will be assigned e.g. its own IP and MAC address. If it is not already included in your setup, we recommend <code>cloud-init</code> for this purpose. This useful package is the de facto standard in the cloud setting and can <strong>configure numerous parameters automatically</strong> when a server first boots up. It also evaluates the &quot;user data&quot; you can specify when starting a server, which allows further automated setting up of your servers during the initial boot process.</p>
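<p>As a sketch, minimal &quot;user data&quot; of this kind could look as follows; the package shown is an arbitrary example, and the resulting string would be passed via the <code>user_data</code> parameter when creating a server:</p>

```shell
# Minimal "#cloud-config" user data (package is an arbitrary example);
# cloud-init applies this during the first boot of the new server.
USER_DATA=$(printf '%s\n' \
  '#cloud-config' \
  'package_update: true' \
  'packages:' \
  '  - htop')
printf '%s\n' "$USER_DATA"
```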
<br/>
<p>For each cloud location, you can store those images that are already perfectly prepared for your future servers – you will only be charged for the space actually required on our SSD storage. Any one-time adjustments you make to your images are an investment that you will <strong>benefit from again with every additional server</strong>.</p>
<p>Preparation is everything!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[More Volumes – More Flexible Container Setups
]]></title>
          <link>https://www.cloudscale.ch/en/news/2020/11/23/more-volumes-more-flexible-container-setups</link>
          <pubDate>Mon, 23 Nov 2020 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2020/11/23/more-volumes-more-flexible-container-setups</guid>
          <description>
            <![CDATA[<p>If you work with computers, you will almost inevitably use &quot;hard disks&quot; in various shapes and sizes, such as the internal SSD, external drives, USB sticks, and memory cards. &quot;Volumes&quot; are the equivalent to this in the cloud, and cloudscale.ch has already been supporting the connection of several SSD and bulk volumes to your virtual servers. Up to 128 volumes are possible with immediate effect, which opens up new options for the automated set-up of containers, in particular.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Root and additional volumes</h3>
<p>Every server at cloudscale.ch has a root volume on NVMe SSDs that contains the selected operating system after the initial start-up. The size of this volume can be determined for almost all images when creating the server to ensure that your data fit into this volume. If your project and the associated space requirements grow at a later point, <strong>scaling up the root volume is simple</strong> and can be performed on a running system.</p>
<p>In addition, it may make sense to <strong>save some of the data on separate volumes</strong>. Bulk volumes are a good choice when inexpensive storage space is required and performance is a lesser priority, e.g. for archive data. For a database that needs to remain separate from the rest of the system, on the other hand, an additional SSD volume would be the first-choice solution. Additional volumes can either be created when setting up a server or added to the server at a later date and scaled up as required. It is also possible to move additional volumes to another server or to delete them at any time.</p>
<h3>Persistent volumes in container setups</h3>
<p>Additional volumes are of particular significance in modern container setups. Containers or pods are volatile by nature, and as soon as they are replaced by a new instance or a new version, the data disappear. If, however, the data need to be maintained, the pod can be allocated a persistent volume, which means an <strong>additional volume for permanent data storage</strong>. Thanks to the <a href="https://www.cloudscale.ch/en/news/2019/03/15/persistent-volumes-in-kubernetes-with-csi">&quot;container storage interface&quot; (CSI)</a>, persistent volumes of this kind can be created automatically by a container orchestration system such as Kubernetes and directly attached to the correct node so that the pod running there has access to it.</p>
<p>cloudscale.ch now supports up to 128 volumes per virtual server, thus also offering adequate scope for tightly packed container setups. You can scale up your cluster and migrate workloads between nodes, while the defined &quot;persistent volume claims&quot; (PVCs) are automatically fulfilled via CSI so that <strong>storage is available in the correct location</strong>. If you have already been using our CSI driver, make sure that you update to <a href="https://github.com/cloudscale-ch/csi-cloudscale#max-number-of-csi-volumes-per-node">the most recent version</a>.</p>
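<p>A minimal sketch of such a &quot;persistent volume claim&quot;, assuming the storage class name <code>cloudscale-volume-ssd</code> provided by our CSI driver (verify the exact name in your cluster with <code>kubectl get storageclass</code>):</p>

```shell
# Sketch: write a PersistentVolumeClaim manifest for the cloudscale.ch
# CSI driver. The storage class name is an assumption; size is an example.
printf '%s\n' \
  'apiVersion: v1' \
  'kind: PersistentVolumeClaim' \
  'metadata:' \
  '  name: my-data' \
  'spec:' \
  '  accessModes: ["ReadWriteOnce"]' \
  '  storageClassName: cloudscale-volume-ssd' \
  '  resources:' \
  '    requests:' \
  '      storage: 50Gi' \
  > pvc.yaml
# kubectl apply -f pvc.yaml
```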
<h3>Technical background</h3>
<p>Support for up to 128 volumes per virtual server was made possible by changing the background technology. Newly created virtual servers use <code>virtio-scsi</code> for the volumes instead of <code>virtio-blk</code>. This means that the number of volumes is <strong>no longer limited by the number of supported PCI devices</strong>. We have also patched a bug in OpenStack, on which our cloud is based. This bug meant that – in addition to limited PCI devices – volumes from <code>vda</code> up to at most <code>vdz</code>, i.e. a maximum of 26, could be used.</p>
<p>The only change for you as a user is the change to the name of the volumes: instead of <code>vda</code>, <code>vdb</code>, etc. you will now find the volume <code>sda</code> and possibly further <code>sdX</code> in your virtual server. This also eliminates one of the main differences that needed to be considered when switching from a physical computer to a Linux cloud server. If a large number of volumes are actually being used, numbering after <code>sdz</code> continues with <code>sdaa</code> to <code>sddx</code>, which covers the current maximum of 128 volumes. You may need to check in your tools or scripts that the <strong>new names are being used</strong> for operations on volumes and partitions.</p>
<p>Example showing two volumes in an Ubuntu server:</p>
<pre><code class="language-plain">ubuntu@my-server:~$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
[...]
sda       8:0    0   50G  0 disk
├─sda1    8:1    0 49.9G  0 part /
├─sda14   8:14   0    4M  0 part
└─sda15   8:15   0  106M  0 part /boot/efi
sdb       8:16   0  100G  0 disk
└─sdb1    8:17   0  100G  0 part
sr0      11:0    1  478K  0 rom
</code></pre>
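<p>The naming scheme corresponds to bijective base-26 counting; a small sketch reproduces it for any volume number:</p>

```shell
# Sketch: compute the Linux device name for the N-th volume (1-based),
# reproducing the sda ... sdz, sdaa ... naming described above.
dev_name() {
  n=$1 suffix=""
  while [ "$n" -gt 0 ]; do
    r=$(( (n - 1) % 26 ))                                # letter index 0..25
    suffix=$(printf "\\$(printf '%03o' $((97 + r)))")$suffix  # prepend a..z
    n=$(( (n - 1) / 26 ))
  done
  printf 'sd%s\n' "$suffix"
}
dev_name 1     # prints: sda
dev_name 27    # prints: sdaa
dev_name 128   # prints: sddx
```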
<br/>
<p>With up to 128 volumes per virtual server, <strong>you are now also equipped for larger container setups</strong>. Using Kubernetes and CSI is not a requirement, though. It goes without saying that the additional flexibility can also be applied to other use cases, from sophisticated LVM configurations to the clean separation of different data sets.</p>
<p>Claim your volumes!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[New "swiss hosting" Label: We Are Launch Partner
]]></title>
          <link>https://www.cloudscale.ch/en/news/2020/10/22/swiss-hosting-label-launch-partner</link>
          <pubDate>Thu, 22 Oct 2020 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2020/10/22/swiss-hosting-label-launch-partner</guid>
          <description>
            <![CDATA[<p>With the topic of data protection attracting increasing attention, the issue of the geographical location of data storage remains key. It is, however, not the only factor that determines which law is ultimately applicable. The aim of the newly launched &quot;swiss hosting&quot; label is to create certainty here: as a customer, you not only have the security of knowing that your data are stored in Switzerland, but also that organizations from abroad are unable to gain access via circuitous routes. As a launch partner, cloudscale.ch has been involved in &quot;swiss hosting&quot; from the outset.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-swiss-hosting-logo.png"/><h3>Data location and applicable law</h3>
<p>The term &quot;cloud&quot; originates from network diagrams where the cloud symbol is used to represent networks whose details are of no further relevance, e.g. the Internet. However, given that growing quantities of progressively significant data are being processed in this cloud, these details are increasingly coming under the spotlight. Laws and compliance specifications contain growing numbers of provisions relating to cloud utilization. And, last but not least, media reports and daily tracking and influencing attempts are raising awareness of <strong>where personal data end up</strong> and may be analyzed.</p>
<p>In this process, the geographical location of data processing takes on great significance as it largely determines which legal regulations apply in addition to those applicable to the location of the data owner. This means that, for Swiss companies, it is simplest if their data are also located in Switzerland. It is, however, equally important to be aware of <strong>who the data are entrusted to</strong>, since service providers abroad are additionally subject to the law in their own countries. <a href="https://www.inside-channels.ch/de/post/man-kann-nicht-einfach-nur-das-papier-unterschreiben-20200903">During an interview</a>, lawyer Simon Schlauri cites the USA&#x27;s CLOUD Act as an example: if required to do so by the authorities, American cloud providers must under certain circumstances hand over data stored in a foreign territory.</p>
<img src="https://static.cloudscale.ch/img/news-swiss-hosting-logo-2f1ce7c8704c.png" alt="&quot;swiss hosting&quot; Logo"/>
<h3>&quot;swiss hosting&quot; offers guidance</h3>
<p>The <a href="https://www.swissmadesoftware.org/en/about/swiss-hosting.html">new &quot;swiss hosting&quot; label</a> provides greater clarity in this area. Bearers of this label are SaaS and cloud providers who are subject to Swiss law and who guarantee that no foreign organizations can access their customers&#x27; data. If the actual hosting of these data is outsourced, the provider of the hosting services must adhere to the same rules. For companies who need to handle their own customer data carefully, &quot;swiss hosting&quot; offers invaluable <strong>guidance on selecting an SaaS or cloud provider</strong> and helps to minimize compliance evaluation efforts.</p>
<blockquote>
<p>[…] that data remains entirely in Switzerland and cannot be accessed or claimed by a foreign organization or government, no matter whether directly or indirectly. This also applies to foreign companies within the Group.</p>
<p>Extract from the &quot;swiss hosting&quot; contract pertaining to access protection</p>
</blockquote>
<p>As a launch partner, cloudscale.ch has been involved in &quot;swiss hosting&quot; from the outset. As a purely Swiss cloud provider that only uses data centers located in Switzerland, this label applies to all our services. Our clear positioning also makes us the <strong>ideal hosting partner for SaaS and managed service providers</strong> who want to use the &quot;swiss hosting&quot; label to provide their own customers with the certainty that data are hosted in Switzerland.</p>
<h3>&quot;Swissness&quot; as the key idea</h3>
<p>As &quot;Swissness&quot; means so much more to us than simply the selection of data centers, cloudscale.ch has consistently relied on Switzerland as a location from the outset. Our customers appreciate the fact, for example, that with us they have a local partner who <strong>speaks their language</strong> and takes their concerns seriously. In addition, there are tangible technical advantages given that cloudscale.ch is not only outstandingly networked at an international level but also with Swiss network operators, which ensures low latency to and from local Internet users.</p>
<p>For us, &quot;Swiss quality&quot; also means taking one&#x27;s time and paying attention to details. We developed our cloud control panel with a great deal of love in order to ensure that <strong>using our cloud is as straightforward as possible</strong>. And last but not least, cooperation with our partners is also important to us here in Switzerland. Using our infrastructure, our partners are able to implement sophisticated setups with a &quot;Swiss finish&quot; based on customer specifications.</p>
<br/>
<p>While the international legal situation is difficult to understand and continues to give rise to numerous unanswered questions, <strong>&quot;swiss hosting&quot; provides a pragmatic solution</strong>. By keeping data in Switzerland, you make life easier for yourself and for your customers. Beyond issues of pure compliance, &quot;Swissness&quot; here at cloudscale.ch also makes us a local and approachable partner for our customers.</p>
<p>Committed to close partnerships,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[cloudscale.ch CLI 1.0 Now Available
]]></title>
          <link>https://www.cloudscale.ch/en/news/2020/09/15/cloudscale-cli-now-available</link>
          <pubDate>Tue, 15 Sep 2020 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2020/09/15/cloudscale-cli-now-available</guid>
          <description>
            <![CDATA[<p>Many professionals see &quot;Linux&quot; and &quot;command lines&quot; as a fixed combination and do not believe that any other tool achieves the efficiency of a shell. This is why we are delighted to announce version 1.0 of our &quot;cloudscale&quot; command line interface (CLI) application. The CLI application interacts with our API and makes it possible for you to manage your cloud resources without leaving the command line.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Simple and clearly laid out</h3>
<p>The <a href="https://github.com/cloudscale-ch/cloudscale-cli">cloudscale.ch CLI application</a> is written in Python and can be installed in a flash with <code>pip install cloudscale-cli</code>. Following installation, you will find the new command <code>cloudscale</code> that allows you to <strong>view, modify, create and remove cloud resources</strong>.</p>
<p>The CLI application is also <strong>suitable for use in scripts</strong>, in particular with the <code>--output json</code> option. You will now be able to elegantly deal with all the usage scenarios that you have wanted to automate for a while now, but where a powerful tool such as <a href="https://www.cloudscale.ch/en/news/2019/08/14/docker-machine-and-rancher">Rancher</a> or <a href="https://www.cloudscale.ch/en/news/2019/12/23/latest-features-with-terraform">Terraform</a> was not really suitable.</p>
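<p>A small sketch of this scripting approach; <code>SAMPLE</code> mimics, in abridged form and with invented values, the JSON that <code>cloudscale server list --output json</code> might return:</p>

```shell
# Sketch: consume CLI JSON output in a script with jq. SAMPLE stands in
# for the real CLI output (abridged, placeholder values); with the real
# CLI you would pipe `cloudscale server list --output json` into jq.
SAMPLE='[{"name": "web1", "status": "running"},
         {"name": "web2", "status": "stopped"}]'
printf '%s' "$SAMPLE" | jq -r '.[] | select(.status == "running") | .name'
# prints: web1
```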
<p>A list of <strong>all the parameters and the supported cloud resources</strong> can be found by executing <code>cloudscale --help</code>. Installation and configuration information and instructions are also available online in the <a href="https://cloudscale-ch.github.io/cloudscale-cli">project documentation</a> on GitHub.</p>
<h3>Sample applications</h3>
<p>The following examples show how <strong>simple and intuitive</strong> it is to manage your cloud resources with our CLI application. You can list your existing servers as follows, for example, and use the optional <code>--filter-tag</code> to additionally limit this to servers you have <a href="https://www.cloudscale.ch/en/news/2019/09/24/keeping-track-with-tags">allocated a certain tag</a> to.</p>
<pre><code class="language-plain">$ cloudscale server list --filter-tag project=gemini
NAME    STATUS    ZONE    TAGS            UUID
------  --------  ------  --------------  ------------------------------------
web4    running   lpg1    project=gemini  5d461dfa-a92a-4c7b-9199-c02ed7b6b570
web3    running   lpg1    project=gemini  54071101-8586-4af7-a5d1-25649fd6b2b2
</code></pre>
<p>An individual server and several identical servers can be created with the <code>cloudscale server create</code> command; you can use the <code>--count</code> option to indicate the number of servers required.</p>
<pre><code class="language-plain">$ cloudscale server create --flavor flex-2 --image debian-10 --ssh-key &quot;$(cat ~/.ssh/id_ed25519.pub)&quot; --tag project=gemini --count 3 --name &#x27;web{counter}&#x27; --wait
</code></pre>
<p>Please note that <code>{counter}</code> has been added to the name. This makes it possible to give all three servers different names: web1, web2 and web3.</p>
<p>The <code>--wait</code> option means that the command only completes after all the servers that are being created have reached the final <code>status=running</code> state.</p>
<p>If you add <code>--action</code> or <code>--delete</code> to the <code>list</code> function, you can stop, start, restart or delete several servers in a single process.</p>
<pre><code class="language-plain">$ cloudscale server list --filter-tag project=gemini --delete
NAME    STATUS    ZONE    TAGS            UUID
------  --------  ------  --------------  ------------------------------------
web4    running   lpg1    project=gemini  5d461dfa-a92a-4c7b-9199-c02ed7b6b570
web3    running   lpg1    project=gemini  54071101-8586-4af7-a5d1-25649fd6b2b2
Do you want to delete? [y/N]
</code></pre>
<p>The <code>--verbose</code> option adds additional information to the display.</p>
<h3>Good to know</h3>
<p>Our CLI application also has a convenient feature that allows a <strong>direct SSH connection to be established to your servers</strong>: <code>cloudscale server ssh web1</code>. This saves you the additional steps of first looking up the IP address and then laboriously assembling your SSH command by copying and pasting in the console.</p>
<p>Do you have any suggestions or modification requests? The code of our CLI application is <strong>open source and covered by the MIT license</strong>. We look forward to your feedback.</p>
<br/>
<p>Usability has been one of our top priorities at cloudscale.ch from the very beginning. While the aim was always for our cloud control panel to be as simple and intuitive to use as possible, the API and integration into various DevOps tools provided maximum efficiency. With the new CLI application, creating individual servers is as streamlined as managing them afterwards, namely <strong>on the command line</strong>.</p>
<p>Focusing on the essential,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Disabling TLS 1.0 and 1.1
]]></title>
          <link>https://www.cloudscale.ch/en/news/2020/07/31/disabling-tls-1_0-1_1</link>
          <pubDate>Fri, 31 Jul 2020 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2020/07/31/disabling-tls-1_0-1_1</guid>
          <description>
            <![CDATA[<p>Encryption is standard on the Internet today with almost all websites and services using &quot;HTTPS&quot; and therefore TLS for data transmission. This umbrella term covers numerous techniques and algorithms that are constantly being further developed. It goes without saying that the cloudscale.ch systems support today&#x27;s technologies in order to provide the best possible protection for your data. Consequently, we are going to disable the now outdated TLS versions 1.0 and 1.1 on our systems from 2020-08-11.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Where and when we will be disabling TLS 1.0/1.1</h3>
<p>TLS versions 1.0 and 1.1 will be disabled on all cloudscale.ch <strong>systems that are accessible from the Internet</strong>. This includes the following systems, in particular, that you or your end customers may use:</p>
<ul>
<li>Cloud control panel and API for management of your cloud resources</li>
<li>S3-compatible Object Storage</li>
</ul>
<p>The changeover will take place in two stages: on 2020-08-11, we will disable these TLS versions on our Object Storage at our &quot;LPG&quot; site. In the following week, on 2020-08-18, the same changeover will take place on the Object Storage at the &quot;RMA&quot; location, and for our cloud control panel and API. The changeovers will take place <strong>without interruption to our services</strong>.</p>
<h3>No disruption expected</h3>
<p>TLS (Transport Layer Security) was specified as the <strong>successor to SSL encryption in 1999</strong> and is still occasionally called &quot;SSL&quot; in everyday use today. Although certain improvements were introduced with TLS version 1.1 in 2006, there were still fundamental limitations that can only be described as outdated from today&#x27;s security perspective.</p>
<p>TLS 1.2 was released as early as 2008 and is today supported by all modern applications, i.e. by server software as well as by the associated clients, such as web browsers. For this reason, <strong>we do not foresee any problems for our customers</strong>. During and after the changeover, access to our systems will continue to operate as expected with TLS 1.2 or the even more recent TLS 1.3.</p>
<p>If in doubt, we recommend that – despite the fact that TLS 1.2 is widely supported – you <strong>check the tools you use</strong>, in particular in the case of older or less common clients that require access to our API in the event of e.g. a failover scenario.</p>
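<p>One way to perform such a check – assuming OpenSSL is installed on your machine and was built with the respective protocol versions (the hostname shown is just an example) – is to attempt a handshake restricted to a single TLS version:</p>
<pre><code class="language-sh"># Should succeed: TLS 1.2 remains supported
openssl s_client -connect api.cloudscale.ch:443 -tls1_2 &lt;/dev/null

# Will no longer succeed once TLS 1.0 has been disabled on the endpoint
openssl s_client -connect api.cloudscale.ch:443 -tls1 &lt;/dev/null
</code></pre>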
<h3>Old TLS versions barely relevant today</h3>
<p>According to an evaluation on our own systems, <strong>TLS 1.0 and 1.1 are barely used for access</strong> today and, where such access does occur, it is in the per-mille range. These figures are not surprising given the consistent support for TLS 1.2 and, in part, already 1.3 in all modern client applications. The most common Internet browsers are even dropping support for the old TLS versions this year.</p>
<br/>
<p>Disabling outdated security protocols that are no longer relevant in practice is an unremarkable and logical step for us <strong>in the interest of general IT security</strong>. Should you nonetheless encounter unexpected problems in this context, our support team is here to help.</p>
<p>Kind regards,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Network Automation with ONIE, ZTP and Ansible
]]></title>
          <link>https://www.cloudscale.ch/en/news/2020/07/23/network-automation-onie-ztp-ansible</link>
          <pubDate>Thu, 23 Jul 2020 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2020/07/23/network-automation-onie-ztp-ansible</guid>
          <description>
            <![CDATA[<p>Network engineering and system engineering often seem to be a long way apart, which is also emphasized by the completely different operating concepts of the respective devices. Our new switching infrastructure has shown us that this does not have to be the case. Thanks in no small part to the open source approach of Cumulus Linux, the two worlds are converging and creating synergies with existing tools and processes. In this article, we will take a look at selected aspects and show that the migration of our network has resulted in more than just faster switch ports.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Nothing new (at first sight)</h3>
<p>Cumulus Linux, the network operating system based on Debian, makes it easy for experienced network engineers to get started. The important settings are accessible through a CLI, and this interface has been aligned with CLIs from major manufacturers, which <strong>enables network specialists to find their way around the new environment quickly</strong> and to build on knowledge they have acquired elsewhere. Even useful features that are only gradually being adopted in the industry are available by default on network devices running Cumulus Linux; it is, for example, possible to apply – and undo – a block of commands in one go.</p>
<p>However, the fact that Cumulus Linux is based on Debian opens up additional, powerful possibilities. Logging in does not take you into the familiar but limited CLI, but directly into a regular Linux shell. The CLI is just one command away, but above all, thanks to full root access, you can also <strong>use all the other tools</strong> that prove indispensable in a system engineer&#x27;s everyday life: from utilities such as <code>htop</code> and <code>watch</code> to config management (e.g. Ansible) and monitoring via Zabbix agent.</p>
<h3>Efficiency through config management</h3>
<p>The ability to administer network devices using a config management system is a key feature that many Cumulus users will not want to do without. Having relied on Ansible for the management of the cloudscale.ch servers for quite some time, Cumulus Linux now allows us to manage network devices through Ansible as well. In its simplest form, Ansible acts as a client of the CLI named &quot;NCLU&quot; (Network Command Line Utility): using familiar commands, numerous switches and routers can be <strong>consistently configured without manual interaction</strong>.</p>
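<p>A minimal sketch of such a task, using Ansible&#x27;s <code>nclu</code> module (the interface and settings shown are illustrative examples, not our actual configuration):</p>
<pre><code class="language-yaml">- name: Configure an uplink port via NCLU
  nclu:
    commands:
      - add interface swp1 mtu 9216
      - add interface swp1 alias uplink-to-spine01
    commit: true
</code></pre>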
<p>Tapping the full potential, however, requires the use of Jinja templates. Instead of long sequences of individual commands, in which the critical variations are easily overlooked, <strong>templates are maintained in relatively short and clear files</strong>. Thanks to the use of loops and conditionals, extensive and complex configurations can be represented in a better structured way. This greatly reduces the risk of making careless mistakes such as inconsistent MTU or VLAN configurations.</p>
<p>The following excerpts illustrate how we populate <code>/etc/network/interfaces</code> on our switches using the Ansible template module.</p>
<p>Ansible Inventory Variables:</p>
<pre><code class="language-yaml">vrfs:
  mgmt:
    description: VRF Mgmt
    ipv4_address: 127.0.0.1/8
  quarantine:
    description: VRF Quarantine (for test purposes)
    ipv4_address: &#x27;{{ &quot;10.0.0.0/24&quot; | ipaddr(device_id) | ipaddr(&quot;address&quot;) }}/32&#x27;
  private:
    description: VRF Private (networks without a default gateway)
  public:
    description: VRF Public (networks with a default gateway)
    ipv4_address: &#x27;{{ &quot;203.0.113.0/24&quot; | ipaddr(device_id) | ipaddr(&quot;address&quot;) }}/32&#x27;
    ipv6_address: &#x27;{{ &quot;2001:db8:bb::/64&quot; | ipaddr(device_id) | ipaddr(&quot;address&quot;) }}/128&#x27;
  dci:
    description: VRF DCI (networks on data center interconnect)
    ipv4_address: &#x27;{{ &quot;172.16.16.0/24&quot; | ipaddr(device_id) | ipaddr(&quot;address&quot;) }}/32&#x27;
</code></pre>
<p>Jinja2 Template:</p>
<pre><code class="language-jinja2">{% for name, vrf in vrfs.items() if name != &quot;default&quot; -%}
# {{ vrf.description }}
auto {{ name }}
iface {{ name }}
    {% if vrf.ipv6_address is defined -%}
    address {{ vrf.ipv6_address }}
    {% endif -%}
    {% if vrf.ipv4_address is defined -%}
    address {{ vrf.ipv4_address }}
    {% endif -%}
    vrf-table auto

{% endfor -%}
</code></pre>
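<p>With the inventory above and a hypothetical <code>device_id</code> of <code>1</code>, the template renders stanzas along these lines:</p>
<pre><code class="language-sh"># VRF Public (networks with a default gateway)
auto public
iface public
    address 2001:db8:bb::1/128
    address 203.0.113.1/32
    vrf-table auto
</code></pre>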
<h3>Fully automated provisioning</h3>
<p>That said, ongoing maintenance of the configuration is only half the battle. We have decided to carry out all major upgrades of our switches in the form of a complete reinstall. This ensures a reproducible state, and we can test the upgrade process and other changes as often as we like in the lab beforehand. Cumulus Networks has developed &quot;ONIE&quot; (Open Network Install Environment) for this purpose. In a similar manner to the PXE environment known from servers, this open system allows booting and subsequent installation of the operating system via the network. Thanks to &quot;ZTP&quot; (Zero-Touch Provisioning), <strong>any desired settings can be defined in advance</strong>, so that the provisioning of the newly installed system can then be finalized by Ansible without the need for a manual intermediate step.</p>
<p>The following excerpt from our ZTP configuration automates the steps that are typically needed after reinstallation of a switch.</p>
<p>Excerpt from our ztp.sh:</p>
<pre><code class="language-sh">[...]

# In order to start switchd, you need to install a valid license
echo &#x27;user@example.com|3DSpMBACDihILepwdy4/5Ecd34jlAg4h+FiE/9zZawtujnk3Fw&#x27; &gt; /home/cumulus/license.txt
/usr/cumulus/bin/cl-license -i /home/cumulus/license.txt
systemctl restart switchd.service

# Move the eth0 (management) interface to a separate management VRF
/usr/bin/net add vrf mgmt &amp;&amp; /usr/bin/net commit

# Drop SSH keys in order to log in without using a password
{% for key in ssh_keys %}
echo &quot;{{ key }}&quot; &gt;&gt; /home/cumulus/.ssh/authorized_keys
{% endfor %}

# The following line is required somewhere in the script file for execution to occur
# CUMULUS-AUTOPROVISIONING

[...]
</code></pre>
<p>For us, reinstalling a switch with Cumulus Linux takes less than 10 minutes. This allows us to run through configuration changes, new versions, or upgrade paths as often as needed. Once we are ready to upgrade the production devices, we simply apply the same process that has been tested many times, virtually <strong>eliminating the risk of typos and inconsistencies</strong>. Incidentally, ONIE needs to be supported by the hardware in question – which Cumulus Networks does not manufacture itself. The fact that virtually all major network manufacturers have integrated ONIE in a very short time demonstrates just how much this elegant solution was missing before.</p>
<br/>
<p>Having its roots in Debian and open source, Cumulus Linux fits in well with our philosophy. At the same time, the &quot;open source DNA&quot; also applies in the opposite direction. Cumulus Networks has, for example, <a href="https://cumulusnetworks.com/blog/vrf-for-linux/">contributed</a> the <strong>implementation of VRF (Virtual Routing and Forwarding) to the Linux kernel</strong>. This has moved the &quot;server&quot; and &quot;networking&quot; fields even closer together.</p>
<p>Open and efficient,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Information About Change of DNS Resolvers
]]></title>
          <link>https://www.cloudscale.ch/en/news/2020/07/03/change-of-dns-resolvers</link>
          <pubDate>Fri, 03 Jul 2020 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2020/07/03/change-of-dns-resolvers</guid>
          <description>
            <![CDATA[<p>As part of the cloud infrastructure, cloudscale.ch operates DNS resolvers that can be used by customer systems for name resolution. While this service has proven itself and will of course continue to be available in the future, we are reengineering the relevant systems at the cloud location in Rümlang. In this article we will inform you about which items to check and if necessary adjust with regard to your servers in the &quot;RMA&quot; region. At the new Lupfig location, we designed the DNS systems according to the new concept right from the start, which means there is no need to take any action for your servers in the &quot;LPG&quot; region.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Change of DNS resolvers in &quot;RMA&quot; region</h3>
<p>Every server should be kept up-to-date – this also applies to our DNS systems. A further aim is to make our name server setup at the Rümlang location more robust and align it with the <a href="https://www.cloudscale.ch/en/news/2019/11/06/geo-redundancy-with-two-cloud-locations">new Lupfig location</a>, where we were able to implement these optimization measures right from the start. To this end, we are setting up the relevant servers at the &quot;RMA&quot; location from scratch, taking a step-by-step approach to <strong>avoid any interruption in service for our customers</strong>.</p>
<p>For your servers in the &quot;RMA&quot; region, the following set of IPv4 and IPv6 addresses will be available as DNS resolvers with immediate effect:</p>
<ul>
<li>IPv4: <code>5.102.144.101</code> and <code>5.102.144.102</code></li>
<li>IPv6: <code>2a06:c01:f::101</code> and <code>2a06:c01:f::102</code></li>
</ul>
<p>These DNS resolvers are <strong>already available</strong> and can be fully used. This DNS configuration is also already assigned via DHCP to servers that request or renew an IP address from our DHCP servers (except for private networks where you have explicitly configured a different set of DNS resolvers <a href="https://www.cloudscale.ch/en/api/v1#subnets-create">via our API</a>).</p>
<p>Please note: the IP address <code>5.102.148.102</code> will be <strong>taken out of service as a DNS resolver as of 2020-08-15</strong>. Please ensure that your servers no longer use this IP for name resolution.</p>
<p>The above-mentioned change-over to the new set of DNS resolvers applies only to servers in the &quot;RMA&quot; (Rümlang) region. <strong>Servers in the &quot;LPG&quot; (Lupfig) region</strong> use a different set of DNS resolvers, which is <strong>not affected by this change</strong>.</p>
<h3>What you need to check with regard to your servers</h3>
<p>By default, new servers at cloudscale.ch use <strong>DHCP for their individual network configuration</strong>. Servers where you have left this setting unchanged do not, therefore, need any manual configuration changes. However, we recommend verifying that your servers have actually included at least two of these IPs in their DNS configuration.</p>
<p>If you have <strong>manually configured the DNS settings</strong> of your servers, please change them to the set of DNS resolvers mentioned above. Make sure that the IP <code>5.102.148.102</code> is no longer used as a resolver, as we will take this IP out of service by 2020-08-15. It goes without saying that you are also free to use your own DNS resolvers or a third-party DNS service instead of our systems.</p>
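<p>On a typical Linux server, the resolvers currently in use can be checked quickly (this assumes a classic <code>/etc/resolv.conf</code> setup without a local caching resolver in between). After the change, the output should list only the new set:</p>
<pre><code class="language-sh">$ grep &#x27;^nameserver&#x27; /etc/resolv.conf
nameserver 5.102.144.101
nameserver 5.102.144.102
</code></pre>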
<p>No modifications are necessary for servers <strong>running a recursive resolver</strong> themselves (e.g. firewall systems based on <a href="https://www.cloudscale.ch/en/news/2019/02/27/firewall-distribution-at-a-mouse-click">OPNsense</a>).</p>
<h3>Further information</h3>
<p>In certain firewall configurations, the applicable <strong>firewall rules also need to be adapted</strong>, e.g. if DNS replies are not automatically accepted after a DNS query has been sent (&quot;stateless&quot; firewalls).</p>
<p>Since the IP address <code>5.102.144.102</code> will continue to be available as a DNS resolver, we do not expect name resolution to be interrupted in a standard scenario even without the necessary adjustments. Please note, however, that in a case of this kind, <strong>DNS lookups will be delayed</strong> if an incorrect DNS server is queried first, and another server is only attempted after some timeout. Examples of delayed processes include SSH (typical symptom: longer login times) and other services if they resolve IP addresses to hostnames for security or logging purposes.</p>
<br/>
<p>As mentioned, our redundant DNS resolvers are available with immediate effect via the new set of IP addresses. For optimal operation of the name resolution, please change the DNS setting of your servers in the &quot;RMA&quot; region as described above by 2020-08-15, or <strong>verify that this change has been applied automatically</strong>. Servers in the &quot;LPG&quot; region are not affected by this change and do not require any action. If necessary, our support team will of course be happy to answer your questions.</p>
<p>Kind regards,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[cloud-init – Server Initialization the Cloud Way
]]></title>
          <link>https://www.cloudscale.ch/en/news/2020/06/23/initialize-servers-with-cloud-init</link>
          <pubDate>Tue, 23 Jun 2020 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2020/06/23/initialize-servers-with-cloud-init</guid>
          <description>
            <![CDATA[<p>Although you do not usually click through an OS installer manually to set up a new cloud server, each server still needs a certain degree of individual configuration. This is where cloud-init comes into play as a versatile package that takes care of all the basic settings required to get started with a new server. In addition, it allows you to perfectly integrate the server into your specific cloud environment and connect it to your own tools right from the start.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-cloud-init-en.png"/><h3>Ready to use immediately thanks to cloud-init</h3>
<p>In order to be able to start a new cloud server within seconds, the major Linux distributions provide images that essentially contain a snapshot of a hard disk with the fully installed operating system. A new server, which is created as a clone from such an image, is thus <strong>almost ready to use</strong>. Some details, however, such as the hostname or the authorized SSH keys, are not present in this generic image and still need to be set individually.</p>
<p>The <code>cloud-init</code> package included in many images is activated at system startup and <strong>manages these settings in a fully automated way</strong>. When detecting that the specific server is starting for the first time, the full-blown process is run; in addition to the name of the server and access credentials, cloud-init also takes care of creating new SSH host keys, among other things. At cloudscale.ch, the public part of these keys is also output to the serial console, which means that we can display the fingerprints in the cloud control panel, allowing you to verify that the connection is trusted right from your first login. On subsequent system boots, cloud-init can, for example, resize the file system for you if you have scaled up your server&#x27;s virtual volume in the meantime.</p>
<h3>Hub for the config: the metadata server</h3>
<p>Details such as hostname and SSH key, which you enter when launching a server, are stored on our metadata server. From here cloud-init can retrieve this data to properly configure your server. One way to get to the data is via what is known as the &quot;Magic IP&quot;: the server is assigned a special route via DHCP, and cloud-init can then retrieve its config from the URL <code>http://169.254.169.254</code>. We now also make this <strong>config available via &quot;Config Drive&quot;</strong>, with each new server being assigned a virtual CD-ROM drive (e.g. <code>/dev/sr0</code>) that contains its individual configuration.</p>
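<p>From within a running server, this data can be fetched directly; the paths shown assume the OpenStack-compatible layout commonly used with the &quot;Magic IP&quot; and Config Drive – adjust them if your image exposes a different structure:</p>
<pre><code class="language-sh"># Via the &quot;Magic IP&quot;
curl -s http://169.254.169.254/openstack/latest/meta_data.json

# Or from the Config Drive
mount -o ro /dev/sr0 /mnt &amp;&amp; cat /mnt/openstack/latest/meta_data.json
</code></pre>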
<p>To get the maximum benefit from cloud-init, you can use this tool for your own setup tasks as well. When launching a server, you can add &quot;User Data&quot; to <strong>specify a broad range of settings</strong> including any desired commands that will be executed on the new server without further interaction (see also the <a href="https://cloudinit.readthedocs.io/en/latest/topics/examples.html">cloud-init documentation</a>). In this way, your server installs the packages and patches of your choice and integrates itself into your config management or monitoring, before you even log in for the first time.</p>
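<p>A minimal example of such &quot;User Data&quot; in cloud-config format (the package and command are purely illustrative):</p>
<pre><code class="language-yaml">#cloud-config
package_update: true
packages:
  - htop
runcmd:
  - [ sh, -c, &#x27;echo &quot;provisioned at $(date)&quot; &gt;&gt; /root/provisioned.log&#x27; ]
</code></pre>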
<img src="https://static.cloudscale.ch/img/news-cloud-init-en-989da8a92e68.png" alt="Server configuration with cloud-init"/>
<p>It goes without saying that you can also <strong>read and use the data from the metadata server with your own tools</strong>, e.g. get the UUID of your server in order to perform automated actions via our API. If you configure the network options statically, the data pertaining to your server is now still available locally thanks to the new Config Drive. Alternatively, you can also find the information in <code>/run/cloud-init/instance-data.json</code>, where cloud-init stores a copy of the config it has read.</p>
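<p>For example, the standardized keys in this file can be queried with <code>jq</code>, assuming it is installed (the exact keys available depend on your cloud-init version):</p>
<pre><code class="language-sh"># Read the server&#x27;s UUID from the local copy of the config
jq -r &#x27;.v1.instance_id&#x27; /run/cloud-init/instance-data.json
</code></pre>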
<h3>Good to know</h3>
<p>As the version and capabilities of the <code>cloud-init</code> package may differ considerably between various Linux distributions, be sure to check for each specific case which of the many features are supported or can be activated with optional modules. When adding User Data, also bear in mind that the <strong>metadata can potentially be read by any user or tool</strong> on your server.</p>
<p>Incidentally, although widely used, cloud-init is not the only project to automate the setup of new servers; one alternative is <a href="https://github.com/coreos/ignition">Ignition</a>, which is used e.g. in Flatcar Container Linux. Ignition expects its config to be in JSON format, but when launching a server with Flatcar at cloudscale.ch, you can also choose to specify the User Data in the form of a <strong>YAML-formatted &quot;cloud-config&quot;</strong> as you would for cloud-init.</p>
<br/>
<p>Irrespective of whether you keep a copy/paste template for use in the control panel or start servers through the API, cloud-init and Ignition allow you to automate not just the creation of servers but also many setup steps that you used to perform manually. And even during normal operation, you – or your scripts – can <strong>access the relevant metadata of your servers</strong> at any time.</p>
<p>For a great start,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Cumulus Linux: A Switch That Paid Off
]]></title>
          <link>https://www.cloudscale.ch/en/news/2020/06/04/cumulus-linux-switch-paid-off</link>
          <pubDate>Thu, 04 Jun 2020 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2020/06/04/cumulus-linux-switch-paid-off</guid>
          <description>
            <![CDATA[<p>Based on a common definition, IaaS offerings consist of compute, storage, and network. While the first two areas often receive more attention, we will now devote two articles to our new switching infrastructure based on Cumulus Linux. In the first part, we will look at why the network is so important at cloudscale.ch, what the main advantages were that led us to the solution we use today, and how the new switching fabric affects our infrastructure as a whole.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-leaf-spine-diagram.png"/><h3>The switching infrastructure as a key element</h3>
<p>In everyday IT life, the network often receives little attention: once the systems are wired, the focus tends to shift to computing power and storage space for years to come. At cloudscale.ch, on the other hand, the topic is constantly present. Not only does the connection of our cloud servers to the Internet and thus the external availability of your services depend on the network, but with the trend towards microservices and cluster setups, <strong>internal networking between cloud servers is also vital</strong> for the performance and reliability of the overall system. And finally, our Ceph-based storage cluster can only fully leverage its advantages with a top network infrastructure.</p>
<p>In addition to our general growth and the increasing demand for switch ports, the opening of our <a href="https://www.cloudscale.ch/en/news/2019/11/06/geo-redundancy-with-two-cloud-locations">second cloud site in Lupfig</a> also had an impact on our choice. While the two locations should be able to operate independently of each other and thus enable geo-redundant setups, connections between services at both locations should, at the same time, be as direct as possible. One of the things we liked about Cumulus Linux was that most of our requirements could be <strong>implemented using open standards</strong> and without the need for proprietary protocols from any particular vendor.</p>
<h3>How the solution with Cumulus Linux stands out</h3>
<p>The &quot;Cumulus Linux&quot; distribution maintained by <a href="https://cumulusnetworks.com">Cumulus Networks</a> is a novelty among network operating systems. Unlike the systems of traditional network vendors, it is based on Debian GNU/Linux and is <strong>for the most part open source</strong>. In order to ensure the stability and security required in an enterprise setting, certain versions are maintained as &quot;ESR&quot; (Extended Support Release) versions and provided with security updates over a long period of time – a strategy known from Ubuntu&#x27;s LTS versions that is also being adopted in an increasing number of other software projects. One of the components of Cumulus Linux is FRRouting, which is maintained under the umbrella of the Linux Foundation and which we are already <a href="https://www.cloudscale.ch/en/news/2017/11/27/new-border-routers-with-frr">using successfully on our border routers</a>.</p>
<p>We spent considerably more time implementing our new switching infrastructure than we had originally planned. Over the course of several releases, we gained experience with Cumulus Linux in our lab and fed our insights back into the network design in many iterations. We also benefited from the <strong>community that has formed around Cumulus Linux</strong>. There is, for example, a dedicated Slack channel where you can pick up tips and tricks from other Cumulus users; if an issue cannot be resolved this way in a timely manner, Cumulus&#x27;s own engineers often join the discussion and actively offer their help. Working directly with Cumulus Networks has also proved to be open and productive. Where we actually found bugs, they were carefully analyzed and fixed – including patches that often found their way back &quot;upstream&quot; into the individual open source projects.</p>
<h3>Key technical specs and tangible advantages</h3>
<p>Cumulus Linux is a network operating system that can be used on devices from a wide variety of manufacturers. The &quot;Cumulus Express&quot; combination that we chose includes hardware from <a href="https://www.edge-core.com">Edgecore</a>. Our switches feature a Broadcom Trident 3 ASIC that supports <strong>line-rate switching on all 32 ports at 100 Gbps</strong>, which provides a total of 3.2 Tbps. In addition, the &quot;breakout&quot; option means that any of the 100 Gbps ports can be split into 4 logical ports with 10 or 25 Gbps each, which provides even more flexibility with regard to the systems that can be connected.</p>
<img src="https://static.cloudscale.ch/img/news-leaf-spine-diagram-34ba13dbfad1.png" alt="Network Diagram per Cloud Zone (Simplified)"/>
<p>We have built our network following the leaf-spine concept, whereby each switch is configured in a redundant manner. All connections are redundant as well: a leaf pair (two &quot;top-of-rack&quot; switches) is connected to the two spines at a total of 800 Gbps. In addition to the multi-100-Gbps networking of our backbone, the hardware used also allows a <strong>gradual transition to a multi-25-Gbps connection</strong> for individual physical servers. This not only benefits the private networks between our customers&#x27; virtual servers, but also connectivity to the storage cluster. Finally, the dedicated connection between our two cloud locations is also well dimensioned at multi-10-Gbps via CWDM on route-redundant dark fibers.</p>
<br/>
<p>At cloudscale.ch we are aware that the network is the basis for all other features of the cloud. Accordingly, we place a strong emphasis on the performance, reliability, and support of all components used. However, the <strong>atypical approach of Cumulus Linux offers even more advantages</strong>, which we will describe in a separate article in order to give you some insights into e.g. how we at cloudscale.ch can benefit from synergies between network engineering and system engineering.</p>
<p>Lightning fast,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Mastering the Private Network with Managed DHCP
]]></title>
          <link>https://www.cloudscale.ch/en/news/2020/04/03/mastering-the-private-network-with-managed-dhcp</link>
          <pubDate>Fri, 03 Apr 2020 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2020/04/03/mastering-the-private-network-with-managed-dhcp</guid>
          <description>
            <![CDATA[<p>In order to cleanly isolate your servers from the Internet and separate them into defined zones, cloudscale.ch already supports multiple private networks. You now have even more flexibility with the ability to define IP ranges according to your own scheme for your private networks and to make your work even easier by using additional DHCP options.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Define your own subnets</h3>
<p>As creating order is half the battle, a large number of system engineers have defined not only a naming scheme but also an address scheme to optimally support them in their daily work with their systems. It has already been helpful that cloudscale.ch allows any IP address to be used <a href="https://www.cloudscale.ch/en/news/2019/10/25/segmentation-with-multiple-private-networks">in private networks</a>, and you can now also use our DHCP systems to assign <strong>addresses from your self-defined subnet</strong> to your servers.</p>
<p>To define your own subnet, first create a private network via API specifying <code>&quot;auto_create_ipv4_subnet&quot;: false</code>; this will result in a fully functional layer 2 private network. Then, in this network, create your subnet and enter the IP range of your choice (at least a <code>/24</code>) as the <code>cidr</code> value. For each subnet you can <strong>also define the gateway and DNS servers</strong> that our DHCP systems should assign to your servers. An example:</p>
<pre><code class="language-sh">$ curl -i -X POST -H &quot;$AUTH_HEADER&quot; -H &quot;Content-Type: application/json&quot; --data &#x27;{&quot;cidr&quot;: &quot;192.168.1.0/24&quot;, &quot;network&quot;: &quot;61fa...10ed&quot;, &quot;gateway_address&quot;: &quot;192.168.1.1&quot;, &quot;dns_servers&quot;: [&quot;192.168.1.1&quot;, &quot;192.168.1.2&quot;]}&#x27; https://api.cloudscale.ch/v1/subnets
</code></pre>
<h3>With or without DHCP – IPs as needed</h3>
<p>When launching a new server with a private network, all options are now available to you. If a subnet is defined in the private network, the server is by default assigned a <strong>randomly selected address from the DHCP range</strong> of the subnet, along with the gateway (optional) and the DNS servers. You can, however, also specify a fixed IP address if you wish. It goes without saying that it is also possible to completely disable DHCP for a server if you prefer.</p>
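<p>For instance, based on the interface specification in our API, an interface in a private network with DHCP completely disabled can be requested by passing an empty list of addresses (server details abbreviated as in the examples below):</p>
<pre><code class="language-sh">$ curl -i -X POST -H &quot;$AUTH_HEADER&quot; -H &quot;Content-Type: application/json&quot; --data &#x27;{&quot;name&quot;: &quot;node1&quot;, ..., &quot;interfaces&quot;: [{&quot;network&quot;: &quot;61fa...10ed&quot;, &quot;addresses&quot;: []}]}&#x27; https://api.cloudscale.ch/v1/servers
</code></pre>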
<p>As an example, the following API call creates a server that has a <code>public</code> interface as well as one in the private network, where the server should be assigned a <strong>fixed IP address via DHCP</strong>:</p>
<pre><code class="language-sh">$ curl -i -X POST -H &quot;$AUTH_HEADER&quot; -H &quot;Content-Type: application/json&quot; --data &#x27;{&quot;name&quot;: &quot;firewall&quot;, ..., &quot;interfaces&quot;: [{&quot;network&quot;: &quot;public&quot;}, {&quot;addresses&quot;: [{&quot;subnet&quot;: &quot;5b71...2912&quot;, &quot;address&quot;: &quot;192.168.1.21&quot;}]}]}&#x27; https://api.cloudscale.ch/v1/servers
</code></pre>
<p>It goes without saying that you have the same options when <strong>adding interfaces to an existing server later on</strong>. For a complete overview of possible interface definitions, please refer to our <a href="https://www.cloudscale.ch/en/api/v1#interfaces-attribute-specification">API documentation</a>.</p>
<p><strong>DHCP is always enabled</strong> on the <code>public</code> interface (if present) and cannot be configured any further. You are, however, still free to configure all relevant settings on your server statically.</p>
<p>Please note that DHCP must be enabled on at least one interface (<code>public</code> or in at least one private network). In this way, the server learns a route to our metadata server to retrieve its configuration for <code>cloud-init</code>.</p>
<h3>Consistency and efficiency</h3>
<p>The advantage is obvious: when creating servers, you can also <strong>structure the private network appropriately</strong>. For a <a href="https://www.cloudscale.ch/en/news/2019/02/27/firewall-distribution-at-a-mouse-click">central firewall with OPNsense</a>, for example, you can directly define a fixed IP address. The other servers to be located behind this firewall are then assigned the firewall IP via DHCP both as a gateway and as a DNS resolver.</p>
<p>It is also easier to maintain your own DNS resolver because you now <strong>know the internal IP addresses of your servers from the beginning</strong> and can record them directly in the DNS. And if you are experimenting with current Kubernetes distributions, new nodes can automatically integrate themselves into the cluster thanks to a reverse DNS lookup for their own internal IP address.</p>
<br/>
<p>Using private networks, you have long been able to &quot;wire&quot; your virtual servers together at cloudscale.ch in the same way as with physical servers. Thanks to configurable IP addresses and subnets as well as DHCP options, you can now <strong>also model the logical level</strong> to enable your cloud setups to integrate ideally into your overall infrastructure.</p>
<p>The right address, right from the start!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[SARS-CoV-2 / COVID-19: Customer Information
]]></title>
          <link>https://www.cloudscale.ch/en/news/2020/03/15/sars-cov-2-covid-19-customer-information</link>
          <pubDate>Sun, 15 Mar 2020 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2020/03/15/sars-cov-2-covid-19-customer-information</guid>
          <description>
            <![CDATA[<p>In the light of the rapid spread of coronavirus (COVID-19) throughout the world, we feel obliged to inform our customers about its (non-)impact on our services. In addition, we would like to show you how we will maintain operations and how we intend to play our part in delaying the global spread of the virus.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Why we do not expect any impact</h3>
<p>cloudscale.ch is a provider of infrastructure as a service (IaaS) in a self-service model, i.e. our customers launch, scale, extend and delete virtual servers and use services at their own discretion, while we provide the necessary infrastructure. Given the nature of our services, <strong>physical presence is only necessary in very few cases</strong>.</p>
<p>We consider our office location – not least for reasons of &quot;business continuity&quot; – merely as a kind of Internet Cafe, which we can do without in an emergency situation. Our employees have therefore been able to stay at home and work from their home offices more often in recent days.</p>
<p>Last but not least, thanks to redundancy, our entire infrastructure is designed to ensure that your services are always available. <strong>There is also sufficient spare capacity available for further growth in customer demand.</strong> This way, we avoid a hectic rush and can carefully plan any necessary work.</p>
<h3>How we want to play our part in the fight</h3>
<p>The top priority with regard to SARS-CoV-2 is currently to delay its spread and thus ease the burden on the health care system worldwide. For this reason, we have introduced the following rules, effective immediately:</p>
<ol>
<li><strong>All our employees work from their home offices wherever possible.</strong> As already mentioned, the nature of our service does not require us to work at the office.</li>
<li>We ask our employees and customers to <strong>refrain from in-person meetings</strong> and to hold meetings online or over the phone instead.</li>
<li>Our employees are encouraged <strong>to stay at home whenever possible</strong> and to follow the Self-Quarantine Manifesto at <a href="https://staythefuckhome.com">staythefuckhome.com</a>.</li>
</ol>
<br/>
<p>Please do not hesitate to contact us if you have any questions or concerns. We would like to take this opportunity to thank all those who are involved in the fight for human life and against the spread of the virus, and our thoughts are with all those affected and their families.</p>
<p>Stay safe!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Entropy and Random Numbers
]]></title>
          <link>https://www.cloudscale.ch/en/news/2020/03/09/entropy-random-numbers</link>
          <pubDate>Mon, 09 Mar 2020 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2020/03/09/entropy-random-numbers</guid>
          <description>
            <![CDATA[<p>Even if it does not seem intuitively logical, &quot;randomness&quot; plays a central role in today&#x27;s IT, especially in the area of security. The major strength of computers, however, lies in the complete opposite, namely in exact and reproducible calculations. There are, however, a number of special techniques available to generate good randomness – also at cloudscale.ch.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Why randomness?</h3>
<p>The appeal of many computer games lies in the fact that the computer&#x27;s next move is not predictable but apparently random. Apart from this entertaining aspect of &quot;randomness&quot;, the <strong>security of your data and systems</strong> also depends on it: data encryption – e.g. with HTTPS or SSH – is based on mathematical methods that rely on the fact that a potential attacker cannot guess the key or derive it from other data.</p>
<p>Using special algorithms, a so-called &quot;pseudo-random number generator&quot; (PRNG) calculates the necessary random numbers from <strong>input values that are as unpredictable as possible</strong>. The PRNG in the Linux kernel obtains this entropy (&quot;disorder&quot;) from various sources such as mouse movements and network traffic. The random numbers are output, for example, via <code>/dev/urandom</code> and <code>/dev/random</code>.</p>
<h3>Additional entropy sources</h3>
<p>A number of possible entropy sources (such as mouse movements) are obviously not available in virtualized cloud servers. Especially during the initial boot, it can take a while to collect enough entropy from the few available sources to initialize the PRNG. For this reason, servers at cloudscale.ch can now also use the <code>rdrand</code> instruction, a feature of many modern CPUs for generating random numbers. Also newly available is the virtio device <code>/dev/hwrng</code>, which provides random numbers generated on our physical compute hosts.</p>
<p>Both of these new entropy sources are independent of your server&#x27;s ability to collect enough entropy on its own, and can help to <strong>initialize the server&#x27;s PRNG more quickly</strong>. However, whether the server actually detects and uses <code>rdrand</code> and <code>/dev/hwrng</code> depends on the Linux distribution and kernel you are using; if necessary, check the <code>CONFIG_RANDOM_TRUST_CPU</code> and <code>CONFIG_HW_RANDOM_VIRTIO</code> options of your kernel, e.g. in the <code>/boot/config-*</code> file (depending on the distribution).</p>
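<p>A quick way to check both options (a sketch; the location of the kernel config varies by distribution, so two common locations are tried):</p>

```shell
# Print the relevant options from the running kernel's config, if available:
{ zcat /proc/config.gz 2>/dev/null || cat /boot/config-"$(uname -r)" 2>/dev/null; } \
  | grep -E 'CONFIG_RANDOM_TRUST_CPU|CONFIG_HW_RANDOM_VIRTIO' \
  || echo "kernel config not found or options not set"
```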
<h3>Benefit automatically from more entropy</h3>
<p>An increasing number of Linux distributions contain software that uses the <code>getrandom()</code> system call. This call, and services such as SSH that depend on it, wait after system startup until the server&#x27;s PRNG is initialized, which in some cases can lead to long delays. Servers at cloudscale.ch are not affected by such delays: thanks to <code>rdrand</code> and <code>/dev/hwrng</code>, <strong>the necessary entropy is available in no time</strong>, so that services requiring random numbers can be started right away.</p>
<p>You can also <strong>tap into the new entropy sources</strong> with existing servers at cloudscale.ch. Simply switch off your server completely and then restart it. After restarting, the <code>rdrand</code> feature of the CPU will be available to you, and you can verify it using the following command:</p>
<pre><code class="language-plain">$ grep rdrand /proc/cpuinfo
flags		: [...] rdrand [...]
</code></pre>
<p>If your specific operating system also supports the <code>hwrng</code> virtio device, it will be displayed with the following command:</p>
<pre><code class="language-plain">$ cat /sys/devices/virtual/misc/hw_random/rng_available
virtio_rng.0
</code></pre>
<br/>
<p>Although it is hardly ever talked about in everyday IT life, &quot;randomness&quot; is an indispensable ingredient for countless processes, especially in the context of security. At cloudscale.ch we ensure that your servers can generate <strong>enough randomness right from the start</strong>.</p>
<p>Not just random servers!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA["CacheOut" and "VRS": cloudscale.ch Not Affected
]]></title>
          <link>https://www.cloudscale.ch/en/news/2020/01/30/cacheout-vrs-not-affected</link>
          <pubDate>Thu, 30 Jan 2020 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2020/01/30/cacheout-vrs-not-affected</guid>
          <description>
            <![CDATA[<p>After the discovery of several security holes in processors over the last two years, two new vulnerabilities, &quot;CacheOut&quot; and &quot;VRS&quot;, were disclosed this Monday. According to the current level of knowledge, the cloud services of cloudscale.ch are not affected by these new vulnerabilities, and there is no need for our customers to take action in this regard.</p>]]>
          </description>
          <content:encoded><![CDATA[<p>On 2020-01-27, two new vulnerabilities in recent Intel processors became known, one as &quot;<a href="https://software.intel.com/security-software-guidance/software-guidance/l1d-eviction-sampling">L1D Eviction Sampling</a>&quot; (L1DES) or &quot;<a href="https://cacheoutattack.com">CacheOut</a>&quot;, and the other as &quot;<a href="https://software.intel.com/security-software-guidance/software-guidance/vector-register-sampling">Vector Register Sampling</a>&quot; (VRS).</p>
<p>Verification of our infrastructure has shown that your virtual servers operated at cloudscale.ch are <strong>not affected</strong> by the current vulnerabilities: the exact CPU models of our Intel-based compute hosts do not suffer from these flaws. AMD CPUs, such as those in our AMD-based compute hosts, which we introduced in the context of our new <a href="https://www.cloudscale.ch/en/news/2019/11/19/even-more-power-thanks-to-plus-flavor">flavors featuring dedicated CPU cores</a>, are not affected at all. Based on current information, our storage clusters and the peripheral systems of our cloud infrastructure also only contain CPUs that are not affected.</p>
<p>As early as May 2019, we decided to <a href="https://www.cloudscale.ch/en/news/2019/05/17/information-on-zombieload-ridl-and-fallout">stop using &quot;simultaneous multithreading&quot;</a> on all compute hosts, a feature that could make it easier to exploit the current gaps. Although our CPUs are not affected by the two current vulnerabilities from the outset, the recent discoveries confirm that our defensive approach is effective in <strong>minimizing the risk for our customers</strong>.</p>
<p>Therefore, there is currently <strong>no need for our customers to take action</strong> in connection with the two newly disclosed security vulnerabilities. It goes without saying that we are monitoring the situation closely and will take appropriate measures depending on the latest findings.</p>
<p>Should you have any questions in connection with our security measures, we will be happy to answer them for you.</p>
<p>Best regards,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[S3-Compatible Object Storage: New URLs
]]></title>
          <link>https://www.cloudscale.ch/en/news/2020/01/17/object-storage-new-urls</link>
          <pubDate>Fri, 17 Jan 2020 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2020/01/17/object-storage-new-urls</guid>
          <description>
            <![CDATA[<p>With &quot;LPG&quot;, we recently put a second cloud region into operation in Lupfig (Canton of Aargau), which complements our existing &quot;RMA&quot; region in Rümlang (Canton of Zurich) and <a href="https://www.cloudscale.ch/en/news/2019/11/06/geo-redundancy-with-two-cloud-locations">enables geo-redundant setups</a>. On this occasion we would like to briefly inform you about some changes with regard to accessing your buckets/objects.</p>]]>
          </description>
          <content:encoded><![CDATA[<br/>
<p><strong>1.</strong> In order to create and access buckets <strong>in the new LPG region</strong>, please use only the following URLs:</p>
<ul>
<li><code>https://objects.<strong>lpg</strong>.cloudscale.ch</code> or</li>
<li><code>https://BUCKETNAME.objects.<strong>lpg</strong>.cloudscale.ch</code> or</li>
<li><code>https://objects.<strong>lpg</strong>.cloudscale.ch/BUCKETNAME</code></li>
</ul>
<p><strong>2.</strong> In order to create and access buckets <strong>in the existing RMA region</strong>, please use the following URLs as of now:</p>
<ul>
<li><code>https://objects.<strong>rma</strong>.cloudscale.ch</code> or</li>
<li><code>https://BUCKETNAME.objects.<strong>rma</strong>.cloudscale.ch</code> or</li>
<li><code>https://objects.<strong>rma</strong>.cloudscale.ch/BUCKETNAME</code></li>
</ul>
<p><strong>3.</strong> The previous URLs:</p>
<ul>
<li><code>https://objects.cloudscale.ch</code> or</li>
<li><code>https://BUCKETNAME.objects.cloudscale.ch</code> or</li>
<li><code>https://objects.cloudscale.ch/BUCKETNAME</code></li>
</ul>
<p>will continue to work for buckets/objects <strong>in the RMA region synonymously</strong> to the URLs stated in section 2 for a transitional period.<br/>
<strong>ATTENTION:</strong> The transitional period ends on 2020-12-31; please update links to these buckets/objects in time to the URLs specifying the RMA region (see section 2).</p>
<p><strong>4.</strong> Requests to existing buckets using the URL of another region are answered by our system with an HTTP status code <code>301 Moved Permanently</code>, which some tools follow automatically. Such requests are counted like regular requests in the cost calculation. However, we recommend that you do not rely on this redirection, but <strong>always use the correct URLs</strong>.</p>
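<p>For S3 clients, the switch usually just means updating the configured endpoint. As a sketch for s3cmd, using its documented <code>host_base</code> and <code>host_bucket</code> options, the region-specific RMA endpoints would look like this:</p>

```shell
# Region-specific endpoints for s3cmd; the %(bucket)s template enables
# bucket-in-hostname style access (see the URL variants above).
cat >> ~/.s3cfg <<'EOF'
host_base = objects.rma.cloudscale.ch
host_bucket = %(bucket)s.objects.rma.cloudscale.ch
EOF
```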
<br/>
<p>If you have any further questions regarding our S3-compatible Object Storage, please do not hesitate to contact us.</p>
<p>Best regards,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Use Your Own ISO/USB Images
]]></title>
          <link>https://www.cloudscale.ch/en/news/2020/01/14/use-your-own-iso-usb-images</link>
          <pubDate>Tue, 14 Jan 2020 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2020/01/14/use-your-own-iso-usb-images</guid>
          <description>
            <![CDATA[<p>While &quot;Linux&quot; is arguably the typical operating system for cloud servers, detailed preferences can be quite diverse. At cloudscale.ch, you benefit on the one hand from a wide range of images of popular Linux distributions, while on the other hand you can use a small trick to also install almost any other system – and much more.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-using-own-iso-image.png"/><h3>Reasons to use your own image</h3>
<p>In addition to many popular Linux distributions, cloudscale.ch also offers a choice of <a href="https://www.cloudscale.ch/en/news/2019/02/27/firewall-distribution-at-a-mouse-click">two firewall distributions</a> when starting a new server. You may, however, prefer a different distribution, e.g. for personal considerations, or to keep your new system as similar to existing ones as possible. By using your individual ISO/USB image, you can install <strong>almost any operating system</strong> at cloudscale.ch.</p>
<p>You can, however, do even more when using your own image: even if you choose to use Ubuntu, Debian or one of the other operating systems we offer, you have <strong>complete control over the initial setup</strong>. You can customize the partitioning, use your preferred file system or set up your own disk encryption (in addition to <a href="https://www.cloudscale.ch/en/news/2019/07/25/bluestore-encryption-and-nvme-only-storage">measures on our side</a>). Furthermore, a live image can help to restore access to a crashed system after a mishap. And if required, you can create a consistent, sector-by-sector copy of your servers on an additional volume.</p>
<h3>How to use your own images</h3>
<p>As a starting point, you need a working server in the relevant cloudscale.ch account, to which you then attach an additional SSD volume with sufficient space for your ISO/USB image. This can be the server that you are going to reinstall with your favorite system, or just a temporary server that is used for the next step. Next, download your image to this server (e.g. using <code>curl</code> or <code>wget</code>) and subsequently <strong>copy the image, block by block, to the additional volume</strong>:</p>
<pre><code class="language-bash">sudo dd if=my-own-image.iso of=/dev/vdb bs=1M &amp;&amp; sync
</code></pre>
<p>Caution: Make absolutely certain that <code>of</code> (output file) points to your additional volume, as that device will be overwritten without further warning!</p>
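<p>To be on the safe side, you can first check which device node corresponds to the additional volume, e.g. by comparing sizes (on our servers, the root volume typically appears as <code>vda</code> and the first additional volume as <code>vdb</code>, but do verify this yourself):</p>

```shell
# List block devices with their sizes and mount points; the additional volume
# is the one matching the size you chose and carrying no mount point.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT || cat /proc/partitions
```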
<p>If you want to use the image to boot a server other than the current one, reattach the volume to the desired server now. Then open the VNC console of the target server and restart it. By pressing &quot;Esc&quot; in the VNC console, you get to the boot menu, where the additional volume with your image appears as an extra &quot;Virtio disk&quot;. Select it to start <strong>booting from your image</strong>. You can now use your image as you would a physical server and a DVD.</p>
<img src="https://static.cloudscale.ch/img/news-using-own-iso-image-323f45301c63.png" alt="Repair after a mishap using a live image"/>
<p>NB. If keystrokes appear multiple times in the VNC console, simply change &quot;vnc_lite.html&quot; to &quot;vnc.html&quot; in your browser&#x27;s address bar (and click &quot;Connect&quot;).</p>
<h3>Some additional notes</h3>
<p>After the target server has been installed or repaired, reboot it <strong>normally from its root volume</strong>. You can either delete the additional volume containing your image or keep it for future use.</p>
<p>The approach described in this article <strong>works with many images</strong> provided by the distributions for CDs/DVDs (.iso files) or for USB sticks, e.g. with <a href="https://ubuntu.com">Ubuntu</a> and <a href="https://www.centos.org">CentOS</a>. However, cloudscale.ch cannot guarantee that it will work with a particular image.</p>
<p>Please note that the images provided by us for launching servers contain the &quot;cloud-init&quot; package. This package ensures correct configuration of your servers, e.g. by transferring the specified server name and the authorized SSH keys into the server&#x27;s configuration. If you (re)install your server via a custom image, you will need to <strong>specify these settings manually</strong>. For the network details, you can either use DHCP or statically configure the details shown in the control panel.</p>
<br/>
<p>Irrespective of whether you only have to fix a small mishap or need to build up an individual server farm &quot;from scratch&quot;, by using your own ISO/USB images, you have <strong>full control over the software and configuration</strong> of your servers in all circumstances.</p>
<p>Has the right tools,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Use our Latest Features with Terraform
]]></title>
          <link>https://www.cloudscale.ch/en/news/2019/12/23/latest-features-with-terraform</link>
          <pubDate>Mon, 23 Dec 2019 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2019/12/23/latest-features-with-terraform</guid>
          <description>
            <![CDATA[<p>We are constantly evolving our cloud offering further. We have, for example, recently been able to announce improvements regarding volumes and private networks as well as the opening of our second cloud location. Of course, this also means adapting the tools that interact with our API. We have enhanced our Terraform plug-in in several steps so that you can also make the most of the new possibilities using Terraform.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-infrastructure-as-code.png"/><h3>Terraform offers &quot;infrastructure as code&quot;</h3>
<p>Where a configuration management system often assumes an existing infrastructure on which the software has to be configured, <a href="https://www.terraform.io">Terraform</a> starts one step earlier: <strong>you define your required infrastructure in the form of text files</strong>, and changes over time can easily be tracked in your usual versioning system. Based on the current state and defined target situation, Terraform derives the necessary actions and creates your systems via our API, thus turning code into infrastructure.</p>
<p>Terraform is open source and can interact with a large number of cloud providers; the respective interface is implemented as a provider plug-in. We are constantly developing the &quot;cloudscale.ch Provider&quot; plug-in further and have enhanced it several times this year so that you can <strong>make the best use of the latest features</strong> of our cloud.</p>
<img src="https://static.cloudscale.ch/img/news-infrastructure-as-code-80af8ec8dc60.png" alt="&quot;Infrastructure as code&quot; with Terraform and the cloudscale.ch provider plug-in"/>
<h3>Support for the latest features</h3>
<p>For some time now, our servers have been supporting not only one SSD and one bulk volume each, but virtually <a href="https://www.cloudscale.ch/en/news/2019/01/22/flexible-management-of-ssd-and-bulk-volumes">any number of volumes</a>. As early as spring, support in Terraform followed, which means additional volumes are now defined as separate resources and can be <strong>dynamically attached to servers as well as scaled up during live operation</strong>. A restart is still required when scaling servers, and to avoid this catching you on the wrong foot, Terraform asks for your explicit permission for the restart with a special config argument.</p>
<p>Another new feature is the management of <a href="https://www.cloudscale.ch/en/news/2019/10/25/segmentation-with-multiple-private-networks">multiple private networks</a> and thus of &quot;tiered&quot; infrastructures in Terraform. Private networks are also defined as separate resources; all networks to which a server requires a connection are then specified via <code>interfaces</code>. An <strong>overview of all available resources and arguments</strong>, including examples, can be found in the <a href="https://registry.terraform.io/providers/cloudscale-ch/cloudscale/latest/docs">&quot;cloudscale.ch Provider&quot; documentation</a>.</p>
<h3>Designed for reliability</h3>
<p>It goes without saying that our Terraform plug-in also supports our <a href="https://www.cloudscale.ch/en/news/2019/11/06/geo-redundancy-with-two-cloud-locations">new data center location in Lupfig</a>. You can select the desired location for each resource with the <code>zone_slug</code> argument. This allows you to define the complete infrastructure required <strong>for your geo-redundant setups</strong> &quot;as code&quot;.</p>
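<p>As a sketch (resource and argument names as per the &quot;cloudscale.ch Provider&quot; documentation, all values are placeholders), a server pinned to our Lupfig location could be declared like this:</p>

```shell
# Minimal Terraform resource written to main.tf (placeholder values);
# zone_slug "lpg1" selects the Lupfig location, "rma1" would be Ruemlang.
cat > main.tf <<'EOF'
resource "cloudscale_server" "web" {
  name        = "web-1"
  flavor_slug = "flex-4"
  image_slug  = "ubuntu-18.04"
  zone_slug   = "lpg1"
  ssh_keys    = ["ssh-ed25519 AAAA... user@example"]
}
EOF
```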
<p>In addition to redundancy, it is also important to quickly identify any problems. Thanks to a test suite, which is executed daily by Terraform developer HashiCorp against our productive infrastructure, we obtain <strong>timely feedback from a user&#x27;s perspective</strong>. This allows us to investigate potential anomalies before sporadic errors become a real problem for our customers.</p>
<p>By the way: the Terraform plug-in is based on our <a href="https://github.com/cloudscale-ch/cloudscale-go-sdk">Go SDK</a>, which is also available as open source and makes <strong>accessing our infrastructure easier for other tools written in Go</strong>.</p>
<br/>
<p>Terraform enables the <strong>automated – and therefore reproducible – provision of infrastructure</strong> and thus fits perfectly into the workflow of many of our customers. Use our latest features in Terraform and prepare the ground for your next sophisticated projects!</p>
<p>Happy holidays!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Even More Power Thanks to "Plus" Flavor
]]></title>
          <link>https://www.cloudscale.ch/en/news/2019/11/19/even-more-power-thanks-to-plus-flavor</link>
          <pubDate>Tue, 19 Nov 2019 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2019/11/19/even-more-power-thanks-to-plus-flavor</guid>
          <description>
            <![CDATA[<p>Migrate even performance-hungry applications to cloudscale.ch now. Thanks to our brand new AMD-based hardware platform, you can benefit from dedicated CPU resources and up to 448 GB RAM per virtual server. This way, the computing power you need is guaranteed to be readily available at any time.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-flavors-comparison-en.png"/><h3>Characteristics of our new &quot;Plus&quot; flavors</h3>
<p>So far, the most common use case for cloud servers has probably been small VMs with at most short load peaks. By sharing physical resources, cloud offerings help avoid high hardware investments and infrastructure overheads. However, the advantages of the cloud over your own hardware also include the fact that <strong>you can remain flexible at all times and that ongoing hardware maintenance is taken care of</strong> – even in cases where sharing available computing power is not the main focus.</p>
<p>Our new &quot;Plus&quot; flavors are perfect for servers with high performance requirements. Unlike with the existing &quot;Flex&quot; flavors, the physically available computing power is not overcommitted; <strong>the allocated number of CPU cores is dedicated to each virtual server</strong>. You can and may make full use of this performance at any time, since the computing power of our &quot;Plus&quot; offers is not subject to so-called fair-use regulations. At the same time you are protected given that &quot;neighbors&quot; with compute-intensive workloads on the same hardware have no impact on the number of cores available to your server.</p>
<img src="https://static.cloudscale.ch/img/news-flavors-comparison-en-2ac109d024bd.png" alt="CPU Resources of &quot;Flex&quot; and &quot;Plus&quot; Flavors: Comparison"/>
<h3>Based on the latest hardware generation</h3>
<p>This offer is made possible by the use of leading-edge hardware. cloudscale.ch is one of the very first IaaS providers to employ compute nodes with the recently launched AMD EPYC &quot;Rome&quot; processors, which feature <strong>up to 64 physical cores per socket</strong>. This provides capacity to combine all the advantages of a cloud server with the computing power of a dedicated server, which results in the best of both worlds, so to speak.</p>
<p>At cloudscale.ch, we have, incidentally, already successfully relied on AMD systems in the past, namely for our Ceph-based storage cluster. The <strong>high number of PCIe lanes</strong>, in particular, was key and enabled us to migrate completely from conventional SSDs to <a href="https://www.cloudscale.ch/en/news/2019/07/25/bluestore-encryption-and-nvme-only-storage">even more powerful NVMe SSDs</a>.</p>
<p>In addition, it is good to know that after repeated descriptions of side-channel attacks targeting vulnerabilities in processor designs over the past two years, we have also <strong>deactivated simultaneous multithreading (SMT)</strong> for our latest generation of compute nodes. This ensures that SMT-based attacks are not possible from the outset.</p>
<h3>Take advantage of more power right away</h3>
<p>We particularly recommend our new &quot;Plus&quot; flavors for virtual servers that constantly require medium to high computing power. With 2 to 112 dedicated CPU cores, you have packed power at your exclusive service 24/7. This allows you, for example, to further increase the efficiency of Kubernetes clusters or other container setups. Workloads that depend on <strong>predictable performance</strong> can benefit as well given that, without competition from neighboring servers, the CPU cycles are reliably available to your real-time applications exactly when they are needed.</p>
<p>When creating a new server, simply select the desired &quot;Plus&quot; flavor, or scale existing servers as usual. It goes without saying that you can reconsider and switch to any of our other &quot;Flex&quot; and &quot;Plus&quot; flavors whenever you like. This will allow you to play it safe and, <strong>thanks to this flexibility, elegantly handle even short-term load peaks</strong>, e.g. during your Black Friday promotion.</p>
<p>Try it out for yourself: cloudscale.ch is offering you <strong>all the &quot;Plus&quot; flavors with a 25% discount until the end of 2019</strong>, i.e. at the price of the corresponding &quot;Flex&quot; offer with the same amount of RAM. If we are able to convince you of &quot;Plus&quot;, you will be charged the regular price from January 2020; if not, you can simply scale to any other flavor.</p>
<br/>
<p>With dedicated CPU power, cloudscale.ch now also meets the needs of your most demanding workloads. It goes without saying that our new <strong>&quot;Plus&quot; flavors are available at both cloud locations</strong> and are therefore <a href="https://www.cloudscale.ch/en/news/2019/11/06/geo-redundancy-with-two-cloud-locations">also suitable for geo-redundant setups</a>. This means that you will be prepared for any run.</p>
<p>Got power,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Geo-Redundancy with Two Cloud Locations
]]></title>
          <link>https://www.cloudscale.ch/en/news/2019/11/06/geo-redundancy-with-two-cloud-locations</link>
          <pubDate>Wed, 06 Nov 2019 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2019/11/06/geo-redundancy-with-two-cloud-locations</guid>
          <description>
            <![CDATA[<p>First things first: All services are now also available at a second, independent location. Thanks to top infrastructures in geographically separate data centers, it is now possible to design your setups geo-redundantly. It goes without saying that this not only applies to your servers; the new location also offers a separate Object Storage with the same familiar features.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-locations-hotstandby-en.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-locations-map-en.png"/><h3>How you can benefit from our new location</h3>
<p>One of the benefits of cloud services is that you do not have to worry about purchasing and maintaining hardware and infrastructure. However, the physical location of &quot;the cloud&quot; often still plays a role, e.g. with regard to data protection and the corresponding legislation. And despite all precautions, even cloud data centers are exposed to certain residual risks, such as nearby construction work or flooding. By using a second location with a <strong>different risk profile due to physical distance</strong>, you can protect the operation of your applications – even from the consequences of a worst-case scenario of this kind.</p>
<p>True to the old adage of &quot;not putting all your eggs in one basket&quot;, we recommend that you also keep your most important data and systems at a second location. This will enable you to <strong>resume productive operation within a short time</strong> in case of an emergency. Whether it is a hot standby system or an active/active setup with load balancing, the most you will ideally need to do is to point the DNS entry to the second location to ensure that you are back online in no time. Even simple data replication (in addition to the regular backup) may spare you considerable stress – and not only in the unlikely event of a disaster.</p>
<img src="https://static.cloudscale.ch/img/news-locations-hotstandby-en-388e875fb8b9.png" alt="Example Hot-Standby"/>
<h3>Technical background information</h3>
<p>In addition to the tried-and-tested location in Rümlang (Canton Zurich), we are now also offering our services from Lupfig (Canton Aargau) – by the way, the abbreviations &quot;RMA&quot; and &quot;LPG&quot; we introduced for this purpose are based on the <a href="https://www.unece.org/cefact/locode/welcome.html">UN/LOCODE scheme</a>. When setting up the new site, we took care to ensure that, even in the event of a total outage of one data center, <strong>all services at the other location can continue to run</strong>. Accordingly, we built the infrastructure to the same standards and also rely, for example, on a completely redundant power supply, an independent storage cluster (again with NVMe SSDs and triple replication), as well as on separate connections to the Internet and to peering partners.</p>
<p>For your traffic, we operate a fully redundant network infrastructure with at least 10 Gbps per link at both locations. And even though both sites are designed to work as independently as possible, they are nonetheless interconnected. For optimal and fail-safe <strong>networking between your systems in Rümlang and Lupfig</strong>, we use two direct fiber-optic lines that connect the two locations on separate routes without intersections.</p>
<img src="https://static.cloudscale.ch/img/news-locations-map-en-a963700a4d6f.png" alt="Locations Overview"/>
<h3>Practical use of multiple zones</h3>
<p>We have been able to offer all our services at our new location in Lupfig right from the start. When creating a new server in the cloud control panel, for example, <strong>simply select the preferred location</strong>, or include the <code>zone</code> parameter in your API call. It goes without saying that the API is backwards compatible; if no zone is specified, the default location is used, which you can set individually in the control panel. Once created, the resources are bound to their zone. This means that e.g. existing volumes or private networks can only be connected to servers at the same location.</p>
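<p>For illustration, such a request body can be assembled in a few lines – a minimal Python sketch in which the flavor and image slugs are placeholders, with only the <code>zone</code> parameter taken from the description above:</p>
<pre><code class="language-python">import json

# Sketch: build the JSON body for a server-creation call. The flavor and
# image values below are placeholders; only &quot;zone&quot; is the parameter
# described above.
def server_payload(name, flavor, image, zone=None):
    payload = {&quot;name&quot;: name, &quot;flavor&quot;: flavor, &quot;image&quot;: image}
    if zone is not None:
        # If omitted, the API uses your configurable default location.
        payload[&quot;zone&quot;] = zone
    return payload

body = json.dumps(server_payload(&quot;web-1&quot;, &quot;my-flavor&quot;, &quot;my-image&quot;, zone=&quot;lpg&quot;))
</code></pre>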
<p>As mentioned, we have also set up a separate Object Storage at the new location. Your Objects Users are automatically valid for both locations, and buckets are placed at the location where they were created. To enable direct access to the desired Object Storage or bucket, they have <strong>separate DNS names and IPs</strong>; the previous URL <code>objects.cloudscale.ch</code> is now an alias for the URL <code>objects.rma.cloudscale.ch</code> of the existing location in Rümlang. Requests sent to the other location are answered with a status code <code>301 Moved Permanently</code>, which some tools follow automatically. To ensure optimum efficiency and reliability, however, we recommend that you use the correct URLs wherever possible.</p>
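<p>If you script access to both locations, the location-specific URL can be derived from the zone abbreviation – a small sketch; note that only the <code>rma</code> hostname is given above, so treat the general pattern as an assumption for other locations:</p>
<pre><code class="language-python"># Sketch: derive the location-specific Object Storage endpoint from the
# zone abbreviation. Only the &quot;rma&quot; hostname is documented above; the
# general naming pattern is an assumption.
def objects_endpoint(zone):
    return &quot;https://objects.{}.cloudscale.ch&quot;.format(zone)
</code></pre>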
<br/>
<p>The effort has been worth it! With the new location in Lupfig, we can now also offer you the <strong>ideal infrastructure for geo-redundant setups</strong>. And no matter which location you choose as your &quot;primary&quot; one, you benefit from sophisticated redundancy, optimum performance and the exclusive storage of your data in Switzerland.</p>
<p>All the best from Zurich and Aargau,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Segmentation with Multiple Private Networks
]]></title>
          <link>https://www.cloudscale.ch/en/news/2019/10/25/segmentation-with-multiple-private-networks</link>
          <pubDate>Fri, 25 Oct 2019 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2019/10/25/segmentation-with-multiple-private-networks</guid>
          <description>
            <![CDATA[<p>Not every server should be directly accessible from the Internet. This is why our customers often make use of the possibility of strategically positioning servers in a private network and protecting them behind a firewall, a load balancer or a VPN. Now you can define multiple separate private networks, which allows you to build more complex setups tailored to your specific requirements.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-private-networks-1.png"/><link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-private-networks-2.png"/><h3>Sample applications with multiple private networks</h3>
<p>Private networks are used, for example, if web workers, but not their database backends, should be accessible directly from the Internet. They are also employed to protect servers in the cloud behind a <a href="https://www.cloudscale.ch/en/news/2019/02/27/firewall-distribution-at-a-mouse-click">firewall such as OPNsense</a>. With the support of multiple private networks, you can now also <strong>combine these proven concepts</strong>, with a firewall/load balancer setup acting as the interface to the Internet and forwarding legitimate requests to the web workers via the first private network. These web workers in turn access the database backends via a second, separate private network to ensure proper isolation of the individual security zones.</p>
<img src="https://static.cloudscale.ch/img/news-private-networks-1-9082766fe857.png" alt="Sample application 1"/>
<p>In addition to your public web application, you may want to run internal services in the cloud, while protecting them from access from less trusted zones. Separate private networks can be used to implement this scenario as well. Traffic for your web application is routed by the firewall via one private network to your web server, while your internal tools are connected to your VPN endpoint via a separate private network. This way, your public and internal servers are safely separated in <strong>two independent private networks</strong>.</p>
<img src="https://static.cloudscale.ch/img/news-private-networks-2-a712c73d0861.png" alt="Sample application 2"/>
<h3>How to set up private networks</h3>
<p>The new API endpoint <code>networks</code> acts as the key interface for your private networks. With the request:</p>
<pre><code class="language-sh">$ curl -i -H &quot;$AUTH_HEADER&quot; https://api.cloudscale.ch/v1/networks
</code></pre>
<p>you are given an overview of your existing private networks:</p>
<pre><code class="language-json">HTTP/1.0 200 OK
Allow: GET, POST, HEAD, OPTIONS
Content-Type: application/json

[
  {
    &quot;href&quot;: &quot;https://api.cloudscale.ch/v1/networks/2db69ba3-1864-4608-853a-0771b6885a3a&quot;,
    &quot;created_at&quot;: &quot;2019-05-29T13:18:42.511407Z&quot;,
    &quot;uuid&quot;: &quot;2db69ba3-1864-4608-853a-0771b6885a3a&quot;,
    &quot;name&quot;: &quot;my-network-name&quot;,
    &quot;mtu&quot;: 9000,
    &quot;subnets&quot;: [
      {
        &quot;href&quot;: &quot;https://api.cloudscale.ch/v1/subnets/33333333-1864-4608-853a-0771b6885a3a&quot;,
        &quot;uuid&quot;: &quot;33333333-1864-4608-853a-0771b6885a3a&quot;,
        &quot;cidr&quot;: &quot;172.16.0.0/24&quot;
      }
    ],
    &quot;tags&quot;: {}
  }
]
</code></pre>
<p>With a POST request to the endpoint <code>networks</code>, you can <strong>easily create additional private networks</strong>. The only mandatory parameter you need to provide is a name so you can recognize the network later. Using the existing API endpoint <code>servers</code> you can now determine whether and in which private networks a server should have interfaces. This works at the time you create a new server as well as later on.</p>
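<p>A corresponding request body is quickly put together – a minimal sketch; as noted above, <code>name</code> is the only mandatory parameter:</p>
<pre><code class="language-python">import json

# Sketch: JSON body for a POST request to the &quot;networks&quot; endpoint.
# Only &quot;name&quot; is mandatory, as described above.
def network_body(name):
    return json.dumps({&quot;name&quot;: name})
</code></pre>
<p>The resulting string can then be passed to <code>curl</code> via <code>--data</code>, analogous to the examples in our API documentation.</p>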
<p>By default, our system assigns a <strong>randomly selected private IPv4 range</strong> (&quot;subnet&quot;) to the new network and will later assign IP addresses from this range to your servers in this network through DHCP. It goes without saying that you can deactivate both of these if desired. And as before, you can statically configure any IPv4 and IPv6 addresses on servers in your private networks. In the case of a network with a subnet, you simply need to make sure you avoid address conflicts with our DHCP servers. More information about the available options can be found in our <a href="https://www.cloudscale.ch/en/api/v1">API documentation</a>.</p>
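<p>When configuring addresses statically in a network that has a subnet, a quick plausibility check helps you stay within the assigned range – an illustrative sketch using Python&#x27;s <code>ipaddress</code> module (it cannot know, of course, which addresses our DHCP servers have already handed out):</p>
<pre><code class="language-python">import ipaddress

# Illustrative check: does a statically configured address lie within
# the subnet of the network? This does not rule out conflicts with
# addresses already assigned via DHCP.
def fits_subnet(address, cidr):
    return ipaddress.ip_address(address) in ipaddress.ip_network(cidr)
</code></pre>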
<h3>Some useful hints</h3>
<p>For optimal performance in the private network, the MTU is 9000 bytes (&quot;jumbo frames&quot;) by default. You can, of course, change the MTU at any time, e.g. if your specific setup requires the 1500 bytes known from classic Ethernet.</p>
<p>Please note that a server must be assigned an IPv4 address via our DHCP server in at least one of its networks. The reason for this is that DHCP is also used to set the required route so that cloud-init on your server can reach our metadata server.</p>
<p>In the near future we will also be expanding our cloud control panel so you can directly select existing or new private networks when creating a new server via your browser.</p>
<p>Last but not least, you can <a href="https://www.cloudscale.ch/en/news/2019/09/24/keeping-track-with-tags">set user-defined tags</a> for your private networks and filter for them.</p>
<br/>
<p>In many cases, a more granular segmentation of the servers used can make sense, either to reflect the logical structure of your setup, or to <strong>add another level to your security concept</strong>. The latest evolution of our private networks makes it possible for you to build up your server landscape in the way that best suits you and your specific use case.</p>
<p>Connected at all levels,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Keeping Track with Tags
]]></title>
          <link>https://www.cloudscale.ch/en/news/2019/09/24/keeping-track-with-tags</link>
          <pubDate>Tue, 24 Sep 2019 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2019/09/24/keeping-track-with-tags</guid>
          <description>
            <![CDATA[<p>One server seldom comes alone: another server is quickly added for the test environment, an additional volume holds archived data, and Floating IPs enable high availability. When the naming scheme hits its limits, the new tags feature helps to ensure that you always know what belongs together.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>How tags help you stay on top of things</h3>
<p>Tags make it clear at a glance what a particular resource is all about or where it belongs. Such &quot;labels&quot; with key/value pairs can be attached to <strong>most of your resources</strong>: servers and server groups; volumes; Floating IPs; and Objects Users. This allows you, for example, to differentiate between &quot;prod&quot; and &quot;development&quot; environments, to associate resources with the respective end customer, or to record a comment.</p>
<p>When querying resources, you can <strong>filter them directly by tag to display only the relevant objects</strong>. Filtering can be done by a specific tag content (e.g. &quot;tag:Env=Prod&quot; for all production servers) or by the presence of a given tag regardless of its content (e.g. &quot;tag:Customer&quot; for all servers assigned to any end customer).</p>
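<p>The two filter semantics can be illustrated with a few lines of Python – a local sketch of the behavior described above, not the API implementation itself:</p>
<pre><code class="language-python"># Sketch of the two filter semantics: match a specific tag value, or
# merely the presence of a tag key (regardless of its value).
def filter_by_tag(resources, key, value=None):
    if value is None:
        return [r for r in resources if key in r.get(&quot;tags&quot;, {})]
    return [r for r in resources if r.get(&quot;tags&quot;, {}).get(key) == value]
</code></pre>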
<h3>How tags are created and retrieved</h3>
<p>Tags are currently implemented for our API. The already existing endpoints of the supported resources now recognize an element &quot;tags&quot;, which accepts a <strong>JSON object with freely definable key/value pairs</strong>. These are automatically included in the output of a GET request, as in the simplified example below:</p>
<pre><code class="language-json">[
  {
    &quot;href&quot;: &quot;https://api.cloudscale.ch/v1/servers/c10...&quot;,
    &quot;name&quot;: &quot;WebWorker1&quot;,
    &quot;created_at&quot;: &quot;2019-09-18T17:48:21.547057Z&quot;,
    &quot;status&quot;: &quot;running&quot;,
    &quot;flavor&quot;: {
      &quot;slug&quot;: &quot;flex-32&quot;,
      &quot;name&quot;: &quot;Flex-32&quot;,
      &quot;vcpu_count&quot;: 8,
      &quot;memory_gb&quot;: 32
    },
    &quot;server_groups&quot;: [],
    &quot;anti_affinity_with&quot;: [],
    &quot;tags&quot;: {
      &quot;Env&quot;: &quot;Prod&quot;,
      &quot;Customer&quot;: &quot;Cloud Corp.&quot;,
      &quot;Lifecycle&quot;: &quot;Installing&quot;,
      &quot;Comment&quot;: &quot;Don&#x27;t forget the monitoring! (Remove comment when done.)&quot;
    }
  }
]
</code></pre>
<p>Tags are set and changed using a PATCH request to the URL of the resource in question:</p>
<pre><code class="language-sh">$ curl -H &quot;$AUTH_HEADER&quot; -H &quot;Content-Type: application/json&quot; -X PATCH --data &#x27;{ &quot;tags&quot;: { &quot;Env&quot;: &quot;Prod&quot;, &quot;Customer&quot;: &quot;Cloud Corp.&quot;, &quot;Lifecycle&quot;: &quot;Live&quot; }}&#x27; https://api.cloudscale.ch/v1/servers/c10...
</code></pre>
<p>If only resources with a certain tag are to be retrieved, the desired criterion is simply appended as a URL parameter:</p>
<pre><code class="language-sh">$ curl -H &quot;$AUTH_HEADER&quot; https://api.cloudscale.ch/v1/servers?tag:Env=Prod
</code></pre>
<p>It goes without saying that further information and examples can also be found in our <a href="https://www.cloudscale.ch/en/api/v1#tags">API documentation</a>. In addition, our <strong>Ansible Cloud Module supports tags for the most important resources</strong> starting with the upcoming <a href="https://docs.ansible.com/ansible/latest/roadmap/ROADMAP_2_9.html">Ansible version 2.9</a>.</p>
<br/>
<p>With tags, we are responding to a request made by several power users among our customers. We are glad that all our users can benefit from this helpful feature, which <strong>allows you to navigate even large deployments with ease</strong>.</p>
<p>Keeping everything in order with tags,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Docker Machine and Rancher with cloudscale.ch
]]></title>
          <link>https://www.cloudscale.ch/en/news/2019/08/14/docker-machine-and-rancher</link>
          <pubDate>Wed, 14 Aug 2019 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2019/08/14/docker-machine-and-rancher</guid>
          <description>
<![CDATA[<p>An increasing number of our users are turning to container virtualization with Docker. One of the key factors here is having the right tools for setting up and managing such installations. By providing drivers for Docker Machine and Rancher, cloudscale.ch supports developers in further automating their deployments – both on the command line and via web interface.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Set up Docker hosts quickly and easily with Docker Machine</h3>
<p>Docker containers are usually started on Linux servers. Setting up such a Linux server at cloudscale.ch and installing the Docker environment on it now requires <strong>just a single command: docker-machine</strong>. With the &quot;cloudscale&quot; driver, Docker Machine can interact with our API and define all the required settings. In no time, the new Docker host is ready to start the first container on it.</p>
<p><a href="https://docs.docker.com/machine/install-machine/">Docker Machine</a> is a command line tool that can be installed either on your local computer or on a management server/jump host. Additionally, download our open source <a href="https://github.com/cloudscale-ch/docker-machine-driver-cloudscale">&quot;cloudscale&quot; driver</a> from GitHub into a directory in your <code>PATH</code>, and generate an API token via our cloud control panel. A command like the following will then create a <strong>ready-made Docker host for your containers</strong>:</p>
<pre><code class="language-plain">$ docker-machine create -d cloudscale --cloudscale-token=&lt;API_TOKEN&gt; my-docker-host
</code></pre>
<p>A <code>docker-machine env my-docker-host</code> then returns the details required to use the new Docker host immediately.</p>
<h3>Managing K8s clusters at cloudscale.ch with Rancher</h3>
<p>Not just individual Docker hosts, but entire Kubernetes clusters can be conveniently managed using <a href="https://rancher.com">Rancher</a>. Rancher, which itself runs in a Docker container, provides a web interface you can use to <strong>configure your cluster as desired</strong>. After this, all the necessary virtual servers are automatically created and completely set up. The deployment of apps and ongoing cluster management also take place via the same GUI.</p>
<p>Rancher can interact with a large number of cloud providers – including cloudscale.ch. Under &quot;Node Drivers&quot;, simply enter <a href="https://github.com/cloudscale-ch/ui-driver-cloudscale">the details</a> as described on GitHub and you will be able to <strong>select &quot;Cloudscale&quot; as the infrastructure provider when creating a new cluster</strong>. By the way, Rancher is completely open source, and we are in contact with Rancher Labs about including our driver in future versions of Rancher.</p>
<br/>
<p>With Docker Machine and Rancher, cloudscale.ch supports two essential tools for managing container environments even more easily. Whether via CLI or graphical web interface, you can create the <strong>ideal nodes for your workloads in no time at all</strong>.</p>
<p>Perfectly integrated,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[BlueStore, Encryption and NVMe-only Storage
]]></title>
          <link>https://www.cloudscale.ch/en/news/2019/07/25/bluestore-encryption-and-nvme-only-storage</link>
          <pubDate>Thu, 25 Jul 2019 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2019/07/25/bluestore-encryption-and-nvme-only-storage</guid>
          <description>
            <![CDATA[<p>Good news from our storage department: Instead of &quot;SSD-only&quot; it is now &quot;NVMe-only&quot; – and thus even more performance at the same cost. In addition, &quot;BlueStore&quot;, the new storage backend of our Ceph cluster, ensures the integrity of all your data thanks to its integrated checksums. And last but not least, we have extended our security concept by another layer of protection through hard disk encryption.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>How NVMe helps us get even more performance out of SSDs</h3>
<p>Solid-state drives (SSDs) are significantly faster than conventional hard disks because they are based exclusively on memory chips instead of moving magnetic disks and read/write heads. This also shifts the bottleneck: current SSDs can store and deliver data faster than the widespread S-ATA connection allows for. <strong>Thanks to the NVMe standard, SSDs can be connected directly to the fast PCIe interface</strong> of a PC or server instead, thereby leveraging their full potential.</p>
<p>At cloudscale.ch, we have gradually replaced our SSDs with models featuring an NVMe interface, so that today <strong>all your SSD volumes are stored entirely on NVMe SSDs</strong>, delivering the best possible performance. This was made possible by sourcing new storage systems based on AMD Epyc CPUs that offer the necessary number of PCIe lanes. By the way: even with our bulk and object storage, the Ceph DB and object cache reside on NVMe SSDs for optimal performance.</p>
<h3>Why we decided to migrate our Ceph cluster to BlueStore</h3>
<p>BlueStore, the recommended storage backend since the Luminous release, was developed specifically for Ceph clusters and a first version was released about three years ago. Instead of storing the data on an XFS file system in the background, Ceph uses BlueStore to completely manage the block device by itself and thus has complete control over the journal, caches, etc. As part of the upgrade of our storage systems to Ubuntu 18.04 LTS, we migrated our Ceph cluster from XFS to BlueStore – not least because of the <strong>significant performance gain that BlueStore provides compared to XFS</strong>.</p>
<p>An additional advantage of BlueStore is its integrated checksums: these are automatically stored for all data and metadata and validated each time data is read from the storage media. BlueStore thus offers an <strong>additional mechanism for maintaining the integrity of your data</strong> – one of the three central components of information security, in addition to availability and confidentiality.</p>
<h3>What benefit disk encryption by the cloud provider offers</h3>
<p>Together with the migration to BlueStore, we also implemented the encryption of all data disks in our storage systems. This means that as of now <strong>all your volumes and objects are automatically encrypted &quot;at rest&quot;</strong>. In addition to the already established process that disks are secure-erased by our employees when taken out of service, this encryption provides a further protective layer and thus complements our existing measures to increase information security.</p>
<p>The main purpose of this encryption is to protect your data from third parties, e.g. in case we have to dispose of a defective SSD. It is in the nature of things that we still need to be in possession of the necessary keys in order to operate your servers and volumes as usual. <strong>Disk encryption by the cloud provider thus complements your own efforts</strong> to protect your data, such as encrypted transmission over the Internet or encryption of your volumes using LUKS.</p>
<br/>
<p>The security of your data has always been one of our top priorities (see also <a href="https://www.cloudscale.ch/en/news/2019/05/24/certified-as-per-iso-27001-27017-and-27018">Certified as per ISO 27001, 27017 and 27018</a>). Furthermore, we always want to offer you the best possible performance. All the more reason for us to be pleased that our storage system has made a further leap forward in both areas.</p>
<p>Pump up the volume(s)!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Measure Usage with Objects Metrics
]]></title>
          <link>https://www.cloudscale.ch/en/news/2019/06/12/measure-usage-with-objects-metrics</link>
          <pubDate>Wed, 12 Jun 2019 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2019/06/12/measure-usage-with-objects-metrics</guid>
          <description>
<![CDATA[<p>One of the advantages of our S3-compatible Object Storage is the usage-based billing: with no fixed costs, you will only be charged for what you actually use. With Objects Metrics, you can now also retrieve usage data at a later date – for example for your own analysis, or to report it to end customers accurately to the day.</p>]]>
          </description>
          <content:encoded><![CDATA[<link rel="preload" as="image" href="https://www.cloudscale.ch/img/news-objects-metrics.png"/><h3>How the costs for Object Storage are structured</h3>
<p>No matter whether you use our Object Storage to keep backups, as a repository for your container images or as a web server for your static assets, the storage space used is averaged over the day and <strong>charged to your account at midnight (Zurich local time)</strong>. The daily costs also comprise a component for outbound network traffic (inbound traffic is free) and an amount for HTTPS requests made to your buckets and objects (also see our <a href="https://www.cloudscale.ch/en/pricing#object-storage">pricing</a>).</p>
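<p>Put as a formula, the daily total is simply the sum of these three components – a sketch with deliberately fictitious unit prices (see our pricing page for the actual rates):</p>
<pre><code class="language-python"># Sketch of the daily cost components described above. The unit prices
# below are fictitious placeholders, not our actual rates.
def daily_cost(avg_storage_gb, egress_gb, requests,
               chf_per_gb_day=0.001, chf_per_gb_egress=0.01,
               chf_per_1k_requests=0.005):
    return (avg_storage_gb * chf_per_gb_day
            + egress_gb * chf_per_gb_egress
            + requests / 1000 * chf_per_1k_requests)
</code></pre>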
<h3>Past use of your buckets at a glance</h3>
<p>Current data has long been shown in our cloud control panel, allowing you to track occupancy and access statistics almost in real time. Now you can also <strong>display past usage data for your buckets</strong>. Instead of the &quot;Live&quot; view, set a freely selectable time frame to see the total quantity of outbound traffic and requests as well as the storage usage averaged over this period.</p>
<img src="https://static.cloudscale.ch/img/news-objects-metrics-b84698af2265.png" alt="Objects Metrics"/>
<p>Using &quot;Hide Deleted&quot;, you can choose whether the list for the displayed Objects Users should include buckets that have since been deleted. By the way, in order to ensure the <strong>best possible clarity of usage and costs</strong>, the dates in Objects Metrics always refer to local time in Zurich, regardless of the time zone you have specified for displaying times in your user account.</p>
<h3>Even more data available in our API</h3>
<p>Of course the Objects Metrics can be accessed via our API, too. The API <strong>also reports inbound traffic as well as the number of objects</strong> – averaged over the selected period – stored in the corresponding bucket. These two values have no impact on costs for our Object Storage, but can still be helpful for your usage evaluation. Furthermore, through the API you can also see usage by Objects Users that no longer exist.</p>
<p>If you need the metrics for a specific bucket or Objects User only, specify the desired filter directly in the API call:</p>
<pre><code class="language-sh">$ curl -H &quot;$AUTH_HEADER&quot; &#x27;https://api.cloudscale.ch/v1/metrics/buckets?start=2019-06-09&amp;end=2019-06-11&amp;bucket_name=mytestbucket&#x27;
</code></pre>
<pre><code class="language-json">{
  &quot;start&quot;: &quot;2019-06-08T22:00:00Z&quot;,
  &quot;end&quot;: &quot;2019-06-11T22:00:00Z&quot;,
  &quot;data&quot;: [
    {
      &quot;subject&quot;: {
        &quot;name&quot;: &quot;mytestbucket&quot;,
        &quot;objects_user_id&quot;: &quot;62c2...ab53&quot;
      },
      &quot;time_series&quot;: [
        {
          &quot;start&quot;: &quot;2019-06-08T22:00:00Z&quot;,
          &quot;end&quot;: &quot;2019-06-11T22:00:00Z&quot;,
          &quot;usage&quot;: {
            &quot;requests&quot;: 136,
            &quot;object_count&quot;: 157,
            &quot;storage_bytes&quot;: 18036937,
            &quot;received_bytes&quot;: 10855616,
            &quot;sent_bytes&quot;: 14278972
          }
        }
      ]
    }
  ]
}
</code></pre>
<p>This ensures that output is limited to <strong>data that is actually relevant to you</strong>. It goes without saying that you can find all the details about calling the API in our <a href="https://www.cloudscale.ch/en/api/v1#metrics">API documentation</a>.</p>
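<p>A response of this shape is also easy to process further – a minimal sketch that sums up the usage counters across all entries:</p>
<pre><code class="language-python">import json

# Sum the usage counters over all subjects and time-series entries of a
# metrics response as shown above.
def total_usage(response_json):
    totals = {}
    for entry in json.loads(response_json)[&quot;data&quot;]:
        for ts in entry[&quot;time_series&quot;]:
            for key, value in ts[&quot;usage&quot;].items():
                totals[key] = totals.get(key, 0) + value
    return totals
</code></pre>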
<br/>
<p>With Objects Metrics, we provide you with a new tool for <strong>breaking down usage of our Object Storage by the day</strong> – be it spontaneously or in an automated manner, for your accounting team or for profiling a specific setup. Either way, with our Object Storage you only pay for what you actually use.</p>
<p>For the best transparency,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Certified as per ISO 27001, 27017 and 27018
]]></title>
          <link>https://www.cloudscale.ch/en/news/2019/05/24/certified-as-per-iso-27001-27017-and-27018</link>
          <pubDate>Fri, 24 May 2019 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2019/05/24/certified-as-per-iso-27001-27017-and-27018</guid>
          <description>
            <![CDATA[<p>Information security is becoming increasingly important in public perception, and more and more cloud users want to be sure that their data is in good hands – this is where independent certificates such as ISO 27001 come into play. Having passed certification successfully, cloudscale.ch Ltd meets the need for certified information security. In the following we provide a brief insight into this important topic:</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>What ISO 27001, 27017 and 27018 actually are</h3>
<p>Be it paper formats, light bulbs or food safety: often unnoticed and behind the scenes, standards ensure smooth operation in all areas of life. The standards of the ISO 27000 series are well-known in the field of <strong>information security, which encompasses not only the confidentiality of information but also its integrity and availability</strong>. Certifications according to these ISO standards do not apply to a specific product, but to a company or its management system which is designed to guarantee information security.</p>
<p><strong>ISO/IEC 27001:2013 defines a set of 114 so-called controls</strong> and thus a list of requirements <strong>for all possible aspects of information security</strong> that have to be implemented. How this is done in concrete terms is up to the company – the standard gives enough leeway so that organizations of all sizes and industries can implement it. As a public cloud provider, we have therefore implemented the standard with a corresponding focus on Infrastructure-as-a-Service (IaaS).</p>
<p>In our case, it was an obvious step to implement two more standards at the same time: In contrast to ISO 27001 which is universally applicable, <strong>ISO 27017 deals specifically with cloud services</strong> and defines a number of additional controls that are relevant for cloud providers and users. <strong>ISO 27018 in turn is about the protection of personally identifiable information (PII)</strong> in public clouds – a topic that has received increased attention especially due to the EU GDPR. cloudscale.ch Ltd has also been audited successfully according to these two standards (ISO/IEC 27017:2015 and ISO/IEC 27018:2019).</p>
<p>You can find all certificates on our website or directly at:</p>
<ul>
<li><a href="https://www.cloudscale.ch/en/iso-27001-certificate.pdf">https://www.cloudscale.ch/en/iso-27001-certificate.pdf</a></li>
<li><a href="https://www.cloudscale.ch/en/iso-27017-27018-certificate.pdf">https://www.cloudscale.ch/en/iso-27017-27018-certificate.pdf</a></li>
</ul>
<h3>What our path to certification looked like</h3>
<p>Looking back, we can state that the introduction of an Information Security Management System (ISMS) and its certification has not changed our daily work significantly. <strong>The security mindset has always been part of our DNA</strong> and most of our information security measures have already been in place for years. For quite some time, however, it had been apparent that end-to-end certification of the entire supply chain is important to our customers. <strong>The decision to pursue official certification according to ISO 27001 was made about 1.5 years ago</strong>. Subsequently, we sought external know-how for this process.</p>
<p>The actual &quot;ISO project&quot; took off in spring 2018 with a series of workshops together with our consultant, Dieter Roth, and a set of templates for the ISMS. The hard work then followed when it came to <strong>adapting the generic documents to our reality</strong> (and, admittedly, a tiny bit in the opposite direction). After all, our documented ISMS needed to reflect the guidelines and processes that we consider appropriate for our daily work. Of course, it was an advantage that our data centers were already ISO 27001 certified, so we did not need to work out our own regulations in this area.</p>
<p>Finally, the certification audit – sort of an exam situation – was surprisingly pleasant. <strong>Over three days, we had to answer an independent expert&#x27;s questions</strong> and provide various pieces of evidence. We felt that the auditor from Swiss Safety Center understood our business, and it quickly became clear to him why we do things the way we do. We had not dared to hope that not a single deviation would be found in the entire audit. This result is all the more of a confirmation of our security culture, which has shaped our work right from the start.</p>
<h3>Which next steps lie ahead of us</h3>
<p>The initial certification is a long-awaited milestone, and for many of our customers it is an affirmation of the trust that they have placed in us right from the start. However, one of the key requirements of the ISO/IEC 27000 standards is continuous improvement, which must be incorporated into the ISMS. Not only the security precautions, but also <strong>all processes have to be reviewed and refined again and again</strong>. Regular assessments are carried out in annual internal audits as well as in the surveillance and recertification audits conducted by the certification body every year.</p>
<p><strong>Of course, this sense of security also informs all of our future projects</strong>. Examples include ramping up another Swiss data center site, which enables our customers to build geo-redundant setups (availability), the migration of our Ceph cluster to BlueStore, which features integrated checksums (integrity), and disk encryption on our storage servers (confidentiality).</p>
<br/>
<p>Information security has been a key concern of cloudscale.ch from the very beginning, and discussions with our customers confirm its importance time and again. It is not without pride that we see <strong>the successful certification according to ISO 27001, ISO 27017 and ISO 27018 as a recognition</strong> for our commitment and as a motivation to continue on our chosen path.</p>
<p>Signed and sealed,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Information on "ZombieLoad", "RIDL", and "Fallout"
]]></title>
          <link>https://www.cloudscale.ch/en/news/2019/05/17/information-on-zombieload-ridl-and-fallout</link>
          <pubDate>Fri, 17 May 2019 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2019/05/17/information-on-zombieload-ridl-and-fallout</guid>
          <description>
            <![CDATA[<p>By now, it is common knowledge that new bugs in software are being discovered on a regular basis. The fact that security flaws can also be found in hardware became clear to a broader public in January 2018 when Meltdown and Spectre were making headlines. Last Tuesday, new vulnerabilities known as <a href="https://zombieloadattack.com/">ZombieLoad</a>, <a href="https://mdsattacks.com/">RIDL, and Fallout</a> were disclosed, against which the affected systems now have to be protected.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Who is affected by the current vulnerabilities</h3>
<p><strong>The latest CPU vulnerabilities affect all Intel processor models released in the last few years</strong>. Processors from other vendors such as AMD and ARM are not affected according to current knowledge. The flaws in the chip design allow one thread to access data that is processed by another thread on the same CPU core via so-called side channel attacks. Various buffers within the CPU retain data fragments, which can then be read by another thread in certain cases. Activated Hyper-Threading additionally facilitates the exploitation of these vulnerabilities.</p>
<p>This is particularly relevant <strong>when executing program code that is not trusted from the point of view of a specific process or user</strong>. Be it active content on websites you visit, or software in a &quot;neighboring&quot; virtual server in the cloud: malicious code can potentially access parts of your data that have just been processed on the same physical CPU core.</p>
<h3>How cloudscale.ch is tackling these security bugs</h3>
<p>It is in the nature of public cloud providers to run &quot;untrusted code&quot; on their compute nodes. Intel CPUs of the affected series are used at cloudscale.ch as well. Accordingly, we take this issue seriously and are working on eliminating the known attack vector completely. As a first step, we have applied all available security updates in our lab among other necessary changes. This includes <strong>microcode updates provided by Intel for the affected processor series, the deactivation of Hyper-Threading, an updated Linux kernel as well as patches for the virtualization and storage layer</strong>. As with every update, tests are currently running to ensure that the security updates do not have any unwanted impact on the stability of our infrastructure.</p>
<p>As soon as our tests confirm that the updated components are working as expected, we will update the productive systems using the same procedure. In order to secure all affected systems as quickly as possible while remaining operational, we have scheduled an <a href="https://www.cloudscale-status.net/incidents/71721">emergency maintenance window</a>, which will last <strong>from now until (and including) next Tuesday 2019-05-21</strong>. We will do our best to minimize the impact on your virtual servers: before we start working on a compute node, we will move all virtual servers to another, already updated node using live migration. However, it is possible that you may notice degraded server performance and/or short interruptions of network connectivity during live migration. We apologize for any inconvenience this may cause.</p>
<h3>What you should do in order to secure your servers</h3>
<p>The measures that we can take on our side protect your virtual servers&#x27; data against access from other virtual servers. In order to protect your data against access from other processes within the same virtual server, <strong>please install the security updates released by the respective Linux distribution and other software vendors</strong>.</p>
<p>To mitigate the vulnerabilities, Intel recommends flushing the affected buffers in the CPUs when switching between processes with different permissions. Intel&#x27;s microcode updates for the affected CPUs provide adjusted routines. After our maintenance window, i.e. as of Wednesday 2019-05-22, once you <strong>switch your virtual servers completely off and on again (a reboot is not sufficient)</strong>, the new CPU flag &quot;md_clear&quot; will be visible inside your server. Correspondingly updated versions of your operating system and other software may use this to detect if and how they should flush the CPU buffers to best protect your data from other, potentially less trusted processes within the same virtual server.</p>
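<p>A minimal sketch for checking the new flag from a script, assuming a Linux guest (the parsing helper is our own illustration, not an official tool):</p>

```python
def has_md_clear(cpuinfo_text: str) -> bool:
    """Return True if the "md_clear" CPU flag is present in /proc/cpuinfo text."""
    for line in cpuinfo_text.splitlines():
        # CPU flags are listed on lines starting with "flags"
        if line.startswith("flags"):
            return "md_clear" in line.split(":", 1)[1].split()
    return False

# On a live system you would call:
# has_md_clear(open("/proc/cpuinfo").read())
```

<p>Patched kernels also summarize the mitigation state in <code>/sys/devices/system/cpu/vulnerabilities/mds</code>, which can be inspected the same way.</p>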
<br/>
<p>Even though – according to current knowledge – the new attack scenarios are relatively difficult to exploit and potential access to data is not possible in a targeted manner, we make every effort to mitigate these vulnerabilities quickly and completely. For the best possible protection, we recommend that you <strong>promptly install all available security updates</strong> on your server as well. Should you have any questions regarding our current measures, please do not hesitate to contact us.</p>
<p>For secure servers,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Persistent Volumes in Kubernetes with CSI
]]></title>
          <link>https://www.cloudscale.ch/en/news/2019/03/15/persistent-volumes-in-kubernetes-with-csi</link>
          <pubDate>Fri, 15 Mar 2019 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2019/03/15/persistent-volumes-in-kubernetes-with-csi</guid>
          <description>
            <![CDATA[<p>Even if the acronym may remind you of an American TV series at first: thanks to its support for the &quot;Container Storage Interface&quot; (CSI), cloudscale.ch is one of the first providers worldwide to offer an elegant and flexible solution for using persistent storage in a Kubernetes setup.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Why more and more of our customers are using Kubernetes</h3>
<p>There are several reasons to run applications or microservices in containers such as Docker. Examples include <strong>proper separation or independent deployments of individual services</strong>. Since container virtualization, unlike fully virtualized machines, causes hardly any overhead, even a fine-granular separation of services in containers <strong>consumes only marginally more resources</strong>.</p>
<p>Kubernetes (or &quot;K8s&quot;) is an orchestration solution for managing such containers. It launches all the required containers in the desired number and distributes them across the available nodes. If a container fails, it is <strong>automatically restarted on a suitable node</strong>. Last but not least, Kubernetes can be controlled using config files and scripts and thus <strong>perfectly integrated into configuration management systems</strong>.</p>
<h3>How CSI enables self-service for storage</h3>
<p>By default, such containers have ephemeral storage, i.e. changes in the file system of a container are lost on restarts. Of course, <strong>so-called &quot;persistent volumes&quot; can also be created and passed through to a container</strong>. Up to now, however, this was typically associated with manual effort as well as coupling to a specific node and therefore did not really fit into the concept of dynamic containers.</p>
<p>With the recently adopted &quot;Container Storage Interface&quot; (CSI), there is now a defined standard for triggering the automatic creation of an according volume from a &quot;Persistent Volume Claim&quot;. This volume is then mounted directly in the corresponding container – no matter which node the container is currently running on. Of course, the volume <strong>can also be re-attached to another container</strong> and deleted right from within Kubernetes if required.</p>
<h3>What other advantages CSI offers at cloudscale.ch</h3>
<p>Just like the volumes of our virtual machines, persistent volumes for Kubernetes are based <strong>on lightning-fast SSD-only storage</strong> and can be of virtually any size. For space requirements of 100 GB or more, inexpensive bulk storage is available as well. The selection is made right in the Persistent Volume Claim using the parameter &quot;storageClassName&quot; (<code>cloudscale-volume-ssd</code> or <code>cloudscale-volume-bulk</code>).</p>
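<p>For illustration, a minimal Persistent Volume Claim using this parameter could look as follows (shown here as a Python dict rather than the usual YAML; the claim name and size are examples):</p>

```python
import json

# Minimal PersistentVolumeClaim requesting a 50 GB SSD volume via the
# cloudscale.ch storage class; you would normally write this as YAML
# and apply it with kubectl.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "my-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "cloudscale-volume-ssd",  # or "cloudscale-volume-bulk"
        "resources": {"requests": {"storage": "50Gi"}},
    },
}
print(json.dumps(pvc, indent=2))
```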
<p>As an additional feature, we also offer <strong>full disk encryption with LUKS</strong>. Encrypted volumes are created easily by providing a special storageClassName and can only be used if a secret with the correct encryption key is configured in Kubernetes – a potential attacker without this key will only see data garbage. Whether encrypted or not: at cloudscale.ch, persistent volumes for Kubernetes are <strong>stored exclusively in data centers located in Switzerland</strong>.</p>
<br/>
<p>For your very first steps we recommend <a href="https://rancher.com">Rancher</a> for the easy installation of a basic setup with Kubernetes. From there, you just need to install the <a href="https://github.com/cloudscale-ch/csi-cloudscale/tree/master/deploy/kubernetes/releases">cloudscale.ch CSI driver</a> from GitHub and configure an API token generated in the Cloud Control Panel once. Of course, various <a href="https://github.com/cloudscale-ch/csi-cloudscale/tree/master/examples/kubernetes">configuration examples for containers and volumes</a> are available on GitHub, too.</p>
<p>Cast off!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Firewall Distribution at a Mouse Click
]]></title>
          <link>https://www.cloudscale.ch/en/news/2019/02/27/firewall-distribution-at-a-mouse-click</link>
          <pubDate>Wed, 27 Feb 2019 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2019/02/27/firewall-distribution-at-a-mouse-click</guid>
          <description>
            <![CDATA[<p>In addition to the numerous Linux distributions to choose from, we now also offer OPNsense, a professional firewall distribution. Using OPNsense you can easily and effectively reduce the potential attack surface of your servers by placing critical systems in a private network behind your OPNsense firewall and protecting them from direct access from the Internet.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>What distinguishes OPNsense</h3>
<p>OPNsense is a popular distribution for operating e.g. routers, firewalls, NAT or VPN gateways. <strong>User-friendliness is a top priority</strong>: the firewall is set up with the help of a wizard and thereafter configured completely via a graphical web frontend. There is no need to deal with configuration files. However, if you do not want to restrict yourself to the web frontend, you can still enjoy full SSH access to your firewall.</p>
<p><strong>The feature set covers all your needs</strong>: in addition to its functionality as a NAT gateway for your private network, the firewall also acts, for example, as a VPN endpoint or as a load balancer for redundant web workers. Plug-ins for virtually every use case complete the package. OPNsense is based on FreeBSD and is an open-source project supported by a strong community. It is very economical in its use of RAM and CPU power, thus enabling cost-efficient operation.</p>
<h3>What a simple firewall setup can look like</h3>
<p>Private networks have been available at cloudscale.ch for quite some time and are ideal for servers that need to run in a professional data center but must not be directly accessible from the Internet. Instead of connecting servers to the LAN in your office, you can set them up at cloudscale.ch and connect them to your &quot;private network&quot;, which is completely invisible to the Internet and other customers. Create a virtual server with OPNsense to serve as a <strong>firewall between the private network and the Internet</strong>. This gives you full control over which data should flow where – and where it should not flow.</p>
<p>Accessing your data remains really easy, as you can use the OPNsense web frontend to configure a VPN and create a user account for each authorized person. As soon as they connect to this VPN, they can <strong>access your servers as usual</strong> – irrespective of their own location. It goes without saying that data traffic in the VPN is encrypted to protect your data from being spied upon. This way, you also provide employees working from home or on the road with optimal access to your internal IT tools.</p>
<p>An OPNsense firewall also provides additional protection for publicly accessible systems such as your website. You can set up another barrier against attackers and <strong>prevent many attacks altogether</strong> by making services such as database backends unreachable from the Internet. A reverse proxy on your OPNsense firewall (e.g. with the HAProxy plugin) forwards the HTTP(S) requests of your visitors to the web server in your private network and delivers the corresponding web pages via the Internet. A further advantage is that HAProxy also supports setups with multiple web servers, thus allowing you to distribute the load and even keep your website available in the event of a web server failure or during maintenance.</p>
<h3>Further tips for you</h3>
<p>For optimal security, we recommend the use of strong passwords and the timely installation of any security updates available for your firewall. If you prefer to use keys for SSH access, you can specify them in OPNsense&#x27;s user management. Like our cloud control panel, OPNsense also supports <strong>two-factor authentication via TOTP</strong>.</p>
<p>A VPN and a reverse proxy, as described above, are just two of many useful applications. It is also possible to <strong>route Floating IPs directly to internal servers</strong> and at the same time benefit from the advantages of a dedicated firewall system.</p>
<p>In addition to OPNsense, we <strong>also offer the pfSense CE distribution</strong> for your virtual servers. Based on the same roots, the two distributions are fine-tuned to the needs of their respective communities. In most scenarios, however, choosing between the two solutions is primarily a matter of taste.</p>
<br/>
<p>The OPNsense and pfSense CE distributions offer a great deal more than can be discussed here. <strong>Explore the numerous features</strong> in the user-friendly web frontends and learn more about the available functions in the extensive documentation of the <a href="https://wiki.opnsense.org">OPNsense</a> and <a href="https://docs.netgate.com/pfsense/en/latest/index.html">pfSense CE</a> distribution.</p>
<p>Batten down the hatches!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Ceph "Mimic" – Evolution of our Storage Cluster
]]></title>
          <link>https://www.cloudscale.ch/en/news/2019/02/12/ceph-mimic-evolution-of-our-storage-cluster</link>
          <pubDate>Tue, 12 Feb 2019 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2019/02/12/ceph-mimic-evolution-of-our-storage-cluster</guid>
          <description>
            <![CDATA[<p>Recently, we updated our Ceph storage cluster to the latest version: &quot;Mimic&quot;. Ceph Mimic lays the foundation for the future development of our storage cluster, but also brings tangible improvements for the continuous management of our storage systems. And last but not least, Mimic also incorporates numerous minor improvements, e.g. in the area of our S3-compatible object storage.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>How Mimic simplifies storage cluster management</h3>
<p>Even though Ceph does many things automatically, some administrative tasks still remain. <strong>Mimic supports our sysadmins with a new dashboard</strong>, which summarizes all important information about the current cluster status at a glance. The command line tools of Ceph now consistently format their output as JSON so that this data can be processed in scripts more easily.</p>
<p>Improvements were also made to the upmap mechanism. This feature, which was first introduced in the previous version &quot;Luminous&quot;, makes it possible to distribute the data evenly across all OSDs if an imbalance has accumulated during ongoing storage usage. This way, selective space and performance bottlenecks can be avoided. Finally, Mimic receives (security) updates in a timely manner and is <strong>supported by the latest Ceph-Ansible playbooks</strong>, which already hold a great deal of know-how from the Ceph community.</p>
<h3>Three helpful features for objects you should know about</h3>
<p>One of the latest features of our S3-compatible object storage is &quot;bucket lifecycle&quot;: <strong>You can define when objects should expire</strong>, e.g. so that backups stored in the object storage are automatically removed after a certain period of time. Using a user-definable prefix, you can specify the set of objects for which a certain lifecycle should apply. The system then processes the defined lifecycles daily between midnight and 6:00 AM (CET/CEST).</p>
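<p>A sketch of such a lifecycle rule, assuming a standard S3 client such as boto3 (the helper function and all names are our own examples):</p>

```python
def expiry_rule(prefix: str, days: int) -> dict:
    """Build an S3 lifecycle rule: objects under `prefix` expire after `days` days."""
    return {
        "ID": f"expire-{prefix.rstrip('/')}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Expiration": {"Days": days},
    }

# Remove backups automatically 30 days after upload:
lifecycle = {"Rules": [expiry_rule("backups/", 30)]}
# With a configured boto3 client, this would be applied via:
# s3.put_bucket_lifecycle_configuration(Bucket="my-bucket",
#                                       LifecycleConfiguration=lifecycle)
```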
<p>&quot;Server-side encryption&quot; ensures that <strong>your data is stored encrypted in the object storage</strong>. Using the &quot;SSE-C&quot; mode supported by cloudscale.ch, key management remains completely in your hands: you decide which objects are protected by which key. The subsequent retrieval of these objects will then only be possible using the respective key.</p>
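<p>A minimal sketch of SSE-C usage, assuming boto3 as the client (bucket and object names are placeholders; boto3 handles the base64 and MD5 encoding of the key itself when given raw bytes):</p>

```python
import os

# Generate a 256-bit client-managed key for SSE-C. Keep it safe:
# without this exact key, the stored objects cannot be retrieved.
key = os.urandom(32)

# With a configured boto3 client, upload and retrieval would look like:
# s3.put_object(Bucket="my-bucket", Key="secret.bin", Body=data,
#               SSECustomerAlgorithm="AES256", SSECustomerKey=key)
# s3.get_object(Bucket="my-bucket", Key="secret.bin",
#               SSECustomerAlgorithm="AES256", SSECustomerKey=key)
```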
<p>Finally, &quot;bucket policies&quot; allow you to set <strong>detailed permissions for your buckets and objects</strong>. Define which other users should have access and which actions are allowed. If, for example, you want to make objects available to someone for download, simply create an additional objects user and grant them the necessary read rights using a bucket policy.</p>
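<p>For example, a read-only bucket policy could look like this (standard S3 policy grammar; the user and bucket names are placeholders, and the exact principal format may vary by setup):</p>

```python
import json

# Grant an additional objects user read-only access to a bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ["arn:aws:iam:::user/download-user"]},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"],
    }],
}
# With a configured boto3 client, this would be applied via:
# s3.put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(policy))
```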
<h3>What further improvements we are planning based on Mimic</h3>
<p>In the background, our Ceph storage cluster distributes all data across a series of storage systems, replicated three times. The individual data fragments are stored in an XFS file system on the physical disks. Following the upgrade to Ceph Mimic, we are now planning to switch to the new &quot;BlueStore&quot; storage backend, which was officially introduced with Luminous. The POSIX file system as an intermediate layer is no longer necessary, since BlueStore stores the data directly as objects on the block device. A further advantage of BlueStore is the integrated checksumming of all data and metadata. This ensures that <strong>retrieved data is actually correct every time it is read</strong>.</p>
<p>We will use the successive re-creation of the Ceph OSDs during the migration to BlueStore to implement one further improvement at the same time: the <strong>encryption of all data disks in our storage cluster</strong>. This will provide an extra layer of security to protect your data, e.g. in the event that we have to dispose of a defective disk.</p>
<br/>
<p>At cloudscale.ch you can take full advantage of a distributed and replicated storage cluster. And thanks to the ongoing development of Ceph by its active open-source community, you benefit from <strong>new features as well as performance and reliability improvements</strong> with every upgrade. Without lifting a finger.</p>
<p>Up to date with Ceph Mimic,<br/>
your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Flexible Management of SSD and Bulk Volumes
]]></title>
          <link>https://www.cloudscale.ch/en/news/2019/01/22/flexible-management-of-ssd-and-bulk-volumes</link>
          <pubDate>Tue, 22 Jan 2019 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2019/01/22/flexible-management-of-ssd-and-bulk-volumes</guid>
          <description>
            <![CDATA[<p>Thanks to the SSD-only root volume and an optional bulk volume, we already provide you with the optimal storage space for your application – both for heavily used data and for data that is accessed on rare occasions only. As of now, servers and volumes at cloudscale.ch are no longer tied to each other: You can add virtually any number of SSD and/or bulk volumes to your servers and even move them between your servers as required.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>How you can benefit from the new volume management</h3>
<p>Following our high ambition regarding simplicity, the start of a new server remains basically unchanged. For optimal performance, the operating system is installed on an SSD-only volume, whereby the first 10 GB are free as before. However, with a click on &quot;Show more options&quot; <strong>you can now add virtually any number of additional volumes to your new server</strong>. For example, you can create a separate volume for <code>/var</code> – without changing the existing partitioning of the root volume. Additional SSD and bulk volumes can also be added or removed later as required.</p>
<p>With most available operating system images, a maximum root volume size of 2 TB is viable (due to the partitioning scheme and file system, among other things). With additional volumes you can now go beyond this limit and <strong>manage larger amounts of data on our high-performance SSD-only storage cluster</strong>. And if you want to start small first, scaling volumes up later now even works on the fly: as OpenStack did not previously support this feature in combination with Ceph, engineers from cloudscale.ch have developed a corresponding solution, which will also be contributed upstream to the official OpenStack project.</p>
<h3>Which other possibilities are available through our API</h3>
<p>The cloudscale.ch API of course offers all the benefits mentioned above as well. It is also possible to <strong>detach additional volumes from the respective virtual servers and connect them to other servers later</strong>. This allows you to use such volumes to move data from one virtual server to another. Please note that the root volume of a virtual server cannot be detached from the server and/or deleted separately.</p>
<p>In the same way you can &quot;put aside&quot; existing data, e.g. while the respective virtual server is being deleted and reinstalled. In this case, pass an empty list to the API call as the new server UUID – <strong>the volume with the data remains in your account</strong> and can be reattached to another virtual server at any later time. For more information about the newly available API calls, please refer to our <a href="https://www.cloudscale.ch/en/api/v1#volumes">API documentation</a>.</p>
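<p>As a sketch, such a detach call could be issued as follows (endpoint and field names per the API documentation linked above; the volume UUID and token are placeholders):</p>

```python
import json
from urllib import request

API = "https://api.cloudscale.ch/v1"

def detach_volume(volume_uuid: str, token: str) -> request.Request:
    """Prepare a PATCH request that detaches a volume from all servers."""
    # An empty server_uuids list detaches the volume; it remains in your
    # account and can be reattached to another server later.
    payload = json.dumps({"server_uuids": []}).encode()
    req = request.Request(f"{API}/volumes/{volume_uuid}",
                          data=payload, method="PATCH")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Content-Type", "application/json")
    return req  # pass to urllib.request.urlopen(...) to execute
```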
<h3>What to look forward to as a Kubernetes user</h3>
<p>Support for &quot;CSI&quot; is already in the pipeline: The &quot;Container Storage Interface&quot;, whose specification is currently being finalized, is a defined interface for the <strong>automatic provisioning of &quot;Persistent Volumes&quot; in Kubernetes</strong>. Without manual intervention, persistent volumes of the desired type and size can be created as soon as a container references a corresponding &quot;Persistent Volume Claim&quot;. Persistent Volumes created in this way are not bound to a specific node, but available wherever they are needed by a container.</p>
<p>With the support for additional volumes and the extension of our API, we have laid the foundations on which the cloudscale.ch CSI driver will be based. <strong>Of course, the enhancements will also benefit all other orchestration solutions</strong>, including our Ansible Cloud Module and the <a href="https://www.terraform.io/docs/providers/cloudscale/index.html">Terraform plug-in for the provider &quot;cloudscale&quot;</a>.</p>
<br/>
<p>If in the past you sometimes wished for the flexibility of an external hard drive, additional volumes now offer you just that. They are also <strong>the ideal solution if, for example, you need more storage space only temporarily</strong> – as soon as you no longer need the space, simply delete the volume again. Or just let the orchestration tool of your choice do the work for you thanks to the new API calls.</p>
<p>The perfect volume for all your data,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Handling Errors in Control Panel and API
]]></title>
          <link>https://www.cloudscale.ch/en/news/2018/12/11/handling-errors-in-control-panel-and-api</link>
          <pubDate>Tue, 11 Dec 2018 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2018/12/11/handling-errors-in-control-panel-and-api</guid>
          <description>
            <![CDATA[<p>Sometimes things do not work out, in IT just as in everyday life. The more complex the processes, the more likely it is that some component will not behave as initially expected. At cloudscale.ch we really care about simple and consistent use. For us, this also includes being prepared for any eventuality that may occur behind the scenes – after all, our interfaces should not only be intuitive to use, but also help you reach your target in the best possible way.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>What is under the hood of our cloud</h3>
<p>The first point of contact when using cloudscale.ch is our cloud control panel, which we developed in-house. This software, which is written in Python, provides you with the <strong>web interface and API to manage your servers</strong> and handles the billing process for services used. In the background, our control panel relies heavily on OpenStack. As one of the leading open source projects in this area, the cloud platform manages the physically available computing power, allocates IP addresses and configures internal networks between your servers.</p>
<p>Equally fundamental is Ceph, a distributed storage solution that is also maintained as open source and ensures the <strong>replicated and performant storage</strong> of your volumes and objects. However, the &quot;smaller&quot; building blocks in our setup are also essential, e.g. our DNS system, ExaBGP for the dynamic allocation of Floating IPs, and RabbitMQ, which acts as a kind of glue between our control panel and other involved systems.</p>
<h3>Examples of potential sources of error</h3>
<p>Much of what appears as one coherent action from a user perspective requires <strong>several separate steps</strong> in the background. In addition to the actual creation of a new virtual server, for example, a network port is created, an IP address assigned, a volume with the selected operating system provided and a reverse DNS entry set. No matter how well everything is tested, you can never completely rule out the possibility of encountering an unexpected error in any of the integrated software components.</p>
<p>So-called race conditions are another possible source of errors. To avoid inconsistencies, certain actions or parts thereof can only be executed <strong>one at a time</strong>; if the same step is also required by an action running in parallel, one of the two actions fails. Furthermore, some steps depend on additional conditions, e.g. that certain safety limits (&quot;quotas&quot;) are observed.</p>
<h3>Pillars of error handling at cloudscale.ch</h3>
<p>In the context of error handling at cloudscale.ch, our primary goal is to ensure that every action results in a usable state. We have, therefore, implemented prepared rollbacks where appropriate. If an error in a sub-step would, for example, lead to a server that does not work and that, due to a subsequent error, can possibly not be deleted, the rollback function comes into play. It ensures that already completed sub-steps are reversed, so that – despite the error – <strong>a clean state is reached</strong> again at the end.</p>
<p>It goes without saying that it is even better to avoid failure as far as possible. This is why we permanently monitor our systems for error messages and check in each individual case whether this can be avoided in the future with a specific patch or other improvements. In addition, we have optimized transactions that could lead to race conditions so as to minimize the probability of actual parallel execution. Should a transaction nevertheless need to be aborted because it coincides with another parallel operation, the case is intercepted and the transaction is retried up to a defined number of times. In this way, the action chosen by the user might <strong>still be completed successfully</strong>.</p>
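<p>The bounded-retry pattern described here can be sketched as follows (an illustration with our own names, attempt count and backoff, not cloudscale.ch&#x27;s actual code):</p>

```python
import time

class ConflictError(Exception):
    """Raised when a transaction collides with a parallel operation."""

def run_with_retry(action, attempts=3, base_delay=0.05):
    """Run `action`, retrying on conflicts up to a defined number of times."""
    for attempt in range(attempts):
        try:
            return action()  # the optimistic transaction
        except ConflictError:
            if attempt == attempts - 1:
                raise  # give up after the defined number of tries
            time.sleep(base_delay * 2 ** attempt)  # brief backoff, then retry
```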
<br/>
<p>User-friendliness has always been a main concern at cloudscale.ch. Even though the systems behind the scenes are complex and there is always potential for something to go wrong, the aim is for actions in our cloud control panel and API to <strong>lead to the desired result whenever possible</strong>. And even when this is not the case, nothing should prevent you from taking further/other actions.</p>
<p>Well prepared,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Improved User Experience Thanks to React
]]></title>
          <link>https://www.cloudscale.ch/en/news/2018/11/21/improved-user-experience-thanks-to-react</link>
          <pubDate>Wed, 21 Nov 2018 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2018/11/21/improved-user-experience-thanks-to-react</guid>
          <description>
            <![CDATA[<p>React not only facilitates an improved user experience in our cloud control panel, but the JavaScript library also helps our developers to keep the source code maintainable. In addition to the comfort our users are used to, they also benefit from the rapid implementation of new features – both in our web-based GUI and in the API.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>How React contributes to the user experience of our cloud control panel</h3>
<p>Today&#x27;s users expect websites to respond quickly; the faster a page loads, the fewer visitors click away. This factor, which directly impacts revenue in an online store, is also important for our customers&#x27; workflow in our cloud control panel. This means that in an increasing number of locations in our control panel, we <strong>avoid reloading the entire page</strong>; instead, individual elements such as the status of your servers are retrieved in the background and simply updated on the page that is already displayed.</p>
<p>This approach is called &quot;Ajax&quot; and is supported by various libraries and frameworks. We chose React (in combination with Redux) because this open-source library is used by a large community and many compatible libraries are available for further functionality. With React, the components of the individual pages are generated in the browser using JavaScript. While a component executes an <strong>asynchronous operation</strong> (e.g. loading data), the user can keep on working – unlike with a reload of the whole page that would block everything.</p>
<h3>Advantages for our software development</h3>
<p>The fact that our cloud control panel is no longer delivered as a fully compiled HTML page by our systems also simplifies the work of our software developers. By retrieving a great deal of the information from our servers without any formatting and only displaying it in the browser by using React, the Ajax API can be kept practically identical to the existing public API, thus <strong>removing the need to implement the same functionality twice</strong>. This leaves more time for the rapid implementation of new features.</p>
<p>Another advantage of this approach is that <strong>unit tests can also be utilized for the graphical frontend</strong>. This is in contrast to HTML pages generated on the server with additional JavaScript, which can hardly be tested automatically. React is also ideal for a step-by-step transition: wherever React components noticeably improve usability, we have them in place already. Where the benefits are more subtle, we will combine the conversion to React with other improvements, which saves us the effort of dealing with the same item multiple times.</p>
<h3>Where our control panel is heading with React</h3>
<p>The more components are implemented using React, the closer our cloud control panel gets to a <strong>single-page app</strong>. Already today, the tabs in the server detail view can be addressed directly with their own URLs, while switching between the tabs does not require a reload of the page. Where useful, this approach will be adopted in other cases as well.</p>
<p>When developing new features, we usually <strong>first implement the functionality in our public API</strong> in order to allow various DevOps tools to make use of it. From here it is only a small step to make the feature available in the web interface, too. The required Ajax API endpoint is largely defined by the previously created endpoint of the public API, and React takes optimal care of presenting the functionality in the browser using JavaScript.</p>
<br/>
<p>While we support the integration of cloudscale.ch into an increasing number of orchestration and management tools, the usability of our own cloud control panel remains of particular importance to us. Thanks to React, you can <strong>manage your servers in a consistent and smooth workflow</strong>, almost as if our cloud control panel were a local app on your computer.</p>
<p>For server management with flow,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[OpenStack Upgrades: Open Heart Surgery
]]></title>
          <link>https://www.cloudscale.ch/en/news/2018/10/03/openstack-upgrades-open-heart-surgery</link>
          <pubDate>Wed, 03 Oct 2018 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2018/10/03/openstack-upgrades-open-heart-surgery</guid>
          <description>
            <![CDATA[<p>To us at cloudscale.ch it is important that the software we use is not only well-tested, but also up-to-date. While this allows for reliable operation of our systems, it also ensures the prompt availability of security updates. With OpenStack at the heart of our cloud, the upgrade to its major version &quot;Pike&quot; represents a milestone that we have mastered with minimal impact on productive operation.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Which role our redundant setup plays in upgrades</h3>
<p>Our system design for high availability also serves us well in the case of upgrades. In this setup, all components are present in at least two instances, or at least three if a quorum is required. This not only protects us and our customers from unplanned downtimes in the event of a failing individual system, but also allows us to process the systems sequentially during maintenance work so <strong>that the productive operation of the entire service is guaranteed without interruption</strong>.</p>
<p>Accordingly, the OpenStack components responsible for creating and managing the virtual servers have been updated system by system. And since at least two instances of each component were always in operation, <strong>the management of the virtual servers remained available to our customers via cloud control panel as well as through the API</strong>. In the case of the compute nodes on which the virtual servers run, an OpenStack upgrade would even be possible during production operation. However, simultaneous upgrades of the underlying Linux system often require a reboot. Even in such a case your virtual servers remain online without interruption, thanks to prior live migration to another compute node.</p>
<h3>How OpenStack supports non-disruptive upgrades</h3>
<p>OpenStack is a complex system of different components that interact with each other. It is not entirely self-evident that it is possible to update these components (or, in the case of a redundant structure, only individual instances of them) one after the other: in the course of this process, part of OpenStack&#x27;s overall system is still in the old state, while the other part is already on the newer level. The OpenStack project therefore consciously makes sure <strong>that the individual components also work together with components that are still running the previous version</strong>. Only then is it possible to keep the system as a whole functional during the entire upgrade process.</p>
<p>For the recent upgrade to OpenStack &quot;Pike&quot; we started as usual with comprehensive tests in our lab environment. We optimized our Ansible playbooks so that <strong>at least two instances of each OpenStack component remain available at all times</strong>. In order to make sure that the interaction of the components also across version boundaries works as expected, we continuously created new virtual servers via API – a potential problem would have been detected in the lab at this point.</p>
<h3>What we do to reduce the impact even further</h3>
<p>In some cases short interruptions (usually less than 5 minutes) of the cloud control panel and the API could not be avoided, e.g. if the configuration of a load balancer had to be adjusted to the new version of the OpenStack component behind it at the same time. While running virtual servers remain unaffected by this, changes are not possible at such a moment – especially not moving a floating IP to another virtual server, which many of our customers use as a failover mechanism for highly available setups.</p>
<p>We are therefore working on limiting necessary downtimes even more by, for example, <strong>blocking affected operations while the control panel and API in general remain available</strong>. During the upgrade to &quot;Pike&quot;, we were already able to use such a mechanism: when major changes in OpenStack&#x27;s own API temporarily prevented the scaling of volumes, we were able to reflect precisely this in our interfaces and continue to allow all other actions.</p>
<br/>
<p>Upgrading a system as complex as OpenStack is a matter of several hours. It is not trivial to ensure the best possible availability throughout this process. We are already well positioned here, having redundant systems at every level. Where interruptions are unavoidable nevertheless, we try to keep them to a minimum – ideally, only a single operation needs to be temporarily blocked. <strong>And as always: test, test, test</strong>.</p>
<p>Seriously prepared,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Continuous Integration (CI) at cloudscale.ch
]]></title>
          <link>https://www.cloudscale.ch/en/news/2018/05/17/continuous-integration-at-cloudscale_ch</link>
          <pubDate>Thu, 17 May 2018 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2018/05/17/continuous-integration-at-cloudscale_ch</guid>
          <description>
            <![CDATA[<p>At cloudscale.ch we are always striving to improve not only our products, but also our processes. One thing we have recently focused on is automated testing. To avoid breaking things as we move forward, we have developed a growing set of tests that are run before we deploy anything to production. In this short post, you will learn:</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>What Continuous Integration at cloudscale.ch looks like</h3>
<p>While we obviously are an &quot;infrastructure company&quot;, we also write our own software, e.g. for our cloud control panel. Just like many of our customers, we use Git to keep track of the source code that makes our user experience unique. This is where <a href="https://about.gitlab.com/product/continuous-integration/">GitLab CI</a> comes into play: <strong>GitLab CI detects when new commits are pushed</strong>. On every new commit, it runs a &quot;Pipeline&quot; consisting of multiple testing jobs.</p>
<p>In the case of our cloud control panel we have <strong>two different sets of tests: Unit tests and integration tests</strong>. Unit tests are run in isolation against a small piece of code and do not require interaction with other services. Integration tests, on the other hand, really test the entire infrastructure. For instance, we have <strong>tests that actually start servers in our OpenStack-based cloud, or create Buckets in the Object Storage</strong> which is backed by our Ceph cluster. This gives us a good indication that the now-changed code still fits the use cases our customers rely on.</p>
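<p>As a purely illustrative sketch – the job names and commands are hypothetical, not our actual pipeline – such a setup might be described in a <code>.gitlab-ci.yml</code> along these lines:</p>
<pre><code class="language-yaml">stages:
  - test

unit-tests:
  stage: test
  script:
    - ./run-unit-tests.sh        # isolated, no external services required

integration-tests:
  stage: test
  script:
    - ./run-integration-tests.sh # e.g. starts servers, creates Buckets
</code></pre>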
<h3>How Continuous Integration helps with our quality assurance</h3>
<p>To make sure that all of our services are continuing to work, we are also running our tests every night, in addition to the permanent monitoring of all the systems and individual components. This enables us to <strong>detect potential problems within our infrastructure early</strong> and take action to ensure the best service possible.</p>
<p>In addition, our Terraform implementation is also tested on a daily basis <strong>using a special set of acceptance tests</strong>. This way we make sure that our Terraform provider interacts seamlessly with our API.</p>
<p>A good test suite is at the core of every good software and should continue to evolve and improve over time. While we keep improving and extending our software, we can be confident that our large and growing set of tests helps us avoid breaking things. Automated testing not only enables us to develop code faster, but <strong>also allows for broader test coverage, resulting in higher product quality</strong>.</p>
<h3>Why we move further towards Continuous Delivery (CD)</h3>
<p>While automated integration testing takes a huge load of tedious, manual work off our shoulders, there is still room for improvement: We also plan to use Continuous Delivery with GitLab CI. Continuous Delivery will allow us to push to the production branch and <strong>just watch that commit being deployed</strong>.</p>
<p>We know that you want to use the latest features and improvements as soon as possible – who wouldn&#x27;t? This is why we empower our engineers by giving them the tools they need to avoid repetition. After all, creating value for our users is not about repetitive background tasks, but about <strong>building actual solutions – and, of course, getting them delivered</strong>.</p>
<br/>
<p>Testing and delivering,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[New Status Page for the Latest Service Information
]]></title>
          <link>https://www.cloudscale.ch/en/news/2018/04/09/new-status-page-for-the-latest-service-information</link>
          <pubDate>Mon, 09 Apr 2018 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2018/04/09/new-status-page-for-the-latest-service-information</guid>
          <description>
            <![CDATA[<p>First things first: Please bookmark our new status page <a href="https://www.cloudscale-status.net">https://www.cloudscale-status.net</a> right now in order to have access to the latest maintenance announcements and status information at any time. In this article you will learn:</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Why we use a separate domain</h3>
<p>More and more companies are offering their customers up-to-date information on the status of their systems. However, there is a good reason <strong>not to publish this information directly (or at least not exclusively) on the company website</strong>, e.g. at example.com/status or status.example.com: Such information is required exactly in the event that central components – potentially including the company website – have failed.</p>
<p>At cloudscale.ch we pay particular attention to preventing potential failures through redundancy at all levels whenever possible. However, we also think through those scenarios in which something fails nevertheless and carefully examine how the effects thereof can be limited in the best ways possible. Especially in case of a malfunction we want to <strong>prevent the corresponding information from being affected by that malfunction as well</strong> – no matter where the exact problem is located.</p>
<p>Hence, we operate our status page <strong>not only on separate hardware, but also in a different data center</strong> than our cloud infrastructure, including a separate internet uplink. With the same determination, we have chosen the domain name: We decided to go with the address <a href="https://www.cloudscale-status.net">https://www.cloudscale-status.net</a> and corresponding DNS servers so we can continue to inform you even in the event of a failure of the &quot;.ch&quot; name servers.</p>
<h3>Which tools and technologies we use</h3>
<p><strong>For the operation of our status page we rely on <a href="https://cachethq.io">Cachet</a>.</strong> This tool is easy to use and adapt and has already proven itself with some of our partners. Nginx is used as the web server and a certificate from &quot;Let&#x27;s Encrypt&quot; for TLS-protected data transmission.</p>
<p>In addition to displaying the latest maintenance announcements and status information via browser, Cachet also supports querying the corresponding information via RSS and Atom feeds. If you prefer to be informed about new announcements and updates by email, <strong>you can subscribe directly on the status page.</strong> Of course, these emails are sent independently of our cloud infrastructure, too.</p>
<h3>How we use the different channels of communication</h3>
<p>Information about the current status of our systems and planned maintenance work can now be found 24/7 on our new status page at <a href="https://www.cloudscale-status.net">https://www.cloudscale-status.net</a>. Of course, we have also placed a link to the status page on our website. However, we recommend that you <strong>add the status page directly to your bookmarks</strong> so that you still have access to the latest status information even in case of unexpected problems.</p>
<p>Should maintenance work require any action on your part (e.g. the restart of a server), we will continue to send you the respective information directly to the email address of your user account. <strong>Please make sure that this address is up-to-date</strong> and inform us of any changes.</p>
<br/>
<p>All systems operational!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Update regarding "Meltdown" and "Spectre"
]]></title>
          <link>https://www.cloudscale.ch/en/news/2018/03/06/update-regarding-meltdown-and-spectre</link>
          <pubDate>Tue, 06 Mar 2018 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2018/03/06/update-regarding-meltdown-and-spectre</guid>
          <description>
            <![CDATA[<p>Recently, <strong>two important security vulnerabilities code-named <a href="https://meltdownattack.com/">Meltdown</a> and <a href="https://spectreattack.com/">Spectre</a></strong> have been discovered independently by several parties including researchers at Graz University of Technology and Google Project Zero. At cloudscale.ch we take these threats very seriously and do our best to ensure the safety of our cloud infrastructure.</p>]]>
          </description>
          <content:encoded><![CDATA[<p>Back in January we already summarized information about the measures taken so far regarding these vulnerabilities. With this update we would like to inform you that <strong>we have already applied the available fixes for Meltdown and Spectre</strong> (on 2018-01-10 and 2018-02-26 respectively) on all of our compute nodes.</p>
<h3>Detailed information regarding Meltdown (CVE-2017-5754)</h3>
<p>Fixes for the Meltdown vulnerability were available shortly after disclosure. We have applied the Linux kernel update that fixes the Meltdown vulnerability on all of our compute nodes on 2018-01-10. To protect yourself against attacks from inside your cloud servers <strong>you need to apply the corresponding security updates provided by your Linux distribution as well.</strong></p>
<h3>Detailed information regarding Spectre (CVE-2017-5715, CVE-2017-5753)</h3>
<p>The Spectre vulnerability comes in two variants:</p>
<p><strong>Spectre variant 1 can be fully mitigated with updated software.</strong> However, all vulnerable parts of the code needed to be identified first.</p>
<p>Fixing Spectre variant 2 is more complicated. The fix that was initially proposed involved a CPU microcode update. As this update caused system stability issues, it was later withdrawn by Intel. Thanks to extensive testing in our lab, the flawed microcode update was never applied to our compute nodes in production.</p>
<p>An alternative approach to fix Spectre variant 2 was then developed by Google engineers and the Linux kernel community. This approach uses a technique called <a href="https://support.google.com/faqs/answer/7625886">retpoline (return trampoline)</a> which offers two main advantages: It does not need a CPU microcode update and the performance penalty is much smaller.</p>
<p>On 2018-02-26 we installed the Linux kernel update which contains the retpoline fix. <strong>With this update all known variants of the Spectre vulnerability have been fixed on all of our compute nodes.</strong> We expect additional updates to be necessary should further vulnerable parts of the Linux kernel be identified, and we will install those updates as they become available.</p>
<h3>Security advice for our customers</h3>
<p><strong>Please note that you need to apply the relevant security updates on your cloud servers as well</strong> in order to fix the Meltdown and Spectre vulnerabilities. Otherwise your cloud servers will remain vulnerable to attacks from within your server. We suggest using the <a href="https://github.com/speed47/spectre-meltdown-checker">script published by Stéphane Lesimple</a> to check whether your servers are still vulnerable or not.</p>
<p>We will continue to track the availability of CPU microcode updates but no longer consider this a priority as – for most Linux distributions – alternative approaches are available to fix the Spectre variant 2 vulnerability.</p>
<p><strong>We advise all of our customers to install the retpoline-enabled Linux kernels provided by their distribution whenever possible.</strong> We will inform you if further action is required.</p>
<p>Please do not hesitate to contact us if you have any questions.</p>
<p>Best regards from Zurich - Switzerland,<br/>
your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Information regarding "Meltdown" and "Spectre"
]]></title>
          <link>https://www.cloudscale.ch/en/news/2018/01/19/information-regarding-meltdown-and-spectre</link>
          <pubDate>Fri, 19 Jan 2018 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2018/01/19/information-regarding-meltdown-and-spectre</guid>
          <description>
            <![CDATA[<p>Recently, <strong>two important security vulnerabilities code-named <a href="https://meltdownattack.com/">Meltdown</a> and <a href="https://spectreattack.com/">Spectre</a></strong> have been discovered independently by several parties including researchers at Graz University of Technology and Google Project Zero. They were first published in a <a href="https://googleprojectzero.blogspot.ch/2018/01/reading-privileged-memory-with-side.html">blog post by Google Project Zero</a> on 2018-01-03 after rumors spread during the holiday season.</p>]]>
          </description>
          <content:encoded><![CDATA[<p>At cloudscale.ch we take these new threats very seriously and do our best to ensure the safety of our cloud infrastructure. With this update we would like to inform you about the current status of the mitigations.</p>
<h3>Measures taken so far</h3>
<p>To ensure these vulnerabilities cannot be exploited at cloudscale.ch, <strong>we have already applied Linux kernel updates containing software mitigations for the most severe vulnerability (Meltdown).</strong> We will test further updates in our lab as they become available. Once we are confident that no regression occurs, we will apply them to our infrastructure. We will keep you up to date regarding our maintenance schedule and will do our best to minimize the operational impact.</p>
<p>Please note that you need to apply the relevant security updates to your cloud servers as well in order to fix the vulnerabilities.</p>
<h3>Detailed information regarding Meltdown (CVE-2017-5754)</h3>
<p>We have already applied the Linux kernel update that fixes the Meltdown vulnerability on all of our compute nodes on 2018-01-10. To protect yourself against attacks from inside your cloud servers <strong>you need to apply the corresponding security updates provided by your Linux distribution as well.</strong></p>
<h3>Detailed information regarding Spectre (CVE-2017-5715, CVE-2017-5753)</h3>
<p>Mitigation of the Spectre vulnerability in the Linux kernel is ongoing. Currently, neither the upstream Linux kernel nor the Linux distribution we use on our compute nodes has released updates to fix it. <strong>Proposed code changes are under review by the Linux kernel community and we expect those to be released soon.</strong> A CPU microcode update and a kernel update will be needed in order to fix this vulnerability on our compute nodes. Furthermore, additional updates to the virtualization layer are required that will update the virtual CPU type of your cloud servers.</p>
<p>Once the relevant patches have been released, an additional Linux kernel upgrade as well as a shutdown and restart of all your cloud servers will be required. We will inform all of our customers as soon as the virtual CPU type has been updated. Only then will you be able to fix the Spectre vulnerability from inside your cloud servers.</p>
<p>If you have any further questions, please do not hesitate to contact us.</p>
<p>Best regards from Zurich - Switzerland,<br/>
your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[New Border Routers with FRRouting (FRR)
]]></title>
          <link>https://www.cloudscale.ch/en/news/2017/11/27/new-border-routers-with-frr</link>
          <pubDate>Mon, 27 Nov 2017 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2017/11/27/new-border-routers-with-frr</guid>
          <description>
            <![CDATA[<p>cloudscale.ch is growing – and with it the demands on our network. In the course of a comprehensive revision, we are currently optimizing the already fully redundant network. With the replacement of our border routers we successfully completed a first expansion step with which <strong>we were able to increase the capacity of our Internet connections significantly.</strong></p>]]>
          </description>
          <content:encoded><![CDATA[<h3>How and why we evaluated new border routers</h3>
<p>When it comes to high-performance routers for large networks, the names of industry giants such as Brocade, Cisco, or Juniper quickly come to mind. Routing and forwarding usually takes place in specifically designed hardware in combination with proprietary software of the respective manufacturer. However, <strong>as a cloud provider that has open-source embedded in its DNA, we wanted to expand the field of potential solutions.</strong></p>
<p>The main requirement for our new border routers was to have 6-8 ports with a <strong>throughput of at least 10 Gbps</strong> – here we reached the limits of our existing solution, which was one of the main reasons for a replacement. We also needed several 1 Gbps ports to connect the management infrastructure and other network components. Redundant power supplies and the possibility of out-of-band management were of course a prerequisite. With regard to the software, our catalogue of requirements contained the usual protocols: OSPF and BGP.</p>
<p>Thanks to the preceding evaluation of a new leaf-spine setup, we became aware of <a href="https://frrouting.org">Free Range Routing (FRRouting)</a>: This open-source project is backed by several vendors (including Cumulus Networks) with the goal of developing <strong>a stable and scalable routing daemon.</strong> Since spring 2017, FRRouting has been under the umbrella of the Linux Foundation.</p>
<p>Thus, with the right hardware, FRRouting could be a promising candidate for our border routers as well. We considered hardware from Intel (our partner for compute and storage nodes) and <strong>from Lanner, whose x86-based appliances stand out with their high density of modular network ports.</strong></p>
<h3>PoC and migration in two steps</h3>
<p>After a proof of concept with FRRouting (first virtualized with Linux KVM and then bare-metal on Lanner hardware) we were convinced: This setup not only covers our performance and reliability needs, but also integrates perfectly into our general architecture, thanks to running on Ubuntu. Ubuntu, the operating system of our choice for the infrastructure at cloudscale.ch, is also FRRouting&#x27;s reference test platform for Linux.</p>
<p>An important advantage of the new setup is that <strong>we can virtualize our entire network topology, including the new border routers, in our lab.</strong> This helped us with the planning of the upcoming migration, because that way we were able to test all installation processes and the developed configuration as often as we wanted without disturbing productive operation.</p>
<p>In a second step, we replaced the existing routers with the new devices during a maintenance window and thereby successfully completed the migration.</p>
<h3>Advantages and possibilities of the new architecture</h3>
<p>Thanks to solid performance, the possibility of expansion by adding interface modules and the active development of FRRouting by its broad community, <strong>our new routers are ready for the future.</strong> They also fit seamlessly into the aforementioned leaf-spine setup from Cumulus Networks, which we will put into operation in summer 2018. Using an identical software basis <strong>not only guarantees best compatibility, but also efficiency in maintenance.</strong> Finally, in addition to the completely redundant setup, the attractive price of the new hardware also allowed us to purchase a fully equipped spare device.</p>
<p>With the successful migration, we have also established the basis for integrating our border routers into our configuration management even better – in the future, <strong>we will make all changes to the network configuration via this central tool.</strong> Once all network components are running on FRRouting, we will be ready for the next milestone: the conversion of our network to BGP unnumbered to reduce its complexity and the consumption of IPv4 addresses.</p>
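<p>To illustrate the idea – interface name and AS number are placeholders, not our actual configuration – a BGP unnumbered session in FRRouting references the interface instead of a neighbor IP address:</p>
<pre><code>router bgp 64512
 neighbor swp1 interface remote-as external
</code></pre>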
<br/>
<p>Open and optimally connected.<br/>
Your cloudscale.ch team</p>
<br/>
<p>PS: On November 9, 2017, our CEO Manuel Schweizer gave a presentation about <a href="http://www.swinog.ch/wp-content/uploads/2018/07/Manuel_Schweizer_Free-Range-Routing-FRR.pdf">FRRouting and BGP unnumbered</a> at the SwiNOG-Meeting in Berne.</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA["Infrastructure as Code" with Terraform
]]></title>
          <link>https://www.cloudscale.ch/en/news/2017/11/01/infrastructure-as-code-with-terraform</link>
          <pubDate>Wed, 01 Nov 2017 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2017/11/01/infrastructure-as-code-with-terraform</guid>
          <description>
            <![CDATA[<p>When projects grow beyond a single server, it usually makes sense to manage the required infrastructure automatically. After <a href="https://www.cloudscale.ch/en/news/2017/05/17/ansible-cloud-module-and-libcloud-integration">Libcloud and Ansible</a> it is now also possible to use Terraform to define complete setups at cloudscale.ch &quot;as code&quot;.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>What distinguishes Terraform</h3>
<p><strong>Terraform is a prominent representative of the term &quot;Infrastructure as Code&quot;</strong> and fits perfectly into the DevOps philosophy of today&#x27;s projects. Where servers were previously commissioned manually, rarely configured identically and often not documented at all, this reverse approach now ensures efficiency and order: a configuration file that is easy to read for both humans and computers describes the required infrastructure – and Terraform does the rest.</p>
<p>Terraform works declaratively: You describe the desired state of your setup and Terraform derives the necessary actions from it. As soon as you apply the configuration you have defined, <strong>Terraform will make the respective adjustments via our API until the described state is reached.</strong> And because your infrastructure – just like code – changes over time, Terraform configurations are best managed in versioning systems such as Git.</p>
<p>Terraform is an open-source project. The software is available <a href="https://www.terraform.io/downloads.html">free of charge</a>.</p>
<h3>How to use cloudscale.ch with Terraform</h3>
<p>If you are already using Terraform and would like to manage your servers at cloudscale.ch, <strong>simply include the plugin for the provider &quot;cloudscale&quot;.</strong> It runs as a separate process and communicates with the main Terraform process via an RPC interface. In addition, you need an API token with read/write access, which you create once in our Cloud Control Panel. Using this token, Terraform authenticates itself to our API on every change to your infrastructure.</p>
<p>Starting a server with Terraform is very easy, as the following example shows:</p>
<pre><code class="language-hcl"># Create a new Server
resource &quot;cloudscale_server&quot; &quot;web-worker01&quot; {
  name           = &quot;web-worker01&quot;
  flavor_slug    = &quot;flex-4&quot;
  image_slug     = &quot;debian-9&quot;
  volume_size_gb = 50
  ssh_keys       = [&quot;ssh-ed25519 XXXXXXXXXX...XXXX user@example.com&quot;]
}
</code></pre>
<p>This is just a simple example for a single server; all available options for more complex setups can be found in <a href="https://registry.terraform.io/providers/cloudscale-ch/cloudscale/latest/docs">our Terraform provider documentation</a>.</p>
<h3>Two tips from practice</h3>
<p>Automating things saves a lot of time. But what if the automatism does not work as intended? Terraform comes in handy here: with the command <code>terraform plan</code> you can see in detail <strong>what changes Terraform would make on the way from the current to the desired configuration.</strong> If necessary, you can revise the settings as needed – until the &quot;execution plan&quot; exactly meets your expectations. Only then does a <code>terraform apply</code> actually make changes to your server infrastructure.</p>
<p>Possibly not everyone who has access to your code or configuration repository should also have access to your services at cloudscale.ch. <strong>We therefore recommend that you pass your API token as a variable within Terraform</strong> and exclude the corresponding source file from versioning. Alternatively, set a shell environment variable called &quot;CLOUDSCALE_TOKEN&quot;, which is automatically used by Terraform.</p>
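<p>As a minimal sketch (file and variable names are our own choice, not prescribed), the token can be supplied via a Terraform variable whose value lives outside of version control:</p>

```hcl
# provider.tf – the token is supplied via a variable instead of
# being hard-coded (variable and file names are illustrative)
variable "cloudscale_token" {}

provider "cloudscale" {
  token = "${var.cloudscale_token}"
}
```

<p>The value can then be kept in a separate <code>*.tfvars</code> file listed in <code>.gitignore</code> – or omitted entirely in favor of the &quot;CLOUDSCALE_TOKEN&quot; environment variable.</p>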
<br/>
<p>At cloudscale.ch you have always been able to adjust your infrastructure as desired. Terraform now enables you to define your desired configuration and it will make the necessary adjustments for you.</p>
<p>We have the infrastructure for your code!<br/>
Your cloudscale.ch team</p>
<br/>
<p>PS: Terraform is a software written in Go and requires a corresponding Go SDK for each provider. <a href="https://github.com/cloudscale-ch/cloudscale-go-sdk">Our Go SDK</a> has been released under the MIT license and allows Go developers (regardless of Terraform) to <strong>send HTTPS requests natively from any Go application to our API</strong>.</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Guest article APPUiO: OpenShift on cloudscale.ch
]]></title>
          <link>https://www.cloudscale.ch/en/news/2017/10/16/guest-article-appuio-openshift-on-cloudscale_ch</link>
          <pubDate>Mon, 16 Oct 2017 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2017/10/16/guest-article-appuio-openshift-on-cloudscale_ch</guid>
          <description>
            <![CDATA[<p>The buzzwords of the hour are containers, <a href="https://www.docker.com/">Docker</a>, <a href="https://kubernetes.io/">Kubernetes</a>. Accordingly, in this guest article we are covering <a href="https://www.openshift.com/">OpenShift</a> – an open-source project based on those same technologies. In this article we would also like to show you that <strong>OpenShift is much more than just &quot;Docker and Kubernetes&quot;</strong> and give you a short introduction to OpenShift on cloudscale.ch:</p>]]>
          </description>
          <content:encoded><![CDATA[<p>By Tobias Brunner, APPUiO team</p>
<h3>What is OpenShift and why should I use it?</h3>
<p>OpenShift itself is described as &quot;the most secure and comprehensive enterprise grade container platform based on the industry standards Docker and Kubernetes&quot;. But there is much more to it! <strong>OpenShift is a complete Kubernetes cluster with many features that round off the product:</strong> Integrated build pipelines, Docker registry, application router (load balancer), strong security based on SELinux and an RBAC (role-based access control) system, a web-based console for easy access to the platform, central logging system, metrics for each pod, configuration management with Ansible, source-to-image builds and much more.</p>
<p>A comparison can be made with the Linux kernel: OpenShift is to Kubernetes what a Linux distribution is to the kernel. Thus, OpenShift is a Kubernetes distribution – one of the first of its kind.</p>
<p>OpenShift comes in two different flavors:</p>
<ul>
<li>OpenShift Container Platform: The commercial version from Red Hat for installation in your own data center or in the cloud.</li>
<li><a href="https://www.openshift.org/">OpenShift Origin</a>: The open-source upstream project of the commercial version with a very active GitHub repository: <a href="https://github.com/openshift/origin">https://github.com/openshift/origin</a></li>
</ul>
<p>With OpenShift, software developers get the possibility to develop and test in shorter iterations. <strong>Each commit in Git can automatically trigger the entire process from development to production.</strong> For that purpose, OpenShift offers automatic image builds, memory management, deployment, scaling, monitoring, logging and much more. With this system the so-called &quot;time-to-market&quot; can be shortened significantly – that way, hourly or even more frequent deployments are a breeze.</p>
<h3>How do I start with OpenShift?</h3>
<p>There are many ways to start with OpenShift. A few hints and examples to get you started:</p>
<ul>
<li>
<p>With the official <a href="https://github.com/openshift/openshift-ansible">Ansible playbooks</a> an OpenShift cluster can be installed and configured, e.g. at cloudscale.ch. With these playbooks all aspects, even the smallest details, can be configured. Furthermore, the playbooks help to maintain and update the cluster. These playbooks are documented both directly in the Git repository and on the documentation page.</p>
</li>
<li>
<p>A local OpenShift cluster can be started with <a href="https://github.com/minishift/minishift">Minishift</a> (based on Minikube) or with the simple command &quot;oc cluster up&quot; on your own workstation. Simply download the <a href="https://github.com/openshift/origin/releases">OpenShift client &quot;oc&quot;</a> from GitHub, unpack it and make it executable. Running &quot;oc cluster up&quot; will then launch a complete OpenShift cluster on the local Docker daemon:<br/></p>
<br/>
<pre><code class="language-plain">% oc cluster up
Starting OpenShift using openshift/origin:v3.6.0 ...
Pulling image openshift/origin:v3.6.0
Pulled 1/4 layers, 28% complete
Pulled 2/4 layers, 83% complete
Pulled 3/4 layers, 88% complete
Pulled 4/4 layers, 100% complete
Extracting
Image pull complete
OpenShift server started.

The server is accessible via web console at:
    https://127.0.0.1:8443

You are logged in as:
    User:     developer
    Password: &lt;any value&gt;

To login as administrator:
    oc login -u system:admin

% oc new-app https://github.com/appuio/example-php-sti-helloworld.git
[...]
% oc expose svc example-php-sti-helloworld
[...]
% curl -s http://example-php-sti-helloworld-myproject.127.0.0.1.nip.io/ | grep title
    &lt;title&gt;APPUiO PHP Demo&lt;/title&gt;
</code></pre>
</li>
<li>
<p>The <a href="https://github.com/appuio/techlab">APPUiO techlab</a> on GitHub provides a step-by-step guide that explains how to run applications on OpenShift. If you prefer to receive in-person instructions, APPUiO offers you <strong>a free half-day hands-on workshop</strong> (see <a href="https://appuio.ch/techlabs.html">https://appuio.ch/techlabs.html</a> for further information and registration).</p>
</li>
<li>
<p>A comprehensive documentation on microservices architecture sheds some light on the development and operation of cloud-native applications which run perfectly on OpenShift: <a href="https://docs.appuio.ch/end-user/index.html">APPUiO Microservices Example</a></p>
</li>
</ul>
<p>The available documentation for OpenShift is already very comprehensive and is being extended constantly. It can be found at <a href="https://docs.openshift.com/">https://docs.openshift.com</a> for the OpenShift Container Platform and at <a href="https://docs.openshift.org/">https://docs.openshift.org</a> for OpenShift Origin. APPUiO offers its own specific documentation at <a href="http://docs.appuio.ch/">http://docs.appuio.ch/</a>, which is maintained by the APPUiO team and the community at GitHub.</p>
<h3>Why should I run OpenShift on cloudscale.ch?</h3>
<p>After the first attempts with OpenShift on our own hardware more than two years ago, we started looking for a Swiss infrastructure partner to host the APPUiO public platform. With cloudscale.ch we have found the perfect partner. <strong>Together with the engineers at cloudscale.ch a tailor-made environment was created to meet the requirements of OpenShift.</strong></p>
<p>Some important features that cloudscale.ch offers (among others) are: a private network between virtual servers for the cluster communication, an S3-compatible object storage e.g. for the Docker registry data, Floating IPs for high availability, additional SSD disks for a suitable node partitioning, native IPv6, and much more.</p>
<p><strong>Today, cloudscale.ch is an important pillar for the stable operation of the APPUiO public platform.</strong></p>
<h3>About APPUiO</h3>
<p><strong>APPUiO ‐ the Swiss Container Platform</strong> – is a managed OpenShift service offered by <a href="https://www.puzzle.ch/">Puzzle</a> and <a href="https://vshn.ch/">VSHN</a>. We run OpenShift on any cloud according to customer requirements – particularly on cloudscale.ch.</p>
<p>Having over two years of experience with OpenShift v3, <strong>we are the leading provider in Switzerland with a comprehensive knowledge of the operation of OpenShift.</strong> We not only operate dozens of private OpenShift clusters but also a <a href="https://appuio.ch/public.html">public shared platform</a>. We have been operating the public platform on the infrastructure of cloudscale.ch since the beginning and have found a reliable partner in them who has always met our specific implementation requirements.</p>
<p>The operation of a complex platform such as OpenShift is not easy – there are many components in a large overall system. <strong>For this reason we have developed over 120 cluster checks and over 50 checks per server, which ensure the correct operation of the platform.</strong> This includes checks that perform a complete end-to-end test (from building the source code to testing the application over HTTPS) to ensure the functionality of a cluster. Many of our tools and scripts can be found on GitHub under the APPUiO organization. For questions you can reach us in the APPUiO forum or in the APPUiO community chat.</p>
<p>In order to try OpenShift for free today, please contact <a href="https://www.cloudscale.ch/de/ueber-uns">cloudscale.ch</a> or <a href="https://appuio.ch/#contact">APPUiO</a>.</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Ready to go with our Object Storage
]]></title>
          <link>https://www.cloudscale.ch/en/news/2017/08/28/ready-to-go-with-our-object-storage</link>
          <pubDate>Mon, 28 Aug 2017 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2017/08/28/ready-to-go-with-our-object-storage</guid>
          <description>
            <![CDATA[<p>After the official introduction of our S3-compatible Object Storage this summer, we are now showing you how to use this technology quickly and effectively in your projects. Furthermore, you will learn more about our attractive pricing scheme.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>How to get ready in just a few steps</h3>
<p>To use our Object Storage, you need a corresponding user account, which you can create in our Cloud Control Panel with just a few clicks. <strong>Transfer the displayed &quot;Access Key&quot; and &quot;Secret Key&quot;</strong> together with the hostname objects.cloudscale.ch to an S3-compatible application of your choice, such as a file manager, a backup program or your CMS – <strong>that&#x27;s it.</strong></p>
<p>Via our S3-compatible API you can now create buckets (analogy: directories) and upload objects (analogy: files). Depending on the application, it is advisable to define different access rights: for backups and private data, you should use the &quot;private&quot; ACL; public objects, however, such as the images of your website, can be marked as &quot;public&quot; and then <strong>directly integrated and delivered using:</strong></p>
<pre><code>src=&quot;https://BUCKETNAME.objects.cloudscale.ch/OBJECTNAME&quot;
</code></pre>
<p><strong>Update:</strong> The URL has been changed in the meantime. Read more: <a href="https://www.cloudscale.ch/en/news/2020/01/17/object-storage-new-urls">S3-Compatible Object Storage: New URLs</a>.</p>
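<p>As an illustration, such an upload can be scripted with the third-party Python library <em>boto3</em> – a sketch with placeholder credentials and bucket name; the S3 calls and the canned ACL &quot;public-read&quot; are standard S3, and the delivery URL follows the original scheme shown above:</p>

```python
# Sketch: create a bucket and upload a publicly readable object
# via the S3-compatible API. Keys and names are placeholders.
ENDPOINT = "https://objects.cloudscale.ch"

def public_url(bucket, key):
    # Delivery URL scheme as shown above (since superseded, see update note)
    return "https://{0}.objects.cloudscale.ch/{1}".format(bucket, key)

def upload_public(access_key, secret_key, bucket, key, body):
    import boto3  # third-party S3 client library
    s3 = boto3.client("s3", endpoint_url=ENDPOINT,
                      aws_access_key_id=access_key,
                      aws_secret_access_key=secret_key)
    s3.create_bucket(Bucket=bucket)  # analogy: directory
    # "public-read" is the S3 canned ACL for publicly readable objects
    s3.put_object(Bucket=bucket, Key=key, Body=body, ACL="public-read")
    return public_url(bucket, key)

# e.g.: upload_public("ACCESS_KEY", "SECRET_KEY", "my-bucket",
#                     "logo.png", open("logo.png", "rb").read())
```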
<h3>Which concepts will help you to structure your objects</h3>
<p>Of course, you can set up multiple user accounts for the Object Storage in our Cloud Control Panel. This makes it easy to <strong>separate test from production data or different customer projects from each other,</strong> for example. All your Objects Users and their buckets are listed in our Cloud Control Panel, each with used space, traffic and requests.</p>
<p><strong>Access control lists (ACLs) go even one step further:</strong> Using the S3-compatible API, you can make individual buckets and objects available to other Objects Users, either with read-only or write access – allowing for even more complex setups. For example, you can use ACLs to make sure that some objects can only be accessed by scripts, while other objects can only be accessed by certain individuals.</p>
<h3>What pricing scheme will be used</h3>
<p>The Object Storage of cloudscale.ch is <strong>billed for effective usage only and does not incur any fixed costs.</strong> The costs incurred will be charged to your account at midnight every day – so there are no surprises at the end of the month.</p>
<p>After the free introductory phase, the following three components will be included in the daily price calculation starting September 2017:</p>
<ul>
<li><strong>Used space at CHF 0.003 per GB</strong><br/>
Space is measured hourly and averaged over the day.</li>
<li><strong>Outgoing traffic at CHF 0.02 per GB</strong><br/>
Corresponds to the price of additional traffic for our cloud servers.</li>
<li><strong>Requests at CHF 0.005 per 1&#x27;000 requests</strong></li>
</ul>
<p>Inbound traffic is free of charge.</p>
<p><strong>Update:</strong> The price for used space has been <a href="https://www.cloudscale.ch/en/news/2025/01/30/object-storage-lower-price-and-practical-information">lowered by 66% to CHF 0.001 per GB in the meantime</a>.</p>
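<p>For illustration, a day&#x27;s charge at the original September 2017 prices above can be computed as follows (the usage figures are invented examples):</p>

```python
# Daily Object Storage cost at the September 2017 prices listed above (CHF).
SPACE_PER_GB = 0.003        # average space used over the day
TRAFFIC_PER_GB = 0.02       # outgoing traffic only; inbound is free
REQUESTS_PER_1000 = 0.005

def daily_cost(avg_space_gb, outgoing_gb, requests):
    return (avg_space_gb * SPACE_PER_GB
            + outgoing_gb * TRAFFIC_PER_GB
            + requests / 1000.0 * REQUESTS_PER_1000)

# 100 GB stored, 5 GB delivered, 10'000 requests:
# 0.30 + 0.10 + 0.05 = CHF 0.45 for that day
print(round(daily_cost(100, 5, 10000), 2))  # prints 0.45
```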
<br/>
<p>Whether you are looking for a secure storage location for your off-site backups, want to store your data centrally or need highly available storage for your cluster-capable application: our Object Storage offers exactly that – at a fair price and <strong>operated in Swiss data centers exclusively.</strong></p>
<p>For your small and large objects,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[More Flexibility with Floating Networks
]]></title>
          <link>https://www.cloudscale.ch/en/news/2017/07/06/more-flexibility-with-floating-networks</link>
          <pubDate>Thu, 06 Jul 2017 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2017/07/06/more-flexibility-with-floating-networks</guid>
          <description>
            <![CDATA[<p>A few months ago, we introduced <a href="https://www.cloudscale.ch/en/news/2017/04/20/high-availability-using-floating-ips">Floating IPs</a>. These are particularly useful when it comes to increasing the availability. Using &quot;Floating Networks&quot;, further exciting applications can now be covered:</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Free allocation of entire IPv6 ranges</h3>
<p>One of the advantages of IPv6 is that there are more than enough IPv6 addresses available for every conceivable application. So why not use a separate IPv6 address for each service, customer or project? With our new Floating Networks you can now <strong>assign your servers their own &quot;/56&quot; address range</strong>. The new IPv6 Floating Networks are free of charge for you – as are the individual IPv6 addresses already available.</p>
<p>Similar to Floating IPs, Floating Networks can be assigned to another server at any time as well. No matter whether you are planning maintenance work or have a hot standby system ready for all cases: With a mouse click or an API call, <strong>you can assign the entire address range to another server within seconds,</strong> which can then transparently take all requests.</p>
<h3>Central stateful firewall without NAT</h3>
<p>NAT (Network Address Translation), often used to hide many private IP addresses behind a public IP, has become widespread, especially due to the scarcity of IPv4 addresses – despite a number of disadvantages. <strong>IPv6 promises to return to &quot;end-to-end connectivity&quot;:</strong> source and target systems can address each other directly and unambiguously.</p>
<p>This is precisely why <strong>a central firewall that filters incoming connections remains an important component of most security architectures.</strong> With cloudscale.ch, such a setup can now be implemented easily:<br/>
Create a server with a &quot;public&quot; and a &quot;private network interface&quot; and, in a second step, assign a Floating Network to it – this server is your central firewall. The traffic filtered by the firewall will be forwarded to connected servers via the private network interface. Now you need to configure individual, unique IPv6 addresses from the Floating Network on all these private network interfaces. This allows your servers to be reached via these IPv6 addresses, while <strong>you can easily and centrally control traffic on your firewall.</strong></p>
<p>The following diagram illustrates the setup:</p>
<pre><code class="language-plain">                             +-------------+
                             |  Internet   |
                             +------+------+
                                    |
                             +------+------+
                             |  Server 1   |
                             |  &quot;Firewall&quot; |
                             +------+------+
                                    |
       +-----------------+----------+------+---------------------+
       |                 |                 |                     |
+------+------+   +------+------+   +------+------+       +------+------+
|  Server 2   |   |  Server 3   |   |  Server 4   |  ...  |  Server N   |
+-------------+   +-------------+   +-------------+       +-------------+
</code></pre>
<h3>Migration of your IP space to cloudscale.ch</h3>
<p>Many companies already use their own IP addresses, which have been assigned to them in the form of &quot;Provider Independent Space&quot; (PI Space) or in their role as &quot;Local Internet Registry&quot; (LIR). Thanks to Floating Networks it is now also possible to <strong>move such IP addresses (both IPv4 and IPv6) to cloudscale.ch.</strong> Keep your well-established IP addresses, but spare the effort of having your own physical infrastructure and take advantage of the flexibility of our Swiss cloud instead. Contact us to learn more about the possibilities of such a setup.</p>
<br/>
<p>Leverage the full potential of IPv6 with our new Floating Networks today and benefit from the same flexibility that we have already introduced with Floating IPs: adapt the network to your individual requirements – or even use your own existing IP addresses.</p>
<p>Ready for your next hop,<br/>
your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Launch of Our S3-Compatible Object Storage
]]></title>
          <link>https://www.cloudscale.ch/en/news/2017/06/30/launch-of-our-s3-compatible-object-storage</link>
          <pubDate>Fri, 30 Jun 2017 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2017/06/30/launch-of-our-s3-compatible-object-storage</guid>
          <description>
            <![CDATA[<p>After an extended beta phase, our S3-compatible Object Storage – one of the most popular feature requests from our users – is now officially available. After our last post about <a href="https://www.cloudscale.ch/en/news/2017/01/03/beta-phase-s3-compatible-object-storage">use cases and tools</a>, we would like to give you a few insights into how we at cloudscale.ch get the most out of this technology:</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>High reliability thanks to consistent redundancy</h3>
<p>At cloudscale.ch, high availability already plays a central role in the design phase of new features and this is no different with our Object Storage. Thus, requests to the Object Storage are distributed to two load balancers using &quot;DNS round robin&quot;. These, in turn, <strong>can take over each other&#x27;s IP and with it the entire load in case of a failure or maintenance work.</strong> It goes without saying that this does not only apply to IPv4: An ever increasing number of systems worldwide use IPv6 and can thereby access our Object Storage natively.</p>
<p>Behind the two load balancers, <strong>redundantly designed RADOS gateways</strong> store the data on a Ceph storage cluster. Similar to our SSD-only and bulk storage, we use a replication factor of 3 to ensure maximum protection of your data on our Object Storage.</p>
<h3>Maximum speed for typical applications</h3>
<p>In addition to high availability, we also optimized the performance of our Object Storage. This includes the so-called cache tier: <strong>frequently used objects are automatically held in a special cache and can thus be accessed at lightning speed.</strong> This benefits your customers if you deliver the static files on your website or images in a newsletter directly from our Object Storage.</p>
<p>We also choose our hardware carefully: For our Object Storage, we have chosen a <strong>combination of hard drives for optimal storage density and NVMe SSDs for maximum speed.</strong> In conjunction with a sophisticated setup of Ceph, we also achieved an additional increase in speed even for rarely used objects.</p>
<h3>Free introductory phase</h3>
<p>On the occasion of the official launch of our Object Storage, we would like to give all users the opportunity to <strong>test this new feature free of charge until the end of August 2017.</strong> Evaluate different tools and discover new use cases to find out how you can optimize your application with our Object Storage.</p>
<p>From September 2017 onwards, a moderate pricing for the Object Storage will be introduced, which will take the <strong>actual space usage, data traffic and the number of requests</strong> into account. We will, of course, inform you in good time about the exact billing mode.</p>
<br/>
<p>Most certainly, you are already using applications that can benefit from an Object Storage. Take advantage of the free introductory phase and test the opportunities that are opening up to you!</p>
<p>For objects by the bucket,<br/>
your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Ansible Cloud Module and Libcloud Integration
]]></title>
          <link>https://www.cloudscale.ch/en/news/2017/05/17/ansible-cloud-module-and-libcloud-integration</link>
          <pubDate>Wed, 17 May 2017 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2017/05/17/ansible-cloud-module-and-libcloud-integration</guid>
          <description>
            <![CDATA[<p>Back in November 2016 <a href="https://www.cloudscale.ch/en/news/2016/11/03/workflow-automation-with-our-new-api">we introduced our API</a>, which allows you to manage your cloud servers automatically – directly from within your own application or deployment setup. With Ansible and Libcloud this just got even more comfortable: These two important open-source projects now support our API natively.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Ansible Cloud Module</h3>
<p>Ansible is a platform for the orchestration and configuration management of applications, servers (especially Unix/Linux) and entire server farms. In addition to numerous other modules, <strong>Ansible allows you to interact with different cloud providers through its Cloud Modules</strong> – starting with version 2.3.0 it supports managing your servers at cloudscale.ch as well.</p>
<p>Creating a new server is like a walk in the park:</p>
<pre><code class="language-yaml">- name: Start cloudscale.ch server
  cloudscale_server:
    name: my-shiny-cloudscale-server
    flavor: flex-4
    image: debian-8
    ssh_keys: ssh-rsa XXXXXXXXXX...XXXX ansible@cloudscale.ch
</code></pre>
<h3>Apache Libcloud</h3>
<p>Apache Libcloud is a Python library which abstracts the APIs of different IaaS providers. <strong>Libcloud enables you to access the resources of various cloud providers via a uniform Python syntax.</strong> Starting with <a href="https://libcloud.readthedocs.io/en/latest/compute/drivers/cloudscale.html">version 1.5.0</a> Libcloud also includes a driver for the cloudscale.ch API to manage servers on our infrastructure directly from your source code.</p>
<p>Using Python and Libcloud you can create new servers with just a few lines of code:</p>
<pre><code class="language-python">from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

cls = get_driver(&#x27;cloudscale&#x27;)
driver = cls(key=&#x27;XXXXX&#x27;)

name = &#x27;my-shiny-cloudscale-server&#x27;
size = [s for s in driver.list_sizes() if s.id == &#x27;flex-4&#x27;][0]
image = [i for i in driver.list_images() if i.id == &#x27;debian-8&#x27;][0]
ssh_keys = [open(&#x27;id_rsa.pub&#x27;).read().strip()]

node = driver.create_node(name=name,
                          size=size,
                          image=image,
                          ex_create_attr={&#x27;ssh_keys&#x27;:ssh_keys})
</code></pre>
<h3>Two tools, different areas of application</h3>
<p>Ansible and its so-called playbooks are used to define and execute a sequence of tasks for system management. Therefore, <strong>Ansible has clearly defined areas of application: orchestration of IT processes, deployment of applications, or configuration management.</strong></p>
<p>With Ansible and the Cloud Module it is possible to automate the whole process of setting up and operating highly available web services at cloudscale.ch – from creating virtual servers and deploying applications up to non-disruptive maintenance during operating system or application updates.</p>
<p>In contrast, Libcloud is a Python library. With Python as a universal programming language a wide range of applications can be implemented. So, <strong>Libcloud comes in handy whenever Python code is used to access cloud services.</strong> All of the included cloud providers are supported automatically without the developer having to write code for their specific integration.</p>
<p>Therefore, Libcloud is suitable wherever you use Python already – e.g. when directly integrated into your (web) application for more scalability, or in order to extend existing deployment and monitoring tools.</p>
<h3>What the future holds</h3>
<p>We are constantly working on new features for our cloud infrastructure and our API. In order to integrate these into Libcloud and Ansible, we are working with the respective open-source communities. For example, <strong>an integration into Ansible is planned for the recently introduced &quot;Floating IP&quot; feature</strong>.</p>
<br/>
<p>A common advantage of cloud servers is that they can be deployed with just a few clicks. You can go one step further now: automate your server deployment with Ansible or Libcloud – no need to click at all.</p>
<p>Server setup made easy!<br/>
Your cloudscale.ch team</p>
<br/>
<p>PS: We believe that an <strong>integration of the cloudscale.ch API into Terraform</strong> would be valuable as well. What&#x27;s your take on this?</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[High-Availability Using Floating IPs
]]></title>
          <link>https://www.cloudscale.ch/en/news/2017/04/20/high-availability-using-floating-ips</link>
          <pubDate>Thu, 20 Apr 2017 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2017/04/20/high-availability-using-floating-ips</guid>
          <description>
            <![CDATA[<p>The entire infrastructure at cloudscale.ch has been designed for maximum availability of your virtual servers. By using Floating IPs you can now make your services highly available at the software level as well. Let us briefly explain:</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>How you can increase availability using Floating IPs</h3>
<p>It will happen sooner or later: a process crashes, a limit is reached or there is maintenance work to be done. Therefore, many sysadmins operate the same service on several servers and change DNS records when needed in order to move requests from one server to another. Even with a short TTL this causes additional overhead or even service interruptions – and it is another source of errors and subsequent problems.</p>
<p><strong>You can move our new Floating IPs between your virtual servers at will</strong>; a new server can transparently take all requests giving you time to update, restart, scale or debug the previously active server.</p>
<p>The <a href="https://www.cloudscale.ch/en/keepalived-config-unicast-example.txt" title="Example configuration for keepalived">example of <em>keepalived</em></a> shows how easy it is to achieve high availability: Two servers periodically check the state of their respective counterpart. When a problem occurs on the active server, the standby server automatically takes over by assigning itself the Floating IP using our API. Of course, you can also assign Floating IPs to a different server manually by using our cloud control panel.</p>
<p><strong>Update:</strong> If you run your servers on Ubuntu 18.04 or 20.04, please also see the <a href="https://www.cloudscale.ch/en/keepalived-config-unicast-example-1804.txt" title="Example configuration for keepalived on Ubuntu 18.04">revised example of <em>keepalived</em></a>.</p>
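<p>What the standby server does in such a failover can be sketched in Python with the third-party <em>requests</em> library. The endpoint path, the &quot;server&quot; field and the bearer-token authentication follow our API v1, but treat the details of this sketch as an assumption and consult the API documentation; token, IP and UUID are placeholders:</p>

```python
# Sketch of a failover action: assign a Floating IP to a server via
# the cloudscale.ch API. Token, IP and server UUID are placeholders.
API = "https://api.cloudscale.ch/v1"

def assignment_request(floating_ip, server_uuid):
    # URL and payload for reassigning a Floating IP (API v1 scheme)
    return "{0}/floating-ips/{1}".format(API, floating_ip), {"server": server_uuid}

def assign(token, floating_ip, server_uuid):
    import requests  # third-party HTTP library
    url, payload = assignment_request(floating_ip, server_uuid)
    r = requests.post(url, json=payload,
                      headers={"Authorization": "Bearer " + token})
    r.raise_for_status()

# e.g. called from a keepalived notify script on the new MASTER:
# assign("API_TOKEN", "192.0.2.10", "SERVER_UUID")
```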
<p><strong>Ideally, you should combine Floating IPs with our Anti-Affinity feature</strong> to achieve the highest level of availability. This ensures that two servers that can cover for each other always run on separate physical machines. Even though we already have redundancy in place on many different levels, by doing so you can further reduce the risk of a hardware failure.</p>
<h3>Which further benefits Floating IPs offer</h3>
<p>Even if you do not plan to dynamically move IP addresses between servers, you can still benefit from the new Floating IPs: You now have the possibility to <strong>add additional IP addresses to your servers as needed</strong>. By adding up to five additional IPv4 and IPv6 addresses you can separate services or customers from each other without having to create or maintain separate servers.</p>
<p><strong>Update:</strong> Meanwhile, the limit has been increased from five to a total of fifteen Floating IPs or Floating Networks per server.</p>
<p>Since Floating IPs are not hard-wired to a virtual server, <strong>they will be retained in your user account when you delete the server</strong>. This prevents your &quot;Service IP&quot; from being lost when replacing servers – provided, of course, that your customers communicate with this Floating IP exclusively.</p>
<h3>What happens in the background</h3>
<p>From a technical point of view, each Floating IP represents a separate, small IP range – hence the netmask &quot;/128&quot;, &quot;/32&quot; or &quot;255.255.255.255&quot; respectively. When you create a Floating IP or assign it to a new server, our routers receive corresponding instructions: traffic directed to the Floating IP is to be sent immediately to that server. These instructions are processed via a <strong>separate, redundant setup consisting of two ExaBGP speakers</strong>, which communicate with our routers via multiple BGP sessions.</p>
<p>At the SwiNOG meeting in Berne, André Keller from VSHN AG and our CEO Manuel Schweizer presented more detailed background information on this particular approach; the slides can be found at <a href="http://www.swinog.ch/meetings/swinog30/">http://www.swinog.ch/meetings/swinog30/</a>. As usual, we hid the technical details behind a self-explanatory user interface – our cloud control panel. To use Floating IPs via API you can find all the needed information in the API documentation at <a href="https://www.cloudscale.ch/en/api/v1">https://www.cloudscale.ch/en/api/v1</a>.</p>
<br/>
<p>Let it float!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Ceph Performance Boost with NVMe SSDs
]]></title>
          <link>https://www.cloudscale.ch/en/news/2017/02/16/ceph-performance-boost-with-nvme-ssds</link>
          <pubDate>Thu, 16 Feb 2017 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2017/02/16/ceph-performance-boost-with-nvme-ssds</guid>
          <description>
            <![CDATA[<p>Many customers value the high level of performance of our servers. Along with powerful processors and fast networking, our distributed SSD storage cluster contributes substantially to superior speed. After thorough testing in our lab, we recently gave it an extra performance boost:</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Upgrading the Ceph storage platform</h3>
<p>Ceph forms the basis of our distributed storage cluster. The open-source solution ensures that your data is stored in a multi-redundant way and is thus always available. Distributing read and write operations across a large number of storage nodes increases speed significantly. The physical separation of compute and storage nodes enables high-availability setups and allows storage capacity to be scaled virtually infinitely.</p>
<p>In order to ensure smooth operation, we rely on Ceph&#x27;s &quot;LTS&quot; releases (&quot;Long Term Stable&quot; releases). Using a widely supported version increases reliability and minimizes the response time in case of an issue. After comprehensive testing in our lab, we have updated the storage cluster from &quot;Hammer&quot; to the latest LTS release, &quot;Jewel&quot;. An impressive fact: compared to Hammer, we measured a performance increase of up to 35% with Jewel.</p>
<h3>Reinstallation of the storage nodes with Ubuntu 16.04 LTS</h3>
<p>Most of our servers are running Ubuntu, and here too we rely on versions with long-term support. While the productive systems were running smoothly with version 14.04 LTS, we have thoroughly tested the newer Ubuntu 16.04 LTS as well as the replacement process in our lab. During the last few days we have reinstalled all of our storage nodes with 16.04 and thus will continue to benefit from current software and long-term support.</p>
<h3>Enhancement through super fast NVMe SSDs</h3>
<p>When using Ceph, all data is stored in the so-called journal first. In a second step, data gets transferred to its actual storage location. In many cases SSDs are used for the journal to speed up write operations while data is then stored on magnetic hard disks.</p>
<p>At cloudscale.ch, however, we use SSDs exclusively (except for bulk storage) to also ensure read access at lightning speed. To further optimize our setup, we have enhanced our storage nodes with NVMe SSDs. These serve as a super-fast journal and speed up write access even more compared to the previous SSDs. This new configuration also eliminates the &quot;double-write penalty&quot;: thanks to the separate NVMe journal, data is written to the SSDs just once, which almost doubles data throughput.</p>
<br/>
<p>After carefully planning every step, we were able to implement all these improvements during live operation – probably without you even noticing. Of course, all customers automatically benefit from the increased performance, so there is no need to modify or replace your current servers.</p>
<p>At lightning speed,<br/>
your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Beta Phase: S3-Compatible Object Storage
]]></title>
          <link>https://www.cloudscale.ch/en/news/2017/01/03/beta-phase-s3-compatible-object-storage</link>
          <pubDate>Tue, 03 Jan 2017 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2017/01/03/beta-phase-s3-compatible-object-storage</guid>
          <description>
            <![CDATA[<p>We have always considered ourselves to be a platform from developers for developers: The requirements and requests of the developer community have a substantial influence on our roadmap. Thus, the latest feature, an S3-compatible object storage with data located in Switzerland exclusively, has also been implemented by popular request.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Three use cases for object storage</h3>
<p>Object storage (or object-based storage) is a modern approach to storing and retrieving data that is increasingly used by current applications. A typical use case is storing backups: backups can, for instance, be created (and optionally encrypted) with <a href="http://duplicity.nongnu.org/">Duplicity</a> and then stored outside the respective systems. This way they remain available independently of the primary system.</p>
<p>Since object storage is accessed via HTTP/HTTPS calls, another use case follows almost automatically: files flagged as &quot;public&quot; (e.g. images or videos) can be accessed directly via a static URL and can therefore be incorporated into websites or HTML newsletters without burdening the web server.</p>
<p>If you are already using Docker on your servers, your deployment will become even easier: Entire Docker images can be stored directly on our new object storage using the <a href="https://docs.docker.com/registry/">Docker registry</a>.</p>
<h3>Supported tools</h3>
<p>Objects stored in the object storage are often files, grouped in &quot;buckets&quot;. These in turn can be seen as a simplified version of folders. Hence, usage in the shell or scripts is pretty straightforward: The open source tool <a href="http://s3tools.org/s3cmd">s3cmd</a> for example follows familiar concepts to manage objects, similar to SCP or FTP.</p>
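<p>Pointing <em>s3cmd</em> at an S3-compatible endpoint takes only a few lines in <code>~/.s3cfg</code> – a sketch with placeholder values (the actual endpoint and credentials are provided with your beta access):</p>

```ini
# ~/.s3cfg – placeholder values for illustration only
[default]
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
host_base = objects.example.com
host_bucket = %(bucket)s.objects.example.com
```

<p>With that in place, commands such as <code>s3cmd mb s3://my-bucket</code> and <code>s3cmd put file.txt s3://my-bucket/</code> behave much like their SCP/FTP counterparts.</p>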
<p>Besides the aforementioned Duplicity, an increasing number of modern applications can make use of object storage directly, such as Apache Hadoop or various content management systems such as Drupal and Wordpress.</p>
<h3>Participating in closed beta</h3>
<p>Before making this new feature accessible to everyone, we would like to invite a limited number of users to test our object storage extensively. Feedback will of course be considered as part of the further development.</p>
<p>Would you like to be among the first to test our object storage? We are looking forward to learning about your experience and are happy to provide you with beta access – a short <a href="https://www.cloudscale.ch/en/about">email</a> will do.</p>
<br/>
<p>Ready to listen!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[OpenStack Mitaka and More
]]></title>
          <link>https://www.cloudscale.ch/en/news/2016/11/28/openstack-mitaka-and-more</link>
          <pubDate>Mon, 28 Nov 2016 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2016/11/28/openstack-mitaka-and-more</guid>
          <description>
            <![CDATA[<p>An old IT adage says that every company has a test environment. Yet some companies are in the fortunate position of having a separate production environment as well...<br/> All joking aside, we firmly believe that we can only ensure reliable operations in production when using a separate test environment – and our customers will surely agree with us on this point. The topics in this brief review:</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Upgrade to OpenStack &quot;Mitaka&quot;</h3>
<p>We have previously described our approach regarding OpenStack upgrades: We are running a separate test environment using the same hardware components as in our productive setup. There we are able to test every change until we are convinced that it can be rolled out safely into production.</p>
<p>A few weeks ago, another OpenStack upgrade was scheduled: &quot;Mitaka&quot; was going to replace &quot;Liberty&quot;. A delicate matter, if one bears in mind that OpenStack is at the very heart of our cloud infrastructure. Thanks to comprehensive tests in advance, we were able to upgrade all of the systems smoothly and without affecting our customers&#x27; virtual servers. Only changes through our cloud control panel were not possible for a short period of time – the corresponding announcement to our customers was, of course, already part of the preparatory work.</p>
<h3>Faster Live Migration</h3>
<p>In addition to major upgrades, regular systems maintenance and improvements are part of our day-to-day routine. Usually, our customers do not notice a thing: we can move all virtual servers to other compute hosts on the fly thus avoiding disruptions to customer systems. We were even able to speed up this process by a factor of 5 by switching from &quot;libvirt tunnelled migration&quot; to &quot;QEMU direct migration&quot; recently. Now maintenance tasks such as kernel upgrades that require a restart of all compute hosts can be completed even faster.</p>
<h3>OpenStack Summit in Barcelona</h3>
<p>For us as an active member of the OpenStack community, the OpenStack Summit in Barcelona in late October 2016 was more than just a mandatory appearance. In an inspiring environment consisting of thousands of developers and users from around the world, everything revolved around OpenStack and its comprehensive ecosystem. A wide range of presentations provided insights into all possible aspects, including how to manage huge setups, maintaining OpenStack&#x27;s own open-source code, and a look ahead at future improvements to Ceph. Since 2014, Ceph has been the backbone of our distributed storage cluster and is now being used more and more frequently with OpenStack.</p>
<p>Equally valuable, of course, was the personal exchange with new and familiar faces, whether in numerous sessions with core developers or spontaneous discussions outside the official agenda. And, as far as statistics go, our regular upgrades to new versions of OpenStack make us one of the most modern OpenStack cloud service providers in the world.</p>
<br/>
<p>Our own past experience as well as the community with its combination of drive and professionalism continue to reinforce our opinion: OpenStack as the technological basis for our self-service cloud was clearly the right choice.</p>
<p>With this conviction we will therefore continue to contribute to further developing the OpenStack source code in the future and are already working on the next features for our own cloud.</p>
<p>First and foremost,<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Workflow Automation With Our New API
]]></title>
          <link>https://www.cloudscale.ch/en/news/2016/11/03/workflow-automation-with-our-new-api</link>
          <pubDate>Thu, 03 Nov 2016 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2016/11/03/workflow-automation-with-our-new-api</guid>
          <description>
            <![CDATA[<p>Many projects quickly grow into a set of several virtual servers: e.g. to separate &quot;development&quot;, &quot;staging&quot; and &quot;production&quot; environments, to test new releases of a certain software or by deploying additional servers to handle load peaks. Thanks to our new API you can now include the deployment of new servers in your configuration management tool or task-specific scripts and thereby simplify and speed up your workflow.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Get ready in just one step</h3>
<p>All it takes to use our new API is an API token: A secret string that you need to include in all your calls to our API.</p>
<p>You can create as many tokens as you need in your account settings of our cloud control panel. For reporting-only purposes, creating a &quot;read access&quot; token is a safe choice. If you intend to also make changes through the API (e.g. create or reboot a server), enable &quot;write access&quot; as well.</p>
<p>Make sure to choose a meaningful name so you can tell your tokens apart. This will help you to identify a token in case you need to revoke it later. Please note that we only keep a hashed copy of your tokens in our database. Therefore they cannot be displayed again later on.</p>
<h3>How to use our API</h3>
<p>Our API is an HTTPS API following the REST paradigm, and therefore compatible with virtually everything you might be working with. As a basic example, you can display a list of your current servers right from your local command line by using:</p>
<pre><code class="language-bash">curl -H &#x27;Authorization: Bearer YourCloudscaleApiToken&#x27; https://api.cloudscale.ch/v1/servers
</code></pre>
<p>Any required parameters can be supplied in either JSON or URL-encoded form. The API returns information in JSON format for easy parsing and further processing. All available functions are documented at <a href="https://www.cloudscale.ch/en/api">https://www.cloudscale.ch/en/api</a>, complete with the correct HTTP method, parameters, and an example.</p>
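<p>Because the API speaks plain JSON over HTTPS, its responses can be processed with standard tools. A minimal sketch using a canned sample response (the field names and values here are made up for illustration):</p>

```shell
# Pretty-print a JSON response like the API returns (sample data only)
echo '{"name": "web-1", "status": "running"}' | python3 -m json.tool
```

<p>In real scripts, you would pipe the output of <code>curl</code> into a JSON-aware tool in the same way and pick out the fields you need.</p>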
<h3>Some use cases and hints</h3>
<p>The potential use cases enabled by our API are endless: If your coworkers need to set up predefined infrastructures repeatedly, you can now provide them with a script that makes use of our API and the <a href="https://www.cloudscale.ch/en/news/2016/07/21/improved-efficiency-comfort-and-control">cloud-config feature</a>. Or you might want to automate information retrieval – for your project members, for documentation or a real-time dashboard.</p>
<p>Be aware that all API calls are executed without further confirmation, so you can reliably use them in an automated fashion. Since an API token is comparable to a password, be careful about where you store it (e.g. not in public repositories) and who can access it (e.g. via the script location or your bash history), and revoke it quickly if necessary.</p>
<p>When deleting a server, you will now get a pro rata refund for the remainder of our 24-hour billing period. Feel free to test different setups and features, and replace older servers with a fresh installation at no extra cost. If you are using additional, short-lived servers to cover periodical load peaks, this change will obviously benefit you too.</p>
<br/>
<p>While we keep working on our easy-to-use cloud control panel for humans, we are happy to announce that cloudscale.ch is now equally easy to use for computers.</p>
<p>Happy automating!<br/>
Your cloudscale.ch team</p>
<br/>
<p>PS: As an announcement to our fellow Python users: We are working on being included in an upcoming Libcloud release to further facilitate using our API. Stay tuned!</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Increasing Availability Using Anti-Affinity
]]></title>
          <link>https://www.cloudscale.ch/en/news/2016/10/21/increasing-availability-using-anti-affinity</link>
          <pubDate>Fri, 21 Oct 2016 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2016/10/21/increasing-availability-using-anti-affinity</guid>
          <description>
            <![CDATA[<p>May we introduce: Anti-Affinity. Use this small but powerful new feature to build even more resilient setups. Furthermore, we would like to share some insights in how we approach high availability (HA) on many different levels.</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>How to benefit most from Anti-Affinity</h3>
<p>With the most popular virtualization technologies, a crashing physical compute host inevitably takes down all the servers that were running on it. As a remedy, you may already have &quot;N+1 redundancy&quot; in place by clustering multiple virtual servers (e.g. multiple web workers or a DB cluster) to increase your solution&#x27;s availability. It is possible, however, that some of those servers are running on the same physical host by coincidence.</p>
<p>With our new Anti-Affinity feature, you can ensure that servers with identical tasks will always be running on separate physical hosts. This effectively protects you against the impact of a single compute host&#x27;s hardware defects.</p>
<h3>What measures we take to maximize availability</h3>
<p>Of course, a server failure is still annoying. That is why we at cloudscale.ch only use systems which are designed to always keep running and stay online:</p>
<p>All of our systems are equipped with redundant, hot-swappable power supplies. All physical servers are connected to multiple switches simultaneously and can be administered through a separate out-of-band management network.</p>
<p>In case of a defective compute host, all affected virtual servers are restarted promptly on separate, hot-standby compute hosts. Thanks to our distributed storage cluster based on Ceph, the content of the hard disk will be left intact. Moreover, with a replication factor of 3, your data is well protected against hardware defects – a risk category which we further reduce by using enterprise grade SSDs only.</p>
<p>On the level of supplies we have built-in redundancies, too, aiming for the highest availability: For one thing, we can rely on the data center&#x27;s redundant cooling as well as two power sources, both of them backed by UPS systems and diesel generators. For another, we maintain multiple Internet connections with different upstream providers and a link to the SwissIX Internet Exchange.<br/>
Finally, we run our own critical software (e.g. OpenStack components) following the N+1 principle to operate seamlessly through a possible outage.</p>
<h3>Why HA is more than just a reliable server</h3>
<p>To us, availability means that your servers are running and reachable. Think about what availability means to you – or your users, for that matter. What could go wrong, and how can you prevent or minimize a negative impact? Choose the approach and tools which suit you and your use case best. In the end it is all about being prepared.</p>
<p>For servers that work!<br/>
Your cloudscale.ch team</p>
<br/>
<p>PS: Matching the topic, André Keller of VSHN AG and our CEO Manuel Schweizer will give a presentation addressing &quot;How to increase availability using ExaBGP&quot;. For registration and more information on this event taking place in Berne on November 4, 2016, see <a href="http://www.swinog.ch/meetings/swinog30/" title="SwiNOG #30 in Berne">http://www.swinog.ch/meetings/swinog30/</a></p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Our Latest Features – Seriously Secure
]]></title>
          <link>https://www.cloudscale.ch/en/news/2016/09/13/our-latest-features-seriously-secure</link>
          <pubDate>Tue, 13 Sep 2016 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2016/09/13/our-latest-features-seriously-secure</guid>
          <description>
            <![CDATA[<p>Many of our users care a lot about security. So do we! We are glad to introduce a whole bundle of new security features that we have rolled out over the last couple of days:</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Two-Factor Authentication</h3>
<p>Do you use complex passwords? Are you sure that nobody watches you typing? Two-factor authentication (2FA) takes security one step further: Logging in requires an additional token (also known as one-time password or OTP) which changes every 30 seconds and is valid for a short time only.</p>
<p>Safeguard your cloudscale.ch account with 2FA now by enabling this new feature in the security settings. Using the popular &quot;Google Authenticator&quot; or any other TOTP app, you can turn your smartphone into an additional key protecting your cloud. By the way: Generating your tokens happens completely offline.</p>
<h3>Session Management</h3>
<p>If you close your browser while logged into our cloud control panel, normally your session cookie will automatically be deleted. Next time, you will need to log in again. Now you can choose to &quot;stay signed in&quot; upon login. Of course you can stay logged in with multiple devices simultaneously.</p>
<p>But what if you forgot to log out after using someone else&#x27;s computer? Just navigate to the new session overview to see all devices currently logged in, complete with IP address and browser. Here you can end any session immediately to make sure nobody tampers with your account.</p>
<h3>Host Key Verification</h3>
<p>SSH host keys are sort of a server&#x27;s passport: they guarantee that you are connecting to the right machine and that there is no man-in-the-middle manipulating your traffic. If a server presents a different host key than the one in your known_hosts file, you will immediately get a warning and will probably want to investigate further.</p>
<p>Many people inevitably trust host keys on first use. Now you can actually verify the keys of your newly created servers: Virtual servers generate their unique host keys on first boot and write the public parts to their serial console. There we pick them up and display their fingerprints in our cloud control panel. Go check for yourself!</p>
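<p>To compare fingerprints on your side, you can print them locally with <code>ssh-keygen</code>. A sketch using a throwaway key (on a real server you would inspect the public keys under <code>/etc/ssh/</code> instead):</p>

```shell
# Generate a throwaway key pair purely for demonstration purposes
ssh-keygen -t ed25519 -N '' -f /tmp/demo_host_key -q
# Print its fingerprint, just as you would for
# /etc/ssh/ssh_host_ed25519_key.pub on the server
ssh-keygen -lf /tmp/demo_host_key.pub
```

<p>The fingerprint printed this way is what you compare against the one shown in our cloud control panel before trusting the host key.</p>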
<br/>
<p>Trust is good, control is better. Keeping control just got easier with our latest security features. After all, cloud computing is to us what online banking is to others.</p>
<p>Signed, not hashed.<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Introducing Bulk Storage
]]></title>
          <link>https://www.cloudscale.ch/en/news/2016/08/17/introducing-bulk-storage</link>
          <pubDate>Wed, 17 Aug 2016 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2016/08/17/introducing-bulk-storage</guid>
          <description>
            <![CDATA[<p>One of the key ingredients of a highly performing server is fast data access. This is why all of our virtual machines come with 10 GB of distributed SSD-only storage included. But what if your specific use case asks for lots of disk space rather than performance? That&#x27;s where &quot;Bulk Storage&quot; comes into play.</p>]]>
          </description>
          <content:encoded><![CDATA[<p>We have summarized some key information on this new feature below:</p>
<h3>When to best use bulk storage</h3>
<p>It does not have to be &quot;big data&quot;: Bulk storage is perfect for keeping archived versions of your work or an off-site backup of your local hard disk. Run the free &quot;Seafile&quot; or &quot;ownCloud&quot; software to keep folders synchronized across your devices – the new bulk storage is the cost-efficient place to keep your files in the cloud.</p>
<p>You can also benefit from adding bulk storage in a traditional server setting: Use it to store your website&#x27;s static assets like images and videos, for your database dumps and ever-growing log files. And it is the easiest way to keep certain files separated from your root file system.</p>
<h3>How to use it in practice</h3>
<p>Using our cloud control panel you can add bulk storage to your virtual servers at any time. When launching a new server, the additional volume will automatically be ext4 formatted and mounted to /mnt/bulk so you can use the extra space right away.</p>
<p>You can also add bulk storage in the &quot;storage&quot; tab of your existing servers. We provide instructions on how to partition and mount it manually for all of our supported operating systems. And similar to the SSD-only storage, you can always scale up your bulk storage volume in case you need more space.</p>
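<p>On an existing server, the manual steps boil down to formatting the new device (e.g. with <code>mkfs.ext4</code>), mounting it to <code>/mnt/bulk</code> and making the mount persistent – a sketch of the corresponding <code>/etc/fstab</code> entry, assuming the volume shows up as <code>/dev/vdb</code> (check <code>lsblk</code> on your server for the actual name):</p>

```
# /etc/fstab entry for a bulk storage volume (device name is an assumption)
/dev/vdb  /mnt/bulk  ext4  defaults  0  2
```

<p>The instructions in the &quot;storage&quot; tab cover the equivalent steps for each of our supported operating systems.</p>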
<h3>The technology under the hood</h3>
<p>While we use spinning disks to keep costs for bulk storage low, we put in quite some effort to offer the best place for your big chunks of data. To ensure high availability, we use Ceph with a replication factor of 3 which is identical to the configuration of our SSD-only storage cluster. For optimal performance, bulk storage is kept on dedicated storage servers separated from the existing systems. In addition we were able to speed up write operations dramatically by using SSDs for the Ceph journals.</p>
<p>Needless to say that all of your data on our bulk storage cluster is kept exclusively in Swiss data centers – this holds true for all of our systems.</p>
<br/>
<p>Some of the most valuable data does not require cutting-edge IOPS. Use our new bulk storage feature to cost-efficiently store those files in a safe place, ready whenever you need them.</p>
<p>Move your attic to the cloud!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Improved Efficiency, Comfort, and Control
]]></title>
          <link>https://www.cloudscale.ch/en/news/2016/07/21/improved-efficiency-comfort-and-control</link>
          <pubDate>Thu, 21 Jul 2016 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2016/07/21/improved-efficiency-comfort-and-control</guid>
          <description>
            <![CDATA[<p>As an IT professional, you are probably optimizing things constantly. So are we! We have released several enhancements that will help you to streamline your workflow:</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Use &quot;cloud-config&quot; for your initial VM setup</h3>
<p>If setting up new cloud servers is a repetitive task, why not automate it? All of the operating systems that we currently offer have the cloud-init package built-in. By providing so-called &quot;User Data&quot; you can now pass your own settings to cloud-init: your tailor-made cloud-config. The possibilities are endless – we know of people who do not even need to manually SSH into their new servers anymore.</p>
<p>You can give it a try by using the &quot;timezone&quot; or &quot;fqdn&quot; (Fully Qualified Domain Name) parameter to configure your new server right from the launch screen. For more complex operations, the &quot;runcmd&quot; statement will take your commands and run them on first boot. This is an advanced feature: check out the <a href="https://cloudinit.readthedocs.io/en/latest/topics/examples.html" title="Cloud-Init Examples">examples</a> and proceed with caution.</p>
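<p>A short cloud-config illustrating the parameters mentioned above – the values are placeholders, and the <code>runcmd</code> entry is only an example of something you might run on first boot:</p>

```yaml
#cloud-config
timezone: Europe/Zurich
fqdn: web-1.example.com
runcmd:
  # runs once, on the very first boot of the server
  - touch /root/first-boot-done
```

<p>Paste such a snippet into the &quot;User Data&quot; field on the launch screen and cloud-init will apply it when the server first starts.</p>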
<h3>Rename your servers to accurately reflect their purpose</h3>
<p>While you are optimizing your setup, the number of machines or even the software you are running might change: Once you realize that your DB replica is mainly used to generate reports, you will probably rename the machine with Linux&#x27;s on-board tools. For a consistent experience you can now adjust server names in our cloud control panel as well.</p>
<p>If you are working on multiple projects or for several different customers, using descriptive server names can support you in keeping track of your assets. Moreover, it will help you to avoid confusion further down the road.</p>
<h3>Change PTR records for a coherent appearance</h3>
<p>You can now use our cloud control panel to set and update the PTR records of your servers&#x27; IP addresses. If you specify the FQDN of your server using cloud-config (as described above), we will automatically set the PTR records of your IPs accordingly. Of course, you can change those entries at any time, should you ever need to. Keep in mind that caching DNS servers might reflect changes with some delay.</p>
<p>For some applications, especially email, it is vital to have matching &quot;forward&quot; and &quot;reverse&quot; DNS entries (&quot;Forward-confirmed reverse DNS&quot;): When setting a new PTR record for a specific IP address, make sure to have an according A/AAAA record pointing back to the same IP. Furthermore, your services should identify themselves using that same string.</p>
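<p>As an illustration, forward-confirmed reverse DNS for a hypothetical mail server looks like this (hostname and address are documentation values, not real entries):</p>

```
; Forward and reverse entries must point at each other (illustrative values)
mail.example.com.          IN  A    192.0.2.25
25.2.0.192.in-addr.arpa.   IN  PTR  mail.example.com.
```

<p>The A record lives in your domain&#x27;s zone, while the PTR record is the one you set for the IP address in our cloud control panel.</p>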
<br/>
<p>Who says that small improvements cannot have a huge impact? With the latest additions, you can manage your cloud with a smile on your face.</p>
<p>Keep on optimizing!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Private Networking Available at cloudscale.ch
]]></title>
          <link>https://www.cloudscale.ch/en/news/2016/07/04/private-networking-available</link>
          <pubDate>Mon, 04 Jul 2016 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2016/07/04/private-networking-available</guid>
          <description>
            <![CDATA[<p>You asked for it, here it is: cloudscale.ch now offers private networking. Interconnect your virtual servers in a more secure way using a dedicated interface separated from the public Internet. Let us quickly take you through the most relevant aspects of this new feature:</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>When you should use private networks</h3>
<p>A well-known use case for private networks are &quot;tiered architectures&quot;: Create a number of frontend servers (e.g. web workers) that directly serve your users over the Internet. Those servers then use a second, private network interface to connect to their backend servers (e.g. DB or business logic) which are not publicly accessible. This design minimizes the number of exposed services, increasing the overall security of your setup.</p>
<p>Thinking beyond common web services, you can now use our cloud for virtually any application that you previously operated on-premises. Be it email, file storage, your wiki, or your virtual PBX: Just use our &quot;private network&quot; feature. With a VM acting as a gateway, central firewall, and/or VPN endpoint, you are in full control over who has access to your (private) machines.</p>
<h3>How to set up private networking</h3>
<p>Private networking at cloudscale.ch works out of the box: Each server on your private network receives an IP address by DHCP; this address is also displayed in our cloud control panel. You may, of course, statically configure any IPv4 and/or IPv6 address you like – it is your network, after all.</p>
<p>In case of an already running server that has no private network interface yet, you can add one any time later. Config snippets will help you setting up the additional interface in the operating system you chose.</p>
<h3>A peek behind the scenes</h3>
<p>We allocate a separate VXLAN to each user, tunneling your private network&#x27;s traffic between our compute nodes and thereby keeping it completely separated from other customers&#x27;. Using this setup, we also make sure that packets inside your private network never leave our backbone.</p>
<p>We assign a random /24 subnet out of the private 172.16.0.0/12 address block to each user. By doing so, we try to avoid confusion with private addresses that you might be using elsewhere. In case you prefer the DHCP servers to pick addresses from a different subnet, just let our support team know.</p>
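<p>The allocation described above can be illustrated in a few lines of Python using the standard <code>ipaddress</code> module (an illustrative sketch, not cloudscale.ch's actual provisioning code):</p>

```python
import ipaddress
import random

def random_private_24(rng=random):
    """Pick a random /24 out of the private 172.16.0.0/12 block."""
    block = ipaddress.ip_network("172.16.0.0/12")
    # A /12 contains 2**(24 - 12) = 4096 possible /24 subnets.
    candidates = list(block.subnets(new_prefix=24))
    return rng.choice(candidates)

subnet = random_private_24()
print(subnet)  # prints a random /24 inside 172.16.0.0/12
```

<p>Picking from 4096 candidates keeps the chance of colliding with a subnet you already use elsewhere low, which is exactly the confusion the random assignment tries to avoid.</p>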
<br/>
<p>Our self-service cloud platform now offers a solid foundation for an even broader range of applications, thanks to the added support of tiered architectures. Use a private network to protect your valuable data, and only expose the services you actually want to be publicly accessible.</p>
<p>Happy (private) networking!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[IPv6 – Welcome to "The Bigger Internet"
]]></title>
          <link>https://www.cloudscale.ch/en/news/2016/06/21/ipv6-welcome-to-the-bigger-internet</link>
          <pubDate>Tue, 21 Jun 2016 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2016/06/21/ipv6-welcome-to-the-bigger-internet</guid>
          <description>
            <![CDATA[<p>We are happy to announce that we are now offering native IPv6 support for all our cloud servers. By enabling IPv6 on your servers today, you can directly reach the rapidly growing number of IPv6 users around the world.</p>]]>
          </description>
          <content:encoded><![CDATA[<p>To get you started, we have summarized the most important aspects for you:</p>
<h3>Why you should use IPv6 now</h3>
<p>An increasing number of Internet users and devices are already using IPv6, some of them even without having a &quot;legacy&quot; IPv4 address. Once you enable IPv6 on a server, they can access your services more efficiently: fewer DNS requests, no more NAT, and no bottlenecks imposed by tunneling services which were required to connect both worlds. Did you know that various content providers report application performance improvements of up to 25% when using IPv6?</p>
<p>Let&#x27;s face it: IPv6 is state of the art.</p>
<h3>How to enable IPv6 on your virtual servers</h3>
<p>IPv6 can be enabled for both existing and new virtual machines: When creating a new VM, simply check &quot;Enable IPv6&quot; and do not forget to add the corresponding AAAA records to your DNS zones – that&#x27;s it! For existing VMs, click the &quot;Enable IPv6&quot; button in the &quot;Network&quot; tab of our cloud control panel. There you will also find instructions on how to get DHCPv6 up and running for your virtual server&#x27;s operating system.</p>
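<p>An AAAA record sits right next to the existing A record in a BIND-style zone file. The names and addresses below are purely illustrative (documentation ranges, not real ones):</p>

```text
; Existing IPv4 record and the new IPv6 record for the same host
www.example.com.    3600    IN  A       203.0.113.10
www.example.com.    3600    IN  AAAA    2001:db8::10
```

<p>Clients that prefer IPv6 will pick up the AAAA record automatically once it has propagated.</p>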
<p>By default, we assign a single IPv6 address per VM. Need more? Our support is happy to route up to a /48 to your VM&#x27;s IPv6 address on request.</p>
<h3>A quick note on security</h3>
<p>You will probably want to adjust your firewall rules when enabling IPv6 on your servers. If you stick with our default way of assigning IPv6 addresses, please remember to allow DHCPv6 traffic.</p>
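<p>Assuming an ip6tables-based firewall on the VM, rules along these lines would keep DHCPv6 address assignment working (a sketch to adapt to your own rule set, not an official configuration):</p>

```shell
# ICMPv6 is essential for IPv6 to work at all
# (neighbor discovery, path MTU discovery)
ip6tables -A INPUT -p icmpv6 -j ACCEPT
# DHCPv6 replies arrive from the server's link-local address
# on the client port, UDP 546
ip6tables -A INPUT -p udp -s fe80::/10 --dport 546 -j ACCEPT
```

<p>If you filter outbound traffic as well, allow the corresponding client-to-server traffic to UDP port 547.</p>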
<br/>
<p>Switzerland is one of the world&#x27;s leaders in IPv6 adoption. With cloudscale.ch, you can be, too.</p>
<p>Happy networking!<br/>
Your cloudscale.ch team</p>
<p>PS: Our CEO, Manuel Schweizer, gave a speech at the IPv6 Business Conference in Zurich last week. You can download the slides of all presentations at <a href="http://www.ipv6conference.ch/sessions/">www.ipv6conference.ch/sessions/</a></p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Upgrading OpenStack from Kilo to Liberty
]]></title>
          <link>https://www.cloudscale.ch/en/news/2016/06/09/upgrading-openstack-from-kilo-to-liberty</link>
          <pubDate>Thu, 09 Jun 2016 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2016/06/09/upgrading-openstack-from-kilo-to-liberty</guid>
          <description>
            <![CDATA[<p>At cloudscale.ch, we have been using OpenStack ever since we started out in 2014. Recently, we upgraded from OpenStack &quot;Kilo&quot; to the newer &quot;Liberty&quot; release. It has been an interesting journey, and we would like to share some of our experiences with you:</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Why we upgraded to Liberty</h3>
<p>Having access to security updates is vital – what is true for your own devices applies to cloud service providers even more. While OpenStack&#x27;s Kilo release was reaching end of life, we considered upgrading to a newer release for yet another reason: new and improved capabilities in OpenStack Liberty allow us to provide you with new features faster than before.</p>
<p>We are aware that the even newer OpenStack &quot;Mitaka&quot; has been released recently, so why did we choose Liberty anyway? We are really serious about providing you with a stable cloud infrastructure. Therefore we wanted to gain more experience with Mitaka in our lab before entrusting it with the orchestration of your servers.</p>
<h3>How we prepared for the upgrade</h3>
<p>Knowing how to proceed in theory is one thing, having tested the procedure is another. That is why we built a lab environment using hardware components identical to our live cloud infrastructure. This allowed us to replicate the then-current setup and go through the whole upgrade process several times, sorting out potential problems and eliminating pitfalls.</p>
<p>In preparation for upcoming features in our cloud control panel, we have also taken this opportunity to adjust and extend our OpenStack settings. We will cover those features in separate posts.</p>
<h3>What this means for the future</h3>
<p>OpenStack Liberty is a big step forward. It provides continuity and reliability: it is well-proven, yet will remain supported for quite some time. More importantly, it is a cornerstone for the future evolution of our cloud infrastructure, allowing for numerous advanced features which we will release over the coming weeks and months.</p>
<p>With a dedicated and independent lab environment, we can now test things in a more efficient way. Be it a new feature, an optimized configuration or the next major upgrade, we have the means to safely do as many dry runs as we want before actually touching a productive system.</p>
<br/>
<p>After this upgrade, we have an even better technological base to build on. And on top, we gained another powerful tool to meet the quality standards we are committed to.</p>
<p>On your mark, test, go!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Re-Implementation of our Cloud Control Panel
]]></title>
          <link>https://www.cloudscale.ch/en/news/2016/05/06/re-implementation-of-our-control-panel</link>
          <pubDate>Fri, 06 May 2016 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2016/05/06/re-implementation-of-our-control-panel</guid>
          <description>
            <![CDATA[<p>Sometimes, fundamental improvements happen in the background and are therefore almost invisible to end users. We went through a full code change (including switching programming languages) over the last couple of months. A natural next step if you look at the broader context:</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>Where it all began</h3>
<p>Creating a new product is all about priorities. In our case: Harness the available technology and turn it into a self-service cloud offering which is powerful yet easy to use. With a small team and a clear vision of the end result, we focused on delivering what we had promised as quickly as possible.</p>
<p>While our main building blocks, OpenStack and Ceph, were determined quickly, we had to follow a pragmatic approach when it came to implementing our unique cloud control panel. Using the knowledge already present on the team was key, and so the January release ended up being written in Lua.</p>
<h3>Why we decided it was time for a change</h3>
<p>Having reached GA – one of the most important milestones in the history of a start-up – we felt it was time for a re-evaluation. It quickly became clear to us that there was no better way to take full advantage of OpenStack and its evolving ecosystem than to switch to Python ourselves.</p>
<p>With more and more software developers working on our control panel, we decided to establish new processes: Test-driven development helps us move fast while ensuring high quality. Continuous delivery with multiple deployments per day allows you to take advantage of improvements and new features on a regular basis. Using Python and its surrounding frameworks we were able to leverage the power of agile software development.</p>
<h3>What benefits you can expect</h3>
<p>Following our improved processes, we managed to replace our entire code base without our users even noticing – a seamless rollout. Using newly gained synergies, we are making headway with new features even faster. In about a month&#x27;s time we will provide full IPv6 support for your virtual machines. And private networks are just a little further down the road, with even more cool stuff in the pipeline to be released this summer.</p>
<br/>
<p>In retrospect, re-implementing our code base was the right thing to do at this stage. After releasing an initial product and getting positive feedback from our customers, we are now ready to bring it to the next level.</p>
<p>Get ready for (much) more!<br/>
Your cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Growing the Team at cloudscale.ch
]]></title>
          <link>https://www.cloudscale.ch/en/news/2016/03/03/growing-the-team</link>
          <pubDate>Thu, 03 Mar 2016 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2016/03/03/growing-the-team</guid>
          <description>
            <![CDATA[<p>After developing, testing and launching our new cloud offering, we were facing a challenge that is typical for the stage we are in: growth. In our case, growing the team turned out to be trickier than anticipated. In this short update, we will cover:</p>]]>
          </description>
          <content:encoded><![CDATA[<h3>What our specific requirements were</h3>
<p>With our recently released self-service platform and a growing user base, everything started to look urgent: supporting our customers, adding new features, maintaining and extending our technical infrastructure – and lots of background tasks that you would probably find hard to believe.</p>
<p>It seemed sensible to look mainly for system engineers with programming skills, hoping they would be able to fill in wherever the most pressing issues surfaced. Knowledge of Python was a hard requirement (since we are using OpenStack), along with an open mind, as our cloud control panel is currently written in Lua.</p>
<h3>How we managed to find the right people</h3>
<p>Given the situation we found ourselves in, we just did not have the time and resources to run job ads and sift through hundreds of applications. A job posting in HN&#x27;s &quot;Who is hiring?&quot; yielded some interesting contacts, but ultimately did not lead to successful hires, either.</p>
<p>It was a well-tried, classical method that led us to our new team members: know someone who knows someone. We were lucky enough to find two software engineers with a track record in Python and outstanding Linux skills. Moreover, both make substantial contributions in their fields of expertise and are deeply involved with the open-source community.</p>
<h3>Why reaching out paid off for us</h3>
<p>Unrelated to growing our staff, we chose to reach out to other OpenStack users (after all, we are the new guys on the block). It turned out one of these specialists was very interested in joining our team, and with his skill set in OpenStack and Linux system engineering he was a perfect addition to the growing cloudscale.ch team.</p>
<br/>
<p>We would like to welcome the new faces behind cloudscale.ch! And as we continue to grow, we are looking forward to meeting more great people like them.</p>
<p>Cordially,<br/>
your (growing) cloudscale.ch team</p>]]></content:encoded>
        </item>
        <item>
          <title><![CDATA[Yes, we are open!
]]></title>
          <link>https://www.cloudscale.ch/en/news/2016/01/12/yes-we-are-open</link>
          <pubDate>Tue, 12 Jan 2016 00:00:00 GMT</pubDate>
          <guid isPermaLink="false">https://www.cloudscale.ch/en/news/2016/01/12/yes-we-are-open</guid>
          <description>
            <![CDATA[<p>After admitting more and more users onto our platform, we can start the new year with the most exciting announcement a start-up can probably make:</p>]]>
          </description>
          <content:encoded><![CDATA[<p><strong>General availability of our Swiss public cloud</strong></p>
<p>We have opened free and instant registration to everyone. Having spent more than a year of hard work (with even more to come) creating a simple self-service cloud platform, we are proud to say: Here it is! Welcome to <a href="https://www.cloudscale.ch">https://www.cloudscale.ch</a></p>
<h3>How we prepared for GA</h3>
<p>We recently added some more hardware to our cloud setup to ensure you will always get the capacity and power you need. We implemented internal improvements so we can handle a growing user base, both technologically and as an organization. And we are growing our team to keep up the pace.</p>
<h3>What has changed for you</h3>
<p>We have adjusted our plans to fit your project size and budget even better by lowering our prices and replacing the previous &quot;Flex-1&quot; offering with the bigger &quot;Flex-2&quot; flavor. Simply scale up any existing Flex-1 servers to Flex-2 at no additional cost.</p>
<h3>Why this is only the beginning</h3>
<p>We are excited to welcome lots of new fans from around the globe and are curious to hear what you have achieved using our services – we love feedback!<br/>
Having a fully working cloud infrastructure in place is just a starting point. We are already working on upcoming features like backups and snapshots as well as IPv6 support.</p>
<br/>
<p>Chances are that your friends want to try us, too! Or have they ever launched a server in 10 seconds before? If you pass them our link, they can sign up immediately and benefit from some free credits to test-drive their first server at cloudscale.ch. Thanks for your support!</p>
<p>Best regards from Zurich – Switzerland,<br/>
your cloudscale.ch team</p>]]></content:encoded>
        </item>
      </channel>
    </rss>