<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Antariksh's Tech diary]]></title><description><![CDATA[Antariksh's scratchpad for documenting everything about tech, software engineering and AI.]]></description><link>https://blog.antariksh.dev</link><generator>RSS for Node</generator><lastBuildDate>Thu, 23 Apr 2026 00:33:14 GMT</lastBuildDate><atom:link href="https://blog.antariksh.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[How I connected 300+ AI Models to OpenClaw with Kilo Code API Key]]></title><description><![CDATA[A step-by-step guide to setting up Kilo Code API with OpenClaw — with the sync script that makes it all work.

A few weeks ago, I was juggling three different API keys and auths just to test various AI models. GPT-5.2, sonnet-4.5, Gemini... you get t...]]></description><link>https://blog.antariksh.dev/how-i-connected-300-ai-models-to-openclaw-with-kilo-code-api-key</link><guid isPermaLink="true">https://blog.antariksh.dev/how-i-connected-300-ai-models-to-openclaw-with-kilo-code-api-key</guid><category><![CDATA[AI]]></category><category><![CDATA[automation]]></category><category><![CDATA[claude]]></category><category><![CDATA[#GLM]]></category><category><![CDATA[gpt]]></category><category><![CDATA[kilo code]]></category><category><![CDATA[Kimi]]></category><category><![CDATA[Moonshot]]></category><category><![CDATA[openclaw]]></category><dc:creator><![CDATA[Antariksh Chavan]]></dc:creator><pubDate>Thu, 19 Feb 2026 11:15:58 GMT</pubDate><content:encoded><![CDATA[<p><em>A step-by-step guide to setting up Kilo Code API with OpenClaw — with the sync script that makes it all work.</em></p>
<hr />
<p>A few weeks ago, I was juggling three different API keys and auths just to test various AI models. GPT-5.2, sonnet-4.5, Gemini... you get the picture. Each had its own pricing, rate limits, and authentication quirks. It was a mess and quite pricey.</p>
<p>Then I stumbled upon a tweet by Kilo Code mentioning GLM-5 and MiniMax 2.5 available for free. One API key, 300+ models, unified interface. Sounds perfect, right?</p>
<p>Well, almost. Getting it working with OpenClaw wasn't as straightforward as the docs suggested. I ran into weird context window bugs, reasoning parameter conflicts, and authentication errors that took hours to debug.</p>
<p>This guide is everything I wish I knew before starting. Let's save you that headache.</p>
<hr />
<h2 id="heading-why-kilo-code-api">Why Kilo Code API?</h2>
<p>Managing five different AI provider keys is annoying. Kilo solves that. One API key, 300+ models, all through a single OpenAI-compatible endpoint:</p>
<pre><code>https:<span class="hljs-comment">//api.kilo.ai/api/gateway</span>
</code></pre><p>Supports GPT, Claude, Gemini, Kimi, GLM, and hundreds more. Often cheaper than going direct. Has a generous free tier too.</p>
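<p>Because the gateway speaks the OpenAI wire format, any OpenAI-style request body works against it. A minimal sketch — the <code>/chat/completions</code> path is assumed from the OpenAI convention (not confirmed here), the model name is just an example, and jq is only used to build the JSON body:</p>

```bash
# Build an OpenAI-style chat request body with jq
BODY=$(jq -n --arg model "anthropic/claude-sonnet-4.5" \
  '{model: $model, messages: [{role: "user", content: "Say hi"}]}')
echo "$BODY"

# Then send it (network call, shown for reference only):
# curl -s https://api.kilo.ai/api/gateway/chat/completions \
#   -H "Authorization: Bearer $KILO_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$BODY"
```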
<hr />
<h2 id="heading-step-1-get-your-kilo-api-key">Step 1: Get Your Kilo API Key</h2>
<ol>
<li>Sign up at <a target="_blank" href="https://kilo.ai">kilo.ai</a></li>
<li>Go to your dashboard → <strong>API Keys</strong></li>
<li>Generate a new key and copy it</li>
</ol>
<hr />
<h2 id="heading-step-2-connect-kilo-to-openclaw">Step 2: Connect Kilo to OpenClaw</h2>
<p>Run the onboarding wizard:</p>
<pre><code class="lang-bash">openclaw onboard
</code></pre>
<p>When prompted, select:</p>
<ul>
<li>AI Provider → <strong>Custom Provider</strong></li>
<li>Base URL → <code>https://api.kilo.ai/api/gateway</code></li>
<li>API Type → <strong>OpenAI-compatible</strong></li>
<li>API Key → Your Kilo key from Step 1</li>
<li>Model ID → e.g. <code>anthropic/claude-sonnet-4.5</code></li>
<li>API Alias → <strong><code>kilo-api</code></strong></li>
</ul>
<blockquote>
<p>⚠️ The alias <code>kilo-api</code> is important — the sync script in Step 3 references this exact name in the config. Use it as-is.</p>
</blockquote>
<p>At this point one model works. But to unlock all 300+, you need the next step.</p>
<hr />
<h2 id="heading-step-3-sync-all-models">Step 3: Sync All Models</h2>
<p>The onboarding only registers one model. This script syncs all 300+ and fixes three bugs you would hit otherwise:</p>
<ul>
<li><strong>Context window stuck at 4096</strong> — some models have 128K–256K windows but OpenClaw defaults them down</li>
<li><strong>Reasoning parameter conflict</strong> — 7 models have mutually exclusive reasoning params, causing a 400 error</li>
<li><strong>Model not allowed</strong> — models must appear in both the provider list AND the allowlist; onboarding only handles one</li>
</ul>
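<p>The reasoning-parameter rule boils down to a small truth table. Here is the same decision sketched as a plain-shell helper (hypothetical — the sync script applies this rule in jq against each model's <code>supported_parameters</code>):</p>

```bash
# Decide the "reasoning" flag for a model, mirroring the sync script's rule
decide_reasoning() {
  has_reasoning=$1   # yes/no: model lists "reasoning"
  has_effort=$2      # yes/no: model lists "reasoning_effort"
  if [ "$has_reasoning" = "yes" ] && [ "$has_effort" = "yes" ]; then
    echo "false"     # both present: mutually exclusive, disable to avoid 400s
  elif [ "$has_reasoning" = "yes" ]; then
    echo "true"      # only "reasoning": safe to enable
  else
    echo "false"     # no reasoning support
  fi
}

decide_reasoning yes yes   # -> false
decide_reasoning yes no    # -> true
decide_reasoning no  yes   # -> false
```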
<h3 id="heading-save-as-sync-kilo-modelssh">Save as ~/sync-kilo-models.sh</h3>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>
<span class="hljs-comment"># sync-kilo-models.sh - Sync model metadata from Kilo API to OpenClaw config</span>
<span class="hljs-comment"># Models are added directly under the kilo-api provider (no prefix)</span>

CONFIG_FILE=<span class="hljs-string">"$HOME/.openclaw/openclaw.json"</span>
KILO_API_KEY=$(jq -r <span class="hljs-string">'.models.providers."kilo-api".apiKey'</span> <span class="hljs-string">"<span class="hljs-variable">$CONFIG_FILE</span>"</span>)

<span class="hljs-built_in">echo</span> <span class="hljs-string">"Fetching models from Kilo API..."</span>
MODELS_JSON=$(curl -s <span class="hljs-string">"https://api.kilo.ai/api/gateway/models"</span> \
  -H <span class="hljs-string">"Authorization: Bearer <span class="hljs-variable">$KILO_API_KEY</span>"</span>)

<span class="hljs-comment"># Check if we got valid data</span>
<span class="hljs-keyword">if</span> [ -z <span class="hljs-string">"<span class="hljs-variable">$MODELS_JSON</span>"</span> ] || [ <span class="hljs-string">"<span class="hljs-variable">$MODELS_JSON</span>"</span> = <span class="hljs-string">"null"</span> ]; <span class="hljs-keyword">then</span>
  <span class="hljs-built_in">echo</span> <span class="hljs-string">"❌ Failed to fetch models from Kilo API"</span>
  <span class="hljs-built_in">exit</span> 1
<span class="hljs-keyword">fi</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"Building OpenClaw model config..."</span>

<span class="hljs-comment"># Create the models array from Kilo data</span>
<span class="hljs-comment"># Handle reasoning parameter:</span>
<span class="hljs-comment"># - If model has BOTH "reasoning" AND "reasoning_effort" -&gt; set reasoning: false (they're mutually exclusive)</span>
<span class="hljs-comment"># - If model has ONLY "reasoning" -&gt; set reasoning: true</span>
<span class="hljs-comment"># - Otherwise -&gt; set reasoning: false</span>
MODELS_ARRAY=$(<span class="hljs-built_in">echo</span> <span class="hljs-string">"<span class="hljs-variable">$MODELS_JSON</span>"</span> | jq <span class="hljs-string">'[.data[] | 
  {
    "id": .id,
    "name": (.name // .id),
    "reasoning": (
      if (.supported_parameters // [] | index("reasoning")) and 
         (.supported_parameters // [] | index("reasoning_effort")) then
        false  # Both present - mutually exclusive, let user configure manually
      elif (.supported_parameters // [] | index("reasoning")) then
        true   # Only reasoning supported
      else
        false  # No reasoning support
      end
    ),
    "input": ["text"],
    "cost": {
      "input": ((.pricing.prompt // "0") | tonumber * 1000000),
      "output": ((.pricing.completion // "0") | tonumber * 1000000),
      "cacheRead": 0,
      "cacheWrite": 0
    },
    "contextWindow": (.context_length // .top_provider.context_length // 128000),
    "maxTokens": (.top_provider.max_completion_tokens // 4096)
  }
]'</span>)

<span class="hljs-built_in">echo</span> <span class="hljs-string">"Found <span class="hljs-subst">$(echo <span class="hljs-string">"<span class="hljs-variable">$MODELS_ARRAY</span>"</span> | jq 'length')</span> models"</span>

<span class="hljs-comment"># Count reasoning models</span>
REASONING_COUNT=$(<span class="hljs-built_in">echo</span> <span class="hljs-string">"<span class="hljs-variable">$MODELS_ARRAY</span>"</span> | jq <span class="hljs-string">'[.[] | select(.reasoning == true)] | length'</span>)
<span class="hljs-built_in">echo</span> <span class="hljs-string">"  - <span class="hljs-variable">$REASONING_COUNT</span> models with reasoning (exclusive)"</span>

<span class="hljs-comment"># Count models with both reasoning + reasoning_effort (set to false)</span>
BOTH_COUNT=$(<span class="hljs-built_in">echo</span> <span class="hljs-string">"<span class="hljs-variable">$MODELS_JSON</span>"</span> | jq <span class="hljs-string">'[.data[] | select(
  (.supported_parameters // [] | index("reasoning")) and 
  (.supported_parameters // [] | index("reasoning_effort"))
)] | length'</span>)
<span class="hljs-built_in">echo</span> <span class="hljs-string">"  - <span class="hljs-variable">$BOTH_COUNT</span> models have BOTH reasoning + reasoning_effort (set to false to avoid conflict)"</span>

<span class="hljs-comment"># Create the agents.defaults.models entries (for allowlist)</span>
MODELS_ALLOWLIST=$(<span class="hljs-built_in">echo</span> <span class="hljs-string">"<span class="hljs-variable">$MODELS_JSON</span>"</span> | jq <span class="hljs-string">'[
  .data[] | 
  {
    "key": ("kilo-api/" + .id),
    "value": {"alias": (.id | split("/") | last | gsub(":free"; "") | gsub(":exacto"; ""))}
  }
] | from_entries'</span>)

<span class="hljs-built_in">echo</span> <span class="hljs-string">"Updating OpenClaw config..."</span>

<span class="hljs-comment"># Update both the provider models AND the agents.defaults.models</span>
jq --argjson models <span class="hljs-string">"<span class="hljs-variable">$MODELS_ARRAY</span>"</span> --argjson allowlist <span class="hljs-string">"<span class="hljs-variable">$MODELS_ALLOWLIST</span>"</span> <span class="hljs-string">'
  .models.providers."kilo-api".models = $models |
  .agents.defaults.models = $allowlist
'</span> <span class="hljs-string">"<span class="hljs-variable">$CONFIG_FILE</span>"</span> &gt; <span class="hljs-string">"<span class="hljs-variable">${CONFIG_FILE}</span>.tmp"</span> &amp;&amp; mv <span class="hljs-string">"<span class="hljs-variable">${CONFIG_FILE}</span>.tmp"</span> <span class="hljs-string">"<span class="hljs-variable">$CONFIG_FILE</span>"</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"✅ Config updated!"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">""</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Sample models synced:"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"<span class="hljs-variable">$MODELS_ARRAY</span>"</span> | jq -r <span class="hljs-string">'.[] | "  - \(.id): contextWindow=\(.contextWindow), reasoning=\(.reasoning)"'</span> | head -10
<span class="hljs-built_in">echo</span> <span class="hljs-string">"  ... and <span class="hljs-subst">$(echo <span class="hljs-string">"<span class="hljs-variable">$MODELS_ARRAY</span>"</span> | jq 'length - 10')</span> more"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">""</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Allowlist entries: <span class="hljs-subst">$(echo <span class="hljs-string">"<span class="hljs-variable">$MODELS_ALLOWLIST</span>"</span> | jq 'length')</span>"</span>
</code></pre>
<h3 id="heading-run-it">Run It</h3>
<pre><code class="lang-bash">chmod +x ~/sync-kilo-models.sh
~/sync-kilo-models.sh
</code></pre>
<blockquote>
<p>✅ Expected output: <code>Found 319 models</code> ... <code>✅ Config updated!</code> (the exact count will change as Kilo adds models).</p>
</blockquote>
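<p>Once synced, you can sanity-check what landed in the config. A small sketch — the path and the <code>kilo-api</code> keys follow the script above; adjust if yours differ:</p>

```bash
# Summarize what the sync wrote into the OpenClaw config
summarize_config() {
  cfg=$1
  echo "provider models: $(jq '.models.providers."kilo-api".models | length' "$cfg")"
  echo "allowlist entries: $(jq '.agents.defaults.models | length' "$cfg")"
}

[ -f "$HOME/.openclaw/openclaw.json" ] && summarize_config "$HOME/.openclaw/openclaw.json" || true
```

Both counts should match the "Found N models" line the script printed.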
<hr />
<h2 id="heading-step-4-restart-and-verify">Step 4: Restart and Verify</h2>
<pre><code class="lang-bash">openclaw gateway restart
</code></pre>
<p>Then test a couple of models:</p>
<pre><code>/model kilo-api/moonshotai/kimi-k2<span class="hljs-number">.5</span>
Hey, working?

<span class="hljs-regexp">/model kilo-api/</span>anthropic/claude-opus<span class="hljs-number">-4.6</span>
How about you?
</code></pre><p>If both respond, you're done! 🎉</p>
<p>You will see all the Kilo models with a <code>kilo-api/</code> prefix.</p>
<hr />
<h2 id="heading-free-models-worth-knowing">Free Models Worth Knowing</h2>
<ul>
<li><code>z-ai/glm-5:free</code> — 202K context, general + reasoning</li>
<li><code>minimax/minimax-m2.5:free</code> — 204K context, coding &amp; productivity</li>
<li><code>stepfun/step-3.5-flash:free</code> — 256K context, fast responses</li>
<li><code>arcee-ai/trinity-large-preview:free</code> — 131K context, creative writing</li>
</ul>
<hr />
<h2 id="heading-quick-reference">Quick Reference</h2>
<pre><code class="lang-bash"><span class="hljs-comment"># Switch model</span>
/model kilo-api/moonshotai/kimi-k2.5

<span class="hljs-comment"># Check current model</span>
/status

<span class="hljs-comment"># Re-sync models (run monthly or after Kilo adds new ones)</span>
~/sync-kilo-models.sh &amp;&amp; openclaw gateway restart
</code></pre>
]]></content:encoded></item><item><title><![CDATA[Deploying MongoDB Replica Set on Google Cloud Platform]]></title><description><![CDATA[What is Replication?

Synchronizing the same set of data across multiple servers is a common practice that is followed to ensure the availability of data across various servers.  Data Replication is the process of storing data in more than one site o...]]></description><link>https://blog.antariksh.dev/deploying-mongodb-replica-set-on-google-cloud-platform</link><guid isPermaLink="true">https://blog.antariksh.dev/deploying-mongodb-replica-set-on-google-cloud-platform</guid><category><![CDATA[database]]></category><category><![CDATA[Devops]]></category><category><![CDATA[GCP]]></category><category><![CDATA[MongoDB]]></category><dc:creator><![CDATA[Antariksh Chavan]]></dc:creator><pubDate>Mon, 04 May 2020 14:31:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1588883063281/UBtJip66H.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-what-is-replication">What is Replication?</h2>
<blockquote>
<p>Synchronizing the same set of data across multiple servers is a common practice that is followed to ensure the availability of data across various servers.  Data Replication is the process of storing data in more than one site or node. </p>
</blockquote>
<h2 id="heading-why-do-you-need-replication">Why do you need Replication?</h2>
<blockquote>
<p>Replication provides redundancy and increases data availability. With multiple copies of data on different database servers, replication provides a level of fault tolerance against the loss of a single database server.</p>
</blockquote>
<h2 id="heading-mongodb-replica-set">MongoDB Replica Set</h2>
<blockquote>
<p>MongoDB Replica Set is a group of MongoDB processes known as mongod instances that basically host the same data set. It is featured by one primary node, several secondary nodes for bearing data and optionally one arbiter node.</p>
</blockquote>
<p>You can learn more about MongoDB Replica Set <a target="_blank" href="https://docs.mongodb.com/manual/replication/">here</a></p>
<h1 id="heading-deploying-on-google-cloud-platform-gcp">Deploying on Google Cloud Platform (GCP)</h1>
<p>Let's begin with the deployment. We will be using Google Cloud's Compute Engine for creating VM Instances and the official MongoDB server binaries. We will be deploying 3 nodes (<a target="_blank" href="https://docs.mongodb.com/manual/core/replica-set-arbiter/">Primary + Secondary + Arbiter</a>) on Compute Engine.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1585413369759/SctzIXOxW.png" alt="MongoDB Primary-Secondary-Arbiter architecture" />
<em>Picture Credits: docs.mongodb.com</em></p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<ul>
<li>Basic knowledge of Linux Terminal and DevOps</li>
<li>Google Cloud Project: GCP provides a $300 free trial for everyone signing up for the first time</li>
<li>Domain Name control panel: This is optional but recommended for easily connecting to the Replica Set from public networks.</li>
</ul>
<h2 id="heading-the-metal">The Metal</h2>
<p>We need a minimum of 3 VM instances for a Replica Set. Two of the VMs serve as Primary and Secondary nodes for data replication, and the third is the arbiter node. It is up to you to set the capacity of the VM instances. For the arbiter node, you can save a few bucks by choosing a low-capacity machine (even GCP's free f1-micro instance gets the job done).</p>
<blockquote>
<p>This article demonstrates the deployment process for Primary + Secondary + Arbiter nodes, but you can use the same process to deploy another Secondary node instead of an Arbiter.</p>
</blockquote>
<h3 id="heading-mongodbs-default-port">MongoDB's default port</h3>
<p>MongoDB defaults to port 27017 for connections. For our VMs to open this port, we need to create a Firewall Rule.</p>
<p>In the GCP console, go to <strong>Firewall Rules</strong> under VPC Network and create a <code>mongo-default-port</code> rule with the following config.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1585758787529/jxw3bCo6j.jpeg" alt="GCP-Mongo-Port.jpg" /></p>
<h3 id="heading-primary-and-secondary-node">Primary and Secondary Node</h3>
<p>We are now ready to launch our Virtual Machines. In this PSA configuration, it is recommended to keep the Primary and Secondary nodes at the same configuration, as at any point in time these two nodes can interchange Primary and Secondary positions. For the Arbiter node, you can opt for a low-resource VM. In regards to storage, SSDs are recommended. For more information about hardware considerations, <a target="_blank" href="https://docs.mongodb.com/manual/administration/production-notes/">read this</a></p>
<p>Here's an example VM configuration for the primary node.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1586017582796/BDKrDSaUH.jpeg" alt="GCP-Mongo-VM.jpg" /></p>
<ul>
<li>You can choose any Linux distro but Debian is recommended for its stability and bloatware-free environment.</li>
<li>Make sure you untick HTTP/HTTPS ports (unless you have a specific use case) and add <code>mongo-default-port</code> network tag which we created in the last step.</li>
<li>When working with stateful VMs, always <a target="_blank" href="https://cloud.google.com/compute/docs/disks/add-persistent-disk">attach an external disk</a> to store your data, which will allow you to retain the data when upgrading/migrating VMs and to have a backup service like <a target="_blank" href="https://cloud.google.com/compute/docs/disks/scheduled-snapshots">GCP Snapshot Schedule</a>.</li>
</ul>
<p>If you want to tune your VM for performance, now is a good time. One of the recommended steps for Linux servers is to <a target="_blank" href="https://docs.mongodb.com/manual/tutorial/transparent-huge-pages/">disable Transparent Huge Pages (THP)</a>. You can find a few more performance tweaks on the internet, but be careful because every Linux distro behaves differently.</p>
<h2 id="heading-setting-up-mongodb-replicas">Setting up MongoDB Replicas</h2>
<blockquote>
<p>You need to repeat the following process on all 3 VMs.</p>
</blockquote>
<h3 id="heading-install-mongodb">Install MongoDB</h3>
<p>Here's the <a target="_blank" href="https://docs.mongodb.com/manual/administration/install-on-linux/">official guide</a> on installing MongoDB on Linux. Feel free to use any other community guide as long as it downloads binaries from the official source.</p>
<h3 id="heading-database-path-optional">Database path (Optional)</h3>
<p>By default, MongoDB stores its data in <code>/var/lib/mongodb</code>. If you wish to change the directory, or if you are using an external disk, you need to create a directory at that path.</p>
<pre><code>sudo mkdir -p /mnt/disks/persistent/db
sudo chmod <span class="hljs-number">777</span> /mnt/disks/persistent/db
</code></pre><h3 id="heading-authentication">Authentication</h3>
<p>We will be using <code>keyfile authentication</code> for this guide, but for a production environment you should consider using the more secure <a target="_blank" href="https://docs.mongodb.com/manual/core/security-x.509/">x.509 certificates</a>.</p>
<p>The keyfile contains a password/secret in plain text. You can choose any password or generate a complex random string to store in the keyfile. We place the key inside <code>/var/tmp</code> so it is accessible to all users.</p>
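<p>If you'd rather not invent a secret by hand, you can generate a strong random one — a sketch using <code>openssl</code>, which is common practice for MongoDB keyfiles; the path follows this guide:</p>

```bash
# Generate a ~1KB random base64 secret and store it as the keyfile
openssl rand -base64 756 > /var/tmp/mongo-keyfile
```

If you go this route, the permission and ownership steps below still apply as-is.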
<p>Use nano/vim to create the keyfile and type in your password/secret text.</p>
<pre><code>nano /<span class="hljs-keyword">var</span>/tmp/mongo-keyfile
</code></pre><p>Assign permission and ownership to the keyfile:</p>
<pre><code>sudo chmod <span class="hljs-number">400</span> /<span class="hljs-keyword">var</span>/tmp/mongo-keyfile
sudo chown -R mongodb:mongodb /<span class="hljs-keyword">var</span>/tmp/mongo-keyfile
</code></pre><h3 id="heading-configuration">Configuration</h3>
<p>MongoDB uses <code>/etc/mongod.conf</code> as the startup configuration for the <code>mongod</code> process. We can edit that file to support our Replica Set. You can learn more about mongod.conf <a target="_blank" href="https://docs.mongodb.com/manual/reference/configuration-options/">here</a>.</p>
<p>Below is the configuration file we will use in this guide.</p>
<pre><code># mongod.conf

# Storage: Where and how to store data
# Default dbPath: <span class="hljs-regexp">/var/</span>lib/mongodb
<span class="hljs-attr">storage</span>:
  dbPath: &lt;path-to-database-dir&gt;
  journal:
    enabled: true

# Authorization with keyfile
security:
  keyFile: &lt;path-to-keyfile&gt;

# Logging
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Network Interfaces
# Allow access from all IP addresses
# (bindIp and bindIpAll are mutually exclusive - set only one)
net:
  port: 27017
  bindIp: 0.0.0.0

# Replication config
replication:
   replSetName: dev-rs0
   enableMajorityReadConcern: false

# How the process runs
processManagement:
  timeZoneInfo: /usr/share/zoneinfo

# Cloud config for enabling free monitoring
cloud:
   monitoring:
      free:
         state: on
         tags: [dev-node-0]
</code></pre><p>Set <code>storage.dbPath</code>, <code>security.keyFile</code>, and <code>replication.replSetName</code> in your mongod.conf.</p>
<p>Now, restart your MongoDB daemon (<strong>mongod</strong>) so it starts with the latest mongod.conf.</p>
<pre><code>sudo service mongod restart
</code></pre><p>Verify that the daemon has started successfully.</p>
<pre><code>sudo service mongod status
</code></pre><blockquote>
<p>If the mongod process crashes then check out the debugging section at the bottom of the article.</p>
</blockquote>
<h2 id="heading-networking">Networking</h2>
<h3 id="heading-vm-mapping">VM Mapping</h3>
<p>We will be connecting to our Replica Set using <strong>hostnames</strong>; this allows us to reach the replica set via either the External or Internal IP.</p>
<p>We need to map our Virtual Machines to each other in each VM's <code>/etc/hosts</code> file, with hostnames pointing to the other VMs' <strong>Internal IPs</strong>.</p>
<pre><code><span class="hljs-number">10.128</span><span class="hljs-number">.0</span><span class="hljs-number">.7</span>    mongo-node0.example.com  # Current Machine
<span class="hljs-number">10.128</span><span class="hljs-number">.0</span><span class="hljs-number">.8</span>    mongo-node1.example.com  # Secondary Node
<span class="hljs-number">10.128</span><span class="hljs-number">.0</span><span class="hljs-number">.9</span>    mongo-node2.example.com  # Arbiter Node
</code></pre><p>Similarly, you need to edit the <code>/etc/hosts</code> file on the rest of the VMs as well. You can even use the loopback IP <code>127.0.0.1</code> instead of the Internal IP for the VM that you are currently logged into (the current machine).</p>
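<p>A quick way to confirm the mapping took effect is to resolve each hostname — a sketch using a hypothetical helper; <code>getent</code> consults <code>/etc/hosts</code> before DNS:</p>

```bash
# Print what a hostname resolves to, or flag it as unresolved
check_host() {
  getent hosts "$1" || echo "unresolved: $1"
}

check_host mongo-node0.example.com
check_host mongo-node1.example.com
check_host mongo-node2.example.com
```

Each line should print the Internal IP you mapped; an "unresolved" line means the <code>hosts</code> entry is missing or misspelled.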
<h3 id="heading-domain-mapping">Domain Mapping</h3>
<p>If you have a registered domain, then you need to edit the DNS records to add <strong>A records</strong> pointing to each VM's <strong>External IP</strong>. This will allow us to connect to our Replica Set from the public internet.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1587069351021/qr6WCyKFq.jpeg" alt="GCP-Mongo-DNS.jpg" /></p>
<p>If you don't have a registered domain, or if you wish <strong>not</strong> to make the replica set available on the public internet, then use any arbitrary domain name as the hostname, and on any system that you use to connect to the replica set, edit the system's <code>hosts</code> file to point the hostnames to the VMs' IPs.</p>
<p>If you are trying to connect any other VM (deployed on GCP) to your Replica Set, then you can still point the hostname to the VM's Internal IP. But for devices that will access the Replica Set from <strong>outside</strong> the GCP network (like your local developer machine), point the hostname to the VM's <strong>External IP</strong>:</p>
<pre><code><span class="hljs-number">35.247</span><span class="hljs-number">.10</span><span class="hljs-number">.47</span>      mongo-node0.example.com  # Primary Node
<span class="hljs-number">35.247</span><span class="hljs-number">.199</span><span class="hljs-number">.41</span>     mongo-node1.example.com  # Secondary Node
<span class="hljs-number">35.82</span><span class="hljs-number">.190</span><span class="hljs-number">.33</span>      mongo-node2.example.com  # Arbiter Node
</code></pre><h2 id="heading-initiating-replica-set">Initiating Replica Set</h2>
<p>It's time for us to initiate the Replica Set and connect the nodes to each other. SSH into the VM that you want as the Primary node and connect to the running mongod.</p>
<pre><code>mongo
</code></pre><p>Initiate the Replica Set on this node to make it the Primary node.</p>
<pre><code>rs.initiate()
</code></pre><p>Check the replica set status with</p>
<pre><code>rs.status()
</code></pre><p>If the replica set has been successfully initialized, you'll see the replica set name and an entry in the <code>members</code> array with the current machine's state as <strong>PRIMARY</strong>.</p>
<p>Once the node is in PRIMARY mode, you will lose access to administrative commands inside MongoDB. So let's create a user with DB admin privileges on Mongo's default <code>admin</code> database.</p>
<pre><code>use admin
</code></pre><p>and create the user with admin roles.</p>
<pre><code>db.createUser(
  {
    <span class="hljs-attr">user</span>: <span class="hljs-string">"username"</span>,
    <span class="hljs-attr">pwd</span>: <span class="hljs-string">"password"</span>,
    <span class="hljs-attr">roles</span>: [
      {
        <span class="hljs-attr">role</span>: <span class="hljs-string">"readWriteAnyDatabase"</span>,
        <span class="hljs-attr">db</span>: <span class="hljs-string">"admin"</span>
      },
      {
        <span class="hljs-attr">role</span>: <span class="hljs-string">"userAdminAnyDatabase"</span>,
        <span class="hljs-attr">db</span>: <span class="hljs-string">"admin"</span>
      },
      {
        <span class="hljs-attr">role</span>: <span class="hljs-string">"dbAdminAnyDatabase"</span>,
        <span class="hljs-attr">db</span>: <span class="hljs-string">"admin"</span>
      },
      {
        <span class="hljs-attr">role</span>: <span class="hljs-string">"clusterAdmin"</span>,
        <span class="hljs-attr">db</span>: <span class="hljs-string">"admin"</span>
      }
    ]
  }
)
</code></pre><p>The above list of roles grants almost all the permissions an admin might need, but you can configure the roles as per your requirements. It is mandatory to keep at least one user with the <strong>clusterAdmin</strong> role to manage the Replica Set.</p>
<p>Now exit and reconnect to mongod with the user credentials</p>
<pre><code>mongo -u username
</code></pre><p>You will be prompted to enter the password. Now you can use any regular database commands. Whatever data you create now will be replicated to the nodes that we add in the next step.</p>
<p>When you initiate the Replica Set, it is created with a default <a target="_blank" href="https://docs.mongodb.com/manual/reference/replica-configuration">Replica configuration</a>. You can check the configuration with</p>
<pre><code>rs.conf()
</code></pre><h2 id="heading-adding-other-nodes-to-replica-set">Adding other nodes to Replica Set</h2>
<h3 id="heading-setting-the-priority-for-members">Setting the priority for members</h3>
<p>We need to edit the default replica config to set the priority of our current primary node. Priority can be any integer. I recommend setting a higher priority for the member you want as the first preference in Primary node elections. In this case, we are going to set the priorities as follows:</p>
<ul>
<li>10 - Primary Node</li>
<li>5 - Secondary Node</li>
<li>0 - Arbiter Node</li>
</ul>
<p>As the replica set currently has only one node connected (the primary), we will change its priority by reconfiguring rs.conf().</p>
<p>Save the current configuration in a variable:</p>
<pre><code>cfg = rs.conf()
</code></pre><p>Make sure you have 1 node present in the members array, and then set its new priority just like you would in any other programming language.</p>
<pre><code>cfg.members[<span class="hljs-number">0</span>].priority = <span class="hljs-number">10</span>
</code></pre><p>Now, to make the changes take effect, reconfigure the Replica Set with the new configuration</p>
<pre><code>rs.reconfig(cfg)
</code></pre><h3 id="heading-connecting-secondary-node">Connecting secondary node</h3>
<p>Before proceeding to connect the secondary node, make sure its mongod process is running properly.</p>
<p>Now, from the PRIMARY node, add the secondary node configuration.</p>
<pre><code>rs.add( { <span class="hljs-attr">host</span>: <span class="hljs-string">"mongo-node1.example.com:27017"</span>, <span class="hljs-attr">priority</span>: <span class="hljs-number">5</span>, <span class="hljs-attr">votes</span>: <span class="hljs-number">1</span> } )
</code></pre><p>After exchanging a couple of heartbeats and syncing the data, the node should attain SECONDARY status in <code>rs.status()</code>.</p>
<h3 id="heading-connecting-arbiter-node">Connecting arbiter node</h3>
<p>Similar to the secondary node, we will now add the arbiter node.</p>
<pre><code>rs.add( { <span class="hljs-attr">host</span>: <span class="hljs-string">"mongo-node2.example.com:27017"</span>, <span class="hljs-attr">priority</span>: <span class="hljs-number">0</span>, <span class="hljs-attr">votes</span>: <span class="hljs-number">1</span>, <span class="hljs-attr">arbiterOnly</span>: <span class="hljs-literal">true</span>, <span class="hljs-attr">hidden</span>: <span class="hljs-literal">true</span> } )
</code></pre><blockquote>
<p>This node won't replicate any data and will be hidden from any read requests.</p>
</blockquote>
<p>...and that's about it. You should now have a fully functional MongoDB Replica Set deployed on Google Cloud Platform.</p>
<h2 id="heading-connecting-to-replica-set-from-clients">Connecting to Replica Set from clients</h2>
<p>There are various ways to connect to the replica set, but make sure you use the specified domain names of the nodes and the replica set name specified in mongod.conf.</p>
<p>From MongoDB 3.4 onwards, you can use the <a target="_blank" href="https://docs.mongodb.com/manual/reference/connection-string/">Mongo Connection String</a> in the shell command, for example:</p>
<pre><code>mongo <span class="hljs-string">"mongodb://username:password@mongo-node0.example.com:27017,mongo-node1.example.com:27017,mongo-node2.example.com:27017/?replicaSet=repl-set-name"</span>
</code></pre><p>You will be using the same string in various client drivers like Node.js, Java and Go.</p>
<h2 id="heading-debugging">Debugging</h2>
<h3 id="heading-mongod-process-crashing">mongod process crashing</h3>
<p>You can find the logs for crashes in the file specified at <code>systemLog.path</code> in mongod.conf. By default the path will be <code>/var/log/mongodb/mongod.log</code>. More info about <a target="_blank" href="https://docs.mongodb.com/manual/reference/log-messages/">log messages</a>.</p>
<h3 id="heading-replica-nodes-not-able-to-connect-to-each-other">Replica Nodes not able to connect to each other</h3>
<ul>
<li>Use <code>ping</code> to test the general network connectivity of each node to other nodes.</li>
<li>Verify that your <code>hosts</code> file correctly maps IPs to domain names.</li>
<li>Make sure you have the correct <strong>domain names</strong> as <code>members.host</code> in rs.conf(). Reconfigure if you don't.</li>
</ul>
<h3 id="heading-replica-set-voting">Replica Set Voting</h3>
<p>If you want to test the voting process of a Replica Set, there's a helper method, <code>rs.stepDown()</code>, for switching the primary node. When called on the primary, this method makes the node forgo its primary status and triggers an election among the replica set members.</p>
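<p>The election itself hinges on a strict majority of the voting members. As a standalone sketch (my own helper function, not a MongoDB API), this shows why keeping an odd number of votes, e.g. two data-bearing nodes plus the arbiter, is preferred:</p>

```kotlin
// Votes needed to elect a primary: a strict majority of voting members.
fun votesNeededForPrimary(votingMembers: Int): Int = votingMembers / 2 + 1

fun main() {
    // With 2 voting members, both must vote: a single failure blocks elections.
    println(votesNeededForPrimary(2)) // 2
    // With 3 voting members (e.g. primary + secondary + arbiter),
    // the set can still elect a primary after losing one member.
    println(votesNeededForPrimary(3)) // 2
}
```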
]]></content:encoded></item><item><title><![CDATA[Exploring Apollo GraphQL for Android - Real-world examples]]></title><description><![CDATA[This is Part II of the Apollo GraphQL series that I'm writing for Android. If you are new to Apollo GraphQL then I recommend that you read my  first blog post of this series.
In the previous part, I covered Introduction and setting up Apollo GraphQL ...]]></description><link>https://blog.antariksh.dev/exploring-apollo-graphql-for-android-real-world-examples</link><guid isPermaLink="true">https://blog.antariksh.dev/exploring-apollo-graphql-for-android-real-world-examples</guid><category><![CDATA[Android]]></category><category><![CDATA[Apollo GraphQL]]></category><category><![CDATA[GraphQL]]></category><category><![CDATA[Kotlin]]></category><dc:creator><![CDATA[Antariksh Chavan]]></dc:creator><pubDate>Sun, 01 Sep 2019 10:03:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1567332187161/AQawL4PQo.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This is Part II of the Apollo GraphQL series that I'm writing for Android. If you are new to Apollo GraphQL then I recommend that you read my  <a target="_blank" href="https://blog.antariksh.dev/exploring-apollo-graphql-for-android-cjyx2nnfl004kpjs1kta26sjx">first blog post</a> of this series.</p>
<p>In the previous part, I covered the introduction to and setup of Apollo GraphQL for Android, and we executed our first GraphQL query, receiving data from the server. Now, what about updating data on the server? That resembles a POST request in REST APIs, but in the GraphQL world it's called a Mutation.</p>
<h2 id="heading-mutation">Mutation</h2>
<p>Now let's create a similar .graphql file for our Mutation.</p>
<pre><code>mutation UpdateUser(
    $userId: <span class="hljs-built_in">String</span>!
    $name: <span class="hljs-built_in">String</span>
    <span class="hljs-attr">$phone</span>: <span class="hljs-built_in">String</span>
    <span class="hljs-attr">$email</span>: <span class="hljs-built_in">String</span>
    <span class="hljs-attr">$age</span>: Int
) {
    updateUserDetails(
        userId: $userId
        <span class="hljs-attr">name</span>: $name
        <span class="hljs-attr">phone</span>: $phone
        <span class="hljs-attr">email</span>: $email
        <span class="hljs-attr">age</span>: $age
    ) {
        userId
    }
}
</code></pre><p>The above mutation is straightforward and can be executed much like a query. In practice, though, the User entity will usually have many more fields and will be defined as a custom input type, so our mutation might look like this:</p>
<pre><code>mutation UpdateUser($user: UserInput!) {
    updateUserDetails(
        user: $user
    ) {
        user {
            _id
            name
        }
    }
}
</code></pre><p>You might be wondering how to pass objects with custom data types (like UserInput in this case) as parameters. Since Apollo generates the classes for you, it also provides a builder() function to instantiate those input classes with data.</p>
<pre><code class="lang-kotlin"><span class="hljs-keyword">val</span> userInput = UserInput.builder()
    .userId(<span class="hljs-string">"user-id-string"</span>)
    .name(<span class="hljs-string">"John Doe"</span>)
    .phone(<span class="hljs-string">"987654321"</span>)
    .email(<span class="hljs-string">"hello@world.com"</span>)
    .age(<span class="hljs-number">21</span>)
    .build()
</code></pre>
<p>Now you can pass this object as the mutation's parameter.</p>
<pre><code class="lang-kotlin">apolloClient.mutate(
    UpdateUserMutation.builder()
        .user(userInput)
        .build()
)
    .enqueue(<span class="hljs-keyword">object</span> : ApolloCall.Callback&lt;UpdateUserMutation.Data&gt;() {

        <span class="hljs-keyword">override</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">onResponse</span><span class="hljs-params">(response: <span class="hljs-type">Response</span>&lt;<span class="hljs-type">UpdateUserMutation</span>.<span class="hljs-type">Data</span>&gt;)</span></span> {
            <span class="hljs-keyword">if</span> (!response.hasErrors()) {
                <span class="hljs-comment">// Response successful</span>
                Log.d(TAG, <span class="hljs-string">"Response: <span class="hljs-subst">${response.data()}</span>"</span>)
            } <span class="hljs-keyword">else</span> {
                <span class="hljs-comment">// Request was successful but contains errors</span>
                Log.d(TAG, <span class="hljs-string">"Response has errors: <span class="hljs-subst">${response.errors()}</span>"</span>)
            }
        }

        <span class="hljs-keyword">override</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">onFailure</span><span class="hljs-params">(e: <span class="hljs-type">ApolloException</span>)</span></span> {
            <span class="hljs-comment">// Request Failed</span>
            e.printStackTrace()
        }
    })
</code></pre>
<p>And there you go, we have our first mutation done.</p>
<h3 id="heading-custom-type-adapter">Custom Type Adapter</h3>
<p>There are times when the data types on the server side and the client side won't be compatible. We can use Custom Type Adapters to tackle that. Below is an example where the server accepts a Date as a String in UTC format. The CustomTypeAdapter provides encode() and decode() functions for you to override with your own parsing logic.</p>
<pre><code class="lang-kotlin"><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">CustomDateAdapter</span> : <span class="hljs-type">CustomTypeAdapter</span>&lt;<span class="hljs-type">Date</span>&gt; </span>{

    <span class="hljs-keyword">companion</span> <span class="hljs-keyword">object</span> {
        <span class="hljs-keyword">const</span> <span class="hljs-keyword">val</span> DATE_FORMAT = <span class="hljs-string">"yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"</span>
    }

    <span class="hljs-keyword">override</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">encode</span><span class="hljs-params">(value: <span class="hljs-type">Date</span>)</span></span>: CustomTypeValue&lt;*&gt; {
        <span class="hljs-comment">// Format the Date as a string in the UTC time zone</span>
        <span class="hljs-keyword">val</span> sdf = SimpleDateFormat(DATE_FORMAT, Locale.ENGLISH)
        sdf.timeZone = TimeZone.getTimeZone(<span class="hljs-string">"UTC"</span>)
        <span class="hljs-keyword">return</span> CustomTypeValue.GraphQLString(sdf.format(value))
    }

    <span class="hljs-keyword">override</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">decode</span><span class="hljs-params">(value: <span class="hljs-type">CustomTypeValue</span>&lt;*&gt;)</span></span>: Date {
        <span class="hljs-comment">// Parse the UTC-formatted date string back into a Date</span>
        <span class="hljs-keyword">val</span> sdf = SimpleDateFormat(DATE_FORMAT, Locale.ENGLISH)
        sdf.timeZone = TimeZone.getTimeZone(<span class="hljs-string">"UTC"</span>)
        <span class="hljs-keyword">return</span> sdf.parse(value.value.toString())
    }
}
</code></pre>
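<p>Since the adapter is plain JVM code, its parsing logic can be sanity-checked off-device. Below is a standalone sketch (no Apollo types; encodeUtc/decodeUtc are my own stand-ins for encode()/decode()) that pins the formatter to UTC and verifies the round trip:</p>

```kotlin
import java.text.SimpleDateFormat
import java.util.Date
import java.util.Locale
import java.util.TimeZone

const val DATE_FORMAT = "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"

// A UTC-pinned formatter; SimpleDateFormat is not thread-safe, so build fresh.
fun utcFormatter() = SimpleDateFormat(DATE_FORMAT, Locale.ENGLISH).apply {
    timeZone = TimeZone.getTimeZone("UTC")
}

// Mirrors encode(): Date -> UTC string
fun encodeUtc(value: Date): String = utcFormatter().format(value)

// Mirrors decode(): UTC string -> Date
fun decodeUtc(value: String): Date = utcFormatter().parse(value)!!

fun main() {
    val epoch = Date(0L)
    println(encodeUtc(epoch))                     // 1970-01-01T00:00:00.000Z
    println(decodeUtc(encodeUtc(epoch)) == epoch) // true
}
```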
<p>Now, to actually make the CustomTypeAdapter work, register it when instantiating the ApolloClient.</p>
<pre><code class="lang-kotlin">ApolloClient.builder()
    .serverUrl(<span class="hljs-string">"http://localhost:8080/graphql"</span>)
    .okHttpClient(okHttpClient)
    .addCustomTypeAdapter(CustomType.DATE, CustomDateAdapter())
    .build()
</code></pre>
<p>Then specify which data type the CustomTypeAdapter is mapped to, in your <strong>app-level build.gradle</strong> file:</p>
<pre><code class="lang-gradle">android {
    ...
    apollo {
        customTypeMapping = ["Date": "java.util.Date"]
    }
    ...
}
</code></pre>
<h2 id="heading-file-upload-support">File Upload Support</h2>
<p>Since version 1.0.1, Apollo has added native support for uploading files on Android. It is based on this  <a target="_blank" href="https://github.com/jaydenseric/graphql-multipart-request-spec">specification</a> for the backend GraphQL server.</p>
<p>To get started with file uploads, you need to add <strong>customTypeMapping</strong> to the app-level build.gradle file, similar to the CustomTypeAdapter:</p>
<pre><code class="lang-gradle">apollo {
  customTypeMapping = [
    "Upload" : "com.apollographql.apollo.api.FileUpload"
  ]
}
</code></pre>
<p>The GraphQL schema uses a custom scalar type named Upload for file uploads.</p>
<pre><code class="lang-graphql"><span class="hljs-keyword">mutation</span> UploadFile(
    <span class="hljs-variable">$file</span>: Upload!
    <span class="hljs-variable">$type</span>: Int!
) {
    uploadFile(
        <span class="hljs-symbol">file:</span> <span class="hljs-variable">$file</span>
        <span class="hljs-symbol">type:</span> <span class="hljs-variable">$type</span>
    )
}
</code></pre>
<p>and to call the mutation with file upload:</p>
<pre><code class="lang-kotlin"><span class="hljs-keyword">val</span> file: File = ...
<span class="hljs-comment">// Get the MIME Type for file or else return image/png as default</span>
<span class="hljs-keyword">val</span> mediaType = MediaType.parse(file.getMimeType() ?: <span class="hljs-string">"image/png"</span>)

<span class="hljs-keyword">val</span> apolloCall = UploadFileMutation.builder()
    .file(FileUpload(mediaType.toString(), file))
    .type(<span class="hljs-number">2</span>)
    .build()

<span class="hljs-comment">// Enqueue the call however you like it</span>
</code></pre>
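<p>Note that <code>file.getMimeType()</code> above is not a JDK or Apollo API; I'm assuming it's an extension you define yourself. A minimal sketch using the JDK's built-in content-type table (on Android you would more likely use android.webkit.MimeTypeMap):</p>

```kotlin
import java.io.File
import java.net.URLConnection

// Hypothetical extension backing the file.getMimeType() call above.
// Guesses the MIME type from the file name; returns null when unknown.
fun File.getMimeType(): String? = URLConnection.guessContentTypeFromName(name)

fun main() {
    println(File("avatar.png").getMimeType()) // image/png
}
```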
<p>GraphQL also allows you to upload multiple files in a single mutation, inside an array or as separate input fields, if your API accepts it.</p>
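<p>For instance, assuming the server exposes an uploadFiles field that accepts a list of Upload scalars (a hypothetical schema; adjust to your API), a multi-file mutation could look like:</p>

```graphql
mutation UploadFiles($files: [Upload!]!) {
    uploadFiles(files: $files)
}
```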
<h2 id="heading-rx-support">Rx Support</h2>
<p>It's pretty straightforward to convert callbacks into RxJava's reactive streams. All you have to do is add the Rx2 extension dependency:</p>
<pre><code class="lang-gradle">implementation 'com.apollographql.apollo:apollo-rx2-support:x.y.z'
</code></pre>
<p>and then wrap the query in the Rx2Apollo wrapper:</p>
<pre><code class="lang-kotlin"><span class="hljs-keyword">val</span> observable = Rx2Apollo.from(
    apolloClient.query(
        GetAttendeeDetailsQuery.builder()
            .userId(userId)
            .eventId(eventId)
            .build()
    )
)
    .subscribeOn(Schedulers.io())
    .observeOn(AndroidSchedulers.mainThread())
</code></pre>
<blockquote>
<p>Note: Apollo is soon dropping support for RxJava 1.</p>
</blockquote>
<p><strong>For more information on how to use the Observables you can refer many other blog posts like  <a target="_blank" href="https://www.vogella.com/tutorials/RxJava/article.html">this</a>  and  <a target="_blank" href="https://blog.danlew.net/2014/09/15/grokking-rxjava-part-1/">this</a>.</strong></p>
<h2 id="heading-coroutines-support">Coroutines Support</h2>
<p>Apollo supports coroutines with simple extension functions in Kotlin. You can check them out  <a target="_blank" href="https://github.com/apollographql/apollo-android/blob/master/apollo-coroutines-support/src/main/kotlin/com/apollographql/apollo/coroutines/CoroutinesExtensions.kt">here</a>.</p>
<p>For queries, it's recommended to use coroutine Channels, as they can emit multiple responses (usually one from the cache and one from the network).</p>
<pre><code class="lang-kotlin">GlobalScope.launch {
    <span class="hljs-keyword">val</span> attendeeQuery = apolloClient.query(
        GetAttendeesQuery.builder()
            .eventId(eventId)
            .build()
        )
            .httpCachePolicy(HttpCachePolicy.CACHE_FIRST)

    <span class="hljs-keyword">val</span> channel = attendeeQuery.toChannel()
    channel.consumeEach {
        <span class="hljs-comment">// it.data() contains the response</span>
    }
}

<span class="hljs-comment">// invoke channel.cancel() in Activity/Fragment/ViewModel's onDestroy callbacks</span>
<span class="hljs-comment">// to avoid any memory leaks</span>
</code></pre>
<p>Mutations don't produce multiple response callbacks, so using .toDeferred() is the way to go.</p>
<pre><code class="lang-kotlin">GlobalScope.launch {
    <span class="hljs-keyword">val</span> mutation =  apolloClient.mutate(
        UpdateUserMutation.builder()
            .user(userInput)
            .build()
    )

    <span class="hljs-keyword">val</span> deferred = mutation.toDeferred()
    <span class="hljs-keyword">val</span> response = deferred.await()
}
</code></pre>
<p>The .toDeferred() docs read, "Converts an ApolloCall to an Deferred. This is a convenience method that will only return the first value emitted. If the more than one response is required, for an example to retrieve cached and network response, use toChannel() instead."</p>
<p><strong>Error handling in Coroutines is done using try-catch block.</strong></p>
<p><em>GlobalScope is only used for demonstration purposes. Please use appropriate coroutine scopes or suspend functions in your project.</em></p>
<h3 id="heading-synchronous-requests-with-coroutines">Synchronous requests with Coroutines</h3>
<p>At times you'll need to make a synchronous request, which Apollo used to support out of the box but has since dropped, because many API calls return multiple callbacks due to caching.</p>
<p><strong>The .toDeferred() method can act as a synchronous request, since the .await() function suspends until we get the response.</strong></p>
<p>There's one more implementation, provided by one of the community members (ref.  <a target="_blank" href="https://github.com/apollographql/apollo-android/issues/606#issuecomment-354562134">#606</a>).</p>
<p>To use it, add this Extension function:</p>
<pre><code class="lang-kotlin"><span class="hljs-keyword">suspend</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-type">&lt;T&gt;</span> ApolloCall<span class="hljs-type">&lt;T&gt;</span>.<span class="hljs-title">execute</span><span class="hljs-params">()</span></span> = suspendCoroutine&lt;Response&lt;T&gt;&gt; { cont -&gt;
    enqueue(<span class="hljs-keyword">object</span> : ApolloCall.Callback&lt;T&gt;() {
        <span class="hljs-keyword">override</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">onResponse</span><span class="hljs-params">(response: <span class="hljs-type">Response</span>&lt;<span class="hljs-type">T</span>&gt;)</span></span> {
            cont.resume(response)
        }

        <span class="hljs-keyword">override</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">onFailure</span><span class="hljs-params">(e: <span class="hljs-type">ApolloException</span>)</span></span> {
            cont.resumeWithException(e)
        }
    })
}
</code></pre>
<p>Now you can make synchronous requests by simply calling <strong>.execute()</strong> on Apollo calls.</p>
<pre><code class="lang-kotlin">GlobalScope.launch {
    <span class="hljs-keyword">try</span> {
        <span class="hljs-keyword">val</span> response = client.mutate(
            UpdateUserMutation.builder()
                .user(userInput)
                .build()
        ).execute()

        <span class="hljs-keyword">if</span> (!response.hasErrors()) {
            <span class="hljs-comment">// Response successful</span>
            <span class="hljs-comment">// get data in response.data()</span>
        } <span class="hljs-keyword">else</span> {
            <span class="hljs-comment">// Response has errors</span>
        }
    } <span class="hljs-keyword">catch</span> (e: Exception) {
        <span class="hljs-comment">// Request failed</span>
        <span class="hljs-comment">// It's best practice to catch specific exceptions in separate catch blocks</span>
    }
}
</code></pre>
<p>You can use runBlocking { ... } instead of GlobalScope to return values from inside the coroutine. This method is really useful in  <a target="_blank" href="https://gist.github.com/antarikshc/4103c5a0bd4651e0522fdd3b06b449cd">WorkManager</a> workers.</p>
]]></content:encoded></item><item><title><![CDATA[Exploring Apollo GraphQL for Android]]></title><description><![CDATA[From what looks like such a great client, it definitely lacks the documentation and community support that it deserves. In this blogging series, I'll try to bridge the gap between what is documented and what you can achieve with Apollo's GraphQL clie...]]></description><link>https://blog.antariksh.dev/exploring-apollo-graphql-for-android</link><guid isPermaLink="true">https://blog.antariksh.dev/exploring-apollo-graphql-for-android</guid><category><![CDATA[Android]]></category><category><![CDATA[Apollo GraphQL]]></category><category><![CDATA[GraphQL]]></category><category><![CDATA[Kotlin]]></category><dc:creator><![CDATA[Antariksh Chavan]]></dc:creator><pubDate>Sun, 04 Aug 2019 14:39:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1567332151961/s8HKvP4zc.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For what looks like such a great client, Apollo definitely lacks the documentation and community support it deserves. In this blog series, I'll try to bridge the gap between what is documented and what you can achieve with Apollo's GraphQL client for Android. Now let's get the basics out of the way.</p>
<h3 id="heading-what-is-graphql">What is GraphQL?</h3>
<blockquote>
<p>"GraphQL decouples apps from services by introducing a flexible query language. Instead of a custom API for each screen, app developers describe the data they need, service developers describe what they can supply, and GraphQL automatically matches the two together. Teams ship faster across more platforms, with new levels of visibility and control over how their data is used." - apollographql.com</p>
</blockquote>
<p>In simple words, GraphQL lets the client decide which data they want, rather than having the server send a fixed set of data. GraphQL is an alternative approach to building an API over REST.</p>
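<p>For example, a client that only needs a user's name and email asks for exactly those two fields (the user field and id argument here are hypothetical schema names), and the server's JSON response mirrors the query's shape, containing nothing more:</p>

```graphql
query {
    user(id: "42") {
        name
        email
    }
}
```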
<h4 id="heading-adding-graphql-to-your-android-project">Adding GraphQL to your Android project:</h4>
<p>First of all, you will need to add the repository and dependency to the project-level build.gradle file:</p>
<pre><code class="lang-gradle">buildscript {
  repositories {
    jcenter()
  }
  dependencies {
    classpath 'com.apollographql.apollo:apollo-gradle-plugin:x.y.z'
  }
}
</code></pre>
<p>x.y.z represents the version number you want to add. You can find the latest version number over at  <a target="_blank" href="https://github.com/apollographql/apollo-android">Apollo-Android</a> GitHub repo.</p>
<p>Now apply the plugin in the app-level build.gradle file:</p>
<pre><code class="lang-gradle">apply plugin: 'com.apollographql.android'
</code></pre>
<p>This gives you the core Apollo GraphQL setup. To use Apollo GraphQL on Android, we need to add the <strong>apollo-runtime</strong> and <strong>android-support</strong> dependencies.</p>
<pre><code class="lang-gradle">implementation 'com.apollographql.apollo:apollo-runtime:1.0.0'
implementation 'com.apollographql.apollo:apollo-android-support:1.0.0'
implementation 'com.squareup.okhttp3:okhttp:3.14.2'
</code></pre>
<p>These dependencies provide support for working with Android's UI thread and worker threads. We also add OkHttp because Apollo GraphQL uses it as the underlying network client.</p>
<p>Additionally, if you want Rx2 or Coroutines support then Apollo has you covered as well.</p>
<pre><code class="lang-gradle">implementation 'com.apollographql.apollo:apollo-rx2-support:1.0.0'
implementation 'com.apollographql.apollo:apollo-coroutines-support:1.0.0'
</code></pre>
<ul>
<li>Replace 1.0.0 with the latest version of the libraries</li>
</ul>
<p>There's one more step left before we get started. As the GraphQL schema is defined on the backend, we need to download and add <strong>schema.json</strong> to our Android project.</p>
<pre><code class="lang-bash">apollo schema:download --endpoint=http://localhost:8080/graphql schema.json
</code></pre>
<p>This will require you to install Apollo-CLI.</p>
<p>At the level of the /main folder (inside your project), you need to create the following directory structure and paste schema.json there.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1564036718411/Uk0IVIjXv.png" alt="Android-GraphQL-FileStructure.png" /></p>
<p>And that's about it. We are ready to rock the GraphQL client.</p>
<h4 id="heading-lets-write-our-first-basic-query">Let's write our first basic query</h4>
<p>Create the file GetUser.graphql with the following text in the same location as schema.json:</p>
<pre><code class="lang-graphql"><span class="hljs-keyword">query</span> GetUser( <span class="hljs-variable">$userId</span>: String!, <span class="hljs-variable">$phone</span>: String ) {
  getUserDetails( <span class="hljs-symbol">userId:</span> <span class="hljs-variable">$userId</span>, <span class="hljs-symbol">phone:</span> <span class="hljs-variable">$phone</span> ) {
    name
    email
    age
  }
}
</code></pre>
<ul>
<li>GetUser is the client-side name for the query, and getUserDetails is the query the server expects. </li>
<li>$userId, $phone are the variables that we declare to pass down query parameters.</li>
<li>'name', 'email', 'age' are the fields that we want the query to return. </li>
</ul>
<h4 id="heading-now-the-actual-execution-of-the-query">Now the actual execution of the query:</h4>
<p>The following function builds and returns the Apollo client.</p>
<pre><code class="lang-kotlin"><span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">provideApolloClient</span><span class="hljs-params">( )</span></span>: ApolloClient {
        <span class="hljs-keyword">return</span> ApolloClient.builder()
            .serverUrl(<span class="hljs-string">"http://localhost:8080/graphql"</span>)
            .okHttpClient(OkHttpClient().newBuilder().build())
            .build()
}
</code></pre>
<p>Now build your Android project, which will trigger Apollo to generate all the needed classes. </p>
<blockquote>
<p>Remember, Apollo also parses the response and generates classes for you, so there's no need to build your own data classes and then parse the response with a JSON parsing library.</p>
</blockquote>
<pre><code class="lang-kotlin"><span class="hljs-keyword">val</span> apolloClient = provideApolloClient( )

apolloClient.query(
            GetUserQuery.builder()
                .userId(<span class="hljs-string">"provide-user-id"</span>)
                .phone(<span class="hljs-string">"provide-phone"</span>)
                .build()
        )
            .enqueue(<span class="hljs-keyword">object</span> : ApolloCall.Callback&lt;GetUserQuery.Data&gt;() {

                <span class="hljs-keyword">override</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">onResponse</span><span class="hljs-params">(response: <span class="hljs-type">Response</span>&lt;<span class="hljs-type">GetUserQuery</span>.<span class="hljs-type">Data</span>&gt;)</span></span> {
                    <span class="hljs-keyword">if</span> (!response.hasErrors()) {
                        <span class="hljs-comment">// Here response.data() contains the data you requested</span>
                    } <span class="hljs-keyword">else</span> {
                        Log.d(TAG, <span class="hljs-string">"Request Failure <span class="hljs-subst">${response.errors()}</span>"</span>)
                    }
                }

                <span class="hljs-keyword">override</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">onFailure</span><span class="hljs-params">(e: <span class="hljs-type">ApolloException</span>)</span></span> {
                    e.printStackTrace()
                }
            })
</code></pre>
<p>That's about it!</p>
<p>Now that we have taken the first step into GraphQL world let's dive deeper into some real-world use cases into the  <a target="_blank" href="https://blog.antariksh.dev/exploring-apollo-graphql-for-android-real-world-examples-ck00t4osc001yb4s1naj302my">Part II</a> of this series.</p>
<p>Bonus tip: before writing any new query/mutation, update the GraphQL schema.json and check that the existing .graphql files still compile, as Apollo's codegen generates all the classes at compile time and does not state which file is causing a validation failure.</p>
<p>While writing this article, I found  <a target="_blank" href="https://plugins.jetbrains.com/plugin/8097-js-graphql">this</a> cool GraphQL plugin for Android Studio, which helps with code completion and validation errors. To set up the plugin, create a config file named <em>.graphqlconfig</em> in the same folder where you keep schema.json and the .graphql files.</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"name"</span>: <span class="hljs-string">"Demo App GraphQL Schema"</span>,
    <span class="hljs-attr">"schemaPath"</span>: <span class="hljs-string">"schema.json"</span>,
    <span class="hljs-attr">"extensions"</span>: {
        <span class="hljs-attr">"endpoints"</span>: {
            <span class="hljs-attr">"Default GraphQL Endpoint"</span>: {
                <span class="hljs-attr">"url"</span>: <span class="hljs-string">"http://192.168.0.123:2000/graphql"</span>,
                <span class="hljs-attr">"headers"</span>: {
                    <span class="hljs-attr">"user-agent"</span>: <span class="hljs-string">"JS GraphQL"</span>
                },
                <span class="hljs-attr">"introspect"</span>: <span class="hljs-literal">true</span>
            }
        }
    }
}
</code></pre>
]]></content:encoded></item></channel></rss>