API Logging News

These are the news items I've curated in my monitoring of the API space that have some relevance to the API logging conversation, and that I wanted to include in my research. I'm using all of these links to better understand how the space is logging API activity, going beyond just monitoring, and understanding the details of each request and response.

Business Concerns Around APIs Monitoring Their API Consumption

I heard an interesting thing from a financial group the other day. They won’t use APIs because they know that API providers are logging and tracking their every API request, and they don’t want any 3rd party providers to know what they are doing. It is the business version of privacy and security concerns, where businesses are starting to get nervous about the whole app, big data, and API economy. It exposes another way in which bad behavior in the space will continue to come back and hurt the wider community, all because we refuse to rein in the most badly behaved among us, and because we all want to be able to exploit and capture value via our APIs, sometimes at all costs.

I’ve been a big advocate of keying up valuable data and content, requiring developers to authenticate before they get access to resources, allowing API providers to log and analyze all API traffic. When done right, this can help API providers better understand how consumers are using their resources, but it can also quickly become more about surveillance than awareness building, and will be something that runs developers off when they become aware of what is happening. This is one of the reasons I have surveillance as a stop along the API lifecycle, so that I can better understand this shift as it occurs, and track on the common ways in which APIs are used to surveil businesses and individuals.

As surveillance, and the extraction of individual, business, and government value, continues at the API layer, the positive views of APIs will continue to erode. They will eventually be seen as only about giving away your digital assets for free, and as a mechanism for surveillance and exploitation. While I miss the glory days of the API space, I don’t miss the naivety of the times. I do not have any delusions that technology is inherently good, or even neutral, anymore. I expect the business and political factions of the web to use APIs for bad. I’ll keep preaching that they can be used for good, but I’ll also be calling out the ways in which they are burning bridges and running people off, as the badly behaved tech giants and venture-backed startups keep mining and surveilling their way to new revenue streams.

When APIs fall out of popularity, and companies, organizations, institutions, and government agencies fall back to more proprietary, closed-door approaches to making digital resources available, I won’t be surprised. I’ll know that it wasn’t due to any technical fault of APIs, but because of the business and political motivations of API providers. Profits at all costs. Not actually caring about the privacy and security of consumers. Believing they aren’t the worst behaved, and looking the other way when partners and other actors are badly behaved. It is just business in the end. It isn’t personal. I’m just hoping that individuals begin to wake up to the monitoring and exploitation that is occurring, in the same way businesses are, otherwise we are all truly screwed.


API Life Cycle Basics: API Logging

Logging has always been in the background of other stops along the API lifecycle, most notably the API management layer. However, increasingly I am recommending pulling logging out of API management and making it a first-class citizen, ensuring that the logs of all systems across the API lifecycle are aggregated and accessible alongside other resources. Almost every stop in this basics of an API life cycle series will have its own logging layer, providing an opportunity to better understand each stop on its own, but also side by side as part of the bigger picture.

There are some clear leaders when it comes to logging, searching, and analyzing large volumes of data generated across API operations. This is one area you should not be reinventing the wheel in, and you need to be leveraging the experience of the open source tooling providers, as well as the cloud providers who have emerged across the landscape. Here is a snapshot of a few providers who will help you make logging a first class citizen in your API life cycle.

  • Elastic Stack - Formerly known as the ELK Stack, the evolved approach to logging, search, and analysis out of Elastic. I recommend incorporating it into all aspects of operations, and deploying APIs to make them first-class citizens.
  • Logmatic - Whatever the language or stack, staging or production, front or back, Logmatic.io centralizes all your logs and metrics right into your browser.
  • Nagios - Nagios Log Server greatly simplifies the process of searching your log data. Set up alerts to notify you when potential threats arise, or simply query your log data to quickly audit any system.
  • Google Stackdriver - Google Stackdriver provides powerful monitoring, logging, and diagnostics.
  • AWS CloudWatch - Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS.

I recommend cracking open the logs from EVERY layer, and shipping them into a central system like Elastic to make them accessible. Each stop along the API life cycle will have its own tooling and services, which will most likely come with their own logging and analysis solutions, depending on what is being used. Use these solutions. However, don’t stop there; logs should also be shipped to a central system, where you can consider the benefits of looking at log data side by side, and what the big picture might hold.
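To make this concrete, here is a minimal sketch of shipping a single log entry into a central Elasticsearch system, using the elasticsearch-py 8.x client. The index name, field names, and the localhost address are all assumptions for illustration.

```python
from datetime import datetime, timezone

from elasticsearch import Elasticsearch  # elasticsearch-py 8.x client

es = Elasticsearch("http://localhost:9200")  # assumed local Elastic instance

# A hypothetical log entry from one layer of operations.
log_entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "layer": "gateway",           # which stop along the life cycle emitted it
    "method": "GET",
    "path": "/images/header.jpg",
    "status": 200,
    "api_key": "example-key",     # hypothetical consumer key
}

# Index the entry so it can be searched alongside logs from other layers.
es.index(index="api-logs", document=log_entry)
```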

Logging will significantly overlap with the security stop along the API life cycle. The more logging you are doing, and the more accessible these logs are, the more comprehensive your API security will become. You’ll find this becomes true at other stops along the API life cycle as well, and you will be able to better deliver on discovery, testing, definition, and other areas with a more comprehensive logging strategy. Remember, logging isn’t just about providing a logging layer, it is also about having APIs for your logging, providing a programmatic layer to understand how things are working, or not.


API Developer Account Usage Basics

I’m helping some clients think through their approach to API management. These projects have different needs, as well as different resources available to them, so I’m looking to distill things down to the essential components needed to get the job done. I spent some time thinking through the developer account basics, and now I want to break down the aspects of API consumption and usage around these APIs and developer accounts. I want to think about the moving parts of how we measure, quantify, communicate, and invoice as part of the API management process.

Having A Plan We have developers signed up, with API keys that they’ll be required to pass in with each API call they make. The next portion of API management I want to map out for my clients is the understanding and management of how API consumers are using resources. One important concept that I find many API providers, and would-be API providers, don’t fully grasp is service composition, something that requires the presence of a variety of access tiers, or API plans, which define the business contract we are making with each API consumer. API plans usually have these basic elements:

  • plan id - A unique id for the plan.
  • plan name - A name for the plan.
  • plan description - A description for the plan.
  • plan limits - Details about limits of the plan.
  • plan timeframe - The timeframe for limits applied.

There can be more than one plan, and each plan can provide different types of access to different APIs. There might be plans for new users versus verified ones, as well as possibly partners. The why and how of each plan will vary from API provider to provider, but they are all essential to managing API usage. This is something that needs to be well defined and in place, with APIs and consumers organized into their respective tiers. Once this is done, we can begin thinking about the next layer, logging.
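To ground these elements, here is a minimal sketch of what a couple of plans might look like as plain data structures, in Python purely for illustration. All of the names, limits, and timeframes are hypothetical.

```python
# Hypothetical API plans, using the basic elements listed above.
plans = [
    {
        "plan_id": "new-user",
        "plan_name": "New User",
        "plan_description": "Limited access while an account is unverified.",
        "plan_limits": {"calls": 1000},  # maximum calls allowed
        "plan_timeframe": "day",         # the timeframe the limit applies to
    },
    {
        "plan_id": "verified",
        "plan_name": "Verified",
        "plan_description": "Higher limits for verified consumers.",
        "plan_limits": {"calls": 100000},
        "plan_timeframe": "month",
    },
]
```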

Everything Is Logged Each call made to the API contains an API key which identifies the consumer, who should always be operating within a specific plan. Each API call is logged at the web server, and ideally the API management layer, providing timestamped entries for every drop of APIs consumed. These logs are used across API management operations, providing a history of all usage that will be evaluated on a per API, as well as per API consumer level. If a request and response isn’t available in the API logs, then it didn’t happen.

Quantifying API Usage Every log entry recorded will have the keys for a specific API consumer, as well as the path of the API they are consuming. When this usage is viewed through the lens of the API plan an API consumer is operating within, you have the ability to quantify usage, and place a value on overall consumption by any time frame. This is the primary objective of API management, quantifying who is accessing API resources, and how much they are consuming. This is how valuable API resources are being secured, and in many situations monetized, using simple web APIs.

API usage begins with quantifying what consumers have used, but then there are other dimensions that should be considered as well. For example, usage across API consumers for a single path, or group of paths, or possibly usage across APIs and consumers within a particular plan. Then you can begin to look across time periods, regions, and other relevant information, providing a more complete picture of how APIs are being put to use. This awareness is why API providers are employing API management techniques, beyond what can be extracted from database logs, or just website or application analytics, providing a more holistic view of how APIs are consumed.
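As a minimal sketch of this quantification step, the following counts logged calls per consumer, and per consumer and path, assuming hypothetical log entries that carry an api_key and path field on each record.

```python
from collections import Counter

# Hypothetical log entries, one per API call, each keyed to a consumer.
log_entries = [
    {"api_key": "alice", "path": "/images", "timestamp": "2017-06-01T10:00:00Z"},
    {"api_key": "alice", "path": "/images", "timestamp": "2017-06-01T10:05:00Z"},
    {"api_key": "bob",   "path": "/search", "timestamp": "2017-06-01T11:00:00Z"},
]

# Quantify usage per consumer, and per consumer and path together,
# the two most basic dimensions described above.
usage_by_consumer = Counter(entry["api_key"] for entry in log_entries)
usage_by_path = Counter((entry["api_key"], entry["path"]) for entry in log_entries)

print(usage_by_consumer)  # Counter({'alice': 2, 'bob': 1})
print(usage_by_path)      # Counter({('alice', '/images'): 2, ('bob', '/search'): 1})
```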

Invoicing For Consumption Now that we have all API usage defined, and available through the lens of API plans, we have the option of invoicing for consumption. We know each call API consumers have made, and we have a rate specific to each API as part of the API plans each consumer is subscribed to. All we have to do is the math, and generate an invoice for the designated time period (i.e. daily, weekly, monthly, quarterly). Invoicing doesn’t always have to be settled with cash, as consumption may be internal, with partners, or with a variety of public consumers. I’d also warn against thinking of consumption as always costing the consumer, as sometimes it might be relevant to pay some API consumers for their usage, incentivizing a particular type of behavior that benefits the platform.
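The math itself is simple; here is a minimal sketch that multiplies each consumer's usage by the rate attached to their plan. The rates, plan names, and call counts are all hypothetical.

```python
# Hypothetical per-call rates for each plan, in dollars.
rates_per_call = {"new-user": 0.00, "verified": 0.002}

consumers = [
    {"api_key": "alice", "plan_id": "verified", "calls_this_month": 45210},
    {"api_key": "bob",   "plan_id": "new-user", "calls_this_month": 310},
]

# Generate a simple monthly invoice line for each consumer.
for consumer in consumers:
    amount = consumer["calls_this_month"] * rates_per_call[consumer["plan_id"]]
    print(f"{consumer['api_key']}: ${amount:.2f} for the month")
```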

Measuring, quantifying, and accounting for API consumption is the primary reason companies, organizations, institutions, and government agencies are implementing API management layers. It is how Amazon generates revenue from their Amazon Web Services. It is how Stripe, Twilio, Sendgrid, and other rockstar API providers generate the revenue behind their operations. It is how government agencies are understanding how the public is putting valuable public resources to work. API management is something that is baked into cloud platforms, with a variety of service providers available as well, providing a wealth of solutions for API providers to choose from.

Next I will be looking at the API account and usage layers of API management from the view of the API provider, as well as the API consumer via their developer area. Ideally, API management solutions are providing the dashboards needed for both sides of this equation, but in some of the projects I’m working on this isn’t available. There is no ready-to-go dashboard for API providers or consumers to look at when standing up an AWS API Gateway in front of an API, falling short when it comes to going the distance as an API management solution. You can define accounts, issue keys, and establish plans and limits to manage API consumption, but we need AWS CloudWatch, and other services, to deliver on API logging, authentication, and other common aspects of management, with API management dashboards being a custom thing when employing AWS for management. This is one consideration for why you might go with a more robust API management solution, beyond what an API gateway offers.


That Point Where API Session Management Become API Surveillance

I was talking to my friend’s TC2027 Computer and Information Security class at Tec de Monterrey via a Google hangout today, and one of the questions I got was around managing API sessions using JWT, which was spawned from a story about securing JWT. A student was curious about managing sessions across API consumption, while addressing security concerns, making sure tokens aren’t abused, and making sure there isn’t API consumption from 3rd parties who shouldn’t have access going unnoticed.

I feel like there are two important, and often competing, interests occurring here. We want to secure our API resources, making sure data isn’t leaked, and prevent breaches. We want to make sure we know who is accessing resources, and develop a heightened awareness regarding who is accessing what, and how they are putting it to use. However, the further we march down the road of managing sessions, logging, analyzing, tracking, and securing our APIs, the more we are also simultaneously ramping up the surveillance of our platforms, and of the web, mobile, network, and device clients who are putting our resources to use. Sure, we want to secure things, but we also want to think about the opportunity for abuse, as we are working to manage abuse on our platforms.

To answer the question around how to track sessions across API operations, I recommended thinking about the identification layer, which includes JWT and OAuth, depending on the situation. After that, you should be looking at other dimensions for identifying sessions, like IP address, timestamps, user agent, and any other identifying characteristics. An app or user token is much more about identification than it is about actual security, and to truly validate a session you should have more than one dimension beyond that key, identifying what healthy sessions look like, as well as unhealthy, or unique sessions that might be out of the realm of normal operations.
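Here is a minimal sketch of that multi-dimensional check, using the PyJWT library for token decoding. The secret, the claim names, and the session store are all hypothetical.

```python
import jwt  # PyJWT

SECRET = "example-secret"  # hypothetical signing key

def is_healthy_session(token, request_ip, user_agent, known_sessions):
    """Validate a session on more than just the token dimension."""
    try:
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return False  # the token alone already fails

    # Look up what we already know about this consumer's session.
    session = known_sessions.get(claims["sub"])
    if session is None:
        return False

    # The token identifies the consumer; these extra dimensions help
    # confirm the session looks like the one seen previously.
    if request_ip != session["ip"]:
        return False
    if user_agent != session["user_agent"]:
        return False
    return True
```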

To accomplish all of this, I recommend implementing a modern API management solution, but also pulling in logging from all the other layers, including DNS, web server, database, and any other system in the stack. To be able to truly identify healthy and unhealthy sessions you need visibility, and synchronicity, across all logging layers of the API stack. Do the API management logs reflect what the DNS and web server logs show? This is where access tiers, rate limits, and overall consumption awareness really come in, along with having the right tools to lock things down, and freeze keys and tokens, as well as being able to identify what healthy API consumption looks like, providing a blueprint for what API sessions should, or shouldn’t, be occurring.
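As a small sketch of that synchronicity check, the following compares per-consumer call counts across two logging layers and flags any discrepancies; the field names and tolerance are assumptions.

```python
from collections import Counter

def layer_counts(entries):
    """Count requests per API key within one logging layer."""
    return Counter(entry["api_key"] for entry in entries)

def find_discrepancies(management_logs, web_server_logs, tolerance=0):
    """Flag consumers whose call counts differ across two layers,
    a hint that traffic is bypassing one of them."""
    mgmt = layer_counts(management_logs)
    web = layer_counts(web_server_logs)
    keys = set(mgmt) | set(web)
    return {key: (mgmt[key], web[key])
            for key in keys if abs(mgmt[key] - web[key]) > tolerance}
```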

At this point in the conversation I also like to point out that we should stop and consider at what point all of this API authentication, security, logging, analysis, reporting, and session management becomes surveillance. Are we seeking API security because it is what we need, or just because it is what we do? I know we are defensive about our resources, and we should be going the distance to keep data private and secure, but at some point, by collecting more data and establishing more logging streams, we actually begin to work against ourselves. I’m not saying it isn’t worth it in some cases, I am just saying that we should be questioning our own motivations, and the potential for introducing more abuse, as we police, surveil, and secure our APIs from abuse.

As technologists, we aren’t always the best at stepping back from our work, and making sure we aren’t introducing new problems alongside our solutions. This is why I have my API surveillance research, alongside my API authentication, security, logging, and other management research. We tend to get excited about, and hyper focused on the tech for tech’s sake. The irony of this situation is that we can also introduce exploitation and abuse around our practices for addressing exploitation and abuse around our APIs. Let’s definitely keep having conversations around how we authenticate, secure, and log to make sure things are locked down, but let’s also make sure we are having sensible discussions around how we are surveilling our API consumers, and end users along the way.


The Concept Of API Management Has Expanded So Much The Concept Should Be Retired

API management was the first area of my research I started tracking on in 2010, and has been the seed for the 85+ areas of the API lifecycle I’m tracking on in 2017. It was a necessary vehicle for the API sector to move more mainstream, but in 2017 I’m feeling the concept is just too large, and the business of APIs has evolved enough that we should be focusing in on each aspect of API management on its own, and retire the concept entirely. I feel like at this point it will continue to confuse, and be abused, and that we can get more precise in what we are trying to accomplish, and better serve our customers along the way.

The main concepts of API management at play have historically been about authentication, service composition, logging, analytics, and billing. There are plenty of other elements that have often been lumped in there, like portal, documentation, support, and other aspects, but securing, tracking, and generating revenue from a variety of APIs and consumers has been center stage. I’d say that some of the positive aspects of the maturing and evolution of API management include more of a focus on authentication, as well as the awareness introduced by logging and analytics. One area that worries me is that security discussions often stop with API management, and we don’t seem to be having evolved conversations around service composition, billing, and monetization of our API resources. You rarely see these things discussed when we talk about GraphQL, gRPC, evented architecture, data streaming, and other hot topics in the API sector.

I feel like the technology of APIs conversations have outpaced the business of APIs conversations as API management matured and moved forward. Logging, analytics, and reporting have definitely advanced, but understanding the value generated by providing different services to different consumers, seeing the cost associated with operations and the value generated, then charging, or even paying, consumers involved in that value generation in real-time, seems to be getting lost. We are getting better at the tech of making our digital bits more accessible, and moving them around, but we seem to be losing the thread about quantifying the value, and associating revenue with it in real-time. I see this aspect of API management still occurring, I’m just not seeing the conversations around it move forward as fast as the other areas of API management.

API monetization and plans are two separate areas of my research, and are something I’ll keep talking about, alongside authentication, logging, analysis, and security. I think the reason we don’t hear more stories about API service composition and monetization is that a) companies see this as their secret sauce, and b) there aren’t service providers delivering in these areas exclusively, adding to the conversation. How to rate limit, craft API plans, and set pricing at the service and tier levels are some of the most common questions I get. Partly because there isn’t enough conversation and there aren’t enough resources to help people navigate, but also because there is insecurity, and skewed views of intellectual property and secret sauce. People in the API sector suck at sharing anything they view as their secret sauce, and with no service providers dedicated to API monetization, nobody is pumping the story machine (beyond me).

I’m feeling like I might be winding down my focus on API management as a single concept, and focusing in on its specific aspects. I’ve been working on my API management guide over the summer, but I’m thinking I’ll abandon it, and just focus on the specific aspects of conducting API management. IDK. Maybe I’ll still provide a 100K view for people, while introducing separate, much deeper looks at the elements that make up API management. I still have to worry about onboarding the folks who haven’t been around the sector for the last ten years, and helping them learn everything we all have learned along the way. I’m just feeling like the concept is a little dated, and is something that can start working against us in some of the conversations we are having about our API operations, where some important elements like security and monetization can fall through the cracks.


Image Logging With Amazon S3 API

I have been slowly evolving my network of websites in 2017, overhauling the look of them, as well as how they function. I am investing cycles into pushing as much of my infrastructure towards being as static as possible, minimizing my usage of JavaScript wherever I can. I am still using a significant amount of JavaScript libraries across my sites for a variety of use cases, but whenever I can, I am looking to kill my JavaScript or backend dependencies, and reduce the opportunity for any tracking and surveillance.

While I still keep Google Analytics on my primary API Evangelist sites, as my revenue depends on it, whenever possible I keep personal projects free of any JavaScript tracking mechanisms. Instead of JavaScript, I am defaulting to image logging using Amazon S3. Most of my sites tend to have some sort of header image, which I store in a common public bucket on Amazon S3. All I have to do is turn on logging, and then get at the logging details via the Amazon S3 API. Of course, images get cached within a user’s browser, but the GETs for my images still give me a pretty good set of numbers to work with. I’m not concerned with too much detail, I just generally want to understand the scope of traffic a project is getting, and whether it is 5, 50, 500, 5,000, or 50,000 visitors.
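Here is a minimal sketch of that approach using boto3, assuming S3 server access logging has already been turned on for the image bucket, delivering logs to a hypothetical bucket and prefix.

```python
import boto3

s3 = boto3.client("s3")

LOG_BUCKET = "my-access-logs"   # hypothetical bucket receiving the access logs
LOG_PREFIX = "site-images/"     # hypothetical prefix chosen when enabling logging
IMAGE_KEY = "header.jpg"        # the header image acting as the beacon

# Tally GET requests for the header image across all delivered log files.
get_count = 0
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=LOG_BUCKET, Prefix=LOG_PREFIX):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket=LOG_BUCKET, Key=obj["Key"])["Body"].read()
        for line in body.decode("utf-8", errors="replace").splitlines():
            # S3 server access log lines record the operation and object key.
            if "REST.GET.OBJECT" in line and IMAGE_KEY in line:
                get_count += 1

print(f"{get_count} GET requests logged for {IMAGE_KEY}")
```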

My two primary CDNs are Amazon S3 and Github. I’m trying to pull together a base strategy for monitoring activity across my digital footprint. My business presence is very different from my personal presence, but with some of my personal writing, photography, and other creations, I still like to keep a finger on the pulse of what is happening. I am just looking to minimize the data gathering and surveillance I am participating in these days. Keeping my personal and business websites static, and with a minimum footprint, is increasingly important to me. I find that a minimum viable static digital footprint protects my interests, maximizes my control over my work, and minimizes the impact on my readers and customers.


Evolving API SDKs at Google With Storage, Logging and Analytics

One layer of my API research is dedicated to keeping track on what is going on with API software development kits (SDKs). I have been looking at trends in SDK evolution as part of continuous integration and deployment, increased analytics at the SDK layer, and SDKs getting more specialized in the last year. This is a conversation that Google is continuing to move forward by focusing on enhanced storage, logging, and analytics at the SDK level.

Google provides a nice example of how API providers are increasing the number of available resources at the SDK layer, beyond just handling API requests and responses, and authentication. I’ll try to carve out some time to paint a bigger picture of what Google is up to with SDKs. All their experience developing and supporting SDKs across hundreds of public APIs seems to be coming to a head with the Google Cloud SDK effort(s).

I’m optimistic about the changes at the SDK level, so much so that I’m even advising APIMATIC on their strategy, but I’m also nervous about some of the negative consequences that will come with expanding this layer of API integration. Things like latency, surveillance, privacy, and security are top of mind. I’ll keep monitoring what is going on, and do deeper dives into SDK approaches when I have the time and $money$, and if nothing else I will just craft regular stories about how SDKs are evolving.

Disclosure: I am an advisor of APIMATIC.


Log Files Are Only For When Things Go Wrong

I'm always amazed at the number of companies I work with that do not consider log files a first class data system. Log files for servers, web servers, and other systems or applications are only looked at when something goes wrong. I have to admit, I'm in the same situation. I have APIs for the logging at my API management layer, but I do not have easy API access to my Linux servers or the Apache web server that runs on top of them.

I know that some companies use popular solutions like New Relic to do this, and I keep track on about eight API friendly logging solutions. I'm going to have to spend some time in my API logging research digging around for a solution I can use to stand up an API for my server(s). I'm not looking for any tooling on top of it, just something dead simple to parse my log files, put them into a database, and allow me to wrap them in a simple API for developing things on top of.
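The parse-and-store half of that is simple enough to sketch; here is a minimal example that parses Apache's combined log format into SQLite, with the log path and table layout as assumptions. A simple API could then be wrapped around the resulting database.

```python
import re
import sqlite3

# Apache "combined" log format, the default on many Linux distributions.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) \S+ '
    r'"(?P<referrer>[^"]*)" "(?P<user_agent>[^"]*)"'
)

conn = sqlite3.connect("access_logs.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS requests
       (ip TEXT, time TEXT, method TEXT, path TEXT, status INTEGER,
        referrer TEXT, user_agent TEXT)"""
)

with open("/var/log/apache2/access.log") as log_file:  # path varies by distro
    for line in log_file:
        match = LOG_PATTERN.match(line)
        if match:  # skip lines that don't fit the combined format
            conn.execute(
                "INSERT INTO requests VALUES (?, ?, ?, ?, ?, ?, ?)",
                (match["ip"], match["time"], match["method"], match["path"],
                 int(match["status"]), match["referrer"], match["user_agent"]),
            )
conn.commit()
```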

The first thing I'll build is some analytics. Maybe there is some ready-to-go solution already out there that is API-driven and familiar with server, or web server, logs? It bothers me that logs aren't a first class data system in my operations. Maybe the awareness that comes from crafting and developing against a logging API would help me get more in tune with this layer of data exhaust being generated from my operations daily. I'm guessing it will also help me get a handle on performance and security across my systems, helping take the health of my operations up a notch or two.


Considering the Logging and Observability Layer for Amazon Alexa Enablement

I am going through the Amazon Alexa platform, profiling it as part of my voice API research, and also thinking deeply about the impact voice-enablement, and conversational interfaces, will or will not make in our lives. I've been pretty vocal that I am not too excited about voice-enablement in my own world, but I understand it is something other folks are excited about, and I'm also interested in Amazon's approach to operating their platform, making it something I'm digging deeper into.

I do not own any of the voice enabled Amazon devices, but I am playing with their simulator Echosim.io. I'm not interested in building any skills or applications for Alexa, but I am curious to learn how the platform functions, so I will be developing prototypes so that I can understand things better. One of the things I'm looking to understand as I'm getting up to speed is how the logging for the platform works, so I can evaluate the observability of the voice platform from developers, as well as an end-user perspective.

From a developer perspective, I see that you have access to device synchronize state, user inactivity, and exceptions encountered via the Alexa Voice Service System Interface, and from an end-user perspective, there is a history section under device settings. It is a decent start from a logging perspective, but I'm thinking that transparency and observability at this level are going to be important to establish trust between end-users and the platform. With a lot of the speculation around the ambient capabilities of these devices, I'm thinking a robust logging, reporting, and auditing layer to Alexa will be critical to making folks feel like it is something they want in their homes.

I'm thinking through the logging layer of my drone platform usage, and what is available to me as an end-user, and developer. Alexa provides me another dimension of this logging layer, but this time in the service of voice-enablement, further rounding off my view of what should be available when it comes to logging, reporting, and observability of the different types of devices we are connecting to the Internet. As I push forward in both these areas of my Internet of Things (IoT) research, I will publish my work here on API Evangelist, as well as in the individual voice, drone, and logging research areas.


Connecting My API Logging With My API DNS Using CloudFlare Page Rules API

I'm spending time learning more about what my DNS provider CloudFlare offers when it comes to securing my APIs. To facilitate this, I am playing around with how I can utilize my Apache log files to help me drive the definition of DNS security using the CloudFlare API. I guess this is kind of a real-time reactive, but also hopefully eventually proactive, solution to quantifying and defining the frontline of my API operations.

I originally embarked on this endeavor to help me manage some of the shift in the API Evangelist network, and help mitigate 404 errors across my network of API research. I had recently migrated what I call my API Stack research to a new domain (stack.network), and I am anticipating quite a few broken links in stories over the years that reference this area of my work. I have been trying to attack this from the content level by rewriting links as I find them, but I'm thinking I could automate this using my Apache log files, and by setting up Page Rules using the CloudFlare API as well.

Once I started sifting through the Apache log files, I began to see other traffic patterns that were more in the area of security than the stability of my platform and its linkages. As with any type of log file, it is taking some time for all of this to come into focus for me. I will have to spend a great deal of time evaluating traffic from specific IP ranges, user agents, etc., but I know I should be able to quickly establish some rules at the DNS level that will better help me lock down the front line of my API traffic.

Right now I am just keeping my Apache log files backed up to Amazon S3, to help alleviate server load, and to keep them around for historical purposes. I have built a log file viewer for sifting through my API traffic, and at the moment I'm manually creating page rules in CloudFlare, but it is something I hope to automate via the CloudFlare API once I have established an awareness of the common types of rules I will be creating. Once I evolve to this point I will write about it again, and hopefully talk more about how API access to the logging of my API traffic, in conjunction with APIs at the DNS level, is helping me better define and secure the frontline of my API operations.
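For a sense of what that automation might look like, here is a minimal sketch that creates a forwarding page rule through the CloudFlare v4 API, using Python's requests library. The zone id, token, and URL patterns are all hypothetical.

```python
import requests

ZONE_ID = "your-zone-id"        # hypothetical CloudFlare zone identifier
API_TOKEN = "your-api-token"    # hypothetical API token with zone edit rights

# Forward requests for the old research path to its new domain,
# the kind of rule described above for mitigating 404s.
rule = {
    "targets": [{
        "target": "url",
        "constraint": {
            "operator": "matches",
            "value": "apievangelist.com/api-stack/*",
        },
    }],
    "actions": [{
        "id": "forwarding_url",
        "value": {"url": "http://stack.network/$1", "status_code": 301},
    }],
    "status": "active",
}

response = requests.post(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/pagerules",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=rule,
)
print(response.json())
```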


If you think there is a link I should have listed here, feel free to tweet it at me, or submit it as a Github issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, and depend on my network to help me know what is going on.