19.0.0 released Nov 08, 2023
Application, request, source code
An "application" (or a "program") is a single executable created by the vv utility; it processes requests. This executable can run either as:
- a server (or "daemon"), meaning any number of processes staying resident in memory, either permanently or based on workload (see FastCGI), or
- a program run from the command line, a script, etc. (see command-line).
An application contains all of its request handlers, and so can handle any request. Thus, when it runs as a server, any of its processes can handle any request that the application serves. See vely-dispatch-request on how a request is served within an application.
The source code for an application is contained in a flat directory. Each request handler is represented by a namesake .vely file; for example, the request "customer" is entirely contained in the file "customer.vely". Typically, the requests handled by an application are connected in some way that makes it advantageous to group them together: logically, via common dependencies in code, through reliance on common infrastructure (such as a database), for performance, etc.
A "project" is a set of applications that are related in some way; the relationship between them may be purely logical, i.e. only in terms of some kind of business, functional or other commonality. Your project can start out as a single application. It may stay that way forever, or it may be split into multiple applications. A split is made much easier by the fact that each request is a single .vely file.
Splitting a single application into multiple ones may be as simple as placing its .vely files into separate directories, creating new applications with vf, and building them with vv. Note that you may have non-request source files implementing code that is used in more than one place, i.e. shared among request handlers. Such files can stay in one application's source directory, and other applications may simply soft-link them (using Linux's ln). Access to basic services, such as databases, files or a network, does not change with such a separation, assuming shared infrastructure.
In this scenario, each application will now have its own application path (see request-URL); thus, if any of the newly created applications builds URLs that point to another application, those URLs must be changed. Whether your application is a micro, mini or macro service (a macro service being a "monolith"), or a combination thereof, it isn't immutable, and can change over time.
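The soft-linking of shared non-request source files described above can be sketched as follows. This is an illustration only (not Vely's actual tooling, and the file names are hypothetical); it does in Python what `ln -s` would do on the command line:

```python
# Sketch (not Vely's actual tooling): sharing a non-request source file
# between two application directories via a soft link, as "ln -s" would.
# Directory and file names here are hypothetical.
from pathlib import Path

base = Path("example_project")
shared = base / "billing" / "common_checks.vely"   # hypothetical shared non-request file
shared.parent.mkdir(parents=True, exist_ok=True)
shared.write_text("// code shared among request handlers\n")

# The second application soft-links the shared file instead of copying it,
# so both applications always build from the same source.
other = base / "inventory"
other.mkdir(parents=True, exist_ok=True)
link = other / "common_checks.vely"
if not link.exists():
    link.symlink_to(shared.resolve())

print(link.is_symlink())                            # True: a soft link, not a copy
print(link.read_text() == shared.read_text())       # True: same source either way
```

Because both applications compile the same file, a fix to the shared code propagates to every application that links it, with no duplication.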
The request's input and output are the same regardless of the program's mode of execution, i.e. whether the program runs as an application server or from the command line (or both); a request is always served as an HTTP request, though a command-line program can additionally suppress HTTP header output. An application may need both modes of execution for different aspects of its functioning. For example, much of a web interface would run as application server(s), while data conversion and periodic cron jobs may be better served by command-line programs. In many cases, the same code may serve both, such as when the same tasks are performed as a batch job and as a web request.
Requests are always valid HTTP requests, and operate as such. This makes testing and mocking easier regardless of the mode of execution (application server or command line). For example, a REST interface can easily be tested by using a command-line program, without the need for a web interface.
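The point above can be illustrated with a small sketch. This is not Vely's actual API; it is a hypothetical handler showing why identical input/output in both modes makes testing easy: the "command line" mode merely suppresses the HTTP header, so the body can be checked without any web server:

```python
# Illustrative sketch (not Vely's actual API): a request handler whose
# output is an HTTP response either way; command-line mode merely
# suppresses the header, so the body can be tested without a web server.
def handle_customer(params: dict, suppress_header: bool = False) -> str:
    body = f"customer={params.get('name', 'unknown')}\n"
    header = "Content-Type: text/plain\r\n\r\n"
    return body if suppress_header else header + body

# "Web" invocation and "command line" invocation produce the same body.
web_out = handle_customer({"name": "Alice"})
cli_out = handle_customer({"name": "Alice"}, suppress_header=True)
print(web_out.endswith(cli_out))   # True: identical body, header aside
```

A test harness can therefore exercise the command-line form and trust that the server form returns the same content.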
Access to databases is provided via statements like run-query, which are independent of the data source; different databases can be swapped without changing the code, even between different vendors (save for any differences in SQL dialects). A separate data model (i.e. a data abstraction over the actual queries) may or may not be needed; here are some reasons why you can go without it, for simpler design/development and better maintainability:
- run-query is designed to provide a readable interface (i.e. with input and output data) in a single statement, thus playing the role of a data accessor. Query text may be used directly, or come from another source (e.g. an array of queries defined elsewhere, obtained from a function, etc.).
- Too much data abstraction may result in a disconnect between application logic and data when it comes to readability of the design and the code, while the abstraction itself may present a significant overhead.
- It may be beneficial for queries to convey application logic and be close to their functional place of use in order to maximize performance and maintainability.
- Using the same database structure directly in development and testing reduces side effects that affect performance and functionality often seen with substitute data sources.
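The "query close to its point of use" idea from the list above can be sketched in Python with sqlite3 (standing in for Vely's run-query statement, which this is not): the query text, its input and its output all appear together where they are used, with no intervening abstraction layer:

```python
# Sketch of the "query close to its use" idea, using Python's sqlite3
# rather than Vely's run-query statement: query text, input and output
# appear together at the point of use, with no data-access layer between.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO customers (name) VALUES (?)", ("Alice",))

# The SQL is readable in place; the same statement names its input
# (the customer id) and its output (the name).
row = db.execute("SELECT name FROM customers WHERE id = ?", (1,)).fetchone()
print(row[0])   # Alice
```

A reader of this code sees exactly what data is read and why, without tracing through a separate data model.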
Session management, stickiness
A "session" is any data connected to a particular end-user who is communicating with the application(s). An end-user would log in to your application(s), and during such a session any data exchanged would be:
- secure, i.e. no other end-user could eavesdrop on or alter the data exchanged, and
- relevant to this end-user and separate from other end-users' data.
Vely application servers run as a separate layer, i.e. separately from web server(s), for performance, safety, scalability and usability; they can be accessed in a number of ways, with web servers being just one. Session information should never be kept in any particular web server instance or application process; rather, it resides in a database layer, which can be:
- a database like MariaDB or PostgreSQL,
- a caching solution (like Redis),
- a Vely application using a database (persistent, in-memory such as SQLite, etc.),
- or another high-performance data store.
This makes application design easier and more robust from the start, because it allows for a proper session store that scales without having to worry about "sticking" to a particular end-point web server or process. Rather, stickiness is achieved by keeping the session information in a data layer; such session information can then be accessed from any process of any application by simply querying it.
For performance considerations, there are typically three components to a database design with regard to session management:
- Credentials data,
- Session data, and
- User data.
The minimal information you'd need for any kind of session-management scenario to work is:
- User ID - a unique identifier assigned to each end-user.
- Session ID - a unique session identifier assigned to each session of each end-user.
The User ID would be obtained during login (based on credentials such as a user name and password, for instance) in order to grant access to the application(s); this is what Credentials data is for, and during such a login a Session ID is created.
Subsequent requests from a logged-on user would be based on both the User ID and the Session ID, which are initially provided to the end-user and then passed back to the application(s) via secure cookies; they are used to verify that the end-user has permission to access the data. After that, the User ID is used to perform the actual requests, while the Session ID is used to update the Session data with whatever information the application(s) require. All such data manipulations are performed via queries (not necessarily SQL queries, though that may be the most common kind).
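The flow described above can be sketched in a few lines of Python with sqlite3. This is a minimal illustration under stated assumptions (table names, columns and the hashing scheme are hypothetical, and a real system would use a salted, slow password hash), not a Vely API:

```python
# Minimal sketch of the session model described above (table and column
# names are illustrative, not a Vely API): Credentials data verifies a
# login, a Session ID is created and later checked on each request.
import hashlib
import secrets
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE credentials (user_id INTEGER PRIMARY KEY, name TEXT, pwd_hash TEXT);
    CREATE TABLE sessions (session_id TEXT PRIMARY KEY, user_id INTEGER);
    CREATE TABLE user_data (user_id INTEGER, item TEXT);
""")

def pwd_hash(pwd: str) -> str:
    # Illustration only; a real system would use a salted, slow hash.
    return hashlib.sha256(pwd.encode()).hexdigest()

db.execute("INSERT INTO credentials VALUES (1, 'alice', ?)", (pwd_hash("secret"),))

def login(name: str, pwd: str):
    # Credentials data grants access, yielding the User ID and a new Session ID.
    row = db.execute("SELECT user_id FROM credentials WHERE name=? AND pwd_hash=?",
                     (name, pwd_hash(pwd))).fetchone()
    if row is None:
        return None
    sid = secrets.token_hex(16)
    db.execute("INSERT INTO sessions VALUES (?, ?)", (sid, row[0]))
    return row[0], sid

def verify(user_id: int, sid: str) -> bool:
    # Subsequent requests pass User ID and Session ID back (e.g. via cookies);
    # any process of any application can verify them with a simple query.
    return db.execute("SELECT 1 FROM sessions WHERE session_id=? AND user_id=?",
                      (sid, user_id)).fetchone() is not None

uid, sid = login("alice", "secret")
print(verify(uid, sid))            # True
print(login("alice", "wrong"))     # None
```

Because verification is just a query against the session store, any process of any application can perform it, which is exactly the stickiness-free design the text describes.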
With this architecture, the stickiness of an end-user's session is achieved regardless of which web server(s), application(s) or application processes are handling the request; any architecture may want to prioritize this independence from the underlying infrastructure and physical implementation. In addition, for better performance and scaling, the Credentials, Session and User data can be separated. In most cases, all three are contained in a single database (see the multitenant SaaS example of this). When you need to scale up, you can separate Credentials and Session data, as well as User data (i.e. the transactional database that contains the actual useful end-user data), into their own physical databases.
Note that this kind of separation can be across different CPUs on the same server, on different servers connected to a high-speed local network, or some other form of separation. You may also choose to keep session data in-memory only, to speed up updates and queries - this decision is about business requirements and allowable risks in case such a database needs to be restarted for whatever reason. If better reliability is needed (in case database(s) go down), a high-availability database solution may be used, such as clustering, mirroring or failover. The strategy used should generally avoid process synchronization at the application or caching level, as it tends to eventually slow down the application and grow in complexity.
These concepts are illustrated in the diagram accompanying this page.
Services (local and remote) in a functional and declarative model
Vely is functional and declarative, with basic services provided to you via statements; that is their major purpose. For example, data sanitation, database access (including connection handling), input, output, distributed computing, files, encryption, pattern matching and other such basic components are built in - these are provided by Vely statements. You can write non-request code to create higher-value components of your own that can be shared between requests in the same program, or even between different programs.
A request can be viewed as an HTTP function, where input is provided and output is made available via the HTTP protocol to the caller, though this is decoupled from the web (and from the network in general): it can function as a web service or as a command-line program execution. When running on a server, a request is suitable for a wide variety of methodologies (REST-like, generic remote-call processing, etc.) by calling it with:
- call-server, for accessing remote services via a fast and direct binary protocol on secure networks, or
- call-web, for using remote services on the web over a secure SSL/TLS connection.
A request may serve either a singular purpose (such as with a microservice) or be divided into tasks (and then subtasks). Tasks (or subtasks) do not have to be exclusively separated in terms of functionality, i.e. different tasks may perform overlapping functions via shared code. Any input parameter (see task-param) can indicate which task a request should perform. Whatever services your application provides, each such service can be identified with:
- a request,
- an optional task within a request, and
- an optional subtask within a task, etc.
For an example of tasks and subtasks, see if-task. Your application should not have (sub)tasks that are too numerous or too deep. Often, the best solution may be for a request to perform a single task (i.e. it would not need a task-param). Tasks should generally serve a request's purpose, and such a purpose should be as elementary as possible. The delineation of where "elementary" ends isn't a hard rule; rather, it's best to remember that a request should be a logical action that (for whatever reason) is not conducive to further simplification by dividing it.
Interpreting and handling task(s) in as little code, as close together, as possible is preferable. This means that if determining which task to execute and actually executing its functionality is possible without any additional layers of abstraction, the result is likely to be more readable and easier to maintain.
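Task selection within a single request handler can be sketched as follows. This mirrors the idea behind Vely's task-param/if-task, but it is Python, not Vely syntax, and the parameter and request names are hypothetical:

```python
# Sketch of task dispatch within a single request handler (this mirrors
# the idea of Vely's task-param/if-task, but it is not Vely syntax).
def customer_request(params: dict) -> str:
    task = params.get("task", "view")   # the input parameter that names the task
    if task == "add":
        return f"added {params['name']}"
    elif task == "update":
        return f"updated {params['name']}"
    else:
        # Single, elementary default task: no further subdivision needed.
        return f"viewing {params.get('name', 'all')}"

print(customer_request({"task": "add", "name": "Alice"}))   # added Alice
print(customer_request({}))                                 # viewing all
```

Determining the task and executing it sit side by side in one place, with no extra dispatch layer, which is the readability point made above.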
In general, the execution flow of a program is: a request is received, dispatched to its namesake request handler (see vely-dispatch-request), and the handler then performs the task(s) requested.
Within a request, if tasks are used, the support for tasks is semantic and self-documenting (see if-task and task-param).
Without using tasks, you would likely have request paths such as "/customer/add" or "/customer/update" served by source files "customer__add.vely" or "customer__update.vely" (see request-URL). It is up to you to decide which method better serves your purpose: using tasks within a request, or using separate requests each handling a single task. In general, you might want to keep the number of tasks per request small, likely no more than 3 or 4; if you have more, it may be better to split them into separate requests. However, depending on your application design, this isn't a hard rule: your application logic, and requirements about how it is designed and maintained, may override such guidelines.
Application design should start with a question: what are the requests my application will process? For small applications, the answer may be known in advance. For medium-sized or large applications, arriving at the answer is a process in itself, typically unfolding during prototyping and throughout the lifecycle of the application's design and use.
There are a few important considerations to take into account when deciding what a request is, i.e. what is its purpose and its input and output, and how to write it:
- Its purpose: an application is easier to design, implement and understand if its components are chosen to logically represent the way end-users consume it, such as roles and functionality on one hand, and resources on the other. "End-users" doesn't necessarily mean humans; many applications' end-users are other machines, be it servers in a multi-layered design, API endpoints in user-interfacing devices (such as browsers), end-point consumer electronics (such as thermostats or cars), etc.
- Its scope: a request should be the simplest that serves a purpose you had in mind. Dividing it into simpler ones just for the sake of division isn't a good strategy, just as "packing" a request with more functionality than it warrants isn't a good strategy either.
- Performance: creating too many layers of an "onion" or too many layers of abstraction may lower performance to unacceptable levels. In addition, since request names are constants, Vely optimizes request dispatching to the point where the cost of such dispatching (within a process) is constant, regardless of the number of requests (i.e. number of .vely files) your application serves. Don't be afraid to have as many requests as it makes sense to you.
- Ease of understanding: future maintainers of your applications may not appreciate too many layers of abstraction either.
- Sharing and extensibility: is a request standalone, without shared code or data-source dependencies (meaning shared with other requests)? Sharing of code can be local via non-requests, or remote via call-server; it may be better to start with local sharing (i.e. non-request code) and use remote services when necessary, especially if horizontal scaling is needed (i.e. adding more computer instances to handle the load). For example, two or more requests can overlap in functionality: you may have a request that creates an entity and a request that updates it, and you can also have a request that creates and updates an entity at the same time, to avoid the performance cost of multiple calls. Non-request code would then be used to reuse the functionality across such requests without duplicating it.
You are free to copy, redistribute and adapt this web page (even commercially), as long as you give credit and provide a dofollow link back to this page - see full license at CC-BY-4.0. Copyright (c) 2019-2023 Dasoftver LLC. Vely and the elephant logo are trademarks of Dasoftver LLC. The software and information on this web site are provided "AS IS" and without any warranties or guarantees of any kind. Icons from table-icons.io, copyright Paweł Kuna, licensed under MIT.