The bizarre world of V

Sep 25, 2024

V ("The V Programming Language", "vlang") is a programming language that first emerged in 2019. A brainchild of Alexander Medvednikov, it is described as a "[s]imple, fast, safe, compiled language for developing maintainable software"1, supposedly drawing inspiration from languages like Go, Rust, Swift, C, among others. The vision for V is a language that combines all the good parts of the aforementioned inspirations, while avoiding most of the complexity.

V's tall ambitions and its barebones state upon release resulted in a mixed reception in online spaces. This article aims to summarize the situation, covering V's unfulfilled promises, questionable decisions, and its uncertain future.

The article is loosely structured to start with the more easily falsifiable claims about V and move into the more subjective criticisms later on.

Disclaimers

Some of you may ask: why write this article, especially at this point in time? The flamewars have mostly ceased, and V has a small, passionate community of contributors who have not been deterred from their goals by criticism and keep driving the project forward.

The simple answer is, I usually speak (er, write) about such topics only when I think I have something to say that has not been said before. In this case, I have not seen anyone take this broad of a look at V in a while — one that would look at both the historical context and the current state of V, while at the same time providing the receipts.

V will get discussed outside of its own community again. Someone will ask "what was the V controversy all about anyway?", and the creator will dismiss any arguments on frivolous grounds again. I think it's better for me to write down my observations once in a long-form blog post, rather than in various ephemeral discussions.

One thing I must note: I have no experience developing programming languages. Because of that, this article contains minimal criticism from a hardcore technical / scientific PL-development standpoint, and mostly takes the perspective of a programming language user instead.

I'd also like to clearly state that I have zero intention to cause any harassment to Alex or any other V community member. Let's keep the discussion grounded and professional.

Analysis

Shifting timelines

This section mostly deals with historical claims about V's functionality. Anyone evaluating V for use today should obviously make judgement based on the language's current state. However, I think it is crucial to pay attention to the fact that at V's inception, many claims were made that were even more fantastical than the ones V makes nowadays.

As we will see shortly, many of these claims were not true on V's initial release in 2019. Instead of the functionality reaching parity with the promises made, statements about V's features were slowly watered down over time, until some of them quietly disappeared altogether. V as it exists in 2024 still does not live up to even these (now weaker) promises.

C/C++ translation

The ability of the V compiler to translate arbitrary C and C++ code to V was a feature advertised very early on in V's lifecycle. In February 2019, V's website stated:

V can translate your entire C/C++ project and offer you the safety, simplicity, and up to 200x compilation speed up.

[...]

[The translator] supports the latest standard of notoriously complex C++ and allows full automatic conversion to human readable code.2

This claim was supplemented by links to supposed articles detailing the translation of popular C and C++ software:

Read about translating Doom & Doom 3, LevelDB, SQLite.2

The "Doom & Doom 3" anchor linked to https://vlang.io/doom, which may have existed at some point, but has not been archived by the Wayback Machine. Meanwhile, the LevelDB and SQLite links were placeholders (linking to #).

The promised articles then failed to appear in the next few months:

  • As of March 3rd, 2019, the hyperlinks become inactive. <a> tags are replaced with <u> tags — underlined to look like links. The section describing this feature ends with:

    [...] LevelDB, SQLite (coming in March).3

  • On April 4th, 2019, it states "coming in early April" instead.4
  • On May 6th, 2019, the text becomes simply "coming soon".5

To this day, the home page says that the (single) article is coming:

A blog post about translating DOOM will be published. 24

Although in 2022, a file named tutorials/C2V. Translating simple programs and DOOM./README.md appeared in the GitHub repository.46 This is probably the article in question, and Alex just forgot to link it from the homepage.

Now, I'll let one who has never procrastinated on their side project or an article cast the first stone, but let's see what Alex says about the actual functionality elsewhere. In a Hacker News thread in late March 2019, he states:

C++ translation will be done by the time the language is open sourced. I can already compile simpler projects.

I support [translating] STL and even plan to support Boost6.

On May 19, 2019, vlang.io states:

[The translator] already supports C and will soon support the latest standard of notoriously complex C++.7

The claim of "supporting the latest standard of notoriously complex C++" has been downgraded to "will soon support".

Sometime before June 24, 2019, the "C++ translation", among an assortment of other features, gets a "WIP" (work in progress) label on the website29, seemingly as a reaction to the lukewarm reception in the Hacker News thread about the open-source release on June 22, 2019.8 One commenter points out:

Now that (part of?) the code has been released, it seems to be little more than a transpiler from V to C, with allowed inlined C, with most advertised features stubbed out.

There might have been further developments on the translation feature during mid-2019 to end of 2020. I have only so much willingness to scour the archives.

On New Year's Day 2021, V's Twitter account announces:

C2V itself will be open-sourced next week.9

More time passes without any official news. Then, on June 22, 2022 — roughly 1.5 years after the "next week" announcement — an "Initial commit" is made to the C2V repository.10 The v translate compiler command is finally officially released in V 0.3 a week later.11

To summarize C2V, the public was once promised that V "supports the latest standard of notoriously complex C++ [...]". 5 years later, it is nowhere near these capabilities. It seems to be able to translate some C code, although features as simple as reading the command line arguments produce invalid V code62. V's roadmap states that full C99 support is slated for version 1.018, indicating that the C translation feature is still incomplete (C99 is not really some exotic thing anymore).

C++ translation is nowhere to be seen: C2V's tests directory seemingly does not contain a single C++ test case. The part of the code that would seem to be responsible for C++ translation has not been touched since January of 2023, and the method handling declarations of destructors (an essential feature of C++) is empty.12

Indeed, it looks like V can not translate any C++:

$ v version
V 0.4.6 fccd7cd
$ cat trivial.cpp
#include <iostream>
using namespace std;

int main() {
  cout << "Hello from V?" << endl;
  return 0;
}
$ v translate trivial.cpp
C to V translator 0.4.0
  translating /home/justinas/vlang-box/trivial.cpp ... C++ top level
C++ top level
C++ top level
C++ top level
C++ top level
C2V command: '/home/justinas/.vmodules/c2v/c2v' 'trivial.cpp'
C2V failed to translate the C files. Please report it via GitHub.

Native x64 code generation

Producing native code without the use of LLVM or a C compiler as an intermediary was another claim made at the very start of V's public existence.

In February 2019, the V homepage stated:

Does V use LLVM?

No. V compiles directly to machine code. It's one of the main reasons it's so light and fast. Right now only x64 architecture is supported.

V can also emit human readable C, which can then be compiled to run on any platform.2

The parts emphasized (by me) seem to imply that the native code generation backend is the main one, and is one of the primary reasons behind V's claims of fast compilation, while the C generation backend is merely an additional option.

On March 30, 2019, Alex once again reiterates on a Hacker News thread:

Right now [V] can emit x64 machine code13

V's initial open-source release on June 22, 201914 seemingly did not contain any code implementing a native x64 backend. What looks like a call to the supposed x64 backend is commented out.

That same day, Alex stated in a Hacker News comment:

Work on x64 generation started back in August [...] I haven't touched it in a while, and it simply doesn't compile at the moment.15

NB: Seeing as this comment was made in July 2019, "August" might be a mistake, unless he meant August 2018 or similar.

As with many other features, "direct machine code generation" gained a "WIP" flag soon after the open-source release.8 Once again, I did not bother to go through the entire Git history. However, it seems that the native x64 backend in some state first appeared in a commit made on Nov 22, 201916, and the feature is mentioned in the release notes for version 0.1.2317.

Still to this day, it does not seem that the native backend is in a functional state. For example, it fails as soon as one tries to declare an array:

$ v version
V 0.4.6 4a7c70c
$ cat hello.v
fn main() {
    foo := [1, 2, 3]
    println(foo)
}
$ v -b native hello.v
/home/justinas/vlang-box/compiler/vlib/builtin/builtin.c.v:358:1: warning: globals are not supported yet
  356 | }
  357 |
  358 | __global total_m = i64(0)
      | ~~~~~~~~~~~~~~~~~~~~~~~~~
  359 | // malloc dynamically allocates a `n` bytes block of memory on the heap.
  360 | // malloc returns a `byteptr` pointing to the memory address of the allocated space.
/home/justinas/vlang-box/compiler/vlib/builtin/builtin.c.v:718:1: warning: globals are not supported yet
  716 |
  717 | @[markused]
  718 | __global g_main_argc = int(0)
      | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  719 |
  720 | @[markused]
/home/justinas/vlang-box/compiler/vlib/builtin/builtin.c.v:721:1: warning: globals are not supported yet
  719 |
  720 | @[markused]
  721 | __global g_main_argv = unsafe { nil }
      | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  722 |
  723 | @[if vplayground ?]
/home/justinas/vlang-box/compiler/vlib/builtin/builtin.v:25:1: warning: globals are not supported yet
   23 |
   24 | // will be filled in cgen
   25 | __global as_cast_type_indexes []VCastTypeIndexName
      | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   26 |
   27 | fn __as_cast(obj voidptr, obj_type int, expected_type int) voidptr {
native error: unknown variable `foo`

Or a map:

$ cat map.v
fn main() {
    mut a := map[string]int{}
    a["foo"] = 42
    println(a)
}
$ v -b native map.v |& tail -n 1
native error: expr: unhandled node type: v.ast.MapInit

Or a struct:

$ cat struct.v
struct Foo {
    greeting string
}

fn main() {
    foo := Foo{greeting: "hello"}
    println(foo)
}
$ v -b native struct.v |& grep "error"
native error: unsupported type for mov_reg_to_var ast.TypeInfo(ast.Struct{

You get the idea. "Compilation directly to machine code", touted as the secret of V's fast build times in 2019, may as well not exist even in 2024, let alone back then. It does not seem like there is much effort to change this situation either: the change log for the recent V 0.4.6 release lists 27 merged pull requests in the "C backend" category, whereas the only mention of the native backend is "ci: update native backend ci matrix"41.

Memory unmanagement

One of the main selling points of V from the very beginning was "innovative memory management".

For context, it is well known that all mainstream paradigms of memory management have certain problems:

  • Garbage collection usually has a non-negligible performance cost at runtime.
  • Manual memory management à la C's malloc() and free() is error prone, and in practice results in issues such as memory leaks, double frees, and use-after-free.
  • The RAII + ownership + borrow checking model of Rust avoids the disadvantages of the two previously mentioned models, but comes with its own cognitive cost of having to explicitly encode information about lifetimes into the program when the compiler is not smart enough to figure it out automatically.

The memory management model of V was envisioned to solve all of these problems at once: there would be no garbage collector, and you wouldn't need to manage memory manually either. Instead, the compiler would take care of freeing memory for you ("like Rust"4), but without having to struggle with lifetime specifiers or anything of the sort.

Early V

In early 2019, V's website stated:

Is there garbage collection?

No. V's memory management is similar to Rust but much easier to use. More information about it will be posted in the near future.3

About a month later, this section was expanded:

Is there garbage collection?

No. V manages memory at compilation time (like Rust). Right now only basic cases are handled. For others, manual memory management is required for now. The right approach to solve this will be figured out in the near future.4

After V's initial release, the docs were updated to describe the unimpressive status quo:

There's no garbage collection or reference counting. V cleans up what it can during compilation. For example:

[...]

The strings [...] are cleaned up when the function exits.

[...]

For more complex cases manual memory management is required. This will be fixed soon.

V will detect memory leaks at runtime and report them. To clean up, for example, an array, use the free() method[.]19

I did not go and check whether V 0.0.12 lived up to these promises as documented, because I have no clue how to correctly compile early versions of V. I was, however, able to pull V 0.1.21 (released Sep 30th, 2019) from nixpkgs, and compile a version of the example in the above documentation (after applying some syntax fixes). Valgrind is not happy with it.

$ v version # NB: version in nixpkgs is tagged 0.1.21, V self-identifies as 0.1.20.
V 0.1.20 5ac62bb
$ cat autofree_initial.v
import strings

fn draw_text(s string, x, y int) {
    /* empty */
}

fn draw_scene() {
    name1 := 'Alice'
    name2 := 'Bob'
    draw_text('hello $name1', 10, 10)
    draw_text('hello $name2', 100, 10)
    draw_text(strings.repeat(`X`, 10000), 10, 50)
}

fn main() {
    for i := 0; i < 1000; i++ {
        draw_scene()
    }
}
$ valgrind ./autofree_initial |& grep "definitely lost"
==1107111==    definitely lost: 10,024,000 bytes in 4,000 blocks

Seeing how draw_scene allocates roughly 10^4 bytes, and it is called 10^3 times, Valgrind reporting about 10^7 bytes is an indication that every single string allocated is leaked. In the intermediate C code (generated via v -o autofree_initial.c autofree_initial.v) it is evident that draw_scene does not contain any calls to free().

At around the same time, V decided to aim for an even stronger guarantee:

(Work in progress) [...] If your V program compiles, it's guaranteed that it's going to be leak free.63

Even Rust, despite having compile-time memory management, does not promise complete freedom from memory leaks, and in its documentation demonstrates a scenario in which memory leaks happen.64

V 0.2 and the "zero leaks" demo

It's not a surprise that V pre-0.2 leaked so much memory, because despite the claims in documentation, "autofree" seemingly did not exist at all back then. It was finally officially introduced, gated behind an -autofree flag, in V 0.2 half a year later.20

In addition, that same day Alex released a video that supposedly demonstrates how autofree prevents leaks21. All the demo shows is that the Ved editor, after opening it and scrolling through a large text file once, caps at a certain, lower amount of memory used when compiled with -autofree compared to the same program compiled without autofree (i.e. using the pre-0.2 behavior of "YOLO leak everything"). In no way does it prove (as Alex would later state it does23) that the program has "zero leaks". For that, at the very least, I would expect the compiled executable to be analysed via Valgrind.

I will add that a video like this may be a great tool to hype up your creation, but a terrible way to actually prove your feature works, since there are multiple ways in which such a demo could be "enhanced" to misrepresent the actual situation. The only proof accepted by the wider community should be instructions that allow one to independently reproduce the results.

But such instructions were not made available, so let's try our best to reproduce by getting V 0.2 and the version of Ved that was available at that time:

$ wget https://github.com/vlang/v/releases/download/0.2/v_linux.zip
$ unzip v_linux.zip
$ mv v v0.2
$ ./v0.2/v version
V 0.2 e4f94b6
$ git clone https://github.com/vlang/ved
$ cd ved
$ git checkout afa14852b78df5234072a3b321e2d1ecb611e120  # last Ved commit before V 0.2 release
$ wget https://raw.githubusercontent.com/azadkuh/sqlite-amalgamation/master/sqlite3.c # get an 8MB file to test with

To test, I will compile Ved, then run it under Valgrind. Once Ved starts, I'll hold the "Page Down" key on my keyboard to scroll to the end of the file, and then promptly close the application so that Valgrind reports the results.

Let's try without any flags first:

$ ../v0.2/v .
Compilation with tcc failed. Retrying with cc ...
$ valgrind ./ved ./sqlite3.c |& grep "definitely lost"
==397075==    definitely lost: 23,386,605 bytes in 3,350,781 blocks

Without autofree, Ved leaks 23 megabytes. What about with autofree?

$ ../v0.2/v -autofree .
Compilation with tcc failed. Retrying with cc ...
$ valgrind ./ved ./sqlite3.c |& grep "definitely lost"
==398821==    definitely lost: 1,584,084 bytes in 1,573,227 blocks

That's much better, Ved only leaks 1.5 megabytes. However, that is not the same as "[a]ll objects are freed during compilation" as the V homepage still states today.24

The V 0.2 release notes20 also promised that autofree would be "enabled by default in 0.3", presumably because it was expected to become good enough by that point. That, of course, did not happen. Instead, before the V 0.2.4 release, garbage collection based on Boehm GC was added22, and remains the default memory management option to this day. In a GitHub discussion about autofree, Alex explains the reasons behind this:

When I started working on V, I was very anti-GC, expecting them to be slow and use a lot more RAM. I integrated a GC just for a test, and was surprised at how well it worked with V [...]

What started as a test and a temporary way to allow developers to write leak free programs, became the stable and well working default option. 23

I'll give it to him, that is fair reasoning: GC performs "well enough", and for now, does a better job than autofree could.

Status quo

What bothers me the most is how no solid principle behind autofree has ever been explained (at least to my knowledge). V's homepage simply states:

[T]he compiler inserts necessary free calls automatically during compilation.24

This simplistic statement sounds like a thought that any young programmer might have when they encounter manual memory management for the first time: "computers are good at automating stuff, so why can't the compiler insert calls to free() for me?". I don't know enough about PL theory to confidently state whether that is possible in the general case, but people smarter than me seem to think it is not72.

When asked about how autofree will function, V's creator sometimes draws parallels to languages such as Rust3 and Lobster25, the latter of which I know little about. There are no explanations on how the ideas from these languages would be implemented, nor how V would avoid limitations of their memory management models.

The GitHub discussion23 about autofree only gives a couple of notes about techniques that enable autofree to function. One technique mentioned is simple escape analysis: heap-allocated types that do not leave the function they were created in are said to be deallocated at the end of the scope.

Bizarrely, the example that is supposed to demonstrate that autofree handles this case fails:

$ v version
V 0.4.6 4a7c70c
$ cat autofree.v
struct Foo {
    x int
}

fn foo() {
    foo := &Foo{x: 3} // never escapes
    println(foo)
}

fn main() {
    foo()
}
$ v -autofree autofree.v
$ valgrind ./autofree |& grep "definitely lost"
==146888==    definitely lost: 20 bytes in 2 blocks

Strings, on their own, are covered by escape analysis:

$ cat string.v
import strings

fn produce_str() string {
    foo := strings.repeat(`A`, 10)
    return foo
}

fn main() {
    println(produce_str())
}
$ v -autofree string.v
$ valgrind ./string |& grep "no leaks"
==597912== All heap blocks were freed -- no leaks are possible

But if you put them in a struct, they leak:

$ cat struct.v
import strings

struct Foo {
    x string
}

fn new_foo() Foo {
    return Foo{x: strings.repeat(`A`, 10)}
}

fn main() {
    println(new_foo())
}
$ v -autofree struct.v
$ valgrind ./struct |& grep "definitely lost"
==598343==    definitely lost: 11 bytes in 1 blocks

Not even primitive types are safe: trying to take a pointer to an integer automatically promotes this integer to a heap allocation that is never deallocated45.
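
The linked issue boils down to a pattern as simple as taking a reference to a local integer. Here is a minimal sketch of my own of the shape of the problem (not the exact reproducer from the issue):

fn main() {
    x := 42
    p := &x // taking a reference reportedly promotes x to a heap allocation
    println(*p) // under -autofree, that allocation is said to never be freed
}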

Escape analysis fails in the other direction too. Trying to return a &string from a function triggers a null pointer dereference, because the string's inner buffer is freed and its address overwritten by 0 before returning:

$ cat dangling.v
import strings

fn produce_str() &string {
    str1 := strings.repeat(`A`, 10)
    str2 := &str1
    return str2
}

fn main() {
    println(produce_str())
}
$ v -autofree dangling.v
$ ./dangling
7f04dab88a80 : at ???: RUNTIME ERROR: invalid memory access
/tmp/v_1000/dangling.01J1QTVGTZXJMVJVDE3CKG1WAQ.tmp.c:9761: by string_clone
/tmp/v_1000/dangling.01J1QTVGTZXJMVJVDE3CKG1WAQ.tmp.c:11231: by string_str
/tmp/v_1000/dangling.01J1QTVGTZXJMVJVDE3CKG1WAQ.tmp.c:13086: by main__main
/tmp/v_1000/dangling.01J1QTVGTZXJMVJVDE3CKG1WAQ.tmp.c:13131: by main

If we remove the secondary function, V narrowly avoids a double free.

$ cat double.v
import strings

fn main() {
    str1 := strings.repeat(`A`, 10)
    str2 := &str1
    println(str2)
}
$ v -autofree double.v
$ ./double
&AAAAAAAAAA
double string.free() detected

The "double free detected" log message in this case comes from V itself, not a sanitizer or the allocator. V string type's is_lit field acts as something resembling Rust's drop flags44 here.

Dangling pointers can also be produced using an array:

$ cat array.v
fn ref_from_array() &int {
    a := [1, 2, 3]
    return &a[0]
}

fn main() {
    println(ref_from_array())
}
$ v -autofree array.v
$ valgrind ./array |& grep -A 4 "Invalid read"
==594613== Invalid read of size 4
==594613==    at 0x634B7B: main__main (in /home/justinas/vlang-testcases/array)
==594613==    by 0x63515C: main (in /home/justinas/vlang-testcases/array)
==594613==  Address 0x4a750f0 is 0 bytes inside a block of size 12 free'd
==594613==    at 0x484988F: free (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)

The only other "technique" described23 is not worth discussing in detail. It simply states that assigning a string variable to another one (foo := bar) copies the string instead of aliasing. This is true, however not sufficient to avoid memory management issues (as we've demonstrated by using string pointers). Alex also adds that for arrays, bare by-value assignment is forbidden, and the user is always asked to clone() the array explicitly. This is not true (anymore?), arrays are cloned implicitly, just like strings:

$ cat array.v
fn main() {
    foo := [1, 2, 3]
    bar := foo
    println(bar)
}
$ v run array.v
[1, 2, 3]

To summarize, autofree in mid-2024:

  • Eagerly frees standalone strings and arrays at the end of the block and does not leak them, but this results in dangling pointers and null dereferences.
  • Does not handle structs at all, and introducing a struct in the mix breaks the small amount of functionality that otherwise works for strings and arrays.
  • Does not free primitive types that needlessly escape to the heap in the first place.

In 2019 it was stated that there is no garbage collection3 (implying V's "innovative memory management" eliminates the need for GC — it did not say "there is no GC, but we will add it soon and it will be the main technique"). Today, autofree is impotent, only handling a few individual patterns of allocation, and often still getting them wrong. This is an embarrassing state of affairs. V is not a language with "innovative memory management", it is a language using an off-the-shelf garbage collector by default, with an "autofree" option that is not viable in any meaningful way.

To be fair, autofree is at least described on the current website as "still experimental and not production ready yet". However, in the same paragraph, it says that, when used, "[autofree] takes care of most objects (~90-100%)"24. To my knowledge, Alex has never shared a methodology to measure this.

I would test autofree using present-day Ved and the latest V compiler, alas, it immediately crashes if compiled with autofree.

$ pwd
/home/justinas/vlang-testcases/ved
$ git rev-parse HEAD
91d395901829d3c23073ff7d4ae1b137f1a14742
$ v version
V 0.4.6 b6c7b46
$ v -autofree .
$ ./ved
size=gg.Size{
    width: 2560
    height: 1440
}
V panic: as cast: cannot cast `map[string]toml.ast.Value` to `[]toml.ast.Value`
v hash: b6c7b46
/tmp/v_1000/ved.01J1JK2CEB2HCB4CA18WYEWKQB.tmp.c:16660: at _v_panic: Backtrace
/tmp/v_1000/ved.01J1JK2CEB2HCB4CA18WYEWKQB.tmp.c:17160: by __as_cast
/tmp/v_1000/ved.01J1JK2CEB2HCB4CA18WYEWKQB.tmp.c:48683: by toml__Doc_value_
/tmp/v_1000/ved.01J1JK2CEB2HCB4CA18WYEWKQB.tmp.c:48652: by toml__Doc_value
/tmp/v_1000/ved.01J1JK2CEB2HCB4CA18WYEWKQB.tmp.c:49387: by main__Config_init_colors
/tmp/v_1000/ved.01J1JK2CEB2HCB4CA18WYEWKQB.tmp.c:49359: by main__Config_reload_config
/tmp/v_1000/ved.01J1JK2CEB2HCB4CA18WYEWKQB.tmp.c:51422: by main__main
/tmp/v_1000/ved.01J1JK2CEB2HCB4CA18WYEWKQB.tmp.c:55727: by main

Why enabling autofree makes the TOML parser crash remains an open question.

V's website has since pivoted from advertising "innovative memory management"65 to "flexible memory management"24, boasting 4 distinct ways of managing memory. Another review of V, linked in "recommended reading", goes into more detail about why swapping between these models is not at all as seamless as the docs suggest.

Redefining terms

More bizarreries lie in the smaller bullet points of V's feature list, where the V authors outline several more hefty goals that are not achieved, and at times seemingly not even well understood by the team.

Undefined behavior

V's current homepage proudly proclaims that the language allows "[n]o undefined behavior"24. The claim even has a "new!" tag, indicating that this has recently been achieved, as opposed to when this feature was marked "WIP" back in 202230 (although the "WIP" caveat remains in the README on GitHub1).

However, the claim of "no undefined behavior" is far from holding up. In many cases, primitive V code is translated 1-to-1 to syntactically equivalent C code, with all the nasty implications that entails.

For example, V code that attempts to divide by zero is translated to C code that attempts to divide by zero, a well known case of undefined behavior in C31. An issue about this was created in V's issue tracker 3 years ago, and then promptly closed by contributors claiming that "the behavior is the same as Go"32.

Whether the behavior is "the same as Go" is irrelevant here, since V's claim is that it produces no undefined behavior, not that integer division works "exactly as in Go". It is trivial to demonstrate that the behavior is, in fact, not always the same as in Go: in my comment on the issue, I gave an example of V (or rather the C compiler that it delegates to) producing a program that yields 1/0 = 0, which is already different from Go. The behavior of 1/0 = 0 would perhaps be an okay trade-off if it were defined by V and did not trigger undefined behavior in the intermediate C code. But it's not, and it does.
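
For the record, the kind of program involved is tiny. Here is a sketch of my own, using os.args so the divisor is not a compile-time constant; what the resulting binary then does is entirely up to the C compiler that V delegates to:

import os

fn main() {
    d := os.args.len - 1 // 0 when the program is run without arguments
    println(1 / d) // emitted as a plain C integer division: undefined behavior when d == 0
}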

The misunderstanding of what undefined behavior means is not limited to external V contributors. When this issue was brought up in a Hacker News thread, Alex reiterated the obviously false claim that division works "just like in Go"33. The only way that I imagine one could argue that V produces no undefined behavior here is by applying some mental gymnastics à la "division by zero in V is defined to produce undefined behavior in the intermediate C code".

I get it, undefined behavior is not a straightforward concept to grasp. I myself did not understand it well until I read some great articles34,73. The problem is claiming that your language has a property X, where X generally has an agreed upon meaning, and then moving the goalposts of what constitutes X.

Static binaries that are not

V's current homepage24 claims that:

[By compiling V code y]ou get a single statically linked binary [...] without any dependencies.

However, this is easily confirmed as false:

$ cat hello.v
fn main() {
    println("hello world")
}
$ v hello.v
$ file hello
hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, not stripped
$ ldd hello
        linux-vdso.so.1 (0x00007ffe0bdb1000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f0c90200000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f0c905f5000)

By default, V produces a binary that is both dynamically linked and relies on the C standard library.

I don't have a bone to pick with the decision to use libc. As Alex rightly points out, for some OSes, libc is the only stable system API35. I do have a small issue with saying "libc is not really a dependency", because a dependency is a dependency.

But my main qualm with the claim is that "static binary" has a defined meaning, and V does not output static binaries by default. In fact, it is hard to find any documentation on how to produce static binaries in V at all. In a GitHub discussion thread, Alex suggests using musl-gcc for this35. Let's ignore the fact that musl-gcc is an external tool which one must acquire separately, and try the suggestion:

$ sudo apt install musl-tools
<...>
$ v -cc musl-gcc hello.v
==================
/usr/bin/ld: pthread_start.c:(.text+0xa9): undefined reference to `__pthread_unregister_cancel'
collect2: error: ld returned 1 exit status
...
==================
(Use `v -cg` to print the entire error message)

builder error:
==================
C error found. It should never happen, when compiling pure V code.
This is a V compiler bug, please report it using `v bug file.v`,
or goto https://github.com/vlang/v/issues/new/choose .
You can also use #help on Discord: https://discord.gg/vlang .

It does not work.

StackOverflow points me to the -freestanding flag36. Let's try that out:

$ cat array.v
fn main() {
    mut foo := [1, 2, 3]
    foo.sort()
    println(foo)
}
$ v -freestanding array.v
$ file ./array
./array: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, BuildID[sha1]=19986c1fa887f421581deb1982106b06a5a33ee6, not stripped
$ ./array
V panicsort does not work with -freestanding

The -freestanding flag does make V produce a static binary, but it also has the effect of not linking to libc at all. Combined with the fact that V implements core functionality like sorting an array by delegating to C standard library functions37, this means that a "freestanding" V program can barely do anything non-trivial.

Finally, after filing an issue on V's GitHub I was pointed by a kind stranger to the (obvious in hindsight) solution of using -cflags '-static'42. That works:

$ v -cflags '-static' hello.v
$ file hello
hello: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, BuildID[sha1]=fa28c8ba332aa9a6f7b5e490e5b8442d6b840ee6, for GNU/Linux 3.2.0, not stripped

Since all this does is request static linkage from the C compiler that V delegates to, I am not sure if there is a way to attempt static linking when using the native code generation backend. But given the state of that backend as a whole, it is currently a pointless exercise.

In short, I think if V promises static binaries, it should either produce them by default, or give an easy, documented way to switch to static linking (e.g. a -static flag in the v command-line tool).

Previously, Alex has stated that "statically linking glibc is not possible"43, which is not technically true as we've just shown (although glibc is definitely not an ideal candidate for static linking38). I am confused why V's homepage promises something that Alex thinks is impossible to achieve.

Other misnomers

Among its other safety features, V claims:

No null (allowed in unsafe code)24

As we have now come to expect, this is not universally true, but at least in this case, the detailed docs spell it out:

Zero-value references, or nil pointers, will NOT be supported in the future, for now data structures such as Linked Lists or Binary Trees that rely on reference fields that can use the value 0, understanding that it is unsafe, and that it can cause a panic.39

Strangely, nil requires unsafe, but the equivalent 0 literal in a pointer context does not. This suggests very primitive reasoning by the compiler: an "evil" keyword is forbidden, but completely equivalent code without the keyword is allowed.
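
To illustrate, the structs-with-reference-fields pattern from the docs looks roughly like this (a sketch adapted from the documentation; I have not re-verified exactly which spellings of the zero value the current compiler accepts outside of unsafe):

struct Node {
    // spelling the zero value as nil requires an unsafe block, while,
    // per the docs quoted above, equivalent code that relies on the
    // value 0 in a pointer context has been accepted without one
    next &Node = unsafe { nil }
}

fn main() {
    n := Node{}
    println(isnil(n.next)) // true: a "no null" language with a null reference
}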

Option types have existed in V since version 0.311 (before that, they were a weird Option/Result mix). With that in mind, it is weird that it is still allowed to have a 0-valued pointer in safe code instead of forcing users to use ?&T to achieve the goal of having an optional reference. Perhaps it is because a bunch of places in the standard library use 0 pointers and nobody has fixed that yet?

Coincidentally, it was also the version 0.3 change log that announced "[n]ull can be used in unsafe only (for example, for C interop)"11.

I don't think it is honest to advertise "no null" on the front page, and then immediately walk it back in the docs. In addition, dereferencing a null pointer is undefined behavior in C, yet again proving that "no undefined behavior" claims are unsubstantiated.

V also previously claimed "pure functions by default"29, and many people at the time pointed out that allowing I/O in functions is usually a dealbreaker for considering functions "pure".

Today, "purity" as a term is not used, but V's documentation makes the strange claim that "[function] evaluation has no side effects (unless the function uses I/O)"40. I am entirely unsure why the authors decided to make this claim, which sounds as empty as "your programs are guaranteed to be bug-free as long as you do not introduce any bugs". In my opinion, this section of the documentation would look a lot more solid if it only contained the core, unambiguous claims: global variables are not allowed by default, and function arguments are immutable by default.

Casting a wide net

Another thing I find strange about the V project is just how much secondary ground it intends to cover, despite the core language being in an unfinished state. I have no problem with community projects like Vinix — I think it is very cool to build a complex software project, such as an operating system, as a proof-of-concept that your creation is not a toy language. What concerns me is bloat in the core v project itself, the current state of these secondary features, and the impact it may have on the ETA of the eventual V 1.0 release.

Backends

For one, take V's several backends. We've already discussed how C code generation is V's main backend, and the direct-to-machine code backend, once claimed to be the main one, is in a state where all it can compile is essentially "hello world".

That did not stop V from gaining additional backends. A prominent one is the JavaScript backend, which comes in four flavours:26

    * `js`                - V outputs JS source code which can be passed to NodeJS to be ran.
    * `js_browser`        - V outputs JS source code ready for the browser.
    * `js_node`           - V outputs JS source code to run with nodejs.
    * `js_freestanding`   - V outputs JS source code with no hard runtime dependency.

The list is majorly confusing, because from the descriptions it is unclear how js and js_node differ. It is also unclear what "no hard runtime dependency" means. Perhaps it is supposed to only generate code that fits some ECMAScript standard, and does not assume any browser-ish or Node.js-ish constructs to be present?

Either way, trying to figure out the supposed differences between these backends is moot, since all four produce the same code that hard-depends on a Node.js-like environment.27 That's right, using v -b js_browser does not produce JS code that can run in a browser. The generation of code for Node.js works to some degree, but ends up producing invalid JavaScript syntax when trying to do very simple things such as splitting a string into characters.28

But I'm not worried about syntax. What I think is hard in transpiling something like V into something like JavaScript is semantics. For example, V utilizes a traditional synchronous model with explicit threads and coroutines, while JS usually runs on an implicit, single-threaded event loop.

These two models of concurrency are not directly compatible. So how does V solve this? We can observe that threading in V works normally when using the C backend:

$ cat threads.v
import rand
import rand.seed
import rand.pcg32
import time

fn run (i int) {
    println("hello from thread ${i}")
    mut rng := &rand.PRNG(pcg32.PCG32RNG{})
    rng.seed(seed.time_seed_array(pcg32.seed_len))
    duration := rng.i64_in_range(100_000, 1_000_000) or { panic(err) }
    time.sleep(duration)
    println("goodbye from thread ${i}")
}

fn main() {
    for i := 0; i < 10; i++ {
        spawn run(i)
    }

    // I should build up an array of threads here and use threads.wait(),
    // but that does not compile with the JS backend.
    time.sleep(10_000_000)
}
$ v run threads.v
hello from thread 0
hello from thread 1
hello from thread 2
goodbye from thread 1
hello from thread 3
hello from thread 4
hello from thread 5
hello from thread 6
hello from thread 7
hello from thread 8
goodbye from thread 3
hello from thread 9
goodbye from thread 0
goodbye from thread 6
goodbye from thread 5
goodbye from thread 9
goodbye from thread 2
goodbye from thread 4
goodbye from thread 7
goodbye from thread 8

Threads interleave as expected. But what happens if we transpile the same code to JS and run it in Node?

$ v -b js -o threads.js threads.v
$ timeout 5 nodejs threads.js
hello from thread 0
goodbye from thread 0
hello from thread 1
$ timeout 5 nodejs threads.js
hello from thread 0
$ timeout 5 nodejs threads.js
hello from thread 0

It just... hangs? Non-deterministically? There's plenty wrong with the generated code: threads are converted to Promises and time.sleep is implemented as a busy-wait, so the "threads" block the event loop and never interleave. Despite all this, I am still unsure why the program "hangs" and a random thread will seemingly block forever (or for a very long time).

I'll leave it as an exercise to the reader to figure out what V's os.read_file translates to in JS. Hint: it's a no-no in asynchronous code.

What about the Go backend?

$ cat hello.v
fn main() {
    println("hello world")
}
$ v -b go -o hello.go hello.v
using Go WIP backend...
hello.v:2:5: error: unknown function: println
    1 | fn main() {
    2 |     println("hello world")
      |     ~~~~~~~~~~~~~~~~~~~~~~
    3 | }

Let the record show that neither Go nor JS backends are marked as "experimental" in the v build docs, although the native and wasm backends are26. The JavaScript backend is also listed as a "key feature" alongside the C backend in V's GitHub README, implying some degree of importance1. Again, V 0.4.6 release notes do not seem to mention JavaScript or Go backends at all, suggesting that not much is being done to drive them forward.

The value proposition of having so many different backends is generally unclear to me. Targeting JavaScript because it is the only language that runs natively in the browser (apart from WebAssembly, which is still limited) is somewhat understandable, but compiling to another high-level language such as Go, which V shares a lot of design goals with anyway, seems like a giant timesink. It also raises some of the same questions about semantics: will both of V's threads and coroutines be backed by Go goroutines, or just one of them? How will threads and coroutines differ when targeting Go?

The fact that the compiler does not seem to have an intermediate representation, and that every backend has to generate code directly from the AST, also means much of the same work has to be repeated for each backend.

Batteries included (but not yet charged)

Drawing most of its inspiration from Go, V intends to be a small, "simple" language, but at the same time have a rich standard library. While the breadth of V's stdlib is impressive, there is a lot of work to be done regarding its quality. I only took a cursory look at V's standard library, but it was enough to observe its current inconsistencies.

For one, io.Reader and io.Writer interfaces appeared in V 0.220, yet to this day the standard library only consumes these interfaces in a few places. Instead, most of the modules require the user to fully buffer the data in memory.

For example, the net.http module47 only accepts the request body as an in-memory string, and returns the response body as a single in-memory buffer as well. To stream the response body instead, one must either:

  • Manually set the on_progress callback and the stop_copying_limit property
  • Use the strange Downloader interface, which uses the above under the hood.

There does not seem to be an obvious way to stream the request body.
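
To illustrate the buffer-everything shape of the API, a typical request looks roughly like this (a sketch based on my reading of the net.http docs; names may differ slightly between versions):

import net.http

fn main() {
    resp := http.get('https://example.com') or { panic(err) }
    // the entire response body arrives as a single in-memory string;
    // there is no reader to consume it incrementally
    println(resp.body.len)
}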

This is a weirdly high-level and inefficient-by-default interface for a language that claims to target, among others, systems engineers, and aims to be "as fast as C"24. The same pattern of "take a buffer with the entire input, return a new allocation with the entire output" is prevalent in other modules such as compress and encoding.

The context module, introduced before V 0.2.448 and modelled after Go's module of the same name, intends to implement request cancellation and deadline functionality. Yet, three years later, the only meaningful usage of the context.Context type outside of tests seems to be in the vweb web framework, where a context.EmptyContext{} is used once49. Request cancellation is a problem that is generally applicable to any subroutine that does I/O or any other kind of potentially long-running task, not just V's web framework.

There are a lot of smaller weirdnesses in V's stdlib, but I have no intention to delve deep into code quality in this article. All of these flaws seem fixable without changes to the core language itself.

That said, I'd like to mention one more decision I feel dubious about. Features like threads and channels being first-class citizens in the language have the advantage of creating a very good developer experience when the included tools are sufficient. However, this also means that versions of these primitives created by third parties can not take advantage of the syntax/compiler level support.

This is also one of my gripes with Go, the language I used professionally for the past few years: while the built-in primitives are good enough in many cases, once you hit their limitations and need to reach for a 3rd-party library, you lose the fluent integration with the language. You can't go an alternative thread implementation, nor can you select and <- from a third-party channel. Before the introduction of generics in Go, similar "second-class citizen" status applied to other abstractions outside the standard library, such as data structures.

Where V, in my opinion, takes this to the extreme, is the introduction of language-level SQL syntax that supports its built-in ORM50. While SQL is still seen as the king in the database world, it does feel strange that support for this specific technology is embedded so deeply into the V toolchain itself. Given the fact that V currently offers no way for users to introduce similar language syntax extensions by themselves (which can be done in some other languages, e.g. in Rust via macros), this means that interacting with any non-SQL databases will feel massively less nice.
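
For reference, here is roughly what the built-in syntax looks like, adapted from the ORM documentation (a sketch; I have not verified that this exact snippet compiles against the current release):

import db.sqlite

struct User {
    id   int    @[primary; sql: serial]
    name string
}

fn main() {
    db := sqlite.connect(':memory:') or { panic(err) }
    // table creation, inserts and queries are all language-level constructs
    sql db {
        create table User
    } or { panic(err) }
    new_user := User{
        name: 'alice'
    }
    sql db {
        insert new_user into User
    } or { panic(err) }
    users := sql db {
        select from User
    } or { panic(err) }
    println(users)
}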

At the very least, it seems possible for 3rd-party SQL database drivers to utilize integration with the sql keyword by implementing the orm.Connection interface, although if the database uses a different dialect, then the support for the dialect needs to either be upstreamed to the orm module, or the database backend needs to re-implement parts of the lower-level orm logic.

V 1.0 and the stability guarantee

The reason why V's extensive scope matters is the implications it has for deliverability of a stable V 1.0 release. V's GitHub repo states:

The V core APIs (primarily the os module) will also have minor changes until they are stabilized in V 1.0. Of course, the APIs will grow after that, but without breaking existing code.

After the 1.0 release V is going to be in the "feature freeze" mode. That means no breaking changes in the language, only bug fixes and performance improvements. Similar to Go.

Will there be V 2.0? Not within a decade after 1.0, perhaps not ever.1

This is not too dissimilar to 1.0 stability guarantees that languages like Go or Rust provide. V's stability goals are not documented in as much detail, but at the very least we can infer that the syntax of the language itself should not change after 1.0, which seems like an obvious and achievable goal.

Additionally, while V's standard library will continue to evolve, V promises to not make breaking changes to it (e.g. removing any public items or changing function signatures) after the 1.0 release. Given the current state of the stdlib, this will require a huge undertaking of its own. Because no detailed standards for idiomatic V code were set early on, over time the standard library accumulated modules with very different coding conventions. Before V 1.0 can be released, someone, or a group of people, will need to go through every single element of the ever-growing standard library and ensure that these interfaces fit together well and do not hamper future evolution.

Given how small the core V team is — a quick look at the GitHub stats indicates that only a few developers have been consistently committing code for the past couple of years and at most a couple dozen are merging PRs — this responsibility will likely fall on Alex himself.

V's roadmap for version 1.0 also mentions the goals to make autofree production-ready and implement full support for the C99 standard in C2V51. It does not say anything about additional backends, such as direct x86-64 code generation or the JavaScript backend. If it is intended for these to reach complete parity with the C backend, such that users of V can switch between the 2 or 3 first-class backends seamlessly, V 1.0 may take many more years, if not decades, to complete at the current development pace (or, frankly, at any development pace).

The author

This will be the only section where I talk about Alexander, the creator of V, as a person, or rather his online presence. Once again, I'll remind the reader that it is not my intention to bully anyone, and I'll try to only bring up Alex's behavior as it is relevant to his public software projects.

"It's just a bug"

I am not the first to criticise V (and I might not be the last). The criticism is not always taken in stride by Alex or the contributors.

One common answer to commenters pointing out the deficiencies of V is "it's just a bug, we will fix this / we have already fixed this since", or pointing out that the language is at version 0.X, and thus should obviously be seen as pre-production.

I agree that bugs are normal in any software project. However, there must be a line somewhere marking the difference between "generally works, but has some bugs", and "generally does not work". Otherwise, to invoke reductio ad absurdum for one moment, I could claim to have solved the halting problem by writing a program that answers the halting question for one specific application, and then stating that the fact of it not working for any and every program is "just a bug".

I'd like to think that the sections above have pointed out several core areas in V where the problems run deeper than "just having a few bugs here and there". The lack of solid principles while aiming to surpass everything that has come before, the almost complete lack of a plan on how to get there, and the V team at times not fully understanding what they are promising do not inspire confidence.

There are other ways Alex deflects criticism, such as stating that critical blog posts come from "V haters", or that the critics are either creators of other similar programming languages, or shills for these PLs, and thus have a vested interest in scrutinizing V unfairly. To give Alex some credit here, I've seen one or two novelty accounts on Hacker News dedicated solely to critical or outright mean comments regarding V, and I don't think that's a very cool thing to do. As for the second point, I'm aware that at least the creator of Odin did criticize V before the initial release61, and I won't claim that all of his criticisms were completely valid (although V's closed-source nature and a somewhat ambiguous feature list back then did not make it easy to fully evaluate the language).

"Things change"

As I've covered before, V's memory management claims have changed over time from "at compile time, like Rust" to "mostly at compile time, falling back to RC" to "mostly at compile time, falling back to tracing GC".

bheadmaster: The documentation I've quoted and posted a source to specifically says that all objects are freed either by 1) autofree or 2) reference counting.

amedvednikov: You pointed to an old version of the documentation. RC was changed to tracing GC. Things can change in the design.67

My problem with this is that, every time without fail, new features, whatever state of quality (or existence) they are in, are announced loudly, but when these plans eventually hit a wall, the deprioritization or material downgrades of these features are but a footnote or a silent change to the homepage.

At some point, we have to admit that V is having an identity crisis. V was supposed to be "safe", but does not enforce data race safety, ensure valid memory references, or prevent undefined behavior. It was supposed to be "as fast as C", but the creator does not consider a 5% slowdown a meaningful difference, using the strange argument that companies such as Facebook / Meta must not care about such a performance delta [on the backend, where V would most likely be used], because they ship heavy JavaScript to their users' browsers68.

V is "compiled" and it could be argued that it is "simple", so at least 2 out of its 4 defining qualifiers stand. What is V today, other than a much less mature sparkling Go?

A broader pattern

Before V, there was Volt – Alex's project to create a "fast native desktop client for all major messaging services"52. Although the archive of Volt's website is only available starting from 2018, it seems that in 2017, Volt already existed as Eul53.

Volt seems to have followed a similar "overpromise and underdeliver" path as V. A snapshot of the site taken on June 15th, 201852 contains the claim that the 1.0 release for Volt is slated for June 15 (that very same day), and the support for most chat platforms — everything from Telegram to Signal — is supposed to come in June (presumably 2018).

This seemingly did not happen, as commenters on a Hacker News thread in early 2019 point out54:

I want to like this, ever since it was called eul. But features are consistently pushed back which suggests that the dev doesnt have a great handle on what is going on, the availability of mac/linux versions seems consistently misleading, it’s not open source, and it uses icons of services not available (e.g. gmail) in a way that seems dishonest. idk.

More than a little disingenuous. While they advertise that the app supports a number of services, when you actually download the app and try it only Slack and Skype are available - the rest are "coming later in February"

Volt's current website55 offers a download of version 0.96 for ARM-based Mac OS machines only (other platforms are to be supported "soon"). Icons for messaging services seem to indicate that only Slack, Gmail, Twitter and Discord are supported, with support for 9 more services yet to come. Volt's GitHub repository seems to only be used as a deserted issue tracker and downloads page, despite the 5 year old README stating that "In 2021 [...] the app [will be] open-sourced"56.

Despite not taking the messaging world by storm, Volt led to the creation of V. Before V had a dedicated website, it had a section on volt.ws57. Many of the more impressive claims about V seem to have first been published on that page, including the C/C++ translator "translat[ing] your entire C/C++ codebase", generating x64 code directly, and automatic memory management ("similar to Rust, but [...] much easier")58.

If I was any meaner, I would perhaps point and laugh at the tutorial that was supposed to demonstrate using V to develop a 3D shooter game, but only shows code to animate a bouncing square (à la the iconic DVD screensaver)58, but at least that tutorial was clearly marked as an "unfinished draft". It's a pity it was never completed, because I would gladly read an article like that, especially targeting people otherwise not experienced in game development.

Before Volt was Volt, back in 2017, there was Gitly, a minimal alternative to the likes of GitHub and GitLab59. Gitly might have been Alex's first public project of this scale, and it garnered some pretty positive feedback in the Hacker News thread. The selling points seemed to be similar to ones Volt and V would later tout: simple, fast, lightweight.

However, that version of Gitly seems to have eventually been abandoned, at least according to another Hacker News commenter:

I realized the other day that the V author is the same person that created gitly, which was a really nice looking git forge. I believe the author's stated plan was to open source it, but the website went offline after some time without an open source release ever happening. I hope the author follows through on this one, because both projects look(ed) pretty neat!60

To which Alex responded:

It will be back, open source, re-written in V[.]

[...]

I started developing Volt/V in the middle of developing gitly.

That's my biggest drawback. I finish 90% of the project, and jump to a different thing.

I've grown a lot since then, and I'm slowly wrapping up everything.

To be fair, Gitly was in fact rewritten in V, and does exist as an alpha-state open-source project66. Nevertheless, the above comment from Alex sheds light on why his projects consistently fail to reach a production-ready state.

Another commenter on Hacker News claims (I have not verified this independently):

I loved the idea [of V] [...] [b]ut it just didn't seem to go anywhere.

[...]

At one point I even searched in the Discord for messages by Alex that contained "this week", "this month" and "this year", also using "next" instead of "this". It came out to 50+ times that deadlines had been missed (closer to 80+ probably).69

He just like me fr

I will not call Alex a fraud or a scammer as many have done before. My most charitable interpretation is that Alex is just like many programmers, including myself. We think of a cool new idea, implement a prototype and it fills us with joy: something that moments ago only existed in our imagination is now a functioning thing. Both "constantly abandoning side projects for new ones" and the inaccuracy of software development estimations are giant memes for a reason.

The author of V is not wrong in his defense that implementation details or even the direction of a software project might change in the course of its development, or that timelines might get adjusted. My problem with that is, again, the lack of transparency in how these changes are made. It is the fact that Alex, to my knowledge, never answers criticisms on this basis in a humble way, admitting that the initial goals were too lofty, deadlines too aggressive, or that the Hacker News commenters doubting that V could do all these things back in 2019 were partially right.

"Failure" does not have to be a failure, and V itself does not have to be the sole product of V's development. Alex has time and time again promised to write blog articles on various aspects of V's development. Had he done that, instead of silently backpedalling on claims about V, we could have had very interesting reads reflecting on the challenges of creating a new programming language. An article covering, say, "how we tried to create autofree and why we're changing course", would in my eyes be much more interesting than the current broken version of autofree itself. Not to mention that it would serve as a much needed piece of communication to V's users and the broader community.

There have also been claims of Alex unjustly banning people from the V community.7071 I can't say I'm particularly interested in looking into each of these cases and evaluating whether the banhammer was deserved. As for my contributions to the V issue tracker, I'd like to think that any outside observer would recognize them as made in good faith, and I hope to avoid the banhammer myself.

Edited on 2024-09-27: About 9 hours after publishing this article and posting it on r/programming, I was "temporarily muted" from V's subreddit. To my knowledge, I have never interacted in the subreddit, nor was I planning on promoting this article in any community spaces of the V project.

Outro

This kind of article is unusual for me and I truly do not know how to end it. Is V all that it claims it is? Certainly not. Is V "unfixable"? Not necessarily, but it needs so much to get "fixed". Not just refocusing in terms of its goals, not just a lot of technical work, but also a major change in the attitude of its creator.

V Language Review (2023) goes into more detail on several technical aspects of V, notably the problems that arise when swapping between V's memory management modes, and the flawed implementation of the newly-introduced coroutines.

Footnotes


  1. Snapshot of V's GitHub repository as of 2024-06-20 

  2. Snapshot of V's website as of 2019-02-22 

  3. Snapshot of V's website as of 2019-03-03 

  4. Snapshot of V's website as of 2019-04-04 

  5. Snapshot of V's website as of 2019-05-06 

  6. Part of Hacker News thread about the V playground 

  7. Snapshot of V's website as of 2019-05-19 

  8. Hacker News: "The V Programming Language is open source" 

  9. Tweet from @v_language on 2021-01-01 

  10. C2V's initial commit on GitHub 

  11. V 0.3 release on GitHub 

  12. cpp.v in C2V repo 

  13. Part of Hacker News thread about the V playground 

  14. V 0.0.12: initial open-source release 

  15. Alex's comment on a Hacker News thread about V's initial open-source release 

  16. A commit introducing x64 machine code generation in V 

  17. V 0.1.23 release on GitHub 

  18. V's ROADMAP.md as of 2024-03-04 

  19. V's documentation on memory management as of 2019-06-23 

  20. V 0.2 release 

  21. Demo of Ved compiled with autofree 

  22. Commit adding Boehm GC to V 

  23. "How autofree works", a discussion on V's GitHub repository 

  24. Snapshot of V's website as of 2024-06-25 

  25. "Info about Lobster?" on V's GitHub 

  26. v build documentation as of 2024-06-19 

  27. '"Hello world" does not work in browser with the "js_browser" backend' on V's GitHub 

  28. "str.runes() fails on the JavaScript backend" on V's GitHub 

  29. Snapshot of V's website as of 2019-06-24 

  30. Snapshot of V's website as of 2022-07-31 

  31. C2x standard draft, section 6.5.5 

  32. "Division by zero is undefined behavior" on V's GitHub 

  33. Part of a Hacker News thread about V 0.3 release 

  34. "What The Hardware Does" is not What Your Program Does: Uninitialized Memory 

  35. 'How to build small "native binaries without any dependencies"?' on V's GitHub 

  36. "How vlang compile static binary?" on StackOverflow 

  37. array.v in V's source code 

  38. "Why is statically linking glibc discouraged?" on StackOverflow 

  39. V docs: Structs with reference fields 

  40. "Functions 2" in V docs 

  41. V 0.4.6 release on GitHub 

  42. "Unclear how to compile a static binary" on V's GitHub 

  43. "Question static linked binary v compiler" on V's GitHub 

  44. "Drop Flags" in the Rustonomicon 

  45. "Trivial program puts variable on heap unnecessarily, autofree does not free it" on V's GitHub 

  46. tutorials/C2V. Translating simple programs and DOOM./README.md in V's GitHub 

  47. net.http in V docs as of 2024-07-02 

  48. "context: Adds module context based on Golang's context" on V's GitHub 

  49. vweb in V's docs as of 2024-07-02 

  50. orm in V's docs as of 2024-07-02 

  51. V's roadmap as of 2024-03-04 

  52. Volt's website as of 2018-06-15 

  53. Eul's website as of 2017-07-15 

  54. "Volt: Fast native desktop client for Slack and Skype" on Hacker News 

  55. Volt's website as of 2024-05-30 

  56. Volt's GitHub repository 

  57. Description of V on Volt's website as of 2019-02-10 

  58. "Writing a 3D shooter in V/pure OpenGL in an hour" 

  59. Show HN: Gitly.io – high performance Git service with a 10s installation time 

  60. Part of Hacker News thread about the V playground 

  61. "This language is not as advertised" on V's GitHub 

  62. "C2V does not translate parameters in main()" on V's GitHub 

  63. Snapshot of V docs page as of 2019-10-08 

  64. "Reference Cycles Can Leak Memory" in "The Rust Programming Language" book 

  65. Snapshot of V's website as of 2021-04-28 

  66. Gitly's repo on GitHub 

  67. Part of Hacker News thread about V 0.4 release 

  68. tzsharing's thread in a discussion on V's GitHub 

  69. _ramj's comment on a Hacker News thread about "V Language Review (2023)" 

  70. "V is for Vvork in Progress" by Xe Iaso 

  71. "V is an Ethical Disaster", author unknown 

  72. "Why don't compilers automatically insert deallocations?" on Computer Science Stack Exchange 

  73. "C Is Not a Low-level Language" on ACM Queue