Friday, December 28, 2007

Following the Web 2.0 idiocy, or more fun with the web

Do you need the web? More to the point - do you need Web 2.0?
No, I don't, but for any open source development you still need the web, because it's a communication point - a place where you can exchange ideas, source code, and other material related to your project.
Existing web servers are either overloaded with features or missing the ones you actually need. You also need a CMS for your content, and most of them demand Apache, PHP, MySQL, and a pile of libraries... In the end you have a web server overloaded with all these things, but you still don't get what you wanted.
The new try_to_run project is redleaf - a simple and small web server (already implemented) plus a big set of modules targeted at developers' needs. It includes a configurable CMS and a set of small tools - like a C-source-to-HTML converter with syntax highlighting - and much more.
You already know about monsters like sourceforge.net - so now I'm working on the concept of a solution that lets developers install and run the web software themselves and get things done.
Currently the project is at the development stage, and the httpd already works with all the needed features.
Why do I need to do this? It's just fun, like my old projects - some of which are still in use today, like xneur - that was fun too ;)

Monday, October 01, 2007

New directions, new profile, new ideas

Jari development has new targets. First of all, we've decided to move our dev hours to the existing GNU Mach microkernel and implement our ideas on top of it. This decision is driven by real time constraints, for the following reasons:
- there are many trivial tasks that simply eat up time
- there is an existing code base that can be reused for the Jari ideas
- there is no time to create a new microkernel from scratch
The possible paths may differ and will depend on how development flows over time.
I don't know yet whether we will take the GNU Hurd servers to modify and extend, but you can be sure we will keep the Jari ikernel and Jari EZA code bases separate from the GNU Mach code base.
With these objectives I hope we will have something running at an early date.
For now, I've planned to get Jari running in late 2009 with a minimal driver set, libc, a CLI, and various utilities.
Watch this blog for further details.

Wednesday, April 11, 2007

Jari: small new things

While the low-level parts are under heavy development, I've returned to the compiler and some software. But more on that along the way...
C compiler:
We're developing a new C dialect, eC - it stands for extended C.
No, it doesn't mean the next C++ or anything like that. Our dialect is targeted at functional programming. I will extend the C macros, add lambda functionality, and clean up how the type system works.
The eC compiler will also have an optional mechanism that checks pointer and variable usage to catch things like double malloc/free.
Back to C: eC will be backward compatible, i.e. you will be able to compile plain C sources with the eC compiler.
I will blog separately about this and other new features a bit later.
LibC/LibeC:
We will include several memory allocation models. We will provide the standard alloc() and free(), but alongside them we can offer wrappers and callbacks that make the internal memory allocator simpler and faster. You will be able to create a struct mem_alloc_t and use it your own way: with the standard libc/libec functions and policies, or by plugging in the function preferred by your application. The libs will also have several new allocation functions - rt_alloc(), local_alloc(), ext_alloc(), func_alloc() - which will give more flexibility for parallel programming.
In addition, the libs will be extended with string handling functions, and libec will include a standard, highly customizable parser.
coreutils:
Coreutils will include tools for parallel task management; with these tools you can run processes across your cluster in your own custom way, run binaries built for non-local architectures on the node that has them, and more.

Watch for the next blog posts - I will give more examples there and describe everything written above.

Wednesday, January 24, 2007

offtopic: m$ the big technology brake

"Microsoft is a company of big shitty deals" - but said like that it sounds like a fanatic's scream.
Okay, let's look at the arguments.
First of all, people generally say: "hey, solutions from Microsoft are used everywhere, which means they're the better solutions." That's a typical misunderstanding of technology versus marketing, where the marketing is aimed *only* at domination and at "all the big money must come to us." We can compare this with non-IT products - products that still really depend on technology. Take some everyday thing A. What do you do before the "big buy"? You compare the things available on the market - and what are the principles of your choice? Generally you look at look&feel first, then at functionality, price, quality, etc. Are you sure about that? I'm not: first you get spammed with advertising and marketing actions like "hey, look at this, everyone uses it, it's better...", and for more than 60 percent of people that thing will end up as the preferred choice.
The same applies to the IT market: the non-professional end-user will choose the product with more advertising, and the more aggressive the advertising and marketing, the more strongly the customer is steered toward the product. So imagine millions of end-users under attack by aggressive marketing; they don't think about functionality, they think "if the advertising sounds this good, the product must be good too." That's not the end-users' fault - it's just a big opening for bad business. Microsoft uses this approach, which means they don't invest in new features, stability, or innovation; by capturing the whole market they kill every way for competitors to emerge, and a new business rule follows: "why should we do something new and better if our products sell anyway?" There are no competitors, and if any come to the market, Microsoft kills them with aggressive marketing, spinning up a set of myths like "OS foo is unstable, has no features, it's bad - choose ours," or goes the other way and simply buys the company and closes the project (remember Xenix, for example).
And as you can see, there have been no fundamental changes in Microsoft's products for many years. A new GUI? New incompatible file formats? New animations? New installers? These things merely simulate progress for the customers - otherwise customers wouldn't buy the new products and wouldn't pay Microsoft.
But there are limits. The first leak in this business strategy is that the world contains more than end-users. The second is that the products being sold are ugly. The third is that Microsoft still has to support its products. In short, there are many professionals who can systematically analyze an existing solution and say "hey! folks! it's a big piece of shit!" - and they will be right. The other limit is critical industry: industrial automation systems and science research that have no need for a pretty GUI; they need a stable, feature-complete product with real functionality - 24x7 servers, embedded systems, industrial systems, computer clusters, distributed networks, and other critical systems.
In those segments Microsoft will never push out other, better solutions. But consider: those segments generate innovations, and to work with them the end-user needs new features and functionality. Microsoft doesn't deliver them, or delivers them too slowly and badly (remember when Microsoft adopted the TCP/IP stack and improved network support in their products, and how badly it is implemented to this day). Microsoft acts as a technology brake on everyone else - developers, end-users, science, etc.
But this process cannot last forever, and by now Microsoft's marketing department knows it - yet they don't take the right way out. The right way is to put better solutions into the products. Compare Win 2000 with Win XP: is there anything new beyond the GUI look? Nothing. Take a look at Vista: anything new beyond poor resource usage and a new GUI? The answer is no. So why does Microsoft lie to customers that Linux is bad, that Linux is old, that the price of a Linux solution is high - okay, okay, but what can Microsoft actually offer to replace the unixes? Nothing.
Think deeper: Microsoft will either die or redirect itself to supporting and porting its products to Linux and taking that segment.

Questions?

Tuesday, January 23, 2007

Microkernels: bad or good?

Over the last few days I've summarized my opinions on the different operating system kernel architectures: the classic monolithic kernel, the exokernel, and the microkernel.
Summarizing: every concept has its own weak points, and we need to pick the solution that best balances performance, stability, and security.
With that in mind, I take a closer look at the microkernel architecture.
Some myths about microkernels:
- Low performance. Are you ready to lose just 5-10% of performance in exchange for a stable OS? I think your answer is yes. Many systems burn their resources on an overloaded interface or on architectural add-ons that try to patch in stability - for example the Nooks layer in the Linux kernel, or Singularity from Microsoft Research, which is not so good because it costs performance and looks like an ugly add-on to a flawed architecture.
- Non-trivial implementation. That one makes me smile; in fact it isn't true. Just compare the simplicity of the GNU Mach microkernel with the Linux kernel, or Minix 3 with a *BSD kernel. Of course you can say "the Linux kernel contains a big set of drivers for devices, filesystems, etc..." - but just think how hard that is to maintain. A microkernel doesn't have a huge number of source code lines; it's simple to review, fix, and update with new things, which makes it much nicer to maintain.
Just imagine having a stable and secure system with every innovation included as needed. It's like the software on your mp3/ogg player that works stably for ages without updates - plus all the extra things you expect from Linux or m$ Windows.
All kernel servers are separated from each other, so a device driver, for example, cannot bring the whole system down. Say you are working on a big document and some device driver (for your SCSI controller, say) falls over: in a microkernel, the pending requests are cached, a system process reloads the driver and replays the requests, and you keep working as if nothing happened. On a classic kernel you would be looking at a kernel panic or BSoD, and all your work would be lost.
Also, have you ever seen good abstraction in an operating system, where you don't have to think about where a filesystem is located? For a microkernel architecture this is a simple job.
For example: you come to the office and turn on your notebook; the system detects your location and starts working with the office shared filesystem, and you don't care where it lives. What do I need to mount, which volume do I add? I really don't care. On both *nix and m$ Windows today I have to mount an NFS volume or add a logical volume over the Windows network, and then take care of backing it up and so on.
By the way, with a microkernel you don't need hardcoded things that can hurt your performance; everything is modular.
As a postscript: a user-friendly system is not a system with a pretty interface. It's a stable and simple system for both end-users and developers, where you don't have to think about security, endless driver installations, and rebooting.