This was extracted (@ 2024-03-20 21:10) from a list of minutes which have been approved by the Board.
Please note: the Board typically approves the minutes of the previous meeting at the beginning of every Board meeting; therefore, the list below does not normally contain details from the minutes of the most recent Board meeting.

WARNING: these pages may omit some original contents of the minutes.
This is due to changes in the layout of the source minutes over the years. Fixes are being worked on.

Meeting times vary; the exact schedule is available to ASF Members and Officers. Search for "calendar" in the Foundation's private index page (svn:foundation/private-index.html).

Tashi

21 Aug 2013

An infrastructure for cloud computing on big data.

Tashi has been incubating since 2008-09-04.

Shepherd notes:

 rvs: Tashi looks completely dormant at this point. Despite my repeated
 on-list and off-list emails, I could not find anybody to compile a
 report. The only discussion that resulted from my attempts is captured
 here: http://markmail.org/thread/mveeuubf2fcdmgcw

 Personally, I think we need to figure out a path to *some* kind of
 resolution here. I don't think Tashi benefits from being an incubator
 project, and we need to figure out how to move it onto a different
 trajectory.

17 Jul 2013

An infrastructure for cloud computing on big data.

Tashi has been incubating since 2008-09-04.

Three most important issues to address in the move towards graduation:

 1.
 2.
 3.

Any issues that the Incubator PMC (IPMC) or ASF Board wish/need to be aware of?


How has the community developed since the last report?

How has the project developed since the last report?

Date of last release:



Signed-off-by:

 [ ](tashi) Matthieu Riou
 [ ](tashi) Craig Russell


Shepherd notes:

17 Apr 2013

An infrastructure for cloud computing on big data.

Tashi has been incubating since 2008-09-04.

Three most important issues to address in the move towards graduation:

 1.
 2.
 3.

Any issues that the Incubator PMC (IPMC) or ASF Board wish/need to be aware of?


How has the community developed since the last report?

How has the project developed since the last report?

Please check this [ ] when you have filled in the report for Tashi.

Signed-off-by:
Matthieu Riou: [ ](tashi)
Craig Russell: [ ](tashi)


Shepherd notes:

16 Jan 2013

Tashi originally encompassed just the tools to manage virtual machines
using Xen and QEMU, but has been merged with Zoni, which manages the
physical aspects of a cluster like power control, network settings and
handing out physical machines.

Activities from October to December:

In the period from October to December, the project did not ask to make
another incubating release, but is ready to start the process for a new
release incorporating the development efforts of the last 6 months.

This period was dedicated to testing of the system, in particular performance testing.

Minor updates to the code include adding support for sub-domains within
Tashi, along with some bug fixes to the code recently imported from a
collaborator. Next efforts will be to more tightly integrate the physical
and virtual aspects of the system, as well as to create a front-end UI.


The project has a user community, but it is small. Growth mostly has
happened by word of mouth. To show potential users at large the utility
of this project, the author of this report is creating web pages to
demonstrate how to accomplish distributed computing tasks. Base images
of (free) OS installs will be provided to allow new users to get started
quickly. Hopefully this will increase visibility of the project.

Items to be resolved before graduation:
 * Generate more publicity for the project.
 * Develop members of the user community to submit feature
   extensions.

(no signoffs)

17 Oct 2012

Tashi has been incubating since September 2008.

The Tashi project aims to build a software infrastructure for cloud
computing on massive internet-scale datasets (what we call Big Data).
The idea is to build a cluster management system that enables the Big
Data that are stored in a cluster/data center to be accessed, shared,
manipulated, and computed on by remote users in a convenient, efficient,
and safe manner.

Tashi originally encompassed just the tools to manage virtual machines
using Xen and QEMU, but has been merged with Zoni, which manages the
physical aspects of a cluster like power control, network settings and
handing out physical machines.

Activities July-October:

In the period from July to October, the project did not ask to make
another incubating release, but is ready to start the process for a new
release incorporating the development efforts of the last 6 months.

Development efforts this period have included providing a separate
administration client, allowing addition of users and networks, and host
reservations and availability for scheduling.

The project has received code contributions from one non-committer in
this period. Diogo Gomes provided support for deriving the IP addresses
of guests automatically, without having to scan the subnet. Thanks Diogo!
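
To illustrate the general idea for readers of this report (this is a hedged
sketch, not Diogo's actual patch; the assumption that guest addresses come
from a dnsmasq lease file, and the file path used, are hypothetical), looking
up a guest's IP by its MAC address might look like:

    # Illustrative sketch only: find a guest's IP by MAC in a DHCP lease file
    # instead of scanning the subnet. Path and format are assumptions.
    def ip_for_mac(mac, leases_path="/var/lib/misc/dnsmasq.leases"):
        mac = mac.lower()
        with open(leases_path) as leases:
            for line in leases:
                fields = line.split()
                # dnsmasq lease lines: <expiry> <mac> <ip> <hostname> <client-id>
                if len(fields) >= 3 and fields[1].lower() == mac:
                    return fields[2]
        return None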

Additional stability and user experience improvements were also
committed.

Upcoming software goals are to investigate what is needed to support
IPv6, replace RPyC, and to provide the ability to hand out server slices
(operating system level virtualization). Besides CPU and memory, disk
storage should also be a schedulable resource.

The project has a user community, but it is small. Growth mostly has
happened by word of mouth. To show potential users at large the utility
of this project, the author of this report is creating web pages to
demonstrate how to accomplish distributed computing tasks. Base images
of (free) OS installs will be provided to allow new users to get started
quickly. Hopefully this will increase visibility of the project.

Items to be resolved before graduation:

 - Generate more publicity for the project.
 - Develop members of the user community to submit feature extensions.

Signed-off-by: mfranklin

25 Jul 2012

Tashi has been incubating since September 2008.

The Tashi project aims to build a software infrastructure for cloud
computing on massive internet-scale datasets (what we call Big Data).
The idea is to build a cluster management system that enables the Big
Data that are stored in a cluster/data center to be accessed, shared,
manipulated, and computed on by remote users in a convenient, efficient,
and safe manner.

Tashi originally encompassed just the tools to manage virtual machines
using Xen and QEMU, but has been merged with Zoni, which manages the
physical aspects of a cluster like power control, network settings and
handing out physical machines.

In the period from April to July, the project did not ask to make
another incubating release, but is ready to start the process for a new
release incorporating the development efforts of this period.

Development efforts this period have included having the client display
confirmation messages when user actions complete successfully, and
extending the SQL database backend to support all Instance and Host
fields that are already recorded by the alternative "pickled" backend.
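
For context, the "pickled" backend mentioned above presumably persists state
with Python's pickle module. A minimal sketch of that idea (the file path and
the exact shape of the saved state are assumptions, not Tashi's actual data
layer) is:

    import pickle

    # Minimal illustration of pickle-based persistence; the SQL backend stores
    # the same Instance and Host fields as table columns instead.
    def save_state(instances, hosts, path="/var/tmp/cm_state.pickle"):
        with open(path, "wb") as f:
            pickle.dump({"instances": instances, "hosts": hosts}, f)

    def load_state(path="/var/tmp/cm_state.pickle"):
        with open(path, "rb") as f:
            state = pickle.load(f)
        return state["instances"], state["hosts"]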

The primitive scheduler gained additional resilience and now refrains
from scheduling load on hosts that are transiently down. The node manager
service now tries to ensure that undelivered messages to the cluster
manager are resubmitted regularly. Virtual machine migration was revised
to ensure that stale state was not shadowed by new data, only to
reappear when the migrated VM was shut down.

The code underwent a complete automated analysis, and several issues it
uncovered were fixed. Furthermore, a few other minor additions, fixes and
documentation updates were made.

The project has received code contributions from two non-committers in
this period. MIMOS via Luke Jing Yuan have contributed "convertz" to the
code base, a utility to convert a VM image to an image deployable to a
physical machine provisioned by Zoni. Alexey Tumanov of CMU provided a
communications timeout wrapper to handle the problem of threads hanging
forever, trying to communicate over a broken network connection.
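
To show what such a communications timeout wrapper does (a generic sketch,
not the contributed code; the function name and the default deadline are
made up), a blocking call can be run in a worker thread and abandoned if it
does not return in time:

    import threading

    # Generic sketch of a call-with-timeout wrapper for blocking RPC calls.
    def call_with_timeout(func, args=(), timeout=30.0):
        result = {}

        def worker():
            try:
                result["value"] = func(*args)
            except Exception as e:
                result["error"] = e

        t = threading.Thread(target=worker)
        t.daemon = True   # do not keep the process alive for a hung call
        t.start()
        t.join(timeout)
        if t.is_alive():
            raise RuntimeError("remote call timed out after %.1f s" % timeout)
        if "error" in result:
            raise result["error"]
        return result.get("value")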

Upcoming software goals are to separate the client into an
administrative and a user interface, to investigate what is needed to
support IPv6, replace RPyC, and to provide the ability to hand out
server slices (operating system level virtualization). Besides CPU and
memory, disk storage should also be a schedulable resource.

The project has a user community, but it is small. Growth mostly has
happened by word of mouth. To show potential users at large the utility
of this project, the author of this report is creating web pages to
demonstrate how to accomplish distributed computing tasks. Base images
of (free) OS installs will be provided to allow new users to get started
quickly. Hopefully this will increase visibility of the project.

Items to be resolved before graduation:

 - Generate more publicity for the project.
 - Develop members of the user community to submit feature extensions.

Signed off by mentor:
Shepherd: Jukka Zitting

18 Apr 2012

Tashi has been incubating since September 2008.

The Tashi project aims to build a software infrastructure for cloud computing
on massive internet-scale data sets (what we call Big Data). The idea is to
build a cluster management system that enables the Big Data that are stored in
a cluster/data center to be accessed, shared, manipulated, and computed on by
remote users in a convenient, efficient, and safe manner.

Tashi originally encompassed just the tools to manage virtual machines
using Xen and QEMU, but has been merged with Zoni, which manages the
physical aspects of a cluster like power control, network settings and
handing out physical machines.

In the period from January to April, the project had received permission
to publish a release. Shortly thereafter, a further release was
approved, incorporating several bug fixes detected during deployment
of the first release and from Jira reports.

In the process of making our first release, contact with two of our three
mentors was re-established. The third mentor has not been heard from
(also in other parts of Apache) for quite a while. We obtained sufficient
administrative access to Jira to manage our problem reports.

Development efforts this period have mainly been in adding resilience
to Tashi components, as well as returning more helpful messages in case
of errors. Some parts of the code base that were relevant only to Thrift
have been removed.

Upcoming software goals are to investigate what is needed to support IPv6,
considering replacement for RPyC and providing the ability to hand
out server slices (operating system level virtualization).

The project has a user community, but it is small. Growth mostly has
happened by word of mouth. To show potential users at large the
utility of this project, the author of this report will apply some of
the advice posted to general@incubator.apache.org, as well as create
web pages demonstrating the project's utility, provide sample VM images
and disseminate other information relating to the deployments close to him.
He will also urge others to make similar information publicly available.

One of our users has requested the creation of a private branch for his
team to work on. Perhaps this will result in a valuable feature addition
to the project.

Items to be resolved before graduation:

 - Generate more publicity for the project.
 - Develop members of the user community to submit feature extensions.

Signed off by mentor:

24 Jan 2012

Tashi has been incubating since September 2008.

The Tashi project aims to build a software infrastructure for cloud computing
on massive internet-scale datasets (what we call Big Data). The idea is to
build a cluster management system that enables the Big Data that are stored in
a cluster/data center to be accessed, shared, manipulated, and computed on by
remote users in a convenient, efficient, and safe manner.

Tashi originally encompassed just the tools to manage virtual machines
using Xen and QEMU, but has been merged with Zoni, which manages the
physical aspects of a cluster like power control, network settings and
handing out physical machines.

Development activities have included:-
 * Accounting server has been added to the codebase
 * Primitive scheduler changes
         * bug fixes
         * Add support for user choice of dense packing or not
         * Guard against starting more than one VM with
           persistent disk
 * Client changes
         * Check syntax of user commands
         * Add support for querying available images
         * Add support for querying image size
         * Add support for copying of images
 * QEMU VMM changes
         * bug fixes
         * Reserve some memory for the host itself
         * Make scratch location configurable
         * Live migrations take a long time, eliminate
           some timeout values
 * Cluster manager changes
         * bug fixes
         * Reduce network traffic
         * Move accounting functions to new accounting server
 * Branched off new stable version and release candidate
 * Audit compliance with Incubator policies

The project is still working toward building a larger user and development
community. User groups have been identified in Ireland, Slovenia, Korea
and Malaysia, as well as in the United States.

Items to be resolved before graduation:
 * A stable branch exists which could be a release candidate, but
   the codebase is large and test hardware is currently in
   short supply. We are confident that the code in the stablefix
   branch will work when running QEMU emulation, Pickle or SQLite
   data storage, and the primitive scheduler. Xen, other data stores
   and schedulers have not been tested recently.
 * Develop community diversity (Committers currently at Telefonica,
   Google and CMU)

26 Oct 2011

Tashi has been incubating since September 2008.

The Tashi project aims to build a software infrastructure for cloud computing on
massive internet-scale datasets (what we call Big Data). The idea is to build a
cluster management system that enables the Big Data that are stored in a
cluster/data center to be accessed, shared, manipulated, and computed on by
remote users in a convenient, efficient, and safe manner.

Tashi has previously encompassed just the tools to manage virtual
machines using Xen and KVM, but is gaining the facility to hand out
physical machines as well.

Development activities have included:-
 * Zoni has been merged into the mainline code trunk
 * Additional capability added to Zoni
 * Implement hint to influence scheduler packing policy
 * Reject incorrect arguments to tashi-client to prevent
   unintended defaults from being used
 * Migrated to rpyc version 3.1
 * Add "free capacity" info function to Tashi
 * Support for auto creation of zoni tftp boot menus
 * Fixed deadlocks in clustermanager
 * Rewrite CM to concentrate decay handlers into one spot
 * Use Linux LVM for local scratch space creation
 * VMM is now authoritative as to what is running
 * Retry deploying held VMs at a later time

The project is still working toward building a larger user and development
community. User groups have been identified in Ireland, Slovenia, Korea
and Malaysia, as well as at Georgia Tech.

Items to be resolved before graduation:
 * A stable branch exists which could be a release candidate, but
   the codebase is large and test hardware is currently in
   short supply. We are confident that the code in the stablefix
   branch will work when running QEMU emulation, Pickle or SQLite
   data storage, and the primitive scheduler. Xen, other data stores
   and schedulers have not been tested recently.
 * Should have example accounting code (data is kept, but
   interpretation is currently manual)
 * Develop community diversity (Committers currently at Telefonica,
   Google and CMU)

20 Jul 2011

2011-July Tashi Incubator Status Report

Tashi has been incubating since September 2008.

The Tashi project aims to build a software infrastructure for cloud
computing on massive internet-scale datasets (what we call Big Data). The
idea is to build a cluster management system that enables the Big Data that
are stored in a cluster/data center to be accessed, shared, manipulated,
and computed on by remote users in a convenient, efficient, and safe manner.

Tashi has previously encompassed just the tools to manage virtual
machines using Xen and KVM, but is gaining the facility to hand out
physical machines as well.

Development activities have included:-
 * Refactor primitive scheduler to be less convoluted
 * Ensure that an old CM handle expires to not talk to a dead CM
 * Use virtio networking by default for performance
 * Enable config option for Miha Stopar's auto host registration
 * Clean unused and untested modules from stable branch
 * Reduce VM startup time when using scratch (old sparse files)
 * Conversion of sparse file scratch space to Linux LVM2
 * Work on migrating VMs between hosts
 * Resource usage messages sent to clustermanager for accounting

The project is still working toward building a larger user and development
community. User groups have been identified in Ireland, Slovenia and Korea,
as well as at Georgia Tech. CMU usage is growing as other groups hear about
the availability of the resource. Intel has restructured its research
division and folded some operations into adjoining academic sites.

Items to be resolved before graduation:
 * A stable branch exists which could be a release candidate, but
   the codebase is large and test hardware is currently in
   short supply. We are confident that the code in the stablefix
   branch will work when running QEMU emulation, Pickle or SQLite
   data storage, and the primitive scheduler. Xen, other data stores
   and schedulers have not been tested recently.
 * Should have example accounting code
 * Develop community diversity (Committers currently at Telefonica,
   Google and CMU)

20 Apr 2011

2011-April Tashi Incubator Status Report

Tashi has been incubating since September 2008.

The Tashi project aims to build a software infrastructure for cloud
computing on massive internet-scale datasets (what we call Big Data). The
idea is to build a cluster management system that enables the Big Data that
are stored in a cluster/data center to be accessed, shared, manipulated, and
computed on by remote users in a convenient, efficient, and safe manner.

Tashi has previously encompassed just the tools to manage virtual
machines using Xen and KVM, but is gaining the facility to hand out
physical machines as well.

Development activities have included:-
 * Importing bug fixes from CMU deployment
 * Don't display exceptions if failure is expected
 * Validate fields and clean invalid instance entries
 * Eliminating inconsistent state between node and cluster manager
 * Allow hosts to be soft powered off via ACPI
 * Refactor primitive scheduler to be less convoluted (ongoing)
 * Define a stable branch for tested code
 * Example support for dynamic scratch space disks
 * Enforce designated copy-on-write data filesystem for Qemu

The project is still working toward building a larger user and development
community. User groups have been identified in Ireland, Slovenia and Korea,
as well as at Georgia Tech. CMU usage is growing as other groups hear about
the availability of the resource. Intel has restructured its research
division and folded some operations into adjoining academic sites.

Items to be resolved before graduation:
 * A stable branch exists which could be a release candidate, but
   the codebase is large and test hardware is currently in
   short supply. We are confident that the code in the stablefix
   branch will work when running QEMU emulation, Pickle or SQLite
   data storage, and the primitive scheduler. Xen, other data stores
   and schedulers have not been tested recently.
 * Scratch space integration would be desirable; accounting
   integration is probably a necessity
 * Develop community diversity (Committers currently at Telefonica,
   Google and CMU)

19 Jan 2011

Tashi has been incubating since September 2008.

The Tashi project aims to build a software infrastructure for cloud
computing on massive internet-scale datasets (what we call Big Data). The
idea is to build a cluster management system that enables the Big Data that
are stored in a cluster/data center to be accessed, shared, manipulated, and
computed on by remote users in a convenient, efficient, and safe manner.

Tashi has previously encompassed just the tools to manage virtual machines
using Xen and KVM, but is gaining the facility to hand out physical machines
as well.

Development activities have included:-
 * fix for xen root declaration (necessary if using external kernel)
 * parameterize xen root disk declaration
 * implement Miha Stopar's fix to improve handoff during migration
 * implement Miha Stopar and Andrew Edmond's patch to register and
   unregister hosts, and improve locking of resources
 * allow use of virtio disks under qemu

Richard Gass has created a branch to work on the physical machine
reservation component (zoni-dev):-
 * allow physical machine registration
 * add integration with Apache web server for control
 * add facilities for DNS/DHCP registration of physical resources
 * make changes to Zoni DB layout (convert some tables to InnoDB)
 * add initial infrastructure hardware (switch and PDU)
 * demonstrate initial VM usage reports (shame-tashi)
 * add logging to infrastructure hardware controllers
 * add abstraction layer for hardware controllers
 * add debug console to zoni
 * add DNS/DHCP key creation functions
 * add physical to virtual cluster manager service
 * add primitive agent to keep a minimal number of machines powered
   on and scale up from there
 * allow zoni-cli to talk to hardware directly

The project is still working toward building a larger user and development
community. User groups have been identified in Ireland, Slovenia and Korea,
as well as at Georgia Tech. Several suggestions provided by users at those
sites have been implemented in the head.

Items to be resolved before graduation:
 * Prepare and review a release candidate
 * Develop community diversity (currently Intel and CMU committers)

20 Oct 2010

Tashi has been incubating since September 2008.

The Tashi project aims to build a software infrastructure for cloud
computing on massive internet-scale datasets (what we call Big Data). The
idea is to build a cluster management system that enables the Big Data that
are stored in a cluster/data center to be accessed, shared, manipulated, and
computed on by remote users in a convenient, efficient, and safe manner.

Tashi has previously encompassed just the tools to manage virtual machines
using Xen and KVM, but is gaining the facility to hand out physical machines
as well.

Development activities have included assimilation of adopted code into Zoni
(Tashi's physical machine reservation system), inclusion of support for
controlling APC brand switched rack PDUs, addition of DNS CNAME management,
and bug fixes.
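
As a rough illustration of dynamic CNAME management (a sketch assuming a DNS
server that accepts dynamic updates via the standard nsupdate tool; this is
not a description of the Zoni code), adding an alias might look like:

    import subprocess

    # Sketch only: add a CNAME record via nsupdate; the zone configuration and
    # the use of nsupdate itself are assumptions about the deployment.
    def add_cname(alias, target, ttl=3600):
        commands = "update add %s %d CNAME %s\nsend\n" % (alias, ttl, target)
        p = subprocess.Popen(["nsupdate"], stdin=subprocess.PIPE)
        p.communicate(commands.encode())
        if p.returncode != 0:
            raise RuntimeError("nsupdate failed for %s -> %s" % (alias, target))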

The project is still working toward building a larger user and development
community. User groups have been identified in Ireland and Korea, and the
developers are trying to reach out to them. Connections have also been made
with Georgia Tech.

Items to be resolved before graduation:
 * Prepare and review a release candidate
 * Develop community diversity (currently Intel and CMU committers)

21 Jul 2010

Tashi has been incubating since September 2008.

The Tashi project aims to build a software infrastructure for cloud
computing on massive internet-scale datasets (what we call Big Data). The
idea is to build a cluster management system that enables the Big Data that
are stored in a cluster/data center to be accessed, shared, manipulated,
and computed on by remote users in a convenient, efficient, and safe
manner.

Tashi has previously encompassed just the tools to manage virtual machines
using Xen and KVM, but is gaining the facility to hand out physical machines
as well.

Development activities have included fixes to conform to new Python
programming standards, support for VHD virtual disks in Xen, and
configurable vlan bridge templates for Xen, along with an expansion of the
documentation.

The project is still working toward building a larger user and development
community. User groups have been identified in Ireland and Korea, and the
developers are trying to reach out to them.

Items to be resolved before graduation:

 * Prepare and review a release candidate
 * Develop community diversity (currently Intel and CMU committers)

21 Apr 2010

Tashi has been incubating since September 2008.

The Tashi project aims to build a software infrastructure for cloud
computing on massive internet-scale datasets (what we call Big Data). The
idea is to build a cluster management system that enables the Big Data that
are stored in a cluster/data center to be accessed, shared, manipulated, and
computed on by remote users in a convenient, efficient, and safe manner.

Tashi has previously encompassed just the tools to manage virtual machines
using Xen and KVM, but is gaining the facility to hand out physical machines
as well.

Development activities have included fixes to conform to new Python
programming standards, and a module for Zoni to assign ports on HP blade
server switches.

The project is still working toward building a larger user and development
community. Michael Ryan, an active committer on the project, has taken a new
job and is unable to actively contribute to the project any longer. Richard
Gass, who is running a Tashi production environment, has been added as a
committer. Richard introduced the Zoni physical hardware management layer to
Tashi earlier.

Items to be resolved before graduation:

* Prepare and review a release candidate
* Develop community diversity (currently Intel and CMU committers)



= Traffic Server =

Traffic Server is an HTTP proxy server and cache, similar to Squid and
Varnish (but better). Traffic Server has been incubated since July 2009.

Recent activities:

* 2010-03-30 The PPMC has begun the graduation process.
* 2010-03-29 The new home page is launched.
* 2010-03-17 Diane Smith joins the Traffic Server PPMC.
* 2010-03-13 Apache Traffic Server v2.0.0-alpha is released.
* 2010-03-04 The community votes for CTR for trunk, RTC for release
branches.
* 2010-03-02 Manjesh Nilange joins the Traffic Server PPMC.
* 2010-02-26 Manjesh Nilange joins the project as a new committer.
* 2010-02-23 2.0.x release branch created, and CI environment setup.
* 2010-02-09 The last RAT issues are resolved, we're clean.
* 2010-02-02 KEYS file added to dist area.
* 2010-02-02 Automatic sync from SVN dist repo to dist servers setup.
* 2010-01-18 George Paul joins the Traffic Server PPMC.

The graduation process has been completed, and we've passed the votes in
both the PPMC and the IPMC. A resolution proposal has been submitted to the
board for the next board meeting.

20 Jan 2010

Tashi has been incubating since September 2008.

The Tashi project aims to build a software infrastructure for cloud
computing on massive internet-scale datasets (what we call Big Data). The
idea is to build a cluster management system that enables the Big Data that
are stored in a cluster/data center to be accessed, shared, manipulated, and
computed on by remote users in a convenient, efficient, and safe manner.

Development activities have included work on the AWS compatibility layer, a
few minor bug fixes, a port of nmd to Python, adding support for tagged
bridges in Xen, and importing Zoni.

The project is still working toward building a larger user and development
community. Michael Ryan, an active committer on the project, has taken a
new job and is unable to actively contribute to the project any longer.

Items to be resolved before graduation:
* Prepare and review a release candidate
* Develop community diversity (currently Intel and CMU committers)


= Traffic Server =

Traffic Server is an HTTP proxy server and cache, similar to Squid and
Varnish (but better). Traffic Server has been incubated since July 2009.

Recent activities:

* 2009-12-28 George Paul joins the project as a new committer.
* 2009-12-04 Buildbot system setup, with automatic RAT reports.
* 2009-12-02 John Plevyak joins the Traffic Server PPMC.
* 2009-11-16 John Plevyak joins the project as a new committer.
* 2009-11-16 Diane Smith joins the project as a new committer.
* 2009-11-16 Paul Querna joins the Traffic Server PPMC.
* 2009-11-11 Paul Querna joins the project as a new committer.
* 2009-10-29 Source code migration to Apache Subversion completed.

Significant code contributions have been made since the code was initially
released, including 64-bit support, IPv6 and ports to most popular Linux
distributions. Work is actively under way on ports for Solaris, FreeBSD and
MacOSX. A development branch has been made for large new code changes; there
are already some very exciting additions, including dramatic cache
improvements. We're keeping our trunk as stable as possible (bug fixes
primarily) in preparation for code freeze and our first Apache release. The
plan is to release Apache Traffic Server v2.0 in Q1 2010. Three new,
non-Yahoo committers have been added since incubation, further increasing
the project's diversity. The number of RAT reports / issues has been reduced
significantly, and we expect to have them all covered this month.

A joint hackathon with the HTTPD crowd is planned for January 25-26. Some
details at http://wiki.apache.org/httpd/HTTPD+TS+Hackathon.

The top three things in the way of Traffic Server graduation are:

* We have a potential license issue with a dependency on Berkeley DB.
This needs to be resolved.

* The Traffic Server trademark (TM) issue. Yahoo! has offered two possible
solutions for ASF to consider, and we'd like the board to pick one (see
details below).

* We need to make an official Apache release (planned for Q1 2010).

The trademark issue is that Y! holds several TMs for the name "Traffic
Server", most of which expire soon. Our legal team has proposed two
possible solutions, the first being the easiest for us.

1. Yahoo! provides ASF with a letter of assurance stating that we own all
right, title and interest in and to the TRAFFIC SERVER mark and the four
active registrations and that we will not take any action against ASF or
any of its licensees during the life of these registrations (and we'd express
our intention of letting them lapse and expire in this letter).

2. Yahoo! will assign all right, title and interest in and to the TRAFFIC
SERVER mark including the four active registrations to ASF [though we'd
probably want to make this contingent on getting through the incubator stage].

18 Nov 2009

Tashi has been incubating since September 2008.

The Tashi project aims to build a software infrastructure for cloud
computing on massive internet-scale datasets (what we call Big Data). The
idea is to build a cluster management system that enables the Big Data that
are stored in a cluster/data center to be accessed, shared, manipulated,
and computed on by remote users in a convenient, efficient, and safe
manner.

Development activities have included a communication layer rewrite, the
introduction of an EC2 compatibility layer, a bridge that allows the use of
Maui as a scheduler for Tashi, patches for HVM booting in Xen, DNS updates,
using RPyC on Python 2.4, the SQL backend, the updateVm RPC, and the client
utility's output, as well as fixes to and an expansion of the VM statistics
collection code in the Qemu backend. Additionally, a notes field was added
to the host definition, a syslog handler was added for logging, the
scheduler was modified to reduce the number of repeated messages, and the
documentation for setting up a single machine and configuring DHCP and DNS
servers was updated.

The project is still working toward building a larger user and development
community. We have recently been contacted by some potential users from
Taiwan HP, as seen on the -dev list. Additionally, two developers have
increased the quantity of patches submitted in this quarter, expanding the
number of developers working on the project. The upcoming tutorial at
SC'09 [1] is drawing near and we expect to draw in more potential users at
the event.

Items to be resolved before graduation:

* Prepare and review a release candidate
* Develop community diversity (currently Intel and CMU committers)

[1] http://scyourway.nacse.org/conference/view/tut168


= Traffic Server =

The last month has been focused on code cleanup and build system changes.
The Traffic Server project was accepted into the incubator in July 2009, and
we're still actively working on getting the community built up and on
getting ready for an Open Source release of TS.

The Yahoo! team has worked on the following code cleanup tasks in
preparation for the code migration:

* All Coverity issues (that are relevant) have been fixed.
* All code that we are not Open Sourcing (at this point) has been removed.
* We have eliminated all Yahoo! proprietary code and cleaned up comments and
unnecessary attributions.
* The build system has been reworked and improved, and the cleaned-up code
now builds and runs.
* We have eliminated a few licensing issues (there was nothing there that
would prevent us from pushing the code to SVN, but our legal team wanted
this eliminated before the code push).

In addition, we have completed the following incubator tasks:

* SVN area has been created (in preparation for code push),
http://svn.apache.org/repos/asf/incubator/trafficserver/.
* A Confluence Wiki has been created, and documentation migration has begun:
http://cwiki.apache.org/confluence/display/TS/Index

Outstanding Incubator tasks include:

* We're working on a final sign-off from our (Yahoo!) Legal Department,
which is currently holding us back from pushing the code. The goal here is
to assure that we have full distribution rights to all code that we push to
Apache SVN.
* Migrate project code to ASF infrastructure.
* Get the Trademark issue resolved (see below).
* Finish the ASF copyright and license attribution in all source files
(this is partway done).

Also, we'd like to bring up again the proposal that we made during the
incubator process for how to deal with the Traffic Server trademark. Our
preferred option is the following (from Chuck Neerdaels and our legal dept):

* The preferred option is to provide ASF with a letter of assurance stating
that we own all right, title and interest in and to the TRAFFIC SERVER mark
and the four active registrations and that we will not take any action
against ASF or any of its licensees during the life of these registrations
[and we'd express our intention of letting them lapse and expire]

* If this is acceptable to ASF, then that is what we'll do. If not, the
other alternative, which is more elaborate, is to assign all rights for the
Traffic Server trademark to the ASF.

15 Jul 2009

Tashi has been incubating since September 2008.

The Tashi project aims to build a software infrastructure for cloud
computing on massive internet-scale datasets (what we call Big Data). The
idea is to build a cluster management system that enables the Big Data that
are stored in a cluster/data center to be accessed, shared, manipulated, and
computed on by remote users in a convenient, efficient, and safe manner.

Development activities have included adding a locality and layout service,
integrating metrics into Ganglia for monitoring, more complicated network
isolation support, and preliminary code for communicating with Maui, a
cluster scheduler. Additionally, the documentation on the project webpage
has been enhanced.
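
As a small, hypothetical example of the Ganglia integration (the metric name
and units are made up; gmetric is Ganglia's standard command-line publisher),
a component could report a gauge like this:

    import subprocess

    # Hypothetical example: publish a per-host metric to Ganglia via gmetric.
    def report_running_vms(count):
        subprocess.check_call(["gmetric", "--name", "tashi_running_vms",
                               "--value", str(count), "--type", "uint32",
                               "--units", "VMs"])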

The project is still struggling to grow a substantial user community, but
there are several active contributors. We are looking forward to the
opportunity to present our work with PRS and Hadoop at SC'09 [1]. In
addition, Tashi is deployed at Intel's OpenCirrus [2] site where it has more
than 20 users. It is also being installed at CMU's OpenCirrus site.

Items to be resolved before graduation:
 * Prepare and review a release candidate
 * Develop community diversity (currently Intel and CMU committers)

[1] http://scyourway.nacse.org/conference/view/tut168

[2] http://opencirrus.org/

15 Apr 2009

Tashi has been incubating since September 2008.

The Tashi project aims to build a software infrastructure for
cloud computing on massive internet-scale datasets (what we call
Big Data). The idea is to build a cluster management system
that enables the Big Data that are stored in a cluster/data center
to be accessed, shared, manipulated, and computed on by remote users
in a convenient, efficient, and safe manner.

Development activities have included adding support for using a database to
track virtual machine information, modifying code to work with Python 2.4,
integration with dynamic DHCP and DNS servers, and adding a "tidy" target to
check the source code.

There have been a few questions on tashi-user@i.a.o that centered around
getting Tashi up and running. Additionally, submissions on the dev list
included patches from a student at CMU, and some scripts from a
non-committer at Intel. There is still a lot of work to be done in growing
a community.

Items to be resolved before graduation:
 * Put more effort into project documentation so that other potential
   contributors may more easily get involved
 * Develop community diversity (currently Intel and CMU committers)
 * Prepare and review a release candidate

21 Jan 2009

Tashi has been incubating since September 2008.

The Tashi project aims to build a software infrastructure for
cloud computing on massive internet-scale datasets (what we call
Big Data). The idea is to build a cluster management system
that enables the Big Data that are stored in a cluster/data center
to be accessed, shared, manipulated, and computed on by remote users
in a convenient, efficient, and safe manner.

Activity has been slow over the holidays.

The JIRA has been set up and is being used.

Work has begun on integration between the Tashi scheduler and DHCP and DNS
servers.

Items to be resolved before graduation:
 * Check and make sure that the files that have been donated have been
   updated to reflect the new ASF copyright
 * Check and make sure that all code included with the distribution is
   covered by one or more of the approved licenses
 * Community diversity (currently Intel and CMU committers)
 * Demonstrate ability to create Apache releases

17 Dec 2008

Tashi has been incubating since September 2008.

The Tashi project aims to build a software infrastructure for
cloud computing on massive internet-scale datasets (what we call
Big Data). The idea is to build a cluster management system
that enables the Big Data that are stored in a cluster/data center
to be accessed, shared, manipulated, and computed on by remote users
in a convenient, efficient, and safe manner.

Work has been progressing toward an initial test installation of Tashi via
the OpenCirrus testbed (http://www.opencirrus.org/). This has consumed much
of the developers' time. During this time, though, a few minor changes were
made to the code to make it more amenable to deployment.

Items to be resolved before graduation:
 * Check and make sure that the files that have been donated have
   been updated to reflect the new ASF copyright
 * Check and make sure that all code included with the distribution
   is covered by one or more of the approved licenses
 * Migrate the project to Apache infrastructure (begin using Jira)
 * Community diversity (currently Intel and CMU committers)
 * Demonstrate ability to create Apache releases

19 Nov 2008

Tashi has been incubating since September 2008.

The Tashi project aims to build a software infrastructure for cloud
computing on massive internet-scale datasets (what we call Big Data). The
idea is to build a cluster management system that enables the Big Data that
are stored in a cluster/data center to be accessed, shared, manipulated, and
computed on by remote users in a convenient, efficient, and safe manner.

The initial committers' accounts are active.
The initial code import has been completed.
A project website and an incubation status page have been added.

Items to be resolved before graduation:
 * Complete items listed on the project status page (relating to entering
   incubation)
 * Community diversity (currently Intel and CMU committers)
 * Demonstrate ability to create Apache releases

15 Oct 2008

Tashi has been incubating since September 2008.

The Tashi project aims to build a software infrastructure for
cloud computing on massive internet-scale datasets (what we call
Big Data). The idea is to build a cluster management system
that enables the Big Data that are stored in a cluster/data center
to be accessed, shared, manipulated, and computed on by remote users
in a convenient, efficient, and safe manner.

Mailing lists have been created.

The svn repository was created at
https://svn.apache.org/repos/asf/incubator/tashi

Initial committers' accounts are in process. Mentors have
been given access to the svn repo.

The JIRA project has been created.
https://issues.apache.org/jira/secure/project/ViewProject.jspa?pid=12310841