repo (string) | commit (string) | message (string) | diff (string)
---|---|---|---
bravoserver/bravo
|
381bef126c3a0cbb54caeec444b8b479aabbf6d9
|
Move IRC to #bravoserver so we can kick spammers when necessary.
|
diff --git a/docs/introduction.rst b/docs/introduction.rst
index 985c49d..595270f 100644
--- a/docs/introduction.rst
+++ b/docs/introduction.rst
@@ -1,101 +1,101 @@
.. include:: globals.txt
.. _introduction:
=========================
A high-level introduction
=========================
Bravo is an open source, reverse-engineered implementation of Minecraft's server
application. Two of the major building blocks are Python_ and Twisted_, but
you need not be familiar with either to run, administer, and play on a
Bravo-based server.
Similar and different
=====================
While one of the goals of Bravo is to be roughly on par with the standard,
"Notchian" Minecraft server, Bravo does change and improve things for the
better, where appropriate. See :doc:`differences` for more details.
Some of the highlights include:
* More responsiveness with higher populations.
* Much less memory and bandwidth consumption.
* Better inventory system that avoids some bugs found in the standard server.
Current state
=============
Bravo is currently in heavy development. While it is probably safe to run
creative games, we lack some elements needed for Survival-Multiplayer. Take
a look at :doc:`features` to get an idea of where we currently stand.
We encourage the curious to investigate for themselves, and post any bugs,
questions, or ideas you may have to our `issue tracker`_.
Project licensing
=================
Bravo is MIT/X11-licensed. A copy of the license is included in the
:file:`LICENSE` file in the repository or distribution. This extremely
permissive license gives you all of the flexibility you could ever want.
Q & A
=====
*Why are you doing this? What's wrong with the official Alpha/Beta server?*
Plenty. The biggest architectural mistake is the choice of dozens of threads
instead of NIO and an asynchronous event-driven model, but there are other
problems as well. Additionally, the official server development team has
recently moved to remove all other servers as options for people wishing to
deploy servers. We don't approve of that.
*Are you implying that the official Alpha server is bad?*
Yes. As previous versions of this FAQ have stated, Notch is a cool guy, but
the official server is bad.
*Are you going to make an open-source client? That would be awesome!*
The server is free, but the client is not. Accordingly, we are not pursuing
an open-source client at this time. If you want to play Alpha, you should pay
for it. There's already enough Minecraft piracy going on; we don't feel like
being part of the problem. That said, Bravo's packet parser and networking
tools could be used in a client; the license permits it, after all.
*Where did the docs go?*
We contribute to the Minecraft Collective's wiki at
http://mc.kev009.com/wiki/ now, since it allows us to share data faster. All
general Minecraft data goes to that wiki. Bravo-specific docs are shipped in
ReST form, and a processed Sphinx version is available online at
http://bravo.readthedocs.org/.
*Why did you make design decision <X>?*
There's an entire page dedicated to this in the documentation. Look at
docs/philosophy.rst or :ref:`philosophy`.
*It doesn't install? Okay, maybe it installed, but I'm having issues!*
- On Freenode IRC (irc.freenode.net), #bravo is dedicated to Bravo development
- and assistance, and #mcdevs is a more general channel for all custom
- Minecraft development. You can generally get help from those channels. If you
- think you have found a bug, you can directly report it on the Github issue
- tracker as well.
+ On Freenode IRC (irc.freenode.net), #bravoserver is dedicated to Bravo
+ development and assistance, and #mcdevs is a more general channel for all
+ custom Minecraft development. You can generally get help from those channels.
+ If you think you have found a bug, you can directly report it on the Github
+ issue tracker as well.
Please, please, please read the installation instructions in the README first,
as well as the comments in bravo.ini.example. I did not type them out so that
they could be ignored. :3
Credits
=======
*Who are you guys, anyway?*
Corbin Simpson (MostAwesomeDude/simpson) is the main coder. Derrick Dymock
(Ac-town) is the visionary and provider of network traffic dumps. Ben Kero
and Mark Harris are the reluctant testers and bug-reporters. The Minecraft
Coalition has been an invaluable forum for discussion.
diff --git a/docs/troubleshooting.rst b/docs/troubleshooting.rst
index e2be2f2..85bf6aa 100644
--- a/docs/troubleshooting.rst
+++ b/docs/troubleshooting.rst
@@ -1,70 +1,70 @@
.. include:: globals.txt
===============
Troubleshooting
===============
Configuring
===========
*When I connect to the server, the client gets an "End of Stream" error and the
server log says something about "ConsoleRPCProtocol".*
You are connecting to the wrong port.
Bravo always runs an RPC console by default. This console isn't directly
accessible from clients. In order to connect a client, you must configure a
world and connect to that world. See the example bravo.ini configuration file
for an example of how to configure a world.
*My world is snowy. I didn't want this.*
In bravo.ini, change your ``seasons`` list to exclude winter. A possible
incantation might be the following::
seasons = *, -winter
Errors
======
*I get lots of RuntimeErrors from Exocet.*
Upgrade to a newer Bravo which doesn't use Exocet.
*I have an error involving construct!*
Install Construct. It is a required package.
*I have an error involving JSON!*
If you update to a newer Bravo, you won't need JSON support.
*I have an error involving IRC/AMP/ListOf!*
Your Twisted is too old. You really do need Twisted 11.0 or newer.
*I have an error ``TypeError: an integer is required`` when starting Bravo!*
Your Twisted is too old. You *really* do need Twisted 11.0 or newer.
*I am running as root on a Unix system and twistd cannot find
``bravo.service``. What's going on?*
For security reasons, twistd doesn't look in non-system directories as root.
If you insist on running as root, try an incantation like the following,
setting ``PYTHONPATH``::
# PYTHONPATH=. twistd -n bravo
But seriously, stop running as root.
Help!
=====
If you are having a hard time figuring something out, encountered a bug,
or have ideas, feel free to reach out to the community in one of several
different ways:
-* **IRC:** #Bravo on FreeNode
+* **IRC:** #bravoserver on FreeNode
* Post to our `issue tracker`_.
* Speak up over our `mailing list`_.
|
bravoserver/bravo
|
a0aa6d76fef125ca53f2ed4db312c16a519a6361
|
Added support for relative config locations.
|
diff --git a/twisted/plugins/bravod.py b/twisted/plugins/bravod.py
index d32a392..4be72ee 100644
--- a/twisted/plugins/bravod.py
+++ b/twisted/plugins/bravod.py
@@ -1,30 +1,39 @@
+import os
from zope.interface import implements
from twisted.application.service import IServiceMaker
from twisted.plugin import IPlugin
from twisted.python.filepath import FilePath
from twisted.python.usage import Options
class BravoOptions(Options):
optParameters = [["config", "c", "bravo.ini", "Configuration file"]]
class BravoServiceMaker(object):
implements(IPlugin, IServiceMaker)
tapname = "bravo"
description = "A Minecraft server"
options = BravoOptions
+ locations = ['/etc/bravo', os.path.expanduser('~/.bravo'), '.']
def makeService(self, options):
# Grab our configuration file's path.
conf = options["config"]
- path = FilePath(conf)
+ # If config is default value, check locations for configuration file.
+ if conf == options.optParameters[0][2]:
+ for location in self.locations:
+ path = FilePath(os.path.join(location, conf))
+ if path.exists():
+ break
+ else:
+ path = FilePath(conf)
if not path.exists():
raise RuntimeError("Couldn't find config file %r" % conf)
# Create our service and return it.
from bravo.service import service
return service(path)
bsm = BravoServiceMaker()
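The lookup order added in this commit can be sketched as a standalone function. `find_config` is a hypothetical helper for illustration, not part of Bravo; the `exists` parameter is injected here purely so the behavior can be demonstrated without touching the filesystem:

```python
import os

def find_config(conf, locations, default="bravo.ini",
                exists=os.path.exists):
    """Return the first existing path for ``conf``.

    Mirrors the lookup in ``makeService`` above: the search locations
    are only consulted when the config option was left at its default;
    an explicitly given path is used verbatim. If no candidate exists,
    the bare name is returned (and the caller errors out, as the
    plugin does).
    """
    if conf == default:
        for location in locations:
            candidate = os.path.join(location, conf)
            if exists(candidate):
                return candidate
    return conf
```

As in the plugin, an explicit `-c custom.ini` bypasses the search entirely, while the default name is resolved against `/etc/bravo`, `~/.bravo`, and the current directory in order.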
|
bravoserver/bravo
|
8206e486be79bb828b976de7fdba6d28d1599e9d
|
policy/recipes/blueprints: Add recipe for crafting beacons.
|
diff --git a/bravo/policy/recipes/blueprints.py b/bravo/policy/recipes/blueprints.py
index 3b0542c..e21385f 100644
--- a/bravo/policy/recipes/blueprints.py
+++ b/bravo/policy/recipes/blueprints.py
@@ -16,513 +16,524 @@ def two_by_one(material, provides, amount, name):
return Blueprint(name, (2, 1), ((material.key, 1),) * 2,
(provides.key, amount))
def three_by_one(material, provides, amount, name):
"""
A slightly involved recipe which looks a lot like Jenga, with blocks on
top of blocks on top of blocks.
"""
return Blueprint(name, (3, 1), ((material.key, 1),) * 3,
(provides.key, amount))
def two_by_two(material, provides, name):
"""
A recipe involving turning four of one thing, into one of another thing.
"""
return Blueprint(name, (2, 2), ((material.key, 1),) * 4,
(provides.key, 1))
def three_by_three(outer, inner, provides, name):
"""
A recipe which requires a single inner block surrounded by other blocks.
Think of it as like a chocolate-covered morsel.
"""
blueprint = (
(outer.key, 1),
(outer.key, 1),
(outer.key, 1),
(outer.key, 1),
(inner.key, 1),
(outer.key, 1),
(outer.key, 1),
(outer.key, 1),
(outer.key, 1),
)
return Blueprint(name, (3, 3), blueprint, (provides.key, 1))
def hollow_eight(outer, provides, name):
"""
A recipe which requires an empty inner block surrounded by other blocks.
"""
blueprint = (
(outer.key, 1),
(outer.key, 1),
(outer.key, 1),
(outer.key, 1),
None,
(outer.key, 1),
(outer.key, 1),
(outer.key, 1),
(outer.key, 1),
)
return Blueprint(name, (3, 3), blueprint, (provides.key, 1))
def stairs(material, provides, name):
blueprint = (
(material.key, 1),
None,
None,
(material.key, 1),
(material.key, 1),
None,
(material.key, 1),
(material.key, 1),
(material.key, 1),
)
return Blueprint("%s-stairs" % name, (3, 3), blueprint, (provides.key, 1))
# Armor.
def helmet(material, provides, name):
blueprint = (
(material.key, 1),
(material.key, 1),
(material.key, 1),
(material.key, 1),
None,
(material.key, 1),
)
return Blueprint("%s-helmet" % name, (3, 2), blueprint, (provides.key, 1))
def chestplate(material, provides, name):
blueprint = (
(material.key, 1),
None,
(material.key, 1),
(material.key, 1),
(material.key, 1),
(material.key, 1),
(material.key, 1),
(material.key, 1),
(material.key, 1),
)
return Blueprint("%s-chestplate" % name, (3, 3), blueprint,
(provides.key, 1))
def leggings(material, provides, name):
blueprint = (
(material.key, 1),
(material.key, 1),
(material.key, 1),
(material.key, 1),
None,
(material.key, 1),
(material.key, 1),
None,
(material.key, 1),
)
return Blueprint("%s-leggings" % name, (3, 3), blueprint,
(provides.key, 1))
def boots(material, provides, name):
blueprint = (
(material.key, 1),
None,
(material.key, 1),
(material.key, 1),
None,
(material.key, 1),
)
return Blueprint("%s-boots" % name, (3, 2), blueprint, (provides.key, 1))
# Weaponry.
def axe(material, provides, name):
blueprint = (
(material.key, 1),
(material.key, 1),
(material.key, 1),
(items["stick"].key, 1),
None,
(items["stick"].key, 1),
)
return Blueprint("%s-axe" % name, (2, 3), blueprint, (provides.key, 1))
def pickaxe(material, provides, name):
blueprint = (
(material.key, 1),
(material.key, 1),
(material.key, 1),
None,
(items["stick"].key, 1),
None,
None,
(items["stick"].key, 1),
None,
)
return Blueprint("%s-pickaxe" % name, (3, 3), blueprint,
(provides.key, 1))
def shovel(material, provides, name):
blueprint = (
(material.key, 1),
(items["stick"].key, 1),
(items["stick"].key, 1),
)
return Blueprint("%s-shovel" % name, (1, 3), blueprint, (provides.key, 1))
def hoe(material, provides, name):
blueprint = (
(material.key, 1),
(material.key, 1),
None,
(items["stick"].key, 1),
None,
(items["stick"].key, 1),
)
return Blueprint("%s-hoe" % name, (3, 2), blueprint, (provides.key, 1))
def sword(material, provides, name):
blueprint = (
(material.key, 1),
(material.key, 1),
(items["stick"].key, 1),
)
return Blueprint("%s-sword" % name, (1, 3), blueprint, (provides.key, 1))
def clock_compass(material, provides, name):
blueprint = (
None,
(material.key, 1),
None,
(material.key, 1),
(items["redstone"].key, 1),
(material.key, 1),
None,
(material.key, 1),
None,
)
return Blueprint(name, (3, 3), blueprint, (provides.key, 1))
def bowl_bucket(material, provides, amount, name):
blueprint = (
(material.key, 1),
None,
(material.key, 1),
None,
(material.key, 1),
None,
)
return Blueprint(name, (3, 2), blueprint, (provides.key, amount))
def cart_boat(material, provides, name):
blueprint = (
(material.key, 1),
None,
(material.key, 1),
(material.key, 1),
(material.key, 1),
(material.key, 1),
)
return Blueprint(name, (3, 2), blueprint, (provides.key, 1))
def door(material, provides, name):
return Blueprint("%s-door" % name, (2, 3), ((material.key, 1),) * 6,
(provides.key, 1))
# And now, having defined our helpers, we instantiate all of the recipes, in
# no particular order. There are no longer any descriptive names or comments
# next to most recipes, because they are still instantiated with a name.
all_blueprints = (
# The basics.
one_by_two(blocks["wood"], blocks["wood"], items["stick"], 4, "sticks"),
one_by_two(items["coal"], items["stick"], blocks["torch"], 4, "torches"),
two_by_two(blocks["wood"], blocks["workbench"], "workbench"),
hollow_eight(blocks["cobblestone"], blocks["furnace"], "furnace"),
hollow_eight(blocks["wood"], blocks["chest"], "chest"),
# A handful of smelted/mined things which can be crafted into solid
# blocks.
three_by_three(items["iron-ingot"], items["iron-ingot"], blocks["iron"],
"iron-block"),
three_by_three(items["gold-ingot"], items["gold-ingot"], blocks["gold"],
"gold-block"),
three_by_three(items["diamond"], items["diamond"],
blocks["diamond-block"], "diamond-block"),
three_by_three(items["glowstone-dust"], items["glowstone-dust"],
blocks["lightstone"], "lightstone"),
three_by_three(items["lapis-lazuli"], items["lapis-lazuli"],
blocks["lapis-lazuli-block"], "lapis-lazuli-block"),
three_by_three(items["emerald"], items["emerald"],
blocks["emerald-block"], "emerald-block"),
# Some blocks.
three_by_three(items["string"], items["string"], blocks["wool"], "wool"),
three_by_one(blocks["stone"], blocks["single-stone-slab"], 3,
"single-stone-slab"),
three_by_one(blocks["cobblestone"], blocks["single-cobblestone-slab"], 3,
"single-cobblestone-slab"),
three_by_one(blocks["sandstone"], blocks["single-sandstone-slab"], 3,
"single-sandstone-slab"),
three_by_one(blocks["wood"], blocks["single-wooden-slab"], 3,
"single-wooden-slab"),
stairs(blocks["wood"], blocks["wooden-stairs"], "wood"),
stairs(blocks["cobblestone"], blocks["stone-stairs"], "stone"),
two_by_two(items["snowball"], blocks["snow-block"], "snow-block"),
two_by_two(items["clay-balls"], blocks["clay"], "clay-block"),
two_by_two(items["clay-brick"], blocks["brick"], "brick"),
two_by_two(blocks["sand"], blocks["sandstone"], "sandstone"),
one_by_two(blocks["pumpkin"], items["stick"], blocks["jack-o-lantern"], 1,
"jack-o-lantern"),
# Tools.
axe(blocks["wood"], items["wooden-axe"], "wood"),
axe(blocks["cobblestone"], items["stone-axe"], "stone"),
axe(items["iron-ingot"], items["iron-axe"], "iron"),
axe(items["gold-ingot"], items["gold-axe"], "gold"),
axe(items["diamond"], items["diamond-axe"], "diamond"),
pickaxe(blocks["wood"], items["wooden-pickaxe"], "wood"),
pickaxe(blocks["cobblestone"], items["stone-pickaxe"], "stone"),
pickaxe(items["iron-ingot"], items["iron-pickaxe"], "iron"),
pickaxe(items["gold-ingot"], items["gold-pickaxe"], "gold"),
pickaxe(items["diamond"], items["diamond-pickaxe"], "diamond"),
shovel(blocks["wood"], items["wooden-shovel"], "wood"),
shovel(blocks["cobblestone"], items["stone-shovel"], "stone"),
shovel(items["iron-ingot"], items["iron-shovel"], "iron"),
shovel(items["gold-ingot"], items["gold-shovel"], "gold"),
shovel(items["diamond"], items["diamond-shovel"], "diamond"),
hoe(blocks["wood"], items["wooden-hoe"], "wood"),
hoe(blocks["cobblestone"], items["stone-hoe"], "stone"),
hoe(items["iron-ingot"], items["iron-hoe"], "iron"),
hoe(items["gold-ingot"], items["gold-hoe"], "gold"),
hoe(items["diamond"], items["diamond-hoe"], "diamond"),
clock_compass(items["iron-ingot"], items["clock"], "clock"),
clock_compass(items["gold-ingot"], items["compass"], "compass"),
bowl_bucket(items["iron-ingot"], items["bucket"], 1, "bucket"),
# Weapons.
sword(blocks["wood"], items["wooden-sword"], "wood"),
sword(blocks["cobblestone"], items["stone-sword"], "stone"),
sword(items["iron-ingot"], items["iron-sword"], "iron"),
sword(items["gold-ingot"], items["gold-sword"], "gold"),
sword(items["diamond"], items["diamond-sword"], "diamond"),
# Armor.
helmet(items["leather"], items["leather-helmet"], "leather"),
helmet(items["gold-ingot"], items["gold-helmet"], "gold"),
helmet(items["iron-ingot"], items["iron-helmet"], "iron"),
helmet(items["diamond"], items["diamond-helmet"], "diamond"),
helmet(blocks["fire"], items["chainmail-helmet"], "chainmail"),
chestplate(items["leather"], items["leather-chestplate"], "leather"),
chestplate(items["gold-ingot"], items["gold-chestplate"], "gold"),
chestplate(items["iron-ingot"], items["iron-chestplate"], "iron"),
chestplate(items["diamond"], items["diamond-chestplate"], "diamond"),
chestplate(blocks["fire"], items["chainmail-chestplate"], "chainmail"),
leggings(items["leather"], items["leather-leggings"], "leather"),
leggings(items["gold-ingot"], items["gold-leggings"], "gold"),
leggings(items["iron-ingot"], items["iron-leggings"], "iron"),
leggings(items["diamond"], items["diamond-leggings"], "diamond"),
leggings(blocks["fire"], items["chainmail-leggings"], "chainmail"),
boots(items["leather"], items["leather-boots"], "leather"),
boots(items["gold-ingot"], items["gold-boots"], "gold"),
boots(items["iron-ingot"], items["iron-boots"], "iron"),
boots(items["diamond"], items["diamond-boots"], "diamond"),
boots(blocks["fire"], items["chainmail-boots"], "chainmail"),
# Transportation.
cart_boat(items["iron-ingot"], items["mine-cart"], "minecart"),
one_by_two(blocks["furnace"], items["mine-cart"],
items["powered-minecart"], 1, "poweredmc"),
one_by_two(blocks["chest"], items["mine-cart"], items["storage-minecart"],
1, "storagemc"),
cart_boat(blocks["wood"], items["boat"], "boat"),
# Mechanisms.
door(blocks["wood"], items["wooden-door"], "wood"),
door(items["iron-ingot"], items["iron-door"], "iron"),
two_by_one(blocks["wood"], blocks["wooden-plate"], 1, "wood-plate"),
two_by_one(blocks["stone"], blocks["stone-plate"], 1, "stone-plate"),
one_by_two(blocks["stone"], blocks["stone"], blocks["stone-button"], 1,
"stone-btn"),
one_by_two(items["redstone"], items["stick"], blocks["redstone-torch"], 1,
"redstone-torch"),
one_by_two(items["stick"], blocks["cobblestone"], blocks["lever"], 1,
"lever"),
three_by_three(blocks["wood"], items["redstone"], blocks["note-block"],
"noteblock"),
three_by_three(blocks["wood"], items["diamond"], blocks["jukebox"],
"jukebox"),
Blueprint("trapdoor", (3, 2), ((blocks["wood"].key, 1),) * 6,
(blocks["trapdoor"].key, 2)),
# Food.
bowl_bucket(blocks["wood"], items["bowl"], 4, "bowl"),
three_by_one(items["wheat"], items["bread"], 1, "bread"),
three_by_three(blocks["gold"], items["apple"], items["golden-apple"],
"goldapple"),
three_by_three(items["stick"], blocks["wool"], items["paintings"],
"paintings"),
three_by_one(blocks["reed"], items["paper"], 3, "paper"),
# Special items.
# These recipes are only special in that their blueprints don't follow any
# interesting or reusable patterns, so they are presented here in a very
# explicit, open-coded style.
Blueprint("arrow", (1, 3), (
(items["coal"].key, 1),
(items["stick"].key, 1),
(items["feather"].key, 1),
), (items["arrow"].key, 4)),
Blueprint("bed", (3, 2), (
(blocks["wool"].key, 1),
(blocks["wool"].key, 1),
(blocks["wool"].key, 1),
(blocks["wood"].key, 1),
(blocks["wood"].key, 1),
(blocks["wood"].key, 1),
), (items["bed"].key, 1)),
Blueprint("book", (1, 3), (
(items["paper"].key, 1),
(items["paper"].key, 1),
(items["paper"].key, 1),
), (items["book"].key, 1)),
Blueprint("bookshelf", (3, 3), (
(blocks["wood"].key, 1),
(blocks["wood"].key, 1),
(blocks["wood"].key, 1),
(items["book"].key, 1),
(items["book"].key, 1),
(items["book"].key, 1),
(blocks["wood"].key, 1),
(blocks["wood"].key, 1),
(blocks["wood"].key, 1),
), (blocks["bookshelf"].key, 1)),
Blueprint("bow", (3, 3), (
(items["string"].key, 1),
(items["stick"].key, 1),
None,
(items["string"].key, 1),
None,
(items["stick"].key, 1),
(items["string"].key, 1),
(items["stick"].key, 1),
None,
), (items["bow"].key, 1)),
Blueprint("cake", (3, 3), (
(items["milk"].key, 1),
(items["milk"].key, 1),
(items["milk"].key, 1),
(items["egg"].key, 1),
(items["sugar"].key, 1),
(items["egg"].key, 1),
(items["wheat"].key, 1),
(items["wheat"].key, 1),
(items["wheat"].key, 1),
), (items["cake"].key, 1)),
Blueprint("dispenser", (3, 3), (
(blocks["cobblestone"].key, 1),
(blocks["cobblestone"].key, 1),
(blocks["cobblestone"].key, 1),
(blocks["cobblestone"].key, 1),
(items["bow"].key, 1),
(blocks["cobblestone"].key, 1),
(blocks["cobblestone"].key, 1),
(items["redstone"].key, 1),
(blocks["cobblestone"].key, 1),
), (blocks["dispenser"].key, 1)),
Blueprint("fence", (3, 2), (
(items["stick"].key, 1),
(items["stick"].key, 1),
(items["stick"].key, 1),
(items["stick"].key, 1),
(items["stick"].key, 1),
(items["stick"].key, 1),
), (blocks["fence"].key, 2)),
Blueprint("fishing-rod", (3, 3), (
None,
None,
(items["stick"].key, 1),
None,
(items["stick"].key, 1),
(items["string"].key, 1),
(items["stick"].key, 1),
None,
(items["string"].key, 1),
), (items["fishing-rod"].key, 1)),
Blueprint("flint-and-steel", (2, 2), (
(items["iron-ingot"].key, 1),
None,
None,
(items["flint"].key, 1)
), (items["flint-and-steel"].key, 1)),
Blueprint("ladder", (3, 3), (
(items["stick"].key, 1),
None,
(items["stick"].key, 1),
(items["stick"].key, 1),
(items["stick"].key, 1),
(items["stick"].key, 1),
(items["stick"].key, 1),
None,
(items["stick"].key, 1),
), (blocks["ladder"].key, 2)),
Blueprint("mushroom-stew", (1, 3), (
(blocks["red-mushroom"].key, 1),
(blocks["brown-mushroom"].key, 1),
(items["bowl"].key, 1),
), (items["mushroom-soup"].key, 1)),
Blueprint("mushroom-stew2", (1, 3), (
(blocks["brown-mushroom"].key, 1),
(blocks["red-mushroom"].key, 1),
(items["bowl"].key, 1),
), (items["mushroom-soup"].key, 1)),
Blueprint("sign", (3, 3), (
(blocks["wood"].key, 1),
(blocks["wood"].key, 1),
(blocks["wood"].key, 1),
(blocks["wood"].key, 1),
(blocks["wood"].key, 1),
(blocks["wood"].key, 1),
None,
(items["stick"].key, 1),
None,
), (items["sign"].key, 1)),
Blueprint("tnt", (3, 3), (
(items["sulphur"].key, 1),
(blocks["sand"].key, 1),
(items["sulphur"].key, 1),
(blocks["sand"].key, 1),
(items["sulphur"].key, 1),
(blocks["sand"].key, 1),
(items["sulphur"].key, 1),
(blocks["sand"].key, 1),
(items["sulphur"].key, 1),
), (blocks["tnt"].key, 1)),
Blueprint("track", (3, 3), (
(items["iron-ingot"].key, 1),
None,
(items["iron-ingot"].key, 1),
(items["iron-ingot"].key, 1),
(items["stick"].key, 1),
(items["iron-ingot"].key, 1),
(items["iron-ingot"].key, 1),
None,
(items["iron-ingot"].key, 1),
), (blocks["tracks"].key, 16)),
Blueprint("piston", (3, 3), (
(blocks["wood"].key, 1),
(blocks["wood"].key, 1),
(blocks["wood"].key, 1),
(blocks["cobblestone"].key, 1),
(items["iron-ingot"].key, 1),
(blocks["cobblestone"].key, 1),
(blocks["cobblestone"].key, 1),
(items["redstone"].key, 1),
(blocks["cobblestone"].key, 1),
), (blocks["piston"].key, 1)),
Blueprint("sticky-piston", (1, 2),
((items["slimeball"].key, 1), (blocks["piston"].key, 1)),
(blocks["sticky-piston"].key, 1)),
+ Blueprint("beacon", (3, 3), (
+ (blocks["glass"].key, 1),
+ (blocks["glass"].key, 1),
+ (blocks["glass"].key, 1),
+ (blocks["glass"].key, 1),
+ (items["nether-star"].key, 1),
+ (blocks["glass"].key, 1),
+ (blocks["obsidian"].key, 1),
+ (blocks["obsidian"].key, 1),
+ (blocks["obsidian"].key, 1),
+ ), (blocks["beacon"].key, 1)),
)
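All of the recipe helpers above share one convention: a blueprint is a flat tuple read in row-major order, with `None` marking empty cells. A minimal sketch of that convention, using stand-in `Blueprint`/`Material` types and made-up keys rather than Bravo's real registries:

```python
from collections import namedtuple

# Stand-in types for illustration only; Bravo's real Blueprint and
# block/item registries carry more behavior and use real slot numbers.
Blueprint = namedtuple("Blueprint", "name dimensions cells provides")
Material = namedtuple("Material", "key")

def three_by_three(outer, inner, provides, name):
    # Eight outer cells surrounding one inner cell, row-major order,
    # so the inner material always lands at index 4 (the center).
    cells = ((outer.key, 1),) * 4 + ((inner.key, 1),) + ((outer.key, 1),) * 4
    return Blueprint(name, (3, 3), cells, (provides.key, 1))

wood = Material((5, 0))        # made-up keys for the sketch
redstone = Material((331, 0))
note_block = Material((25, 0))

# Shaped like the note-block recipe in the table above.
bp = three_by_three(wood, redstone, note_block, "noteblock")
```

The open-coded recipes at the bottom of the table (arrow, bed, beacon, and so on) follow the same flat row-major layout; they just don't fit any of the reusable shapes.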
|
bravoserver/bravo
|
402ac3a0c768a90b05d7dc4fb45fe866282bd373
|
blocks: Add nether star and beacon.
|
diff --git a/bravo/blocks.py b/bravo/blocks.py
index 6f52606..8520084 100644
--- a/bravo/blocks.py
+++ b/bravo/blocks.py
@@ -1,837 +1,853 @@
from __future__ import division
faces = ("-y", "+y", "-z", "+z", "-x", "+x")
class Block(object):
"""
A model for a block.
There are lots of rules and properties specific to different types of
blocks. This class encapsulates those properties in a singleton-style
interface, allowing many blocks to be referenced in one location.
The basic idea of this class is to provide some centralized data and
information about blocks, in order to abstract away as many special cases
as possible. In general, if several blocks all have some special behavior,
then it may be worthwhile to store data describing that behavior on this
class rather than special-casing it in multiple places.
"""
__slots__ = (
"_f_dict",
"_o_dict",
"breakable",
"dim",
"drop",
"key",
"name",
"quantity",
"ratio",
"replace",
"slot",
"vanishes",
)
def __init__(self, slot, name, secondary=0, drop=None, replace=0, ratio=1,
quantity=1, dim=16, breakable=True, orientation=None, vanishes=False):
"""
:param int slot: The index of this block. Must be globally unique.
:param str name: A common name for this block.
:param int secondary: The metadata/damage/secondary attribute for this
block. Defaults to zero.
:param tuple drop: The type of block that should be dropped when an
instance of this block is destroyed. Defaults to the block value,
to drop instances of this same type of block. To indicate that
this block does not drop anything, set to air (0, 0).
:param int replace: The type of block to place in the map when
instances of this block are destroyed. Defaults to air.
:param float ratio: The probability of this block dropping a block
on destruction.
:param int quantity: The number of blocks dropped when this block
is destroyed.
:param int dim: How much light dims when passing through this kind
of block. Defaults to 16 = opaque block.
:param bool breakable: Whether this block is diggable, breakable,
bombable, explodeable, etc. Only a few blocks actually genuinely
cannot be broken, so the default is True.
:param tuple orientation: The orientation data for a block. See
:meth:`orientable` for an explanation. The data should be in standard
face order.
:param bool vanishes: Whether this block vanishes, or is replaced by,
another block when built upon.
"""
self.slot = slot
self.name = name
self.key = (self.slot, secondary)
if drop is None:
self.drop = self.key
else:
self.drop = drop
self.replace = replace
self.ratio = ratio
self.quantity = quantity
self.dim = dim
self.breakable = breakable
self.vanishes = vanishes
if orientation:
self._o_dict = dict(zip(faces, orientation))
self._f_dict = dict(zip(orientation, faces))
else:
self._o_dict = self._f_dict = {}
def __str__(self):
"""
Fairly verbose explanation of what this block is capable of.
"""
attributes = []
if not self.breakable:
attributes.append("unbreakable")
if self.dim == 0:
attributes.append("transparent")
elif self.dim < 16:
attributes.append("translucent (%d)" % self.dim)
if self.replace:
attributes.append("becomes %d" % self.replace)
if self.ratio != 1 or self.quantity > 1 or self.drop != self.key:
attributes.append("drops %r (key %r, rate %2.2f%%)" %
(self.quantity, self.drop, self.ratio * 100))
if attributes:
attributes = ": %s" % ", ".join(attributes)
else:
attributes = ""
return "Block(%r %r%s)" % (self.key, self.name, attributes)
__repr__ = __str__
def orientable(self):
"""
Whether this block can be oriented.
Orientable blocks are positioned according to the face on which they
are built. They may not be buildable on all faces. Blocks are only
orientable if their metadata can be used to directly and uniquely
determine the face against which they were built.
Ladders are orientable, signposts are not.
:rtype: bool
:returns: True if this block can be oriented, False if not.
"""
return bool(self._o_dict)
def face(self, metadata):
"""
Retrieve the face for given metadata corresponding to an orientation,
or None if the metadata is invalid for this block.
This method only returns valid data for orientable blocks; check
:meth:`orientable` first.
"""
return self._f_dict.get(metadata)
def orientation(self, face):
"""
Retrieve the metadata for a certain orientation, or None if this block
cannot be built against the given face.
This method only returns valid data for orientable blocks; check
:meth:`orientable` first.
"""
return self._o_dict.get(face)
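The orientation machinery in `Block` reduces to two inverse dictionaries built from one orientation tuple supplied in standard face order. A minimal sketch, with illustrative metadata values rather than ones copied from Bravo's block tables:

```python
# ``Block.orientation`` and ``Block.face`` are just lookups into a pair
# of inverse dicts built from the same orientation tuple.
faces = ("-y", "+y", "-z", "+z", "-x", "+x")

# Illustrative ladder-style metadata: no -y/+y placement, made-up values.
orientation = (None, None, 2, 3, 4, 5)

o_dict = dict(zip(faces, orientation))  # face -> metadata
f_dict = dict(zip(orientation, faces))  # metadata -> face
```

Invalid metadata simply misses in `f_dict`, which is why `Block.face` returns `None` for it, and a block is orientable exactly when these dicts are non-empty.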
class Item(object):
"""
An item.
"""
__slots__ = (
"key",
"name",
"slot",
)
def __init__(self, slot, name, secondary=0):
self.slot = slot
self.name = name
self.key = (self.slot, secondary)
def __str__(self):
return "Item(%r %r)" % (self.key, self.name)
__repr__ = __str__
block_names = [
"air", # 0x0
"stone",
"grass",
"dirt",
"cobblestone",
"wood",
"sapling",
"bedrock",
"water",
"spring",
"lava",
"lava-spring",
"sand",
"gravel",
"gold-ore",
"iron-ore",
"coal-ore", # 0x10
"log",
"leaves",
"sponge",
"glass",
"lapis-lazuli-ore",
"lapis-lazuli-block",
"dispenser",
"sandstone",
"note-block",
"bed-block",
"powered-rail",
"detector-rail",
"sticky-piston",
"spider-web",
"tall-grass",
"shrub", # 0x20
"piston",
"",
"wool",
"",
"flower",
"rose",
"brown-mushroom",
"red-mushroom",
"gold",
"iron",
"double-stone-slab",
"single-stone-slab",
"brick",
"tnt",
"bookshelf",
"mossy-cobblestone", # 0x30
"obsidian",
"torch",
"fire",
"mob-spawner",
"wooden-stairs",
"chest",
"redstone-wire",
"diamond-ore",
"diamond-block",
"workbench",
"crops",
"soil",
"furnace",
"burning-furnace",
"signpost",
"wooden-door-block", # 0x40
"ladder",
"tracks",
"stone-stairs",
"wall-sign",
"lever",
"stone-plate",
"iron-door-block",
"wooden-plate",
"redstone-ore",
"glowing-redstone-ore",
"redstone-torch-off",
"redstone-torch",
"stone-button",
"snow",
"ice",
"snow-block", # 0x50
"cactus",
"clay",
"reed",
"jukebox",
"fence",
"pumpkin",
"brimstone",
"slow-sand",
"lightstone",
"portal",
"jack-o-lantern",
"cake-block",
"redstone-repeater-off",
"redstone-repeater-on",
"locked-chest",
"trapdoor", # 0x60
"hidden-silverfish",
"stone-brick",
"huge-brown-mushroom",
"huge-red-mushroom",
"iron-bars",
"glass-pane",
"melon",
"pumpkin-stem",
"melon-stem",
"vine",
"fence-gate",
"brick-stairs",
"stone-brick-stairs",
"mycelium",
"lily-pad",
"nether-brick", # 0x70
"nether-brick-fence",
"nether-brick-stairs",
"nether-wart-block", # 0x73
"",
"",
"",
"",
"",
"",
"",
"",
"",
"double-wooden-slab",
"single-wooden-slab",
"",
"", # 0x80
"emerald-ore",
"",
"",
"",
"emerald-block", # 0x85
+ "",
+ "",
+ "",
+ "",
+ "beacon", # 0x8a
]
item_names = [
"iron-shovel", # 0x100
"iron-pickaxe",
"iron-axe",
"flint-and-steel",
"apple",
"bow",
"arrow",
"coal",
"diamond",
"iron-ingot",
"gold-ingot",
"iron-sword",
"wooden-sword",
"wooden-shovel",
"wooden-pickaxe",
"wooden-axe",
"stone-sword", # 0x110
"stone-shovel",
"stone-pickaxe",
"stone-axe",
"diamond-sword",
"diamond-shovel",
"diamond-pickaxe",
"diamond-axe",
"stick",
"bowl",
"mushroom-soup",
"gold-sword",
"gold-shovel",
"gold-pickaxe",
"gold-axe",
"string",
"feather", # 0x120
"sulphur",
"wooden-hoe",
"stone-hoe",
"iron-hoe",
"diamond-hoe",
"gold-hoe",
"seeds",
"wheat",
"bread",
"leather-helmet",
"leather-chestplate",
"leather-leggings",
"leather-boots",
"chainmail-helmet",
"chainmail-chestplate",
"chainmail-leggings", # 0x130
"chainmail-boots",
"iron-helmet",
"iron-chestplate",
"iron-leggings",
"iron-boots",
"diamond-helmet",
"diamond-chestplate",
"diamond-leggings",
"diamond-boots",
"gold-helmet",
"gold-chestplate",
"gold-leggings",
"gold-boots",
"flint",
"raw-porkchop",
"cooked-porkchop", # 0x140
"paintings",
"golden-apple",
"sign",
"wooden-door",
"bucket",
"water-bucket",
"lava-bucket",
"mine-cart",
"saddle",
"iron-door",
"redstone",
"snowball",
"boat",
"leather",
"milk",
"clay-brick", # 0x150
"clay-balls",
"sugar-cane",
"paper",
"book",
"slimeball",
"storage-minecart",
"powered-minecart",
"egg",
"compass",
"fishing-rod",
"clock",
"glowstone-dust",
"raw-fish",
"cooked-fish",
"dye",
"bone", # 0x160
"sugar",
"cake",
"bed",
"redstone-repeater",
"cookie",
"map",
"shears",
"melon-slice",
"pumpkin-seeds",
"melon-seeds",
"raw-beef",
"steak",
"raw-chicken",
"cooked-chicken",
"rotten-flesh",
"ender-pearl", # 0x170
"blaze-rod",
"ghast-tear",
"gold-nugget",
"nether-wart",
"potions",
"glass-bottle",
"spider-eye",
"fermented-spider-eye",
"blaze-powder",
"magma-cream", # 0x17a
"",
"",
"",
"",
- "spawn-egg",
+ "spawn-egg", # 0x17f
"", # 0x180
"",
"",
"",
"emerald", #0x184
+ "",
+ "",
+ "",
+ "",
+ "",
+ "",
+ "",
+ "",
+ "",
+ "",
+ "nether-star", # 0x18f
]
special_item_names = [
"gold-music-disc",
"green-music-disc",
"blocks-music-disc",
"chirp-music-disc",
"far-music-disc"
]
dye_names = [
"ink-sac",
"red-dye",
"green-dye",
"cocoa-beans",
"lapis-lazuli",
"purple-dye",
"cyan-dye",
"light-gray-dye",
"gray-dye",
"pink-dye",
"lime-dye",
"yellow-dye",
"light-blue-dye",
"magenta-dye",
"orange-dye",
"bone-meal",
]
wool_names = [
"white-wool",
"orange-wool",
"magenta-wool",
"light-blue-wool",
"yellow-wool",
"lime-wool",
"pink-wool",
"gray-wool",
"light-gray-wool",
"cyan-wool",
"purple-wool",
"blue-wool",
"brown-wool",
"dark-green-wool",
"red-wool",
"black-wool",
]
sapling_names = [
"normal-sapling",
"pine-sapling",
"birch-sapling",
"jungle-sapling",
]
log_names = [
"normal-log",
"pine-log",
"birch-log",
"jungle-log",
]
leaf_names = [
"normal-leaf",
"pine-leaf",
"birch-leaf",
"jungle-leaf",
]
coal_names = [
"normal-coal",
"charcoal",
]
step_names = [
"single-stone-slab",
"single-sandstone-slab",
"single-wooden-slab",
"single-cobblestone-slab",
]
drops = {}
# Block -> block drops.
# If the drop block is zero, then it drops nothing.
drops[1] = (4, 0) # Stone -> Cobblestone
drops[2] = (3, 0) # Grass -> Dirt
drops[20] = (0, 0) # Glass
drops[52] = (0, 0) # Mob spawner
drops[60] = (3, 0) # Soil -> Dirt
drops[62] = (61, 0) # Burning Furnace -> Furnace
drops[78] = (0, 0) # Snow
# Block -> item drops.
drops[16] = (263, 0) # Coal Ore Block -> Coal
drops[56] = (264, 0) # Diamond Ore Block -> Diamond
drops[63] = (323, 0) # Sign Post -> Sign Item
drops[68] = (323, 0) # Wall Sign -> Sign Item
drops[83] = (338, 0) # Reed -> Reed Item
drops[89] = (348, 0) # Lightstone -> Lightstone Dust
drops[93] = (356, 0) # Redstone Repeater, on -> Redstone Repeater
drops[94] = (356, 0) # Redstone Repeater, off -> Redstone Repeater
drops[97] = (0, 0) # Hidden Silverfish
drops[110] = (3, 0) # Mycelium -> Dirt
drops[111] = (0, 0) # Lily Pad
drops[115] = (372, 0) # Nether Wart Block -> Nether Wart
unbreakables = set()
unbreakables.add(0) # Air
unbreakables.add(7) # Bedrock
unbreakables.add(10) # Lava
unbreakables.add(11) # Lava spring
# When one of these is targeted and a block is placed, these are replaced
softblocks = set()
softblocks.add(30) # Cobweb
softblocks.add(31) # Tall grass
softblocks.add(70) # Snow
softblocks.add(106) # Vines
dims = {}
dims[0] = 0 # Air
dims[6] = 0 # Sapling
dims[10] = 0 # Lava
dims[11] = 0 # Lava spring
dims[20] = 0 # Glass
dims[26] = 0 # Bed
dims[37] = 0 # Yellow Flowers
dims[38] = 0 # Red Flowers
dims[39] = 0 # Brown Mushrooms
dims[40] = 0 # Red Mushrooms
dims[44] = 0 # Single Step
dims[51] = 0 # Fire
dims[52] = 0 # Mob spawner
dims[53] = 0 # Wooden stairs
dims[55] = 0 # Redstone (Wire)
dims[59] = 0 # Crops
dims[60] = 0 # Soil
dims[63] = 0 # Sign
dims[64] = 0 # Wood door
dims[66] = 0 # Rails
dims[67] = 0 # Stone stairs
dims[68] = 0 # Sign (on wall)
dims[69] = 0 # Lever
dims[70] = 0 # Stone Pressure Plate
dims[71] = 0 # Iron door
dims[72] = 0 # Wood Pressure Plate
dims[78] = 0 # Snow
dims[81] = 0 # Cactus
dims[83] = 0 # Sugar Cane
dims[85] = 0 # Fence
dims[90] = 0 # Portal
dims[92] = 0 # Cake
dims[93] = 0 # redstone-repeater-off
dims[94] = 0 # redstone-repeater-on
blocks = {}
"""
A dictionary of ``Block`` objects.
This dictionary can be indexed by slot number or block name.
"""
def _add_block(block):
blocks[block.slot] = block
blocks[block.name] = block
# Special blocks. Please remember to comment *what* makes the block special;
# most of us don't have all blocks memorized yet.
# Water (both kinds) is unbreakable, and dims by 3.
_add_block(Block(8, "water", breakable=False, dim=3))
_add_block(Block(9, "spring", breakable=False, dim=3))
# Gravel drops flint, with 1 in 10 odds.
_add_block(Block(13, "gravel", drop=(318, 0), ratio=1 / 10))
# Leaves drop saplings, with 1 in 9 odds, and dims by 1.
_add_block(Block(18, "leaves", drop=(6, 0), ratio=1 / 9, dim=1))
# Lapis lazuli ore drops 6 lapis lazuli items.
_add_block(Block(21, "lapis-lazuli-ore", drop=(351, 4), quantity=6))
# Beds are orientable and drop the Bed item.
_add_block(Block(26, "bed-block", drop=(355, 0),
orientation=(None, None, 2, 0, 1, 3)))
# Torches are orientable and don't dim.
_add_block(Block(50, "torch", orientation=(None, 5, 4, 3, 2, 1), dim=0))
# Chests are orientable.
_add_block(Block(54, "chest", orientation=(None, None, 2, 3, 4, 5)))
# Furnaces are orientable.
_add_block(Block(61, "furnace", orientation=(None, None, 2, 3, 4, 5)))
# Wooden doors are orientable and drop the Wooden Door item.
_add_block(Block(64, "wooden-door-block", drop=(324, 0),
orientation=(None, None, 1, 3, 0, 2)))
# Ladders are orientable and don't dim.
_add_block(Block(65, "ladder", orientation=(None, None, 2, 3, 4, 5), dim=0))
# Levers are orientable and don't dim. Additionally, levers have special
# handling so that they can be oriented in two different ways.
_add_block(Block(69, "lever", orientation=(None, 5, 4, 3, 2, 1), dim=0))
blocks["lever"]._f_dict.update(
{13: "+y", 12: "-z", 11: "+z", 10: "-x", 9: "+x"})
# Iron doors are orientable and drop the Iron Door item.
_add_block(Block(71, "iron-door-block", drop=(330, 0),
orientation=(None, None, 1, 3, 0, 2)))
# Redstone ore drops 5 redstone dusts.
_add_block(Block(73, "redstone-ore", drop=(331, 0), quantity=5))
_add_block(Block(74, "glowing-redstone-ore", drop=(331, 0), quantity=5))
# Redstone torches are orientable and don't dim.
_add_block(Block(75, "redstone-torch-off", orientation=(None, 5, 4, 3, 2, 1), dim=0))
_add_block(Block(76, "redstone-torch", orientation=(None, 5, 4, 3, 2, 1), dim=0))
# Stone buttons are orientable and don't dim.
_add_block(Block(77, "stone-button", orientation=(None, None, 1, 2, 3, 4), dim=0))
# Snow vanishes upon build.
_add_block(Block(78, "snow", vanishes=True))
# Ice drops nothing, is replaced by springs, and dims by 3.
_add_block(Block(79, "ice", drop=(0, 0), replace=9, dim=3))
# Clay drops 4 clay balls.
_add_block(Block(82, "clay", drop=(337, 0), quantity=4))
# Trapdoors are orientable.
_add_block(Block(96, "trapdoor", orientation=(None, None, 0, 1, 2, 3)))
# Giant brown mushrooms drop brown mushrooms.
_add_block(Block(99, "huge-brown-mushroom", drop=(39, 0), quantity=2))
# Giant red mushrooms drop red mushrooms.
_add_block(Block(100, "huge-red-mushroom", drop=(40, 0), quantity=2))
# Pumpkin stems drop pumpkin seeds.
_add_block(Block(104, "pumpkin-stem", drop=(361, 0), quantity=3))
# Melon stems drop melon seeds.
_add_block(Block(105, "melon-stem", drop=(362, 0), quantity=3))
for block in blocks.values():
blocks[block.name] = block
blocks[block.slot] = block
items = {}
"""
A dictionary of ``Item`` objects.
This dictionary can be indexed by slot number or item name.
"""
for i, name in enumerate(block_names):
if not name or name in blocks:
continue
kwargs = {}
if i in drops:
kwargs["drop"] = drops[i]
if i in unbreakables:
kwargs["breakable"] = False
if i in dims:
kwargs["dim"] = dims[i]
b = Block(i, name, **kwargs)
_add_block(b)
for i, name in enumerate(item_names):
kwargs = {}
i += 0x100
item = Item(i, name, **kwargs)
items[i] = item
items[name] = item
for i, name in enumerate(special_item_names):
kwargs = {}
i += 0x8D0
item = Item(i, name, **kwargs)
items[i] = item
items[name] = item
_secondary_items = {
items["coal"]: coal_names,
items["dye"]: dye_names,
}
for base_item, names in _secondary_items.iteritems():
for i, name in enumerate(names):
kwargs = {}
item = Item(base_item.slot, name, i, **kwargs)
items[name] = item
_secondary_blocks = {
blocks["leaves"]: leaf_names,
blocks["log"]: log_names,
blocks["sapling"]: sapling_names,
blocks["single-stone-slab"]: step_names,
blocks["wool"]: wool_names,
}
for base_block, names in _secondary_blocks.iteritems():
for i, name in enumerate(names):
kwargs = {}
kwargs["drop"] = base_block.drop
kwargs["breakable"] = base_block.breakable
kwargs["dim"] = base_block.dim
block = Block(base_block.slot, name, i, **kwargs)
_add_block(block)
glowing_blocks = {
blocks["torch"].slot: 14,
blocks["lightstone"].slot: 15,
blocks["jack-o-lantern"].slot: 15,
blocks["fire"].slot: 15,
blocks["lava"].slot: 15,
blocks["lava-spring"].slot: 15,
blocks["locked-chest"].slot: 15,
blocks["burning-furnace"].slot: 13,
blocks["portal"].slot: 11,
blocks["glowing-redstone-ore"].slot: 9,
blocks["redstone-repeater-on"].slot: 9,
blocks["redstone-torch"].slot: 7,
blocks["brown-mushroom"].slot: 1,
}
armor_helmets = (86, 298, 302, 306, 310, 314)
"""
List of slots of helmets.
Note that slot 86 (pumpkin) is a helmet.
"""
armor_chestplates = (299, 303, 307, 311, 315)
"""
List of slots of chestplates.
Note that slot 303 (chainmail chestplate) is a chestplate, even though it is
not normally obtainable.
"""
armor_leggings = (300, 304, 308, 312, 316)
"""
List of slots of leggings.
"""
armor_boots = (301, 305, 309, 313, 317)
"""
List of slots of boots.
"""
"""
List of unstackable items
"""
unstackable = (
items["wooden-sword"].slot,
items["wooden-shovel"].slot,
items["wooden-pickaxe"].slot,
# TODO: update the list
)
"""
List of fuel blocks and items maped to burn time
"""
furnace_fuel = {
items["stick"].slot: 10, # 5s
blocks["sapling"].slot: 10, # 5s
blocks["wood"].slot: 30, # 15s
blocks["fence"].slot: 30, # 15s
blocks["wooden-stairs"].slot: 30, # 15s
blocks["trapdoor"].slot: 30, # 15s
blocks["log"].slot: 30, # 15s
blocks["workbench"].slot: 30, # 15s
blocks["bookshelf"].slot: 30, # 15s
blocks["chest"].slot: 30, # 15s
blocks["locked-chest"].slot: 30, # 15s
blocks["jukebox"].slot: 30, # 15s
blocks["note-block"].slot: 30, # 15s
items["coal"].slot: 160, # 80s
items["lava-bucket"].slot: 2000 # 1000s
}
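The per-entry comments above (10 → 5s, 30 → 15s, 160 → 80s, 2000 → 1000s) imply that burn times are stored in ticks at two per second. A minimal standalone sketch of that conversion, using a few slot numbers taken from the tables above (calling the unit "furnace ticks" is an assumption; only the /2 relationship is evidenced by the comments):

```python
# Burn times copied from the furnace_fuel table above
# (keys are slot numbers; values are burn times in ticks).
FURNACE_FUEL = {
    280: 10,    # stick: 5s
    5: 30,      # wood: 15s
    263: 160,   # coal: 80s
    327: 2000,  # lava-bucket: 1000s
}

def burn_seconds(slot):
    """Convert a fuel's burn time to seconds: two ticks per second."""
    return FURNACE_FUEL[slot] / 2

assert burn_seconds(280) == 5
assert burn_seconds(327) == 1000
```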
def parse_block(block):
"""
Get the key for a given block/item.
"""
try:
if block.startswith("0x") and (
(int(block, 16) in blocks) or (int(block, 16) in items)):
return (int(block, 16), 0)
elif (int(block) in blocks) or (int(block) in items):
return (int(block), 0)
else:
raise Exception("Couldn't find block id %s!" % block)
except ValueError:
if block in blocks:
return blocks[block].key
elif block in items:
return items[block].key
else:
raise Exception("Couldn't parse block %s!" % block)
|
bravoserver/bravo
|
c724d2419baefa912963569cfcfe365cb6935627
|
blocks: Add spawn egg.
|
diff --git a/bravo/blocks.py b/bravo/blocks.py
index 43c4c21..6f52606 100644
--- a/bravo/blocks.py
+++ b/bravo/blocks.py
@@ -1,837 +1,837 @@
from __future__ import division
faces = ("-y", "+y", "-z", "+z", "-x", "+x")
class Block(object):
"""
A model for a block.
There are lots of rules and properties specific to different types of
blocks. This class encapsulates those properties in a singleton-style
interface, allowing many blocks to be referenced in one location.
The basic idea of this class is to provide some centralized data and
information about blocks, in order to abstract away as many special cases
as possible. In general, if several blocks all have some special behavior,
then it may be worthwhile to store data describing that behavior on this
class rather than special-casing it in multiple places.
"""
__slots__ = (
"_f_dict",
"_o_dict",
"breakable",
"dim",
"drop",
"key",
"name",
"quantity",
"ratio",
"replace",
"slot",
"vanishes",
)
def __init__(self, slot, name, secondary=0, drop=None, replace=0, ratio=1,
quantity=1, dim=16, breakable=True, orientation=None, vanishes=False):
"""
:param int slot: The index of this block. Must be globally unique.
:param str name: A common name for this block.
:param int secondary: The metadata/damage/secondary attribute for this
block. Defaults to zero.
:param tuple drop: The type of block that should be dropped when an
instance of this block is destroyed. Defaults to the block value,
to drop instances of this same type of block. To indicate that
this block does not drop anything, set to air (0, 0).
:param int replace: The type of block to place in the map when
instances of this block are destroyed. Defaults to air.
:param float ratio: The probability of this block dropping a block
on destruction.
:param int quantity: The number of blocks dropped when this block
is destroyed.
:param int dim: How much light dims when passing through this kind
of block. Defaults to 16 = opaque block.
:param bool breakable: Whether this block is diggable, breakable,
bombable, explodeable, etc. Only a few blocks actually genuinely
cannot be broken, so the default is True.
:param tuple orientation: The orientation data for a block. See
:meth:`orientable` for an explanation. The data should be in standard
face order.
:param bool vanishes: Whether this block vanishes, or is replaced by,
another block when built upon.
"""
self.slot = slot
self.name = name
self.key = (self.slot, secondary)
if drop is None:
self.drop = self.key
else:
self.drop = drop
self.replace = replace
self.ratio = ratio
self.quantity = quantity
self.dim = dim
self.breakable = breakable
self.vanishes = vanishes
if orientation:
self._o_dict = dict(zip(faces, orientation))
self._f_dict = dict(zip(orientation, faces))
else:
self._o_dict = self._f_dict = {}
def __str__(self):
"""
Fairly verbose explanation of what this block is capable of.
"""
attributes = []
if not self.breakable:
attributes.append("unbreakable")
if self.dim == 0:
attributes.append("transparent")
elif self.dim < 16:
attributes.append("translucent (%d)" % self.dim)
if self.replace:
attributes.append("becomes %d" % self.replace)
if self.ratio != 1 or self.quantity > 1 or self.drop != self.key:
attributes.append("drops %r (key %r, rate %2.2f%%)" %
(self.quantity, self.drop, self.ratio * 100))
if attributes:
attributes = ": %s" % ", ".join(attributes)
else:
attributes = ""
return "Block(%r %r%s)" % (self.key, self.name, attributes)
__repr__ = __str__
def orientable(self):
"""
Whether this block can be oriented.
Orientable blocks are positioned according to the face on which they
are built. They may not be buildable on all faces. Blocks are only
orientable if their metadata can be used to directly and uniquely
determine the face against which they were built.
Ladders are orientable, signposts are not.
:rtype: bool
:returns: True if this block can be oriented, False if not.
"""
return bool(self._o_dict)
def face(self, metadata):
"""
Retrieve the face for given metadata corresponding to an orientation,
or None if the metadata is invalid for this block.
This method only returns valid data for orientable blocks; check
:meth:`orientable` first.
"""
return self._f_dict.get(metadata)
def orientation(self, face):
"""
Retrieve the metadata for a certain orientation, or None if this block
cannot be built against the given face.
This method only returns valid data for orientable blocks; check
:meth:`orientable` first.
"""
return self._o_dict.get(face)
class Item(object):
"""
An item.
"""
__slots__ = (
"key",
"name",
"slot",
)
def __init__(self, slot, name, secondary=0):
self.slot = slot
self.name = name
self.key = (self.slot, secondary)
def __str__(self):
return "Item(%r %r)" % (self.key, self.name)
__repr__ = __str__
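The orientation machinery in ``Block`` boils down to two inverse dictionaries zipped against the standard face order. A self-contained sketch of that round trip, using the ladder's orientation tuple from later in this file (``OrientableSketch`` is a trimmed illustration, not the real class):

```python
faces = ("-y", "+y", "-z", "+z", "-x", "+x")

class OrientableSketch(object):
    """Minimal reproduction of Block's orientation bookkeeping."""
    def __init__(self, orientation):
        # face -> metadata (None where the block cannot attach)
        self._o_dict = dict(zip(faces, orientation))
        # metadata -> face: the inverse mapping
        self._f_dict = dict(zip(orientation, faces))

    def orientation(self, face):
        return self._o_dict.get(face)

    def face(self, metadata):
        return self._f_dict.get(metadata)

# The ladder's orientation data, in standard face order.
ladder = OrientableSketch((None, None, 2, 3, 4, 5))
assert ladder.orientation("-z") == 2   # built against north face
assert ladder.face(5) == "+x"          # metadata 5 maps back to +x
assert ladder.orientation("-y") is None  # can't attach to top/bottom
```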
block_names = [
"air", # 0x0
"stone",
"grass",
"dirt",
"cobblestone",
"wood",
"sapling",
"bedrock",
"water",
"spring",
"lava",
"lava-spring",
"sand",
"gravel",
"gold-ore",
"iron-ore",
"coal-ore", # 0x10
"log",
"leaves",
"sponge",
"glass",
"lapis-lazuli-ore",
"lapis-lazuli-block",
"dispenser",
"sandstone",
"note-block",
"bed-block",
"powered-rail",
"detector-rail",
"sticky-piston",
"spider-web",
"tall-grass",
"shrub", # 0x20
"piston",
"",
"wool",
"",
"flower",
"rose",
"brown-mushroom",
"red-mushroom",
"gold",
"iron",
"double-stone-slab",
"single-stone-slab",
"brick",
"tnt",
"bookshelf",
"mossy-cobblestone", # 0x30
"obsidian",
"torch",
"fire",
"mob-spawner",
"wooden-stairs",
"chest",
"redstone-wire",
"diamond-ore",
"diamond-block",
"workbench",
"crops",
"soil",
"furnace",
"burning-furnace",
"signpost",
"wooden-door-block", # 0x40
"ladder",
"tracks",
"stone-stairs",
"wall-sign",
"lever",
"stone-plate",
"iron-door-block",
"wooden-plate",
"redstone-ore",
"glowing-redstone-ore",
"redstone-torch-off",
"redstone-torch",
"stone-button",
"snow",
"ice",
"snow-block", # 0x50
"cactus",
"clay",
"reed",
"jukebox",
"fence",
"pumpkin",
"brimstone",
"slow-sand",
"lightstone",
"portal",
"jack-o-lantern",
"cake-block",
"redstone-repeater-off",
"redstone-repeater-on",
"locked-chest",
"trapdoor", # 0x60
"hidden-silverfish",
"stone-brick",
"huge-brown-mushroom",
"huge-red-mushroom",
"iron-bars",
"glass-pane",
"melon",
"pumpkin-stem",
"melon-stem",
"vine",
"fence-gate",
"brick-stairs",
"stone-brick-stairs",
"mycelium",
"lily-pad",
"nether-brick", # 0x70
"nether-brick-fence",
"nether-brick-stairs",
"nether-wart-block", # 0x73
"",
"",
"",
"",
"",
"",
"",
"",
"",
"double-wooden-slab",
"single-wooden-slab",
"",
"", # 0x80
"emerald-ore",
"",
"",
"",
"emerald-block", # 0x85
]
item_names = [
"iron-shovel", # 0x100
"iron-pickaxe",
"iron-axe",
"flint-and-steel",
"apple",
"bow",
"arrow",
"coal",
"diamond",
"iron-ingot",
"gold-ingot",
"iron-sword",
"wooden-sword",
"wooden-shovel",
"wooden-pickaxe",
"wooden-axe",
"stone-sword", # 0x110
"stone-shovel",
"stone-pickaxe",
"stone-axe",
"diamond-sword",
"diamond-shovel",
"diamond-pickaxe",
"diamond-axe",
"stick",
"bowl",
"mushroom-soup",
"gold-sword",
"gold-shovel",
"gold-pickaxe",
"gold-axe",
"string",
"feather", # 0x120
"sulphur",
"wooden-hoe",
"stone-hoe",
"iron-hoe",
"diamond-hoe",
"gold-hoe",
"seeds",
"wheat",
"bread",
"leather-helmet",
"leather-chestplate",
"leather-leggings",
"leather-boots",
"chainmail-helmet",
"chainmail-chestplate",
"chainmail-leggings", # 0x130
"chainmail-boots",
"iron-helmet",
"iron-chestplate",
"iron-leggings",
"iron-boots",
"diamond-helmet",
"diamond-chestplate",
"diamond-leggings",
"diamond-boots",
"gold-helmet",
"gold-chestplate",
"gold-leggings",
"gold-boots",
"flint",
"raw-porkchop",
"cooked-porkchop", # 0x140
"paintings",
"golden-apple",
"sign",
"wooden-door",
"bucket",
"water-bucket",
"lava-bucket",
"mine-cart",
"saddle",
"iron-door",
"redstone",
"snowball",
"boat",
"leather",
"milk",
"clay-brick", # 0x150
"clay-balls",
"sugar-cane",
"paper",
"book",
"slimeball",
"storage-minecart",
"powered-minecart",
"egg",
"compass",
"fishing-rod",
"clock",
"glowstone-dust",
"raw-fish",
"cooked-fish",
"dye",
"bone", # 0x160
"sugar",
"cake",
"bed",
"redstone-repeater",
"cookie",
"map",
"shears",
"melon-slice",
"pumpkin-seeds",
"melon-seeds",
"raw-beef",
"steak",
"raw-chicken",
"cooked-chicken",
"rotten-flesh",
"ender-pearl", # 0x170
"blaze-rod",
"ghast-tear",
"gold-nugget",
"nether-wart",
"potions",
"glass-bottle",
"spider-eye",
"fermented-spider-eye",
"blaze-powder",
"magma-cream", # 0x17a
"",
"",
"",
"",
- "",
+ "spawn-egg",
"", # 0x180
"",
"",
"",
"emerald", #0x184
]
special_item_names = [
"gold-music-disc",
"green-music-disc",
"blocks-music-disc",
"chirp-music-disc",
"far-music-disc"
]
dye_names = [
"ink-sac",
"red-dye",
"green-dye",
"cocoa-beans",
"lapis-lazuli",
"purple-dye",
"cyan-dye",
"light-gray-dye",
"gray-dye",
"pink-dye",
"lime-dye",
"yellow-dye",
"light-blue-dye",
"magenta-dye",
"orange-dye",
"bone-meal",
]
wool_names = [
"white-wool",
"orange-wool",
"magenta-wool",
"light-blue-wool",
"yellow-wool",
"lime-wool",
"pink-wool",
"gray-wool",
"light-gray-wool",
"cyan-wool",
"purple-wool",
"blue-wool",
"brown-wool",
"dark-green-wool",
"red-wool",
"black-wool",
]
sapling_names = [
"normal-sapling",
"pine-sapling",
"birch-sapling",
"jungle-sapling",
]
log_names = [
"normal-log",
"pine-log",
"birch-log",
"jungle-log",
]
leaf_names = [
"normal-leaf",
"pine-leaf",
"birch-leaf",
"jungle-leaf",
]
coal_names = [
"normal-coal",
"charcoal",
]
step_names = [
"single-stone-slab",
"single-sandstone-slab",
"single-wooden-slab",
"single-cobblestone-slab",
]
drops = {}
# Block -> block drops.
# If the drop block is zero, then it drops nothing.
drops[1] = (4, 0) # Stone -> Cobblestone
drops[2] = (3, 0) # Grass -> Dirt
drops[20] = (0, 0) # Glass
drops[52] = (0, 0) # Mob spawner
drops[60] = (3, 0) # Soil -> Dirt
drops[62] = (61, 0) # Burning Furnace -> Furnace
drops[78] = (0, 0) # Snow
# Block -> item drops.
drops[16] = (263, 0) # Coal Ore Block -> Coal
drops[56] = (264, 0) # Diamond Ore Block -> Diamond
drops[63] = (323, 0) # Sign Post -> Sign Item
drops[68] = (323, 0) # Wall Sign -> Sign Item
drops[83] = (338, 0) # Reed -> Reed Item
drops[89] = (348, 0) # Lightstone -> Lightstone Dust
drops[93] = (356, 0) # Redstone Repeater, on -> Redstone Repeater
drops[94] = (356, 0) # Redstone Repeater, off -> Redstone Repeater
drops[97] = (0, 0) # Hidden Silverfish
drops[110] = (3, 0) # Mycelium -> Dirt
drops[111] = (0, 0) # Lily Pad
drops[115] = (372, 0) # Nether Wart Block -> Nether Wart
unbreakables = set()
unbreakables.add(0) # Air
unbreakables.add(7) # Bedrock
unbreakables.add(10) # Lava
unbreakables.add(11) # Lava spring
# When one of these is targeted and a block is placed, these are replaced
softblocks = set()
softblocks.add(30) # Cobweb
softblocks.add(31) # Tall grass
softblocks.add(70) # Snow
softblocks.add(106) # Vines
dims = {}
dims[0] = 0 # Air
dims[6] = 0 # Sapling
dims[10] = 0 # Lava
dims[11] = 0 # Lava spring
dims[20] = 0 # Glass
dims[26] = 0 # Bed
dims[37] = 0 # Yellow Flowers
dims[38] = 0 # Red Flowers
dims[39] = 0 # Brown Mushrooms
dims[40] = 0 # Red Mushrooms
dims[44] = 0 # Single Step
dims[51] = 0 # Fire
dims[52] = 0 # Mob spawner
dims[53] = 0 # Wooden stairs
dims[55] = 0 # Redstone (Wire)
dims[59] = 0 # Crops
dims[60] = 0 # Soil
dims[63] = 0 # Sign
dims[64] = 0 # Wood door
dims[66] = 0 # Rails
dims[67] = 0 # Stone stairs
dims[68] = 0 # Sign (on wall)
dims[69] = 0 # Lever
dims[70] = 0 # Stone Pressure Plate
dims[71] = 0 # Iron door
dims[72] = 0 # Wood Pressure Plate
dims[78] = 0 # Snow
dims[81] = 0 # Cactus
dims[83] = 0 # Sugar Cane
dims[85] = 0 # Fence
dims[90] = 0 # Portal
dims[92] = 0 # Cake
dims[93] = 0 # redstone-repeater-off
dims[94] = 0 # redstone-repeater-on
blocks = {}
"""
A dictionary of ``Block`` objects.
This dictionary can be indexed by slot number or block name.
"""
def _add_block(block):
blocks[block.slot] = block
blocks[block.name] = block
# Special blocks. Please remember to comment *what* makes the block special;
# most of us don't have all blocks memorized yet.
# Water (both kinds) is unbreakable, and dims by 3.
_add_block(Block(8, "water", breakable=False, dim=3))
_add_block(Block(9, "spring", breakable=False, dim=3))
# Gravel drops flint, with 1 in 10 odds.
_add_block(Block(13, "gravel", drop=(318, 0), ratio=1 / 10))
# Leaves drop saplings, with 1 in 9 odds, and dims by 1.
_add_block(Block(18, "leaves", drop=(6, 0), ratio=1 / 9, dim=1))
# Lapis lazuli ore drops 6 lapis lazuli items.
_add_block(Block(21, "lapis-lazuli-ore", drop=(351, 4), quantity=6))
# Beds are orientable and drop the Bed item.
_add_block(Block(26, "bed-block", drop=(355, 0),
orientation=(None, None, 2, 0, 1, 3)))
# Torches are orientable and don't dim.
_add_block(Block(50, "torch", orientation=(None, 5, 4, 3, 2, 1), dim=0))
# Chests are orientable.
_add_block(Block(54, "chest", orientation=(None, None, 2, 3, 4, 5)))
# Furnaces are orientable.
_add_block(Block(61, "furnace", orientation=(None, None, 2, 3, 4, 5)))
# Wooden doors are orientable and drop the Wooden Door item.
_add_block(Block(64, "wooden-door-block", drop=(324, 0),
orientation=(None, None, 1, 3, 0, 2)))
# Ladders are orientable and don't dim.
_add_block(Block(65, "ladder", orientation=(None, None, 2, 3, 4, 5), dim=0))
# Levers are orientable and don't dim. Additionally, levers have special
# handling so that they can be oriented in two different ways.
_add_block(Block(69, "lever", orientation=(None, 5, 4, 3, 2, 1), dim=0))
blocks["lever"]._f_dict.update(
{13: "+y", 12: "-z", 11: "+z", 10: "-x", 9: "+x"})
# Iron doors are orientable and drop the Iron Door item.
_add_block(Block(71, "iron-door-block", drop=(330, 0),
orientation=(None, None, 1, 3, 0, 2)))
# Redstone ore drops 5 redstone dusts.
_add_block(Block(73, "redstone-ore", drop=(331, 0), quantity=5))
_add_block(Block(74, "glowing-redstone-ore", drop=(331, 0), quantity=5))
# Redstone torches are orientable and don't dim.
_add_block(Block(75, "redstone-torch-off", orientation=(None, 5, 4, 3, 2, 1), dim=0))
_add_block(Block(76, "redstone-torch", orientation=(None, 5, 4, 3, 2, 1), dim=0))
# Stone buttons are orientable and don't dim.
_add_block(Block(77, "stone-button", orientation=(None, None, 1, 2, 3, 4), dim=0))
# Snow vanishes upon build.
_add_block(Block(78, "snow", vanishes=True))
# Ice drops nothing, is replaced by springs, and dims by 3.
_add_block(Block(79, "ice", drop=(0, 0), replace=9, dim=3))
# Clay drops 4 clay balls.
_add_block(Block(82, "clay", drop=(337, 0), quantity=4))
# Trapdoors are orientable.
_add_block(Block(96, "trapdoor", orientation=(None, None, 0, 1, 2, 3)))
# Giant brown mushrooms drop brown mushrooms.
_add_block(Block(99, "huge-brown-mushroom", drop=(39, 0), quantity=2))
# Giant red mushrooms drop red mushrooms.
_add_block(Block(100, "huge-red-mushroom", drop=(40, 0), quantity=2))
# Pumpkin stems drop pumpkin seeds.
_add_block(Block(104, "pumpkin-stem", drop=(361, 0), quantity=3))
# Melon stems drop melon seeds.
_add_block(Block(105, "melon-stem", drop=(362, 0), quantity=3))
for block in blocks.values():
blocks[block.name] = block
blocks[block.slot] = block
items = {}
"""
A dictionary of ``Item`` objects.
This dictionary can be indexed by slot number or item name.
"""
for i, name in enumerate(block_names):
if not name or name in blocks:
continue
kwargs = {}
if i in drops:
kwargs["drop"] = drops[i]
if i in unbreakables:
kwargs["breakable"] = False
if i in dims:
kwargs["dim"] = dims[i]
b = Block(i, name, **kwargs)
_add_block(b)
for i, name in enumerate(item_names):
kwargs = {}
i += 0x100
item = Item(i, name, **kwargs)
items[i] = item
items[name] = item
for i, name in enumerate(special_item_names):
kwargs = {}
i += 0x8D0
item = Item(i, name, **kwargs)
items[i] = item
items[name] = item
_secondary_items = {
items["coal"]: coal_names,
items["dye"]: dye_names,
}
for base_item, names in _secondary_items.iteritems():
for i, name in enumerate(names):
kwargs = {}
item = Item(base_item.slot, name, i, **kwargs)
items[name] = item
_secondary_blocks = {
blocks["leaves"]: leaf_names,
blocks["log"]: log_names,
blocks["sapling"]: sapling_names,
blocks["single-stone-slab"]: step_names,
blocks["wool"]: wool_names,
}
for base_block, names in _secondary_blocks.iteritems():
for i, name in enumerate(names):
kwargs = {}
kwargs["drop"] = base_block.drop
kwargs["breakable"] = base_block.breakable
kwargs["dim"] = base_block.dim
block = Block(base_block.slot, name, i, **kwargs)
_add_block(block)
glowing_blocks = {
blocks["torch"].slot: 14,
blocks["lightstone"].slot: 15,
blocks["jack-o-lantern"].slot: 15,
blocks["fire"].slot: 15,
blocks["lava"].slot: 15,
blocks["lava-spring"].slot: 15,
blocks["locked-chest"].slot: 15,
blocks["burning-furnace"].slot: 13,
blocks["portal"].slot: 11,
blocks["glowing-redstone-ore"].slot: 9,
blocks["redstone-repeater-on"].slot: 9,
blocks["redstone-torch"].slot: 7,
blocks["brown-mushroom"].slot: 1,
}
armor_helmets = (86, 298, 302, 306, 310, 314)
"""
List of slots of helmets.
Note that slot 86 (pumpkin) is a helmet.
"""
armor_chestplates = (299, 303, 307, 311, 315)
"""
List of slots of chestplates.
Note that slot 303 (chainmail chestplate) is a chestplate, even though it is
not normally obtainable.
"""
armor_leggings = (300, 304, 308, 312, 316)
"""
List of slots of leggings.
"""
armor_boots = (301, 305, 309, 313, 317)
"""
List of slots of boots.
"""
"""
List of unstackable items
"""
unstackable = (
items["wooden-sword"].slot,
items["wooden-shovel"].slot,
items["wooden-pickaxe"].slot,
# TODO: update the list
)
"""
List of fuel blocks and items maped to burn time
"""
furnace_fuel = {
items["stick"].slot: 10, # 5s
blocks["sapling"].slot: 10, # 5s
blocks["wood"].slot: 30, # 15s
blocks["fence"].slot: 30, # 15s
blocks["wooden-stairs"].slot: 30, # 15s
blocks["trapdoor"].slot: 30, # 15s
blocks["log"].slot: 30, # 15s
blocks["workbench"].slot: 30, # 15s
blocks["bookshelf"].slot: 30, # 15s
blocks["chest"].slot: 30, # 15s
blocks["locked-chest"].slot: 30, # 15s
blocks["jukebox"].slot: 30, # 15s
blocks["note-block"].slot: 30, # 15s
items["coal"].slot: 160, # 80s
items["lava-bucket"].slot: 2000 # 1000s
}
def parse_block(block):
"""
Get the key for a given block/item.
"""
try:
if block.startswith("0x") and (
(int(block, 16) in blocks) or (int(block, 16) in items)):
return (int(block, 16), 0)
elif (int(block) in blocks) or (int(block) in items):
return (int(block), 0)
else:
raise Exception("Couldn't find block id %s!" % block)
except ValueError:
if block in blocks:
return blocks[block].key
elif block in items:
return items[block].key
else:
raise Exception("Couldn't parse block %s!" % block)
|
bravoserver/bravo
|
5f243855bf91bb9e628f13ffc5b5cf5c60177bdf
|
beta/protocol: Use hex numbers in logs.
|
diff --git a/bravo/beta/protocol.py b/bravo/beta/protocol.py
index 82f3474..d0f3c86 100644
--- a/bravo/beta/protocol.py
+++ b/bravo/beta/protocol.py
@@ -572,931 +572,931 @@ class BetaServerProtocol(object, Protocol, TimeoutMixin):
for instrument, pitch in notes:
self.write_packet("note", x=x, y=y, z=z, pitch=pitch,
instrument=instrument)
self.write_packet("block", x=x, y=y, z=z, type=block, meta=meta)
def send_chat(self, message):
"""
Send a chat message back to the client.
"""
data = json.dumps({"text": message})
self.write_packet("chat", message=data)
# Automatic properties. Assigning to them causes the client to be notified
# of changes.
@property
def health(self):
return self._health
@health.setter
def health(self, value):
if not 0 <= value <= 20:
raise BetaClientError("Invalid health value %d" % value)
if self._health != value:
self.write_packet("health", hp=value, fp=0, saturation=0)
self._health = value
@property
def latency(self):
return self._latency
@latency.setter
def latency(self, value):
# Clamp the value to not exceed the boundaries of the packet. This is
# necessary even though, in theory, a ping this high is bad news.
value = clamp(value, 0, 65535)
# Check to see if this is a new value, and if so, alert everybody.
if self._latency != value:
packet = make_packet("players", name=self.username, online=True,
ping=value)
self.factory.broadcast(packet)
self._latency = value
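The ``latency`` property above combines two small patterns: clamping the value to the packet's unsigned-short bounds, and broadcasting only when the value actually changes. A generic standalone sketch of that combination (``clamp`` here is a local stand-in for the helper the real module imports, and ``LatencyTracker`` is illustrative, not part of Bravo):

```python
def clamp(value, low, high):
    # Local stand-in for the clamp helper the real module imports.
    return min(max(value, low), high)

class LatencyTracker(object):
    """Tracks a latency value, broadcasting only on actual change."""
    def __init__(self):
        self._latency = 0
        self.broadcasts = []   # stand-in for factory.broadcast()

    @property
    def latency(self):
        return self._latency

    @latency.setter
    def latency(self, value):
        # Clamp to the packet's unsigned short range.
        value = clamp(value, 0, 65535)
        # Only notify everybody if this is a genuinely new value.
        if self._latency != value:
            self.broadcasts.append(value)
            self._latency = value

t = LatencyTracker()
t.latency = 70000   # clamped down to 65535, broadcast once
t.latency = 65535   # unchanged: no second broadcast
assert t.latency == 65535
assert t.broadcasts == [65535]
```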
class KickedProtocol(BetaServerProtocol):
"""
A very simple Beta protocol that helps enforce IP bans, Max Connections,
and Max Connections Per IP.
This protocol disconnects people as soon as they connect, with a helpful
message.
"""
def __init__(self, reason=None):
BetaServerProtocol.__init__(self)
if reason:
self.reason = reason
else:
self.reason = (
"This server doesn't like you very much."
" I don't like you very much either.")
def connectionMade(self):
self.error("%s" % self.reason)
class BetaProxyProtocol(BetaServerProtocol):
"""
A ``BetaServerProtocol`` that proxies for an InfiniCraft client.
"""
gateway = "server.wiki.vg"
def handshake(self, container):
self.write_packet("handshake", username="-")
def login(self, container):
self.username = container.username
self.write_packet("login", protocol=0, username="", seed=0,
dimension="earth")
url = urlunparse(("http", self.gateway, "/node/0/0/", None, None,
None))
d = getPage(url)
d.addCallback(self.start_proxy)
def start_proxy(self, response):
log.msg("Response: %s" % response)
log.msg("Starting proxy...")
address, port = response.split(":")
self.add_node(address, int(port))
def add_node(self, address, port):
"""
Add a new node to this client.
"""
from twisted.internet.endpoints import TCP4ClientEndpoint
log.msg("Adding node %s:%d" % (address, port))
endpoint = TCP4ClientEndpoint(reactor, address, port, 5)
self.node_factory = InfiniClientFactory()
d = endpoint.connect(self.node_factory)
d.addCallback(self.node_connected)
d.addErrback(self.node_connect_error)
def node_connected(self, protocol):
log.msg("Connected new node!")
def node_connect_error(self, reason):
log.err("Couldn't connect node!")
log.err(reason)
class BravoProtocol(BetaServerProtocol):
"""
A ``BetaServerProtocol`` suitable for serving MC worlds to clients.
This protocol really does need to be hooked up with a ``BravoFactory`` or
something very much like it.
"""
chunk_tasks = None
time_loop = None
eid = 0
last_dig = None
def __init__(self, config, name):
BetaServerProtocol.__init__(self)
self.config = config
self.config_name = "world %s" % name
# Retrieve the MOTD. Only needs to be done once.
self.motd = self.config.getdefault(self.config_name, "motd",
"BravoServer")
def register_hooks(self):
log.msg("Registering client hooks...")
plugin_types = {
"open_hooks": IWindowOpenHook,
"click_hooks": IWindowClickHook,
"close_hooks": IWindowCloseHook,
"pre_build_hooks": IPreBuildHook,
"post_build_hooks": IPostBuildHook,
"pre_dig_hooks": IPreDigHook,
"dig_hooks": IDigHook,
"sign_hooks": ISignHook,
"use_hooks": IUseHook,
}
for t in plugin_types:
setattr(self, t, getattr(self.factory, t))
log.msg("Registering policies...")
if self.factory.mode == "creative":
self.dig_policy = dig_policies["speedy"]
else:
self.dig_policy = dig_policies["notchy"]
log.msg("Registered client plugin hooks!")
def pre_handshake(self):
"""
Set up username and get going.
"""
if self.username in self.factory.protocols:
# This username's already taken; find a new one.
for name in username_alternatives(self.username):
if name not in self.factory.protocols:
self.username = name
break
else:
self.error("Your username is already taken.")
return False
return True
@inlineCallbacks
def authenticated(self):
BetaServerProtocol.authenticated(self)
# Init player, and copy data into it.
self.player = yield self.factory.world.load_player(self.username)
self.player.eid = self.eid
self.location = self.player.location
# Init players' inventory window.
self.inventory = InventoryWindow(self.player.inventory)
# *Now* we are in our factory's list of protocols. Be aware.
self.factory.protocols[self.username] = self
# Announce our presence.
self.factory.chat("%s is joining the game..." % self.username)
packet = make_packet("players", name=self.username, online=True,
ping=0)
self.factory.broadcast(packet)
# Craft our avatar and send it to already-connected other players.
packet = make_packet("create", eid=self.player.eid)
packet += self.player.save_to_packet()
self.factory.broadcast_for_others(packet, self)
# And of course spawn all of those players' avatars in our client as
# well.
for protocol in self.factory.protocols.itervalues():
# Skip over ourselves; otherwise, the client tweaks out and
# usually either dies or locks up.
if protocol is self:
continue
self.write_packet("create", eid=protocol.player.eid)
packet = protocol.player.save_to_packet()
packet += protocol.player.save_equipment_to_packet()
self.transport.write(packet)
# Send spawn and inventory.
spawn = self.factory.world.level.spawn
packet = make_packet("spawn", x=spawn[0], y=spawn[1], z=spawn[2])
packet += self.inventory.save_to_packet()
self.transport.write(packet)
# TODO: Send Abilities (0xca)
# TODO: Update Health (0x08)
# TODO: Update Experience (0x2b)
# Send weather.
self.transport.write(self.factory.vane.make_packet())
self.send_initial_chunk_and_location()
self.time_loop = LoopingCall(self.update_time)
self.time_loop.start(10)
def orientation_changed(self):
# Bang your head!
yaw, pitch = self.location.ori.to_fracs()
packet = make_packet("entity-orientation", eid=self.player.eid,
yaw=yaw, pitch=pitch)
self.factory.broadcast_for_others(packet, self)
def position_changed(self):
# Send chunks.
self.update_chunks()
for entity in self.entities_near(2):
if entity.name != "Item":
continue
left = self.player.inventory.add(entity.item, entity.quantity)
if left != entity.quantity:
if left != 0:
# partial collect
entity.quantity = left
else:
packet = make_packet("collect", eid=entity.eid,
destination=self.player.eid)
packet += make_packet("destroy", count=1, eid=[entity.eid])
self.factory.broadcast(packet)
self.factory.destroy_entity(entity)
packet = self.inventory.save_to_packet()
self.transport.write(packet)
def entities_near(self, radius):
"""
Obtain the entities within a radius of this player.
Radius is measured in blocks.
"""
chunk_radius = int(radius // 16 + 1)
chunkx, chunkz = self.location.pos.to_chunk()
minx = chunkx - chunk_radius
maxx = chunkx + chunk_radius + 1
minz = chunkz - chunk_radius
maxz = chunkz + chunk_radius + 1
for x, z in product(xrange(minx, maxx), xrange(minz, maxz)):
if (x, z) not in self.chunks:
continue
chunk = self.chunks[x, z]
yieldables = [entity for entity in chunk.entities
if self.location.distance(entity.location) <= (radius * 32)]
for i in yieldables:
yield i
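`entities_near()` converts a block radius into a chunk search window (chunks are 16 blocks on a side) and then filters by distance in what appear to be 1/32-block units, hence the `radius * 32` factor. A sketch of the window computation, under that assumption:

```python
def chunk_window(chunkx, chunkz, radius_blocks):
    # One extra chunk of slack guarantees the window covers the radius
    # even when the player stands right at a chunk edge.
    chunk_radius = int(radius_blocks // 16 + 1)
    return (range(chunkx - chunk_radius, chunkx + chunk_radius + 1),
            range(chunkz - chunk_radius, chunkz + chunk_radius + 1))

xs, zs = chunk_window(10, -3, 2)
print(list(xs))  # [9, 10, 11]
```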
def chat(self, container):
# data = json.loads(container.data)
log.msg("Chat! %r" % container.data)
if container.message.startswith("/"):
commands = retrieve_plugins(IChatCommand, factory=self.factory)
# Register aliases.
for plugin in commands.values():
for alias in plugin.aliases:
commands[alias] = plugin
params = container.message[1:].split(" ")
command = params.pop(0).lower()
if command and command in commands:
def cb(iterable):
for line in iterable:
self.send_chat(line)
def eb(error):
self.send_chat("Error: %s" % error.getErrorMessage())
d = maybeDeferred(commands[command].chat_command,
self.username, params)
d.addCallback(cb)
d.addErrback(eb)
else:
self.send_chat("Unknown command: %s" % command)
else:
# Send the message up to the factory to be chatified.
message = "<%s> %s" % (self.username, container.message)
self.factory.chat(message)
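The `chat()` handler above splits command handling from plain chat on a leading slash, lowercasing the command and passing the rest as parameters. A minimal sketch of that parse step:

```python
def parse_command(message):
    # Messages starting with "/" are commands; everything else is chat.
    if not message.startswith("/"):
        return None
    params = message[1:].split(" ")
    command = params.pop(0).lower()
    return command, params

print(parse_command("/Give alice 64"))  # ('give', ['alice', '64'])
```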
def use(self, container):
"""
For each entity in proximity (4 blocks), check if it is the target
of this packet and call all hooks that registered interest in this
type.
"""
nearby_players = self.factory.players_near(self.player, 4)
for entity in chain(self.entities_near(4), nearby_players):
if entity.eid == container.target:
for hook in self.use_hooks[entity.name]:
hook.use_hook(self.factory, self.player, entity,
container.button == 0)
break
@inlineCallbacks
def digging(self, container):
if container.x == -1 and container.z == -1 and container.y == 255:
# Lala-land dig packet. Discard it for now.
return
# Player drops currently holding item/block.
if (container.state == "dropped" and container.face == "-y" and
container.x == 0 and container.y == 0 and container.z == 0):
i = self.player.inventory
holding = i.holdables[self.player.equipped]
if holding:
primary, secondary, count = holding
if i.consume((primary, secondary), self.player.equipped):
dest = self.location.in_front_of(2)
coords = dest.pos._replace(y=dest.pos.y + 1)
self.factory.give(coords, (primary, secondary), 1)
# Re-send inventory.
packet = self.inventory.save_to_packet()
self.transport.write(packet)
# If no items in this slot are left, this player isn't
# holding an item anymore.
if i.holdables[self.player.equipped] is None:
packet = make_packet("entity-equipment",
eid=self.player.eid,
slot=0,
primary=65535,
count=1,
secondary=0
)
self.factory.broadcast_for_others(packet, self)
return
if container.state == "shooting":
self.shoot_arrow()
return
bigx, smallx, bigz, smallz = split_coords(container.x, container.z)
coords = smallx, container.y, smallz
try:
chunk = self.chunks[bigx, bigz]
except KeyError:
self.error("Couldn't dig in chunk (%d, %d)!" % (bigx, bigz))
return
block = chunk.get_block((smallx, container.y, smallz))
if container.state == "started":
# Run pre dig hooks
for hook in self.pre_dig_hooks:
cancel = yield maybeDeferred(hook.pre_dig_hook, self.player,
(container.x, container.y, container.z), block)
if cancel:
return
tool = self.player.inventory.holdables[self.player.equipped]
# Check to see whether we should break this block.
if self.dig_policy.is_1ko(block, tool):
self.run_dig_hooks(chunk, coords, blocks[block])
else:
# Set up a timer for breaking the block later.
dtime = time() + self.dig_policy.dig_time(block, tool)
self.last_dig = coords, block, dtime
elif container.state == "stopped":
# The client thinks it has broken a block. We shall see.
if not self.last_dig:
return
oldcoords, oldblock, dtime = self.last_dig
if oldcoords != coords or oldblock != block:
# Nope!
self.last_dig = None
return
dtime -= time()
# When enough time has elapsed, run the dig hooks.
d = deferLater(reactor, max(dtime, 0), self.run_dig_hooks, chunk,
coords, blocks[block])
d.addCallback(lambda none: setattr(self, "last_dig", None))
def run_dig_hooks(self, chunk, coords, block):
"""
Destroy a block and run the post-destroy dig hooks.
"""
x, y, z = coords
if block.breakable:
chunk.destroy(coords)
l = []
for hook in self.dig_hooks:
l.append(maybeDeferred(hook.dig_hook, chunk, x, y, z, block))
dl = DeferredList(l)
dl.addCallback(lambda none: self.factory.flush_chunk(chunk))
@inlineCallbacks
def build(self, container):
"""
Handle a build packet.
Several things must happen. First, the packet's contents need to be
examined to ensure that the packet is valid. A check is done to see if
the packet is opening a windowed object. If not, then a build is
run.
"""
# Is the target within our purview? We don't do a very strict
# containment check, but we *do* require that the chunk be loaded.
bigx, smallx, bigz, smallz = split_coords(container.x, container.z)
try:
chunk = self.chunks[bigx, bigz]
except KeyError:
self.error("Couldn't select in chunk (%d, %d)!" % (bigx, bigz))
return
target = blocks[chunk.get_block((smallx, container.y, smallz))]
# Attempt to open a window.
from bravo.policy.windows import window_for_block
window = window_for_block(target)
if window is not None:
# We have a window!
self.windows[self.wid] = window
identifier, title, slots = window.open()
self.write_packet("window-open", wid=self.wid, type=identifier,
title=title, slots=slots)
self.wid += 1
return
# Try to open it first
for hook in self.open_hooks:
window = yield maybeDeferred(hook.open_hook, self, container,
chunk.get_block((smallx, container.y, smallz)))
if window:
self.write_packet("window-open", wid=window.wid,
type=window.identifier, title=window.title,
slots=window.slots_num)
packet = window.save_to_packet()
self.transport.write(packet)
# window opened
return
# Ignore clients that think -1 is placeable.
if container.primary == -1:
return
# Special case when face is "noop": Update the status of the currently
# held block rather than placing a new block.
if container.face == "noop":
return
# If the target block is vanishable, then adjust our aim accordingly.
if target.vanishes:
container.face = "+y"
container.y -= 1
if container.primary in blocks:
block = blocks[container.primary]
elif container.primary in items:
block = items[container.primary]
else:
- log.err("Ignoring request to place unknown block %d" %
- container.primary)
+ log.err("Ignoring request to place unknown block 0x%x" %
+ container.primary)
return
# Run pre-build hooks. These hooks are able to interrupt the build
# process.
builddata = BuildData(block, 0x0, container.x, container.y,
container.z, container.face)
for hook in self.pre_build_hooks:
cont, builddata, cancel = yield maybeDeferred(hook.pre_build_hook,
self.player, builddata)
if cancel:
# Flush damaged chunks.
for chunk in self.chunks.itervalues():
self.factory.flush_chunk(chunk)
return
if not cont:
break
# Run the build.
try:
yield maybeDeferred(self.run_build, builddata)
except BuildError:
return
newblock = builddata.block.slot
coords = adjust_coords_for_face(
(builddata.x, builddata.y, builddata.z), builddata.face)
# Run post-build hooks. These are merely callbacks which cannot
# interfere with the build process, largely because the build process
# already happened.
for hook in self.post_build_hooks:
yield maybeDeferred(hook.post_build_hook, self.player, coords,
builddata.block)
# Feed automatons.
for automaton in self.factory.automatons:
if newblock in automaton.blocks:
automaton.feed(coords)
# Re-send inventory.
# XXX this could be optimized if/when inventories track damage.
packet = self.inventory.save_to_packet()
self.transport.write(packet)
# Flush damaged chunks.
for chunk in self.chunks.itervalues():
self.factory.flush_chunk(chunk)
def run_build(self, builddata):
block, metadata, x, y, z, face = builddata
# Don't place items as blocks.
if block.slot not in blocks:
raise BuildError("Couldn't build item %r as block" % block)
# Check for orientable blocks.
if not metadata and block.orientable():
metadata = block.orientation(face)
if metadata is None:
# Oh, I guess we can't even place the block on this face.
raise BuildError("Couldn't orient block %r on face %s" %
(block, face))
# Make sure we can remove it from the inventory first.
if not self.player.inventory.consume((block.slot, 0),
self.player.equipped):
# Okay, first one was a bust; maybe we can consume the related
# block for dropping instead?
if not self.player.inventory.consume(block.drop,
self.player.equipped):
raise BuildError("Couldn't consume %r from inventory" % block)
# Offset coords according to face.
x, y, z = adjust_coords_for_face((x, y, z), face)
# Set the block and data.
dl = [self.factory.world.set_block((x, y, z), block.slot)]
if metadata:
dl.append(self.factory.world.set_metadata((x, y, z), metadata))
return DeferredList(dl)
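`run_build()` offsets the clicked coordinates by one block along the axis named by the face string. The real `adjust_coords_for_face` lives in Bravo's utilities; this mapping is an assumption about its behavior:

```python
FACE_OFFSETS = {
    "-x": (-1, 0, 0), "+x": (1, 0, 0),
    "-y": (0, -1, 0), "+y": (0, 1, 0),
    "-z": (0, 0, -1), "+z": (0, 0, 1),
}

def adjust_coords_for_face(coords, face):
    # A new block goes into the cell adjacent to the clicked face.
    dx, dy, dz = FACE_OFFSETS[face]
    x, y, z = coords
    return x + dx, y + dy, z + dz

print(adjust_coords_for_face((4, 64, 9), "+y"))  # (4, 65, 9)
```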
def equip(self, container):
self.player.equipped = container.slot
# Inform everyone about the item the player is holding now.
item = self.player.inventory.holdables[self.player.equipped]
if item is None:
# Empty slot. Use signed short -1.
primary, secondary = -1, 0
else:
primary, secondary, count = item
packet = make_packet("entity-equipment",
eid=self.player.eid,
slot=0,
primary=primary,
count=1,
secondary=secondary
)
self.factory.broadcast_for_others(packet, self)
def pickup(self, container):
self.factory.give((container.x, container.y, container.z),
(container.primary, container.secondary), container.count)
def animate(self, container):
# Broadcast the animation of the entity to everyone else. Only the
# swing-arm animation is sent by Notchian clients.
packet = make_packet("animate",
eid=self.player.eid,
animation=container.animation
)
self.factory.broadcast_for_others(packet, self)
def wclose(self, container):
wid = container.wid
if wid == 0:
# WID 0 is reserved for the client inventory.
pass
elif wid in self.windows:
w = self.windows.pop(wid)
w.close()
else:
self.error("WID %d doesn't exist." % wid)
def waction(self, container):
wid = container.wid
if wid in self.windows:
w = self.windows[wid]
result = w.action(container.slot, container.button,
container.token, container.shift,
container.primary)
self.write_packet("window-token", wid=wid, token=container.token,
acknowledged=result)
else:
self.error("WID %d doesn't exist." % wid)
def wcreative(self, container):
"""
A slot was altered in creative mode.
"""
# XXX Sometimes the container doesn't contain all of this information.
# What then?
applied = self.inventory.creative(container.slot, container.primary,
container.secondary, container.count)
if applied:
# Inform other players about changes to this player's equipment.
equipped_slot = self.player.equipped + 36
if container.slot == equipped_slot:
packet = make_packet("entity-equipment",
eid=self.player.eid,
# XXX why 0? why not the actual slot?
slot=0,
primary=container.primary,
count=1,
secondary=container.secondary,
)
self.factory.broadcast_for_others(packet, self)
def shoot_arrow(self):
# TODO 1. Create arrow entity: arrow = Arrow(self.factory, self.player)
# 2. Register within the factory: self.factory.register_entity(arrow)
# 3. Run it: arrow.run()
pass
def sign(self, container):
bigx, smallx, bigz, smallz = split_coords(container.x, container.z)
try:
chunk = self.chunks[bigx, bigz]
except KeyError:
self.error("Couldn't handle sign in chunk (%d, %d)!" % (bigx, bigz))
return
if (smallx, container.y, smallz) in chunk.tiles:
new = False
s = chunk.tiles[smallx, container.y, smallz]
else:
new = True
s = Sign(smallx, container.y, smallz)
chunk.tiles[smallx, container.y, smallz] = s
s.text1 = container.line1
s.text2 = container.line2
s.text3 = container.line3
s.text4 = container.line4
chunk.dirty = True
# The best part of a sign isn't making one, it's showing everybody
# else on the server that you did.
packet = make_packet("sign", container)
self.factory.broadcast_for_chunk(packet, bigx, bigz)
# Run sign hooks.
for hook in self.sign_hooks:
hook.sign_hook(self.factory, chunk, container.x, container.y,
container.z, [s.text1, s.text2, s.text3, s.text4], new)
def complete(self, container):
"""
Attempt to tab-complete user names.
"""
needle = container.autocomplete
usernames = self.factory.protocols.keys()
results = complete(needle, usernames)
self.write_packet("tab", autocomplete=results)
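`complete()` above matches the needle against connected usernames. The helper of the same name is imported from elsewhere in Bravo; a plausible prefix-matching sketch (the case-insensitive behavior is an assumption, not confirmed by this file):

```python
def complete(needle, usernames):
    # Case-insensitive prefix match over the connected usernames.
    needle = needle.lower()
    return [name for name in usernames if name.lower().startswith(needle)]

print(complete("al", ["alice", "Alphonse", "bob"]))  # ['alice', 'Alphonse']
```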
def settings_packet(self, container):
"""
Acknowledge a change of settings and update chunk distance.
"""
super(BravoProtocol, self).settings_packet(container)
self.update_chunks()
def disable_chunk(self, x, z):
key = x, z
log.msg("Disabling chunk %d, %d" % key)
if key not in self.chunks:
log.msg("...But the chunk wasn't loaded!")
return
# Remove the chunk from cache.
chunk = self.chunks.pop(key)
eids = [e.eid for e in chunk.entities]
self.write_packet("destroy", count=len(eids), eid=eids)
# Clear chunk data on the client.
self.write_packet("chunk", x=x, z=z, continuous=False, primary=0x0,
add=0x0, data="")
def enable_chunk(self, x, z):
"""
Request a chunk.
This function will asynchronously obtain the chunk, and send it on the
wire.
:returns: `Deferred` that will be fired when the chunk is obtained,
with no arguments
"""
log.msg("Enabling chunk %d, %d" % (x, z))
if (x, z) in self.chunks:
log.msg("...But the chunk was already loaded!")
return succeed(None)
d = self.factory.world.request_chunk(x, z)
@d.addCallback
def cb(chunk):
self.chunks[x, z] = chunk
return chunk
d.addCallback(self.send_chunk)
return d
def send_chunk(self, chunk):
log.msg("Sending chunk %d, %d" % (chunk.x, chunk.z))
packet = chunk.save_to_packet()
self.transport.write(packet)
for entity in chunk.entities:
packet = entity.save_to_packet()
self.transport.write(packet)
for entity in chunk.tiles.itervalues():
if entity.name == "Sign":
packet = entity.save_to_packet()
self.transport.write(packet)
def send_initial_chunk_and_location(self):
"""
Send the initial chunks and location.
This method sends more than one chunk; since Beta 1.2, it must send
nearly fifty chunks before the location can be safely sent.
"""
# Disable located hooks. We'll re-enable them at the end.
self.state = STATE_AUTHENTICATED
log.msg("Initial, position %d, %d, %d" % self.location.pos)
x, y, z = self.location.pos.to_block()
bigx, smallx, bigz, smallz = split_coords(x, z)
# Send the chunk that the player will stand on. The other chunks are
# not so important. There *used* to be a bug, circa Beta 1.2, that
# required lots of surrounding geometry to be present, but that's been
# fixed.
d = self.enable_chunk(bigx, bigz)
# What to do if we can't load a given chunk? Just kick 'em.
d.addErrback(lambda fail: self.error("Couldn't load a chunk... :c"))
# Don't dare send more chunks beyond the initial one until we've
# spawned. Once we've spawned, set our status to LOCATED and then
# update_location() will work.
@d.addCallback
def located(none):
self.state = STATE_LOCATED
# Ensure that we're above-ground.
self.ascend(0)
d.addCallback(lambda none: self.update_location())
d.addCallback(lambda none: self.position_changed())
# Send the MOTD.
if self.motd:
@d.addCallback
def motd(none):
self.send_chat(self.motd.replace("<tagline>", get_motd()))
# Finally, start the secondary chunk loop.
d.addCallback(lambda none: self.update_chunks())
def update_chunks(self):
# Don't send chunks unless we're located.
if self.state != STATE_LOCATED:
return
x, z = self.location.pos.to_chunk()
# These numbers come from a couple spots, including minecraftwiki, but
# I verified them experimentally using torches and pillars to mark
# distances on each setting. ~ C.
distances = {
"tiny": 2,
"short": 4,
"far": 16,
}
radius = distances.get(self.settings.distance, 8)
new = set(circling(x, z, radius))
old = set(self.chunks.iterkeys())
added = new - old
discarded = old - new
# Perhaps some explanation is in order.
# The cooperate() function iterates over the iterable it is fed,
# without tying up the reactor, by yielding after each iteration. The
# inner part of the generator expression generates all of the chunks
# around the currently needed chunk, and it sorts them by distance to
# the current chunk. The end result is that we load chunks one-by-one,
# nearest to furthest, without stalling other clients.
if self.chunk_tasks:
for task in self.chunk_tasks:
try:
task.stop()
except (TaskDone, TaskFailed):
pass
to_enable = sorted_by_distance(added, x, z)
self.chunk_tasks = [
cooperate(self.enable_chunk(i, j) for i, j in to_enable),
cooperate(self.disable_chunk(i, j) for i, j in discarded),
]
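The heart of `update_chunks()` is two set differences: chunks that just entered the view radius must be streamed in, and chunks that left it are unloaded on the client. A small sketch of that bookkeeping:

```python
def chunk_diff(visible_now, cached):
    # Chunks entering the radius must be sent; chunks leaving it are
    # cleared on the client.
    added = visible_now - cached
    discarded = cached - visible_now
    return added, discarded

added, discarded = chunk_diff({(0, 0), (0, 1), (1, 0)}, {(0, 0), (5, 5)})
print(sorted(added), sorted(discarded))
```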
def update_time(self):
time = int(self.factory.time)
self.write_packet("time", timestamp=time, time=time % 24000)
def connectionLost(self, reason=connectionDone):
"""
Cleanup after a lost connection.
Most of the time, these connections are lost cleanly; we don't have
any cleanup to do in the unclean case since clients don't have any
kind of pending state which must be recovered.
Remember, the connection can be lost before identification and
authentication, so ``self.username`` and ``self.player`` can be None.
"""
if self.username and self.player:
self.factory.world.save_player(self.username, self.player)
if self.player:
self.factory.destroy_entity(self.player)
packet = make_packet("destroy", count=1, eid=[self.player.eid])
self.factory.broadcast(packet)
if self.username:
packet = make_packet("players", name=self.username, online=False,
ping=0)
self.factory.broadcast(packet)
self.factory.chat("%s has left the game." % self.username)
self.factory.teardown_protocol(self)
# We are now torn down. After this point, there will be no more
# factory stuff, just our own personal stuff.
del self.factory
if self.time_loop:
self.time_loop.stop()
if self.chunk_tasks:
for task in self.chunk_tasks:
try:
task.stop()
except (TaskDone, TaskFailed):
pass
|
bravoserver/bravo
|
89ef296eed2f302202084c942e185fa1ad7cadfd
|
beta: Use JSON for text messages.
|
diff --git a/bravo/beta/factory.py b/bravo/beta/factory.py
index 2ac1fe0..c0b1c85 100644
--- a/bravo/beta/factory.py
+++ b/bravo/beta/factory.py
@@ -1,565 +1,568 @@
from collections import defaultdict
from itertools import product
+import json
from twisted.internet import reactor
from twisted.internet.interfaces import IPushProducer
from twisted.internet.protocol import Factory
from twisted.internet.task import LoopingCall
from twisted.python import log
from zope.interface import implements
from bravo.beta.packets import make_packet
from bravo.beta.protocol import BravoProtocol, KickedProtocol
from bravo.entity import entities
from bravo.ibravo import (ISortedPlugin, IAutomaton, ITerrainGenerator,
IUseHook, ISignHook, IPreDigHook, IDigHook,
IPreBuildHook, IPostBuildHook, IWindowOpenHook,
IWindowClickHook, IWindowCloseHook)
from bravo.location import Location
from bravo.plugin import retrieve_named_plugins, retrieve_sorted_plugins
from bravo.policy.packs import packs as available_packs
from bravo.policy.seasons import Spring, Winter
from bravo.utilities.chat import chat_name, sanitize_chat
from bravo.weather import WeatherVane
from bravo.world import World
(STATE_UNAUTHENTICATED, STATE_CHALLENGED, STATE_AUTHENTICATED,
STATE_LOCATED) = range(4)
circle = [(i, j)
for i, j in product(xrange(-5, 5), xrange(-5, 5))
if i**2 + j**2 <= 25
]
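The `circle` comprehension above enumerates the lattice points of a disc of chunk offsets with radius 5. The same expression in Python 3, where the half-open `range(-5, 5)` trims the `i = 5` and `j = 5` edges of the disc:

```python
from itertools import product

# Lattice points (i, j) with i^2 + j^2 <= 25, i and j in [-5, 5).
circle = [(i, j)
          for i, j in product(range(-5, 5), range(-5, 5))
          if i ** 2 + j ** 2 <= 25]
print(len(circle))  # 79 chunk offsets in the disc
```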
class BravoFactory(Factory):
"""
A ``Factory`` that creates ``BravoProtocol`` objects when connected to.
"""
implements(IPushProducer)
protocol = BravoProtocol
timestamp = None
time = 0
day = 0
eid = 1
interfaces = []
def __init__(self, config, name):
"""
Create a factory and world.
``name`` is the string used to look up factory-specific settings from
the configuration.
:param str name: internal name of this factory
"""
self.name = name
self.config = config
self.config_name = "world %s" % name
self.world = World(self.config, self.name)
self.world.factory = self
self.protocols = dict()
self.connectedIPs = defaultdict(int)
self.mode = self.config.get(self.config_name, "mode")
if self.mode not in ("creative", "survival"):
raise Exception("Unsupported mode %s" % self.mode)
self.limitConnections = self.config.getintdefault(self.config_name,
"limitConnections",
0)
self.limitPerIP = self.config.getintdefault(self.config_name,
"limitPerIP", 0)
self.vane = WeatherVane(self)
def startFactory(self):
log.msg("Initializing factory for world '%s'..." % self.name)
# Get our plugins set up.
self.register_plugins()
log.msg("Starting world...")
self.world.start()
log.msg("Starting timekeeping...")
self.timestamp = reactor.seconds()
self.time = self.world.level.time
self.update_season()
self.time_loop = LoopingCall(self.update_time)
self.time_loop.start(2)
log.msg("Starting entity updates...")
# Start automatons.
for automaton in self.automatons:
automaton.start()
self.chat_consumers = set()
log.msg("Factory successfully initialized for world '%s'!" % self.name)
def stopFactory(self):
"""
Called before factory stops listening on ports. Used to perform
shutdown tasks.
"""
log.msg("Shutting down world...")
# Stop automatons. Technically, they may not actually halt until their
# next iteration, but that is close enough for us, probably.
# Automatons are contracted to not access the world after stop() is
# called.
for automaton in self.automatons:
automaton.stop()
# Evict plugins as soon as possible. Can't be done before stopping
# automatons.
self.unregister_plugins()
self.time_loop.stop()
# Write back current world time. This must be done before stopping the
# world.
self.world.time = self.time
# And now stop the world.
self.world.stop()
log.msg("World data saved!")
def buildProtocol(self, addr):
"""
Create a protocol.
This overridden method provides early player entity registration, as a
solution to the username/entity race that occurs on login.
"""
banned = self.world.serializer.load_plugin_data("banned_ips")
# Do IP bans first.
for ip in banned.split():
if addr.host == ip:
# Use KickedProtocol with extreme prejudice.
log.msg("Kicking banned IP %s" % addr.host)
p = KickedProtocol("Sorry, but your IP address is banned.")
p.factory = self
return p
# We are ignoring values less than 1, but making sure not to go over
# the connection limit.
if (self.limitConnections
and len(self.protocols) >= self.limitConnections):
log.msg("Reached maximum players, turning %s away." % addr.host)
p = KickedProtocol("The player limit has already been reached."
" Please try again later.")
p.factory = self
return p
# Do our connection-per-IP check.
if (self.limitPerIP and
self.connectedIPs[addr.host] >= self.limitPerIP):
log.msg("At maximum connections for %s already, dropping." % addr.host)
p = KickedProtocol("There are too many players connected from this IP.")
p.factory = self
return p
else:
self.connectedIPs[addr.host] += 1
# If the player wasn't kicked, let's continue!
log.msg("Starting connection for %s" % addr)
p = self.protocol(self.config, self.name)
p.host = addr.host
p.factory = self
self.register_entity(p)
# Copy our hooks to the protocol.
p.register_hooks()
return p
def teardown_protocol(self, protocol):
"""
Do internal bookkeeping on behalf of a protocol which has been
disconnected.
Did you know that "bookkeeping" is one of the few words in English
which has three pairs of double letters in a row?
"""
username = protocol.username
host = protocol.host
if username in self.protocols:
del self.protocols[username]
self.connectedIPs[host] -= 1
def set_username(self, protocol, username):
"""
Attempt to set a new username for a protocol.
:returns: whether the username was changed
"""
# If the username's already taken, refuse it.
if username in self.protocols:
return False
if protocol.username in self.protocols:
# This protocol's known under another name, so remove it.
del self.protocols[protocol.username]
# Set the username.
self.protocols[username] = protocol
protocol.username = username
return True
def register_plugins(self):
"""
Setup plugin hooks.
"""
log.msg("Registering client plugin hooks...")
plugin_types = {
"automatons": IAutomaton,
"generators": ITerrainGenerator,
"open_hooks": IWindowOpenHook,
"click_hooks": IWindowClickHook,
"close_hooks": IWindowCloseHook,
"pre_build_hooks": IPreBuildHook,
"post_build_hooks": IPostBuildHook,
"pre_dig_hooks": IPreDigHook,
"dig_hooks": IDigHook,
"sign_hooks": ISignHook,
"use_hooks": IUseHook,
}
packs = self.config.getlistdefault(self.config_name, "packs", [])
try:
packs = [available_packs[pack] for pack in packs]
except KeyError, e:
raise Exception("Couldn't find plugin pack %s" % e.args)
for t, interface in plugin_types.iteritems():
l = self.config.getlistdefault(self.config_name, t, [])
# Grab extra plugins from the pack. Order doesn't really matter
# since the plugin loader sorts things anyway.
for pack in packs:
if t in pack:
l += pack[t]
# Hax. :T
if t == "generators":
plugins = retrieve_sorted_plugins(interface, l)
elif issubclass(interface, ISortedPlugin):
plugins = retrieve_sorted_plugins(interface, l, factory=self)
else:
plugins = retrieve_named_plugins(interface, l, factory=self)
log.msg("Using %s: %s" % (t.replace("_", " "),
", ".join(plugin.name for plugin in plugins)))
setattr(self, t, plugins)
# Deal with seasons.
seasons = self.config.getlistdefault(self.config_name, "seasons", [])
for pack in packs:
if "seasons" in pack:
seasons += pack["seasons"]
self.seasons = []
if "spring" in seasons:
self.seasons.append(Spring())
if "winter" in seasons:
self.seasons.append(Winter())
# Assign generators to the world pipeline.
self.world.pipeline = self.generators
# Use hooks have special funkiness.
uh = self.use_hooks
self.use_hooks = defaultdict(list)
for plugin in uh:
for target in plugin.targets:
self.use_hooks[target].append(plugin)
def unregister_plugins(self):
log.msg("Unregistering client plugin hooks...")
for name in [
"automatons",
"generators",
"open_hooks",
"click_hooks",
"close_hooks",
"pre_build_hooks",
"post_build_hooks",
"pre_dig_hooks",
"dig_hooks",
"sign_hooks",
"use_hooks",
]:
delattr(self, name)
def create_entity(self, x, y, z, name, **kwargs):
"""
Spawn an entirely new entity at the specified block coordinates.
Handles entity registration as well as instantiation.
"""
bigx = x // 16
bigz = z // 16
location = Location.at_block(x, y, z)
entity = entities[name](eid=0, location=location, **kwargs)
self.register_entity(entity)
d = self.world.request_chunk(bigx, bigz)
@d.addCallback
def cb(chunk):
chunk.entities.add(entity)
log.msg("Created entity %s" % entity)
# XXX Maybe just send the entity object to the manager instead of
# the following?
if hasattr(entity, 'loop'):
self.world.mob_manager.start_mob(entity)
return entity
def register_entity(self, entity):
"""
Registers an entity with this factory.
Registration is perhaps too fancy of a name; this method merely makes
sure that the entity has a unique and usable entity ID. In particular,
this method does *not* make the entity attached to the world, or
advertise its existence.
"""
if not entity.eid:
self.eid += 1
entity.eid = self.eid
log.msg("Registered entity %s" % entity)
def destroy_entity(self, entity):
"""
Destroy an entity.
The factory doesn't have to know about entities, but it is a good
place to put this logic.
"""
bigx, bigz = entity.location.pos.to_chunk()
d = self.world.request_chunk(bigx, bigz)
@d.addCallback
def cb(chunk):
chunk.entities.discard(entity)
chunk.dirty = True
log.msg("Destroyed entity %s" % entity)
def update_time(self):
"""
Update the in-game timer.
The timer goes from 0 to 24000, both of which are high noon. The clock
increments by 20 every second. Days are 20 minutes long.
The day clock is incremented every in-game day, which is every 20
        minutes. The day clock goes from 0 to 360, which works out to a reset
        once every 360 in-game days, or every 5 real-time days. This is a
        Babylonian in-game year.
"""
t = reactor.seconds()
self.time += 20 * (t - self.timestamp)
self.timestamp = t
days, self.time = divmod(self.time, 24000)
if days:
self.day += days
self.day %= 360
self.update_season()
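The divmod arithmetic above can be sketched as a standalone function (assuming the same 20-ticks-per-second, 24000-tick-day, and 360-day constants; ``advance_clock`` is a hypothetical name, not part of the factory):

```python
def advance_clock(time, day, elapsed_seconds):
    """Advance the in-game clock by wall-clock seconds, as update_time() does."""
    time += 20 * elapsed_seconds          # 20 ticks per real second
    days, time = divmod(time, 24000)      # 24000 ticks per in-game day
    day = (day + days) % 360              # 360-day Babylonian year
    return time, day

# One 20-minute real-time span is exactly one in-game day:
print(advance_clock(0, 0, 1200))  # (0, 1)
```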
def broadcast_time(self):
packet = make_packet("time", timestamp=int(self.time))
self.broadcast(packet)
def update_season(self):
"""
Update the world's season.
"""
all_seasons = sorted(self.seasons, key=lambda s: s.day)
        # Get all the seasons whose start date we have passed this year.
# We are looking for the season which is closest to our current day,
# without going over; I call this the Price-is-Right style of season
# handling. :3
past_seasons = [s for s in all_seasons if s.day <= self.day]
if past_seasons:
# The most recent one is the one we are in
self.world.season = past_seasons[-1]
elif all_seasons:
            # We haven't passed any seasons yet this year, so grab the last one
# from 'last year'
self.world.season = all_seasons[-1]
else:
# No seasons enabled.
self.world.season = None
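The selection rule above can be exercised in isolation. This sketch models seasons as ``(name, start_day)`` pairs, which is an assumption for illustration rather than the real season plugin interface:

```python
def pick_season(seasons, day):
    """Closest start day without going over, else the last season of 'last year'."""
    ordered = sorted(seasons, key=lambda s: s[1])
    past = [s for s in ordered if s[1] <= day]
    if past:
        return past[-1]
    return ordered[-1] if ordered else None

seasons = [("winter", 0), ("spring", 90), ("summer", 180), ("autumn", 270)]
print(pick_season(seasons, 200))  # ('summer', 180)
```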
def chat(self, message):
"""
Relay chat messages.
Chat messages are sent to all connected clients, as well as to anybody
consuming this factory.
"""
for consumer in self.chat_consumers:
consumer.write((self, message))
# Prepare the message for chat packeting.
for user in self.protocols:
message = message.replace(user, chat_name(user))
message = sanitize_chat(message)
log.msg("Chat: %s" % message.encode("utf8"))
- packet = make_packet("chat", message=message)
+ data = json.dumps({"text": message})
+
+ packet = make_packet("chat", data=data)
self.broadcast(packet)
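The chat packet now carries a JSON object instead of a bare string; a minimal stdlib-only sketch of the payload built above (``chat_payload`` is a hypothetical helper name):

```python
import json

def chat_payload(message):
    """Wrap a chat message the way the factory does before packeting."""
    return json.dumps({"text": message})

print(chat_payload("hello"))  # {"text": "hello"}
```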
def broadcast(self, packet):
"""
Broadcast a packet to all connected players.
"""
for player in self.protocols.itervalues():
player.transport.write(packet)
def broadcast_for_others(self, packet, protocol):
"""
Broadcast a packet to all players except the originating player.
Useful for certain packets like player entity spawns which should
never be reflexive.
"""
for player in self.protocols.itervalues():
if player is not protocol:
player.transport.write(packet)
def broadcast_for_chunk(self, packet, x, z):
"""
Broadcast a packet to all players that have a certain chunk loaded.
`x` and `z` are chunk coordinates, not block coordinates.
"""
for player in self.protocols.itervalues():
if (x, z) in player.chunks:
player.transport.write(packet)
def scan_chunk(self, chunk):
"""
Tell automatons about this chunk.
"""
# It's possible for there to be no automatons; this usually means that
# the factory is shutting down. We should be permissive and handle
# this case correctly.
if hasattr(self, "automatons"):
for automaton in self.automatons:
automaton.scan(chunk)
def flush_chunk(self, chunk):
"""
Flush a damaged chunk to all players that have it loaded.
"""
if chunk.is_damaged():
packet = chunk.get_damage_packet()
for player in self.protocols.itervalues():
if (chunk.x, chunk.z) in player.chunks:
player.transport.write(packet)
chunk.clear_damage()
def flush_all_chunks(self):
"""
Flush any damage anywhere in this world to all players.
This is a sledgehammer which should be used sparingly at best, and is
only well-suited to plugins which touch multiple chunks at once.
In other words, if I catch you using this in your plugin needlessly,
I'm gonna have a chat with you.
"""
for chunk in self.world._cache.iterdirty():
self.flush_chunk(chunk)
def give(self, coords, block, quantity):
"""
Spawn a pickup at the specified coordinates.
The coordinates need to be in pixels, not blocks.
If the size of the stack is too big, multiple stacks will be dropped.
:param tuple coords: coordinates, in pixels
:param tuple block: key of block or item to drop
:param int quantity: number of blocks to drop in the stack
"""
x, y, z = coords
while quantity > 0:
entity = self.create_entity(x // 32, y // 32, z // 32, "Item",
item=block, quantity=min(quantity, 64))
packet = entity.save_to_packet()
packet += make_packet("create", eid=entity.eid)
self.broadcast(packet)
quantity -= 64
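The loop above splits any quantity into legal stacks; isolated, the arithmetic looks like this (``split_stacks`` is a hypothetical helper, and 64 mirrors the hardcoded stack limit above):

```python
def split_stacks(quantity, stack_max=64):
    """Break a pickup quantity into stacks of at most stack_max items."""
    stacks = []
    while quantity > 0:
        stacks.append(min(quantity, stack_max))
        quantity -= stack_max
    return stacks

print(split_stacks(130))  # [64, 64, 2]
```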
def players_near(self, player, radius):
"""
Obtain other players within a radius of a given player.
Radius is measured in blocks.
"""
radius *= 32
for p in self.protocols.itervalues():
if p.player == player:
continue
distance = player.location.distance(p.location)
if distance <= radius:
yield p.player
def pauseProducing(self):
pass
def resumeProducing(self):
pass
def stopProducing(self):
pass
diff --git a/bravo/beta/packets.py b/bravo/beta/packets.py
index 86e4b32..2f39f4d 100644
--- a/bravo/beta/packets.py
+++ b/bravo/beta/packets.py
@@ -1,844 +1,844 @@
from collections import namedtuple
from construct import Struct, Container, Embed, Enum, MetaField
from construct import MetaArray, If, Switch, Const, Peek, Magic
from construct import OptionalGreedyRange, RepeatUntil
from construct import Flag, PascalString, Adapter
from construct import UBInt8, UBInt16, UBInt32, UBInt64
from construct import SBInt8, SBInt16, SBInt32
from construct import BFloat32, BFloat64
from construct import BitStruct, BitField
from construct import StringAdapter, LengthValueAdapter, Sequence
from construct import ConstructError
class IPacket(object):
"""
Interface for packets.
"""
def parse(buf, offset):
"""
Parse a packet out of the given buffer, starting at the given offset.
If the parse is successful, returns a tuple of the parsed packet and
the next packet offset in the buffer.
If the parse fails due to insufficient data, returns a tuple of None
and the amount of data required before the parse can be retried.
Exceptions may be raised if the parser finds invalid data.
"""
def simple(name, fmt, *args):
"""
Make a customized namedtuple representing a simple, primitive packet.
"""
from struct import Struct
s = Struct(fmt)
@classmethod
def parse(cls, buf, offset):
if len(buf) >= s.size + offset:
unpacked = s.unpack_from(buf, offset)
return cls(*unpacked), s.size + offset
else:
return None, s.size - len(buf)
def build(self):
return s.pack(*self)
methods = {
"parse": parse,
"build": build,
}
return type(name, (namedtuple(name, *args),), methods)
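A usage sketch of the factory above, rewritten self-contained with only the stdlib; ``Ping`` and its ``pid`` field are illustrative names, not packets defined by this module:

```python
import struct
from collections import namedtuple

def simple(name, fmt, fields):
    """Make a namedtuple-based class for a simple, fixed-width packet."""
    s = struct.Struct(fmt)

    @classmethod
    def parse(cls, buf, offset):
        # Enough data: return the packet and the next offset.
        if len(buf) >= s.size + offset:
            return cls(*s.unpack_from(buf, offset)), s.size + offset
        # Not enough data: return how many more bytes are needed.
        return None, s.size - len(buf)

    def build(self):
        return s.pack(*self)

    return type(name, (namedtuple(name, fields),), {"parse": parse, "build": build})

Ping = simple("Ping", ">I", "pid")
buf = Ping(42).build()
print(Ping.parse(buf, 0))  # (Ping(pid=42), 4)
```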
DUMP_ALL_PACKETS = False
# Strings.
# This one is a UCS2 string, which effectively decodes single writeChar()
# invocations. We need to import the encoding for it first, though.
from bravo.encodings import ucs2
from codecs import register
register(ucs2)
class DoubleAdapter(LengthValueAdapter):
def _encode(self, obj, context):
return len(obj) / 2, obj
def AlphaString(name):
return StringAdapter(
DoubleAdapter(
Sequence(name,
UBInt16("length"),
MetaField("data", lambda ctx: ctx["length"] * 2),
)
),
encoding="ucs2",
)
# Boolean converter.
def Bool(*args, **kwargs):
return Flag(*args, default=True, **kwargs)
# Flying, position, and orientation, reused in several places.
grounded = Struct("grounded", UBInt8("grounded"))
position = Struct("position",
BFloat64("x"),
BFloat64("y"),
BFloat64("stance"),
BFloat64("z")
)
orientation = Struct("orientation", BFloat32("rotation"), BFloat32("pitch"))
# TODO: this must be replaced with 'slot' (see below)
# Notchian item packing (slot data)
items = Struct("items",
SBInt16("primary"),
If(lambda context: context["primary"] >= 0,
Embed(Struct("item_information",
UBInt8("count"),
UBInt16("secondary"),
Magic("\xff\xff"),
)),
),
)
Speed = namedtuple('speed', 'x y z')
class Slot(object):
def __init__(self, item_id=-1, count=1, damage=0, nbt=None):
self.item_id = item_id
self.count = count
self.damage = damage
# TODO: Implement packing/unpacking of gzipped NBT data
self.nbt = nbt
@classmethod
def fromItem(cls, item, count):
return cls(item[0], count, item[1])
@property
def is_empty(self):
return self.item_id == -1
def __len__(self):
return 0 if self.nbt is None else len(self.nbt)
def __repr__(self):
from bravo.blocks import items
if self.is_empty:
return 'Slot()'
elif len(self):
return 'Slot(%s, count=%d, damage=%d, +nbt:%dB)' % (
str(items[self.item_id]), self.count, self.damage, len(self)
)
else:
return 'Slot(%s, count=%d, damage=%d)' % (
str(items[self.item_id]), self.count, self.damage
)
    def __eq__(self, other):
        return (self.item_id == other.item_id and
                self.count == other.count and
                self.damage == other.damage and
                self.nbt == other.nbt)
class SlotAdapter(Adapter):
def _decode(self, obj, context):
if obj.item_id == -1:
s = Slot(obj.item_id)
else:
s = Slot(obj.item_id, obj.count, obj.damage, obj.nbt)
return s
def _encode(self, obj, context):
if not isinstance(obj, Slot):
raise ConstructError('Slot object expected')
if obj.is_empty:
return Container(item_id=-1)
else:
return Container(item_id=obj.item_id, count=obj.count, damage=obj.damage,
nbt_len=len(obj) if len(obj) else -1, nbt=obj.nbt)
slot = SlotAdapter(
Struct("slot",
SBInt16("item_id"),
If(lambda context: context["item_id"] >= 0,
Embed(Struct("item_information",
UBInt8("count"),
UBInt16("damage"),
SBInt16("nbt_len"),
If(lambda context: context["nbt_len"] >= 0,
MetaField("nbt", lambda ctx: ctx["nbt_len"])
)
)),
)
)
)
Metadata = namedtuple("Metadata", "type value")
metadata_types = ["byte", "short", "int", "float", "string", "slot", "coords"]
# Metadata adaptor.
class MetadataAdapter(Adapter):
def _decode(self, obj, context):
d = {}
for m in obj.data:
d[m.id.key] = Metadata(metadata_types[m.id.type], m.value)
return d
def _encode(self, obj, context):
c = Container(data=[], terminator=None)
for k, v in obj.iteritems():
t, value = v
d = Container(
id=Container(type=metadata_types.index(t), key=k),
value=value,
peeked=None)
c.data.append(d)
if c.data:
c.data[-1].peeked = 127
else:
c.data.append(Container(id=Container(first=0, second=0), value=0,
peeked=127))
return c
# Metadata inner container.
metadata_switch = {
0: UBInt8("value"),
1: UBInt16("value"),
2: UBInt32("value"),
3: BFloat32("value"),
4: AlphaString("value"),
5: slot,
6: Struct("coords",
UBInt32("x"),
UBInt32("y"),
UBInt32("z"),
),
}
# Metadata subconstruct.
metadata = MetadataAdapter(
Struct("metadata",
RepeatUntil(lambda obj, context: obj["peeked"] == 0x7f,
Struct("data",
BitStruct("id",
BitField("type", 3),
BitField("key", 5),
),
Switch("value", lambda context: context["id"]["type"],
metadata_switch),
Peek(UBInt8("peeked")),
),
),
Const(UBInt8("terminator"), 0x7f),
),
)
# Build faces, used during dig and build.
faces = {
"noop": -1,
"-y": 0,
"+y": 1,
"-z": 2,
"+z": 3,
"-x": 4,
"+x": 5,
}
face = Enum(SBInt8("face"), **faces)
# World dimension.
dimensions = {
"earth": 0,
"sky": 1,
"nether": 255,
}
dimension = Enum(UBInt8("dimension"), **dimensions)
# Difficulty levels
difficulties = {
"peaceful": 0,
"easy": 1,
"normal": 2,
"hard": 3,
}
difficulty = Enum(UBInt8("difficulty"), **difficulties)
modes = {
"survival": 0,
"creative": 1,
"adventure": 2,
}
mode = Enum(UBInt8("mode"), **modes)
# Possible effects.
# XXX these names aren't really canonized yet
effect = Enum(UBInt8("effect"),
move_fast=1,
move_slow=2,
dig_fast=3,
dig_slow=4,
damage_boost=5,
heal=6,
harm=7,
jump=8,
confusion=9,
regenerate=10,
resistance=11,
fire_resistance=12,
water_resistance=13,
invisibility=14,
blindness=15,
night_vision=16,
hunger=17,
weakness=18,
poison=19,
wither=20,
)
# The actual packet list.
packets = {
0x00: Struct("ping",
UBInt32("pid"),
),
0x01: Struct("login",
# Player Entity ID (random number generated by the server)
UBInt32("eid"),
# default, flat, largeBiomes
AlphaString("leveltype"),
mode,
dimension,
difficulty,
UBInt8("unused"),
UBInt8("maxplayers"),
),
0x02: Struct("handshake",
UBInt8("protocol"),
AlphaString("username"),
AlphaString("host"),
UBInt32("port"),
),
0x03: Struct("chat",
- AlphaString("message"),
+ AlphaString("data"),
),
0x04: Struct("time",
# Total Ticks
UBInt64("timestamp"),
# Time of day
UBInt64("time"),
),
0x05: Struct("entity-equipment",
UBInt32("eid"),
UBInt16("slot"),
Embed(items),
),
0x06: Struct("spawn",
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
),
0x07: Struct("use",
UBInt32("eid"),
UBInt32("target"),
UBInt8("button"),
),
0x08: Struct("health",
BFloat32("hp"),
UBInt16("fp"),
BFloat32("saturation"),
),
0x09: Struct("respawn",
dimension,
difficulty,
mode,
UBInt16("height"),
AlphaString("leveltype"),
),
0x0a: grounded,
0x0b: Struct("position",
position,
grounded
),
0x0c: Struct("orientation",
orientation,
grounded
),
# TODO: Differ between client and server 'position'
0x0d: Struct("location",
position,
orientation,
grounded
),
0x0e: Struct("digging",
Enum(UBInt8("state"),
started=0,
cancelled=1,
stopped=2,
checked=3,
dropped=4,
# Also eating
shooting=5,
),
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
face,
),
0x0f: Struct("build",
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
face,
Embed(items),
UBInt8("cursorx"),
UBInt8("cursory"),
UBInt8("cursorz"),
),
# Hold Item Change
0x10: Struct("equip",
# Only 0-8
UBInt16("slot"),
),
0x11: Struct("bed",
UBInt32("eid"),
UBInt8("unknown"),
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
),
0x12: Struct("animate",
UBInt32("eid"),
Enum(UBInt8("animation"),
noop=0,
arm=1,
hit=2,
leave_bed=3,
eat=5,
unknown=102,
crouch=104,
uncrouch=105,
),
),
0x13: Struct("action",
UBInt32("eid"),
Enum(UBInt8("action"),
crouch=1,
uncrouch=2,
leave_bed=3,
start_sprint=4,
stop_sprint=5,
),
UBInt32("unknown"),
),
0x14: Struct("player",
UBInt32("eid"),
AlphaString("username"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
UBInt8("yaw"),
UBInt8("pitch"),
# 0 For none, unlike other packets
# -1 crashes clients
SBInt16("item"),
metadata,
),
0x16: Struct("collect",
UBInt32("eid"),
UBInt32("destination"),
),
# Object/Vehicle
0x17: Struct("object", # XXX: was 'vehicle'!
UBInt32("eid"),
Enum(UBInt8("type"), # See http://wiki.vg/Entities#Objects
boat=1,
item_stack=2,
minecart=10,
storage_cart=11,
powered_cart=12,
tnt=50,
ender_crystal=51,
arrow=60,
snowball=61,
egg=62,
thrown_enderpearl=65,
wither_skull=66,
falling_block=70,
frames=71,
ender_eye=72,
thrown_potion=73,
dragon_egg=74,
thrown_xp_bottle=75,
fishing_float=90,
),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
UBInt8("pitch"),
UBInt8("yaw"),
SBInt32("data"), # See http://www.wiki.vg/Object_Data
If(lambda context: context["data"] != 0,
Struct("speed",
SBInt16("x"),
SBInt16("y"),
SBInt16("z"),
)
),
),
0x18: Struct("mob",
UBInt32("eid"),
Enum(UBInt8("type"), **{
"Creeper": 50,
"Skeleton": 51,
"Spider": 52,
"GiantZombie": 53,
"Zombie": 54,
"Slime": 55,
"Ghast": 56,
"ZombiePig": 57,
"Enderman": 58,
"CaveSpider": 59,
"Silverfish": 60,
"Blaze": 61,
"MagmaCube": 62,
"EnderDragon": 63,
"Wither": 64,
"Bat": 65,
"Witch": 66,
"Pig": 90,
"Sheep": 91,
"Cow": 92,
"Chicken": 93,
"Squid": 94,
"Wolf": 95,
"Mooshroom": 96,
"Snowman": 97,
"Ocelot": 98,
"IronGolem": 99,
"Villager": 120
}),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
SBInt8("yaw"),
SBInt8("pitch"),
SBInt8("head_yaw"),
SBInt16("vx"),
SBInt16("vy"),
SBInt16("vz"),
metadata,
),
0x19: Struct("painting",
UBInt32("eid"),
AlphaString("title"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
face,
),
0x1a: Struct("experience",
UBInt32("eid"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
UBInt16("quantity"),
),
0x1b: Struct("steer",
BFloat32("first"),
BFloat32("second"),
Bool("third"),
Bool("fourth"),
),
0x1c: Struct("velocity",
UBInt32("eid"),
SBInt16("dx"),
SBInt16("dy"),
SBInt16("dz"),
),
0x1d: Struct("destroy",
UBInt8("count"),
MetaArray(lambda context: context["count"], UBInt32("eid")),
),
0x1e: Struct("create",
UBInt32("eid"),
),
0x1f: Struct("entity-position",
UBInt32("eid"),
SBInt8("dx"),
SBInt8("dy"),
SBInt8("dz")
),
0x20: Struct("entity-orientation",
UBInt32("eid"),
UBInt8("yaw"),
UBInt8("pitch")
),
0x21: Struct("entity-location",
UBInt32("eid"),
SBInt8("dx"),
SBInt8("dy"),
SBInt8("dz"),
UBInt8("yaw"),
UBInt8("pitch")
),
0x22: Struct("teleport",
UBInt32("eid"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
UBInt8("yaw"),
UBInt8("pitch"),
),
0x23: Struct("entity-head",
UBInt32("eid"),
UBInt8("yaw"),
),
0x26: Struct("status",
UBInt32("eid"),
Enum(UBInt8("status"),
damaged=2,
killed=3,
taming=6,
tamed=7,
drying=8,
eating=9,
sheep_eat=10,
golem_rose=11,
heart_particle=12,
angry_particle=13,
happy_particle=14,
magic_particle=15,
shaking=16,
firework=17,
),
),
0x27: Struct("attach",
UBInt32("eid"),
        # XXX -1 for detaching
UBInt32("vid"),
UBInt8("unknown"),
),
0x28: Struct("metadata",
UBInt32("eid"),
metadata,
),
0x29: Struct("effect",
UBInt32("eid"),
effect,
UBInt8("amount"),
UBInt16("duration"),
),
0x2a: Struct("uneffect",
UBInt32("eid"),
effect,
),
0x2b: Struct("levelup",
BFloat32("current"),
UBInt16("level"),
UBInt16("total"),
),
# XXX 0x2c, server to client, needs to be implemented, needs special
# UUID-packing techniques
0x33: Struct("chunk",
SBInt32("x"),
SBInt32("z"),
Bool("continuous"),
UBInt16("primary"),
UBInt16("add"),
PascalString("data", length_field=UBInt32("length"), encoding="zlib"),
),
0x34: Struct("batch",
SBInt32("x"),
SBInt32("z"),
UBInt16("count"),
PascalString("data", length_field=UBInt32("length")),
),
0x35: Struct("block",
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
UBInt16("type"),
UBInt8("meta"),
),
# XXX This covers general tile actions, not just note blocks.
# TODO: Needs work
0x36: Struct("block-action",
SBInt32("x"),
SBInt16("y"),
SBInt32("z"),
UBInt8("byte1"),
UBInt8("byte2"),
UBInt16("blockid"),
),
0x37: Struct("block-break-anim",
UBInt32("eid"),
UBInt32("x"),
UBInt32("y"),
UBInt32("z"),
UBInt8("stage"),
),
# XXX Server -> Client. Use 0x33 instead.
0x38: Struct("bulk-chunk",
UBInt16("count"),
UBInt32("length"),
UBInt8("sky_light"),
MetaField("data", lambda ctx: ctx["length"]),
MetaArray(lambda context: context["count"],
Struct("metadata",
UBInt32("chunk_x"),
UBInt32("chunk_z"),
UBInt16("bitmap_primary"),
UBInt16("bitmap_secondary"),
)
)
),
# TODO: Needs work?
0x3c: Struct("explosion",
BFloat64("x"),
BFloat64("y"),
BFloat64("z"),
BFloat32("radius"),
UBInt32("count"),
MetaField("blocks", lambda context: context["count"] * 3),
BFloat32("motionx"),
BFloat32("motiony"),
BFloat32("motionz"),
),
0x3d: Struct("sound",
Enum(UBInt32("sid"),
click2=1000,
click1=1001,
bow_fire=1002,
door_toggle=1003,
extinguish=1004,
record_play=1005,
charge=1007,
fireball=1008,
zombie_wood=1010,
zombie_metal=1011,
zombie_break=1012,
wither=1013,
smoke=2000,
block_break=2001,
splash_potion=2002,
ender_eye=2003,
blaze=2004,
),
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
UBInt32("data"),
Bool("volume-mod"),
),
0x3e: Struct("named-sound",
AlphaString("name"),
UBInt32("x"),
UBInt32("y"),
UBInt32("z"),
BFloat32("volume"),
UBInt8("pitch"),
),
0x3f: Struct("particle",
AlphaString("name"),
BFloat32("x"),
BFloat32("y"),
BFloat32("z"),
BFloat32("x_offset"),
BFloat32("y_offset"),
BFloat32("z_offset"),
BFloat32("speed"),
UBInt32("count"),
),
0x46: Struct("state",
Enum(UBInt8("state"),
bad_bed=0,
start_rain=1,
stop_rain=2,
mode_change=3,
run_credits=4,
),
mode,
),
0x47: Struct("thunderbolt",
UBInt32("eid"),
UBInt8("gid"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
),
0x64: Struct("window-open",
UBInt8("wid"),
Enum(UBInt8("type"),
chest=0,
workbench=1,
furnace=2,
dispenser=3,
enchatment_table=4,
brewing_stand=5,
npc_trade=6,
beacon=7,
anvil=8,
hopper=9,
),
AlphaString("title"),
UBInt8("slots"),
UBInt8("use_title"),
# XXX iff type == 0xb (currently unknown) write an extra secret int
# here. WTF?
),
0x65: Struct("window-close",
UBInt8("wid"),
),
0x66: Struct("window-action",
UBInt8("wid"),
UBInt16("slot"),
UBInt8("button"),
UBInt16("token"),
UBInt8("shift"), # TODO: rename to 'mode'
Embed(items),
),
0x67: Struct("window-slot",
UBInt8("wid"),
UBInt16("slot"),
Embed(items),
),
0x68: Struct("inventory",
UBInt8("wid"),
UBInt16("length"),
MetaArray(lambda context: context["length"], items),
),
0x69: Struct("window-progress",
UBInt8("wid"),
UBInt16("bar"),
UBInt16("progress"),
),
0x6a: Struct("window-token",
UBInt8("wid"),
UBInt16("token"),
Bool("acknowledged"),
),
0x6b: Struct("window-creative",
UBInt16("slot"),
Embed(items),
),
0x6c: Struct("enchant",
UBInt8("wid"),
UBInt8("enchantment"),
),
0x82: Struct("sign",
SBInt32("x"),
UBInt16("y"),
SBInt32("z"),
AlphaString("line1"),
AlphaString("line2"),
AlphaString("line3"),
diff --git a/bravo/beta/protocol.py b/bravo/beta/protocol.py
index 584a9d5..82f3474 100644
--- a/bravo/beta/protocol.py
+++ b/bravo/beta/protocol.py
@@ -1,1493 +1,1502 @@
# vim: set fileencoding=utf8 :
from itertools import product, chain
+import json
from time import time
from urlparse import urlunparse
from twisted.internet import reactor
from twisted.internet.defer import (DeferredList, inlineCallbacks,
maybeDeferred, succeed)
from twisted.internet.protocol import Protocol, connectionDone
from twisted.internet.task import cooperate, deferLater, LoopingCall
from twisted.internet.task import TaskDone, TaskFailed
from twisted.protocols.policies import TimeoutMixin
from twisted.python import log
from twisted.web.client import getPage
from bravo import version
from bravo.beta.structures import BuildData, Settings
from bravo.blocks import blocks, items
from bravo.chunk import CHUNK_HEIGHT
from bravo.entity import Sign
from bravo.errors import BetaClientError, BuildError
from bravo.ibravo import (IChatCommand, IPreBuildHook, IPostBuildHook,
IWindowOpenHook, IWindowClickHook, IWindowCloseHook,
IPreDigHook, IDigHook, ISignHook, IUseHook)
from bravo.infini.factory import InfiniClientFactory
from bravo.inventory.windows import InventoryWindow
from bravo.location import Location, Orientation, Position
from bravo.motd import get_motd
from bravo.beta.packets import parse_packets, make_packet, make_error_packet
from bravo.plugin import retrieve_plugins
from bravo.policy.dig import dig_policies
from bravo.utilities.coords import adjust_coords_for_face, split_coords
from bravo.utilities.chat import complete, username_alternatives
from bravo.utilities.maths import circling, clamp, sorted_by_distance
from bravo.utilities.temporal import timestamp_from_clock
# States of the protocol.
(STATE_UNAUTHENTICATED, STATE_AUTHENTICATED, STATE_LOCATED) = range(3)
SUPPORTED_PROTOCOL = 78
class BetaServerProtocol(object, Protocol, TimeoutMixin):
"""
The Minecraft Alpha/Beta server protocol.
This class is mostly designed to be a skeleton for featureful clients. It
tries hard to not step on the toes of potential subclasses.
"""
excess = ""
packet = None
state = STATE_UNAUTHENTICATED
buf = ""
parser = None
handler = None
player = None
username = None
settings = Settings()
motd = "Bravo Generic Beta Server"
_health = 20
_latency = 0
def __init__(self):
self.chunks = dict()
self.windows = {}
self.wid = 1
self.location = Location()
self.handlers = {
0x00: self.ping,
0x02: self.handshake,
0x03: self.chat,
0x07: self.use,
0x09: self.respawn,
0x0a: self.grounded,
0x0b: self.position,
0x0c: self.orientation,
0x0d: self.location_packet,
0x0e: self.digging,
0x0f: self.build,
0x10: self.equip,
0x12: self.animate,
0x13: self.action,
0x15: self.pickup,
0x65: self.wclose,
0x66: self.waction,
0x6a: self.wacknowledge,
0x6b: self.wcreative,
0x82: self.sign,
0xca: self.client_settings,
0xcb: self.complete,
0xcc: self.settings_packet,
0xfe: self.poll,
0xff: self.quit,
}
self._ping_loop = LoopingCall(self.update_ping)
self.setTimeout(30)
# Low-level packet handlers
# Try not to hook these if possible, since they offer no convenient
# abstractions or protections.
def ping(self, container):
"""
Hook for ping packets.
By default, this hook will examine the timestamps on incoming pings,
and use them to estimate the current latency of the connected client.
"""
now = timestamp_from_clock(reactor)
then = container.pid
self.latency = now - then
def handshake(self, container):
"""
Hook for handshake packets.
Override this to customize how logins are handled. By default, this
method will only confirm that the negotiated wire protocol is the
correct version, copy data out of the packet and onto the protocol,
and then run the ``authenticated`` callback.
This method will call the ``pre_handshake`` method hook prior to
logging in the client.
"""
self.username = container.username
if container.protocol < SUPPORTED_PROTOCOL:
# Kick old clients.
self.error("This server doesn't support your ancient client.")
return
elif container.protocol > SUPPORTED_PROTOCOL:
# Kick new clients.
self.error("This server doesn't support your newfangled client.")
return
log.msg("Handshaking with client, protocol version %d" %
container.protocol)
if not self.pre_handshake():
log.msg("Pre-handshake hook failed; kicking client")
self.error("You failed the pre-handshake hook.")
return
players = min(self.factory.limitConnections, 20)
self.write_packet("login", eid=self.eid, leveltype="default",
mode=self.factory.mode,
dimension=self.factory.world.dimension,
difficulty="peaceful", unused=0, maxplayers=players)
self.authenticated()
def pre_handshake(self):
"""
Whether this client should be logged in.
"""
return True
def chat(self, container):
"""
Hook for chat packets.
"""
def use(self, container):
"""
Hook for use packets.
"""
def respawn(self, container):
"""
Hook for respawn packets.
"""
def grounded(self, container):
"""
Hook for grounded packets.
"""
self.location.grounded = bool(container.grounded)
def position(self, container):
"""
Hook for position packets.
"""
if self.state != STATE_LOCATED:
log.msg("Ignoring unlocated position!")
return
self.grounded(container.grounded)
old_position = self.location.pos
position = Position.from_player(container.position.x,
container.position.y, container.position.z)
altered = False
dx, dy, dz = old_position - position
if any(abs(d) >= 64 for d in (dx, dy, dz)):
# Whoa, slow down there, cowboy. You're moving too fast. We're
# gonna ignore this position change completely, because it's
# either bogus or ignoring a recent teleport.
altered = True
else:
self.location.pos = position
self.location.stance = container.position.stance
        # Sanitize location. This handles safety boundaries, illegal stance,
# etc.
altered = self.location.clamp() or altered
# If, for any reason, our opinion on where the client should be
# located is different than theirs, force them to conform to our point
# of view.
if altered:
log.msg("Not updating bogus position!")
self.update_location()
# If our position actually changed, fire the position change hook.
if old_position != position:
self.position_changed()
def orientation(self, container):
"""
Hook for orientation packets.
"""
self.grounded(container.grounded)
old_orientation = self.location.ori
orientation = Orientation.from_degs(container.orientation.rotation,
container.orientation.pitch)
self.location.ori = orientation
if old_orientation != orientation:
self.orientation_changed()
def location_packet(self, container):
"""
Hook for location packets.
"""
self.position(container)
self.orientation(container)
def digging(self, container):
"""
Hook for digging packets.
"""
def build(self, container):
"""
Hook for build packets.
"""
def equip(self, container):
"""
Hook for equip packets.
"""
def pickup(self, container):
"""
Hook for pickup packets.
"""
def animate(self, container):
"""
Hook for animate packets.
"""
def action(self, container):
"""
Hook for action packets.
"""
def wclose(self, container):
"""
Hook for wclose packets.
"""
def waction(self, container):
"""
Hook for waction packets.
"""
def wacknowledge(self, container):
"""
Hook for wacknowledge packets.
"""
def wcreative(self, container):
"""
Hook for creative inventory action packets.
"""
def sign(self, container):
"""
Hook for sign packets.
"""
def client_settings(self, container):
"""
Hook for interaction setting packets.
"""
self.settings.update_interaction(container)
def complete(self, container):
"""
Hook for tab-completion packets.
"""
def settings_packet(self, container):
"""
Hook for presentation setting packets.
"""
self.settings.update_presentation(container)
def poll(self, container):
"""
Hook for poll packets.
By default, queries the parent factory for some data, and replays it
in a specific format to the requester. The connection is then closed
at both ends. This functionality is used by Beta 1.8 clients to poll
servers for status.
"""
log.msg("Poll data: %r" % container.data)
players = unicode(len(self.factory.protocols))
max_players = unicode(self.factory.limitConnections or 1000000)
data = [
u"§1",
unicode(SUPPORTED_PROTOCOL),
u"Bravo %s" % version,
self.motd,
players,
max_players,
]
response = u"\u0000".join(data)
self.error(response)
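The response assembled above is the Beta 1.8 server-list format: six fields joined by NUL characters, delivered through the kick message. A stdlib-only sketch (the version string and player counts here are made up):

```python
def poll_response(protocol, version, motd, players, max_players):
    """Assemble a Beta 1.8 server-list ping reply, as poll() does."""
    fields = [u"\xa71", str(protocol), "Bravo %s" % version, motd,
              str(players), str(max_players)]
    return u"\u0000".join(fields)

resp = poll_response(78, "2.0", "A Bravo server", 3, 20)
print(resp.split(u"\u0000")[1:])  # ['78', 'Bravo 2.0', 'A Bravo server', '3', '20']
```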
def quit(self, container):
"""
Hook for quit packets.
By default, merely logs the quit message and drops the connection.
Even if the connection is not dropped, it will be lost anyway since
the client will close the connection. It's better to explicitly let it
go here than to have zombie protocols.
"""
log.msg("Client is quitting: %s" % container.message)
self.transport.loseConnection()
# Twisted-level data handlers and methods
# Please don't override these needlessly, as they are pretty solid and
# shouldn't need to be touched.
def dataReceived(self, data):
self.buf += data
packets, self.buf = parse_packets(self.buf)
if packets:
self.resetTimeout()
for header, payload in packets:
if header in self.handlers:
d = maybeDeferred(self.handlers[header], payload)
@d.addErrback
def eb(failure):
log.err("Error while handling packet 0x%.2x" % header)
log.err(failure)
return None
else:
log.err("Didn't handle parseable packet 0x%.2x!" % header)
log.err(payload)
def connectionLost(self, reason=connectionDone):
if self._ping_loop.running:
self._ping_loop.stop()
def timeoutConnection(self):
self.error("Connection timed out")
# State-change callbacks
# Feel free to override these, but call them at some point.
def authenticated(self):
"""
Called when the client has successfully authenticated with the server.
"""
self.state = STATE_AUTHENTICATED
self._ping_loop.start(30)
# Event callbacks
    # These are meant to be overridden.
def orientation_changed(self):
"""
Called when the client moves.
This callback is only for orientation, not position.
"""
pass
def position_changed(self):
"""
Called when the client moves.
This callback is only for position, not orientation.
"""
pass
# Convenience methods for consolidating code and expressing intent. I
# hear that these are occasionally useful. If a method in this section can
# be used, then *PLEASE* use it; not using it is the same as open-coding
# whatever you're doing, and only hurts in the long run.
def write_packet(self, header, **payload):
"""
Send a packet to the client.
"""
self.transport.write(make_packet(header, **payload))
def update_ping(self):
"""
Send a keepalive to the client.
"""
timestamp = timestamp_from_clock(reactor)
self.write_packet("ping", pid=timestamp)
def update_location(self):
"""
Send this client's location to the client.
Also let other clients know where this client is.
"""
# Don't bother trying to update things if the position's not yet
# synchronized. We could end up jettisoning them into the void.
if self.state != STATE_LOCATED:
return
x, y, z = self.location.pos
yaw, pitch = self.location.ori.to_fracs()
# Inform everybody of our new location.
packet = make_packet("teleport", eid=self.player.eid, x=x, y=y, z=z,
yaw=yaw, pitch=pitch)
self.factory.broadcast_for_others(packet, self)
# Inform ourselves of our new location.
packet = self.location.save_to_packet()
self.transport.write(packet)
def ascend(self, count):
"""
Ascend to the next XZ-plane.
``count`` is the number of ascensions to perform, and may be zero in
order to force this player to not be standing inside a block.
:returns: bool of whether the ascension was successful
This client must be located for this method to have any effect.
"""
if self.state != STATE_LOCATED:
return False
x, y, z = self.location.pos.to_block()
bigx, smallx, bigz, smallz = split_coords(x, z)
chunk = self.chunks[bigx, bigz]
column = [chunk.get_block((smallx, i, smallz))
for i in range(CHUNK_HEIGHT)]
# Special case: Ascend at most once, if the current spot isn't good.
if count == 0:
if (not column[y]) or column[y + 1] or column[y + 2]:
# Yeah, we're gonna need to move.
count += 1
else:
# Nope, we're fine where we are.
return True
        for i in xrange(y, CHUNK_HEIGHT - 2):
# Find the next spot above us which has a platform and two empty
# blocks of air.
if column[i] and (not column[i + 1]) and not column[i + 2]:
count -= 1
if not count:
break
else:
return False
self.location.pos = self.location.pos._replace(y=i * 32)
return True
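The column scan above can be sketched in isolation: find the next height with a solid platform and two blocks of air over it (``next_platform`` is a hypothetical name; 1 means solid, 0 means air):

```python
def next_platform(column, y):
    """Return the first i >= y with a platform at i and air at i+1 and i+2."""
    for i in range(y, len(column) - 2):
        if column[i] and not column[i + 1] and not column[i + 2]:
            return i
    return None

# Standing at y=2, the next valid spot is the platform at y=4.
column = [1, 1, 0, 0, 1, 0, 0, 0]
print(next_platform(column, 2))  # 4
```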
def error(self, message):
"""
Error out.
This method sends ``message`` to the client as a descriptive error
message, then closes the connection.
"""
log.msg("Error: %r" % message)
self.transport.write(make_error_packet(message))
self.transport.loseConnection()
def play_notes(self, notes):
"""
Play some music.
Send a sequence of notes to the player. ``notes`` is a finite iterable
of pairs of instruments and pitches.
There is no way to time notes; if staggered playback is desired (and
it usually is!), then ``play_notes()`` should be called repeatedly at
the appropriate times.
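        For example, a hypothetical caller could stagger a two-note
        phrase by scheduling the second call with the reactor:
        ``self.play_notes([(instrument, 5)])`` now, followed by
        ``reactor.callLater(0.5, self.play_notes, [(instrument, 9)])``
        half a second later.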
This method turns the block beneath the player into a note block,
plays the requested notes through it, then turns it back into the
original block, all without actually modifying the chunk.
"""
x, y, z = self.location.pos.to_block()
if y:
y -= 1
bigx, smallx, bigz, smallz = split_coords(x, z)
if (bigx, bigz) not in self.chunks:
return
block = self.chunks[bigx, bigz].get_block((smallx, y, smallz))
meta = self.chunks[bigx, bigz].get_metadata((smallx, y, smallz))
self.write_packet("block", x=x, y=y, z=z,
type=blocks["note-block"].slot, meta=0)
for instrument, pitch in notes:
self.write_packet("note", x=x, y=y, z=z, pitch=pitch,
instrument=instrument)
self.write_packet("block", x=x, y=y, z=z, type=block, meta=meta)
+ def send_chat(self, message):
+ """
+ Send a chat message back to the client.
+ """
+
+ data = json.dumps({"text": message})
+ self.write_packet("chat", message=data)
+
# Automatic properties. Assigning to them causes the client to be notified
# of changes.
@property
def health(self):
return self._health
@health.setter
def health(self, value):
if not 0 <= value <= 20:
raise BetaClientError("Invalid health value %d" % value)
if self._health != value:
self.write_packet("health", hp=value, fp=0, saturation=0)
self._health = value
@property
def latency(self):
return self._latency
@latency.setter
def latency(self, value):
# Clamp the value to not exceed the boundaries of the packet. This is
# necessary even though, in theory, a ping this high is bad news.
value = clamp(value, 0, 65535)
# Check to see if this is a new value, and if so, alert everybody.
if self._latency != value:
packet = make_packet("players", name=self.username, online=True,
ping=value)
self.factory.broadcast(packet)
self._latency = value
class KickedProtocol(BetaServerProtocol):
"""
A very simple Beta protocol that helps enforce IP bans, Max Connections,
and Max Connections Per IP.
This protocol disconnects people as soon as they connect, with a helpful
message.
"""
def __init__(self, reason=None):
BetaServerProtocol.__init__(self)
if reason:
self.reason = reason
else:
self.reason = (
"This server doesn't like you very much."
" I don't like you very much either.")
def connectionMade(self):
self.error("%s" % self.reason)
class BetaProxyProtocol(BetaServerProtocol):
"""
A ``BetaServerProtocol`` that proxies for an InfiniCraft client.
"""
gateway = "server.wiki.vg"
def handshake(self, container):
self.write_packet("handshake", username="-")
def login(self, container):
self.username = container.username
self.write_packet("login", protocol=0, username="", seed=0,
dimension="earth")
url = urlunparse(("http", self.gateway, "/node/0/0/", None, None,
None))
d = getPage(url)
d.addCallback(self.start_proxy)
def start_proxy(self, response):
log.msg("Response: %s" % response)
log.msg("Starting proxy...")
address, port = response.split(":")
self.add_node(address, int(port))
def add_node(self, address, port):
"""
Add a new node to this client.
"""
from twisted.internet.endpoints import TCP4ClientEndpoint
log.msg("Adding node %s:%d" % (address, port))
endpoint = TCP4ClientEndpoint(reactor, address, port, 5)
self.node_factory = InfiniClientFactory()
d = endpoint.connect(self.node_factory)
d.addCallback(self.node_connected)
d.addErrback(self.node_connect_error)
def node_connected(self, protocol):
log.msg("Connected new node!")
def node_connect_error(self, reason):
log.err("Couldn't connect node!")
log.err(reason)
class BravoProtocol(BetaServerProtocol):
"""
A ``BetaServerProtocol`` suitable for serving MC worlds to clients.
This protocol really does need to be hooked up with a ``BravoFactory`` or
something very much like it.
"""
chunk_tasks = None
time_loop = None
eid = 0
last_dig = None
def __init__(self, config, name):
BetaServerProtocol.__init__(self)
self.config = config
self.config_name = "world %s" % name
# Retrieve the MOTD. Only needs to be done once.
self.motd = self.config.getdefault(self.config_name, "motd",
"BravoServer")
def register_hooks(self):
log.msg("Registering client hooks...")
plugin_types = {
"open_hooks": IWindowOpenHook,
"click_hooks": IWindowClickHook,
"close_hooks": IWindowCloseHook,
"pre_build_hooks": IPreBuildHook,
"post_build_hooks": IPostBuildHook,
"pre_dig_hooks": IPreDigHook,
"dig_hooks": IDigHook,
"sign_hooks": ISignHook,
"use_hooks": IUseHook,
}
for t in plugin_types:
setattr(self, t, getattr(self.factory, t))
log.msg("Registering policies...")
if self.factory.mode == "creative":
self.dig_policy = dig_policies["speedy"]
else:
self.dig_policy = dig_policies["notchy"]
log.msg("Registered client plugin hooks!")
def pre_handshake(self):
"""
Set up username and get going.
"""
if self.username in self.factory.protocols:
# This username's already taken; find a new one.
for name in username_alternatives(self.username):
if name not in self.factory.protocols:
self.username = name
break
else:
self.error("Your username is already taken.")
return False
return True
@inlineCallbacks
def authenticated(self):
BetaServerProtocol.authenticated(self)
# Init player, and copy data into it.
self.player = yield self.factory.world.load_player(self.username)
self.player.eid = self.eid
self.location = self.player.location
# Init players' inventory window.
self.inventory = InventoryWindow(self.player.inventory)
# *Now* we are in our factory's list of protocols. Be aware.
self.factory.protocols[self.username] = self
# Announce our presence.
self.factory.chat("%s is joining the game..." % self.username)
packet = make_packet("players", name=self.username, online=True,
ping=0)
self.factory.broadcast(packet)
# Craft our avatar and send it to already-connected other players.
packet = make_packet("create", eid=self.player.eid)
packet += self.player.save_to_packet()
self.factory.broadcast_for_others(packet, self)
# And of course spawn all of those players' avatars in our client as
# well.
for protocol in self.factory.protocols.itervalues():
# Skip over ourselves; otherwise, the client tweaks out and
# usually either dies or locks up.
if protocol is self:
continue
self.write_packet("create", eid=protocol.player.eid)
packet = protocol.player.save_to_packet()
packet += protocol.player.save_equipment_to_packet()
self.transport.write(packet)
# Send spawn and inventory.
spawn = self.factory.world.level.spawn
packet = make_packet("spawn", x=spawn[0], y=spawn[1], z=spawn[2])
packet += self.inventory.save_to_packet()
self.transport.write(packet)
# TODO: Send Abilities (0xca)
# TODO: Update Health (0x08)
# TODO: Update Experience (0x2b)
# Send weather.
self.transport.write(self.factory.vane.make_packet())
self.send_initial_chunk_and_location()
self.time_loop = LoopingCall(self.update_time)
self.time_loop.start(10)
def orientation_changed(self):
# Bang your head!
yaw, pitch = self.location.ori.to_fracs()
packet = make_packet("entity-orientation", eid=self.player.eid,
yaw=yaw, pitch=pitch)
self.factory.broadcast_for_others(packet, self)
def position_changed(self):
# Send chunks.
self.update_chunks()
for entity in self.entities_near(2):
if entity.name != "Item":
continue
left = self.player.inventory.add(entity.item, entity.quantity)
if left != entity.quantity:
if left != 0:
# partial collect
entity.quantity = left
else:
packet = make_packet("collect", eid=entity.eid,
destination=self.player.eid)
packet += make_packet("destroy", count=1, eid=[entity.eid])
self.factory.broadcast(packet)
self.factory.destroy_entity(entity)
packet = self.inventory.save_to_packet()
self.transport.write(packet)
def entities_near(self, radius):
"""
Obtain the entities within a radius of this player.
Radius is measured in blocks.
"""
chunk_radius = int(radius // 16 + 1)
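        # Worked example: radius=4 gives chunk_radius = int(4 // 16 + 1)
        # = 1, so the loop below scans the 3x3 square of chunks centered
        # on the player's current chunk.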
chunkx, chunkz = self.location.pos.to_chunk()
minx = chunkx - chunk_radius
maxx = chunkx + chunk_radius + 1
minz = chunkz - chunk_radius
maxz = chunkz + chunk_radius + 1
for x, z in product(xrange(minx, maxx), xrange(minz, maxz)):
if (x, z) not in self.chunks:
continue
chunk = self.chunks[x, z]
yieldables = [entity for entity in chunk.entities
if self.location.distance(entity.location) <= (radius * 32)]
for i in yieldables:
yield i
def chat(self, container):
+ # data = json.loads(container.data)
+ log.msg("Chat! %r" % container.data)
if container.message.startswith("/"):
commands = retrieve_plugins(IChatCommand, factory=self.factory)
# Register aliases.
for plugin in commands.values():
for alias in plugin.aliases:
commands[alias] = plugin
params = container.message[1:].split(" ")
command = params.pop(0).lower()
if command and command in commands:
def cb(iterable):
for line in iterable:
- self.write_packet("chat", message=line)
+ self.send_chat(line)
def eb(error):
- self.write_packet("chat", message="Error: %s" %
- error.getErrorMessage())
+ self.send_chat("Error: %s" % error.getErrorMessage())
+
d = maybeDeferred(commands[command].chat_command,
self.username, params)
d.addCallback(cb)
d.addErrback(eb)
else:
- self.write_packet("chat",
- message="Unknown command: %s" % command)
+ self.send_chat("Unknown command: %s" % command)
else:
# Send the message up to the factory to be chatified.
message = "<%s> %s" % (self.username, container.message)
self.factory.chat(message)
def use(self, container):
"""
        For each entity in proximity (4 blocks), check if it is the target
        of this packet, and call all hooks that stated interest in this
        type.
"""
nearby_players = self.factory.players_near(self.player, 4)
for entity in chain(self.entities_near(4), nearby_players):
if entity.eid == container.target:
for hook in self.use_hooks[entity.name]:
hook.use_hook(self.factory, self.player, entity,
container.button == 0)
break
@inlineCallbacks
def digging(self, container):
if container.x == -1 and container.z == -1 and container.y == 255:
# Lala-land dig packet. Discard it for now.
return
# Player drops currently holding item/block.
if (container.state == "dropped" and container.face == "-y" and
container.x == 0 and container.y == 0 and container.z == 0):
i = self.player.inventory
holding = i.holdables[self.player.equipped]
if holding:
primary, secondary, count = holding
if i.consume((primary, secondary), self.player.equipped):
dest = self.location.in_front_of(2)
coords = dest.pos._replace(y=dest.pos.y + 1)
self.factory.give(coords, (primary, secondary), 1)
# Re-send inventory.
packet = self.inventory.save_to_packet()
self.transport.write(packet)
# If no items in this slot are left, this player isn't
# holding an item anymore.
if i.holdables[self.player.equipped] is None:
packet = make_packet("entity-equipment",
eid=self.player.eid,
slot=0,
primary=65535,
count=1,
secondary=0
)
self.factory.broadcast_for_others(packet, self)
return
if container.state == "shooting":
self.shoot_arrow()
return
bigx, smallx, bigz, smallz = split_coords(container.x, container.z)
coords = smallx, container.y, smallz
try:
chunk = self.chunks[bigx, bigz]
except KeyError:
self.error("Couldn't dig in chunk (%d, %d)!" % (bigx, bigz))
return
block = chunk.get_block((smallx, container.y, smallz))
if container.state == "started":
# Run pre dig hooks
for hook in self.pre_dig_hooks:
cancel = yield maybeDeferred(hook.pre_dig_hook, self.player,
(container.x, container.y, container.z), block)
if cancel:
return
tool = self.player.inventory.holdables[self.player.equipped]
# Check to see whether we should break this block.
if self.dig_policy.is_1ko(block, tool):
self.run_dig_hooks(chunk, coords, blocks[block])
else:
# Set up a timer for breaking the block later.
dtime = time() + self.dig_policy.dig_time(block, tool)
self.last_dig = coords, block, dtime
elif container.state == "stopped":
# The client thinks it has broken a block. We shall see.
if not self.last_dig:
return
oldcoords, oldblock, dtime = self.last_dig
if oldcoords != coords or oldblock != block:
# Nope!
self.last_dig = None
return
dtime -= time()
# When enough time has elapsed, run the dig hooks.
d = deferLater(reactor, max(dtime, 0), self.run_dig_hooks, chunk,
coords, blocks[block])
d.addCallback(lambda none: setattr(self, "last_dig", None))
def run_dig_hooks(self, chunk, coords, block):
"""
Destroy a block and run the post-destroy dig hooks.
"""
x, y, z = coords
if block.breakable:
chunk.destroy(coords)
l = []
for hook in self.dig_hooks:
l.append(maybeDeferred(hook.dig_hook, chunk, x, y, z, block))
dl = DeferredList(l)
dl.addCallback(lambda none: self.factory.flush_chunk(chunk))
@inlineCallbacks
def build(self, container):
"""
Handle a build packet.
        Several things must happen. First, the packet's contents are
        examined to ensure that the packet is valid. Next, a check is done
        to see whether the packet is opening a windowed object; if not, a
        build is run.
"""
# Is the target within our purview? We don't do a very strict
# containment check, but we *do* require that the chunk be loaded.
bigx, smallx, bigz, smallz = split_coords(container.x, container.z)
try:
chunk = self.chunks[bigx, bigz]
except KeyError:
self.error("Couldn't select in chunk (%d, %d)!" % (bigx, bigz))
return
target = blocks[chunk.get_block((smallx, container.y, smallz))]
# Attempt to open a window.
from bravo.policy.windows import window_for_block
window = window_for_block(target)
if window is not None:
# We have a window!
self.windows[self.wid] = window
identifier, title, slots = window.open()
self.write_packet("window-open", wid=self.wid, type=identifier,
title=title, slots=slots)
self.wid += 1
return
# Try to open it first
for hook in self.open_hooks:
window = yield maybeDeferred(hook.open_hook, self, container,
chunk.get_block((smallx, container.y, smallz)))
if window:
self.write_packet("window-open", wid=window.wid,
type=window.identifier, title=window.title,
slots=window.slots_num)
packet = window.save_to_packet()
self.transport.write(packet)
# window opened
return
# Ignore clients that think -1 is placeable.
if container.primary == -1:
return
# Special case when face is "noop": Update the status of the currently
# held block rather than placing a new block.
if container.face == "noop":
return
# If the target block is vanishable, then adjust our aim accordingly.
if target.vanishes:
container.face = "+y"
container.y -= 1
if container.primary in blocks:
block = blocks[container.primary]
elif container.primary in items:
block = items[container.primary]
else:
log.err("Ignoring request to place unknown block %d" %
container.primary)
return
# Run pre-build hooks. These hooks are able to interrupt the build
# process.
builddata = BuildData(block, 0x0, container.x, container.y,
container.z, container.face)
for hook in self.pre_build_hooks:
cont, builddata, cancel = yield maybeDeferred(hook.pre_build_hook,
self.player, builddata)
if cancel:
# Flush damaged chunks.
for chunk in self.chunks.itervalues():
self.factory.flush_chunk(chunk)
return
if not cont:
break
# Run the build.
try:
yield maybeDeferred(self.run_build, builddata)
except BuildError:
return
newblock = builddata.block.slot
coords = adjust_coords_for_face(
(builddata.x, builddata.y, builddata.z), builddata.face)
# Run post-build hooks. These are merely callbacks which cannot
# interfere with the build process, largely because the build process
# already happened.
for hook in self.post_build_hooks:
yield maybeDeferred(hook.post_build_hook, self.player, coords,
builddata.block)
# Feed automatons.
for automaton in self.factory.automatons:
if newblock in automaton.blocks:
automaton.feed(coords)
# Re-send inventory.
# XXX this could be optimized if/when inventories track damage.
packet = self.inventory.save_to_packet()
self.transport.write(packet)
# Flush damaged chunks.
for chunk in self.chunks.itervalues():
self.factory.flush_chunk(chunk)
def run_build(self, builddata):
block, metadata, x, y, z, face = builddata
# Don't place items as blocks.
if block.slot not in blocks:
raise BuildError("Couldn't build item %r as block" % block)
# Check for orientable blocks.
if not metadata and block.orientable():
metadata = block.orientation(face)
if metadata is None:
# Oh, I guess we can't even place the block on this face.
raise BuildError("Couldn't orient block %r on face %s" %
(block, face))
# Make sure we can remove it from the inventory first.
if not self.player.inventory.consume((block.slot, 0),
self.player.equipped):
# Okay, first one was a bust; maybe we can consume the related
# block for dropping instead?
if not self.player.inventory.consume(block.drop,
self.player.equipped):
raise BuildError("Couldn't consume %r from inventory" % block)
# Offset coords according to face.
x, y, z = adjust_coords_for_face((x, y, z), face)
# Set the block and data.
dl = [self.factory.world.set_block((x, y, z), block.slot)]
if metadata:
dl.append(self.factory.world.set_metadata((x, y, z), metadata))
return DeferredList(dl)
def equip(self, container):
self.player.equipped = container.slot
# Inform everyone about the item the player is holding now.
item = self.player.inventory.holdables[self.player.equipped]
if item is None:
# Empty slot. Use signed short -1.
primary, secondary = -1, 0
else:
primary, secondary, count = item
packet = make_packet("entity-equipment",
eid=self.player.eid,
slot=0,
primary=primary,
count=1,
secondary=secondary
)
self.factory.broadcast_for_others(packet, self)
def pickup(self, container):
self.factory.give((container.x, container.y, container.z),
(container.primary, container.secondary), container.count)
def animate(self, container):
        # Broadcast the animation of the entity to everyone else. Only the
        # swing-arm animation is sent by Notchian clients.
packet = make_packet("animate",
eid=self.player.eid,
animation=container.animation
)
self.factory.broadcast_for_others(packet, self)
def wclose(self, container):
wid = container.wid
if wid == 0:
# WID 0 is reserved for the client inventory.
pass
elif wid in self.windows:
w = self.windows.pop(wid)
w.close()
else:
self.error("WID %d doesn't exist." % wid)
def waction(self, container):
wid = container.wid
if wid in self.windows:
w = self.windows[wid]
result = w.action(container.slot, container.button,
container.token, container.shift,
container.primary)
self.write_packet("window-token", wid=wid, token=container.token,
acknowledged=result)
else:
self.error("WID %d doesn't exist." % wid)
def wcreative(self, container):
"""
A slot was altered in creative mode.
"""
# XXX Sometimes the container doesn't contain all of this information.
# What then?
applied = self.inventory.creative(container.slot, container.primary,
container.secondary, container.count)
if applied:
# Inform other players about changes to this player's equipment.
equipped_slot = self.player.equipped + 36
if container.slot == equipped_slot:
packet = make_packet("entity-equipment",
eid=self.player.eid,
# XXX why 0? why not the actual slot?
slot=0,
primary=container.primary,
count=1,
secondary=container.secondary,
)
self.factory.broadcast_for_others(packet, self)
def shoot_arrow(self):
# TODO 1. Create arrow entity: arrow = Arrow(self.factory, self.player)
# 2. Register within the factory: self.factory.register_entity(arrow)
# 3. Run it: arrow.run()
pass
def sign(self, container):
bigx, smallx, bigz, smallz = split_coords(container.x, container.z)
try:
chunk = self.chunks[bigx, bigz]
except KeyError:
self.error("Couldn't handle sign in chunk (%d, %d)!" % (bigx, bigz))
return
if (smallx, container.y, smallz) in chunk.tiles:
new = False
s = chunk.tiles[smallx, container.y, smallz]
else:
new = True
s = Sign(smallx, container.y, smallz)
chunk.tiles[smallx, container.y, smallz] = s
s.text1 = container.line1
s.text2 = container.line2
s.text3 = container.line3
s.text4 = container.line4
chunk.dirty = True
# The best part of a sign isn't making one, it's showing everybody
# else on the server that you did.
packet = make_packet("sign", container)
self.factory.broadcast_for_chunk(packet, bigx, bigz)
# Run sign hooks.
for hook in self.sign_hooks:
hook.sign_hook(self.factory, chunk, container.x, container.y,
container.z, [s.text1, s.text2, s.text3, s.text4], new)
def complete(self, container):
"""
Attempt to tab-complete user names.
"""
needle = container.autocomplete
usernames = self.factory.protocols.keys()
results = complete(needle, usernames)
self.write_packet("tab", autocomplete=results)
def settings_packet(self, container):
"""
Acknowledge a change of settings and update chunk distance.
"""
super(BravoProtocol, self).settings_packet(container)
self.update_chunks()
def disable_chunk(self, x, z):
key = x, z
log.msg("Disabling chunk %d, %d" % key)
if key not in self.chunks:
log.msg("...But the chunk wasn't loaded!")
return
# Remove the chunk from cache.
chunk = self.chunks.pop(key)
eids = [e.eid for e in chunk.entities]
self.write_packet("destroy", count=len(eids), eid=eids)
# Clear chunk data on the client.
self.write_packet("chunk", x=x, z=z, continuous=False, primary=0x0,
add=0x0, data="")
def enable_chunk(self, x, z):
"""
Request a chunk.
This function will asynchronously obtain the chunk, and send it on the
wire.
:returns: `Deferred` that will be fired when the chunk is obtained,
with no arguments
"""
log.msg("Enabling chunk %d, %d" % (x, z))
if (x, z) in self.chunks:
log.msg("...But the chunk was already loaded!")
return succeed(None)
d = self.factory.world.request_chunk(x, z)
@d.addCallback
def cb(chunk):
self.chunks[x, z] = chunk
return chunk
d.addCallback(self.send_chunk)
return d
def send_chunk(self, chunk):
log.msg("Sending chunk %d, %d" % (chunk.x, chunk.z))
packet = chunk.save_to_packet()
self.transport.write(packet)
for entity in chunk.entities:
packet = entity.save_to_packet()
self.transport.write(packet)
for entity in chunk.tiles.itervalues():
if entity.name == "Sign":
packet = entity.save_to_packet()
self.transport.write(packet)
def send_initial_chunk_and_location(self):
"""
Send the initial chunks and location.
This method sends more than one chunk; since Beta 1.2, it must send
nearly fifty chunks before the location can be safely sent.
"""
# Disable located hooks. We'll re-enable them at the end.
self.state = STATE_AUTHENTICATED
log.msg("Initial, position %d, %d, %d" % self.location.pos)
x, y, z = self.location.pos.to_block()
bigx, smallx, bigz, smallz = split_coords(x, z)
# Send the chunk that the player will stand on. The other chunks are
# not so important. There *used* to be a bug, circa Beta 1.2, that
# required lots of surrounding geometry to be present, but that's been
# fixed.
d = self.enable_chunk(bigx, bigz)
# What to do if we can't load a given chunk? Just kick 'em.
d.addErrback(lambda fail: self.error("Couldn't load a chunk... :c"))
# Don't dare send more chunks beyond the initial one until we've
# spawned. Once we've spawned, set our status to LOCATED and then
# update_location() will work.
@d.addCallback
def located(none):
self.state = STATE_LOCATED
# Ensure that we're above-ground.
self.ascend(0)
d.addCallback(lambda none: self.update_location())
d.addCallback(lambda none: self.position_changed())
# Send the MOTD.
if self.motd:
@d.addCallback
def motd(none):
- self.write_packet("chat",
- message=self.motd.replace("<tagline>", get_motd()))
+ self.send_chat(self.motd.replace("<tagline>", get_motd()))
# Finally, start the secondary chunk loop.
d.addCallback(lambda none: self.update_chunks())
def update_chunks(self):
# Don't send chunks unless we're located.
if self.state != STATE_LOCATED:
return
x, z = self.location.pos.to_chunk()
# These numbers come from a couple spots, including minecraftwiki, but
# I verified them experimentally using torches and pillars to mark
# distances on each setting. ~ C.
distances = {
"tiny": 2,
"short": 4,
"far": 16,
}
radius = distances.get(self.settings.distance, 8)
new = set(circling(x, z, radius))
old = set(self.chunks.iterkeys())
added = new - old
discarded = old - new
# Perhaps some explanation is in order.
# The cooperate() function iterates over the iterable it is fed,
# without tying up the reactor, by yielding after each iteration. The
# inner part of the generator expression generates all of the chunks
# around the currently needed chunk, and it sorts them by distance to
# the current chunk. The end result is that we load chunks one-by-one,
# nearest to furthest, without stalling other clients.
if self.chunk_tasks:
for task in self.chunk_tasks:
try:
task.stop()
except (TaskDone, TaskFailed):
pass
to_enable = sorted_by_distance(added, x, z)
self.chunk_tasks = [
cooperate(self.enable_chunk(i, j) for i, j in to_enable),
cooperate(self.disable_chunk(i, j) for i, j in discarded),
]
def update_time(self):
time = int(self.factory.time)
self.write_packet("time", timestamp=time, time=time % 24000)
def connectionLost(self, reason=connectionDone):
"""
Cleanup after a lost connection.
Most of the time, these connections are lost cleanly; we don't have
any cleanup to do in the unclean case since clients don't have any
kind of pending state which must be recovered.
Remember, the connection can be lost before identification and
authentication, so ``self.username`` and ``self.player`` can be None.
"""
if self.username and self.player:
self.factory.world.save_player(self.username, self.player)
if self.player:
self.factory.destroy_entity(self.player)
packet = make_packet("destroy", count=1, eid=[self.player.eid])
self.factory.broadcast(packet)
if self.username:
packet = make_packet("players", name=self.username, online=False,
ping=0)
self.factory.broadcast(packet)
self.factory.chat("%s has left the game." % self.username)
self.factory.teardown_protocol(self)
# We are now torn down. After this point, there will be no more
# factory stuff, just our own personal stuff.
del self.factory
if self.time_loop:
self.time_loop.stop()
if self.chunk_tasks:
for task in self.chunk_tasks:
try:
task.stop()
except (TaskDone, TaskFailed):
pass
diff --git a/bravo/plugins/commands/common.py b/bravo/plugins/commands/common.py
index ae62102..8d91ce5 100644
--- a/bravo/plugins/commands/common.py
+++ b/bravo/plugins/commands/common.py
@@ -1,479 +1,481 @@
+import json
from textwrap import wrap
from twisted.internet import reactor
from zope.interface import implements
from bravo.beta.packets import make_packet
from bravo.blocks import parse_block
from bravo.ibravo import IChatCommand, IConsoleCommand
from bravo.plugin import retrieve_plugins
from bravo.policy.seasons import Spring, Winter
from bravo.utilities.temporal import split_time
def parse_player(factory, name):
if name in factory.protocols:
return factory.protocols[name]
else:
raise Exception("Couldn't find player %s" % name)
class Help(object):
"""
Provide helpful information about commands.
"""
implements(IChatCommand, IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def general_help(self, plugins):
"""
Return a list of commands.
"""
commands = [plugin.name for plugin in set(plugins.itervalues())]
commands.sort()
wrapped = wrap(", ".join(commands), 60)
help_text = [
"Use /help <command> for more information on a command.",
"List of commands:",
] + wrapped
return help_text
def specific_help(self, plugins, name):
"""
Return specific help about a single plugin.
"""
try:
plugin = plugins[name]
except:
return ("No such command!",)
help_text = [
"Usage: %s %s" % (plugin.name, plugin.usage),
]
if plugin.aliases:
help_text.append("Aliases: %s" % ", ".join(plugin.aliases))
help_text.append(plugin.__doc__)
return help_text
def chat_command(self, username, parameters):
plugins = retrieve_plugins(IChatCommand, factory=self.factory)
if parameters:
return self.specific_help(plugins, "".join(parameters))
else:
return self.general_help(plugins)
def console_command(self, parameters):
plugins = retrieve_plugins(IConsoleCommand, factory=self.factory)
if parameters:
return self.specific_help(plugins, "".join(parameters))
else:
return self.general_help(plugins)
name = "help"
aliases = tuple()
usage = ""
class List(object):
"""
List the currently connected players.
"""
implements(IChatCommand, IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def dispatch(self, factory):
yield "Connected players: %s" % (", ".join(
player for player in factory.protocols))
def chat_command(self, username, parameters):
for i in self.dispatch(self.factory):
yield i
def console_command(self, parameters):
for i in self.dispatch(self.factory):
yield i
name = "list"
aliases = ("playerlist",)
usage = ""
class Time(object):
"""
Obtain or change the current time and date.
"""
# XXX my code is all over the place; clean me up
implements(IChatCommand, IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def dispatch(self, factory):
hours, minutes = split_time(factory.time)
# If the factory's got seasons enabled, then the world will have
# a season, and we can examine it. Otherwise, just print the day as-is
# for the date.
season = factory.world.season
if season:
day_of_season = factory.day - season.day
while day_of_season < 0:
day_of_season += 360
date = "{0} ({1} {2})".format(factory.day, day_of_season,
season.name)
else:
date = "%d" % factory.day
return ("%02d:%02d, %s" % (hours, minutes, date),)
def chat_command(self, username, parameters):
if len(parameters) >= 1:
# Set the time
time = parameters[0]
if time == 'sunset':
time = 12000
elif time == 'sunrise':
time = 24000
elif ':' in time:
# Interpret it as a real-world esque time (24hr clock)
hours, minutes = time.split(':')
hours, minutes = int(hours), int(minutes)
# 24000 ticks / day = 1000 ticks / hour ~= 16.6 ticks / minute
time = (hours * 1000) + (minutes * 50 / 3)
            time -= 6000 # offset so 06:00 maps to tick 0; noon is tick 6000.
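            # Worked example: "18:30" becomes 18 * 1000 + 30 * 50 / 3
            # = 18500 ticks; subtracting the 6000-tick offset yields 12500.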
if len(parameters) >= 2:
self.factory.day = int(parameters[1])
self.factory.time = int(time)
self.factory.update_time()
self.factory.update_season()
# Update the time for the clients
self.factory.broadcast_time()
# Tell the user the current time.
return self.dispatch(self.factory)
def console_command(self, parameters):
return self.dispatch(self.factory)
name = "time"
aliases = ("date",)
usage = "[time] [day]"
class Say(object):
"""
Broadcast a message to everybody.
"""
implements(IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def console_command(self, parameters):
message = "[Server] %s" % " ".join(parameters)
yield message
- packet = make_packet("chat", message=message)
+ data = json.dumps({"text": message})
+ packet = make_packet("chat", data=data)
self.factory.broadcast(packet)
name = "say"
aliases = tuple()
usage = "<message>"
class Give(object):
"""
Spawn block or item pickups near a player.
"""
implements(IChatCommand)
def __init__(self, factory):
self.factory = factory
def chat_command(self, username, parameters):
if len(parameters) == 0:
return ("Usage: /{0} {1}".format(self.name, self.usage),)
elif len(parameters) == 1:
block = parameters[0]
count = 1
elif len(parameters) == 2:
block = parameters[0]
count = parameters[1]
else:
block = " ".join(parameters[:-1])
count = parameters[-1]
player = parse_player(self.factory, username)
block = parse_block(block)
count = int(count)
# Get a location two blocks in front of the player.
dest = player.player.location.in_front_of(2)
dest.y += 1
coords = int(dest.x * 32), int(dest.y * 32), int(dest.z * 32)
self.factory.give(coords, block, count)
# Return an empty tuple for iteration
return tuple()
name = "give"
aliases = tuple()
usage = "<block> <quantity>"
class Quit(object):
"""
Gracefully shutdown the server.
"""
implements(IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def console_command(self, parameters):
# Let's shutdown!
message = "Server shutting down."
yield message
# Use an error packet to kick clients cleanly.
packet = make_packet("error", message=message)
self.factory.broadcast(packet)
yield "Saving all chunks to disk..."
for chunk in self.factory.world._cache.iterdirty():
yield self.factory.world.save_chunk(chunk)
yield "Halting."
reactor.stop()
name = "quit"
aliases = ("exit",)
usage = ""
class SaveAll(object):
"""
Save all world data to disk.
"""
implements(IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def console_command(self, parameters):
yield "Flushing all chunks..."
for chunk in self.factory.world._cache.iterdirty():
yield self.factory.world.save_chunk(chunk)
yield "Save complete!"
name = "save-all"
aliases = tuple()
usage = ""
class SaveOff(object):
"""
Disable saving world data to disk.
"""
implements(IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def console_command(self, parameters):
yield "Disabling saving..."
self.factory.world.save_off()
yield "Saving disabled. Currently running in memory."
name = "save-off"
aliases = tuple()
usage = ""
class SaveOn(object):
"""
Enable saving world data to disk.
"""
implements(IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def console_command(self, parameters):
yield "Enabling saving (this could take a bit)..."
self.factory.world.save_on()
yield "Saving enabled."
name = "save-on"
aliases = tuple()
usage = ""
class WriteConfig(object):
"""
Write configuration to disk.
"""
implements(IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def console_command(self, parameters):
with open("".join(parameters), "wb") as f:
self.factory.config.write(f)
yield "Configuration saved."
name = "write-config"
aliases = tuple()
usage = ""
class Season(object):
"""
Change the season.
This command fast-forwards the calendar to the first day of the requested
season.
"""
implements(IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def console_command(self, parameters):
wanted = " ".join(parameters)
if wanted == "spring":
season = Spring()
elif wanted == "winter":
season = Winter()
else:
yield "Couldn't find season %s" % wanted
return
msg = "Changing season to %s..." % wanted
yield msg
self.factory.day = season.day
self.factory.update_season()
yield "Season successfully changed!"
name = "season"
aliases = tuple()
usage = "<season>"
class Me(object):
"""
Emote.
"""
implements(IChatCommand)
def __init__(self, factory):
pass
def chat_command(self, username, parameters):
say = " ".join(parameters)
msg = "* %s %s" % (username, say)
return (msg,)
name = "me"
aliases = tuple()
usage = "<message>"
class Kick(object):
"""
Kick a player from the world.
With great power comes great responsibility; use this wisely.
"""
implements(IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def dispatch(self, parameters):
player = parse_player(self.factory, parameters[0])
if len(parameters) == 1:
msg = "%s has been kicked." % parameters[0]
elif len(parameters) > 1:
reason = " ".join(parameters[1:])
            msg = "%s has been kicked for %s." % (parameters[0], reason)
packet = make_packet("error", message=msg)
player.transport.write(packet)
yield msg
def console_command(self, parameters):
for i in self.dispatch(parameters):
yield i
name = "kick"
aliases = tuple()
usage = "<player> [<reason>]"
class GetPos(object):
"""
Ascertain a player's location.
This command is identical to the command provided by Hey0.
"""
implements(IChatCommand)
def __init__(self, factory):
self.factory = factory
def chat_command(self, username, parameters):
player = parse_player(self.factory, username)
l = player.player.location
locMsg = "Your location is <%d, %d, %d>" % l.pos.to_block()
yield locMsg
name = "getpos"
aliases = tuple()
usage = ""
class Nick(object):
"""
Set a player's nickname.
"""
implements(IChatCommand)
def __init__(self, factory):
self.factory = factory
def chat_command(self, username, parameters):
player = parse_player(self.factory, username)
if len(parameters) == 0:
return ("Usage: /nick <nickname>",)
else:
new = parameters[0]
if self.factory.set_username(player, new):
return ("Changed nickname from %s to %s" % (username, new),)
else:
return ("Couldn't change nickname!",)
name = "nick"
aliases = tuple()
usage = "<nickname>"
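Every command plugin above follows the same shape: a constructor that accepts the factory, a `chat_command` (or `console_command`) that returns or yields the lines to relay, and `name`/`aliases`/`usage` metadata. A minimal sketch of that shape, using a hypothetical `Echo` command and omitting the `zope.interface` declaration:

```python
class Echo(object):
    """Repeat a message back to the sender. Illustrative only; a real
    plugin would also declare implements(IChatCommand)."""

    def __init__(self, factory):
        self.factory = factory

    def chat_command(self, username, parameters):
        # Commands return an iterable of lines to send back to chat.
        if not parameters:
            return ("Usage: /%s %s" % (self.name, self.usage),)
        return ("%s said: %s" % (username, " ".join(parameters)),)

    name = "echo"
    aliases = tuple()
    usage = "<message>"
```

The factory is unused here, but most real commands need it to reach the world, the player registry, or the broadcast machinery.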
repo: bravoserver/bravo
commit: f512b3a6187cc57571a6a381f27cbb7b710f128a
message: beta: Fake poll packet reciept.
diff --git a/bravo/beta/packets.py b/bravo/beta/packets.py
index a0b512c..86e4b32 100644
--- a/bravo/beta/packets.py
+++ b/bravo/beta/packets.py
@@ -448,616 +448,621 @@ packets = {
SBInt32("z"),
UBInt8("yaw"),
UBInt8("pitch"),
# 0 For none, unlike other packets
# -1 crashes clients
SBInt16("item"),
metadata,
),
0x16: Struct("collect",
UBInt32("eid"),
UBInt32("destination"),
),
# Object/Vehicle
0x17: Struct("object", # XXX: was 'vehicle'!
UBInt32("eid"),
Enum(UBInt8("type"), # See http://wiki.vg/Entities#Objects
boat=1,
item_stack=2,
minecart=10,
storage_cart=11,
powered_cart=12,
tnt=50,
ender_crystal=51,
arrow=60,
snowball=61,
egg=62,
thrown_enderpearl=65,
wither_skull=66,
falling_block=70,
frames=71,
ender_eye=72,
thrown_potion=73,
dragon_egg=74,
thrown_xp_bottle=75,
fishing_float=90,
),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
UBInt8("pitch"),
UBInt8("yaw"),
SBInt32("data"), # See http://www.wiki.vg/Object_Data
If(lambda context: context["data"] != 0,
Struct("speed",
SBInt16("x"),
SBInt16("y"),
SBInt16("z"),
)
),
),
0x18: Struct("mob",
UBInt32("eid"),
Enum(UBInt8("type"), **{
"Creeper": 50,
"Skeleton": 51,
"Spider": 52,
"GiantZombie": 53,
"Zombie": 54,
"Slime": 55,
"Ghast": 56,
"ZombiePig": 57,
"Enderman": 58,
"CaveSpider": 59,
"Silverfish": 60,
"Blaze": 61,
"MagmaCube": 62,
"EnderDragon": 63,
"Wither": 64,
"Bat": 65,
"Witch": 66,
"Pig": 90,
"Sheep": 91,
"Cow": 92,
"Chicken": 93,
"Squid": 94,
"Wolf": 95,
"Mooshroom": 96,
"Snowman": 97,
"Ocelot": 98,
"IronGolem": 99,
"Villager": 120
}),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
SBInt8("yaw"),
SBInt8("pitch"),
SBInt8("head_yaw"),
SBInt16("vx"),
SBInt16("vy"),
SBInt16("vz"),
metadata,
),
0x19: Struct("painting",
UBInt32("eid"),
AlphaString("title"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
face,
),
0x1a: Struct("experience",
UBInt32("eid"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
UBInt16("quantity"),
),
0x1b: Struct("steer",
BFloat32("first"),
BFloat32("second"),
Bool("third"),
Bool("fourth"),
),
0x1c: Struct("velocity",
UBInt32("eid"),
SBInt16("dx"),
SBInt16("dy"),
SBInt16("dz"),
),
0x1d: Struct("destroy",
UBInt8("count"),
MetaArray(lambda context: context["count"], UBInt32("eid")),
),
0x1e: Struct("create",
UBInt32("eid"),
),
0x1f: Struct("entity-position",
UBInt32("eid"),
SBInt8("dx"),
SBInt8("dy"),
SBInt8("dz")
),
0x20: Struct("entity-orientation",
UBInt32("eid"),
UBInt8("yaw"),
UBInt8("pitch")
),
0x21: Struct("entity-location",
UBInt32("eid"),
SBInt8("dx"),
SBInt8("dy"),
SBInt8("dz"),
UBInt8("yaw"),
UBInt8("pitch")
),
0x22: Struct("teleport",
UBInt32("eid"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
UBInt8("yaw"),
UBInt8("pitch"),
),
0x23: Struct("entity-head",
UBInt32("eid"),
UBInt8("yaw"),
),
0x26: Struct("status",
UBInt32("eid"),
Enum(UBInt8("status"),
damaged=2,
killed=3,
taming=6,
tamed=7,
drying=8,
eating=9,
sheep_eat=10,
golem_rose=11,
heart_particle=12,
angry_particle=13,
happy_particle=14,
magic_particle=15,
shaking=16,
firework=17,
),
),
0x27: Struct("attach",
UBInt32("eid"),
# XXX -1 for detaching
UBInt32("vid"),
UBInt8("unknown"),
),
0x28: Struct("metadata",
UBInt32("eid"),
metadata,
),
0x29: Struct("effect",
UBInt32("eid"),
effect,
UBInt8("amount"),
UBInt16("duration"),
),
0x2a: Struct("uneffect",
UBInt32("eid"),
effect,
),
0x2b: Struct("levelup",
BFloat32("current"),
UBInt16("level"),
UBInt16("total"),
),
# XXX 0x2c, server to client, needs to be implemented, needs special
# UUID-packing techniques
0x33: Struct("chunk",
SBInt32("x"),
SBInt32("z"),
Bool("continuous"),
UBInt16("primary"),
UBInt16("add"),
PascalString("data", length_field=UBInt32("length"), encoding="zlib"),
),
0x34: Struct("batch",
SBInt32("x"),
SBInt32("z"),
UBInt16("count"),
PascalString("data", length_field=UBInt32("length")),
),
0x35: Struct("block",
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
UBInt16("type"),
UBInt8("meta"),
),
# XXX This covers general tile actions, not just note blocks.
# TODO: Needs work
0x36: Struct("block-action",
SBInt32("x"),
SBInt16("y"),
SBInt32("z"),
UBInt8("byte1"),
UBInt8("byte2"),
UBInt16("blockid"),
),
0x37: Struct("block-break-anim",
UBInt32("eid"),
UBInt32("x"),
UBInt32("y"),
UBInt32("z"),
UBInt8("stage"),
),
# XXX Server -> Client. Use 0x33 instead.
0x38: Struct("bulk-chunk",
UBInt16("count"),
UBInt32("length"),
UBInt8("sky_light"),
MetaField("data", lambda ctx: ctx["length"]),
MetaArray(lambda context: context["count"],
Struct("metadata",
UBInt32("chunk_x"),
UBInt32("chunk_z"),
UBInt16("bitmap_primary"),
UBInt16("bitmap_secondary"),
)
)
),
# TODO: Needs work?
0x3c: Struct("explosion",
BFloat64("x"),
BFloat64("y"),
BFloat64("z"),
BFloat32("radius"),
UBInt32("count"),
MetaField("blocks", lambda context: context["count"] * 3),
BFloat32("motionx"),
BFloat32("motiony"),
BFloat32("motionz"),
),
0x3d: Struct("sound",
Enum(UBInt32("sid"),
click2=1000,
click1=1001,
bow_fire=1002,
door_toggle=1003,
extinguish=1004,
record_play=1005,
charge=1007,
fireball=1008,
zombie_wood=1010,
zombie_metal=1011,
zombie_break=1012,
wither=1013,
smoke=2000,
block_break=2001,
splash_potion=2002,
ender_eye=2003,
blaze=2004,
),
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
UBInt32("data"),
Bool("volume-mod"),
),
0x3e: Struct("named-sound",
AlphaString("name"),
UBInt32("x"),
UBInt32("y"),
UBInt32("z"),
BFloat32("volume"),
UBInt8("pitch"),
),
0x3f: Struct("particle",
AlphaString("name"),
BFloat32("x"),
BFloat32("y"),
BFloat32("z"),
BFloat32("x_offset"),
BFloat32("y_offset"),
BFloat32("z_offset"),
BFloat32("speed"),
UBInt32("count"),
),
0x46: Struct("state",
Enum(UBInt8("state"),
bad_bed=0,
start_rain=1,
stop_rain=2,
mode_change=3,
run_credits=4,
),
mode,
),
0x47: Struct("thunderbolt",
UBInt32("eid"),
UBInt8("gid"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
),
0x64: Struct("window-open",
UBInt8("wid"),
Enum(UBInt8("type"),
chest=0,
workbench=1,
furnace=2,
dispenser=3,
enchatment_table=4,
brewing_stand=5,
npc_trade=6,
beacon=7,
anvil=8,
hopper=9,
),
AlphaString("title"),
UBInt8("slots"),
UBInt8("use_title"),
# XXX iff type == 0xb (currently unknown) write an extra secret int
# here. WTF?
),
0x65: Struct("window-close",
UBInt8("wid"),
),
0x66: Struct("window-action",
UBInt8("wid"),
UBInt16("slot"),
UBInt8("button"),
UBInt16("token"),
UBInt8("shift"), # TODO: rename to 'mode'
Embed(items),
),
0x67: Struct("window-slot",
UBInt8("wid"),
UBInt16("slot"),
Embed(items),
),
0x68: Struct("inventory",
UBInt8("wid"),
UBInt16("length"),
MetaArray(lambda context: context["length"], items),
),
0x69: Struct("window-progress",
UBInt8("wid"),
UBInt16("bar"),
UBInt16("progress"),
),
0x6a: Struct("window-token",
UBInt8("wid"),
UBInt16("token"),
Bool("acknowledged"),
),
0x6b: Struct("window-creative",
UBInt16("slot"),
Embed(items),
),
0x6c: Struct("enchant",
UBInt8("wid"),
UBInt8("enchantment"),
),
0x82: Struct("sign",
SBInt32("x"),
UBInt16("y"),
SBInt32("z"),
AlphaString("line1"),
AlphaString("line2"),
AlphaString("line3"),
AlphaString("line4"),
),
0x83: Struct("map",
UBInt16("type"),
UBInt16("itemid"),
PascalString("data", length_field=UBInt16("length")),
),
0x84: Struct("tile-update",
SBInt32("x"),
UBInt16("y"),
SBInt32("z"),
UBInt8("action"),
PascalString("nbt_data", length_field=UBInt16("length")), # gzipped
),
0x85: Struct("0x85",
UBInt8("first"),
UBInt32("second"),
UBInt32("third"),
UBInt32("fourth"),
),
0xc8: Struct("statistics",
UBInt32("sid"), # XXX I should be an Enum!
UBInt32("count"),
),
0xc9: Struct("players",
AlphaString("name"),
Bool("online"),
UBInt16("ping"),
),
0xca: Struct("abilities",
UBInt8("flags"),
BFloat32("fly-speed"),
BFloat32("walk-speed"),
),
0xcb: Struct("tab",
AlphaString("autocomplete"),
),
0xcc: Struct("settings",
AlphaString("locale"),
UBInt8("distance"),
UBInt8("chat"),
difficulty,
Bool("cape"),
),
0xcd: Struct("statuses",
UBInt8("payload")
),
0xce: Struct("score_item",
AlphaString("name"),
AlphaString("value"),
Enum(UBInt8("action"),
create=0,
remove=1,
update=2,
),
),
0xcf: Struct("score_update",
AlphaString("item_name"),
UBInt8("remove"),
If(lambda context: context["remove"] == 0,
Embed(Struct("information",
AlphaString("score_name"),
UBInt32("value"),
))
),
),
0xd0: Struct("score_display",
Enum(UBInt8("position"),
as_list=0,
sidebar=1,
below_name=2,
),
AlphaString("score_name"),
),
0xd1: Struct("teams",
AlphaString("name"),
Enum(UBInt8("mode"),
team_created=0,
team_removed=1,
team_updates=2,
players_added=3,
players_removed=4,
),
If(lambda context: context["mode"] in ("team_created", "team_updates"),
Embed(Struct("team_info",
AlphaString("team_name"),
AlphaString("team_prefix"),
AlphaString("team_suffix"),
Enum(UBInt8("friendly_fire"),
off=0,
on=1,
invisibles=2,
),
))
),
If(lambda context: context["mode"] in ("team_created", "players_added", "players_removed"),
Embed(Struct("players_info",
UBInt16("count"),
MetaArray(lambda context: context["count"], AlphaString("player_names")),
))
),
),
0xfa: Struct("plugin-message",
AlphaString("channel"),
PascalString("data", length_field=UBInt16("length")),
),
0xfc: Struct("key-response",
PascalString("key", length_field=UBInt16("key-len")),
PascalString("token", length_field=UBInt16("token-len")),
),
0xfd: Struct("key-request",
AlphaString("server"),
PascalString("key", length_field=UBInt16("key-len")),
PascalString("token", length_field=UBInt16("token-len")),
),
- # XXX changed structure, looks like more weird-ass
- # pseudo-backwards-compatible imperative protocol bullshit
- 0xfe: Struct("poll", UBInt8("unused")),
+ 0xfe: Struct("poll",
+ Magic("\x01" # Poll packet constant
+ "\xfa" # Followed by a plugin message
+ "\x00\x0b" # Length of plugin channel name
+ + u"MC|PingHost".encode("ucs2") # Plugin channel name
+ ),
+ PascalString("data", length_field=UBInt16("length")),
+ ),
# TODO: rename to 'kick'
0xff: Struct("error", AlphaString("message")),
}
packet_stream = Struct("packet_stream",
OptionalGreedyRange(
Struct("full_packet",
UBInt8("header"),
Switch("payload", lambda context: context["header"], packets),
),
),
OptionalGreedyRange(
UBInt8("leftovers"),
),
)
def parse_packets(bytestream):
"""
Opportunistically parse out as many packets as possible from a raw
bytestream.
Returns a tuple containing a list of unpacked packet containers, and any
leftover unparseable bytes.
"""
container = packet_stream.parse(bytestream)
l = [(i.header, i.payload) for i in container.full_packet]
leftovers = "".join(chr(i) for i in container.leftovers)
if DUMP_ALL_PACKETS:
for header, payload in l:
print "Parsed packet 0x%.2x" % header
print payload
return l, leftovers
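The opportunistic strategy above (consume as many complete packets as the buffer allows, hand the rest back as leftovers) can be sketched without the `construct` library. The framing here is illustrative, not Bravo's actual wire format: a one-byte header followed by a two-byte big-endian payload length.

```python
import struct

def parse_frames(buf):
    """Opportunistically split a bytestream into (header, payload) frames.

    Assumed framing for illustration: 1-byte header, 2-byte big-endian
    length, then that many payload bytes. Returns the list of complete
    frames and any trailing incomplete bytes.
    """
    frames = []
    while len(buf) >= 3:
        header, length = struct.unpack(">BH", buf[:3])
        if len(buf) < 3 + length:
            # Incomplete frame; leave it in the buffer for next time.
            break
        frames.append((header, buf[3:3 + length]))
        buf = buf[3 + length:]
    return frames, buf
```

The caller keeps the leftovers and prepends them to the next chunk of received data, exactly as `dataReceived` does with `self.buf`.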
incremental_packet_stream = Struct("incremental_packet_stream",
Struct("full_packet",
UBInt8("header"),
Switch("payload", lambda context: context["header"], packets),
),
OptionalGreedyRange(
UBInt8("leftovers"),
),
)
def parse_packets_incrementally(bytestream):
"""
Parse out packets one-by-one, yielding a tuple of packet header and packet
payload.
This function returns a generator.
This function will yield all valid packets in the bytestream up to the
first invalid packet.
:returns: a generator yielding tuples of headers and payloads
"""
while bytestream:
parsed = incremental_packet_stream.parse(bytestream)
header = parsed.full_packet.header
payload = parsed.full_packet.payload
bytestream = "".join(chr(i) for i in parsed.leftovers)
yield header, payload
packets_by_name = dict((v.name, k) for (k, v) in packets.iteritems())
def make_packet(packet, *args, **kwargs):
"""
Constructs a packet bytestream from a packet header and payload.
The payload should be passed as keyword arguments. Additional containers
or dictionaries to be added to the payload may be passed positionally, as
well.
"""
if packet not in packets_by_name:
print "Couldn't find packet name %s!" % packet
return ""
header = packets_by_name[packet]
for arg in args:
kwargs.update(dict(arg))
container = Container(**kwargs)
if DUMP_ALL_PACKETS:
print "Making packet <%s> (0x%.2x)" % (packet, header)
print container
payload = packets[header].build(container)
return chr(header) + payload
def make_error_packet(message):
"""
Convenience method to generate an error packet bytestream.
"""
return make_packet("error", message=message)
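The name-to-header lookup that `make_packet` performs can be sketched with plain `struct`; the `FORMATS` registry and its field layouts below are illustrative stand-ins, not Bravo's actual packet formats:

```python
import struct

# Hypothetical registry mirroring packets_by_name: name -> (header, layout).
FORMATS = {
    "ping": (0x00, ">I"),  # a single 4-byte timestamp, for illustration
}

def build_packet(name, *fields):
    """Build a bytestream of one header byte plus a struct-packed payload."""
    if name not in FORMATS:
        raise KeyError("unknown packet %s" % name)
    header, fmt = FORMATS[name]
    return bytes([header]) + struct.pack(fmt, *fields)
```

As in `make_packet`, the header byte is prepended to the serialized payload so the receiving side can dispatch on it.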
diff --git a/bravo/beta/protocol.py b/bravo/beta/protocol.py
index e139556..584a9d5 100644
--- a/bravo/beta/protocol.py
+++ b/bravo/beta/protocol.py
@@ -1,1045 +1,1047 @@
# vim: set fileencoding=utf8 :
from itertools import product, chain
from time import time
from urlparse import urlunparse
from twisted.internet import reactor
from twisted.internet.defer import (DeferredList, inlineCallbacks,
maybeDeferred, succeed)
from twisted.internet.protocol import Protocol, connectionDone
from twisted.internet.task import cooperate, deferLater, LoopingCall
from twisted.internet.task import TaskDone, TaskFailed
from twisted.protocols.policies import TimeoutMixin
from twisted.python import log
from twisted.web.client import getPage
from bravo import version
from bravo.beta.structures import BuildData, Settings
from bravo.blocks import blocks, items
from bravo.chunk import CHUNK_HEIGHT
from bravo.entity import Sign
from bravo.errors import BetaClientError, BuildError
from bravo.ibravo import (IChatCommand, IPreBuildHook, IPostBuildHook,
IWindowOpenHook, IWindowClickHook, IWindowCloseHook,
IPreDigHook, IDigHook, ISignHook, IUseHook)
from bravo.infini.factory import InfiniClientFactory
from bravo.inventory.windows import InventoryWindow
from bravo.location import Location, Orientation, Position
from bravo.motd import get_motd
from bravo.beta.packets import parse_packets, make_packet, make_error_packet
from bravo.plugin import retrieve_plugins
from bravo.policy.dig import dig_policies
from bravo.utilities.coords import adjust_coords_for_face, split_coords
from bravo.utilities.chat import complete, username_alternatives
from bravo.utilities.maths import circling, clamp, sorted_by_distance
from bravo.utilities.temporal import timestamp_from_clock
# States of the protocol.
(STATE_UNAUTHENTICATED, STATE_AUTHENTICATED, STATE_LOCATED) = range(3)
-SUPPORTED_PROTOCOL = 74
+SUPPORTED_PROTOCOL = 78
class BetaServerProtocol(object, Protocol, TimeoutMixin):
"""
The Minecraft Alpha/Beta server protocol.
This class is mostly designed to be a skeleton for featureful clients. It
tries hard to not step on the toes of potential subclasses.
"""
excess = ""
packet = None
state = STATE_UNAUTHENTICATED
buf = ""
parser = None
handler = None
player = None
username = None
settings = Settings()
motd = "Bravo Generic Beta Server"
_health = 20
_latency = 0
def __init__(self):
self.chunks = dict()
self.windows = {}
self.wid = 1
self.location = Location()
self.handlers = {
0x00: self.ping,
0x02: self.handshake,
0x03: self.chat,
0x07: self.use,
0x09: self.respawn,
0x0a: self.grounded,
0x0b: self.position,
0x0c: self.orientation,
0x0d: self.location_packet,
0x0e: self.digging,
0x0f: self.build,
0x10: self.equip,
0x12: self.animate,
0x13: self.action,
0x15: self.pickup,
0x65: self.wclose,
0x66: self.waction,
0x6a: self.wacknowledge,
0x6b: self.wcreative,
0x82: self.sign,
0xca: self.client_settings,
0xcb: self.complete,
0xcc: self.settings_packet,
0xfe: self.poll,
0xff: self.quit,
}
self._ping_loop = LoopingCall(self.update_ping)
self.setTimeout(30)
# Low-level packet handlers
# Try not to hook these if possible, since they offer no convenient
# abstractions or protections.
def ping(self, container):
"""
Hook for ping packets.
By default, this hook will examine the timestamps on incoming pings,
and use them to estimate the current latency of the connected client.
"""
now = timestamp_from_clock(reactor)
then = container.pid
self.latency = now - then
def handshake(self, container):
"""
Hook for handshake packets.
Override this to customize how logins are handled. By default, this
method will only confirm that the negotiated wire protocol is the
correct version, copy data out of the packet and onto the protocol,
and then run the ``authenticated`` callback.
This method will call the ``pre_handshake`` method hook prior to
logging in the client.
"""
self.username = container.username
if container.protocol < SUPPORTED_PROTOCOL:
# Kick old clients.
self.error("This server doesn't support your ancient client.")
return
elif container.protocol > SUPPORTED_PROTOCOL:
# Kick new clients.
self.error("This server doesn't support your newfangled client.")
return
log.msg("Handshaking with client, protocol version %d" %
container.protocol)
if not self.pre_handshake():
log.msg("Pre-handshake hook failed; kicking client")
self.error("You failed the pre-handshake hook.")
return
players = min(self.factory.limitConnections, 20)
self.write_packet("login", eid=self.eid, leveltype="default",
mode=self.factory.mode,
dimension=self.factory.world.dimension,
difficulty="peaceful", unused=0, maxplayers=players)
self.authenticated()
def pre_handshake(self):
"""
Whether this client should be logged in.
"""
return True
def chat(self, container):
"""
Hook for chat packets.
"""
def use(self, container):
"""
Hook for use packets.
"""
def respawn(self, container):
"""
Hook for respawn packets.
"""
def grounded(self, container):
"""
Hook for grounded packets.
"""
self.location.grounded = bool(container.grounded)
def position(self, container):
"""
Hook for position packets.
"""
if self.state != STATE_LOCATED:
log.msg("Ignoring unlocated position!")
return
self.grounded(container.grounded)
old_position = self.location.pos
position = Position.from_player(container.position.x,
container.position.y, container.position.z)
altered = False
dx, dy, dz = old_position - position
if any(abs(d) >= 64 for d in (dx, dy, dz)):
# Whoa, slow down there, cowboy. You're moving too fast. We're
# gonna ignore this position change completely, because it's
# either bogus or ignoring a recent teleport.
altered = True
else:
self.location.pos = position
self.location.stance = container.position.stance
# Sanitize location. This handles safety boundaries, illegal stance,
# etc.
altered = self.location.clamp() or altered
# If, for any reason, our opinion on where the client should be
# located is different than theirs, force them to conform to our point
# of view.
if altered:
log.msg("Not updating bogus position!")
self.update_location()
# If our position actually changed, fire the position change hook.
if old_position != position:
self.position_changed()
def orientation(self, container):
"""
Hook for orientation packets.
"""
self.grounded(container.grounded)
old_orientation = self.location.ori
orientation = Orientation.from_degs(container.orientation.rotation,
container.orientation.pitch)
self.location.ori = orientation
if old_orientation != orientation:
self.orientation_changed()
def location_packet(self, container):
"""
Hook for location packets.
"""
self.position(container)
self.orientation(container)
def digging(self, container):
"""
Hook for digging packets.
"""
def build(self, container):
"""
Hook for build packets.
"""
def equip(self, container):
"""
Hook for equip packets.
"""
def pickup(self, container):
"""
Hook for pickup packets.
"""
def animate(self, container):
"""
Hook for animate packets.
"""
def action(self, container):
"""
Hook for action packets.
"""
def wclose(self, container):
"""
Hook for wclose packets.
"""
def waction(self, container):
"""
Hook for waction packets.
"""
def wacknowledge(self, container):
"""
Hook for wacknowledge packets.
"""
def wcreative(self, container):
"""
Hook for creative inventory action packets.
"""
def sign(self, container):
"""
Hook for sign packets.
"""
def client_settings(self, container):
"""
Hook for interaction setting packets.
"""
self.settings.update_interaction(container)
def complete(self, container):
"""
Hook for tab-completion packets.
"""
def settings_packet(self, container):
"""
Hook for presentation setting packets.
"""
self.settings.update_presentation(container)
def poll(self, container):
"""
Hook for poll packets.
By default, queries the parent factory for some data, and replies to
the requester in a specific format. The connection is then closed
at both ends. This functionality is used by Beta 1.8 clients to poll
servers for status.
"""
+ log.msg("Poll data: %r" % container.data)
+
players = unicode(len(self.factory.protocols))
max_players = unicode(self.factory.limitConnections or 1000000)
data = [
u"§1",
unicode(SUPPORTED_PROTOCOL),
u"Bravo %s" % version,
self.motd,
players,
max_players,
]
response = u"\u0000".join(data)
self.error(response)
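The reply assembled above is a NUL-joined field list led by the `§1` marker, carried in the kick/error packet. That string-building can be factored into a small helper; this is a sketch of the same assembly, not a Bravo API:

```python
def make_poll_response(protocol, server_version, motd, players, max_players):
    """Assemble a 1.8-era server-list ping payload: NUL-joined fields
    prefixed with the section-sign magic marker."""
    fields = [
        u"\u00a71",            # the "\xa71" marker the client expects
        str(protocol),
        server_version,
        motd,
        str(players),
        str(max_players),
    ]
    return u"\u0000".join(fields)
```

The result is then sent via the error packet, which doubles as the status reply for polling clients before the connection is dropped.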
def quit(self, container):
"""
Hook for quit packets.
By default, merely logs the quit message and drops the connection.
Even if the connection is not dropped, it will be lost anyway since
the client will close the connection. It's better to explicitly let it
go here than to have zombie protocols.
"""
log.msg("Client is quitting: %s" % container.message)
self.transport.loseConnection()
# Twisted-level data handlers and methods
# Please don't override these needlessly, as they are pretty solid and
# shouldn't need to be touched.
def dataReceived(self, data):
self.buf += data
packets, self.buf = parse_packets(self.buf)
if packets:
self.resetTimeout()
for header, payload in packets:
if header in self.handlers:
d = maybeDeferred(self.handlers[header], payload)
@d.addErrback
def eb(failure):
log.err("Error while handling packet 0x%.2x" % header)
log.err(failure)
return None
else:
log.err("Didn't handle parseable packet 0x%.2x!" % header)
log.err(payload)
def connectionLost(self, reason=connectionDone):
if self._ping_loop.running:
self._ping_loop.stop()
def timeoutConnection(self):
self.error("Connection timed out")
# State-change callbacks
# Feel free to override these, but call them at some point.
def authenticated(self):
"""
Called when the client has successfully authenticated with the server.
"""
self.state = STATE_AUTHENTICATED
self._ping_loop.start(30)
# Event callbacks
# These are meant to be overridden.
def orientation_changed(self):
"""
Called when the client moves.
This callback is only for orientation, not position.
"""
pass
def position_changed(self):
"""
Called when the client moves.
This callback is only for position, not orientation.
"""
pass
# Convenience methods for consolidating code and expressing intent. I
# hear that these are occasionally useful. If a method in this section can
# be used, then *PLEASE* use it; not using it is the same as open-coding
# whatever you're doing, and only hurts in the long run.
def write_packet(self, header, **payload):
"""
Send a packet to the client.
"""
self.transport.write(make_packet(header, **payload))
def update_ping(self):
"""
Send a keepalive to the client.
"""
timestamp = timestamp_from_clock(reactor)
self.write_packet("ping", pid=timestamp)
def update_location(self):
"""
Send this client's location to the client.
Also let other clients know where this client is.
"""
# Don't bother trying to update things if the position's not yet
# synchronized. We could end up jettisoning them into the void.
if self.state != STATE_LOCATED:
return
x, y, z = self.location.pos
yaw, pitch = self.location.ori.to_fracs()
# Inform everybody of our new location.
packet = make_packet("teleport", eid=self.player.eid, x=x, y=y, z=z,
yaw=yaw, pitch=pitch)
self.factory.broadcast_for_others(packet, self)
# Inform ourselves of our new location.
packet = self.location.save_to_packet()
self.transport.write(packet)
def ascend(self, count):
"""
Ascend to the next XZ-plane.
``count`` is the number of ascensions to perform, and may be zero in
order to force this player to not be standing inside a block.
:returns: bool of whether the ascension was successful
This client must be located for this method to have any effect.
"""
if self.state != STATE_LOCATED:
return False
x, y, z = self.location.pos.to_block()
bigx, smallx, bigz, smallz = split_coords(x, z)
chunk = self.chunks[bigx, bigz]
column = [chunk.get_block((smallx, i, smallz))
for i in range(CHUNK_HEIGHT)]
# Special case: Ascend at most once, if the current spot isn't good.
if count == 0:
if (not column[y]) or column[y + 1] or column[y + 2]:
# Yeah, we're gonna need to move.
count += 1
else:
# Nope, we're fine where we are.
return True
for i in xrange(y, 255):
# Find the next spot above us which has a platform and two empty
# blocks of air.
if column[i] and (not column[i + 1]) and not column[i + 2]:
count -= 1
if not count:
break
else:
return False
self.location.pos = self.location.pos._replace(y=i * 32)
return True
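The column scan in `ascend` — find the next level with a solid platform and two blocks of air above it — can be sketched as a standalone function over a list of block IDs. This is a simplification for illustration; the real code queries the chunk per block and then rescales the result into Bravo's fixed-point coordinates:

```python
def find_standing_spot(column, y, count):
    """Scan upward from y for the count-th level offering a solid block
    with two empty blocks above it. Returns the level, or None."""
    for i in range(y, len(column) - 2):
        if column[i] and not column[i + 1] and not column[i + 2]:
            count -= 1
            if not count:
                return i
    return None
```

A zero in the column stands for air and anything truthy for a solid block, matching how the original code tests `column[i]` directly.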
def error(self, message):
"""
Error out.
This method sends ``message`` to the client as a descriptive error
message, then closes the connection.
"""
- log.msg("Error: %s" % message)
+ log.msg("Error: %r" % message)
self.transport.write(make_error_packet(message))
self.transport.loseConnection()
def play_notes(self, notes):
"""
Play some music.
Send a sequence of notes to the player. ``notes`` is a finite iterable
of pairs of instruments and pitches.
There is no way to time notes; if staggered playback is desired (and
it usually is!), then ``play_notes()`` should be called repeatedly at
the appropriate times.
This method turns the block beneath the player into a note block,
plays the requested notes through it, then turns it back into the
original block, all without actually modifying the chunk.
"""
x, y, z = self.location.pos.to_block()
if y:
y -= 1
bigx, smallx, bigz, smallz = split_coords(x, z)
if (bigx, bigz) not in self.chunks:
return
block = self.chunks[bigx, bigz].get_block((smallx, y, smallz))
meta = self.chunks[bigx, bigz].get_metadata((smallx, y, smallz))
self.write_packet("block", x=x, y=y, z=z,
type=blocks["note-block"].slot, meta=0)
for instrument, pitch in notes:
self.write_packet("note", x=x, y=y, z=z, pitch=pitch,
instrument=instrument)
self.write_packet("block", x=x, y=y, z=z, type=block, meta=meta)
# Automatic properties. Assigning to them causes the client to be notified
# of changes.
@property
def health(self):
return self._health
@health.setter
def health(self, value):
if not 0 <= value <= 20:
raise BetaClientError("Invalid health value %d" % value)
if self._health != value:
self.write_packet("health", hp=value, fp=0, saturation=0)
self._health = value
@property
def latency(self):
return self._latency
@latency.setter
def latency(self, value):
# Clamp the value to not exceed the boundaries of the packet. This is
# necessary even though, in theory, a ping this high is bad news.
value = clamp(value, 0, 65535)
# Check to see if this is a new value, and if so, alert everybody.
if self._latency != value:
packet = make_packet("players", name=self.username, online=True,
ping=value)
self.factory.broadcast(packet)
self._latency = value
class KickedProtocol(BetaServerProtocol):
"""
A very simple Beta protocol that helps enforce IP bans, Max Connections,
and Max Connections Per IP.
This protocol disconnects people as soon as they connect, with a helpful
message.
"""
def __init__(self, reason=None):
BetaServerProtocol.__init__(self)
if reason:
self.reason = reason
else:
self.reason = (
"This server doesn't like you very much."
" I don't like you very much either.")
def connectionMade(self):
self.error("%s" % self.reason)
class BetaProxyProtocol(BetaServerProtocol):
"""
A ``BetaServerProtocol`` that proxies for an InfiniCraft client.
"""
gateway = "server.wiki.vg"
def handshake(self, container):
self.write_packet("handshake", username="-")
def login(self, container):
self.username = container.username
self.write_packet("login", protocol=0, username="", seed=0,
dimension="earth")
url = urlunparse(("http", self.gateway, "/node/0/0/", None, None,
None))
d = getPage(url)
d.addCallback(self.start_proxy)
def start_proxy(self, response):
log.msg("Response: %s" % response)
log.msg("Starting proxy...")
address, port = response.split(":")
self.add_node(address, int(port))
def add_node(self, address, port):
"""
Add a new node to this client.
"""
from twisted.internet.endpoints import TCP4ClientEndpoint
log.msg("Adding node %s:%d" % (address, port))
endpoint = TCP4ClientEndpoint(reactor, address, port, 5)
self.node_factory = InfiniClientFactory()
d = endpoint.connect(self.node_factory)
d.addCallback(self.node_connected)
d.addErrback(self.node_connect_error)
def node_connected(self, protocol):
log.msg("Connected new node!")
def node_connect_error(self, reason):
log.err("Couldn't connect node!")
log.err(reason)
class BravoProtocol(BetaServerProtocol):
"""
A ``BetaServerProtocol`` suitable for serving MC worlds to clients.
This protocol really does need to be hooked up with a ``BravoFactory`` or
something very much like it.
"""
chunk_tasks = None
time_loop = None
eid = 0
last_dig = None
def __init__(self, config, name):
BetaServerProtocol.__init__(self)
self.config = config
self.config_name = "world %s" % name
# Retrieve the MOTD. Only needs to be done once.
self.motd = self.config.getdefault(self.config_name, "motd",
"BravoServer")
def register_hooks(self):
log.msg("Registering client hooks...")
plugin_types = {
"open_hooks": IWindowOpenHook,
"click_hooks": IWindowClickHook,
"close_hooks": IWindowCloseHook,
"pre_build_hooks": IPreBuildHook,
"post_build_hooks": IPostBuildHook,
"pre_dig_hooks": IPreDigHook,
"dig_hooks": IDigHook,
"sign_hooks": ISignHook,
"use_hooks": IUseHook,
}
for t in plugin_types:
setattr(self, t, getattr(self.factory, t))
log.msg("Registering policies...")
if self.factory.mode == "creative":
self.dig_policy = dig_policies["speedy"]
else:
self.dig_policy = dig_policies["notchy"]
log.msg("Registered client plugin hooks!")
def pre_handshake(self):
"""
Set up username and get going.
"""
if self.username in self.factory.protocols:
# This username's already taken; find a new one.
for name in username_alternatives(self.username):
if name not in self.factory.protocols:
self.username = name
break
else:
self.error("Your username is already taken.")
return False
return True
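`pre_handshake()` relies on `username_alternatives()` to propose fresh names when a username collides. The real generator lives elsewhere in Bravo; a hypothetical sketch that appends numeric suffixes shows the shape of the loop above:

```python
def username_alternatives(name, limit=10):
    # Propose fresh names by appending an increasing numeric suffix.
    # (Hypothetical sketch; the real generator may differ.)
    for i in range(1, limit + 1):
        yield "%s%d" % (name, i)

taken = set(["alice", "alice1"])     # usernames already connected
for candidate in username_alternatives("alice"):
    if candidate not in taken:
        break                        # candidate is now "alice2"
```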
@inlineCallbacks
def authenticated(self):
BetaServerProtocol.authenticated(self)
# Init player, and copy data into it.
self.player = yield self.factory.world.load_player(self.username)
self.player.eid = self.eid
self.location = self.player.location
# Init players' inventory window.
self.inventory = InventoryWindow(self.player.inventory)
# *Now* we are in our factory's list of protocols. Be aware.
self.factory.protocols[self.username] = self
# Announce our presence.
self.factory.chat("%s is joining the game..." % self.username)
packet = make_packet("players", name=self.username, online=True,
ping=0)
self.factory.broadcast(packet)
# Craft our avatar and send it to already-connected other players.
packet = make_packet("create", eid=self.player.eid)
packet += self.player.save_to_packet()
self.factory.broadcast_for_others(packet, self)
# And of course spawn all of those players' avatars in our client as
# well.
for protocol in self.factory.protocols.itervalues():
# Skip over ourselves; otherwise, the client tweaks out and
# usually either dies or locks up.
if protocol is self:
continue
self.write_packet("create", eid=protocol.player.eid)
packet = protocol.player.save_to_packet()
packet += protocol.player.save_equipment_to_packet()
self.transport.write(packet)
# Send spawn and inventory.
spawn = self.factory.world.level.spawn
packet = make_packet("spawn", x=spawn[0], y=spawn[1], z=spawn[2])
packet += self.inventory.save_to_packet()
self.transport.write(packet)
# TODO: Send Abilities (0xca)
# TODO: Update Health (0x08)
# TODO: Update Experience (0x2b)
# Send weather.
self.transport.write(self.factory.vane.make_packet())
self.send_initial_chunk_and_location()
self.time_loop = LoopingCall(self.update_time)
self.time_loop.start(10)
def orientation_changed(self):
# Bang your head!
yaw, pitch = self.location.ori.to_fracs()
packet = make_packet("entity-orientation", eid=self.player.eid,
yaw=yaw, pitch=pitch)
self.factory.broadcast_for_others(packet, self)
def position_changed(self):
# Send chunks.
self.update_chunks()
for entity in self.entities_near(2):
if entity.name != "Item":
continue
left = self.player.inventory.add(entity.item, entity.quantity)
if left != entity.quantity:
if left != 0:
# partial collect
entity.quantity = left
else:
packet = make_packet("collect", eid=entity.eid,
destination=self.player.eid)
packet += make_packet("destroy", count=1, eid=[entity.eid])
self.factory.broadcast(packet)
self.factory.destroy_entity(entity)
packet = self.inventory.save_to_packet()
self.transport.write(packet)
def entities_near(self, radius):
"""
Obtain the entities within a radius of this player.
Radius is measured in blocks.
"""
chunk_radius = int(radius // 16 + 1)
chunkx, chunkz = self.location.pos.to_chunk()
minx = chunkx - chunk_radius
maxx = chunkx + chunk_radius + 1
minz = chunkz - chunk_radius
maxz = chunkz + chunk_radius + 1
for x, z in product(xrange(minx, maxx), xrange(minz, maxz)):
if (x, z) not in self.chunks:
continue
chunk = self.chunks[x, z]
yieldables = [entity for entity in chunk.entities
if self.location.distance(entity.location) <= (radius * 32)]
for i in yieldables:
yield i
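The arithmetic above deserves a note: chunks are 16 blocks on a side and positions are kept in fixed-point units of 1/32 block, so a block radius turns into a chunk radius of `radius // 16 + 1` and a distance cutoff of `radius * 32`. A sketch of the chunk-window computation, assuming only the 16-block chunk size:

```python
def chunk_range(chunkx, chunkz, radius):
    # A radius in blocks spans at most radius // 16 + 1 chunks in each
    # direction, since a chunk is 16 blocks on a side.
    chunk_radius = int(radius // 16 + 1)
    return [(x, z)
            for x in range(chunkx - chunk_radius, chunkx + chunk_radius + 1)
            for z in range(chunkz - chunk_radius, chunkz + chunk_radius + 1)]

# Even a 2-block radius needs the 3x3 chunk square around the player,
# because nearby entities can sit just across a chunk border.
coords = chunk_range(0, 0, 2)
```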
def chat(self, container):
if container.message.startswith("/"):
commands = retrieve_plugins(IChatCommand, factory=self.factory)
# Register aliases.
for plugin in commands.values():
for alias in plugin.aliases:
commands[alias] = plugin
params = container.message[1:].split(" ")
command = params.pop(0).lower()
if command and command in commands:
def cb(iterable):
for line in iterable:
self.write_packet("chat", message=line)
def eb(error):
self.write_packet("chat", message="Error: %s" %
error.getErrorMessage())
d = maybeDeferred(commands[command].chat_command,
self.username, params)
d.addCallback(cb)
d.addErrback(eb)
else:
self.write_packet("chat",
message="Unknown command: %s" % command)
else:
# Send the message up to the factory to be chatified.
message = "<%s> %s" % (self.username, container.message)
self.factory.chat(message)
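The command dispatch above boils down to: strip the slash, split out the first word, and look it up in a table that also holds aliases. A stripped-down, Twisted-free sketch (the handlers here are hypothetical stand-ins for `IChatCommand` plugins):

```python
def dispatch(message, commands):
    # Strip the leading slash, split out the command word, look it up.
    params = message[1:].split(" ")
    command = params.pop(0).lower()
    if command and command in commands:
        return commands[command](params)
    return "Unknown command: %s" % command

commands = {"say": lambda params: " ".join(params)}
commands["s"] = commands["say"]      # an alias maps to the same handler

result = dispatch("/s hello world", commands)    # -> "hello world"
```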
def use(self, container):
"""
For each entity in proximity (4 blocks), check whether it is the
target of this packet, and call every hook that registered interest
in this entity type.
"""
nearby_players = self.factory.players_near(self.player, 4)
for entity in chain(self.entities_near(4), nearby_players):
if entity.eid == container.target:
for hook in self.use_hooks[entity.name]:
hook.use_hook(self.factory, self.player, entity,
container.button == 0)
break
@inlineCallbacks
def digging(self, container):
if container.x == -1 and container.z == -1 and container.y == 255:
# Lala-land dig packet. Discard it for now.
return
# Player drops currently holding item/block.
if (container.state == "dropped" and container.face == "-y" and
container.x == 0 and container.y == 0 and container.z == 0):
i = self.player.inventory
holding = i.holdables[self.player.equipped]
if holding:
primary, secondary, count = holding
if i.consume((primary, secondary), self.player.equipped):
dest = self.location.in_front_of(2)
coords = dest.pos._replace(y=dest.pos.y + 1)
self.factory.give(coords, (primary, secondary), 1)
# Re-send inventory.
packet = self.inventory.save_to_packet()
self.transport.write(packet)
# If no items in this slot are left, this player isn't
# holding an item anymore.
if i.holdables[self.player.equipped] is None:
packet = make_packet("entity-equipment",
eid=self.player.eid,
slot=0,
primary=65535,
count=1,
secondary=0
)
self.factory.broadcast_for_others(packet, self)
return
if container.state == "shooting":
self.shoot_arrow()
return
bigx, smallx, bigz, smallz = split_coords(container.x, container.z)
coords = smallx, container.y, smallz
try:
chunk = self.chunks[bigx, bigz]
except KeyError:
self.error("Couldn't dig in chunk (%d, %d)!" % (bigx, bigz))
return
block = chunk.get_block((smallx, container.y, smallz))
if container.state == "started":
# Run pre dig hooks
for hook in self.pre_dig_hooks:
cancel = yield maybeDeferred(hook.pre_dig_hook, self.player,
(container.x, container.y, container.z), block)
if cancel:
return
tool = self.player.inventory.holdables[self.player.equipped]
# Check to see whether we should break this block.
if self.dig_policy.is_1ko(block, tool):
self.run_dig_hooks(chunk, coords, blocks[block])
else:
# Set up a timer for breaking the block later.
dtime = time() + self.dig_policy.dig_time(block, tool)
self.last_dig = coords, block, dtime
elif container.state == "stopped":
# The client thinks it has broken a block. We shall see.
if not self.last_dig:
return
oldcoords, oldblock, dtime = self.last_dig
if oldcoords != coords or oldblock != block:
# Nope!
self.last_dig = None
return
dtime -= time()
# When enough time has elapsed, run the dig hooks.
d = deferLater(reactor, max(dtime, 0), self.run_dig_hooks, chunk,
coords, blocks[block])
d.addCallback(lambda none: setattr(self, "last_dig", None))
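The dig handshake above records `(coords, block, deadline)` when digging starts and only honors the client's "stopped" packet once the policy's dig time has really elapsed, so clients cannot break blocks early. A sketch of that bookkeeping, with the dig time as a plain number rather than a policy object:

```python
class DigTracker(object):
    # Mirrors the last_dig bookkeeping: remember when a dig may
    # legitimately finish, and reject "stopped" packets that don't match.
    def __init__(self, dig_time):
        self.dig_time = dig_time
        self.last_dig = None

    def start(self, coords, block, now):
        self.last_dig = (coords, block, now + self.dig_time)

    def remaining(self, coords, block, now):
        # Seconds still owed before the block may break, or None if the
        # "stopped" packet doesn't match the recorded dig.
        if self.last_dig is None:
            return None
        oldcoords, oldblock, deadline = self.last_dig
        if (oldcoords, oldblock) != (coords, block):
            self.last_dig = None
            return None
        return max(deadline - now, 0)

tracker = DigTracker(dig_time=0.75)
tracker.start((1, 64, 1), "stone", now=100.0)
owed = tracker.remaining((1, 64, 1), "stone", now=100.5)   # 0.25s left
```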
def run_dig_hooks(self, chunk, coords, block):
"""
Destroy a block and run the post-destroy dig hooks.
"""
x, y, z = coords
if block.breakable:
chunk.destroy(coords)
l = []
for hook in self.dig_hooks:
l.append(maybeDeferred(hook.dig_hook, chunk, x, y, z, block))
dl = DeferredList(l)
dl.addCallback(lambda none: self.factory.flush_chunk(chunk))
@inlineCallbacks
def build(self, container):
"""
Handle a build packet.
Several things must happen. First, the packet's contents are examined
to ensure that the packet is valid. Then we check whether the packet
opens a windowed object; if not, a build is run.
"""
# Is the target within our purview? We don't do a very strict
# containment check, but we *do* require that the chunk be loaded.
bigx, smallx, bigz, smallz = split_coords(container.x, container.z)
try:
chunk = self.chunks[bigx, bigz]
except KeyError:
self.error("Couldn't select in chunk (%d, %d)!" % (bigx, bigz))
return
target = blocks[chunk.get_block((smallx, container.y, smallz))]
# Attempt to open a window.
from bravo.policy.windows import window_for_block
window = window_for_block(target)
if window is not None:
# We have a window!
self.windows[self.wid] = window
identifier, title, slots = window.open()
self.write_packet("window-open", wid=self.wid, type=identifier,
title=title, slots=slots)
self.wid += 1
return
# Try to open it first
for hook in self.open_hooks:
window = yield maybeDeferred(hook.open_hook, self, container,
chunk.get_block((smallx, container.y, smallz)))
if window:
self.write_packet("window-open", wid=window.wid,
|
bravoserver/bravo
|
2e2b46c4d2c1e8bfad69955d40702007544f5a0c
|
bravo/world: Fix dictionary changing size during iteration.
|
diff --git a/bravo/world.py b/bravo/world.py
index baff818..0350d5a 100644
--- a/bravo/world.py
+++ b/bravo/world.py
@@ -1,708 +1,711 @@
from array import array
from functools import wraps
from itertools import imap, product
import random
import sys
from twisted.internet import reactor
from twisted.internet.defer import (inlineCallbacks, maybeDeferred,
returnValue, succeed)
from twisted.internet.task import LoopingCall, coiterate
from twisted.python import log
from bravo.beta.structures import Level
from bravo.chunk import Chunk, CHUNK_HEIGHT
from bravo.entity import Player, Furnace
from bravo.errors import (ChunkNotLoaded, SerializerReadException,
SerializerWriteException)
from bravo.ibravo import ISerializer
from bravo.plugin import retrieve_named_plugins
from bravo.utilities.coords import split_coords
from bravo.utilities.temporal import PendingEvent
from bravo.mobmanager import MobManager
class ChunkCache(object):
"""
A cache which holds references to all chunks which should be held in
memory.
This cache remembers chunks that were recently used, that are in permanent
residency, and so forth. Its exact caching algorithm is currently null.
When chunks dirty themselves, they are expected to notify the cache, which
will then schedule an eviction for the chunk.
"""
def __init__(self):
self._perm = {}
self._dirty = {}
def pin(self, chunk):
self._perm[chunk.x, chunk.z] = chunk
def unpin(self, chunk):
del self._perm[chunk.x, chunk.z]
def put(self, chunk):
# XXX expand caching strategy
pass
def get(self, coords):
if coords in self._perm:
return self._perm[coords]
# Returns None if not found!
return self._dirty.get(coords)
def cleaned(self, chunk):
del self._dirty[chunk.x, chunk.z]
def dirtied(self, chunk):
self._dirty[chunk.x, chunk.z] = chunk
def iterperm(self):
return self._perm.itervalues()
def iterdirty(self):
return self._dirty.itervalues()
class ImpossibleCoordinates(Exception):
"""
A coordinate could not ever be valid.
"""
def coords_to_chunk(f):
"""
Automatically look up the chunk for the coordinates, and convert world
coordinates to chunk coordinates.
"""
@wraps(f)
def decorated(self, coords, *args, **kwargs):
x, y, z = coords
# Fail early if Y is OOB.
if not 0 <= y < CHUNK_HEIGHT:
raise ImpossibleCoordinates("Y value %d is impossible" % y)
bigx, smallx, bigz, smallz = split_coords(x, z)
d = self.request_chunk(bigx, bigz)
@d.addCallback
def cb(chunk):
return f(self, chunk, (smallx, y, smallz), *args, **kwargs)
return d
return decorated
def sync_coords_to_chunk(f):
"""
Either get a chunk for the coordinates, or raise an exception.
"""
@wraps(f)
def decorated(self, coords, *args, **kwargs):
x, y, z = coords
# Fail early if Y is OOB.
if not 0 <= y < CHUNK_HEIGHT:
raise ImpossibleCoordinates("Y value %d is impossible" % y)
bigx, smallx, bigz, smallz = split_coords(x, z)
bigcoords = bigx, bigz
chunk = self._cache.get(bigcoords)
if chunk is None:
raise ChunkNotLoaded("Chunk (%d, %d) isn't loaded" % bigcoords)
return f(self, chunk, (smallx, y, smallz), *args, **kwargs)
return decorated
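Both decorators funnel through `split_coords()`, which factors a world coordinate into a chunk index and an intra-chunk offset. Since chunks are 16 blocks on a side, floor division and modulus do the job; Python's floor semantics keep the offset in 0..15 even for negative coordinates. A sketch consistent with how the decorators use the result (the real helper lives in bravo.utilities.coords):

```python
def split_coords(x, z):
    # Chunk index is floor(x / 16); the offset is x mod 16, which Python
    # keeps in 0..15 even for negative world coordinates.
    return x // 16, x % 16, z // 16, z % 16

bigx, smallx, bigz, smallz = split_coords(-1, 33)
# Block x=-1 lives in chunk -1 at offset 15; z=33 in chunk 2 at offset 1.
```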
class World(object):
"""
Object representing a world on disk.
Worlds are composed of levels and chunks, each of which corresponds to
exactly one file on disk. Worlds also contain saved player data.
"""
factory = None
"""
The factory managing this world.
Worlds do not need to be owned by a factory, but will not callback to
surrounding objects without an owner.
"""
_season = None
"""
The current `ISeason`.
"""
saving = True
"""
Whether objects belonging to this world may be written out to disk.
"""
async = False
"""
Whether this world is using multiprocessing methods to generate geometry.
"""
dimension = "earth"
"""
The world dimension. Valid values are earth, sky, and nether.
"""
level = Level(seed=0, spawn=(0, 0, 0), time=0)
"""
The initial level data.
"""
_cache = None
"""
The chunk cache.
"""
def __init__(self, config, name):
"""
:Parameters:
name : str
The configuration key to use to look up configuration data.
"""
self.config = config
self.config_name = "world %s" % name
self._pending_chunks = dict()
@property
def season(self):
return self._season
@season.setter
def season(self, value):
if self._season != value:
self._season = value
if self._cache is not None:
# Issue 388: Apply the season to the permanent cache.
- coiterate(imap(value.transform, self._cache.iterperm()))
+ # Use a list so that we don't end up with indefinite amounts
+ # of work to do, and also so that we don't try to do work
+ # while the permanent cache is changing size.
+ coiterate(imap(value.transform, list(self._cache.iterperm())))
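The fix in this hunk guards against `iterperm()` handing back a live view of `_perm`: if a chunk is pinned or unpinned while `coiterate` is still walking the view, Python raises "dictionary changed size during iteration". A minimal reproduction of the failure mode and of the `list()` snapshot that avoids it:

```python
cache = {(0, 0): "chunk-a", (0, 1): "chunk-b"}

def transform_all(chunks):
    # Stand-in for coiterate(imap(transform, ...)): each step here also
    # mutates the cache, as pin() would while the season is changing.
    for key in chunks:
        cache[("pinned", key)] = "new chunk"

try:
    transform_all(iter(cache))    # live view: size changes mid-walk
    failed = False
except RuntimeError:              # "dictionary changed size during iteration"
    failed = True

transform_all(list(cache))        # snapshot first, as the fix does: safe
```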
def connect(self):
"""
Connect to the world.
"""
world_url = self.config.get(self.config_name, "url")
world_sf_name = self.config.get(self.config_name, "serializer")
# Get the current serializer list, and attempt to connect our
# serializer of choice to our resource.
# This could fail. Each of these lines (well, only the first and
# third) could raise a variety of exceptions. They should *all* be
# fatal.
serializers = retrieve_named_plugins(ISerializer, [world_sf_name])
self.serializer = serializers[0]
self.serializer.connect(world_url)
log.msg("World connected on %s, using serializer %s" %
(world_url, self.serializer.name))
def start(self):
"""
Start managing a world.
Connect to the world and turn on all of the timed actions which
continuously manage the world.
"""
self.connect()
# Create our cache.
self._cache = ChunkCache()
# Pick a random number for the seed. Use the configured value if one
# is present.
seed = random.randint(0, sys.maxint)
seed = self.config.getintdefault(self.config_name, "seed", seed)
self.level = self.level._replace(seed=seed)
# Check if we should offload chunk requests to ampoule.
if self.config.getbooleandefault("bravo", "ampoule", False):
try:
import ampoule
if ampoule:
self.async = True
except ImportError:
pass
log.msg("World is %s" %
("read-write" if self.saving else "read-only"))
log.msg("Using Ampoule: %s" % self.async)
# First, try loading the level, to see if there's any data out there
# which we can use. If not, don't worry about it.
d = maybeDeferred(self.serializer.load_level)
@d.addCallback
def cb(level):
self.level = level
log.msg("Loaded level data!")
@d.addErrback
def sre(failure):
failure.trap(SerializerReadException)
log.msg("Had issues loading level data, continuing anyway...")
# And now save our level.
if self.saving:
self.serializer.save_level(self.level)
# Start up the permanent cache.
# has_option() is not exactly desirable, but it's appropriate here
# because we don't want to take any action if the key is unset.
if self.config.has_option(self.config_name, "perm_cache"):
cache_level = self.config.getint(self.config_name, "perm_cache")
self.enable_cache(cache_level)
self.chunk_management_loop = LoopingCall(self.flush_chunk)
self.chunk_management_loop.start(1)
# XXX Put this in init or here?
self.mob_manager = MobManager()
# XXX Put this in the manager's constructor?
self.mob_manager.world = self
@inlineCallbacks
def stop(self):
"""
Stop managing the world.
This can be a time-consuming, blocking operation, while the world's
data is serialized.
Note to callers: If you want the world time to be accurate, don't
forget to write it back before calling this method!
:returns: A ``Deferred`` that fires after the world has stopped.
"""
self.chunk_management_loop.stop()
# Flush all dirty chunks to disk. Don't bother cleaning them off.
for chunk in self._cache.iterdirty():
yield self.save_chunk(chunk)
# Destroy the cache.
self._cache = None
# Save the level data.
yield maybeDeferred(self.serializer.save_level, self.level)
def enable_cache(self, size):
"""
Set the permanent cache size.
Changing the size of the cache sets off a series of events which will
empty or fill the cache to make it the proper size.
For reference, 3 is a large-enough size to completely satisfy the
Notchian client's login demands. 10 is enough to completely fill the
Notchian client's chunk buffer.
:param int size: The taxicab radius of the cache, in chunks
:returns: A ``Deferred`` which will fire when the cache has been
adjusted.
"""
log.msg("Setting cache size to %d, please hold..." % size)
assign = self._cache.pin
def worker(x, z):
log.msg("Adding %d, %d to cache..." % (x, z))
return self.request_chunk(x, z).addCallback(assign)
x = self.level.spawn[0] // 16
z = self.level.spawn[2] // 16
rx = xrange(x - size, x + size)
rz = xrange(z - size, z + size)
work = (worker(x, z) for x, z in product(rx, rz))
d = coiterate(work)
@d.addCallback
def notify(none):
log.msg("Cache size is now %d!" % size)
return d
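One wrinkle in `enable_cache()`: the `xrange(x - size, x + size)` bounds make the pinned square `2 * size` chunks on a side (the upper bound is exclusive), so a cache level of 3 pins 36 chunks, not 49. A quick check of that arithmetic:

```python
from itertools import product

def pinned_chunks(x, z, size):
    # Mirror the ranges used by enable_cache(): upper bound exclusive.
    rx = range(x - size, x + size)
    rz = range(z - size, z + size)
    return list(product(rx, rz))

count = len(pinned_chunks(0, 0, 3))    # a 6x6 square: 36 chunks
```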
def flush_chunk(self):
"""
Flush a dirty chunk.
This method will always block when there are dirty chunks.
"""
for chunk in self._cache.iterdirty():
# Save a single chunk, and add a callback to remove it from the
# cache when it's been cleaned.
d = self.save_chunk(chunk)
d.addCallback(self._cache.cleaned)
break
def save_off(self):
"""
Disable saving to disk.
This is useful for accessing the world on disk without Bravo
interfering, e.g. for backing up the world.
"""
if not self.saving:
return
self.chunk_management_loop.stop()
self.saving = False
def save_on(self):
"""
Enable saving to disk.
"""
if self.saving:
return
self.chunk_management_loop.start(1)
self.saving = True
def postprocess_chunk(self, chunk):
"""
Do a series of final steps to bring a chunk into the world.
This method might be called multiple times on a chunk, but it should
not be harmful to do so.
"""
# Apply the current season to the chunk.
if self.season:
self.season.transform(chunk)
# Since this chunk hasn't been given to any player yet, there's no
# conceivable way that any meaningful damage has been accumulated;
# anybody loading any part of this chunk will want the entire thing.
# Thus, it should start out undamaged.
chunk.clear_damage()
# Skip some of the spendier scans if we have no factory; for example,
# if we are generating chunks offline.
if not self.factory:
return chunk
# XXX slightly icky, print statements are bad
# Register the chunk's entities with our parent factory.
for entity in chunk.entities:
if hasattr(entity, 'loop'):
print "Started mob!"
self.mob_manager.start_mob(entity)
else:
print "I have no loop"
self.factory.register_entity(entity)
# XXX why is this for furnaces only? :T
# Scan the chunk for burning furnaces
for coords, tile in chunk.tiles.iteritems():
# If the furnace was saved while burning ...
if type(tile) == Furnace and tile.burntime != 0:
x, y, z = coords
coords = chunk.x, x, chunk.z, z, y
# ... start its burning loop
reactor.callLater(2, tile.changed, self.factory, coords)
# Return the chunk, in case we are in a Deferred chain.
return chunk
@inlineCallbacks
def request_chunk(self, x, z):
"""
Request a ``Chunk`` to be delivered later.
:returns: ``Deferred`` that will be called with the ``Chunk``
"""
# First, try the cache.
cached = self._cache.get((x, z))
if cached is not None:
returnValue(cached)
# Is it pending?
if (x, z) in self._pending_chunks:
# Rig up another Deferred and wrap it up in a to-go box.
retval = yield self._pending_chunks[x, z].deferred()
returnValue(retval)
# Create a new chunk object, since the cache turned up empty.
try:
chunk = yield maybeDeferred(self.serializer.load_chunk, x, z)
except SerializerReadException:
# Looks like the chunk wasn't already on disk. Guess we're gonna
# need to keep going.
chunk = Chunk(x, z)
# Add in our magic dirtiness hook so that the cache can be aware of
# chunks who have been...naughty.
chunk.dirtied = self._cache.dirtied
if chunk.dirty:
# The chunk was already dirty!? Oh, naughty indeed!
self._cache.dirtied(chunk)
if chunk.populated:
self._cache.put(chunk)
self.postprocess_chunk(chunk)
if self.factory:
self.factory.scan_chunk(chunk)
returnValue(chunk)
if self.async:
from ampoule import deferToAMPProcess
from bravo.remote import MakeChunk
generators = [plugin.name for plugin in self.pipeline]
d = deferToAMPProcess(MakeChunk, x=x, z=z, seed=self.level.seed,
generators=generators)
# Get chunk data into our chunk object.
def fill_chunk(kwargs):
chunk.blocks = array("B")
chunk.blocks.fromstring(kwargs["blocks"])
chunk.heightmap = array("B")
chunk.heightmap.fromstring(kwargs["heightmap"])
chunk.metadata = array("B")
chunk.metadata.fromstring(kwargs["metadata"])
chunk.skylight = array("B")
chunk.skylight.fromstring(kwargs["skylight"])
chunk.blocklight = array("B")
chunk.blocklight.fromstring(kwargs["blocklight"])
return chunk
d.addCallback(fill_chunk)
else:
# Populate the chunk the slow way. :c
for stage in self.pipeline:
stage.populate(chunk, self.level.seed)
chunk.regenerate()
d = succeed(chunk)
# Set up our event and generate our return-value Deferred. It has to
be done early because PendingEvents only fire exactly once and it
# might fire immediately in certain cases.
pe = PendingEvent()
# This one is for our return value.
retval = pe.deferred()
# This one is for scanning the chunk for automatons.
if self.factory:
pe.deferred().addCallback(self.factory.scan_chunk)
self._pending_chunks[x, z] = pe
def pp(chunk):
chunk.populated = True
chunk.dirty = True
self.postprocess_chunk(chunk)
self._cache.dirtied(chunk)
del self._pending_chunks[x, z]
return chunk
# Set up callbacks.
d.addCallback(pp)
d.chainDeferred(pe)
# Because multiple people might be attached to this callback, we're
# going to do something magical here. We will yield a forked version
# of our Deferred. This means that we will wait right here, for a
# long, long time, before actually returning with the chunk, *but*,
# when we actually finish, we'll be ready to return the chunk
# immediately. Our caller cannot possibly care because they only see a
# Deferred either way.
retval = yield retval
returnValue(retval)
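`request_chunk()` parks every concurrent request for the same chunk behind a single `PendingEvent`, handing each caller its own Deferred and firing them all once with the finished chunk. A Twisted-free sketch of that one-shot fan-out (the real class, in bravo.utilities.temporal, deals in Deferreds):

```python
class PendingEvent(object):
    # One-shot fan-out: many waiters may register, and all are fired
    # exactly once with the same result.
    def __init__(self):
        self._waiters = []

    def deferred(self, callback):
        # Register a callback to run when the event fires.
        self._waiters.append(callback)

    def fire(self, result):
        waiters, self._waiters = self._waiters, []
        for waiter in waiters:
            waiter(result)

pe = PendingEvent()
seen = []
pe.deferred(seen.append)          # first requester for the chunk
pe.deferred(seen.append)          # second requester, same chunk
pe.fire("chunk (0, 0)")           # both waiters receive the one result
```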
def save_chunk(self, chunk):
"""
Write a chunk to the serializer.
Note that this method does nothing when the given chunk is not dirty
or saving is off!
:returns: A ``Deferred`` which will fire after the chunk has been
saved with the chunk.
"""
if not chunk.dirty or not self.saving:
return succeed(chunk)
d = maybeDeferred(self.serializer.save_chunk, chunk)
@d.addCallback
def cb(none):
chunk.dirty = False
return chunk
@d.addErrback
def eb(failure):
failure.trap(SerializerWriteException)
log.msg("Couldn't write %r" % chunk)
return d
def load_player(self, username):
"""
Retrieve player data.
:returns: a ``Deferred`` that will be fired with a ``Player``
"""
# Get the player, possibly.
d = maybeDeferred(self.serializer.load_player, username)
@d.addErrback
def eb(failure):
failure.trap(SerializerReadException)
log.msg("Couldn't load player %r" % username)
# Make a player.
player = Player(username=username)
player.location.x = self.level.spawn[0]
player.location.y = self.level.spawn[1]
player.location.stance = self.level.spawn[1]
player.location.z = self.level.spawn[2]
return player
# This Deferred's good to go as-is.
return d
def save_player(self, username, player):
if self.saving:
self.serializer.save_player(player)
# World-level geometry access.
# These methods let external API users refrain from going through the
# standard motions of looking up and loading chunk information.
@coords_to_chunk
def get_block(self, chunk, coords):
"""
Get a block from an unknown chunk.
:returns: a ``Deferred`` with the requested value
"""
return chunk.get_block(coords)
@coords_to_chunk
def set_block(self, chunk, coords, value):
"""
Set a block in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.set_block(coords, value)
@coords_to_chunk
def get_metadata(self, chunk, coords):
"""
Get a block's metadata from an unknown chunk.
:returns: a ``Deferred`` with the requested value
"""
return chunk.get_metadata(coords)
@coords_to_chunk
def set_metadata(self, chunk, coords, value):
"""
Set a block's metadata in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.set_metadata(coords, value)
@coords_to_chunk
def destroy(self, chunk, coords):
"""
Destroy a block in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.destroy(coords)
@coords_to_chunk
def mark_dirty(self, chunk, coords):
"""
Mark an unknown chunk dirty.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.dirty = True
@sync_coords_to_chunk
def sync_get_block(self, chunk, coords):
"""
Get a block from an unknown chunk.
:returns: the requested block
"""
return chunk.get_block(coords)
@sync_coords_to_chunk
def sync_set_block(self, chunk, coords, value):
"""
Set a block in an unknown chunk.
:returns: None
"""
chunk.set_block(coords, value)
@sync_coords_to_chunk
def sync_get_metadata(self, chunk, coords):
"""
Get a block's metadata from an unknown chunk.
:returns: the requested metadata
"""
return chunk.get_metadata(coords)
@sync_coords_to_chunk
def sync_set_metadata(self, chunk, coords, value):
"""
Set a block's metadata in an unknown chunk.
:returns: None
"""
chunk.set_metadata(coords, value)
@sync_coords_to_chunk
def sync_destroy(self, chunk, coords):
"""
|
bravoserver/bravo
|
1225064eb671e3e5d61c6e9594ce06dc7b8029a1
|
beta/protocol: Clean up b.p.windows-using code.
|
diff --git a/bravo/beta/protocol.py b/bravo/beta/protocol.py
index 22172ef..e139556 100644
--- a/bravo/beta/protocol.py
+++ b/bravo/beta/protocol.py
@@ -516,988 +516,976 @@ class BetaServerProtocol(object, Protocol, TimeoutMixin):
count -= 1
if not count:
break
else:
return False
self.location.pos = self.location.pos._replace(y=i * 32)
return True
def error(self, message):
"""
Error out.
This method sends ``message`` to the client as a descriptive error
message, then closes the connection.
"""
log.msg("Error: %s" % message)
self.transport.write(make_error_packet(message))
self.transport.loseConnection()
def play_notes(self, notes):
"""
Play some music.
Send a sequence of notes to the player. ``notes`` is a finite iterable
of pairs of instruments and pitches.
There is no way to time notes; if staggered playback is desired (and
it usually is!), then ``play_notes()`` should be called repeatedly at
the appropriate times.
This method turns the block beneath the player into a note block,
plays the requested notes through it, then turns it back into the
original block, all without actually modifying the chunk.
"""
x, y, z = self.location.pos.to_block()
if y:
y -= 1
bigx, smallx, bigz, smallz = split_coords(x, z)
if (bigx, bigz) not in self.chunks:
return
block = self.chunks[bigx, bigz].get_block((smallx, y, smallz))
meta = self.chunks[bigx, bigz].get_metadata((smallx, y, smallz))
self.write_packet("block", x=x, y=y, z=z,
type=blocks["note-block"].slot, meta=0)
for instrument, pitch in notes:
self.write_packet("note", x=x, y=y, z=z, pitch=pitch,
instrument=instrument)
self.write_packet("block", x=x, y=y, z=z, type=block, meta=meta)
# Automatic properties. Assigning to them causes the client to be notified
# of changes.
@property
def health(self):
return self._health
@health.setter
def health(self, value):
if not 0 <= value <= 20:
raise BetaClientError("Invalid health value %d" % value)
if self._health != value:
self.write_packet("health", hp=value, fp=0, saturation=0)
self._health = value
@property
def latency(self):
return self._latency
@latency.setter
def latency(self, value):
# Clamp the value to not exceed the boundaries of the packet. This is
# necessary even though, in theory, a ping this high is bad news.
value = clamp(value, 0, 65535)
# Check to see if this is a new value, and if so, alert everybody.
if self._latency != value:
packet = make_packet("players", name=self.username, online=True,
ping=value)
self.factory.broadcast(packet)
self._latency = value
class KickedProtocol(BetaServerProtocol):
"""
A very simple Beta protocol that helps enforce IP bans, Max Connections,
and Max Connections Per IP.
This protocol disconnects people as soon as they connect, with a helpful
message.
"""
def __init__(self, reason=None):
BetaServerProtocol.__init__(self)
if reason:
self.reason = reason
else:
self.reason = (
"This server doesn't like you very much."
" I don't like you very much either.")
def connectionMade(self):
self.error("%s" % self.reason)
class BetaProxyProtocol(BetaServerProtocol):
"""
A ``BetaServerProtocol`` that proxies for an InfiniCraft client.
"""
gateway = "server.wiki.vg"
def handshake(self, container):
self.write_packet("handshake", username="-")
def login(self, container):
self.username = container.username
self.write_packet("login", protocol=0, username="", seed=0,
dimension="earth")
url = urlunparse(("http", self.gateway, "/node/0/0/", None, None,
None))
d = getPage(url)
d.addCallback(self.start_proxy)
def start_proxy(self, response):
log.msg("Response: %s" % response)
log.msg("Starting proxy...")
address, port = response.split(":")
self.add_node(address, int(port))
def add_node(self, address, port):
"""
Add a new node to this client.
"""
from twisted.internet.endpoints import TCP4ClientEndpoint
log.msg("Adding node %s:%d" % (address, port))
endpoint = TCP4ClientEndpoint(reactor, address, port, 5)
self.node_factory = InfiniClientFactory()
d = endpoint.connect(self.node_factory)
d.addCallback(self.node_connected)
d.addErrback(self.node_connect_error)
def node_connected(self, protocol):
log.msg("Connected new node!")
def node_connect_error(self, reason):
log.err("Couldn't connect node!")
log.err(reason)
class BravoProtocol(BetaServerProtocol):
"""
A ``BetaServerProtocol`` suitable for serving MC worlds to clients.
This protocol really does need to be hooked up with a ``BravoFactory`` or
something very much like it.
"""
chunk_tasks = None
time_loop = None
eid = 0
last_dig = None
def __init__(self, config, name):
BetaServerProtocol.__init__(self)
self.config = config
self.config_name = "world %s" % name
# Retrieve the MOTD. Only needs to be done once.
self.motd = self.config.getdefault(self.config_name, "motd",
"BravoServer")
def register_hooks(self):
log.msg("Registering client hooks...")
plugin_types = {
"open_hooks": IWindowOpenHook,
"click_hooks": IWindowClickHook,
"close_hooks": IWindowCloseHook,
"pre_build_hooks": IPreBuildHook,
"post_build_hooks": IPostBuildHook,
"pre_dig_hooks": IPreDigHook,
"dig_hooks": IDigHook,
"sign_hooks": ISignHook,
"use_hooks": IUseHook,
}
for t in plugin_types:
setattr(self, t, getattr(self.factory, t))
log.msg("Registering policies...")
if self.factory.mode == "creative":
self.dig_policy = dig_policies["speedy"]
else:
self.dig_policy = dig_policies["notchy"]
log.msg("Registered client plugin hooks!")
def pre_handshake(self):
"""
Set up username and get going.
"""
if self.username in self.factory.protocols:
# This username's already taken; find a new one.
for name in username_alternatives(self.username):
if name not in self.factory.protocols:
self.username = name
break
else:
self.error("Your username is already taken.")
return False
return True
@inlineCallbacks
def authenticated(self):
BetaServerProtocol.authenticated(self)
# Init player, and copy data into it.
self.player = yield self.factory.world.load_player(self.username)
self.player.eid = self.eid
self.location = self.player.location
# Init players' inventory window.
self.inventory = InventoryWindow(self.player.inventory)
# *Now* we are in our factory's list of protocols. Be aware.
self.factory.protocols[self.username] = self
# Announce our presence.
self.factory.chat("%s is joining the game..." % self.username)
packet = make_packet("players", name=self.username, online=True,
ping=0)
self.factory.broadcast(packet)
# Craft our avatar and send it to already-connected other players.
packet = make_packet("create", eid=self.player.eid)
packet += self.player.save_to_packet()
self.factory.broadcast_for_others(packet, self)
# And of course spawn all of those players' avatars in our client as
# well.
for protocol in self.factory.protocols.itervalues():
# Skip over ourselves; otherwise, the client tweaks out and
# usually either dies or locks up.
if protocol is self:
continue
self.write_packet("create", eid=protocol.player.eid)
packet = protocol.player.save_to_packet()
packet += protocol.player.save_equipment_to_packet()
self.transport.write(packet)
# Send spawn and inventory.
spawn = self.factory.world.level.spawn
packet = make_packet("spawn", x=spawn[0], y=spawn[1], z=spawn[2])
packet += self.inventory.save_to_packet()
self.transport.write(packet)
# TODO: Send Abilities (0xca)
# TODO: Update Health (0x08)
# TODO: Update Experience (0x2b)
# Send weather.
self.transport.write(self.factory.vane.make_packet())
self.send_initial_chunk_and_location()
self.time_loop = LoopingCall(self.update_time)
self.time_loop.start(10)
def orientation_changed(self):
# Bang your head!
yaw, pitch = self.location.ori.to_fracs()
packet = make_packet("entity-orientation", eid=self.player.eid,
yaw=yaw, pitch=pitch)
self.factory.broadcast_for_others(packet, self)
def position_changed(self):
# Send chunks.
self.update_chunks()
for entity in self.entities_near(2):
if entity.name != "Item":
continue
left = self.player.inventory.add(entity.item, entity.quantity)
if left != entity.quantity:
if left != 0:
# partial collect
entity.quantity = left
else:
packet = make_packet("collect", eid=entity.eid,
destination=self.player.eid)
packet += make_packet("destroy", count=1, eid=[entity.eid])
self.factory.broadcast(packet)
self.factory.destroy_entity(entity)
packet = self.inventory.save_to_packet()
self.transport.write(packet)
def entities_near(self, radius):
"""
Obtain the entities within a radius of this player.
Radius is measured in blocks.
"""
chunk_radius = int(radius // 16 + 1)
chunkx, chunkz = self.location.pos.to_chunk()
minx = chunkx - chunk_radius
maxx = chunkx + chunk_radius + 1
minz = chunkz - chunk_radius
maxz = chunkz + chunk_radius + 1
for x, z in product(xrange(minx, maxx), xrange(minz, maxz)):
if (x, z) not in self.chunks:
continue
chunk = self.chunks[x, z]
yieldables = [entity for entity in chunk.entities
if self.location.distance(entity.location) <= (radius * 32)]
for i in yieldables:
yield i
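The chunk arithmetic in ``entities_near`` can be sketched standalone. This is an illustrative reimplementation, not Bravo's API; ``chunks_covering`` is a hypothetical name, and it assumes the 16-block chunk size used throughout this file:

```python
from itertools import product

def chunks_covering(chunkx, chunkz, radius_blocks):
    # A radius measured in blocks can spill into at most this many
    # neighboring 16-block chunks on each side.
    chunk_radius = int(radius_blocks // 16 + 1)
    xs = range(chunkx - chunk_radius, chunkx + chunk_radius + 1)
    zs = range(chunkz - chunk_radius, chunkz + chunk_radius + 1)
    for x, z in product(xs, zs):
        yield x, z

# A 2-block radius around chunk (0, 0) needs only the surrounding 3x3 chunks.
nearby = list(chunks_covering(0, 0, 2))
```

The over-approximation is deliberate: it is cheaper to scan a few extra chunks than to clip the radius exactly, and the per-entity distance check above filters out the false positives.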
def chat(self, container):
if container.message.startswith("/"):
commands = retrieve_plugins(IChatCommand, factory=self.factory)
# Register aliases.
for plugin in commands.values():
for alias in plugin.aliases:
commands[alias] = plugin
params = container.message[1:].split(" ")
command = params.pop(0).lower()
if command and command in commands:
def cb(iterable):
for line in iterable:
self.write_packet("chat", message=line)
def eb(error):
self.write_packet("chat", message="Error: %s" %
error.getErrorMessage())
d = maybeDeferred(commands[command].chat_command,
self.username, params)
d.addCallback(cb)
d.addErrback(eb)
else:
self.write_packet("chat",
message="Unknown command: %s" % command)
else:
# Send the message up to the factory to be chatified.
message = "<%s> %s" % (self.username, container.message)
self.factory.chat(message)
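The slash-command dispatch above (alias registration, then lookup and invocation) can be sketched with a plain dict standing in for Bravo's plugin registry. The ``Help`` plugin here is a made-up stand-in, not one of Bravo's real commands:

```python
class Help(object):
    # Hypothetical plugin: real plugins implement IChatCommand.
    aliases = ["h", "?"]
    def chat_command(self, username, params):
        yield "Available commands: help"

def build_command_table(plugins):
    # Index each plugin by its lowercased name, then register aliases
    # pointing at the same plugin object.
    commands = dict((p.__class__.__name__.lower(), p) for p in plugins)
    for plugin in list(commands.values()):
        for alias in plugin.aliases:
            commands[alias] = plugin
    return commands

commands = build_command_table([Help()])
params = "/? me".split(" ")
command = params.pop(0)[1:].lower()  # strip the leading slash
lines = list(commands[command].chat_command("alice", params))
```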
def use(self, container):
"""
For each entity in proximity (4 blocks), check whether it is the target
of this packet, and call all hooks that registered interest in this
entity type.
"""
nearby_players = self.factory.players_near(self.player, 4)
for entity in chain(self.entities_near(4), nearby_players):
if entity.eid == container.target:
for hook in self.use_hooks[entity.name]:
hook.use_hook(self.factory, self.player, entity,
container.button == 0)
break
@inlineCallbacks
def digging(self, container):
if container.x == -1 and container.z == -1 and container.y == 255:
# Lala-land dig packet. Discard it for now.
return
# Player drops currently holding item/block.
if (container.state == "dropped" and container.face == "-y" and
container.x == 0 and container.y == 0 and container.z == 0):
i = self.player.inventory
holding = i.holdables[self.player.equipped]
if holding:
primary, secondary, count = holding
if i.consume((primary, secondary), self.player.equipped):
dest = self.location.in_front_of(2)
coords = dest.pos._replace(y=dest.pos.y + 1)
self.factory.give(coords, (primary, secondary), 1)
# Re-send inventory.
packet = self.inventory.save_to_packet()
self.transport.write(packet)
# If no items in this slot are left, this player isn't
# holding an item anymore.
if i.holdables[self.player.equipped] is None:
packet = make_packet("entity-equipment",
eid=self.player.eid,
slot=0,
primary=65535,
count=1,
secondary=0
)
self.factory.broadcast_for_others(packet, self)
return
if container.state == "shooting":
self.shoot_arrow()
return
bigx, smallx, bigz, smallz = split_coords(container.x, container.z)
coords = smallx, container.y, smallz
try:
chunk = self.chunks[bigx, bigz]
except KeyError:
self.error("Couldn't dig in chunk (%d, %d)!" % (bigx, bigz))
return
block = chunk.get_block((smallx, container.y, smallz))
if container.state == "started":
# Run pre dig hooks
for hook in self.pre_dig_hooks:
cancel = yield maybeDeferred(hook.pre_dig_hook, self.player,
(container.x, container.y, container.z), block)
if cancel:
return
tool = self.player.inventory.holdables[self.player.equipped]
# Check to see whether we should break this block.
if self.dig_policy.is_1ko(block, tool):
self.run_dig_hooks(chunk, coords, blocks[block])
else:
# Set up a timer for breaking the block later.
dtime = time() + self.dig_policy.dig_time(block, tool)
self.last_dig = coords, block, dtime
elif container.state == "stopped":
# The client thinks it has broken a block. We shall see.
if not self.last_dig:
return
oldcoords, oldblock, dtime = self.last_dig
if oldcoords != coords or oldblock != block:
# Nope!
self.last_dig = None
return
dtime -= time()
# When enough time has elapsed, run the dig hooks.
d = deferLater(reactor, max(dtime, 0), self.run_dig_hooks, chunk,
coords, blocks[block])
d.addCallback(lambda none: setattr(self, "last_dig", None))
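The dig timer in the handler above can be reduced to two pure steps: at "started" the server records when the dig *should* finish, and at "stopped" it only honors the client after any remaining time has elapsed. This sketch uses a plain ``dig_time`` argument in place of the dig policy's per-block/tool timing:

```python
from time import time

def start_dig(coords, block, dig_time):
    # Record what is being dug and the earliest legitimate finish time.
    return coords, block, time() + dig_time

def remaining_delay(last_dig, coords, block):
    """Seconds to wait before honoring the client's 'stopped' packet,
    or None if the stop doesn't match the recorded start."""
    oldcoords, oldblock, dtime = last_dig
    if oldcoords != coords or oldblock != block:
        return None
    return max(dtime - time(), 0)
```

Clamping with ``max(..., 0)`` mirrors the ``deferLater(reactor, max(dtime, 0), ...)`` call above: a client that took longer than required is served immediately, while one that stopped early is made to wait out the difference.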
def run_dig_hooks(self, chunk, coords, block):
"""
Destroy a block and run the post-destroy dig hooks.
"""
x, y, z = coords
if block.breakable:
chunk.destroy(coords)
l = []
for hook in self.dig_hooks:
l.append(maybeDeferred(hook.dig_hook, chunk, x, y, z, block))
dl = DeferredList(l)
dl.addCallback(lambda none: self.factory.flush_chunk(chunk))
@inlineCallbacks
def build(self, container):
"""
Handle a build packet.
Several things must happen. First, the packet's contents need to be
examined to ensure that the packet is valid. A check is done to see if
the packet is opening a windowed object. If not, then a build is
run.
"""
# Is the target within our purview? We don't do a very strict
# containment check, but we *do* require that the chunk be loaded.
bigx, smallx, bigz, smallz = split_coords(container.x, container.z)
try:
chunk = self.chunks[bigx, bigz]
except KeyError:
self.error("Couldn't select in chunk (%d, %d)!" % (bigx, bigz))
return
target = blocks[chunk.get_block((smallx, container.y, smallz))]
- # If it's a chest, hax.
- if target.name == "chest":
- from bravo.policy.windows import Chest
- w = Chest()
- self.windows[self.wid] = w
-
- w.open()
- self.write_packet("window-open", wid=self.wid, type=w.identifier,
- title=w.title, slots=w.slots)
-
- self.wid += 1
- return
- elif target.name == "workbench":
- from bravo.policy.windows import Workbench
- w = Workbench()
- self.windows[self.wid] = w
-
- w.open()
- self.write_packet("window-open", wid=self.wid, type=w.identifier,
- title=w.title, slots=w.slots)
-
+ # Attempt to open a window.
+ from bravo.policy.windows import window_for_block
+ window = window_for_block(target)
+ if window is not None:
+ # We have a window!
+ self.windows[self.wid] = window
+ identifier, title, slots = window.open()
+ self.write_packet("window-open", wid=self.wid, type=identifier,
+ title=title, slots=slots)
self.wid += 1
return
# Try to open it first
for hook in self.open_hooks:
window = yield maybeDeferred(hook.open_hook, self, container,
chunk.get_block((smallx, container.y, smallz)))
if window:
self.write_packet("window-open", wid=window.wid,
type=window.identifier, title=window.title,
slots=window.slots_num)
packet = window.save_to_packet()
self.transport.write(packet)
# window opened
return
# Ignore clients that think -1 is placeable.
if container.primary == -1:
return
# Special case when face is "noop": Update the status of the currently
# held block rather than placing a new block.
if container.face == "noop":
return
# If the target block is vanishable, then adjust our aim accordingly.
if target.vanishes:
container.face = "+y"
container.y -= 1
if container.primary in blocks:
block = blocks[container.primary]
elif container.primary in items:
block = items[container.primary]
else:
log.err("Ignoring request to place unknown block %d" %
container.primary)
return
# Run pre-build hooks. These hooks are able to interrupt the build
# process.
builddata = BuildData(block, 0x0, container.x, container.y,
container.z, container.face)
for hook in self.pre_build_hooks:
cont, builddata, cancel = yield maybeDeferred(hook.pre_build_hook,
self.player, builddata)
if cancel:
# Flush damaged chunks.
for chunk in self.chunks.itervalues():
self.factory.flush_chunk(chunk)
return
if not cont:
break
# Run the build.
try:
yield maybeDeferred(self.run_build, builddata)
except BuildError:
return
newblock = builddata.block.slot
coords = adjust_coords_for_face(
(builddata.x, builddata.y, builddata.z), builddata.face)
# Run post-build hooks. These are merely callbacks which cannot
# interfere with the build process, largely because the build process
# already happened.
for hook in self.post_build_hooks:
yield maybeDeferred(hook.post_build_hook, self.player, coords,
builddata.block)
# Feed automatons.
for automaton in self.factory.automatons:
if newblock in automaton.blocks:
automaton.feed(coords)
# Re-send inventory.
# XXX this could be optimized if/when inventories track damage.
packet = self.inventory.save_to_packet()
self.transport.write(packet)
# Flush damaged chunks.
for chunk in self.chunks.itervalues():
self.factory.flush_chunk(chunk)
def run_build(self, builddata):
block, metadata, x, y, z, face = builddata
# Don't place items as blocks.
if block.slot not in blocks:
raise BuildError("Couldn't build item %r as block" % block)
# Check for orientable blocks.
if not metadata and block.orientable():
metadata = block.orientation(face)
if metadata is None:
# Oh, I guess we can't even place the block on this face.
raise BuildError("Couldn't orient block %r on face %s" %
(block, face))
# Make sure we can remove it from the inventory first.
if not self.player.inventory.consume((block.slot, 0),
self.player.equipped):
# Okay, first one was a bust; maybe we can consume the related
# block for dropping instead?
if not self.player.inventory.consume(block.drop,
self.player.equipped):
raise BuildError("Couldn't consume %r from inventory" % block)
# Offset coords according to face.
x, y, z = adjust_coords_for_face((x, y, z), face)
# Set the block and data.
dl = [self.factory.world.set_block((x, y, z), block.slot)]
if metadata:
dl.append(self.factory.world.set_metadata((x, y, z), metadata))
return DeferredList(dl)
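A plausible sketch of ``adjust_coords_for_face()`` as used in ``run_build``: placement happens one block away from the clicked block, along the clicked face. The face names follow the "+x"/"-y"/... convention used in the packets above; the offset table itself is an assumption of this sketch:

```python
# Unit offset for each face name; clicking the top ("+y") face of a
# block places the new block one step up, and so on.
FACE_OFFSETS = {
    "-x": (-1, 0, 0), "+x": (1, 0, 0),
    "-y": (0, -1, 0), "+y": (0, 1, 0),
    "-z": (0, 0, -1), "+z": (0, 0, 1),
}

def adjust_coords_for_face(coords, face):
    dx, dy, dz = FACE_OFFSETS[face]
    x, y, z = coords
    return x + dx, y + dy, z + dz
```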
def equip(self, container):
self.player.equipped = container.slot
# Inform everyone about the item the player is holding now.
item = self.player.inventory.holdables[self.player.equipped]
if item is None:
# Empty slot. Use signed short -1.
primary, secondary = -1, 0
else:
primary, secondary, count = item
packet = make_packet("entity-equipment",
eid=self.player.eid,
slot=0,
primary=primary,
count=1,
secondary=secondary
)
self.factory.broadcast_for_others(packet, self)
def pickup(self, container):
self.factory.give((container.x, container.y, container.z),
(container.primary, container.secondary), container.count)
def animate(self, container):
# Broadcast the animation of the entity to everyone else. Only the
# swing-arm animation is sent by Notchian clients.
packet = make_packet("animate",
eid=self.player.eid,
animation=container.animation
)
self.factory.broadcast_for_others(packet, self)
def wclose(self, container):
wid = container.wid
if wid == 0:
# WID 0 is reserved for the client inventory.
pass
elif wid in self.windows:
w = self.windows.pop(wid)
w.close()
else:
self.error("WID %d doesn't exist." % wid)
def waction(self, container):
wid = container.wid
if wid in self.windows:
w = self.windows[wid]
result = w.action(container.slot, container.button,
container.token, container.shift,
container.primary)
self.write_packet("window-token", wid=wid, token=container.token,
acknowledged=result)
else:
self.error("WID %d doesn't exist." % wid)
def wcreative(self, container):
"""
A slot was altered in creative mode.
"""
# XXX Sometimes the container doesn't contain all of this information.
# What then?
applied = self.inventory.creative(container.slot, container.primary,
container.secondary, container.count)
if applied:
# Inform other players about changes to this player's equipment.
equipped_slot = self.player.equipped + 36
if container.slot == equipped_slot:
packet = make_packet("entity-equipment",
eid=self.player.eid,
# XXX why 0? why not the actual slot?
slot=0,
primary=container.primary,
count=1,
secondary=container.secondary,
)
self.factory.broadcast_for_others(packet, self)
def shoot_arrow(self):
# TODO 1. Create arrow entity: arrow = Arrow(self.factory, self.player)
# 2. Register within the factory: self.factory.register_entity(arrow)
# 3. Run it: arrow.run()
pass
def sign(self, container):
bigx, smallx, bigz, smallz = split_coords(container.x, container.z)
try:
chunk = self.chunks[bigx, bigz]
except KeyError:
self.error("Couldn't handle sign in chunk (%d, %d)!" % (bigx, bigz))
return
if (smallx, container.y, smallz) in chunk.tiles:
new = False
s = chunk.tiles[smallx, container.y, smallz]
else:
new = True
s = Sign(smallx, container.y, smallz)
chunk.tiles[smallx, container.y, smallz] = s
s.text1 = container.line1
s.text2 = container.line2
s.text3 = container.line3
s.text4 = container.line4
chunk.dirty = True
# The best part of a sign isn't making one, it's showing everybody
# else on the server that you did.
packet = make_packet("sign", container)
self.factory.broadcast_for_chunk(packet, bigx, bigz)
# Run sign hooks.
for hook in self.sign_hooks:
hook.sign_hook(self.factory, chunk, container.x, container.y,
container.z, [s.text1, s.text2, s.text3, s.text4], new)
def complete(self, container):
"""
Attempt to tab-complete user names.
"""
needle = container.autocomplete
usernames = self.factory.protocols.keys()
results = complete(needle, usernames)
self.write_packet("tab", autocomplete=results)
def settings_packet(self, container):
"""
Acknowledge a change of settings and update chunk distance.
"""
super(BravoProtocol, self).settings_packet(container)
self.update_chunks()
def disable_chunk(self, x, z):
key = x, z
log.msg("Disabling chunk %d, %d" % key)
if key not in self.chunks:
log.msg("...But the chunk wasn't loaded!")
return
# Remove the chunk from cache.
chunk = self.chunks.pop(key)
eids = [e.eid for e in chunk.entities]
self.write_packet("destroy", count=len(eids), eid=eids)
# Clear chunk data on the client.
self.write_packet("chunk", x=x, z=z, continuous=False, primary=0x0,
add=0x0, data="")
def enable_chunk(self, x, z):
"""
Request a chunk.
This function will asynchronously obtain the chunk, and send it on the
wire.
:returns: `Deferred` that will be fired when the chunk is obtained,
with no arguments
"""
log.msg("Enabling chunk %d, %d" % (x, z))
if (x, z) in self.chunks:
log.msg("...But the chunk was already loaded!")
return succeed(None)
d = self.factory.world.request_chunk(x, z)
@d.addCallback
def cb(chunk):
self.chunks[x, z] = chunk
return chunk
d.addCallback(self.send_chunk)
return d
def send_chunk(self, chunk):
log.msg("Sending chunk %d, %d" % (chunk.x, chunk.z))
packet = chunk.save_to_packet()
self.transport.write(packet)
for entity in chunk.entities:
packet = entity.save_to_packet()
self.transport.write(packet)
for entity in chunk.tiles.itervalues():
if entity.name == "Sign":
packet = entity.save_to_packet()
self.transport.write(packet)
def send_initial_chunk_and_location(self):
"""
Send the initial chunks and location.
This method sends more than one chunk; since Beta 1.2, it must send
nearly fifty chunks before the location can be safely sent.
"""
# Disable located hooks. We'll re-enable them at the end.
self.state = STATE_AUTHENTICATED
log.msg("Initial, position %d, %d, %d" % self.location.pos)
x, y, z = self.location.pos.to_block()
bigx, smallx, bigz, smallz = split_coords(x, z)
# Send the chunk that the player will stand on. The other chunks are
# not so important. There *used* to be a bug, circa Beta 1.2, that
# required lots of surrounding geometry to be present, but that's been
# fixed.
d = self.enable_chunk(bigx, bigz)
# What to do if we can't load a given chunk? Just kick 'em.
d.addErrback(lambda fail: self.error("Couldn't load a chunk... :c"))
# Don't dare send more chunks beyond the initial one until we've
# spawned. Once we've spawned, set our status to LOCATED and then
# update_location() will work.
@d.addCallback
def located(none):
self.state = STATE_LOCATED
# Ensure that we're above-ground.
self.ascend(0)
d.addCallback(lambda none: self.update_location())
d.addCallback(lambda none: self.position_changed())
# Send the MOTD.
if self.motd:
@d.addCallback
def motd(none):
self.write_packet("chat",
message=self.motd.replace("<tagline>", get_motd()))
# Finally, start the secondary chunk loop.
d.addCallback(lambda none: self.update_chunks())
def update_chunks(self):
# Don't send chunks unless we're located.
if self.state != STATE_LOCATED:
return
x, z = self.location.pos.to_chunk()
# These numbers come from a couple spots, including minecraftwiki, but
# I verified them experimentally using torches and pillars to mark
# distances on each setting. ~ C.
distances = {
"tiny": 2,
"short": 4,
"far": 16,
}
radius = distances.get(self.settings.distance, 8)
new = set(circling(x, z, radius))
old = set(self.chunks.iterkeys())
added = new - old
discarded = old - new
# Perhaps some explanation is in order.
# The cooperate() function iterates over the iterable it is fed,
# without tying up the reactor, by yielding after each iteration. The
# inner part of the generator expression generates all of the chunks
# around the currently needed chunk, and it sorts them by distance to
# the current chunk. The end result is that we load chunks one-by-one,
# nearest to furthest, without stalling other clients.
if self.chunk_tasks:
for task in self.chunk_tasks:
try:
task.stop()
except (TaskDone, TaskFailed):
pass
to_enable = sorted_by_distance(added, x, z)
self.chunk_tasks = [
cooperate(self.enable_chunk(i, j) for i, j in to_enable),
cooperate(self.disable_chunk(i, j) for i, j in discarded),
]
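The nearest-first ordering described in the comment above can be sketched as follows. ``circling`` here is a square approximation (the real helper may trim to a circle), and ``sorted_by_distance`` orders by squared distance so loading proceeds outward from the player's chunk:

```python
from itertools import product

def circling(x, z, radius):
    # Every chunk coordinate within `radius` of (x, z) on both axes.
    for i, j in product(range(x - radius, x + radius + 1),
                        range(z - radius, z + radius + 1)):
        yield i, j

def sorted_by_distance(coords, x, z):
    # Nearest-first: sort by squared distance to the player's chunk.
    return sorted(coords, key=lambda c: (c[0] - x) ** 2 + (c[1] - z) ** 2)

order = sorted_by_distance(set(circling(0, 0, 1)), 0, 0)
```

Feeding this ordering through ``cooperate()`` is what keeps the reactor responsive: one chunk is enabled per iteration, nearest first, without stalling other clients.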
def update_time(self):
time = int(self.factory.time)
self.write_packet("time", timestamp=time, time=time % 24000)
def connectionLost(self, reason=connectionDone):
"""
Cleanup after a lost connection.
Most of the time, these connections are lost cleanly; we don't have
any cleanup to do in the unclean case since clients don't have any
kind of pending state which must be recovered.
Remember, the connection can be lost before identification and
authentication, so ``self.username`` and ``self.player`` can be None.
"""
if self.username and self.player:
self.factory.world.save_player(self.username, self.player)
if self.player:
self.factory.destroy_entity(self.player)
packet = make_packet("destroy", count=1, eid=[self.player.eid])
self.factory.broadcast(packet)
if self.username:
packet = make_packet("players", name=self.username, online=False,
ping=0)
self.factory.broadcast(packet)
self.factory.chat("%s has left the game." % self.username)
self.factory.teardown_protocol(self)
# We are now torn down. After this point, there will be no more
# factory stuff, just our own personal stuff.
del self.factory
if self.time_loop:
self.time_loop.stop()
if self.chunk_tasks:
for task in self.chunk_tasks:
try:
task.stop()
except (TaskDone, TaskFailed):
pass
bravoserver/bravo
9f08562f180c305292aaa8699335a216697facdf
ibravo, policy/windows: More work on IWindow.
diff --git a/bravo/ibravo.py b/bravo/ibravo.py
index 32404fc..5b85de7 100644
--- a/bravo/ibravo.py
+++ b/bravo/ibravo.py
@@ -1,540 +1,567 @@
from twisted.python.components import registerAdapter
from twisted.web.resource import IResource
from zope.interface import implements, invariant, Attribute, Interface
from bravo.errors import InvariantException
class IBravoPlugin(Interface):
"""
Interface for plugins.
This interface stores common metadata used during plugin discovery.
"""
name = Attribute("""
The name of the plugin.
This name is used to reference the plugin in configurations, and also
to uniquely index the plugin.
""")
def sorted_invariant(s):
intersection = set(s.before) & set(s.after)
if intersection:
raise InvariantException("Plugin wants to come before and after %r" %
intersection)
class ISortedPlugin(IBravoPlugin):
"""
Parent interface for sorted plugins.
Sorted plugins have an innate and automatic ordering inside lists thanks
to the ability to advertise their dependencies.
"""
invariant(sorted_invariant)
before = Attribute("""
Plugins which must come before this plugin in the pipeline.
Should be a tuple, list, or some other iterable.
""")
after = Attribute("""
Plugins which must come after this plugin in the pipeline.
Should be a tuple, list, or some other iterable.
""")
class ITerrainGenerator(ISortedPlugin):
"""
Interface for terrain generators.
"""
def populate(chunk, seed):
"""
Given a chunk and a seed value, populate the chunk with terrain.
This function should assume that it runs as part of a pipeline, and
that the chunk may already be partially or totally populated.
"""
def command_invariant(c):
if c.__doc__ is None:
raise InvariantException("Command has no documentation")
class ICommand(IBravoPlugin):
"""
A command.
Commands must be documented, as an invariant. The documentation for a
command will be displayed for clients upon request, via internal help
commands.
"""
invariant(command_invariant)
aliases = Attribute("""
Additional keywords which may be used to alias this command.
""")
usage = Attribute("""
String explaining how to use this command.
""")
class IChatCommand(ICommand):
"""
Interface for chat commands.
Chat commands are invoked from the chat inside clients, so they are always
called by a specific client.
This interface is specifically designed to exist comfortably side-by-side
with `IConsoleCommand`.
"""
def chat_command(username, parameters):
"""
Handle a command from the chat interface.
:param str username: username of player
:param list parameters: additional parameters passed to the command
:returns: a generator object or other iterable yielding lines
"""
class IConsoleCommand(ICommand):
"""
Interface for console commands.
Console commands are invoked from a console or some other location with
two defining attributes: Access restricted to superusers, and no user
issuing the command. As such, no access control list applies to them, but
they must be given usernames to operate on explicitly.
"""
def console_command(parameters):
"""
Handle a command.
:param list parameters: additional parameters passed to the command
:returns: a generator object or other iterable yielding lines
"""
class ChatToConsole(object):
"""
Adapt a chat command to be used on the console.
This largely consists of passing the username correctly.
"""
implements(IConsoleCommand)
def __init__(self, chatcommand):
self.chatcommand = chatcommand
self.aliases = self.chatcommand.aliases
self.info = self.chatcommand.info
self.name = self.chatcommand.name
self.usage = "<username> %s" % self.chatcommand.usage
def console_command(self, parameters):
if IConsoleCommand.providedBy(self.chatcommand):
return self.chatcommand.console_command(parameters)
else:
username = parameters.pop(0) if parameters else ""
return self.chatcommand.chat_command(username, parameters)
registerAdapter(ChatToConsole, IChatCommand, IConsoleCommand)
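The adaptation that ``registerAdapter`` wires up can be shown without the zope/twisted machinery: a console invocation simply supplies the username as its first parameter. ``Meow`` is a hypothetical chat command used only for illustration:

```python
class Meow(object):
    # Stand-in chat command; real ones implement IChatCommand.
    aliases = []
    usage = ""
    def chat_command(self, username, parameters):
        yield "%s meows" % username

class ChatToConsoleSketch(object):
    # Plain-Python version of the ChatToConsole adapter above.
    def __init__(self, chatcommand):
        self.chatcommand = chatcommand
        self.usage = "<username> %s" % chatcommand.usage
    def console_command(self, parameters):
        username = parameters.pop(0) if parameters else ""
        return self.chatcommand.chat_command(username, parameters)

lines = list(ChatToConsoleSketch(Meow()).console_command(["alice"]))
```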
class IRecipe(IBravoPlugin):
"""
A description for creating materials from other materials.
"""
def matches(table, stride):
"""
Determine whether a given crafting table matches this recipe.
``table`` is a list of slots.
``stride`` is the stride of the table.
:returns: bool
"""
def reduce(table, stride):
"""
Remove items from a given crafting table corresponding to a single
match of this recipe. The table is modified in-place.
This method is meant to be used to subtract items from a crafting
table following a successful recipe match.
This method may assume that this recipe ``matches()`` the table.
``table`` is a list of slots.
``stride`` is the stride of the table.
"""
provides = Attribute("""
Tuple representing the yield of this recipe.
This tuple must be of the format (slot, count).
""")
class ISeason(IBravoPlugin):
"""
Seasons are transformational stages run during certain days to emulate an
environment.
"""
def transform(chunk):
"""
Apply the season to the given chunk.
"""
day = Attribute("""
Day of the year on which to switch to this season.
""")
class ISerializer(IBravoPlugin):
"""
Class that understands how to serialize several different kinds of objects
to and from disk-friendly formats.
Implementors of this interface are expected to provide a uniform
implementation of their serialization technique.
"""
def connect(url):
"""
Connect this serializer to a serialization resource, as defined in
``url``.
Bravo uses URLs to specify all serialization resources. While there is
no strict enforcement of the identifier being a URL, most popular
database libraries can understand URL-based resources, and thus it is
a useful *de facto* standard. If a URL is not passed, or the URL is
invalid, this method may raise an exception.
"""
def save_chunk(chunk):
"""
Save a chunk.
May return a ``Deferred`` that will fire on completion.
"""
def load_chunk(x, z):
"""
Load a chunk. The chunk must exist.
May return a ``Deferred`` that will fire on completion.
:raises: SerializerReadException if the chunk doesn't exist
"""
def save_level(level):
"""
Save a level.
May return a ``Deferred`` that will fire on completion.
"""
def load_level():
"""
Load a level. The level must exist.
May return a ``Deferred`` that will fire on completion.
:raises: SerializerReadException if the level doesn't exist
"""
def save_player(player):
"""
Save a player.
May return a ``Deferred`` that will fire on completion.
"""
def load_player(username):
"""
Load a player. The player must exist.
May return a ``Deferred`` that will fire on completion.
:raises: SerializerReadException if the player doesn't exist
"""
def save_plugin_data(name, value):
"""
Save plugin-specific data. The data must be a bytestring.
May return a ``Deferred`` that will fire on completion.
"""
def load_plugin_data(name):
"""
Load plugin-specific data. If no data is found, an empty bytestring
will be returned.
May return a ``Deferred`` that will fire on completion.
"""
# Hooks
class IWindowOpenHook(ISortedPlugin):
"""
Hook for actions to be taken on container open
"""
def open_hook(player, container, block):
"""
The ``player`` is a Player's protocol
The ``container`` is a 0x64 message
The ``block`` is the block we are trying to open
:returns: ``Deferred`` with None or a window object
"""
pass
class IWindowClickHook(ISortedPlugin):
"""
Hook for actions to be taken on window clicks
"""
def click_hook(player, container):
"""
The ``player`` is a Player's protocol
The ``container`` is a 0x66 message
:returns: True if you processed the action and the transaction should be
acknowledged as successful. You will probably never return True here.
"""
pass
class IWindowCloseHook(ISortedPlugin):
"""
Hook for actions to be taken on window close
"""
def close_hook(player, container):
"""
The ``player`` is a Player's protocol
The ``container`` is a 0x65 message
"""
pass
class IPreBuildHook(ISortedPlugin):
"""
Hook for actions to be taken before a block is placed.
"""
def pre_build_hook(player, builddata):
"""
Do things.
The ``player`` is a ``Player`` entity and can be modified as needed.
The ``builddata`` tuple has all of the useful things. It stores a
``Block`` that will be placed, as well as the block coordinates and
face of the place where the block will be built.
``builddata`` needs to be passed to the next hook in sequence, but it
can be modified in passing in order to modify the way blocks are
placed.
Any access to chunks must be done through the factory. To get the
current factory, import it from ``bravo.parameters``:
>>> from bravo.parameters import factory
The first element of the return value indicates whether processing
of this build should continue after this hook runs. Use it to halt build
hook processing, if needed.
The third element of the return value indicates whether the build
process should be canceled. Use it to completely stop the process.
For sanity purposes, build hooks may return a ``Deferred`` which will
fire with their return values, but are not obligated to do so.
A trivial do-nothing build hook looks like the following:
>>> def pre_build_hook(self, player, builddata):
... return True, builddata, False
To make life more pleasant when returning deferred values, use
``inlineCallbacks``, which many of the standard build hooks use:
>>> @inlineCallbacks
... def pre_build_hook(self, player, builddata):
... returnValue((True, builddata, False))
This form makes it much easier to deal with asynchronous operations on
the factory and world.
:param ``Player`` player: player entity doing the building
:param namedtuple builddata: permanent building location and data
:returns: ``Deferred`` with tuple of build data and whether subsequent
hooks will run
"""
class IPostBuildHook(ISortedPlugin):
"""
Hook for actions to be taken after a block is placed.
"""
def post_build_hook(player, coords, block):
"""
Do things.
The coordinates for the given block have already been pre-adjusted.
"""
class IPreDigHook(ISortedPlugin):
"""
Hook for actions to be taken as dig started.
"""
def pre_dig_hook(player, coords, block):
"""
The ``player`` is a Player's protocol
The ``coords`` are the block coordinates -- x, y, z
The ``block`` is the block we are about to dig
:returns: True to cancel the dig action.
"""
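A hypothetical pre-dig hook that protects bedrock might look like the
following sketch. The ``name`` attribute on the plugin and the block's
``name`` check are illustrative assumptions, not part of the interface:

```python
class BedrockGuardHook(object):
    """
    A hypothetical IPreDigHook plugin which cancels digging of bedrock.

    The block's ``name`` attribute is assumed here for illustration.
    """

    name = "bedrock_guard"

    def pre_dig_hook(self, player, coords, block):
        # Returning True cancels the dig action.
        return getattr(block, "name", None) == "bedrock"
```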
class IDigHook(ISortedPlugin):
"""
Hook for actions to be taken after a block is dug up.
"""
def dig_hook(chunk, x, y, z, block):
"""
Do things.
:param `Chunk` chunk: digging location
:param int x: X coordinate
:param int y: Y coordinate
:param int z: Z coordinate
:param `Block` block: dug block
"""
class ISignHook(ISortedPlugin):
"""
Hook for actions to be taken after a sign is updated.
This hook fires both on sign creation and sign editing.
"""
def sign_hook(chunk, x, y, z, text, new):
"""
Do things.
:param `Chunk` chunk: digging location
:param int x: X coordinate
:param int y: Y coordinate
:param int z: Z coordinate
:param list text: list of lines of text
:param bool new: whether this sign is newly placed
"""
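Because the hook receives the sign's lines as a list, a plugin can rewrite
them in place. This sketch assumes in-place mutation of ``text`` is how
changes propagate, and only touches newly placed signs; the plugin name is
a made-up example:

```python
class ShoutingSignHook(object):
    """
    A hypothetical ISignHook plugin which uppercases newly placed signs.
    Signs that are merely edited are left alone.
    """

    name = "shouting_signs"

    def sign_hook(self, chunk, x, y, z, text, new):
        if new:
            # Mutate the list in place so the caller sees the change.
            text[:] = [line.upper() for line in text]
```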
class IUseHook(ISortedPlugin):
"""
Hook for actions to be taken when a player interacts with an entity.
Each plugin must specify, in advance, the list of entity types it is
interested in; it will only be called for those.
"""
def use_hook(player, target, alternate):
"""
Do things.
:param `Player` player: player
:param `Entity` target: target of the interaction
:param bool alternate: whether the player right-clicked the target
"""
targets = Attribute("""
List of entity names this plugin wants to be called for.
""")
class IAutomaton(IBravoPlugin):
"""
An automaton.
Automatons are given blocks from chunks which interest them, and may do
processing on those blocks.
"""
blocks = Attribute("""
List of blocks which this automaton is interested in.
""")
def feed(coordinates):
"""
Provide this automaton with block coordinates to handle later.
"""
def scan(chunk):
"""
Provide this automaton with an entire chunk which this automaton may
handle as it pleases.
A utility scanner which will simply `feed()` this automaton is in
bravo.utilities.automatic.
"""
def start():
"""
Run the automaton.
"""
def stop():
"""
Stop the automaton.
After this method is called, the automaton should not continue
processing data; it needs to stop immediately.
"""
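A minimal automaton satisfying this contract can be sketched as below. The
``interesting`` attribute on the chunk and the ``running`` flag are
illustrative assumptions; a real plugin would use the scanner in
``bravo.utilities.automatic``:

```python
class TrackingAutomaton(object):
    """
    A sketch of an IAutomaton plugin: it records fed coordinates and
    stops processing once stop() is called, per the interface contract.
    """

    blocks = []  # block slots this automaton is interested in

    def __init__(self):
        self.tracked = set()
        self.running = False

    def feed(self, coordinates):
        # Remember coordinates for later processing.
        self.tracked.add(coordinates)

    def scan(self, chunk):
        # A real plugin could defer to bravo.utilities.automatic here;
        # this sketch feeds every coordinate the chunk reports.
        for coords in getattr(chunk, "interesting", ()):
            self.feed(coords)

    def start(self):
        self.running = True

    def stop(self):
        # Contract: no further processing after stop() is called.
        self.running = False
```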
class IWorldResource(IBravoPlugin, IResource):
"""
Interface for a world specific web resource.
"""
class IWindow(Interface):
"""
An openable window.
- """
+
+ ``IWindow`` generalizes the single-purpose dedicated windows used
+ primarily by blocks which have storage and/or timers associated with them.
+ A window is an object which has some slots which can hold items and
+ blocks, and is receptive to a general protocol which alters those slots in
+ a structured fashion. However, windows do not know about player
+ inventories, and cannot perform wire-protocol-specific actions.
+
+ This interface is the answer to several questions:
+ * How can we write code for workbenches and chests without having to
+ duplicate inventory management code?
+ * How can combination locks or other highly-customized windows be
+ designed?
+ * Is it possible to abstract away the low-level details of mouse
+ actions and instead discuss semantic movement of items through an
+ inventory's various slots and between window panes?
+ * Can windows have background processes happening which result in
+ periodic changes to their viewers?
+
+ Damage tracking might need to be event-driven.
+ """
+
+ slots = Attribute("""
+ A mapping of slot numbers to slot data.
+ """)
def open():
"""
Open a window.
- Returns the title of the window and number of slots in the window.
+ :returns: The identifier of the window, the title of the window, and
+ the number of slots in the window.
"""
def close():
"""
Close a window.
"""
- def alter(slot, old):
+ def altered(slot, old, new):
"""
- Notify the window that a slot has been altered.
+ Notify the window that a slot's data should be changed.
- The old slot data is passed in as `old`.
+ Both the old and new slots are provided.
"""
def damaged():
"""
Retrieve the damaged slot numbers.
+
+ :returns: A sequence of slot numbers.
"""
def undamage():
"""
Forget about damage.
"""
diff --git a/bravo/policy/windows.py b/bravo/policy/windows.py
index 4f31568..bed3bd2 100644
--- a/bravo/policy/windows.py
+++ b/bravo/policy/windows.py
@@ -1,97 +1,102 @@
from zope.interface import implements
from bravo.ibravo import IWindow
class Pane(object):
"""
A composite window which combines an inventory and a specialized window.
"""
- def __init__(self, inventory, block):
+ def __init__(self, inventory, window):
self.inventory = inventory
- self.window = window_for_block(block)()
- self.window.open()
+ self.window = window
def open(self):
- pass
+ return self.window.open()
def close(self):
- pass
+ self.window.close()
def action(self, slot, button, transaction, shifted, item):
return False
@property
def slots(self):
return len(self.window.slots)
class Chest(object):
"""
The chest window.
"""
implements(IWindow)
+ title = "Unnamed Chest"
+ identifier = "chest"
+
def __init__(self):
self._damaged = set()
self.slots = dict((x, None) for x in range(36))
def open(self):
- pass
+ return self.identifier, self.title, len(self.slots)
def close(self):
pass
- def altered(self, slot, old):
+ def altered(self, slot, old, new):
self._damaged.add(slot)
def damaged(self):
- return iter(self._damaged)
+ return sorted(self._damaged)
def undamage(self):
self._damaged.clear()
- title = "derp"
- identifier = "chest"
-
class Workbench(object):
"""
The workbench/crafting window.
"""
implements(IWindow)
+ title = ""
+ identifier = "workbench"
+
+ slots = 2
+
def open(self):
- pass
+ return self.identifier, self.title, self.slots
def close(self):
pass
- def action(self, slot, button, transaction, shifted, item):
- return False
-
- slots = 2
- title = ""
- identifier = "workbench"
+ def altered(self, slot, old, new):
+ pass
class Furnace(object):
"""
The furnace window.
"""
implements(IWindow)
def window_for_block(block):
if block.name == "chest":
- return Chest
+ return Chest()
elif block.name == "workbench":
- return Workbench
+ return Workbench()
elif block.name == "furnace":
- return Furnace
+ return Furnace()
return None
+
+
+def pane(inventory, block):
+ window = window_for_block(block)
+ return Pane(inventory, window)
diff --git a/bravo/tests/policy/test_windows.py b/bravo/tests/policy/test_windows.py
new file mode 100644
index 0000000..b92920a
--- /dev/null
+++ b/bravo/tests/policy/test_windows.py
@@ -0,0 +1,18 @@
+from unittest import TestCase
+
+from zope.interface.verify import verifyObject
+
+from bravo.ibravo import IWindow
+from bravo.policy.windows import Chest
+
+
+class TestChest(TestCase):
+
+ def test_verify_object(self):
+ c = Chest()
+ verifyObject(IWindow, c)
+
+ def test_damage_single(self):
+ c = Chest()
+ c.altered(17, None, None)
+ self.assertTrue(17 in c.damaged())
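The open/alter/damage/undamage cycle tested above can be exercised
end-to-end. This standalone replica of ``Chest`` mirrors the diff so the
example runs without Bravo installed:

```python
class Chest(object):
    """Standalone replica of the Chest window from the diff above."""

    title = "Unnamed Chest"
    identifier = "chest"

    def __init__(self):
        self._damaged = set()
        self.slots = dict((x, None) for x in range(36))

    def open(self):
        return self.identifier, self.title, len(self.slots)

    def altered(self, slot, old, new):
        # Record which slot changed; old/new data are ignored here.
        self._damaged.add(slot)

    def damaged(self):
        return sorted(self._damaged)

    def undamage(self):
        self._damaged.clear()


# The full cycle: open, alter some slots, read damage, then forget it.
chest = Chest()
identifier, title, nslots = chest.open()
chest.altered(3, None, "dirt")
chest.altered(1, None, "stone")
assert chest.damaged() == [1, 3]  # sorted slot numbers
chest.undamage()
assert chest.damaged() == []
```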
|
bravoserver/bravo
|
dd77a97338a68fa266a1876196ee4890c5e87b98
|
Fix up some pyflakes warnings.
|
diff --git a/bravo/beta/factory.py b/bravo/beta/factory.py
index 05a92c1..2ac1fe0 100644
--- a/bravo/beta/factory.py
+++ b/bravo/beta/factory.py
@@ -1,514 +1,514 @@
from collections import defaultdict
-from itertools import chain, product
+from itertools import product
from twisted.internet import reactor
from twisted.internet.interfaces import IPushProducer
from twisted.internet.protocol import Factory
from twisted.internet.task import LoopingCall
from twisted.python import log
from zope.interface import implements
from bravo.beta.packets import make_packet
from bravo.beta.protocol import BravoProtocol, KickedProtocol
from bravo.entity import entities
from bravo.ibravo import (ISortedPlugin, IAutomaton, ITerrainGenerator,
IUseHook, ISignHook, IPreDigHook, IDigHook,
IPreBuildHook, IPostBuildHook, IWindowOpenHook,
IWindowClickHook, IWindowCloseHook)
from bravo.location import Location
from bravo.plugin import retrieve_named_plugins, retrieve_sorted_plugins
from bravo.policy.packs import packs as available_packs
from bravo.policy.seasons import Spring, Winter
from bravo.utilities.chat import chat_name, sanitize_chat
from bravo.weather import WeatherVane
from bravo.world import World
(STATE_UNAUTHENTICATED, STATE_CHALLENGED, STATE_AUTHENTICATED,
STATE_LOCATED) = range(4)
circle = [(i, j)
for i, j in product(xrange(-5, 5), xrange(-5, 5))
if i**2 + j**2 <= 25
]
class BravoFactory(Factory):
"""
A ``Factory`` that creates ``BravoProtocol`` objects when connected to.
"""
implements(IPushProducer)
protocol = BravoProtocol
timestamp = None
time = 0
day = 0
eid = 1
interfaces = []
def __init__(self, config, name):
"""
Create a factory and world.
``name`` is the string used to look up factory-specific settings from
the configuration.
:param str name: internal name of this factory
"""
self.name = name
self.config = config
self.config_name = "world %s" % name
self.world = World(self.config, self.name)
self.world.factory = self
self.protocols = dict()
self.connectedIPs = defaultdict(int)
self.mode = self.config.get(self.config_name, "mode")
if self.mode not in ("creative", "survival"):
raise Exception("Unsupported mode %s" % self.mode)
self.limitConnections = self.config.getintdefault(self.config_name,
"limitConnections",
0)
self.limitPerIP = self.config.getintdefault(self.config_name,
"limitPerIP", 0)
self.vane = WeatherVane(self)
def startFactory(self):
log.msg("Initializing factory for world '%s'..." % self.name)
# Get our plugins set up.
self.register_plugins()
log.msg("Starting world...")
self.world.start()
log.msg("Starting timekeeping...")
self.timestamp = reactor.seconds()
self.time = self.world.level.time
self.update_season()
self.time_loop = LoopingCall(self.update_time)
self.time_loop.start(2)
log.msg("Starting entity updates...")
# Start automatons.
for automaton in self.automatons:
automaton.start()
self.chat_consumers = set()
log.msg("Factory successfully initialized for world '%s'!" % self.name)
def stopFactory(self):
"""
Called before factory stops listening on ports. Used to perform
shutdown tasks.
"""
log.msg("Shutting down world...")
# Stop automatons. Technically, they may not actually halt until their
# next iteration, but that is close enough for us, probably.
# Automatons are contracted to not access the world after stop() is
# called.
for automaton in self.automatons:
automaton.stop()
# Evict plugins as soon as possible. Can't be done before stopping
# automatons.
self.unregister_plugins()
self.time_loop.stop()
# Write back current world time. This must be done before stopping the
# world.
self.world.time = self.time
# And now stop the world.
self.world.stop()
log.msg("World data saved!")
def buildProtocol(self, addr):
"""
Create a protocol.
This overridden method provides early player entity registration, as a
solution to the username/entity race that occurs on login.
"""
banned = self.world.serializer.load_plugin_data("banned_ips")
# Do IP bans first.
for ip in banned.split():
if addr.host == ip:
# Use KickedProtocol with extreme prejudice.
log.msg("Kicking banned IP %s" % addr.host)
p = KickedProtocol("Sorry, but your IP address is banned.")
p.factory = self
return p
# We are ignoring values less than 1, but making sure not to go over
# the connection limit.
if (self.limitConnections
and len(self.protocols) >= self.limitConnections):
log.msg("Reached maximum players, turning %s away." % addr.host)
p = KickedProtocol("The player limit has already been reached."
" Please try again later.")
p.factory = self
return p
# Do our connection-per-IP check.
if (self.limitPerIP and
self.connectedIPs[addr.host] >= self.limitPerIP):
log.msg("At maximum connections for %s already, dropping." % addr.host)
p = KickedProtocol("There are too many players connected from this IP.")
p.factory = self
return p
else:
self.connectedIPs[addr.host] += 1
# If the player wasn't kicked, let's continue!
log.msg("Starting connection for %s" % addr)
p = self.protocol(self.config, self.name)
p.host = addr.host
p.factory = self
self.register_entity(p)
# Copy our hooks to the protocol.
p.register_hooks()
return p
def teardown_protocol(self, protocol):
"""
Do internal bookkeeping on behalf of a protocol which has been
disconnected.
Did you know that "bookkeeping" is one of the few words in English
which has three pairs of double letters in a row?
"""
username = protocol.username
host = protocol.host
if username in self.protocols:
del self.protocols[username]
self.connectedIPs[host] -= 1
def set_username(self, protocol, username):
"""
Attempt to set a new username for a protocol.
:returns: whether the username was changed
"""
# If the username's already taken, refuse it.
if username in self.protocols:
return False
if protocol.username in self.protocols:
# This protocol's known under another name, so remove it.
del self.protocols[protocol.username]
# Set the username.
self.protocols[username] = protocol
protocol.username = username
return True
def register_plugins(self):
"""
Setup plugin hooks.
"""
log.msg("Registering client plugin hooks...")
plugin_types = {
"automatons": IAutomaton,
"generators": ITerrainGenerator,
"open_hooks": IWindowOpenHook,
"click_hooks": IWindowClickHook,
"close_hooks": IWindowCloseHook,
"pre_build_hooks": IPreBuildHook,
"post_build_hooks": IPostBuildHook,
"pre_dig_hooks": IPreDigHook,
"dig_hooks": IDigHook,
"sign_hooks": ISignHook,
"use_hooks": IUseHook,
}
packs = self.config.getlistdefault(self.config_name, "packs", [])
try:
packs = [available_packs[pack] for pack in packs]
except KeyError, e:
raise Exception("Couldn't find plugin pack %s" % e.args)
for t, interface in plugin_types.iteritems():
l = self.config.getlistdefault(self.config_name, t, [])
# Grab extra plugins from the pack. Order doesn't really matter
# since the plugin loader sorts things anyway.
for pack in packs:
if t in pack:
l += pack[t]
# Hax. :T
if t == "generators":
plugins = retrieve_sorted_plugins(interface, l)
elif issubclass(interface, ISortedPlugin):
plugins = retrieve_sorted_plugins(interface, l, factory=self)
else:
plugins = retrieve_named_plugins(interface, l, factory=self)
log.msg("Using %s: %s" % (t.replace("_", " "),
", ".join(plugin.name for plugin in plugins)))
setattr(self, t, plugins)
# Deal with seasons.
seasons = self.config.getlistdefault(self.config_name, "seasons", [])
for pack in packs:
if "seasons" in pack:
seasons += pack["seasons"]
self.seasons = []
if "spring" in seasons:
self.seasons.append(Spring())
if "winter" in seasons:
self.seasons.append(Winter())
# Assign generators to the world pipeline.
self.world.pipeline = self.generators
# Use hooks have special funkiness.
uh = self.use_hooks
self.use_hooks = defaultdict(list)
for plugin in uh:
for target in plugin.targets:
self.use_hooks[target].append(plugin)
def unregister_plugins(self):
log.msg("Unregistering client plugin hooks...")
for name in [
"automatons",
"generators",
"open_hooks",
"click_hooks",
"close_hooks",
"pre_build_hooks",
"post_build_hooks",
"pre_dig_hooks",
"dig_hooks",
"sign_hooks",
"use_hooks",
]:
delattr(self, name)
def create_entity(self, x, y, z, name, **kwargs):
"""
Spawn an entirely new entity at the specified block coordinates.
Handles entity registration as well as instantiation.
"""
bigx = x // 16
bigz = z // 16
location = Location.at_block(x, y, z)
entity = entities[name](eid=0, location=location, **kwargs)
self.register_entity(entity)
d = self.world.request_chunk(bigx, bigz)
@d.addCallback
def cb(chunk):
chunk.entities.add(entity)
log.msg("Created entity %s" % entity)
# XXX Maybe just send the entity object to the manager instead of
# the following?
if hasattr(entity,'loop'):
self.world.mob_manager.start_mob(entity)
return entity
def register_entity(self, entity):
"""
Registers an entity with this factory.
Registration is perhaps too fancy of a name; this method merely makes
sure that the entity has a unique and usable entity ID. In particular,
this method does *not* make the entity attached to the world, or
advertise its existence.
"""
if not entity.eid:
self.eid += 1
entity.eid = self.eid
log.msg("Registered entity %s" % entity)
def destroy_entity(self, entity):
"""
Destroy an entity.
The factory doesn't have to know about entities, but it is a good
place to put this logic.
"""
bigx, bigz = entity.location.pos.to_chunk()
d = self.world.request_chunk(bigx, bigz)
@d.addCallback
def cb(chunk):
chunk.entities.discard(entity)
chunk.dirty = True
log.msg("Destroyed entity %s" % entity)
def update_time(self):
"""
Update the in-game timer.
The timer goes from 0 to 24000, both of which are high noon. The clock
increments by 20 every second. Days are 20 minutes long.
The day clock is incremented every in-game day, which is every 20
minutes. The day clock goes from 0 to 360, which works out to a reset
once every 5 days. This is a Babylonian in-game year.
"""
t = reactor.seconds()
self.time += 20 * (t - self.timestamp)
self.timestamp = t
days, self.time = divmod(self.time, 24000)
if days:
self.day += days
self.day %= 360
self.update_season()
def broadcast_time(self):
packet = make_packet("time", timestamp=int(self.time))
self.broadcast(packet)
def update_season(self):
"""
Update the world's season.
"""
all_seasons = sorted(self.seasons, key=lambda s: s.day)
# Get all the seasons whose start date we have passed this year.
# We are looking for the season which is closest to our current day,
# without going over; I call this the Price-is-Right style of season
# handling. :3
past_seasons = [s for s in all_seasons if s.day <= self.day]
if past_seasons:
# The most recent one is the one we are in
self.world.season = past_seasons[-1]
elif all_seasons:
# We haven't passed any seasons yet this year, so grab the last one
# from 'last year'
self.world.season = all_seasons[-1]
else:
# No seasons enabled.
self.world.season = None
def chat(self, message):
"""
Relay chat messages.
Chat messages are sent to all connected clients, as well as to anybody
consuming this factory.
"""
for consumer in self.chat_consumers:
consumer.write((self, message))
# Prepare the message for chat packeting.
for user in self.protocols:
message = message.replace(user, chat_name(user))
message = sanitize_chat(message)
log.msg("Chat: %s" % message.encode("utf8"))
packet = make_packet("chat", message=message)
self.broadcast(packet)
def broadcast(self, packet):
"""
Broadcast a packet to all connected players.
"""
for player in self.protocols.itervalues():
player.transport.write(packet)
def broadcast_for_others(self, packet, protocol):
"""
Broadcast a packet to all players except the originating player.
Useful for certain packets like player entity spawns which should
never be reflexive.
"""
for player in self.protocols.itervalues():
if player is not protocol:
player.transport.write(packet)
def broadcast_for_chunk(self, packet, x, z):
"""
Broadcast a packet to all players that have a certain chunk loaded.
`x` and `z` are chunk coordinates, not block coordinates.
"""
for player in self.protocols.itervalues():
if (x, z) in player.chunks:
player.transport.write(packet)
def scan_chunk(self, chunk):
"""
Tell automatons about this chunk.
"""
# It's possible for there to be no automatons; this usually means that
# the factory is shutting down. We should be permissive and handle
# this case correctly.
if hasattr(self, "automatons"):
for automaton in self.automatons:
automaton.scan(chunk)
def flush_chunk(self, chunk):
"""
Flush a damaged chunk to all players that have it loaded.
"""
if chunk.is_damaged():
packet = chunk.get_damage_packet()
for player in self.protocols.itervalues():
if (chunk.x, chunk.z) in player.chunks:
player.transport.write(packet)
chunk.clear_damage()
def flush_all_chunks(self):
"""
Flush any damage anywhere in this world to all players.
This is a sledgehammer which should be used sparingly at best, and is
only well-suited to plugins which touch multiple chunks at once.
In other words, if I catch you using this in your plugin needlessly,
I'm gonna have a chat with you.
"""
for chunk in self.world._cache.iterdirty():
self.flush_chunk(chunk)
diff --git a/bravo/beta/protocol.py b/bravo/beta/protocol.py
index 8980616..22172ef 100644
--- a/bravo/beta/protocol.py
+++ b/bravo/beta/protocol.py
@@ -350,1026 +350,1024 @@ class BetaServerProtocol(object, Protocol, TimeoutMixin):
players,
max_players,
]
response = u"\u0000".join(data)
self.error(response)
def quit(self, container):
"""
Hook for quit packets.
By default, merely logs the quit message and drops the connection.
Even if the connection is not dropped, it will be lost anyway since
the client will close the connection. It's better to explicitly let it
go here than to have zombie protocols.
"""
log.msg("Client is quitting: %s" % container.message)
self.transport.loseConnection()
# Twisted-level data handlers and methods
# Please don't override these needlessly, as they are pretty solid and
# shouldn't need to be touched.
def dataReceived(self, data):
self.buf += data
packets, self.buf = parse_packets(self.buf)
if packets:
self.resetTimeout()
for header, payload in packets:
if header in self.handlers:
d = maybeDeferred(self.handlers[header], payload)
@d.addErrback
def eb(failure):
log.err("Error while handling packet 0x%.2x" % header)
log.err(failure)
return None
else:
log.err("Didn't handle parseable packet 0x%.2x!" % header)
log.err(payload)
def connectionLost(self, reason=connectionDone):
if self._ping_loop.running:
self._ping_loop.stop()
def timeoutConnection(self):
self.error("Connection timed out")
# State-change callbacks
# Feel free to override these, but call them at some point.
def authenticated(self):
"""
Called when the client has successfully authenticated with the server.
"""
self.state = STATE_AUTHENTICATED
self._ping_loop.start(30)
# Event callbacks
# These are meant to be overridden.
def orientation_changed(self):
"""
Called when the client moves.
This callback is only for orientation, not position.
"""
pass
def position_changed(self):
"""
Called when the client moves.
This callback is only for position, not orientation.
"""
pass
# Convenience methods for consolidating code and expressing intent. I
# hear that these are occasionally useful. If a method in this section can
# be used, then *PLEASE* use it; not using it is the same as open-coding
# whatever you're doing, and only hurts in the long run.
def write_packet(self, header, **payload):
"""
Send a packet to the client.
"""
self.transport.write(make_packet(header, **payload))
def update_ping(self):
"""
Send a keepalive to the client.
"""
timestamp = timestamp_from_clock(reactor)
self.write_packet("ping", pid=timestamp)
def update_location(self):
"""
Send this client's location to the client.
Also let other clients know where this client is.
"""
# Don't bother trying to update things if the position's not yet
# synchronized. We could end up jettisoning them into the void.
if self.state != STATE_LOCATED:
return
x, y, z = self.location.pos
yaw, pitch = self.location.ori.to_fracs()
# Inform everybody of our new location.
packet = make_packet("teleport", eid=self.player.eid, x=x, y=y, z=z,
yaw=yaw, pitch=pitch)
self.factory.broadcast_for_others(packet, self)
# Inform ourselves of our new location.
packet = self.location.save_to_packet()
self.transport.write(packet)
def ascend(self, count):
"""
Ascend to the next XZ-plane.
``count`` is the number of ascensions to perform, and may be zero in
order to force this player to not be standing inside a block.
:returns: bool of whether the ascension was successful
This client must be located for this method to have any effect.
"""
if self.state != STATE_LOCATED:
return False
x, y, z = self.location.pos.to_block()
bigx, smallx, bigz, smallz = split_coords(x, z)
chunk = self.chunks[bigx, bigz]
column = [chunk.get_block((smallx, i, smallz))
for i in range(CHUNK_HEIGHT)]
# Special case: Ascend at most once, if the current spot isn't good.
if count == 0:
if (not column[y]) or column[y + 1] or column[y + 2]:
# Yeah, we're gonna need to move.
count += 1
else:
# Nope, we're fine where we are.
return True
for i in xrange(y, 255):
# Find the next spot above us which has a platform and two empty
# blocks of air.
if column[i] and (not column[i + 1]) and not column[i + 2]:
count -= 1
if not count:
break
else:
return False
self.location.pos = self.location.pos._replace(y=i * 32)
return True
def error(self, message):
"""
Error out.
This method sends ``message`` to the client as a descriptive error
message, then closes the connection.
"""
log.msg("Error: %s" % message)
self.transport.write(make_error_packet(message))
self.transport.loseConnection()
def play_notes(self, notes):
"""
Play some music.
Send a sequence of notes to the player. ``notes`` is a finite iterable
of pairs of instruments and pitches.
There is no way to time notes; if staggered playback is desired (and
it usually is!), then ``play_notes()`` should be called repeatedly at
the appropriate times.
This method turns the block beneath the player into a note block,
plays the requested notes through it, then turns it back into the
original block, all without actually modifying the chunk.
"""
x, y, z = self.location.pos.to_block()
if y:
y -= 1
bigx, smallx, bigz, smallz = split_coords(x, z)
if (bigx, bigz) not in self.chunks:
return
block = self.chunks[bigx, bigz].get_block((smallx, y, smallz))
meta = self.chunks[bigx, bigz].get_metadata((smallx, y, smallz))
self.write_packet("block", x=x, y=y, z=z,
type=blocks["note-block"].slot, meta=0)
for instrument, pitch in notes:
self.write_packet("note", x=x, y=y, z=z, pitch=pitch,
instrument=instrument)
self.write_packet("block", x=x, y=y, z=z, type=block, meta=meta)
# Automatic properties. Assigning to them causes the client to be notified
# of changes.
@property
def health(self):
return self._health
@health.setter
def health(self, value):
if not 0 <= value <= 20:
raise BetaClientError("Invalid health value %d" % value)
if self._health != value:
self.write_packet("health", hp=value, fp=0, saturation=0)
self._health = value
@property
def latency(self):
return self._latency
@latency.setter
def latency(self, value):
# Clamp the value to not exceed the boundaries of the packet. This is
# necessary even though, in theory, a ping this high is bad news.
value = clamp(value, 0, 65535)
# Check to see if this is a new value, and if so, alert everybody.
if self._latency != value:
packet = make_packet("players", name=self.username, online=True,
ping=value)
self.factory.broadcast(packet)
self._latency = value
class KickedProtocol(BetaServerProtocol):
"""
A very simple Beta protocol that helps enforce IP bans, Max Connections,
and Max Connections Per IP.
This protocol disconnects people as soon as they connect, with a helpful
message.
"""
def __init__(self, reason=None):
BetaServerProtocol.__init__(self)
if reason:
self.reason = reason
else:
self.reason = (
"This server doesn't like you very much."
" I don't like you very much either.")
def connectionMade(self):
self.error("%s" % self.reason)
class BetaProxyProtocol(BetaServerProtocol):
"""
A ``BetaServerProtocol`` that proxies for an InfiniCraft client.
"""
gateway = "server.wiki.vg"
def handshake(self, container):
self.write_packet("handshake", username="-")
def login(self, container):
self.username = container.username
self.write_packet("login", protocol=0, username="", seed=0,
dimension="earth")
url = urlunparse(("http", self.gateway, "/node/0/0/", None, None,
None))
d = getPage(url)
d.addCallback(self.start_proxy)
def start_proxy(self, response):
log.msg("Response: %s" % response)
log.msg("Starting proxy...")
address, port = response.split(":")
self.add_node(address, int(port))
def add_node(self, address, port):
"""
Add a new node to this client.
"""
from twisted.internet.endpoints import TCP4ClientEndpoint
log.msg("Adding node %s:%d" % (address, port))
endpoint = TCP4ClientEndpoint(reactor, address, port, 5)
self.node_factory = InfiniClientFactory()
d = endpoint.connect(self.node_factory)
d.addCallback(self.node_connected)
d.addErrback(self.node_connect_error)
def node_connected(self, protocol):
log.msg("Connected new node!")
def node_connect_error(self, reason):
log.err("Couldn't connect node!")
log.err(reason)
class BravoProtocol(BetaServerProtocol):
"""
A ``BetaServerProtocol`` suitable for serving MC worlds to clients.
This protocol really does need to be hooked up with a ``BravoFactory`` or
something very much like it.
"""
chunk_tasks = None
time_loop = None
eid = 0
last_dig = None
def __init__(self, config, name):
BetaServerProtocol.__init__(self)
self.config = config
self.config_name = "world %s" % name
# Retrieve the MOTD. Only needs to be done once.
self.motd = self.config.getdefault(self.config_name, "motd",
"BravoServer")
def register_hooks(self):
log.msg("Registering client hooks...")
plugin_types = {
"open_hooks": IWindowOpenHook,
"click_hooks": IWindowClickHook,
"close_hooks": IWindowCloseHook,
"pre_build_hooks": IPreBuildHook,
"post_build_hooks": IPostBuildHook,
"pre_dig_hooks": IPreDigHook,
"dig_hooks": IDigHook,
"sign_hooks": ISignHook,
"use_hooks": IUseHook,
}
for t in plugin_types:
setattr(self, t, getattr(self.factory, t))
log.msg("Registering policies...")
if self.factory.mode == "creative":
self.dig_policy = dig_policies["speedy"]
else:
self.dig_policy = dig_policies["notchy"]
log.msg("Registered client plugin hooks!")
def pre_handshake(self):
"""
Set up username and get going.
"""
if self.username in self.factory.protocols:
# This username's already taken; find a new one.
for name in username_alternatives(self.username):
if name not in self.factory.protocols:
self.username = name
break
else:
self.error("Your username is already taken.")
return False
return True
@inlineCallbacks
def authenticated(self):
BetaServerProtocol.authenticated(self)
# Init player, and copy data into it.
self.player = yield self.factory.world.load_player(self.username)
self.player.eid = self.eid
self.location = self.player.location
# Init players' inventory window.
self.inventory = InventoryWindow(self.player.inventory)
# *Now* we are in our factory's list of protocols. Be aware.
self.factory.protocols[self.username] = self
# Announce our presence.
self.factory.chat("%s is joining the game..." % self.username)
packet = make_packet("players", name=self.username, online=True,
ping=0)
self.factory.broadcast(packet)
# Craft our avatar and send it to already-connected other players.
packet = make_packet("create", eid=self.player.eid)
packet += self.player.save_to_packet()
self.factory.broadcast_for_others(packet, self)
# And of course spawn all of those players' avatars in our client as
# well.
for protocol in self.factory.protocols.itervalues():
# Skip over ourselves; otherwise, the client tweaks out and
# usually either dies or locks up.
if protocol is self:
continue
self.write_packet("create", eid=protocol.player.eid)
packet = protocol.player.save_to_packet()
packet += protocol.player.save_equipment_to_packet()
self.transport.write(packet)
# Send spawn and inventory.
spawn = self.factory.world.level.spawn
packet = make_packet("spawn", x=spawn[0], y=spawn[1], z=spawn[2])
packet += self.inventory.save_to_packet()
self.transport.write(packet)
# TODO: Send Abilities (0xca)
# TODO: Update Health (0x08)
# TODO: Update Experience (0x2b)
# Send weather.
self.transport.write(self.factory.vane.make_packet())
self.send_initial_chunk_and_location()
self.time_loop = LoopingCall(self.update_time)
self.time_loop.start(10)
def orientation_changed(self):
# Bang your head!
yaw, pitch = self.location.ori.to_fracs()
packet = make_packet("entity-orientation", eid=self.player.eid,
yaw=yaw, pitch=pitch)
self.factory.broadcast_for_others(packet, self)
def position_changed(self):
# Send chunks.
self.update_chunks()
for entity in self.entities_near(2):
if entity.name != "Item":
continue
left = self.player.inventory.add(entity.item, entity.quantity)
if left != entity.quantity:
if left != 0:
# partial collect
entity.quantity = left
else:
packet = make_packet("collect", eid=entity.eid,
destination=self.player.eid)
packet += make_packet("destroy", count=1, eid=[entity.eid])
self.factory.broadcast(packet)
self.factory.destroy_entity(entity)
packet = self.inventory.save_to_packet()
self.transport.write(packet)
def entities_near(self, radius):
"""
Obtain the entities within a radius of this player.
Radius is measured in blocks.
"""
chunk_radius = int(radius // 16 + 1)
chunkx, chunkz = self.location.pos.to_chunk()
minx = chunkx - chunk_radius
maxx = chunkx + chunk_radius + 1
minz = chunkz - chunk_radius
maxz = chunkz + chunk_radius + 1
for x, z in product(xrange(minx, maxx), xrange(minz, maxz)):
if (x, z) not in self.chunks:
continue
chunk = self.chunks[x, z]
yieldables = [entity for entity in chunk.entities
if self.location.distance(entity.location) <= (radius * 32)]
for i in yieldables:
yield i
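The chunk arithmetic above can be exercised in isolation: positions are stored in pixels (32 per block, so 512 per chunk), a block radius becomes ``radius * 32`` pixels, and the scan over-approximates by one chunk so the sphere's edge is never missed. A standalone sketch (the helper name is ours, not Bravo's):

```python
from itertools import product

def chunk_span(chunkx, chunkz, radius):
    # Chunks are 16 blocks on a side; pad by one chunk so entities near a
    # chunk boundary are not missed.
    chunk_radius = int(radius // 16 + 1)
    return list(product(range(chunkx - chunk_radius, chunkx + chunk_radius + 1),
                        range(chunkz - chunk_radius, chunkz + chunk_radius + 1)))

# A 2-block radius around chunk (0, 0) scans the 3x3 chunk neighborhood;
# the final per-entity distance test then compares against radius * 32 pixels.
```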
def chat(self, container):
if container.message.startswith("/"):
commands = retrieve_plugins(IChatCommand, factory=self.factory)
# Register aliases.
for plugin in commands.values():
for alias in plugin.aliases:
commands[alias] = plugin
params = container.message[1:].split(" ")
command = params.pop(0).lower()
if command and command in commands:
def cb(iterable):
for line in iterable:
self.write_packet("chat", message=line)
def eb(error):
self.write_packet("chat", message="Error: %s" %
error.getErrorMessage())
d = maybeDeferred(commands[command].chat_command,
self.username, params)
d.addCallback(cb)
d.addErrback(eb)
else:
self.write_packet("chat",
message="Unknown command: %s" % command)
else:
# Send the message up to the factory to be chatified.
message = "<%s> %s" % (self.username, container.message)
self.factory.chat(message)
def use(self, container):
"""
For each entity in proximity (4 blocks), check whether it is the
target of this packet, and call every hook that stated interest in
this entity type.
"""
nearby_players = self.factory.players_near(self.player, 4)
for entity in chain(self.entities_near(4), nearby_players):
if entity.eid == container.target:
for hook in self.use_hooks[entity.name]:
hook.use_hook(self.factory, self.player, entity,
container.button == 0)
break
@inlineCallbacks
def digging(self, container):
if container.x == -1 and container.z == -1 and container.y == 255:
# Lala-land dig packet. Discard it for now.
return
# Player drops currently holding item/block.
if (container.state == "dropped" and container.face == "-y" and
container.x == 0 and container.y == 0 and container.z == 0):
i = self.player.inventory
holding = i.holdables[self.player.equipped]
if holding:
primary, secondary, count = holding
if i.consume((primary, secondary), self.player.equipped):
dest = self.location.in_front_of(2)
coords = dest.pos._replace(y=dest.pos.y + 1)
self.factory.give(coords, (primary, secondary), 1)
# Re-send inventory.
packet = self.inventory.save_to_packet()
self.transport.write(packet)
# If no items in this slot are left, this player isn't
# holding an item anymore.
if i.holdables[self.player.equipped] is None:
packet = make_packet("entity-equipment",
eid=self.player.eid,
slot=0,
primary=65535,
count=1,
secondary=0
)
self.factory.broadcast_for_others(packet, self)
return
if container.state == "shooting":
self.shoot_arrow()
return
bigx, smallx, bigz, smallz = split_coords(container.x, container.z)
coords = smallx, container.y, smallz
try:
chunk = self.chunks[bigx, bigz]
except KeyError:
self.error("Couldn't dig in chunk (%d, %d)!" % (bigx, bigz))
return
block = chunk.get_block((smallx, container.y, smallz))
if container.state == "started":
# Run pre-dig hooks.
for hook in self.pre_dig_hooks:
cancel = yield maybeDeferred(hook.pre_dig_hook, self.player,
(container.x, container.y, container.z), block)
if cancel:
return
tool = self.player.inventory.holdables[self.player.equipped]
# Check to see whether we should break this block.
if self.dig_policy.is_1ko(block, tool):
self.run_dig_hooks(chunk, coords, blocks[block])
else:
# Set up a timer for breaking the block later.
dtime = time() + self.dig_policy.dig_time(block, tool)
self.last_dig = coords, block, dtime
elif container.state == "stopped":
# The client thinks it has broken a block. We shall see.
if not self.last_dig:
return
oldcoords, oldblock, dtime = self.last_dig
if oldcoords != coords or oldblock != block:
# Nope!
self.last_dig = None
return
dtime -= time()
# When enough time has elapsed, run the dig hooks.
d = deferLater(reactor, max(dtime, 0), self.run_dig_hooks, chunk,
coords, blocks[block])
d.addCallback(lambda none: setattr(self, "last_dig", None))
def run_dig_hooks(self, chunk, coords, block):
"""
Destroy a block and run the post-destroy dig hooks.
"""
x, y, z = coords
if block.breakable:
chunk.destroy(coords)
l = []
for hook in self.dig_hooks:
l.append(maybeDeferred(hook.dig_hook, chunk, x, y, z, block))
dl = DeferredList(l)
dl.addCallback(lambda none: self.factory.flush_chunk(chunk))
@inlineCallbacks
def build(self, container):
"""
Handle a build packet.
Several things must happen. First, the packet's contents are examined
to ensure that the packet is valid. Next, we check whether the packet
is opening a windowed object; if not, a build is run.
"""
# Is the target within our purview? We don't do a very strict
# containment check, but we *do* require that the chunk be loaded.
bigx, smallx, bigz, smallz = split_coords(container.x, container.z)
try:
chunk = self.chunks[bigx, bigz]
except KeyError:
self.error("Couldn't select in chunk (%d, %d)!" % (bigx, bigz))
return
target = blocks[chunk.get_block((smallx, container.y, smallz))]
# If it's a chest, hax.
if target.name == "chest":
from bravo.policy.windows import Chest
w = Chest()
self.windows[self.wid] = w
w.open()
self.write_packet("window-open", wid=self.wid, type=w.identifier,
title=w.title, slots=w.slots)
self.wid += 1
return
elif target.name == "workbench":
from bravo.policy.windows import Workbench
w = Workbench()
self.windows[self.wid] = w
w.open()
self.write_packet("window-open", wid=self.wid, type=w.identifier,
title=w.title, slots=w.slots)
self.wid += 1
return
# Try to open it first
for hook in self.open_hooks:
window = yield maybeDeferred(hook.open_hook, self, container,
chunk.get_block((smallx, container.y, smallz)))
if window:
self.write_packet("window-open", wid=window.wid,
type=window.identifier, title=window.title,
slots=window.slots_num)
packet = window.save_to_packet()
self.transport.write(packet)
# window opened
return
# Ignore clients that think -1 is placeable.
if container.primary == -1:
return
# Special case when face is "noop": Update the status of the currently
# held block rather than placing a new block.
if container.face == "noop":
return
# If the target block is vanishable, then adjust our aim accordingly.
if target.vanishes:
container.face = "+y"
container.y -= 1
if container.primary in blocks:
block = blocks[container.primary]
elif container.primary in items:
block = items[container.primary]
else:
log.err("Ignoring request to place unknown block %d" %
container.primary)
return
# Run pre-build hooks. These hooks are able to interrupt the build
# process.
builddata = BuildData(block, 0x0, container.x, container.y,
container.z, container.face)
for hook in self.pre_build_hooks:
cont, builddata, cancel = yield maybeDeferred(hook.pre_build_hook,
self.player, builddata)
if cancel:
# Flush damaged chunks.
for chunk in self.chunks.itervalues():
self.factory.flush_chunk(chunk)
return
if not cont:
break
# Run the build.
try:
yield maybeDeferred(self.run_build, builddata)
except BuildError:
return
newblock = builddata.block.slot
coords = adjust_coords_for_face(
(builddata.x, builddata.y, builddata.z), builddata.face)
# Run post-build hooks. These are merely callbacks which cannot
# interfere with the build process, largely because the build process
# already happened.
for hook in self.post_build_hooks:
yield maybeDeferred(hook.post_build_hook, self.player, coords,
builddata.block)
# Feed automatons.
for automaton in self.factory.automatons:
if newblock in automaton.blocks:
automaton.feed(coords)
# Re-send inventory.
# XXX this could be optimized if/when inventories track damage.
packet = self.inventory.save_to_packet()
self.transport.write(packet)
# Flush damaged chunks.
for chunk in self.chunks.itervalues():
self.factory.flush_chunk(chunk)
def run_build(self, builddata):
block, metadata, x, y, z, face = builddata
# Don't place items as blocks.
if block.slot not in blocks:
raise BuildError("Couldn't build item %r as block" % block)
# Check for orientable blocks.
if not metadata and block.orientable():
metadata = block.orientation(face)
if metadata is None:
# Oh, I guess we can't even place the block on this face.
raise BuildError("Couldn't orient block %r on face %s" %
(block, face))
# Make sure we can remove it from the inventory first.
if not self.player.inventory.consume((block.slot, 0),
self.player.equipped):
# Okay, first one was a bust; maybe we can consume the related
# block for dropping instead?
if not self.player.inventory.consume(block.drop,
self.player.equipped):
raise BuildError("Couldn't consume %r from inventory" % block)
# Offset coords according to face.
x, y, z = adjust_coords_for_face((x, y, z), face)
# Set the block and data.
dl = [self.factory.world.set_block((x, y, z), block.slot)]
if metadata:
dl.append(self.factory.world.set_metadata((x, y, z), metadata))
return DeferredList(dl)
def equip(self, container):
self.player.equipped = container.slot
# Inform everyone about the item the player is holding now.
item = self.player.inventory.holdables[self.player.equipped]
if item is None:
# Empty slot. Use signed short -1.
primary, secondary = -1, 0
else:
primary, secondary, count = item
packet = make_packet("entity-equipment",
eid=self.player.eid,
slot=0,
primary=primary,
count=1,
secondary=secondary
)
self.factory.broadcast_for_others(packet, self)
def pickup(self, container):
self.factory.give((container.x, container.y, container.z),
(container.primary, container.secondary), container.count)
def animate(self, container):
# Broadcast the animation of the entity to everyone else. Only the
# swing-arm animation is sent by Notchian clients.
packet = make_packet("animate",
eid=self.player.eid,
animation=container.animation
)
self.factory.broadcast_for_others(packet, self)
def wclose(self, container):
wid = container.wid
if wid == 0:
# WID 0 is reserved for the client inventory.
pass
elif wid in self.windows:
w = self.windows.pop(wid)
w.close()
else:
self.error("WID %d doesn't exist." % wid)
def waction(self, container):
wid = container.wid
if wid in self.windows:
w = self.windows[wid]
result = w.action(container.slot, container.button,
container.token, container.shift,
container.primary)
self.write_packet("window-token", wid=wid, token=container.token,
acknowledged=result)
else:
self.error("WID %d doesn't exist." % wid)
def wcreative(self, container):
"""
A slot was altered in creative mode.
"""
# XXX Sometimes the container doesn't contain all of this information.
# What then?
applied = self.inventory.creative(container.slot, container.primary,
container.secondary, container.count)
if applied:
# Inform other players about changes to this player's equipment.
equipped_slot = self.player.equipped + 36
if container.slot == equipped_slot:
packet = make_packet("entity-equipment",
eid=self.player.eid,
# XXX why 0? why not the actual slot?
slot=0,
primary=container.primary,
count=1,
secondary=container.secondary,
)
self.factory.broadcast_for_others(packet, self)
def shoot_arrow(self):
# TODO 1. Create arrow entity: arrow = Arrow(self.factory, self.player)
# 2. Register within the factory: self.factory.register_entity(arrow)
# 3. Run it: arrow.run()
pass
def sign(self, container):
bigx, smallx, bigz, smallz = split_coords(container.x, container.z)
try:
chunk = self.chunks[bigx, bigz]
except KeyError:
self.error("Couldn't handle sign in chunk (%d, %d)!" % (bigx, bigz))
return
if (smallx, container.y, smallz) in chunk.tiles:
new = False
s = chunk.tiles[smallx, container.y, smallz]
else:
new = True
s = Sign(smallx, container.y, smallz)
chunk.tiles[smallx, container.y, smallz] = s
s.text1 = container.line1
s.text2 = container.line2
s.text3 = container.line3
s.text4 = container.line4
chunk.dirty = True
# The best part of a sign isn't making one, it's showing everybody
# else on the server that you did.
packet = make_packet("sign", container)
self.factory.broadcast_for_chunk(packet, bigx, bigz)
# Run sign hooks.
for hook in self.sign_hooks:
hook.sign_hook(self.factory, chunk, container.x, container.y,
container.z, [s.text1, s.text2, s.text3, s.text4], new)
def complete(self, container):
"""
Attempt to tab-complete user names.
"""
needle = container.autocomplete
usernames = self.factory.protocols.keys()
results = complete(needle, usernames)
self.write_packet("tab", autocomplete=results)
def settings_packet(self, container):
"""
Acknowledge a change of settings and update chunk distance.
"""
super(BravoProtocol, self).settings_packet(container)
self.update_chunks()
def disable_chunk(self, x, z):
key = x, z
log.msg("Disabling chunk %d, %d" % key)
if key not in self.chunks:
log.msg("...But the chunk wasn't loaded!")
return
# Remove the chunk from cache.
chunk = self.chunks.pop(key)
eids = [e.eid for e in chunk.entities]
self.write_packet("destroy", count=len(eids), eid=eids)
# Clear chunk data on the client.
self.write_packet("chunk", x=x, z=z, continuous=False, primary=0x0,
add=0x0, data="")
def enable_chunk(self, x, z):
"""
Request a chunk.
This function will asynchronously obtain the chunk, and send it on the
wire.
:returns: `Deferred` that will be fired when the chunk is obtained,
with no arguments
"""
log.msg("Enabling chunk %d, %d" % (x, z))
if (x, z) in self.chunks:
log.msg("...But the chunk was already loaded!")
return succeed(None)
d = self.factory.world.request_chunk(x, z)
@d.addCallback
def cb(chunk):
self.chunks[x, z] = chunk
return chunk
d.addCallback(self.send_chunk)
return d
def send_chunk(self, chunk):
log.msg("Sending chunk %d, %d" % (chunk.x, chunk.z))
packet = chunk.save_to_packet()
self.transport.write(packet)
for entity in chunk.entities:
packet = entity.save_to_packet()
self.transport.write(packet)
for entity in chunk.tiles.itervalues():
if entity.name == "Sign":
packet = entity.save_to_packet()
self.transport.write(packet)
def send_initial_chunk_and_location(self):
"""
Send the initial chunks and location.
diff --git a/bravo/chunk.py b/bravo/chunk.py
index 2a8b6f7..697dfcc 100644
--- a/bravo/chunk.py
+++ b/bravo/chunk.py
@@ -18,632 +18,632 @@ class ChunkWarning(Warning):
"""
def check_bounds(f):
"""
Decorate a function or method to have its first positional argument be
treated as an (x, y, z) tuple which must fit inside chunk boundaries of
16, CHUNK_HEIGHT, and 16, respectively.
A warning will be raised if the bounds check fails.
"""
@wraps(f)
def deco(chunk, coords, *args, **kwargs):
x, y, z = coords
# Coordinates were out-of-bounds; warn and run away.
if not (0 <= x < 16 and 0 <= z < 16 and 0 <= y < CHUNK_HEIGHT):
warn("Coordinates %s are OOB in %s() of %s, ignoring call"
% (coords, f.func_name, chunk), ChunkWarning, stacklevel=2)
# A concession towards where this decorator will be used. The
# value is likely to be discarded either way, but if the value is
# used, we shouldn't horribly die because of None/0 mismatch.
return 0
return f(chunk, coords, *args, **kwargs)
return deco
def ci(x, y, z):
"""
Turn an (x, y, z) tuple into a chunk index.
This is really a macro and not a function, but Python doesn't know the
difference. Hopefully this is faster on PyPy than on CPython.
"""
return (x * 16 + z) * CHUNK_HEIGHT + y
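For intuition, the index layout can be checked directly (CHUNK_HEIGHT assumed 256 here): y varies fastest, then z, then x, and a full chunk spans exactly 16 * 16 * CHUNK_HEIGHT entries.

```python
CHUNK_HEIGHT = 256

def ci(x, y, z):
    # Same layout as above: y varies fastest, then z, then x.
    return (x * 16 + z) * CHUNK_HEIGHT + y

# Stepping y moves one entry, stepping z moves CHUNK_HEIGHT entries, and
# stepping x moves 16 * CHUNK_HEIGHT entries.
```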
def segment_array(a):
"""
Chop up a chunk-sized array into sixteen components.
The chops are done in order to produce the smaller chunks preferred by
modern clients.
"""
l = [array(a.typecode) for chaff in range(16)]
index = 0
for i in range(0, len(a), 16):
l[index].extend(a[i:i + 16])
index = (index + 1) % 16
return l
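The deal is round-robin in 16-entry runs; with a 256-entry stand-in array (a real chunk array is much longer), each of the sixteen segments receives exactly one run:

```python
from array import array

def segment_array(a):
    l = [array(a.typecode) for chaff in range(16)]
    index = 0
    for i in range(0, len(a), 16):
        l[index].extend(a[i:i + 16])
        index = (index + 1) % 16
    return l

segments = segment_array(array("B", range(256)))
# segments[0] holds entries 0-15, segments[1] holds 16-31, and so on.
```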
def make_glows():
"""
Set up glow tables.
These tables provide glow maps for illuminated points.
"""
glow = [None] * 16
for i in range(16):
dim = 2 * i + 1
glow[i] = array("b", [0] * (dim**3))
for x, y, z in product(xrange(dim), repeat=3):
distance = abs(x - i) + abs(y - i) + abs(z - i)
glow[i][(x * dim + y) * dim + z] = i + 1 - distance
glow[i] = array("B", [clamp(x, 0, 15) for x in glow[i]])
return glow
glow = make_glows()
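Each table is a cube of side 2*i + 1: the center holds the full light value i + 1, falling off by one per block of Manhattan distance and clamped into [0, 15]. A standalone sketch of a single table:

```python
from array import array
from itertools import product

def clamp(value, low, high):
    return min(max(value, low), high)

def make_glow(i):
    dim = 2 * i + 1
    cube = array("b", [0] * (dim ** 3))
    for x, y, z in product(range(dim), repeat=3):
        distance = abs(x - i) + abs(y - i) + abs(z - i)
        cube[(x * dim + y) * dim + z] = i + 1 - distance
    return array("B", [clamp(v, 0, 15) for v in cube])

strength = 14
dim = 2 * strength + 1
table = make_glow(strength)
# The center of the cube carries the brightest value; the corners are dark.
```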
def composite_glow(target, strength, x, y, z):
"""
Composite a light source onto a lightmap.
The exact operation is not quite unlike an add.
"""
ambient = glow[strength]
xbound, ybound, zbound = 16, CHUNK_HEIGHT, 16
sx = x - strength
sy = y - strength
sz = z - strength
ex = x + strength
ey = y + strength
ez = z + strength
si, sj, sk = 0, 0, 0
ei, ej, ek = strength * 2, strength * 2, strength * 2
if sx < 0:
sx, si = 0, -sx
if sy < 0:
sy, sj = 0, -sy
if sz < 0:
sz, sk = 0, -sz
if ex > xbound:
ex, ei = xbound, ei - ex + xbound
if ey > ybound:
ey, ej = ybound, ej - ey + ybound
if ez > zbound:
ez, ek = zbound, ek - ez + zbound
adim = 2 * strength + 1
# Composite! Apologies for the loops.
for (tx, ax) in zip(range(sx, ex), range(si, ei)):
for (tz, az) in zip(range(sz, ez), range(sk, ek)):
for (ty, ay) in zip(range(sy, ey), range(sj, ej)):
ambient_index = (ax * adim + az) * adim + ay
target[ci(tx, ty, tz)] += ambient[ambient_index]
def iter_neighbors(coords):
"""
Iterate over the chunk-local coordinates surrounding the given
coordinates.
All coordinates are chunk-local.
Coordinates which are not valid chunk-local coordinates will not be
generated.
"""
x, z, y = coords
for dx, dz, dy in (
(1, 0, 0),
(-1, 0, 0),
(0, 1, 0),
(0, -1, 0),
(0, 0, 1),
(0, 0, -1)):
nx = x + dx
nz = z + dz
ny = y + dy
if not (0 <= nx < 16 and
0 <= nz < 16 and
0 <= ny < CHUNK_HEIGHT):
continue
yield nx, nz, ny
def neighboring_light(glow, block):
"""
Calculate the amount of light that should be shone on a block.
``glow`` is the brightest neighboring light. ``block`` is the slot of the
block being illuminated.
The return value is always a valid light value.
"""
return clamp(glow - blocks[block].dim, 0, 15)
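In other words, this is a clamped subtraction. A sketch with an assumed dimming value (the real value comes from ``blocks[block].dim`` in the block registry):

```python
def clamp(value, low, high):
    return min(max(value, low), high)

def neighboring_light(glow, dim):
    # ``dim`` stands in for blocks[block].dim.
    return clamp(glow - dim, 0, 15)

# Fully transparent blocks (dim 0) pass light through unchanged; strongly
# dimming blocks swallow it entirely.
```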
class Chunk(object):
"""
A chunk of blocks.
Chunks are large pieces of world geometry (block data). The blocks, light
maps, and associated metadata are stored in chunks. Chunks are
always measured 16xCHUNK_HEIGHTx16 and are aligned on 16x16 boundaries in
the xz-plane.
:cvar bool dirty: Whether this chunk needs to be flushed to disk.
:cvar bool populated: Whether this chunk has had its initial block data
filled out.
"""
all_damaged = False
populated = False
dirtied = None
"""
Optional hook to be called when this chunk becomes dirty.
"""
_dirty = True
"""
Internal flag describing whether the chunk is dirty. Don't touch directly;
use the ``dirty`` property instead.
"""
def __init__(self, x, z):
"""
:param int x: X coordinate in chunk coords
:param int z: Z coordinate in chunk coords
:ivar array.array heightmap: Tracks the tallest block in each xz-column.
:ivar bool all_damaged: Flag for forcing the entire chunk to be
damaged. This is for efficiency; past a certain point, it is not
efficient to batch block updates or track damage. Heavily damaged
chunks have their damage represented as a complete resend of the
entire chunk.
"""
self.x = int(x)
self.z = int(z)
self.heightmap = array("B", [0] * (16 * 16))
self.blocklight = array("B", [0] * (16 * 16 * CHUNK_HEIGHT))
self.sections = [Section() for i in range(16)]
self.entities = set()
self.tiles = {}
self.damaged = set()
def __repr__(self):
return "Chunk(%d, %d)" % (self.x, self.z)
__str__ = __repr__
@property
def dirty(self):
return self._dirty
@dirty.setter
def dirty(self, value):
if value and not self._dirty:
# Notify whoever cares.
if self.dirtied is not None:
self.dirtied(self)
self._dirty = value
def regenerate_heightmap(self):
"""
Regenerate the height map array.
The height map is merely the position of the tallest block in any
xz-column.
"""
for x in range(16):
for z in range(16):
column = x * 16 + z
for y in range(CHUNK_HEIGHT - 1, -1, -1):
if self.get_block((x, y, z)):
break
self.heightmap[column] = y
def regenerate_blocklight(self):
lightmap = array("L", [0] * (16 * 16 * CHUNK_HEIGHT))
for x, z, y in iterchunk():
block = self.get_block((x, y, z))
if block in glowing_blocks:
composite_glow(lightmap, glowing_blocks[block], x, y, z)
self.blocklight = array("B", [clamp(x, 0, 15) for x in lightmap])
def regenerate_skylight(self):
"""
Regenerate the ambient light map.
Each block's individual light comes from two sources. The ambient
light comes from the sky.
The height map must be valid for this method to produce valid results.
"""
# Create an array of skylights, and a mask of dimming blocks.
lights = [0xf] * (16 * 16)
mask = [0x0] * (16 * 16)
# For each y-level, we're going to update the mask, apply it to the
# lights, apply the lights to the section, and then blur the lights
# and move downwards. Since empty sections are full of air, and air
# doesn't ever dim, ignoring empty sections should be a correct way
# to speed things up. Another optimization is that the process ends
# early if the entire slice of lights is dark.
for section in reversed(self.sections):
if not section:
continue
for y in range(15, -1, -1):
# Early-out if there's no more light left.
if not any(lights):
break
# Update the mask.
for x, z in XZ:
offset = x * 16 + z
block = section.get_block((x, y, z))
mask[offset] = blocks[block].dim
# Apply the mask to the lights.
for i, dim in enumerate(mask):
# Keep it positive.
lights[i] = max(0, lights[i] - dim)
# Apply the lights to the section.
for x, z in XZ:
offset = x * 16 + z
section.set_skylight((x, y, z), lights[offset])
# XXX blur the lights
# And continue moving downward.
def regenerate(self):
"""
Regenerate all auxiliary tables.
"""
self.regenerate_heightmap()
self.regenerate_blocklight()
self.regenerate_skylight()
self.dirty = True
def damage(self, coords):
"""
Record damage on this chunk.
"""
if self.all_damaged:
return
x, y, z = coords
self.damaged.add(coords)
# The number 176 represents the threshold at which it is cheaper to
# resend the entire chunk instead of individual blocks.
if len(self.damaged) > 176:
self.all_damaged = True
self.damaged.clear()
def is_damaged(self):
"""
Determine whether any damage is pending on this chunk.
:rtype: bool
:returns: True if any damage is pending on this chunk, False if not.
"""
return self.all_damaged or bool(self.damaged)
def get_damage_packet(self):
"""
Make a packet representing the current damage on this chunk.
This method is not private, but some care should be taken with it,
since it wraps some fairly cryptic internal data structures.
If this chunk is currently undamaged, this method will return an empty
string, which should be safe to treat as a packet. Please check with
`is_damaged()` before doing this if you need to optimize this case.
To avoid extra overhead, this method should really be used in
conjunction with `Factory.broadcast_for_chunk()`.
Do not forget to clear this chunk's damage! Callers are responsible
for doing this.
>>> packet = chunk.get_damage_packet()
>>> factory.broadcast_for_chunk(packet, chunk.x, chunk.z)
>>> chunk.clear_damage()
:rtype: str
:returns: String representation of the packet.
"""
if self.all_damaged:
# Resend the entire chunk!
return self.save_to_packet()
elif not self.damaged:
# Send nothing at all; we don't even have a scratch on us.
return ""
elif len(self.damaged) == 1:
# Use a single block update packet. Find the first (only) set bit
# in the damaged array, and use it as an index.
coords = next(iter(self.damaged))
block = self.get_block(coords)
metadata = self.get_metadata(coords)
x, y, z = coords
return make_packet("block",
x=x + self.x * 16,
y=y,
z=z + self.z * 16,
type=block,
meta=metadata)
else:
# Use a batch update.
records = []
for coords in self.damaged:
block = self.get_block(coords)
metadata = self.get_metadata(coords)
x, y, z = coords
record = x << 28 | z << 24 | y << 16 | block << 4 | metadata
records.append(record)
data = "".join(pack(">I", record) for record in records)
return make_packet("batch", x=self.x, z=self.z,
count=len(records), data=data)
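The batch record layout packs chunk-local x and z into the top two nibbles, then eight bits of y, twelve bits of block type, and a metadata nibble, serialized as a big-endian 32-bit word. A round-trip sketch (the helper names are ours):

```python
from struct import pack, unpack

def pack_record(x, y, z, block, metadata):
    # Same bit layout as the record expression above.
    return x << 28 | z << 24 | y << 16 | block << 4 | metadata

def unpack_record(record):
    return (record >> 28 & 0xf, record >> 16 & 0xff, record >> 24 & 0xf,
            record >> 4 & 0xfff, record & 0xf)

data = pack(">I", pack_record(3, 64, 7, 1, 0))
# unpack_record() recovers (x, y, z, block, metadata) from the wire word.
```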
def clear_damage(self):
"""
Clear this chunk's damage.
"""
self.damaged.clear()
self.all_damaged = False
def save_to_packet(self):
"""
Generate a chunk packet.
"""
mask = 0
packed = []
ls = segment_array(self.blocklight)
for i, section in enumerate(self.sections):
if any(section.blocks):
mask |= 1 << i
packed.append(section.blocks.tostring())
for i, section in enumerate(self.sections):
if mask & 1 << i:
packed.append(pack_nibbles(section.metadata))
for i, l in enumerate(ls):
if mask & 1 << i:
packed.append(pack_nibbles(l))
for i, section in enumerate(self.sections):
if mask & 1 << i:
packed.append(pack_nibbles(section.skylight))
# Fake the biome data.
packed.append("\x00" * 256)
packet = make_packet("chunk", x=self.x, z=self.z, continuous=True,
primary=mask, add=0x0, data="".join(packed))
return packet
@check_bounds
def get_block(self, coords):
"""
Look up a block value.
:param tuple coords: coordinate triplet
:rtype: int
:returns: int representing block type
"""
x, y, z = coords
index, y = divmod(y, 16)
return self.sections[index].get_block((x, y, z))
@check_bounds
def set_block(self, coords, block):
"""
Update a block value.
:param tuple coords: coordinate triplet
:param int block: block type
"""
x, y, z = coords
index, section_y = divmod(y, 16)
column = x * 16 + z
if self.get_block(coords) != block:
self.sections[index].set_block((x, section_y, z), block)
if not self.populated:
return
# Regenerate heightmap at this coordinate.
if block:
self.heightmap[column] = max(self.heightmap[column], y)
else:
# If we replace the highest block with air, we need to go
# through all blocks below it to find the new top block.
height = self.heightmap[column]
if y == height:
for y in range(height, -1, -1):
if self.get_block((x, y, z)):
break
self.heightmap[column] = y
# Do the blocklight at this coordinate, if appropriate.
if block in glowing_blocks:
composite_glow(self.blocklight, glowing_blocks[block],
x, y, z)
bl = [clamp(light, 0, 15) for light in self.blocklight]
self.blocklight = array("B", bl)
# And the skylight.
glow = max(self.get_skylight((nx, ny, nz))
for nx, nz, ny in iter_neighbors((x, z, y)))
self.set_skylight((x, y, z), neighboring_light(glow, block))
self.dirty = True
self.damage(coords)
@check_bounds
def get_metadata(self, coords):
"""
Look up metadata.
:param tuple coords: coordinate triplet
:rtype: int
"""
x, y, z = coords
index, y = divmod(y, 16)
return self.sections[index].get_metadata((x, y, z))
@check_bounds
def set_metadata(self, coords, metadata):
"""
Update metadata.
:param tuple coords: coordinate triplet
:param int metadata:
"""
if self.get_metadata(coords) != metadata:
x, y, z = coords
index, y = divmod(y, 16)
self.sections[index].set_metadata((x, y, z), metadata)
self.dirty = True
self.damage(coords)
@check_bounds
def get_skylight(self, coords):
"""
Look up skylight value.
:param tuple coords: coordinate triplet
:rtype: int
"""
x, y, z = coords
index, y = divmod(y, 16)
return self.sections[index].get_skylight((x, y, z))
@check_bounds
def set_skylight(self, coords, value):
"""
Update skylight value.
:param tuple coords: coordinate triplet
:param int value: skylight value
"""
if self.get_skylight(coords) != value:
x, y, z = coords
index, y = divmod(y, 16)
self.sections[index].set_skylight((x, y, z), value)
@check_bounds
def destroy(self, coords):
"""
Destroy the block at the given coordinates.
This may or may not set the block to be full of air; it uses the
block's preferred replacement. For example, ice generally turns to
water when destroyed.
This is safe as a no-op; for example, destroying a block of air with
no metadata is not going to cause state changes.
:param tuple coords: coordinate triplet
"""
block = blocks[self.get_block(coords)]
self.set_block(coords, block.replace)
self.set_metadata(coords, 0)
def height_at(self, x, z):
"""
Get the height of an xz-column of blocks.
:param int x: X coordinate
:param int z: Z coordinate
:rtype: int
:returns: The height of the given column of blocks.
"""
return self.heightmap[x * 16 + z]
def sed(self, search, replace):
"""
Execute a search and replace on all blocks in this chunk.
Named after the ubiquitous Unix tool. Does a semantic
s/search/replace/g on this chunk's blocks.
:param int search: block to find
:param int replace: block to use as a replacement
"""
for section in self.sections:
for i, block in enumerate(section.blocks):
if block == search:
section.blocks[i] = replace
self.all_damaged = True
self.dirty = True
diff --git a/bravo/location.py b/bravo/location.py
index 437e2fa..c05ae89 100644
--- a/bravo/location.py
+++ b/bravo/location.py
@@ -1,252 +1,252 @@
from __future__ import division
from collections import namedtuple
from copy import copy
from math import atan2, cos, degrees, radians, pi, sin, sqrt
import operator
from construct import Container
from bravo.beta.packets import make_packet
def _combinator(op):
def f(self, other):
return self._replace(x=op(self.x, other.x), y=op(self.y, other.y),
z=op(self.z, other.z))
return f
class Position(namedtuple("Position", "x, y, z")):
"""
The coordinates pointing to an entity.
Positions are *always* stored as integer absolute pixel coordinates.
"""
__add__ = _combinator(operator.add)
__sub__ = _combinator(operator.sub)
__mul__ = _combinator(operator.mul)
__div__ = _combinator(operator.div)
@classmethod
def from_player(cls, x, y, z):
"""
Create a ``Position`` from floating-point block coordinates.
"""
return cls(int(x * 32), int(y * 32), int(z * 32))
def to_player(self):
"""
Return this position as floating-point block coordinates.
"""
return self.x / 32, self.y / 32, self.z / 32
def to_block(self):
"""
Return this position as block coordinates.
"""
return int(self.x // 32), int(self.y // 32), int(self.z // 32)
def to_chunk(self):
return int(self.x // 32 // 16), int(self.z // 32 // 16)
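The fixed-point scheme boils down to: 32 pixels per block and 512 pixels per chunk. A sketch of the conversions on a single axis:

```python
def to_block(px):
    # Floor division keeps negative coordinates in the correct block.
    return int(px // 32)

def to_chunk(px):
    return int(px // 32 // 16)

# Pixel 3216 (block coordinate 100.5) falls in block 100 and chunk 6.
```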
def distance(self, other):
"""
Return the distance between this position and another, in absolute
pixels.
"""
dx = (self.x - other.x)**2
dy = (self.y - other.y)**2
dz = (self.z - other.z)**2
return int(sqrt(dx + dy + dz))
def heading(self, other):
"""
Return the heading from this position to another, in radians.
This is a wrapper for the common atan2() expression found in games,
meant to help encapsulate semantics and keep copy-paste errors from
happening.
"""
theta = atan2(self.z - other.z, self.x - other.x) + pi / 2
if theta < 0:
theta += pi * 2
return theta
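The rotation and normalization can be seen in a standalone sketch over plain coordinate pairs:

```python
from math import atan2, pi

def heading(sx, sz, ox, oz):
    # Same expression as above: rotate atan2() by pi/2, then wrap negative
    # angles back into [0, 2 * pi).
    theta = atan2(sz - oz, sx - ox) + pi / 2
    if theta < 0:
        theta += pi * 2
    return theta
```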
class Orientation(namedtuple("Orientation", "theta, phi")):
"""
The angles corresponding to the heading of an entity.
Theta and phi are very much like the theta and phi of spherical
coordinates, except that phi's zero is perpendicular to the XZ-plane
rather than pointing straight up or straight down.
Orientation is stored in floating-point radians, for simplicity of
computation. Unfortunately, no wire protocol speaks radians, so several
conversion methods are provided for sanity and convenience.
The ``from_degs()`` and ``to_degs()`` methods provide integer degrees.
This form is called "yaw and pitch" by protocol documentation.
"""
@classmethod
def from_degs(cls, yaw, pitch):
"""
Create an ``Orientation`` from integer degrees.
"""
return cls(radians(yaw) % (pi * 2), radians(pitch))
def to_degs(self):
"""
Return this orientation as integer degrees.
"""
return int(round(degrees(self.theta))), int(round(degrees(self.phi)))
def to_fracs(self):
"""
Return this orientation as fractions of a byte.
"""
yaw = int(self.theta * 255 / (2 * pi)) % 256
pitch = int(self.phi * 255 / (2 * pi)) % 256
return yaw, pitch
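The three representations round-trip predictably: radians internally, integer degrees and byte fractions on the wire. A standalone sketch of the conversions:

```python
from math import pi, radians

def from_degs(yaw, pitch):
    # Wrap yaw into [0, 2 * pi), as the constructor above does.
    return radians(yaw) % (pi * 2), radians(pitch)

def to_fracs(theta, phi):
    yaw = int(theta * 255 / (2 * pi)) % 256
    pitch = int(phi * 255 / (2 * pi)) % 256
    return yaw, pitch

# A quarter-turn of yaw lands near 64/256 of the byte range.
```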
class Location(object):
"""
The position and orientation of an entity.
"""
def __init__(self):
# Position in pixels.
self.pos = Position(0, 0, 0)
# Start with a relatively sane stance.
self.stance = 1.0
# Orientation, in radians.
self.ori = Orientation(0.0, 0.0)
# Whether we are in the air.
self.grounded = False
@classmethod
def at_block(cls, x, y, z):
"""
Pinpoint a location at a certain block.
This constructor is intended to aid in pinpointing locations at a
specific block rather than forcing users to do the pixel<->block maths
themselves. Admittedly, the maths in question aren't hard, but there's
no reason to avoid this encapsulation.
"""
location = cls()
location.pos = Position(x * 32, y * 32, z * 32)
return location
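The block/pixel maths mentioned in the docstring is just a scale factor of 32. Two hypothetical helpers (not part of the module) showing the round trip:

```python
# One block is 32 "pixels" on each axis; at_block() scales on the way in.
def block_to_pixels(x, y, z):
    return x * 32, y * 32, z * 32

def pixels_to_block(px, py, pz):
    # Floor division, so any position inside a block maps to that block.
    return px // 32, py // 32, pz // 32

assert block_to_pixels(1, 2, 3) == (32, 64, 96)
assert pixels_to_block(33, 64, 96) == (1, 2, 3)
```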
def __repr__(self):
return "<Location(%s, (%d, %d (+%.6f), %d), (%.2f, %.2f))>" % (
"grounded" if self.grounded else "midair", self.pos.x, self.pos.y,
self.stance - self.pos.y, self.pos.z, self.ori.theta,
self.ori.phi)
__str__ = __repr__
def clamp(self):
"""
Force this location to be sane.
Forces the position and orientation to be sane, then fixes up
location-specific things, like stance.
:returns: bool indicating whether this location had to be altered
"""
clamped = False
y = self.pos.y
# Clamp Y. We take precautions here and forbid things to go up past
# the top of the world; this tends to strand entities up in the sky
# where they cannot get down. We also forbid entities from falling
# past bedrock.
# TODO: Fix me, I'm broken
+ # XXX how am I broken?
if not (32 * 1) <= y:
y = max(y, 32 * 1)
self.pos = self.pos._replace(y=y)
clamped = True
# Stance is the current jumping position, plus a small offset of
# around 0.1. In the Alpha server, it must be between 0.1 and 1.65, or
# the anti-grounded code kicks the client. In the Beta server, though,
# the clamp is different. Experimentally, the stance can range from
# 1.5 (crouching) to 2.4 (jumping). At this point, we enforce some
# sanity on our client, and force the stance to a reasonable value.
fy = y / 32
if not 1.5 < (self.stance - fy) < 2.4:
# Standard standing stance is 1.62.
self.stance = fy + 1.62
clamped = True
return clamped
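The stance rule in ``clamp()`` can be pulled out into a pure function for illustration (a sketch, not part of the module):

```python
def clamp_stance(stance, pixel_y):
    # Mirror of the stance logic in clamp(): keep the stance between
    # 1.5 and 2.4 blocks above the feet, else reset to standing (1.62).
    fy = pixel_y / 32
    if not 1.5 < (stance - fy) < 2.4:
        return fy + 1.62, True
    return stance, False

# A sane standing stance is left alone...
assert clamp_stance(1.62, 0) == (1.62, False)
# ...but an impossible one is forced back to feet + 1.62.
assert clamp_stance(5.0, 0) == (1.62, True)
```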
def save_to_packet(self):
"""
Returns a position/look/grounded packet.
"""
# Get our position.
x, y, z = self.pos.to_player()
# Grab orientation.
yaw, pitch = self.ori.to_degs()
# Note: When this packet is sent from the server, the 'y' and 'stance' fields are swapped.
position = Container(x=x, y=self.stance, z=z, stance=y)
orientation = Container(rotation=yaw, pitch=pitch)
grounded = Container(grounded=self.grounded)
packet = make_packet("location", position=position,
orientation=orientation, grounded=grounded)
return packet
def distance(self, other):
"""
Return the distance between this location and another location.
"""
return self.pos.distance(other.pos)
def in_front_of(self, distance):
"""
Return a ``Location`` a certain number of blocks in front of this
position.
The orientation of the returned location is identical to this
position's orientation.
:param int distance: the number of blocks by which to offset this
position
"""
other = copy(self)
distance *= 32
# Do some trig to put the other location a few blocks ahead of the
# player in the direction they are facing. Note that all three
# coordinates are "misnamed;" the unit circle actually starts at (0,
# 1) and goes *backwards* towards (-1, 0).
x = int(self.pos.x - distance * sin(self.ori.theta))
z = int(self.pos.z + distance * cos(self.ori.theta))
other.pos = other.pos._replace(x=x, z=z)
return other
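The trigonometry in ``in_front_of()`` can be checked with a small standalone helper (same maths, hypothetical name):

```python
from math import sin, cos, pi

def offset_in_front(x, z, theta, blocks):
    # Same trig as in_front_of(): the "unit circle" here starts at
    # (0, 1) and runs backwards, so X uses -sin and Z uses +cos.
    distance = blocks * 32
    return int(x - distance * sin(theta)), int(z + distance * cos(theta))

# theta = 0 moves along +Z only...
assert offset_in_front(0, 0, 0, 2) == (0, 64)
# ...and a half-turn flips the direction.
assert offset_in_front(0, 0, pi, 2) == (0, -64)
```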
diff --git a/bravo/service.py b/bravo/service.py
index 635f225..2872763 100644
--- a/bravo/service.py
+++ b/bravo/service.py
@@ -1,109 +1,109 @@
from twisted.application.internet import TCPClient, TCPServer
-from twisted.application.service import Application, MultiService
+from twisted.application.service import MultiService
from twisted.application.strports import service as serviceForEndpoint
from twisted.internet.protocol import Factory
from twisted.python import log
from bravo.amp import ConsoleRPCFactory
from bravo.config import read_configuration
from bravo.beta.factory import BravoFactory
from bravo.infini.factory import InfiniNodeFactory
from bravo.beta.protocol import BetaProxyProtocol
class BetaProxyFactory(Factory):
protocol = BetaProxyProtocol
def __init__(self, config, name):
self.name = name
self.port = config.getint("infiniproxy %s" % name, "port")
def services_for_endpoints(endpoints, factory):
l = []
for endpoint in endpoints:
server = serviceForEndpoint(endpoint, factory)
# XXX hack for bravo.web:135, which wants this. :c
server.args = [None, factory]
server.setName("%s (%s)" % (factory.name, endpoint))
l.append(server)
return l
class BravoService(MultiService):
def __init__(self, path):
"""
Initialize this service.
The path should be a ``FilePath`` which points to the configuration
file to use.
"""
MultiService.__init__(self)
# Grab configuration.
self.config = read_configuration(path)
# Start up our AMP RPC.
self.amp = TCPServer(25601, ConsoleRPCFactory(self))
MultiService.addService(self, self.amp)
self.factorylist = list()
self.irc = False
self.ircbots = list()
self.configure_services()
def addService(self, service):
MultiService.addService(self, service)
def removeService(self, service):
MultiService.removeService(self, service)
def configure_services(self):
for section in self.config.sections():
if section.startswith("world "):
# Bravo worlds. Grab a list of endpoints and load them.
factory = BravoFactory(self.config, section[6:])
interfaces = self.config.getlist(section, "interfaces")
for service in services_for_endpoints(interfaces, factory):
self.addService(service)
self.factorylist.append(factory)
elif section == "web":
try:
from bravo.web import bravo_site
except ImportError:
log.msg("Couldn't import web stuff!")
else:
factory = bravo_site(self.namedServices)
factory.name = "web"
interfaces = self.config.getlist("web", "interfaces")
for service in services_for_endpoints(interfaces, factory):
self.addService(service)
elif section.startswith("irc "):
try:
from bravo.irc import BravoIRC
except ImportError:
log.msg("Couldn't import IRC stuff!")
else:
self.irc = True
self.ircbots.append(section)
elif section.startswith("infiniproxy "):
factory = BetaProxyFactory(self.config, section[12:])
interfaces = self.config.getlist(section, "interfaces")
for service in services_for_endpoints(interfaces, factory):
self.addService(service)
elif section.startswith("infininode "):
factory = InfiniNodeFactory(self.config, section[11:])
interfaces = self.config.getlist(section, "interfaces")
for service in services_for_endpoints(interfaces, factory):
self.addService(service)
if self.irc:
for section in self.ircbots:
factory = BravoIRC(self.factorylist, self.config, section[4:])
client = TCPClient(factory.host, factory.port, factory)
client.setName(factory.config)
self.addService(client)
service = BravoService
diff --git a/bravo/tests/protocols/test_beta.py b/bravo/tests/protocols/test_beta.py
index 9ad4c48..e8fa202 100644
--- a/bravo/tests/protocols/test_beta.py
+++ b/bravo/tests/protocols/test_beta.py
@@ -1,188 +1,186 @@
from twisted.trial.unittest import TestCase
import warnings
from twisted.internet import reactor
from twisted.internet.task import deferLater
-from construct import Container
-
from bravo.beta.protocol import (BetaServerProtocol, BravoProtocol,
STATE_LOCATED)
from bravo.chunk import Chunk
from bravo.config import BravoConfigParser
from bravo.errors import BetaClientError
class FakeTransport(object):
data = []
lost = False
def write(self, data):
self.data.append(data)
def loseConnection(self):
self.lost = True
class FakeFactory(object):
def broadcast(self, packet):
pass
class TestBetaServerProtocol(TestCase):
def setUp(self):
self.p = BetaServerProtocol()
self.p.factory = FakeFactory()
self.p.transport = FakeTransport()
def tearDown(self):
# Stop the connection timeout.
self.p.setTimeout(None)
def test_trivial(self):
pass
def test_health_initial(self):
"""
The client's health should start at 20.
"""
self.assertEqual(self.p.health, 20)
def test_health_invalid(self):
"""
An error is raised when an invalid value is assigned for health.
"""
self.assertRaises(BetaClientError, setattr, self.p, "health", -1)
self.assertRaises(BetaClientError, setattr, self.p, "health", 21)
def test_health_update(self):
"""
The protocol should emit a health update when its health changes.
"""
self.p.transport.data = []
self.p.health = 19
self.assertEqual(len(self.p.transport.data), 1)
self.assertTrue(self.p.transport.data[0].startswith("\x08"))
def test_health_no_change(self):
"""
If health is assigned to but not changed, no health update should be
issued.
"""
self.p.transport.data = []
self.p.health = 20
self.assertFalse(self.p.transport.data)
def test_connection_timeout(self):
"""
Connections should time out after 30 seconds.
"""
def cb():
self.assertTrue(self.p.transport.lost)
d = deferLater(reactor, 31, cb)
return d
def test_latency_overflow(self):
"""
Massive latencies should not cause exceptions to be raised.
"""
# Set the username to avoid a packet generation problem.
self.p.username = "unittest"
# Turn on warning context and warning->error filter; otherwise, only a
# warning will be emitted on Python 2.6 and older, and we want the
# test to always fail in that case.
with warnings.catch_warnings():
warnings.simplefilter("error")
self.p.latency = 70000
class TestBravoProtocol(TestCase):
def setUp(self):
self.bcp = BravoConfigParser()
self.p = BravoProtocol(self.bcp, "unittest")
def tearDown(self):
self.p.setTimeout(None)
def test_trivial(self):
pass
def test_entities_near_unloaded_chunk(self):
"""
entities_near() shouldn't raise a fatal KeyError when a nearby chunk
isn't loaded.
Reported by brachiel on IRC.
"""
list(self.p.entities_near(2))
def test_disable_chunk_invalid(self):
"""
If invalid data is sent to disable_chunk(), no error should happen.
"""
self.p.disable_chunk(0, 0)
class TestBravoProtocolChunks(TestCase):
def setUp(self):
self.bcp = BravoConfigParser()
self.p = BravoProtocol(self.bcp, "unittest")
self.p.setTimeout(None)
self.p.state = STATE_LOCATED
def test_trivial(self):
pass
def test_ascend_zero(self):
"""
``ascend()`` can take a count of zero to ensure that the client is
standing on solid ground.
"""
self.p.location.pos = self.p.location.pos._replace(y=16)
c = Chunk(0, 0)
c.set_block((0, 0, 0), 1)
self.p.chunks[0, 0] = c
self.p.ascend(0)
self.assertEqual(self.p.location.pos.y, 16)
def test_ascend_zero_up(self):
"""
Even with a zero count, ``ascend()`` will move the player to the
correct elevation.
"""
self.p.location.pos = self.p.location.pos._replace(y=16)
c = Chunk(0, 0)
c.set_block((0, 0, 0), 1)
c.set_block((0, 1, 0), 1)
self.p.chunks[0, 0] = c
self.p.ascend(0)
self.assertEqual(self.p.location.pos.y, 32)
def test_ascend_one_up(self):
"""
``ascend()`` moves players upwards.
"""
self.p.location.pos = self.p.location.pos._replace(y=16)
c = Chunk(0, 0)
c.set_block((0, 0, 0), 1)
c.set_block((0, 1, 0), 1)
self.p.chunks[0, 0] = c
self.p.ascend(1)
self.assertEqual(self.p.location.pos.y, 32)
diff --git a/bravo/world.py b/bravo/world.py
index a4ea715..baff818 100644
--- a/bravo/world.py
+++ b/bravo/world.py
@@ -1,518 +1,517 @@
from array import array
from functools import wraps
from itertools import imap, product
import random
import sys
-import weakref
from twisted.internet import reactor
from twisted.internet.defer import (inlineCallbacks, maybeDeferred,
returnValue, succeed)
from twisted.internet.task import LoopingCall, coiterate
from twisted.python import log
from bravo.beta.structures import Level
from bravo.chunk import Chunk, CHUNK_HEIGHT
from bravo.entity import Player, Furnace
from bravo.errors import (ChunkNotLoaded, SerializerReadException,
SerializerWriteException)
from bravo.ibravo import ISerializer
from bravo.plugin import retrieve_named_plugins
from bravo.utilities.coords import split_coords
from bravo.utilities.temporal import PendingEvent
from bravo.mobmanager import MobManager
class ChunkCache(object):
"""
A cache which holds references to all chunks which should be held in
memory.
This cache remembers chunks that were recently used, that are in permanent
residency, and so forth. Its exact caching algorithm is currently null.
When chunks dirty themselves, they are expected to notify the cache, which
will then schedule an eviction for the chunk.
"""
def __init__(self):
self._perm = {}
self._dirty = {}
def pin(self, chunk):
self._perm[chunk.x, chunk.z] = chunk
def unpin(self, chunk):
del self._perm[chunk.x, chunk.z]
def put(self, chunk):
# XXX expand caching strategy
pass
def get(self, coords):
if coords in self._perm:
return self._perm[coords]
# Returns None if not found!
return self._dirty.get(coords)
def cleaned(self, chunk):
del self._dirty[chunk.x, chunk.z]
def dirtied(self, chunk):
self._dirty[chunk.x, chunk.z] = chunk
def iterperm(self):
return self._perm.itervalues()
def iterdirty(self):
return self._dirty.itervalues()
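The cache's lifecycle (dirty until cleaned, pinned until unpinned) is easy to exercise. A self-contained sketch of the same class, trimmed to the lookup paths, with a stand-in chunk:

```python
from collections import namedtuple

# A stand-in for Chunk; only the coordinates matter to the cache.
FakeChunk = namedtuple("FakeChunk", "x z")

class ChunkCache(object):
    # Minimal mirror of the cache above: pinned chunks plus dirty chunks.
    def __init__(self):
        self._perm = {}
        self._dirty = {}

    def pin(self, chunk):
        self._perm[chunk.x, chunk.z] = chunk

    def dirtied(self, chunk):
        self._dirty[chunk.x, chunk.z] = chunk

    def cleaned(self, chunk):
        del self._dirty[chunk.x, chunk.z]

    def get(self, coords):
        if coords in self._perm:
            return self._perm[coords]
        # Returns None if not found!
        return self._dirty.get(coords)

cache = ChunkCache()
c = FakeChunk(1, 2)
cache.dirtied(c)
# Dirty chunks are findable until they're cleaned...
assert cache.get((1, 2)) is c
cache.cleaned(c)
# ...after which lookups miss.
assert cache.get((1, 2)) is None
```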
class ImpossibleCoordinates(Exception):
"""
A coordinate could not ever be valid.
"""
def coords_to_chunk(f):
"""
Automatically look up the chunk for the coordinates, and convert world
coordinates to chunk coordinates.
"""
@wraps(f)
def decorated(self, coords, *args, **kwargs):
x, y, z = coords
# Fail early if Y is OOB.
if not 0 <= y < CHUNK_HEIGHT:
raise ImpossibleCoordinates("Y value %d is impossible" % y)
bigx, smallx, bigz, smallz = split_coords(x, z)
d = self.request_chunk(bigx, bigz)
@d.addCallback
def cb(chunk):
return f(self, chunk, (smallx, y, smallz), *args, **kwargs)
return d
return decorated
def sync_coords_to_chunk(f):
"""
Either get a chunk for the coordinates, or raise an exception.
"""
@wraps(f)
def decorated(self, coords, *args, **kwargs):
x, y, z = coords
# Fail early if Y is OOB.
if not 0 <= y < CHUNK_HEIGHT:
raise ImpossibleCoordinates("Y value %d is impossible" % y)
bigx, smallx, bigz, smallz = split_coords(x, z)
bigcoords = bigx, bigz
chunk = self._cache.get(bigcoords)
if chunk is None:
raise ChunkNotLoaded("Chunk (%d, %d) isn't loaded" % bigcoords)
return f(self, chunk, (smallx, y, smallz), *args, **kwargs)
return decorated
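``split_coords()`` is imported from bravo.utilities.coords and not shown in this hunk; a plausible reconstruction of the split both decorators rely on (an assumption: chunks are 16 blocks on a side, and Python's floor semantics handle negative coordinates):

```python
def split_coords(x, z):
    # Hypothetical reconstruction: chunk coordinate ("big") plus the
    # in-chunk offset ("small") for each horizontal axis.
    return x // 16, x % 16, z // 16, z % 16

# Block (33, -1) lives in chunk (2, -1), at local offset (1, 15).
assert split_coords(33, -1) == (2, 1, -1, 15)
```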
class World(object):
"""
Object representing a world on disk.
Worlds are composed of levels and chunks, each of which corresponds to
exactly one file on disk. Worlds also contain saved player data.
"""
factory = None
"""
The factory managing this world.
Worlds do not need to be owned by a factory, but will not callback to
surrounding objects without an owner.
"""
_season = None
"""
The current `ISeason`.
"""
saving = True
"""
Whether objects belonging to this world may be written out to disk.
"""
async = False
"""
Whether this world is using multiprocessing methods to generate geometry.
"""
dimension = "earth"
"""
The world dimension. Valid values are earth, sky, and nether.
"""
level = Level(seed=0, spawn=(0, 0, 0), time=0)
"""
The initial level data.
"""
_cache = None
"""
The chunk cache.
"""
def __init__(self, config, name):
"""
:Parameters:
name : str
The configuration key to use to look up configuration data.
"""
self.config = config
self.config_name = "world %s" % name
self._pending_chunks = dict()
@property
def season(self):
return self._season
@season.setter
def season(self, value):
if self._season != value:
self._season = value
if self._cache is not None:
# Issue 388: Apply the season to the permanent cache.
coiterate(imap(value.transform, self._cache.iterperm()))
def connect(self):
"""
Connect to the world.
"""
world_url = self.config.get(self.config_name, "url")
world_sf_name = self.config.get(self.config_name, "serializer")
# Get the current serializer list, and attempt to connect our
# serializer of choice to our resource.
# This could fail. Each of these lines (well, only the first and
# third) could raise a variety of exceptions. They should *all* be
# fatal.
serializers = retrieve_named_plugins(ISerializer, [world_sf_name])
self.serializer = serializers[0]
self.serializer.connect(world_url)
log.msg("World connected on %s, using serializer %s" %
(world_url, self.serializer.name))
def start(self):
"""
Start managing a world.
Connect to the world and turn on all of the timed actions which
continuously manage the world.
"""
self.connect()
# Create our cache.
self._cache = ChunkCache()
# Pick a random number for the seed. Use the configured value if one
# is present.
seed = random.randint(0, sys.maxint)
seed = self.config.getintdefault(self.config_name, "seed", seed)
self.level = self.level._replace(seed=seed)
# Check if we should offload chunk requests to ampoule.
if self.config.getbooleandefault("bravo", "ampoule", False):
try:
import ampoule
if ampoule:
self.async = True
except ImportError:
pass
log.msg("World is %s" %
("read-write" if self.saving else "read-only"))
log.msg("Using Ampoule: %s" % self.async)
# First, try loading the level, to see if there's any data out there
# which we can use. If not, don't worry about it.
d = maybeDeferred(self.serializer.load_level)
@d.addCallback
def cb(level):
self.level = level
log.msg("Loaded level data!")
@d.addErrback
def sre(failure):
failure.trap(SerializerReadException)
log.msg("Had issues loading level data, continuing anyway...")
# And now save our level.
if self.saving:
self.serializer.save_level(self.level)
# Start up the permanent cache.
# has_option() is not exactly desirable, but it's appropriate here
# because we don't want to take any action if the key is unset.
if self.config.has_option(self.config_name, "perm_cache"):
cache_level = self.config.getint(self.config_name, "perm_cache")
self.enable_cache(cache_level)
self.chunk_management_loop = LoopingCall(self.flush_chunk)
self.chunk_management_loop.start(1)
# XXX Put this in init or here?
self.mob_manager = MobManager()
# XXX Put this in the managers constructor?
self.mob_manager.world = self
@inlineCallbacks
def stop(self):
"""
Stop managing the world.
This can be a time-consuming, blocking operation, while the world's
data is serialized.
Note to callers: If you want the world time to be accurate, don't
forget to write it back before calling this method!
:returns: A ``Deferred`` that fires after the world has stopped.
"""
self.chunk_management_loop.stop()
# Flush all dirty chunks to disk. Don't bother cleaning them off.
for chunk in self._cache.iterdirty():
yield self.save_chunk(chunk)
# Destroy the cache.
self._cache = None
# Save the level data.
yield maybeDeferred(self.serializer.save_level, self.level)
def enable_cache(self, size):
"""
Set the permanent cache size.
Changing the size of the cache sets off a series of events which will
empty or fill the cache to make it the proper size.
For reference, 3 is a large-enough size to completely satisfy the
Notchian client's login demands. 10 is enough to completely fill the
Notchian client's chunk buffer.
:param int size: The taxicab radius of the cache, in chunks
:returns: A ``Deferred`` which will fire when the cache has been
adjusted.
"""
log.msg("Setting cache size to %d, please hold..." % size)
assign = self._cache.pin
def worker(x, z):
log.msg("Adding %d, %d to cache..." % (x, z))
return self.request_chunk(x, z).addCallback(assign)
x = self.level.spawn[0] // 16
z = self.level.spawn[2] // 16
rx = xrange(x - size, x + size)
rz = xrange(z - size, z + size)
work = (worker(x, z) for x, z in product(rx, rz))
d = coiterate(work)
@d.addCallback
def notify(none):
log.msg("Cache size is now %d!" % size)
return d
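The ranges above pin a (2 · size)-chunks-per-side square around spawn, so the chunk count is straightforward to verify (a standalone sketch of the same ranges):

```python
from itertools import product

def cache_coords(spawn_x, spawn_z, size):
    # Same ranges as enable_cache(): spawn is in block coordinates,
    # so divide by 16 to get the chunk holding it.
    x, z = spawn_x // 16, spawn_z // 16
    rx = range(x - size, x + size)
    rz = range(z - size, z + size)
    return list(product(rx, rz))

# size=3 pins a 6x6 square: 36 chunks.
assert len(cache_coords(0, 0, 3)) == 36
```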
def flush_chunk(self):
"""
Flush a dirty chunk.
This method will always block when there are dirty chunks.
"""
for chunk in self._cache.iterdirty():
# Save a single chunk, and add a callback to remove it from the
# cache when it's been cleaned.
d = self.save_chunk(chunk)
d.addCallback(self._cache.cleaned)
break
def save_off(self):
"""
Disable saving to disk.
This is useful for accessing the world on disk without Bravo
interfering, e.g. for backing up the world.
"""
if not self.saving:
return
self.chunk_management_loop.stop()
self.saving = False
def save_on(self):
"""
Enable saving to disk.
"""
if self.saving:
return
self.chunk_management_loop.start(1)
self.saving = True
def postprocess_chunk(self, chunk):
"""
Do a series of final steps to bring a chunk into the world.
This method might be called multiple times on a chunk, but it should
not be harmful to do so.
"""
# Apply the current season to the chunk.
if self.season:
self.season.transform(chunk)
# Since this chunk hasn't been given to any player yet, there's no
# conceivable way that any meaningful damage has been accumulated;
# anybody loading any part of this chunk will want the entire thing.
# Thus, it should start out undamaged.
chunk.clear_damage()
# Skip some of the spendier scans if we have no factory; for example,
# if we are generating chunks offline.
if not self.factory:
return chunk
# XXX slightly icky, print statements are bad
# Register the chunk's entities with our parent factory.
for entity in chunk.entities:
if hasattr(entity, 'loop'):
print "Started mob!"
self.mob_manager.start_mob(entity)
else:
print "I have no loop"
self.factory.register_entity(entity)
# XXX why is this for furnaces only? :T
# Scan the chunk for burning furnaces
for coords, tile in chunk.tiles.iteritems():
# If the furnace was saved while burning ...
if type(tile) == Furnace and tile.burntime != 0:
x, y, z = coords
coords = chunk.x, x, chunk.z, z, y
# ... start its burning loop
reactor.callLater(2, tile.changed, self.factory, coords)
# Return the chunk, in case we are in a Deferred chain.
return chunk
@inlineCallbacks
def request_chunk(self, x, z):
"""
Request a ``Chunk`` to be delivered later.
:returns: ``Deferred`` that will be called with the ``Chunk``
"""
# First, try the cache.
cached = self._cache.get((x, z))
if cached is not None:
returnValue(cached)
# Is it pending?
if (x, z) in self._pending_chunks:
# Rig up another Deferred and wrap it up in a to-go box.
retval = yield self._pending_chunks[x, z].deferred()
returnValue(retval)
# Create a new chunk object, since the cache turned up empty.
try:
chunk = yield maybeDeferred(self.serializer.load_chunk, x, z)
except SerializerReadException:
# Looks like the chunk wasn't already on disk. Guess we're gonna
# need to keep going.
chunk = Chunk(x, z)
# Add in our magic dirtiness hook so that the cache can be aware of
# chunks who have been...naughty.
chunk.dirtied = self._cache.dirtied
if chunk.dirty:
# The chunk was already dirty!? Oh, naughty indeed!
self._cache.dirtied(chunk)
if chunk.populated:
self._cache.put(chunk)
self.postprocess_chunk(chunk)
if self.factory:
self.factory.scan_chunk(chunk)
returnValue(chunk)
if self.async:
from ampoule import deferToAMPProcess
from bravo.remote import MakeChunk
generators = [plugin.name for plugin in self.pipeline]
d = deferToAMPProcess(MakeChunk, x=x, z=z, seed=self.level.seed,
generators=generators)
# Get chunk data into our chunk object.
def fill_chunk(kwargs):
chunk.blocks = array("B")
chunk.blocks.fromstring(kwargs["blocks"])
chunk.heightmap = array("B")
chunk.heightmap.fromstring(kwargs["heightmap"])
chunk.metadata = array("B")
chunk.metadata.fromstring(kwargs["metadata"])
chunk.skylight = array("B")
chunk.skylight.fromstring(kwargs["skylight"])
chunk.blocklight = array("B")
chunk.blocklight.fromstring(kwargs["blocklight"])
return chunk
d.addCallback(fill_chunk)
else:
# Populate the chunk the slow way. :c
for stage in self.pipeline:
stage.populate(chunk, self.level.seed)
chunk.regenerate()
d = succeed(chunk)
# Set up our event and generate our return-value Deferred. It has to
be done early because PendingEvents only fire exactly once and it
# might fire immediately in certain cases.
pe = PendingEvent()
# This one is for our return value.
retval = pe.deferred()
# This one is for scanning the chunk for automatons.
if self.factory:
pe.deferred().addCallback(self.factory.scan_chunk)
self._pending_chunks[x, z] = pe
|
bravoserver/bravo
|
bbda5f549cb074f57be0f56961551ff48ec6e424
|
inventory: Clean up a couple algorithmic nits.
|
diff --git a/bravo/inventory/__init__.py b/bravo/inventory/__init__.py
index 7ca3b89..0d26f82 100644
--- a/bravo/inventory/__init__.py
+++ b/bravo/inventory/__init__.py
@@ -1,177 +1,166 @@
from itertools import chain
from bravo import blocks
from bravo.beta.structures import Slot
-class NextLoop(Exception):
- pass
class SerializableSlots(object):
'''
Base class for all slots configurations
'''
def __len__(self):
return self.metalength
@property
def metalength(self):
return sum(map(len, self.metalist))
def load_from_list(self, l):
if len(l) < self.metalength:
# XXX why will it break everything? :T
raise AttributeError # otherwise it will break everything
for target in self.metalist:
- if len(target):
+ if target:
target[:], l = l[:len(target)], l[len(target):]
def save_to_list(self):
return [i for i in chain(*self.metalist)]
class Inventory(SerializableSlots):
'''
This class represents the player's inventory.
'''
def __init__(self):
self.armor = [None] * 4
self.crafting = [None] * 27
self.storage = [None] * 27
self.holdables = [None] * 9
self.dummy = [None] * 64 # represents gap in serialized structure
def add(self, item, quantity):
"""
Attempt to add an item to the inventory.
:param tuple item: a key representing the item
:returns: the quantity of items that did not fit in the inventory
"""
- while quantity:
- try:
- qty_before = quantity
- # Try to stack first
- for stash in (self.holdables, self.storage):
- for i, slot in enumerate(stash):
- if slot is not None and slot.holds(item) and slot.quantity < 64 \
- and slot.primary not in blocks.unstackable:
- count = slot.quantity + quantity
- if count > 64:
- count, quantity = 64, count - 64
- else:
- quantity = 0
- stash[i] = slot.replace(quantity=count)
- if quantity == 0:
- return 0
- # one more loop for rest of items
- raise NextLoop # break to outer while loop
- # try to find empty space
- for stash in (self.holdables, self.storage):
- for i, slot in enumerate(stash):
- if slot is None:
- stash[i] = Slot(item[0], item[1], quantity)
- return 0
- if qty_before == quantity:
- # did one loop but was not able to put any of the items
- break
- except NextLoop:
- # used to break out of all 'for' loops
- pass
+ # Try to stack first
+ for stash in (self.holdables, self.storage):
+ for i, slot in enumerate(stash):
+ if slot is not None and slot.holds(item) and slot.quantity < 64 \
+ and slot.primary not in blocks.unstackable:
+ count = slot.quantity + quantity
+ if count > 64:
+ count, quantity = 64, count - 64
+ else:
+ quantity = 0
+ stash[i] = slot.replace(quantity=count)
+ if quantity == 0:
+ return 0
+
+ # try to find empty space
+ for stash in (self.holdables, self.storage):
+ for i, slot in enumerate(stash):
+ if slot is None:
+ # XXX bug; might overflow a slot!
+ stash[i] = Slot(item[0], item[1], quantity)
+ return 0
+
return quantity
def consume(self, item, index):
"""
Attempt to remove a used holdable from the inventory.
A return value of ``False`` indicates that there were no holdables of
the given type and slot to consume.
:param tuple item: a key representing the type of the item
:param int index: which slot was selected
:returns: whether the item was successfully removed
"""
slot = self.holdables[index]
# Can't really remove things from an empty slot...
if slot is None:
return False
if slot.holds(item):
self.holdables[index] = slot.decrement()
return True
return False
def select_armor(self, index, alternate, shift, selected = None):
"""
Handle a slot selection on an armor slot.
:returns: a (success, new selection) tuple
"""
# Special case for armor slots.
allowed_items_per_slot = {
0: blocks.armor_helmets, 1: blocks.armor_chestplates,
2: blocks.armor_leggings, 3: blocks.armor_boots
}
allowed_items = allowed_items_per_slot[index]
if selected is not None:
sslot = selected
if sslot.primary not in allowed_items:
return (False, selected)
if self.armor[index] is None:
# Put one armor piece into the slot, decrement the amount
# in the selection.
self.armor[index] = sslot.replace(quantity=1)
selected = sslot.decrement()
else:
# If both slot and selection are the same item, do nothing.
# If not, the quantity needs to be 1, because only one item
# fits into the slot, and exchanging slot and selection is not
# possible otherwise.
if not self.armor[index].holds(sslot) and sslot.quantity == 1:
selected, self.armor[index] = self.armor[index], selected
else:
return (False, selected)
else:
if self.armor[index] is None:
# Slot and selection are empty, do nothing.
return (False, selected)
else:
# Move item in the slot into the selection.
selected = self.armor[index]
self.armor[index] = None
# Yeah, okay, success.
return (True, selected)
#
# The methods below are for serialization purposes only.
#
@property
def metalist(self):
# this one is used for serialization
return [self.holdables, self.storage, self.dummy, self.armor]
def load_from_list(self, l):
SerializableSlots.load_from_list(self, l)
# reverse armor slots (notchian)
- self.armor = [i for i in reversed(self.armor)]
+ self.armor.reverse()
def save_to_list(self):
- # save armor
- tmp_armor = self.armor
# reverse armor (notchian)
- self.armor = [i for i in reversed(self.armor)]
+ self.armor.reverse()
# generate the list
l = SerializableSlots.save_to_list(self)
# restore armor
- self.armor = tmp_armor
+ self.armor.reverse()
+
return l
diff --git a/bravo/inventory/windows.py b/bravo/inventory/windows.py
index af94d57..b4cb2b2 100644
--- a/bravo/inventory/windows.py
+++ b/bravo/inventory/windows.py
@@ -1,444 +1,435 @@
from itertools import chain, izip
from construct import Container, ListContainer
from bravo import blocks
from bravo.beta.packets import make_packet
from bravo.beta.structures import Slot
from bravo.inventory import SerializableSlots
from bravo.inventory.slots import Crafting, Workbench, LargeChestStorage
-class NextLoop(Exception):
- pass
class Window(SerializableSlots):
"""
Item manager
The ``Window`` covers all kinds of inventory and crafting windows,
ranging from user inventories to furnaces and workbenches.
The ``Window`` aggregates the player's inventory and other crafting/storage
slots as the building blocks of the window.
:param int wid: window ID
:param Inventory inventory: player's inventory object
:param SlotsSet slots: other window slots
"""
def __init__(self, wid, inventory, slots):
self.inventory = inventory
self.slots = slots
self.wid = wid
self.selected = None
self.coords = None
# NOTE: The property must be defined in every concrete window class.
# Never use a generic one; that can lead to awful bugs.
#@property
#def metalist(self):
# m = [self.slots.crafted, self.slots.crafting,
# self.slots.fuel, self.slots.storage]
# m += [self.inventory.storage, self.inventory.holdables]
# return m
@property
def slots_num(self):
return self.slots.slots_num
@property
def identifier(self):
return self.slots.identifier
@property
def title(self):
return self.slots.title
def container_for_slot(self, slot):
"""
Retrieve the table and index for a given slot.
There is an isomorphism here which allows all of the tables of this
``Window`` to be viewed as a single large table of slots.
"""
for l in self.metalist:
if not len(l):
continue
if slot < len(l):
return l, slot
slot -= len(l)
def slot_for_container(self, table, index):
"""
Retrieve slot number for given table and index.
"""
i = 0
for t in self.metalist:
l = len(t)
if t is table:
if l == 0 or l <= index:
return -1
else:
i += index
return i
else:
i += l
return -1
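The isomorphism described in ``container_for_slot()`` — one flat index over several tables — boils down to subtracting table lengths. A minimal sketch:

```python
def container_for_slot(metalist, slot):
    # Walk the tables in order, subtracting each table's length
    # until the flat slot index falls inside one of them.
    for table in metalist:
        if slot < len(table):
            return table, slot
        slot -= len(table)

a, b = ["x"], ["y", "z"]
assert container_for_slot([a, b], 0) == (a, 0)
assert container_for_slot([a, b], 2) == (b, 1)
```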
def load_from_packet(self, container):
"""
Load data from a packet container.
"""
items = [None] * self.metalength
for i, item in enumerate(container.items):
if item.id < 0:
items[i] = None
else:
items[i] = Slot(item.id, item.damage, item.count)
self.load_from_list(items)
def save_to_packet(self):
lc = ListContainer()
for item in chain(*self.metalist):
if item is None:
lc.append(Container(primary=-1))
else:
lc.append(Container(primary=item.primary,
secondary=item.secondary, count=item.quantity))
packet = make_packet("inventory", wid=self.wid, length=len(lc), items=lc)
return packet
def select_stack(self, container, index):
"""
Handle stacking of items (Shift + RMB/LMB)
"""
item = container[index]
if item is None:
return False
loop_over = enumerate # default enumerator - from start to end
# same as enumerate() but in reverse order
reverse_enumerate = lambda l: izip(xrange(len(l)-1, -1, -1), reversed(l))
if container is self.slots.crafting or container is self.slots.fuel:
- targets = (self.inventory.storage, self.inventory.holdables)
+ targets = self.inventory.storage, self.inventory.holdables
elif container is self.slots.crafted or container is self.slots.storage:
- targets = (self.inventory.holdables, self.inventory.storage)
+ targets = self.inventory.holdables, self.inventory.storage
# in this case the Notchian client enumerates from the end. o_O
loop_over = reverse_enumerate
elif container is self.inventory.storage:
- if len(self.slots.storage):
- targets = (self.slots.storage,)
+ if self.slots.storage:
+ targets = self.slots.storage,
else:
- targets = (self.inventory.holdables,)
+ targets = self.inventory.holdables,
elif container is self.inventory.holdables:
- if len(self.slots.storage):
- targets = (self.slots.storage,)
+ if self.slots.storage:
+ targets = self.slots.storage,
else:
- targets = (self.inventory.storage,)
+ targets = self.inventory.storage,
else:
return False
- # find same item to stack
initial_quantity = item_quantity = item.quantity
- while item_quantity:
- try:
- qty_before = item_quantity
- for stash in targets:
- for i, slot in loop_over(stash):
- if slot is not None and slot.holds(item) and slot.quantity < 64 \
- and slot.primary not in blocks.unstackable:
- count = slot.quantity + item_quantity
- if count > 64:
- count, item_quantity = 64, count - 64
- else:
- item_quantity = 0
- stash[i] = slot.replace(quantity=count)
- container[index] = item.replace(quantity=item_quantity)
- self.mark_dirty(stash, i)
- self.mark_dirty(container, index)
- if item_quantity == 0:
- container[index] = None
- return True
- # one more loop for rest of items
- raise NextLoop # break to outer while loop
- # find empty space to move
- for stash in targets:
- for i, slot in loop_over(stash):
- if slot is None:
- stash[i] = item.replace(quantity=item_quantity)
- container[index] = None
- self.mark_dirty(stash, i)
- self.mark_dirty(container, index)
- return True
- if item_quantity == qty_before:
- # did one loop but was not able to put any of the items
- break
- except NextLoop:
- # used to break out of all 'for' loops
- pass
+
+ # find same item to stack
+ for stash in targets:
+ for i, slot in loop_over(stash):
+ if slot is not None and slot.holds(item) and slot.quantity < 64 \
+ and slot.primary not in blocks.unstackable:
+ count = slot.quantity + item_quantity
+ if count > 64:
+ count, item_quantity = 64, count - 64
+ else:
+ item_quantity = 0
+ stash[i] = slot.replace(quantity=count)
+ container[index] = item.replace(quantity=item_quantity)
+ self.mark_dirty(stash, i)
+ self.mark_dirty(container, index)
+ if item_quantity == 0:
+ container[index] = None
+ return True
+
+ # find empty space to move
+ for stash in targets:
+ for i, slot in loop_over(stash):
+ if slot is None:
+ # XXX bug; might overflow a slot!
+ stash[i] = item.replace(quantity=item_quantity)
+ container[index] = None
+ self.mark_dirty(stash, i)
+ self.mark_dirty(container, index)
+ return True
+
return initial_quantity != item_quantity
def select(self, slot, alternate=False, shift=False):
"""
Handle a slot selection.
This method implements the basic public interface for interacting with
``Inventory`` objects. It is directly equivalent to mouse clicks made
upon slots.
:param int slot: which slot was selected
:param bool alternate: whether the selection is alternate; e.g., if it
was done with a right-click
:param bool shift: whether the shift key is toggled
"""
# Look up the container and offset.
# If, for any reason, our slot is out-of-bounds, then
# container_for_slot will return None. In that case, catch the error
# and return False.
try:
l, index = self.container_for_slot(slot)
except TypeError:
return False
if l is self.inventory.armor:
result, self.selected = self.inventory.select_armor(index,
alternate, shift, self.selected)
return result
elif l is self.slots.crafted:
if shift: # shift-click on crafted slot
# The Notchian client works this way: you lose the items
# that were not moved to the inventory. So, it's not a bug.
if self.select_stack(self.slots.crafted, 0):
# As select_stack() call took items from crafted[0]
# we must update the recipe to generate new item there
self.slots.update_crafted()
# and now we emulate taking of the items
result, temp = self.slots.select_crafted(0, alternate, True, None)
else:
result = False
else:
result, self.selected = self.slots.select_crafted(index,
alternate, shift, self.selected)
return result
elif shift:
return self.select_stack(l, index)
elif self.selected is not None and l[index] is not None:
sslot = self.selected
islot = l[index]
if islot.holds(sslot) and islot.primary not in blocks.unstackable:
# both contain the same item
if alternate:
if islot.quantity < 64:
l[index] = islot.increment()
self.selected = sslot.decrement()
self.mark_dirty(l, index)
else:
if sslot.quantity + islot.quantity <= 64:
# Sum of items fits in one slot, so this is easy.
l[index] = islot.increment(sslot.quantity)
self.selected = None
else:
# fill up slot to 64, move left overs to selection
# valid for left and right mouse click
l[index] = islot.replace(quantity=64)
self.selected = sslot.replace(
quantity=sslot.quantity + islot.quantity - 64)
self.mark_dirty(l, index)
else:
# Default case: just swap
# valid for left and right mouse click
self.selected, l[index] = l[index], self.selected
self.mark_dirty(l, index)
else:
if alternate:
if self.selected is not None:
sslot = self.selected
l[index] = sslot.replace(quantity=1)
self.selected = sslot.decrement()
self.mark_dirty(l, index)
elif l[index] is None:
# Right click on empty inventory slot does nothing
return False
else:
# Logically, l[index] is not None, but self.selected is.
islot = l[index]
scount = islot.quantity // 2
scount, lcount = islot.quantity - scount, scount
l[index] = islot.replace(quantity=lcount)
self.selected = islot.replace(quantity=scount)
self.mark_dirty(l, index)
else:
# Default case: just swap.
self.selected, l[index] = l[index], self.selected
self.mark_dirty(l, index)
# At this point, we've already finished touching our selection; this
# is just a state update.
if l is self.slots.crafting:
self.slots.update_crafted()
return True
def close(self):
'''
Clear crafting areas and return the items to drop and the packets to send to the client
'''
items = []
packets = ""
# slots on close action
it, pk = self.slots.close(self.wid)
items += it
packets += pk
# drop 'item on cursor'
items += self.drop_selected()
return items, packets
def drop_selected(self, alternate=False):
items = []
if self.selected is not None:
if alternate: # drop one item
i = Slot(self.selected.primary, self.selected.secondary, 1)
items.append(i)
self.selected = self.selected.decrement()
else: # drop all
items.append(self.selected)
self.selected = None
return items
def mark_dirty(self, table, index):
# override later in SharedWindow
pass
def packets_for_dirty(self, a):
# override later in SharedWindow
return ""
class InventoryWindow(Window):
'''
Special case of window - player's inventory window
'''
def __init__(self, inventory):
Window.__init__(self, 0, inventory, Crafting())
@property
def slots_num(self):
# Actually it doesn't matter. Client never notifies when it opens inventory
return 5
@property
def identifier(self):
# Actually it doesn't matter. Client never notifies when it opens inventory
return "inventory"
@property
def title(self):
# Actually it doesn't matter. Client never notifies when it opens inventory
return "Inventory"
@property
def metalist(self):
m = [self.slots.crafted, self.slots.crafting]
m += [self.inventory.armor, self.inventory.storage, self.inventory.holdables]
return m
def creative(self, slot, primary, secondary, quantity):
''' Process inventory changes made in creative mode
'''
try:
container, index = self.container_for_slot(slot)
except TypeError:
return False
# Current notchian implementation has only holdable slots.
# Prevent changes in other slots.
if container is self.inventory.holdables:
container[index] = Slot(primary, secondary, quantity)
return True
else:
return False
class WorkbenchWindow(Window):
def __init__(self, wid, inventory):
Window.__init__(self, wid, inventory, Workbench())
@property
def metalist(self):
# Window.metalist will work fine as well,
# but this version is a little bit faster
m = [self.slots.crafted, self.slots.crafting]
m += [self.inventory.storage, self.inventory.holdables]
return m
class SharedWindow(Window):
"""
Base class for all windows with shared containers (like chests, furnaces and dispensers)
"""
def __init__(self, wid, inventory, slots, coords):
"""
:param int wid: window ID
:param Inventory inventory: player's inventory object
:param slots: shared slots object (chest, furnace, etc.)
:param tuple coords: world coords of the tile (bigx, smallx, bigz, smallz, y)
"""
Window.__init__(self, wid, inventory, slots)
self.coords = coords
self.dirty_slots = {} # { slot : value, ... }
def mark_dirty(self, table, index):
# the player's inventory slots are not shared; skip them
if table in self.slots.metalist:
slot = self.slot_for_container(table, index)
self.dirty_slots[slot] = table[index]
def packets_for_dirty(self, dirty_slots):
"""
Generate update packets for dirty slots, usually provided by another window (sic!)
"""
packets = ""
for slot, item in dirty_slots.iteritems():
if item is None:
packets += make_packet("window-slot", wid=self.wid, slot=slot, primary=-1)
else:
packets += make_packet("window-slot", wid=self.wid, slot=slot,
primary=item.primary, secondary=item.secondary,
count=item.quantity)
return packets
class ChestWindow(SharedWindow):
@property
def metalist(self):
m = [self.slots.storage, self.inventory.storage, self.inventory.holdables]
return m
class LargeChestWindow(SharedWindow):
def __init__(self, wid, inventory, chest1, chest2, coords):
chests_storage = LargeChestStorage(chest1.storage, chest2.storage)
SharedWindow.__init__(self, wid, inventory, chests_storage, coords)
@property
def metalist(self):
m = [self.slots.storage, self.inventory.storage, self.inventory.holdables]
return m
class FurnaceWindow(SharedWindow):
@property
def metalist(self):
m = [self.slots.crafting, self.slots.fuel, self.slots.crafted]
m += [self.inventory.storage, self.inventory.holdables]
return m
|
bravoserver/bravo
|
263b6148870e7b096b0f12c1e36c889ee391d504
|
plugins/automatons: Allow grass to be deterministic in unit tests.
|
diff --git a/bravo/plugins/automatons.py b/bravo/plugins/automatons.py
index 3d411e6..779d9ae 100644
--- a/bravo/plugins/automatons.py
+++ b/bravo/plugins/automatons.py
@@ -1,213 +1,214 @@
from __future__ import division
from collections import deque
from itertools import product
-from random import randint, random
+from random import Random, randint
from twisted.internet import reactor
from twisted.internet.task import LoopingCall
from zope.interface import implements
from bravo.blocks import blocks
from bravo.ibravo import IAutomaton, IDigHook
from bravo.terrain.trees import ConeTree, NormalTree, RoundTree, RainforestTree
from bravo.utilities.automatic import column_scan
from bravo.world import ChunkNotLoaded
class Trees(object):
"""
Turn saplings into trees.
"""
implements(IAutomaton)
blocks = (blocks["sapling"].slot,)
grow_step_min = 15
grow_step_max = 60
trees = [
NormalTree,
ConeTree,
RoundTree,
RainforestTree,
]
def __init__(self, factory):
self.factory = factory
self.tracked = set()
def start(self):
# Noop for now -- this is wrong for several reasons.
pass
def stop(self):
for call in self.tracked:
if call.active():
call.cancel()
def process(self, coords):
try:
metadata = self.factory.world.sync_get_metadata(coords)
# Is this sapling ready to grow into a big tree? We use a bit-trick to
# check.
if metadata >= 12:
# Tree time!
tree = self.trees[metadata % 4](pos=coords)
tree.prepare(self.factory.world)
tree.make_trunk(self.factory.world)
tree.make_foliage(self.factory.world)
# We can't easily tell how many chunks were modified, so we have
# to flush all of them.
self.factory.flush_all_chunks()
else:
# Increment metadata.
metadata += 4
self.factory.world.sync_set_metadata(coords, metadata)
call = reactor.callLater(
randint(self.grow_step_min, self.grow_step_max), self.process,
coords)
self.tracked.add(call)
# Filter tracked set.
self.tracked = set(i for i in self.tracked if i.active())
except ChunkNotLoaded:
pass
def feed(self, coords):
call = reactor.callLater(
randint(self.grow_step_min, self.grow_step_max), self.process,
coords)
self.tracked.add(call)
scan = column_scan
name = "trees"
class Grass(object):
implements(IAutomaton, IDigHook)
- blocks = (blocks["dirt"].slot,)
+ blocks = blocks["dirt"].slot,
step = 1
def __init__(self, factory):
self.factory = factory
+ self.r = Random()
self.tracked = deque()
self.loop = LoopingCall(self.process)
def start(self):
if not self.loop.running:
self.loop.start(self.step, now=False)
def stop(self):
if self.loop.running:
self.loop.stop()
def process(self):
if not self.tracked:
return
# Effectively stop tracking this block. We'll add it back in if we're
# not finished with it.
coords = self.tracked.pop()
# Try to do our neighbor lookups. If it can't happen, don't worry
# about it; we can get to it later. Grass isn't exactly a
# super-high-tension thing that must happen.
try:
current = self.factory.world.sync_get_block(coords)
if current == blocks["dirt"].slot:
# Yep, it's still dirt. Let's look around and see whether it
# should be grassy. Our general strategy is as follows: We
# look at the blocks nearby. If at least eight of them are
# grass, grassiness is guaranteed, but if none of them are
# grass, grassiness just won't happen.
x, y, z = coords
# First things first: Grass can't grow if there are things on
# top of it, so check that first.
above = self.factory.world.sync_get_block((x, y + 1, z))
if above:
return
# The number of grassy neighbors.
grasses = 0
# Intentional shadow.
for x, y, z in product(xrange(x - 1, x + 2),
xrange(max(y - 1, 0), y + 4), xrange(z - 1, z + 2)):
# Early-exit to avoid block lookup if we finish early.
if grasses >= 8:
break
block = self.factory.world.sync_get_block((x, y, z))
if block == blocks["grass"].slot:
grasses += 1
# Randomly determine whether we are finished.
- if grasses / 8 >= random():
+ if grasses / 8 >= self.r.random():
# Hey, let's make some grass.
self.factory.world.set_block(coords, blocks["grass"].slot)
# And schedule the chunk to be flushed.
x, y, z = coords
d = self.factory.world.request_chunk(x // 16, z // 16)
d.addCallback(self.factory.flush_chunk)
else:
# Not yet; add it back to the list.
self.tracked.appendleft(coords)
except ChunkNotLoaded:
pass
def feed(self, coords):
self.tracked.appendleft(coords)
scan = column_scan
def dig_hook(self, chunk, x, y, z, block):
if y > 0:
block = chunk.get_block((x, y - 1, z))
if block in self.blocks:
# Track it now.
coords = (chunk.x * 16 + x, y - 1, chunk.z * 16 + z)
self.tracked.appendleft(coords)
name = "grass"
before = tuple()
after = tuple()
class Rain(object):
"""
Make it rain.
Rain only occurs during spring.
"""
implements(IAutomaton)
blocks = tuple()
def __init__(self, factory):
self.factory = factory
self.season_loop = LoopingCall(self.check_season)
def scan(self, chunk):
pass
def feed(self, coords):
pass
def start(self):
self.season_loop.start(5 * 60)
def stop(self):
self.season_loop.stop()
def check_season(self):
if self.factory.world.season.name == "spring":
self.factory.vane.weather = "rainy"
reactor.callLater(1 * 60, setattr, self.factory.vane, "weather",
"sunny")
name = "rain"
diff --git a/bravo/tests/plugins/test_automatons.py b/bravo/tests/plugins/test_automatons.py
index 0217493..cf3a029 100644
--- a/bravo/tests/plugins/test_automatons.py
+++ b/bravo/tests/plugins/test_automatons.py
@@ -1,214 +1,214 @@
from itertools import product
from unittest import TestCase
from twisted.internet.defer import inlineCallbacks
from bravo.blocks import blocks
from bravo.config import BravoConfigParser
from bravo.ibravo import IAutomaton
from bravo.plugin import retrieve_plugins
from bravo.world import World
class GrassMockFactory(object):
def flush_chunk(self, chunk):
pass
def flush_all_chunks(self):
pass
def scan_chunk(self, chunk):
pass
class TestGrass(TestCase):
def setUp(self):
self.bcp = BravoConfigParser()
self.bcp.add_section("world unittest")
self.bcp.set("world unittest", "url", "")
self.bcp.set("world unittest", "serializer", "memory")
self.w = World(self.bcp, "unittest")
self.w.pipeline = []
self.w.start()
self.f = GrassMockFactory()
self.f.world = self.w
self.w.factory = self.f
plugins = retrieve_plugins(IAutomaton, factory=self.f)
self.hook = plugins["grass"]
def tearDown(self):
self.w.stop()
def test_trivial(self):
pass
@inlineCallbacks
def test_not_dirt(self):
"""
Blocks which aren't dirt by the time they're processed will be
ignored.
"""
chunk = yield self.w.request_chunk(0, 0)
chunk.set_block((0, 0, 0), blocks["bedrock"].slot)
# Run the loop once.
self.hook.feed((0, 0, 0))
self.hook.process()
# We shouldn't have any pending blocks now.
self.assertFalse(self.hook.tracked)
@inlineCallbacks
def test_unloaded_chunk(self):
"""
The grass automaton can't load chunks, so it will stop tracking blocks
on the edge of the loaded world.
"""
chunk = yield self.w.request_chunk(0, 0)
chunk.set_block((0, 0, 0), blocks["dirt"].slot)
# Run the loop once.
self.hook.feed((0, 0, 0))
self.hook.process()
# We shouldn't have any pending blocks now.
self.assertFalse(self.hook.tracked)
@inlineCallbacks
def test_surrounding(self):
"""
When surrounded by eight grassy neighbors, dirt should turn into grass
immediately.
"""
chunk = yield self.w.request_chunk(0, 0)
# Set up grassy surroundings.
for x, z in product(xrange(0, 3), repeat=2):
chunk.set_block((x, 0, z), blocks["grass"].slot)
# Our lone Cinderella.
chunk.set_block((1, 0, 1), blocks["dirt"].slot)
# Do the actual hook run. This should take exactly one run.
self.hook.feed((1, 0, 1))
self.hook.process()
self.assertFalse(self.hook.tracked)
self.assertEqual(chunk.get_block((1, 0, 1)), blocks["grass"].slot)
def test_surrounding_not_dirt(self):
"""
Blocks which aren't dirt by the time they're processed will be
ignored, even when surrounded by grass.
"""
d = self.w.request_chunk(0, 0)
@d.addCallback
def cb(chunk):
# Set up grassy surroundings.
for x, z in product(xrange(0, 3), repeat=2):
chunk.set_block((x, 0, z), blocks["grass"].slot)
chunk.set_block((1, 0, 1), blocks["bedrock"].slot)
# Run the loop once.
self.hook.feed((1, 0, 1))
self.hook.process()
# We shouldn't have any pending blocks now.
self.assertFalse(self.hook.tracked)
return d
@inlineCallbacks
def test_surrounding_obstructed(self):
"""
Grass can't grow on blocks which have other blocks on top of them.
"""
chunk = yield self.w.request_chunk(0, 0)
# Set up grassy surroundings.
for x, z in product(xrange(0, 3), repeat=2):
chunk.set_block((x, 0, z), blocks["grass"].slot)
# Put an obstruction on top.
chunk.set_block((1, 1, 1), blocks["stone"].slot)
# Our lone Cinderella.
chunk.set_block((1, 0, 1), blocks["dirt"].slot)
# Do the actual hook run. This should take exactly one run.
self.hook.feed((1, 0, 1))
self.hook.process()
self.assertFalse(self.hook.tracked)
self.assertEqual(chunk.get_block((1, 0, 1)), blocks["dirt"].slot)
@inlineCallbacks
def test_above(self):
"""
Grass spreads downwards.
"""
chunk = yield self.w.request_chunk(0, 0)
# Set up grassy surroundings.
for x, z in product(xrange(0, 3), repeat=2):
chunk.set_block((x, 1, z), blocks["grass"].slot)
chunk.destroy((1, 1, 1))
# Our lone Cinderella.
chunk.set_block((1, 0, 1), blocks["dirt"].slot)
# Do the actual hook run. This should take exactly one run.
self.hook.feed((1, 0, 1))
self.hook.process()
self.assertFalse(self.hook.tracked)
self.assertEqual(chunk.get_block((1, 0, 1)), blocks["grass"].slot)
def test_two_of_four(self):
"""
Grass should eventually spread to all filled-in plots on a 2x2 grid.
Discovered by TkTech.
"""
d = self.w.request_chunk(0, 0)
@d.addCallback
def cb(chunk):
for x, y, z in product(xrange(0, 4), xrange(0, 2), xrange(0, 4)):
chunk.set_block((x, y, z), blocks["grass"].slot)
for x, z in product(xrange(1, 3), repeat=2):
chunk.set_block((x, 1, z), blocks["dirt"].slot)
self.hook.feed((1, 1, 1))
self.hook.feed((2, 1, 1))
self.hook.feed((1, 1, 2))
self.hook.feed((2, 1, 2))
- # Run to completion. This can take varying amounts of time
- # depending on the RNG, but it should be fairly speedy.
- # XXX patch the RNG so we can do this deterministically
+ # Run to completion. This is still done with a live RNG, but we
+ # patch it here for determinism.
+ self.hook.r.seed(42)
while self.hook.tracked:
self.hook.process()
self.assertEqual(chunk.get_block((1, 1, 1)), blocks["grass"].slot)
self.assertEqual(chunk.get_block((2, 1, 1)), blocks["grass"].slot)
self.assertEqual(chunk.get_block((1, 1, 2)), blocks["grass"].slot)
self.assertEqual(chunk.get_block((2, 1, 2)), blocks["grass"].slot)
|
bravoserver/bravo
|
4085b138b0344ff41452906c6ad047c50a810602
|
beta: Update from protocol 61 to 74.
|
diff --git a/bravo/beta/packets.py b/bravo/beta/packets.py
index 259c494..a0b512c 100644
--- a/bravo/beta/packets.py
+++ b/bravo/beta/packets.py
@@ -1,1043 +1,1063 @@
from collections import namedtuple
from construct import Struct, Container, Embed, Enum, MetaField
from construct import MetaArray, If, Switch, Const, Peek, Magic
from construct import OptionalGreedyRange, RepeatUntil
from construct import Flag, PascalString, Adapter
from construct import UBInt8, UBInt16, UBInt32, UBInt64
from construct import SBInt8, SBInt16, SBInt32
from construct import BFloat32, BFloat64
from construct import BitStruct, BitField
from construct import StringAdapter, LengthValueAdapter, Sequence
from construct import ConstructError
def IPacket(object):
"""
Interface for packets.
"""
def parse(buf, offset):
"""
Parse a packet out of the given buffer, starting at the given offset.
If the parse is successful, returns a tuple of the parsed packet and
the next packet offset in the buffer.
If the parse fails due to insufficient data, returns a tuple of None
and the amount of data required before the parse can be retried.
Exceptions may be raised if the parser finds invalid data.
"""
def simple(name, fmt, *args):
"""
Make a customized namedtuple representing a simple, primitive packet.
"""
from struct import Struct
s = Struct(fmt)
@classmethod
def parse(cls, buf, offset):
if len(buf) >= s.size + offset:
unpacked = s.unpack_from(buf, offset)
return cls(*unpacked), s.size + offset
else:
return None, s.size - len(buf)
def build(self):
return s.pack(*self)
methods = {
"parse": parse,
"build": build,
}
return type(name, (namedtuple(name, *args),), methods)
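The `simple()` factory above encodes the parse contract stated in `IPacket`: on success, return the packet plus the offset just past it; on short data, return `None` plus the byte count still needed. A sketch of the same contract for one hypothetical fixed-size packet, using only `struct` (the `Ping`/`>I` layout here is illustrative, not taken from the packet table below):

```python
import struct
from collections import namedtuple

Ping = namedtuple("Ping", "pid")
_ping = struct.Struct(">I")  # one unsigned big-endian 32-bit field

def parse_ping(buf, offset):
    # Success: (packet, offset just past it). Short data: (None, bytes
    # needed), mirroring the behavior of the simple() factory above.
    if len(buf) >= _ping.size + offset:
        return Ping(*_ping.unpack_from(buf, offset)), _ping.size + offset
    return None, _ping.size - len(buf)
```

A caller can loop on a growing buffer, retrying whenever the second element says more bytes are required.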
DUMP_ALL_PACKETS = False
# Strings.
# This one is a UCS2 string, which effectively decodes single writeChar()
# invocations. We need to import the encoding for it first, though.
from bravo.encodings import ucs2
from codecs import register
register(ucs2)
class DoubleAdapter(LengthValueAdapter):
def _encode(self, obj, context):
return len(obj) / 2, obj
def AlphaString(name):
return StringAdapter(
DoubleAdapter(
Sequence(name,
UBInt16("length"),
MetaField("data", lambda ctx: ctx["length"] * 2),
)
),
encoding="ucs2",
)
# Boolean converter.
def Bool(*args, **kwargs):
return Flag(*args, default=True, **kwargs)
# Flying, position, and orientation, reused in several places.
grounded = Struct("grounded", UBInt8("grounded"))
position = Struct("position",
BFloat64("x"),
BFloat64("y"),
BFloat64("stance"),
BFloat64("z")
)
orientation = Struct("orientation", BFloat32("rotation"), BFloat32("pitch"))
# TODO: this must be replaced with 'slot' (see below)
# Notchian item packing (slot data)
items = Struct("items",
SBInt16("primary"),
If(lambda context: context["primary"] >= 0,
Embed(Struct("item_information",
UBInt8("count"),
UBInt16("secondary"),
Magic("\xff\xff"),
)),
),
)
Speed = namedtuple('speed', 'x y z')
class Slot(object):
def __init__(self, item_id=-1, count=1, damage=0, nbt=None):
self.item_id = item_id
self.count = count
self.damage = damage
# TODO: Implement packing/unpacking of gzipped NBT data
self.nbt = nbt
@classmethod
def fromItem(cls, item, count):
return cls(item[0], count, item[1])
@property
def is_empty(self):
return self.item_id == -1
def __len__(self):
return 0 if self.nbt is None else len(self.nbt)
def __repr__(self):
from bravo.blocks import items
if self.is_empty:
return 'Slot()'
elif len(self):
return 'Slot(%s, count=%d, damage=%d, +nbt:%dB)' % (
str(items[self.item_id]), self.count, self.damage, len(self)
)
else:
return 'Slot(%s, count=%d, damage=%d)' % (
str(items[self.item_id]), self.count, self.damage
)
def __eq__(self, other):
return (self.item_id == other.item_id and
self.count == other.count and
self.damage == other.damage and
self.nbt == other.nbt)
class SlotAdapter(Adapter):
def _decode(self, obj, context):
if obj.item_id == -1:
s = Slot(obj.item_id)
else:
s = Slot(obj.item_id, obj.count, obj.damage, obj.nbt)
return s
def _encode(self, obj, context):
if not isinstance(obj, Slot):
raise ConstructError('Slot object expected')
if obj.is_empty:
return Container(item_id=-1)
else:
return Container(item_id=obj.item_id, count=obj.count, damage=obj.damage,
nbt_len=len(obj) if len(obj) else -1, nbt=obj.nbt)
slot = SlotAdapter(
Struct("slot",
SBInt16("item_id"),
If(lambda context: context["item_id"] >= 0,
Embed(Struct("item_information",
UBInt8("count"),
UBInt16("damage"),
SBInt16("nbt_len"),
If(lambda context: context["nbt_len"] >= 0,
MetaField("nbt", lambda ctx: ctx["nbt_len"])
)
)),
)
)
)
Metadata = namedtuple("Metadata", "type value")
metadata_types = ["byte", "short", "int", "float", "string", "slot", "coords"]
# Metadata adaptor.
class MetadataAdapter(Adapter):
def _decode(self, obj, context):
d = {}
for m in obj.data:
d[m.id.key] = Metadata(metadata_types[m.id.type], m.value)
return d
def _encode(self, obj, context):
c = Container(data=[], terminator=None)
for k, v in obj.iteritems():
t, value = v
d = Container(
id=Container(type=metadata_types.index(t), key=k),
value=value,
peeked=None)
c.data.append(d)
if c.data:
c.data[-1].peeked = 127
else:
c.data.append(Container(id=Container(first=0, second=0), value=0,
peeked=127))
return c
# Metadata inner container.
metadata_switch = {
0: UBInt8("value"),
1: UBInt16("value"),
2: UBInt32("value"),
3: BFloat32("value"),
4: AlphaString("value"),
5: slot,
6: Struct("coords",
UBInt32("x"),
UBInt32("y"),
UBInt32("z"),
),
}
# Metadata subconstruct.
metadata = MetadataAdapter(
Struct("metadata",
RepeatUntil(lambda obj, context: obj["peeked"] == 0x7f,
Struct("data",
BitStruct("id",
BitField("type", 3),
BitField("key", 5),
),
Switch("value", lambda context: context["id"]["type"],
metadata_switch),
Peek(UBInt8("peeked")),
),
),
Const(UBInt8("terminator"), 0x7f),
),
)
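The `BitStruct` inside `metadata` packs a 3-bit type (an index into `metadata_types`) and a 5-bit key into a single byte; the 0x7f terminator is what `RepeatUntil` peeks for. The bit layout can be sketched in plain Python:

```python
def pack_metadata_id(mtype, key):
    # High 3 bits: type index into metadata_types; low 5 bits: key.
    assert 0 <= mtype < 8 and 0 <= key < 32
    return (mtype << 5) | key

def unpack_metadata_id(byte):
    """Return (type, key) from one packed id byte."""
    return byte >> 5, byte & 0x1f
```

Note that the 0x7f terminator byte decodes as type 3, key 31 under this layout, which is why it is filtered out by the `RepeatUntil`/`Const` pair rather than treated as a real entry.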
# Build faces, used during dig and build.
faces = {
"noop": -1,
"-y": 0,
"+y": 1,
"-z": 2,
"+z": 3,
"-x": 4,
"+x": 5,
}
face = Enum(SBInt8("face"), **faces)
# World dimension.
dimensions = {
"earth": 0,
"sky": 1,
"nether": 255,
}
dimension = Enum(UBInt8("dimension"), **dimensions)
# Difficulty levels
difficulties = {
"peaceful": 0,
"easy": 1,
"normal": 2,
"hard": 3,
}
difficulty = Enum(UBInt8("difficulty"), **difficulties)
modes = {
"survival": 0,
"creative": 1,
"adventure": 2,
}
mode = Enum(UBInt8("mode"), **modes)
# Possible effects.
# XXX these names aren't really canonized yet
effect = Enum(UBInt8("effect"),
move_fast=1,
move_slow=2,
dig_fast=3,
dig_slow=4,
damage_boost=5,
heal=6,
harm=7,
jump=8,
confusion=9,
regenerate=10,
resistance=11,
fire_resistance=12,
water_resistance=13,
invisibility=14,
blindness=15,
night_vision=16,
hunger=17,
weakness=18,
poison=19,
wither=20,
)
# The actual packet list.
packets = {
0x00: Struct("ping",
UBInt32("pid"),
),
0x01: Struct("login",
# Player Entity ID (random number generated by the server)
UBInt32("eid"),
# default, flat, largeBiomes
AlphaString("leveltype"),
mode,
dimension,
difficulty,
UBInt8("unused"),
UBInt8("maxplayers"),
),
0x02: Struct("handshake",
UBInt8("protocol"),
AlphaString("username"),
AlphaString("host"),
UBInt32("port"),
),
0x03: Struct("chat",
AlphaString("message"),
),
0x04: Struct("time",
# Total Ticks
UBInt64("timestamp"),
# Time of day
UBInt64("time"),
),
0x05: Struct("entity-equipment",
UBInt32("eid"),
UBInt16("slot"),
Embed(items),
),
0x06: Struct("spawn",
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
),
0x07: Struct("use",
UBInt32("eid"),
UBInt32("target"),
UBInt8("button"),
),
0x08: Struct("health",
- UBInt16("hp"),
+ BFloat32("hp"),
UBInt16("fp"),
BFloat32("saturation"),
),
0x09: Struct("respawn",
dimension,
difficulty,
mode,
UBInt16("height"),
AlphaString("leveltype"),
),
0x0a: grounded,
0x0b: Struct("position",
position,
grounded
),
0x0c: Struct("orientation",
orientation,
grounded
),
# TODO: Differ between client and server 'position'
0x0d: Struct("location",
position,
orientation,
grounded
),
0x0e: Struct("digging",
Enum(UBInt8("state"),
started=0,
cancelled=1,
stopped=2,
checked=3,
dropped=4,
# Also eating
shooting=5,
),
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
face,
),
0x0f: Struct("build",
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
face,
Embed(items),
UBInt8("cursorx"),
UBInt8("cursory"),
UBInt8("cursorz"),
),
# Hold Item Change
0x10: Struct("equip",
# Only 0-8
UBInt16("slot"),
),
0x11: Struct("bed",
UBInt32("eid"),
UBInt8("unknown"),
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
),
0x12: Struct("animate",
UBInt32("eid"),
Enum(UBInt8("animation"),
noop=0,
arm=1,
hit=2,
leave_bed=3,
eat=5,
unknown=102,
crouch=104,
uncrouch=105,
),
),
0x13: Struct("action",
UBInt32("eid"),
Enum(UBInt8("action"),
crouch=1,
uncrouch=2,
leave_bed=3,
start_sprint=4,
stop_sprint=5,
),
+ UBInt32("unknown"),
),
0x14: Struct("player",
UBInt32("eid"),
AlphaString("username"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
UBInt8("yaw"),
UBInt8("pitch"),
# 0 For none, unlike other packets
# -1 crashes clients
SBInt16("item"),
metadata,
),
0x16: Struct("collect",
UBInt32("eid"),
UBInt32("destination"),
),
# Object/Vehicle
0x17: Struct("object", # XXX: was 'vehicle'!
UBInt32("eid"),
Enum(UBInt8("type"), # See http://wiki.vg/Entities#Objects
boat=1,
item_stack=2,
minecart=10,
storage_cart=11,
powered_cart=12,
tnt=50,
ender_crystal=51,
arrow=60,
snowball=61,
egg=62,
thrown_enderpearl=65,
wither_skull=66,
falling_block=70,
frames=71,
ender_eye=72,
thrown_potion=73,
dragon_egg=74,
thrown_xp_bottle=75,
fishing_float=90,
),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
UBInt8("pitch"),
UBInt8("yaw"),
SBInt32("data"), # See http://www.wiki.vg/Object_Data
If(lambda context: context["data"] != 0,
Struct("speed",
SBInt16("x"),
SBInt16("y"),
SBInt16("z"),
)
),
),
0x18: Struct("mob",
UBInt32("eid"),
Enum(UBInt8("type"), **{
"Creeper": 50,
"Skeleton": 51,
"Spider": 52,
"GiantZombie": 53,
"Zombie": 54,
"Slime": 55,
"Ghast": 56,
"ZombiePig": 57,
"Enderman": 58,
"CaveSpider": 59,
"Silverfish": 60,
"Blaze": 61,
"MagmaCube": 62,
"EnderDragon": 63,
"Wither": 64,
"Bat": 65,
"Witch": 66,
"Pig": 90,
"Sheep": 91,
"Cow": 92,
"Chicken": 93,
"Squid": 94,
"Wolf": 95,
"Mooshroom": 96,
"Snowman": 97,
"Ocelot": 98,
"IronGolem": 99,
"Villager": 120
}),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
SBInt8("yaw"),
SBInt8("pitch"),
SBInt8("head_yaw"),
SBInt16("vx"),
SBInt16("vy"),
SBInt16("vz"),
metadata,
),
0x19: Struct("painting",
UBInt32("eid"),
AlphaString("title"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
face,
),
0x1a: Struct("experience",
UBInt32("eid"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
UBInt16("quantity"),
),
+ 0x1b: Struct("steer",
+ BFloat32("first"),
+ BFloat32("second"),
+ Bool("third"),
+ Bool("fourth"),
+ ),
0x1c: Struct("velocity",
UBInt32("eid"),
SBInt16("dx"),
SBInt16("dy"),
SBInt16("dz"),
),
0x1d: Struct("destroy",
UBInt8("count"),
MetaArray(lambda context: context["count"], UBInt32("eid")),
),
0x1e: Struct("create",
UBInt32("eid"),
),
0x1f: Struct("entity-position",
UBInt32("eid"),
SBInt8("dx"),
SBInt8("dy"),
SBInt8("dz")
),
0x20: Struct("entity-orientation",
UBInt32("eid"),
UBInt8("yaw"),
UBInt8("pitch")
),
0x21: Struct("entity-location",
UBInt32("eid"),
SBInt8("dx"),
SBInt8("dy"),
SBInt8("dz"),
UBInt8("yaw"),
UBInt8("pitch")
),
0x22: Struct("teleport",
UBInt32("eid"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
UBInt8("yaw"),
UBInt8("pitch"),
),
0x23: Struct("entity-head",
UBInt32("eid"),
UBInt8("yaw"),
),
0x26: Struct("status",
UBInt32("eid"),
Enum(UBInt8("status"),
damaged=2,
killed=3,
taming=6,
tamed=7,
drying=8,
eating=9,
sheep_eat=10,
golem_rose=11,
heart_particle=12,
angry_particle=13,
happy_particle=14,
magic_particle=15,
shaking=16,
firework=17,
),
),
0x27: Struct("attach",
UBInt32("eid"),
- # -1 for detatching
+ # XXX -1 for detaching
UBInt32("vid"),
+ UBInt8("unknown"),
),
0x28: Struct("metadata",
UBInt32("eid"),
metadata,
),
0x29: Struct("effect",
UBInt32("eid"),
effect,
UBInt8("amount"),
UBInt16("duration"),
),
0x2a: Struct("uneffect",
UBInt32("eid"),
effect,
),
0x2b: Struct("levelup",
BFloat32("current"),
UBInt16("level"),
UBInt16("total"),
),
+ # XXX 0x2c, server to client, needs to be implemented, needs special
+ # UUID-packing techniques
0x33: Struct("chunk",
SBInt32("x"),
SBInt32("z"),
Bool("continuous"),
UBInt16("primary"),
UBInt16("add"),
PascalString("data", length_field=UBInt32("length"), encoding="zlib"),
),
0x34: Struct("batch",
SBInt32("x"),
SBInt32("z"),
UBInt16("count"),
PascalString("data", length_field=UBInt32("length")),
),
0x35: Struct("block",
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
UBInt16("type"),
UBInt8("meta"),
),
# XXX This covers general tile actions, not just note blocks.
# TODO: Needs work
0x36: Struct("block-action",
SBInt32("x"),
SBInt16("y"),
SBInt32("z"),
UBInt8("byte1"),
UBInt8("byte2"),
UBInt16("blockid"),
),
0x37: Struct("block-break-anim",
UBInt32("eid"),
UBInt32("x"),
UBInt32("y"),
UBInt32("z"),
UBInt8("stage"),
),
# XXX Server -> Client. Use 0x33 instead.
0x38: Struct("bulk-chunk",
UBInt16("count"),
UBInt32("length"),
UBInt8("sky_light"),
MetaField("data", lambda ctx: ctx["length"]),
MetaArray(lambda context: context["count"],
Struct("metadata",
UBInt32("chunk_x"),
UBInt32("chunk_z"),
UBInt16("bitmap_primary"),
UBInt16("bitmap_secondary"),
)
)
),
# TODO: Needs work?
0x3c: Struct("explosion",
BFloat64("x"),
BFloat64("y"),
BFloat64("z"),
BFloat32("radius"),
UBInt32("count"),
MetaField("blocks", lambda context: context["count"] * 3),
BFloat32("motionx"),
BFloat32("motiony"),
BFloat32("motionz"),
),
0x3d: Struct("sound",
Enum(UBInt32("sid"),
click2=1000,
click1=1001,
bow_fire=1002,
door_toggle=1003,
extinguish=1004,
record_play=1005,
charge=1007,
fireball=1008,
zombie_wood=1010,
zombie_metal=1011,
zombie_break=1012,
wither=1013,
smoke=2000,
block_break=2001,
splash_potion=2002,
ender_eye=2003,
blaze=2004,
),
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
UBInt32("data"),
Bool("volume-mod"),
),
0x3e: Struct("named-sound",
AlphaString("name"),
UBInt32("x"),
UBInt32("y"),
UBInt32("z"),
BFloat32("volume"),
UBInt8("pitch"),
),
0x3f: Struct("particle",
AlphaString("name"),
BFloat32("x"),
BFloat32("y"),
BFloat32("z"),
BFloat32("x_offset"),
BFloat32("y_offset"),
BFloat32("z_offset"),
BFloat32("speed"),
UBInt32("count"),
),
0x46: Struct("state",
Enum(UBInt8("state"),
bad_bed=0,
start_rain=1,
stop_rain=2,
mode_change=3,
run_credits=4,
),
mode,
),
0x47: Struct("thunderbolt",
UBInt32("eid"),
UBInt8("gid"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
),
0x64: Struct("window-open",
UBInt8("wid"),
Enum(UBInt8("type"),
chest=0,
workbench=1,
furnace=2,
dispenser=3,
enchatment_table=4,
brewing_stand=5,
npc_trade=6,
beacon=7,
anvil=8,
hopper=9,
),
AlphaString("title"),
UBInt8("slots"),
UBInt8("use_title"),
+ # XXX iff type == 0xb (currently unknown) write an extra secret int
+ # here. WTF?
),
0x65: Struct("window-close",
UBInt8("wid"),
),
0x66: Struct("window-action",
UBInt8("wid"),
UBInt16("slot"),
UBInt8("button"),
UBInt16("token"),
UBInt8("shift"), # TODO: rename to 'mode'
Embed(items),
),
0x67: Struct("window-slot",
UBInt8("wid"),
UBInt16("slot"),
Embed(items),
),
0x68: Struct("inventory",
UBInt8("wid"),
UBInt16("length"),
MetaArray(lambda context: context["length"], items),
),
0x69: Struct("window-progress",
UBInt8("wid"),
UBInt16("bar"),
UBInt16("progress"),
),
0x6a: Struct("window-token",
UBInt8("wid"),
UBInt16("token"),
Bool("acknowledged"),
),
0x6b: Struct("window-creative",
UBInt16("slot"),
Embed(items),
),
0x6c: Struct("enchant",
UBInt8("wid"),
UBInt8("enchantment"),
),
0x82: Struct("sign",
SBInt32("x"),
UBInt16("y"),
SBInt32("z"),
AlphaString("line1"),
AlphaString("line2"),
AlphaString("line3"),
AlphaString("line4"),
),
0x83: Struct("map",
UBInt16("type"),
UBInt16("itemid"),
PascalString("data", length_field=UBInt16("length")),
),
0x84: Struct("tile-update",
SBInt32("x"),
UBInt16("y"),
SBInt32("z"),
UBInt8("action"),
PascalString("nbt_data", length_field=UBInt16("length")), # gzipped
),
+ 0x85: Struct("0x85",
+ UBInt8("first"),
+ UBInt32("second"),
+ UBInt32("third"),
+ UBInt32("fourth"),
+ ),
0xc8: Struct("statistics",
- UBInt32("sid"), # XXX I could be an Enum
- UBInt8("count"),
+ UBInt32("sid"), # XXX I should be an Enum!
+ UBInt32("count"),
),
0xc9: Struct("players",
AlphaString("name"),
Bool("online"),
UBInt16("ping"),
),
0xca: Struct("abilities",
UBInt8("flags"),
- UBInt8("fly-speed"),
- UBInt8("walk-speed"),
+ BFloat32("fly-speed"),
+ BFloat32("walk-speed"),
),
0xcb: Struct("tab",
AlphaString("autocomplete"),
),
0xcc: Struct("settings",
AlphaString("locale"),
UBInt8("distance"),
UBInt8("chat"),
difficulty,
Bool("cape"),
),
0xcd: Struct("statuses",
UBInt8("payload")
),
0xce: Struct("score_item",
AlphaString("name"),
AlphaString("value"),
Enum(UBInt8("action"),
create=0,
remove=1,
update=2,
),
),
0xcf: Struct("score_update",
AlphaString("item_name"),
UBInt8("remove"),
If(lambda context: context["remove"] == 0,
Embed(Struct("information",
AlphaString("score_name"),
UBInt32("value"),
))
),
),
0xd0: Struct("score_display",
Enum(UBInt8("position"),
as_list=0,
sidebar=1,
below_name=2,
),
AlphaString("score_name"),
),
0xd1: Struct("teams",
AlphaString("name"),
Enum(UBInt8("mode"),
team_created=0,
team_removed=1,
team_updates=2,
players_added=3,
players_removed=4,
),
If(lambda context: context["mode"] in ("team_created", "team_updated"),
Embed(Struct("team_info",
AlphaString("team_name"),
AlphaString("team_prefix"),
AlphaString("team_suffix"),
Enum(UBInt8("friendly_fire"),
off=0,
on=1,
invisibles=2,
),
))
),
If(lambda context: context["mode"] in ("team_created", "players_added", "players_removed"),
Embed(Struct("players_info",
UBInt16("count"),
MetaArray(lambda context: context["count"], AlphaString("player_names")),
))
),
),
0xfa: Struct("plugin-message",
AlphaString("channel"),
PascalString("data", length_field=UBInt16("length")),
),
0xfc: Struct("key-response",
PascalString("key", length_field=UBInt16("key-len")),
PascalString("token", length_field=UBInt16("token-len")),
),
0xfd: Struct("key-request",
AlphaString("server"),
PascalString("key", length_field=UBInt16("key-len")),
PascalString("token", length_field=UBInt16("token-len")),
),
+ # XXX changed structure, looks like more weird-ass
+ # pseudo-backwards-compatible imperative protocol bullshit
0xfe: Struct("poll", UBInt8("unused")),
# TODO: rename to 'kick'
0xff: Struct("error", AlphaString("message")),
}
packet_stream = Struct("packet_stream",
OptionalGreedyRange(
Struct("full_packet",
UBInt8("header"),
Switch("payload", lambda context: context["header"], packets),
),
),
OptionalGreedyRange(
UBInt8("leftovers"),
),
)
def parse_packets(bytestream):
"""
Opportunistically parse out as many packets as possible from a raw
bytestream.
Returns a tuple containing a list of unpacked packet containers, and any
leftover unparseable bytes.
"""
container = packet_stream.parse(bytestream)
l = [(i.header, i.payload) for i in container.full_packet]
leftovers = "".join(chr(i) for i in container.leftovers)
if DUMP_ALL_PACKETS:
for header, payload in l:
print "Parsed packet 0x%.2x" % header
print payload
return l, leftovers
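The contract of ``parse_packets`` above — consume as many whole packets as the buffer holds, and hand back any trailing partial bytes untouched — can be sketched without Construct. This uses a hypothetical fixed-size wire format (one header byte followed by a four-byte payload), not Bravo's real variable-length protocol:

```python
import struct

# Hypothetical wire format for illustration: 1 header byte, 4 payload bytes.
PAYLOAD_LEN = 4

def parse_packets_sketch(bytestream):
    """Parse as many complete packets as possible.

    Returns a (packets, leftovers) pair, mirroring parse_packets: packets
    is a list of (header, payload) tuples, leftovers is any trailing data
    too short to form a full packet.
    """
    packets = []
    while len(bytestream) >= 1 + PAYLOAD_LEN:
        header = struct.unpack(">B", bytestream[:1])[0]
        payload = bytestream[1:1 + PAYLOAD_LEN]
        packets.append((header, payload))
        bytestream = bytestream[1 + PAYLOAD_LEN:]
    # Incomplete trailing bytes are returned as-is, to be retried once
    # more data arrives -- the same role OptionalGreedyRange("leftovers")
    # plays in packet_stream.
    return packets, bytestream
```

The caller simply prepends the leftovers to the next chunk of received data, which is exactly what ``dataReceived`` does with ``self.buf``.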
incremental_packet_stream = Struct("incremental_packet_stream",
Struct("full_packet",
UBInt8("header"),
Switch("payload", lambda context: context["header"], packets),
),
OptionalGreedyRange(
UBInt8("leftovers"),
),
)
def parse_packets_incrementally(bytestream):
"""
Parse out packets one-by-one, yielding a tuple of packet header and packet
payload.
This function returns a generator.
This function will yield all valid packets in the bytestream up to the
first invalid packet.
:returns: a generator yielding tuples of headers and payloads
"""
while bytestream:
parsed = incremental_packet_stream.parse(bytestream)
header = parsed.full_packet.header
payload = parsed.full_packet.payload
bytestream = "".join(chr(i) for i in parsed.leftovers)
yield header, payload
packets_by_name = dict((v.name, k) for (k, v) in packets.iteritems())
def make_packet(packet, *args, **kwargs):
"""
Constructs a packet bytestream from a packet header and payload.
The payload should be passed as keyword arguments. Additional containers
or dictionaries to be added to the payload may be passed positionally, as
well.
"""
if packet not in packets_by_name:
print "Couldn't find packet name %s!" % packet
return ""
header = packets_by_name[packet]
for arg in args:
kwargs.update(dict(arg))
container = Container(**kwargs)
if DUMP_ALL_PACKETS:
print "Making packet <%s> (0x%.2x)" % (packet, header)
print container
payload = packets[header].build(container)
return chr(header) + payload
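The build side is the mirror image: resolve the packet name to its header byte through the inverted registry, build the payload, and prepend the header. A minimal sketch using a hypothetical registry of ``struct`` format strings (the names and formats here are illustrative, not Bravo's actual packet table):

```python
import struct

# Hypothetical registry: header byte -> (packet name, payload format).
PACKETS = {0x00: ("ping", ">I"), 0xff: ("error", ">h")}

# Invert it for name-based lookup, as packets_by_name does above.
PACKETS_BY_NAME = dict((name, (header, fmt))
                       for header, (name, fmt) in PACKETS.items())

def make_packet_sketch(name, *values):
    """Build header byte + payload, mirroring make_packet's lookup-then-build."""
    if name not in PACKETS_BY_NAME:
        # make_packet likewise degrades to an empty bytestream on a bad name.
        return b""
    header, fmt = PACKETS_BY_NAME[name]
    return struct.pack(">B", header) + struct.pack(fmt, *values)
```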
def make_error_packet(message):
"""
Convenience method to generate an error packet bytestream.
"""
return make_packet("error", message=message)
diff --git a/bravo/beta/protocol.py b/bravo/beta/protocol.py
index f825508..8980616 100644
--- a/bravo/beta/protocol.py
+++ b/bravo/beta/protocol.py
@@ -1,553 +1,553 @@
# vim: set fileencoding=utf8 :
from itertools import product, chain
from time import time
from urlparse import urlunparse
from twisted.internet import reactor
from twisted.internet.defer import (DeferredList, inlineCallbacks,
maybeDeferred, succeed)
from twisted.internet.protocol import Protocol, connectionDone
from twisted.internet.task import cooperate, deferLater, LoopingCall
from twisted.internet.task import TaskDone, TaskFailed
from twisted.protocols.policies import TimeoutMixin
from twisted.python import log
from twisted.web.client import getPage
from bravo import version
from bravo.beta.structures import BuildData, Settings
from bravo.blocks import blocks, items
from bravo.chunk import CHUNK_HEIGHT
from bravo.entity import Sign
from bravo.errors import BetaClientError, BuildError
from bravo.ibravo import (IChatCommand, IPreBuildHook, IPostBuildHook,
IWindowOpenHook, IWindowClickHook, IWindowCloseHook,
IPreDigHook, IDigHook, ISignHook, IUseHook)
from bravo.infini.factory import InfiniClientFactory
from bravo.inventory.windows import InventoryWindow
from bravo.location import Location, Orientation, Position
from bravo.motd import get_motd
from bravo.beta.packets import parse_packets, make_packet, make_error_packet
from bravo.plugin import retrieve_plugins
from bravo.policy.dig import dig_policies
from bravo.utilities.coords import adjust_coords_for_face, split_coords
from bravo.utilities.chat import complete, username_alternatives
from bravo.utilities.maths import circling, clamp, sorted_by_distance
from bravo.utilities.temporal import timestamp_from_clock
# States of the protocol.
(STATE_UNAUTHENTICATED, STATE_AUTHENTICATED, STATE_LOCATED) = range(3)
-SUPPORTED_PROTOCOL = 61
+SUPPORTED_PROTOCOL = 74
class BetaServerProtocol(object, Protocol, TimeoutMixin):
"""
The Minecraft Alpha/Beta server protocol.
This class is mostly designed to be a skeleton for featureful clients. It
tries hard to not step on the toes of potential subclasses.
"""
excess = ""
packet = None
state = STATE_UNAUTHENTICATED
buf = ""
parser = None
handler = None
player = None
username = None
settings = Settings()
motd = "Bravo Generic Beta Server"
_health = 20
_latency = 0
def __init__(self):
self.chunks = dict()
self.windows = {}
self.wid = 1
self.location = Location()
self.handlers = {
0x00: self.ping,
0x02: self.handshake,
0x03: self.chat,
0x07: self.use,
0x09: self.respawn,
0x0a: self.grounded,
0x0b: self.position,
0x0c: self.orientation,
0x0d: self.location_packet,
0x0e: self.digging,
0x0f: self.build,
0x10: self.equip,
0x12: self.animate,
0x13: self.action,
0x15: self.pickup,
0x65: self.wclose,
0x66: self.waction,
0x6a: self.wacknowledge,
0x6b: self.wcreative,
0x82: self.sign,
0xca: self.client_settings,
0xcb: self.complete,
0xcc: self.settings_packet,
0xfe: self.poll,
0xff: self.quit,
}
self._ping_loop = LoopingCall(self.update_ping)
self.setTimeout(30)
# Low-level packet handlers
# Try not to hook these if possible, since they offer no convenient
# abstractions or protections.
def ping(self, container):
"""
Hook for ping packets.
By default, this hook will examine the timestamps on incoming pings,
and use them to estimate the current latency of the connected client.
"""
now = timestamp_from_clock(reactor)
then = container.pid
self.latency = now - then
def handshake(self, container):
"""
Hook for handshake packets.
Override this to customize how logins are handled. By default, this
method will only confirm that the negotiated wire protocol is the
correct version, copy data out of the packet and onto the protocol,
and then run the ``authenticated`` callback.
This method will call the ``pre_handshake`` method hook prior to
logging in the client.
"""
self.username = container.username
if container.protocol < SUPPORTED_PROTOCOL:
# Kick old clients.
self.error("This server doesn't support your ancient client.")
return
elif container.protocol > SUPPORTED_PROTOCOL:
# Kick new clients.
self.error("This server doesn't support your newfangled client.")
return
log.msg("Handshaking with client, protocol version %d" %
container.protocol)
if not self.pre_handshake():
log.msg("Pre-handshake hook failed; kicking client")
self.error("You failed the pre-handshake hook.")
return
players = min(self.factory.limitConnections, 20)
self.write_packet("login", eid=self.eid, leveltype="default",
mode=self.factory.mode,
dimension=self.factory.world.dimension,
difficulty="peaceful", unused=0, maxplayers=players)
self.authenticated()
def pre_handshake(self):
"""
Whether this client should be logged in.
"""
return True
def chat(self, container):
"""
Hook for chat packets.
"""
def use(self, container):
"""
Hook for use packets.
"""
def respawn(self, container):
"""
Hook for respawn packets.
"""
def grounded(self, container):
"""
Hook for grounded packets.
"""
self.location.grounded = bool(container.grounded)
def position(self, container):
"""
Hook for position packets.
"""
if self.state != STATE_LOCATED:
log.msg("Ignoring unlocated position!")
return
self.grounded(container.grounded)
old_position = self.location.pos
position = Position.from_player(container.position.x,
container.position.y, container.position.z)
altered = False
dx, dy, dz = old_position - position
if any(abs(d) >= 64 for d in (dx, dy, dz)):
# Whoa, slow down there, cowboy. You're moving too fast. We're
# gonna ignore this position change completely, because it's
# either bogus or ignoring a recent teleport.
altered = True
else:
self.location.pos = position
self.location.stance = container.position.stance
# Sanitize location. This handles safety boundaries, illegal stance,
# etc.
altered = self.location.clamp() or altered
# If, for any reason, our opinion on where the client should be
# located is different than theirs, force them to conform to our point
# of view.
if altered:
log.msg("Not updating bogus position!")
self.update_location()
# If our position actually changed, fire the position change hook.
if old_position != position:
self.position_changed()
def orientation(self, container):
"""
Hook for orientation packets.
"""
self.grounded(container.grounded)
old_orientation = self.location.ori
orientation = Orientation.from_degs(container.orientation.rotation,
container.orientation.pitch)
self.location.ori = orientation
if old_orientation != orientation:
self.orientation_changed()
def location_packet(self, container):
"""
Hook for location packets.
"""
self.position(container)
self.orientation(container)
def digging(self, container):
"""
Hook for digging packets.
"""
def build(self, container):
"""
Hook for build packets.
"""
def equip(self, container):
"""
Hook for equip packets.
"""
def pickup(self, container):
"""
Hook for pickup packets.
"""
def animate(self, container):
"""
Hook for animate packets.
"""
def action(self, container):
"""
Hook for action packets.
"""
def wclose(self, container):
"""
Hook for wclose packets.
"""
def waction(self, container):
"""
Hook for waction packets.
"""
def wacknowledge(self, container):
"""
Hook for wacknowledge packets.
"""
def wcreative(self, container):
"""
Hook for creative inventory action packets.
"""
def sign(self, container):
"""
Hook for sign packets.
"""
def client_settings(self, container):
"""
Hook for interaction setting packets.
"""
self.settings.update_interaction(container)
def complete(self, container):
"""
Hook for tab-completion packets.
"""
def settings_packet(self, container):
"""
Hook for presentation setting packets.
"""
self.settings.update_presentation(container)
def poll(self, container):
"""
Hook for poll packets.
By default, queries the parent factory for some data, and relays it
in a specific format to the requester. The connection is then closed
at both ends. This functionality is used by Beta 1.8 clients to poll
servers for status.
"""
players = unicode(len(self.factory.protocols))
max_players = unicode(self.factory.limitConnections or 1000000)
data = [
u"§1",
unicode(SUPPORTED_PROTOCOL),
u"Bravo %s" % version,
self.motd,
players,
max_players,
]
response = u"\u0000".join(data)
self.error(response)
def quit(self, container):
"""
Hook for quit packets.
By default, merely logs the quit message and drops the connection.
Even if the connection is not dropped, it will be lost anyway since
the client will close the connection. It's better to explicitly let it
go here than to have zombie protocols.
"""
log.msg("Client is quitting: %s" % container.message)
self.transport.loseConnection()
# Twisted-level data handlers and methods
# Please don't override these needlessly, as they are pretty solid and
# shouldn't need to be touched.
def dataReceived(self, data):
self.buf += data
packets, self.buf = parse_packets(self.buf)
if packets:
self.resetTimeout()
for header, payload in packets:
if header in self.handlers:
d = maybeDeferred(self.handlers[header], payload)
@d.addErrback
def eb(failure):
log.err("Error while handling packet 0x%.2x" % header)
log.err(failure)
return None
else:
log.err("Didn't handle parseable packet 0x%.2x!" % header)
log.err(payload)
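The dispatch loop in ``dataReceived`` pairs a header-to-handler lookup with per-packet error isolation, so one misbehaving packet cannot take down the whole connection. A synchronous stand-in for the ``maybeDeferred``/errback pattern (the Twisted version defers the call and attaches the logging as an errback instead of using try/except):

```python
def dispatch(handlers, parsed_packets, log):
    """Dispatch (header, payload) pairs to handlers, logging failures
    and unhandled headers without aborting the loop."""
    for header, payload in parsed_packets:
        handler = handlers.get(header)
        if handler is None:
            log.append("unhandled 0x%02x" % header)
            continue
        try:
            handler(payload)
        except Exception as e:
            # Isolate the failure, as the errback does above.
            log.append("error in 0x%02x: %s" % (header, e))
```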
def connectionLost(self, reason=connectionDone):
if self._ping_loop.running:
self._ping_loop.stop()
def timeoutConnection(self):
self.error("Connection timed out")
# State-change callbacks
# Feel free to override these, but call them at some point.
def authenticated(self):
"""
Called when the client has successfully authenticated with the server.
"""
self.state = STATE_AUTHENTICATED
self._ping_loop.start(30)
# Event callbacks
# These are meant to be overridden.
def orientation_changed(self):
"""
Called when the client moves.
This callback is only for orientation, not position.
"""
pass
def position_changed(self):
"""
Called when the client moves.
This callback is only for position, not orientation.
"""
pass
# Convenience methods for consolidating code and expressing intent. I
# hear that these are occasionally useful. If a method in this section can
# be used, then *PLEASE* use it; not using it is the same as open-coding
# whatever you're doing, and only hurts in the long run.
def write_packet(self, header, **payload):
"""
Send a packet to the client.
"""
self.transport.write(make_packet(header, **payload))
def update_ping(self):
"""
Send a keepalive to the client.
"""
timestamp = timestamp_from_clock(reactor)
self.write_packet("ping", pid=timestamp)
def update_location(self):
"""
Send this client's location to the client.
Also let other clients know where this client is.
"""
# Don't bother trying to update things if the position's not yet
# synchronized. We could end up jettisoning them into the void.
if self.state != STATE_LOCATED:
return
x, y, z = self.location.pos
yaw, pitch = self.location.ori.to_fracs()
# Inform everybody of our new location.
packet = make_packet("teleport", eid=self.player.eid, x=x, y=y, z=z,
yaw=yaw, pitch=pitch)
self.factory.broadcast_for_others(packet, self)
# Inform ourselves of our new location.
packet = self.location.save_to_packet()
self.transport.write(packet)
def ascend(self, count):
"""
Ascend to the next XZ-plane.
``count`` is the number of ascensions to perform, and may be zero in
order to force this player to not be standing inside a block.
:returns: bool of whether the ascension was successful
This client must be located for this method to have any effect.
"""
if self.state != STATE_LOCATED:
return False
x, y, z = self.location.pos.to_block()
bigx, smallx, bigz, smallz = split_coords(x, z)
chunk = self.chunks[bigx, bigz]
column = [chunk.get_block((smallx, i, smallz))
for i in range(CHUNK_HEIGHT)]
# Special case: Ascend at most once, if the current spot isn't good.
if count == 0:
if (not column[y]) or column[y + 1] or column[y + 2]:
# Yeah, we're gonna need to move.
count += 1
else:
# Nope, we're fine where we are.
return True
for i in xrange(y, 255):
# Find the next spot above us which has a platform and two empty
# blocks of air.
if column[i] and (not column[i + 1]) and not column[i + 2]:
count -= 1
if not count:
break
else:
return False
self.location.pos = self.location.pos._replace(y=i * 32)
return True
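The column scan at the heart of ``ascend`` looks for the next standable spot: a solid block with two air blocks above it. Extracted as a standalone sketch (assuming, as the truthiness test above does, that block id 0 means air):

```python
def next_platform(column, y):
    """Return the lowest index i >= y where column[i] is solid and the
    two blocks above it are air, or None if no such spot exists.

    `column` is a list of block ids; 0 is treated as air.
    """
    for i in range(y, len(column) - 2):
        if column[i] and not column[i + 1] and not column[i + 2]:
            return i
    return None
```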
def error(self, message):
"""
Error out.
This method sends ``message`` to the client as a descriptive error
message, then closes the connection.
"""
log.msg("Error: %s" % message)
self.transport.write(make_error_packet(message))
self.transport.loseConnection()
def play_notes(self, notes):
"""
Play some music.
Send a sequence of notes to the player. ``notes`` is a finite iterable
of pairs of instruments and pitches.
There is no way to time notes; if staggered playback is desired (and
it usually is!), then ``play_notes()`` should be called repeatedly at
the appropriate times.
This method turns the block beneath the player into a note block,
plays the requested notes through it, then turns it back into the
original block, all without actually modifying the chunk.
"""
x, y, z = self.location.pos.to_block()
bravoserver/bravo
da7415796874942ad3d66fed456879fea04f3a71
world: As seasons change, apply the changed season to cached chunks.
diff --git a/bravo/world.py b/bravo/world.py
index cd5cb12..a4ea715 100644
--- a/bravo/world.py
+++ b/bravo/world.py
@@ -1,695 +1,710 @@
from array import array
from functools import wraps
-from itertools import product
+from itertools import imap, product
import random
import sys
import weakref
from twisted.internet import reactor
from twisted.internet.defer import (inlineCallbacks, maybeDeferred,
returnValue, succeed)
from twisted.internet.task import LoopingCall, coiterate
from twisted.python import log
from bravo.beta.structures import Level
from bravo.chunk import Chunk, CHUNK_HEIGHT
from bravo.entity import Player, Furnace
from bravo.errors import (ChunkNotLoaded, SerializerReadException,
SerializerWriteException)
from bravo.ibravo import ISerializer
from bravo.plugin import retrieve_named_plugins
from bravo.utilities.coords import split_coords
from bravo.utilities.temporal import PendingEvent
from bravo.mobmanager import MobManager
class ChunkCache(object):
"""
A cache which holds references to all chunks which should be held in
memory.
This cache remembers chunks that were recently used, that are in permanent
residency, and so forth. Its exact caching algorithm is currently null.
When chunks dirty themselves, they are expected to notify the cache, which
will then schedule an eviction for the chunk.
"""
def __init__(self):
self._perm = {}
self._dirty = {}
def pin(self, chunk):
self._perm[chunk.x, chunk.z] = chunk
def unpin(self, chunk):
del self._perm[chunk.x, chunk.z]
def put(self, chunk):
# XXX expand caching strategy
pass
def get(self, coords):
if coords in self._perm:
return self._perm[coords]
# Returns None if not found!
return self._dirty.get(coords)
def cleaned(self, chunk):
del self._dirty[chunk.x, chunk.z]
def dirtied(self, chunk):
self._dirty[chunk.x, chunk.z] = chunk
+ def iterperm(self):
+ return self._perm.itervalues()
+
def iterdirty(self):
return self._dirty.itervalues()
class ImpossibleCoordinates(Exception):
"""
A coordinate could not ever be valid.
"""
def coords_to_chunk(f):
"""
Automatically look up the chunk for the coordinates, and convert world
coordinates to chunk coordinates.
"""
@wraps(f)
def decorated(self, coords, *args, **kwargs):
x, y, z = coords
# Fail early if Y is OOB.
if not 0 <= y < CHUNK_HEIGHT:
raise ImpossibleCoordinates("Y value %d is impossible" % y)
bigx, smallx, bigz, smallz = split_coords(x, z)
d = self.request_chunk(bigx, bigz)
@d.addCallback
def cb(chunk):
return f(self, chunk, (smallx, y, smallz), *args, **kwargs)
return d
return decorated
def sync_coords_to_chunk(f):
"""
Either get a chunk for the coordinates, or raise an exception.
"""
@wraps(f)
def decorated(self, coords, *args, **kwargs):
x, y, z = coords
# Fail early if Y is OOB.
if not 0 <= y < CHUNK_HEIGHT:
raise ImpossibleCoordinates("Y value %d is impossible" % y)
bigx, smallx, bigz, smallz = split_coords(x, z)
bigcoords = bigx, bigz
chunk = self._cache.get(bigcoords)
if chunk is None:
raise ChunkNotLoaded("Chunk (%d, %d) isn't loaded" % bigcoords)
return f(self, chunk, (smallx, y, smallz), *args, **kwargs)
return decorated
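Both decorators lean on ``split_coords`` to turn world X/Z into a (chunk, intra-chunk offset) pair. With the conventional 16×16 chunk footprint this is just floor division and modulo, which Python's ``divmod`` gets right even for negative coordinates (a sketch of the idea, not Bravo's actual ``bravo.utilities.coords`` implementation):

```python
def split_coords_sketch(x, z):
    """Split world X/Z into chunk coordinates (bigx, bigz) and
    intra-chunk offsets (smallx, smallz), assuming 16-block chunks."""
    bigx, smallx = divmod(x, 16)
    bigz, smallz = divmod(z, 16)
    return bigx, smallx, bigz, smallz
```

Note that ``divmod(-1, 16)`` yields ``(-1, 15)``: block -1 lives at offset 15 of chunk -1, which is exactly the behavior chunk addressing needs.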
class World(object):
"""
Object representing a world on disk.
Worlds are composed of levels and chunks, each of which corresponds to
exactly one file on disk. Worlds also contain saved player data.
"""
factory = None
"""
The factory managing this world.
Worlds do not need to be owned by a factory, but will not callback to
surrounding objects without an owner.
"""
- season = None
+ _season = None
"""
The current `ISeason`.
"""
saving = True
"""
Whether objects belonging to this world may be written out to disk.
"""
async = False
"""
Whether this world is using multiprocessing methods to generate geometry.
"""
dimension = "earth"
"""
The world dimension. Valid values are earth, sky, and nether.
"""
level = Level(seed=0, spawn=(0, 0, 0), time=0)
"""
The initial level data.
"""
_cache = None
"""
The chunk cache.
"""
def __init__(self, config, name):
"""
:Parameters:
name : str
The configuration key to use to look up configuration data.
"""
self.config = config
self.config_name = "world %s" % name
self._pending_chunks = dict()
+ @property
+ def season(self):
+ return self._season
+
+ @season.setter
+ def season(self, value):
+ if self._season != value:
+ self._season = value
+ if self._cache is not None:
+ # Issue 388: Apply the season to the permanent cache.
+ coiterate(imap(value.transform, self._cache.iterperm()))
+
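The season setter above builds a lazy ``imap`` over the pinned chunks and hands it to ``coiterate``, which drives the iterator a few steps per reactor tick instead of transforming every chunk in one blocking pass. A pure-Python sketch of that lazy-map-then-drive pattern (dict stand-ins for chunks; ``coiterate`` replaced by a plain loop):

```python
def lazy_apply(transform, items):
    """Yield after transforming each item, so the caller controls pacing."""
    for item in items:
        yield transform(item)

chunks = [{"x": 0, "season": None}, {"x": 1, "season": None}]

def to_winter(chunk):
    chunk["season"] = "winter"

# Drive the iterator to completion. Twisted's coiterate would instead
# interleave these steps with other reactor work.
for _ in lazy_apply(to_winter, chunks):
    pass
```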
def connect(self):
"""
Connect to the world.
"""
world_url = self.config.get(self.config_name, "url")
world_sf_name = self.config.get(self.config_name, "serializer")
# Get the current serializer list, and attempt to connect our
# serializer of choice to our resource.
# This could fail. Each of these lines (well, only the first and
# third) could raise a variety of exceptions. They should *all* be
# fatal.
serializers = retrieve_named_plugins(ISerializer, [world_sf_name])
self.serializer = serializers[0]
self.serializer.connect(world_url)
log.msg("World connected on %s, using serializer %s" %
(world_url, self.serializer.name))
def start(self):
"""
Start managing a world.
Connect to the world and turn on all of the timed actions which
continuously manage the world.
"""
self.connect()
# Create our cache.
self._cache = ChunkCache()
# Pick a random number for the seed. Use the configured value if one
# is present.
seed = random.randint(0, sys.maxint)
seed = self.config.getintdefault(self.config_name, "seed", seed)
self.level = self.level._replace(seed=seed)
# Check if we should offload chunk requests to ampoule.
if self.config.getbooleandefault("bravo", "ampoule", False):
try:
import ampoule
if ampoule:
self.async = True
except ImportError:
pass
log.msg("World is %s" %
("read-write" if self.saving else "read-only"))
log.msg("Using Ampoule: %s" % self.async)
# First, try loading the level, to see if there's any data out there
# which we can use. If not, don't worry about it.
d = maybeDeferred(self.serializer.load_level)
@d.addCallback
def cb(level):
self.level = level
log.msg("Loaded level data!")
@d.addErrback
def sre(failure):
failure.trap(SerializerReadException)
log.msg("Had issues loading level data, continuing anyway...")
# And now save our level.
if self.saving:
self.serializer.save_level(self.level)
# Start up the permanent cache.
# has_option() is not exactly desirable, but it's appropriate here
# because we don't want to take any action if the key is unset.
if self.config.has_option(self.config_name, "perm_cache"):
cache_level = self.config.getint(self.config_name, "perm_cache")
self.enable_cache(cache_level)
self.chunk_management_loop = LoopingCall(self.flush_chunk)
self.chunk_management_loop.start(1)
# XXX Put this in init or here?
self.mob_manager = MobManager()
# XXX Put this in the managers constructor?
self.mob_manager.world = self
@inlineCallbacks
def stop(self):
"""
Stop managing the world.
This can be a time-consuming, blocking operation, while the world's
data is serialized.
Note to callers: If you want the world time to be accurate, don't
forget to write it back before calling this method!
:returns: A ``Deferred`` that fires after the world has stopped.
"""
self.chunk_management_loop.stop()
# Flush all dirty chunks to disk. Don't bother cleaning them off.
for chunk in self._cache.iterdirty():
yield self.save_chunk(chunk)
# Destroy the cache.
self._cache = None
# Save the level data.
yield maybeDeferred(self.serializer.save_level, self.level)
def enable_cache(self, size):
"""
Set the permanent cache size.
Changing the size of the cache sets off a series of events which will
empty or fill the cache to make it the proper size.
For reference, 3 is a large-enough size to completely satisfy the
Notchian client's login demands. 10 is enough to completely fill the
Notchian client's chunk buffer.
:param int size: The taxicab radius of the cache, in chunks
:returns: A ``Deferred`` which will fire when the cache has been
adjusted.
"""
log.msg("Setting cache size to %d, please hold..." % size)
assign = self._cache.pin
def worker(x, z):
log.msg("Adding %d, %d to cache..." % (x, z))
return self.request_chunk(x, z).addCallback(assign)
x = self.level.spawn[0] // 16
z = self.level.spawn[2] // 16
rx = xrange(x - size, x + size)
rz = xrange(z - size, z + size)
work = (worker(x, z) for x, z in product(rx, rz))
d = coiterate(work)
@d.addCallback
def notify(none):
log.msg("Cache size is now %d!" % size)
return d
def flush_chunk(self):
"""
Flush a dirty chunk.
This method will always block when there are dirty chunks.
"""
for chunk in self._cache.iterdirty():
# Save a single chunk, and add a callback to remove it from the
# cache when it's been cleaned.
d = self.save_chunk(chunk)
d.addCallback(self._cache.cleaned)
break
def save_off(self):
"""
Disable saving to disk.
This is useful for accessing the world on disk without Bravo
interfering, for backing up the world.
"""
if not self.saving:
return
self.chunk_management_loop.stop()
self.saving = False
def save_on(self):
"""
Enable saving to disk.
"""
if self.saving:
return
self.chunk_management_loop.start(1)
self.saving = True
def postprocess_chunk(self, chunk):
"""
Do a series of final steps to bring a chunk into the world.
This method might be called multiple times on a chunk, but it should
not be harmful to do so.
"""
# Apply the current season to the chunk.
if self.season:
self.season.transform(chunk)
# Since this chunk hasn't been given to any player yet, there's no
# conceivable way that any meaningful damage has been accumulated;
# anybody loading any part of this chunk will want the entire thing.
# Thus, it should start out undamaged.
chunk.clear_damage()
# Skip some of the spendier scans if we have no factory; for example,
# if we are generating chunks offline.
if not self.factory:
return chunk
# Register the chunk's entities with our parent factory.
for entity in chunk.entities:
if hasattr(entity, 'loop'):
log.msg("Started mob %s" % entity)
self.mob_manager.start_mob(entity)
else:
log.msg("Entity %s has no update loop" % entity)
self.factory.register_entity(entity)
# XXX why is this for furnaces only? :T
# Scan the chunk for burning furnaces
for coords, tile in chunk.tiles.iteritems():
# If the furnace was saved while burning ...
if type(tile) == Furnace and tile.burntime != 0:
x, y, z = coords
coords = chunk.x, x, chunk.z, z, y
# ... start its burning loop
reactor.callLater(2, tile.changed, self.factory, coords)
# Return the chunk, in case we are in a Deferred chain.
return chunk
@inlineCallbacks
def request_chunk(self, x, z):
"""
Request a ``Chunk`` to be delivered later.
:returns: ``Deferred`` that will be called with the ``Chunk``
"""
# First, try the cache.
cached = self._cache.get((x, z))
if cached is not None:
returnValue(cached)
# Is it pending?
if (x, z) in self._pending_chunks:
# Rig up another Deferred and wrap it up in a to-go box.
retval = yield self._pending_chunks[x, z].deferred()
returnValue(retval)
# Create a new chunk object, since the cache turned up empty.
try:
chunk = yield maybeDeferred(self.serializer.load_chunk, x, z)
except SerializerReadException:
# Looks like the chunk wasn't already on disk. Guess we're gonna
# need to keep going.
chunk = Chunk(x, z)
# Add in our magic dirtiness hook so that the cache can be aware of
# chunks who have been...naughty.
chunk.dirtied = self._cache.dirtied
if chunk.dirty:
# The chunk was already dirty!? Oh, naughty indeed!
self._cache.dirtied(chunk)
if chunk.populated:
self._cache.put(chunk)
self.postprocess_chunk(chunk)
if self.factory:
self.factory.scan_chunk(chunk)
returnValue(chunk)
if self.async:
from ampoule import deferToAMPProcess
from bravo.remote import MakeChunk
generators = [plugin.name for plugin in self.pipeline]
d = deferToAMPProcess(MakeChunk, x=x, z=z, seed=self.level.seed,
generators=generators)
# Get chunk data into our chunk object.
def fill_chunk(kwargs):
chunk.blocks = array("B")
chunk.blocks.fromstring(kwargs["blocks"])
chunk.heightmap = array("B")
chunk.heightmap.fromstring(kwargs["heightmap"])
chunk.metadata = array("B")
chunk.metadata.fromstring(kwargs["metadata"])
chunk.skylight = array("B")
chunk.skylight.fromstring(kwargs["skylight"])
chunk.blocklight = array("B")
chunk.blocklight.fromstring(kwargs["blocklight"])
return chunk
d.addCallback(fill_chunk)
else:
# Populate the chunk the slow way. :c
for stage in self.pipeline:
stage.populate(chunk, self.level.seed)
chunk.regenerate()
d = succeed(chunk)
# Set up our event and generate our return-value Deferred. It has to
be done early because PendingEvents only fire exactly once and it
# might fire immediately in certain cases.
pe = PendingEvent()
# This one is for our return value.
retval = pe.deferred()
# This one is for scanning the chunk for automatons.
if self.factory:
pe.deferred().addCallback(self.factory.scan_chunk)
self._pending_chunks[x, z] = pe
def pp(chunk):
chunk.populated = True
chunk.dirty = True
self.postprocess_chunk(chunk)
self._cache.dirtied(chunk)
del self._pending_chunks[x, z]
return chunk
# Set up callbacks.
d.addCallback(pp)
d.chainDeferred(pe)
# Because multiple people might be attached to this callback, we're
# going to do something magical here. We will yield a forked version
# of our Deferred. This means that we will wait right here, for a
# long, long time, before actually returning with the chunk, *but*,
# when we actually finish, we'll be ready to return the chunk
# immediately. Our caller cannot possibly care because they only see a
# Deferred either way.
retval = yield retval
returnValue(retval)
def save_chunk(self, chunk):
"""
Write a chunk to the serializer.
Note that this method does nothing when the given chunk is not dirty
or saving is off!
:returns: A ``Deferred`` which will fire after the chunk has been
saved with the chunk.
"""
if not chunk.dirty or not self.saving:
return succeed(chunk)
d = maybeDeferred(self.serializer.save_chunk, chunk)
@d.addCallback
def cb(none):
chunk.dirty = False
return chunk
@d.addErrback
def eb(failure):
failure.trap(SerializerWriteException)
log.msg("Couldn't write %r" % chunk)
return d
def load_player(self, username):
"""
Retrieve player data.
:returns: a ``Deferred`` that will be fired with a ``Player``
"""
# Get the player, possibly.
d = maybeDeferred(self.serializer.load_player, username)
@d.addErrback
def eb(failure):
failure.trap(SerializerReadException)
log.msg("Couldn't load player %r" % username)
# Make a player.
player = Player(username=username)
player.location.x = self.level.spawn[0]
player.location.y = self.level.spawn[1]
player.location.stance = self.level.spawn[1]
player.location.z = self.level.spawn[2]
return player
# This Deferred's good to go as-is.
return d
def save_player(self, username, player):
if self.saving:
self.serializer.save_player(player)
# World-level geometry access.
# These methods let external API users refrain from going through the
# standard motions of looking up and loading chunk information.
@coords_to_chunk
def get_block(self, chunk, coords):
"""
Get a block from an unknown chunk.
:returns: a ``Deferred`` with the requested value
"""
return chunk.get_block(coords)
@coords_to_chunk
def set_block(self, chunk, coords, value):
"""
Set a block in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.set_block(coords, value)
@coords_to_chunk
def get_metadata(self, chunk, coords):
"""
Get a block's metadata from an unknown chunk.
:returns: a ``Deferred`` with the requested value
"""
return chunk.get_metadata(coords)
@coords_to_chunk
def set_metadata(self, chunk, coords, value):
"""
Set a block's metadata in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.set_metadata(coords, value)
@coords_to_chunk
def destroy(self, chunk, coords):
"""
Destroy a block in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.destroy(coords)
@coords_to_chunk
def mark_dirty(self, chunk, coords):
"""
Mark an unknown chunk dirty.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.dirty = True
@sync_coords_to_chunk
def sync_get_block(self, chunk, coords):
"""
Get a block from an unknown chunk.
:returns: the requested block
"""
return chunk.get_block(coords)
@sync_coords_to_chunk
def sync_set_block(self, chunk, coords, value):
"""
Set a block in an unknown chunk.
:returns: None
"""
chunk.set_block(coords, value)
@sync_coords_to_chunk
def sync_get_metadata(self, chunk, coords):
"""
Get a block's metadata from an unknown chunk.
:returns: the requested metadata
"""
return chunk.get_metadata(coords)
@sync_coords_to_chunk
def sync_set_metadata(self, chunk, coords, value):
"""
Set a block's metadata in an unknown chunk.
:returns: None
"""
chunk.set_metadata(coords, value)
@sync_coords_to_chunk
def sync_destroy(self, chunk, coords):
"""
Destroy a block in an unknown chunk.
"""
from collections import defaultdict
from itertools import chain, product
from twisted.internet import reactor
from twisted.internet.interfaces import IPushProducer
from twisted.internet.protocol import Factory
from twisted.internet.task import LoopingCall
from twisted.python import log
from zope.interface import implements
from bravo.beta.packets import make_packet
from bravo.beta.protocol import BravoProtocol, KickedProtocol
from bravo.entity import entities
from bravo.ibravo import (ISortedPlugin, IAutomaton, ITerrainGenerator,
IUseHook, ISignHook, IPreDigHook, IDigHook,
IPreBuildHook, IPostBuildHook, IWindowOpenHook,
IWindowClickHook, IWindowCloseHook)
from bravo.location import Location
from bravo.plugin import retrieve_named_plugins, retrieve_sorted_plugins
from bravo.policy.packs import packs as available_packs
from bravo.policy.seasons import Spring, Winter
from bravo.utilities.chat import chat_name, sanitize_chat
from bravo.weather import WeatherVane
from bravo.world import World
(STATE_UNAUTHENTICATED, STATE_CHALLENGED, STATE_AUTHENTICATED,
STATE_LOCATED) = range(4)
circle = [(i, j)
for i, j in product(xrange(-5, 5), xrange(-5, 5))
if i**2 + j**2 <= 25
]
class BravoFactory(Factory):
"""
A ``Factory`` that creates ``BravoProtocol`` objects when connected to.
"""
implements(IPushProducer)
protocol = BravoProtocol
timestamp = None
time = 0
day = 0
eid = 1
interfaces = []
def __init__(self, config, name):
"""
Create a factory and world.
``name`` is the string used to look up factory-specific settings from
the configuration.
:param str name: internal name of this factory
"""
self.name = name
self.config = config
self.config_name = "world %s" % name
self.world = World(self.config, self.name)
self.world.factory = self
self.protocols = dict()
self.connectedIPs = defaultdict(int)
self.mode = self.config.get(self.config_name, "mode")
if self.mode not in ("creative", "survival"):
raise Exception("Unsupported mode %s" % self.mode)
self.limitConnections = self.config.getintdefault(self.config_name,
"limitConnections",
0)
self.limitPerIP = self.config.getintdefault(self.config_name,
"limitPerIP", 0)
self.vane = WeatherVane(self)
def startFactory(self):
log.msg("Initializing factory for world '%s'..." % self.name)
# Get our plugins set up.
self.register_plugins()
log.msg("Starting world...")
self.world.start()
log.msg("Starting timekeeping...")
self.timestamp = reactor.seconds()
self.time = self.world.level.time
self.update_season()
self.time_loop = LoopingCall(self.update_time)
self.time_loop.start(2)
log.msg("Starting entity updates...")
# Start automatons.
for automaton in self.automatons:
automaton.start()
self.chat_consumers = set()
log.msg("Factory successfully initialized for world '%s'!" % self.name)
def stopFactory(self):
"""
Called before factory stops listening on ports. Used to perform
shutdown tasks.
"""
log.msg("Shutting down world...")
# Stop automatons. Technically, they may not actually halt until their
# next iteration, but that is close enough for us, probably.
# Automatons are contracted to not access the world after stop() is
# called.
for automaton in self.automatons:
automaton.stop()
# Evict plugins as soon as possible. Can't be done before stopping
# automatons.
self.unregister_plugins()
self.time_loop.stop()
# Write back current world time. This must be done before stopping the
# world.
self.world.time = self.time
# And now stop the world.
self.world.stop()
log.msg("World data saved!")
def buildProtocol(self, addr):
"""
Create a protocol.
This overriden method provides early player entity registration, as a
solution to the username/entity race that occurs on login.
"""
banned = self.world.serializer.load_plugin_data("banned_ips")
# Do IP bans first.
for ip in banned.split():
if addr.host == ip:
# Use KickedProtocol with extreme prejudice.
log.msg("Kicking banned IP %s" % addr.host)
p = KickedProtocol("Sorry, but your IP address is banned.")
p.factory = self
return p
# We are ignoring values less than 1, but making sure not to go over
# the connection limit.
if (self.limitConnections
and len(self.protocols) >= self.limitConnections):
log.msg("Reached maximum players, turning %s away." % addr.host)
p = KickedProtocol("The player limit has already been reached."
" Please try again later.")
p.factory = self
return p
# Do our connection-per-IP check.
if (self.limitPerIP and
self.connectedIPs[addr.host] >= self.limitPerIP):
log.msg("At maximum connections for %s already, dropping." % addr.host)
p = KickedProtocol("There are too many players connected from this IP.")
p.factory = self
return p
else:
self.connectedIPs[addr.host] += 1
# If the player wasn't kicked, let's continue!
log.msg("Starting connection for %s" % addr)
p = self.protocol(self.config, self.name)
p.host = addr.host
p.factory = self
self.register_entity(p)
# Copy our hooks to the protocol.
p.register_hooks()
return p
def teardown_protocol(self, protocol):
"""
Do internal bookkeeping on behalf of a protocol which has been
disconnected.
Did you know that "bookkeeping" is one of the few words in English
which has three pairs of double letters in a row?
"""
username = protocol.username
host = protocol.host
if username in self.protocols:
del self.protocols[username]
self.connectedIPs[host] -= 1
def set_username(self, protocol, username):
"""
Attempt to set a new username for a protocol.
:returns: whether the username was changed
"""
# If the username's already taken, refuse it.
if username in self.protocols:
return False
if protocol.username in self.protocols:
# This protocol's known under another name, so remove it.
del self.protocols[protocol.username]
# Set the username.
self.protocols[username] = protocol
protocol.username = username
return True
def register_plugins(self):
"""
Setup plugin hooks.
"""
log.msg("Registering client plugin hooks...")
plugin_types = {
"automatons": IAutomaton,
"generators": ITerrainGenerator,
"open_hooks": IWindowOpenHook,
"click_hooks": IWindowClickHook,
"close_hooks": IWindowCloseHook,
"pre_build_hooks": IPreBuildHook,
"post_build_hooks": IPostBuildHook,
"pre_dig_hooks": IPreDigHook,
"dig_hooks": IDigHook,
"sign_hooks": ISignHook,
"use_hooks": IUseHook,
}
packs = self.config.getlistdefault(self.config_name, "packs", [])
try:
packs = [available_packs[pack] for pack in packs]
except KeyError, e:
raise Exception("Couldn't find plugin pack %s" % e.args)
for t, interface in plugin_types.iteritems():
l = self.config.getlistdefault(self.config_name, t, [])
# Grab extra plugins from the pack. Order doesn't really matter
# since the plugin loader sorts things anyway.
for pack in packs:
if t in pack:
l += pack[t]
# Hax. :T
if t == "generators":
plugins = retrieve_sorted_plugins(interface, l)
elif issubclass(interface, ISortedPlugin):
plugins = retrieve_sorted_plugins(interface, l, factory=self)
else:
plugins = retrieve_named_plugins(interface, l, factory=self)
log.msg("Using %s: %s" % (t.replace("_", " "),
", ".join(plugin.name for plugin in plugins)))
setattr(self, t, plugins)
# Deal with seasons.
seasons = self.config.getlistdefault(self.config_name, "seasons", [])
for pack in packs:
if "seasons" in pack:
seasons += pack["seasons"]
self.seasons = []
if "spring" in seasons:
self.seasons.append(Spring())
if "winter" in seasons:
self.seasons.append(Winter())
# Assign generators to the world pipeline.
self.world.pipeline = self.generators
# Use hooks have special funkiness.
uh = self.use_hooks
self.use_hooks = defaultdict(list)
for plugin in uh:
for target in plugin.targets:
self.use_hooks[target].append(plugin)
def unregister_plugins(self):
log.msg("Unregistering client plugin hooks...")
for name in [
"automatons",
"generators",
"open_hooks",
"click_hooks",
"close_hooks",
"pre_build_hooks",
"post_build_hooks",
"pre_dig_hooks",
"dig_hooks",
"sign_hooks",
"use_hooks",
]:
delattr(self, name)
def create_entity(self, x, y, z, name, **kwargs):
"""
Spawn an entirely new entity at the specified block coordinates.
Handles entity registration as well as instantiation.
"""
bigx = x // 16
bigz = z // 16
location = Location.at_block(x, y, z)
entity = entities[name](eid=0, location=location, **kwargs)
self.register_entity(entity)
d = self.world.request_chunk(bigx, bigz)
@d.addCallback
def cb(chunk):
chunk.entities.add(entity)
log.msg("Created entity %s" % entity)
# XXX Maybe just send the entity object to the manager instead of
# the following?
if hasattr(entity,'loop'):
self.world.mob_manager.start_mob(entity)
return entity
def register_entity(self, entity):
"""
Registers an entity with this factory.
Registration is perhaps too fancy of a name; this method merely makes
sure that the entity has a unique and usable entity ID. In particular,
this method does *not* make the entity attached to the world, or
advertise its existence.
"""
if not entity.eid:
self.eid += 1
entity.eid = self.eid
log.msg("Registered entity %s" % entity)
def destroy_entity(self, entity):
"""
Destroy an entity.
The factory doesn't have to know about entities, but it is a good
place to put this logic.
"""
bigx, bigz = entity.location.pos.to_chunk()
d = self.world.request_chunk(bigx, bigz)
@d.addCallback
def cb(chunk):
chunk.entities.discard(entity)
chunk.dirty = True
log.msg("Destroyed entity %s" % entity)
def update_time(self):
"""
Update the in-game timer.
The timer goes from 0 to 24000, both of which are high noon. The clock
increments by 20 every second. Days are 20 minutes long.
The day clock is incremented every in-game day, which is every 20
minutes. The day clock goes from 0 to 360, which works out to a reset
once every 5 days. This is a Babylonian in-game year.
"""
t = reactor.seconds()
self.time += 20 * (t - self.timestamp)
self.timestamp = t
days, self.time = divmod(self.time, 24000)
if days:
self.day += days
self.day %= 360
self.update_season()
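The docstring above describes the clock arithmetic: 20 ticks per real second, 24000 ticks per in-game day, and a day counter that wraps every 360 days. A minimal, illustrative sketch of that arithmetic (not the Bravo implementation itself; the function name is hypothetical):

```python
def advance_clock(time, day, elapsed_seconds):
    """Return the new (time, day) after elapsed_seconds of real time.

    20 ticks pass per real second; 24000 ticks make one in-game day;
    the day counter wraps every 360 days (the 'Babylonian year').
    """
    time += 20 * elapsed_seconds
    days, time = divmod(time, 24000)
    day = (day + days) % 360
    return time, day

# One full in-game day is 20 minutes (1200 s) of real time.
print(advance_clock(0, 0, 1200))  # -> (0, 1)
```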
def broadcast_time(self):
packet = make_packet("time", timestamp=int(self.time))
self.broadcast(packet)
def update_season(self):
"""
Update the world's season.
"""
all_seasons = sorted(self.seasons, key=lambda s: s.day)
# Get all the seasons whose start date we have passed this year.
# We are looking for the season which is closest to our current day,
# without going over; I call this the Price-is-Right style of season
# handling. :3
past_seasons = [s for s in all_seasons if s.day <= self.day]
if past_seasons:
# The most recent one is the one we are in
self.world.season = past_seasons[-1]
elif all_seasons:
# We haven't passed any seasons yet this year, so grab the last one
# from 'last year'
self.world.season = all_seasons[-1]
else:
# No seasons enabled.
self.world.season = None
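The "Price-is-Right" selection above can be sketched in isolation. This is an illustrative stand-alone version (the `(start_day, name)` tuple representation and function name are assumptions for the sketch, not Bravo's actual season objects):

```python
def current_season(seasons, day):
    """Pick the season whose start day is closest to `day` without
    going over; if no season has started yet this year, wrap around
    to the final season of 'last year'. Returns None if no seasons
    are enabled. `seasons` is a list of (start_day, name) pairs."""
    ordered = sorted(seasons)
    past = [s for s in ordered if s[0] <= day]
    if past:
        # The most recent season we have passed is the one we are in.
        return past[-1][1]
    if ordered:
        # Wrap to the last season of the previous year.
        return ordered[-1][1]
    return None

print(current_season([(0, "spring"), (180, "winter")], 200))  # -> winter
```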
def chat(self, message):
"""
Relay chat messages.
Chat messages are sent to all connected clients, as well as to anybody
consuming this factory.
"""
for consumer in self.chat_consumers:
consumer.write((self, message))
# Prepare the message for chat packeting.
for user in self.protocols:
message = message.replace(user, chat_name(user))
message = sanitize_chat(message)
log.msg("Chat: %s" % message.encode("utf8"))
packet = make_packet("chat", message=message)
self.broadcast(packet)
def broadcast(self, packet):
"""
Broadcast a packet to all connected players.
"""
for player in self.protocols.itervalues():
player.transport.write(packet)
def broadcast_for_others(self, packet, protocol):
"""
Broadcast a packet to all players except the originating player.
Useful for certain packets like player entity spawns which should
never be reflexive.
"""
for player in self.protocols.itervalues():
if player is not protocol:
player.transport.write(packet)
def broadcast_for_chunk(self, packet, x, z):
"""
Broadcast a packet to all players that have a certain chunk loaded.
`x` and `z` are chunk coordinates, not block coordinates.
"""
for player in self.protocols.itervalues():
if (x, z) in player.chunks:
player.transport.write(packet)
def scan_chunk(self, chunk):
"""
Tell automatons about this chunk.
"""
# It's possible for there to be no automatons; this usually means that
# the factory is shutting down. We should be permissive and handle
# this case correctly.
if hasattr(self, "automatons"):
for automaton in self.automatons:
automaton.scan(chunk)
def flush_chunk(self, chunk):
"""
Flush a damaged chunk to all players that have it loaded.
"""
if chunk.is_damaged():
packet = chunk.get_damage_packet()
for player in self.protocols.itervalues():
if (chunk.x, chunk.z) in player.chunks:
player.transport.write(packet)
chunk.clear_damage()
def flush_all_chunks(self):
"""
Flush any damage anywhere in this world to all players.
This is a sledgehammer which should be used sparingly at best, and is
only well-suited to plugins which touch multiple chunks at once.
In other words, if I catch you using this in your plugin needlessly,
I'm gonna have a chat with you.
"""
for chunk in self.world._cache.iterdirty():
self.flush_chunk(chunk)
def give(self, coords, block, quantity):
"""
Spawn a pickup at the specified coordinates.
The coordinates need to be in pixels, not blocks.
If the size of the stack is too big, multiple stacks will be dropped.
:param tuple coords: coordinates, in pixels
:param tuple block: key of block or item to drop
:param int quantity: number of blocks to drop in the stack
"""
x, y, z = coords
while quantity > 0:
entity = self.create_entity(x // 32, y // 32, z // 32, "Item",
item=block, quantity=min(quantity, 64))
packet = entity.save_to_packet()
packet += make_packet("create", eid=entity.eid)
self.broadcast(packet)
quantity -= 64
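The loop in ``give()`` splits an oversized drop into stacks of at most 64 items. The same splitting logic, extracted as a small stand-alone sketch (the helper name is hypothetical):

```python
def split_stacks(quantity, stack_size=64):
    """Split a pickup quantity into stacks of at most stack_size,
    mirroring how give() drops multiple pickups for large amounts."""
    stacks = []
    while quantity > 0:
        stacks.append(min(quantity, stack_size))
        quantity -= stack_size
    return stacks

print(split_stacks(130))  # -> [64, 64, 2]
```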
def players_near(self, player, radius):
"""
Obtain other players within a radius of a given player.
Radius is measured in blocks.
"""
radius *= 32
for p in self.protocols.itervalues():
if p.player == player:
continue
distance = player.location.distance(p.location)
if distance <= radius:
yield p.player
def pauseProducing(self):
pass
def resumeProducing(self):
pass
def stopProducing(self):
pass
from textwrap import wrap
from twisted.internet import reactor
from zope.interface import implements
from bravo.beta.packets import make_packet
from bravo.blocks import parse_block
from bravo.ibravo import IChatCommand, IConsoleCommand
from bravo.plugin import retrieve_plugins
from bravo.policy.seasons import Spring, Winter
from bravo.utilities.temporal import split_time
def parse_player(factory, name):
if name in factory.protocols:
return factory.protocols[name]
else:
raise Exception("Couldn't find player %s" % name)
class Help(object):
"""
Provide helpful information about commands.
"""
implements(IChatCommand, IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def general_help(self, plugins):
"""
Return a list of commands.
"""
commands = [plugin.name for plugin in set(plugins.itervalues())]
commands.sort()
wrapped = wrap(", ".join(commands), 60)
help_text = [
"Use /help <command> for more information on a command.",
"List of commands:",
] + wrapped
return help_text
def specific_help(self, plugins, name):
"""
Return specific help about a single plugin.
"""
try:
plugin = plugins[name]
except KeyError:
return ("No such command!",)
help_text = [
"Usage: %s %s" % (plugin.name, plugin.usage),
]
if plugin.aliases:
help_text.append("Aliases: %s" % ", ".join(plugin.aliases))
help_text.append(plugin.__doc__)
return help_text
def chat_command(self, username, parameters):
plugins = retrieve_plugins(IChatCommand, factory=self.factory)
if parameters:
return self.specific_help(plugins, "".join(parameters))
else:
return self.general_help(plugins)
def console_command(self, parameters):
plugins = retrieve_plugins(IConsoleCommand, factory=self.factory)
if parameters:
return self.specific_help(plugins, "".join(parameters))
else:
return self.general_help(plugins)
name = "help"
aliases = tuple()
usage = ""
class List(object):
"""
List the currently connected players.
"""
implements(IChatCommand, IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def dispatch(self, factory):
yield "Connected players: %s" % (", ".join(
player for player in factory.protocols))
def chat_command(self, username, parameters):
for i in self.dispatch(self.factory):
yield i
def console_command(self, parameters):
for i in self.dispatch(self.factory):
yield i
name = "list"
aliases = ("playerlist",)
usage = ""
class Time(object):
"""
Obtain or change the current time and date.
"""
# XXX my code is all over the place; clean me up
implements(IChatCommand, IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def dispatch(self, factory):
hours, minutes = split_time(factory.time)
# If the factory's got seasons enabled, then the world will have
# a season, and we can examine it. Otherwise, just print the day as-is
# for the date.
season = factory.world.season
if season:
day_of_season = factory.day - season.day
while day_of_season < 0:
day_of_season += 360
date = "{0} ({1} {2})".format(factory.day, day_of_season,
season.name)
else:
date = "%d" % factory.day
return ("%02d:%02d, %s" % (hours, minutes, date),)
def chat_command(self, username, parameters):
if len(parameters) >= 1:
# Set the time
time = parameters[0]
if time == 'sunset':
time = 12000
elif time == 'sunrise':
time = 24000
elif ':' in time:
# Interpret it as a real-world esque time (24hr clock)
hours, minutes = time.split(':')
hours, minutes = int(hours), int(minutes)
# 24000 ticks / day = 1000 ticks / hour ~= 16.6 ticks / minute
time = (hours * 1000) + (minutes * 50 / 3)
time -= 6000 # to account for 24000 being high noon in minecraft.
if len(parameters) >= 2:
self.factory.day = int(parameters[1])
self.factory.time = int(time)
self.factory.update_time()
self.factory.update_season()
# Update the time for the clients
self.factory.broadcast_time()
# Tell the user the current time.
return self.dispatch(self.factory)
def console_command(self, parameters):
return self.dispatch(self.factory)
name = "time"
aliases = ("date",)
usage = "[time] [day]"
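The ``/time`` command above converts a 24-hour wall-clock time to game ticks (1000 ticks per hour, with 6000 ticks subtracted because tick 0/24000 is high noon). A sketch of that conversion, under the assumption that wrapping negative results with a modulo is acceptable (the original simply subtracts):

```python
def clock_to_ticks(hours, minutes):
    """Convert a 24-hour clock time to game ticks: 24000 ticks per
    day, 1000 ticks per hour (~16.6 per minute), shifted by 6000
    because tick 0 is high noon rather than midnight."""
    ticks = hours * 1000 + minutes * 50 // 3
    return (ticks - 6000) % 24000

print(clock_to_ticks(18, 0))  # 18:00 -> 12000, i.e. sunset
```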
class Say(object):
"""
Broadcast a message to everybody.
"""
implements(IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def console_command(self, parameters):
message = "[Server] %s" % " ".join(parameters)
yield message
packet = make_packet("chat", message=message)
self.factory.broadcast(packet)
name = "say"
aliases = tuple()
usage = "<message>"
class Give(object):
"""
Spawn block or item pickups near a player.
"""
implements(IChatCommand)
def __init__(self, factory):
self.factory = factory
def chat_command(self, username, parameters):
if len(parameters) == 0:
return ("Usage: /{0} {1}".format(self.name, self.usage),)
elif len(parameters) == 1:
block = parameters[0]
count = 1
elif len(parameters) == 2:
block = parameters[0]
count = parameters[1]
else:
block = " ".join(parameters[:-1])
count = parameters[-1]
player = parse_player(self.factory, username)
block = parse_block(block)
count = int(count)
# Get a location two blocks in front of the player.
dest = player.player.location.in_front_of(2)
dest.y += 1
coords = int(dest.x * 32), int(dest.y * 32), int(dest.z * 32)
self.factory.give(coords, block, count)
# Return an empty tuple for iteration
return tuple()
name = "give"
aliases = tuple()
usage = "<block> <quantity>"
class Quit(object):
"""
Gracefully shutdown the server.
"""
implements(IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def console_command(self, parameters):
# Let's shutdown!
message = "Server shutting down."
yield message
# Use an error packet to kick clients cleanly.
packet = make_packet("error", message=message)
self.factory.broadcast(packet)
yield "Saving all chunks to disk..."
for chunk in self.factory.world._cache.iterdirty():
yield self.factory.world.save_chunk(chunk)
yield "Halting."
reactor.stop()
name = "quit"
aliases = ("exit",)
usage = ""
class SaveAll(object):
"""
Save all world data to disk.
"""
implements(IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def console_command(self, parameters):
yield "Flushing all chunks..."
for chunk in self.factory.world._cache.iterdirty():
yield self.factory.world.save_chunk(chunk)
yield "Save complete!"
name = "save-all"
aliases = tuple()
usage = ""
class SaveOff(object):
"""
Disable saving world data to disk.
"""
implements(IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def console_command(self, parameters):
yield "Disabling saving..."
self.factory.world.save_off()
yield "Saving disabled. Currently running in memory."
name = "save-off"
aliases = tuple()
usage = ""
class SaveOn(object):
"""
Enable saving world data to disk.
"""
implements(IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def console_command(self, parameters):
yield "Enabling saving (this could take a bit)..."
self.factory.world.save_on()
yield "Saving enabled."
name = "save-on"
aliases = tuple()
usage = ""
class WriteConfig(object):
"""
Write configuration to disk.
"""
implements(IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def console_command(self, parameters):
with open("".join(parameters), "wb") as f:
self.factory.config.write(f)
yield "Configuration saved."
name = "write-config"
aliases = tuple()
usage = ""
class Season(object):
"""
Change the season.
This command fast-forwards the calendar to the first day of the requested
season.
"""
implements(IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def console_command(self, parameters):
wanted = " ".join(parameters)
if wanted == "spring":
season = Spring()
elif wanted == "winter":
season = Winter()
else:
yield "Couldn't find season %s" % wanted
return
msg = "Changing season to %s..." % wanted
yield msg
self.factory.day = season.day
self.factory.update_season()
yield "Season successfully changed!"
name = "season"
aliases = tuple()
usage = "<season>"
class Me(object):
"""
Emote.
"""
implements(IChatCommand)
def __init__(self, factory):
pass
def chat_command(self, username, parameters):
say = " ".join(parameters)
msg = "* %s %s" % (username, say)
return (msg,)
name = "me"
aliases = tuple()
usage = "<message>"
class Kick(object):
"""
Kick a player from the world.
With great power comes great responsibility; use this wisely.
"""
implements(IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def dispatch(self, parameters):
player = parse_player(self.factory, parameters[0])
if len(parameters) == 1:
msg = "%s has been kicked." % parameters[0]
elif len(parameters) > 1:
reason = " ".join(parameters[1:])
msg = "%s has been kicked for %s." % (parameters[0], reason)
packet = make_packet("error", message=msg)
player.transport.write(packet)
yield msg
def console_command(self, parameters):
for i in self.dispatch(parameters):
yield i
name = "kick"
aliases = tuple()
usage = "<player> [<reason>]"
class GetPos(object):
"""
Ascertain a player's location.
This command is identical to the command provided by Hey0.
"""
implements(IChatCommand)
def __init__(self, factory):
self.factory = factory
def chat_command(self, username, parameters):
player = parse_player(self.factory, username)
l = player.player.location
locMsg = "Your location is <%d, %d, %d>" % l.pos.to_block()
yield locMsg
name = "getpos"
aliases = tuple()
usage = ""
class Nick(object):
"""
Set a player's nickname.
"""
implements(IChatCommand)
def __init__(self, factory):
self.factory = factory
def chat_command(self, username, parameters):
player = parse_player(self.factory, username)
if len(parameters) == 0:
return ("Usage: /nick <nickname>",)
else:
new = parameters[0]
if self.factory.set_username(player, new):
return ("Changed nickname from %s to %s" % (username, new),)
else:
return ("Couldn't change nickname!",)
name = "nick"
aliases = tuple()
usage = "<nickname>"
from __future__ import division
from zope.interface import implements
from bravo.utilities.coords import polar_round_vector
from bravo.ibravo import IConsoleCommand, IChatCommand
# Trivial hello-world command.
# If this is ever modified, please also update the documentation;
# docs/extending.rst includes this verbatim in order to demonstrate authoring
# commands.
class Hello(object):
"""
Say hello to the world.
"""
implements(IChatCommand)
def chat_command(self, username, parameters):
greeting = "Hello, %s!" % username
yield greeting
name = "hello"
aliases = tuple()
usage = ""
class Meliae(object):
"""
Dump a Meliae snapshot to disk.
"""
implements(IConsoleCommand)
def console_command(self, parameters):
out = "".join(parameters)
try:
import meliae.scanner
meliae.scanner.dump_all_objects(out)
except ImportError:
raise Exception("Couldn't import meliae!")
except IOError:
raise Exception("Couldn't save to file %s!" % out)
return tuple()
name = "dump-memory"
aliases = tuple()
usage = "<filename>"
class Status(object):
"""
Print a short summary of the world's status.
"""
implements(IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def console_command(self, parameters):
protocol_count = len(self.factory.protocols)
yield "%d protocols connected" % protocol_count
for name, protocol in self.factory.protocols.iteritems():
count = len(protocol.chunks)
dirty = len([i for i in protocol.chunks.values() if i.dirty])
yield "%s: %d chunks (%d dirty)" % (name, count, dirty)
- chunk_count = len(self.factory.world.chunk_cache)
- dirty = len(self.factory.world.dirty_chunk_cache)
+ chunk_count = 0 # len(self.factory.world.chunk_cache)
+ dirty = len(self.factory.world._cache._dirty)
chunk_count += dirty
yield "World cache: %d chunks (%d dirty)" % (chunk_count, dirty)
name = "status"
aliases = tuple()
usage = ""
class Colors(object):
"""
Paint with all the colors of the wind.
"""
implements(IChatCommand)
def chat_command(self, username, parameters):
from bravo.utilities.chat import chat_colors
names = """black dblue dgreen dcyan dred dmagenta dorange gray dgray
blue green cyan red magenta yellow""".split()
for pair in zip(chat_colors, names):
yield "%s%s" % pair
name = "colors"
aliases = tuple()
usage = ""
class Rain(object):
"""
Perform a rain dance.
"""
# XXX I recommend that this touch the weather vane directly.
implements(IChatCommand)
def __init__(self, factory):
self.factory = factory
def chat_command(self, username, parameters):
from bravo.beta.packets import make_packet
arg = "".join(parameters)
if arg == "start":
self.factory.broadcast(make_packet("state", state="start_rain",
creative=False))
elif arg == "stop":
self.factory.broadcast(make_packet("state", state="stop_rain",
creative=False))
else:
return ("Couldn't understand you!",)
return ("*%s did the rain dance*" % (username),)
name = "rain"
aliases = tuple()
usage = "<state>"
class CreateMob(object):
"""
Create a mob.
"""
implements(IChatCommand)
def __init__(self, factory):
self.factory = factory
def chat_command(self, username, parameters):
make = True
position = self.factory.protocols[username].location
if len(parameters) == 1:
mob = parameters[0]
number = 1
elif len(parameters) == 2:
mob = parameters[0]
number = int(parameters[1])
else:
make = False
return ("Couldn't understand you!",)
if make:
# try:
for i in range(0,number):
print mob, number
entity = self.factory.create_entity(position.x, position.y,
position.z, mob)
self.factory.broadcast(entity.save_to_packet())
self.factory.world.mob_manager.start_mob(entity)
return ("Made mob!",)
# except:
# return ("Couldn't make mob!",)
name = "mob"
aliases = tuple()
usage = "<mob> [<count>]"
class CheckCoords(object):
"""
Check polar-rounded coordinate offsets by placing a test pattern of blocks.
"""
implements(IChatCommand)
def __init__(self, factory):
self.factory = factory
def chat_command(self, username, parameters):
offset = set()
calc_offset = set()
for x in range(-1,2):
for y in range(0,2):
for z in range(-1,2):
i = x/2
j = y
k = z/2
offset.add((i,j,k))
for i in offset:
calc_offset.add(polar_round_vector(i))
for i in calc_offset:
self.factory.world.sync_set_block(i,8)
print 'offset', offset
print 'offsetlist', calc_offset
return "Done"
name = "check"
aliases = tuple()
usage = ""
diff --git a/bravo/tests/test_world.py b/bravo/tests/test_world.py
index b50a2f4..a6b132f 100644
--- a/bravo/tests/test_world.py
+++ b/bravo/tests/test_world.py
@@ -1,295 +1,285 @@
from twisted.trial import unittest
from twisted.internet.defer import inlineCallbacks
from array import array
from itertools import product
import os
from bravo.config import BravoConfigParser
from bravo.errors import ChunkNotLoaded
from bravo.world import ChunkCache, World
class MockChunk(object):
def __init__(self, x, z):
self.x = x
self.z = z
class TestChunkCache(unittest.TestCase):
def test_pin_single(self):
cc = ChunkCache()
chunk = MockChunk(1, 2)
cc.pin(chunk)
self.assertIs(cc.get((1, 2)), chunk)
def test_dirty_single(self):
cc = ChunkCache()
chunk = MockChunk(1, 2)
cc.dirtied(chunk)
self.assertIs(cc.get((1, 2)), chunk)
def test_pin_dirty(self):
cc = ChunkCache()
chunk = MockChunk(1, 2)
cc.pin(chunk)
cc.dirtied(chunk)
cc.unpin(chunk)
self.assertIs(cc.get((1, 2)), chunk)
class TestWorldChunks(unittest.TestCase):
def setUp(self):
self.name = "unittest"
self.bcp = BravoConfigParser()
self.bcp.add_section("world unittest")
self.bcp.set("world unittest", "url", "")
self.bcp.set("world unittest", "serializer", "memory")
self.w = World(self.bcp, self.name)
self.w.pipeline = []
self.w.start()
def tearDown(self):
self.w.stop()
def test_trivial(self):
pass
@inlineCallbacks
def test_request_chunk_identity(self):
first = yield self.w.request_chunk(0, 0)
second = yield self.w.request_chunk(0, 0)
self.assertIs(first, second)
@inlineCallbacks
def test_request_chunk_cached_identity(self):
# Turn on the cache and get a few chunks in there, then request a
# chunk that is in the cache.
yield self.w.enable_cache(1)
first = yield self.w.request_chunk(0, 0)
second = yield self.w.request_chunk(0, 0)
self.assertIs(first, second)
@inlineCallbacks
def test_get_block(self):
chunk = yield self.w.request_chunk(0, 0)
# Fill the chunk with random stuff.
chunk.blocks = array("B")
chunk.blocks.fromstring(os.urandom(32768))
for x, y, z in product(xrange(2), repeat=3):
# This works because the chunk is at (0, 0) so the coords don't
# need to be adjusted.
block = yield self.w.get_block((x, y, z))
self.assertEqual(block, chunk.get_block((x, y, z)))
@inlineCallbacks
def test_get_metadata(self):
chunk = yield self.w.request_chunk(0, 0)
# Fill the chunk with random stuff.
chunk.metadata = array("B")
chunk.metadata.fromstring(os.urandom(32768))
for x, y, z in product(xrange(2), repeat=3):
# This works because the chunk is at (0, 0) so the coords don't
# need to be adjusted.
metadata = yield self.w.get_metadata((x, y, z))
self.assertEqual(metadata, chunk.get_metadata((x, y, z)))
@inlineCallbacks
def test_get_block_readback(self):
chunk = yield self.w.request_chunk(0, 0)
# Fill the chunk with random stuff.
chunk.blocks = array("B")
chunk.blocks.fromstring(os.urandom(32768))
# Evict the chunk and grab it again.
yield self.w.save_chunk(chunk)
del chunk
- self.w.chunk_cache.clear()
- self.w.dirty_chunk_cache.clear()
chunk = yield self.w.request_chunk(0, 0)
for x, y, z in product(xrange(2), repeat=3):
# This works because the chunk is at (0, 0) so the coords don't
# need to be adjusted.
block = yield self.w.get_block((x, y, z))
self.assertEqual(block, chunk.get_block((x, y, z)))
@inlineCallbacks
def test_get_block_readback_negative(self):
chunk = yield self.w.request_chunk(-1, -1)
# Fill the chunk with random stuff.
chunk.blocks = array("B")
chunk.blocks.fromstring(os.urandom(32768))
# Evict the chunk and grab it again.
yield self.w.save_chunk(chunk)
del chunk
- self.w.chunk_cache.clear()
- self.w.dirty_chunk_cache.clear()
chunk = yield self.w.request_chunk(-1, -1)
for x, y, z in product(xrange(2), repeat=3):
block = yield self.w.get_block((x - 16, y, z - 16))
self.assertEqual(block, chunk.get_block((x, y, z)))
@inlineCallbacks
def test_get_metadata_readback(self):
chunk = yield self.w.request_chunk(0, 0)
# Fill the chunk with random stuff.
chunk.metadata = array("B")
chunk.metadata.fromstring(os.urandom(32768))
# Evict the chunk and grab it again.
yield self.w.save_chunk(chunk)
del chunk
- self.w.chunk_cache.clear()
- self.w.dirty_chunk_cache.clear()
chunk = yield self.w.request_chunk(0, 0)
for x, y, z in product(xrange(2), repeat=3):
# This works because the chunk is at (0, 0) so the coords don't
# need to be adjusted.
metadata = yield self.w.get_metadata((x, y, z))
self.assertEqual(metadata, chunk.get_metadata((x, y, z)))
@inlineCallbacks
def test_world_level_mark_chunk_dirty(self):
chunk = yield self.w.request_chunk(0, 0)
# Reload chunk.
yield self.w.save_chunk(chunk)
del chunk
- self.w.chunk_cache.clear()
- self.w.dirty_chunk_cache.clear()
chunk = yield self.w.request_chunk(0, 0)
self.assertFalse(chunk.dirty)
self.w.mark_dirty((12, 64, 4))
chunk = yield self.w.request_chunk(0, 0)
self.assertTrue(chunk.dirty)
@inlineCallbacks
def test_world_level_mark_chunk_dirty_offset(self):
chunk = yield self.w.request_chunk(1, 2)
# Reload chunk.
yield self.w.save_chunk(chunk)
del chunk
- self.w.chunk_cache.clear()
- self.w.dirty_chunk_cache.clear()
chunk = yield self.w.request_chunk(1, 2)
self.assertFalse(chunk.dirty)
self.w.mark_dirty((29, 64, 43))
chunk = yield self.w.request_chunk(1, 2)
self.assertTrue(chunk.dirty)
@inlineCallbacks
def test_sync_get_block(self):
chunk = yield self.w.request_chunk(0, 0)
# Fill the chunk with random stuff.
chunk.blocks = array("B")
chunk.blocks.fromstring(os.urandom(32768))
for x, y, z in product(xrange(2), repeat=3):
# This works because the chunk is at (0, 0) so the coords don't
# need to be adjusted.
block = self.w.sync_get_block((x, y, z))
self.assertEqual(block, chunk.get_block((x, y, z)))
def test_sync_get_block_unloaded(self):
self.assertRaises(ChunkNotLoaded, self.w.sync_get_block, (0, 0, 0))
def test_sync_get_metadata_neighboring(self):
"""
Even if a neighboring chunk is loaded, the target chunk could still be
unloaded.
Test with sync_get_metadata() to increase test coverage.
"""
d = self.w.request_chunk(0, 0)
@d.addCallback
def cb(chunk):
self.assertRaises(ChunkNotLoaded,
self.w.sync_get_metadata, (16, 0, 0))
return d
class TestWorld(unittest.TestCase):
def setUp(self):
self.name = "unittest"
self.bcp = BravoConfigParser()
self.bcp.add_section("world unittest")
self.bcp.set("world unittest", "url", "")
self.bcp.set("world unittest", "serializer", "memory")
self.w = World(self.bcp, self.name)
self.w.pipeline = []
self.w.start()
def tearDown(self):
self.w.stop()
def test_trivial(self):
pass
def test_load_player_initial(self):
"""
Calling load_player() on a player which has never been loaded should
not result in an exception. Instead, the player should be returned,
wrapped in a Deferred.
"""
# For bonus points, assert that the player's username is correct.
d = self.w.load_player("unittest")
@d.addCallback
def cb(player):
self.assertEqual(player.username, "unittest")
return d
class TestWorldConfig(unittest.TestCase):
def setUp(self):
self.name = "unittest"
self.bcp = BravoConfigParser()
self.bcp.add_section("world unittest")
self.bcp.set("world unittest", "url", "")
self.bcp.set("world unittest", "serializer", "memory")
self.w = World(self.bcp, self.name)
self.w.pipeline = []
def test_trivial(self):
pass
def test_world_configured_seed(self):
"""
Worlds can have their seed set via configuration.
"""
self.bcp.set("world unittest", "seed", "42")
self.w.start()
self.assertEqual(self.w.level.seed, 42)
self.w.stop()
diff --git a/bravo/web.py b/bravo/web.py
index 4ea42d7..d6fef23 100644
--- a/bravo/web.py
+++ b/bravo/web.py
@@ -1,164 +1,163 @@
from twisted.web.resource import Resource
from twisted.web.server import Site, NOT_DONE_YET
from twisted.web.template import flatten, renderer, tags, Element, XMLString
from bravo import version
from bravo.beta.factory import BravoFactory
from bravo.ibravo import IWorldResource
from bravo.plugin import retrieve_plugins
root_template = """
<html xmlns:t="http://twistedmatrix.com/ns/twisted.web.template/0.1">
<head>
<title t:render="title" />
</head>
<body>
<h1 t:render="title" />
<div t:render="world" />
<div t:render="service" />
</body>
</html>
"""
world_template = """
<html xmlns:t="http://twistedmatrix.com/ns/twisted.web.template/0.1">
<head>
<title t:render="title" />
</head>
<body>
<h1 t:render="title" />
<div t:render="user" />
<div t:render="status" />
<div t:render="plugin" />
</body>
</html>
"""
class BravoRootElement(Element):
"""
Element representing the web site root.
"""
loader = XMLString(root_template)
def __init__(self, worlds, services):
Element.__init__(self)
self.services = services
self.worlds = worlds
@renderer
def title(self, request, tag):
return tag("Bravo %s" % version)
@renderer
def service(self, request, tag):
services = []
for name, factory in self.services.iteritems():
services.append(tags.li("%s (%s)" % (name, factory.__class__)))
return tag(tags.h2("Services"), tags.ul(*services))
@renderer
def world(self, request, tag):
worlds = []
for name in self.worlds.keys():
worlds.append(tags.li(tags.a(name.title(), href=name)))
return tag(tags.h2("Worlds"), tags.ul(*worlds))
class BravoWorldElement(Element):
"""
Element representing a single world.
"""
loader = XMLString(world_template)
def __init__(self, factory, plugins):
Element.__init__(self)
self.factory = factory
self.plugins = plugins
@renderer
def title(self, request, tag):
return tag("World %s" % self.factory.name.title())
@renderer
def user(self, request, tag):
users = (tags.li(username) for username in self.factory.protocols)
return tag(tags.h2("Users"), tags.ul(*users))
@renderer
def status(self, request, tag):
world = self.factory.world
l = []
- total = (len(world.chunk_cache) + len(world.dirty_chunk_cache) +
- len(world._pending_chunks))
+ total = 0 + len(world._cache._dirty) + len(world._pending_chunks)
l.append(tags.li("Total chunks: %d" % total))
- l.append(tags.li("Clean chunks: %d" % len(world.chunk_cache)))
- l.append(tags.li("Dirty chunks: %d" % len(world.dirty_chunk_cache)))
+ l.append(tags.li("Clean chunks: %d" % 0))
+ l.append(tags.li("Dirty chunks: %d" % len(world._cache._dirty)))
l.append(tags.li("Chunks being generated: %d" %
- len(world._pending_chunks)))
- if world.permanent_cache:
+ len(world._pending_chunks)))
+ if world._cache._perm:
l.append(tags.li("Permanent cache: enabled, %d chunks" %
- len(world.permanent_cache)))
+ len(world._cache._perm)))
else:
l.append(tags.li("Permanent cache: disabled"))
status = tags.ul(*l)
return tag(tags.h2("Status"), status)
@renderer
def plugin(self, request, tag):
plugins = []
for name in self.plugins.keys():
plugins.append(tags.li(tags.a(name.title(),
href='%s/%s' % (self.factory.name, name))))
return tag(tags.h2("Plugins"), tags.ul(*plugins))
class BravoResource(Resource):
def __init__(self, element, isLeaf=True):
Resource.__init__(self)
self.element = element
self.isLeaf = isLeaf
def render_GET(self, request):
def write(s):
if not request._disconnected:
request.write(s)
d = flatten(request, self.element, write)
@d.addCallback
def complete_request(html):
if not request._disconnected:
request.finish()
return NOT_DONE_YET
def bravo_site(services):
# extract worlds and non-world services only once at startup
worlds = {}
other_services = {}
for name, service in services.iteritems():
factory = service.args[1]
if isinstance(factory, BravoFactory):
worlds[factory.name] = factory
else:
# XXX: do we really need those?
other_services[name] = factory
# add site root
root = Resource()
root.putChild('', BravoResource(BravoRootElement(worlds, other_services)))
# add world sub pages and related plugins
for world, factory in worlds.iteritems():
# Discover parameterized plugins.
plugins = retrieve_plugins(IWorldResource,
parameters={"factory": factory})
# add sub page
child = BravoResource(BravoWorldElement(factory, plugins), False)
root.putChild(world, child)
# add plugins
for name, resource in plugins.iteritems():
# add plugin page
child.putChild(name, resource)
# create site
site = Site(root)
return site
diff --git a/bravo/world.py b/bravo/world.py
index 36c651c..cd5cb12 100644
--- a/bravo/world.py
+++ b/bravo/world.py
@@ -1,735 +1,720 @@
from array import array
from functools import wraps
from itertools import product
import random
import sys
import weakref
from twisted.internet import reactor
from twisted.internet.defer import (inlineCallbacks, maybeDeferred,
returnValue, succeed)
from twisted.internet.task import LoopingCall, coiterate
from twisted.python import log
from bravo.beta.structures import Level
from bravo.chunk import Chunk, CHUNK_HEIGHT
from bravo.entity import Player, Furnace
from bravo.errors import (ChunkNotLoaded, SerializerReadException,
SerializerWriteException)
from bravo.ibravo import ISerializer
from bravo.plugin import retrieve_named_plugins
from bravo.utilities.coords import split_coords
from bravo.utilities.temporal import PendingEvent
from bravo.mobmanager import MobManager
class ChunkCache(object):
"""
A cache which holds references to all chunks which should be held in
memory.
This cache remembers chunks that were recently used, that are in permanent
residency, and so forth. Its exact caching algorithm is currently null.
When chunks dirty themselves, they are expected to notify the cache, which
will then schedule an eviction for the chunk.
"""
def __init__(self):
self._perm = {}
self._dirty = {}
def pin(self, chunk):
self._perm[chunk.x, chunk.z] = chunk
def unpin(self, chunk):
del self._perm[chunk.x, chunk.z]
+ def put(self, chunk):
+ # XXX expand caching strategy
+ pass
+
def get(self, coords):
if coords in self._perm:
return self._perm[coords]
# Returns None if not found!
return self._dirty.get(coords)
+ def cleaned(self, chunk):
+ del self._dirty[chunk.x, chunk.z]
+
def dirtied(self, chunk):
self._dirty[chunk.x, chunk.z] = chunk
+ def iterdirty(self):
+ return self._dirty.itervalues()
+
class ImpossibleCoordinates(Exception):
"""
A coordinate could not ever be valid.
"""
def coords_to_chunk(f):
"""
Automatically look up the chunk for the coordinates, and convert world
coordinates to chunk coordinates.
"""
@wraps(f)
def decorated(self, coords, *args, **kwargs):
x, y, z = coords
# Fail early if Y is OOB.
if not 0 <= y < CHUNK_HEIGHT:
raise ImpossibleCoordinates("Y value %d is impossible" % y)
bigx, smallx, bigz, smallz = split_coords(x, z)
d = self.request_chunk(bigx, bigz)
@d.addCallback
def cb(chunk):
return f(self, chunk, (smallx, y, smallz), *args, **kwargs)
return d
return decorated
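The split_coords call is the crux of the decorator above; a hypothetical standalone version (assuming the standard 16-block chunk size, not the repo's actual implementation) makes the world-to-chunk conversion concrete:

```python
def split_coords_sketch(x, z):
    """Return (chunk x, intra-chunk x, chunk z, intra-chunk z).

    Hypothetical stand-in for bravo.utilities.coords.split_coords,
    assuming 16-block chunks.
    """
    return x // 16, x % 16, z // 16, z % 16

# Python's floor division keeps negative world coordinates in the
# correct (negative) chunk, with a non-negative intra-chunk offset.
assert split_coords_sketch(33, -1) == (2, 1, -1, 15)
assert split_coords_sketch(0, 0) == (0, 0, 0, 0)
```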
def sync_coords_to_chunk(f):
"""
Either get a chunk for the coordinates, or raise an exception.
"""
@wraps(f)
def decorated(self, coords, *args, **kwargs):
x, y, z = coords
# Fail early if Y is OOB.
if not 0 <= y < CHUNK_HEIGHT:
raise ImpossibleCoordinates("Y value %d is impossible" % y)
bigx, smallx, bigz, smallz = split_coords(x, z)
bigcoords = bigx, bigz
- if bigcoords in self.chunk_cache:
- chunk = self.chunk_cache[bigcoords]
- elif bigcoords in self.dirty_chunk_cache:
- chunk = self.dirty_chunk_cache[bigcoords]
- else:
+ chunk = self._cache.get(bigcoords)
+
+ if chunk is None:
raise ChunkNotLoaded("Chunk (%d, %d) isn't loaded" % bigcoords)
return f(self, chunk, (smallx, y, smallz), *args, **kwargs)
return decorated
class World(object):
"""
Object representing a world on disk.
Worlds are composed of levels and chunks, each of which corresponds to
exactly one file on disk. Worlds also contain saved player data.
"""
factory = None
"""
The factory managing this world.
Worlds do not need to be owned by a factory, but will not callback to
surrounding objects without an owner.
"""
season = None
"""
The current `ISeason`.
"""
saving = True
"""
Whether objects belonging to this world may be written out to disk.
"""
async = False
"""
Whether this world is using multiprocessing methods to generate geometry.
"""
dimension = "earth"
"""
The world dimension. Valid values are earth, sky, and nether.
"""
level = Level(seed=0, spawn=(0, 0, 0), time=0)
"""
The initial level data.
"""
_cache = None
"""
The chunk cache.
"""
def __init__(self, config, name):
"""
:Parameters:
name : str
The configuration key to use to look up configuration data.
"""
self.config = config
self.config_name = "world %s" % name
- self.chunk_cache = weakref.WeakValueDictionary()
- self.dirty_chunk_cache = dict()
-
self._pending_chunks = dict()
def connect(self):
"""
Connect to the world.
"""
world_url = self.config.get(self.config_name, "url")
world_sf_name = self.config.get(self.config_name, "serializer")
# Get the current serializer list, and attempt to connect our
# serializer of choice to our resource.
# This could fail. Each of these lines (well, only the first and
# third) could raise a variety of exceptions. They should *all* be
# fatal.
serializers = retrieve_named_plugins(ISerializer, [world_sf_name])
self.serializer = serializers[0]
self.serializer.connect(world_url)
log.msg("World connected on %s, using serializer %s" %
(world_url, self.serializer.name))
def start(self):
"""
Start managing a world.
Connect to the world and turn on all of the timed actions which
continuously manage the world.
"""
self.connect()
# Create our cache.
self._cache = ChunkCache()
# Pick a random number for the seed. Use the configured value if one
# is present.
seed = random.randint(0, sys.maxint)
seed = self.config.getintdefault(self.config_name, "seed", seed)
self.level = self.level._replace(seed=seed)
# Check if we should offload chunk requests to ampoule.
if self.config.getbooleandefault("bravo", "ampoule", False):
try:
import ampoule
if ampoule:
self.async = True
except ImportError:
pass
log.msg("World is %s" %
("read-write" if self.saving else "read-only"))
log.msg("Using Ampoule: %s" % self.async)
# First, try loading the level, to see if there's any data out there
# which we can use. If not, don't worry about it.
d = maybeDeferred(self.serializer.load_level)
@d.addCallback
def cb(level):
self.level = level
log.msg("Loaded level data!")
@d.addErrback
def sre(failure):
failure.trap(SerializerReadException)
log.msg("Had issues loading level data, continuing anyway...")
# And now save our level.
if self.saving:
self.serializer.save_level(self.level)
# Start up the permanent cache.
# has_option() is not exactly desirable, but it's appropriate here
# because we don't want to take any action if the key is unset.
if self.config.has_option(self.config_name, "perm_cache"):
cache_level = self.config.getint(self.config_name, "perm_cache")
self.enable_cache(cache_level)
- self.chunk_management_loop = LoopingCall(self.sort_chunks)
+ self.chunk_management_loop = LoopingCall(self.flush_chunk)
self.chunk_management_loop.start(1)
# XXX Put this in init or here?
self.mob_manager = MobManager()
# XXX Put this in the managers constructor?
self.mob_manager.world = self
@inlineCallbacks
def stop(self):
"""
Stop managing the world.
This can be a time-consuming, blocking operation, while the world's
data is serialized.
Note to callers: If you want the world time to be accurate, don't
forget to write it back before calling this method!
:returns: A ``Deferred`` that fires after the world has stopped.
"""
self.chunk_management_loop.stop()
- # Flush all dirty chunks to disk.
- for chunk in self.dirty_chunk_cache.itervalues():
+ # Flush all dirty chunks to disk. Don't bother cleaning them off.
+ for chunk in self._cache.iterdirty():
yield self.save_chunk(chunk)
- # Evict all chunks.
- self.chunk_cache.clear()
- self.dirty_chunk_cache.clear()
-
# Destroy the cache.
self._cache = None
# Save the level data.
yield maybeDeferred(self.serializer.save_level, self.level)
def enable_cache(self, size):
"""
Set the permanent cache size.
Changing the size of the cache sets off a series of events which will
empty or fill the cache to make it the proper size.
For reference, 3 is a large-enough size to completely satisfy the
Notchian client's login demands. 10 is enough to completely fill the
Notchian client's chunk buffer.
:param int size: The taxicab radius of the cache, in chunks
:returns: A ``Deferred`` which will fire when the cache has been
adjusted.
"""
log.msg("Setting cache size to %d, please hold..." % size)
assign = self._cache.pin
def worker(x, z):
log.msg("Adding %d, %d to cache..." % (x, z))
return self.request_chunk(x, z).addCallback(assign)
x = self.level.spawn[0] // 16
z = self.level.spawn[2] // 16
rx = xrange(x - size, x + size)
rz = xrange(z - size, z + size)
work = (worker(x, z) for x, z in product(rx, rz))
d = coiterate(work)
@d.addCallback
def notify(none):
log.msg("Cache size is now %d!" % size)
return d
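Since the fill above walks `xrange(x - size, x + size)` on both axes, a permanent cache of radius `size` pins a (2 * size)-by-(2 * size) square of chunks; a quick sketch of that cost (the helper name is ours, not the repo's):

```python
def perm_cache_chunk_count(size):
    """Chunks pinned by enable_cache(size): a (2*size)-by-(2*size) square."""
    return (2 * size) ** 2

assert perm_cache_chunk_count(3) == 36     # satisfies Notchian login demands
assert perm_cache_chunk_count(10) == 400   # fills the Notchian chunk buffer
```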
- def sort_chunks(self):
+ def flush_chunk(self):
"""
- Sort out the internal caches.
+ Flush a dirty chunk.
This method will always block when there are dirty chunks.
"""
- first = True
-
- all_chunks = dict(self.dirty_chunk_cache)
- all_chunks.update(self.chunk_cache)
- self.chunk_cache.clear()
- self.dirty_chunk_cache.clear()
- for coords, chunk in all_chunks.iteritems():
- if chunk.dirty:
- if first:
- first = False
- self.save_chunk(chunk)
- self.chunk_cache[coords] = chunk
- else:
- self.dirty_chunk_cache[coords] = chunk
- else:
- self.chunk_cache[coords] = chunk
+ for chunk in self._cache.iterdirty():
+ # Save a single chunk, and add a callback to remove it from the
+ # cache when it's been cleaned.
+ d = self.save_chunk(chunk)
+ d.addCallback(self._cache.cleaned)
+ break
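The looping call drives flush_chunk() once per second, so disk writes are spread out at one dirty chunk per tick instead of done in a blocking sweep. A synchronous sketch of that strategy (the real code removes the chunk from the cache in a Deferred callback after the save, not eagerly as here):

```python
def flush_one(dirty_chunks, save):
    """Save at most one dirty chunk per call; return it, or None."""
    for coords in list(dirty_chunks):
        chunk = dirty_chunks.pop(coords)  # eager stand-in for cleaned()
        save(chunk)
        return chunk
    return None

writes = []
dirty = {(0, 0): "chunk A", (5, 5): "chunk B"}
flush_one(dirty, writes.append)
assert len(writes) == 1 and len(dirty) == 1  # one write per tick
flush_one(dirty, writes.append)
flush_one(dirty, writes.append)              # nothing left: a no-op
assert len(writes) == 2 and not dirty
```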
def save_off(self):
"""
Disable saving to disk.
This is useful for accessing the world on disk without Bravo
interfering, for backing up the world.
"""
if not self.saving:
return
- d = dict(self.chunk_cache)
- self.chunk_cache = d
+ self.chunk_management_loop.stop()
self.saving = False
def save_on(self):
"""
Enable saving to disk.
"""
if self.saving:
return
- d = weakref.WeakValueDictionary(self.chunk_cache)
- self.chunk_cache = d
+ self.chunk_management_loop.start(1)
self.saving = True
def postprocess_chunk(self, chunk):
"""
Do a series of final steps to bring a chunk into the world.
This method might be called multiple times on a chunk, but it should
not be harmful to do so.
"""
# Apply the current season to the chunk.
if self.season:
self.season.transform(chunk)
# Since this chunk hasn't been given to any player yet, there's no
# conceivable way that any meaningful damage has been accumulated;
# anybody loading any part of this chunk will want the entire thing.
# Thus, it should start out undamaged.
chunk.clear_damage()
# Skip some of the spendier scans if we have no factory; for example,
# if we are generating chunks offline.
if not self.factory:
return chunk
# XXX slightly icky, print statements are bad
# Register the chunk's entities with our parent factory.
for entity in chunk.entities:
if hasattr(entity, 'loop'):
print "Started mob!"
self.mob_manager.start_mob(entity)
else:
print "I have no loop"
self.factory.register_entity(entity)
# XXX why is this for furnaces only? :T
# Scan the chunk for burning furnaces
for coords, tile in chunk.tiles.iteritems():
# If the furnace was saved while burning ...
if type(tile) == Furnace and tile.burntime != 0:
x, y, z = coords
coords = chunk.x, x, chunk.z, z, y
# ... start its burning loop
reactor.callLater(2, tile.changed, self.factory, coords)
# Return the chunk, in case we are in a Deferred chain.
return chunk
@inlineCallbacks
def request_chunk(self, x, z):
"""
Request a ``Chunk`` to be delivered later.
:returns: ``Deferred`` that will be called with the ``Chunk``
"""
# First, try the cache.
cached = self._cache.get((x, z))
if cached is not None:
returnValue(cached)
- # Then, legacy caches.
- if (x, z) in self.chunk_cache:
- returnValue(self.chunk_cache[x, z])
- elif (x, z) in self.dirty_chunk_cache:
- returnValue(self.dirty_chunk_cache[x, z])
- elif (x, z) in self._pending_chunks:
+ # Is it pending?
+ if (x, z) in self._pending_chunks:
# Rig up another Deferred and wrap it up in a to-go box.
retval = yield self._pending_chunks[x, z].deferred()
returnValue(retval)
# Create a new chunk object, since the cache turned up empty.
try:
chunk = yield maybeDeferred(self.serializer.load_chunk, x, z)
except SerializerReadException:
# Looks like the chunk wasn't already on disk. Guess we're gonna
# need to keep going.
chunk = Chunk(x, z)
# Add in our magic dirtiness hook so that the cache can be aware of
# chunks who have been...naughty.
chunk.dirtied = self._cache.dirtied
if chunk.dirty:
# The chunk was already dirty!? Oh, naughty indeed!
self._cache.dirtied(chunk)
if chunk.populated:
- self.chunk_cache[x, z] = chunk
+ self._cache.put(chunk)
self.postprocess_chunk(chunk)
if self.factory:
self.factory.scan_chunk(chunk)
returnValue(chunk)
if self.async:
from ampoule import deferToAMPProcess
from bravo.remote import MakeChunk
generators = [plugin.name for plugin in self.pipeline]
d = deferToAMPProcess(MakeChunk, x=x, z=z, seed=self.level.seed,
generators=generators)
# Get chunk data into our chunk object.
def fill_chunk(kwargs):
chunk.blocks = array("B")
chunk.blocks.fromstring(kwargs["blocks"])
chunk.heightmap = array("B")
chunk.heightmap.fromstring(kwargs["heightmap"])
chunk.metadata = array("B")
chunk.metadata.fromstring(kwargs["metadata"])
chunk.skylight = array("B")
chunk.skylight.fromstring(kwargs["skylight"])
chunk.blocklight = array("B")
chunk.blocklight.fromstring(kwargs["blocklight"])
return chunk
d.addCallback(fill_chunk)
else:
# Populate the chunk the slow way. :c
for stage in self.pipeline:
stage.populate(chunk, self.level.seed)
chunk.regenerate()
d = succeed(chunk)
# Set up our event and generate our return-value Deferred. It has to
# be done early because PendingEvents only fire exactly once and it
# might fire immediately in certain cases.
pe = PendingEvent()
# This one is for our return value.
retval = pe.deferred()
# This one is for scanning the chunk for automatons.
if self.factory:
pe.deferred().addCallback(self.factory.scan_chunk)
self._pending_chunks[x, z] = pe
def pp(chunk):
chunk.populated = True
chunk.dirty = True
self.postprocess_chunk(chunk)
- self.dirty_chunk_cache[x, z] = chunk
+ self._cache.dirtied(chunk)
del self._pending_chunks[x, z]
return chunk
# Set up callbacks.
d.addCallback(pp)
d.chainDeferred(pe)
# Because multiple people might be attached to this callback, we're
# going to do something magical here. We will yield a forked version
# of our Deferred. This means that we will wait right here, for a
# long, long time, before actually returning with the chunk, *but*,
# when we actually finish, we'll be ready to return the chunk
# immediately. Our caller cannot possibly care because they only see a
# Deferred either way.
retval = yield retval
returnValue(retval)
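The _pending_chunks bookkeeping above de-duplicates concurrent requests: only the first caller triggers generation, and later callers receive a forked Deferred for the same in-flight chunk. A minimal Twisted-free sketch of that pattern (names are ours, for illustration only):

```python
pending = {}

def request(coords, callback):
    """Return True if the caller should start generating this chunk."""
    if coords in pending:
        pending[coords].append(callback)  # piggyback on in-flight work
        return False
    pending[coords] = [callback]
    return True

def finish(coords, chunk):
    """Deliver the finished chunk to every waiting caller."""
    for cb in pending.pop(coords):
        cb(chunk)

got = []
assert request((0, 0), got.append) is True    # first caller generates
assert request((0, 0), got.append) is False   # second caller just waits
finish((0, 0), "chunk (0, 0)")
assert got == ["chunk (0, 0)", "chunk (0, 0)"]
```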
def save_chunk(self, chunk):
"""
Write a chunk to the serializer.
Note that this method does nothing when the given chunk is not dirty
or saving is off!
:returns: A ``Deferred`` which will fire after the chunk has been
saved with the chunk.
"""
if not chunk.dirty or not self.saving:
return succeed(chunk)
d = maybeDeferred(self.serializer.save_chunk, chunk)
@d.addCallback
def cb(none):
chunk.dirty = False
return chunk
@d.addErrback
def eb(failure):
failure.trap(SerializerWriteException)
log.msg("Couldn't write %r" % chunk)
return d
def load_player(self, username):
"""
Retrieve player data.
:returns: a ``Deferred`` that will be fired with a ``Player``
"""
# Get the player, possibly.
d = maybeDeferred(self.serializer.load_player, username)
@d.addErrback
def eb(failure):
failure.trap(SerializerReadException)
log.msg("Couldn't load player %r" % username)
# Make a player.
player = Player(username=username)
player.location.x = self.level.spawn[0]
player.location.y = self.level.spawn[1]
player.location.stance = self.level.spawn[1]
player.location.z = self.level.spawn[2]
return player
# This Deferred's good to go as-is.
return d
def save_player(self, username, player):
if self.saving:
self.serializer.save_player(player)
# World-level geometry access.
# These methods let external API users refrain from going through the
# standard motions of looking up and loading chunk information.
@coords_to_chunk
def get_block(self, chunk, coords):
"""
Get a block from an unknown chunk.
:returns: a ``Deferred`` with the requested value
"""
return chunk.get_block(coords)
@coords_to_chunk
def set_block(self, chunk, coords, value):
"""
Set a block in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.set_block(coords, value)
@coords_to_chunk
def get_metadata(self, chunk, coords):
"""
Get a block's metadata from an unknown chunk.
:returns: a ``Deferred`` with the requested value
"""
return chunk.get_metadata(coords)
@coords_to_chunk
def set_metadata(self, chunk, coords, value):
"""
Set a block's metadata in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.set_metadata(coords, value)
@coords_to_chunk
def destroy(self, chunk, coords):
"""
Destroy a block in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.destroy(coords)
@coords_to_chunk
def mark_dirty(self, chunk, coords):
"""
Mark an unknown chunk dirty.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.dirty = True
@sync_coords_to_chunk
def sync_get_block(self, chunk, coords):
"""
Get a block from an unknown chunk.
:returns: the requested block
"""
return chunk.get_block(coords)
@sync_coords_to_chunk
def sync_set_block(self, chunk, coords, value):
"""
Set a block in an unknown chunk.
:returns: None
"""
chunk.set_block(coords, value)
@sync_coords_to_chunk
def sync_get_metadata(self, chunk, coords):
"""
Get a block's metadata from an unknown chunk.
:returns: the requested metadata
"""
return chunk.get_metadata(coords)
@sync_coords_to_chunk
def sync_set_metadata(self, chunk, coords, value):
"""
Set a block's metadata in an unknown chunk.
:returns: None
"""
chunk.set_metadata(coords, value)
@sync_coords_to_chunk
def sync_destroy(self, chunk, coords):
"""
Destroy a block in an unknown chunk.
:returns: None
"""
chunk.destroy(coords)
@sync_coords_to_chunk
def sync_mark_dirty(self, chunk, coords):
"""
Mark an unknown chunk dirty.
:returns: None
"""
chunk.dirty = True
@sync_coords_to_chunk
def sync_request_chunk(self, chunk, coords):
"""
Get an unknown chunk.
:returns: the requested ``Chunk``
"""
return chunk
|
bravoserver/bravo
|
92ea0f308b084ea762cadaad8937b58fda4c7210
|
world: Fix up save_chunk() and make its callers less racy.
|
diff --git a/bravo/plugins/commands/common.py b/bravo/plugins/commands/common.py
index 3d7db39..88c9982 100644
--- a/bravo/plugins/commands/common.py
+++ b/bravo/plugins/commands/common.py
@@ -1,479 +1,479 @@
from textwrap import wrap
from twisted.internet import reactor
from zope.interface import implements
from bravo.beta.packets import make_packet
from bravo.blocks import parse_block
from bravo.ibravo import IChatCommand, IConsoleCommand
from bravo.plugin import retrieve_plugins
from bravo.policy.seasons import Spring, Winter
from bravo.utilities.temporal import split_time
def parse_player(factory, name):
if name in factory.protocols:
return factory.protocols[name]
else:
raise Exception("Couldn't find player %s" % name)
class Help(object):
"""
Provide helpful information about commands.
"""
implements(IChatCommand, IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def general_help(self, plugins):
"""
Return a list of commands.
"""
commands = [plugin.name for plugin in set(plugins.itervalues())]
commands.sort()
wrapped = wrap(", ".join(commands), 60)
help_text = [
"Use /help <command> for more information on a command.",
"List of commands:",
] + wrapped
return help_text
def specific_help(self, plugins, name):
"""
Return specific help about a single plugin.
"""
try:
plugin = plugins[name]
except:
return ("No such command!",)
help_text = [
"Usage: %s %s" % (plugin.name, plugin.usage),
]
if plugin.aliases:
help_text.append("Aliases: %s" % ", ".join(plugin.aliases))
help_text.append(plugin.__doc__)
return help_text
def chat_command(self, username, parameters):
plugins = retrieve_plugins(IChatCommand, factory=self.factory)
if parameters:
return self.specific_help(plugins, "".join(parameters))
else:
return self.general_help(plugins)
def console_command(self, parameters):
plugins = retrieve_plugins(IConsoleCommand, factory=self.factory)
if parameters:
return self.specific_help(plugins, "".join(parameters))
else:
return self.general_help(plugins)
name = "help"
aliases = tuple()
usage = ""
class List(object):
"""
List the currently connected players.
"""
implements(IChatCommand, IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def dispatch(self, factory):
yield "Connected players: %s" % (", ".join(
player for player in factory.protocols))
def chat_command(self, username, parameters):
for i in self.dispatch(self.factory):
yield i
def console_command(self, parameters):
for i in self.dispatch(self.factory):
yield i
name = "list"
aliases = ("playerlist",)
usage = ""
class Time(object):
"""
Obtain or change the current time and date.
"""
# XXX my code is all over the place; clean me up
implements(IChatCommand, IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def dispatch(self, factory):
hours, minutes = split_time(factory.time)
# If the factory's got seasons enabled, then the world will have
# a season, and we can examine it. Otherwise, just print the day as-is
# for the date.
season = factory.world.season
if season:
day_of_season = factory.day - season.day
while day_of_season < 0:
day_of_season += 360
date = "{0} ({1} {2})".format(factory.day, day_of_season,
season.name)
else:
date = "%d" % factory.day
return ("%02d:%02d, %s" % (hours, minutes, date),)
def chat_command(self, username, parameters):
if len(parameters) >= 1:
# Set the time
time = parameters[0]
if time == 'sunset':
time = 12000
elif time == 'sunrise':
time = 24000
elif ':' in time:
                # Interpret it as a real-world-esque time (24-hour clock)
hours, minutes = time.split(':')
hours, minutes = int(hours), int(minutes)
# 24000 ticks / day = 1000 ticks / hour ~= 16.6 ticks / minute
time = (hours * 1000) + (minutes * 50 / 3)
            time -= 6000 # to account for 24000 being high noon in Minecraft.
if len(parameters) >= 2:
self.factory.day = int(parameters[1])
self.factory.time = int(time)
self.factory.update_time()
self.factory.update_season()
# Update the time for the clients
self.factory.broadcast_time()
# Tell the user the current time.
return self.dispatch(self.factory)
def console_command(self, parameters):
return self.dispatch(self.factory)
name = "time"
aliases = ("date",)
usage = "[time] [day]"
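The tick arithmetic in ``chat_command()`` above condenses to a one-liner. This hypothetical helper mirrors it, including the floor division the original gets from Python 2's ``minutes * 50 / 3`` and the final 6000-tick shift:

```python
def clock_to_ticks(hours, minutes):
    # 24000 ticks/day => 1000 ticks per hour and 50/3 ticks per minute;
    # the command then shifts the raw value back by 6000 ticks.
    return hours * 1000 + minutes * 50 // 3 - 6000

print(clock_to_ticks(7, 30))  # -> 1500
```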
class Say(object):
"""
Broadcast a message to everybody.
"""
implements(IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def console_command(self, parameters):
message = "[Server] %s" % " ".join(parameters)
yield message
packet = make_packet("chat", message=message)
self.factory.broadcast(packet)
name = "say"
aliases = tuple()
usage = "<message>"
class Give(object):
"""
Spawn block or item pickups near a player.
"""
implements(IChatCommand)
def __init__(self, factory):
self.factory = factory
def chat_command(self, username, parameters):
if len(parameters) == 0:
return ("Usage: /{0} {1}".format(self.name, self.usage),)
elif len(parameters) == 1:
block = parameters[0]
count = 1
elif len(parameters) == 2:
block = parameters[0]
count = parameters[1]
else:
block = " ".join(parameters[:-1])
count = parameters[-1]
player = parse_player(self.factory, username)
block = parse_block(block)
count = int(count)
# Get a location two blocks in front of the player.
dest = player.player.location.in_front_of(2)
dest.y += 1
coords = int(dest.x * 32), int(dest.y * 32), int(dest.z * 32)
self.factory.give(coords, block, count)
# Return an empty tuple for iteration
return tuple()
name = "give"
aliases = tuple()
usage = "<block> <quantity>"
class Quit(object):
"""
Gracefully shutdown the server.
"""
implements(IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def console_command(self, parameters):
# Let's shutdown!
message = "Server shutting down."
yield message
# Use an error packet to kick clients cleanly.
packet = make_packet("error", message=message)
self.factory.broadcast(packet)
yield "Saving all chunks to disk..."
for chunk in self.factory.world.dirty_chunk_cache.itervalues():
- self.factory.world.save_chunk(chunk)
+ yield self.factory.world.save_chunk(chunk)
yield "Halting."
reactor.stop()
name = "quit"
aliases = ("exit",)
usage = ""
class SaveAll(object):
"""
Save all world data to disk.
"""
implements(IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def console_command(self, parameters):
yield "Flushing all chunks..."
for chunk in self.factory.world.chunk_cache.itervalues():
- self.factory.world.save_chunk(chunk)
+ yield self.factory.world.save_chunk(chunk)
yield "Save complete!"
name = "save-all"
aliases = tuple()
usage = ""
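The two ``yield`` additions above are the heart of this commit: without them, Quit and SaveAll would merely schedule chunk writes and then halt the reactor before the serializer finished. An asyncio stand-in for the corrected pattern — the Twisted code yields Deferreds inside ``inlineCallbacks`` generators rather than awaiting coroutines, but the ordering guarantee is the same:

```python
import asyncio

LOG = []

async def save_chunk(name):
    await asyncio.sleep(0)          # stand-in for serializer I/O
    LOG.append("saved %s" % name)

async def quit_command(chunks):
    for chunk in chunks:
        await save_chunk(chunk)     # like `yield world.save_chunk(chunk)`
    LOG.append("halting")           # only runs once every write is done

asyncio.run(quit_command(["(0, 0)", "(1, 2)"]))
print(LOG[-1])  # -> halting
```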
class SaveOff(object):
"""
Disable saving world data to disk.
"""
implements(IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def console_command(self, parameters):
yield "Disabling saving..."
self.factory.world.save_off()
yield "Saving disabled. Currently running in memory."
name = "save-off"
aliases = tuple()
usage = ""
class SaveOn(object):
"""
Enable saving world data to disk.
"""
implements(IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def console_command(self, parameters):
yield "Enabling saving (this could take a bit)..."
self.factory.world.save_on()
yield "Saving enabled."
name = "save-on"
aliases = tuple()
usage = ""
class WriteConfig(object):
"""
Write configuration to disk.
"""
implements(IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def console_command(self, parameters):
with open("".join(parameters), "wb") as f:
self.factory.config.write(f)
yield "Configuration saved."
name = "write-config"
aliases = tuple()
usage = ""
class Season(object):
"""
Change the season.
This command fast-forwards the calendar to the first day of the requested
season.
"""
implements(IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def console_command(self, parameters):
wanted = " ".join(parameters)
if wanted == "spring":
season = Spring()
elif wanted == "winter":
season = Winter()
else:
yield "Couldn't find season %s" % wanted
return
msg = "Changing season to %s..." % wanted
yield msg
self.factory.day = season.day
self.factory.update_season()
yield "Season successfully changed!"
name = "season"
aliases = tuple()
usage = "<season>"
class Me(object):
"""
Emote.
"""
implements(IChatCommand)
def __init__(self, factory):
pass
def chat_command(self, username, parameters):
say = " ".join(parameters)
msg = "* %s %s" % (username, say)
return (msg,)
name = "me"
aliases = tuple()
usage = "<message>"
class Kick(object):
"""
Kick a player from the world.
With great power comes great responsibility; use this wisely.
"""
implements(IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def dispatch(self, parameters):
player = parse_player(self.factory, parameters[0])
if len(parameters) == 1:
msg = "%s has been kicked." % parameters[0]
elif len(parameters) > 1:
reason = " ".join(parameters[1:])
            msg = "%s has been kicked for %s" % (parameters[0], reason)
packet = make_packet("error", message=msg)
player.transport.write(packet)
yield msg
def console_command(self, parameters):
for i in self.dispatch(parameters):
yield i
name = "kick"
aliases = tuple()
usage = "<player> [<reason>]"
class GetPos(object):
"""
Ascertain a player's location.
This command is identical to the command provided by Hey0.
"""
implements(IChatCommand)
def __init__(self, factory):
self.factory = factory
def chat_command(self, username, parameters):
player = parse_player(self.factory, username)
l = player.player.location
locMsg = "Your location is <%d, %d, %d>" % l.pos.to_block()
yield locMsg
name = "getpos"
aliases = tuple()
usage = ""
class Nick(object):
"""
Set a player's nickname.
"""
implements(IChatCommand)
def __init__(self, factory):
self.factory = factory
def chat_command(self, username, parameters):
player = parse_player(self.factory, username)
if len(parameters) == 0:
return ("Usage: /nick <nickname>",)
else:
new = parameters[0]
if self.factory.set_username(player, new):
return ("Changed nickname from %s to %s" % (username, new),)
else:
return ("Couldn't change nickname!",)
name = "nick"
aliases = tuple()
usage = "<nickname>"
diff --git a/bravo/tests/test_world.py b/bravo/tests/test_world.py
index b1f3939..b50a2f4 100644
--- a/bravo/tests/test_world.py
+++ b/bravo/tests/test_world.py
@@ -1,295 +1,295 @@
from twisted.trial import unittest
from twisted.internet.defer import inlineCallbacks
from array import array
from itertools import product
import os
from bravo.config import BravoConfigParser
from bravo.errors import ChunkNotLoaded
from bravo.world import ChunkCache, World
class MockChunk(object):
def __init__(self, x, z):
self.x = x
self.z = z
class TestChunkCache(unittest.TestCase):
def test_pin_single(self):
cc = ChunkCache()
chunk = MockChunk(1, 2)
cc.pin(chunk)
self.assertIs(cc.get((1, 2)), chunk)
def test_dirty_single(self):
cc = ChunkCache()
chunk = MockChunk(1, 2)
cc.dirtied(chunk)
self.assertIs(cc.get((1, 2)), chunk)
def test_pin_dirty(self):
cc = ChunkCache()
chunk = MockChunk(1, 2)
cc.pin(chunk)
cc.dirtied(chunk)
cc.unpin(chunk)
self.assertIs(cc.get((1, 2)), chunk)
class TestWorldChunks(unittest.TestCase):
def setUp(self):
self.name = "unittest"
self.bcp = BravoConfigParser()
self.bcp.add_section("world unittest")
self.bcp.set("world unittest", "url", "")
self.bcp.set("world unittest", "serializer", "memory")
self.w = World(self.bcp, self.name)
self.w.pipeline = []
self.w.start()
def tearDown(self):
self.w.stop()
def test_trivial(self):
pass
@inlineCallbacks
def test_request_chunk_identity(self):
first = yield self.w.request_chunk(0, 0)
second = yield self.w.request_chunk(0, 0)
self.assertIs(first, second)
@inlineCallbacks
def test_request_chunk_cached_identity(self):
# Turn on the cache and get a few chunks in there, then request a
# chunk that is in the cache.
yield self.w.enable_cache(1)
first = yield self.w.request_chunk(0, 0)
second = yield self.w.request_chunk(0, 0)
self.assertIs(first, second)
@inlineCallbacks
def test_get_block(self):
chunk = yield self.w.request_chunk(0, 0)
# Fill the chunk with random stuff.
chunk.blocks = array("B")
chunk.blocks.fromstring(os.urandom(32768))
for x, y, z in product(xrange(2), repeat=3):
# This works because the chunk is at (0, 0) so the coords don't
# need to be adjusted.
block = yield self.w.get_block((x, y, z))
self.assertEqual(block, chunk.get_block((x, y, z)))
@inlineCallbacks
def test_get_metadata(self):
chunk = yield self.w.request_chunk(0, 0)
# Fill the chunk with random stuff.
chunk.metadata = array("B")
chunk.metadata.fromstring(os.urandom(32768))
for x, y, z in product(xrange(2), repeat=3):
# This works because the chunk is at (0, 0) so the coords don't
# need to be adjusted.
metadata = yield self.w.get_metadata((x, y, z))
self.assertEqual(metadata, chunk.get_metadata((x, y, z)))
@inlineCallbacks
def test_get_block_readback(self):
chunk = yield self.w.request_chunk(0, 0)
# Fill the chunk with random stuff.
chunk.blocks = array("B")
chunk.blocks.fromstring(os.urandom(32768))
# Evict the chunk and grab it again.
- self.w.save_chunk(chunk)
+ yield self.w.save_chunk(chunk)
del chunk
self.w.chunk_cache.clear()
self.w.dirty_chunk_cache.clear()
chunk = yield self.w.request_chunk(0, 0)
for x, y, z in product(xrange(2), repeat=3):
# This works because the chunk is at (0, 0) so the coords don't
# need to be adjusted.
block = yield self.w.get_block((x, y, z))
self.assertEqual(block, chunk.get_block((x, y, z)))
@inlineCallbacks
def test_get_block_readback_negative(self):
chunk = yield self.w.request_chunk(-1, -1)
# Fill the chunk with random stuff.
chunk.blocks = array("B")
chunk.blocks.fromstring(os.urandom(32768))
# Evict the chunk and grab it again.
- self.w.save_chunk(chunk)
+ yield self.w.save_chunk(chunk)
del chunk
self.w.chunk_cache.clear()
self.w.dirty_chunk_cache.clear()
chunk = yield self.w.request_chunk(-1, -1)
for x, y, z in product(xrange(2), repeat=3):
block = yield self.w.get_block((x - 16, y, z - 16))
self.assertEqual(block, chunk.get_block((x, y, z)))
@inlineCallbacks
def test_get_metadata_readback(self):
chunk = yield self.w.request_chunk(0, 0)
# Fill the chunk with random stuff.
chunk.metadata = array("B")
chunk.metadata.fromstring(os.urandom(32768))
# Evict the chunk and grab it again.
- self.w.save_chunk(chunk)
+ yield self.w.save_chunk(chunk)
del chunk
self.w.chunk_cache.clear()
self.w.dirty_chunk_cache.clear()
chunk = yield self.w.request_chunk(0, 0)
for x, y, z in product(xrange(2), repeat=3):
# This works because the chunk is at (0, 0) so the coords don't
# need to be adjusted.
metadata = yield self.w.get_metadata((x, y, z))
self.assertEqual(metadata, chunk.get_metadata((x, y, z)))
@inlineCallbacks
def test_world_level_mark_chunk_dirty(self):
chunk = yield self.w.request_chunk(0, 0)
# Reload chunk.
- self.w.save_chunk(chunk)
+ yield self.w.save_chunk(chunk)
del chunk
self.w.chunk_cache.clear()
self.w.dirty_chunk_cache.clear()
chunk = yield self.w.request_chunk(0, 0)
self.assertFalse(chunk.dirty)
self.w.mark_dirty((12, 64, 4))
chunk = yield self.w.request_chunk(0, 0)
self.assertTrue(chunk.dirty)
@inlineCallbacks
def test_world_level_mark_chunk_dirty_offset(self):
chunk = yield self.w.request_chunk(1, 2)
# Reload chunk.
- self.w.save_chunk(chunk)
+ yield self.w.save_chunk(chunk)
del chunk
self.w.chunk_cache.clear()
self.w.dirty_chunk_cache.clear()
chunk = yield self.w.request_chunk(1, 2)
self.assertFalse(chunk.dirty)
self.w.mark_dirty((29, 64, 43))
chunk = yield self.w.request_chunk(1, 2)
self.assertTrue(chunk.dirty)
@inlineCallbacks
def test_sync_get_block(self):
chunk = yield self.w.request_chunk(0, 0)
# Fill the chunk with random stuff.
chunk.blocks = array("B")
chunk.blocks.fromstring(os.urandom(32768))
for x, y, z in product(xrange(2), repeat=3):
# This works because the chunk is at (0, 0) so the coords don't
# need to be adjusted.
block = self.w.sync_get_block((x, y, z))
self.assertEqual(block, chunk.get_block((x, y, z)))
def test_sync_get_block_unloaded(self):
self.assertRaises(ChunkNotLoaded, self.w.sync_get_block, (0, 0, 0))
def test_sync_get_metadata_neighboring(self):
"""
Even if a neighboring chunk is loaded, the target chunk could still be
unloaded.
Test with sync_get_metadata() to increase test coverage.
"""
d = self.w.request_chunk(0, 0)
@d.addCallback
def cb(chunk):
self.assertRaises(ChunkNotLoaded,
self.w.sync_get_metadata, (16, 0, 0))
return d
class TestWorld(unittest.TestCase):
def setUp(self):
self.name = "unittest"
self.bcp = BravoConfigParser()
self.bcp.add_section("world unittest")
self.bcp.set("world unittest", "url", "")
self.bcp.set("world unittest", "serializer", "memory")
self.w = World(self.bcp, self.name)
self.w.pipeline = []
self.w.start()
def tearDown(self):
self.w.stop()
def test_trivial(self):
pass
def test_load_player_initial(self):
"""
Calling load_player() on a player which has never been loaded should
not result in an exception. Instead, the player should be returned,
wrapped in a Deferred.
"""
# For bonus points, assert that the player's username is correct.
d = self.w.load_player("unittest")
@d.addCallback
def cb(player):
self.assertEqual(player.username, "unittest")
return d
class TestWorldConfig(unittest.TestCase):
def setUp(self):
self.name = "unittest"
self.bcp = BravoConfigParser()
self.bcp.add_section("world unittest")
self.bcp.set("world unittest", "url", "")
self.bcp.set("world unittest", "serializer", "memory")
self.w = World(self.bcp, self.name)
self.w.pipeline = []
def test_trivial(self):
pass
def test_world_configured_seed(self):
"""
Worlds can have their seed set via configuration.
"""
self.bcp.set("world unittest", "seed", "42")
self.w.start()
self.assertEqual(self.w.level.seed, 42)
self.w.stop()
diff --git a/bravo/world.py b/bravo/world.py
index 5294cc9..36c651c 100644
--- a/bravo/world.py
+++ b/bravo/world.py
@@ -1,720 +1,735 @@
from array import array
from functools import wraps
from itertools import product
import random
import sys
import weakref
from twisted.internet import reactor
from twisted.internet.defer import (inlineCallbacks, maybeDeferred,
returnValue, succeed)
from twisted.internet.task import LoopingCall, coiterate
from twisted.python import log
from bravo.beta.structures import Level
from bravo.chunk import Chunk, CHUNK_HEIGHT
from bravo.entity import Player, Furnace
from bravo.errors import (ChunkNotLoaded, SerializerReadException,
SerializerWriteException)
from bravo.ibravo import ISerializer
from bravo.plugin import retrieve_named_plugins
from bravo.utilities.coords import split_coords
from bravo.utilities.temporal import PendingEvent
from bravo.mobmanager import MobManager
class ChunkCache(object):
"""
A cache which holds references to all chunks which should be held in
memory.
This cache remembers chunks that were recently used, that are in permanent
residency, and so forth. Its exact caching algorithm is currently null.
When chunks dirty themselves, they are expected to notify the cache, which
will then schedule an eviction for the chunk.
"""
def __init__(self):
self._perm = {}
self._dirty = {}
def pin(self, chunk):
self._perm[chunk.x, chunk.z] = chunk
def unpin(self, chunk):
del self._perm[chunk.x, chunk.z]
def get(self, coords):
if coords in self._perm:
return self._perm[coords]
# Returns None if not found!
return self._dirty.get(coords)
def dirtied(self, chunk):
self._dirty[chunk.x, chunk.z] = chunk
class ImpossibleCoordinates(Exception):
"""
A coordinate could not ever be valid.
"""
def coords_to_chunk(f):
"""
Automatically look up the chunk for the coordinates, and convert world
coordinates to chunk coordinates.
"""
@wraps(f)
def decorated(self, coords, *args, **kwargs):
x, y, z = coords
# Fail early if Y is OOB.
if not 0 <= y < CHUNK_HEIGHT:
raise ImpossibleCoordinates("Y value %d is impossible" % y)
bigx, smallx, bigz, smallz = split_coords(x, z)
d = self.request_chunk(bigx, bigz)
@d.addCallback
def cb(chunk):
return f(self, chunk, (smallx, y, smallz), *args, **kwargs)
return d
return decorated
def sync_coords_to_chunk(f):
"""
Either get a chunk for the coordinates, or raise an exception.
"""
@wraps(f)
def decorated(self, coords, *args, **kwargs):
x, y, z = coords
# Fail early if Y is OOB.
if not 0 <= y < CHUNK_HEIGHT:
raise ImpossibleCoordinates("Y value %d is impossible" % y)
bigx, smallx, bigz, smallz = split_coords(x, z)
bigcoords = bigx, bigz
if bigcoords in self.chunk_cache:
chunk = self.chunk_cache[bigcoords]
elif bigcoords in self.dirty_chunk_cache:
chunk = self.dirty_chunk_cache[bigcoords]
else:
raise ChunkNotLoaded("Chunk (%d, %d) isn't loaded" % bigcoords)
return f(self, chunk, (smallx, y, smallz), *args, **kwargs)
return decorated
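Both decorators depend on ``split_coords`` (from ``bravo.utilities.coords``) to turn world coordinates into a chunk index plus an in-chunk offset. Assuming the standard 16-block chunk math, a sketch of what it computes:

```python
def split_coords(x, z):
    # Floor division gives the chunk index; the remainder is the offset
    # inside that chunk. Python's // and % floor toward negative
    # infinity, so negative world coordinates land in the right chunk.
    return x // 16, x % 16, z // 16, z % 16

print(split_coords(-1, 33))  # -> (-1, 15, 2, 1)
```

That floor behaviour is exactly what ``test_get_block_readback_negative`` relies on when it reads chunk (-1, -1) back via coordinates like ``(x - 16, y, z - 16)``.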
class World(object):
"""
Object representing a world on disk.
Worlds are composed of levels and chunks, each of which corresponds to
exactly one file on disk. Worlds also contain saved player data.
"""
factory = None
"""
The factory managing this world.
Worlds do not need to be owned by a factory, but will not callback to
surrounding objects without an owner.
"""
season = None
"""
The current `ISeason`.
"""
saving = True
"""
Whether objects belonging to this world may be written out to disk.
"""
async = False
"""
Whether this world is using multiprocessing methods to generate geometry.
"""
dimension = "earth"
"""
The world dimension. Valid values are earth, sky, and nether.
"""
level = Level(seed=0, spawn=(0, 0, 0), time=0)
"""
The initial level data.
"""
_cache = None
"""
The chunk cache.
"""
def __init__(self, config, name):
"""
:Parameters:
name : str
The configuration key to use to look up configuration data.
"""
self.config = config
self.config_name = "world %s" % name
self.chunk_cache = weakref.WeakValueDictionary()
self.dirty_chunk_cache = dict()
self._pending_chunks = dict()
def connect(self):
"""
Connect to the world.
"""
world_url = self.config.get(self.config_name, "url")
world_sf_name = self.config.get(self.config_name, "serializer")
# Get the current serializer list, and attempt to connect our
# serializer of choice to our resource.
# This could fail. Each of these lines (well, only the first and
# third) could raise a variety of exceptions. They should *all* be
# fatal.
serializers = retrieve_named_plugins(ISerializer, [world_sf_name])
self.serializer = serializers[0]
self.serializer.connect(world_url)
log.msg("World connected on %s, using serializer %s" %
(world_url, self.serializer.name))
def start(self):
"""
Start managing a world.
Connect to the world and turn on all of the timed actions which
continuously manage the world.
"""
self.connect()
# Create our cache.
self._cache = ChunkCache()
# Pick a random number for the seed. Use the configured value if one
# is present.
seed = random.randint(0, sys.maxint)
seed = self.config.getintdefault(self.config_name, "seed", seed)
self.level = self.level._replace(seed=seed)
# Check if we should offload chunk requests to ampoule.
if self.config.getbooleandefault("bravo", "ampoule", False):
try:
import ampoule
if ampoule:
self.async = True
except ImportError:
pass
log.msg("World is %s" %
("read-write" if self.saving else "read-only"))
log.msg("Using Ampoule: %s" % self.async)
# First, try loading the level, to see if there's any data out there
# which we can use. If not, don't worry about it.
d = maybeDeferred(self.serializer.load_level)
@d.addCallback
def cb(level):
self.level = level
log.msg("Loaded level data!")
@d.addErrback
def sre(failure):
failure.trap(SerializerReadException)
log.msg("Had issues loading level data, continuing anyway...")
# And now save our level.
if self.saving:
self.serializer.save_level(self.level)
# Start up the permanent cache.
# has_option() is not exactly desirable, but it's appropriate here
# because we don't want to take any action if the key is unset.
if self.config.has_option(self.config_name, "perm_cache"):
cache_level = self.config.getint(self.config_name, "perm_cache")
self.enable_cache(cache_level)
self.chunk_management_loop = LoopingCall(self.sort_chunks)
self.chunk_management_loop.start(1)
# XXX Put this in init or here?
self.mob_manager = MobManager()
# XXX Put this in the managers constructor?
self.mob_manager.world = self
+ @inlineCallbacks
def stop(self):
"""
Stop managing the world.
This can be a time-consuming, blocking operation, while the world's
data is serialized.
Note to callers: If you want the world time to be accurate, don't
forget to write it back before calling this method!
+
+ :returns: A ``Deferred`` that fires after the world has stopped.
"""
self.chunk_management_loop.stop()
# Flush all dirty chunks to disk.
for chunk in self.dirty_chunk_cache.itervalues():
- self.save_chunk(chunk)
+ yield self.save_chunk(chunk)
# Evict all chunks.
self.chunk_cache.clear()
self.dirty_chunk_cache.clear()
# Destroy the cache.
self._cache = None
# Save the level data.
- self.serializer.save_level(self.level)
+ yield maybeDeferred(self.serializer.save_level, self.level)
def enable_cache(self, size):
"""
Set the permanent cache size.
Changing the size of the cache sets off a series of events which will
empty or fill the cache to make it the proper size.
For reference, 3 is a large-enough size to completely satisfy the
Notchian client's login demands. 10 is enough to completely fill the
Notchian client's chunk buffer.
:param int size: The taxicab radius of the cache, in chunks
:returns: A ``Deferred`` which will fire when the cache has been
adjusted.
"""
log.msg("Setting cache size to %d, please hold..." % size)
assign = self._cache.pin
def worker(x, z):
log.msg("Adding %d, %d to cache..." % (x, z))
return self.request_chunk(x, z).addCallback(assign)
x = self.level.spawn[0] // 16
z = self.level.spawn[2] // 16
rx = xrange(x - size, x + size)
rz = xrange(z - size, z + size)
work = (worker(x, z) for x, z in product(rx, rz))
d = coiterate(work)
@d.addCallback
def notify(none):
log.msg("Cache size is now %d!" % size)
return d
def sort_chunks(self):
"""
Sort out the internal caches.
This method will always block when there are dirty chunks.
"""
first = True
all_chunks = dict(self.dirty_chunk_cache)
all_chunks.update(self.chunk_cache)
self.chunk_cache.clear()
self.dirty_chunk_cache.clear()
for coords, chunk in all_chunks.iteritems():
if chunk.dirty:
if first:
first = False
self.save_chunk(chunk)
self.chunk_cache[coords] = chunk
else:
self.dirty_chunk_cache[coords] = chunk
else:
self.chunk_cache[coords] = chunk
def save_off(self):
"""
Disable saving to disk.
This is useful for accessing the world on disk without Bravo
interfering, for backing up the world.
"""
if not self.saving:
return
d = dict(self.chunk_cache)
self.chunk_cache = d
self.saving = False
def save_on(self):
"""
Enable saving to disk.
"""
if self.saving:
return
d = weakref.WeakValueDictionary(self.chunk_cache)
self.chunk_cache = d
self.saving = True
def postprocess_chunk(self, chunk):
"""
Do a series of final steps to bring a chunk into the world.
This method might be called multiple times on a chunk, but it should
not be harmful to do so.
"""
# Apply the current season to the chunk.
if self.season:
self.season.transform(chunk)
# Since this chunk hasn't been given to any player yet, there's no
# conceivable way that any meaningful damage has been accumulated;
# anybody loading any part of this chunk will want the entire thing.
# Thus, it should start out undamaged.
chunk.clear_damage()
# Skip some of the spendier scans if we have no factory; for example,
# if we are generating chunks offline.
if not self.factory:
return chunk
# XXX slightly icky, print statements are bad
# Register the chunk's entities with our parent factory.
for entity in chunk.entities:
if hasattr(entity, 'loop'):
print "Started mob!"
self.mob_manager.start_mob(entity)
else:
print "I have no loop"
self.factory.register_entity(entity)
# XXX why is this for furnaces only? :T
# Scan the chunk for burning furnaces
for coords, tile in chunk.tiles.iteritems():
# If the furnace was saved while burning ...
if type(tile) == Furnace and tile.burntime != 0:
x, y, z = coords
coords = chunk.x, x, chunk.z, z, y
            # ... start its burning loop
reactor.callLater(2, tile.changed, self.factory, coords)
# Return the chunk, in case we are in a Deferred chain.
return chunk
@inlineCallbacks
def request_chunk(self, x, z):
"""
Request a ``Chunk`` to be delivered later.
:returns: ``Deferred`` that will be called with the ``Chunk``
"""
# First, try the cache.
cached = self._cache.get((x, z))
if cached is not None:
returnValue(cached)
# Then, legacy caches.
if (x, z) in self.chunk_cache:
returnValue(self.chunk_cache[x, z])
elif (x, z) in self.dirty_chunk_cache:
returnValue(self.dirty_chunk_cache[x, z])
elif (x, z) in self._pending_chunks:
# Rig up another Deferred and wrap it up in a to-go box.
retval = yield self._pending_chunks[x, z].deferred()
returnValue(retval)
# Create a new chunk object, since the cache turned up empty.
try:
chunk = yield maybeDeferred(self.serializer.load_chunk, x, z)
except SerializerReadException:
# Looks like the chunk wasn't already on disk. Guess we're gonna
# need to keep going.
chunk = Chunk(x, z)
# Add in our magic dirtiness hook so that the cache can be aware of
# chunks who have been...naughty.
chunk.dirtied = self._cache.dirtied
if chunk.dirty:
# The chunk was already dirty!? Oh, naughty indeed!
self._cache.dirtied(chunk)
if chunk.populated:
self.chunk_cache[x, z] = chunk
self.postprocess_chunk(chunk)
if self.factory:
self.factory.scan_chunk(chunk)
returnValue(chunk)
if self.async:
from ampoule import deferToAMPProcess
from bravo.remote import MakeChunk
generators = [plugin.name for plugin in self.pipeline]
d = deferToAMPProcess(MakeChunk, x=x, z=z, seed=self.level.seed,
generators=generators)
# Get chunk data into our chunk object.
def fill_chunk(kwargs):
chunk.blocks = array("B")
chunk.blocks.fromstring(kwargs["blocks"])
chunk.heightmap = array("B")
chunk.heightmap.fromstring(kwargs["heightmap"])
chunk.metadata = array("B")
chunk.metadata.fromstring(kwargs["metadata"])
chunk.skylight = array("B")
chunk.skylight.fromstring(kwargs["skylight"])
chunk.blocklight = array("B")
chunk.blocklight.fromstring(kwargs["blocklight"])
return chunk
d.addCallback(fill_chunk)
else:
# Populate the chunk the slow way. :c
for stage in self.pipeline:
stage.populate(chunk, self.level.seed)
chunk.regenerate()
d = succeed(chunk)
# Set up our event and generate our return-value Deferred. It has to
        # be done early because PendingEvents only fire exactly once and it
# might fire immediately in certain cases.
pe = PendingEvent()
# This one is for our return value.
retval = pe.deferred()
# This one is for scanning the chunk for automatons.
if self.factory:
pe.deferred().addCallback(self.factory.scan_chunk)
self._pending_chunks[x, z] = pe
def pp(chunk):
chunk.populated = True
chunk.dirty = True
self.postprocess_chunk(chunk)
self.dirty_chunk_cache[x, z] = chunk
del self._pending_chunks[x, z]
return chunk
# Set up callbacks.
d.addCallback(pp)
d.chainDeferred(pe)
# Because multiple people might be attached to this callback, we're
# going to do something magical here. We will yield a forked version
# of our Deferred. This means that we will wait right here, for a
# long, long time, before actually returning with the chunk, *but*,
# when we actually finish, we'll be ready to return the chunk
# immediately. Our caller cannot possibly care because they only see a
# Deferred either way.
retval = yield retval
returnValue(retval)
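The "forked Deferred" comment above is the heart of request_chunk's request de-duplication. The fan-out can be sketched without Twisted; this is a hypothetical pure-Python stand-in for bravo.utilities.temporal.PendingEvent, with plain dicts standing in for the Deferreds it really hands out:

```python
class PendingEvent(object):
    """Pure-Python sketch (no Twisted) of the fan-out request_chunk
    relies on: every caller gets its own waiter, and all waiters fire
    exactly once with the same value."""

    def __init__(self):
        self._waiters = []

    def deferred(self):
        # One fresh waiter per caller, so callbacks attached to one
        # caller's chain cannot interfere with another's.
        waiter = {}
        self._waiters.append(waiter)
        return waiter

    def callback(self, value):
        # Fire every outstanding waiter exactly once.
        waiters, self._waiters = self._waiters, []
        for w in waiters:
            w["value"] = value


pe = PendingEvent()
first = pe.deferred()   # the return value for one caller
second = pe.deferred()  # a second caller waiting on the same chunk
pe.callback("chunk 0,0")
```

Both callers observe the same chunk, and the event cannot fire twice into the same waiter, which is why request_chunk rigs up its return value before the generation Deferred could possibly complete.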
def save_chunk(self, chunk):
+ """
+ Write a chunk to the serializer.
+
+ Note that this method does nothing when the given chunk is not dirty
+ or saving is off!
+
+        :returns: A ``Deferred`` which will fire with the chunk after it
+            has been saved.
+ """
if not chunk.dirty or not self.saving:
- return
+ return succeed(chunk)
d = maybeDeferred(self.serializer.save_chunk, chunk)
@d.addCallback
def cb(none):
chunk.dirty = False
+ return chunk
@d.addErrback
def eb(failure):
failure.trap(SerializerWriteException)
log.msg("Couldn't write %r" % chunk)
+ return d
+
def load_player(self, username):
"""
Retrieve player data.
:returns: a ``Deferred`` that will be fired with a ``Player``
"""
# Get the player, possibly.
d = maybeDeferred(self.serializer.load_player, username)
@d.addErrback
def eb(failure):
failure.trap(SerializerReadException)
log.msg("Couldn't load player %r" % username)
# Make a player.
player = Player(username=username)
player.location.x = self.level.spawn[0]
player.location.y = self.level.spawn[1]
player.location.stance = self.level.spawn[1]
player.location.z = self.level.spawn[2]
return player
# This Deferred's good to go as-is.
return d
def save_player(self, username, player):
if self.saving:
self.serializer.save_player(player)
# World-level geometry access.
# These methods let external API users refrain from going through the
# standard motions of looking up and loading chunk information.
@coords_to_chunk
def get_block(self, chunk, coords):
"""
Get a block from an unknown chunk.
:returns: a ``Deferred`` with the requested value
"""
return chunk.get_block(coords)
@coords_to_chunk
def set_block(self, chunk, coords, value):
"""
Set a block in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.set_block(coords, value)
@coords_to_chunk
def get_metadata(self, chunk, coords):
"""
Get a block's metadata from an unknown chunk.
:returns: a ``Deferred`` with the requested value
"""
return chunk.get_metadata(coords)
@coords_to_chunk
def set_metadata(self, chunk, coords, value):
"""
Set a block's metadata in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.set_metadata(coords, value)
@coords_to_chunk
def destroy(self, chunk, coords):
"""
Destroy a block in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.destroy(coords)
@coords_to_chunk
def mark_dirty(self, chunk, coords):
"""
Mark an unknown chunk dirty.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.dirty = True
@sync_coords_to_chunk
def sync_get_block(self, chunk, coords):
"""
Get a block from an unknown chunk.
:returns: the requested block
"""
return chunk.get_block(coords)
@sync_coords_to_chunk
def sync_set_block(self, chunk, coords, value):
"""
Set a block in an unknown chunk.
:returns: None
"""
chunk.set_block(coords, value)
@sync_coords_to_chunk
def sync_get_metadata(self, chunk, coords):
"""
Get a block's metadata from an unknown chunk.
:returns: the requested metadata
"""
return chunk.get_metadata(coords)
@sync_coords_to_chunk
def sync_set_metadata(self, chunk, coords, value):
"""
Set a block's metadata in an unknown chunk.
:returns: None
"""
chunk.set_metadata(coords, value)
@sync_coords_to_chunk
def sync_destroy(self, chunk, coords):
"""
Destroy a block in an unknown chunk.
:returns: None
"""
chunk.destroy(coords)
@sync_coords_to_chunk
def sync_mark_dirty(self, chunk, coords):
"""
Mark an unknown chunk dirty.
:returns: None
"""
chunk.dirty = True
@sync_coords_to_chunk
def sync_request_chunk(self, chunk, coords):
"""
Get an unknown chunk.
:returns: the requested ``Chunk``
"""
return chunk
|
bravoserver/bravo
|
b328b4426a2234cde95f74953eabac1132706306
|
world: Add dirtiness tracking as an event to the chunk cache.
|
diff --git a/bravo/tests/test_world.py b/bravo/tests/test_world.py
index 9f75cdf..b1f3939 100644
--- a/bravo/tests/test_world.py
+++ b/bravo/tests/test_world.py
@@ -1,285 +1,295 @@
from twisted.trial import unittest
from twisted.internet.defer import inlineCallbacks
from array import array
from itertools import product
import os
from bravo.config import BravoConfigParser
from bravo.errors import ChunkNotLoaded
from bravo.world import ChunkCache, World
class MockChunk(object):
def __init__(self, x, z):
self.x = x
self.z = z
class TestChunkCache(unittest.TestCase):
def test_pin_single(self):
cc = ChunkCache()
chunk = MockChunk(1, 2)
cc.pin(chunk)
self.assertIs(cc.get((1, 2)), chunk)
+ def test_dirty_single(self):
+ cc = ChunkCache()
+ chunk = MockChunk(1, 2)
+ cc.dirtied(chunk)
+ self.assertIs(cc.get((1, 2)), chunk)
+
+ def test_pin_dirty(self):
+ cc = ChunkCache()
+ chunk = MockChunk(1, 2)
+ cc.pin(chunk)
+ cc.dirtied(chunk)
+ cc.unpin(chunk)
+ self.assertIs(cc.get((1, 2)), chunk)
+
class TestWorldChunks(unittest.TestCase):
def setUp(self):
self.name = "unittest"
self.bcp = BravoConfigParser()
self.bcp.add_section("world unittest")
self.bcp.set("world unittest", "url", "")
self.bcp.set("world unittest", "serializer", "memory")
self.w = World(self.bcp, self.name)
self.w.pipeline = []
self.w.start()
def tearDown(self):
self.w.stop()
def test_trivial(self):
pass
@inlineCallbacks
def test_request_chunk_identity(self):
first = yield self.w.request_chunk(0, 0)
second = yield self.w.request_chunk(0, 0)
self.assertIs(first, second)
@inlineCallbacks
def test_request_chunk_cached_identity(self):
# Turn on the cache and get a few chunks in there, then request a
# chunk that is in the cache.
yield self.w.enable_cache(1)
first = yield self.w.request_chunk(0, 0)
second = yield self.w.request_chunk(0, 0)
self.assertIs(first, second)
@inlineCallbacks
def test_get_block(self):
chunk = yield self.w.request_chunk(0, 0)
# Fill the chunk with random stuff.
chunk.blocks = array("B")
chunk.blocks.fromstring(os.urandom(32768))
for x, y, z in product(xrange(2), repeat=3):
# This works because the chunk is at (0, 0) so the coords don't
# need to be adjusted.
block = yield self.w.get_block((x, y, z))
self.assertEqual(block, chunk.get_block((x, y, z)))
@inlineCallbacks
def test_get_metadata(self):
chunk = yield self.w.request_chunk(0, 0)
# Fill the chunk with random stuff.
chunk.metadata = array("B")
chunk.metadata.fromstring(os.urandom(32768))
for x, y, z in product(xrange(2), repeat=3):
# This works because the chunk is at (0, 0) so the coords don't
# need to be adjusted.
metadata = yield self.w.get_metadata((x, y, z))
self.assertEqual(metadata, chunk.get_metadata((x, y, z)))
@inlineCallbacks
def test_get_block_readback(self):
chunk = yield self.w.request_chunk(0, 0)
# Fill the chunk with random stuff.
chunk.blocks = array("B")
chunk.blocks.fromstring(os.urandom(32768))
# Evict the chunk and grab it again.
self.w.save_chunk(chunk)
del chunk
self.w.chunk_cache.clear()
self.w.dirty_chunk_cache.clear()
chunk = yield self.w.request_chunk(0, 0)
for x, y, z in product(xrange(2), repeat=3):
# This works because the chunk is at (0, 0) so the coords don't
# need to be adjusted.
block = yield self.w.get_block((x, y, z))
self.assertEqual(block, chunk.get_block((x, y, z)))
@inlineCallbacks
def test_get_block_readback_negative(self):
chunk = yield self.w.request_chunk(-1, -1)
# Fill the chunk with random stuff.
chunk.blocks = array("B")
chunk.blocks.fromstring(os.urandom(32768))
# Evict the chunk and grab it again.
self.w.save_chunk(chunk)
del chunk
self.w.chunk_cache.clear()
self.w.dirty_chunk_cache.clear()
chunk = yield self.w.request_chunk(-1, -1)
for x, y, z in product(xrange(2), repeat=3):
block = yield self.w.get_block((x - 16, y, z - 16))
self.assertEqual(block, chunk.get_block((x, y, z)))
@inlineCallbacks
def test_get_metadata_readback(self):
chunk = yield self.w.request_chunk(0, 0)
# Fill the chunk with random stuff.
chunk.metadata = array("B")
chunk.metadata.fromstring(os.urandom(32768))
# Evict the chunk and grab it again.
self.w.save_chunk(chunk)
del chunk
self.w.chunk_cache.clear()
self.w.dirty_chunk_cache.clear()
chunk = yield self.w.request_chunk(0, 0)
for x, y, z in product(xrange(2), repeat=3):
# This works because the chunk is at (0, 0) so the coords don't
# need to be adjusted.
metadata = yield self.w.get_metadata((x, y, z))
self.assertEqual(metadata, chunk.get_metadata((x, y, z)))
@inlineCallbacks
def test_world_level_mark_chunk_dirty(self):
chunk = yield self.w.request_chunk(0, 0)
# Reload chunk.
self.w.save_chunk(chunk)
del chunk
self.w.chunk_cache.clear()
self.w.dirty_chunk_cache.clear()
chunk = yield self.w.request_chunk(0, 0)
self.assertFalse(chunk.dirty)
self.w.mark_dirty((12, 64, 4))
chunk = yield self.w.request_chunk(0, 0)
self.assertTrue(chunk.dirty)
- test_world_level_mark_chunk_dirty.todo = "Needs work"
-
@inlineCallbacks
def test_world_level_mark_chunk_dirty_offset(self):
chunk = yield self.w.request_chunk(1, 2)
# Reload chunk.
self.w.save_chunk(chunk)
del chunk
self.w.chunk_cache.clear()
self.w.dirty_chunk_cache.clear()
chunk = yield self.w.request_chunk(1, 2)
self.assertFalse(chunk.dirty)
self.w.mark_dirty((29, 64, 43))
chunk = yield self.w.request_chunk(1, 2)
self.assertTrue(chunk.dirty)
- test_world_level_mark_chunk_dirty_offset.todo = "Needs work"
-
@inlineCallbacks
def test_sync_get_block(self):
chunk = yield self.w.request_chunk(0, 0)
# Fill the chunk with random stuff.
chunk.blocks = array("B")
chunk.blocks.fromstring(os.urandom(32768))
for x, y, z in product(xrange(2), repeat=3):
# This works because the chunk is at (0, 0) so the coords don't
# need to be adjusted.
block = self.w.sync_get_block((x, y, z))
self.assertEqual(block, chunk.get_block((x, y, z)))
def test_sync_get_block_unloaded(self):
self.assertRaises(ChunkNotLoaded, self.w.sync_get_block, (0, 0, 0))
def test_sync_get_metadata_neighboring(self):
"""
Even if a neighboring chunk is loaded, the target chunk could still be
unloaded.
Test with sync_get_metadata() to increase test coverage.
"""
d = self.w.request_chunk(0, 0)
@d.addCallback
def cb(chunk):
self.assertRaises(ChunkNotLoaded,
self.w.sync_get_metadata, (16, 0, 0))
return d
class TestWorld(unittest.TestCase):
def setUp(self):
self.name = "unittest"
self.bcp = BravoConfigParser()
self.bcp.add_section("world unittest")
self.bcp.set("world unittest", "url", "")
self.bcp.set("world unittest", "serializer", "memory")
self.w = World(self.bcp, self.name)
self.w.pipeline = []
self.w.start()
def tearDown(self):
self.w.stop()
def test_trivial(self):
pass
def test_load_player_initial(self):
"""
Calling load_player() on a player which has never been loaded should
not result in an exception. Instead, the player should be returned,
wrapped in a Deferred.
"""
# For bonus points, assert that the player's username is correct.
d = self.w.load_player("unittest")
@d.addCallback
def cb(player):
self.assertEqual(player.username, "unittest")
return d
class TestWorldConfig(unittest.TestCase):
def setUp(self):
self.name = "unittest"
self.bcp = BravoConfigParser()
self.bcp.add_section("world unittest")
self.bcp.set("world unittest", "url", "")
self.bcp.set("world unittest", "serializer", "memory")
self.w = World(self.bcp, self.name)
self.w.pipeline = []
def test_trivial(self):
pass
def test_world_configured_seed(self):
"""
Worlds can have their seed set via configuration.
"""
self.bcp.set("world unittest", "seed", "42")
self.w.start()
self.assertEqual(self.w.level.seed, 42)
self.w.stop()
diff --git a/bravo/world.py b/bravo/world.py
index d003b77..5294cc9 100644
--- a/bravo/world.py
+++ b/bravo/world.py
@@ -1,714 +1,720 @@
from array import array
from functools import wraps
from itertools import product
import random
import sys
import weakref
from twisted.internet import reactor
from twisted.internet.defer import (inlineCallbacks, maybeDeferred,
returnValue, succeed)
from twisted.internet.task import LoopingCall, coiterate
from twisted.python import log
from bravo.beta.structures import Level
from bravo.chunk import Chunk, CHUNK_HEIGHT
from bravo.entity import Player, Furnace
from bravo.errors import (ChunkNotLoaded, SerializerReadException,
SerializerWriteException)
from bravo.ibravo import ISerializer
from bravo.plugin import retrieve_named_plugins
from bravo.utilities.coords import split_coords
from bravo.utilities.temporal import PendingEvent
from bravo.mobmanager import MobManager
class ChunkCache(object):
"""
A cache which holds references to all chunks which should be held in
memory.
This cache remembers chunks that were recently used, that are in permanent
residency, and so forth. Its exact caching algorithm is currently null.
When chunks dirty themselves, they are expected to notify the cache, which
will then schedule an eviction for the chunk.
"""
- _perm = None
- """
- A permanent cache of chunks which are never evicted from memory.
-
- This cache is used to speed up logins near the spawn point.
- """
-
def __init__(self):
self._perm = {}
+ self._dirty = {}
def pin(self, chunk):
self._perm[chunk.x, chunk.z] = chunk
def unpin(self, chunk):
del self._perm[chunk.x, chunk.z]
def get(self, coords):
if coords in self._perm:
return self._perm[coords]
- return None
+ # Returns None if not found!
+ return self._dirty.get(coords)
+
+ def dirtied(self, chunk):
+ self._dirty[chunk.x, chunk.z] = chunk
class ImpossibleCoordinates(Exception):
"""
A coordinate could not ever be valid.
"""
def coords_to_chunk(f):
"""
Automatically look up the chunk for the coordinates, and convert world
coordinates to chunk coordinates.
"""
@wraps(f)
def decorated(self, coords, *args, **kwargs):
x, y, z = coords
# Fail early if Y is OOB.
if not 0 <= y < CHUNK_HEIGHT:
raise ImpossibleCoordinates("Y value %d is impossible" % y)
bigx, smallx, bigz, smallz = split_coords(x, z)
d = self.request_chunk(bigx, bigz)
@d.addCallback
def cb(chunk):
return f(self, chunk, (smallx, y, smallz), *args, **kwargs)
return d
return decorated
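coords_to_chunk leans on split_coords from bravo.utilities.coords to turn world coordinates into a chunk address plus a chunk-local offset. A sketch of its assumed behaviour, under the assumption that it is plain floor division, whose semantics keep the local offsets in [0, 16) even for negative world coordinates:

```python
def split_coords(x, z):
    """Split world (x, z) into chunk coordinates plus chunk-local
    offsets.

    Sketch of bravo.utilities.coords.split_coords, assuming plain
    floor division: Python's // and % keep offsets in [0, 16) even
    for negative world coordinates.
    """
    return x // 16, x % 16, z // 16, z % 16


# (29, 43) lands in chunk (1, 2) at local offset (13, 11), matching
# the coordinates used by test_world_level_mark_chunk_dirty_offset.
bigx, smallx, bigz, smallz = split_coords(29, 43)
```

The negative case is why test_get_block_readback_negative can probe chunk (-1, -1) via world coordinates like (x - 16, y, z - 16).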
def sync_coords_to_chunk(f):
"""
Either get a chunk for the coordinates, or raise an exception.
"""
@wraps(f)
def decorated(self, coords, *args, **kwargs):
x, y, z = coords
# Fail early if Y is OOB.
if not 0 <= y < CHUNK_HEIGHT:
raise ImpossibleCoordinates("Y value %d is impossible" % y)
bigx, smallx, bigz, smallz = split_coords(x, z)
bigcoords = bigx, bigz
if bigcoords in self.chunk_cache:
chunk = self.chunk_cache[bigcoords]
elif bigcoords in self.dirty_chunk_cache:
chunk = self.dirty_chunk_cache[bigcoords]
else:
raise ChunkNotLoaded("Chunk (%d, %d) isn't loaded" % bigcoords)
return f(self, chunk, (smallx, y, smallz), *args, **kwargs)
return decorated
class World(object):
"""
Object representing a world on disk.
Worlds are composed of levels and chunks, each of which corresponds to
exactly one file on disk. Worlds also contain saved player data.
"""
factory = None
"""
The factory managing this world.
Worlds do not need to be owned by a factory, but will not callback to
surrounding objects without an owner.
"""
season = None
"""
The current `ISeason`.
"""
saving = True
"""
Whether objects belonging to this world may be written out to disk.
"""
async = False
"""
Whether this world is using multiprocessing methods to generate geometry.
"""
dimension = "earth"
"""
The world dimension. Valid values are earth, sky, and nether.
"""
level = Level(seed=0, spawn=(0, 0, 0), time=0)
"""
The initial level data.
"""
_cache = None
"""
The chunk cache.
"""
def __init__(self, config, name):
"""
:Parameters:
name : str
The configuration key to use to look up configuration data.
"""
self.config = config
self.config_name = "world %s" % name
self.chunk_cache = weakref.WeakValueDictionary()
self.dirty_chunk_cache = dict()
self._pending_chunks = dict()
def connect(self):
"""
Connect to the world.
"""
world_url = self.config.get(self.config_name, "url")
world_sf_name = self.config.get(self.config_name, "serializer")
# Get the current serializer list, and attempt to connect our
# serializer of choice to our resource.
# This could fail. Each of these lines (well, only the first and
# third) could raise a variety of exceptions. They should *all* be
# fatal.
serializers = retrieve_named_plugins(ISerializer, [world_sf_name])
self.serializer = serializers[0]
self.serializer.connect(world_url)
log.msg("World connected on %s, using serializer %s" %
(world_url, self.serializer.name))
def start(self):
"""
Start managing a world.
Connect to the world and turn on all of the timed actions which
continuously manage the world.
"""
self.connect()
# Create our cache.
self._cache = ChunkCache()
# Pick a random number for the seed. Use the configured value if one
# is present.
seed = random.randint(0, sys.maxint)
seed = self.config.getintdefault(self.config_name, "seed", seed)
self.level = self.level._replace(seed=seed)
# Check if we should offload chunk requests to ampoule.
if self.config.getbooleandefault("bravo", "ampoule", False):
try:
import ampoule
if ampoule:
self.async = True
except ImportError:
pass
log.msg("World is %s" %
("read-write" if self.saving else "read-only"))
log.msg("Using Ampoule: %s" % self.async)
# First, try loading the level, to see if there's any data out there
# which we can use. If not, don't worry about it.
d = maybeDeferred(self.serializer.load_level)
@d.addCallback
def cb(level):
self.level = level
log.msg("Loaded level data!")
@d.addErrback
def sre(failure):
failure.trap(SerializerReadException)
log.msg("Had issues loading level data, continuing anyway...")
# And now save our level.
if self.saving:
self.serializer.save_level(self.level)
# Start up the permanent cache.
# has_option() is not exactly desirable, but it's appropriate here
# because we don't want to take any action if the key is unset.
if self.config.has_option(self.config_name, "perm_cache"):
cache_level = self.config.getint(self.config_name, "perm_cache")
self.enable_cache(cache_level)
self.chunk_management_loop = LoopingCall(self.sort_chunks)
self.chunk_management_loop.start(1)
# XXX Put this in init or here?
self.mob_manager = MobManager()
# XXX Put this in the managers constructor?
self.mob_manager.world = self
def stop(self):
"""
Stop managing the world.
This can be a time-consuming, blocking operation, while the world's
data is serialized.
Note to callers: If you want the world time to be accurate, don't
forget to write it back before calling this method!
"""
self.chunk_management_loop.stop()
# Flush all dirty chunks to disk.
for chunk in self.dirty_chunk_cache.itervalues():
self.save_chunk(chunk)
# Evict all chunks.
self.chunk_cache.clear()
self.dirty_chunk_cache.clear()
# Destroy the cache.
self._cache = None
# Save the level data.
self.serializer.save_level(self.level)
def enable_cache(self, size):
"""
Set the permanent cache size.
Changing the size of the cache sets off a series of events which will
empty or fill the cache to make it the proper size.
For reference, 3 is a large-enough size to completely satisfy the
Notchian client's login demands. 10 is enough to completely fill the
Notchian client's chunk buffer.
:param int size: The taxicab radius of the cache, in chunks
:returns: A ``Deferred`` which will fire when the cache has been
adjusted.
"""
log.msg("Setting cache size to %d, please hold..." % size)
assign = self._cache.pin
def worker(x, z):
log.msg("Adding %d, %d to cache..." % (x, z))
return self.request_chunk(x, z).addCallback(assign)
x = self.level.spawn[0] // 16
z = self.level.spawn[2] // 16
rx = xrange(x - size, x + size)
rz = xrange(z - size, z + size)
work = (worker(x, z) for x, z in product(rx, rz))
d = coiterate(work)
@d.addCallback
def notify(none):
log.msg("Cache size is now %d!" % size)
return d
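enable_cache's docstring says "taxicab radius", but product(rx, rz) actually walks a square of chunks centred on the spawn chunk. A hypothetical helper mirroring those ranges makes the shape explicit:

```python
from itertools import product


def cache_box(spawn, size):
    """Compute the chunk coordinates enable_cache() would pin.

    Hypothetical helper mirroring the ranges in enable_cache: the
    product() of two half-open ranges is a square of chunks around
    the spawn chunk, not a taxicab diamond.
    """
    x = spawn[0] // 16
    z = spawn[2] // 16
    rx = range(x - size, x + size)
    rz = range(z - size, z + size)
    return sorted(product(rx, rz))


# size=1 around spawn (0, 64, 0) pins the four chunks touching the origin.
coords = cache_box((0, 64, 0), 1)
```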
def sort_chunks(self):
"""
Sort out the internal caches.
This method will always block when there are dirty chunks.
"""
first = True
all_chunks = dict(self.dirty_chunk_cache)
all_chunks.update(self.chunk_cache)
self.chunk_cache.clear()
self.dirty_chunk_cache.clear()
for coords, chunk in all_chunks.iteritems():
if chunk.dirty:
if first:
first = False
self.save_chunk(chunk)
self.chunk_cache[coords] = chunk
else:
self.dirty_chunk_cache[coords] = chunk
else:
self.chunk_cache[coords] = chunk
def save_off(self):
"""
Disable saving to disk.
This is useful for accessing the world on disk without Bravo
interfering, for backing up the world.
"""
if not self.saving:
return
d = dict(self.chunk_cache)
self.chunk_cache = d
self.saving = False
def save_on(self):
"""
Enable saving to disk.
"""
if self.saving:
return
d = weakref.WeakValueDictionary(self.chunk_cache)
self.chunk_cache = d
self.saving = True
def postprocess_chunk(self, chunk):
"""
Do a series of final steps to bring a chunk into the world.
This method might be called multiple times on a chunk, but it should
not be harmful to do so.
"""
# Apply the current season to the chunk.
if self.season:
self.season.transform(chunk)
# Since this chunk hasn't been given to any player yet, there's no
# conceivable way that any meaningful damage has been accumulated;
# anybody loading any part of this chunk will want the entire thing.
# Thus, it should start out undamaged.
chunk.clear_damage()
# Skip some of the spendier scans if we have no factory; for example,
# if we are generating chunks offline.
if not self.factory:
return chunk
# XXX slightly icky, print statements are bad
# Register the chunk's entities with our parent factory.
for entity in chunk.entities:
if hasattr(entity, 'loop'):
print "Started mob!"
self.mob_manager.start_mob(entity)
else:
print "I have no loop"
self.factory.register_entity(entity)
# XXX why is this for furnaces only? :T
# Scan the chunk for burning furnaces
for coords, tile in chunk.tiles.iteritems():
# If the furnace was saved while burning ...
if type(tile) == Furnace and tile.burntime != 0:
x, y, z = coords
coords = chunk.x, x, chunk.z, z, y
            # ... start its burning loop
reactor.callLater(2, tile.changed, self.factory, coords)
# Return the chunk, in case we are in a Deferred chain.
return chunk
@inlineCallbacks
def request_chunk(self, x, z):
"""
Request a ``Chunk`` to be delivered later.
:returns: ``Deferred`` that will be called with the ``Chunk``
"""
# First, try the cache.
cached = self._cache.get((x, z))
if cached is not None:
returnValue(cached)
# Then, legacy caches.
if (x, z) in self.chunk_cache:
returnValue(self.chunk_cache[x, z])
elif (x, z) in self.dirty_chunk_cache:
returnValue(self.dirty_chunk_cache[x, z])
elif (x, z) in self._pending_chunks:
# Rig up another Deferred and wrap it up in a to-go box.
retval = yield self._pending_chunks[x, z].deferred()
returnValue(retval)
+ # Create a new chunk object, since the cache turned up empty.
try:
chunk = yield maybeDeferred(self.serializer.load_chunk, x, z)
except SerializerReadException:
# Looks like the chunk wasn't already on disk. Guess we're gonna
# need to keep going.
chunk = Chunk(x, z)
+ # Add in our magic dirtiness hook so that the cache can be aware of
+ # chunks who have been...naughty.
+ chunk.dirtied = self._cache.dirtied
+ if chunk.dirty:
+ # The chunk was already dirty!? Oh, naughty indeed!
+ self._cache.dirtied(chunk)
+
if chunk.populated:
self.chunk_cache[x, z] = chunk
self.postprocess_chunk(chunk)
if self.factory:
self.factory.scan_chunk(chunk)
returnValue(chunk)
if self.async:
from ampoule import deferToAMPProcess
from bravo.remote import MakeChunk
generators = [plugin.name for plugin in self.pipeline]
d = deferToAMPProcess(MakeChunk, x=x, z=z, seed=self.level.seed,
generators=generators)
# Get chunk data into our chunk object.
def fill_chunk(kwargs):
chunk.blocks = array("B")
chunk.blocks.fromstring(kwargs["blocks"])
chunk.heightmap = array("B")
chunk.heightmap.fromstring(kwargs["heightmap"])
chunk.metadata = array("B")
chunk.metadata.fromstring(kwargs["metadata"])
chunk.skylight = array("B")
chunk.skylight.fromstring(kwargs["skylight"])
chunk.blocklight = array("B")
chunk.blocklight.fromstring(kwargs["blocklight"])
return chunk
d.addCallback(fill_chunk)
else:
# Populate the chunk the slow way. :c
for stage in self.pipeline:
stage.populate(chunk, self.level.seed)
chunk.regenerate()
d = succeed(chunk)
# Set up our event and generate our return-value Deferred. It has to
        # be done early because PendingEvents only fire exactly once and it
# might fire immediately in certain cases.
pe = PendingEvent()
# This one is for our return value.
retval = pe.deferred()
# This one is for scanning the chunk for automatons.
if self.factory:
pe.deferred().addCallback(self.factory.scan_chunk)
self._pending_chunks[x, z] = pe
def pp(chunk):
chunk.populated = True
chunk.dirty = True
self.postprocess_chunk(chunk)
self.dirty_chunk_cache[x, z] = chunk
del self._pending_chunks[x, z]
return chunk
# Set up callbacks.
d.addCallback(pp)
d.chainDeferred(pe)
# Because multiple people might be attached to this callback, we're
# going to do something magical here. We will yield a forked version
# of our Deferred. This means that we will wait right here, for a
# long, long time, before actually returning with the chunk, *but*,
# when we actually finish, we'll be ready to return the chunk
# immediately. Our caller cannot possibly care because they only see a
# Deferred either way.
retval = yield retval
returnValue(retval)
def save_chunk(self, chunk):
if not chunk.dirty or not self.saving:
return
d = maybeDeferred(self.serializer.save_chunk, chunk)
@d.addCallback
def cb(none):
chunk.dirty = False
@d.addErrback
def eb(failure):
failure.trap(SerializerWriteException)
log.msg("Couldn't write %r" % chunk)
def load_player(self, username):
"""
Retrieve player data.
:returns: a ``Deferred`` that will be fired with a ``Player``
"""
# Get the player, possibly.
d = maybeDeferred(self.serializer.load_player, username)
@d.addErrback
def eb(failure):
failure.trap(SerializerReadException)
log.msg("Couldn't load player %r" % username)
# Make a player.
player = Player(username=username)
player.location.x = self.level.spawn[0]
player.location.y = self.level.spawn[1]
player.location.stance = self.level.spawn[1]
player.location.z = self.level.spawn[2]
return player
# This Deferred's good to go as-is.
return d
def save_player(self, username, player):
if self.saving:
self.serializer.save_player(player)
# World-level geometry access.
# These methods let external API users refrain from going through the
# standard motions of looking up and loading chunk information.
@coords_to_chunk
def get_block(self, chunk, coords):
"""
Get a block from an unknown chunk.
:returns: a ``Deferred`` with the requested value
"""
return chunk.get_block(coords)
@coords_to_chunk
def set_block(self, chunk, coords, value):
"""
Set a block in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.set_block(coords, value)
@coords_to_chunk
def get_metadata(self, chunk, coords):
"""
Get a block's metadata from an unknown chunk.
:returns: a ``Deferred`` with the requested value
"""
return chunk.get_metadata(coords)
@coords_to_chunk
def set_metadata(self, chunk, coords, value):
"""
Set a block's metadata in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.set_metadata(coords, value)
@coords_to_chunk
def destroy(self, chunk, coords):
"""
Destroy a block in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.destroy(coords)
@coords_to_chunk
def mark_dirty(self, chunk, coords):
"""
Mark an unknown chunk dirty.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.dirty = True
@sync_coords_to_chunk
def sync_get_block(self, chunk, coords):
"""
Get a block from an unknown chunk.
:returns: the requested block
"""
return chunk.get_block(coords)
@sync_coords_to_chunk
def sync_set_block(self, chunk, coords, value):
"""
Set a block in an unknown chunk.
:returns: None
"""
chunk.set_block(coords, value)
@sync_coords_to_chunk
def sync_get_metadata(self, chunk, coords):
"""
Get a block's metadata from an unknown chunk.
:returns: the requested metadata
"""
return chunk.get_metadata(coords)
@sync_coords_to_chunk
def sync_set_metadata(self, chunk, coords, value):
"""
Set a block's metadata in an unknown chunk.
:returns: None
"""
chunk.set_metadata(coords, value)
@sync_coords_to_chunk
def sync_destroy(self, chunk, coords):
"""
Destroy a block in an unknown chunk.
:returns: None
"""
chunk.destroy(coords)
@sync_coords_to_chunk
def sync_mark_dirty(self, chunk, coords):
"""
Mark an unknown chunk dirty.
:returns: None
"""
chunk.dirty = True
@sync_coords_to_chunk
def sync_request_chunk(self, chunk, coords):
"""
Get an unknown chunk.
:returns: the requested ``Chunk``
"""
return chunk
|
bravoserver/bravo
|
72c03c454016d669cabb3c8fe29625f3e35f103d
|
chunk: Refactor dirtiness to fire a simple callback hook.
|
diff --git a/bravo/chunk.py b/bravo/chunk.py
index 3453547..2a8b6f7 100644
--- a/bravo/chunk.py
+++ b/bravo/chunk.py
@@ -1,626 +1,649 @@
from array import array
from functools import wraps
from itertools import product
from struct import pack
from warnings import warn
from bravo.blocks import blocks, glowing_blocks
from bravo.beta.packets import make_packet
from bravo.geometry.section import Section
from bravo.utilities.bits import pack_nibbles
from bravo.utilities.coords import CHUNK_HEIGHT, XZ, iterchunk
from bravo.utilities.maths import clamp
class ChunkWarning(Warning):
"""
Somebody did something inappropriate to this chunk, but it probably isn't
lethal, so the chunk is issuing a warning instead of an exception.
"""
def check_bounds(f):
"""
Decorate a function or method to have its first positional argument be
treated as an (x, y, z) tuple which must fit inside chunk boundaries of
16, CHUNK_HEIGHT, and 16, respectively.
A warning will be raised if the bounds check fails.
"""
@wraps(f)
def deco(chunk, coords, *args, **kwargs):
x, y, z = coords
# Coordinates were out-of-bounds; warn and run away.
if not (0 <= x < 16 and 0 <= z < 16 and 0 <= y < CHUNK_HEIGHT):
warn("Coordinates %s are OOB in %s() of %s, ignoring call"
% (coords, f.func_name, chunk), ChunkWarning, stacklevel=2)
# A concession towards where this decorator will be used. The
# value is likely to be discarded either way, but if the value is
# used, we shouldn't horribly die because of None/0 mismatch.
return 0
return f(chunk, coords, *args, **kwargs)
return deco
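The decorator above can be exercised standalone. A self-contained sketch (CHUNK_HEIGHT assumed to be 256, as elsewhere in bravo.chunk) showing the warn-and-return-0 concession for out-of-bounds coordinates:

```python
import warnings
from functools import wraps

CHUNK_HEIGHT = 256  # assumed value of bravo.utilities.coords.CHUNK_HEIGHT


class ChunkWarning(Warning):
    """Restated from above: inappropriate but non-lethal chunk usage."""


def check_bounds(f):
    @wraps(f)
    def deco(chunk, coords, *args, **kwargs):
        x, y, z = coords
        if not (0 <= x < 16 and 0 <= z < 16 and 0 <= y < CHUNK_HEIGHT):
            warnings.warn("Coordinates %s are OOB in %s(), ignoring call"
                          % (coords, f.__name__), ChunkWarning, stacklevel=2)
            # Discardable placeholder so callers don't crash on None.
            return 0
        return f(chunk, coords, *args, **kwargs)
    return deco


@check_bounds
def get_height(chunk, coords):
    # Hypothetical accessor used only to demonstrate the decorator.
    return coords[1]
```

In-bounds calls pass through untouched; out-of-bounds calls warn and yield the 0 placeholder.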
def ci(x, y, z):
"""
Turn an (x, y, z) tuple into a chunk index.
This is really a macro and not a function, but Python doesn't know the
difference. Hopefully this is faster on PyPy than on CPython.
"""
return (x * 16 + z) * CHUNK_HEIGHT + y
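The index layout ci() encodes can be checked by stepping each axis: y advances one slot, z advances CHUNK_HEIGHT slots, and x advances 16 * CHUNK_HEIGHT slots, so each (x, z) column's y values are contiguous. A self-contained restatement (CHUNK_HEIGHT assumed 256):

```python
CHUNK_HEIGHT = 256  # assumed; bravo.utilities.coords defines the real value


def ci(x, y, z):
    # Identical to the "macro" above: x-major, then z, with one
    # column's y values stored contiguously.
    return (x * 16 + z) * CHUNK_HEIGHT + y
```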
def segment_array(a):
"""
Chop up a chunk-sized array into sixteen components.
The chops are done in order to produce the smaller chunks preferred by
modern clients.
"""
l = [array(a.typecode) for chaff in range(16)]
index = 0
for i in range(0, len(a), 16):
l[index].extend(a[i:i + 16])
index = (index + 1) % 16
return l
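segment_array simply deals 16-element slices round-robin into sixteen output arrays. A runnable restatement on a small input shows the dealing order:

```python
from array import array


def segment_array(a):
    # Deal 16-element slices round-robin across sixteen outputs.
    l = [array(a.typecode) for _ in range(16)]
    index = 0
    for i in range(0, len(a), 16):
        l[index].extend(a[i:i + 16])
        index = (index + 1) % 16
    return l


# 64 input elements fill the first four segments, 16 apiece.
segments = segment_array(array("B", range(64)))
```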
def make_glows():
"""
Set up glow tables.
These tables provide glow maps for illuminated points.
"""
glow = [None] * 16
for i in range(16):
dim = 2 * i + 1
glow[i] = array("b", [0] * (dim**3))
for x, y, z in product(xrange(dim), repeat=3):
distance = abs(x - i) + abs(y - i) + abs(z - i)
glow[i][(x * dim + y) * dim + z] = i + 1 - distance
glow[i] = array("B", [clamp(x, 0, 15) for x in glow[i]])
return glow
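As a sanity check (a standalone sketch of the same construction), a strength-``i`` table is a (2i+1)³ cube whose values fall off by one per taxicab step from the centre, clamped into 0..15:

```python
from itertools import product

def make_glow(strength):
    # One entry of the glow table, built the same way as make_glows().
    dim = 2 * strength + 1
    cube = [0] * (dim ** 3)
    for x, y, z in product(range(dim), repeat=3):
        distance = (abs(x - strength) + abs(y - strength)
                    + abs(z - strength))
        cube[(x * dim + y) * dim + z] = max(0, min(15,
                                                   strength + 1 - distance))
    return cube

g = make_glow(2)                    # 5x5x5 cube for a strength-2 source
assert g[(2 * 5 + 2) * 5 + 2] == 3  # centre: strength + 1
assert g[0] == 0                    # corner, 6 taxicab steps away: dark
```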
glow = make_glows()
def composite_glow(target, strength, x, y, z):
"""
Composite a light source onto a lightmap.
The exact operation is not quite unlike an add.
"""
ambient = glow[strength]
xbound, ybound, zbound = 16, CHUNK_HEIGHT, 16
sx = x - strength
sy = y - strength
sz = z - strength
ex = x + strength
ey = y + strength
ez = z + strength
si, sj, sk = 0, 0, 0
ei, ej, ek = strength * 2, strength * 2, strength * 2
if sx < 0:
sx, si = 0, -sx
if sy < 0:
sy, sj = 0, -sy
if sz < 0:
sz, sk = 0, -sz
if ex > xbound:
ex, ei = xbound, ei - ex + xbound
if ey > ybound:
ey, ej = ybound, ej - ey + ybound
if ez > zbound:
ez, ek = zbound, ek - ez + zbound
adim = 2 * strength + 1
# Composite! Apologies for the loops.
for (tx, ax) in zip(range(sx, ex), range(si, ei)):
for (tz, az) in zip(range(sz, ez), range(sk, ek)):
for (ty, ay) in zip(range(sy, ey), range(sj, ej)):
ambient_index = (ax * adim + az) * adim + ay
target[ci(tx, ty, tz)] += ambient[ambient_index]
def iter_neighbors(coords):
"""
Iterate over the chunk-local coordinates surrounding the given
coordinates.
All coordinates are chunk-local.
Coordinates which are not valid chunk-local coordinates will not be
generated.
"""
x, z, y = coords
for dx, dz, dy in (
(1, 0, 0),
(-1, 0, 0),
(0, 1, 0),
(0, -1, 0),
(0, 0, 1),
(0, 0, -1)):
nx = x + dx
nz = z + dz
ny = y + dy
if not (0 <= nx < 16 and
0 <= nz < 16 and
0 <= ny < CHUNK_HEIGHT):
continue
yield nx, nz, ny
def neighboring_light(glow, block):
"""
Calculate the amount of light that should be shone on a block.
``glow`` is the brightest neighboring light. ``block`` is the slot of the
block being illuminated.
The return value is always a valid light value.
"""
return clamp(glow - blocks[block].dim, 0, 15)
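A standalone sketch of the same calculation, with ``clamp()`` inlined and the block's ``dim`` attribute passed as a plain integer instead of a slot lookup:

```python
def neighboring_light(glow, dim):
    # dim stands in for blocks[block].dim, the block's light attenuation.
    return min(max(glow - dim, 0), 15)

assert neighboring_light(15, 1) == 14  # transparent block by full light
assert neighboring_light(2, 15) == 0   # opaque block: never negative
assert neighboring_light(99, 1) == 15  # bogus input still clamps to 0..15
```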
+
class Chunk(object):
"""
A chunk of blocks.
Chunks are large pieces of world geometry (block data). The blocks, light
maps, and associated metadata are stored in chunks. Chunks are
always measured 16xCHUNK_HEIGHTx16 and are aligned on 16x16 boundaries in
the xz-plane.
:cvar bool dirty: Whether this chunk needs to be flushed to disk.
:cvar bool populated: Whether this chunk has had its initial block data
filled out.
"""
all_damaged = False
- dirty = True
populated = False
+ dirtied = None
+ """
+ Optional hook to be called when this chunk becomes dirty.
+ """
+
+ _dirty = True
+ """
+ Internal flag describing whether the chunk is dirty. Don't touch directly;
+ use the ``dirty`` property instead.
+ """
+
def __init__(self, x, z):
"""
:param int x: X coordinate in chunk coords
:param int z: Z coordinate in chunk coords
:ivar array.array heightmap: Tracks the tallest block in each xz-column.
:ivar bool all_damaged: Flag for forcing the entire chunk to be
damaged. This is for efficiency; past a certain point, it is not
efficient to batch block updates or track damage. Heavily damaged
chunks have their damage represented as a complete resend of the
entire chunk.
"""
self.x = int(x)
self.z = int(z)
self.heightmap = array("B", [0] * (16 * 16))
self.blocklight = array("B", [0] * (16 * 16 * CHUNK_HEIGHT))
self.sections = [Section() for i in range(16)]
self.entities = set()
self.tiles = {}
self.damaged = set()
def __repr__(self):
return "Chunk(%d, %d)" % (self.x, self.z)
__str__ = __repr__
+ @property
+ def dirty(self):
+ return self._dirty
+
+ @dirty.setter
+ def dirty(self, value):
+ if value and not self._dirty:
+ # Notify whoever cares.
+ if self.dirtied is not None:
+ self.dirtied(self)
+ self._dirty = value
+
def regenerate_heightmap(self):
"""
Regenerate the height map array.
The height map is merely the position of the tallest block in any
xz-column.
"""
for x in range(16):
for z in range(16):
column = x * 16 + z
for y in range(CHUNK_HEIGHT - 1, -1, -1):
if self.get_block((x, y, z)):
break
self.heightmap[column] = y
def regenerate_blocklight(self):
lightmap = array("L", [0] * (16 * 16 * CHUNK_HEIGHT))
for x, z, y in iterchunk():
block = self.get_block((x, y, z))
if block in glowing_blocks:
composite_glow(lightmap, glowing_blocks[block], x, y, z)
self.blocklight = array("B", [clamp(x, 0, 15) for x in lightmap])
def regenerate_skylight(self):
"""
Regenerate the ambient light map.
Each block's individual light comes from two sources. The ambient
light comes from the sky.
The height map must be valid for this method to produce valid results.
"""
# Create an array of skylights, and a mask of dimming blocks.
lights = [0xf] * (16 * 16)
mask = [0x0] * (16 * 16)
# For each y-level, we're going to update the mask, apply it to the
# lights, apply the lights to the section, and then blur the lights
# and move downwards. Since empty sections are full of air, and air
# doesn't ever dim, ignoring empty sections should be a correct way
# to speed things up. Another optimization is that the process ends
# early if the entire slice of lights is dark.
for section in reversed(self.sections):
if not section:
continue
for y in range(15, -1, -1):
# Early-out if there's no more light left.
if not any(lights):
break
# Update the mask.
for x, z in XZ:
offset = x * 16 + z
block = section.get_block((x, y, z))
mask[offset] = blocks[block].dim
# Apply the mask to the lights.
for i, dim in enumerate(mask):
# Keep it positive.
lights[i] = max(0, lights[i] - dim)
# Apply the lights to the section.
for x, z in XZ:
offset = x * 16 + z
section.set_skylight((x, y, z), lights[offset])
# XXX blur the lights
# And continue moving downward.
def regenerate(self):
"""
Regenerate all auxiliary tables.
"""
self.regenerate_heightmap()
self.regenerate_blocklight()
self.regenerate_skylight()
self.dirty = True
def damage(self, coords):
"""
Record damage on this chunk.
"""
if self.all_damaged:
return
x, y, z = coords
self.damaged.add(coords)
# The number 176 represents the threshold at which it is cheaper to
# resend the entire chunk instead of individual blocks.
if len(self.damaged) > 176:
self.all_damaged = True
self.damaged.clear()
def is_damaged(self):
"""
Determine whether any damage is pending on this chunk.
:rtype: bool
:returns: True if any damage is pending on this chunk, False if not.
"""
return self.all_damaged or bool(self.damaged)
def get_damage_packet(self):
"""
Make a packet representing the current damage on this chunk.
This method is not private, but some care should be taken with it,
since it wraps some fairly cryptic internal data structures.
If this chunk is currently undamaged, this method will return an empty
string, which should be safe to treat as a packet. Please check with
`is_damaged()` before doing this if you need to optimize this case.
To avoid extra overhead, this method should really be used in
conjunction with `Factory.broadcast_for_chunk()`.
Do not forget to clear this chunk's damage! Callers are responsible
for doing this.
>>> packet = chunk.get_damage_packet()
>>> factory.broadcast_for_chunk(packet, chunk.x, chunk.z)
>>> chunk.clear_damage()
:rtype: str
:returns: String representation of the packet.
"""
if self.all_damaged:
# Resend the entire chunk!
return self.save_to_packet()
elif not self.damaged:
# Send nothing at all; we don't even have a scratch on us.
return ""
elif len(self.damaged) == 1:
# Use a single block update packet. Find the first (only) set bit
# in the damaged array, and use it as an index.
coords = next(iter(self.damaged))
block = self.get_block(coords)
metadata = self.get_metadata(coords)
x, y, z = coords
return make_packet("block",
x=x + self.x * 16,
y=y,
z=z + self.z * 16,
type=block,
meta=metadata)
else:
# Use a batch update.
records = []
for coords in self.damaged:
block = self.get_block(coords)
metadata = self.get_metadata(coords)
x, y, z = coords
record = x << 28 | z << 24 | y << 16 | block << 4 | metadata
records.append(record)
data = "".join(pack(">I", record) for record in records)
return make_packet("batch", x=self.x, z=self.z,
count=len(records), data=data)
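The bit layout packed above can be round-tripped in isolation (illustrative helpers, not part of the class):

```python
def pack_record(x, y, z, block, metadata):
    # Same layout as the batch update: 4 bits x, 4 bits z, a byte of y,
    # 12 bits of block type, and 4 bits of metadata.
    return x << 28 | z << 24 | y << 16 | block << 4 | metadata

def unpack_record(r):
    return (r >> 28 & 0xf, r >> 16 & 0xff, r >> 24 & 0xf,
            r >> 4 & 0xfff, r & 0xf)

assert unpack_record(pack_record(3, 70, 9, 56, 0)) == (3, 70, 9, 56, 0)
```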
def clear_damage(self):
"""
Clear this chunk's damage.
"""
self.damaged.clear()
self.all_damaged = False
def save_to_packet(self):
"""
Generate a chunk packet.
"""
mask = 0
packed = []
ls = segment_array(self.blocklight)
for i, section in enumerate(self.sections):
if any(section.blocks):
mask |= 1 << i
packed.append(section.blocks.tostring())
for i, section in enumerate(self.sections):
if mask & 1 << i:
packed.append(pack_nibbles(section.metadata))
for i, l in enumerate(ls):
if mask & 1 << i:
packed.append(pack_nibbles(l))
for i, section in enumerate(self.sections):
if mask & 1 << i:
packed.append(pack_nibbles(section.skylight))
# Fake the biome data.
packed.append("\x00" * 256)
packet = make_packet("chunk", x=self.x, z=self.z, continuous=True,
primary=mask, add=0x0, data="".join(packed))
return packet
@check_bounds
def get_block(self, coords):
"""
Look up a block value.
:param tuple coords: coordinate triplet
:rtype: int
:returns: int representing block type
"""
x, y, z = coords
index, y = divmod(y, 16)
return self.sections[index].get_block((x, y, z))
@check_bounds
def set_block(self, coords, block):
"""
Update a block value.
:param tuple coords: coordinate triplet
:param int block: block type
"""
x, y, z = coords
index, section_y = divmod(y, 16)
column = x * 16 + z
if self.get_block(coords) != block:
self.sections[index].set_block((x, section_y, z), block)
if not self.populated:
return
# Regenerate heightmap at this coordinate.
if block:
self.heightmap[column] = max(self.heightmap[column], y)
else:
# If we replace the highest block with air, we need to go
# through all blocks below it to find the new top block.
height = self.heightmap[column]
if y == height:
for y in range(height, -1, -1):
if self.get_block((x, y, z)):
break
self.heightmap[column] = y
# Do the blocklight at this coordinate, if appropriate.
if block in glowing_blocks:
composite_glow(self.blocklight, glowing_blocks[block],
x, y, z)
self.blocklight = array("B", [clamp(x, 0, 15) for x in
self.blocklight])
# And the skylight.
glow = max(self.get_skylight((nx, ny, nz))
for nx, nz, ny in iter_neighbors((x, z, y)))
self.set_skylight((x, y, z), neighboring_light(glow, block))
self.dirty = True
self.damage(coords)
@check_bounds
def get_metadata(self, coords):
"""
Look up metadata.
:param tuple coords: coordinate triplet
:rtype: int
"""
x, y, z = coords
index, y = divmod(y, 16)
return self.sections[index].get_metadata((x, y, z))
@check_bounds
def set_metadata(self, coords, metadata):
"""
Update metadata.
:param tuple coords: coordinate triplet
:param int metadata:
"""
if self.get_metadata(coords) != metadata:
x, y, z = coords
index, y = divmod(y, 16)
self.sections[index].set_metadata((x, y, z), metadata)
self.dirty = True
self.damage(coords)
@check_bounds
def get_skylight(self, coords):
"""
Look up skylight value.
:param tuple coords: coordinate triplet
:rtype: int
"""
x, y, z = coords
index, y = divmod(y, 16)
return self.sections[index].get_skylight((x, y, z))
@check_bounds
def set_skylight(self, coords, value):
"""
Update skylight value.
:param tuple coords: coordinate triplet
:param int value: skylight value
"""
if self.get_skylight(coords) != value:
x, y, z = coords
index, y = divmod(y, 16)
self.sections[index].set_skylight((x, y, z), value)
@check_bounds
def destroy(self, coords):
"""
Destroy the block at the given coordinates.
This may or may not set the block to be full of air; it uses the
block's preferred replacement. For example, ice generally turns to
water when destroyed.
This is safe as a no-op; for example, destroying a block of air with
no metadata is not going to cause state changes.
:param tuple coords: coordinate triplet
"""
block = blocks[self.get_block(coords)]
self.set_block(coords, block.replace)
self.set_metadata(coords, 0)
def height_at(self, x, z):
"""
Get the height of an xz-column of blocks.
:param int x: X coordinate
:param int z: Z coordinate
:rtype: int
:returns: The height of the given column of blocks.
"""
return self.heightmap[x * 16 + z]
def sed(self, search, replace):
"""
Execute a search and replace on all blocks in this chunk.
Named after the ubiquitous Unix tool. Does a semantic
s/search/replace/g on this chunk's blocks.
:param int search: block to find
:param int replace: block to use as a replacement
"""
for section in self.sections:
for i, block in enumerate(section.blocks):
if block == search:
section.blocks[i] = replace
self.all_damaged = True
self.dirty = True
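The ``dirty`` property and ``dirtied`` hook introduced above can be exercised in isolation; this is a minimal stand-in class, not the real ``Chunk``:

```python
class Chunk(object):
    dirtied = None   # optional hook called on the clean -> dirty edge
    _dirty = True

    @property
    def dirty(self):
        return self._dirty

    @dirty.setter
    def dirty(self, value):
        if value and not self._dirty:
            # Notify whoever cares, exactly once per transition.
            if self.dirtied is not None:
                self.dirtied(self)
        self._dirty = value

events = []
c = Chunk()
c.dirty = False          # clean the chunk; no notification
c.dirtied = events.append
c.dirty = True           # clean -> dirty transition fires the hook
c.dirty = True           # already dirty; no second notification
assert events == [c]
```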
|
bravoserver/bravo
|
ab0ae693b2c316c20a0c7c04a4ec681eccebbe16
|
utilities/automatic: Only feed acceptable blocks to automatons.
|
diff --git a/bravo/utilities/automatic.py b/bravo/utilities/automatic.py
index c129964..40a5114 100644
--- a/bravo/utilities/automatic.py
+++ b/bravo/utilities/automatic.py
@@ -1,32 +1,39 @@
from bravo.utilities.coords import XZ
+
def naive_scan(automaton, chunk):
"""
Utility function which can be used to implement a naive, slow, but
thorough chunk scan for automatons.
This method is designed to be directly useable on automaton classes to
provide the `scan()` interface.
This function depends on implementation details of ``Chunk``.
"""
+ acceptable = automaton.blocks
+
for index, section in enumerate(chunk.sections):
if section:
for i, block in enumerate(section.blocks):
- coords = i & 0xf, (i >> 8) + index * 16, i >> 4 & 0xf
- automaton.feed(coords)
+ if block in acceptable:
+ coords = i & 0xf, (i >> 8) + index * 16, i >> 4 & 0xf
+ automaton.feed(coords)
+
def column_scan(automaton, chunk):
"""
Utility function which provides a chunk scanner which only examines the
tallest blocks in the chunk. This can be useful for automatons which only
care about sunlit or elevated areas.
This method can be used directly in automaton classes to provide `scan()`.
"""
+ acceptable = automaton.blocks
+
for x, z in XZ:
y = chunk.height_at(x, z)
- if chunk.get_block((x, y, z)) in automaton.blocks:
+ if chunk.get_block((x, y, z)) in acceptable:
automaton.feed((x + chunk.x * 16, y, z + chunk.z * 16))
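The shift-and-mask coordinate recovery in ``naive_scan`` assumes each section stores its blocks as a flat 16x16x16 array indexed as ``(y * 16 + z) * 16 + x``; under that assumption the unpacking round-trips (hypothetical helpers for illustration):

```python
def unpack(i, section_index):
    # The expression used in naive_scan above.
    return i & 0xf, (i >> 8) + section_index * 16, i >> 4 & 0xf

def pack(x, y, z):
    # Assumed section layout: y in the high bits, then z, then x.
    section_index, local_y = divmod(y, 16)
    return (local_y * 16 + z) * 16 + x, section_index

i, sec = pack(5, 37, 11)
assert unpack(i, sec) == (5, 37, 11)
```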
|
bravoserver/bravo
|
9676b27f585120cbc568db850ef2c97a85b4fe60
|
world: Make enable_chunks() return a Deferred, and test.
|
diff --git a/bravo/tests/test_world.py b/bravo/tests/test_world.py
index a94a2dd..9f75cdf 100644
--- a/bravo/tests/test_world.py
+++ b/bravo/tests/test_world.py
@@ -1,276 +1,285 @@
from twisted.trial import unittest
from twisted.internet.defer import inlineCallbacks
from array import array
from itertools import product
import os
from bravo.config import BravoConfigParser
from bravo.errors import ChunkNotLoaded
from bravo.world import ChunkCache, World
class MockChunk(object):
def __init__(self, x, z):
self.x = x
self.z = z
class TestChunkCache(unittest.TestCase):
def test_pin_single(self):
cc = ChunkCache()
chunk = MockChunk(1, 2)
cc.pin(chunk)
self.assertIs(cc.get((1, 2)), chunk)
class TestWorldChunks(unittest.TestCase):
def setUp(self):
self.name = "unittest"
self.bcp = BravoConfigParser()
self.bcp.add_section("world unittest")
self.bcp.set("world unittest", "url", "")
self.bcp.set("world unittest", "serializer", "memory")
self.w = World(self.bcp, self.name)
self.w.pipeline = []
self.w.start()
def tearDown(self):
self.w.stop()
def test_trivial(self):
pass
@inlineCallbacks
def test_request_chunk_identity(self):
first = yield self.w.request_chunk(0, 0)
second = yield self.w.request_chunk(0, 0)
self.assertIs(first, second)
+ @inlineCallbacks
+ def test_request_chunk_cached_identity(self):
+ # Turn on the cache and get a few chunks in there, then request a
+ # chunk that is in the cache.
+ yield self.w.enable_cache(1)
+ first = yield self.w.request_chunk(0, 0)
+ second = yield self.w.request_chunk(0, 0)
+ self.assertIs(first, second)
+
@inlineCallbacks
def test_get_block(self):
chunk = yield self.w.request_chunk(0, 0)
# Fill the chunk with random stuff.
chunk.blocks = array("B")
chunk.blocks.fromstring(os.urandom(32768))
for x, y, z in product(xrange(2), repeat=3):
# This works because the chunk is at (0, 0) so the coords don't
# need to be adjusted.
block = yield self.w.get_block((x, y, z))
self.assertEqual(block, chunk.get_block((x, y, z)))
@inlineCallbacks
def test_get_metadata(self):
chunk = yield self.w.request_chunk(0, 0)
# Fill the chunk with random stuff.
chunk.metadata = array("B")
chunk.metadata.fromstring(os.urandom(32768))
for x, y, z in product(xrange(2), repeat=3):
# This works because the chunk is at (0, 0) so the coords don't
# need to be adjusted.
metadata = yield self.w.get_metadata((x, y, z))
self.assertEqual(metadata, chunk.get_metadata((x, y, z)))
@inlineCallbacks
def test_get_block_readback(self):
chunk = yield self.w.request_chunk(0, 0)
# Fill the chunk with random stuff.
chunk.blocks = array("B")
chunk.blocks.fromstring(os.urandom(32768))
# Evict the chunk and grab it again.
self.w.save_chunk(chunk)
del chunk
self.w.chunk_cache.clear()
self.w.dirty_chunk_cache.clear()
chunk = yield self.w.request_chunk(0, 0)
for x, y, z in product(xrange(2), repeat=3):
# This works because the chunk is at (0, 0) so the coords don't
# need to be adjusted.
block = yield self.w.get_block((x, y, z))
self.assertEqual(block, chunk.get_block((x, y, z)))
@inlineCallbacks
def test_get_block_readback_negative(self):
chunk = yield self.w.request_chunk(-1, -1)
# Fill the chunk with random stuff.
chunk.blocks = array("B")
chunk.blocks.fromstring(os.urandom(32768))
# Evict the chunk and grab it again.
self.w.save_chunk(chunk)
del chunk
self.w.chunk_cache.clear()
self.w.dirty_chunk_cache.clear()
chunk = yield self.w.request_chunk(-1, -1)
for x, y, z in product(xrange(2), repeat=3):
block = yield self.w.get_block((x - 16, y, z - 16))
self.assertEqual(block, chunk.get_block((x, y, z)))
@inlineCallbacks
def test_get_metadata_readback(self):
chunk = yield self.w.request_chunk(0, 0)
# Fill the chunk with random stuff.
chunk.metadata = array("B")
chunk.metadata.fromstring(os.urandom(32768))
# Evict the chunk and grab it again.
self.w.save_chunk(chunk)
del chunk
self.w.chunk_cache.clear()
self.w.dirty_chunk_cache.clear()
chunk = yield self.w.request_chunk(0, 0)
for x, y, z in product(xrange(2), repeat=3):
# This works because the chunk is at (0, 0) so the coords don't
# need to be adjusted.
metadata = yield self.w.get_metadata((x, y, z))
self.assertEqual(metadata, chunk.get_metadata((x, y, z)))
@inlineCallbacks
def test_world_level_mark_chunk_dirty(self):
chunk = yield self.w.request_chunk(0, 0)
# Reload chunk.
self.w.save_chunk(chunk)
del chunk
self.w.chunk_cache.clear()
self.w.dirty_chunk_cache.clear()
chunk = yield self.w.request_chunk(0, 0)
self.assertFalse(chunk.dirty)
self.w.mark_dirty((12, 64, 4))
chunk = yield self.w.request_chunk(0, 0)
self.assertTrue(chunk.dirty)
test_world_level_mark_chunk_dirty.todo = "Needs work"
@inlineCallbacks
def test_world_level_mark_chunk_dirty_offset(self):
chunk = yield self.w.request_chunk(1, 2)
# Reload chunk.
self.w.save_chunk(chunk)
del chunk
self.w.chunk_cache.clear()
self.w.dirty_chunk_cache.clear()
chunk = yield self.w.request_chunk(1, 2)
self.assertFalse(chunk.dirty)
self.w.mark_dirty((29, 64, 43))
chunk = yield self.w.request_chunk(1, 2)
self.assertTrue(chunk.dirty)
test_world_level_mark_chunk_dirty_offset.todo = "Needs work"
@inlineCallbacks
def test_sync_get_block(self):
chunk = yield self.w.request_chunk(0, 0)
# Fill the chunk with random stuff.
chunk.blocks = array("B")
chunk.blocks.fromstring(os.urandom(32768))
for x, y, z in product(xrange(2), repeat=3):
# This works because the chunk is at (0, 0) so the coords don't
# need to be adjusted.
block = self.w.sync_get_block((x, y, z))
self.assertEqual(block, chunk.get_block((x, y, z)))
def test_sync_get_block_unloaded(self):
self.assertRaises(ChunkNotLoaded, self.w.sync_get_block, (0, 0, 0))
def test_sync_get_metadata_neighboring(self):
"""
Even if a neighboring chunk is loaded, the target chunk could still be
unloaded.
Test with sync_get_metadata() to increase test coverage.
"""
d = self.w.request_chunk(0, 0)
@d.addCallback
def cb(chunk):
self.assertRaises(ChunkNotLoaded,
self.w.sync_get_metadata, (16, 0, 0))
return d
class TestWorld(unittest.TestCase):
def setUp(self):
self.name = "unittest"
self.bcp = BravoConfigParser()
self.bcp.add_section("world unittest")
self.bcp.set("world unittest", "url", "")
self.bcp.set("world unittest", "serializer", "memory")
self.w = World(self.bcp, self.name)
self.w.pipeline = []
self.w.start()
def tearDown(self):
self.w.stop()
def test_trivial(self):
pass
def test_load_player_initial(self):
"""
Calling load_player() on a player which has never been loaded should
not result in an exception. Instead, the player should be returned,
wrapped in a Deferred.
"""
# For bonus points, assert that the player's username is correct.
d = self.w.load_player("unittest")
@d.addCallback
def cb(player):
self.assertEqual(player.username, "unittest")
return d
class TestWorldConfig(unittest.TestCase):
def setUp(self):
self.name = "unittest"
self.bcp = BravoConfigParser()
self.bcp.add_section("world unittest")
self.bcp.set("world unittest", "url", "")
self.bcp.set("world unittest", "serializer", "memory")
self.w = World(self.bcp, self.name)
self.w.pipeline = []
def test_trivial(self):
pass
def test_world_configured_seed(self):
"""
Worlds can have their seed set via configuration.
"""
self.bcp.set("world unittest", "seed", "42")
self.w.start()
self.assertEqual(self.w.level.seed, 42)
self.w.stop()
diff --git a/bravo/world.py b/bravo/world.py
index 3673fac..d003b77 100644
--- a/bravo/world.py
+++ b/bravo/world.py
@@ -1,703 +1,714 @@
from array import array
from functools import wraps
from itertools import product
import random
import sys
import weakref
from twisted.internet import reactor
from twisted.internet.defer import (inlineCallbacks, maybeDeferred,
returnValue, succeed)
-from twisted.internet.task import LoopingCall
+from twisted.internet.task import LoopingCall, coiterate
from twisted.python import log
from bravo.beta.structures import Level
from bravo.chunk import Chunk, CHUNK_HEIGHT
from bravo.entity import Player, Furnace
from bravo.errors import (ChunkNotLoaded, SerializerReadException,
SerializerWriteException)
from bravo.ibravo import ISerializer
from bravo.plugin import retrieve_named_plugins
from bravo.utilities.coords import split_coords
from bravo.utilities.temporal import PendingEvent
from bravo.mobmanager import MobManager
class ChunkCache(object):
"""
A cache which holds references to all chunks which should be held in
memory.
This cache remembers chunks that were recently used, that are in permanent
residency, and so forth. Its exact caching algorithm is currently null.
When chunks dirty themselves, they are expected to notify the cache, which
will then schedule an eviction for the chunk.
"""
_perm = None
"""
A permanent cache of chunks which are never evicted from memory.
This cache is used to speed up logins near the spawn point.
"""
def __init__(self):
self._perm = {}
def pin(self, chunk):
self._perm[chunk.x, chunk.z] = chunk
def unpin(self, chunk):
del self._perm[chunk.x, chunk.z]
def get(self, coords):
if coords in self._perm:
return self._perm[coords]
return None
class ImpossibleCoordinates(Exception):
"""
A coordinate could not ever be valid.
"""
def coords_to_chunk(f):
"""
Automatically look up the chunk for the coordinates, and convert world
coordinates to chunk coordinates.
"""
@wraps(f)
def decorated(self, coords, *args, **kwargs):
x, y, z = coords
# Fail early if Y is OOB.
if not 0 <= y < CHUNK_HEIGHT:
raise ImpossibleCoordinates("Y value %d is impossible" % y)
bigx, smallx, bigz, smallz = split_coords(x, z)
d = self.request_chunk(bigx, bigz)
@d.addCallback
def cb(chunk):
return f(self, chunk, (smallx, y, smallz), *args, **kwargs)
return d
return decorated
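The decorator leans on ``split_coords`` to floor-divide world coordinates into a chunk coordinate plus a 0-15 chunk-local offset. A sketch of the assumed behaviour (the real helper lives in ``bravo.utilities.coords``):

```python
def split_coords(x, z):
    # Assumed behaviour: floor division, so negative world coordinates
    # map into the negative chunk with a non-negative local offset.
    bigx, smallx = divmod(int(x), 16)
    bigz, smallz = divmod(int(z), 16)
    return bigx, smallx, bigz, smallz

assert split_coords(33, 5) == (2, 1, 0, 5)
assert split_coords(-1, -16) == (-1, 15, -1, 0)
```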
def sync_coords_to_chunk(f):
"""
Either get a chunk for the coordinates, or raise an exception.
"""
@wraps(f)
def decorated(self, coords, *args, **kwargs):
x, y, z = coords
# Fail early if Y is OOB.
if not 0 <= y < CHUNK_HEIGHT:
raise ImpossibleCoordinates("Y value %d is impossible" % y)
bigx, smallx, bigz, smallz = split_coords(x, z)
bigcoords = bigx, bigz
if bigcoords in self.chunk_cache:
chunk = self.chunk_cache[bigcoords]
elif bigcoords in self.dirty_chunk_cache:
chunk = self.dirty_chunk_cache[bigcoords]
else:
raise ChunkNotLoaded("Chunk (%d, %d) isn't loaded" % bigcoords)
return f(self, chunk, (smallx, y, smallz), *args, **kwargs)
return decorated
class World(object):
"""
Object representing a world on disk.
Worlds are composed of levels and chunks, each of which corresponds to
exactly one file on disk. Worlds also contain saved player data.
"""
factory = None
"""
The factory managing this world.
Worlds do not need to be owned by a factory, but will not callback to
surrounding objects without an owner.
"""
season = None
"""
The current `ISeason`.
"""
saving = True
"""
Whether objects belonging to this world may be written out to disk.
"""
async = False
"""
Whether this world is using multiprocessing methods to generate geometry.
"""
dimension = "earth"
"""
The world dimension. Valid values are earth, sky, and nether.
"""
level = Level(seed=0, spawn=(0, 0, 0), time=0)
"""
The initial level data.
"""
_cache = None
"""
The chunk cache.
"""
def __init__(self, config, name):
"""
:Parameters:
name : str
The configuration key to use to look up configuration data.
"""
self.config = config
self.config_name = "world %s" % name
self.chunk_cache = weakref.WeakValueDictionary()
self.dirty_chunk_cache = dict()
self._pending_chunks = dict()
def connect(self):
"""
Connect to the world.
"""
world_url = self.config.get(self.config_name, "url")
world_sf_name = self.config.get(self.config_name, "serializer")
# Get the current serializer list, and attempt to connect our
# serializer of choice to our resource.
# This could fail. Each of these lines (well, only the first and
# third) could raise a variety of exceptions. They should *all* be
# fatal.
serializers = retrieve_named_plugins(ISerializer, [world_sf_name])
self.serializer = serializers[0]
self.serializer.connect(world_url)
log.msg("World connected on %s, using serializer %s" %
(world_url, self.serializer.name))
def start(self):
"""
Start managing a world.
Connect to the world and turn on all of the timed actions which
continuously manage the world.
"""
self.connect()
# Create our cache.
self._cache = ChunkCache()
# Pick a random number for the seed. Use the configured value if one
# is present.
seed = random.randint(0, sys.maxint)
seed = self.config.getintdefault(self.config_name, "seed", seed)
self.level = self.level._replace(seed=seed)
# Check if we should offload chunk requests to ampoule.
if self.config.getbooleandefault("bravo", "ampoule", False):
try:
import ampoule
if ampoule:
self.async = True
except ImportError:
pass
log.msg("World is %s" %
("read-write" if self.saving else "read-only"))
log.msg("Using Ampoule: %s" % self.async)
# First, try loading the level, to see if there's any data out there
# which we can use. If not, don't worry about it.
d = maybeDeferred(self.serializer.load_level)
@d.addCallback
def cb(level):
self.level = level
log.msg("Loaded level data!")
@d.addErrback
def sre(failure):
failure.trap(SerializerReadException)
log.msg("Had issues loading level data, continuing anyway...")
# And now save our level.
if self.saving:
self.serializer.save_level(self.level)
# Start up the permanent cache.
# has_option() is not exactly desirable, but it's appropriate here
# because we don't want to take any action if the key is unset.
if self.config.has_option(self.config_name, "perm_cache"):
cache_level = self.config.getint(self.config_name, "perm_cache")
self.enable_cache(cache_level)
self.chunk_management_loop = LoopingCall(self.sort_chunks)
self.chunk_management_loop.start(1)
# XXX Put this in init or here?
self.mob_manager = MobManager()
# XXX Put this in the manager's constructor?
self.mob_manager.world = self
def stop(self):
"""
Stop managing the world.
This can be a time-consuming, blocking operation, while the world's
data is serialized.
Note to callers: If you want the world time to be accurate, don't
forget to write it back before calling this method!
"""
self.chunk_management_loop.stop()
# Flush all dirty chunks to disk.
for chunk in self.dirty_chunk_cache.itervalues():
self.save_chunk(chunk)
# Evict all chunks.
self.chunk_cache.clear()
self.dirty_chunk_cache.clear()
# Destroy the cache.
self._cache = None
# Save the level data.
self.serializer.save_level(self.level)
def enable_cache(self, size):
"""
Set the permanent cache size.
Changing the size of the cache sets off a series of events which will
empty or fill the cache to make it the proper size.
For reference, 3 is a large-enough size to completely satisfy the
Notchian client's login demands. 10 is enough to completely fill the
Notchian client's chunk buffer.
:param int size: The taxicab radius of the cache, in chunks
+
+ :returns: A ``Deferred`` which will fire when the cache has been
+ adjusted.
"""
log.msg("Setting cache size to %d, please hold..." % size)
assign = self._cache.pin
+ def worker(x, z):
+ log.msg("Adding %d, %d to cache..." % (x, z))
+ return self.request_chunk(x, z).addCallback(assign)
+
x = self.level.spawn[0] // 16
z = self.level.spawn[2] // 16
rx = xrange(x - size, x + size)
rz = xrange(z - size, z + size)
- for x, z in product(rx, rz):
- log.msg("Adding %d, %d to cache..." % (x, z))
- self.request_chunk(x, z).addCallback(assign)
+ work = (worker(x, z) for x, z in product(rx, rz))
- log.msg("Cache size is now %d!" % size)
+ d = coiterate(work)
+
+ @d.addCallback
+ def notify(none):
+ log.msg("Cache size is now %d!" % size)
+
+ return d
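Why coiterate: building the generator does no work by itself, and each step yields one chunk-loading Deferred, so Twisted's cooperative scheduler can interleave other reactor events instead of blocking on the whole ``product(rx, rz)`` loop. A toy driver (plain Python, not Twisted) shows the shape:

```python
loaded = []

def worker(x, z):
    # Stand-in for request_chunk(x, z).addCallback(assign).
    loaded.append((x, z))

# No work happens here; each next() performs exactly one unit.
work = (worker(x, z) for x in range(2) for z in range(2))
next(work)
assert loaded == [(0, 0)]      # only one chunk loaded so far
for _ in work:                 # draining it finishes the cache fill
    pass
assert loaded == [(0, 0), (0, 1), (1, 0), (1, 1)]
```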
def sort_chunks(self):
"""
Sort out the internal caches.
This method will always block when there are dirty chunks.
"""
first = True
all_chunks = dict(self.dirty_chunk_cache)
all_chunks.update(self.chunk_cache)
self.chunk_cache.clear()
self.dirty_chunk_cache.clear()
for coords, chunk in all_chunks.iteritems():
if chunk.dirty:
if first:
first = False
self.save_chunk(chunk)
self.chunk_cache[coords] = chunk
else:
self.dirty_chunk_cache[coords] = chunk
else:
self.chunk_cache[coords] = chunk
def save_off(self):
"""
Disable saving to disk.
This is useful for accessing the world on disk without Bravo
interfering, for backing up the world.
"""
if not self.saving:
return
d = dict(self.chunk_cache)
self.chunk_cache = d
self.saving = False
def save_on(self):
"""
Enable saving to disk.
"""
if self.saving:
return
d = weakref.WeakValueDictionary(self.chunk_cache)
self.chunk_cache = d
self.saving = True
def postprocess_chunk(self, chunk):
"""
Do a series of final steps to bring a chunk into the world.
This method might be called multiple times on a chunk, but it should
not be harmful to do so.
"""
# Apply the current season to the chunk.
if self.season:
self.season.transform(chunk)
# Since this chunk hasn't been given to any player yet, there's no
# conceivable way that any meaningful damage has been accumulated;
# anybody loading any part of this chunk will want the entire thing.
# Thus, it should start out undamaged.
chunk.clear_damage()
# Skip some of the spendier scans if we have no factory; for example,
# if we are generating chunks offline.
if not self.factory:
return chunk
# XXX slightly icky, print statements are bad
# Register the chunk's entities with our parent factory.
for entity in chunk.entities:
if hasattr(entity, 'loop'):
print "Started mob!"
self.mob_manager.start_mob(entity)
else:
print "I have no loop"
self.factory.register_entity(entity)
# XXX why is this for furnaces only? :T
# Scan the chunk for burning furnaces
for coords, tile in chunk.tiles.iteritems():
# If the furnace was saved while burning ...
if type(tile) == Furnace and tile.burntime != 0:
x, y, z = coords
coords = chunk.x, x, chunk.z, z, y
# ... start its burning loop
reactor.callLater(2, tile.changed, self.factory, coords)
# Return the chunk, in case we are in a Deferred chain.
return chunk
@inlineCallbacks
def request_chunk(self, x, z):
"""
Request a ``Chunk`` to be delivered later.
:returns: ``Deferred`` that will be called with the ``Chunk``
"""
# First, try the cache.
cached = self._cache.get((x, z))
if cached is not None:
returnValue(cached)
# Then, legacy caches.
if (x, z) in self.chunk_cache:
returnValue(self.chunk_cache[x, z])
elif (x, z) in self.dirty_chunk_cache:
returnValue(self.dirty_chunk_cache[x, z])
elif (x, z) in self._pending_chunks:
# Rig up another Deferred and wrap it up in a to-go box.
retval = yield self._pending_chunks[x, z].deferred()
returnValue(retval)
try:
chunk = yield maybeDeferred(self.serializer.load_chunk, x, z)
except SerializerReadException:
# Looks like the chunk wasn't already on disk. Guess we're gonna
# need to keep going.
chunk = Chunk(x, z)
if chunk.populated:
self.chunk_cache[x, z] = chunk
self.postprocess_chunk(chunk)
if self.factory:
self.factory.scan_chunk(chunk)
returnValue(chunk)
if self.async:
from ampoule import deferToAMPProcess
from bravo.remote import MakeChunk
generators = [plugin.name for plugin in self.pipeline]
d = deferToAMPProcess(MakeChunk, x=x, z=z, seed=self.level.seed,
generators=generators)
# Get chunk data into our chunk object.
def fill_chunk(kwargs):
chunk.blocks = array("B")
chunk.blocks.fromstring(kwargs["blocks"])
chunk.heightmap = array("B")
chunk.heightmap.fromstring(kwargs["heightmap"])
chunk.metadata = array("B")
chunk.metadata.fromstring(kwargs["metadata"])
chunk.skylight = array("B")
chunk.skylight.fromstring(kwargs["skylight"])
chunk.blocklight = array("B")
chunk.blocklight.fromstring(kwargs["blocklight"])
return chunk
d.addCallback(fill_chunk)
else:
# Populate the chunk the slow way. :c
for stage in self.pipeline:
stage.populate(chunk, self.level.seed)
chunk.regenerate()
d = succeed(chunk)
# Set up our event and generate our return-value Deferred. It has to
# be done early because PendingEvents only fire exactly once and it
# might fire immediately in certain cases.
pe = PendingEvent()
# This one is for our return value.
retval = pe.deferred()
# This one is for scanning the chunk for automatons.
if self.factory:
pe.deferred().addCallback(self.factory.scan_chunk)
self._pending_chunks[x, z] = pe
def pp(chunk):
chunk.populated = True
chunk.dirty = True
self.postprocess_chunk(chunk)
self.dirty_chunk_cache[x, z] = chunk
del self._pending_chunks[x, z]
return chunk
# Set up callbacks.
d.addCallback(pp)
d.chainDeferred(pe)
# Because multiple people might be attached to this callback, we're
# going to do something magical here. We will yield a forked version
# of our Deferred. This means that we will wait right here, for a
# long, long time, before actually returning with the chunk, *but*,
# when we actually finish, we'll be ready to return the chunk
# immediately. Our caller cannot possibly care because they only see a
# Deferred either way.
retval = yield retval
returnValue(retval)
def save_chunk(self, chunk):
if not chunk.dirty or not self.saving:
return
d = maybeDeferred(self.serializer.save_chunk, chunk)
@d.addCallback
def cb(none):
chunk.dirty = False
@d.addErrback
def eb(failure):
failure.trap(SerializerWriteException)
log.msg("Couldn't write %r" % chunk)
def load_player(self, username):
"""
Retrieve player data.
:returns: a ``Deferred`` that will be fired with a ``Player``
"""
# Get the player, possibly.
d = maybeDeferred(self.serializer.load_player, username)
@d.addErrback
def eb(failure):
failure.trap(SerializerReadException)
log.msg("Couldn't load player %r" % username)
# Make a player.
player = Player(username=username)
player.location.x = self.level.spawn[0]
player.location.y = self.level.spawn[1]
player.location.stance = self.level.spawn[1]
player.location.z = self.level.spawn[2]
return player
# This Deferred's good to go as-is.
return d
def save_player(self, username, player):
if self.saving:
self.serializer.save_player(player)
# World-level geometry access.
# These methods let external API users refrain from going through the
# standard motions of looking up and loading chunk information.
@coords_to_chunk
def get_block(self, chunk, coords):
"""
Get a block from an unknown chunk.
:returns: a ``Deferred`` with the requested value
"""
return chunk.get_block(coords)
@coords_to_chunk
def set_block(self, chunk, coords, value):
"""
Set a block in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.set_block(coords, value)
@coords_to_chunk
def get_metadata(self, chunk, coords):
"""
Get a block's metadata from an unknown chunk.
:returns: a ``Deferred`` with the requested value
"""
return chunk.get_metadata(coords)
@coords_to_chunk
def set_metadata(self, chunk, coords, value):
"""
Set a block's metadata in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.set_metadata(coords, value)
@coords_to_chunk
def destroy(self, chunk, coords):
"""
Destroy a block in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.destroy(coords)
@coords_to_chunk
def mark_dirty(self, chunk, coords):
"""
Mark an unknown chunk dirty.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.dirty = True
@sync_coords_to_chunk
def sync_get_block(self, chunk, coords):
"""
Get a block from an unknown chunk.
:returns: the requested block
"""
return chunk.get_block(coords)
@sync_coords_to_chunk
def sync_set_block(self, chunk, coords, value):
"""
Set a block in an unknown chunk.
:returns: None
"""
chunk.set_block(coords, value)
@sync_coords_to_chunk
def sync_get_metadata(self, chunk, coords):
"""
Get a block's metadata from an unknown chunk.
:returns: the requested metadata
"""
return chunk.get_metadata(coords)
@sync_coords_to_chunk
def sync_set_metadata(self, chunk, coords, value):
"""
Set a block's metadata in an unknown chunk.
:returns: None
"""
chunk.set_metadata(coords, value)
@sync_coords_to_chunk
def sync_destroy(self, chunk, coords):
"""
Destroy a block in an unknown chunk.
:returns: None
"""
chunk.destroy(coords)
@sync_coords_to_chunk
def sync_mark_dirty(self, chunk, coords):
"""
Mark an unknown chunk dirty.
:returns: None
"""
chunk.dirty = True
@sync_coords_to_chunk
def sync_request_chunk(self, chunk, coords):
"""
Get an unknown chunk.
:returns: the requested ``Chunk``
"""
return chunk
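The `coords_to_chunk`/`sync_coords_to_chunk` decorators used by the accessors above rely on `split_coords()` to turn absolute block coordinates into a (chunk, offset) pair. A rough sketch of that split, assuming 16-block chunks and Python's floor division; the exact `bravo.utilities.coords` implementation may differ:

```python
def split_coords(x, z, size=16):
    # Floor division keeps negative coordinates in the correct chunk:
    # block x = -1 lives in chunk -1 at offset 15, not in chunk 0.
    bigx, smallx = divmod(x, size)
    bigz, smallz = divmod(z, size)
    return bigx, smallx, bigz, smallz


assert split_coords(33, -1) == (2, 1, -1, 15)
```

This is why `test_get_block_readback_negative` in the diff below can read back blocks at `(x - 16, y, z - 16)` from chunk `(-1, -1)`: floor division maps those coordinates into that chunk cleanly.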
|
bravoserver/bravo
|
9d3ab1c03831dec9e046a42f584b09328d91cec6
|
world: Start factoring out caching behavior.
|
diff --git a/bravo/beta/factory.py b/bravo/beta/factory.py
index b5badc6..2dfaa58 100644
--- a/bravo/beta/factory.py
+++ b/bravo/beta/factory.py
@@ -1,573 +1,566 @@
from collections import defaultdict
from itertools import chain, product
from twisted.internet import reactor
from twisted.internet.interfaces import IPushProducer
from twisted.internet.protocol import Factory
from twisted.internet.task import LoopingCall
from twisted.python import log
from zope.interface import implements
from bravo.beta.packets import make_packet
from bravo.beta.protocol import BravoProtocol, KickedProtocol
from bravo.entity import entities
from bravo.ibravo import (ISortedPlugin, IAutomaton, ITerrainGenerator,
IUseHook, ISignHook, IPreDigHook, IDigHook,
IPreBuildHook, IPostBuildHook, IWindowOpenHook,
IWindowClickHook, IWindowCloseHook)
from bravo.location import Location
from bravo.plugin import retrieve_named_plugins, retrieve_sorted_plugins
from bravo.policy.packs import packs as available_packs
from bravo.policy.seasons import Spring, Winter
from bravo.utilities.chat import chat_name, sanitize_chat
from bravo.weather import WeatherVane
from bravo.world import World
(STATE_UNAUTHENTICATED, STATE_CHALLENGED, STATE_AUTHENTICATED,
STATE_LOCATED) = range(4)
circle = [(i, j)
for i, j in product(xrange(-5, 5), xrange(-5, 5))
if i**2 + j**2 <= 25
]
class BravoFactory(Factory):
"""
A ``Factory`` that creates ``BravoProtocol`` objects when connected to.
"""
implements(IPushProducer)
protocol = BravoProtocol
timestamp = None
time = 0
day = 0
eid = 1
interfaces = []
def __init__(self, config, name):
"""
Create a factory and world.
``name`` is the string used to look up factory-specific settings from
the configuration.
:param str name: internal name of this factory
"""
self.name = name
self.config = config
self.config_name = "world %s" % name
self.world = World(self.config, self.name)
self.world.factory = self
self.protocols = dict()
self.connectedIPs = defaultdict(int)
self.mode = self.config.get(self.config_name, "mode")
if self.mode not in ("creative", "survival"):
raise Exception("Unsupported mode %s" % self.mode)
self.limitConnections = self.config.getintdefault(self.config_name,
"limitConnections",
0)
self.limitPerIP = self.config.getintdefault(self.config_name,
"limitPerIP", 0)
self.vane = WeatherVane(self)
def startFactory(self):
log.msg("Initializing factory for world '%s'..." % self.name)
# Get our plugins set up.
self.register_plugins()
log.msg("Starting world...")
self.world.start()
- # Start up the permanent cache.
- # has_option() is not exactly desirable, but it's appropriate here
- # because we don't want to take any action if the key is unset.
- if self.config.has_option(self.config_name, "perm_cache"):
- cache_level = self.config.getint(self.config_name, "perm_cache")
- self.world.enable_cache(cache_level)
-
log.msg("Starting timekeeping...")
self.timestamp = reactor.seconds()
self.time = self.world.level.time
self.update_season()
self.time_loop = LoopingCall(self.update_time)
self.time_loop.start(2)
log.msg("Starting entity updates...")
# Start automatons.
for automaton in self.automatons:
automaton.start()
self.chat_consumers = set()
log.msg("Factory successfully initialized for world '%s'!" % self.name)
def stopFactory(self):
"""
Called before factory stops listening on ports. Used to perform
shutdown tasks.
"""
log.msg("Shutting down world...")
# Stop automatons. Technically, they may not actually halt until their
# next iteration, but that is close enough for us, probably.
# Automatons are contracted to not access the world after stop() is
# called.
for automaton in self.automatons:
automaton.stop()
# Evict plugins as soon as possible. Can't be done before stopping
# automatons.
self.unregister_plugins()
self.time_loop.stop()
# Write back current world time. This must be done before stopping the
# world.
self.world.time = self.time
# And now stop the world.
self.world.stop()
log.msg("World data saved!")
def buildProtocol(self, addr):
"""
Create a protocol.
This overridden method provides early player entity registration, as a
solution to the username/entity race that occurs on login.
"""
banned = self.world.serializer.load_plugin_data("banned_ips")
# Do IP bans first.
for ip in banned.split():
if addr.host == ip:
# Use KickedProtocol with extreme prejudice.
log.msg("Kicking banned IP %s" % addr.host)
p = KickedProtocol("Sorry, but your IP address is banned.")
p.factory = self
return p
# We are ignoring values less than 1, but making sure not to go over
# the connection limit.
if (self.limitConnections
and len(self.protocols) >= self.limitConnections):
log.msg("Reached maximum players, turning %s away." % addr.host)
p = KickedProtocol("The player limit has already been reached."
" Please try again later.")
p.factory = self
return p
# Do our connection-per-IP check.
if (self.limitPerIP and
self.connectedIPs[addr.host] >= self.limitPerIP):
log.msg("At maximum connections for %s already, dropping." % addr.host)
p = KickedProtocol("There are too many players connected from this IP.")
p.factory = self
return p
else:
self.connectedIPs[addr.host] += 1
# If the player wasn't kicked, let's continue!
log.msg("Starting connection for %s" % addr)
p = self.protocol(self.config, self.name)
p.host = addr.host
p.factory = self
self.register_entity(p)
# Copy our hooks to the protocol.
p.register_hooks()
return p
def teardown_protocol(self, protocol):
"""
Do internal bookkeeping on behalf of a protocol which has been
disconnected.
Did you know that "bookkeeping" is one of the few words in English
which has three pairs of double letters in a row?
"""
username = protocol.username
host = protocol.host
if username in self.protocols:
del self.protocols[username]
self.connectedIPs[host] -= 1
def set_username(self, protocol, username):
"""
Attempt to set a new username for a protocol.
:returns: whether the username was changed
"""
# If the username's already taken, refuse it.
if username in self.protocols:
return False
if protocol.username in self.protocols:
# This protocol's known under another name, so remove it.
del self.protocols[protocol.username]
# Set the username.
self.protocols[username] = protocol
protocol.username = username
return True
def register_plugins(self):
"""
Setup plugin hooks.
"""
log.msg("Registering client plugin hooks...")
plugin_types = {
"automatons": IAutomaton,
"generators": ITerrainGenerator,
"open_hooks": IWindowOpenHook,
"click_hooks": IWindowClickHook,
"close_hooks": IWindowCloseHook,
"pre_build_hooks": IPreBuildHook,
"post_build_hooks": IPostBuildHook,
"pre_dig_hooks": IPreDigHook,
"dig_hooks": IDigHook,
"sign_hooks": ISignHook,
"use_hooks": IUseHook,
}
packs = self.config.getlistdefault(self.config_name, "packs", [])
try:
packs = [available_packs[pack] for pack in packs]
except KeyError, e:
raise Exception("Couldn't find plugin pack %s" % e.args)
for t, interface in plugin_types.iteritems():
l = self.config.getlistdefault(self.config_name, t, [])
# Grab extra plugins from the pack. Order doesn't really matter
# since the plugin loader sorts things anyway.
for pack in packs:
if t in pack:
l += pack[t]
# Hax. :T
if t == "generators":
plugins = retrieve_sorted_plugins(interface, l)
elif issubclass(interface, ISortedPlugin):
plugins = retrieve_sorted_plugins(interface, l, factory=self)
else:
plugins = retrieve_named_plugins(interface, l, factory=self)
log.msg("Using %s: %s" % (t.replace("_", " "),
", ".join(plugin.name for plugin in plugins)))
setattr(self, t, plugins)
# Deal with seasons.
seasons = self.config.getlistdefault(self.config_name, "seasons", [])
for pack in packs:
if "seasons" in pack:
seasons += pack["seasons"]
self.seasons = []
if "spring" in seasons:
self.seasons.append(Spring())
if "winter" in seasons:
self.seasons.append(Winter())
# Assign generators to the world pipeline.
self.world.pipeline = self.generators
# Use hooks have special funkiness.
uh = self.use_hooks
self.use_hooks = defaultdict(list)
for plugin in uh:
for target in plugin.targets:
self.use_hooks[target].append(plugin)
def unregister_plugins(self):
log.msg("Unregistering client plugin hooks...")
for name in [
"automatons",
"generators",
"open_hooks",
"click_hooks",
"close_hooks",
"pre_build_hooks",
"post_build_hooks",
"pre_dig_hooks",
"dig_hooks",
"sign_hooks",
"use_hooks",
]:
delattr(self, name)
def create_entity(self, x, y, z, name, **kwargs):
"""
Spawn an entirely new entity at the specified block coordinates.
Handles entity registration as well as instantiation.
"""
bigx = x // 16
bigz = z // 16
location = Location.at_block(x, y, z)
entity = entities[name](eid=0, location=location, **kwargs)
self.register_entity(entity)
d = self.world.request_chunk(bigx, bigz)
@d.addCallback
def cb(chunk):
chunk.entities.add(entity)
log.msg("Created entity %s" % entity)
# XXX Maybe just send the entity object to the manager instead of
# the following?
if hasattr(entity,'loop'):
self.world.mob_manager.start_mob(entity)
return entity
def register_entity(self, entity):
"""
Registers an entity with this factory.
Registration is perhaps too fancy of a name; this method merely makes
sure that the entity has a unique and usable entity ID. In particular,
this method does *not* make the entity attached to the world, or
advertise its existence.
"""
if not entity.eid:
self.eid += 1
entity.eid = self.eid
log.msg("Registered entity %s" % entity)
def destroy_entity(self, entity):
"""
Destroy an entity.
The factory doesn't have to know about entities, but it is a good
place to put this logic.
"""
bigx, bigz = entity.location.pos.to_chunk()
d = self.world.request_chunk(bigx, bigz)
@d.addCallback
def cb(chunk):
chunk.entities.discard(entity)
chunk.dirty = True
log.msg("Destroyed entity %s" % entity)
def update_time(self):
"""
Update the in-game timer.
The timer goes from 0 to 24000, both of which are high noon. The clock
increments by 20 every second. Days are 20 minutes long.
The day clock is incremented every in-game day, which is every 20
minutes. The day clock goes from 0 to 360, which works out to a reset
once every 5 days. This is a Babylonian in-game year.
"""
t = reactor.seconds()
self.time += 20 * (t - self.timestamp)
self.timestamp = t
days, self.time = divmod(self.time, 24000)
if days:
self.day += days
self.day %= 360
self.update_season()
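The arithmetic in `update_time()` can be isolated as a pure function: ticks accumulate at 20 per real second, roll over every 24000 into the day counter, and the day counter wraps at 360. A sketch with illustrative names, not bravo's API:

```python
def advance_time(time, day, elapsed_seconds):
    # 20 ticks per real second; 24000 ticks per in-game day.
    time += 20 * elapsed_seconds
    days, time = divmod(time, 24000)
    # 360 in-game days per in-game "year".
    day = (day + days) % 360
    return time, day


# One second before rollover on the last day of the year:
assert advance_time(23990, 359, 1) == (10, 0)
```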
def broadcast_time(self):
packet = make_packet("time", timestamp=int(self.time))
self.broadcast(packet)
def update_season(self):
"""
Update the world's season.
"""
all_seasons = sorted(self.seasons, key=lambda s: s.day)
# Get all the seasons whose start date has passed this year.
# We are looking for the season which is closest to our current day,
# without going over; I call this the Price-is-Right style of season
# handling. :3
past_seasons = [s for s in all_seasons if s.day <= self.day]
if past_seasons:
# The most recent one is the one we are in
self.world.season = past_seasons[-1]
elif all_seasons:
# We haven't passed any seasons yet this year, so grab the last one
# from 'last year'
self.world.season = all_seasons[-1]
else:
# No seasons enabled.
self.world.season = None
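The "Price-is-Right" selection above reduces to: take the latest season whose start day is not after today, falling back to the final season of the previous year. A standalone sketch using (name, start_day) pairs instead of bravo's season plugins:

```python
def pick_season(seasons, day):
    # seasons: iterable of (name, start_day) pairs.
    ordered = sorted(seasons, key=lambda s: s[1])
    started = [s for s in ordered if s[1] <= day]
    if started:
        return started[-1]  # most recent season that has already begun
    if ordered:
        return ordered[-1]  # wrap around to last year's final season
    return None             # no seasons enabled


seasons = [("spring", 10), ("winter", 200)]
assert pick_season(seasons, 50) == ("spring", 10)
assert pick_season(seasons, 5) == ("winter", 200)
assert pick_season([], 100) is None
```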
def chat(self, message):
"""
Relay chat messages.
Chat messages are sent to all connected clients, as well as to anybody
consuming this factory.
"""
for consumer in self.chat_consumers:
consumer.write((self, message))
# Prepare the message for chat packeting.
for user in self.protocols:
message = message.replace(user, chat_name(user))
message = sanitize_chat(message)
log.msg("Chat: %s" % message.encode("utf8"))
packet = make_packet("chat", message=message)
self.broadcast(packet)
def broadcast(self, packet):
"""
Broadcast a packet to all connected players.
"""
for player in self.protocols.itervalues():
player.transport.write(packet)
def broadcast_for_others(self, packet, protocol):
"""
Broadcast a packet to all players except the originating player.
Useful for certain packets like player entity spawns which should
never be reflexive.
"""
for player in self.protocols.itervalues():
if player is not protocol:
player.transport.write(packet)
def broadcast_for_chunk(self, packet, x, z):
"""
Broadcast a packet to all players that have a certain chunk loaded.
`x` and `z` are chunk coordinates, not block coordinates.
"""
for player in self.protocols.itervalues():
if (x, z) in player.chunks:
player.transport.write(packet)
def scan_chunk(self, chunk):
"""
Tell automatons about this chunk.
"""
# It's possible for there to be no automatons; this usually means that
# the factory is shutting down. We should be permissive and handle
# this case correctly.
if hasattr(self, "automatons"):
for automaton in self.automatons:
automaton.scan(chunk)
def flush_chunk(self, chunk):
"""
Flush a damaged chunk to all players that have it loaded.
"""
if chunk.is_damaged():
packet = chunk.get_damage_packet()
for player in self.protocols.itervalues():
if (chunk.x, chunk.z) in player.chunks:
player.transport.write(packet)
chunk.clear_damage()
def flush_all_chunks(self):
"""
Flush any damage anywhere in this world to all players.
This is a sledgehammer which should be used sparingly at best, and is
only well-suited to plugins which touch multiple chunks at once.
In other words, if I catch you using this in your plugin needlessly,
I'm gonna have a chat with you.
"""
for chunk in chain(self.world.chunk_cache.itervalues(),
self.world.dirty_chunk_cache.itervalues()):
self.flush_chunk(chunk)
def give(self, coords, block, quantity):
"""
Spawn a pickup at the specified coordinates.
The coordinates need to be in pixels, not blocks.
If the size of the stack is too big, multiple stacks will be dropped.
:param tuple coords: coordinates, in pixels
:param tuple block: key of block or item to drop
:param int quantity: number of blocks to drop in the stack
"""
x, y, z = coords
while quantity > 0:
entity = self.create_entity(x // 32, y // 32, z // 32, "Item",
item=block, quantity=min(quantity, 64))
packet = entity.save_to_packet()
packet += make_packet("create", eid=entity.eid)
self.broadcast(packet)
quantity -= 64
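`give()` drops items 64 at a time, so its quantity handling is equivalent to splitting an integer into maximal stacks. A small sketch of that split; the stack size of 64 matches the loop above:

```python
def split_stacks(quantity, stack_size=64):
    # Break a quantity into full stacks plus one remainder stack.
    stacks = []
    while quantity > 0:
        stacks.append(min(quantity, stack_size))
        quantity -= stack_size
    return stacks


assert split_stacks(130) == [64, 64, 2]
assert split_stacks(64) == [64]
assert split_stacks(0) == []
```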
def players_near(self, player, radius):
"""
Obtain other players within a radius of a given player.
Radius is measured in blocks.
"""
radius *= 32
for p in self.protocols.itervalues():
if p.player == player:
continue
distance = player.location.distance(p.location)
if distance <= radius:
yield p.player
def pauseProducing(self):
pass
def resumeProducing(self):
pass
def stopProducing(self):
pass
diff --git a/bravo/tests/test_world.py b/bravo/tests/test_world.py
index b004c84..a94a2dd 100644
--- a/bravo/tests/test_world.py
+++ b/bravo/tests/test_world.py
@@ -1,251 +1,276 @@
from twisted.trial import unittest
from twisted.internet.defer import inlineCallbacks
from array import array
from itertools import product
import os
from bravo.config import BravoConfigParser
from bravo.errors import ChunkNotLoaded
-from bravo.world import World
+from bravo.world import ChunkCache, World
+
+
+class MockChunk(object):
+
+ def __init__(self, x, z):
+ self.x = x
+ self.z = z
+
+
+class TestChunkCache(unittest.TestCase):
+
+ def test_pin_single(self):
+ cc = ChunkCache()
+ chunk = MockChunk(1, 2)
+ cc.pin(chunk)
+ self.assertIs(cc.get((1, 2)), chunk)
+
class TestWorldChunks(unittest.TestCase):
def setUp(self):
self.name = "unittest"
self.bcp = BravoConfigParser()
self.bcp.add_section("world unittest")
self.bcp.set("world unittest", "url", "")
self.bcp.set("world unittest", "serializer", "memory")
self.w = World(self.bcp, self.name)
self.w.pipeline = []
self.w.start()
def tearDown(self):
self.w.stop()
def test_trivial(self):
pass
+ @inlineCallbacks
+ def test_request_chunk_identity(self):
+ first = yield self.w.request_chunk(0, 0)
+ second = yield self.w.request_chunk(0, 0)
+ self.assertIs(first, second)
+
@inlineCallbacks
def test_get_block(self):
chunk = yield self.w.request_chunk(0, 0)
# Fill the chunk with random stuff.
chunk.blocks = array("B")
chunk.blocks.fromstring(os.urandom(32768))
for x, y, z in product(xrange(2), repeat=3):
# This works because the chunk is at (0, 0) so the coords don't
# need to be adjusted.
block = yield self.w.get_block((x, y, z))
self.assertEqual(block, chunk.get_block((x, y, z)))
@inlineCallbacks
def test_get_metadata(self):
chunk = yield self.w.request_chunk(0, 0)
# Fill the chunk with random stuff.
chunk.metadata = array("B")
chunk.metadata.fromstring(os.urandom(32768))
for x, y, z in product(xrange(2), repeat=3):
# This works because the chunk is at (0, 0) so the coords don't
# need to be adjusted.
metadata = yield self.w.get_metadata((x, y, z))
self.assertEqual(metadata, chunk.get_metadata((x, y, z)))
@inlineCallbacks
def test_get_block_readback(self):
chunk = yield self.w.request_chunk(0, 0)
# Fill the chunk with random stuff.
chunk.blocks = array("B")
chunk.blocks.fromstring(os.urandom(32768))
# Evict the chunk and grab it again.
self.w.save_chunk(chunk)
del chunk
self.w.chunk_cache.clear()
self.w.dirty_chunk_cache.clear()
chunk = yield self.w.request_chunk(0, 0)
for x, y, z in product(xrange(2), repeat=3):
# This works because the chunk is at (0, 0) so the coords don't
# need to be adjusted.
block = yield self.w.get_block((x, y, z))
self.assertEqual(block, chunk.get_block((x, y, z)))
@inlineCallbacks
def test_get_block_readback_negative(self):
chunk = yield self.w.request_chunk(-1, -1)
# Fill the chunk with random stuff.
chunk.blocks = array("B")
chunk.blocks.fromstring(os.urandom(32768))
# Evict the chunk and grab it again.
self.w.save_chunk(chunk)
del chunk
self.w.chunk_cache.clear()
self.w.dirty_chunk_cache.clear()
chunk = yield self.w.request_chunk(-1, -1)
for x, y, z in product(xrange(2), repeat=3):
block = yield self.w.get_block((x - 16, y, z - 16))
self.assertEqual(block, chunk.get_block((x, y, z)))
@inlineCallbacks
def test_get_metadata_readback(self):
chunk = yield self.w.request_chunk(0, 0)
# Fill the chunk with random stuff.
chunk.metadata = array("B")
chunk.metadata.fromstring(os.urandom(32768))
# Evict the chunk and grab it again.
self.w.save_chunk(chunk)
del chunk
self.w.chunk_cache.clear()
self.w.dirty_chunk_cache.clear()
chunk = yield self.w.request_chunk(0, 0)
for x, y, z in product(xrange(2), repeat=3):
# This works because the chunk is at (0, 0) so the coords don't
# need to be adjusted.
metadata = yield self.w.get_metadata((x, y, z))
self.assertEqual(metadata, chunk.get_metadata((x, y, z)))
@inlineCallbacks
def test_world_level_mark_chunk_dirty(self):
chunk = yield self.w.request_chunk(0, 0)
# Reload chunk.
self.w.save_chunk(chunk)
del chunk
self.w.chunk_cache.clear()
self.w.dirty_chunk_cache.clear()
chunk = yield self.w.request_chunk(0, 0)
self.assertFalse(chunk.dirty)
self.w.mark_dirty((12, 64, 4))
chunk = yield self.w.request_chunk(0, 0)
self.assertTrue(chunk.dirty)
test_world_level_mark_chunk_dirty.todo = "Needs work"
@inlineCallbacks
def test_world_level_mark_chunk_dirty_offset(self):
chunk = yield self.w.request_chunk(1, 2)
# Reload chunk.
self.w.save_chunk(chunk)
del chunk
self.w.chunk_cache.clear()
self.w.dirty_chunk_cache.clear()
chunk = yield self.w.request_chunk(1, 2)
self.assertFalse(chunk.dirty)
self.w.mark_dirty((29, 64, 43))
chunk = yield self.w.request_chunk(1, 2)
self.assertTrue(chunk.dirty)
test_world_level_mark_chunk_dirty_offset.todo = "Needs work"
@inlineCallbacks
def test_sync_get_block(self):
chunk = yield self.w.request_chunk(0, 0)
# Fill the chunk with random stuff.
chunk.blocks = array("B")
chunk.blocks.fromstring(os.urandom(32768))
for x, y, z in product(xrange(2), repeat=3):
# This works because the chunk is at (0, 0) so the coords don't
# need to be adjusted.
block = self.w.sync_get_block((x, y, z))
self.assertEqual(block, chunk.get_block((x, y, z)))
def test_sync_get_block_unloaded(self):
self.assertRaises(ChunkNotLoaded, self.w.sync_get_block, (0, 0, 0))
def test_sync_get_metadata_neighboring(self):
"""
Even if a neighboring chunk is loaded, the target chunk could still be
unloaded.
Test with sync_get_metadata() to increase test coverage.
"""
d = self.w.request_chunk(0, 0)
@d.addCallback
def cb(chunk):
self.assertRaises(ChunkNotLoaded,
self.w.sync_get_metadata, (16, 0, 0))
return d
+
class TestWorld(unittest.TestCase):
def setUp(self):
self.name = "unittest"
self.bcp = BravoConfigParser()
self.bcp.add_section("world unittest")
self.bcp.set("world unittest", "url", "")
self.bcp.set("world unittest", "serializer", "memory")
self.w = World(self.bcp, self.name)
self.w.pipeline = []
self.w.start()
def tearDown(self):
self.w.stop()
def test_trivial(self):
pass
def test_load_player_initial(self):
"""
Calling load_player() on a player which has never been loaded should
not result in an exception. Instead, the player should be returned,
wrapped in a Deferred.
"""
# For bonus points, assert that the player's username is correct.
d = self.w.load_player("unittest")
@d.addCallback
def cb(player):
self.assertEqual(player.username, "unittest")
return d
+
class TestWorldConfig(unittest.TestCase):
def setUp(self):
self.name = "unittest"
self.bcp = BravoConfigParser()
self.bcp.add_section("world unittest")
self.bcp.set("world unittest", "url", "")
self.bcp.set("world unittest", "serializer", "memory")
self.w = World(self.bcp, self.name)
self.w.pipeline = []
def test_trivial(self):
pass
def test_world_configured_seed(self):
"""
Worlds can have their seed set via configuration.
"""
self.bcp.set("world unittest", "seed", "42")
self.w.start()
self.assertEqual(self.w.level.seed, 42)
self.w.stop()
diff --git a/bravo/world.py b/bravo/world.py
index e415a9f..3673fac 100644
--- a/bravo/world.py
+++ b/bravo/world.py
@@ -1,653 +1,703 @@
from array import array
from functools import wraps
from itertools import product
import random
import sys
import weakref
from twisted.internet import reactor
from twisted.internet.defer import (inlineCallbacks, maybeDeferred,
returnValue, succeed)
from twisted.internet.task import LoopingCall
from twisted.python import log
from bravo.beta.structures import Level
from bravo.chunk import Chunk, CHUNK_HEIGHT
from bravo.entity import Player, Furnace
from bravo.errors import (ChunkNotLoaded, SerializerReadException,
SerializerWriteException)
from bravo.ibravo import ISerializer
from bravo.plugin import retrieve_named_plugins
from bravo.utilities.coords import split_coords
from bravo.utilities.temporal import PendingEvent
from bravo.mobmanager import MobManager
+class ChunkCache(object):
+ """
+ A cache which holds references to all chunks which should be held in
+ memory.
+
+ This cache remembers chunks that were recently used, that are in permanent
+ residency, and so forth. Its exact caching algorithm is currently null.
+
+ When chunks dirty themselves, they are expected to notify the cache, which
+ will then schedule an eviction for the chunk.
+ """
+
+ _perm = None
+ """
+ A permanent cache of chunks which are never evicted from memory.
+
+ This cache is used to speed up logins near the spawn point.
+ """
+
+ def __init__(self):
+ self._perm = {}
+
+ def pin(self, chunk):
+ self._perm[chunk.x, chunk.z] = chunk
+
+ def unpin(self, chunk):
+ del self._perm[chunk.x, chunk.z]
+
+ def get(self, coords):
+ if coords in self._perm:
+ return self._perm[coords]
+ return None
+
+
class ImpossibleCoordinates(Exception):
"""
A coordinate could not ever be valid.
"""
def coords_to_chunk(f):
"""
Automatically look up the chunk for the coordinates, and convert world
coordinates to chunk coordinates.
"""
@wraps(f)
def decorated(self, coords, *args, **kwargs):
x, y, z = coords
# Fail early if Y is OOB.
if not 0 <= y < CHUNK_HEIGHT:
raise ImpossibleCoordinates("Y value %d is impossible" % y)
bigx, smallx, bigz, smallz = split_coords(x, z)
d = self.request_chunk(bigx, bigz)
@d.addCallback
def cb(chunk):
return f(self, chunk, (smallx, y, smallz), *args, **kwargs)
return d
return decorated
def sync_coords_to_chunk(f):
"""
Either get a chunk for the coordinates, or raise an exception.
"""
@wraps(f)
def decorated(self, coords, *args, **kwargs):
x, y, z = coords
# Fail early if Y is OOB.
if not 0 <= y < CHUNK_HEIGHT:
raise ImpossibleCoordinates("Y value %d is impossible" % y)
bigx, smallx, bigz, smallz = split_coords(x, z)
bigcoords = bigx, bigz
if bigcoords in self.chunk_cache:
chunk = self.chunk_cache[bigcoords]
elif bigcoords in self.dirty_chunk_cache:
chunk = self.dirty_chunk_cache[bigcoords]
else:
raise ChunkNotLoaded("Chunk (%d, %d) isn't loaded" % bigcoords)
return f(self, chunk, (smallx, y, smallz), *args, **kwargs)
return decorated
class World(object):
"""
Object representing a world on disk.
Worlds are composed of levels and chunks, each of which corresponds to
exactly one file on disk. Worlds also contain saved player data.
"""
factory = None
"""
The factory managing this world.
Worlds do not need to be owned by a factory, but will not callback to
surrounding objects without an owner.
"""
season = None
"""
The current `ISeason`.
"""
saving = True
"""
Whether objects belonging to this world may be written out to disk.
"""
async = False
"""
Whether this world is using multiprocessing methods to generate geometry.
"""
dimension = "earth"
"""
The world dimension. Valid values are earth, sky, and nether.
"""
- permanent_cache = None
+ level = Level(seed=0, spawn=(0, 0, 0), time=0)
"""
- A permanent cache of chunks which are never evicted from memory.
-
- This cache is used to speed up logins near the spawn point.
+ The initial level data.
"""
- level = Level(seed=0, spawn=(0, 0, 0), time=0)
+ _cache = None
"""
- The initial level data.
+ The chunk cache.
"""
def __init__(self, config, name):
"""
:Parameters:
name : str
The configuration key to use to look up configuration data.
"""
self.config = config
self.config_name = "world %s" % name
self.chunk_cache = weakref.WeakValueDictionary()
self.dirty_chunk_cache = dict()
self._pending_chunks = dict()
def connect(self):
"""
Connect to the world.
"""
world_url = self.config.get(self.config_name, "url")
world_sf_name = self.config.get(self.config_name, "serializer")
# Get the current serializer list, and attempt to connect our
# serializer of choice to our resource.
# This could fail. Each of these lines (well, only the first and
# third) could raise a variety of exceptions. They should *all* be
# fatal.
serializers = retrieve_named_plugins(ISerializer, [world_sf_name])
self.serializer = serializers[0]
self.serializer.connect(world_url)
- log.msg("World started on %s, using serializer %s" %
+ log.msg("World connected on %s, using serializer %s" %
(world_url, self.serializer.name))
def start(self):
"""
Start managing a world.
Connect to the world and turn on all of the timed actions which
continuously manage the world.
"""
self.connect()
+ # Create our cache.
+ self._cache = ChunkCache()
+
# Pick a random number for the seed. Use the configured value if one
# is present.
seed = random.randint(0, sys.maxint)
seed = self.config.getintdefault(self.config_name, "seed", seed)
self.level = self.level._replace(seed=seed)
# Check if we should offload chunk requests to ampoule.
if self.config.getbooleandefault("bravo", "ampoule", False):
try:
import ampoule
if ampoule:
self.async = True
except ImportError:
pass
log.msg("World is %s" %
("read-write" if self.saving else "read-only"))
log.msg("Using Ampoule: %s" % self.async)
# First, try loading the level, to see if there's any data out there
# which we can use. If not, don't worry about it.
d = maybeDeferred(self.serializer.load_level)
@d.addCallback
def cb(level):
self.level = level
log.msg("Loaded level data!")
@d.addErrback
def sre(failure):
failure.trap(SerializerReadException)
log.msg("Had issues loading level data, continuing anyway...")
# And now save our level.
if self.saving:
self.serializer.save_level(self.level)
+ # Start up the permanent cache.
+ # has_option() is not exactly desirable, but it's appropriate here
+ # because we don't want to take any action if the key is unset.
+ if self.config.has_option(self.config_name, "perm_cache"):
+ cache_level = self.config.getint(self.config_name, "perm_cache")
+ self.enable_cache(cache_level)
+
self.chunk_management_loop = LoopingCall(self.sort_chunks)
self.chunk_management_loop.start(1)
# XXX Put this in init or here?
self.mob_manager = MobManager()
# XXX Put this in the managers constructor?
self.mob_manager.world = self
def stop(self):
"""
Stop managing the world.
This can be a time-consuming, blocking operation, while the world's
data is serialized.
Note to callers: If you want the world time to be accurate, don't
forget to write it back before calling this method!
"""
self.chunk_management_loop.stop()
# Flush all dirty chunks to disk.
for chunk in self.dirty_chunk_cache.itervalues():
self.save_chunk(chunk)
# Evict all chunks.
self.chunk_cache.clear()
self.dirty_chunk_cache.clear()
+ # Destroy the cache.
+ self._cache = None
+
# Save the level data.
self.serializer.save_level(self.level)
def enable_cache(self, size):
"""
Set the permanent cache size.
Changing the size of the cache sets off a series of events which will
empty or fill the cache to make it the proper size.
For reference, 3 is a large-enough size to completely satisfy the
Notchian client's login demands. 10 is enough to completely fill the
Notchian client's chunk buffer.
:param int size: The taxicab radius of the cache, in chunks
"""
log.msg("Setting cache size to %d, please hold..." % size)
- self.permanent_cache = set()
- assign = self.permanent_cache.add
+ assign = self._cache.pin
x = self.level.spawn[0] // 16
z = self.level.spawn[2] // 16
rx = xrange(x - size, x + size)
rz = xrange(z - size, z + size)
for x, z in product(rx, rz):
log.msg("Adding %d, %d to cache..." % (x, z))
self.request_chunk(x, z).addCallback(assign)
log.msg("Cache size is now %d!" % size)
def sort_chunks(self):
"""
Sort out the internal caches.
This method will always block when there are dirty chunks.
"""
first = True
all_chunks = dict(self.dirty_chunk_cache)
all_chunks.update(self.chunk_cache)
self.chunk_cache.clear()
self.dirty_chunk_cache.clear()
for coords, chunk in all_chunks.iteritems():
if chunk.dirty:
if first:
first = False
self.save_chunk(chunk)
self.chunk_cache[coords] = chunk
else:
self.dirty_chunk_cache[coords] = chunk
else:
self.chunk_cache[coords] = chunk
def save_off(self):
"""
Disable saving to disk.
This is useful for accessing the world on disk without Bravo
interfering, for backing up the world.
"""
if not self.saving:
return
d = dict(self.chunk_cache)
self.chunk_cache = d
self.saving = False
def save_on(self):
"""
Enable saving to disk.
"""
if self.saving:
return
d = weakref.WeakValueDictionary(self.chunk_cache)
self.chunk_cache = d
self.saving = True
def postprocess_chunk(self, chunk):
"""
Do a series of final steps to bring a chunk into the world.
This method might be called multiple times on a chunk, but it should
not be harmful to do so.
"""
# Apply the current season to the chunk.
if self.season:
self.season.transform(chunk)
# Since this chunk hasn't been given to any player yet, there's no
# conceivable way that any meaningful damage has been accumulated;
# anybody loading any part of this chunk will want the entire thing.
# Thus, it should start out undamaged.
chunk.clear_damage()
# Skip some of the spendier scans if we have no factory; for example,
# if we are generating chunks offline.
if not self.factory:
return chunk
# XXX slightly icky, print statements are bad
# Register the chunk's entities with our parent factory.
for entity in chunk.entities:
if hasattr(entity, 'loop'):
print "Started mob!"
self.mob_manager.start_mob(entity)
else:
print "I have no loop"
self.factory.register_entity(entity)
# XXX why is this for furnaces only? :T
# Scan the chunk for burning furnaces
for coords, tile in chunk.tiles.iteritems():
# If the furnace was saved while burning ...
if type(tile) == Furnace and tile.burntime != 0:
x, y, z = coords
coords = chunk.x, x, chunk.z, z, y
# ... start its burning loop
reactor.callLater(2, tile.changed, self.factory, coords)
# Return the chunk, in case we are in a Deferred chain.
return chunk
@inlineCallbacks
def request_chunk(self, x, z):
"""
Request a ``Chunk`` to be delivered later.
:returns: ``Deferred`` that will be called with the ``Chunk``
"""
+ # First, try the cache.
+ cached = self._cache.get((x, z))
+ if cached is not None:
+ returnValue(cached)
+
+ # Then, legacy caches.
if (x, z) in self.chunk_cache:
returnValue(self.chunk_cache[x, z])
elif (x, z) in self.dirty_chunk_cache:
returnValue(self.dirty_chunk_cache[x, z])
elif (x, z) in self._pending_chunks:
# Rig up another Deferred and wrap it up in a to-go box.
retval = yield self._pending_chunks[x, z].deferred()
returnValue(retval)
try:
chunk = yield maybeDeferred(self.serializer.load_chunk, x, z)
except SerializerReadException:
# Looks like the chunk wasn't already on disk. Guess we're gonna
# need to keep going.
chunk = Chunk(x, z)
if chunk.populated:
self.chunk_cache[x, z] = chunk
self.postprocess_chunk(chunk)
if self.factory:
self.factory.scan_chunk(chunk)
returnValue(chunk)
if self.async:
from ampoule import deferToAMPProcess
from bravo.remote import MakeChunk
generators = [plugin.name for plugin in self.pipeline]
d = deferToAMPProcess(MakeChunk, x=x, z=z, seed=self.level.seed,
generators=generators)
# Get chunk data into our chunk object.
def fill_chunk(kwargs):
chunk.blocks = array("B")
chunk.blocks.fromstring(kwargs["blocks"])
chunk.heightmap = array("B")
chunk.heightmap.fromstring(kwargs["heightmap"])
chunk.metadata = array("B")
chunk.metadata.fromstring(kwargs["metadata"])
chunk.skylight = array("B")
chunk.skylight.fromstring(kwargs["skylight"])
chunk.blocklight = array("B")
chunk.blocklight.fromstring(kwargs["blocklight"])
return chunk
d.addCallback(fill_chunk)
else:
# Populate the chunk the slow way. :c
for stage in self.pipeline:
stage.populate(chunk, self.level.seed)
chunk.regenerate()
d = succeed(chunk)
# Set up our event and generate our return-value Deferred. It has to
# be done early because PendingEvents only fire exactly once and it
# might fire immediately in certain cases.
pe = PendingEvent()
# This one is for our return value.
retval = pe.deferred()
# This one is for scanning the chunk for automatons.
if self.factory:
pe.deferred().addCallback(self.factory.scan_chunk)
self._pending_chunks[x, z] = pe
def pp(chunk):
chunk.populated = True
chunk.dirty = True
self.postprocess_chunk(chunk)
self.dirty_chunk_cache[x, z] = chunk
del self._pending_chunks[x, z]
return chunk
# Set up callbacks.
d.addCallback(pp)
d.chainDeferred(pe)
# Because multiple people might be attached to this callback, we're
# going to do something magical here. We will yield a forked version
# of our Deferred. This means that we will wait right here, for a
# long, long time, before actually returning with the chunk, *but*,
# when we actually finish, we'll be ready to return the chunk
# immediately. Our caller cannot possibly care because they only see a
# Deferred either way.
retval = yield retval
returnValue(retval)
def save_chunk(self, chunk):
if not chunk.dirty or not self.saving:
return
d = maybeDeferred(self.serializer.save_chunk, chunk)
@d.addCallback
def cb(none):
chunk.dirty = False
@d.addErrback
def eb(failure):
failure.trap(SerializerWriteException)
log.msg("Couldn't write %r" % chunk)
def load_player(self, username):
"""
Retrieve player data.
:returns: a ``Deferred`` that will be fired with a ``Player``
"""
# Get the player, possibly.
d = maybeDeferred(self.serializer.load_player, username)
@d.addErrback
def eb(failure):
failure.trap(SerializerReadException)
log.msg("Couldn't load player %r" % username)
# Make a player.
player = Player(username=username)
player.location.x = self.level.spawn[0]
player.location.y = self.level.spawn[1]
player.location.stance = self.level.spawn[1]
player.location.z = self.level.spawn[2]
return player
# This Deferred's good to go as-is.
return d
def save_player(self, username, player):
if self.saving:
self.serializer.save_player(player)
# World-level geometry access.
# These methods let external API users refrain from going through the
# standard motions of looking up and loading chunk information.
@coords_to_chunk
def get_block(self, chunk, coords):
"""
Get a block from an unknown chunk.
:returns: a ``Deferred`` with the requested value
"""
return chunk.get_block(coords)
@coords_to_chunk
def set_block(self, chunk, coords, value):
"""
Set a block in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.set_block(coords, value)
@coords_to_chunk
def get_metadata(self, chunk, coords):
"""
Get a block's metadata from an unknown chunk.
:returns: a ``Deferred`` with the requested value
"""
return chunk.get_metadata(coords)
@coords_to_chunk
def set_metadata(self, chunk, coords, value):
"""
Set a block's metadata in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.set_metadata(coords, value)
@coords_to_chunk
def destroy(self, chunk, coords):
"""
Destroy a block in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.destroy(coords)
@coords_to_chunk
def mark_dirty(self, chunk, coords):
"""
Mark an unknown chunk dirty.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.dirty = True
@sync_coords_to_chunk
def sync_get_block(self, chunk, coords):
"""
Get a block from an unknown chunk.
:returns: the requested block
"""
return chunk.get_block(coords)
@sync_coords_to_chunk
def sync_set_block(self, chunk, coords, value):
"""
Set a block in an unknown chunk.
:returns: None
"""
chunk.set_block(coords, value)
@sync_coords_to_chunk
def sync_get_metadata(self, chunk, coords):
"""
Get a block's metadata from an unknown chunk.
:returns: the requested metadata
"""
return chunk.get_metadata(coords)
@sync_coords_to_chunk
def sync_set_metadata(self, chunk, coords, value):
"""
Set a block's metadata in an unknown chunk.
:returns: None
"""
chunk.set_metadata(coords, value)
@sync_coords_to_chunk
def sync_destroy(self, chunk, coords):
"""
Destroy a block in an unknown chunk.
:returns: None
"""
chunk.destroy(coords)
@sync_coords_to_chunk
def sync_mark_dirty(self, chunk, coords):
"""
Mark an unknown chunk dirty.
:returns: None
"""
chunk.dirty = True
@sync_coords_to_chunk
def sync_request_chunk(self, chunk, coords):
"""
Get an unknown chunk.
:returns: the requested ``Chunk``
"""
return chunk
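The `coords_to_chunk` and `sync_coords_to_chunk` decorators above both lean on `split_coords` from `bravo.utilities.coords`, whose implementation is not shown in this diff. A minimal sketch of its expected semantics (an assumption based on how the decorators use the return values):

```python
def split_coords(x, z):
    # Split absolute block coordinates into chunk coordinates ("big")
    # and intra-chunk offsets ("small"). Chunks are 16x16 columns, and
    # divmod's floor-division semantics keep negative coordinates
    # consistent: block x = -1 lands in chunk -1 at offset 15.
    bigx, smallx = divmod(int(x), 16)
    bigz, smallz = divmod(int(z), 16)
    return bigx, smallx, bigz, smallz
```

Floor division (rather than truncation) is what keeps negative world coordinates mapping into the correct chunk, which matters for everything downstream of these decorators.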
|
bravoserver/bravo
|
925ed06680adeb9f43c087d2dbbe3365b6447862
|
world: PEP 8.
|
diff --git a/bravo/world.py b/bravo/world.py
index f76cfd5..e415a9f 100644
--- a/bravo/world.py
+++ b/bravo/world.py
@@ -1,647 +1,653 @@
from array import array
from functools import wraps
from itertools import product
import random
import sys
import weakref
from twisted.internet import reactor
from twisted.internet.defer import (inlineCallbacks, maybeDeferred,
returnValue, succeed)
from twisted.internet.task import LoopingCall
from twisted.python import log
from bravo.beta.structures import Level
from bravo.chunk import Chunk, CHUNK_HEIGHT
from bravo.entity import Player, Furnace
from bravo.errors import (ChunkNotLoaded, SerializerReadException,
SerializerWriteException)
from bravo.ibravo import ISerializer
from bravo.plugin import retrieve_named_plugins
from bravo.utilities.coords import split_coords
from bravo.utilities.temporal import PendingEvent
from bravo.mobmanager import MobManager
+
class ImpossibleCoordinates(Exception):
"""
A coordinate could not ever be valid.
"""
+
def coords_to_chunk(f):
"""
Automatically look up the chunk for the coordinates, and convert world
coordinates to chunk coordinates.
"""
@wraps(f)
def decorated(self, coords, *args, **kwargs):
x, y, z = coords
# Fail early if Y is OOB.
if not 0 <= y < CHUNK_HEIGHT:
raise ImpossibleCoordinates("Y value %d is impossible" % y)
bigx, smallx, bigz, smallz = split_coords(x, z)
d = self.request_chunk(bigx, bigz)
@d.addCallback
def cb(chunk):
return f(self, chunk, (smallx, y, smallz), *args, **kwargs)
return d
return decorated
+
def sync_coords_to_chunk(f):
"""
Either get a chunk for the coordinates, or raise an exception.
"""
@wraps(f)
def decorated(self, coords, *args, **kwargs):
x, y, z = coords
# Fail early if Y is OOB.
if not 0 <= y < CHUNK_HEIGHT:
raise ImpossibleCoordinates("Y value %d is impossible" % y)
bigx, smallx, bigz, smallz = split_coords(x, z)
bigcoords = bigx, bigz
if bigcoords in self.chunk_cache:
chunk = self.chunk_cache[bigcoords]
elif bigcoords in self.dirty_chunk_cache:
chunk = self.dirty_chunk_cache[bigcoords]
else:
raise ChunkNotLoaded("Chunk (%d, %d) isn't loaded" % bigcoords)
return f(self, chunk, (smallx, y, smallz), *args, **kwargs)
return decorated
+
class World(object):
"""
Object representing a world on disk.
Worlds are composed of levels and chunks, each of which corresponds to
exactly one file on disk. Worlds also contain saved player data.
"""
factory = None
"""
The factory managing this world.
Worlds do not need to be owned by a factory, but will not callback to
surrounding objects without an owner.
"""
season = None
"""
The current `ISeason`.
"""
saving = True
"""
Whether objects belonging to this world may be written out to disk.
"""
async = False
"""
Whether this world is using multiprocessing methods to generate geometry.
"""
dimension = "earth"
"""
The world dimension. Valid values are earth, sky, and nether.
"""
permanent_cache = None
"""
A permanent cache of chunks which are never evicted from memory.
This cache is used to speed up logins near the spawn point.
"""
level = Level(seed=0, spawn=(0, 0, 0), time=0)
"""
The initial level data.
"""
def __init__(self, config, name):
"""
:Parameters:
name : str
The configuration key to use to look up configuration data.
"""
self.config = config
self.config_name = "world %s" % name
self.chunk_cache = weakref.WeakValueDictionary()
self.dirty_chunk_cache = dict()
self._pending_chunks = dict()
def connect(self):
"""
Connect to the world.
"""
world_url = self.config.get(self.config_name, "url")
world_sf_name = self.config.get(self.config_name, "serializer")
# Get the current serializer list, and attempt to connect our
# serializer of choice to our resource.
# This could fail. Each of these lines (well, only the first and
# third) could raise a variety of exceptions. They should *all* be
# fatal.
serializers = retrieve_named_plugins(ISerializer, [world_sf_name])
self.serializer = serializers[0]
self.serializer.connect(world_url)
log.msg("World started on %s, using serializer %s" %
- (world_url, self.serializer.name))
+ (world_url, self.serializer.name))
def start(self):
"""
Start managing a world.
Connect to the world and turn on all of the timed actions which
continuously manage the world.
"""
self.connect()
# Pick a random number for the seed. Use the configured value if one
# is present.
seed = random.randint(0, sys.maxint)
seed = self.config.getintdefault(self.config_name, "seed", seed)
self.level = self.level._replace(seed=seed)
# Check if we should offload chunk requests to ampoule.
if self.config.getbooleandefault("bravo", "ampoule", False):
try:
import ampoule
if ampoule:
self.async = True
except ImportError:
pass
log.msg("World is %s" %
("read-write" if self.saving else "read-only"))
log.msg("Using Ampoule: %s" % self.async)
# First, try loading the level, to see if there's any data out there
# which we can use. If not, don't worry about it.
d = maybeDeferred(self.serializer.load_level)
@d.addCallback
def cb(level):
self.level = level
log.msg("Loaded level data!")
@d.addErrback
def sre(failure):
failure.trap(SerializerReadException)
log.msg("Had issues loading level data, continuing anyway...")
# And now save our level.
if self.saving:
self.serializer.save_level(self.level)
self.chunk_management_loop = LoopingCall(self.sort_chunks)
self.chunk_management_loop.start(1)
- self.mob_manager = MobManager() # XXX Put this in init or here?
- self.mob_manager.world = self # XXX Put this in the managers constructor?
+ # XXX Put this in init or here?
+ self.mob_manager = MobManager()
+ # XXX Put this in the managers constructor?
+ self.mob_manager.world = self
def stop(self):
"""
Stop managing the world.
This can be a time-consuming, blocking operation, while the world's
data is serialized.
Note to callers: If you want the world time to be accurate, don't
forget to write it back before calling this method!
"""
self.chunk_management_loop.stop()
# Flush all dirty chunks to disk.
for chunk in self.dirty_chunk_cache.itervalues():
self.save_chunk(chunk)
# Evict all chunks.
self.chunk_cache.clear()
self.dirty_chunk_cache.clear()
# Save the level data.
self.serializer.save_level(self.level)
def enable_cache(self, size):
"""
Set the permanent cache size.
Changing the size of the cache sets off a series of events which will
empty or fill the cache to make it the proper size.
For reference, 3 is a large-enough size to completely satisfy the
Notchian client's login demands. 10 is enough to completely fill the
Notchian client's chunk buffer.
:param int size: The taxicab radius of the cache, in chunks
"""
log.msg("Setting cache size to %d, please hold..." % size)
self.permanent_cache = set()
assign = self.permanent_cache.add
x = self.level.spawn[0] // 16
z = self.level.spawn[2] // 16
rx = xrange(x - size, x + size)
rz = xrange(z - size, z + size)
for x, z in product(rx, rz):
log.msg("Adding %d, %d to cache..." % (x, z))
self.request_chunk(x, z).addCallback(assign)
log.msg("Cache size is now %d!" % size)
def sort_chunks(self):
"""
Sort out the internal caches.
This method will always block when there are dirty chunks.
"""
first = True
all_chunks = dict(self.dirty_chunk_cache)
all_chunks.update(self.chunk_cache)
self.chunk_cache.clear()
self.dirty_chunk_cache.clear()
for coords, chunk in all_chunks.iteritems():
if chunk.dirty:
if first:
first = False
self.save_chunk(chunk)
self.chunk_cache[coords] = chunk
else:
self.dirty_chunk_cache[coords] = chunk
else:
self.chunk_cache[coords] = chunk
def save_off(self):
"""
Disable saving to disk.
This is useful for accessing the world on disk without Bravo
interfering, for backing up the world.
"""
if not self.saving:
return
d = dict(self.chunk_cache)
self.chunk_cache = d
self.saving = False
def save_on(self):
"""
Enable saving to disk.
"""
if self.saving:
return
d = weakref.WeakValueDictionary(self.chunk_cache)
self.chunk_cache = d
self.saving = True
def postprocess_chunk(self, chunk):
"""
Do a series of final steps to bring a chunk into the world.
This method might be called multiple times on a chunk, but it should
not be harmful to do so.
"""
# Apply the current season to the chunk.
if self.season:
self.season.transform(chunk)
# Since this chunk hasn't been given to any player yet, there's no
# conceivable way that any meaningful damage has been accumulated;
# anybody loading any part of this chunk will want the entire thing.
# Thus, it should start out undamaged.
chunk.clear_damage()
# Skip some of the spendier scans if we have no factory; for example,
# if we are generating chunks offline.
if not self.factory:
return chunk
# XXX slightly icky, print statements are bad
# Register the chunk's entities with our parent factory.
for entity in chunk.entities:
- if hasattr(entity,'loop'):
+ if hasattr(entity, 'loop'):
print "Started mob!"
self.mob_manager.start_mob(entity)
else:
print "I have no loop"
self.factory.register_entity(entity)
# XXX why is this for furnaces only? :T
# Scan the chunk for burning furnaces
for coords, tile in chunk.tiles.iteritems():
# If the furnace was saved while burning ...
if type(tile) == Furnace and tile.burntime != 0:
x, y, z = coords
coords = chunk.x, x, chunk.z, z, y
# ... start its burning loop
reactor.callLater(2, tile.changed, self.factory, coords)
# Return the chunk, in case we are in a Deferred chain.
return chunk
@inlineCallbacks
def request_chunk(self, x, z):
"""
Request a ``Chunk`` to be delivered later.
:returns: ``Deferred`` that will be called with the ``Chunk``
"""
if (x, z) in self.chunk_cache:
returnValue(self.chunk_cache[x, z])
elif (x, z) in self.dirty_chunk_cache:
returnValue(self.dirty_chunk_cache[x, z])
elif (x, z) in self._pending_chunks:
# Rig up another Deferred and wrap it up in a to-go box.
retval = yield self._pending_chunks[x, z].deferred()
returnValue(retval)
try:
chunk = yield maybeDeferred(self.serializer.load_chunk, x, z)
except SerializerReadException:
# Looks like the chunk wasn't already on disk. Guess we're gonna
# need to keep going.
chunk = Chunk(x, z)
if chunk.populated:
self.chunk_cache[x, z] = chunk
self.postprocess_chunk(chunk)
if self.factory:
self.factory.scan_chunk(chunk)
returnValue(chunk)
if self.async:
from ampoule import deferToAMPProcess
from bravo.remote import MakeChunk
generators = [plugin.name for plugin in self.pipeline]
d = deferToAMPProcess(MakeChunk, x=x, z=z, seed=self.level.seed,
generators=generators)
# Get chunk data into our chunk object.
def fill_chunk(kwargs):
chunk.blocks = array("B")
chunk.blocks.fromstring(kwargs["blocks"])
chunk.heightmap = array("B")
chunk.heightmap.fromstring(kwargs["heightmap"])
chunk.metadata = array("B")
chunk.metadata.fromstring(kwargs["metadata"])
chunk.skylight = array("B")
chunk.skylight.fromstring(kwargs["skylight"])
chunk.blocklight = array("B")
chunk.blocklight.fromstring(kwargs["blocklight"])
return chunk
d.addCallback(fill_chunk)
else:
# Populate the chunk the slow way. :c
for stage in self.pipeline:
stage.populate(chunk, self.level.seed)
chunk.regenerate()
d = succeed(chunk)
# Set up our event and generate our return-value Deferred. It has to
# be done early because PendingEvents only fire exactly once and it
# might fire immediately in certain cases.
pe = PendingEvent()
# This one is for our return value.
retval = pe.deferred()
# This one is for scanning the chunk for automatons.
if self.factory:
pe.deferred().addCallback(self.factory.scan_chunk)
self._pending_chunks[x, z] = pe
def pp(chunk):
chunk.populated = True
chunk.dirty = True
self.postprocess_chunk(chunk)
self.dirty_chunk_cache[x, z] = chunk
del self._pending_chunks[x, z]
return chunk
# Set up callbacks.
d.addCallback(pp)
d.chainDeferred(pe)
# Because multiple people might be attached to this callback, we're
# going to do something magical here. We will yield a forked version
# of our Deferred. This means that we will wait right here, for a
# long, long time, before actually returning with the chunk, *but*,
# when we actually finish, we'll be ready to return the chunk
# immediately. Our caller cannot possibly care because they only see a
# Deferred either way.
retval = yield retval
returnValue(retval)
def save_chunk(self, chunk):
if not chunk.dirty or not self.saving:
return
d = maybeDeferred(self.serializer.save_chunk, chunk)
@d.addCallback
def cb(none):
chunk.dirty = False
@d.addErrback
def eb(failure):
failure.trap(SerializerWriteException)
log.msg("Couldn't write %r" % chunk)
def load_player(self, username):
"""
Retrieve player data.
:returns: a ``Deferred`` that will be fired with a ``Player``
"""
# Get the player, possibly.
d = maybeDeferred(self.serializer.load_player, username)
@d.addErrback
def eb(failure):
failure.trap(SerializerReadException)
log.msg("Couldn't load player %r" % username)
# Make a player.
player = Player(username=username)
player.location.x = self.level.spawn[0]
player.location.y = self.level.spawn[1]
player.location.stance = self.level.spawn[1]
player.location.z = self.level.spawn[2]
return player
# This Deferred's good to go as-is.
return d
def save_player(self, username, player):
if self.saving:
self.serializer.save_player(player)
# World-level geometry access.
# These methods let external API users refrain from going through the
# standard motions of looking up and loading chunk information.
@coords_to_chunk
def get_block(self, chunk, coords):
"""
Get a block from an unknown chunk.
:returns: a ``Deferred`` with the requested value
"""
return chunk.get_block(coords)
@coords_to_chunk
def set_block(self, chunk, coords, value):
"""
Set a block in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.set_block(coords, value)
@coords_to_chunk
def get_metadata(self, chunk, coords):
"""
Get a block's metadata from an unknown chunk.
:returns: a ``Deferred`` with the requested value
"""
return chunk.get_metadata(coords)
@coords_to_chunk
def set_metadata(self, chunk, coords, value):
"""
Set a block's metadata in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.set_metadata(coords, value)
@coords_to_chunk
def destroy(self, chunk, coords):
"""
Destroy a block in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.destroy(coords)
@coords_to_chunk
def mark_dirty(self, chunk, coords):
"""
Mark an unknown chunk dirty.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.dirty = True
@sync_coords_to_chunk
def sync_get_block(self, chunk, coords):
"""
Get a block from an unknown chunk.
:returns: the requested block
"""
return chunk.get_block(coords)
@sync_coords_to_chunk
def sync_set_block(self, chunk, coords, value):
"""
Set a block in an unknown chunk.
:returns: None
"""
chunk.set_block(coords, value)
@sync_coords_to_chunk
def sync_get_metadata(self, chunk, coords):
"""
Get a block's metadata from an unknown chunk.
:returns: the requested metadata
"""
return chunk.get_metadata(coords)
@sync_coords_to_chunk
def sync_set_metadata(self, chunk, coords, value):
"""
Set a block's metadata in an unknown chunk.
:returns: None
"""
chunk.set_metadata(coords, value)
@sync_coords_to_chunk
def sync_destroy(self, chunk, coords):
"""
Destroy a block in an unknown chunk.
:returns: None
"""
chunk.destroy(coords)
@sync_coords_to_chunk
def sync_mark_dirty(self, chunk, coords):
"""
Mark an unknown chunk dirty.
:returns: None
"""
chunk.dirty = True
@sync_coords_to_chunk
def sync_request_chunk(self, chunk, coords):
"""
Get an unknown chunk.
:returns: the requested ``Chunk``
"""
return chunk
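The `enable_cache` loop in the diff above pins a square of chunks around the spawn point. As a rough sketch of which coordinates end up cached (`cache_coords` is a hypothetical helper for illustration, not part of Bravo):

```python
from itertools import product

def cache_coords(spawn, size):
    # Chunk coordinates pinned by World.enable_cache: a square around
    # the spawn chunk. Both ranges are half-open, mirroring the
    # xrange(x - size, x + size) calls in the method, so a size of n
    # yields a 2n-by-2n block of chunks.
    x = spawn[0] // 16
    z = spawn[2] // 16
    return list(product(range(x - size, x + size),
                        range(z - size, z + size)))
```

Note that despite the docstring's "taxicab radius" wording, the half-open ranges produce a 2n-by-2n square rather than the (2n+1)-sided square or diamond a true radius would suggest; size 3 pins 36 chunks.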
|
bravoserver/bravo
|
8602bab200b676f1f193971b109ddd8914c28998
|
beta/protocol, beta/structures: Expand Settings and use it for 0xca.
|
diff --git a/bravo/beta/protocol.py b/bravo/beta/protocol.py
index 8b5a989..f825508 100644
--- a/bravo/beta/protocol.py
+++ b/bravo/beta/protocol.py
@@ -1,835 +1,842 @@
# vim: set fileencoding=utf8 :
from itertools import product, chain
from time import time
from urlparse import urlunparse
from twisted.internet import reactor
from twisted.internet.defer import (DeferredList, inlineCallbacks,
maybeDeferred, succeed)
from twisted.internet.protocol import Protocol, connectionDone
from twisted.internet.task import cooperate, deferLater, LoopingCall
from twisted.internet.task import TaskDone, TaskFailed
from twisted.protocols.policies import TimeoutMixin
from twisted.python import log
from twisted.web.client import getPage
from bravo import version
from bravo.beta.structures import BuildData, Settings
from bravo.blocks import blocks, items
from bravo.chunk import CHUNK_HEIGHT
from bravo.entity import Sign
from bravo.errors import BetaClientError, BuildError
from bravo.ibravo import (IChatCommand, IPreBuildHook, IPostBuildHook,
IWindowOpenHook, IWindowClickHook, IWindowCloseHook,
IPreDigHook, IDigHook, ISignHook, IUseHook)
from bravo.infini.factory import InfiniClientFactory
from bravo.inventory.windows import InventoryWindow
from bravo.location import Location, Orientation, Position
from bravo.motd import get_motd
from bravo.beta.packets import parse_packets, make_packet, make_error_packet
from bravo.plugin import retrieve_plugins
from bravo.policy.dig import dig_policies
from bravo.utilities.coords import adjust_coords_for_face, split_coords
from bravo.utilities.chat import complete, username_alternatives
from bravo.utilities.maths import circling, clamp, sorted_by_distance
from bravo.utilities.temporal import timestamp_from_clock
# States of the protocol.
(STATE_UNAUTHENTICATED, STATE_AUTHENTICATED, STATE_LOCATED) = range(3)
SUPPORTED_PROTOCOL = 61
class BetaServerProtocol(object, Protocol, TimeoutMixin):
"""
The Minecraft Alpha/Beta server protocol.
This class is mostly designed to be a skeleton for featureful clients. It
tries hard to not step on the toes of potential subclasses.
"""
excess = ""
packet = None
state = STATE_UNAUTHENTICATED
buf = ""
parser = None
handler = None
player = None
username = None
- settings = Settings("en_US", "normal")
+ settings = Settings()
motd = "Bravo Generic Beta Server"
_health = 20
_latency = 0
def __init__(self):
self.chunks = dict()
self.windows = {}
self.wid = 1
self.location = Location()
self.handlers = {
0x00: self.ping,
0x02: self.handshake,
0x03: self.chat,
0x07: self.use,
0x09: self.respawn,
0x0a: self.grounded,
0x0b: self.position,
0x0c: self.orientation,
0x0d: self.location_packet,
0x0e: self.digging,
0x0f: self.build,
0x10: self.equip,
0x12: self.animate,
0x13: self.action,
0x15: self.pickup,
0x65: self.wclose,
0x66: self.waction,
0x6a: self.wacknowledge,
0x6b: self.wcreative,
0x82: self.sign,
+ 0xca: self.client_settings,
0xcb: self.complete,
0xcc: self.settings_packet,
0xfe: self.poll,
0xff: self.quit,
}
self._ping_loop = LoopingCall(self.update_ping)
self.setTimeout(30)
# Low-level packet handlers
# Try not to hook these if possible, since they offer no convenient
# abstractions or protections.
def ping(self, container):
"""
Hook for ping packets.
By default, this hook will examine the timestamps on incoming pings,
and use them to estimate the current latency of the connected client.
"""
now = timestamp_from_clock(reactor)
then = container.pid
self.latency = now - then
def handshake(self, container):
"""
Hook for handshake packets.
Override this to customize how logins are handled. By default, this
method will only confirm that the negotiated wire protocol is the
correct version, copy data out of the packet and onto the protocol,
and then run the ``authenticated`` callback.
This method will call the ``pre_handshake`` method hook prior to
logging in the client.
"""
self.username = container.username
if container.protocol < SUPPORTED_PROTOCOL:
# Kick old clients.
self.error("This server doesn't support your ancient client.")
return
elif container.protocol > SUPPORTED_PROTOCOL:
# Kick new clients.
self.error("This server doesn't support your newfangled client.")
return
log.msg("Handshaking with client, protocol version %d" %
container.protocol)
if not self.pre_handshake():
log.msg("Pre-handshake hook failed; kicking client")
self.error("You failed the pre-handshake hook.")
return
players = min(self.factory.limitConnections, 20)
self.write_packet("login", eid=self.eid, leveltype="default",
mode=self.factory.mode,
dimension=self.factory.world.dimension,
difficulty="peaceful", unused=0, maxplayers=players)
self.authenticated()
def pre_handshake(self):
"""
Whether this client should be logged in.
"""
return True
def chat(self, container):
"""
Hook for chat packets.
"""
def use(self, container):
"""
Hook for use packets.
"""
def respawn(self, container):
"""
Hook for respawn packets.
"""
def grounded(self, container):
"""
Hook for grounded packets.
"""
self.location.grounded = bool(container.grounded)
def position(self, container):
"""
Hook for position packets.
"""
if self.state != STATE_LOCATED:
log.msg("Ignoring unlocated position!")
return
self.grounded(container.grounded)
old_position = self.location.pos
position = Position.from_player(container.position.x,
container.position.y, container.position.z)
altered = False
dx, dy, dz = old_position - position
if any(abs(d) >= 64 for d in (dx, dy, dz)):
# Whoa, slow down there, cowboy. You're moving too fast. We're
# gonna ignore this position change completely, because it's
# either bogus or ignoring a recent teleport.
altered = True
else:
self.location.pos = position
self.location.stance = container.position.stance
# Sanitize location. This handles safety boundaries, illegal stance,
# etc.
altered = self.location.clamp() or altered
# If, for any reason, our opinion on where the client should be
# located is different than theirs, force them to conform to our point
# of view.
if altered:
log.msg("Not updating bogus position!")
self.update_location()
# If our position actually changed, fire the position change hook.
if old_position != position:
self.position_changed()
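The anti-teleport check in ``position()`` above rejects any client-supplied move whose per-axis delta is 64 units or more. A minimal standalone sketch of that predicate (the function name and tuple positions are illustrative, not part of the protocol):

```python
# Sketch of the movement-sanity check: a move is distrusted outright
# when any single axis changes by `limit` units or more in one packet.
def is_bogus_move(old, new, limit=64):
    """Return True when the move from old to new is too large to trust."""
    return any(abs(o - n) >= limit for o, n in zip(old, new))

print(is_bogus_move((0, 0, 0), (10, 0, 0)))   # small move: trusted
print(is_bogus_move((0, 0, 0), (100, 0, 0)))  # huge jump: rejected
```

As in the handler, a rejected move leaves the server's opinion of the position unchanged, and the client is forced back to it via ``update_location()``.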
def orientation(self, container):
"""
Hook for orientation packets.
"""
self.grounded(container.grounded)
old_orientation = self.location.ori
orientation = Orientation.from_degs(container.orientation.rotation,
container.orientation.pitch)
self.location.ori = orientation
if old_orientation != orientation:
self.orientation_changed()
def location_packet(self, container):
"""
Hook for location packets.
"""
self.position(container)
self.orientation(container)
def digging(self, container):
"""
Hook for digging packets.
"""
def build(self, container):
"""
Hook for build packets.
"""
def equip(self, container):
"""
Hook for equip packets.
"""
def pickup(self, container):
"""
Hook for pickup packets.
"""
def animate(self, container):
"""
Hook for animate packets.
"""
def action(self, container):
"""
Hook for action packets.
"""
def wclose(self, container):
"""
Hook for wclose packets.
"""
def waction(self, container):
"""
Hook for waction packets.
"""
def wacknowledge(self, container):
"""
Hook for wacknowledge packets.
"""
def wcreative(self, container):
"""
Hook for creative inventory action packets.
"""
def sign(self, container):
"""
Hook for sign packets.
"""
+ def client_settings(self, container):
+ """
+ Hook for interaction setting packets.
+ """
+
+ self.settings.update_interaction(container)
+
def complete(self, container):
"""
Hook for tab-completion packets.
"""
def settings_packet(self, container):
"""
- Hook for client settings packets.
+ Hook for presentation setting packets.
"""
- distance = ["far", "normal", "short", "tiny"][container.distance]
- self.settings = Settings(container.locale, distance)
+ self.settings.update_presentation(container)
def poll(self, container):
"""
Hook for poll packets.
By default, queries the parent factory for some data, and replays it
in a specific format to the requester. The connection is then closed
at both ends. This functionality is used by Beta 1.8 clients to poll
servers for status.
"""
players = unicode(len(self.factory.protocols))
max_players = unicode(self.factory.limitConnections or 1000000)
data = [
u"§1",
unicode(SUPPORTED_PROTOCOL),
u"Bravo %s" % version,
self.motd,
players,
max_players,
]
response = u"\u0000".join(data)
self.error(response)
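The Beta 1.8 poll reply built above is simply a NUL-joined string of fields behind a magic marker. A self-contained, py3-compatible sketch of that formatting (the function name and sample values are placeholders):

```python
# Build a server-list ping payload as poll() does: fields joined by
# U+0000, led by the magic marker (section sign + "1", u"\xa71").
def poll_response(protocol, version, motd, players, max_players):
    fields = [
        "\u00a71",            # magic marker, written as u"\u00a71"
        str(protocol),        # supported wire-protocol version
        "Bravo %s" % version, # server brand and version
        motd,
        str(players),
        str(max_players),
    ]
    return "\u0000".join(fields)

resp = poll_response(61, "2.0", "Bravo Generic Beta Server", 3, 20)
print(resp.split("\u0000"))
```

The reply is delivered through ``error()``, which doubles as "send a final string and close the connection".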
def quit(self, container):
"""
Hook for quit packets.
By default, merely logs the quit message and drops the connection.
Even if the connection is not dropped, it will be lost anyway since
the client will close the connection. It's better to explicitly let it
go here than to have zombie protocols.
"""
log.msg("Client is quitting: %s" % container.message)
self.transport.loseConnection()
# Twisted-level data handlers and methods
# Please don't override these needlessly, as they are pretty solid and
# shouldn't need to be touched.
def dataReceived(self, data):
self.buf += data
packets, self.buf = parse_packets(self.buf)
if packets:
self.resetTimeout()
for header, payload in packets:
if header in self.handlers:
d = maybeDeferred(self.handlers[header], payload)
@d.addErrback
def eb(failure):
log.err("Error while handling packet 0x%.2x" % header)
log.err(failure)
return None
else:
log.err("Didn't handle parseable packet 0x%.2x!" % header)
log.err(payload)
def connectionLost(self, reason=connectionDone):
if self._ping_loop.running:
self._ping_loop.stop()
def timeoutConnection(self):
self.error("Connection timed out")
# State-change callbacks
# Feel free to override these, but call them at some point.
def authenticated(self):
"""
Called when the client has successfully authenticated with the server.
"""
self.state = STATE_AUTHENTICATED
self._ping_loop.start(30)
# Event callbacks
# These are meant to be overridden.
def orientation_changed(self):
"""
Called when the client moves.
This callback is only for orientation, not position.
"""
pass
def position_changed(self):
"""
Called when the client moves.
This callback is only for position, not orientation.
"""
pass
# Convenience methods for consolidating code and expressing intent. I
# hear that these are occasionally useful. If a method in this section can
# be used, then *PLEASE* use it; not using it is the same as open-coding
# whatever you're doing, and only hurts in the long run.
def write_packet(self, header, **payload):
"""
Send a packet to the client.
"""
self.transport.write(make_packet(header, **payload))
def update_ping(self):
"""
Send a keepalive to the client.
"""
timestamp = timestamp_from_clock(reactor)
self.write_packet("ping", pid=timestamp)
def update_location(self):
"""
Send this client's location to the client.
Also let other clients know where this client is.
"""
# Don't bother trying to update things if the position's not yet
# synchronized. We could end up jettisoning them into the void.
if self.state != STATE_LOCATED:
return
x, y, z = self.location.pos
yaw, pitch = self.location.ori.to_fracs()
# Inform everybody of our new location.
packet = make_packet("teleport", eid=self.player.eid, x=x, y=y, z=z,
yaw=yaw, pitch=pitch)
self.factory.broadcast_for_others(packet, self)
# Inform ourselves of our new location.
packet = self.location.save_to_packet()
self.transport.write(packet)
def ascend(self, count):
"""
Ascend to the next XZ-plane.
``count`` is the number of ascensions to perform, and may be zero in
order to force this player to not be standing inside a block.
:returns: bool of whether the ascension was successful
This client must be located for this method to have any effect.
"""
if self.state != STATE_LOCATED:
return False
x, y, z = self.location.pos.to_block()
bigx, smallx, bigz, smallz = split_coords(x, z)
chunk = self.chunks[bigx, bigz]
column = [chunk.get_block((smallx, i, smallz))
for i in range(CHUNK_HEIGHT)]
# Special case: Ascend at most once, if the current spot isn't good.
if count == 0:
if (not column[y]) or column[y + 1] or column[y + 2]:
# Yeah, we're gonna need to move.
count += 1
else:
# Nope, we're fine where we are.
return True
for i in xrange(y, 255):
# Find the next spot above us which has a platform and two empty
# blocks of air.
if column[i] and (not column[i + 1]) and not column[i + 2]:
count -= 1
if not count:
break
else:
return False
self.location.pos = self.location.pos._replace(y=i * 32)
return True
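The scan inside ``ascend()`` looks upward for a "standing spot": a solid platform with two empty blocks of headroom above it. A standalone sketch of that scan (omitting the ``count == 0`` special case; ``column`` is any sequence that is truthy per solid block):

```python
# Find the y of the count-th valid standing spot at or above y, or None.
# A valid spot has a solid block at i and air at i + 1 and i + 2.
def next_standing_y(column, y, count=1):
    for i in range(y, len(column) - 2):
        if column[i] and not column[i + 1] and not column[i + 2]:
            count -= 1
            if not count:
                return i
    return None

# Solid ground at heights 0 and 4, air everywhere else.
column = [1, 0, 0, 0, 1, 0, 0, 0]
print(next_standing_y(column, 0))  # 0: already on a platform
print(next_standing_y(column, 1))  # 4: the next platform with headroom
```

In the real method the result is written back as ``pos._replace(y=i * 32)``, since locations are kept in fixed-point units of 1/32 block.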
def error(self, message):
"""
Error out.
This method sends ``message`` to the client as a descriptive error
message, then closes the connection.
"""
log.msg("Error: %s" % message)
self.transport.write(make_error_packet(message))
self.transport.loseConnection()
def play_notes(self, notes):
"""
Play some music.
Send a sequence of notes to the player. ``notes`` is a finite iterable
of pairs of instruments and pitches.
There is no way to time notes; if staggered playback is desired (and
it usually is!), then ``play_notes()`` should be called repeatedly at
the appropriate times.
This method turns the block beneath the player into a note block,
plays the requested notes through it, then turns it back into the
original block, all without actually modifying the chunk.
"""
x, y, z = self.location.pos.to_block()
if y:
y -= 1
bigx, smallx, bigz, smallz = split_coords(x, z)
if (bigx, bigz) not in self.chunks:
return
block = self.chunks[bigx, bigz].get_block((smallx, y, smallz))
meta = self.chunks[bigx, bigz].get_metadata((smallx, y, smallz))
self.write_packet("block", x=x, y=y, z=z,
type=blocks["note-block"].slot, meta=0)
for instrument, pitch in notes:
self.write_packet("note", x=x, y=y, z=z, pitch=pitch,
instrument=instrument)
self.write_packet("block", x=x, y=y, z=z, type=block, meta=meta)
# Automatic properties. Assigning to them causes the client to be notified
# of changes.
@property
def health(self):
return self._health
@health.setter
def health(self, value):
if not 0 <= value <= 20:
raise BetaClientError("Invalid health value %d" % value)
if self._health != value:
self.write_packet("health", hp=value, fp=0, saturation=0)
self._health = value
@property
def latency(self):
return self._latency
@latency.setter
def latency(self, value):
# Clamp the value to not exceed the boundaries of the packet. This is
# necessary even though, in theory, a ping this high is bad news.
value = clamp(value, 0, 65535)
# Check to see if this is a new value, and if so, alert everybody.
if self._latency != value:
packet = make_packet("players", name=self.username, online=True,
ping=value)
self.factory.broadcast(packet)
self._latency = value
class KickedProtocol(BetaServerProtocol):
"""
A very simple Beta protocol that helps enforce IP bans, Max Connections,
and Max Connections Per IP.
This protocol disconnects people as soon as they connect, with a helpful
message.
"""
def __init__(self, reason=None):
BetaServerProtocol.__init__(self)
if reason:
self.reason = reason
else:
self.reason = (
"This server doesn't like you very much."
" I don't like you very much either.")
def connectionMade(self):
self.error("%s" % self.reason)
class BetaProxyProtocol(BetaServerProtocol):
"""
A ``BetaServerProtocol`` that proxies for an InfiniCraft client.
"""
gateway = "server.wiki.vg"
def handshake(self, container):
self.write_packet("handshake", username="-")
def login(self, container):
self.username = container.username
self.write_packet("login", protocol=0, username="", seed=0,
dimension="earth")
url = urlunparse(("http", self.gateway, "/node/0/0/", None, None,
None))
d = getPage(url)
d.addCallback(self.start_proxy)
def start_proxy(self, response):
log.msg("Response: %s" % response)
log.msg("Starting proxy...")
address, port = response.split(":")
self.add_node(address, int(port))
def add_node(self, address, port):
"""
Add a new node to this client.
"""
from twisted.internet.endpoints import TCP4ClientEndpoint
log.msg("Adding node %s:%d" % (address, port))
endpoint = TCP4ClientEndpoint(reactor, address, port, 5)
self.node_factory = InfiniClientFactory()
d = endpoint.connect(self.node_factory)
d.addCallback(self.node_connected)
d.addErrback(self.node_connect_error)
def node_connected(self, protocol):
log.msg("Connected new node!")
def node_connect_error(self, reason):
log.err("Couldn't connect node!")
log.err(reason)
class BravoProtocol(BetaServerProtocol):
"""
A ``BetaServerProtocol`` suitable for serving MC worlds to clients.
This protocol really does need to be hooked up with a ``BravoFactory`` or
something very much like it.
"""
chunk_tasks = None
time_loop = None
eid = 0
last_dig = None
def __init__(self, config, name):
BetaServerProtocol.__init__(self)
self.config = config
self.config_name = "world %s" % name
# Retrieve the MOTD. Only needs to be done once.
self.motd = self.config.getdefault(self.config_name, "motd",
"BravoServer")
def register_hooks(self):
log.msg("Registering client hooks...")
plugin_types = {
"open_hooks": IWindowOpenHook,
"click_hooks": IWindowClickHook,
"close_hooks": IWindowCloseHook,
"pre_build_hooks": IPreBuildHook,
"post_build_hooks": IPostBuildHook,
"pre_dig_hooks": IPreDigHook,
"dig_hooks": IDigHook,
"sign_hooks": ISignHook,
"use_hooks": IUseHook,
}
for t in plugin_types:
setattr(self, t, getattr(self.factory, t))
log.msg("Registering policies...")
if self.factory.mode == "creative":
self.dig_policy = dig_policies["speedy"]
else:
self.dig_policy = dig_policies["notchy"]
log.msg("Registered client plugin hooks!")
def pre_handshake(self):
"""
Set up username and get going.
"""
if self.username in self.factory.protocols:
# This username's already taken; find a new one.
for name in username_alternatives(self.username):
if name not in self.factory.protocols:
self.username = name
break
else:
self.error("Your username is already taken.")
return False
return True
@inlineCallbacks
def authenticated(self):
BetaServerProtocol.authenticated(self)
# Init player, and copy data into it.
self.player = yield self.factory.world.load_player(self.username)
self.player.eid = self.eid
self.location = self.player.location
# Init players' inventory window.
self.inventory = InventoryWindow(self.player.inventory)
# *Now* we are in our factory's list of protocols. Be aware.
self.factory.protocols[self.username] = self
# Announce our presence.
self.factory.chat("%s is joining the game..." % self.username)
packet = make_packet("players", name=self.username, online=True,
ping=0)
self.factory.broadcast(packet)
# Craft our avatar and send it to already-connected other players.
packet = make_packet("create", eid=self.player.eid)
packet += self.player.save_to_packet()
self.factory.broadcast_for_others(packet, self)
# And of course spawn all of those players' avatars in our client as
# well.
for protocol in self.factory.protocols.itervalues():
# Skip over ourselves; otherwise, the client tweaks out and
# usually either dies or locks up.
if protocol is self:
continue
self.write_packet("create", eid=protocol.player.eid)
packet = protocol.player.save_to_packet()
packet += protocol.player.save_equipment_to_packet()
self.transport.write(packet)
# Send spawn and inventory.
spawn = self.factory.world.level.spawn
packet = make_packet("spawn", x=spawn[0], y=spawn[1], z=spawn[2])
packet += self.inventory.save_to_packet()
self.transport.write(packet)
# TODO: Send Abilities (0xca)
# TODO: Update Health (0x08)
# TODO: Update Experience (0x2b)
# Send weather.
self.transport.write(self.factory.vane.make_packet())
self.send_initial_chunk_and_location()
self.time_loop = LoopingCall(self.update_time)
self.time_loop.start(10)
def orientation_changed(self):
# Bang your head!
yaw, pitch = self.location.ori.to_fracs()
packet = make_packet("entity-orientation", eid=self.player.eid,
yaw=yaw, pitch=pitch)
self.factory.broadcast_for_others(packet, self)
def position_changed(self):
# Send chunks.
self.update_chunks()
for entity in self.entities_near(2):
if entity.name != "Item":
continue
left = self.player.inventory.add(entity.item, entity.quantity)
if left != entity.quantity:
if left != 0:
# partial collect
entity.quantity = left
else:
packet = make_packet("collect", eid=entity.eid,
destination=self.player.eid)
packet += make_packet("destroy", count=1, eid=[entity.eid])
self.factory.broadcast(packet)
self.factory.destroy_entity(entity)
packet = self.inventory.save_to_packet()
self.transport.write(packet)
def entities_near(self, radius):
"""
Obtain the entities within a radius of this player.
Radius is measured in blocks.
"""
chunk_radius = int(radius // 16 + 1)
diff --git a/bravo/beta/structures.py b/bravo/beta/structures.py
index 42d5465..6e42e9f 100644
--- a/bravo/beta/structures.py
+++ b/bravo/beta/structures.py
@@ -1,79 +1,116 @@
from collections import namedtuple
BuildData = namedtuple("BuildData", "block, metadata, x, y, z, face")
"""
A named tuple representing data for a block which is planned to be built.
"""
Level = namedtuple("Level", "seed, spawn, time")
"""
A named tuple representing the level data for a world.
"""
-Settings = namedtuple("Settings", "locale, distance")
-"""
-A named tuple representing client settings.
-"""
+
+class Settings(object):
+ """
+ Client settings and preferences.
+
+ Ephemeral settings representing a client's preferred way of interacting with
+ the server.
+ """
+
+ locale = "en_US"
+ distance = "normal"
+
+ god_mode = False
+ can_fly = False
+ flying = False
+ creative = False
+
+ # XXX what should these actually default to?
+ walking_speed = 0
+ flying_speed = 0
+
+ def __init__(self, presentation=None, interaction=None):
+ if presentation:
+ self.update_presentation(presentation)
+ if interaction:
+ self.update_interaction(interaction)
+
+ def update_presentation(self, presentation):
+ self.locale = presentation["locale"]
+ distance = presentation["distance"]
+ self.distance = ["far", "normal", "short", "tiny"][distance]
+
+ def update_interaction(self, interaction):
+ flags = interaction["flags"]
+ self.god_mode = bool(flags & 0x8)
+ self.can_fly = bool(flags & 0x4)
+ self.flying = bool(flags & 0x2)
+ self.creative = bool(flags & 0x1)
+ self.walking_speed = interaction["walk-speed"]
+ self.flying_speed = interaction["fly-speed"]
+
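The bit layout consumed by ``update_interaction()`` above can be exercised standalone. A minimal sketch of the flag decoding (the helper name is hypothetical; the masks match the method body):

```python
# Decode the interaction-flags byte as update_interaction() does:
# bit 3 = god mode, bit 2 = may fly, bit 1 = currently flying,
# bit 0 = creative mode.
def decode_flags(flags):
    return {
        "god_mode": bool(flags & 0x8),
        "can_fly":  bool(flags & 0x4),
        "flying":   bool(flags & 0x2),
        "creative": bool(flags & 0x1),
    }

d = decode_flags(0x5)  # can_fly and creative set, nothing else
print(sorted(k for k, v in d.items() if v))
```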
class Slot(namedtuple("Slot", "primary, secondary, quantity")):
"""
A slot in an inventory.
Slots are essentially tuples of the primary and secondary identifiers of a
block or item, along with a quantity, but they provide several convenience
methods which make them a useful data structure for building inventories.
"""
__slots__ = tuple()
@classmethod
def from_key(cls, key, quantity=1):
"""
Alternative constructor which loads a key instead of a primary and
secondary.
This is meant to simplify code which wants to create slots from keys.
"""
return cls(key[0], key[1], quantity)
def holds(self, other):
"""
Whether these slots hold the same item.
This method is comfortable with other ``Slot`` instances, and also
with regular {2,3}-tuples.
"""
return self.primary == other[0] and self.secondary == other[1]
def decrement(self, quantity=1):
"""
Return a copy of this slot, with quantity decremented, or None if the
slot is empty.
"""
if quantity >= self.quantity:
return None
return self._replace(quantity=self.quantity - quantity)
def increment(self, quantity=1):
"""
Return a copy of this slot, with quantity incremented.
For parity with ``decrement()``.
"""
return self._replace(quantity=self.quantity + quantity)
def replace(self, **kwargs):
"""
Exposed version of ``_replace()`` with slot semantics.
"""
new = self._replace(**kwargs)
if new.quantity == 0:
return None
return new
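The quantity semantics of ``Slot`` are easy to misuse; the key invariant is that an emptied slot becomes ``None`` rather than a zero-quantity tuple. A condensed, self-contained sketch (the class body is reproduced in miniature, keeping only ``decrement()``):

```python
from collections import namedtuple

class Slot(namedtuple("Slot", "primary, secondary, quantity")):
    __slots__ = ()

    def decrement(self, quantity=1):
        # Emptying the slot yields None, never a quantity-zero slot.
        if quantity >= self.quantity:
            return None
        return self._replace(quantity=self.quantity - quantity)

s = Slot(1, 0, 3)
print(s.decrement())   # Slot(primary=1, secondary=0, quantity=2)
print(s.decrement(3))  # None: the slot is emptied
```

Inventory code can therefore use ``None`` uniformly as "empty slot" without a separate sentinel value.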
diff --git a/bravo/tests/beta/test_structures.py b/bravo/tests/beta/test_structures.py
new file mode 100644
index 0000000..eac7bc4
--- /dev/null
+++ b/bravo/tests/beta/test_structures.py
@@ -0,0 +1,15 @@
+from unittest import TestCase
+
+from bravo.beta.structures import Settings
+
+
+class TestSettings(TestCase):
+
+ def test_setting_presentation(self):
+ d = {
+ "locale": "C",
+ "distance": 0,
+ }
+ s = Settings(presentation=d)
+ self.assertEqual(s.locale, "C")
+ self.assertEqual(s.distance, "far")
bravoserver/bravo @ f185e32f4faeed1f6f4098c34bdf586bacdc7931: A test that was broken now succeeds.
diff --git a/bravo/tests/infini/test_packets.py b/bravo/tests/infini/test_packets.py
index 3da0308..f50d043 100644
--- a/bravo/tests/infini/test_packets.py
+++ b/bravo/tests/infini/test_packets.py
@@ -1,28 +1,26 @@
from twisted.trial import unittest
from bravo.infini.packets import packets, parse_packets
class TestInfiniPacketParsing(unittest.TestCase):
def test_ping(self):
raw = "\x00\x01\x00\x00\x00\x06\x00\x10\x00\x4d\x3c\x7d\x7c"
parsed = packets[0].parse(raw)
self.assertEqual(parsed.header.identifier, 0x00)
self.assertEqual(parsed.header.flags, 0x01)
self.assertEqual(parsed.payload.uid, 16)
self.assertEqual(parsed.payload.timestamp, 5061757)
def test_disconnect(self):
raw = "\xff\x00\x00\x00\x00\x19\x00\x17Invalid client version!"
parsed = packets[255].parse(raw)
self.assertEqual(parsed.header.identifier, 0xff)
self.assertEqual(parsed.payload.explanation,
"Invalid client version!")
class TestInfiniPacketStream(unittest.TestCase):
def test_ping_stream(self):
raw = "\x00\x01\x00\x00\x00\x06\x00\x10\x00\x4d\x3c\x7d\x7c"
packets, leftovers = parse_packets(raw)
-
- test_ping_stream.todo = "Construct doesn't like InfiniCraft"
bravoserver/bravo @ 61fe0fc179196b83b62788b20b185ff1ed232455: Enable Travis.
diff --git a/.travis.yml b/.travis.yml
new file mode 100644
index 0000000..5e9235b
--- /dev/null
+++ b/.travis.yml
@@ -0,0 +1,6 @@
+language: python
+python:
+ - "2.6"
+ - "2.7"
+ - "pypy"
+script: "trial bravo"
bravoserver/bravo @ 24261b88171cb51dde14a4c56a6bc1d6f422da45: Proper slot implementation + fix dropping items
diff --git a/bravo/beta/packets.py b/bravo/beta/packets.py
index c6f00ec..259c494 100644
--- a/bravo/beta/packets.py
+++ b/bravo/beta/packets.py
@@ -1,991 +1,1043 @@
from collections import namedtuple
from construct import Struct, Container, Embed, Enum, MetaField
from construct import MetaArray, If, Switch, Const, Peek, Magic
from construct import OptionalGreedyRange, RepeatUntil
from construct import Flag, PascalString, Adapter
from construct import UBInt8, UBInt16, UBInt32, UBInt64
from construct import SBInt8, SBInt16, SBInt32
from construct import BFloat32, BFloat64
from construct import BitStruct, BitField
from construct import StringAdapter, LengthValueAdapter, Sequence
+from construct import ConstructError
class IPacket(object):
"""
Interface for packets.
"""
def parse(buf, offset):
"""
Parse a packet out of the given buffer, starting at the given offset.
If the parse is successful, returns a tuple of the parsed packet and
the next packet offset in the buffer.
If the parse fails due to insufficient data, returns a tuple of None
and the amount of data required before the parse can be retried.
Exceptions may be raised if the parser finds invalid data.
"""
def simple(name, fmt, *args):
"""
Make a customized namedtuple representing a simple, primitive packet.
"""
from struct import Struct
s = Struct(fmt)
@classmethod
def parse(cls, buf, offset):
if len(buf) >= s.size + offset:
unpacked = s.unpack_from(buf, offset)
return cls(*unpacked), s.size + offset
else:
return None, s.size - len(buf)
def build(self):
return s.pack(*self)
methods = {
"parse": parse,
"build": build,
}
return type(name, (namedtuple(name, *args),), methods)
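The ``simple()`` factory above can be exercised on its own. Here it is reproduced verbatim, followed by a round trip through a hypothetical one-field ping packet (the ``Ping`` name and ``">I"`` format are illustrative):

```python
from collections import namedtuple

def simple(name, fmt, *args):
    """Make a customized namedtuple representing a simple, primitive packet."""
    from struct import Struct
    s = Struct(fmt)

    @classmethod
    def parse(cls, buf, offset):
        if len(buf) >= s.size + offset:
            unpacked = s.unpack_from(buf, offset)
            return cls(*unpacked), s.size + offset
        else:
            # Not enough data yet: report how many bytes are still needed.
            return None, s.size - len(buf)

    def build(self):
        return s.pack(*self)

    methods = {"parse": parse, "build": build}
    return type(name, (namedtuple(name, *args),), methods)

# A hypothetical ping carrying one big-endian unsigned int.
Ping = simple("Ping", ">I", "pid")
raw = Ping(42).build()
packet, offset = Ping.parse(raw, 0)
print(packet.pid, offset)  # 42 4
```

Note the two-sided contract of ``parse``: on success it returns the packet and the next offset; on a short buffer it returns ``None`` and the byte deficit.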
DUMP_ALL_PACKETS = False
# Strings.
# This one is a UCS2 string, which effectively decodes single writeChar()
# invocations. We need to import the encoding for it first, though.
from bravo.encodings import ucs2
from codecs import register
register(ucs2)
class DoubleAdapter(LengthValueAdapter):
def _encode(self, obj, context):
return len(obj) / 2, obj
def AlphaString(name):
return StringAdapter(
DoubleAdapter(
Sequence(name,
UBInt16("length"),
MetaField("data", lambda ctx: ctx["length"] * 2),
)
),
encoding="ucs2",
)
# Boolean converter.
def Bool(*args, **kwargs):
return Flag(*args, default=True, **kwargs)
# Flying, position, and orientation, reused in several places.
grounded = Struct("grounded", UBInt8("grounded"))
position = Struct("position",
BFloat64("x"),
BFloat64("y"),
BFloat64("stance"),
BFloat64("z")
)
orientation = Struct("orientation", BFloat32("rotation"), BFloat32("pitch"))
+# TODO: this must be replaced with 'slot' (see below)
# Notchian item packing (slot data)
items = Struct("items",
SBInt16("primary"),
If(lambda context: context["primary"] >= 0,
Embed(Struct("item_information",
UBInt8("count"),
UBInt16("secondary"),
Magic("\xff\xff"),
)),
),
)
+Speed = namedtuple('Speed', 'x y z')
+
+class Slot(object):
+ def __init__(self, item_id=-1, count=1, damage=0, nbt=None):
+ self.item_id = item_id
+ self.count = count
+ self.damage = damage
+ # TODO: Implement packing/unpacking of gzipped NBT data
+ self.nbt = nbt
+
+ @classmethod
+ def fromItem(cls, item, count):
+ return cls(item[0], count, item[1])
+
+ @property
+ def is_empty(self):
+ return self.item_id == -1
+
+ def __len__(self):
+ return 0 if self.nbt is None else len(self.nbt)
+
+ def __repr__(self):
+ from bravo.blocks import items
+ if self.is_empty:
+ return 'Slot()'
+ elif len(self):
+ return 'Slot(%s, count=%d, damage=%d, +nbt:%dB)' % (
+ str(items[self.item_id]), self.count, self.damage, len(self)
+ )
+ else:
+ return 'Slot(%s, count=%d, damage=%d)' % (
+ str(items[self.item_id]), self.count, self.damage
+ )
+
+ def __eq__(self, other):
+ return (self.item_id == other.item_id and
+ self.count == other.count and
+ self.damage == other.damage and
+ self.nbt == other.nbt)
+
+class SlotAdapter(Adapter):
+
+ def _decode(self, obj, context):
+ if obj.item_id == -1:
+ s = Slot(obj.item_id)
+ else:
+ s = Slot(obj.item_id, obj.count, obj.damage, obj.nbt)
+ return s
+
+ def _encode(self, obj, context):
+ if not isinstance(obj, Slot):
+ raise ConstructError('Slot object expected')
+ if obj.is_empty:
+ return Container(item_id=-1)
+ else:
+ return Container(item_id=obj.item_id, count=obj.count, damage=obj.damage,
+ nbt_len=len(obj) if len(obj) else -1, nbt=obj.nbt)
+
+slot = SlotAdapter(
+ Struct("slot",
+ SBInt16("item_id"),
+ If(lambda context: context["item_id"] >= 0,
+ Embed(Struct("item_information",
+ UBInt8("count"),
+ UBInt16("damage"),
+ SBInt16("nbt_len"),
+ If(lambda context: context["nbt_len"] >= 0,
+ MetaField("nbt", lambda ctx: ctx["nbt_len"])
+ )
+ )),
+ )
+ )
+)
+
+
Metadata = namedtuple("Metadata", "type value")
-metadata_types = ["byte", "short", "int", "float", "string", "slot",
- "coords"]
+metadata_types = ["byte", "short", "int", "float", "string", "slot", "coords"]
# Metadata adaptor.
class MetadataAdapter(Adapter):
def _decode(self, obj, context):
d = {}
for m in obj.data:
- d[m.id.second] = Metadata(metadata_types[m.id.first], m.value)
+ d[m.id.key] = Metadata(metadata_types[m.id.type], m.value)
return d
def _encode(self, obj, context):
c = Container(data=[], terminator=None)
for k, v in obj.iteritems():
t, value = v
d = Container(
- id=Container(first=metadata_types.index(t), second=k),
+ id=Container(type=metadata_types.index(t), key=k),
value=value,
peeked=None)
c.data.append(d)
if c.data:
c.data[-1].peeked = 127
else:
c.data.append(Container(id=Container(type=0, key=0), value=0,
peeked=127))
return c
# Metadata inner container.
metadata_switch = {
0: UBInt8("value"),
1: UBInt16("value"),
2: UBInt32("value"),
3: BFloat32("value"),
4: AlphaString("value"),
- 5: Struct("slot", # is the same as 'items' defined above
- SBInt16("primary"),
- If(lambda context: context["primary"] >= 0,
- Embed(Struct("item_information",
- UBInt8("count"),
- UBInt16("secondary"),
- SBInt16("nbt-len"),
- If(lambda context: context["nbt-len"] >= 0,
- Embed(MetaField("nbt-data", lambda ctx: ctx["nbt-len"]))
- )
- )),
- ),
- ),
+ 5: slot,
6: Struct("coords",
UBInt32("x"),
UBInt32("y"),
UBInt32("z"),
),
}
# Metadata subconstruct.
metadata = MetadataAdapter(
Struct("metadata",
RepeatUntil(lambda obj, context: obj["peeked"] == 0x7f,
Struct("data",
BitStruct("id",
- BitField("first", 3),
- BitField("second", 5),
+ BitField("type", 3),
+ BitField("key", 5),
),
- Switch("value", lambda context: context["id"]["first"],
+ Switch("value", lambda context: context["id"]["type"],
metadata_switch),
Peek(UBInt8("peeked")),
),
),
Const(UBInt8("terminator"), 0x7f),
),
)
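The ``id`` BitStruct above splits one byte into a 3-bit type index (high bits) and a 5-bit key (low bits). That packing can be checked standalone without the construct dependency (helper names here are illustrative):

```python
# Pack/unpack the metadata id byte, matching BitField("type", 3)
# followed by BitField("key", 5): type occupies the high 3 bits.
def pack_id(type_index, key):
    assert 0 <= type_index < 8 and 0 <= key < 32
    return (type_index << 5) | key

def unpack_id(byte):
    return byte >> 5, byte & 0x1f

print(unpack_id(pack_id(5, 16)))  # (5, 16): a "slot" entry at key 16
```

The stream terminator ``0x7f`` decodes to type 3, key 31, which is why the ``RepeatUntil``/``Const`` pair peeks for ``0x7f`` to end the list.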
# Build faces, used during dig and build.
faces = {
"noop": -1,
"-y": 0,
"+y": 1,
"-z": 2,
"+z": 3,
"-x": 4,
"+x": 5,
}
face = Enum(SBInt8("face"), **faces)
# World dimension.
dimensions = {
"earth": 0,
"sky": 1,
"nether": 255,
}
dimension = Enum(UBInt8("dimension"), **dimensions)
# Difficulty levels
difficulties = {
"peaceful": 0,
"easy": 1,
"normal": 2,
"hard": 3,
}
difficulty = Enum(UBInt8("difficulty"), **difficulties)
modes = {
"survival": 0,
"creative": 1,
"adventure": 2,
}
mode = Enum(UBInt8("mode"), **modes)
# Possible effects.
# XXX these names aren't really canonized yet
effect = Enum(UBInt8("effect"),
move_fast=1,
move_slow=2,
dig_fast=3,
dig_slow=4,
damage_boost=5,
heal=6,
harm=7,
jump=8,
confusion=9,
regenerate=10,
resistance=11,
fire_resistance=12,
water_resistance=13,
invisibility=14,
blindness=15,
night_vision=16,
hunger=17,
weakness=18,
poison=19,
wither=20,
)
# The actual packet list.
packets = {
0x00: Struct("ping",
UBInt32("pid"),
),
0x01: Struct("login",
# Player Entity ID (random number generated by the server)
UBInt32("eid"),
# default, flat, largeBiomes
AlphaString("leveltype"),
mode,
dimension,
difficulty,
UBInt8("unused"),
UBInt8("maxplayers"),
),
0x02: Struct("handshake",
UBInt8("protocol"),
AlphaString("username"),
AlphaString("host"),
UBInt32("port"),
),
0x03: Struct("chat",
AlphaString("message"),
),
0x04: Struct("time",
# Total Ticks
UBInt64("timestamp"),
# Time of day
UBInt64("time"),
),
0x05: Struct("entity-equipment",
UBInt32("eid"),
UBInt16("slot"),
Embed(items),
),
0x06: Struct("spawn",
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
),
0x07: Struct("use",
UBInt32("eid"),
UBInt32("target"),
UBInt8("button"),
),
0x08: Struct("health",
UBInt16("hp"),
UBInt16("fp"),
BFloat32("saturation"),
),
0x09: Struct("respawn",
dimension,
difficulty,
mode,
UBInt16("height"),
AlphaString("leveltype"),
),
0x0a: grounded,
0x0b: Struct("position",
position,
grounded
),
0x0c: Struct("orientation",
orientation,
grounded
),
# TODO: Differ between client and server 'position'
0x0d: Struct("location",
position,
orientation,
grounded
),
0x0e: Struct("digging",
Enum(UBInt8("state"),
started=0,
cancelled=1,
stopped=2,
checked=3,
dropped=4,
# Also eating
shooting=5,
),
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
face,
),
0x0f: Struct("build",
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
face,
Embed(items),
UBInt8("cursorx"),
UBInt8("cursory"),
UBInt8("cursorz"),
),
# Hold Item Change
0x10: Struct("equip",
# Only 0-8
UBInt16("slot"),
),
0x11: Struct("bed",
UBInt32("eid"),
UBInt8("unknown"),
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
),
0x12: Struct("animate",
UBInt32("eid"),
Enum(UBInt8("animation"),
noop=0,
arm=1,
hit=2,
leave_bed=3,
eat=5,
unknown=102,
crouch=104,
uncrouch=105,
),
),
0x13: Struct("action",
UBInt32("eid"),
Enum(UBInt8("action"),
crouch=1,
uncrouch=2,
leave_bed=3,
start_sprint=4,
stop_sprint=5,
),
),
0x14: Struct("player",
UBInt32("eid"),
AlphaString("username"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
UBInt8("yaw"),
UBInt8("pitch"),
# 0 For none, unlike other packets
# -1 crashes clients
SBInt16("item"),
metadata,
),
- # Spawn Dropped Item
- # TODO: Removed in #61!!! Find out how to spawn items.
- 0x15: Struct("pickup",
- UBInt32("eid"),
- Embed(items),
- SBInt32("x"),
- SBInt32("y"),
- SBInt32("z"),
- UBInt8("yaw"),
- UBInt8("pitch"),
- UBInt8("roll"),
- ),
0x16: Struct("collect",
UBInt32("eid"),
UBInt32("destination"),
),
# Object/Vehicle
0x17: Struct("object", # XXX: was 'vehicle'!
UBInt32("eid"),
Enum(UBInt8("type"), # See http://wiki.vg/Entities#Objects
boat=1,
item_stack=2,
minecart=10,
storage_cart=11,
powered_cart=12,
tnt=50,
ender_crystal=51,
arrow=60,
snowball=61,
egg=62,
thrown_enderpearl=65,
wither_skull=66,
falling_block=70,
frames=71,
ender_eye=72,
thrown_potion=73,
dragon_egg=74,
thrown_xp_bottle=75,
fishing_float=90,
),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
UBInt8("pitch"),
UBInt8("yaw"),
SBInt32("data"), # See http://www.wiki.vg/Object_Data
If(lambda context: context["data"] != 0,
Struct("speed",
SBInt16("x"),
SBInt16("y"),
SBInt16("z"),
)
),
),
0x18: Struct("mob",
UBInt32("eid"),
Enum(UBInt8("type"), **{
"Creeper": 50,
"Skeleton": 51,
"Spider": 52,
"GiantZombie": 53,
"Zombie": 54,
"Slime": 55,
"Ghast": 56,
"ZombiePig": 57,
"Enderman": 58,
"CaveSpider": 59,
"Silverfish": 60,
"Blaze": 61,
"MagmaCube": 62,
"EnderDragon": 63,
"Wither": 64,
"Bat": 65,
"Witch": 66,
"Pig": 90,
"Sheep": 91,
"Cow": 92,
"Chicken": 93,
"Squid": 94,
"Wolf": 95,
"Mooshroom": 96,
"Snowman": 97,
"Ocelot": 98,
"IronGolem": 99,
"Villager": 120
}),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
SBInt8("yaw"),
SBInt8("pitch"),
SBInt8("head_yaw"),
SBInt16("vx"),
SBInt16("vy"),
SBInt16("vz"),
metadata,
),
0x19: Struct("painting",
UBInt32("eid"),
AlphaString("title"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
face,
),
0x1a: Struct("experience",
UBInt32("eid"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
UBInt16("quantity"),
),
0x1c: Struct("velocity",
UBInt32("eid"),
SBInt16("dx"),
SBInt16("dy"),
SBInt16("dz"),
),
0x1d: Struct("destroy",
UBInt8("count"),
MetaArray(lambda context: context["count"], UBInt32("eid")),
),
0x1e: Struct("create",
UBInt32("eid"),
),
0x1f: Struct("entity-position",
UBInt32("eid"),
SBInt8("dx"),
SBInt8("dy"),
SBInt8("dz")
),
0x20: Struct("entity-orientation",
UBInt32("eid"),
UBInt8("yaw"),
UBInt8("pitch")
),
0x21: Struct("entity-location",
UBInt32("eid"),
SBInt8("dx"),
SBInt8("dy"),
SBInt8("dz"),
UBInt8("yaw"),
UBInt8("pitch")
),
0x22: Struct("teleport",
UBInt32("eid"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
UBInt8("yaw"),
UBInt8("pitch"),
),
0x23: Struct("entity-head",
UBInt32("eid"),
UBInt8("yaw"),
),
0x26: Struct("status",
UBInt32("eid"),
Enum(UBInt8("status"),
damaged=2,
killed=3,
taming=6,
tamed=7,
drying=8,
eating=9,
sheep_eat=10,
golem_rose=11,
heart_particle=12,
angry_particle=13,
happy_particle=14,
magic_particle=15,
shaking=16,
- firework=17
+ firework=17,
),
),
0x27: Struct("attach",
UBInt32("eid"),
# -1 for detaching
UBInt32("vid"),
),
0x28: Struct("metadata",
UBInt32("eid"),
metadata,
),
0x29: Struct("effect",
UBInt32("eid"),
effect,
UBInt8("amount"),
UBInt16("duration"),
),
0x2a: Struct("uneffect",
UBInt32("eid"),
effect,
),
0x2b: Struct("levelup",
BFloat32("current"),
UBInt16("level"),
UBInt16("total"),
),
0x33: Struct("chunk",
SBInt32("x"),
SBInt32("z"),
Bool("continuous"),
UBInt16("primary"),
UBInt16("add"),
PascalString("data", length_field=UBInt32("length"), encoding="zlib"),
),
0x34: Struct("batch",
SBInt32("x"),
SBInt32("z"),
UBInt16("count"),
PascalString("data", length_field=UBInt32("length")),
),
0x35: Struct("block",
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
UBInt16("type"),
UBInt8("meta"),
),
# XXX This covers general tile actions, not just note blocks.
# TODO: Needs work
0x36: Struct("block-action",
SBInt32("x"),
SBInt16("y"),
SBInt32("z"),
UBInt8("byte1"),
UBInt8("byte2"),
UBInt16("blockid"),
),
0x37: Struct("block-break-anim",
UBInt32("eid"),
UBInt32("x"),
UBInt32("y"),
UBInt32("z"),
UBInt8("stage"),
),
# XXX Server -> Client. Use 0x33 instead.
0x38: Struct("bulk-chunk",
UBInt16("count"),
UBInt32("length"),
UBInt8("sky_light"),
MetaField("data", lambda ctx: ctx["length"]),
MetaArray(lambda context: context["count"],
Struct("metadata",
UBInt32("chunk_x"),
UBInt32("chunk_z"),
UBInt16("bitmap_primary"),
UBInt16("bitmap_secondary"),
)
)
),
# TODO: Needs work?
0x3c: Struct("explosion",
BFloat64("x"),
BFloat64("y"),
BFloat64("z"),
BFloat32("radius"),
UBInt32("count"),
MetaField("blocks", lambda context: context["count"] * 3),
BFloat32("motionx"),
BFloat32("motiony"),
BFloat32("motionz"),
),
0x3d: Struct("sound",
Enum(UBInt32("sid"),
click2=1000,
click1=1001,
bow_fire=1002,
door_toggle=1003,
extinguish=1004,
record_play=1005,
charge=1007,
fireball=1008,
zombie_wood=1010,
zombie_metal=1011,
zombie_break=1012,
wither=1013,
smoke=2000,
block_break=2001,
splash_potion=2002,
ender_eye=2003,
blaze=2004,
),
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
UBInt32("data"),
Bool("volume-mod"),
),
0x3e: Struct("named-sound",
AlphaString("name"),
UBInt32("x"),
UBInt32("y"),
UBInt32("z"),
BFloat32("volume"),
UBInt8("pitch"),
),
0x3f: Struct("particle",
AlphaString("name"),
BFloat32("x"),
BFloat32("y"),
BFloat32("z"),
BFloat32("x_offset"),
BFloat32("y_offset"),
BFloat32("z_offset"),
BFloat32("speed"),
UBInt32("count"),
),
0x46: Struct("state",
Enum(UBInt8("state"),
bad_bed=0,
start_rain=1,
stop_rain=2,
mode_change=3,
run_credits=4,
),
mode,
),
0x47: Struct("thunderbolt",
UBInt32("eid"),
UBInt8("gid"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
),
0x64: Struct("window-open",
UBInt8("wid"),
Enum(UBInt8("type"),
chest=0,
workbench=1,
furnace=2,
dispenser=3,
enchantment_table=4,
brewing_stand=5,
npc_trade=6,
beacon=7,
anvil=8,
- hopper=9
+ hopper=9,
),
AlphaString("title"),
UBInt8("slots"),
UBInt8("use_title"),
),
0x65: Struct("window-close",
UBInt8("wid"),
),
0x66: Struct("window-action",
UBInt8("wid"),
UBInt16("slot"),
UBInt8("button"),
UBInt16("token"),
UBInt8("shift"), # TODO: rename to 'mode'
Embed(items),
),
0x67: Struct("window-slot",
UBInt8("wid"),
UBInt16("slot"),
Embed(items),
),
0x68: Struct("inventory",
UBInt8("wid"),
UBInt16("length"),
MetaArray(lambda context: context["length"], items),
),
0x69: Struct("window-progress",
UBInt8("wid"),
UBInt16("bar"),
UBInt16("progress"),
),
0x6a: Struct("window-token",
UBInt8("wid"),
UBInt16("token"),
Bool("acknowledged"),
),
0x6b: Struct("window-creative",
UBInt16("slot"),
Embed(items),
),
0x6c: Struct("enchant",
UBInt8("wid"),
UBInt8("enchantment"),
),
0x82: Struct("sign",
SBInt32("x"),
UBInt16("y"),
SBInt32("z"),
AlphaString("line1"),
AlphaString("line2"),
AlphaString("line3"),
AlphaString("line4"),
),
0x83: Struct("map",
UBInt16("type"),
UBInt16("itemid"),
PascalString("data", length_field=UBInt16("length")),
),
0x84: Struct("tile-update",
SBInt32("x"),
UBInt16("y"),
SBInt32("z"),
UBInt8("action"),
PascalString("nbt_data", length_field=UBInt16("length")), # gzipped
),
0xc8: Struct("statistics",
UBInt32("sid"), # XXX I could be an Enum
UBInt8("count"),
),
0xc9: Struct("players",
AlphaString("name"),
Bool("online"),
UBInt16("ping"),
),
0xca: Struct("abilities",
UBInt8("flags"),
UBInt8("fly-speed"),
UBInt8("walk-speed"),
),
0xcb: Struct("tab",
AlphaString("autocomplete"),
),
0xcc: Struct("settings",
AlphaString("locale"),
UBInt8("distance"),
UBInt8("chat"),
difficulty,
Bool("cape"),
),
0xcd: Struct("statuses",
UBInt8("payload")
),
0xce: Struct("score_item",
AlphaString("name"),
AlphaString("value"),
Enum(UBInt8("action"),
create=0,
remove=1,
update=2,
),
),
0xcf: Struct("score_update",
AlphaString("item_name"),
UBInt8("remove"),
If(lambda context: context["remove"] == 0,
Embed(Struct("information",
AlphaString("score_name"),
UBInt32("value"),
))
),
),
0xd0: Struct("score_display",
Enum(UBInt8("position"),
as_list=0,
sidebar=1,
- below_name=2
+ below_name=2,
),
AlphaString("score_name"),
),
0xd1: Struct("teams",
AlphaString("name"),
Enum(UBInt8("mode"),
team_created=0,
team_removed=1,
team_updated=2,
players_added=3,
players_removed=4,
),
If(lambda context: context["mode"] in ("team_created", "team_updated"),
Embed(Struct("team_info",
AlphaString("team_name"),
AlphaString("team_prefix"),
AlphaString("team_suffix"),
Enum(UBInt8("friendly_fire"),
off=0,
on=1,
invisibles=2,
),
))
),
If(lambda context: context["mode"] in ("team_created", "players_added", "players_removed"),
Embed(Struct("players_info",
UBInt16("count"),
MetaArray(lambda context: context["count"], AlphaString("player_names")),
))
),
),
0xfa: Struct("plugin-message",
AlphaString("channel"),
PascalString("data", length_field=UBInt16("length")),
),
0xfc: Struct("key-response",
PascalString("key", length_field=UBInt16("key-len")),
PascalString("token", length_field=UBInt16("token-len")),
),
0xfd: Struct("key-request",
AlphaString("server"),
PascalString("key", length_field=UBInt16("key-len")),
PascalString("token", length_field=UBInt16("token-len")),
),
0xfe: Struct("poll", UBInt8("unused")),
# TODO: rename to 'kick'
0xff: Struct("error", AlphaString("message")),
}
packet_stream = Struct("packet_stream",
OptionalGreedyRange(
Struct("full_packet",
UBInt8("header"),
Switch("payload", lambda context: context["header"], packets),
),
),
OptionalGreedyRange(
UBInt8("leftovers"),
),
)
def parse_packets(bytestream):
"""
Opportunistically parse out as many packets as possible from a raw
bytestream.
Returns a tuple containing a list of unpacked packet containers, and any
leftover unparseable bytes.
"""
container = packet_stream.parse(bytestream)
l = [(i.header, i.payload) for i in container.full_packet]
leftovers = "".join(chr(i) for i in container.leftovers)
if DUMP_ALL_PACKETS:
for header, payload in l:
print "Parsed packet 0x%.2x" % header
print payload
return l, leftovers
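The opportunistic framing above can be modeled without construct. This is a hypothetical standalone Python 3 sketch (the fixed-size `SIZES` table and `parse_fixed` are invented for illustration, not this module's API): whole packets are consumed, and any trailing unknown or incomplete bytes are returned untouched as leftovers.

```python
# Hypothetical table of fixed payload sizes, standing in for the real
# construct-based packet definitions.
SIZES = {0x00: 4, 0xfe: 1}

def parse_fixed(buf):
    """Consume whole packets from buf; return (packets, leftovers)."""
    packets = []
    while buf:
        header = buf[0]
        size = SIZES.get(header)
        if size is None or len(buf) - 1 < size:
            break  # unknown or incomplete packet: keep bytes as leftovers
        packets.append((header, buf[1:1 + size]))
        buf = buf[1 + size:]
    return packets, buf
```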
incremental_packet_stream = Struct("incremental_packet_stream",
Struct("full_packet",
UBInt8("header"),
Switch("payload", lambda context: context["header"], packets),
),
OptionalGreedyRange(
UBInt8("leftovers"),
),
)
def parse_packets_incrementally(bytestream):
"""
Parse out packets one-by-one, yielding a tuple of packet header and packet
payload.
This function returns a generator.
This function will yield all valid packets in the bytestream up to the
first invalid packet.
:returns: a generator yielding tuples of headers and payloads
"""
while bytestream:
parsed = incremental_packet_stream.parse(bytestream)
header = parsed.full_packet.header
payload = parsed.full_packet.payload
bytestream = "".join(chr(i) for i in parsed.leftovers)
yield header, payload
packets_by_name = dict((v.name, k) for (k, v) in packets.iteritems())
def make_packet(packet, *args, **kwargs):
"""
Constructs a packet bytestream from a packet header and payload.
The payload should be passed as keyword arguments. Additional containers
or dictionaries to be added to the payload may be passed positionally, as
well.
"""
if packet not in packets_by_name:
print "Couldn't find packet name %s!" % packet
return ""
header = packets_by_name[packet]
for arg in args:
kwargs.update(dict(arg))
container = Container(**kwargs)
if DUMP_ALL_PACKETS:
print "Making packet <%s> (0x%.2x)" % (packet, header)
print container
payload = packets[header].build(container)
return chr(header) + payload
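The name-to-header reverse lookup that make_packet() relies on can be sketched with toy stand-ins; `builders` and `build_packet` here are hypothetical replacements for construct's `build()`, shown only to illustrate the dispatch shape.

```python
# Toy payload builders keyed by header byte; hypothetical, for
# illustration only.
builders = {0xfe: lambda **kw: bytes([kw["unused"]])}
names_to_headers = {"poll": 0xfe}

def build_packet(name, **payload):
    """Look up the header byte by packet name, then build the payload."""
    header = names_to_headers[name]
    return bytes([header]) + builders[header](**payload)
```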
def make_error_packet(message):
"""
Convenience method to generate an error packet bytestream.
"""
return make_packet("error", message=message)
diff --git a/bravo/beta/protocol.py b/bravo/beta/protocol.py
index fd9b41c..8b5a989 100644
--- a/bravo/beta/protocol.py
+++ b/bravo/beta/protocol.py
@@ -1,1493 +1,1498 @@
# vim: set fileencoding=utf8 :
from itertools import product, chain
from time import time
from urlparse import urlunparse
from twisted.internet import reactor
from twisted.internet.defer import (DeferredList, inlineCallbacks,
maybeDeferred, succeed)
-from twisted.internet.protocol import Protocol
+from twisted.internet.protocol import Protocol, connectionDone
from twisted.internet.task import cooperate, deferLater, LoopingCall
from twisted.internet.task import TaskDone, TaskFailed
from twisted.protocols.policies import TimeoutMixin
from twisted.python import log
from twisted.web.client import getPage
from bravo import version
from bravo.beta.structures import BuildData, Settings
from bravo.blocks import blocks, items
from bravo.chunk import CHUNK_HEIGHT
from bravo.entity import Sign
from bravo.errors import BetaClientError, BuildError
from bravo.ibravo import (IChatCommand, IPreBuildHook, IPostBuildHook,
IWindowOpenHook, IWindowClickHook, IWindowCloseHook,
IPreDigHook, IDigHook, ISignHook, IUseHook)
from bravo.infini.factory import InfiniClientFactory
from bravo.inventory.windows import InventoryWindow
from bravo.location import Location, Orientation, Position
from bravo.motd import get_motd
from bravo.beta.packets import parse_packets, make_packet, make_error_packet
from bravo.plugin import retrieve_plugins
from bravo.policy.dig import dig_policies
from bravo.utilities.coords import adjust_coords_for_face, split_coords
from bravo.utilities.chat import complete, username_alternatives
from bravo.utilities.maths import circling, clamp, sorted_by_distance
from bravo.utilities.temporal import timestamp_from_clock
# States of the protocol.
(STATE_UNAUTHENTICATED, STATE_AUTHENTICATED, STATE_LOCATED) = range(3)
SUPPORTED_PROTOCOL = 61
class BetaServerProtocol(object, Protocol, TimeoutMixin):
"""
The Minecraft Alpha/Beta server protocol.
This class is mostly designed to be a skeleton for featureful clients. It
tries hard to not step on the toes of potential subclasses.
"""
excess = ""
packet = None
state = STATE_UNAUTHENTICATED
buf = ""
parser = None
handler = None
player = None
username = None
settings = Settings("en_US", "normal")
motd = "Bravo Generic Beta Server"
_health = 20
_latency = 0
def __init__(self):
self.chunks = dict()
self.windows = {}
self.wid = 1
self.location = Location()
self.handlers = {
- 0: self.ping,
- 2: self.handshake,
- 3: self.chat,
- 7: self.use,
- 9: self.respawn,
- 10: self.grounded,
- 11: self.position,
- 12: self.orientation,
- 13: self.location_packet,
- 14: self.digging,
- 15: self.build,
- 16: self.equip,
- 18: self.animate,
- 19: self.action,
- 21: self.pickup,
- 101: self.wclose,
- 102: self.waction,
- 106: self.wacknowledge,
- 107: self.wcreative,
- 130: self.sign,
- 203: self.complete,
- 204: self.settings_packet,
- 254: self.poll,
- 255: self.quit,
+ 0x00: self.ping,
+ 0x02: self.handshake,
+ 0x03: self.chat,
+ 0x07: self.use,
+ 0x09: self.respawn,
+ 0x0a: self.grounded,
+ 0x0b: self.position,
+ 0x0c: self.orientation,
+ 0x0d: self.location_packet,
+ 0x0e: self.digging,
+ 0x0f: self.build,
+ 0x10: self.equip,
+ 0x12: self.animate,
+ 0x13: self.action,
+ 0x15: self.pickup,
+ 0x65: self.wclose,
+ 0x66: self.waction,
+ 0x6a: self.wacknowledge,
+ 0x6b: self.wcreative,
+ 0x82: self.sign,
+ 0xcb: self.complete,
+ 0xcc: self.settings_packet,
+ 0xfe: self.poll,
+ 0xff: self.quit,
}
self._ping_loop = LoopingCall(self.update_ping)
self.setTimeout(30)
# Low-level packet handlers
# Try not to hook these if possible, since they offer no convenient
# abstractions or protections.
def ping(self, container):
"""
Hook for ping packets.
By default, this hook will examine the timestamps on incoming pings,
and use them to estimate the current latency of the connected client.
"""
now = timestamp_from_clock(reactor)
then = container.pid
self.latency = now - then
def handshake(self, container):
"""
Hook for handshake packets.
Override this to customize how logins are handled. By default, this
method will only confirm that the negotiated wire protocol is the
correct version, copy data out of the packet and onto the protocol,
and then run the ``authenticated`` callback.
This method will call the ``pre_handshake`` method hook prior to
logging in the client.
"""
self.username = container.username
if container.protocol < SUPPORTED_PROTOCOL:
# Kick old clients.
self.error("This server doesn't support your ancient client.")
return
elif container.protocol > SUPPORTED_PROTOCOL:
# Kick new clients.
self.error("This server doesn't support your newfangled client.")
return
log.msg("Handshaking with client, protocol version %d" %
container.protocol)
if not self.pre_handshake():
log.msg("Pre-handshake hook failed; kicking client")
self.error("You failed the pre-handshake hook.")
return
players = min(self.factory.limitConnections, 20)
self.write_packet("login", eid=self.eid, leveltype="default",
mode=self.factory.mode,
dimension=self.factory.world.dimension,
difficulty="peaceful", unused=0, maxplayers=players)
self.authenticated()
def pre_handshake(self):
"""
Whether this client should be logged in.
"""
return True
def chat(self, container):
"""
Hook for chat packets.
"""
def use(self, container):
"""
Hook for use packets.
"""
def respawn(self, container):
"""
Hook for respawn packets.
"""
def grounded(self, container):
"""
Hook for grounded packets.
"""
self.location.grounded = bool(container.grounded)
def position(self, container):
"""
Hook for position packets.
"""
if self.state != STATE_LOCATED:
log.msg("Ignoring unlocated position!")
return
self.grounded(container.grounded)
old_position = self.location.pos
position = Position.from_player(container.position.x,
container.position.y, container.position.z)
altered = False
dx, dy, dz = old_position - position
if any(abs(d) >= 64 for d in (dx, dy, dz)):
# Whoa, slow down there, cowboy. You're moving too fast. We're
# gonna ignore this position change completely, because it's
# either bogus or ignoring a recent teleport.
altered = True
else:
self.location.pos = position
self.location.stance = container.position.stance
# Sanitize location. This handles safety boundaries, illegal stance,
# etc.
altered = self.location.clamp() or altered
# If, for any reason, our opinion on where the client should be
# located is different than theirs, force them to conform to our point
# of view.
if altered:
log.msg("Not updating bogus position!")
self.update_location()
# If our position actually changed, fire the position change hook.
if old_position != position:
self.position_changed()
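The movement sanity check above boils down to rejecting any single update that jumps 64 or more units on some axis; a minimal standalone sketch of that predicate (the name `plausible_move` is invented):

```python
def plausible_move(old, new, limit=64):
    """True if no axis jumps by `limit` or more units in one update."""
    return all(abs(a - b) < limit for a, b in zip(old, new))
```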
def orientation(self, container):
"""
Hook for orientation packets.
"""
self.grounded(container.grounded)
old_orientation = self.location.ori
orientation = Orientation.from_degs(container.orientation.rotation,
container.orientation.pitch)
self.location.ori = orientation
if old_orientation != orientation:
self.orientation_changed()
def location_packet(self, container):
"""
Hook for location packets.
"""
self.position(container)
self.orientation(container)
def digging(self, container):
"""
Hook for digging packets.
"""
def build(self, container):
"""
Hook for build packets.
"""
def equip(self, container):
"""
Hook for equip packets.
"""
def pickup(self, container):
"""
Hook for pickup packets.
"""
def animate(self, container):
"""
Hook for animate packets.
"""
def action(self, container):
"""
Hook for action packets.
"""
def wclose(self, container):
"""
Hook for wclose packets.
"""
def waction(self, container):
"""
Hook for waction packets.
"""
def wacknowledge(self, container):
"""
Hook for wacknowledge packets.
"""
def wcreative(self, container):
"""
Hook for creative inventory action packets.
"""
def sign(self, container):
"""
Hook for sign packets.
"""
def complete(self, container):
"""
Hook for tab-completion packets.
"""
def settings_packet(self, container):
"""
Hook for client settings packets.
"""
distance = ["far", "normal", "short", "tiny"][container.distance]
self.settings = Settings(container.locale, distance)
def poll(self, container):
"""
Hook for poll packets.
By default, queries the parent factory for some data, and relays it
in a specific format to the requester. The connection is then closed
at both ends. This functionality is used by Beta 1.8 clients to poll
servers for status.
"""
players = unicode(len(self.factory.protocols))
max_players = unicode(self.factory.limitConnections or 1000000)
data = [
u"§1",
unicode(SUPPORTED_PROTOCOL),
u"Bravo %s" % version,
self.motd,
players,
max_players,
]
response = u"\u0000".join(data)
self.error(response)
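The Beta 1.8 server-list ping response built above is just a NUL-joined field list sent back inside a kick packet; a standalone sketch, assuming the version string is preformatted (the helper name is invented):

```python
def ping_response(protocol, version, motd, players, max_players):
    """Build the NUL-joined field list returned for a server-list poll."""
    fields = [u"\u00a71", str(protocol), version, motd,
              str(players), str(max_players)]
    return u"\u0000".join(fields)
```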
def quit(self, container):
"""
Hook for quit packets.
By default, merely logs the quit message and drops the connection.
Even if the connection is not dropped, it will be lost anyway since
the client will close the connection. It's better to explicitly let it
go here than to have zombie protocols.
"""
log.msg("Client is quitting: %s" % container.message)
self.transport.loseConnection()
# Twisted-level data handlers and methods
# Please don't override these needlessly, as they are pretty solid and
# shouldn't need to be touched.
def dataReceived(self, data):
self.buf += data
packets, self.buf = parse_packets(self.buf)
if packets:
self.resetTimeout()
for header, payload in packets:
if header in self.handlers:
d = maybeDeferred(self.handlers[header], payload)
@d.addErrback
def eb(failure):
- log.err("Error while handling packet %d" % header)
+ log.err("Error while handling packet 0x%.2x" % header)
log.err(failure)
return None
else:
- log.err("Didn't handle parseable packet %d!" % header)
+ log.err("Didn't handle parseable packet 0x%.2x!" % header)
log.err(payload)
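dataReceived() above follows a buffer-then-dispatch pattern: append bytes, parse out whole packets, route each by header, and keep leftovers for the next call. A toy Python 3 model of that shape (the `Dispatcher` class and its parser callback are hypothetical, not this module's API):

```python
class Dispatcher(object):
    """Toy model of dataReceived(): buffer bytes, then dispatch packets."""

    def __init__(self, handlers):
        self.buf = b""
        self.handlers = handlers
        self.unhandled = []

    def feed(self, data, parser):
        # parser(buf) must return (list of (header, payload), leftovers).
        self.buf += data
        packets, self.buf = parser(self.buf)
        for header, payload in packets:
            if header in self.handlers:
                self.handlers[header](payload)
            else:
                self.unhandled.append(header)
```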
- def connectionLost(self, reason):
+ def connectionLost(self, reason=connectionDone):
if self._ping_loop.running:
self._ping_loop.stop()
def timeoutConnection(self):
self.error("Connection timed out")
# State-change callbacks
# Feel free to override these, but call them at some point.
def authenticated(self):
"""
Called when the client has successfully authenticated with the server.
"""
self.state = STATE_AUTHENTICATED
self._ping_loop.start(30)
# Event callbacks
# These are meant to be overriden.
def orientation_changed(self):
"""
Called when the client moves.
This callback is only for orientation, not position.
"""
pass
def position_changed(self):
"""
Called when the client moves.
This callback is only for position, not orientation.
"""
pass
# Convenience methods for consolidating code and expressing intent. I
# hear that these are occasionally useful. If a method in this section can
# be used, then *PLEASE* use it; not using it is the same as open-coding
# whatever you're doing, and only hurts in the long run.
def write_packet(self, header, **payload):
"""
Send a packet to the client.
"""
self.transport.write(make_packet(header, **payload))
def update_ping(self):
"""
Send a keepalive to the client.
"""
timestamp = timestamp_from_clock(reactor)
self.write_packet("ping", pid=timestamp)
def update_location(self):
"""
Send this client's location to the client.
Also let other clients know where this client is.
"""
# Don't bother trying to update things if the position's not yet
# synchronized. We could end up jettisoning them into the void.
if self.state != STATE_LOCATED:
return
x, y, z = self.location.pos
yaw, pitch = self.location.ori.to_fracs()
# Inform everybody of our new location.
packet = make_packet("teleport", eid=self.player.eid, x=x, y=y, z=z,
yaw=yaw, pitch=pitch)
self.factory.broadcast_for_others(packet, self)
# Inform ourselves of our new location.
packet = self.location.save_to_packet()
self.transport.write(packet)
def ascend(self, count):
"""
Ascend to the next XZ-plane.
``count`` is the number of ascensions to perform, and may be zero in
order to force this player to not be standing inside a block.
:returns: bool of whether the ascension was successful
This client must be located for this method to have any effect.
"""
if self.state != STATE_LOCATED:
return False
x, y, z = self.location.pos.to_block()
bigx, smallx, bigz, smallz = split_coords(x, z)
chunk = self.chunks[bigx, bigz]
column = [chunk.get_block((smallx, i, smallz))
for i in range(CHUNK_HEIGHT)]
# Special case: Ascend at most once, if the current spot isn't good.
if count == 0:
if (not column[y]) or column[y + 1] or column[y + 2]:
# Yeah, we're gonna need to move.
count += 1
else:
# Nope, we're fine where we are.
return True
for i in xrange(y, 255):
# Find the next spot above us which has a platform and two empty
# blocks of air.
if column[i] and (not column[i + 1]) and not column[i + 2]:
count -= 1
if not count:
break
else:
return False
self.location.pos = self.location.pos._replace(y=i * 32)
return True
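The column scan ascend() performs looks for the next height with a solid platform and two air blocks above it; a standalone sketch of that search (the helper name is invented, and block heights are plain list indices here):

```python
def next_platform(column, y):
    """Lowest i >= y with a solid block and two air blocks above, or None."""
    for i in range(y, len(column) - 2):
        if column[i] and not column[i + 1] and not column[i + 2]:
            return i
    return None
```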
def error(self, message):
"""
Error out.
This method sends ``message`` to the client as a descriptive error
message, then closes the connection.
"""
log.msg("Error: %s" % message)
self.transport.write(make_error_packet(message))
self.transport.loseConnection()
def play_notes(self, notes):
"""
Play some music.
Send a sequence of notes to the player. ``notes`` is a finite iterable
of pairs of instruments and pitches.
There is no way to time notes; if staggered playback is desired (and
it usually is!), then ``play_notes()`` should be called repeatedly at
the appropriate times.
This method turns the block beneath the player into a note block,
plays the requested notes through it, then turns it back into the
original block, all without actually modifying the chunk.
"""
x, y, z = self.location.pos.to_block()
if y:
y -= 1
bigx, smallx, bigz, smallz = split_coords(x, z)
if (bigx, bigz) not in self.chunks:
return
block = self.chunks[bigx, bigz].get_block((smallx, y, smallz))
meta = self.chunks[bigx, bigz].get_metadata((smallx, y, smallz))
self.write_packet("block", x=x, y=y, z=z,
type=blocks["note-block"].slot, meta=0)
for instrument, pitch in notes:
self.write_packet("note", x=x, y=y, z=z, pitch=pitch,
instrument=instrument)
self.write_packet("block", x=x, y=y, z=z, type=block, meta=meta)
# Automatic properties. Assigning to them causes the client to be notified
# of changes.
@property
def health(self):
return self._health
@health.setter
def health(self, value):
if not 0 <= value <= 20:
raise BetaClientError("Invalid health value %d" % value)
if self._health != value:
self.write_packet("health", hp=value, fp=0, saturation=0)
self._health = value
@property
def latency(self):
return self._latency
@latency.setter
def latency(self, value):
# Clamp the value to not exceed the boundaries of the packet. This is
# necessary even though, in theory, a ping this high is bad news.
value = clamp(value, 0, 65535)
# Check to see if this is a new value, and if so, alert everybody.
if self._latency != value:
packet = make_packet("players", name=self.username, online=True,
ping=value)
self.factory.broadcast(packet)
self._latency = value
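The clamp() imported above presumably behaves like the usual min/max sandwich; a one-line sketch of that assumption, which is what keeps the latency value inside UBInt16 range:

```python
def clamp(value, low, high):
    """Clamp value into [low, high]."""
    return min(max(value, low), high)
```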
class KickedProtocol(BetaServerProtocol):
"""
A very simple Beta protocol that helps enforce IP bans, Max Connections,
and Max Connections Per IP.
This protocol disconnects people as soon as they connect, with a helpful
message.
"""
def __init__(self, reason=None):
+ BetaServerProtocol.__init__(self)
if reason:
self.reason = reason
else:
self.reason = (
"This server doesn't like you very much."
" I don't like you very much either.")
def connectionMade(self):
self.error("%s" % self.reason)
+
class BetaProxyProtocol(BetaServerProtocol):
"""
A ``BetaServerProtocol`` that proxies for an InfiniCraft client.
"""
gateway = "server.wiki.vg"
def handshake(self, container):
self.write_packet("handshake", username="-")
def login(self, container):
self.username = container.username
self.write_packet("login", protocol=0, username="", seed=0,
dimension="earth")
url = urlunparse(("http", self.gateway, "/node/0/0/", None, None,
None))
d = getPage(url)
d.addCallback(self.start_proxy)
def start_proxy(self, response):
log.msg("Response: %s" % response)
log.msg("Starting proxy...")
address, port = response.split(":")
self.add_node(address, int(port))
def add_node(self, address, port):
"""
Add a new node to this client.
"""
from twisted.internet.endpoints import TCP4ClientEndpoint
log.msg("Adding node %s:%d" % (address, port))
endpoint = TCP4ClientEndpoint(reactor, address, port, 5)
self.node_factory = InfiniClientFactory()
d = endpoint.connect(self.node_factory)
d.addCallback(self.node_connected)
d.addErrback(self.node_connect_error)
def node_connected(self, protocol):
log.msg("Connected new node!")
def node_connect_error(self, reason):
log.err("Couldn't connect node!")
log.err(reason)
class BravoProtocol(BetaServerProtocol):
"""
A ``BetaServerProtocol`` suitable for serving MC worlds to clients.
This protocol really does need to be hooked up with a ``BravoFactory`` or
something very much like it.
"""
chunk_tasks = None
time_loop = None
eid = 0
last_dig = None
def __init__(self, config, name):
BetaServerProtocol.__init__(self)
self.config = config
self.config_name = "world %s" % name
# Retrieve the MOTD. Only needs to be done once.
self.motd = self.config.getdefault(self.config_name, "motd",
"BravoServer")
def register_hooks(self):
log.msg("Registering client hooks...")
plugin_types = {
"open_hooks": IWindowOpenHook,
"click_hooks": IWindowClickHook,
"close_hooks": IWindowCloseHook,
"pre_build_hooks": IPreBuildHook,
"post_build_hooks": IPostBuildHook,
"pre_dig_hooks": IPreDigHook,
"dig_hooks": IDigHook,
"sign_hooks": ISignHook,
"use_hooks": IUseHook,
}
for t in plugin_types:
setattr(self, t, getattr(self.factory, t))
log.msg("Registering policies...")
if self.factory.mode == "creative":
self.dig_policy = dig_policies["speedy"]
else:
self.dig_policy = dig_policies["notchy"]
log.msg("Registered client plugin hooks!")
def pre_handshake(self):
"""
Set up username and get going.
"""
-
if self.username in self.factory.protocols:
# This username's already taken; find a new one.
for name in username_alternatives(self.username):
if name not in self.factory.protocols:
self.username = name
break
else:
self.error("Your username is already taken.")
return False
return True
@inlineCallbacks
def authenticated(self):
BetaServerProtocol.authenticated(self)
# Init player, and copy data into it.
self.player = yield self.factory.world.load_player(self.username)
self.player.eid = self.eid
self.location = self.player.location
# Init players' inventory window.
self.inventory = InventoryWindow(self.player.inventory)
# *Now* we are in our factory's list of protocols. Be aware.
self.factory.protocols[self.username] = self
# Announce our presence.
self.factory.chat("%s is joining the game..." % self.username)
packet = make_packet("players", name=self.username, online=True,
ping=0)
self.factory.broadcast(packet)
# Craft our avatar and send it to already-connected other players.
packet = make_packet("create", eid=self.player.eid)
packet += self.player.save_to_packet()
self.factory.broadcast_for_others(packet, self)
# And of course spawn all of those players' avatars in our client as
# well.
for protocol in self.factory.protocols.itervalues():
# Skip over ourselves; otherwise, the client tweaks out and
# usually either dies or locks up.
if protocol is self:
continue
self.write_packet("create", eid=protocol.player.eid)
packet = protocol.player.save_to_packet()
packet += protocol.player.save_equipment_to_packet()
self.transport.write(packet)
# Send spawn and inventory.
spawn = self.factory.world.level.spawn
packet = make_packet("spawn", x=spawn[0], y=spawn[1], z=spawn[2])
packet += self.inventory.save_to_packet()
self.transport.write(packet)
+ # TODO: Send Abilities (0xca)
+ # TODO: Update Health (0x08)
+ # TODO: Update Experience (0x2b)
+
# Send weather.
self.transport.write(self.factory.vane.make_packet())
self.send_initial_chunk_and_location()
self.time_loop = LoopingCall(self.update_time)
self.time_loop.start(10)
def orientation_changed(self):
# Bang your head!
yaw, pitch = self.location.ori.to_fracs()
packet = make_packet("entity-orientation", eid=self.player.eid,
yaw=yaw, pitch=pitch)
self.factory.broadcast_for_others(packet, self)
def position_changed(self):
# Send chunks.
self.update_chunks()
for entity in self.entities_near(2):
if entity.name != "Item":
continue
left = self.player.inventory.add(entity.item, entity.quantity)
if left != entity.quantity:
if left != 0:
# partial collect
entity.quantity = left
else:
packet = make_packet("collect", eid=entity.eid,
destination=self.player.eid)
packet += make_packet("destroy", count=1, eid=[entity.eid])
self.factory.broadcast(packet)
self.factory.destroy_entity(entity)
packet = self.inventory.save_to_packet()
self.transport.write(packet)
def entities_near(self, radius):
"""
Obtain the entities within a radius of this player.
Radius is measured in blocks.
"""
chunk_radius = int(radius // 16 + 1)
chunkx, chunkz = self.location.pos.to_chunk()
minx = chunkx - chunk_radius
maxx = chunkx + chunk_radius + 1
minz = chunkz - chunk_radius
maxz = chunkz + chunk_radius + 1
for x, z in product(xrange(minx, maxx), xrange(minz, maxz)):
if (x, z) not in self.chunks:
continue
chunk = self.chunks[x, z]
yieldables = [entity for entity in chunk.entities
if self.location.distance(entity.location) <= (radius * 32)]
for i in yieldables:
yield i
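The chunk-window arithmetic entities_near() uses widens a block radius into a square of chunk coordinates to scan; a standalone sketch of just that step (the helper name is invented):

```python
from itertools import product

def chunk_window(chunkx, chunkz, radius):
    """Chunk coords whose blocks might lie within `radius` blocks."""
    r = int(radius // 16 + 1)
    return list(product(range(chunkx - r, chunkx + r + 1),
                        range(chunkz - r, chunkz + r + 1)))
```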
def chat(self, container):
if container.message.startswith("/"):
pp = {"factory": self.factory}
commands = retrieve_plugins(IChatCommand, factory=self.factory)
# Register aliases.
for plugin in commands.values():
for alias in plugin.aliases:
commands[alias] = plugin
params = container.message[1:].split(" ")
command = params.pop(0).lower()
if command and command in commands:
def cb(iterable):
for line in iterable:
self.write_packet("chat", message=line)
def eb(error):
self.write_packet("chat", message="Error: %s" %
error.getErrorMessage())
d = maybeDeferred(commands[command].chat_command,
self.username, params)
d.addCallback(cb)
d.addErrback(eb)
else:
self.write_packet("chat",
message="Unknown command: %s" % command)
else:
# Send the message up to the factory to be chatified.
message = "<%s> %s" % (self.username, container.message)
self.factory.chat(message)
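The slash-command dispatch in ``chat()`` above can be sketched standalone: strip the slash, split off the command word, and look it up in a registry where aliases simply point at the same plugin. (The dictionary values here are placeholder strings, not real plugin objects.)

```python
def dispatch(message, commands):
    """Minimal sketch of the slash-command dispatch used in chat()."""
    if not message.startswith("/"):
        return None  # plain chat, not a command
    params = message[1:].split(" ")
    command = params.pop(0).lower()
    if command and command in commands:
        return commands[command], params
    return "Unknown command: %s" % command

commands = {"give": "give-plugin"}
# Aliases are registered by pointing extra names at the same plugin.
commands["g"] = commands["give"]
```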
def use(self, container):
"""
For each entity in proximity (4 blocks), check whether it is the
target of this packet, and call all hooks that registered interest in
this entity type.
"""
nearby_players = self.factory.players_near(self.player, 4)
for entity in chain(self.entities_near(4), nearby_players):
if entity.eid == container.target:
for hook in self.use_hooks[entity.name]:
hook.use_hook(self.factory, self.player, entity,
container.button == 0)
break
@inlineCallbacks
def digging(self, container):
if container.x == -1 and container.z == -1 and container.y == 255:
# Lala-land dig packet. Discard it for now.
return
# Player drops currently holding item/block.
if (container.state == "dropped" and container.face == "-y" and
container.x == 0 and container.y == 0 and container.z == 0):
i = self.player.inventory
holding = i.holdables[self.player.equipped]
if holding:
primary, secondary, count = holding
if i.consume((primary, secondary), self.player.equipped):
dest = self.location.in_front_of(2)
coords = dest.pos._replace(y=dest.pos.y + 1)
self.factory.give(coords, (primary, secondary), 1)
# Re-send inventory.
packet = self.inventory.save_to_packet()
self.transport.write(packet)
# If no items in this slot are left, this player isn't
# holding an item anymore.
if i.holdables[self.player.equipped] is None:
packet = make_packet("entity-equipment",
eid=self.player.eid,
slot=0,
primary=65535,
count=1,
secondary=0
)
self.factory.broadcast_for_others(packet, self)
return
if container.state == "shooting":
self.shoot_arrow()
return
bigx, smallx, bigz, smallz = split_coords(container.x, container.z)
coords = smallx, container.y, smallz
try:
chunk = self.chunks[bigx, bigz]
except KeyError:
self.error("Couldn't dig in chunk (%d, %d)!" % (bigx, bigz))
return
block = chunk.get_block((smallx, container.y, smallz))
if container.state == "started":
# Run pre-dig hooks.
for hook in self.pre_dig_hooks:
cancel = yield maybeDeferred(hook.pre_dig_hook, self.player,
(container.x, container.y, container.z), block)
if cancel:
return
tool = self.player.inventory.holdables[self.player.equipped]
# Check to see whether we should break this block.
if self.dig_policy.is_1ko(block, tool):
self.run_dig_hooks(chunk, coords, blocks[block])
else:
# Set up a timer for breaking the block later.
dtime = time() + self.dig_policy.dig_time(block, tool)
self.last_dig = coords, block, dtime
elif container.state == "stopped":
# The client thinks it has broken a block. We shall see.
if not self.last_dig:
return
oldcoords, oldblock, dtime = self.last_dig
if oldcoords != coords or oldblock != block:
# Nope!
self.last_dig = None
return
dtime -= time()
# When enough time has elapsed, run the dig hooks.
d = deferLater(reactor, max(dtime, 0), self.run_dig_hooks, chunk,
coords, blocks[block])
d.addCallback(lambda none: setattr(self, "last_dig", None))
def run_dig_hooks(self, chunk, coords, block):
"""
Destroy a block and run the post-destroy dig hooks.
"""
x, y, z = coords
if block.breakable:
chunk.destroy(coords)
l = []
for hook in self.dig_hooks:
l.append(maybeDeferred(hook.dig_hook, chunk, x, y, z, block))
dl = DeferredList(l)
dl.addCallback(lambda none: self.factory.flush_chunk(chunk))
@inlineCallbacks
def build(self, container):
"""
Handle a build packet.
Several things must happen. First, the packet's contents need to be
examined to ensure that the packet is valid. A check is done to see if
the packet is opening a windowed object. If not, then a build is
run.
"""
# Is the target within our purview? We don't do a very strict
# containment check, but we *do* require that the chunk be loaded.
bigx, smallx, bigz, smallz = split_coords(container.x, container.z)
try:
chunk = self.chunks[bigx, bigz]
except KeyError:
self.error("Couldn't select in chunk (%d, %d)!" % (bigx, bigz))
return
target = blocks[chunk.get_block((smallx, container.y, smallz))]
# If it's a chest, hax.
if target.name == "chest":
from bravo.policy.windows import Chest
w = Chest()
self.windows[self.wid] = w
w.open()
self.write_packet("window-open", wid=self.wid, type=w.identifier,
title=w.title, slots=w.slots)
self.wid += 1
return
elif target.name == "workbench":
from bravo.policy.windows import Workbench
w = Workbench()
self.windows[self.wid] = w
w.open()
self.write_packet("window-open", wid=self.wid, type=w.identifier,
title=w.title, slots=w.slots)
self.wid += 1
return
# Try to open it first
for hook in self.open_hooks:
window = yield maybeDeferred(hook.open_hook, self, container,
chunk.get_block((smallx, container.y, smallz)))
if window:
self.write_packet("window-open", wid=window.wid,
type=window.identifier, title=window.title,
slots=window.slots_num)
packet = window.save_to_packet()
self.transport.write(packet)
# window opened
return
# Ignore clients that think -1 is placeable.
if container.primary == -1:
return
# Special case when face is "noop": Update the status of the currently
# held block rather than placing a new block.
if container.face == "noop":
return
# If the target block is vanishable, then adjust our aim accordingly.
if target.vanishes:
container.face = "+y"
container.y -= 1
if container.primary in blocks:
block = blocks[container.primary]
elif container.primary in items:
block = items[container.primary]
else:
log.err("Ignoring request to place unknown block %d" %
container.primary)
return
# Run pre-build hooks. These hooks are able to interrupt the build
# process.
builddata = BuildData(block, 0x0, container.x, container.y,
container.z, container.face)
for hook in self.pre_build_hooks:
cont, builddata, cancel = yield maybeDeferred(hook.pre_build_hook,
self.player, builddata)
if cancel:
# Flush damaged chunks.
for chunk in self.chunks.itervalues():
self.factory.flush_chunk(chunk)
return
if not cont:
break
# Run the build.
try:
yield maybeDeferred(self.run_build, builddata)
except BuildError:
return
newblock = builddata.block.slot
coords = adjust_coords_for_face(
(builddata.x, builddata.y, builddata.z), builddata.face)
# Run post-build hooks. These are merely callbacks which cannot
# interfere with the build process, largely because the build process
# already happened.
for hook in self.post_build_hooks:
yield maybeDeferred(hook.post_build_hook, self.player, coords,
builddata.block)
# Feed automatons.
for automaton in self.factory.automatons:
if newblock in automaton.blocks:
automaton.feed(coords)
# Re-send inventory.
# XXX this could be optimized if/when inventories track damage.
packet = self.inventory.save_to_packet()
self.transport.write(packet)
# Flush damaged chunks.
for chunk in self.chunks.itervalues():
self.factory.flush_chunk(chunk)
def run_build(self, builddata):
block, metadata, x, y, z, face = builddata
# Don't place items as blocks.
if block.slot not in blocks:
raise BuildError("Couldn't build item %r as block" % block)
# Check for orientable blocks.
if not metadata and block.orientable():
metadata = block.orientation(face)
if metadata is None:
# Oh, I guess we can't even place the block on this face.
raise BuildError("Couldn't orient block %r on face %s" %
(block, face))
# Make sure we can remove it from the inventory first.
if not self.player.inventory.consume((block.slot, 0),
self.player.equipped):
# Okay, first one was a bust; maybe we can consume the related
# block for dropping instead?
if not self.player.inventory.consume(block.drop,
self.player.equipped):
raise BuildError("Couldn't consume %r from inventory" % block)
# Offset coords according to face.
x, y, z = adjust_coords_for_face((x, y, z), face)
# Set the block and data.
dl = [self.factory.world.set_block((x, y, z), block.slot)]
if metadata:
dl.append(self.factory.world.set_metadata((x, y, z), metadata))
return DeferredList(dl)
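``run_build()`` relies on ``adjust_coords_for_face()`` to offset the build target onto the clicked face. A hypothetical version of that helper, assuming a simple per-face offset table (the real implementation lives in Bravo's utilities and may differ):

```python
# Hypothetical offset table: each face name maps to a unit offset.
FACE_OFFSETS = {
    "-x": (-1, 0, 0), "+x": (1, 0, 0),
    "-y": (0, -1, 0), "+y": (0, 1, 0),
    "-z": (0, 0, -1), "+z": (0, 0, 1),
}

def adjust_coords_for_face(coords, face):
    """Shift block coords one step in the direction of the given face."""
    dx, dy, dz = FACE_OFFSETS[face]
    x, y, z = coords
    return x + dx, y + dy, z + dz
```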
def equip(self, container):
self.player.equipped = container.slot
# Inform everyone about the item the player is holding now.
item = self.player.inventory.holdables[self.player.equipped]
if item is None:
# Empty slot. Use signed short -1.
primary, secondary = -1, 0
else:
primary, secondary, count = item
packet = make_packet("entity-equipment",
eid=self.player.eid,
slot=0,
primary=primary,
count=1,
secondary=secondary
)
self.factory.broadcast_for_others(packet, self)
def pickup(self, container):
self.factory.give((container.x, container.y, container.z),
(container.primary, container.secondary), container.count)
def animate(self, container):
# Broadcast the animation of the entity to everyone else. Only the
# swing-arm animation is sent by Notchian clients.
packet = make_packet("animate",
eid=self.player.eid,
animation=container.animation
)
self.factory.broadcast_for_others(packet, self)
def wclose(self, container):
wid = container.wid
if wid == 0:
# WID 0 is reserved for the client inventory.
pass
elif wid in self.windows:
w = self.windows.pop(wid)
w.close()
else:
self.error("WID %d doesn't exist." % wid)
def waction(self, container):
wid = container.wid
if wid in self.windows:
w = self.windows[wid]
result = w.action(container.slot, container.button,
container.token, container.shift,
container.primary)
self.write_packet("window-token", wid=wid, token=container.token,
acknowledged=result)
else:
self.error("WID %d doesn't exist." % wid)
def wcreative(self, container):
"""
A slot was altered in creative mode.
"""
# XXX Sometimes the container doesn't contain all of this information.
# What then?
applied = self.inventory.creative(container.slot, container.primary,
container.secondary, container.count)
if applied:
# Inform other players about changes to this player's equipment.
equipped_slot = self.player.equipped + 36
if container.slot == equipped_slot:
packet = make_packet("entity-equipment",
eid=self.player.eid,
# XXX why 0? why not the actual slot?
slot=0,
primary=container.primary,
count=1,
secondary=container.secondary,
)
self.factory.broadcast_for_others(packet, self)
def shoot_arrow(self):
# TODO 1. Create arrow entity: arrow = Arrow(self.factory, self.player)
# 2. Register within the factory: self.factory.register_entity(arrow)
# 3. Run it: arrow.run()
pass
def sign(self, container):
bigx, smallx, bigz, smallz = split_coords(container.x, container.z)
try:
chunk = self.chunks[bigx, bigz]
except KeyError:
self.error("Couldn't handle sign in chunk (%d, %d)!" % (bigx, bigz))
return
if (smallx, container.y, smallz) in chunk.tiles:
new = False
s = chunk.tiles[smallx, container.y, smallz]
else:
new = True
s = Sign(smallx, container.y, smallz)
chunk.tiles[smallx, container.y, smallz] = s
s.text1 = container.line1
s.text2 = container.line2
s.text3 = container.line3
s.text4 = container.line4
chunk.dirty = True
# The best part of a sign isn't making one, it's showing everybody
# else on the server that you did.
packet = make_packet("sign", container)
self.factory.broadcast_for_chunk(packet, bigx, bigz)
# Run sign hooks.
for hook in self.sign_hooks:
hook.sign_hook(self.factory, chunk, container.x, container.y,
container.z, [s.text1, s.text2, s.text3, s.text4], new)
def complete(self, container):
"""
Attempt to tab-complete user names.
"""
needle = container.autocomplete
usernames = self.factory.protocols.keys()
results = complete(needle, usernames)
self.write_packet("tab", autocomplete=results)
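The ``complete()`` helper used above can be sketched as case-insensitive prefix matching over the connected usernames. The NUL separator is an assumption about the tab packet's wire format; check Bravo's utilities for the real implementation.

```python
def complete(needle, haystack):
    """Sketch of tab-completion: return the names starting with the
    needle, joined with NUL as the tab packet is assumed to expect."""
    matches = [name for name in haystack
               if name.lower().startswith(needle.lower())]
    return u"\u0000".join(matches)
```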
def settings_packet(self, container):
"""
Acknowledge a change of settings and update chunk distance.
"""
super(BravoProtocol, self).settings_packet(container)
self.update_chunks()
def disable_chunk(self, x, z):
key = x, z
log.msg("Disabling chunk %d, %d" % key)
if key not in self.chunks:
log.msg("...But the chunk wasn't loaded!")
return
# Remove the chunk from cache.
chunk = self.chunks.pop(key)
eids = [e.eid for e in chunk.entities]
self.write_packet("destroy", count=len(eids), eid=eids)
# Clear chunk data on the client.
self.write_packet("chunk", x=x, z=z, continuous=False, primary=0x0,
add=0x0, data="")
def enable_chunk(self, x, z):
"""
Request a chunk.
This function will asynchronously obtain the chunk, and send it on the
wire.
:returns: `Deferred` that will be fired when the chunk is obtained,
with no arguments
"""
log.msg("Enabling chunk %d, %d" % (x, z))
if (x, z) in self.chunks:
log.msg("...But the chunk was already loaded!")
return succeed(None)
d = self.factory.world.request_chunk(x, z)
@d.addCallback
def cb(chunk):
self.chunks[x, z] = chunk
return chunk
d.addCallback(self.send_chunk)
return d
def send_chunk(self, chunk):
log.msg("Sending chunk %d, %d" % (chunk.x, chunk.z))
packet = chunk.save_to_packet()
self.transport.write(packet)
for entity in chunk.entities:
packet = entity.save_to_packet()
self.transport.write(packet)
for entity in chunk.tiles.itervalues():
if entity.name == "Sign":
packet = entity.save_to_packet()
self.transport.write(packet)
def send_initial_chunk_and_location(self):
"""
Send the initial chunks and location.
This method sends more than one chunk; circa Beta 1.2, nearly fifty
chunks had to be sent before the location could safely be sent.
"""
# Disable located hooks. We'll re-enable them at the end.
self.state = STATE_AUTHENTICATED
log.msg("Initial, position %d, %d, %d" % self.location.pos)
x, y, z = self.location.pos.to_block()
bigx, smallx, bigz, smallz = split_coords(x, z)
# Send the chunk that the player will stand on. The other chunks are
# not so important. There *used* to be a bug, circa Beta 1.2, that
# required lots of surrounding geometry to be present, but that's been
# fixed.
d = self.enable_chunk(bigx, bigz)
# What to do if we can't load a given chunk? Just kick 'em.
d.addErrback(lambda fail: self.error("Couldn't load a chunk... :c"))
# Don't dare send more chunks beyond the initial one until we've
# spawned. Once we've spawned, set our status to LOCATED and then
# update_location() will work.
@d.addCallback
def located(none):
self.state = STATE_LOCATED
# Ensure that we're above-ground.
self.ascend(0)
d.addCallback(lambda none: self.update_location())
d.addCallback(lambda none: self.position_changed())
# Send the MOTD.
if self.motd:
@d.addCallback
def motd(none):
self.write_packet("chat",
message=self.motd.replace("<tagline>", get_motd()))
# Finally, start the secondary chunk loop.
d.addCallback(lambda none: self.update_chunks())
def update_chunks(self):
# Don't send chunks unless we're located.
if self.state != STATE_LOCATED:
return
x, z = self.location.pos.to_chunk()
# These numbers come from a couple spots, including minecraftwiki, but
# I verified them experimentally using torches and pillars to mark
# distances on each setting. ~ C.
distances = {
"tiny": 2,
"short": 4,
"far": 16,
}
radius = distances.get(self.settings.distance, 8)
new = set(circling(x, z, radius))
old = set(self.chunks.iterkeys())
added = new - old
discarded = old - new
# Perhaps some explanation is in order.
# The cooperate() function iterates over the iterable it is fed,
# without tying up the reactor, by yielding after each iteration. The
# inner part of the generator expression generates all of the chunks
# around the currently needed chunk, and it sorts them by distance to
# the current chunk. The end result is that we load chunks one-by-one,
# nearest to furthest, without stalling other clients.
if self.chunk_tasks:
for task in self.chunk_tasks:
try:
task.stop()
except (TaskDone, TaskFailed):
pass
to_enable = sorted_by_distance(added, x, z)
self.chunk_tasks = [
cooperate(self.enable_chunk(i, j) for i, j in to_enable),
cooperate(self.disable_chunk(i, j) for i, j in discarded),
]
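The core of ``update_chunks()`` is plain set arithmetic over chunk coordinates: chunks to enable are those newly in range, chunks to disable are those no longer in range. A minimal sketch:

```python
def diff_chunks(current, wanted):
    """The set arithmetic behind update_chunks(): compare the chunks
    currently loaded against the chunks now in view distance."""
    added = wanted - current       # newly in range: enable these
    discarded = current - wanted   # out of range: disable these
    return added, discarded

added, discarded = diff_chunks({(0, 0), (0, 1)}, {(0, 1), (1, 1)})
```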
def update_time(self):
time = int(self.factory.time)
self.write_packet("time", timestamp=time, time=time % 24000)
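``update_time()`` sends both the absolute timestamp and the time of day; since a Minecraft day is 24000 ticks, the time of day is just the timestamp modulo 24000:

```python
def wrap_time(absolute_time):
    """Split a factory timestamp into the (timestamp, time-of-day)
    pair sent by update_time(). A Minecraft day is 24000 ticks."""
    t = int(absolute_time)
    return t, t % 24000
```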
- def connectionLost(self, reason):
+ def connectionLost(self, reason=connectionDone):
"""
Cleanup after a lost connection.
Most of the time, these connections are lost cleanly; we don't have
any cleanup to do in the unclean case since clients don't have any
kind of pending state which must be recovered.
Remember, the connection can be lost before identification and
authentication, so ``self.username`` and ``self.player`` can be None.
"""
if self.username and self.player:
self.factory.world.save_player(self.username, self.player)
if self.player:
self.factory.destroy_entity(self.player)
packet = make_packet("destroy", count=1, eid=[self.player.eid])
self.factory.broadcast(packet)
if self.username:
packet = make_packet("players", name=self.username, online=False,
ping=0)
self.factory.broadcast(packet)
self.factory.chat("%s has left the game." % self.username)
self.factory.teardown_protocol(self)
# We are now torn down. After this point, there will be no more
# factory stuff, just our own personal stuff.
del self.factory
if self.time_loop:
self.time_loop.stop()
if self.chunk_tasks:
for task in self.chunk_tasks:
try:
task.stop()
except (TaskDone, TaskFailed):
pass
diff --git a/bravo/entity.py b/bravo/entity.py
index 5641f2f..ba6212e 100644
--- a/bravo/entity.py
+++ b/bravo/entity.py
@@ -1,688 +1,700 @@
from random import uniform
from twisted.internet.task import LoopingCall
from twisted.python import log
from bravo.inventory import Inventory
from bravo.inventory.slots import ChestStorage, FurnaceStorage
from bravo.location import Location
-from bravo.beta.packets import make_packet
+from bravo.beta.packets import make_packet, Speed, Slot
from bravo.utilities.geometry import gen_close_point
from bravo.utilities.maths import clamp
from bravo.utilities.furnace import (furnace_recipes, furnace_on_off,
update_all_windows_slot, update_all_windows_progress)
from bravo.blocks import furnace_fuel, unstackable
class Entity(object):
"""
Class representing an entity.
Entities are simply dynamic in-game objects. Plain entities are not very
interesting.
"""
name = "Entity"
def __init__(self, location=None, eid=0, **kwargs):
"""
Create an entity.
This method calls super().
"""
super(Entity, self).__init__()
self.eid = eid
if location is None:
self.location = Location()
else:
self.location = location
def __repr__(self):
return "%s(eid=%d, location=%s)" % (self.name, self.eid, self.location)
__str__ = __repr__
class Player(Entity):
"""
A player entity.
"""
name = "Player"
def __init__(self, username="", **kwargs):
"""
Create a player.
This method calls super().
"""
super(Player, self).__init__(**kwargs)
self.username = username
self.inventory = Inventory()
self.equipped = 0
def __repr__(self):
return ("%s(eid=%d, location=%s, username=%s)" %
(self.name, self.eid, self.location, self.username))
__str__ = __repr__
def save_to_packet(self):
"""
Create a "player" packet representing this entity.
"""
yaw, pitch = self.location.ori.to_fracs()
x, y, z = self.location.pos
item = self.inventory.holdables[self.equipped]
if item is None:
item = 0
else:
item = item[0]
packet = make_packet("player", eid=self.eid, username=self.username,
x=x, y=y, z=z, yaw=yaw, pitch=pitch, item=item,
- metadata={})
+ # http://www.wiki.vg/Entities#Objects
+ metadata={
+ 0: ('byte', 0), # Flags
+ 1: ('short', 300), # Drowning counter
+ 8: ('int', 0), # Color of the bubbling effects
+ })
return packet
def save_equipment_to_packet(self):
"""
Creates packets that include the equipment of the player. Equipment
is the item the player holds and all 4 armor parts.
"""
packet = ""
slots = (self.inventory.holdables[self.equipped],
self.inventory.armor[3], self.inventory.armor[2],
self.inventory.armor[1], self.inventory.armor[0])
for slot, item in enumerate(slots):
if item is None:
continue
primary, secondary, count = item
packet += make_packet("entity-equipment", eid=self.eid, slot=slot,
primary=primary, secondary=secondary,
count=1)
return packet
class Painting(Entity):
"""
A painting on a wall.
"""
name = "Painting"
def __init__(self, face="+x", motive="", **kwargs):
"""
Create a painting.
This method calls super().
"""
super(Painting, self).__init__(**kwargs)
self.face = face
self.motive = motive
def save_to_packet(self):
"""
Create a "painting" packet representing this entity.
"""
x, y, z = self.location.pos
return make_packet("painting", eid=self.eid, title=self.motive, x=x,
y=y, z=z, face=self.face)
class Pickup(Entity):
"""
Class representing a dropped block or item.
For historical and sanity reasons, this class is called Pickup, even
though its entity name is "Item."
"""
name = "Item"
def __init__(self, item=(0, 0), quantity=1, **kwargs):
"""
Create a pickup.
This method calls super().
"""
super(Pickup, self).__init__(**kwargs)
self.item = item
self.quantity = quantity
def save_to_packet(self):
"""
Create a "pickup" packet representing this entity.
"""
x, y, z = self.location.pos
- return ""
-
- return make_packet("pickup", eid=self.eid, primary=self.item[0],
- secondary=self.item[1], count=self.quantity, x=x, y=y, z=z,
- yaw=0, pitch=0, roll=0)
+ packets = make_packet('object', eid=self.eid, type='item_stack',
+ x=x, y=y, z=z, yaw=0, pitch=0, data=1,
+ speed=Speed(0, 0, 0))
+
+ packets += make_packet('metadata', eid=self.eid,
+ # See http://www.wiki.vg/Entities#Objects
+ metadata={
+ 0: ('byte', 0), # Flags
+ 1: ('short', 300), # Drowning counter
+ 10: ('slot', Slot.fromItem(self.item, self.quantity))
+ })
+ return packets
class Mob(Entity):
"""
A creature.
"""
name = "Mob"
"""
The name of this mob.
Names are used to identify mobs during serialization, just like for all
other entities.
This mob might not be serialized if this name is not overridden.
"""
metadata = {0: ("byte", 0)}
def __init__(self, **kwargs):
"""
Create a mob.
This method calls super().
"""
self.loop = None
super(Mob, self).__init__(**kwargs)
self.manager = None
def update_metadata(self):
"""
Overrideable hook for general metadata updates.
This method is necessary because metadata generally only needs to be
updated prior to certain events, not necessarily in response to
external events.
This hook will always be called prior to saving this mob's data for
serialization or wire transfer.
"""
def run(self):
"""
Start this mob's update loop.
"""
# Save the current chunk coordinates of this mob. They will be used to
# track which chunk this mob belongs to.
self.chunk_coords = self.location.pos
self.loop = LoopingCall(self.update)
self.loop.start(.2)
def save_to_packet(self):
"""
Create a "mob" packet representing this entity.
"""
x, y, z = self.location.pos
yaw, pitch = self.location.ori.to_fracs()
# Update metadata from instance variables.
self.update_metadata()
return make_packet("mob", eid=self.eid, type=self.name, x=x, y=y, z=z,
yaw=yaw, pitch=pitch, head_yaw=yaw, vx=0, vy=0, vz=0,
metadata=self.metadata)
def save_location_to_packet(self):
x, y, z = self.location.pos
yaw, pitch = self.location.ori.to_fracs()
return make_packet("teleport", eid=self.eid, x=x, y=y, z=z, yaw=yaw,
pitch=pitch)
def update(self):
"""
Update this mob's location with respect to a factory.
"""
# XXX Discuss appropriate style with MAD
# XXX remarkably untested
player = self.manager.closest_player(self.location.pos, 16)
if player is None:
vector = (uniform(-.4,.4),
uniform(-.4,.4),
uniform(-.4,.4))
target = self.location.pos + vector
else:
target = player.location.pos
self_pos = self.location.pos
vector = gen_close_point(self_pos, target)
vector = (
clamp(vector[0], -0.4, 0.4),
clamp(vector[1], -0.4, 0.4),
clamp(vector[2], -0.4, 0.4),
)
new_position = self.location.pos + vector
new_theta = self.location.pos.heading(new_position)
self.location.ori = self.location.ori._replace(theta=new_theta)
# XXX explain these magic numbers please
can_go = self.manager.check_block_collision(self.location.pos,
(-10, 0, -10), (16, 32, 16))
if can_go:
self.slide = False
self.location.pos = new_position
self.manager.correct_origin_chunk(self)
self.manager.broadcast(self.save_location_to_packet())
else:
self.slide = self.manager.slide_vector(vector)
self.manager.broadcast(self.save_location_to_packet())
class Chuck(Mob):
"""
A cross between a duck and a chicken.
"""
name = "Chicken"
offsetlist = ((.5, 0, .5),
(-.5, 0, .5),
(.5, 0, -.5),
(-.5, 0, -.5))
class Cow(Mob):
"""
Large, four-legged milk containers.
"""
name = "Cow"
class Creeper(Mob):
"""
A creeper.
"""
name = "Creeper"
def __init__(self, aura=False, **kwargs):
"""
Create a creeper.
This method calls super()
"""
super(Creeper, self).__init__(**kwargs)
self.aura = aura
def update_metadata(self):
self.metadata = {
0: ("byte", 0),
17: ("byte", int(self.aura)),
}
class Ghast(Mob):
"""
A very melancholy ghost.
"""
name = "Ghast"
class GiantZombie(Mob):
"""
Like a regular zombie, but far larger.
"""
name = "GiantZombie"
class Pig(Mob):
"""
A provider of bacon and piggyback rides.
"""
name = "Pig"
def __init__(self, saddle=False, **kwargs):
"""
Create a pig.
This method calls super().
"""
super(Pig, self).__init__(**kwargs)
self.saddle = saddle
def update_metadata(self):
self.metadata = {
0: ("byte", 0),
16: ("byte", int(self.saddle)),
}
class ZombiePigman(Mob):
"""
A zombie pigman.
"""
name = "PigZombie"
class Sheep(Mob):
"""
A woolly mob.
"""
name = "Sheep"
def __init__(self, sheared=False, color=0, **kwargs):
"""
Create a sheep.
This method calls super().
"""
super(Sheep, self).__init__(**kwargs)
self.sheared = sheared
self.color = color
def update_metadata(self):
color = self.color
if self.sheared:
color |= 0x10
self.metadata = {
0: ("byte", 0),
16: ("byte", color),
}
class Skeleton(Mob):
"""
An archer skeleton.
"""
name = "Skeleton"
class Slime(Mob):
"""
A gelatinous blob.
"""
name = "Slime"
def __init__(self, size=1, **kwargs):
"""
Create a slime.
This method calls super().
"""
super(Slime, self).__init__(**kwargs)
self.size = size
def update_metadata(self):
self.metadata = {
0: ("byte", 0),
16: ("byte", self.size),
}
class Spider(Mob):
"""
A spider.
"""
name = "Spider"
class Squid(Mob):
"""
An aquatic source of ink.
"""
name = "Squid"
class Wolf(Mob):
"""
A wolf.
"""
name = "Wolf"
def __init__(self, owner=None, angry=False, sitting=False, **kwargs):
"""
Create a wolf.
This method calls super().
"""
super(Wolf, self).__init__(**kwargs)
self.owner = owner
self.angry = angry
self.sitting = sitting
def update_metadata(self):
flags = 0
if self.sitting:
flags |= 0x1
if self.angry:
flags |= 0x2
if self.owner:
flags |= 0x4
self.metadata = {
0: ("byte", 0),
16: ("byte", flags),
}
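The wolf's metadata byte packs three booleans into bit flags, as ``update_metadata()`` does above. The same packing, extracted as a standalone sketch:

```python
def wolf_flags(sitting, angry, tamed):
    """Pack the wolf state booleans into the single metadata byte."""
    flags = 0
    if sitting:
        flags |= 0x1
    if angry:
        flags |= 0x2
    if tamed:  # the owner field doubles as the "tamed" bit
        flags |= 0x4
    return flags
```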
class Zombie(Mob):
"""
A zombie.
"""
name = "Zombie"
offsetlist = ((-.5,0,-.5), (-.5,0,.5), (.5,0,-.5), (.5,0,.5), (-.5,1,-.5), (-.5,1,.5), (.5,1,-.5), (.5,1,.5),)
entities = dict((entity.name, entity)
for entity in (
Chuck,
Cow,
Creeper,
Ghast,
GiantZombie,
Painting,
Pickup,
Pig,
Player,
Sheep,
Skeleton,
Slime,
Spider,
Squid,
Wolf,
Zombie,
ZombiePigman,
)
)
class Tile(object):
"""
An entity that is also a block.
Or, perhaps more correctly, a block that is also an entity.
"""
name = "GenericTile"
def __init__(self, x, y, z):
self.x = x
self.y = y
self.z = z
def load_from_packet(self, container):
log.msg("%s doesn't know how to load from a packet!" % self.name)
def save_to_packet(self):
log.msg("%s doesn't know how to save to a packet!" % self.name)
return ""
class Chest(Tile):
"""
A tile that holds items.
"""
name = "Chest"
def __init__(self, *args, **kwargs):
super(Chest, self).__init__(*args, **kwargs)
self.inventory = ChestStorage()
class Furnace(Tile):
"""
A tile that converts items to other items, using specific items as fuel.
"""
name = "Furnace"
burntime = 0
cooktime = 0
running = False
def __init__(self, *args, **kwargs):
super(Furnace, self).__init__(*args, **kwargs)
self.inventory = FurnaceStorage()
self.burning = LoopingCall.withCount(self.burn)
def changed(self, factory, coords):
'''
Called by an outside event handler to inform the tile that its
contents were changed. If the furnace meets the requirements, this
method starts the ``burn`` process; ``burn`` stops the looping call
when it runs out of fuel or has nothing left to smelt.
We receive the furnace coords from the caller because the tile does
not know about its own chunk. If self.chunk were implemented, the
parameter could be removed and self.coords would be:
>>> self.coords = self.chunk.x, self.x, self.chunk.z, self.z, self.y
:param `BravoFactory` factory: The factory
:param tuple coords: (bigx, smallx, bigz, smallz, y) - coords of this furnace
'''
self.coords = coords
self.factory = factory
if not self.running:
if self.burntime != 0:
# This furnace was already burning, but not started. This
# usually means that the furnace was serialized while burning.
self.running = True
self.burn_max = self.burntime
self.burning.start(0.5)
elif self.has_fuel() and self.can_craft():
# This furnace could be burning, but isn't. Let's start it!
self.burntime = 0
self.cooktime = 0
self.burning.start(0.5)
def burn(self, ticks):
'''
The main furnace loop.
:param int ticks: number of furnace iterations to perform
'''
# Usually this is only one iteration, but if something blocks the
# server for a long period, we must process the skipped ticks.
# Note: progress bars will lag anyway.
if ticks > 1:
log.msg("Lag detected; processing %d skipped furnace ticks" % (ticks - 1))
for iteration in xrange(ticks):
# Craft items, if we can craft them.
if self.can_craft():
self.cooktime += 1
# Notchian time is ~9.25-9.50 sec.
if self.cooktime == 20:
# Looks like things were successfully crafted.
source = self.inventory.crafting[0]
product = furnace_recipes[source.primary]
self.inventory.crafting[0] = source.decrement()
if self.inventory.crafted[0] is None:
self.inventory.crafted[0] = product
else:
item = self.inventory.crafted[0]
self.inventory.crafted[0] = item.increment(product.quantity)
update_all_windows_slot(self.factory, self.coords, 0, self.inventory.crafting[0])
update_all_windows_slot(self.factory, self.coords, 2, self.inventory.crafted[0])
self.cooktime = 0
else:
self.cooktime = 0
# Consume fuel, if applicable.
if self.burntime == 0:
if self.has_fuel() and self.can_craft():
# We have fuel and stuff to craft, so burn a bit of fuel
# and craft some stuff.
fuel = self.inventory.fuel[0]
self.burntime = self.burn_max = furnace_fuel[fuel.primary]
self.inventory.fuel[0] = fuel.decrement()
if not self.running:
self.running = True
furnace_on_off(self.factory, self.coords, True)
update_all_windows_slot(self.factory, self.coords, 1, self.inventory.fuel[0])
else:
# We're finished burning. Turn ourselves off.
self.burning.stop()
self.running = False
furnace_on_off(self.factory, self.coords, False)
# Reset the cooking time, just because.
self.cooktime = 0
update_all_windows_progress(self.factory, self.coords, 0, 0)
return
self.burntime -= 1
# Update progress bars for the window.
# XXX magic numbers
cook_progress = 185 * self.cooktime / 19
burn_progress = 250 * self.burntime / self.burn_max
update_all_windows_progress(self.factory, self.coords, 0, cook_progress)
update_all_windows_progress(self.factory, self.coords, 1, burn_progress)
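The progress-bar scaling at the end of ``burn()`` maps the tick counters onto the client's bar units: the cook bar tops out at 185 after 19 ticks, and the fuel bar at 250 when freshly fueled. Extracted as a standalone sketch (integer division made explicit):

```python
def progress_bars(cooktime, burntime, burn_max):
    """Scale furnace counters into client progress-bar units,
    matching the magic numbers used in burn()."""
    cook_progress = 185 * cooktime // 19
    burn_progress = 250 * burntime // burn_max
    return cook_progress, burn_progress
```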
def has_fuel(self):
'''
Determine whether this furnace is fueled.
:returns: bool
'''
return (self.inventory.fuel[0] is not None and
diff --git a/bravo/tests/beta/test_packets.py b/bravo/tests/beta/test_packets.py
index 02b0bff..17079cb 100644
--- a/bravo/tests/beta/test_packets.py
+++ b/bravo/tests/beta/test_packets.py
@@ -1,81 +1,127 @@
from unittest import TestCase
-from bravo.beta.packets import simple, parse_packets
+from bravo.beta.packets import simple, parse_packets, make_packet
+from bravo.beta.packets import Speed, Slot, slot
class TestPacketBuilder(TestCase):
def setUp(self):
self.cls = simple("Test", ">BH", "unit, test")
def test_trivial(self):
pass
def test_parse_valid(self):
data = "\x2a\x00\x20"
result, offset = self.cls.parse(data, 0)
self.assertEqual(result.unit, 42)
self.assertEqual(result.test, 32)
self.assertEqual(offset, 3)
def test_parse_short(self):
data = "\x2a\x00"
result, offset = self.cls.parse(data, 0)
self.assertFalse(result)
self.assertEqual(offset, 1)
def test_parse_extra(self):
data = "\x2a\x00\x20\x00"
result, offset = self.cls.parse(data, 0)
self.assertEqual(result.unit, 42)
self.assertEqual(result.test, 32)
self.assertEqual(offset, 3)
def test_parse_offset(self):
data = "\x00\x2a\x00\x20"
result, offset = self.cls.parse(data, 1)
self.assertEqual(result.unit, 42)
self.assertEqual(result.test, 32)
self.assertEqual(offset, 4)
def test_build(self):
packet = self.cls(42, 32)
result = packet.build()
self.assertEqual(result, "\x2a\x00\x20")
-class TestParsePackets(TestCase):
+class TestSlot(TestCase):
+ """
+ http://www.wiki.vg/Slot_Data
+ """
+ pairs = [
+ ('\xff\xff', Slot()),
+ ('\x01\x16\x01\x00\x00\xff\xff', Slot(278)),
+ ('\x01\x16\x01\x00\x00\x00\x04\xCA\xFE\xBA\xBE', Slot(278, nbt='\xCA\xFE\xBA\xBE'))
+ ]
- def do_parse(self, raw):
- packets, leftover = parse_packets(raw)
+ def test_build(self):
+ for raw, obj in self.pairs:
+ self.assertEqual(raw, slot.build(obj))
+
+ def test_parse(self):
+ for raw, obj in self.pairs:
+ self.assertEqual(obj, slot.parse(raw))
+
+
+class TestParsePacketsBase(TestCase):
+ sample = {
+ 0x17: '\x17\x00\x00\x1f\xd6\x02\xff\xff\xed?\x00\x00\x08\x84\xff\xff\xfaD\x00\xed\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00',
+ 0x28: '(\x00\x00\x1f\xd6\x00\x00!\x01,\xaa\x01\x06\x02\x00\x00\xff\xff\x7f',
+ }
+
+
+class TestParsePackets(TestParsePacketsBase):
+
+ def do_parse(self, msg_id):
+ packets, leftover = parse_packets(self.sample[msg_id])
self.assertEqual(len(packets), 1, 'Message not parsed')
self.assertEqual(len(leftover), 0, 'Bytes left after parsing')
return packets[0][1]
def test_parse_0x17(self): # Add vehicle/object
# TODO: some fields don't match mc3p; check which one is wrong
- raw = '\x17\x00\x00\x1f\xd6\x02\xff\xff\xed?\x00\x00\x08\x84\xff\xff\xfaD\x00\xed\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00'
- msg = self.do_parse(raw)
+ msg = self.do_parse(0x17)
self.assertEqual(msg.eid, 8150)
self.assertEqual(msg.type, 'item_stack')
self.assertEqual(msg.x, -4801)
self.assertEqual(msg.y, 2180)
self.assertEqual(msg.z, -1468)
self.assertEqual(msg.pitch, 0)
self.assertEqual(msg.yaw, 237)
self.assertEqual(msg.data, 1)
self.assertEqual(msg.speed.x, 0)
self.assertEqual(msg.speed.y, 0)
self.assertEqual(msg.speed.z, 0)
def test_parse_0x28(self): # Entity metadata
- raw = '(\x00\x00\x1f\xd6\x00\x00!\x01,\xaa\x01\x06\x02\x00\x00\xff\xff\x7f'
- msg = self.do_parse(raw)
+ msg = self.do_parse(0x28)
self.assertEqual(msg.eid, 8150)
self.assertEqual(msg.metadata[0].type, 'byte')
self.assertEqual(msg.metadata[0].value, 0)
self.assertEqual(msg.metadata[1].type, 'short')
self.assertEqual(msg.metadata[1].value, 300)
self.assertEqual(msg.metadata[10].type, 'slot')
+ self.assertEqual(msg.metadata[10].value.item_id, 262)
self.assertEqual(msg.metadata[10].value.count, 2)
- self.assertEqual(msg.metadata[10].value.primary, 262)
- self.assertEqual(msg.metadata[10].value.secondary, 0)
+ self.assertEqual(msg.metadata[10].value.damage, 0)
+
+
+class TestBuildPackets(TestParsePacketsBase):
+
+ def check(self, msg_id, raw):
+ self.assertEqual(raw, self.sample[msg_id])
+
+ def test_build_0x17(self): # Add vehicle/object
+ self.check(0x17,
+ make_packet('object', eid=8150, type='item_stack',
+ x=-4801, y=2180, z=-1468,
+ pitch=0, yaw=237, data=1,
+ speed=Speed(0, 0, 0)))
+
+ def test_build_0x28(self): # Entity metadata
+ self.check(0x28,
+ make_packet('metadata', eid=8150, metadata={
+ 0: ('byte', 0),
+ 1: ('short', 300),
+ 10: ('slot', Slot(262, 2))
+ }))
|
bravoserver/bravo
|
8f424c4d3b62397647c30ed2c1a2f844d5ddc804
|
Location fixes: - creating items - collecting items in range
|
diff --git a/.gitignore b/.gitignore
index f8357a6..0d317aa 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,10 +1,11 @@
*.pyc
*.swp
bravo.ini
bravo.log
twistd.pid
dropin.cache
website/output/*
Bravo.egg-info/*
build/*
dist/*
+.idea/
\ No newline at end of file
diff --git a/bravo/beta/factory.py b/bravo/beta/factory.py
index 6744432..b5badc6 100644
--- a/bravo/beta/factory.py
+++ b/bravo/beta/factory.py
@@ -1,574 +1,573 @@
from collections import defaultdict
from itertools import chain, product
from twisted.internet import reactor
from twisted.internet.interfaces import IPushProducer
from twisted.internet.protocol import Factory
from twisted.internet.task import LoopingCall
from twisted.python import log
from zope.interface import implements
from bravo.beta.packets import make_packet
from bravo.beta.protocol import BravoProtocol, KickedProtocol
from bravo.entity import entities
from bravo.ibravo import (ISortedPlugin, IAutomaton, ITerrainGenerator,
IUseHook, ISignHook, IPreDigHook, IDigHook,
IPreBuildHook, IPostBuildHook, IWindowOpenHook,
IWindowClickHook, IWindowCloseHook)
from bravo.location import Location
from bravo.plugin import retrieve_named_plugins, retrieve_sorted_plugins
from bravo.policy.packs import packs as available_packs
from bravo.policy.seasons import Spring, Winter
from bravo.utilities.chat import chat_name, sanitize_chat
from bravo.weather import WeatherVane
from bravo.world import World
(STATE_UNAUTHENTICATED, STATE_CHALLENGED, STATE_AUTHENTICATED,
STATE_LOCATED) = range(4)
circle = [(i, j)
for i, j in product(xrange(-5, 5), xrange(-5, 5))
if i**2 + j**2 <= 25
]
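A note on the comprehension above: `xrange(-5, 5)` yields -5 through 4 inclusive, so the precomputed chunk circle is slightly asymmetric. A Python 3 sketch demonstrating this (names mirror the original):

```python
from itertools import product

# Mirror of the `circle` comprehension above; range(-5, 5) stops at 4,
# so offsets at +5 are never considered.
circle = [(i, j)
          for i, j in product(range(-5, 5), repeat=2)
          if i**2 + j**2 <= 25]

# The circle includes (-5, 0) but not its mirror (5, 0).
assert (-5, 0) in circle
assert (5, 0) not in circle
```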
class BravoFactory(Factory):
"""
A ``Factory`` that creates ``BravoProtocol`` objects when connected to.
"""
implements(IPushProducer)
protocol = BravoProtocol
timestamp = None
time = 0
day = 0
eid = 1
interfaces = []
def __init__(self, config, name):
"""
Create a factory and world.
``name`` is the string used to look up factory-specific settings from
the configuration.
:param str name: internal name of this factory
"""
self.name = name
self.config = config
self.config_name = "world %s" % name
self.world = World(self.config, self.name)
self.world.factory = self
self.protocols = dict()
self.connectedIPs = defaultdict(int)
self.mode = self.config.get(self.config_name, "mode")
if self.mode not in ("creative", "survival"):
raise Exception("Unsupported mode %s" % self.mode)
self.limitConnections = self.config.getintdefault(self.config_name,
"limitConnections",
0)
self.limitPerIP = self.config.getintdefault(self.config_name,
"limitPerIP", 0)
self.vane = WeatherVane(self)
def startFactory(self):
log.msg("Initializing factory for world '%s'..." % self.name)
# Get our plugins set up.
self.register_plugins()
log.msg("Starting world...")
self.world.start()
# Start up the permanent cache.
# has_option() is not exactly desirable, but it's appropriate here
# because we don't want to take any action if the key is unset.
if self.config.has_option(self.config_name, "perm_cache"):
cache_level = self.config.getint(self.config_name, "perm_cache")
self.world.enable_cache(cache_level)
log.msg("Starting timekeeping...")
self.timestamp = reactor.seconds()
self.time = self.world.level.time
self.update_season()
self.time_loop = LoopingCall(self.update_time)
self.time_loop.start(2)
log.msg("Starting entity updates...")
# Start automatons.
for automaton in self.automatons:
automaton.start()
self.chat_consumers = set()
log.msg("Factory successfully initialized for world '%s'!" % self.name)
def stopFactory(self):
"""
Called before factory stops listening on ports. Used to perform
shutdown tasks.
"""
log.msg("Shutting down world...")
# Stop automatons. Technically, they may not actually halt until their
# next iteration, but that is close enough for us, probably.
# Automatons are contracted to not access the world after stop() is
# called.
for automaton in self.automatons:
automaton.stop()
# Evict plugins as soon as possible. Can't be done before stopping
# automatons.
self.unregister_plugins()
self.time_loop.stop()
# Write back current world time. This must be done before stopping the
# world.
self.world.time = self.time
# And now stop the world.
self.world.stop()
log.msg("World data saved!")
def buildProtocol(self, addr):
"""
Create a protocol.
This overridden method provides early player entity registration, as a
solution to the username/entity race that occurs on login.
"""
banned = self.world.serializer.load_plugin_data("banned_ips")
# Do IP bans first.
for ip in banned.split():
if addr.host == ip:
# Use KickedProtocol with extreme prejudice.
log.msg("Kicking banned IP %s" % addr.host)
p = KickedProtocol("Sorry, but your IP address is banned.")
p.factory = self
return p
# We ignore limits less than 1, but otherwise make sure not to go over
# the connection limit.
if (self.limitConnections
and len(self.protocols) >= self.limitConnections):
log.msg("Reached maximum players, turning %s away." % addr.host)
p = KickedProtocol("The player limit has already been reached."
" Please try again later.")
p.factory = self
return p
# Do our connection-per-IP check.
if (self.limitPerIP and
self.connectedIPs[addr.host] >= self.limitPerIP):
log.msg("At maximum connections for %s already, dropping." % addr.host)
p = KickedProtocol("There are too many players connected from this IP.")
p.factory = self
return p
else:
self.connectedIPs[addr.host] += 1
# If the player wasn't kicked, let's continue!
log.msg("Starting connection for %s" % addr)
p = self.protocol(self.config, self.name)
p.host = addr.host
p.factory = self
self.register_entity(p)
# Copy our hooks to the protocol.
p.register_hooks()
return p
def teardown_protocol(self, protocol):
"""
Do internal bookkeeping on behalf of a protocol which has been
disconnected.
Did you know that "bookkeeping" is one of the few English words with
three consecutive pairs of double letters?
"""
username = protocol.username
host = protocol.host
if username in self.protocols:
del self.protocols[username]
self.connectedIPs[host] -= 1
def set_username(self, protocol, username):
"""
Attempt to set a new username for a protocol.
:returns: whether the username was changed
"""
# If the username's already taken, refuse it.
if username in self.protocols:
return False
if protocol.username in self.protocols:
# This protocol's known under another name, so remove it.
del self.protocols[protocol.username]
# Set the username.
self.protocols[username] = protocol
protocol.username = username
return True
def register_plugins(self):
"""
Setup plugin hooks.
"""
log.msg("Registering client plugin hooks...")
plugin_types = {
"automatons": IAutomaton,
"generators": ITerrainGenerator,
"open_hooks": IWindowOpenHook,
"click_hooks": IWindowClickHook,
"close_hooks": IWindowCloseHook,
"pre_build_hooks": IPreBuildHook,
"post_build_hooks": IPostBuildHook,
"pre_dig_hooks": IPreDigHook,
"dig_hooks": IDigHook,
"sign_hooks": ISignHook,
"use_hooks": IUseHook,
}
packs = self.config.getlistdefault(self.config_name, "packs", [])
try:
packs = [available_packs[pack] for pack in packs]
except KeyError, e:
raise Exception("Couldn't find plugin pack %s" % e.args)
for t, interface in plugin_types.iteritems():
l = self.config.getlistdefault(self.config_name, t, [])
# Grab extra plugins from the pack. Order doesn't really matter
# since the plugin loader sorts things anyway.
for pack in packs:
if t in pack:
l += pack[t]
# Hax. :T
if t == "generators":
plugins = retrieve_sorted_plugins(interface, l)
elif issubclass(interface, ISortedPlugin):
plugins = retrieve_sorted_plugins(interface, l, factory=self)
else:
plugins = retrieve_named_plugins(interface, l, factory=self)
log.msg("Using %s: %s" % (t.replace("_", " "),
", ".join(plugin.name for plugin in plugins)))
setattr(self, t, plugins)
# Deal with seasons.
seasons = self.config.getlistdefault(self.config_name, "seasons", [])
for pack in packs:
if "seasons" in pack:
seasons += pack["seasons"]
self.seasons = []
if "spring" in seasons:
self.seasons.append(Spring())
if "winter" in seasons:
self.seasons.append(Winter())
# Assign generators to the world pipeline.
self.world.pipeline = self.generators
# Use hooks have special funkiness.
uh = self.use_hooks
self.use_hooks = defaultdict(list)
for plugin in uh:
for target in plugin.targets:
self.use_hooks[target].append(plugin)
def unregister_plugins(self):
log.msg("Unregistering client plugin hooks...")
for name in [
"automatons",
"generators",
"open_hooks",
"click_hooks",
"close_hooks",
"pre_build_hooks",
"post_build_hooks",
"pre_dig_hooks",
"dig_hooks",
"sign_hooks",
"use_hooks",
]:
delattr(self, name)
def create_entity(self, x, y, z, name, **kwargs):
"""
Spawn an entirely new entity at the specified block coordinates.
Handles entity registration as well as instantiation.
"""
bigx = x // 16
bigz = z // 16
location = Location.at_block(x, y, z)
entity = entities[name](eid=0, location=location, **kwargs)
self.register_entity(entity)
d = self.world.request_chunk(bigx, bigz)
@d.addCallback
def cb(chunk):
chunk.entities.add(entity)
log.msg("Created entity %s" % entity)
# XXX Maybe just send the entity object to the manager instead of
# the following?
if hasattr(entity, 'loop'):
self.world.mob_manager.start_mob(entity)
return entity
def register_entity(self, entity):
"""
Registers an entity with this factory.
Registration is perhaps too fancy of a name; this method merely makes
sure that the entity has a unique and usable entity ID. In particular,
this method does *not* attach the entity to the world, or
advertise its existence.
"""
if not entity.eid:
self.eid += 1
entity.eid = self.eid
log.msg("Registered entity %s" % entity)
def destroy_entity(self, entity):
"""
Destroy an entity.
The factory doesn't have to know about entities, but it is a good
place to put this logic.
"""
- bigx = entity.location.pos.x // 16
- bigz = entity.location.pos.z // 16
+ bigx, bigz = entity.location.pos.to_chunk()
d = self.world.request_chunk(bigx, bigz)
@d.addCallback
def cb(chunk):
chunk.entities.discard(entity)
chunk.dirty = True
log.msg("Destroyed entity %s" % entity)
def update_time(self):
"""
Update the in-game timer.
The timer goes from 0 to 24000, both of which are high noon. The clock
increments by 20 every second. Days are 20 minutes long.
The day clock is incremented every in-game day, which is every 20
minutes. The day clock goes from 0 to 360, which works out to a reset
once every five real-world days. This is a Babylonian in-game year.
"""
t = reactor.seconds()
self.time += 20 * (t - self.timestamp)
self.timestamp = t
days, self.time = divmod(self.time, 24000)
if days:
self.day += days
self.day %= 360
self.update_season()
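The clock arithmetic described in the update_time() docstring can be sketched standalone; this is a hedged illustration of the math, not factory code:

```python
# Standalone sketch of the update_time() arithmetic: the clock gains 20
# units per elapsed real second, wraps at 24000, and the day counter
# wraps at 360.
def advance(time, day, elapsed_seconds):
    time += 20 * elapsed_seconds
    days, time = divmod(time, 24000)
    day = (day + days) % 360
    return time, day

# One in-game day is 24000 / 20 = 1200 real seconds, i.e. 20 minutes.
assert advance(0, 0, 1200) == (0, 1)
# After day 359, the day counter wraps back to 0.
assert advance(0, 359, 1200) == (0, 0)
```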
def broadcast_time(self):
packet = make_packet("time", timestamp=int(self.time))
self.broadcast(packet)
def update_season(self):
"""
Update the world's season.
"""
all_seasons = sorted(self.seasons, key=lambda s: s.day)
# Get all the seasons whose start dates we have already passed this year.
# We are looking for the season which is closest to our current day,
# without going over; I call this the Price-is-Right style of season
# handling. :3
past_seasons = [s for s in all_seasons if s.day <= self.day]
if past_seasons:
# The most recent one is the one we are in
self.world.season = past_seasons[-1]
elif all_seasons:
# We haven't passed any seasons yet this year, so grab the last one
# from 'last year'
self.world.season = all_seasons[-1]
else:
# No seasons enabled.
self.world.season = None
def chat(self, message):
"""
Relay chat messages.
Chat messages are sent to all connected clients, as well as to anybody
consuming this factory.
"""
for consumer in self.chat_consumers:
consumer.write((self, message))
# Prepare the message for chat packeting.
for user in self.protocols:
message = message.replace(user, chat_name(user))
message = sanitize_chat(message)
log.msg("Chat: %s" % message.encode("utf8"))
packet = make_packet("chat", message=message)
self.broadcast(packet)
def broadcast(self, packet):
"""
Broadcast a packet to all connected players.
"""
for player in self.protocols.itervalues():
player.transport.write(packet)
def broadcast_for_others(self, packet, protocol):
"""
Broadcast a packet to all players except the originating player.
Useful for certain packets like player entity spawns which should
never be reflexive.
"""
for player in self.protocols.itervalues():
if player is not protocol:
player.transport.write(packet)
def broadcast_for_chunk(self, packet, x, z):
"""
Broadcast a packet to all players that have a certain chunk loaded.
`x` and `z` are chunk coordinates, not block coordinates.
"""
for player in self.protocols.itervalues():
if (x, z) in player.chunks:
player.transport.write(packet)
def scan_chunk(self, chunk):
"""
Tell automatons about this chunk.
"""
# It's possible for there to be no automatons; this usually means that
# the factory is shutting down. We should be permissive and handle
# this case correctly.
if hasattr(self, "automatons"):
for automaton in self.automatons:
automaton.scan(chunk)
def flush_chunk(self, chunk):
"""
Flush a damaged chunk to all players that have it loaded.
"""
if chunk.is_damaged():
packet = chunk.get_damage_packet()
for player in self.protocols.itervalues():
if (chunk.x, chunk.z) in player.chunks:
player.transport.write(packet)
chunk.clear_damage()
def flush_all_chunks(self):
"""
Flush any damage anywhere in this world to all players.
This is a sledgehammer which should be used sparingly at best, and is
only well-suited to plugins which touch multiple chunks at once.
In other words, if I catch you using this in your plugin needlessly,
I'm gonna have a chat with you.
"""
for chunk in chain(self.world.chunk_cache.itervalues(),
self.world.dirty_chunk_cache.itervalues()):
self.flush_chunk(chunk)
def give(self, coords, block, quantity):
"""
Spawn a pickup at the specified coordinates.
The coordinates need to be in pixels, not blocks.
If the size of the stack is too big, multiple stacks will be dropped.
:param tuple coords: coordinates, in pixels
:param tuple block: key of block or item to drop
:param int quantity: number of blocks to drop in the stack
"""
x, y, z = coords
while quantity > 0:
entity = self.create_entity(x // 32, y // 32, z // 32, "Item",
item=block, quantity=min(quantity, 64))
packet = entity.save_to_packet()
packet += make_packet("create", eid=entity.eid)
self.broadcast(packet)
quantity -= 64
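The stack-splitting behaviour of give() can be isolated as a small sketch (a hypothetical helper, not part of the factory):

```python
# Isolated sketch of give()'s loop: quantities above 64 are dropped as
# multiple stacks of at most 64 each.
def split_stacks(quantity, max_stack=64):
    stacks = []
    while quantity > 0:
        stacks.append(min(quantity, max_stack))
        quantity -= max_stack
    return stacks

assert split_stacks(150) == [64, 64, 22]
assert split_stacks(64) == [64]
```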
def players_near(self, player, radius):
"""
Obtain other players within a radius of a given player.
Radius is measured in blocks.
"""
radius *= 32
for p in self.protocols.itervalues():
if p.player == player:
continue
distance = player.location.distance(p.location)
if distance <= radius:
yield p.player
def pauseProducing(self):
pass
def resumeProducing(self):
pass
def stopProducing(self):
pass
diff --git a/bravo/beta/packets.py b/bravo/beta/packets.py
index 7ac3f0a..c6f00ec 100644
--- a/bravo/beta/packets.py
+++ b/bravo/beta/packets.py
@@ -1,903 +1,904 @@
from collections import namedtuple
from construct import Struct, Container, Embed, Enum, MetaField
from construct import MetaArray, If, Switch, Const, Peek, Magic
from construct import OptionalGreedyRange, RepeatUntil
from construct import Flag, PascalString, Adapter
from construct import UBInt8, UBInt16, UBInt32, UBInt64
from construct import SBInt8, SBInt16, SBInt32
from construct import BFloat32, BFloat64
from construct import BitStruct, BitField
from construct import StringAdapter, LengthValueAdapter, Sequence
class IPacket(object):
"""
Interface for packets.
"""
def parse(buf, offset):
"""
Parse a packet out of the given buffer, starting at the given offset.
If the parse is successful, returns a tuple of the parsed packet and
the next packet offset in the buffer.
If the parse fails due to insufficient data, returns a tuple of None
and the amount of data required before the parse can be retried.
Exceptions may be raised if the parser finds invalid data.
"""
def simple(name, fmt, *args):
"""
Make a customized namedtuple representing a simple, primitive packet.
"""
from struct import Struct
s = Struct(fmt)
@classmethod
def parse(cls, buf, offset):
if len(buf) >= s.size + offset:
unpacked = s.unpack_from(buf, offset)
return cls(*unpacked), s.size + offset
else:
return None, s.size - len(buf)
def build(self):
return s.pack(*self)
methods = {
"parse": parse,
"build": build,
}
return type(name, (namedtuple(name, *args),), methods)
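The simple() factory above is exercised by TestPacketBuilder earlier in this diff; a Python 3 restatement of the same idea (bytes literals instead of the original Python 2 str, and a single `fields` string instead of `*args`) round-trips like so:

```python
from collections import namedtuple
from struct import Struct

# Python 3 sketch of simple(): a namedtuple subclass with parse()/build()
# backed by a struct format string.
def simple(name, fmt, fields):
    s = Struct(fmt)

    @classmethod
    def parse(cls, buf, offset):
        # Enough data: unpack and return the offset of the next packet.
        if len(buf) >= s.size + offset:
            return cls(*s.unpack_from(buf, offset)), s.size + offset
        # Short read: report how many more bytes are needed.
        return None, s.size - len(buf)

    def build(self):
        return s.pack(*self)

    return type(name, (namedtuple(name, fields),),
                {"parse": parse, "build": build})

Test = simple("Test", ">BH", "unit, test")
assert Test(42, 32).build() == b"\x2a\x00\x20"
result, offset = Test.parse(b"\x00\x2a\x00\x20", 1)
assert (result.unit, result.test, offset) == (42, 32, 4)
```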
DUMP_ALL_PACKETS = False
# Strings.
# This one is a UCS2 string, which effectively decodes single writeChar()
# invocations. We need to import the encoding for it first, though.
from bravo.encodings import ucs2
from codecs import register
register(ucs2)
class DoubleAdapter(LengthValueAdapter):
def _encode(self, obj, context):
return len(obj) / 2, obj
def AlphaString(name):
return StringAdapter(
DoubleAdapter(
Sequence(name,
UBInt16("length"),
MetaField("data", lambda ctx: ctx["length"] * 2),
)
),
encoding="ucs2",
)
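AlphaString frames a UCS2 string with a big-endian 16-bit length counted in 2-byte code units; DoubleAdapter's `len(obj) / 2` supplies that count. A minimal sketch of the wire framing, using UTF-16-BE (equivalent to the registered "ucs2" codec for BMP characters):

```python
import struct

# Build the wire form of an AlphaString: UBInt16 length in 2-byte code
# units, followed by the UTF-16-BE payload.
def build_alpha_string(text):
    data = text.encode("utf-16-be")
    return struct.pack(">H", len(data) // 2) + data

assert build_alpha_string("Hi") == b"\x00\x02\x00H\x00i"
```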
# Boolean converter.
def Bool(*args, **kwargs):
return Flag(*args, default=True, **kwargs)
# Flying, position, and orientation, reused in several places.
grounded = Struct("grounded", UBInt8("grounded"))
position = Struct("position",
BFloat64("x"),
BFloat64("y"),
BFloat64("stance"),
BFloat64("z")
)
orientation = Struct("orientation", BFloat32("rotation"), BFloat32("pitch"))
# Notchian item packing (slot data)
items = Struct("items",
SBInt16("primary"),
If(lambda context: context["primary"] >= 0,
Embed(Struct("item_information",
UBInt8("count"),
UBInt16("secondary"),
Magic("\xff\xff"),
)),
),
)
Metadata = namedtuple("Metadata", "type value")
metadata_types = ["byte", "short", "int", "float", "string", "slot",
"coords"]
# Metadata adaptor.
class MetadataAdapter(Adapter):
def _decode(self, obj, context):
d = {}
for m in obj.data:
d[m.id.second] = Metadata(metadata_types[m.id.first], m.value)
return d
def _encode(self, obj, context):
c = Container(data=[], terminator=None)
for k, v in obj.iteritems():
t, value = v
d = Container(
id=Container(first=metadata_types.index(t), second=k),
value=value,
peeked=None)
c.data.append(d)
if c.data:
c.data[-1].peeked = 127
else:
c.data.append(Container(id=Container(first=0, second=0), value=0,
peeked=127))
return c
# Metadata inner container.
metadata_switch = {
0: UBInt8("value"),
1: UBInt16("value"),
2: UBInt32("value"),
3: BFloat32("value"),
4: AlphaString("value"),
5: Struct("slot", # is the same as 'items' defined above
SBInt16("primary"),
If(lambda context: context["primary"] >= 0,
Embed(Struct("item_information",
UBInt8("count"),
UBInt16("secondary"),
SBInt16("nbt-len"),
If(lambda context: context["nbt-len"] >= 0,
Embed(MetaField("nbt-data", lambda ctx: ctx["nbt-len"]))
)
)),
),
),
6: Struct("coords",
UBInt32("x"),
UBInt32("y"),
UBInt32("z"),
),
}
# Metadata subconstruct.
metadata = MetadataAdapter(
Struct("metadata",
RepeatUntil(lambda obj, context: obj["peeked"] == 0x7f,
Struct("data",
BitStruct("id",
BitField("first", 3),
BitField("second", 5),
),
Switch("value", lambda context: context["id"]["first"],
metadata_switch),
Peek(UBInt8("peeked")),
),
),
Const(UBInt8("terminator"), 0x7f),
),
)
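The BitStruct id byte above packs the metadata type index into the top three bits and the key into the low five; a sketch cross-checked against bytes visible in the 0x28 sample used in the tests earlier:

```python
# Pack a metadata id byte: type index in the top 3 bits, key in the low 5,
# matching BitStruct("id", BitField("first", 3), BitField("second", 5)).
metadata_types = ["byte", "short", "int", "float", "string", "slot",
                  "coords"]

def pack_id(type_name, key):
    return (metadata_types.index(type_name) << 5) | key

assert pack_id("byte", 0) == 0x00
assert pack_id("short", 1) == 0x21  # the '!' byte in the 0x28 sample
assert pack_id("slot", 10) == 0xaa
```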
# Build faces, used during dig and build.
faces = {
"noop": -1,
"-y": 0,
"+y": 1,
"-z": 2,
"+z": 3,
"-x": 4,
"+x": 5,
}
face = Enum(SBInt8("face"), **faces)
# World dimension.
dimensions = {
"earth": 0,
"sky": 1,
"nether": 255,
}
dimension = Enum(UBInt8("dimension"), **dimensions)
# Difficulty levels
difficulties = {
"peaceful": 0,
"easy": 1,
"normal": 2,
"hard": 3,
}
difficulty = Enum(UBInt8("difficulty"), **difficulties)
modes = {
"survival": 0,
"creative": 1,
"adventure": 2,
}
mode = Enum(UBInt8("mode"), **modes)
# Possible effects.
# XXX these names aren't really canonical yet
effect = Enum(UBInt8("effect"),
move_fast=1,
move_slow=2,
dig_fast=3,
dig_slow=4,
damage_boost=5,
heal=6,
harm=7,
jump=8,
confusion=9,
regenerate=10,
resistance=11,
fire_resistance=12,
water_resistance=13,
invisibility=14,
blindness=15,
night_vision=16,
hunger=17,
weakness=18,
poison=19,
wither=20,
)
# The actual packet list.
packets = {
0x00: Struct("ping",
UBInt32("pid"),
),
0x01: Struct("login",
# Player Entity ID (random number generated by the server)
UBInt32("eid"),
# default, flat, largeBiomes
AlphaString("leveltype"),
mode,
dimension,
difficulty,
UBInt8("unused"),
UBInt8("maxplayers"),
),
0x02: Struct("handshake",
UBInt8("protocol"),
AlphaString("username"),
AlphaString("host"),
UBInt32("port"),
),
0x03: Struct("chat",
AlphaString("message"),
),
0x04: Struct("time",
# Total Ticks
UBInt64("timestamp"),
# Time of day
UBInt64("time"),
),
0x05: Struct("entity-equipment",
UBInt32("eid"),
UBInt16("slot"),
Embed(items),
),
0x06: Struct("spawn",
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
),
0x07: Struct("use",
UBInt32("eid"),
UBInt32("target"),
UBInt8("button"),
),
0x08: Struct("health",
UBInt16("hp"),
UBInt16("fp"),
BFloat32("saturation"),
),
0x09: Struct("respawn",
dimension,
difficulty,
mode,
UBInt16("height"),
AlphaString("leveltype"),
),
0x0a: grounded,
0x0b: Struct("position",
position,
grounded
),
0x0c: Struct("orientation",
orientation,
grounded
),
# TODO: Distinguish between client and server 'position'
0x0d: Struct("location",
position,
orientation,
grounded
),
0x0e: Struct("digging",
Enum(UBInt8("state"),
started=0,
cancelled=1,
stopped=2,
checked=3,
dropped=4,
# Also eating
shooting=5,
),
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
face,
),
0x0f: Struct("build",
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
face,
Embed(items),
UBInt8("cursorx"),
UBInt8("cursory"),
UBInt8("cursorz"),
),
# Hold Item Change
0x10: Struct("equip",
# Only 0-8
UBInt16("slot"),
),
0x11: Struct("bed",
UBInt32("eid"),
UBInt8("unknown"),
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
),
0x12: Struct("animate",
UBInt32("eid"),
Enum(UBInt8("animation"),
noop=0,
arm=1,
hit=2,
leave_bed=3,
eat=5,
unknown=102,
crouch=104,
uncrouch=105,
),
),
0x13: Struct("action",
UBInt32("eid"),
Enum(UBInt8("action"),
crouch=1,
uncrouch=2,
leave_bed=3,
start_sprint=4,
stop_sprint=5,
),
),
0x14: Struct("player",
UBInt32("eid"),
AlphaString("username"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
UBInt8("yaw"),
UBInt8("pitch"),
# 0 For none, unlike other packets
# -1 crashes clients
SBInt16("item"),
metadata,
),
# Spawn Dropped Item
+ # TODO: Removed in #61!!! Find out how to spawn items.
0x15: Struct("pickup",
UBInt32("eid"),
Embed(items),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
UBInt8("yaw"),
UBInt8("pitch"),
UBInt8("roll"),
),
0x16: Struct("collect",
UBInt32("eid"),
UBInt32("destination"),
),
# Object/Vehicle
0x17: Struct("object", # XXX: was 'vehicle'!
UBInt32("eid"),
Enum(UBInt8("type"), # See http://wiki.vg/Entities#Objects
boat=1,
item_stack=2,
minecart=10,
storage_cart=11,
powered_cart=12,
tnt=50,
ender_crystal=51,
arrow=60,
snowball=61,
egg=62,
thrown_enderpearl=65,
wither_skull=66,
falling_block=70,
frames=71,
ender_eye=72,
thrown_potion=73,
dragon_egg=74,
thrown_xp_bottle=75,
fishing_float=90,
),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
UBInt8("pitch"),
UBInt8("yaw"),
SBInt32("data"), # See http://www.wiki.vg/Object_Data
If(lambda context: context["data"] != 0,
Struct("speed",
SBInt16("x"),
SBInt16("y"),
SBInt16("z"),
)
),
),
0x18: Struct("mob",
UBInt32("eid"),
Enum(UBInt8("type"), **{
"Creeper": 50,
"Skeleton": 51,
"Spider": 52,
"GiantZombie": 53,
"Zombie": 54,
"Slime": 55,
"Ghast": 56,
"ZombiePig": 57,
"Enderman": 58,
"CaveSpider": 59,
"Silverfish": 60,
"Blaze": 61,
"MagmaCube": 62,
"EnderDragon": 63,
"Wither": 64,
"Bat": 65,
"Witch": 66,
"Pig": 90,
"Sheep": 91,
"Cow": 92,
"Chicken": 93,
"Squid": 94,
"Wolf": 95,
"Mooshroom": 96,
"Snowman": 97,
"Ocelot": 98,
"IronGolem": 99,
"Villager": 120
}),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
SBInt8("yaw"),
SBInt8("pitch"),
SBInt8("head_yaw"),
SBInt16("vx"),
SBInt16("vy"),
SBInt16("vz"),
metadata,
),
0x19: Struct("painting",
UBInt32("eid"),
AlphaString("title"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
face,
),
0x1a: Struct("experience",
UBInt32("eid"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
UBInt16("quantity"),
),
0x1c: Struct("velocity",
UBInt32("eid"),
SBInt16("dx"),
SBInt16("dy"),
SBInt16("dz"),
),
0x1d: Struct("destroy",
UBInt8("count"),
MetaArray(lambda context: context["count"], UBInt32("eid")),
),
0x1e: Struct("create",
UBInt32("eid"),
),
0x1f: Struct("entity-position",
UBInt32("eid"),
SBInt8("dx"),
SBInt8("dy"),
SBInt8("dz")
),
0x20: Struct("entity-orientation",
UBInt32("eid"),
UBInt8("yaw"),
UBInt8("pitch")
),
0x21: Struct("entity-location",
UBInt32("eid"),
SBInt8("dx"),
SBInt8("dy"),
SBInt8("dz"),
UBInt8("yaw"),
UBInt8("pitch")
),
0x22: Struct("teleport",
UBInt32("eid"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
UBInt8("yaw"),
UBInt8("pitch"),
),
0x23: Struct("entity-head",
UBInt32("eid"),
UBInt8("yaw"),
),
0x26: Struct("status",
UBInt32("eid"),
Enum(UBInt8("status"),
damaged=2,
killed=3,
taming=6,
tamed=7,
drying=8,
eating=9,
sheep_eat=10,
golem_rose=11,
heart_particle=12,
angry_particle=13,
happy_particle=14,
magic_particle=15,
shaking=16,
firework=17
),
),
0x27: Struct("attach",
UBInt32("eid"),
# -1 for detaching
UBInt32("vid"),
),
0x28: Struct("metadata",
UBInt32("eid"),
metadata,
),
0x29: Struct("effect",
UBInt32("eid"),
effect,
UBInt8("amount"),
UBInt16("duration"),
),
0x2a: Struct("uneffect",
UBInt32("eid"),
effect,
),
0x2b: Struct("levelup",
BFloat32("current"),
UBInt16("level"),
UBInt16("total"),
),
0x33: Struct("chunk",
SBInt32("x"),
SBInt32("z"),
Bool("continuous"),
UBInt16("primary"),
UBInt16("add"),
PascalString("data", length_field=UBInt32("length"), encoding="zlib"),
),
0x34: Struct("batch",
SBInt32("x"),
SBInt32("z"),
UBInt16("count"),
PascalString("data", length_field=UBInt32("length")),
),
0x35: Struct("block",
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
UBInt16("type"),
UBInt8("meta"),
),
# XXX This covers general tile actions, not just note blocks.
# TODO: Needs work
0x36: Struct("block-action",
SBInt32("x"),
SBInt16("y"),
SBInt32("z"),
UBInt8("byte1"),
UBInt8("byte2"),
UBInt16("blockid"),
),
0x37: Struct("block-break-anim",
UBInt32("eid"),
UBInt32("x"),
UBInt32("y"),
UBInt32("z"),
UBInt8("stage"),
),
# XXX Server -> Client. Use 0x33 instead.
0x38: Struct("bulk-chunk",
UBInt16("count"),
UBInt32("length"),
UBInt8("sky_light"),
MetaField("data", lambda ctx: ctx["length"]),
MetaArray(lambda context: context["count"],
Struct("metadata",
UBInt32("chunk_x"),
UBInt32("chunk_z"),
UBInt16("bitmap_primary"),
UBInt16("bitmap_secondary"),
)
)
),
# TODO: Needs work?
0x3c: Struct("explosion",
BFloat64("x"),
BFloat64("y"),
BFloat64("z"),
BFloat32("radius"),
UBInt32("count"),
MetaField("blocks", lambda context: context["count"] * 3),
BFloat32("motionx"),
BFloat32("motiony"),
BFloat32("motionz"),
),
0x3d: Struct("sound",
Enum(UBInt32("sid"),
click2=1000,
click1=1001,
bow_fire=1002,
door_toggle=1003,
extinguish=1004,
record_play=1005,
charge=1007,
fireball=1008,
zombie_wood=1010,
zombie_metal=1011,
zombie_break=1012,
wither=1013,
smoke=2000,
block_break=2001,
splash_potion=2002,
ender_eye=2003,
blaze=2004,
),
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
UBInt32("data"),
Bool("volume-mod"),
),
0x3e: Struct("named-sound",
AlphaString("name"),
UBInt32("x"),
UBInt32("y"),
UBInt32("z"),
BFloat32("volume"),
UBInt8("pitch"),
),
0x3f: Struct("particle",
AlphaString("name"),
BFloat32("x"),
BFloat32("y"),
BFloat32("z"),
BFloat32("x_offset"),
BFloat32("y_offset"),
BFloat32("z_offset"),
BFloat32("speed"),
UBInt32("count"),
),
0x46: Struct("state",
Enum(UBInt8("state"),
bad_bed=0,
start_rain=1,
stop_rain=2,
mode_change=3,
run_credits=4,
),
mode,
),
0x47: Struct("thunderbolt",
UBInt32("eid"),
UBInt8("gid"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
),
0x64: Struct("window-open",
UBInt8("wid"),
Enum(UBInt8("type"),
chest=0,
workbench=1,
furnace=2,
dispenser=3,
enchantment_table=4,
brewing_stand=5,
npc_trade=6,
beacon=7,
anvil=8,
hopper=9
),
AlphaString("title"),
UBInt8("slots"),
UBInt8("use_title"),
),
0x65: Struct("window-close",
UBInt8("wid"),
),
0x66: Struct("window-action",
UBInt8("wid"),
UBInt16("slot"),
UBInt8("button"),
UBInt16("token"),
UBInt8("shift"), # TODO: rename to 'mode'
Embed(items),
),
0x67: Struct("window-slot",
UBInt8("wid"),
UBInt16("slot"),
Embed(items),
),
0x68: Struct("inventory",
UBInt8("wid"),
UBInt16("length"),
MetaArray(lambda context: context["length"], items),
),
0x69: Struct("window-progress",
UBInt8("wid"),
UBInt16("bar"),
UBInt16("progress"),
),
0x6a: Struct("window-token",
UBInt8("wid"),
UBInt16("token"),
Bool("acknowledged"),
),
0x6b: Struct("window-creative",
UBInt16("slot"),
Embed(items),
),
0x6c: Struct("enchant",
UBInt8("wid"),
UBInt8("enchantment"),
),
0x82: Struct("sign",
SBInt32("x"),
UBInt16("y"),
SBInt32("z"),
AlphaString("line1"),
AlphaString("line2"),
AlphaString("line3"),
AlphaString("line4"),
),
0x83: Struct("map",
UBInt16("type"),
UBInt16("itemid"),
PascalString("data", length_field=UBInt16("length")),
),
0x84: Struct("tile-update",
SBInt32("x"),
UBInt16("y"),
SBInt32("z"),
UBInt8("action"),
PascalString("nbt_data", length_field=UBInt16("length")), # gzipped
),
0xc8: Struct("statistics",
UBInt32("sid"), # XXX I could be an Enum
UBInt8("count"),
),
0xc9: Struct("players",
AlphaString("name"),
Bool("online"),
UBInt16("ping"),
),
0xca: Struct("abilities",
UBInt8("flags"),
UBInt8("fly-speed"),
UBInt8("walk-speed"),
),
0xcb: Struct("tab",
AlphaString("autocomplete"),
),
0xcc: Struct("settings",
AlphaString("locale"),
UBInt8("distance"),
UBInt8("chat"),
difficulty,
Bool("cape"),
),
0xcd: Struct("statuses",
UBInt8("payload")
),
0xce: Struct("score_item",
AlphaString("name"),
AlphaString("value"),
Enum(UBInt8("action"),
create=0,
remove=1,
update=2,
),
),
0xcf: Struct("score_update",
AlphaString("item_name"),
UBInt8("remove"),
If(lambda context: context["remove"] == 0,
Embed(Struct("information",
AlphaString("score_name"),
UBInt32("value"),
))
),
),
0xd0: Struct("score_display",
Enum(UBInt8("position"),
as_list=0,
sidebar=1,
below_name=2
),
AlphaString("score_name"),
),
0xd1: Struct("teams",
AlphaString("name"),
Enum(UBInt8("mode"),
team_created=0,
team_removed=1,
team_updated=2,
players_added=3,
players_removed=4,
),
If(lambda context: context["mode"] in ("team_created", "team_updated"),
Embed(Struct("team_info",
AlphaString("team_name"),
AlphaString("team_prefix"),
AlphaString("team_suffix"),
Enum(UBInt8("friendly_fire"),
off=0,
on=1,
invisibles=2,
),
))
),
If(lambda context: context["mode"] in ("team_created", "players_added", "players_removed"),
Embed(Struct("players_info",
UBInt16("count"),
MetaArray(lambda context: context["count"], AlphaString("player_names")),
))
),
),
0xfa: Struct("plugin-message",
AlphaString("channel"),
PascalString("data", length_field=UBInt16("length")),
),
0xfc: Struct("key-response",
PascalString("key", length_field=UBInt16("key-len")),
PascalString("token", length_field=UBInt16("token-len")),
),
0xfd: Struct("key-request",
AlphaString("server"),
PascalString("key", length_field=UBInt16("key-len")),
PascalString("token", length_field=UBInt16("token-len")),
),
0xfe: Struct("poll", UBInt8("unused")),
# TODO: rename to 'kick'
0xff: Struct("error", AlphaString("message")),
}
packet_stream = Struct("packet_stream",
OptionalGreedyRange(
Struct("full_packet",
UBInt8("header"),
Switch("payload", lambda context: context["header"], packets),
),
),
OptionalGreedyRange(
UBInt8("leftovers"),
),
)
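The two `OptionalGreedyRange` sections give `packet_stream` its buffering contract: a parse consumes as many complete packets as it can and hands back any trailing partial bytes as leftovers, to be prepended to the next read. A minimal pure-Python sketch of that contract (hypothetical fixed-length payloads stand in for the real per-header grammars):

```python
def parse_packets(buf, payload_len=2):
    """Split buf into (packets, leftovers).

    Each hypothetical packet is a 1-byte header followed by a
    fixed-size payload; real packets vary per header, but the
    buffering contract is the same: whole packets out, partial
    trailing bytes kept for the next dataReceived() call.
    """
    packets = []
    i = 0
    while i + 1 + payload_len <= len(buf):
        header = buf[i]
        payload = buf[i + 1:i + 1 + payload_len]
        packets.append((header, payload))
        i += 1 + payload_len
    return packets, buf[i:]

# Two whole packets plus a dangling header byte and half a payload.
packets, leftovers = parse_packets(b"\x00ab\x01cd\x02e")
```

The leftover bytes (`b"\x02e"` here) are exactly what `dataReceived` keeps in `self.buf` between calls.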
diff --git a/bravo/beta/protocol.py b/bravo/beta/protocol.py
index 56f0078..fd9b41c 100644
--- a/bravo/beta/protocol.py
+++ b/bravo/beta/protocol.py
@@ -219,1277 +219,1275 @@ class BetaServerProtocol(object, Protocol, TimeoutMixin):
# Sanitize location. This handles safety boundaries, illegal stance,
# etc.
altered = self.location.clamp() or altered
# If, for any reason, our opinion on where the client should be
# located is different than theirs, force them to conform to our point
# of view.
if altered:
log.msg("Not updating bogus position!")
self.update_location()
# If our position actually changed, fire the position change hook.
if old_position != position:
self.position_changed()
def orientation(self, container):
"""
Hook for orientation packets.
"""
self.grounded(container.grounded)
old_orientation = self.location.ori
orientation = Orientation.from_degs(container.orientation.rotation,
container.orientation.pitch)
self.location.ori = orientation
if old_orientation != orientation:
self.orientation_changed()
def location_packet(self, container):
"""
Hook for location packets.
"""
self.position(container)
self.orientation(container)
def digging(self, container):
"""
Hook for digging packets.
"""
def build(self, container):
"""
Hook for build packets.
"""
def equip(self, container):
"""
Hook for equip packets.
"""
def pickup(self, container):
"""
Hook for pickup packets.
"""
def animate(self, container):
"""
Hook for animate packets.
"""
def action(self, container):
"""
Hook for action packets.
"""
def wclose(self, container):
"""
Hook for wclose packets.
"""
def waction(self, container):
"""
Hook for waction packets.
"""
def wacknowledge(self, container):
"""
Hook for wacknowledge packets.
"""
def wcreative(self, container):
"""
Hook for creative inventory action packets.
"""
def sign(self, container):
"""
Hook for sign packets.
"""
def complete(self, container):
"""
Hook for tab-completion packets.
"""
def settings_packet(self, container):
"""
Hook for client settings packets.
"""
distance = ["far", "normal", "short", "tiny"][container.distance]
self.settings = Settings(container.locale, distance)
def poll(self, container):
"""
Hook for poll packets.
By default, queries the parent factory for some data, and relays it
in a specific format to the requester. The connection is then closed
at both ends. This functionality is used by Beta 1.8 clients to poll
servers for status.
"""
players = unicode(len(self.factory.protocols))
max_players = unicode(self.factory.limitConnections or 1000000)
data = [
u"§1",
unicode(SUPPORTED_PROTOCOL),
u"Bravo %s" % version,
self.motd,
players,
max_players,
]
response = u"\u0000".join(data)
self.error(response)
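The poll reply is just the status fields NUL-joined behind the `§1` magic marker, sent through the error/kick path so both ends drop the connection. A standalone sketch of the string being built (the protocol number and versions below are placeholder values, not taken from the source):

```python
def poll_response(protocol_version, server_version, motd, players, max_players):
    """Build the NUL-joined status string sent back to a 1.8+ poll.

    Field order: magic marker, protocol version, server version,
    MOTD, current player count, player cap.
    """
    fields = [
        "\u00a71",                      # the '§1' magic marker
        str(protocol_version),
        "Bravo %s" % server_version,
        motd,
        str(players),
        str(max_players),
    ]
    return "\u0000".join(fields)

resp = poll_response(78, "2.0", "A Bravo server", 3, 10)
```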
def quit(self, container):
"""
Hook for quit packets.
By default, merely logs the quit message and drops the connection.
Even if the connection is not dropped, it will be lost anyway since
the client will close the connection. It's better to explicitly let it
go here than to have zombie protocols.
"""
log.msg("Client is quitting: %s" % container.message)
self.transport.loseConnection()
# Twisted-level data handlers and methods
# Please don't override these needlessly, as they are pretty solid and
# shouldn't need to be touched.
def dataReceived(self, data):
self.buf += data
packets, self.buf = parse_packets(self.buf)
if packets:
self.resetTimeout()
for header, payload in packets:
if header in self.handlers:
d = maybeDeferred(self.handlers[header], payload)
@d.addErrback
def eb(failure):
log.err("Error while handling packet %d" % header)
log.err(failure)
return None
else:
log.err("Didn't handle parseable packet %d!" % header)
log.err(payload)
def connectionLost(self, reason):
if self._ping_loop.running:
self._ping_loop.stop()
def timeoutConnection(self):
self.error("Connection timed out")
# State-change callbacks
# Feel free to override these, but call them at some point.
def authenticated(self):
"""
Called when the client has successfully authenticated with the server.
"""
self.state = STATE_AUTHENTICATED
self._ping_loop.start(30)
# Event callbacks
# These are meant to be overridden.
def orientation_changed(self):
"""
Called when the client moves.
This callback is only for orientation, not position.
"""
pass
def position_changed(self):
"""
Called when the client moves.
This callback is only for position, not orientation.
"""
pass
# Convenience methods for consolidating code and expressing intent. I
# hear that these are occasionally useful. If a method in this section can
# be used, then *PLEASE* use it; not using it is the same as open-coding
# whatever you're doing, and only hurts in the long run.
def write_packet(self, header, **payload):
"""
Send a packet to the client.
"""
self.transport.write(make_packet(header, **payload))
def update_ping(self):
"""
Send a keepalive to the client.
"""
timestamp = timestamp_from_clock(reactor)
self.write_packet("ping", pid=timestamp)
def update_location(self):
"""
Send this client's location to the client.
Also let other clients know where this client is.
"""
# Don't bother trying to update things if the position's not yet
# synchronized. We could end up jettisoning them into the void.
if self.state != STATE_LOCATED:
return
x, y, z = self.location.pos
yaw, pitch = self.location.ori.to_fracs()
# Inform everybody of our new location.
packet = make_packet("teleport", eid=self.player.eid, x=x, y=y, z=z,
yaw=yaw, pitch=pitch)
self.factory.broadcast_for_others(packet, self)
# Inform ourselves of our new location.
packet = self.location.save_to_packet()
self.transport.write(packet)
def ascend(self, count):
"""
Ascend to the next XZ-plane.
``count`` is the number of ascensions to perform, and may be zero in
order to force this player to not be standing inside a block.
:returns: bool of whether the ascension was successful
This client must be located for this method to have any effect.
"""
if self.state != STATE_LOCATED:
return False
x, y, z = self.location.pos.to_block()
bigx, smallx, bigz, smallz = split_coords(x, z)
chunk = self.chunks[bigx, bigz]
column = [chunk.get_block((smallx, i, smallz))
for i in range(CHUNK_HEIGHT)]
# Special case: Ascend at most once, if the current spot isn't good.
if count == 0:
if (not column[y]) or column[y + 1] or column[y + 2]:
# Yeah, we're gonna need to move.
count += 1
else:
# Nope, we're fine where we are.
return True
for i in xrange(y, 255):
# Find the next spot above us which has a platform and two empty
# blocks of air.
if column[i] and (not column[i + 1]) and not column[i + 2]:
count -= 1
if not count:
break
else:
return False
self.location.pos = self.location.pos._replace(y=i * 32)
return True
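The scan inside `ascend()` walks the column looking for the first spot with a solid platform and two air blocks above it. The same search, pulled out as a standalone helper over a plain list (truthy cells are solid, as read out of a chunk with `get_block()`):

```python
def next_standable(column, y):
    """Return the lowest index i >= y where column[i] is solid and
    the two cells above it are empty, or None if the column has no
    such spot.
    """
    for i in range(y, len(column) - 2):
        if column[i] and not column[i + 1] and not column[i + 2]:
            return i
    return None

# Solid ground at 0 and 4; scanning up from y=1 finds the platform at 4.
col = [1, 0, 0, 0, 1, 0, 0, 0]
```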
def error(self, message):
"""
Error out.
This method sends ``message`` to the client as a descriptive error
message, then closes the connection.
"""
log.msg("Error: %s" % message)
self.transport.write(make_error_packet(message))
self.transport.loseConnection()
def play_notes(self, notes):
"""
Play some music.
Send a sequence of notes to the player. ``notes`` is a finite iterable
of pairs of instruments and pitches.
There is no way to time notes; if staggered playback is desired (and
it usually is!), then ``play_notes()`` should be called repeatedly at
the appropriate times.
This method turns the block beneath the player into a note block,
plays the requested notes through it, then turns it back into the
original block, all without actually modifying the chunk.
"""
x, y, z = self.location.pos.to_block()
if y:
y -= 1
bigx, smallx, bigz, smallz = split_coords(x, z)
if (bigx, bigz) not in self.chunks:
return
block = self.chunks[bigx, bigz].get_block((smallx, y, smallz))
meta = self.chunks[bigx, bigz].get_metadata((smallx, y, smallz))
self.write_packet("block", x=x, y=y, z=z,
type=blocks["note-block"].slot, meta=0)
for instrument, pitch in notes:
self.write_packet("note", x=x, y=y, z=z, pitch=pitch,
instrument=instrument)
self.write_packet("block", x=x, y=y, z=z, type=block, meta=meta)
# Automatic properties. Assigning to them causes the client to be notified
# of changes.
@property
def health(self):
return self._health
@health.setter
def health(self, value):
if not 0 <= value <= 20:
raise BetaClientError("Invalid health value %d" % value)
if self._health != value:
self.write_packet("health", hp=value, fp=0, saturation=0)
self._health = value
@property
def latency(self):
return self._latency
@latency.setter
def latency(self, value):
# Clamp the value to not exceed the boundaries of the packet. This is
# necessary even though, in theory, a ping this high is bad news.
value = clamp(value, 0, 65535)
# Check to see if this is a new value, and if so, alert everybody.
if self._latency != value:
packet = make_packet("players", name=self.username, online=True,
ping=value)
self.factory.broadcast(packet)
self._latency = value
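The latency setter clamps before comparing, so a pathological ping can never overflow the unsigned 16-bit field in the players packet. The `clamp` helper assumed here is the usual min/max sandwich:

```python
def clamp(value, low, high):
    """Constrain value to [low, high]; used above to keep ping inside
    the unsigned 16-bit field of the players packet."""
    return min(max(value, low), high)
```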
class KickedProtocol(BetaServerProtocol):
"""
A very simple Beta protocol that helps enforce IP bans, Max Connections,
and Max Connections Per IP.
This protocol disconnects people as soon as they connect, with a helpful
message.
"""
def __init__(self, reason=None):
if reason:
self.reason = reason
else:
self.reason = (
"This server doesn't like you very much."
" I don't like you very much either.")
def connectionMade(self):
self.error("%s" % self.reason)
class BetaProxyProtocol(BetaServerProtocol):
"""
A ``BetaServerProtocol`` that proxies for an InfiniCraft client.
"""
gateway = "server.wiki.vg"
def handshake(self, container):
self.write_packet("handshake", username="-")
def login(self, container):
self.username = container.username
self.write_packet("login", protocol=0, username="", seed=0,
dimension="earth")
url = urlunparse(("http", self.gateway, "/node/0/0/", None, None,
None))
d = getPage(url)
d.addCallback(self.start_proxy)
def start_proxy(self, response):
log.msg("Response: %s" % response)
log.msg("Starting proxy...")
address, port = response.split(":")
self.add_node(address, int(port))
def add_node(self, address, port):
"""
Add a new node to this client.
"""
from twisted.internet.endpoints import TCP4ClientEndpoint
log.msg("Adding node %s:%d" % (address, port))
endpoint = TCP4ClientEndpoint(reactor, address, port, 5)
self.node_factory = InfiniClientFactory()
d = endpoint.connect(self.node_factory)
d.addCallback(self.node_connected)
d.addErrback(self.node_connect_error)
def node_connected(self, protocol):
log.msg("Connected new node!")
def node_connect_error(self, reason):
log.err("Couldn't connect node!")
log.err(reason)
class BravoProtocol(BetaServerProtocol):
"""
A ``BetaServerProtocol`` suitable for serving MC worlds to clients.
This protocol really does need to be hooked up with a ``BravoFactory`` or
something very much like it.
"""
chunk_tasks = None
time_loop = None
eid = 0
last_dig = None
def __init__(self, config, name):
BetaServerProtocol.__init__(self)
self.config = config
self.config_name = "world %s" % name
# Retrieve the MOTD. Only needs to be done once.
self.motd = self.config.getdefault(self.config_name, "motd",
"BravoServer")
def register_hooks(self):
log.msg("Registering client hooks...")
plugin_types = {
"open_hooks": IWindowOpenHook,
"click_hooks": IWindowClickHook,
"close_hooks": IWindowCloseHook,
"pre_build_hooks": IPreBuildHook,
"post_build_hooks": IPostBuildHook,
"pre_dig_hooks": IPreDigHook,
"dig_hooks": IDigHook,
"sign_hooks": ISignHook,
"use_hooks": IUseHook,
}
for t in plugin_types:
setattr(self, t, getattr(self.factory, t))
log.msg("Registering policies...")
if self.factory.mode == "creative":
self.dig_policy = dig_policies["speedy"]
else:
self.dig_policy = dig_policies["notchy"]
log.msg("Registered client plugin hooks!")
def pre_handshake(self):
"""
Set up username and get going.
"""
if self.username in self.factory.protocols:
# This username's already taken; find a new one.
- for name in username_alternatives(username):
+ for name in username_alternatives(self.username):
if name not in self.factory.protocols:
- container.username = name
+ self.username = name
break
else:
self.error("Your username is already taken.")
return False
return True
@inlineCallbacks
def authenticated(self):
BetaServerProtocol.authenticated(self)
# Init player, and copy data into it.
self.player = yield self.factory.world.load_player(self.username)
self.player.eid = self.eid
self.location = self.player.location
# Init players' inventory window.
self.inventory = InventoryWindow(self.player.inventory)
# *Now* we are in our factory's list of protocols. Be aware.
self.factory.protocols[self.username] = self
# Announce our presence.
self.factory.chat("%s is joining the game..." % self.username)
packet = make_packet("players", name=self.username, online=True,
ping=0)
self.factory.broadcast(packet)
# Craft our avatar and send it to already-connected other players.
packet = make_packet("create", eid=self.player.eid)
packet += self.player.save_to_packet()
self.factory.broadcast_for_others(packet, self)
# And of course spawn all of those players' avatars in our client as
# well.
for protocol in self.factory.protocols.itervalues():
# Skip over ourselves; otherwise, the client tweaks out and
# usually either dies or locks up.
if protocol is self:
continue
self.write_packet("create", eid=protocol.player.eid)
packet = protocol.player.save_to_packet()
packet += protocol.player.save_equipment_to_packet()
self.transport.write(packet)
# Send spawn and inventory.
spawn = self.factory.world.level.spawn
packet = make_packet("spawn", x=spawn[0], y=spawn[1], z=spawn[2])
packet += self.inventory.save_to_packet()
self.transport.write(packet)
# Send weather.
self.transport.write(self.factory.vane.make_packet())
self.send_initial_chunk_and_location()
self.time_loop = LoopingCall(self.update_time)
self.time_loop.start(10)
def orientation_changed(self):
# Bang your head!
yaw, pitch = self.location.ori.to_fracs()
packet = make_packet("entity-orientation", eid=self.player.eid,
yaw=yaw, pitch=pitch)
self.factory.broadcast_for_others(packet, self)
def position_changed(self):
# Send chunks.
self.update_chunks()
for entity in self.entities_near(2):
if entity.name != "Item":
continue
left = self.player.inventory.add(entity.item, entity.quantity)
if left != entity.quantity:
if left != 0:
# partial collect
entity.quantity = left
else:
packet = make_packet("collect", eid=entity.eid,
destination=self.player.eid)
packet += make_packet("destroy", count=1, eid=[entity.eid])
self.factory.broadcast(packet)
self.factory.destroy_entity(entity)
packet = self.inventory.save_to_packet()
self.transport.write(packet)
def entities_near(self, radius):
"""
Obtain the entities within a radius of this player.
Radius is measured in blocks.
"""
chunk_radius = int(radius // 16 + 1)
- chunkx, chaff, chunkz, chaff = split_coords(self.location.pos.x,
- self.location.pos.z)
+ chunkx, chunkz = self.location.pos.to_chunk()
minx = chunkx - chunk_radius
maxx = chunkx + chunk_radius + 1
minz = chunkz - chunk_radius
maxz = chunkz + chunk_radius + 1
for x, z in product(xrange(minx, maxx), xrange(minz, maxz)):
if (x, z) not in self.chunks:
continue
chunk = self.chunks[x, z]
yieldables = [entity for entity in chunk.entities
if self.location.distance(entity.location) <= (radius * 32)]
for i in yieldables:
yield i
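`entities_near()` first narrows the search to the square of chunks whose 16-block cells could intersect the radius, then filters entities by true distance. A sketch of just the chunk-window computation (`split_coords` is reimplemented here as floor division for illustration):

```python
def chunk_window(x, z, radius):
    """Chunk coordinates whose 16x16 cells might hold entities
    within `radius` blocks of block position (x, z)."""
    chunk_radius = int(radius // 16 + 1)
    chunkx, chunkz = x // 16, z // 16
    return [(cx, cz)
            for cx in range(chunkx - chunk_radius, chunkx + chunk_radius + 1)
            for cz in range(chunkz - chunk_radius, chunkz + chunk_radius + 1)]

# A 4-block radius still needs a full ring of neighboring chunks,
# since the player may be standing right on a chunk edge.
cells = chunk_window(40, -7, 4)
```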
def chat(self, container):
if container.message.startswith("/"):
pp = {"factory": self.factory}
commands = retrieve_plugins(IChatCommand, factory=self.factory)
# Register aliases.
for plugin in commands.values():
for alias in plugin.aliases:
commands[alias] = plugin
params = container.message[1:].split(" ")
command = params.pop(0).lower()
if command and command in commands:
def cb(iterable):
for line in iterable:
self.write_packet("chat", message=line)
def eb(error):
self.write_packet("chat", message="Error: %s" %
error.getErrorMessage())
d = maybeDeferred(commands[command].chat_command,
self.username, params)
d.addCallback(cb)
d.addErrback(eb)
else:
self.write_packet("chat",
message="Unknown command: %s" % command)
else:
# Send the message up to the factory to be chatified.
message = "<%s> %s" % (self.username, container.message)
self.factory.chat(message)
def use(self, container):
"""
For each entity in proximity (4 blocks), check if it is the target
of this packet and call all hooks that stated interest in this
type.
"""
nearby_players = self.factory.players_near(self.player, 4)
for entity in chain(self.entities_near(4), nearby_players):
if entity.eid == container.target:
for hook in self.use_hooks[entity.name]:
hook.use_hook(self.factory, self.player, entity,
container.button == 0)
break
@inlineCallbacks
def digging(self, container):
if container.x == -1 and container.z == -1 and container.y == 255:
# Lala-land dig packet. Discard it for now.
return
# Player drops currently holding item/block.
if (container.state == "dropped" and container.face == "-y" and
container.x == 0 and container.y == 0 and container.z == 0):
i = self.player.inventory
holding = i.holdables[self.player.equipped]
if holding:
primary, secondary, count = holding
if i.consume((primary, secondary), self.player.equipped):
dest = self.location.in_front_of(2)
coords = dest.pos._replace(y=dest.pos.y + 1)
self.factory.give(coords, (primary, secondary), 1)
# Re-send inventory.
packet = self.inventory.save_to_packet()
self.transport.write(packet)
# If no items in this slot are left, this player isn't
# holding an item anymore.
if i.holdables[self.player.equipped] is None:
packet = make_packet("entity-equipment",
eid=self.player.eid,
slot=0,
primary=65535,
count=1,
secondary=0
)
self.factory.broadcast_for_others(packet, self)
return
if container.state == "shooting":
self.shoot_arrow()
return
bigx, smallx, bigz, smallz = split_coords(container.x, container.z)
coords = smallx, container.y, smallz
try:
chunk = self.chunks[bigx, bigz]
except KeyError:
self.error("Couldn't dig in chunk (%d, %d)!" % (bigx, bigz))
return
block = chunk.get_block((smallx, container.y, smallz))
if container.state == "started":
# Run pre dig hooks
for hook in self.pre_dig_hooks:
cancel = yield maybeDeferred(hook.pre_dig_hook, self.player,
(container.x, container.y, container.z), block)
if cancel:
return
tool = self.player.inventory.holdables[self.player.equipped]
# Check to see whether we should break this block.
if self.dig_policy.is_1ko(block, tool):
self.run_dig_hooks(chunk, coords, blocks[block])
else:
# Set up a timer for breaking the block later.
dtime = time() + self.dig_policy.dig_time(block, tool)
self.last_dig = coords, block, dtime
elif container.state == "stopped":
# The client thinks it has broken a block. We shall see.
if not self.last_dig:
return
oldcoords, oldblock, dtime = self.last_dig
if oldcoords != coords or oldblock != block:
# Nope!
self.last_dig = None
return
dtime -= time()
# When enough time has elapsed, run the dig hooks.
d = deferLater(reactor, max(dtime, 0), self.run_dig_hooks, chunk,
coords, blocks[block])
d.addCallback(lambda none: setattr(self, "last_dig", None))
def run_dig_hooks(self, chunk, coords, block):
"""
Destroy a block and run the post-destroy dig hooks.
"""
x, y, z = coords
if block.breakable:
chunk.destroy(coords)
l = []
for hook in self.dig_hooks:
l.append(maybeDeferred(hook.dig_hook, chunk, x, y, z, block))
dl = DeferredList(l)
dl.addCallback(lambda none: self.factory.flush_chunk(chunk))
@inlineCallbacks
def build(self, container):
"""
Handle a build packet.
Several things must happen. First, the packet's contents need to be
examined to ensure that the packet is valid. A check is done to see if
the packet is opening a windowed object. If not, then a build is
run.
"""
# Is the target within our purview? We don't do a very strict
# containment check, but we *do* require that the chunk be loaded.
bigx, smallx, bigz, smallz = split_coords(container.x, container.z)
try:
chunk = self.chunks[bigx, bigz]
except KeyError:
self.error("Couldn't select in chunk (%d, %d)!" % (bigx, bigz))
return
target = blocks[chunk.get_block((smallx, container.y, smallz))]
# If it's a chest, hax.
if target.name == "chest":
from bravo.policy.windows import Chest
w = Chest()
self.windows[self.wid] = w
w.open()
self.write_packet("window-open", wid=self.wid, type=w.identifier,
title=w.title, slots=w.slots)
self.wid += 1
return
elif target.name == "workbench":
from bravo.policy.windows import Workbench
w = Workbench()
self.windows[self.wid] = w
w.open()
self.write_packet("window-open", wid=self.wid, type=w.identifier,
title=w.title, slots=w.slots)
self.wid += 1
return
# Try to open it first
for hook in self.open_hooks:
window = yield maybeDeferred(hook.open_hook, self, container,
chunk.get_block((smallx, container.y, smallz)))
if window:
self.write_packet("window-open", wid=window.wid,
type=window.identifier, title=window.title,
slots=window.slots_num)
packet = window.save_to_packet()
self.transport.write(packet)
# window opened
return
# Ignore clients that think -1 is placeable.
if container.primary == -1:
return
# Special case when face is "noop": Update the status of the currently
# held block rather than placing a new block.
if container.face == "noop":
return
# If the target block is vanishable, then adjust our aim accordingly.
if target.vanishes:
container.face = "+y"
container.y -= 1
if container.primary in blocks:
block = blocks[container.primary]
elif container.primary in items:
block = items[container.primary]
else:
log.err("Ignoring request to place unknown block %d" %
container.primary)
return
# Run pre-build hooks. These hooks are able to interrupt the build
# process.
builddata = BuildData(block, 0x0, container.x, container.y,
container.z, container.face)
for hook in self.pre_build_hooks:
cont, builddata, cancel = yield maybeDeferred(hook.pre_build_hook,
self.player, builddata)
if cancel:
# Flush damaged chunks.
for chunk in self.chunks.itervalues():
self.factory.flush_chunk(chunk)
return
if not cont:
break
# Run the build.
try:
yield maybeDeferred(self.run_build, builddata)
except BuildError:
return
newblock = builddata.block.slot
coords = adjust_coords_for_face(
(builddata.x, builddata.y, builddata.z), builddata.face)
# Run post-build hooks. These are merely callbacks which cannot
# interfere with the build process, largely because the build process
# already happened.
for hook in self.post_build_hooks:
yield maybeDeferred(hook.post_build_hook, self.player, coords,
builddata.block)
# Feed automatons.
for automaton in self.factory.automatons:
if newblock in automaton.blocks:
automaton.feed(coords)
# Re-send inventory.
# XXX this could be optimized if/when inventories track damage.
packet = self.inventory.save_to_packet()
self.transport.write(packet)
# Flush damaged chunks.
for chunk in self.chunks.itervalues():
self.factory.flush_chunk(chunk)
def run_build(self, builddata):
block, metadata, x, y, z, face = builddata
# Don't place items as blocks.
if block.slot not in blocks:
raise BuildError("Couldn't build item %r as block" % block)
# Check for orientable blocks.
if not metadata and block.orientable():
metadata = block.orientation(face)
if metadata is None:
# Oh, I guess we can't even place the block on this face.
raise BuildError("Couldn't orient block %r on face %s" %
(block, face))
# Make sure we can remove it from the inventory first.
if not self.player.inventory.consume((block.slot, 0),
self.player.equipped):
# Okay, first one was a bust; maybe we can consume the related
# block for dropping instead?
if not self.player.inventory.consume(block.drop,
self.player.equipped):
raise BuildError("Couldn't consume %r from inventory" % block)
# Offset coords according to face.
x, y, z = adjust_coords_for_face((x, y, z), face)
# Set the block and data.
dl = [self.factory.world.set_block((x, y, z), block.slot)]
if metadata:
dl.append(self.factory.world.set_metadata((x, y, z), metadata))
return DeferredList(dl)
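`run_build()` finishes by shifting the clicked coordinates one block out of the clicked face, which is where the new block actually lands. A sketch of the offset table that `adjust_coords_for_face` is assumed to encode (the table itself is an illustration, not copied from the source):

```python
# One-block offsets keyed by the face strings used in build packets.
FACE_OFFSETS = {
    "-y": (0, -1, 0), "+y": (0, 1, 0),
    "-z": (0, 0, -1), "+z": (0, 0, 1),
    "-x": (-1, 0, 0), "+x": (1, 0, 0),
    "noop": (0, 0, 0),
}

def adjust_coords_for_face(coords, face):
    """Shift block coords one step out of the clicked face: the cell
    where a newly placed block ends up."""
    x, y, z = coords
    dx, dy, dz = FACE_OFFSETS[face]
    return x + dx, y + dy, z + dz
```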
def equip(self, container):
self.player.equipped = container.slot
# Inform everyone about the item the player is holding now.
item = self.player.inventory.holdables[self.player.equipped]
if item is None:
# Empty slot. Use signed short -1.
primary, secondary = -1, 0
else:
primary, secondary, count = item
packet = make_packet("entity-equipment",
eid=self.player.eid,
slot=0,
primary=primary,
count=1,
secondary=secondary
)
self.factory.broadcast_for_others(packet, self)
def pickup(self, container):
self.factory.give((container.x, container.y, container.z),
(container.primary, container.secondary), container.count)
def animate(self, container):
# Broadcast the animation of the entity to everyone else. Only the
# swing-arm animation is sent by Notchian clients.
packet = make_packet("animate",
eid=self.player.eid,
animation=container.animation
)
self.factory.broadcast_for_others(packet, self)
def wclose(self, container):
wid = container.wid
if wid == 0:
# WID 0 is reserved for the client inventory.
pass
elif wid in self.windows:
w = self.windows.pop(wid)
w.close()
else:
self.error("WID %d doesn't exist." % wid)
def waction(self, container):
wid = container.wid
if wid in self.windows:
w = self.windows[wid]
result = w.action(container.slot, container.button,
container.token, container.shift,
container.primary)
self.write_packet("window-token", wid=wid, token=container.token,
acknowledged=result)
else:
self.error("WID %d doesn't exist." % wid)
def wcreative(self, container):
"""
A slot was altered in creative mode.
"""
# XXX Sometimes the container doesn't contain all of this information.
# What then?
applied = self.inventory.creative(container.slot, container.primary,
container.secondary, container.count)
if applied:
# Inform other players about changes to this player's equipment.
equipped_slot = self.player.equipped + 36
if container.slot == equipped_slot:
packet = make_packet("entity-equipment",
eid=self.player.eid,
# XXX why 0? why not the actual slot?
slot=0,
primary=container.primary,
count=1,
secondary=container.secondary,
)
self.factory.broadcast_for_others(packet, self)
def shoot_arrow(self):
# TODO 1. Create arrow entity: arrow = Arrow(self.factory, self.player)
# 2. Register within the factory: self.factory.register_entity(arrow)
# 3. Run it: arrow.run()
pass
def sign(self, container):
bigx, smallx, bigz, smallz = split_coords(container.x, container.z)
try:
chunk = self.chunks[bigx, bigz]
except KeyError:
self.error("Couldn't handle sign in chunk (%d, %d)!" % (bigx, bigz))
return
if (smallx, container.y, smallz) in chunk.tiles:
new = False
s = chunk.tiles[smallx, container.y, smallz]
else:
new = True
s = Sign(smallx, container.y, smallz)
chunk.tiles[smallx, container.y, smallz] = s
s.text1 = container.line1
s.text2 = container.line2
s.text3 = container.line3
s.text4 = container.line4
chunk.dirty = True
# The best part of a sign isn't making one, it's showing everybody
# else on the server that you did.
packet = make_packet("sign", container)
self.factory.broadcast_for_chunk(packet, bigx, bigz)
# Run sign hooks.
for hook in self.sign_hooks:
hook.sign_hook(self.factory, chunk, container.x, container.y,
container.z, [s.text1, s.text2, s.text3, s.text4], new)
def complete(self, container):
"""
Attempt to tab-complete user names.
"""
needle = container.autocomplete
usernames = self.factory.protocols.keys()
results = complete(needle, usernames)
self.write_packet("tab", autocomplete=results)
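The `complete()` helper used above is, at heart, a prefix match over the connected usernames. A sketch under the assumption that matches are returned NUL-joined for the tab packet (an assumption about the wire format, not taken from the source):

```python
def complete(needle, haystack):
    """Case-insensitive prefix match over known usernames.

    Returns the matches NUL-joined, assuming that is the separator
    the tab-completion packet expects.
    """
    matches = [name for name in haystack
               if name.lower().startswith(needle.lower())]
    return "\u0000".join(matches)
```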
def settings_packet(self, container):
"""
Acknowledge a change of settings and update chunk distance.
"""
super(BravoProtocol, self).settings_packet(container)
self.update_chunks()
def disable_chunk(self, x, z):
key = x, z
log.msg("Disabling chunk %d, %d" % key)
if key not in self.chunks:
log.msg("...But the chunk wasn't loaded!")
return
# Remove the chunk from cache.
chunk = self.chunks.pop(key)
eids = [e.eid for e in chunk.entities]
self.write_packet("destroy", count=len(eids), eid=eids)
# Clear chunk data on the client.
self.write_packet("chunk", x=x, z=z, continuous=False, primary=0x0,
add=0x0, data="")
def enable_chunk(self, x, z):
"""
Request a chunk.
This function will asynchronously obtain the chunk, and send it on the
wire.
:returns: `Deferred` that will be fired when the chunk is obtained,
with no arguments
"""
log.msg("Enabling chunk %d, %d" % (x, z))
if (x, z) in self.chunks:
log.msg("...But the chunk was already loaded!")
return succeed(None)
d = self.factory.world.request_chunk(x, z)
@d.addCallback
def cb(chunk):
self.chunks[x, z] = chunk
return chunk
d.addCallback(self.send_chunk)
return d
def send_chunk(self, chunk):
log.msg("Sending chunk %d, %d" % (chunk.x, chunk.z))
packet = chunk.save_to_packet()
self.transport.write(packet)
for entity in chunk.entities:
packet = entity.save_to_packet()
self.transport.write(packet)
for entity in chunk.tiles.itervalues():
if entity.name == "Sign":
packet = entity.save_to_packet()
self.transport.write(packet)
def send_initial_chunk_and_location(self):
"""
Send the initial chunks and location.
This method sends more than one chunk; since Beta 1.2, it must send
nearly fifty chunks before the location can be safely sent.
"""
# Disable located hooks. We'll re-enable them at the end.
self.state = STATE_AUTHENTICATED
log.msg("Initial, position %d, %d, %d" % self.location.pos)
x, y, z = self.location.pos.to_block()
bigx, smallx, bigz, smallz = split_coords(x, z)
# Send the chunk that the player will stand on. The other chunks are
# not so important. There *used* to be a bug, circa Beta 1.2, that
# required lots of surrounding geometry to be present, but that's been
# fixed.
d = self.enable_chunk(bigx, bigz)
# What to do if we can't load a given chunk? Just kick 'em.
d.addErrback(lambda fail: self.error("Couldn't load a chunk... :c"))
# Don't dare send more chunks beyond the initial one until we've
# spawned. Once we've spawned, set our status to LOCATED and then
# update_location() will work.
@d.addCallback
def located(none):
self.state = STATE_LOCATED
# Ensure that we're above-ground.
self.ascend(0)
d.addCallback(lambda none: self.update_location())
d.addCallback(lambda none: self.position_changed())
# Send the MOTD.
if self.motd:
@d.addCallback
def motd(none):
self.write_packet("chat",
message=self.motd.replace("<tagline>", get_motd()))
# Finally, start the secondary chunk loop.
d.addCallback(lambda none: self.update_chunks())
def update_chunks(self):
# Don't send chunks unless we're located.
if self.state != STATE_LOCATED:
return
- x, y, z = self.location.pos.to_block()
- x, chaff, z, chaff = split_coords(x, z)
+ x, z = self.location.pos.to_chunk()
# These numbers come from a couple spots, including minecraftwiki, but
# I verified them experimentally using torches and pillars to mark
# distances on each setting. ~ C.
distances = {
"tiny": 2,
"short": 4,
"far": 16,
}
radius = distances.get(self.settings.distance, 8)
new = set(circling(x, z, radius))
old = set(self.chunks.iterkeys())
added = new - old
discarded = old - new
# Perhaps some explanation is in order.
# The cooperate() function iterates over the iterable it is fed,
# without tying up the reactor, by yielding after each iteration. The
# inner part of the generator expression generates all of the chunks
# around the currently needed chunk, and it sorts them by distance to
# the current chunk. The end result is that we load chunks one-by-one,
# nearest to furthest, without stalling other clients.
if self.chunk_tasks:
for task in self.chunk_tasks:
try:
task.stop()
except (TaskDone, TaskFailed):
pass
to_enable = sorted_by_distance(added, x, z)
self.chunk_tasks = [
cooperate(self.enable_chunk(i, j) for i, j in to_enable),
cooperate(self.disable_chunk(i, j) for i, j in discarded),
]
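The set arithmetic above is the heart of chunk streaming: compute the wanted set around the new position, diff it against what is already loaded, and enable or disable only the difference. A standalone sketch of that diffing; note the square-radius ``circling`` here is an illustrative assumption, not Bravo's actual ``circling`` implementation:

```python
# Sketch of update_chunks()'s set arithmetic. The square radius used by
# this stand-in circling() is an assumption for illustration only.
def circling(x, z, radius):
    for i in range(x - radius, x + radius + 1):
        for j in range(z - radius, z + radius + 1):
            yield i, j

old = set(circling(0, 0, 1))      # chunks loaded around the old position
new = set(circling(1, 0, 1))      # chunks wanted around the new position
added = new - old                 # chunks to enable
discarded = old - new             # chunks to disable
print(sorted(added))      # [(2, -1), (2, 0), (2, 1)]
print(sorted(discarded))  # [(-1, -1), (-1, 0), (-1, 1)]
```

Moving one chunk east only touches the leading and trailing columns, which is why streaming stays cheap even at large view distances.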
def update_time(self):
time = int(self.factory.time)
self.write_packet("time", timestamp=time, time=time % 24000)
def connectionLost(self, reason):
"""
Cleanup after a lost connection.
Most of the time, these connections are lost cleanly; we don't have
any cleanup to do in the unclean case since clients don't have any
kind of pending state which must be recovered.
Remember, the connection can be lost before identification and
authentication, so ``self.username`` and ``self.player`` can be None.
"""
if self.username and self.player:
self.factory.world.save_player(self.username, self.player)
if self.player:
self.factory.destroy_entity(self.player)
packet = make_packet("destroy", count=1, eid=[self.player.eid])
self.factory.broadcast(packet)
if self.username:
packet = make_packet("players", name=self.username, online=False,
ping=0)
self.factory.broadcast(packet)
self.factory.chat("%s has left the game." % self.username)
self.factory.teardown_protocol(self)
# We are now torn down. After this point, there will be no more
# factory stuff, just our own personal stuff.
del self.factory
if self.time_loop:
self.time_loop.stop()
if self.chunk_tasks:
for task in self.chunk_tasks:
try:
task.stop()
except (TaskDone, TaskFailed):
pass
diff --git a/bravo/location.py b/bravo/location.py
index 3f15d0e..437e2fa 100644
--- a/bravo/location.py
+++ b/bravo/location.py
@@ -1,247 +1,252 @@
from __future__ import division
from collections import namedtuple
from copy import copy
from math import atan2, cos, degrees, radians, pi, sin, sqrt
import operator
from construct import Container
from bravo.beta.packets import make_packet
from bravo.utilities.maths import clamp
def _combinator(op):
def f(self, other):
return self._replace(x=op(self.x, other.x), y=op(self.y, other.y),
z=op(self.z, other.z))
return f
class Position(namedtuple("Position", "x, y, z")):
"""
The coordinates pointing to an entity.
Positions are *always* stored as integer absolute pixel coordinates.
"""
__add__ = _combinator(operator.add)
__sub__ = _combinator(operator.sub)
__mul__ = _combinator(operator.mul)
__div__ = _combinator(operator.div)
@classmethod
def from_player(cls, x, y, z):
"""
Create a ``Position`` from floating-point block coordinates.
"""
return cls(int(x * 32), int(y * 32), int(z * 32))
def to_player(self):
"""
Return this position as floating-point block coordinates.
"""
return self.x / 32, self.y / 32, self.z / 32
def to_block(self):
"""
Return this position as block coordinates.
"""
return int(self.x // 32), int(self.y // 32), int(self.z // 32)
+ def to_chunk(self):
+ return int(self.x // 32 // 16), int(self.z // 32 // 16)
+
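The fixed-point arithmetic used by ``Position`` (32 pixels per block, 16 blocks per chunk) can be checked in isolation. A standalone sketch mirroring ``to_block()`` and the new ``to_chunk()``, where floor division keeps negative coordinates in the correct chunk:

```python
# Standalone sketch of Position's fixed-point coordinate maths:
# absolute pixels, 32 pixels per block, 16 blocks per chunk.
def to_block(px, py, pz):
    return px // 32, py // 32, pz // 32

def to_chunk(px, pz):
    return px // 32 // 16, pz // 32 // 16

# One pixel west of the origin is in block -1, which is in chunk -1.
print(to_block(-1, 64, -1))   # (-1, 2, -1)
print(to_chunk(-1, -1))       # (-1, -1)
```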
def distance(self, other):
"""
Return the distance between this position and another, in absolute
pixels.
"""
dx = (self.x - other.x)**2
dy = (self.y - other.y)**2
dz = (self.z - other.z)**2
return int(sqrt(dx + dy + dz))
def heading(self, other):
"""
Return the heading from this position to another, in radians.
This is a wrapper for the common atan2() expression found in games,
meant to help encapsulate semantics and keep copy-paste errors from
happening.
"""
theta = atan2(self.z - other.z, self.x - other.x) + pi / 2
if theta < 0:
theta += pi * 2
return theta
class Orientation(namedtuple("Orientation", "theta, phi")):
"""
The angles corresponding to the heading of an entity.
Theta and phi are very much like the theta and phi of spherical
coordinates, except that phi's zero is perpendicular to the XZ-plane
rather than pointing straight up or straight down.
Orientation is stored in floating-point radians, for simplicity of
computation. Unfortunately, no wire protocol speaks radians, so several
conversion methods are provided for sanity and convenience.
The ``from_degs()`` and ``to_degs()`` methods provide integer degrees.
This form is called "yaw and pitch" by protocol documentation.
"""
@classmethod
def from_degs(cls, yaw, pitch):
"""
Create an ``Orientation`` from integer degrees.
"""
return cls(radians(yaw) % (pi * 2), radians(pitch))
def to_degs(self):
"""
Return this orientation as integer degrees.
"""
return int(round(degrees(self.theta))), int(round(degrees(self.phi)))
def to_fracs(self):
"""
Return this orientation as fractions of a byte.
"""
yaw = int(self.theta * 255 / (2 * pi)) % 256
pitch = int(self.phi * 255 / (2 * pi)) % 256
return yaw, pitch
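The ``to_fracs()`` packing above scales radians onto a single byte, 256 steps per full turn. The same formula extracted as a free function:

```python
from math import pi, radians

# Sketch of Orientation.to_fracs(): radians scaled onto one byte.
def to_fracs(theta, phi):
    yaw = int(theta * 255 / (2 * pi)) % 256
    pitch = int(phi * 255 / (2 * pi)) % 256
    return yaw, pitch

# A quarter turn lands a quarter of the way around the byte.
print(to_fracs(radians(90), 0.0))  # (63, 0)
```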
class Location(object):
"""
The position and orientation of an entity.
"""
def __init__(self):
# Position in pixels.
self.pos = Position(0, 0, 0)
# Start with a relatively sane stance.
self.stance = 1.0
# Orientation, in radians.
self.ori = Orientation(0.0, 0.0)
# Whether we are in the air.
self.grounded = False
@classmethod
def at_block(cls, x, y, z):
"""
Pinpoint a location at a certain block.
This constructor is intended to aid in pinpointing locations at a
specific block rather than forcing users to do the pixel<->block maths
themselves. Admittedly, the maths in question aren't hard, but there's
no reason to avoid this encapsulation.
"""
location = cls()
location.pos = Position(x * 32, y * 32, z * 32)
return location
def __repr__(self):
return "<Location(%s, (%d, %d (+%.6f), %d), (%.2f, %.2f))>" % (
"grounded" if self.grounded else "midair", self.pos.x, self.pos.y,
self.stance - self.pos.y, self.pos.z, self.ori.theta,
self.ori.phi)
__str__ = __repr__
def clamp(self):
"""
Force this location to be sane.
Forces the position and orientation to be sane, then fixes up
location-specific things, like stance.
:returns: bool indicating whether this location had to be altered
"""
clamped = False
y = self.pos.y
# Clamp Y. We take precautions here and forbid things to go up past
# the top of the world; this tends to strand entities up in the sky
# where they cannot get down. We also forbid entities from falling
# past bedrock.
+ # TODO: Fix me, I'm broken
if not (32 * 1) <= y:
y = max(y, 32 * 1)
self.pos = self.pos._replace(y=y)
clamped = True
# Stance is the current jumping position, plus a small offset of
# around 0.1. In the Alpha server, it must be between 0.1 and 1.65, or
# the anti-grounded code kicks the client. In the Beta server, though,
# the clamp is different. Experimentally, the stance can range from
# 1.5 (crouching) to 2.4 (jumping). At this point, we enforce some
# sanity on our client, and force the stance to a reasonable value.
fy = y / 32
if not 1.5 < (self.stance - fy) < 2.4:
# Standard standing stance is 1.62.
self.stance = fy + 1.62
clamped = True
return clamped
def save_to_packet(self):
"""
Returns a position/look/grounded packet.
"""
# Get our position.
- x, y, z = self.pos.to_block()
+ x, y, z = self.pos.to_player()
# Grab orientation.
yaw, pitch = self.ori.to_degs()
+ # Note: When this packet is sent from the server, the 'y' and 'stance' fields are swapped.
position = Container(x=x, y=self.stance, z=z, stance=y)
orientation = Container(rotation=yaw, pitch=pitch)
grounded = Container(grounded=self.grounded)
packet = make_packet("location", position=position,
orientation=orientation, grounded=grounded)
return packet
def distance(self, other):
"""
Return the distance between this location and another location.
"""
return self.pos.distance(other.pos)
def in_front_of(self, distance):
"""
Return a ``Location`` a certain number of blocks in front of this
position.
The orientation of the returned location is identical to this
position's orientation.
:param int distance: the number of blocks by which to offset this
position
"""
other = copy(self)
distance *= 32
# Do some trig to put the other location a few blocks ahead of the
# player in the direction they are facing. Note that all three
# coordinates are "misnamed;" the unit circle actually starts at (0,
# 1) and goes *backwards* towards (-1, 0).
x = int(self.pos.x - distance * sin(self.ori.theta))
z = int(self.pos.z + distance * cos(self.ori.theta))
other.pos = other.pos._replace(x=x, z=z)
return other
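The trigonometry in ``in_front_of()`` follows the game's backwards unit circle noted in the comment: it starts at (0, 1) and runs in reverse, hence ``-sin`` for x and ``+cos`` for z. The same offset maths on bare coordinates:

```python
from math import sin, cos, pi

# Sketch of in_front_of()'s offset maths on pixel coordinates.
def in_front_of(x, z, theta, blocks):
    distance = blocks * 32            # blocks -> pixels
    return (int(x - distance * sin(theta)),
            int(z + distance * cos(theta)))

# theta = 0 faces +z, so one block ahead is 32 pixels along z.
print(in_front_of(0, 0, 0.0, 1))     # (0, 32)
print(in_front_of(0, 0, pi / 2, 1))  # (-32, 0)
```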
diff --git a/bravo/plugins/window_hooks.py b/bravo/plugins/window_hooks.py
index 2c0a603..73f7be8 100644
--- a/bravo/plugins/window_hooks.py
+++ b/bravo/plugins/window_hooks.py
@@ -1,506 +1,510 @@
from twisted.internet.defer import inlineCallbacks, returnValue
from twisted.python import log
from zope.interface import implements
from bravo.beta.packets import make_packet
from bravo.blocks import blocks
from bravo.entity import Chest as ChestTile, Furnace as FurnaceTile
from bravo.ibravo import (IWindowOpenHook, IWindowClickHook, IWindowCloseHook,
IPreBuildHook, IDigHook)
from bravo.inventory.windows import (WorkbenchWindow, ChestWindow,
LargeChestWindow, FurnaceWindow)
from bravo.location import Location
from bravo.utilities.building import chestsAround
from bravo.utilities.coords import adjust_coords_for_face, split_coords
-def drop_items(factory, location, items, y_offset = 0):
+def drop_items(factory, location, items, y_offset=0):
"""
Loop over items and drop all of them
:param location: Location() or tuple (x, y, z)
:param items: list of items
"""
# XXX why am I polymorphic? :T
if type(location) == Location:
- x, y, z = location.x, location.y, location.z
+ x, y, z = location.pos.x, location.pos.y, location.pos.z
+ y += int(y_offset * 32) + 16
+ coords = (x, y, z)
else:
x, y, z = location
- y += y_offset
- coords = (int(x * 32) + 16, int(y * 32) + 16, int(z * 32) + 16)
+ y += y_offset
+ coords = (int(x * 32) + 16, int(y * 32) + 16, int(z * 32) + 16)
for item in items:
if item is None:
continue
factory.give(coords, (item[0], item[1]), item[2])
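For the tuple branch above, the block-to-pixel conversion scales by 32 and offsets by 16 so dropped items appear at the centre of the block rather than its corner. That arithmetic in isolation:

```python
# Sketch of drop_items()'s block -> pixel-centre maths: scale by 32
# pixels per block, then offset by 16 to land in the block's middle.
def block_centre(x, y, z):
    return int(x * 32) + 16, int(y * 32) + 16, int(z * 32) + 16

print(block_centre(1, 64, -2))  # (48, 2064, -48)
```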
def processClickMessage(factory, player, window, container):
# Clicked out of the window
+ # TODO: change packet's slot to signed
if container.slot == 64537: # -999
items = window.drop_selected(bool(container.button))
drop_items(factory, player.location.in_front_of(1), items, 1)
player.write_packet("window-token", wid=container.wid,
token=container.token, acknowledged=True)
return
# perform selection action
selected = window.select(container.slot, bool(container.button),
bool(container.shift))
if selected:
# The Notchian server does not send any packets here because both server
# and client use the same algorithm for inventory actions. I did my best
# to make bravo's inventory behave the same way, but there is a chance
# some differences still exist, so we send the whole window content to
# the client to make sure it displays the inventory we have on the server.
packet = window.save_to_packet()
player.transport.write(packet)
# TODO: send packet for 'item on cursor'.
equipped_slot = player.player.equipped + 36
# Inform other players about changes to this player's equipment.
if container.wid == 0 and (container.slot in range(5, 9) or
container.slot == equipped_slot):
# Currently equipped item changes.
if container.slot == equipped_slot:
item = player.player.inventory.holdables[player.player.equipped]
slot = 0
# Armor changes.
else:
item = player.player.inventory.armor[container.slot - 5]
# Order of slots is reversed in the equipment packet.
slot = 4 - (container.slot - 5)
if item is None:
- primary, secondary = 65535, 0
+ primary, secondary, count = -1, 0, 0
else:
primary, secondary, count = item
packet = make_packet("entity-equipment",
eid=player.player.eid,
slot=slot,
primary=primary,
- secondary=secondary
+ secondary=secondary,
+ count=0
)
factory.broadcast_for_others(packet, player)
# If the window is SharedWindow for tile...
if window.coords is not None:
# ...and the window has dirty slots...
if len(window.dirty_slots):
# ...check if someone else...
for p in factory.protocols.itervalues():
if p is player:
continue
# ...has a window open for the same tile...
if len(p.windows) and p.windows[-1].coords == window.coords:
# ...and notify about changes...
packets = p.windows[-1].packets_for_dirty(window.dirty_slots)
p.transport.write(packets)
window.dirty_slots.clear()
# ...and mark the chunk dirty
bigx, smallx, bigz, smallz, y = window.coords
d = factory.world.request_chunk(bigx, bigz)
@d.addCallback
def mark_chunk_dirty(chunk):
chunk.dirty = True
return True
class Windows(object):
'''
Generic window hooks
NOTE: ``player`` argument in methods is a protocol. Not Player class!
'''
implements(IWindowClickHook, IWindowCloseHook)
def __init__(self, factory):
self.factory = factory
def close_hook(self, player, container):
"""
The ``player`` is a Player's protocol
The ``container`` is a 0x65 message
"""
if container.wid == 0:
return
if player.windows and player.windows[-1].wid == container.wid:
window = player.windows.pop()
items, packets = window.close()
# No need to send the packet, as the window is already closed on the client.
# Packets work only for the player's inventory.
drop_items(self.factory, player.location.in_front_of(1), items, 1)
else:
player.error("Couldn't close non-current window %d" % container.wid)
def click_hook(self, player, container):
"""
The ``player`` is a Player's protocol
The ``container`` is a 0x66 message
"""
if container.wid == 0:
# Player's inventory is a special case and processed separately
return False
if player.windows and player.windows[-1].wid == container.wid:
window = player.windows[-1]
else:
player.error("Couldn't find window %d" % container.wid)
return False
processClickMessage(self.factory, player, window, container)
return True
name = "windows"
before = tuple()
after = ("inventory",)
class Inventory(object):
'''
Player's inventory hooks
'''
implements(IWindowClickHook, IWindowCloseHook)
def __init__(self, factory):
self.factory = factory
def close_hook(self, player, container):
"""
The ``player`` is a Player's protocol
The ``container`` is a 0x65 message
"""
if container.wid != 0:
# not inventory window
return
# NOTE: player is a protocol. Not Player class!
items, packets = player.inventory.close() # it's window from protocol
if packets:
player.transport.write(packets)
drop_items(self.factory, player.location.in_front_of(1), items, 1)
def click_hook(self, player, container):
"""
The ``player`` is a Player's protocol
The ``container`` is a 0x66 message
"""
if container.wid != 0:
# not inventory window
return False
processClickMessage(self.factory, player, player.inventory, container)
return True
name = "inventory"
before = tuple()
after = tuple()
class Workbench(object):
implements(IWindowOpenHook)
def __init__(self, factory):
pass
def open_hook(self, player, container, block):
"""
The ``player`` is a Player's protocol
The ``container`` is a 0x64 message
The ``block`` is the block we are trying to open
:returns: None or window object
"""
if block != blocks["workbench"].slot:
return None
window = WorkbenchWindow(player.wid, player.player.inventory)
player.windows.append(window)
return window
name = "workbench"
before = tuple()
after = tuple()
class Furnace(object):
implements(IWindowOpenHook, IWindowClickHook, IPreBuildHook, IDigHook)
def __init__(self, factory):
self.factory = factory
def get_furnace_tile(self, chunk, coords):
try:
furnace = chunk.tiles[coords]
if type(furnace) != FurnaceTile:
raise KeyError
except KeyError:
x, y, z = coords
x = chunk.x * 16 + x
z = chunk.z * 16 + z
log.msg("Furnace at (%d, %d, %d) has no tile or a tile type mismatch" %
(x, y, z))
furnace = None
return furnace
@inlineCallbacks
def open_hook(self, player, container, block):
"""
The ``player`` is a Player's protocol
The ``container`` is a 0x64 message
The ``block`` is the block we are trying to open
:returns: None or window object
"""
if block not in (blocks["furnace"].slot, blocks["burning-furnace"].slot):
returnValue(None)
bigx, smallx, bigz, smallz = split_coords(container.x, container.z)
chunk = yield self.factory.world.request_chunk(bigx, bigz)
furnace = self.get_furnace_tile(chunk, (smallx, container.y, smallz))
if furnace is None:
returnValue(None)
coords = bigx, smallx, bigz, smallz, container.y
window = FurnaceWindow(player.wid, player.player.inventory,
furnace.inventory, coords)
player.windows.append(window)
returnValue(window)
def click_hook(self, player, container):
"""
The ``player`` is a Player's protocol
The ``container`` is a 0x66 message
"""
if container.wid == 0:
return # skip inventory window
elif player.windows:
window = player.windows[-1]
else:
# click message but no window... hmm...
return
if type(window) != FurnaceWindow:
return
# inform the tile that the furnace's content probably changed
bigx, x, bigz, z, y = window.coords
d = self.factory.world.request_chunk(bigx, bigz)
@d.addCallback
def on_change(chunk):
furnace = self.get_furnace_tile(chunk, (x, y, z))
if furnace is not None:
furnace.changed(self.factory, window.coords)
@inlineCallbacks
def pre_build_hook(self, player, builddata):
item, metadata, x, y, z, face = builddata
if item.slot != blocks["furnace"].slot:
returnValue((True, builddata, False))
x, y, z = adjust_coords_for_face((x, y, z), face)
bigx, smallx, bigz, smallz = split_coords(x, z)
# the furnace cannot be oriented up or down
if face == "-y" or face == "+y":
orientation = ('+x', '+z', '-x', '-z')[((int(player.location.yaw) \
- 45 + 360) % 360) / 90]
metadata = blocks["furnace"].orientation(orientation)
builddata = builddata._replace(metadata=metadata)
# Not much to do, just tell the chunk about this tile.
chunk = yield self.factory.world.request_chunk(bigx, bigz)
chunk.tiles[smallx, y, smallz] = FurnaceTile(smallx, y, smallz)
returnValue((True, builddata, False))
def dig_hook(self, chunk, x, y, z, block):
# NOTE: x, y, z - coords in chunk
if block.slot not in (blocks["furnace"].slot, blocks["burning-furnace"].slot):
return
furnaces = self.factory.furnace_manager
coords = (x, y, z)
furnace = self.get_furnace_tile(chunk, coords)
if furnace is None:
return
# Inform FurnaceManager the furnace was removed
furnaces.remove((chunk.x, x, chunk.z, z, y))
# Block coordinates
x = chunk.x * 16 + x
z = chunk.z * 16 + z
furnace = furnace.inventory
drop_items(self.factory, (x, y, z),
furnace.crafted + furnace.crafting + furnace.fuel)
del(chunk.tiles[coords])
name = "furnace"
before = ("windows",) # plugins that come before this plugin
after = tuple()
class Chest(object):
implements(IWindowOpenHook, IPreBuildHook, IDigHook)
def __init__(self, factory):
self.factory = factory
def get_chest_tile(self, chunk, coords):
try:
chest = chunk.tiles[coords]
if type(chest) != ChestTile:
raise KeyError
except KeyError:
x, y, z = coords
x = chunk.x * 16 + x
z = chunk.z * 16 + z
log.msg("Chest at (%d, %d, %d) has no tile or a tile type mismatch" %
(x, y, z))
chest = None
return chest
@inlineCallbacks
def open_hook(self, player, container, block):
"""
The ``player`` is a Player's protocol
The ``container`` is a 0x64 message
The ``block`` is the block we are trying to open
:returns: None or window object
"""
if block != blocks["chest"].slot:
returnValue(None)
bigx, smallx, bigz, smallz = split_coords(container.x, container.z)
chunk = yield self.factory.world.request_chunk(bigx, bigz)
chests_around = chestsAround(self.factory,
(container.x, container.y, container.z))
chests_around_num = len(chests_around)
if chests_around_num == 0: # small chest
chest = self.get_chest_tile(chunk, (smallx, container.y, smallz))
if chest is None:
returnValue(None)
coords = bigx, smallx, bigz, smallz, container.y
window = ChestWindow(player.wid, player.player.inventory,
chest.inventory, coords)
elif chests_around_num == 1: # large chest
# process second chest coordinates
x2, y2, z2 = chests_around[0]
bigx2, smallx2, bigz2, smallz2 = split_coords(x2, z2)
if bigx == bigx2 and bigz == bigz2:
# both chest blocks are in same chunk
chunk2 = chunk
else:
chunk2 = yield self.factory.world.request_chunk(bigx2, bigz2)
chest1 = self.get_chest_tile(chunk, (smallx, container.y, smallz))
chest2 = self.get_chest_tile(chunk2, (smallx2, container.y, smallz2))
if chest1 is None or chest2 is None:
returnValue(None)
c1 = bigx, smallx, bigz, smallz, container.y
c2 = bigx2, smallx2, bigz2, smallz2, container.y
# We shall properly order chest inventories
if c1 < c2:
window = LargeChestWindow(player.wid, player.player.inventory,
chest1.inventory, chest2.inventory, c1)
else:
window = LargeChestWindow(player.wid, player.player.inventory,
chest2.inventory, chest1.inventory, c2)
else:
log.msg("Chest at (%d, %d, %d) has three chests connected" %
(container.x, container.y, container.z))
returnValue(None)
player.windows.append(window)
returnValue(window)
@inlineCallbacks
def pre_build_hook(self, player, builddata):
item, metadata, x, y, z, face = builddata
if item.slot != blocks["chest"].slot:
returnValue((True, builddata, False))
x, y, z = adjust_coords_for_face((x, y, z), face)
bigx, smallx, bigz, smallz = split_coords(x, z)
# chest orientation according to the player's position
if face == "-y" or face == "+y":
orientation = ('+x', '+z', '-x', '-z')[((int(player.location.yaw) \
- 45 + 360) % 360) / 90]
else:
orientation = face
# Chests have some restrictions on building:
# you cannot connect more than two chests. (notchian)
ccs = chestsAround(self.factory, (x, y, z))
ccn = len(ccs)
if ccn > 1:
# cannot build three or more connected chests
returnValue((False, builddata, True))
chunk = yield self.factory.world.request_chunk(bigx, bigz)
if ccn == 0:
metadata = blocks["chest"].orientation(orientation)
elif ccn == 1:
# check gonna-be-connected chest is not connected already
n = len(chestsAround(self.factory, ccs[0]))
if n != 0:
returnValue((False, builddata, True))
# align both blocks correctly (since 1.8)
# get second block
x2, y2, z2 = ccs[0]
bigx2, smallx2, bigz2, smallz2 = split_coords(x2, z2)
# new chest orientation axis according to the block positions
pair = x - x2, z - z2
ornt = {(0, 1): "x", (0, -1): "x",
(1, 0): "z", (-1, 0): "z"}[pair]
# if player is faced another direction, fix it
if orientation[1] != ornt:
# same sign with proper orientation
# XXX Probably notchian logic is different here
# but this one works well enough
orientation = orientation[0] + ornt
metadata = blocks["chest"].orientation(orientation)
# update second block's metadata
if bigx == bigx2 and bigz == bigz2:
# both blocks are in same chunk
chunk2 = chunk
else:
chunk2 = yield self.factory.world.request_chunk(bigx2, bigz2)
chunk2.set_metadata((smallx2, y2, smallz2), metadata)
# Not much to do, just tell the chunk about this tile.
chunk.tiles[smallx, y, smallz] = ChestTile(smallx, y, smallz)
builddata = builddata._replace(metadata=metadata)
returnValue((True, builddata, False))
def dig_hook(self, chunk, x, y, z, block):
if block.slot != blocks["chest"].slot:
return
coords = (x, y, z)
chest = self.get_chest_tile(chunk, coords)
if chest is None:
return
# Block coordinates
x = chunk.x * 16 + x
z = chunk.z * 16 + z
chest = chest.inventory
drop_items(self.factory, (x, y, z), chest.storage)
del(chunk.tiles[coords])
name = "chest"
before = ()
after = tuple()
|
bravoserver/bravo
|
2f3cc7fa4c7c0d6e27d94adf60d88ed3e50fc18a
|
Packet unittests: - 0x17: Add vehicle/object - 0x28: Entity metadata
|
diff --git a/bravo/tests/beta/test_packets.py b/bravo/tests/beta/test_packets.py
index 043aee3..02b0bff 100644
--- a/bravo/tests/beta/test_packets.py
+++ b/bravo/tests/beta/test_packets.py
@@ -1,43 +1,81 @@
from unittest import TestCase
-from bravo.beta.packets import simple
+from bravo.beta.packets import simple, parse_packets
class TestPacketBuilder(TestCase):
def setUp(self):
self.cls = simple("Test", ">BH", "unit, test")
def test_trivial(self):
pass
def test_parse_valid(self):
data = "\x2a\x00\x20"
result, offset = self.cls.parse(data, 0)
self.assertEqual(result.unit, 42)
self.assertEqual(result.test, 32)
self.assertEqual(offset, 3)
def test_parse_short(self):
data = "\x2a\x00"
result, offset = self.cls.parse(data, 0)
self.assertFalse(result)
self.assertEqual(offset, 1)
def test_parse_extra(self):
data = "\x2a\x00\x20\x00"
result, offset = self.cls.parse(data, 0)
self.assertEqual(result.unit, 42)
self.assertEqual(result.test, 32)
self.assertEqual(offset, 3)
def test_parse_offset(self):
data = "\x00\x2a\x00\x20"
result, offset = self.cls.parse(data, 1)
self.assertEqual(result.unit, 42)
self.assertEqual(result.test, 32)
self.assertEqual(offset, 4)
def test_build(self):
packet = self.cls(42, 32)
result = packet.build()
self.assertEqual(result, "\x2a\x00\x20")
+
+
+class TestParsePackets(TestCase):
+
+ def do_parse(self, raw):
+ packets, leftover = parse_packets(raw)
+ self.assertEqual(len(packets), 1, 'Message not parsed')
+ self.assertEqual(len(leftover), 0, 'Bytes left after parsing')
+ return packets[0][1]
+
+ def test_parse_0x17(self): # Add vehicle/object
+ # TODO: some fields don't match mc3p; check who is wrong
+ raw = '\x17\x00\x00\x1f\xd6\x02\xff\xff\xed?\x00\x00\x08\x84\xff\xff\xfaD\x00\xed\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00'
+ msg = self.do_parse(raw)
+ self.assertEqual(msg.eid, 8150)
+ self.assertEqual(msg.type, 'item_stack')
+ self.assertEqual(msg.x, -4801)
+ self.assertEqual(msg.y, 2180)
+ self.assertEqual(msg.z, -1468)
+ self.assertEqual(msg.pitch, 0)
+ self.assertEqual(msg.yaw, 237)
+ self.assertEqual(msg.data, 1)
+ self.assertEqual(msg.speed.x, 0)
+ self.assertEqual(msg.speed.y, 0)
+ self.assertEqual(msg.speed.z, 0)
+
+ def test_parse_0x28(self): # Entity metadata
+ raw = '(\x00\x00\x1f\xd6\x00\x00!\x01,\xaa\x01\x06\x02\x00\x00\xff\xff\x7f'
+ msg = self.do_parse(raw)
+ self.assertEqual(msg.eid, 8150)
+ self.assertEqual(msg.metadata[0].type, 'byte')
+ self.assertEqual(msg.metadata[0].value, 0)
+ self.assertEqual(msg.metadata[1].type, 'short')
+ self.assertEqual(msg.metadata[1].value, 300)
+ self.assertEqual(msg.metadata[10].type, 'slot')
+ self.assertEqual(msg.metadata[10].value.count, 2)
+ self.assertEqual(msg.metadata[10].value.primary, 262)
+ self.assertEqual(msg.metadata[10].value.secondary, 0)
|
bravoserver/bravo
|
fe78153b6205ecef2e06422a075817383770718d
|
Bump up to protocol version 61.
|
diff --git a/bravo/beta/packets.py b/bravo/beta/packets.py
index af9c454..7ac3f0a 100644
--- a/bravo/beta/packets.py
+++ b/bravo/beta/packets.py
@@ -1,897 +1,990 @@
from collections import namedtuple
from construct import Struct, Container, Embed, Enum, MetaField
from construct import MetaArray, If, Switch, Const, Peek, Magic
from construct import OptionalGreedyRange, RepeatUntil
from construct import Flag, PascalString, Adapter
from construct import UBInt8, UBInt16, UBInt32, UBInt64
from construct import SBInt8, SBInt16, SBInt32
from construct import BFloat32, BFloat64
from construct import BitStruct, BitField
from construct import StringAdapter, LengthValueAdapter, Sequence
class IPacket(object):
"""
Interface for packets.
"""
def parse(buf, offset):
"""
Parse a packet out of the given buffer, starting at the given offset.
If the parse is successful, returns a tuple of the parsed packet and
the next packet offset in the buffer.
If the parse fails due to insufficient data, returns a tuple of None
and the amount of data required before the parse can be retried.
Exceptions may be raised if the parser finds invalid data.
"""
def simple(name, fmt, *args):
"""
Make a customized namedtuple representing a simple, primitive packet.
"""
from struct import Struct
s = Struct(fmt)
@classmethod
def parse(cls, buf, offset):
if len(buf) >= s.size + offset:
unpacked = s.unpack_from(buf, offset)
return cls(*unpacked), s.size + offset
else:
return None, s.size - len(buf)
def build(self):
return s.pack(*self)
methods = {
"parse": parse,
"build": build,
}
return type(name, (namedtuple(name, *args),), methods)
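The ``simple()`` factory above wires a ``struct`` format onto a namedtuple. A standalone sketch of the round-trip it generates, using the same ``>BH`` format and values as the ``Test`` packet in the unit tests later in this file:

```python
from collections import namedtuple
from struct import Struct

# Sketch of the parse/build round-trip simple() generates:
# a big-endian unsigned byte followed by an unsigned short.
fmt = Struct(">BH")
Test = namedtuple("Test", "unit, test")

packed = fmt.pack(42, 32)
print(packed)                     # b'*\x00 ' i.e. \x2a\x00\x20
unit, test = fmt.unpack_from(packed, 0)
print(Test(unit, test))           # Test(unit=42, test=32)
```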
DUMP_ALL_PACKETS = False
# Strings.
# This one is a UCS2 string, which effectively decodes single writeChar()
# invocations. We need to import the encoding for it first, though.
from bravo.encodings import ucs2
from codecs import register
register(ucs2)
class DoubleAdapter(LengthValueAdapter):
def _encode(self, obj, context):
return len(obj) / 2, obj
def AlphaString(name):
return StringAdapter(
DoubleAdapter(
Sequence(name,
UBInt16("length"),
MetaField("data", lambda ctx: ctx["length"] * 2),
)
),
encoding="ucs2",
)
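The wire form ``AlphaString`` describes is a big-endian 16-bit character count followed by two bytes per character. This sketch uses Python's ``utf-16-be`` codec as a stand-in for the registered ``ucs2`` codec, which is an assumption that holds for BMP-only text:

```python
from struct import pack

# Sketch of AlphaString's wire form: UBInt16 character count, then
# two-byte characters. utf-16-be stands in for the ucs2 codec here
# (equivalent for BMP-only strings).
def alpha_string(s):
    data = s.encode("utf-16-be")
    return pack(">H", len(data) // 2) + data

print(alpha_string("Hi"))  # b'\x00\x02\x00H\x00i'
```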
# Boolean converter.
def Bool(*args, **kwargs):
return Flag(*args, default=True, **kwargs)
# Flying, position, and orientation, reused in several places.
grounded = Struct("grounded", UBInt8("grounded"))
position = Struct("position",
BFloat64("x"),
BFloat64("y"),
BFloat64("stance"),
BFloat64("z")
)
orientation = Struct("orientation", BFloat32("rotation"), BFloat32("pitch"))
-# Notchian item packing
+# Notchian item packing (slot data)
items = Struct("items",
SBInt16("primary"),
If(lambda context: context["primary"] >= 0,
Embed(Struct("item_information",
UBInt8("count"),
UBInt16("secondary"),
Magic("\xff\xff"),
)),
),
)
Metadata = namedtuple("Metadata", "type value")
metadata_types = ["byte", "short", "int", "float", "string", "slot",
"coords"]
# Metadata adaptor.
class MetadataAdapter(Adapter):
def _decode(self, obj, context):
d = {}
for m in obj.data:
d[m.id.second] = Metadata(metadata_types[m.id.first], m.value)
return d
def _encode(self, obj, context):
c = Container(data=[], terminator=None)
for k, v in obj.iteritems():
t, value = v
d = Container(
id=Container(first=metadata_types.index(t), second=k),
value=value,
peeked=None)
c.data.append(d)
if c.data:
c.data[-1].peeked = 127
else:
c.data.append(Container(id=Container(first=0, second=0), value=0,
peeked=127))
return c
# Metadata inner container.
metadata_switch = {
0: UBInt8("value"),
1: UBInt16("value"),
2: UBInt32("value"),
3: BFloat32("value"),
4: AlphaString("value"),
- 5: Struct("slot",
- UBInt16("primary"),
- UBInt8("count"),
- UBInt16("secondary"),
+ 5: Struct("slot", # is the same as 'items' defined above
+ SBInt16("primary"),
+ If(lambda context: context["primary"] >= 0,
+ Embed(Struct("item_information",
+ UBInt8("count"),
+ UBInt16("secondary"),
+ SBInt16("nbt-len"),
+ If(lambda context: context["nbt-len"] >= 0,
+ Embed(MetaField("nbt-data", lambda ctx: ctx["nbt-len"]))
+ )
+ )),
+ ),
),
6: Struct("coords",
UBInt32("x"),
UBInt32("y"),
UBInt32("z"),
),
}
# Metadata subconstruct.
metadata = MetadataAdapter(
Struct("metadata",
RepeatUntil(lambda obj, context: obj["peeked"] == 0x7f,
Struct("data",
BitStruct("id",
BitField("first", 3),
BitField("second", 5),
),
Switch("value", lambda context: context["id"]["first"],
metadata_switch),
Peek(UBInt8("peeked")),
),
),
Const(UBInt8("terminator"), 0x7f),
),
)
# Build faces, used during dig and build.
faces = {
"noop": -1,
"-y": 0,
"+y": 1,
"-z": 2,
"+z": 3,
"-x": 4,
"+x": 5,
}
face = Enum(SBInt8("face"), **faces)
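As a sketch of how these face names are typically used during dig and build, each face maps to a unit offset toward the adjacent block. The names below are hypothetical; in Bravo the real helper lives in `bravo.utilities.coords` (`adjust_coords_for_face`):

```python
# Illustrative mapping from face name to block-coordinate offset.
FACE_OFFSETS = {
    "-y": (0, -1, 0), "+y": (0, 1, 0),
    "-z": (0, 0, -1), "+z": (0, 0, 1),
    "-x": (-1, 0, 0), "+x": (1, 0, 0),
}

def adjacent_block(coords, face):
    # Return the coordinates of the block touching the given face.
    x, y, z = coords
    dx, dy, dz = FACE_OFFSETS[face]
    return (x + dx, y + dy, z + dz)
```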
# World dimension.
dimensions = {
"earth": 0,
"sky": 1,
"nether": 255,
}
dimension = Enum(UBInt8("dimension"), **dimensions)
# Difficulty levels
difficulties = {
"peaceful": 0,
"easy": 1,
"normal": 2,
"hard": 3,
}
difficulty = Enum(UBInt8("difficulty"), **difficulties)
modes = {
"survival": 0,
"creative": 1,
"adventure": 2,
}
mode = Enum(UBInt8("mode"), **modes)
# Possible effects.
# XXX these names aren't really canonized yet
effect = Enum(UBInt8("effect"),
move_fast=1,
move_slow=2,
dig_fast=3,
dig_slow=4,
damage_boost=5,
heal=6,
harm=7,
jump=8,
confusion=9,
regenerate=10,
resistance=11,
fire_resistance=12,
water_resistance=13,
invisibility=14,
blindness=15,
night_vision=16,
hunger=17,
weakness=18,
poison=19,
wither=20,
)
# The actual packet list.
packets = {
- 0: Struct("ping",
+ 0x00: Struct("ping",
UBInt32("pid"),
),
- 1: Struct("login",
+ 0x01: Struct("login",
# Player Entity ID (random number generated by the server)
UBInt32("eid"),
# default, flat, largeBiomes
AlphaString("leveltype"),
mode,
dimension,
difficulty,
UBInt8("unused"),
UBInt8("maxplayers"),
),
- 2: Struct("handshake",
+ 0x02: Struct("handshake",
UBInt8("protocol"),
AlphaString("username"),
AlphaString("host"),
UBInt32("port"),
),
- 3: Struct("chat",
+ 0x03: Struct("chat",
AlphaString("message"),
),
- 4: Struct("time",
+ 0x04: Struct("time",
# Total Ticks
UBInt64("timestamp"),
# Time of day
UBInt64("time"),
),
- 5: Struct("entity-equipment",
+ 0x05: Struct("entity-equipment",
UBInt32("eid"),
UBInt16("slot"),
Embed(items),
),
- 6: Struct("spawn",
+ 0x06: Struct("spawn",
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
),
- 7: Struct("use",
+ 0x07: Struct("use",
UBInt32("eid"),
UBInt32("target"),
UBInt8("button"),
),
- 8: Struct("health",
+ 0x08: Struct("health",
UBInt16("hp"),
UBInt16("fp"),
BFloat32("saturation"),
),
- 9: Struct("respawn",
+ 0x09: Struct("respawn",
dimension,
difficulty,
mode,
UBInt16("height"),
AlphaString("leveltype"),
),
- 10: grounded,
- 11: Struct("position",
+ 0x0a: grounded,
+ 0x0b: Struct("position",
position,
grounded
),
- 12: Struct("orientation",
+ 0x0c: Struct("orientation",
orientation,
grounded
),
# TODO: Differ between client and server 'position'
- 13: Struct("location",
+ 0x0d: Struct("location",
position,
orientation,
grounded
),
- 14: Struct("digging",
+ 0x0e: Struct("digging",
Enum(UBInt8("state"),
started=0,
cancelled=1,
stopped=2,
checked=3,
dropped=4,
# Also eating
shooting=5,
),
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
face,
),
- 15: Struct("build",
+ 0x0f: Struct("build",
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
face,
Embed(items),
UBInt8("cursorx"),
UBInt8("cursory"),
UBInt8("cursorz"),
),
# Hold Item Change
- 16: Struct("equip",
+ 0x10: Struct("equip",
# Only 0-8
UBInt16("slot"),
),
- 17: Struct("bed",
+ 0x11: Struct("bed",
UBInt32("eid"),
UBInt8("unknown"),
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
),
- 18: Struct("animate",
+ 0x12: Struct("animate",
UBInt32("eid"),
Enum(UBInt8("animation"),
noop=0,
arm=1,
hit=2,
leave_bed=3,
eat=5,
unknown=102,
crouch=104,
uncrouch=105,
),
),
- 19: Struct("action",
+ 0x13: Struct("action",
UBInt32("eid"),
Enum(UBInt8("action"),
crouch=1,
uncrouch=2,
leave_bed=3,
start_sprint=4,
stop_sprint=5,
),
),
- 20: Struct("player",
+ 0x14: Struct("player",
UBInt32("eid"),
AlphaString("username"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
UBInt8("yaw"),
UBInt8("pitch"),
# 0 For none, unlike other packets
# -1 crashes clients
SBInt16("item"),
metadata,
),
# Spawn Dropped Item
- 21: Struct("pickup",
+ 0x15: Struct("pickup",
UBInt32("eid"),
Embed(items),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
UBInt8("yaw"),
UBInt8("pitch"),
UBInt8("roll"),
),
- 22: Struct("collect",
+ 0x16: Struct("collect",
UBInt32("eid"),
UBInt32("destination"),
),
# Object/Vehicle
- 23: Struct("vehicle",
+ 0x17: Struct("object", # XXX: was 'vehicle'!
UBInt32("eid"),
- Enum(UBInt8("type"),
+ Enum(UBInt8("type"), # See http://wiki.vg/Entities#Objects
boat=1,
+ item_stack=2,
minecart=10,
storage_cart=11,
powered_cart=12,
tnt=50,
ender_crystal=51,
arrow=60,
snowball=61,
egg=62,
thrown_enderpearl=65,
wither_skull=66,
- # See http://wiki.vg/Entities#Objects
falling_block=70,
+ frames=71,
ender_eye=72,
thrown_potion=73,
dragon_egg=74,
thrown_xp_bottle=75,
fishing_float=90,
),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
- SBInt32("data"),
- # The following 3 are 0 if data is 0
- SBInt16("speedx"),
- SBInt16("speedy"),
- SBInt16("speedz"),
+ UBInt8("pitch"),
+ UBInt8("yaw"),
+ SBInt32("data"), # See http://www.wiki.vg/Object_Data
+ If(lambda context: context["data"] != 0,
+ Struct("speed",
+ SBInt16("x"),
+ SBInt16("y"),
+ SBInt16("z"),
+ )
+ ),
),
- 24: Struct("mob",
+ 0x18: Struct("mob",
UBInt32("eid"),
Enum(UBInt8("type"), **{
"Creeper": 50,
"Skeleton": 51,
"Spider": 52,
"GiantZombie": 53,
"Zombie": 54,
"Slime": 55,
"Ghast": 56,
"ZombiePig": 57,
"Enderman": 58,
"CaveSpider": 59,
"Silverfish": 60,
"Blaze": 61,
"MagmaCube": 62,
"EnderDragon": 63,
"Wither": 64,
"Bat": 65,
"Witch": 66,
"Pig": 90,
"Sheep": 91,
"Cow": 92,
"Chicken": 93,
"Squid": 94,
"Wolf": 95,
"Mooshroom": 96,
"Snowman": 97,
"Ocelot": 98,
"IronGolem": 99,
"Villager": 120
}),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
SBInt8("yaw"),
SBInt8("pitch"),
SBInt8("head_yaw"),
SBInt16("vx"),
SBInt16("vy"),
SBInt16("vz"),
metadata,
),
- 25: Struct("painting",
+ 0x19: Struct("painting",
UBInt32("eid"),
AlphaString("title"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
face,
),
- 26: Struct("experience",
+ 0x1a: Struct("experience",
UBInt32("eid"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
UBInt16("quantity"),
),
- 28: Struct("velocity",
+ 0x1c: Struct("velocity",
UBInt32("eid"),
SBInt16("dx"),
SBInt16("dy"),
SBInt16("dz"),
),
- 29: Struct("destroy",
+ 0x1d: Struct("destroy",
UBInt8("count"),
MetaArray(lambda context: context["count"], UBInt32("eid")),
),
- 30: Struct("create",
+ 0x1e: Struct("create",
UBInt32("eid"),
),
- 31: Struct("entity-position",
+ 0x1f: Struct("entity-position",
UBInt32("eid"),
SBInt8("dx"),
SBInt8("dy"),
SBInt8("dz")
),
- 32: Struct("entity-orientation",
+ 0x20: Struct("entity-orientation",
UBInt32("eid"),
UBInt8("yaw"),
UBInt8("pitch")
),
- 33: Struct("entity-location",
+ 0x21: Struct("entity-location",
UBInt32("eid"),
SBInt8("dx"),
SBInt8("dy"),
SBInt8("dz"),
UBInt8("yaw"),
UBInt8("pitch")
),
- 34: Struct("teleport",
+ 0x22: Struct("teleport",
UBInt32("eid"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
UBInt8("yaw"),
UBInt8("pitch"),
),
- 35: Struct("entity-head",
+ 0x23: Struct("entity-head",
UBInt32("eid"),
UBInt8("yaw"),
),
- 38: Struct("status",
+ 0x26: Struct("status",
UBInt32("eid"),
Enum(UBInt8("status"),
damaged=2,
killed=3,
taming=6,
tamed=7,
drying=8,
eating=9,
sheep_eat=10,
+ golem_rose=11,
+ heart_particle=12,
+ angry_particle=13,
+ happy_particle=14,
+ magic_particle=15,
+ shaking=16,
+ firework=17
),
),
- 39: Struct("attach",
+ 0x27: Struct("attach",
UBInt32("eid"),
# -1 for detaching
UBInt32("vid"),
),
- 40: Struct("metadata",
+ 0x28: Struct("metadata",
UBInt32("eid"),
metadata,
),
- 41: Struct("effect",
+ 0x29: Struct("effect",
UBInt32("eid"),
effect,
UBInt8("amount"),
UBInt16("duration"),
),
- 42: Struct("uneffect",
+ 0x2a: Struct("uneffect",
UBInt32("eid"),
effect,
),
- 43: Struct("levelup",
+ 0x2b: Struct("levelup",
BFloat32("current"),
UBInt16("level"),
UBInt16("total"),
),
- 51: Struct("chunk",
+ 0x33: Struct("chunk",
SBInt32("x"),
SBInt32("z"),
Bool("continuous"),
UBInt16("primary"),
UBInt16("add"),
PascalString("data", length_field=UBInt32("length"), encoding="zlib"),
),
- 52: Struct("batch",
+ 0x34: Struct("batch",
SBInt32("x"),
SBInt32("z"),
UBInt16("count"),
PascalString("data", length_field=UBInt32("length")),
),
- 53: Struct("block",
+ 0x35: Struct("block",
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
UBInt16("type"),
UBInt8("meta"),
),
# XXX This covers general tile actions, not just note blocks.
# TODO: Needs work
- 54: Struct("block-action",
+ 0x36: Struct("block-action",
SBInt32("x"),
SBInt16("y"),
SBInt32("z"),
UBInt8("byte1"),
UBInt8("byte2"),
UBInt16("blockid"),
),
- 55: Struct("block-break-anim",
+ 0x37: Struct("block-break-anim",
UBInt32("eid"),
UBInt32("x"),
UBInt32("y"),
UBInt32("z"),
UBInt8("stage"),
),
- 56: Struct("bulk-chunk",
+ # XXX Server -> Client. Use 0x33 instead.
+ 0x38: Struct("bulk-chunk",
UBInt16("count"),
- # Length
- # Data
- # metadata
+ UBInt32("length"),
+ UBInt8("sky_light"),
+ MetaField("data", lambda ctx: ctx["length"]),
+ MetaArray(lambda context: context["count"],
+ Struct("metadata",
+ UBInt32("chunk_x"),
+ UBInt32("chunk_z"),
+ UBInt16("bitmap_primary"),
+ UBInt16("bitmap_secondary"),
+ )
+ )
),
# TODO: Needs work?
- 60: Struct("explosion",
+ 0x3c: Struct("explosion",
BFloat64("x"),
BFloat64("y"),
BFloat64("z"),
BFloat32("radius"),
UBInt32("count"),
MetaField("blocks", lambda context: context["count"] * 3),
BFloat32("motionx"),
BFloat32("motiony"),
BFloat32("motionz"),
),
- 61: Struct("sound",
+ 0x3d: Struct("sound",
Enum(UBInt32("sid"),
click2=1000,
click1=1001,
bow_fire=1002,
door_toggle=1003,
extinguish=1004,
record_play=1005,
charge=1007,
fireball=1008,
zombie_wood=1010,
zombie_metal=1011,
zombie_break=1012,
wither=1013,
smoke=2000,
block_break=2001,
splash_potion=2002,
ender_eye=2003,
blaze=2004,
),
SBInt32("x"),
UBInt8("y"),
SBInt32("z"),
UBInt32("data"),
Bool("volume-mod"),
),
- 62: Struct("named-sound",
+ 0x3e: Struct("named-sound",
AlphaString("name"),
UBInt32("x"),
UBInt32("y"),
UBInt32("z"),
BFloat32("volume"),
UBInt8("pitch"),
),
- 70: Struct("state",
+ 0x3f: Struct("particle",
+ AlphaString("name"),
+ BFloat32("x"),
+ BFloat32("y"),
+ BFloat32("z"),
+ BFloat32("x_offset"),
+ BFloat32("y_offset"),
+ BFloat32("z_offset"),
+ BFloat32("speed"),
+ UBInt32("count"),
+ ),
+ 0x46: Struct("state",
Enum(UBInt8("state"),
bad_bed=0,
start_rain=1,
stop_rain=2,
mode_change=3,
run_credits=4,
),
mode,
),
- 71: Struct("thunderbolt",
+ 0x47: Struct("thunderbolt",
UBInt32("eid"),
UBInt8("gid"),
SBInt32("x"),
SBInt32("y"),
SBInt32("z"),
),
- 100: Struct("window-open",
+ 0x64: Struct("window-open",
UBInt8("wid"),
Enum(UBInt8("type"),
chest=0,
workbench=1,
furnace=2,
dispenser=3,
enchatment_table=4,
brewing_stand=5,
+ npc_trade=6,
+ beacon=7,
+ anvil=8,
+ hopper=9
),
AlphaString("title"),
UBInt8("slots"),
+ UBInt8("use_title"),
),
- 101: Struct("window-close",
+ 0x65: Struct("window-close",
UBInt8("wid"),
),
- 102: Struct("window-action",
+ 0x66: Struct("window-action",
UBInt8("wid"),
UBInt16("slot"),
UBInt8("button"),
UBInt16("token"),
- Bool("shift"),
+ UBInt8("shift"), # TODO: rename to 'mode'
Embed(items),
),
- 103: Struct("window-slot",
+ 0x67: Struct("window-slot",
UBInt8("wid"),
UBInt16("slot"),
Embed(items),
),
- 104: Struct("inventory",
+ 0x68: Struct("inventory",
UBInt8("wid"),
UBInt16("length"),
MetaArray(lambda context: context["length"], items),
),
- 105: Struct("window-progress",
+ 0x69: Struct("window-progress",
UBInt8("wid"),
UBInt16("bar"),
UBInt16("progress"),
),
- 106: Struct("window-token",
+ 0x6a: Struct("window-token",
UBInt8("wid"),
UBInt16("token"),
Bool("acknowledged"),
),
- 107: Struct("window-creative",
+ 0x6b: Struct("window-creative",
UBInt16("slot"),
Embed(items),
),
- 108: Struct("enchant",
+ 0x6c: Struct("enchant",
UBInt8("wid"),
UBInt8("enchantment"),
),
- 130: Struct("sign",
+ 0x82: Struct("sign",
SBInt32("x"),
UBInt16("y"),
SBInt32("z"),
AlphaString("line1"),
AlphaString("line2"),
AlphaString("line3"),
AlphaString("line4"),
),
- 131: Struct("map",
+ 0x83: Struct("map",
UBInt16("type"),
UBInt16("itemid"),
- PascalString("data", length_field=UBInt8("length")),
+ PascalString("data", length_field=UBInt16("length")),
),
- # TODO: NBT data array
- 132: Struct("tile-update",
+ 0x84: Struct("tile-update",
SBInt32("x"),
UBInt16("y"),
SBInt32("z"),
UBInt8("action"),
- # nbt data
+ PascalString("nbt_data", length_field=UBInt16("length")), # gzipped
),
- 200: Struct("statistics",
+ 0xc8: Struct("statistics",
UBInt32("sid"), # XXX I could be an Enum
UBInt8("count"),
),
- 201: Struct("players",
+ 0xc9: Struct("players",
AlphaString("name"),
Bool("online"),
UBInt16("ping"),
),
- 202: Struct("abilities",
+ 0xca: Struct("abilities",
UBInt8("flags"),
UBInt8("fly-speed"),
UBInt8("walk-speed"),
),
- 203: Struct("tab",
+ 0xcb: Struct("tab",
AlphaString("autocomplete"),
),
- 204: Struct("settings",
+ 0xcc: Struct("settings",
AlphaString("locale"),
UBInt8("distance"),
UBInt8("chat"),
difficulty,
Bool("cape"),
),
- 205: Struct("statuses",
+ 0xcd: Struct("statuses",
UBInt8("payload")
),
- # TODO: Needs DATA field
- 250: Struct("plugin-message",
+ 0xce: Struct("score_item",
+ AlphaString("name"),
+ AlphaString("value"),
+ Enum(UBInt8("action"),
+ create=0,
+ remove=1,
+ update=2,
+ ),
+ ),
+ 0xcf: Struct("score_update",
+ AlphaString("item_name"),
+ UBInt8("remove"),
+ If(lambda context: context["remove"] == 0,
+ Embed(Struct("information",
+ AlphaString("score_name"),
+ UBInt32("value"),
+ ))
+ ),
+ ),
+ 0xd0: Struct("score_display",
+ Enum(UBInt8("position"),
+ as_list=0,
+ sidebar=1,
+ below_name=2
+ ),
+ AlphaString("score_name"),
+ ),
+ 0xd1: Struct("teams",
+ AlphaString("name"),
+ Enum(UBInt8("mode"),
+ team_created=0,
+ team_removed=1,
+ team_updates=2,
+ players_added=3,
+ players_removed=4,
+ ),
+ If(lambda context: context["mode"] in ("team_created", "team_updates"),
+ Embed(Struct("team_info",
+ AlphaString("team_name"),
+ AlphaString("team_prefix"),
+ AlphaString("team_suffix"),
+ Enum(UBInt8("friendly_fire"),
+ off=0,
+ on=1,
+ invisibles=2,
+ ),
+ ))
+ ),
+ If(lambda context: context["mode"] in ("team_created", "players_added", "players_removed"),
+ Embed(Struct("players_info",
+ UBInt16("count"),
+ MetaArray(lambda context: context["count"], AlphaString("player_names")),
+ ))
+ ),
+ ),
+ 0xfa: Struct("plugin-message",
AlphaString("channel"),
- UBInt16("length"),
- # Data
+ PascalString("data", length_field=UBInt16("length")),
),
- # TODO: Missing byte arrays
- 252: Struct("key-response",
- UBInt16("shared-len"),
- # Shared Secret, byte array
- UBInt16("token-len"),
- # Token byte array
+ 0xfc: Struct("key-response",
+ PascalString("key", length_field=UBInt16("key-len")),
+ PascalString("token", length_field=UBInt16("token-len")),
),
- # TODO: Missing byte arrays
- 253: Struct("key-request",
+ 0xfd: Struct("key-request",
AlphaString("server"),
- UBInt16("key-len"),
- # Pubkey byte array
- UBInt16("token-len"),
- # Token byte arrap
+ PascalString("key", length_field=UBInt16("key-len")),
+ PascalString("token", length_field=UBInt16("token-len")),
),
- 254: Struct("poll", UBInt8("unused")),
- 255: Struct("error", AlphaString("message")),
+ 0xfe: Struct("poll", UBInt8("unused")),
+ # TODO: rename to 'kick'
+ 0xff: Struct("error", AlphaString("message")),
}
packet_stream = Struct("packet_stream",
OptionalGreedyRange(
Struct("full_packet",
UBInt8("header"),
Switch("payload", lambda context: context["header"], packets),
),
),
OptionalGreedyRange(
UBInt8("leftovers"),
),
)
def parse_packets(bytestream):
"""
Opportunistically parse out as many packets as possible from a raw
bytestream.
Returns a tuple containing a list of unpacked packet containers, and any
leftover unparseable bytes.
"""
container = packet_stream.parse(bytestream)
l = [(i.header, i.payload) for i in container.full_packet]
leftovers = "".join(chr(i) for i in container.leftovers)
if DUMP_ALL_PACKETS:
- for packet in l:
- print "Parsed packet %d" % packet[0]
- print packet[1]
+ for header, payload in l:
+ print "Parsed packet 0x%.2x" % header
+ print payload
return l, leftovers
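The opportunistic framing above can be illustrated without the `construct` library. This is a minimal sketch (not Bravo's code) for a single fixed-layout packet, the 0x00 ping (one header byte plus a 4-byte big-endian pid), returning parsed packets and the unparseable leftovers:

```python
import struct

def parse_pings(data):
    # Consume as many complete ping packets as possible from the front of
    # the bytestream; anything else is returned as leftovers.
    packets, offset = [], 0
    while offset + 5 <= len(data) and data[offset:offset + 1] == b"\x00":
        (pid,) = struct.unpack_from(">I", data, offset + 1)
        packets.append((0x00, pid))
        offset += 5
    return packets, data[offset:]
```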
incremental_packet_stream = Struct("incremental_packet_stream",
Struct("full_packet",
UBInt8("header"),
Switch("payload", lambda context: context["header"], packets),
),
OptionalGreedyRange(
UBInt8("leftovers"),
),
)
def parse_packets_incrementally(bytestream):
"""
Parse out packets one-by-one, yielding a tuple of packet header and packet
payload.
This function returns a generator.
This function will yield all valid packets in the bytestream up to the
first invalid packet.
:returns: a generator yielding tuples of headers and payloads
"""
while bytestream:
parsed = incremental_packet_stream.parse(bytestream)
header = parsed.full_packet.header
payload = parsed.full_packet.payload
bytestream = "".join(chr(i) for i in parsed.leftovers)
yield header, payload
packets_by_name = dict((v.name, k) for (k, v) in packets.iteritems())
def make_packet(packet, *args, **kwargs):
"""
Constructs a packet bytestream from a packet header and payload.
The payload should be passed as keyword arguments. Additional containers
or dictionaries to be added to the payload may be passed positionally, as
well.
"""
if packet not in packets_by_name:
print "Couldn't find packet name %s!" % packet
return ""
header = packets_by_name[packet]
for arg in args:
kwargs.update(dict(arg))
container = Container(**kwargs)
if DUMP_ALL_PACKETS:
- print "Making packet %s (%d)" % (packet, header)
+ print "Making packet <%s> (0x%.2x)" % (packet, header)
print container
payload = packets[header].build(container)
return chr(header) + payload
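The framing built here is simply one header byte followed by the payload bytes. A hedged sketch for the fixed-layout "ping" packet (0x00), using only the standard library:

```python
import struct

def make_ping(pid):
    # One unsigned header byte (0x00) followed by a 4-byte big-endian pid,
    # matching the wire layout of the ping packet above.
    return struct.pack(">BI", 0x00, pid)
```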
def make_error_packet(message):
"""
Convenience method to generate an error packet bytestream.
"""
return make_packet("error", message=message)
diff --git a/bravo/beta/protocol.py b/bravo/beta/protocol.py
index d2cbd93..56f0078 100644
--- a/bravo/beta/protocol.py
+++ b/bravo/beta/protocol.py
@@ -1,553 +1,553 @@
# vim: set fileencoding=utf8 :
from itertools import product, chain
from time import time
from urlparse import urlunparse
from twisted.internet import reactor
from twisted.internet.defer import (DeferredList, inlineCallbacks,
maybeDeferred, succeed)
from twisted.internet.protocol import Protocol
from twisted.internet.task import cooperate, deferLater, LoopingCall
from twisted.internet.task import TaskDone, TaskFailed
from twisted.protocols.policies import TimeoutMixin
from twisted.python import log
from twisted.web.client import getPage
from bravo import version
from bravo.beta.structures import BuildData, Settings
from bravo.blocks import blocks, items
from bravo.chunk import CHUNK_HEIGHT
from bravo.entity import Sign
from bravo.errors import BetaClientError, BuildError
from bravo.ibravo import (IChatCommand, IPreBuildHook, IPostBuildHook,
IWindowOpenHook, IWindowClickHook, IWindowCloseHook,
IPreDigHook, IDigHook, ISignHook, IUseHook)
from bravo.infini.factory import InfiniClientFactory
from bravo.inventory.windows import InventoryWindow
from bravo.location import Location, Orientation, Position
from bravo.motd import get_motd
from bravo.beta.packets import parse_packets, make_packet, make_error_packet
from bravo.plugin import retrieve_plugins
from bravo.policy.dig import dig_policies
from bravo.utilities.coords import adjust_coords_for_face, split_coords
from bravo.utilities.chat import complete, username_alternatives
from bravo.utilities.maths import circling, clamp, sorted_by_distance
from bravo.utilities.temporal import timestamp_from_clock
# States of the protocol.
(STATE_UNAUTHENTICATED, STATE_AUTHENTICATED, STATE_LOCATED) = range(3)
-SUPPORTED_PROTOCOL = 60
+SUPPORTED_PROTOCOL = 61
class BetaServerProtocol(object, Protocol, TimeoutMixin):
"""
The Minecraft Alpha/Beta server protocol.
This class is mostly designed to be a skeleton for featureful clients. It
tries hard to not step on the toes of potential subclasses.
"""
excess = ""
packet = None
state = STATE_UNAUTHENTICATED
buf = ""
parser = None
handler = None
player = None
username = None
settings = Settings("en_US", "normal")
motd = "Bravo Generic Beta Server"
_health = 20
_latency = 0
def __init__(self):
self.chunks = dict()
self.windows = {}
self.wid = 1
self.location = Location()
self.handlers = {
0: self.ping,
2: self.handshake,
3: self.chat,
7: self.use,
9: self.respawn,
10: self.grounded,
11: self.position,
12: self.orientation,
13: self.location_packet,
14: self.digging,
15: self.build,
16: self.equip,
18: self.animate,
19: self.action,
21: self.pickup,
101: self.wclose,
102: self.waction,
106: self.wacknowledge,
107: self.wcreative,
130: self.sign,
203: self.complete,
204: self.settings_packet,
254: self.poll,
255: self.quit,
}
self._ping_loop = LoopingCall(self.update_ping)
self.setTimeout(30)
# Low-level packet handlers
# Try not to hook these if possible, since they offer no convenient
# abstractions or protections.
def ping(self, container):
"""
Hook for ping packets.
By default, this hook will examine the timestamps on incoming pings,
and use them to estimate the current latency of the connected client.
"""
now = timestamp_from_clock(reactor)
then = container.pid
self.latency = now - then
def handshake(self, container):
"""
Hook for handshake packets.
Override this to customize how logins are handled. By default, this
method will only confirm that the negotiated wire protocol is the
correct version, copy data out of the packet and onto the protocol,
and then run the ``authenticated`` callback.
This method will call the ``pre_handshake`` method hook prior to
logging in the client.
"""
self.username = container.username
if container.protocol < SUPPORTED_PROTOCOL:
# Kick old clients.
self.error("This server doesn't support your ancient client.")
return
elif container.protocol > SUPPORTED_PROTOCOL:
# Kick new clients.
self.error("This server doesn't support your newfangled client.")
return
log.msg("Handshaking with client, protocol version %d" %
container.protocol)
if not self.pre_handshake():
log.msg("Pre-handshake hook failed; kicking client")
self.error("You failed the pre-handshake hook.")
return
players = min(self.factory.limitConnections, 20)
self.write_packet("login", eid=self.eid, leveltype="default",
mode=self.factory.mode,
dimension=self.factory.world.dimension,
difficulty="peaceful", unused=0, maxplayers=players)
self.authenticated()
def pre_handshake(self):
"""
Whether this client should be logged in.
"""
return True
def chat(self, container):
"""
Hook for chat packets.
"""
def use(self, container):
"""
Hook for use packets.
"""
def respawn(self, container):
"""
Hook for respawn packets.
"""
def grounded(self, container):
"""
Hook for grounded packets.
"""
self.location.grounded = bool(container.grounded)
def position(self, container):
"""
Hook for position packets.
"""
if self.state != STATE_LOCATED:
log.msg("Ignoring unlocated position!")
return
self.grounded(container.grounded)
old_position = self.location.pos
position = Position.from_player(container.position.x,
container.position.y, container.position.z)
altered = False
dx, dy, dz = old_position - position
if any(abs(d) >= 64 for d in (dx, dy, dz)):
# Whoa, slow down there, cowboy. You're moving too fast. We're
# gonna ignore this position change completely, because it's
# either bogus or ignoring a recent teleport.
altered = True
else:
self.location.pos = position
self.location.stance = container.position.stance
# Sanitize location. This handles safety boundaries, illegal stance,
# etc.
altered = self.location.clamp() or altered
# If, for any reason, our opinion on where the client should be
# located is different than theirs, force them to conform to our point
# of view.
if altered:
log.msg("Not updating bogus position!")
self.update_location()
# If our position actually changed, fire the position change hook.
if old_position != position:
self.position_changed()
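The sanity check above can be stated standalone; this hypothetical helper mirrors the rule that a single position packet moving 64 or more units on any axis is treated as bogus (or as trailing a recent teleport):

```python
def is_bogus_move(old, new, limit=64):
    # True when any axis jumps by `limit` or more in one update.
    return any(abs(o - n) >= limit for o, n in zip(old, new))
```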
def orientation(self, container):
"""
Hook for orientation packets.
"""
self.grounded(container.grounded)
old_orientation = self.location.ori
orientation = Orientation.from_degs(container.orientation.rotation,
container.orientation.pitch)
self.location.ori = orientation
if old_orientation != orientation:
self.orientation_changed()
def location_packet(self, container):
"""
Hook for location packets.
"""
self.position(container)
self.orientation(container)
def digging(self, container):
"""
Hook for digging packets.
"""
def build(self, container):
"""
Hook for build packets.
"""
def equip(self, container):
"""
Hook for equip packets.
"""
def pickup(self, container):
"""
Hook for pickup packets.
"""
def animate(self, container):
"""
Hook for animate packets.
"""
def action(self, container):
"""
Hook for action packets.
"""
def wclose(self, container):
"""
Hook for wclose packets.
"""
def waction(self, container):
"""
Hook for waction packets.
"""
def wacknowledge(self, container):
"""
Hook for wacknowledge packets.
"""
def wcreative(self, container):
"""
Hook for creative inventory action packets.
"""
def sign(self, container):
"""
Hook for sign packets.
"""
def complete(self, container):
"""
Hook for tab-completion packets.
"""
def settings_packet(self, container):
"""
Hook for client settings packets.
"""
distance = ["far", "normal", "short", "tiny"][container.distance]
self.settings = Settings(container.locale, distance)
def poll(self, container):
"""
Hook for poll packets.
By default, queries the parent factory for some data, and relays it
in a specific format to the requester. The connection is then closed
at both ends. This functionality is used by Beta 1.8 clients to poll
servers for status.
"""
players = unicode(len(self.factory.protocols))
max_players = unicode(self.factory.limitConnections or 1000000)
data = [
u"§1",
unicode(SUPPORTED_PROTOCOL),
u"Bravo %s" % version,
self.motd,
players,
max_players,
]
response = u"\u0000".join(data)
self.error(response)
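The status reply assembled here is a NUL-joined string of six fields, led by the section-sign "1" magic. A hypothetical helper sketching that layout (names are illustrative, not Bravo's API):

```python
def status_string(protocol, version, motd, players, max_players):
    # Six NUL-separated fields, starting with the u"\xa71" magic that
    # marks the Beta 1.8-style server-list status reply.
    fields = [u"\u00a71", u"%d" % protocol, version, motd,
              u"%d" % players, u"%d" % max_players]
    return u"\u0000".join(fields)
```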
def quit(self, container):
"""
Hook for quit packets.
By default, merely logs the quit message and drops the connection.
Even if the connection is not dropped, it will be lost anyway since
the client will close the connection. It's better to explicitly let it
go here than to have zombie protocols.
"""
log.msg("Client is quitting: %s" % container.message)
self.transport.loseConnection()
# Twisted-level data handlers and methods
# Please don't override these needlessly, as they are pretty solid and
# shouldn't need to be touched.
def dataReceived(self, data):
self.buf += data
packets, self.buf = parse_packets(self.buf)
if packets:
self.resetTimeout()
for header, payload in packets:
if header in self.handlers:
d = maybeDeferred(self.handlers[header], payload)
@d.addErrback
def eb(failure):
log.err("Error while handling packet %d" % header)
log.err(failure)
return None
else:
log.err("Didn't handle parseable packet %d!" % header)
log.err(payload)
def connectionLost(self, reason):
if self._ping_loop.running:
self._ping_loop.stop()
def timeoutConnection(self):
self.error("Connection timed out")
# State-change callbacks
# Feel free to override these, but call them at some point.
def authenticated(self):
"""
Called when the client has successfully authenticated with the server.
"""
self.state = STATE_AUTHENTICATED
self._ping_loop.start(30)
# Event callbacks
# These are meant to be overridden.
def orientation_changed(self):
"""
Called when the client moves.
This callback is only for orientation, not position.
"""
pass
def position_changed(self):
"""
Called when the client moves.
This callback is only for position, not orientation.
"""
pass
# Convenience methods for consolidating code and expressing intent. I
# hear that these are occasionally useful. If a method in this section can
# be used, then *PLEASE* use it; not using it is the same as open-coding
# whatever you're doing, and only hurts in the long run.
def write_packet(self, header, **payload):
"""
Send a packet to the client.
"""
self.transport.write(make_packet(header, **payload))
def update_ping(self):
"""
Send a keepalive to the client.
"""
timestamp = timestamp_from_clock(reactor)
self.write_packet("ping", pid=timestamp)
def update_location(self):
"""
Send this client's location to the client.
Also let other clients know where this client is.
"""
# Don't bother trying to update things if the position's not yet
# synchronized. We could end up jettisoning them into the void.
if self.state != STATE_LOCATED:
return
x, y, z = self.location.pos
yaw, pitch = self.location.ori.to_fracs()
# Inform everybody of our new location.
packet = make_packet("teleport", eid=self.player.eid, x=x, y=y, z=z,
yaw=yaw, pitch=pitch)
self.factory.broadcast_for_others(packet, self)
# Inform ourselves of our new location.
packet = self.location.save_to_packet()
self.transport.write(packet)
def ascend(self, count):
"""
Ascend to the next XZ-plane.
``count`` is the number of ascensions to perform, and may be zero in
order to force this player to not be standing inside a block.
:returns: bool of whether the ascension was successful
This client must be located for this method to have any effect.
"""
if self.state != STATE_LOCATED:
return False
x, y, z = self.location.pos.to_block()
bigx, smallx, bigz, smallz = split_coords(x, z)
chunk = self.chunks[bigx, bigz]
column = [chunk.get_block((smallx, i, smallz))
for i in range(CHUNK_HEIGHT)]
# Special case: Ascend at most once, if the current spot isn't good.
if count == 0:
if (not column[y]) or column[y + 1] or column[y + 2]:
# Yeah, we're gonna need to move.
count += 1
else:
# Nope, we're fine where we are.
return True
for i in xrange(y, 255):
# Find the next spot above us which has a platform and two empty
# blocks of air.
if column[i] and (not column[i + 1]) and not column[i + 2]:
count -= 1
if not count:
break
else:
return False
self.location.pos = self.location.pos._replace(y=i * 32)
return True
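The column scan in ``ascend()`` looks for a platform with two empty blocks of air above it. A simplified standalone sketch of that search (hypothetical helper; the column is truthy-solid, falsy-air):

```python
def first_standable(column, start):
    # Return the lowest index >= start with a solid block and two air
    # blocks above it, or None when no such spot exists.
    for i in range(start, len(column) - 2):
        if column[i] and not column[i + 1] and not column[i + 2]:
            return i
    return None
```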
def error(self, message):
"""
Error out.
This method sends ``message`` to the client as a descriptive error
message, then closes the connection.
"""
log.msg("Error: %s" % message)
self.transport.write(make_error_packet(message))
self.transport.loseConnection()
def play_notes(self, notes):
"""
Play some music.
Send a sequence of notes to the player. ``notes`` is a finite iterable
of pairs of instruments and pitches.
There is no way to time notes; if staggered playback is desired (and
it usually is!), then ``play_notes()`` should be called repeatedly at
the appropriate times.
This method turns the block beneath the player into a note block,
plays the requested notes through it, then turns it back into the
original block, all without actually modifying the chunk.
"""
x, y, z = self.location.pos.to_block()
if y:
y -= 1
bigx, smallx, bigz, smallz = split_coords(x, z)
if (bigx, bigz) not in self.chunks:
diff --git a/requirements.txt b/requirements.txt
index 69a9c95..650f7b1 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,2 +1,2 @@
-construct>=2.03
+construct==2.0.6
Twisted>=11.0
|
bravoserver/bravo
|
2d4f4b47af8855b1941f60c78f84c91db1bc0c26
|
terrain/trees: A couple cleanups.
|
diff --git a/bravo/terrain/trees.py b/bravo/terrain/trees.py
index e65902e..86358db 100644
--- a/bravo/terrain/trees.py
+++ b/bravo/terrain/trees.py
@@ -1,663 +1,664 @@
from __future__ import division
from itertools import product
from math import cos, pi, sin, sqrt
from random import choice, random, randint
from zope.interface import Interface, implements
from bravo.blocks import blocks
from bravo.chunk import CHUNK_HEIGHT
from bravo.utilities.maths import dist
PHI = (sqrt(5) - 1) * 0.5
IPHI = (sqrt(5) + 1) * 0.5
"""
Phi and inverse phi constants.
"""
# add lights in the middle of foliage clusters
# for those huge trees that get so dark underneath
# or for enchanted forests that should glow and stuff
# Only works if SHAPE is "round" or "cone" or "procedural"
# 0 makes just normal trees
# 1 adds one light inside the foliage clusters for a bit of light
# 2 adds two lights around the base of each cluster, for more light
# 4 adds lights all around the base of each cluster for lots of light
-LIGHTTREE = 0
+DARK, ONE, TWO, FOUR = range(4)
+LIGHTING = DARK
def dist_to_mat(cord, vec, matidxlist, world, invert=False, limit=None):
"""
Find the distance from the given coordinates to any of a set of blocks
along a certain vector.
"""
curcord = [i + .5 for i in cord]
iterations = 0
on_map = True
while on_map:
x = int(curcord[0])
y = int(curcord[1])
z = int(curcord[2])
if not 0 <= y < CHUNK_HEIGHT:
break
block = world.sync_get_block((x, y, z))
if block in matidxlist and not invert:
break
elif block not in matidxlist and invert:
break
else:
curcord = [curcord[i] + vec[i] for i in range(3)]
iterations += 1
if limit and iterations > limit:
break
return iterations
class ITree(Interface):
"""
An ideal Platonic tree.
Trees usually are made of some sort of wood, and are adorned with leaves.
These trees also may have lanterns hanging in their branches.
"""
def prepare(world):
"""
Do any post-__init__() setup.
"""
def make_trunk(world):
"""
Write a trunk to the world.
"""
def make_foliage(world):
"""
Write foliage (leaves, lanterns) to the world.
"""
OAK, PINE, BIRCH, JUNGLE = range(4)
class Tree(object):
"""
Set up the interface for tree objects. Designed for subclassing.
"""
implements(ITree)
species = OAK
def __init__(self, pos, height=None):
if height is None:
self.height = randint(4, 7)
else:
self.height = height
self.pos = pos
def prepare(self, world):
pass
def make_trunk(self, world):
pass
def make_foliage(self, world):
pass
class StickTree(Tree):
"""
A large stick or log.
Subclass this to build trees which only require a single-log trunk.
"""
def make_trunk(self, world):
x, y, z = self.pos
- for i in xrange(self.height):
+ for y in range(y, y + self.height):
world.sync_set_block((x, y, z), blocks["log"].slot)
world.sync_set_metadata((x, y, z), self.species)
- y += 1
class NormalTree(StickTree):
"""
A Notchy tree.
This tree will be a single bulb of foliage above a single width trunk.
The shape is very similar to the default Minecraft tree.
"""
def make_foliage(self, world):
topy = self.pos[1] + self.height - 1
start = topy - 2
end = topy + 2
for y in xrange(start, end):
if y > start + 1:
rad = 1
else:
rad = 2
for xoff, zoff in product(xrange(-rad, rad + 1), repeat=2):
# XXX Wait, sorry, what.
if (random() > PHI and abs(xoff) == abs(zoff) == rad or
xoff == zoff == 0):
continue
x = self.pos[0] + xoff
z = self.pos[2] + zoff
world.sync_set_block((x, y, z), blocks["leaves"].slot)
world.sync_set_metadata((x, y, z), self.species)
class BambooTree(StickTree):
"""
A bamboo-like tree.
Bamboo foliage is sparse and always adjacent to the trunk.
"""
def make_foliage(self, world):
start = self.pos[1]
end = start + self.height + 1
for y in xrange(start, end):
for i in (0, 1):
xoff = choice([-1, 1])
zoff = choice([-1, 1])
x = self.pos[0] + xoff
z = self.pos[2] + zoff
world.sync_set_block((x, y, z), blocks["leaves"].slot)
class PalmTree(StickTree):
"""
A traditional palm tree.
This tree has four tufts of foliage at the top of the trunk. No coconuts,
though.
"""
def make_foliage(self, world):
y = self.pos[1] + self.height
for xoff, zoff in product(xrange(-2, 3), repeat=2):
if abs(xoff) == abs(zoff):
x = self.pos[0] + xoff
z = self.pos[2] + zoff
world.sync_set_block((x, y, z), blocks["leaves"].slot)
class ProceduralTree(Tree):
"""
Base class for larger, complex, procedurally generated trees.
This tree type has roots, a trunk, branches all of varying width, and many
foliage clusters.
This class needs to be subclassed to be useful. Specifically,
foliage_shape must be set. Subclass 'prepare' and 'shapefunc' to make
different shaped trees.
"""
def cross_section(self, center, radius, diraxis, matidx, world):
"""Create a round section of type matidx in world.
Passed values:
center = [x,y,z] for the coordinates of the center block
radius = <number> as the radius of the section. May be a float or int.
diraxis: The list index for the axis to make the section
perpendicular to. 0 indicates the x axis, 1 the y, 2 the z. The
section will extend along the other two axes.
matidx = <int> the integer value to make the section out of.
world = the world object that blocks are written into.
Block metadata is taken from self.species.
"""
+ # This isn't especially likely...
rad = int(radius + PHI)
if rad <= 0:
return None
+
secidx1 = (diraxis - 1) % 3
secidx2 = (1 + diraxis) % 3
coord = [0] * 3
for off1, off2 in product(xrange(-rad, rad + 1), repeat=2):
thisdist = sqrt((abs(off1) + .5) ** 2 + (abs(off2) + .5) ** 2)
if thisdist > radius:
continue
pri = center[diraxis]
sec1 = center[secidx1] + off1
sec2 = center[secidx2] + off2
coord[diraxis] = pri
coord[secidx1] = sec1
coord[secidx2] = sec2
world.sync_set_block(coord, matidx)
world.sync_set_metadata(coord, self.species)
def shapefunc(self, y):
"""
Obtain a radius for the given height.
Subclass this method to customize tree design.
If None is returned, no foliage cluster will be created.
:returns: radius, or None
"""
if random() < 100 / ((self.height) ** 2) and y < self.trunkheight:
return self.height * .12
return None
def foliage_cluster(self, center, world):
"""
Generate a round cluster of foliage at the location center.
The shape of the cluster is defined by the list self.foliage_shape.
This list must be set in a subclass of ProceduralTree.
"""
- x = center[0]
- y = center[1]
- z = center[2]
+ x, y, z = center
for i in self.foliage_shape:
self.cross_section([x, y, z], i, 1, blocks["leaves"].slot, world)
y += 1
def taperedcylinder(self, start, end, startsize, endsize, world, blockdata):
"""
Create a tapered cylinder in world.
start and end are the beginning and ending coordinates of form [x,y,z].
startsize and endsize are the beginning and ending radius.
The material of the cylinder is given by the blockdata argument.
"""
# delta is the coordinate vector for the difference between
# start and end.
delta = [int(e - s) for e, s in zip(end, start)]
# primidx is the index (0,1,or 2 for x,y,z) for the coordinate
# which has the largest overall delta.
maxdist = max(delta, key=abs)
if maxdist == 0:
return None
primidx = delta.index(maxdist)
# secidx1 and secidx2 are the remaining indices out of [0,1,2].
secidx1 = (primidx - 1) % 3
secidx2 = (1 + primidx) % 3
# primsign is the digit 1 or -1 depending on whether the limb is headed
# along the positive or negative primidx axis.
primsign = cmp(delta[primidx], 0) or 1
# secdelta1 and ...2 are the amount the associated values change
# for every step along the prime axis.
secdelta1 = delta[secidx1]
secfac1 = float(secdelta1) / delta[primidx]
secdelta2 = delta[secidx2]
secfac2 = float(secdelta2) / delta[primidx]
# Initialize coord. These values could be anything, since
# they are overwritten.
coord = [0] * 3
# Loop through each cross-section along the primary axis,
# from start to end.
endoffset = delta[primidx] + primsign
for primoffset in xrange(0, endoffset, primsign):
primloc = start[primidx] + primoffset
secloc1 = int(start[secidx1] + primoffset * secfac1)
secloc2 = int(start[secidx2] + primoffset * secfac2)
coord[primidx] = primloc
coord[secidx1] = secloc1
coord[secidx2] = secloc2
primdist = abs(delta[primidx])
radius = endsize + (startsize - endsize) * abs(delta[primidx]
- primoffset) / primdist
self.cross_section(coord, radius, primidx, blockdata, world)
def make_foliage(self, world):
"""
Generate the foliage for the tree in world.
Also place lanterns.
"""
foliage_coords = self.foliage_cords
for coord in foliage_coords:
self.foliage_cluster(coord, world)
for x, y, z in foliage_coords:
world.sync_set_block((x, y, z), blocks["log"].slot)
world.sync_set_metadata((x, y, z), self.species)
- if LIGHTTREE == 1:
+ if LIGHTING == ONE:
world.sync_set_block((x, y + 1, z), blocks["lightstone"].slot)
- elif LIGHTTREE in [2, 4]:
+ elif LIGHTING == TWO:
+ world.sync_set_block((x + 1, y, z), blocks["lightstone"].slot)
+ world.sync_set_block((x - 1, y, z), blocks["lightstone"].slot)
+ elif LIGHTING == FOUR:
world.sync_set_block((x + 1, y, z), blocks["lightstone"].slot)
world.sync_set_block((x - 1, y, z), blocks["lightstone"].slot)
- if LIGHTTREE == 4:
- world.sync_set_block((x, y, z + 1), blocks["lightstone"].slot)
- world.sync_set_block((x, y, z - 1), blocks["lightstone"].slot)
+ world.sync_set_block((x, y, z + 1), blocks["lightstone"].slot)
+ world.sync_set_block((x, y, z - 1), blocks["lightstone"].slot)
def make_branches(self, world):
- """Generate the branches and enter them in world.
"""
- treeposition = self.pos
+ Generate the branches and enter them in world.
+ """
+
height = self.height
- topy = treeposition[1] + int(self.trunkheight + 0.5)
+ topy = self.pos[1] + int(self.trunkheight + 0.5)
# endrad is the base radius of the branches at the trunk
endrad = max(self.trunkradius * (1 - self.trunkheight / height), 1)
for coord in self.foliage_cords:
- distance = dist((coord[0], coord[2]),
- (treeposition[0], treeposition[2]))
- ydist = coord[1] - treeposition[1]
+ distance = dist((coord[0], coord[2]), (self.pos[0], self.pos[2]))
+ ydist = coord[1] - self.pos[1]
# value is a magic number that weights the probability
# of generating branches properly so that
# you get enough on small trees, but not too many
# on larger trees.
# Very difficult to get right... do not touch!
value = (self.branchdensity * 220 * height) / ((ydist + distance) ** 3)
if value < random():
continue
posy = coord[1]
slope = self.branchslope + (0.5 - random()) * .16
if coord[1] - distance * slope > topy:
# Another random rejection, for branches between
# the top of the trunk and the crown of the tree
threshold = 1 / height
if random() < threshold:
continue
branchy = topy
basesize = endrad
else:
branchy = posy - distance * slope
basesize = (endrad + (self.trunkradius - endrad) *
(topy - branchy) / self.trunkheight)
startsize = (basesize * (1 + random()) * PHI *
(distance / height) ** PHI)
if startsize < 1.0:
startsize = 1.0
rndr = sqrt(random()) * basesize * PHI
rndang = random() * 2 * pi
rndx = int(rndr * sin(rndang) + 0.5)
rndz = int(rndr * cos(rndang) + 0.5)
- startcoord = [treeposition[0] + rndx,
+ startcoord = [self.pos[0] + rndx,
int(branchy),
- treeposition[2] + rndz]
+ self.pos[2] + rndz]
endsize = 1.0
self.taperedcylinder(startcoord, coord, startsize, endsize, world,
blocks["log"].slot)
def make_trunk(self, world):
"""
Make the trunk, roots, buttresses, branches, etc.
"""
# In this method, x and z are the position of the trunk.
x, starty, z = self.pos
midy = starty + int(self.trunkheight / (PHI + 1))
topy = starty + int(self.trunkheight + 0.5)
end_size_factor = self.trunkheight / self.height
endrad = max(self.trunkradius * (1 - end_size_factor), 1)
midrad = max(self.trunkradius * (1 - end_size_factor * .5), endrad)
# Make the lower and upper sections of the trunk.
self.taperedcylinder([x, starty, z], [x, midy, z], self.trunkradius,
midrad, world, blocks["log"].slot)
self.taperedcylinder([x, midy, z], [x, topy, z], midrad, endrad,
world, blocks["log"].slot)
#Make the branches.
self.make_branches(world)
def prepare(self, world):
"""
Initialize the internal values for the Tree object.
Primarily, sets up the foliage cluster locations.
"""
- treeposition = self.pos
self.trunkradius = PHI * sqrt(self.height)
if self.trunkradius < 1:
self.trunkradius = 1
self.trunkheight = self.height
- yend = int(treeposition[1] + self.height)
+ yend = int(self.pos[1] + self.height)
self.branchdensity = 1.0
foliage_coords = []
- ystart = treeposition[1]
+ ystart = self.pos[1]
num_of_clusters_per_y = int(1.5 + (self.height / 19) ** 2)
if num_of_clusters_per_y < 1:
num_of_clusters_per_y = 1
# make sure we don't spend too much time off the top of the map
if yend > 255:
yend = 255
if ystart > 255:
ystart = 255
for y in xrange(yend, ystart, -1):
for i in xrange(num_of_clusters_per_y):
shapefac = self.shapefunc(y - ystart)
if shapefac is None:
continue
r = (sqrt(random()) + .328) * shapefac
theta = random() * 2 * pi
- x = int(r * sin(theta)) + treeposition[0]
- z = int(r * cos(theta)) + treeposition[2]
+ x = int(r * sin(theta)) + self.pos[0]
+ z = int(r * cos(theta)) + self.pos[2]
foliage_coords += [[x, y, z]]
self.foliage_cords = foliage_coords
class RoundTree(ProceduralTree):
"""
A rounded deciduous tree.
"""
species = BIRCH
branchslope = 1 / (PHI + 1)
foliage_shape = [2, 3, 3, 2.5, 1.6]
def prepare(self, world):
ProceduralTree.prepare(self, world)
self.trunkradius *= 0.8
self.trunkheight *= 0.7
def shapefunc(self, y):
twigs = ProceduralTree.shapefunc(self, y)
if twigs is not None:
return twigs
if y < self.height * (.282 + .1 * sqrt(random())):
return None
radius = self.height / 2
adj = self.height / 2 - y
if adj == 0:
distance = radius
elif abs(adj) >= radius:
distance = 0
else:
distance = dist((0, 0), (radius, adj))
distance *= PHI
return distance
class ConeTree(ProceduralTree):
"""
A conifer.
"""
species = PINE
branchslope = 0.15
foliage_shape = [3, 2.6, 2, 1]
def prepare(self, world):
ProceduralTree.prepare(self, world)
self.trunkradius *= 0.5
def shapefunc(self, y):
twigs = ProceduralTree.shapefunc(self, y)
if twigs is not None:
return twigs
if y < self.height * (.25 + .05 * sqrt(random())):
return None
# Radius.
return max((self.height - y) / (PHI + 1), 0)
class RainforestTree(ProceduralTree):
"""
A big rainforest tree.
"""
species = JUNGLE
branchslope = 1
foliage_shape = [3.4, 2.6]
def prepare(self, world):
# XXX play with these numbers until jungles look right
self.height = randint(10, 20)
self.trunkradius = randint(5, 15)
ProceduralTree.prepare(self, world)
self.trunkradius /= PHI + 1
self.trunkheight *= .9
def shapefunc(self, y):
# Bottom 4/5 of the tree is probably branch-free.
if y < self.height * 0.8:
twigs = ProceduralTree.shapefunc(self, y)
if twigs is not None and random() < 0.07:
return twigs
return None
else:
width = self.height * 1 / (IPHI + 1)
topdist = (self.height - y) / (self.height * 0.2)
distance = width * (PHI + topdist) * (PHI + random()) * 1 / (IPHI + 1)
return distance
class MangroveTree(RoundTree):
"""
A mangrove tree.
Like the round deciduous tree, but bigger, taller, and generally more
awesome.
"""
branchslope = 1
def prepare(self, world):
RoundTree.prepare(self, world)
self.trunkradius *= PHI
def shapefunc(self, y):
val = RoundTree.shapefunc(self, y)
if val is not None:
val *= IPHI
return val
def make_roots(self, rootbases, world):
"""generate the roots and enter them in world.
rootbases = [[x,z,base_radius], ...] and is the list of locations
the roots can originate from, and the size of that location.
"""
- treeposition = self.pos
+
height = self.height
for coord in self.foliage_cords:
# First, set the threshold for randomly selecting this
# coordinate for root creation.
- distance = dist((coord[0], coord[2]),
- (treeposition[0], treeposition[2]))
- ydist = coord[1] - treeposition[1]
+ distance = dist((coord[0], coord[2]), (self.pos[0], self.pos[2]))
+ ydist = coord[1] - self.pos[1]
value = ((self.branchdensity * 220 * height) /
((ydist + distance) ** 3))
# Randomly skip roots, based on the above threshold
if value < random():
continue
# initialize the internal variables from a selection of
# starting locations.
rootbase = choice(rootbases)
rootx = rootbase[0]
rootz = rootbase[1]
rootbaseradius = rootbase[2]
# Offset the root origin location by a random amount
# (radially) from the starting location.
rndr = sqrt(random()) * rootbaseradius * PHI
rndang = random() * 2 * pi
rndx = int(rndr * sin(rndang) + 0.5)
rndz = int(rndr * cos(rndang) + 0.5)
rndy = int(random() * rootbaseradius * 0.5)
- startcoord = [rootx + rndx, treeposition[1] + rndy, rootz + rndz]
+ startcoord = [rootx + rndx, self.pos[1] + rndy, rootz + rndz]
# offset is the distance from the root base to the root tip.
offset = [startcoord[i] - coord[i] for i in xrange(3)]
# If this is a mangrove tree, make the roots longer.
offset = [int(val * IPHI - 1.5) for val in offset]
rootstartsize = (rootbaseradius * IPHI * abs(offset[1]) /
(height * IPHI))
rootstartsize = max(rootstartsize, 1.0)
def make_trunk(self, world):
"""
Make the trunk, roots, buttresses, branches, etc.
"""
height = self.height
trunkheight = self.trunkheight
trunkradius = self.trunkradius
- treeposition = self.pos
- starty = treeposition[1]
- midy = treeposition[1] + int(trunkheight * 1 / (PHI + 1))
- topy = treeposition[1] + int(trunkheight + 0.5)
+ starty = self.pos[1]
+ midy = self.pos[1] + int(trunkheight * 1 / (PHI + 1))
+ topy = self.pos[1] + int(trunkheight + 0.5)
# In this method, x and z are the position of the trunk.
- x = treeposition[0]
- z = treeposition[2]
+ x = self.pos[0]
+ z = self.pos[2]
end_size_factor = trunkheight / height
endrad = max(trunkradius * (1 - end_size_factor), 1)
midrad = max(trunkradius * (1 - end_size_factor * .5), endrad)
# The start radius of the trunk should be a little smaller if we
# are using root buttresses.
startrad = trunkradius * .8
# rootbases is used later in self.makeroots(...) as
# starting locations for the roots.
rootbases = [[x, z, startrad]]
buttress_radius = trunkradius * 0.382
# posradius is how far the root buttresses should be offset
# from the trunk.
posradius = trunkradius
# In mangroves, the root buttresses are much more extended.
posradius = posradius * (IPHI + 1)
num_of_buttresses = int(sqrt(trunkradius) + 3.5)
for i in xrange(num_of_buttresses):
rndang = random() * 2 * pi
thisposradius = posradius * (0.9 + random() * .2)
# thisx and thisz are the x and z position for the base of
# the root buttress.
thisx = x + int(thisposradius * sin(rndang))
thisz = z + int(thisposradius * cos(rndang))
# thisbuttressradius is the radius of the buttress.
# Currently, root buttresses do not taper.
thisbuttressradius = max(buttress_radius * (PHI + random()), 1)
# Make the root buttress.
self.taperedcylinder([thisx, starty, thisz], [x, midy, z],
thisbuttressradius, thisbuttressradius,
world, blocks["log"].slot)
# Add this root buttress as a possible location at
# which roots can spawn.
- rootbases += [[thisx, thisz, thisbuttressradius]]
+ rootbases.append([thisx, thisz, thisbuttressradius])
# Make the lower and upper sections of the trunk.
self.taperedcylinder([x, starty, z], [x, midy, z], startrad, midrad,
world, blocks["log"].slot)
self.taperedcylinder([x, midy, z], [x, topy, z], midrad, endrad,
world, blocks["log"].slot)
#Make the branches
self.make_branches(world)
+
+ # XXX ... and do something with the rootbases?
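The PHI and IPHI constants defined at the top of this module are the two golden-ratio conjugates, and the tree code mixes expressions such as `PHI + 1` and `IPHI` freely; that works because of two identities, checked in this small sketch:

```python
from math import isclose, sqrt

PHI = (sqrt(5) - 1) * 0.5   # ~0.618..., the golden ratio conjugate
IPHI = (sqrt(5) + 1) * 0.5  # ~1.618..., the golden ratio proper

# The two constants are reciprocals of one another...
assert isclose(PHI * IPHI, 1.0)
# ...and differ by exactly 1, so PHI + 1 equals IPHI.
assert isclose(PHI + 1, IPHI)
```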
|
bravoserver/bravo
|
8ef1655e6d45a56071b6789de1d4279318f8fe73
|
terrain/trees: Fix up a couple distance calculations.
|
diff --git a/bravo/terrain/trees.py b/bravo/terrain/trees.py
index 63bf0d5..e65902e 100644
--- a/bravo/terrain/trees.py
+++ b/bravo/terrain/trees.py
@@ -1,664 +1,663 @@
from __future__ import division
from itertools import product
from math import cos, pi, sin, sqrt
from random import choice, random, randint
from zope.interface import Interface, implements
from bravo.blocks import blocks
from bravo.chunk import CHUNK_HEIGHT
+from bravo.utilities.maths import dist
PHI = (sqrt(5) - 1) * 0.5
IPHI = (sqrt(5) + 1) * 0.5
"""
Phi and inverse phi constants.
"""
# add lights in the middle of foliage clusters
# for those huge trees that get so dark underneath
# or for enchanted forests that should glow and stuff
# Only works if SHAPE is "round" or "cone" or "procedural"
# 0 makes just normal trees
# 1 adds one light inside the foliage clusters for a bit of light
# 2 adds two lights around the base of each cluster, for more light
# 4 adds lights all around the base of each cluster for lots of light
LIGHTTREE = 0
def dist_to_mat(cord, vec, matidxlist, world, invert=False, limit=None):
"""
Find the distance from the given coordinates to any of a set of blocks
along a certain vector.
"""
curcord = [i + .5 for i in cord]
iterations = 0
on_map = True
while on_map:
x = int(curcord[0])
y = int(curcord[1])
z = int(curcord[2])
if not 0 <= y < CHUNK_HEIGHT:
break
block = world.sync_get_block((x, y, z))
if block in matidxlist and not invert:
break
elif block not in matidxlist and invert:
break
else:
curcord = [curcord[i] + vec[i] for i in range(3)]
iterations += 1
if limit and iterations > limit:
break
return iterations
class ITree(Interface):
"""
An ideal Platonic tree.
Trees usually are made of some sort of wood, and are adorned with leaves.
These trees also may have lanterns hanging in their branches.
"""
def prepare(world):
"""
Do any post-__init__() setup.
"""
def make_trunk(world):
"""
Write a trunk to the world.
"""
def make_foliage(world):
"""
Write foliage (leaves, lanterns) to the world.
"""
OAK, PINE, BIRCH, JUNGLE = range(4)
class Tree(object):
"""
Set up the interface for tree objects. Designed for subclassing.
"""
implements(ITree)
species = OAK
def __init__(self, pos, height=None):
if height is None:
self.height = randint(4, 7)
else:
self.height = height
self.pos = pos
def prepare(self, world):
pass
def make_trunk(self, world):
pass
def make_foliage(self, world):
pass
class StickTree(Tree):
"""
A large stick or log.
Subclass this to build trees which only require a single-log trunk.
"""
def make_trunk(self, world):
x, y, z = self.pos
for i in xrange(self.height):
world.sync_set_block((x, y, z), blocks["log"].slot)
world.sync_set_metadata((x, y, z), self.species)
y += 1
class NormalTree(StickTree):
"""
A Notchy tree.
This tree will be a single bulb of foliage above a single width trunk.
The shape is very similar to the default Minecraft tree.
"""
def make_foliage(self, world):
topy = self.pos[1] + self.height - 1
start = topy - 2
end = topy + 2
for y in xrange(start, end):
if y > start + 1:
rad = 1
else:
rad = 2
for xoff, zoff in product(xrange(-rad, rad + 1), repeat=2):
# XXX Wait, sorry, what.
if (random() > PHI and abs(xoff) == abs(zoff) == rad or
xoff == zoff == 0):
continue
x = self.pos[0] + xoff
z = self.pos[2] + zoff
world.sync_set_block((x, y, z), blocks["leaves"].slot)
world.sync_set_metadata((x, y, z), self.species)
class BambooTree(StickTree):
"""
A bamboo-like tree.
Bamboo foliage is sparse and always adjacent to the trunk.
"""
def make_foliage(self, world):
start = self.pos[1]
end = start + self.height + 1
for y in xrange(start, end):
for i in (0, 1):
xoff = choice([-1, 1])
zoff = choice([-1, 1])
x = self.pos[0] + xoff
z = self.pos[2] + zoff
world.sync_set_block((x, y, z), blocks["leaves"].slot)
class PalmTree(StickTree):
"""
A traditional palm tree.
This tree has four tufts of foliage at the top of the trunk. No coconuts,
though.
"""
def make_foliage(self, world):
y = self.pos[1] + self.height
for xoff, zoff in product(xrange(-2, 3), repeat=2):
if abs(xoff) == abs(zoff):
x = self.pos[0] + xoff
z = self.pos[2] + zoff
world.sync_set_block((x, y, z), blocks["leaves"].slot)
class ProceduralTree(Tree):
"""
Base class for larger, complex, procedurally generated trees.
This tree type has roots, a trunk, branches all of varying width, and many
foliage clusters.
This class needs to be subclassed to be useful. Specifically,
foliage_shape must be set. Subclass 'prepare' and 'shapefunc' to make
different shaped trees.
"""
def cross_section(self, center, radius, diraxis, matidx, world):
"""Create a round section of type matidx in world.
Passed values:
center = [x,y,z] for the coordinates of the center block
radius = <number> as the radius of the section. May be a float or int.
diraxis: The list index for the axis to make the section
perpendicular to. 0 indicates the x axis, 1 the y, 2 the z. The
section will extend along the other two axes.
matidx = <int> the integer value to make the section out of.
world = the world object that blocks are written into.
Block metadata is taken from self.species.
"""
rad = int(radius + PHI)
if rad <= 0:
return None
secidx1 = (diraxis - 1) % 3
secidx2 = (1 + diraxis) % 3
coord = [0] * 3
for off1, off2 in product(xrange(-rad, rad + 1), repeat=2):
thisdist = sqrt((abs(off1) + .5) ** 2 + (abs(off2) + .5) ** 2)
if thisdist > radius:
continue
pri = center[diraxis]
sec1 = center[secidx1] + off1
sec2 = center[secidx2] + off2
coord[diraxis] = pri
coord[secidx1] = sec1
coord[secidx2] = sec2
world.sync_set_block(coord, matidx)
world.sync_set_metadata(coord, self.species)
def shapefunc(self, y):
"""
Obtain a radius for the given height.
Subclass this method to customize tree design.
If None is returned, no foliage cluster will be created.
:returns: radius, or None
"""
if random() < 100 / ((self.height) ** 2) and y < self.trunkheight:
return self.height * .12
return None
def foliage_cluster(self, center, world):
"""
Generate a round cluster of foliage at the location center.
The shape of the cluster is defined by the list self.foliage_shape.
This list must be set in a subclass of ProceduralTree.
"""
x = center[0]
y = center[1]
z = center[2]
for i in self.foliage_shape:
self.cross_section([x, y, z], i, 1, blocks["leaves"].slot, world)
y += 1
def taperedcylinder(self, start, end, startsize, endsize, world, blockdata):
"""
Create a tapered cylinder in world.
start and end are the beginning and ending coordinates of form [x,y,z].
startsize and endsize are the beginning and ending radius.
The material of the cylinder is given by the blockdata argument.
"""
# delta is the coordinate vector for the difference between
# start and end.
delta = [int(e - s) for e, s in zip(end, start)]
# primidx is the index (0,1,or 2 for x,y,z) for the coordinate
# which has the largest overall delta.
maxdist = max(delta, key=abs)
if maxdist == 0:
return None
primidx = delta.index(maxdist)
# secidx1 and secidx2 are the remaining indices out of [0,1,2].
secidx1 = (primidx - 1) % 3
secidx2 = (1 + primidx) % 3
# primsign is the digit 1 or -1 depending on whether the limb is headed
# along the positive or negative primidx axis.
primsign = cmp(delta[primidx], 0) or 1
# secdelta1 and ...2 are the amount the associated values change
# for every step along the prime axis.
secdelta1 = delta[secidx1]
secfac1 = float(secdelta1) / delta[primidx]
secdelta2 = delta[secidx2]
secfac2 = float(secdelta2) / delta[primidx]
# Initialize coord. These values could be anything, since
# they are overwritten.
coord = [0] * 3
# Loop through each cross-section along the primary axis,
# from start to end.
endoffset = delta[primidx] + primsign
for primoffset in xrange(0, endoffset, primsign):
primloc = start[primidx] + primoffset
secloc1 = int(start[secidx1] + primoffset * secfac1)
secloc2 = int(start[secidx2] + primoffset * secfac2)
coord[primidx] = primloc
coord[secidx1] = secloc1
coord[secidx2] = secloc2
primdist = abs(delta[primidx])
radius = endsize + (startsize - endsize) * abs(delta[primidx]
- primoffset) / primdist
self.cross_section(coord, radius, primidx, blockdata, world)
def make_foliage(self, world):
"""
Generate the foliage for the tree in world.
Also place lanterns.
"""
foliage_coords = self.foliage_cords
for coord in foliage_coords:
self.foliage_cluster(coord, world)
for x, y, z in foliage_coords:
world.sync_set_block((x, y, z), blocks["log"].slot)
world.sync_set_metadata((x, y, z), self.species)
if LIGHTTREE == 1:
world.sync_set_block((x, y + 1, z), blocks["lightstone"].slot)
elif LIGHTTREE in [2, 4]:
world.sync_set_block((x + 1, y, z), blocks["lightstone"].slot)
world.sync_set_block((x - 1, y, z), blocks["lightstone"].slot)
if LIGHTTREE == 4:
world.sync_set_block((x, y, z + 1), blocks["lightstone"].slot)
world.sync_set_block((x, y, z - 1), blocks["lightstone"].slot)
def make_branches(self, world):
"""Generate the branches and enter them in world.
"""
treeposition = self.pos
height = self.height
topy = treeposition[1] + int(self.trunkheight + 0.5)
# endrad is the base radius of the branches at the trunk
endrad = max(self.trunkradius * (1 - self.trunkheight / height), 1)
for coord in self.foliage_cords:
- # XXX here too
- dist = (sqrt(float(coord[0] - treeposition[0]) ** 2 +
- float(coord[2] - treeposition[2]) ** 2))
+ distance = dist((coord[0], coord[2]),
+ (treeposition[0], treeposition[2]))
ydist = coord[1] - treeposition[1]
# value is a magic number that weights the probability
# of generating branches properly so that
# you get enough on small trees, but not too many
# on larger trees.
# Very difficult to get right... do not touch!
- value = (self.branchdensity * 220 * height) / ((ydist + dist) ** 3)
+ value = (self.branchdensity * 220 * height) / ((ydist + distance) ** 3)
if value < random():
continue
posy = coord[1]
slope = self.branchslope + (0.5 - random()) * .16
- if coord[1] - dist * slope > topy:
+ if coord[1] - distance * slope > topy:
# Another random rejection, for branches between
# the top of the trunk and the crown of the tree
threshold = 1 / height
if random() < threshold:
continue
branchy = topy
basesize = endrad
else:
- branchy = posy - dist * slope
+ branchy = posy - distance * slope
basesize = (endrad + (self.trunkradius - endrad) *
(topy - branchy) / self.trunkheight)
startsize = (basesize * (1 + random()) * PHI *
- (dist/height) ** PHI)
+ (distance / height) ** PHI)
if startsize < 1.0:
startsize = 1.0
rndr = sqrt(random()) * basesize * PHI
rndang = random() * 2 * pi
rndx = int(rndr * sin(rndang) + 0.5)
rndz = int(rndr * cos(rndang) + 0.5)
startcoord = [treeposition[0] + rndx,
int(branchy),
treeposition[2] + rndz]
endsize = 1.0
self.taperedcylinder(startcoord, coord, startsize, endsize, world,
blocks["log"].slot)
def make_trunk(self, world):
"""
Make the trunk, roots, buttresses, branches, etc.
"""
# In this method, x and z are the position of the trunk.
x, starty, z = self.pos
midy = starty + int(self.trunkheight / (PHI + 1))
topy = starty + int(self.trunkheight + 0.5)
end_size_factor = self.trunkheight / self.height
endrad = max(self.trunkradius * (1 - end_size_factor), 1)
midrad = max(self.trunkradius * (1 - end_size_factor * .5), endrad)
# Make the lower and upper sections of the trunk.
self.taperedcylinder([x, starty, z], [x, midy, z], self.trunkradius,
midrad, world, blocks["log"].slot)
self.taperedcylinder([x, midy, z], [x, topy, z], midrad, endrad,
world, blocks["log"].slot)
#Make the branches.
self.make_branches(world)
def prepare(self, world):
"""
Initialize the internal values for the Tree object.
Primarily, sets up the foliage cluster locations.
"""
treeposition = self.pos
self.trunkradius = PHI * sqrt(self.height)
if self.trunkradius < 1:
self.trunkradius = 1
self.trunkheight = self.height
yend = int(treeposition[1] + self.height)
self.branchdensity = 1.0
foliage_coords = []
ystart = treeposition[1]
num_of_clusters_per_y = int(1.5 + (self.height / 19) ** 2)
if num_of_clusters_per_y < 1:
num_of_clusters_per_y = 1
# make sure we don't spend too much time off the top of the map
if yend > 255:
yend = 255
if ystart > 255:
ystart = 255
for y in xrange(yend, ystart, -1):
for i in xrange(num_of_clusters_per_y):
shapefac = self.shapefunc(y - ystart)
if shapefac is None:
continue
r = (sqrt(random()) + .328) * shapefac
theta = random() * 2 * pi
x = int(r * sin(theta)) + treeposition[0]
z = int(r * cos(theta)) + treeposition[2]
foliage_coords += [[x, y, z]]
self.foliage_cords = foliage_coords
class RoundTree(ProceduralTree):
"""
A rounded deciduous tree.
"""
species = BIRCH
branchslope = 1 / (PHI + 1)
foliage_shape = [2, 3, 3, 2.5, 1.6]
def prepare(self, world):
ProceduralTree.prepare(self, world)
self.trunkradius *= 0.8
self.trunkheight *= 0.7
def shapefunc(self, y):
twigs = ProceduralTree.shapefunc(self, y)
if twigs is not None:
return twigs
if y < self.height * (.282 + .1 * sqrt(random())):
return None
radius = self.height / 2
adj = self.height / 2 - y
if adj == 0:
- dist = radius
+ distance = radius
elif abs(adj) >= radius:
- dist = 0
+ distance = 0
else:
- dist = sqrt((radius ** 2) - (adj ** 2))
- dist *= PHI
- return dist
+ distance = dist((0, 0), (radius, adj))
+ distance *= PHI
+ return distance
class ConeTree(ProceduralTree):
"""
A conifer.
"""
species = PINE
branchslope = 0.15
foliage_shape = [3, 2.6, 2, 1]
def prepare(self, world):
ProceduralTree.prepare(self, world)
self.trunkradius *= 0.5
def shapefunc(self, y):
twigs = ProceduralTree.shapefunc(self, y)
if twigs is not None:
return twigs
if y < self.height * (.25 + .05 * sqrt(random())):
return None
# Radius.
return max((self.height - y) / (PHI + 1), 0)
class RainforestTree(ProceduralTree):
"""
A big rainforest tree.
"""
species = JUNGLE
branchslope = 1
foliage_shape = [3.4, 2.6]
def prepare(self, world):
# XXX play with these numbers until jungles look right
self.height = randint(10, 20)
self.trunkradius = randint(5, 15)
ProceduralTree.prepare(self, world)
self.trunkradius /= PHI + 1
self.trunkheight *= .9
def shapefunc(self, y):
# Bottom 4/5 of the tree is probably branch-free.
if y < self.height * 0.8:
twigs = ProceduralTree.shapefunc(self, y)
if twigs is not None and random() < 0.07:
return twigs
return None
else:
width = self.height * 1 / (IPHI + 1)
topdist = (self.height - y) / (self.height * 0.2)
- dist = width * (PHI + topdist) * (PHI + random()) * 1 / (IPHI + 1)
- return dist
+ distance = width * (PHI + topdist) * (PHI + random()) * 1 / (IPHI + 1)
+ return distance
class MangroveTree(RoundTree):
"""
A mangrove tree.
Like the round deciduous tree, but bigger, taller, and generally more
awesome.
"""
branchslope = 1
def prepare(self, world):
RoundTree.prepare(self, world)
self.trunkradius *= PHI
def shapefunc(self, y):
val = RoundTree.shapefunc(self, y)
if val is not None:
val *= IPHI
return val
def make_roots(self, rootbases, world):
"""generate the roots and enter them in world.
rootbases = [[x,z,base_radius], ...] and is the list of locations
the roots can originate from, and the size of that location.
"""
treeposition = self.pos
height = self.height
for coord in self.foliage_cords:
# First, set the threshold for randomly selecting this
# coordinate for root creation.
- # XXX factor out into utility function
- dist = (sqrt(float(coord[0] - treeposition[0]) ** 2 +
- float(coord[2] - treeposition[2]) ** 2))
+ distance = dist((coord[0], coord[2]),
+ (treeposition[0], treeposition[2]))
ydist = coord[1] - treeposition[1]
value = ((self.branchdensity * 220 * height) /
- ((ydist + dist) ** 3))
+ ((ydist + distance) ** 3))
# Randomly skip roots, based on the above threshold
if value < random():
continue
# initialize the internal variables from a selection of
# starting locations.
rootbase = choice(rootbases)
rootx = rootbase[0]
rootz = rootbase[1]
rootbaseradius = rootbase[2]
# Offset the root origin location by a random amount
# (radially) from the starting location.
rndr = sqrt(random()) * rootbaseradius * PHI
rndang = random() * 2 * pi
rndx = int(rndr * sin(rndang) + 0.5)
rndz = int(rndr * cos(rndang) + 0.5)
rndy = int(random() * rootbaseradius * 0.5)
startcoord = [rootx + rndx, treeposition[1] + rndy, rootz + rndz]
# offset is the distance from the root base to the root tip.
offset = [startcoord[i] - coord[i] for i in xrange(3)]
# If this is a mangrove tree, make the roots longer.
offset = [int(val * IPHI - 1.5) for val in offset]
rootstartsize = (rootbaseradius * IPHI * abs(offset[1]) /
(height * IPHI))
rootstartsize = max(rootstartsize, 1.0)
def make_trunk(self, world):
"""
Make the trunk, roots, buttresses, branches, etc.
"""
height = self.height
trunkheight = self.trunkheight
trunkradius = self.trunkradius
treeposition = self.pos
starty = treeposition[1]
midy = treeposition[1] + int(trunkheight * 1 / (PHI + 1))
topy = treeposition[1] + int(trunkheight + 0.5)
# In this method, x and z are the position of the trunk.
x = treeposition[0]
z = treeposition[2]
end_size_factor = trunkheight / height
endrad = max(trunkradius * (1 - end_size_factor), 1)
midrad = max(trunkradius * (1 - end_size_factor * .5), endrad)
# The start radius of the trunk should be a little smaller if we
# are using root buttresses.
startrad = trunkradius * .8
# rootbases is used later in self.make_roots(...) as
# starting locations for the roots.
rootbases = [[x, z, startrad]]
buttress_radius = trunkradius * 0.382
# posradius is how far the root buttresses should be offset
# from the trunk.
posradius = trunkradius
# In mangroves, the root buttresses are much more extended.
posradius = posradius * (IPHI + 1)
num_of_buttresses = int(sqrt(trunkradius) + 3.5)
for i in xrange(num_of_buttresses):
rndang = random() * 2 * pi
thisposradius = posradius * (0.9 + random() * .2)
# thisx and thisz are the x and z position for the base of
# the root buttress.
thisx = x + int(thisposradius * sin(rndang))
thisz = z + int(thisposradius * cos(rndang))
# thisbuttressradius is the radius of the buttress.
# Currently, root buttresses do not taper.
thisbuttressradius = max(buttress_radius * (PHI + random()), 1)
# Make the root buttress.
self.taperedcylinder([thisx, starty, thisz], [x, midy, z],
thisbuttressradius, thisbuttressradius,
world, blocks["log"].slot)
# Add this root buttress as a possible location at
# which roots can spawn.
rootbases += [[thisx, thisz, thisbuttressradius]]
# Make the lower and upper sections of the trunk.
self.taperedcylinder([x, starty, z], [x, midy, z], startrad, midrad,
world, blocks["log"].slot)
self.taperedcylinder([x, midy, z], [x, topy, z], midrad, endrad,
world, blocks["log"].slot)
# Make the branches.
self.make_branches(world)
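In `RoundTree.shapefunc` above, `adj` is the vertical offset from the crown's mid-height, so the open-coded `sqrt((radius ** 2) - (adj ** 2))` is the horizontal half-width of a sphere of radius `height / 2` at that level, scaled by PHI (~0.618) to flatten the crown. A standalone Python 3 sketch of that profile (the module itself targets Python 2, and the helper name here is illustrative, not part of the module):

```python
from math import sqrt

PHI = (sqrt(5) - 1) * 0.5  # ~0.618, as defined at the top of trees.py

def crown_radius(height, y):
    """Foliage radius at height y of a round crown, per RoundTree.shapefunc."""
    radius = height / 2
    adj = height / 2 - y
    if abs(adj) >= radius:
        return 0.0
    # Half-width of a sphere of the given radius, adj blocks from its center.
    return sqrt(radius ** 2 - adj ** 2) * PHI

# Widest at mid-height, shrinking to zero at the base and the top:
print(round(crown_radius(10, 5), 3))  # 3.09
print(crown_radius(10, 0))            # 0.0
```

The `random()`-based cutoff near the top of the real method additionally trims the bottom of this sphere, so foliage only begins partway up the trunk.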
|
bravoserver/bravo
|
8de4affa78931303df8e8219111111222de7c792
|
utilities/maths: Add Pythagorean distance, and test.
|
diff --git a/bravo/tests/utilities/test_maths.py b/bravo/tests/utilities/test_maths.py
index 4b01aa1..aeff23b 100644
--- a/bravo/tests/utilities/test_maths.py
+++ b/bravo/tests/utilities/test_maths.py
@@ -1,39 +1,48 @@
import unittest
-from bravo.utilities.maths import clamp, morton2
+from bravo.utilities.maths import clamp, dist, morton2
+
+
+class TestDistance(unittest.TestCase):
+
+ def test_pythagorean_triple(self):
+ five = dist((0, 0), (3, 4))
+ self.assertAlmostEqual(five, 5)
+
class TestMorton(unittest.TestCase):
def test_zero(self):
self.assertEqual(morton2(0, 0), 0)
def test_first(self):
self.assertEqual(morton2(1, 0), 1)
def test_second(self):
self.assertEqual(morton2(0, 1), 2)
def test_first_full(self):
self.assertEqual(morton2(0xffff, 0x0), 0x55555555)
def test_second_full(self):
self.assertEqual(morton2(0x0, 0xffff), 0xaaaaaaaa)
+
class TestClamp(unittest.TestCase):
def test_minimum(self):
self.assertEqual(clamp(-1, 0, 3), 0)
def test_maximum(self):
self.assertEqual(clamp(4, 0, 3), 3)
def test_middle(self):
self.assertEqual(clamp(2, 0, 3), 2)
def test_middle_polymorphic(self):
"""
``clamp()`` doesn't care too much about its arguments, and won't
modify types unnecessarily.
"""
self.assertEqual(clamp(1.5, 0, 3), 1.5)
diff --git a/bravo/utilities/maths.py b/bravo/utilities/maths.py
index 6d2aecd..768ef72 100644
--- a/bravo/utilities/maths.py
+++ b/bravo/utilities/maths.py
@@ -1,85 +1,93 @@
from itertools import product
-from math import cos, sin
+from math import cos, sin, sqrt
+
+
+def dist(first, second):
+ """
+ Calculate the distance from one point to another.
+ """
+
+ return sqrt(sum((x - y) ** 2 for x, y in zip(first, second)))
def rotated_cosine(x, y, theta, lambd):
r"""
Evaluate a rotated 3D sinusoidal wave at a given point, angle, and
wavelength.
The function used is:
.. math::
f(x, y) = -\cos((x \cos\theta - y \sin\theta) / \lambda) / 2 + 1
This function has a handful of useful properties; it has a local minimum
at f(0, 0) and oscillates infinitely between 0 and 1.
:param float x: X coordinate
:param float y: Y coordinate
:param float theta: angle of rotation
:param float lambda: wavelength
:returns: float of f(x, y)
"""
return -cos((x * cos(theta) - y * sin(theta)) / lambd) / 2 + 1
def morton2(x, y):
"""
Create a Morton number by interleaving the bits of two numbers.
This can be used to map 2D coordinates into the integers.
Inputs will be masked off to 16 bits, unsigned.
"""
gx = x & 0xffff
gy = y & 0xffff
b = 0x00ff00ff, 0x0f0f0f0f, 0x33333333, 0x55555555
s = 8, 4, 2, 1
for i, j in zip(b, s):
gx = (gx | (gx << j)) & i
gy = (gy | (gy << j)) & i
return gx | (gy << 1)
def clamp(x, low, high):
"""
Clamp or saturate a number to be no lower than a minimum and no higher
than a maximum.
Implemented as its own function simply because it's so easy to mess up
when open-coded.
"""
return min(max(x, low), high)
def circling(x, y, r):
"""
Generate the points of the filled integral circle of the given radius
around the given coordinates.
"""
l = []
for i, j in product(range(-r, r + 1), repeat=2):
if i ** 2 + j ** 2 <= r ** 2:
l.append((x + i, y + j))
return l
def sorted_by_distance(iterable, x, y):
"""
Like ``sorted()``, but by distance to the given coordinates.
"""
def key(t):
return (t[0] - x) ** 2 + (t[1] - y) ** 2
return sorted(iterable, key=key)
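The new `dist` helper is dimension-agnostic, since it zips the two coordinate tuples before summing squared differences. A quick sketch of how it behaves (Python 3 here, though the module targets Python 2):

```python
from math import sqrt

def dist(first, second):
    """Euclidean distance between two points of the same dimension."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(first, second)))

# The 3-4-5 Pythagorean triple exercised by the new test:
print(dist((0, 0), (3, 4)))        # 5.0
# It works unchanged on 3D coordinates, e.g. full block positions:
print(dist((0, 0, 0), (2, 3, 6)))  # 7.0
```

Note that because `zip` stops at the shorter tuple, mixed-dimension inputs silently truncate rather than raise; the callers in trees.py always pass matched pairs.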
|
bravoserver/bravo
|
df93e32cf98fb6049de849142ae3dfb94f4edf77
|
terrain/trees: A big pile of PEP8 fixes.
|
diff --git a/bravo/terrain/trees.py b/bravo/terrain/trees.py
index 549a641..63bf0d5 100644
--- a/bravo/terrain/trees.py
+++ b/bravo/terrain/trees.py
@@ -1,661 +1,664 @@
from __future__ import division
from itertools import product
from math import cos, pi, sin, sqrt
from random import choice, random, randint
from zope.interface import Interface, implements
from bravo.blocks import blocks
from bravo.chunk import CHUNK_HEIGHT
PHI = (sqrt(5) - 1) * 0.5
IPHI = (sqrt(5) + 1) * 0.5
"""
Phi and inverse phi constants.
"""
# add lights in the middle of foliage clusters
# for those huge trees that get so dark underneath
# or for enchanted forests that should glow and stuff
# Only works if SHAPE is "round" or "cone" or "procedural"
# 0 makes just normal trees
# 1 adds one light inside the foliage clusters for a bit of light
# 2 adds two lights around the base of each cluster, for more light
# 4 adds lights all around the base of each cluster for lots of light
LIGHTTREE = 0
def dist_to_mat(cord, vec, matidxlist, world, invert=False, limit=None):
"""
Find the distance from the given coordinates to any of a set of blocks
along a certain vector.
"""
curcord = [i + .5 for i in cord]
iterations = 0
on_map = True
while on_map:
x = int(curcord[0])
y = int(curcord[1])
z = int(curcord[2])
if not 0 <= y < CHUNK_HEIGHT:
break
block = world.sync_get_block((x, y, z))
if block in matidxlist and not invert:
break
elif block not in matidxlist and invert:
break
else:
curcord = [curcord[i] + vec[i] for i in range(3)]
iterations += 1
if limit and iterations > limit:
break
return iterations
class ITree(Interface):
"""
An ideal Platonic tree.
Trees usually are made of some sort of wood, and are adorned with leaves.
These trees also may have lanterns hanging in their branches.
"""
def prepare(world):
"""
Do any post-__init__() setup.
"""
def make_trunk(world):
"""
Write a trunk to the world.
"""
def make_foliage(world):
"""
Write foliage (leaves, lanterns) to the world.
"""
OAK, PINE, BIRCH, JUNGLE = range(4)
class Tree(object):
"""
Set up the interface for tree objects. Designed for subclassing.
"""
implements(ITree)
species = OAK
def __init__(self, pos, height=None):
if height is None:
self.height = randint(4, 7)
else:
self.height = height
self.pos = pos
def prepare(self, world):
pass
def make_trunk(self, world):
pass
def make_foliage(self, world):
pass
class StickTree(Tree):
"""
A large stick or log.
Subclass this to build trees which only require a single-log trunk.
"""
def make_trunk(self, world):
x, y, z = self.pos
for i in xrange(self.height):
world.sync_set_block((x, y, z), blocks["log"].slot)
world.sync_set_metadata((x, y, z), self.species)
y += 1
class NormalTree(StickTree):
"""
A Notchy tree.
This tree will be a single bulb of foliage above a single width trunk.
The shape is very similar to the default Minecraft tree.
"""
def make_foliage(self, world):
topy = self.pos[1] + self.height - 1
start = topy - 2
end = topy + 2
for y in xrange(start, end):
if y > start + 1:
rad = 1
else:
rad = 2
for xoff, zoff in product(xrange(-rad, rad + 1), repeat=2):
+ # XXX Wait, sorry, what.
if (random() > PHI and abs(xoff) == abs(zoff) == rad or
- xoff == zoff == 0):
+ xoff == zoff == 0):
continue
x = self.pos[0] + xoff
z = self.pos[2] + zoff
world.sync_set_block((x, y, z), blocks["leaves"].slot)
world.sync_set_metadata((x, y, z), self.species)
class BambooTree(StickTree):
"""
A bamboo-like tree.
Bamboo foliage is sparse and always adjacent to the trunk.
"""
def make_foliage(self, world):
start = self.pos[1]
end = start + self.height + 1
for y in xrange(start, end):
for i in (0, 1):
xoff = choice([-1, 1])
zoff = choice([-1, 1])
x = self.pos[0] + xoff
z = self.pos[2] + zoff
world.sync_set_block((x, y, z), blocks["leaves"].slot)
class PalmTree(StickTree):
"""
A traditional palm tree.
This tree has four tufts of foliage at the top of the trunk. No coconuts,
though.
"""
def make_foliage(self, world):
y = self.pos[1] + self.height
for xoff, zoff in product(xrange(-2, 3), repeat=2):
if abs(xoff) == abs(zoff):
x = self.pos[0] + xoff
z = self.pos[2] + zoff
world.sync_set_block((x, y, z), blocks["leaves"].slot)
class ProceduralTree(Tree):
"""
Base class for larger, complex, procedurally generated trees.
This tree type has roots, a trunk, branches all of varying width, and many
foliage clusters.
This class needs to be subclassed to be useful. Specifically,
foliage_shape must be set. Subclass 'prepare' and 'shapefunc' to make
different shaped trees.
"""
def cross_section(self, center, radius, diraxis, matidx, world):
"""Create a round section of type matidx in world.
Passed values:
center = [x,y,z] for the coordinates of the center block
radius = <number> as the radius of the section. May be a float or int.
diraxis: The list index for the axis to make the section
perpendicular to. 0 indicates the x axis, 1 the y, 2 the z. The
section will extend along the other two axes.
matidx = <int> the integer value to make the section out of.
world = the world object in which to set blocks.
"""
rad = int(radius + PHI)
if rad <= 0:
return None
secidx1 = (diraxis - 1) % 3
secidx2 = (1 + diraxis) % 3
coord = [0] * 3
for off1, off2 in product(xrange(-rad, rad + 1), repeat=2):
- thisdist = sqrt((abs(off1) + .5)**2 + (abs(off2) + .5)**2)
+ thisdist = sqrt((abs(off1) + .5) ** 2 + (abs(off2) + .5) ** 2)
if thisdist > radius:
continue
pri = center[diraxis]
sec1 = center[secidx1] + off1
sec2 = center[secidx2] + off2
coord[diraxis] = pri
coord[secidx1] = sec1
coord[secidx2] = sec2
world.sync_set_block(coord, matidx)
world.sync_set_metadata(coord, self.species)
def shapefunc(self, y):
"""
Obtain a radius for the given height.
Subclass this method to customize tree design.
If None is returned, no foliage cluster will be created.
:returns: radius, or None
"""
- if random() < 100 / ((self.height)**2) and y < self.trunkheight:
+ if random() < 100 / ((self.height) ** 2) and y < self.trunkheight:
return self.height * .12
return None
def foliage_cluster(self, center, world):
"""
Generate a round cluster of foliage at the location center.
The shape of the cluster is defined by the list self.foliage_shape.
This list must be set in a subclass of ProceduralTree.
"""
x = center[0]
y = center[1]
z = center[2]
for i in self.foliage_shape:
self.cross_section([x, y, z], i, 1, blocks["leaves"].slot, world)
y += 1
def taperedcylinder(self, start, end, startsize, endsize, world, blockdata):
"""
Create a tapered cylinder in world.
start and end are the beginning and ending coordinates of form [x,y,z].
startsize and endsize are the beginning and ending radius.
The material of the cylinder is given by blockdata.
"""
# delta is the coordinate vector for the difference between
# start and end.
delta = [int(e - s) for e, s in zip(end, start)]
# primidx is the index (0,1,or 2 for x,y,z) for the coordinate
# which has the largest overall delta.
maxdist = max(delta, key=abs)
if maxdist == 0:
return None
primidx = delta.index(maxdist)
# secidx1 and secidx2 are the remaining indices out of [0,1,2].
secidx1 = (primidx - 1) % 3
secidx2 = (1 + primidx) % 3
# primsign is the digit 1 or -1 depending on whether the limb is headed
# along the positive or negative primidx axis.
primsign = cmp(delta[primidx], 0) or 1
# secdelta1 and ...2 are the amount the associated values change
# for every step along the prime axis.
secdelta1 = delta[secidx1]
- secfac1 = float(secdelta1)/delta[primidx]
+ secfac1 = float(secdelta1) / delta[primidx]
secdelta2 = delta[secidx2]
- secfac2 = float(secdelta2)/delta[primidx]
+ secfac2 = float(secdelta2) / delta[primidx]
# Initialize coord. These values could be anything, since
# they are overwritten.
coord = [0] * 3
# Loop through each crossection along the primary axis,
# from start to end.
endoffset = delta[primidx] + primsign
for primoffset in xrange(0, endoffset, primsign):
primloc = start[primidx] + primoffset
- secloc1 = int(start[secidx1] + primoffset*secfac1)
- secloc2 = int(start[secidx2] + primoffset*secfac2)
+ secloc1 = int(start[secidx1] + primoffset * secfac1)
+ secloc2 = int(start[secidx2] + primoffset * secfac2)
coord[primidx] = primloc
coord[secidx1] = secloc1
coord[secidx2] = secloc2
primdist = abs(delta[primidx])
- radius = endsize + (startsize-endsize) * abs(delta[primidx]
+ radius = endsize + (startsize - endsize) * abs(delta[primidx]
- primoffset) / primdist
self.cross_section(coord, radius, primidx, blockdata, world)
def make_foliage(self, world):
"""
Generate the foliage for the tree in world.
Also place lanterns.
"""
foliage_coords = self.foliage_cords
for coord in foliage_coords:
- self.foliage_cluster(coord,world)
+ self.foliage_cluster(coord, world)
for x, y, z in foliage_coords:
world.sync_set_block((x, y, z), blocks["log"].slot)
world.sync_set_metadata((x, y, z), self.species)
if LIGHTTREE == 1:
world.sync_set_block((x, y + 1, z), blocks["lightstone"].slot)
- elif LIGHTTREE in [2,4]:
+ elif LIGHTTREE in [2, 4]:
world.sync_set_block((x + 1, y, z), blocks["lightstone"].slot)
world.sync_set_block((x - 1, y, z), blocks["lightstone"].slot)
if LIGHTTREE == 4:
world.sync_set_block((x, y, z + 1), blocks["lightstone"].slot)
world.sync_set_block((x, y, z - 1), blocks["lightstone"].slot)
def make_branches(self, world):
"""Generate the branches and enter them in world.
"""
treeposition = self.pos
height = self.height
topy = treeposition[1] + int(self.trunkheight + 0.5)
# endrad is the base radius of the branches at the trunk
- endrad = max(self.trunkradius * (1 - self.trunkheight/height), 1)
+ endrad = max(self.trunkradius * (1 - self.trunkheight / height), 1)
for coord in self.foliage_cords:
- dist = (sqrt(float(coord[0]-treeposition[0])**2 +
- float(coord[2]-treeposition[2])**2))
- ydist = coord[1]-treeposition[1]
+ # XXX here too
+ dist = (sqrt(float(coord[0] - treeposition[0]) ** 2 +
+ float(coord[2] - treeposition[2]) ** 2))
+ ydist = coord[1] - treeposition[1]
# value is a magic number that weights the probability
# of generating branches properly so that
# you get enough on small trees, but not too many
# on larger trees.
# Very difficult to get right... do not touch!
- value = (self.branchdensity * 220 * height)/((ydist + dist) ** 3)
+ value = (self.branchdensity * 220 * height) / ((ydist + dist) ** 3)
if value < random():
continue
posy = coord[1]
slope = self.branchslope + (0.5 - random()) * .16
- if coord[1] - dist*slope > topy:
+ if coord[1] - dist * slope > topy:
# Another random rejection, for branches between
# the top of the trunk and the crown of the tree
threshold = 1 / height
if random() < threshold:
continue
branchy = topy
basesize = endrad
else:
- branchy = posy-dist*slope
- basesize = (endrad + (self.trunkradius-endrad) *
+ branchy = posy - dist * slope
+ basesize = (endrad + (self.trunkradius - endrad) *
(topy - branchy) / self.trunkheight)
startsize = (basesize * (1 + random()) * PHI *
- (dist/height)**PHI)
+ (dist/height) ** PHI)
if startsize < 1.0:
startsize = 1.0
rndr = sqrt(random()) * basesize * PHI
rndang = random() * 2 * pi
- rndx = int(rndr*sin(rndang) + 0.5)
- rndz = int(rndr*cos(rndang) + 0.5)
- startcoord = [treeposition[0]+rndx,
+ rndx = int(rndr * sin(rndang) + 0.5)
+ rndz = int(rndr * cos(rndang) + 0.5)
+ startcoord = [treeposition[0] + rndx,
int(branchy),
- treeposition[2]+rndz]
+ treeposition[2] + rndz]
endsize = 1.0
self.taperedcylinder(startcoord, coord, startsize, endsize, world,
- blocks["log"].slot)
+ blocks["log"].slot)
def make_trunk(self, world):
"""
Make the trunk, roots, buttresses, branches, etc.
"""
# In this method, x and z are the position of the trunk.
x, starty, z = self.pos
midy = starty + int(self.trunkheight / (PHI + 1))
topy = starty + int(self.trunkheight + 0.5)
end_size_factor = self.trunkheight / self.height
endrad = max(self.trunkradius * (1 - end_size_factor), 1)
midrad = max(self.trunkradius * (1 - end_size_factor * .5), endrad)
# Make the lower and upper sections of the trunk.
- self.taperedcylinder([x,starty,z], [x,midy,z], self.trunkradius,
- midrad, world, blocks["log"].slot)
- self.taperedcylinder([x,midy,z], [x,topy,z], midrad, endrad, world,
- blocks["log"].slot)
+ self.taperedcylinder([x, starty, z], [x, midy, z], self.trunkradius,
+ midrad, world, blocks["log"].slot)
+ self.taperedcylinder([x, midy, z], [x, topy, z], midrad, endrad,
+ world, blocks["log"].slot)
# Make the branches.
self.make_branches(world)
def prepare(self, world):
"""
Initialize the internal values for the Tree object.
Primarily, sets up the foliage cluster locations.
"""
treeposition = self.pos
self.trunkradius = PHI * sqrt(self.height)
if self.trunkradius < 1:
self.trunkradius = 1
self.trunkheight = self.height
yend = int(treeposition[1] + self.height)
self.branchdensity = 1.0
foliage_coords = []
ystart = treeposition[1]
- num_of_clusters_per_y = int(1.5 + (self.height / 19)**2)
+ num_of_clusters_per_y = int(1.5 + (self.height / 19) ** 2)
if num_of_clusters_per_y < 1:
num_of_clusters_per_y = 1
# make sure we don't spend too much time off the top of the map
if yend > 255:
yend = 255
if ystart > 255:
ystart = 255
for y in xrange(yend, ystart, -1):
for i in xrange(num_of_clusters_per_y):
shapefac = self.shapefunc(y - ystart)
if shapefac is None:
continue
- r = (sqrt(random()) + .328)*shapefac
+ r = (sqrt(random()) + .328) * shapefac
- theta = random()*2*pi
- x = int(r*sin(theta)) + treeposition[0]
- z = int(r*cos(theta)) + treeposition[2]
+ theta = random() * 2 * pi
+ x = int(r * sin(theta)) + treeposition[0]
+ z = int(r * cos(theta)) + treeposition[2]
- foliage_coords += [[x,y,z]]
+ foliage_coords += [[x, y, z]]
self.foliage_cords = foliage_coords
class RoundTree(ProceduralTree):
"""
A rounded deciduous tree.
"""
species = BIRCH
branchslope = 1 / (PHI + 1)
foliage_shape = [2, 3, 3, 2.5, 1.6]
def prepare(self, world):
ProceduralTree.prepare(self, world)
self.trunkradius *= 0.8
self.trunkheight *= 0.7
def shapefunc(self, y):
twigs = ProceduralTree.shapefunc(self, y)
if twigs is not None:
return twigs
- if y < self.height * (.282 + .1 * sqrt(random())) :
+ if y < self.height * (.282 + .1 * sqrt(random())):
return None
radius = self.height / 2
adj = self.height / 2 - y
- if adj == 0 :
+ if adj == 0:
dist = radius
elif abs(adj) >= radius:
dist = 0
else:
- dist = sqrt((radius**2) - (adj**2))
+ dist = sqrt((radius ** 2) - (adj ** 2))
dist *= PHI
return dist
class ConeTree(ProceduralTree):
"""
A conifer.
"""
species = PINE
branchslope = 0.15
foliage_shape = [3, 2.6, 2, 1]
def prepare(self, world):
ProceduralTree.prepare(self, world)
self.trunkradius *= 0.5
def shapefunc(self, y):
twigs = ProceduralTree.shapefunc(self, y)
if twigs is not None:
return twigs
if y < self.height * (.25 + .05 * sqrt(random())):
return None
# Radius.
return max((self.height - y) / (PHI + 1), 0)
class RainforestTree(ProceduralTree):
"""
A big rainforest tree.
"""
species = JUNGLE
branchslope = 1
foliage_shape = [3.4, 2.6]
def prepare(self, world):
# XXX play with these numbers until jungles look right
self.height = randint(10, 20)
self.trunkradius = randint(5, 15)
ProceduralTree.prepare(self, world)
self.trunkradius /= PHI + 1
self.trunkheight *= .9
def shapefunc(self, y):
# Bottom 4/5 of the tree is probably branch-free.
if y < self.height * 0.8:
- twigs = ProceduralTree.shapefunc(self,y)
+ twigs = ProceduralTree.shapefunc(self, y)
if twigs is not None and random() < 0.07:
return twigs
return None
else:
width = self.height * 1 / (IPHI + 1)
topdist = (self.height - y) / (self.height * 0.2)
dist = width * (PHI + topdist) * (PHI + random()) * 1 / (IPHI + 1)
return dist
class MangroveTree(RoundTree):
"""
A mangrove tree.
Like the round deciduous tree, but bigger, taller, and generally more
awesome.
"""
branchslope = 1
def prepare(self, world):
RoundTree.prepare(self, world)
self.trunkradius *= PHI
def shapefunc(self, y):
val = RoundTree.shapefunc(self, y)
if val is not None:
val *= IPHI
return val
def make_roots(self, rootbases, world):
"""generate the roots and enter them in world.
rootbases = [[x,z,base_radius], ...] and is the list of locations
the roots can originate from, and the size of that location.
"""
treeposition = self.pos
height = self.height
for coord in self.foliage_cords:
# First, set the threshold for randomly selecting this
# coordinate for root creation.
- dist = (sqrt(float(coord[0]-treeposition[0])**2 +
- float(coord[2]-treeposition[2])**2))
- ydist = coord[1]-treeposition[1]
- value = (self.branchdensity * 220 * height)/((ydist + dist) ** 3)
+ # XXX factor out into utility function
+ dist = (sqrt(float(coord[0] - treeposition[0]) ** 2 +
+ float(coord[2] - treeposition[2]) ** 2))
+ ydist = coord[1] - treeposition[1]
+ value = ((self.branchdensity * 220 * height) /
+ ((ydist + dist) ** 3))
# Randomly skip roots, based on the above threshold
if value < random():
continue
# initialize the internal variables from a selection of
# starting locations.
rootbase = choice(rootbases)
rootx = rootbase[0]
rootz = rootbase[1]
rootbaseradius = rootbase[2]
# Offset the root origin location by a random amount
# (radially) from the starting location.
rndr = sqrt(random()) * rootbaseradius * PHI
- rndang = random()*2*pi
- rndx = int(rndr*sin(rndang) + 0.5)
- rndz = int(rndr*cos(rndang) + 0.5)
- rndy = int(random()*rootbaseradius*0.5)
- startcoord = [rootx+rndx,treeposition[1]+rndy,rootz+rndz]
+ rndang = random() * 2 * pi
+ rndx = int(rndr * sin(rndang) + 0.5)
+ rndz = int(rndr * cos(rndang) + 0.5)
+ rndy = int(random() * rootbaseradius * 0.5)
+ startcoord = [rootx + rndx, treeposition[1] + rndy, rootz + rndz]
# offset is the distance from the root base to the root tip.
- offset = [startcoord[i]-coord[i] for i in xrange(3)]
+ offset = [startcoord[i] - coord[i] for i in xrange(3)]
# If this is a mangrove tree, make the roots longer.
offset = [int(val * IPHI - 1.5) for val in offset]
- rootstartsize = (rootbaseradius * IPHI * abs(offset[1])/
+ rootstartsize = (rootbaseradius * IPHI * abs(offset[1]) /
(height * IPHI))
rootstartsize = max(rootstartsize, 1.0)
def make_trunk(self, world):
"""
Make the trunk, roots, buttresses, branches, etc.
"""
height = self.height
trunkheight = self.trunkheight
trunkradius = self.trunkradius
treeposition = self.pos
starty = treeposition[1]
- midy = treeposition[1]+int(trunkheight * 1 / (PHI + 1))
- topy = treeposition[1]+int(trunkheight + 0.5)
+ midy = treeposition[1] + int(trunkheight * 1 / (PHI + 1))
+ topy = treeposition[1] + int(trunkheight + 0.5)
# In this method, x and z are the position of the trunk.
x = treeposition[0]
z = treeposition[2]
- end_size_factor = trunkheight/height
+ end_size_factor = trunkheight / height
endrad = max(trunkradius * (1 - end_size_factor), 1)
midrad = max(trunkradius * (1 - end_size_factor * .5), endrad)
# The start radius of the trunk should be a little smaller if we
# are using root buttresses.
startrad = trunkradius * .8
# rootbases is used later in self.make_roots(...) as
# starting locations for the roots.
- rootbases = [[x,z,startrad]]
+ rootbases = [[x, z, startrad]]
buttress_radius = trunkradius * 0.382
# posradius is how far the root buttresses should be offset
# from the trunk.
posradius = trunkradius
# In mangroves, the root buttresses are much more extended.
posradius = posradius * (IPHI + 1)
num_of_buttresses = int(sqrt(trunkradius) + 3.5)
for i in xrange(num_of_buttresses):
- rndang = random()*2*pi
- thisposradius = posradius * (0.9 + random()*.2)
+ rndang = random() * 2 * pi
+ thisposradius = posradius * (0.9 + random() * .2)
# thisx and thisz are the x and z position for the base of
# the root buttress.
thisx = x + int(thisposradius * sin(rndang))
thisz = z + int(thisposradius * cos(rndang))
# thisbuttressradius is the radius of the buttress.
# Currently, root buttresses do not taper.
- thisbuttressradius = max(buttress_radius * (PHI + random()),
- 1)
+ thisbuttressradius = max(buttress_radius * (PHI + random()), 1)
# Make the root buttress.
self.taperedcylinder([thisx, starty, thisz], [x, midy, z],
- thisbuttressradius, thisbuttressradius, world,
- blocks["log"].slot)
+ thisbuttressradius, thisbuttressradius,
+ world, blocks["log"].slot)
# Add this root buttress as a possible location at
# which roots can spawn.
- rootbases += [[thisx,thisz,thisbuttressradius]]
+ rootbases += [[thisx, thisz, thisbuttressradius]]
# Make the lower and upper sections of the trunk.
- self.taperedcylinder([x,starty,z], [x,midy,z], startrad, midrad,
- world, blocks["log"].slot)
- self.taperedcylinder([x,midy,z], [x,topy,z], midrad, endrad, world,
- blocks["log"].slot)
+ self.taperedcylinder([x, starty, z], [x, midy, z], startrad, midrad,
+ world, blocks["log"].slot)
+ self.taperedcylinder([x, midy, z], [x, topy, z], midrad, endrad,
+ world, blocks["log"].slot)
# Make the branches.
self.make_branches(world)
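`taperedcylinder` above steps along the primary axis and linearly interpolates each cross-section's radius between `startsize` and `endsize`. The interpolation on its own looks like this (a Python 3 sketch; the function name is illustrative, not part of the module):

```python
def tapered_radius(startsize, endsize, step, total_steps):
    """Cross-section radius `step` slices from the start of a tapered cylinder."""
    return endsize + (startsize - endsize) * (total_steps - step) / total_steps

# A trunk tapering from radius 4.0 at the base to 1.0 at the top over 10 slices:
radii = [tapered_radius(4.0, 1.0, step, 10) for step in range(11)]
print(radii[0], radii[5], radii[10])  # 4.0 2.5 1.0
```

This mirrors `radius = endsize + (startsize - endsize) * abs(delta[primidx] - primoffset) / primdist` in the diff; the off-axis drift of the cylinder is handled separately via `secfac1` and `secfac2`.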
|
bravoserver/bravo
|
1be6a20a06a015c4c84bc976ca38cd0a7c40d5fe
|
terrain/trees: Share species numbers and use names for them.
|
diff --git a/bravo/terrain/trees.py b/bravo/terrain/trees.py
index 54db5e0..549a641 100644
--- a/bravo/terrain/trees.py
+++ b/bravo/terrain/trees.py
@@ -1,653 +1,661 @@
from __future__ import division
from itertools import product
from math import cos, pi, sin, sqrt
from random import choice, random, randint
from zope.interface import Interface, implements
from bravo.blocks import blocks
from bravo.chunk import CHUNK_HEIGHT
PHI = (sqrt(5) - 1) * 0.5
IPHI = (sqrt(5) + 1) * 0.5
"""
Phi and inverse phi constants.
"""
# add lights in the middle of foliage clusters
# for those huge trees that get so dark underneath
# or for enchanted forests that should glow and stuff
# Only works if SHAPE is "round" or "cone" or "procedural"
# 0 makes just normal trees
# 1 adds one light inside the foliage clusters for a bit of light
# 2 adds two lights around the base of each cluster, for more light
# 4 adds lights all around the base of each cluster for lots of light
LIGHTTREE = 0
def dist_to_mat(cord, vec, matidxlist, world, invert=False, limit=None):
"""
Find the distance from the given coordinates to any of a set of blocks
along a certain vector.
"""
curcord = [i + .5 for i in cord]
iterations = 0
on_map = True
while on_map:
x = int(curcord[0])
y = int(curcord[1])
z = int(curcord[2])
if not 0 <= y < CHUNK_HEIGHT:
break
block = world.sync_get_block((x, y, z))
if block in matidxlist and not invert:
break
elif block not in matidxlist and invert:
break
else:
curcord = [curcord[i] + vec[i] for i in range(3)]
iterations += 1
if limit and iterations > limit:
break
return iterations
class ITree(Interface):
"""
An ideal Platonic tree.
Trees usually are made of some sort of wood, and are adorned with leaves.
These trees also may have lanterns hanging in their branches.
"""
def prepare(world):
"""
Do any post-__init__() setup.
"""
def make_trunk(world):
"""
Write a trunk to the world.
"""
def make_foliage(world):
"""
Write foliage (leaves, lanterns) to the world.
"""
+OAK, PINE, BIRCH, JUNGLE = range(4)
+
+
class Tree(object):
"""
Set up the interface for tree objects. Designed for subclassing.
"""
implements(ITree)
+ species = OAK
+
def __init__(self, pos, height=None):
if height is None:
self.height = randint(4, 7)
else:
self.height = height
- self.species = 0 # default to oak
self.pos = pos
def prepare(self, world):
pass
def make_trunk(self, world):
pass
def make_foliage(self, world):
pass
class StickTree(Tree):
"""
A large stick or log.
Subclass this to build trees which only require a single-log trunk.
"""
def make_trunk(self, world):
x, y, z = self.pos
for i in xrange(self.height):
world.sync_set_block((x, y, z), blocks["log"].slot)
world.sync_set_metadata((x, y, z), self.species)
y += 1
class NormalTree(StickTree):
"""
A Notchy tree.
This tree will be a single bulb of foliage above a single width trunk.
The shape is very similar to the default Minecraft tree.
"""
def make_foliage(self, world):
topy = self.pos[1] + self.height - 1
start = topy - 2
end = topy + 2
for y in xrange(start, end):
if y > start + 1:
rad = 1
else:
rad = 2
for xoff, zoff in product(xrange(-rad, rad + 1), repeat=2):
if (random() > PHI and abs(xoff) == abs(zoff) == rad or
xoff == zoff == 0):
continue
x = self.pos[0] + xoff
z = self.pos[2] + zoff
world.sync_set_block((x, y, z), blocks["leaves"].slot)
world.sync_set_metadata((x, y, z), self.species)
class BambooTree(StickTree):
"""
A bamboo-like tree.
Bamboo foliage is sparse and always adjacent to the trunk.
"""
def make_foliage(self, world):
start = self.pos[1]
end = start + self.height + 1
for y in xrange(start, end):
for i in (0, 1):
xoff = choice([-1, 1])
zoff = choice([-1, 1])
x = self.pos[0] + xoff
z = self.pos[2] + zoff
world.sync_set_block((x, y, z), blocks["leaves"].slot)
class PalmTree(StickTree):
"""
A traditional palm tree.
This tree has four tufts of foliage at the top of the trunk. No coconuts,
though.
"""
def make_foliage(self, world):
y = self.pos[1] + self.height
for xoff, zoff in product(xrange(-2, 3), repeat=2):
if abs(xoff) == abs(zoff):
x = self.pos[0] + xoff
z = self.pos[2] + zoff
world.sync_set_block((x, y, z), blocks["leaves"].slot)
class ProceduralTree(Tree):
"""
Base class for larger, complex, procedurally generated trees.
This tree type has roots, a trunk, and branches, all of varying width, and
many foliage clusters.
This class needs to be subclassed to be useful. Specifically,
foliage_shape must be set. Subclass 'prepare' and 'shapefunc' to make
different shaped trees.
"""
def cross_section(self, center, radius, diraxis, matidx, world):
"""Create a round section of type matidx in world.
Passed values:
center = [x,y,z] for the coordinates of the center block
radius = <number> as the radius of the section. May be a float or int.
diraxis: The list index for the axis to make the section
perpendicular to. 0 indicates the x axis, 1 the y, 2 the z. The
section will extend along the other two axes.
matidx = <int> the block type to build the section out of.
world = the world object to write the section into. Block metadata
is taken from self.species.
"""
rad = int(radius + PHI)
if rad <= 0:
return None
secidx1 = (diraxis - 1) % 3
secidx2 = (1 + diraxis) % 3
coord = [0] * 3
for off1, off2 in product(xrange(-rad, rad + 1), repeat=2):
thisdist = sqrt((abs(off1) + .5)**2 + (abs(off2) + .5)**2)
if thisdist > radius:
continue
pri = center[diraxis]
sec1 = center[secidx1] + off1
sec2 = center[secidx2] + off2
coord[diraxis] = pri
coord[secidx1] = sec1
coord[secidx2] = sec2
world.sync_set_block(coord, matidx)
world.sync_set_metadata(coord, self.species)
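The offset filter in cross_section keeps a cell only when the distance to its centre is within the radius; a standalone sketch (Python 3; `disc_offsets` is an illustrative helper, not part of the module) of just that disc test:

```python
from itertools import product
from math import sqrt

PHI = (sqrt(5) - 1) * 0.5

def disc_offsets(radius):
    # Mirror of the (off1, off2) filter in cross_section: a cell is kept
    # when the distance to its centre does not exceed the radius.
    rad = int(radius + PHI)
    return [(off1, off2)
            for off1, off2 in product(range(-rad, rad + 1), repeat=2)
            if sqrt((abs(off1) + .5) ** 2 + (abs(off2) + .5) ** 2) <= radius]

# A radius-2 disc keeps the centre cell plus its four edge neighbours.
print(sorted(disc_offsets(2)))  # -> [(-1, 0), (0, -1), (0, 0), (0, 1), (1, 0)]
```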
def shapefunc(self, y):
"""
Obtain a radius for the given height.
Subclass this method to customize tree design.
If None is returned, no foliage cluster will be created.
:returns: radius, or None
"""
if random() < 100 / ((self.height)**2) and y < self.trunkheight:
return self.height * .12
return None
def foliage_cluster(self, center, world):
"""
Generate a round cluster of foliage at the location center.
The shape of the cluster is defined by the list self.foliage_shape.
This list must be set in a subclass of ProceduralTree.
"""
x = center[0]
y = center[1]
z = center[2]
for i in self.foliage_shape:
self.cross_section([x, y, z], i, 1, blocks["leaves"].slot, world)
y += 1
def taperedcylinder(self, start, end, startsize, endsize, world, blockdata):
"""
Create a tapered cylinder in world.
start and end are the beginning and ending coordinates of form [x,y,z].
startsize and endsize are the beginning and ending radius.
The material of the cylinder is given by blockdata.
"""
# delta is the coordinate vector for the difference between
# start and end.
delta = [int(e - s) for e, s in zip(end, start)]
# primidx is the index (0,1,or 2 for x,y,z) for the coordinate
# which has the largest overall delta.
maxdist = max(delta, key=abs)
if maxdist == 0:
return None
primidx = delta.index(maxdist)
# secidx1 and secidx2 are the remaining indices out of [0,1,2].
secidx1 = (primidx - 1) % 3
secidx2 = (1 + primidx) % 3
# primsign is the digit 1 or -1 depending on whether the limb is headed
# along the positive or negative primidx axis.
primsign = cmp(delta[primidx], 0) or 1
# secdelta1 and ...2 are the amount the associated values change
# for every step along the prime axis.
secdelta1 = delta[secidx1]
secfac1 = float(secdelta1)/delta[primidx]
secdelta2 = delta[secidx2]
secfac2 = float(secdelta2)/delta[primidx]
# Initialize coord. These values could be anything, since
# they are overwritten.
coord = [0] * 3
# Loop through each cross-section along the primary axis,
# from start to end.
endoffset = delta[primidx] + primsign
for primoffset in xrange(0, endoffset, primsign):
primloc = start[primidx] + primoffset
secloc1 = int(start[secidx1] + primoffset*secfac1)
secloc2 = int(start[secidx2] + primoffset*secfac2)
coord[primidx] = primloc
coord[secidx1] = secloc1
coord[secidx2] = secloc2
primdist = abs(delta[primidx])
radius = endsize + (startsize-endsize) * abs(delta[primidx]
- primoffset) / primdist
self.cross_section(coord, radius, primidx, blockdata, world)
def make_foliage(self, world):
"""
Generate the foliage for the tree in world.
Also place lanterns.
"""
foliage_coords = self.foliage_cords
for coord in foliage_coords:
self.foliage_cluster(coord,world)
for x, y, z in foliage_coords:
world.sync_set_block((x, y, z), blocks["log"].slot)
world.sync_set_metadata((x, y, z), self.species)
if LIGHTTREE == 1:
world.sync_set_block((x, y + 1, z), blocks["lightstone"].slot)
elif LIGHTTREE in [2,4]:
world.sync_set_block((x + 1, y, z), blocks["lightstone"].slot)
world.sync_set_block((x - 1, y, z), blocks["lightstone"].slot)
if LIGHTTREE == 4:
world.sync_set_block((x, y, z + 1), blocks["lightstone"].slot)
world.sync_set_block((x, y, z - 1), blocks["lightstone"].slot)
def make_branches(self, world):
"""Generate the branches and enter them in world.
"""
treeposition = self.pos
height = self.height
topy = treeposition[1] + int(self.trunkheight + 0.5)
# endrad is the base radius of the branches at the trunk
endrad = max(self.trunkradius * (1 - self.trunkheight/height), 1)
for coord in self.foliage_cords:
dist = (sqrt(float(coord[0]-treeposition[0])**2 +
float(coord[2]-treeposition[2])**2))
ydist = coord[1]-treeposition[1]
# value is a magic number that weights the probability
# of generating branches properly so that
# you get enough on small trees, but not too many
# on larger trees.
# Very difficult to get right... do not touch!
value = (self.branchdensity * 220 * height)/((ydist + dist) ** 3)
if value < random():
continue
posy = coord[1]
slope = self.branchslope + (0.5 - random()) * .16
if coord[1] - dist*slope > topy:
# Another random rejection, for branches between
# the top of the trunk and the crown of the tree
threshold = 1 / height
if random() < threshold:
continue
branchy = topy
basesize = endrad
else:
branchy = posy-dist*slope
basesize = (endrad + (self.trunkradius-endrad) *
(topy - branchy) / self.trunkheight)
startsize = (basesize * (1 + random()) * PHI *
(dist/height)**PHI)
if startsize < 1.0:
startsize = 1.0
rndr = sqrt(random()) * basesize * PHI
rndang = random() * 2 * pi
rndx = int(rndr*sin(rndang) + 0.5)
rndz = int(rndr*cos(rndang) + 0.5)
startcoord = [treeposition[0]+rndx,
int(branchy),
treeposition[2]+rndz]
endsize = 1.0
self.taperedcylinder(startcoord, coord, startsize, endsize, world,
blocks["log"].slot)
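The "magic number" weight in make_branches can be inspected in isolation; the sketch below (Python 3; `branch_weight` is an illustrative extraction of the formula, not a module function) shows why near foliage almost always earns a branch while distant foliage rarely does:

```python
def branch_weight(height, ydist, dist, branchdensity=1.0):
    # The branch-rejection weight from make_branches: a branch is grown
    # when this value beats a uniform random draw in [0, 1).
    return (branchdensity * 220 * height) / ((ydist + dist) ** 3)

# Foliage close to the trunk of a height-20 tree: weight well above 1,
# so the branch is certain; far foliage drops below a few percent.
print(branch_weight(20, 5, 3))    # ~8.6
print(branch_weight(20, 30, 20))  # ~0.035
```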
def make_trunk(self, world):
"""
Make the trunk, roots, buttresses, branches, etc.
"""
# In this method, x and z are the position of the trunk.
x, starty, z = self.pos
midy = starty + int(self.trunkheight / (PHI + 1))
topy = starty + int(self.trunkheight + 0.5)
end_size_factor = self.trunkheight / self.height
endrad = max(self.trunkradius * (1 - end_size_factor), 1)
midrad = max(self.trunkradius * (1 - end_size_factor * .5), endrad)
# Make the lower and upper sections of the trunk.
self.taperedcylinder([x,starty,z], [x,midy,z], self.trunkradius,
midrad, world, blocks["log"].slot)
self.taperedcylinder([x,midy,z], [x,topy,z], midrad, endrad, world,
blocks["log"].slot)
# Make the branches.
self.make_branches(world)
def prepare(self, world):
"""
Initialize the internal values for the Tree object.
Primarily, sets up the foliage cluster locations.
"""
treeposition = self.pos
self.trunkradius = PHI * sqrt(self.height)
if self.trunkradius < 1:
self.trunkradius = 1
self.trunkheight = self.height
yend = int(treeposition[1] + self.height)
self.branchdensity = 1.0
foliage_coords = []
ystart = treeposition[1]
num_of_clusters_per_y = int(1.5 + (self.height / 19)**2)
if num_of_clusters_per_y < 1:
num_of_clusters_per_y = 1
# make sure we don't spend too much time off the top of the map
if yend > 255:
yend = 255
if ystart > 255:
ystart = 255
for y in xrange(yend, ystart, -1):
for i in xrange(num_of_clusters_per_y):
shapefac = self.shapefunc(y - ystart)
if shapefac is None:
continue
r = (sqrt(random()) + .328)*shapefac
theta = random()*2*pi
x = int(r*sin(theta)) + treeposition[0]
z = int(r*cos(theta)) + treeposition[2]
foliage_coords += [[x,y,z]]
self.foliage_cords = foliage_coords
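The cluster-count formula in prepare() scales quadratically with tree height; a standalone check (Python 3; `clusters_per_level` is an illustrative extraction of that expression):

```python
def clusters_per_level(height):
    # Foliage clusters per y-level, as computed in prepare(): constant
    # for short trees, growing quadratically for tall ones.
    return max(int(1.5 + (height / 19.0) ** 2), 1)

print([clusters_per_level(h) for h in (5, 19, 38)])  # -> [1, 2, 5]
```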
class RoundTree(ProceduralTree):
"""
A rounded deciduous tree.
"""
+ species = BIRCH
+
branchslope = 1 / (PHI + 1)
foliage_shape = [2, 3, 3, 2.5, 1.6]
def prepare(self, world):
- self.species = 2 # birch wood
ProceduralTree.prepare(self, world)
self.trunkradius *= 0.8
self.trunkheight *= 0.7
def shapefunc(self, y):
twigs = ProceduralTree.shapefunc(self, y)
if twigs is not None:
return twigs
if y < self.height * (.282 + .1 * sqrt(random())):
return None
radius = self.height / 2
adj = self.height / 2 - y
if adj == 0:
dist = radius
elif abs(adj) >= radius:
dist = 0
else:
dist = sqrt((radius**2) - (adj**2))
dist *= PHI
return dist
class ConeTree(ProceduralTree):
"""
A conifer.
"""
+ species = PINE
+
branchslope = 0.15
foliage_shape = [3, 2.6, 2, 1]
def prepare(self, world):
- self.species = 1 # pine wood
ProceduralTree.prepare(self, world)
self.trunkradius *= 0.5
def shapefunc(self, y):
twigs = ProceduralTree.shapefunc(self, y)
if twigs is not None:
return twigs
if y < self.height * (.25 + .05 * sqrt(random())):
return None
# Radius.
return max((self.height - y) / (PHI + 1), 0)
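ConeTree's foliage radius is a simple linear taper; a standalone sketch (Python 3; `cone_radius` is an illustrative extraction of the return expression above):

```python
from math import sqrt

PHI = (sqrt(5) - 1) * 0.5

def cone_radius(height, y):
    # ConeTree's foliage radius: a linear taper that reaches zero at the
    # crown; dividing by PHI + 1 (~1.618) narrows the cone overall.
    return max((height - y) / (PHI + 1), 0)

# A 16-block conifer narrows steadily and vanishes at the top.
print([round(cone_radius(16, y), 2) for y in (4, 8, 12, 16)])
```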
class RainforestTree(ProceduralTree):
"""
A big rainforest tree.
"""
+
+ species = JUNGLE
+
branchslope = 1
foliage_shape = [3.4, 2.6]
def prepare(self, world):
- self.species = 3 # jungle wood
- # TODO: play with these numbers until jungles look right
+ # XXX play with these numbers until jungles look right
self.height = randint(10, 20)
self.trunkradius = randint(5, 15)
ProceduralTree.prepare(self, world)
self.trunkradius /= PHI + 1
self.trunkheight *= .9
def shapefunc(self, y):
# Bottom 4/5 of the tree is probably branch-free.
if y < self.height * 0.8:
twigs = ProceduralTree.shapefunc(self,y)
if twigs is not None and random() < 0.07:
return twigs
return None
else:
width = self.height * 1 / (IPHI + 1)
topdist = (self.height - y) / (self.height * 0.2)
dist = width * (PHI + topdist) * (PHI + random()) * 1 / (IPHI + 1)
return dist
class MangroveTree(RoundTree):
"""
A mangrove tree.
Like the round deciduous tree, but bigger, taller, and generally more
awesome.
"""
branchslope = 1
def prepare(self, world):
RoundTree.prepare(self, world)
self.trunkradius *= PHI
def shapefunc(self, y):
val = RoundTree.shapefunc(self, y)
if val is not None:
val *= IPHI
return val
def make_roots(self, rootbases, world):
"""Generate the roots and enter them in world.
rootbases = [[x, z, base_radius], ...] is the list of locations
the roots can originate from, and the size of each location.
"""
treeposition = self.pos
height = self.height
for coord in self.foliage_cords:
# First, set the threshold for randomly selecting this
# coordinate for root creation.
dist = (sqrt(float(coord[0]-treeposition[0])**2 +
float(coord[2]-treeposition[2])**2))
ydist = coord[1]-treeposition[1]
value = (self.branchdensity * 220 * height)/((ydist + dist) ** 3)
# Randomly skip roots, based on the above threshold
if value < random():
continue
# initialize the internal variables from a selection of
# starting locations.
rootbase = choice(rootbases)
rootx = rootbase[0]
rootz = rootbase[1]
rootbaseradius = rootbase[2]
# Offset the root origin location by a random amount
# (radially) from the starting location.
rndr = sqrt(random()) * rootbaseradius * PHI
rndang = random()*2*pi
rndx = int(rndr*sin(rndang) + 0.5)
rndz = int(rndr*cos(rndang) + 0.5)
rndy = int(random()*rootbaseradius*0.5)
startcoord = [rootx+rndx,treeposition[1]+rndy,rootz+rndz]
# offset is the distance from the root base to the root tip.
offset = [startcoord[i]-coord[i] for i in xrange(3)]
# If this is a mangrove tree, make the roots longer.
offset = [int(val * IPHI - 1.5) for val in offset]
rootstartsize = (rootbaseradius * IPHI * abs(offset[1])/
(height * IPHI))
rootstartsize = max(rootstartsize, 1.0)
def make_trunk(self, world):
"""
Make the trunk, roots, buttresses, branches, etc.
"""
height = self.height
trunkheight = self.trunkheight
trunkradius = self.trunkradius
treeposition = self.pos
starty = treeposition[1]
midy = treeposition[1]+int(trunkheight * 1 / (PHI + 1))
topy = treeposition[1]+int(trunkheight + 0.5)
# In this method, x and z are the position of the trunk.
x = treeposition[0]
z = treeposition[2]
end_size_factor = trunkheight/height
endrad = max(trunkradius * (1 - end_size_factor), 1)
midrad = max(trunkradius * (1 - end_size_factor * .5), endrad)
# The start radius of the trunk should be a little smaller if we
# are using root buttresses.
startrad = trunkradius * .8
# rootbases is used later in self.make_roots(...) as
# starting locations for the roots.
rootbases = [[x,z,startrad]]
buttress_radius = trunkradius * 0.382
# posradius is how far the root buttresses should be offset
# from the trunk.
posradius = trunkradius
# In mangroves, the root buttresses are much more extended.
posradius = posradius * (IPHI + 1)
num_of_buttresses = int(sqrt(trunkradius) + 3.5)
for i in xrange(num_of_buttresses):
rndang = random()*2*pi
thisposradius = posradius * (0.9 + random()*.2)
# thisx and thisz are the x and z position for the base of
# the root buttress.
thisx = x + int(thisposradius * sin(rndang))
thisz = z + int(thisposradius * cos(rndang))
# thisbuttressradius is the radius of the buttress.
# Currently, root buttresses do not taper.
thisbuttressradius = max(buttress_radius * (PHI + random()),
1)
# Make the root buttress.
self.taperedcylinder([thisx, starty, thisz], [x, midy, z],
thisbuttressradius, thisbuttressradius, world,
blocks["log"].slot)
# Add this root buttress as a possible location at
# which roots can spawn.
rootbases += [[thisx,thisz,thisbuttressradius]]
# Make the lower and upper sections of the trunk.
self.taperedcylinder([x,starty,z], [x,midy,z], startrad, midrad,
world, blocks["log"].slot)
self.taperedcylinder([x,midy,z], [x,topy,z], midrad, endrad, world,
blocks["log"].slot)
# Make the branches.
self.make_branches(world)
|
bravoserver/bravo
|
9e0178aa4a7f3b316e00aa73b4df53a19d6c8a80
|
terrain/trees: Some PEP8.
|
diff --git a/bravo/terrain/trees.py b/bravo/terrain/trees.py
index b98ce83..54db5e0 100644
--- a/bravo/terrain/trees.py
+++ b/bravo/terrain/trees.py
@@ -1,643 +1,653 @@
from __future__ import division
from itertools import product
from math import cos, pi, sin, sqrt
from random import choice, random, randint
from zope.interface import Interface, implements
from bravo.blocks import blocks
from bravo.chunk import CHUNK_HEIGHT
PHI = (sqrt(5) - 1) * 0.5
IPHI = (sqrt(5) + 1) * 0.5
"""
Phi and inverse phi constants.
"""
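PHI and IPHI are the reciprocal golden ratio and the golden ratio; a quick standalone check (Python 3) of the identities the tree code leans on when it mixes `PHI + 1` and `IPHI`:

```python
from math import sqrt, isclose

PHI = (sqrt(5) - 1) * 0.5   # ~0.618, reciprocal golden ratio
IPHI = (sqrt(5) + 1) * 0.5  # ~1.618, the golden ratio itself

# Reciprocals of one another, and they differ by exactly one,
# so PHI + 1 == IPHI (the code uses both spellings interchangeably).
assert isclose(PHI * IPHI, 1.0)
assert isclose(IPHI - PHI, 1.0)
assert isclose(PHI + 1, IPHI)
```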
# add lights in the middle of foliage clusters
# for those huge trees that get so dark underneath
# or for enchanted forests that should glow and stuff
# Only works if SHAPE is "round" or "cone" or "procedural"
# 0 makes just normal trees
# 1 adds one light inside the foliage clusters for a bit of light
# 2 adds two lights around the base of each cluster, for more light
# 4 adds lights all around the base of each cluster for lots of light
LIGHTTREE = 0
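Each LIGHTTREE value maps to a fixed set of lantern positions per foliage cluster; a standalone sketch (Python 3; `lantern_offsets` is an illustrative helper, not part of the module) of the placement described in the comments above:

```python
def lantern_offsets(lighttree):
    # Block offsets, relative to a foliage-cluster log, at which
    # lightstone is placed for each LIGHTTREE setting.
    if lighttree == 1:
        return [(0, 1, 0)]
    offsets = []
    if lighttree in (2, 4):
        offsets += [(1, 0, 0), (-1, 0, 0)]
    if lighttree == 4:
        offsets += [(0, 0, 1), (0, 0, -1)]
    return offsets

# The setting's value doubles as the lantern count per cluster.
print([len(lantern_offsets(n)) for n in (0, 1, 2, 4)])  # -> [0, 1, 2, 4]
```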
+
def dist_to_mat(cord, vec, matidxlist, world, invert=False, limit=None):
"""
Find the distance from the given coordinates to any of a set of blocks
along a certain vector.
"""
curcord = [i + .5 for i in cord]
iterations = 0
while True:
x = int(curcord[0])
y = int(curcord[1])
z = int(curcord[2])
if not 0 <= y < CHUNK_HEIGHT:
break
block = world.sync_get_block((x, y, z))
if block in matidxlist and not invert:
break
elif block not in matidxlist and invert:
break
else:
curcord = [curcord[i] + vec[i] for i in range(3)]
iterations += 1
if limit and iterations > limit:
break
return iterations
+
class ITree(Interface):
"""
An ideal Platonic tree.
Trees usually are made of some sort of wood, and are adorned with leaves.
These trees also may have lanterns hanging in their branches.
"""
def prepare(world):
"""
Do any post-__init__() setup.
"""
def make_trunk(world):
"""
Write a trunk to the world.
"""
def make_foliage(world):
"""
Write foliage (leaves, lanterns) to the world.
"""
+
class Tree(object):
"""
Set up the interface for tree objects. Designed for subclassing.
"""
implements(ITree)
def __init__(self, pos, height=None):
if height is None:
self.height = randint(4, 7)
else:
self.height = height
self.species = 0 # default to oak
self.pos = pos
def prepare(self, world):
pass
def make_trunk(self, world):
pass
def make_foliage(self, world):
pass
+
class StickTree(Tree):
"""
A large stick or log.
Subclass this to build trees which only require a single-log trunk.
"""
def make_trunk(self, world):
x, y, z = self.pos
for i in xrange(self.height):
world.sync_set_block((x, y, z), blocks["log"].slot)
world.sync_set_metadata((x, y, z), self.species)
y += 1
+
class NormalTree(StickTree):
"""
A Notchy tree.
This tree will be a single bulb of foliage above a single width trunk.
The shape is very similar to the default Minecraft tree.
"""
def make_foliage(self, world):
topy = self.pos[1] + self.height - 1
start = topy - 2
end = topy + 2
for y in xrange(start, end):
if y > start + 1:
rad = 1
else:
rad = 2
for xoff, zoff in product(xrange(-rad, rad + 1), repeat=2):
if (random() > PHI and abs(xoff) == abs(zoff) == rad or
xoff == zoff == 0):
continue
x = self.pos[0] + xoff
z = self.pos[2] + zoff
world.sync_set_block((x, y, z), blocks["leaves"].slot)
world.sync_set_metadata((x, y, z), self.species)
+
class BambooTree(StickTree):
"""
A bamboo-like tree.
Bamboo foliage is sparse and always adjacent to the trunk.
"""
def make_foliage(self, world):
start = self.pos[1]
end = start + self.height + 1
for y in xrange(start, end):
for i in (0, 1):
xoff = choice([-1, 1])
zoff = choice([-1, 1])
x = self.pos[0] + xoff
z = self.pos[2] + zoff
world.sync_set_block((x, y, z), blocks["leaves"].slot)
+
class PalmTree(StickTree):
"""
A traditional palm tree.
This tree has four tufts of foliage at the top of the trunk. No coconuts,
though.
"""
def make_foliage(self, world):
y = self.pos[1] + self.height
for xoff, zoff in product(xrange(-2, 3), repeat=2):
if abs(xoff) == abs(zoff):
x = self.pos[0] + xoff
z = self.pos[2] + zoff
world.sync_set_block((x, y, z), blocks["leaves"].slot)
+
class ProceduralTree(Tree):
"""
Base class for larger, complex, procedurally generated trees.
This tree type has roots, a trunk, and branches, all of varying width, and
many foliage clusters.
This class needs to be subclassed to be useful. Specifically,
foliage_shape must be set. Subclass 'prepare' and 'shapefunc' to make
different shaped trees.
"""
def cross_section(self, center, radius, diraxis, matidx, world):
"""Create a round section of type matidx in world.
Passed values:
center = [x,y,z] for the coordinates of the center block
radius = <number> as the radius of the section. May be a float or int.
diraxis: The list index for the axis to make the section
perpendicular to. 0 indicates the x axis, 1 the y, 2 the z. The
section will extend along the other two axes.
matidx = <int> the block type to build the section out of.
world = the world object to write the section into. Block metadata
is taken from self.species.
"""
rad = int(radius + PHI)
if rad <= 0:
return None
secidx1 = (diraxis - 1) % 3
secidx2 = (1 + diraxis) % 3
coord = [0] * 3
for off1, off2 in product(xrange(-rad, rad + 1), repeat=2):
thisdist = sqrt((abs(off1) + .5)**2 + (abs(off2) + .5)**2)
if thisdist > radius:
continue
pri = center[diraxis]
sec1 = center[secidx1] + off1
sec2 = center[secidx2] + off2
coord[diraxis] = pri
coord[secidx1] = sec1
coord[secidx2] = sec2
world.sync_set_block(coord, matidx)
world.sync_set_metadata(coord, self.species)
def shapefunc(self, y):
"""
Obtain a radius for the given height.
Subclass this method to customize tree design.
If None is returned, no foliage cluster will be created.
:returns: radius, or None
"""
if random() < 100 / ((self.height)**2) and y < self.trunkheight:
return self.height * .12
return None
def foliage_cluster(self, center, world):
"""
Generate a round cluster of foliage at the location center.
The shape of the cluster is defined by the list self.foliage_shape.
This list must be set in a subclass of ProceduralTree.
"""
x = center[0]
y = center[1]
z = center[2]
for i in self.foliage_shape:
self.cross_section([x, y, z], i, 1, blocks["leaves"].slot, world)
y += 1
def taperedcylinder(self, start, end, startsize, endsize, world, blockdata):
"""
Create a tapered cylinder in world.
start and end are the beginning and ending coordinates of form [x,y,z].
startsize and endsize are the beginning and ending radius.
The material of the cylinder is given by blockdata.
"""
# delta is the coordinate vector for the difference between
# start and end.
delta = [int(e - s) for e, s in zip(end, start)]
# primidx is the index (0,1,or 2 for x,y,z) for the coordinate
# which has the largest overall delta.
maxdist = max(delta, key=abs)
if maxdist == 0:
return None
primidx = delta.index(maxdist)
# secidx1 and secidx2 are the remaining indices out of [0,1,2].
secidx1 = (primidx - 1) % 3
secidx2 = (1 + primidx) % 3
# primsign is the digit 1 or -1 depending on whether the limb is headed
# along the positive or negative primidx axis.
primsign = cmp(delta[primidx], 0) or 1
# secdelta1 and ...2 are the amount the associated values change
# for every step along the prime axis.
secdelta1 = delta[secidx1]
secfac1 = float(secdelta1)/delta[primidx]
secdelta2 = delta[secidx2]
secfac2 = float(secdelta2)/delta[primidx]
# Initialize coord. These values could be anything, since
# they are overwritten.
coord = [0] * 3
# Loop through each cross-section along the primary axis,
# from start to end.
endoffset = delta[primidx] + primsign
for primoffset in xrange(0, endoffset, primsign):
primloc = start[primidx] + primoffset
secloc1 = int(start[secidx1] + primoffset*secfac1)
secloc2 = int(start[secidx2] + primoffset*secfac2)
coord[primidx] = primloc
coord[secidx1] = secloc1
coord[secidx2] = secloc2
primdist = abs(delta[primidx])
radius = endsize + (startsize-endsize) * abs(delta[primidx]
- primoffset) / primdist
self.cross_section(coord, radius, primidx, blockdata, world)
def make_foliage(self, world):
"""
Generate the foliage for the tree in world.
Also place lanterns.
"""
foliage_coords = self.foliage_cords
for coord in foliage_coords:
self.foliage_cluster(coord,world)
for x, y, z in foliage_coords:
world.sync_set_block((x, y, z), blocks["log"].slot)
world.sync_set_metadata((x, y, z), self.species)
if LIGHTTREE == 1:
world.sync_set_block((x, y + 1, z), blocks["lightstone"].slot)
elif LIGHTTREE in [2,4]:
world.sync_set_block((x + 1, y, z), blocks["lightstone"].slot)
world.sync_set_block((x - 1, y, z), blocks["lightstone"].slot)
if LIGHTTREE == 4:
world.sync_set_block((x, y, z + 1), blocks["lightstone"].slot)
world.sync_set_block((x, y, z - 1), blocks["lightstone"].slot)
def make_branches(self, world):
"""Generate the branches and enter them in world.
"""
treeposition = self.pos
height = self.height
topy = treeposition[1] + int(self.trunkheight + 0.5)
# endrad is the base radius of the branches at the trunk
endrad = max(self.trunkradius * (1 - self.trunkheight/height), 1)
for coord in self.foliage_cords:
dist = (sqrt(float(coord[0]-treeposition[0])**2 +
float(coord[2]-treeposition[2])**2))
ydist = coord[1]-treeposition[1]
# value is a magic number that weights the probability
# of generating branches properly so that
# you get enough on small trees, but not too many
# on larger trees.
# Very difficult to get right... do not touch!
value = (self.branchdensity * 220 * height)/((ydist + dist) ** 3)
if value < random():
continue
posy = coord[1]
slope = self.branchslope + (0.5 - random()) * .16
if coord[1] - dist*slope > topy:
# Another random rejection, for branches between
# the top of the trunk and the crown of the tree
threshold = 1 / height
if random() < threshold:
continue
branchy = topy
basesize = endrad
else:
branchy = posy-dist*slope
basesize = (endrad + (self.trunkradius-endrad) *
(topy - branchy) / self.trunkheight)
startsize = (basesize * (1 + random()) * PHI *
(dist/height)**PHI)
if startsize < 1.0:
startsize = 1.0
rndr = sqrt(random()) * basesize * PHI
rndang = random() * 2 * pi
rndx = int(rndr*sin(rndang) + 0.5)
rndz = int(rndr*cos(rndang) + 0.5)
startcoord = [treeposition[0]+rndx,
int(branchy),
treeposition[2]+rndz]
endsize = 1.0
self.taperedcylinder(startcoord, coord, startsize, endsize, world,
blocks["log"].slot)
def make_trunk(self, world):
"""
Make the trunk, roots, buttresses, branches, etc.
"""
# In this method, x and z are the position of the trunk.
x, starty, z = self.pos
midy = starty + int(self.trunkheight / (PHI + 1))
topy = starty + int(self.trunkheight + 0.5)
end_size_factor = self.trunkheight / self.height
endrad = max(self.trunkradius * (1 - end_size_factor), 1)
midrad = max(self.trunkradius * (1 - end_size_factor * .5), endrad)
# Make the lower and upper sections of the trunk.
self.taperedcylinder([x,starty,z], [x,midy,z], self.trunkradius,
midrad, world, blocks["log"].slot)
self.taperedcylinder([x,midy,z], [x,topy,z], midrad, endrad, world,
blocks["log"].slot)
# Make the branches.
self.make_branches(world)
def prepare(self, world):
"""
Initialize the internal values for the Tree object.
Primarily, sets up the foliage cluster locations.
"""
treeposition = self.pos
self.trunkradius = PHI * sqrt(self.height)
if self.trunkradius < 1:
self.trunkradius = 1
self.trunkheight = self.height
yend = int(treeposition[1] + self.height)
self.branchdensity = 1.0
foliage_coords = []
ystart = treeposition[1]
num_of_clusters_per_y = int(1.5 + (self.height / 19)**2)
if num_of_clusters_per_y < 1:
num_of_clusters_per_y = 1
# make sure we don't spend too much time off the top of the map
if yend > 255:
yend = 255
if ystart > 255:
ystart = 255
for y in xrange(yend, ystart, -1):
for i in xrange(num_of_clusters_per_y):
shapefac = self.shapefunc(y - ystart)
if shapefac is None:
continue
r = (sqrt(random()) + .328)*shapefac
theta = random()*2*pi
x = int(r*sin(theta)) + treeposition[0]
z = int(r*cos(theta)) + treeposition[2]
foliage_coords += [[x,y,z]]
self.foliage_cords = foliage_coords
class RoundTree(ProceduralTree):
"""
A rounded deciduous tree.
"""
branchslope = 1 / (PHI + 1)
foliage_shape = [2, 3, 3, 2.5, 1.6]
def prepare(self, world):
self.species = 2 # birch wood
ProceduralTree.prepare(self, world)
self.trunkradius *= 0.8
self.trunkheight *= 0.7
def shapefunc(self, y):
twigs = ProceduralTree.shapefunc(self, y)
if twigs is not None:
return twigs
if y < self.height * (.282 + .1 * sqrt(random())):
return None
radius = self.height / 2
adj = self.height / 2 - y
if adj == 0:
dist = radius
elif abs(adj) >= radius:
dist = 0
else:
dist = sqrt((radius**2) - (adj**2))
dist *= PHI
return dist
class ConeTree(ProceduralTree):
"""
A conifer.
"""
branchslope = 0.15
foliage_shape = [3, 2.6, 2, 1]
def prepare(self, world):
self.species = 1 # pine wood
ProceduralTree.prepare(self, world)
self.trunkradius *= 0.5
def shapefunc(self, y):
twigs = ProceduralTree.shapefunc(self, y)
if twigs is not None:
return twigs
if y < self.height * (.25 + .05 * sqrt(random())):
return None
# Radius.
return max((self.height - y) / (PHI + 1), 0)
+
class RainforestTree(ProceduralTree):
"""
A big rainforest tree.
"""
branchslope = 1
foliage_shape = [3.4, 2.6]
def prepare(self, world):
self.species = 3 # jungle wood
# TODO: play with these numbers until jungles look right
self.height = randint(10, 20)
self.trunkradius = randint(5, 15)
ProceduralTree.prepare(self, world)
self.trunkradius /= PHI + 1
self.trunkheight *= .9
def shapefunc(self, y):
# Bottom 4/5 of the tree is probably branch-free.
if y < self.height * 0.8:
twigs = ProceduralTree.shapefunc(self,y)
if twigs is not None and random() < 0.07:
return twigs
return None
else:
width = self.height * 1 / (IPHI + 1)
topdist = (self.height - y) / (self.height * 0.2)
dist = width * (PHI + topdist) * (PHI + random()) * 1 / (IPHI + 1)
return dist
+
class MangroveTree(RoundTree):
"""
A mangrove tree.
Like the round deciduous tree, but bigger, taller, and generally more
awesome.
"""
branchslope = 1
def prepare(self, world):
RoundTree.prepare(self, world)
self.trunkradius *= PHI
def shapefunc(self, y):
val = RoundTree.shapefunc(self, y)
if val is not None:
val *= IPHI
return val
def make_roots(self, rootbases, world):
"""Generate the roots and enter them in world.
rootbases = [[x, z, base_radius], ...] is the list of locations
the roots can originate from, and the size of each location.
"""
treeposition = self.pos
height = self.height
for coord in self.foliage_cords:
# First, set the threshold for randomly selecting this
# coordinate for root creation.
dist = (sqrt(float(coord[0]-treeposition[0])**2 +
float(coord[2]-treeposition[2])**2))
ydist = coord[1]-treeposition[1]
value = (self.branchdensity * 220 * height)/((ydist + dist) ** 3)
# Randomly skip roots, based on the above threshold
if value < random():
continue
# initialize the internal variables from a selection of
# starting locations.
rootbase = choice(rootbases)
rootx = rootbase[0]
rootz = rootbase[1]
rootbaseradius = rootbase[2]
# Offset the root origin location by a random amount
# (radially) from the starting location.
rndr = sqrt(random()) * rootbaseradius * PHI
rndang = random()*2*pi
rndx = int(rndr*sin(rndang) + 0.5)
rndz = int(rndr*cos(rndang) + 0.5)
rndy = int(random()*rootbaseradius*0.5)
startcoord = [rootx+rndx,treeposition[1]+rndy,rootz+rndz]
# offset is the distance from the root base to the root tip.
offset = [startcoord[i]-coord[i] for i in xrange(3)]
# If this is a mangrove tree, make the roots longer.
offset = [int(val * IPHI - 1.5) for val in offset]
rootstartsize = (rootbaseradius * IPHI * abs(offset[1])/
(height * IPHI))
rootstartsize = max(rootstartsize, 1.0)
def make_trunk(self, world):
"""
Make the trunk, roots, buttresses, branches, etc.
"""
height = self.height
trunkheight = self.trunkheight
trunkradius = self.trunkradius
treeposition = self.pos
starty = treeposition[1]
midy = treeposition[1]+int(trunkheight * 1 / (PHI + 1))
topy = treeposition[1]+int(trunkheight + 0.5)
# In this method, x and z are the position of the trunk.
x = treeposition[0]
z = treeposition[2]
end_size_factor = trunkheight/height
endrad = max(trunkradius * (1 - end_size_factor), 1)
midrad = max(trunkradius * (1 - end_size_factor * .5), endrad)
# The start radius of the trunk should be a little smaller if we
# are using root buttresses.
startrad = trunkradius * .8
# rootbases is used later in self.make_roots(...) as
# starting locations for the roots.
rootbases = [[x,z,startrad]]
buttress_radius = trunkradius * 0.382
# posradius is how far the root buttresses should be offset
# from the trunk.
posradius = trunkradius
# In mangroves, the root buttresses are much more extended.
posradius = posradius * (IPHI + 1)
num_of_buttresses = int(sqrt(trunkradius) + 3.5)
for i in xrange(num_of_buttresses):
rndang = random()*2*pi
thisposradius = posradius * (0.9 + random()*.2)
# thisx and thisz are the x and z position for the base of
# the root buttress.
thisx = x + int(thisposradius * sin(rndang))
thisz = z + int(thisposradius * cos(rndang))
# thisbuttressradius is the radius of the buttress.
# Currently, root buttresses do not taper.
thisbuttressradius = max(buttress_radius * (PHI + random()),
1)
# Make the root buttress.
self.taperedcylinder([thisx, starty, thisz], [x, midy, z],
thisbuttressradius, thisbuttressradius, world,
blocks["log"].slot)
# Add this root buttress as a possible location at
# which roots can spawn.
rootbases += [[thisx,thisz,thisbuttressradius]]
# Make the lower and upper sections of the trunk.
self.taperedcylinder([x,starty,z], [x,midy,z], startrad, midrad,
world, blocks["log"].slot)
self.taperedcylinder([x,midy,z], [x,topy,z], midrad, endrad, world,
blocks["log"].slot)
# Make the branches.
self.make_branches(world)
|
bravoserver/bravo
|
0b968a590d93b523cbceae1d7f16ab85664d092f
|
plugins/window_hooks: Remove old print statements.
|
diff --git a/bravo/plugins/window_hooks.py b/bravo/plugins/window_hooks.py
index 8774c89..2c0a603 100644
--- a/bravo/plugins/window_hooks.py
+++ b/bravo/plugins/window_hooks.py
@@ -1,508 +1,506 @@
from twisted.internet.defer import inlineCallbacks, returnValue
from twisted.python import log
from zope.interface import implements
from bravo.beta.packets import make_packet
from bravo.blocks import blocks
from bravo.entity import Chest as ChestTile, Furnace as FurnaceTile
from bravo.ibravo import (IWindowOpenHook, IWindowClickHook, IWindowCloseHook,
IPreBuildHook, IDigHook)
from bravo.inventory.windows import (WorkbenchWindow, ChestWindow,
LargeChestWindow, FurnaceWindow)
from bravo.location import Location
from bravo.utilities.building import chestsAround
from bravo.utilities.coords import adjust_coords_for_face, split_coords
def drop_items(factory, location, items, y_offset = 0):
"""
Loop over items and drop all of them
:param location: Location() or tuple (x, y, z)
:param items: list of items
"""
# XXX why am I polymorphic? :T
if type(location) == Location:
x, y, z = location.x, location.y, location.z
else:
x, y, z = location
y += y_offset
coords = (int(x * 32) + 16, int(y * 32) + 16, int(z * 32) + 16)
for item in items:
if item is None:
continue
factory.give(coords, (item[0], item[1]), item[2])
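The coordinate expression above converts block coordinates into the protocol's fixed-point representation: 32 units per block, with the +16 offset landing the dropped item in the center of its block. A minimal sketch (the helper name is hypothetical):

```python
def to_pixel_center(x, y, z):
    # Convert block coordinates to fixed-point "pixel" coordinates:
    # 32 pixels per block, +16 to land in the center of the block.
    return (int(x * 32) + 16, int(y * 32) + 16, int(z * 32) + 16)

assert to_pixel_center(0, 0, 0) == (16, 16, 16)
assert to_pixel_center(1, 2, 3) == (48, 80, 112)
```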
def processClickMessage(factory, player, window, container):
# Clicked out of the window
if container.slot == 64537: # -999 encoded as an unsigned short
items = window.drop_selected(bool(container.button))
drop_items(factory, player.location.in_front_of(1), items, 1)
player.write_packet("window-token", wid=container.wid,
token=container.token, acknowledged=True)
return
# perform selection action
selected = window.select(container.slot, bool(container.button),
bool(container.shift))
if selected:
# The Notchian server does not send any packets here because both the
# server and the client use the same algorithm for inventory actions. I
# did my best to make Bravo's inventory behave the same way, but there
# is a chance some differences still exist, so we send the whole window
# content to the client to make sure it displays the inventory we have
# on the server.
packet = window.save_to_packet()
player.transport.write(packet)
# TODO: send package for 'item on cursor'.
equipped_slot = player.player.equipped + 36
# Inform other players about changes to this player's equipment.
if container.wid == 0 and (container.slot in range(5, 9) or
container.slot == equipped_slot):
# Currently equipped item changes.
if container.slot == equipped_slot:
item = player.player.inventory.holdables[player.player.equipped]
slot = 0
# Armor changes.
else:
item = player.player.inventory.armor[container.slot - 5]
# Order of slots is reversed in the equipment packet.
slot = 4 - (container.slot - 5)
if item is None:
primary, secondary = 65535, 0
else:
primary, secondary, count = item
packet = make_packet("entity-equipment",
eid=player.player.eid,
slot=slot,
primary=primary,
secondary=secondary
)
factory.broadcast_for_others(packet, player)
# If the window is a SharedWindow for a tile...
if window.coords is not None:
# ...and the window has dirty slots...
if len(window.dirty_slots):
# ...check if someone else...
for p in factory.protocols.itervalues():
if p is player:
continue
# ...has a window open for the same tile...
if len(p.windows) and p.windows[-1].coords == window.coords:
# ...and notify about changes...
packets = p.windows[-1].packets_for_dirty(window.dirty_slots)
p.transport.write(packets)
window.dirty_slots.clear()
# ...and mark the chunk dirty
bigx, smallx, bigz, smallz, y = window.coords
d = factory.world.request_chunk(bigx, bigz)
@d.addCallback
def mark_chunk_dirty(chunk):
chunk.dirty = True
return True
class Windows(object):
'''
Generic window hooks
NOTE: the ``player`` argument in these methods is a protocol, not the Player class!
'''
implements(IWindowClickHook, IWindowCloseHook)
def __init__(self, factory):
self.factory = factory
def close_hook(self, player, container):
"""
The ``player`` is a Player's protocol
The ``container`` is a 0x65 message
"""
if container.wid == 0:
return
if player.windows and player.windows[-1].wid == container.wid:
window = player.windows.pop()
items, packets = window.close()
# No need to send the packet, as the window is already closed on the
# client. Packets are only needed for the player's inventory.
drop_items(self.factory, player.location.in_front_of(1), items, 1)
else:
player.error("Couldn't close non-current window %d" % container.wid)
def click_hook(self, player, container):
"""
The ``player`` is a Player's protocol
The ``container`` is a 0x66 message
"""
if container.wid == 0:
# The player's inventory is a special case and is processed separately
return False
if player.windows and player.windows[-1].wid == container.wid:
window = player.windows[-1]
else:
player.error("Couldn't find window %d" % container.wid)
return False
processClickMessage(self.factory, player, window, container)
return True
name = "windows"
before = tuple()
after = ("inventory",)
class Inventory(object):
'''
Player's inventory hooks
'''
implements(IWindowClickHook, IWindowCloseHook)
def __init__(self, factory):
self.factory = factory
def close_hook(self, player, container):
"""
The ``player`` is a Player's protocol
The ``container`` is a 0x65 message
"""
if container.wid != 0:
# not inventory window
return
# NOTE: player is a protocol. Not Player class!
items, packets = player.inventory.close() # it's window from protocol
if packets:
player.transport.write(packets)
drop_items(self.factory, player.location.in_front_of(1), items, 1)
def click_hook(self, player, container):
"""
The ``player`` is a Player's protocol
The ``container`` is a 0x66 message
"""
if container.wid != 0:
# not inventory window
return False
processClickMessage(self.factory, player, player.inventory, container)
return True
name = "inventory"
before = tuple()
after = tuple()
class Workbench(object):
implements(IWindowOpenHook)
def __init__(self, factory):
pass
def open_hook(self, player, container, block):
"""
The ``player`` is a Player's protocol
The ``container`` is a 0x64 message
The ``block`` is the block we are trying to open
:returns: None or window object
"""
if block != blocks["workbench"].slot:
return None
window = WorkbenchWindow(player.wid, player.player.inventory)
player.windows.append(window)
return window
name = "workbench"
before = tuple()
after = tuple()
class Furnace(object):
implements(IWindowOpenHook, IWindowClickHook, IPreBuildHook, IDigHook)
def __init__(self, factory):
self.factory = factory
def get_furnace_tile(self, chunk, coords):
try:
furnace = chunk.tiles[coords]
if type(furnace) != FurnaceTile:
raise KeyError
except KeyError:
x, y, z = coords
x = chunk.x * 16 + x
z = chunk.z * 16 + z
log.msg("Furnace at (%d, %d, %d) has no tile, or the tile type mismatches" %
(x, y, z))
furnace = None
return furnace
@inlineCallbacks
def open_hook(self, player, container, block):
"""
The ``player`` is a Player's protocol
The ``container`` is a 0x64 message
The ``block`` is the block we are trying to open
:returns: None or window object
"""
if block not in (blocks["furnace"].slot, blocks["burning-furnace"].slot):
returnValue(None)
bigx, smallx, bigz, smallz = split_coords(container.x, container.z)
chunk = yield self.factory.world.request_chunk(bigx, bigz)
furnace = self.get_furnace_tile(chunk, (smallx, container.y, smallz))
if furnace is None:
returnValue(None)
coords = bigx, smallx, bigz, smallz, container.y
window = FurnaceWindow(player.wid, player.player.inventory,
furnace.inventory, coords)
player.windows.append(window)
returnValue(window)
def click_hook(self, player, container):
"""
The ``player`` is a Player's protocol
The ``container`` is a 0x66 message
"""
if container.wid == 0:
return # skip inventory window
elif player.windows:
window = player.windows[-1]
else:
# click message but no window... hmm...
return
if type(window) != FurnaceWindow:
return
# inform the furnace that its contents probably changed
bigx, x, bigz, z, y = window.coords
d = self.factory.world.request_chunk(bigx, bigz)
@d.addCallback
def on_change(chunk):
furnace = self.get_furnace_tile(chunk, (x, y, z))
if furnace is not None:
furnace.changed(self.factory, window.coords)
@inlineCallbacks
def pre_build_hook(self, player, builddata):
item, metadata, x, y, z, face = builddata
if item.slot != blocks["furnace"].slot:
returnValue((True, builddata, False))
x, y, z = adjust_coords_for_face((x, y, z), face)
bigx, smallx, bigz, smallz = split_coords(x, z)
# the furnace cannot be oriented up or down
if face == "-y" or face == "+y":
orientation = ('+x', '+z', '-x', '-z')[((int(player.location.yaw) \
- 45 + 360) % 360) / 90]
metadata = blocks["furnace"].orientation(orientation)
builddata = builddata._replace(metadata=metadata)
- print "fix metadata"
# Not much to do, just tell the chunk about this tile.
chunk = yield self.factory.world.request_chunk(bigx, bigz)
chunk.tiles[smallx, y, smallz] = FurnaceTile(smallx, y, smallz)
returnValue((True, builddata, False))
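The yaw lookup used in `pre_build_hook` maps the player's heading onto one of four cardinal facings by shifting 45 degrees and bucketing into 90-degree arcs. A self-contained sketch (note the code base is Python 2, where `/` on ints floors; `//` below makes that explicit so the sketch also runs on Python 3):

```python
def yaw_to_orientation(yaw):
    # Shift by 45 degrees so each facing owns a centered 90-degree arc,
    # normalize into [0, 360), then bucket into quadrants.
    return ('+x', '+z', '-x', '-z')[((int(yaw) - 45 + 360) % 360) // 90]

assert yaw_to_orientation(0) == '-z'
assert yaw_to_orientation(90) == '+x'
assert yaw_to_orientation(180) == '+z'
assert yaw_to_orientation(270) == '-x'
```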
def dig_hook(self, chunk, x, y, z, block):
# NOTE: x, y, z - coords in chunk
if block.slot not in (blocks["furnace"].slot, blocks["burning-furnace"].slot):
return
furnaces = self.factory.furnace_manager
coords = (x, y, z)
furnace = self.get_furnace_tile(chunk, coords)
if furnace is None:
return
# Inform FurnaceManager the furnace was removed
furnaces.remove((chunk.x, x, chunk.z, z, y))
# Block coordinates
x = chunk.x * 16 + x
z = chunk.z * 16 + z
furnace = furnace.inventory
drop_items(self.factory, (x, y, z),
furnace.crafted + furnace.crafting + furnace.fuel)
del(chunk.tiles[coords])
name = "furnace"
before = ("windows",) # plugins that come before this plugin
after = tuple()
class Chest(object):
implements(IWindowOpenHook, IPreBuildHook, IDigHook)
def __init__(self, factory):
self.factory = factory
def get_chest_tile(self, chunk, coords):
try:
chest = chunk.tiles[coords]
if type(chest) != ChestTile:
raise KeyError
except KeyError:
x, y, z = coords
x = chunk.x * 16 + x
z = chunk.z * 16 + z
log.msg("Chest at (%d, %d, %d) has no tile, or the tile type mismatches" %
(x, y, z))
- print chunk.tiles
chest = None
return chest
@inlineCallbacks
def open_hook(self, player, container, block):
"""
The ``player`` is a Player's protocol
The ``container`` is a 0x64 message
The ``block`` is the block we are trying to open
:returns: None or window object
"""
if block != blocks["chest"].slot:
returnValue(None)
bigx, smallx, bigz, smallz = split_coords(container.x, container.z)
chunk = yield self.factory.world.request_chunk(bigx, bigz)
chests_around = chestsAround(self.factory,
(container.x, container.y, container.z))
chests_around_num = len(chests_around)
if chests_around_num == 0: # small chest
chest = self.get_chest_tile(chunk, (smallx, container.y, smallz))
if chest is None:
returnValue(None)
coords = bigx, smallx, bigz, smallz, container.y
window = ChestWindow(player.wid, player.player.inventory,
chest.inventory, coords)
elif chests_around_num == 1: # large chest
# process second chest coordinates
x2, y2, z2 = chests_around[0]
bigx2, smallx2, bigz2, smallz2 = split_coords(x2, z2)
if bigx == bigx2 and bigz == bigz2:
# both chest blocks are in same chunk
chunk2 = chunk
else:
chunk2 = yield self.factory.world.request_chunk(bigx2, bigz2)
chest1 = self.get_chest_tile(chunk, (smallx, container.y, smallz))
chest2 = self.get_chest_tile(chunk2, (smallx2, container.y, smallz2))
if chest1 is None or chest2 is None:
returnValue(None)
c1 = bigx, smallx, bigz, smallz, container.y
c2 = bigx2, smallx2, bigz2, smallz2, container.y
# We shall properly order chest inventories
if c1 < c2:
window = LargeChestWindow(player.wid, player.player.inventory,
chest1.inventory, chest2.inventory, c1)
else:
window = LargeChestWindow(player.wid, player.player.inventory,
chest2.inventory, chest1.inventory, c2)
else:
log.msg("Chest at (%d, %d, %d) has three chests connected" %
(container.x, container.y, container.z))
returnValue(None)
player.windows.append(window)
returnValue(window)
@inlineCallbacks
def pre_build_hook(self, player, builddata):
item, metadata, x, y, z, face = builddata
if item.slot != blocks["chest"].slot:
returnValue((True, builddata, False))
x, y, z = adjust_coords_for_face((x, y, z), face)
bigx, smallx, bigz, smallz = split_coords(x, z)
# chest orientation according to the player's position
if face == "-y" or face == "+y":
orientation = ('+x', '+z', '-x', '-z')[((int(player.location.yaw) \
- 45 + 360) % 360) / 90]
else:
orientation = face
# Chests have some restrictions on building:
# you cannot connect more than two chests. (notchian)
ccs = chestsAround(self.factory, (x, y, z))
ccn = len(ccs)
if ccn > 1:
# cannot build three or more connected chests
returnValue((False, builddata, True))
chunk = yield self.factory.world.request_chunk(bigx, bigz)
if ccn == 0:
metadata = blocks["chest"].orientation(orientation)
elif ccn == 1:
# check that the to-be-connected chest is not already connected
n = len(chestsAround(self.factory, ccs[0]))
if n != 0:
returnValue((False, builddata, True))
# align both blocks correctly (since 1.8)
# get second block
x2, y2, z2 = ccs[0]
bigx2, smallx2, bigz2, smallz2 = split_coords(x2, z2)
# new chest's orientation axis according to the blocks' positions
pair = x - x2, z - z2
ornt = {(0, 1): "x", (0, -1): "x",
(1, 0): "z", (-1, 0): "z"}[pair]
# if the player faces another direction, fix it
if orientation[1] != ornt:
# same sign with proper orientation
# XXX Probably notchian logic is different here
# but this one works well enough
orientation = orientation[0] + ornt
metadata = blocks["chest"].orientation(orientation)
# update second block's metadata
if bigx == bigx2 and bigz == bigz2:
# both blocks are in same chunk
chunk2 = chunk
else:
chunk2 = yield self.factory.world.request_chunk(bigx2, bigz2)
chunk2.set_metadata((smallx2, y2, smallz2), metadata)
# Not much to do, just tell the chunk about this tile.
chunk.tiles[smallx, y, smallz] = ChestTile(smallx, y, smallz)
builddata = builddata._replace(metadata=metadata)
returnValue((True, builddata, False))
def dig_hook(self, chunk, x, y, z, block):
if block.slot != blocks["chest"].slot:
return
coords = (x, y, z)
chest = self.get_chest_tile(chunk, coords)
if chest is None:
return
# Block coordinates
x = chunk.x * 16 + x
z = chunk.z * 16 + z
chest = chest.inventory
drop_items(self.factory, (x, y, z), chest.storage)
del(chunk.tiles[coords])
name = "chest"
before = ()
after = tuple()
|
bravoserver/bravo
|
1717a8da2eca41ce06f7bcea6a3b25139ab7620b
|
And use new coord helpers.
|
diff --git a/bravo/chunk.py b/bravo/chunk.py
index 12e925b..3453547 100644
--- a/bravo/chunk.py
+++ b/bravo/chunk.py
@@ -1,630 +1,626 @@
from array import array
from functools import wraps
from itertools import product
from struct import pack
from warnings import warn
from bravo.blocks import blocks, glowing_blocks
from bravo.beta.packets import make_packet
from bravo.geometry.section import Section
from bravo.utilities.bits import pack_nibbles
+from bravo.utilities.coords import CHUNK_HEIGHT, XZ, iterchunk
from bravo.utilities.maths import clamp
-CHUNK_HEIGHT = 256
-"""
-The total height of chunks.
-"""
-
class ChunkWarning(Warning):
"""
Somebody did something inappropriate to this chunk, but it probably isn't
lethal, so the chunk is issuing a warning instead of an exception.
"""
def check_bounds(f):
"""
Decorate a function or method to have its first positional argument be
treated as an (x, y, z) tuple which must fit inside chunk boundaries of
16, CHUNK_HEIGHT, and 16, respectively.
A warning will be raised if the bounds check fails.
"""
@wraps(f)
def deco(chunk, coords, *args, **kwargs):
x, y, z = coords
# Coordinates were out-of-bounds; warn and run away.
if not (0 <= x < 16 and 0 <= z < 16 and 0 <= y < CHUNK_HEIGHT):
warn("Coordinates %s are OOB in %s() of %s, ignoring call"
% (coords, f.func_name, chunk), ChunkWarning, stacklevel=2)
# A concession towards where this decorator will be used. The
# value is likely to be discarded either way, but if the value is
# used, we shouldn't horribly die because of None/0 mismatch.
return 0
return f(chunk, coords, *args, **kwargs)
return deco
def ci(x, y, z):
"""
Turn an (x, y, z) tuple into a chunk index.
This is really a macro and not a function, but Python doesn't know the
difference. Hopefully this is faster on PyPy than on CPython.
"""
return (x * 16 + z) * CHUNK_HEIGHT + y
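The linearization in `ci` stores a 16 x CHUNK_HEIGHT x 16 chunk in one flat array, x-major, then z, with y varying fastest. A quick round-trip sketch; the inverse helper is illustrative and not part of the code base:

```python
CHUNK_HEIGHT = 256

def ci(x, y, z):
    # Flatten (x, y, z) into a single array index.
    return (x * 16 + z) * CHUNK_HEIGHT + y

def ci_inverse(index):
    # Recover (x, y, z) from a flat chunk index.
    xz, y = divmod(index, CHUNK_HEIGHT)
    x, z = divmod(xz, 16)
    return x, y, z

assert ci(0, 0, 0) == 0
assert ci(1, 0, 0) == 16 * CHUNK_HEIGHT
assert ci_inverse(ci(5, 70, 9)) == (5, 70, 9)
```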
def segment_array(a):
"""
Chop up a chunk-sized array into sixteen components.
The chops are done in order to produce the smaller chunks preferred by
modern clients.
"""
l = [array(a.typecode) for chaff in range(16)]
index = 0
for i in range(0, len(a), 16):
l[index].extend(a[i:i + 16])
index = (index + 1) % 16
return l
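`segment_array` deals out consecutive 16-element runs round-robin into sixteen arrays; because `ci` stores each column's CHUNK_HEIGHT y-values contiguously, each output array ends up holding one vertical section of every column. A small sketch of the same logic on a single 256-value column:

```python
from array import array

def segment_array(a):
    # Deal consecutive 16-element runs round-robin into 16 arrays.
    l = [array(a.typecode) for _ in range(16)]
    index = 0
    for i in range(0, len(a), 16):
        l[index].extend(a[i:i + 16])
        index = (index + 1) % 16
    return l

a = array("B", range(256))  # one full column of 256 y-values
parts = segment_array(a)
assert len(parts) == 16
assert list(parts[0]) == list(range(16))      # y = 0..15
assert list(parts[1]) == list(range(16, 32))  # y = 16..31
```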
def make_glows():
"""
Set up glow tables.
These tables provide glow maps for illuminated points.
"""
glow = [None] * 16
for i in range(16):
dim = 2 * i + 1
glow[i] = array("b", [0] * (dim**3))
for x, y, z in product(xrange(dim), repeat=3):
distance = abs(x - i) + abs(y - i) + abs(z - i)
glow[i][(x * dim + y) * dim + z] = i + 1 - distance
glow[i] = array("B", [clamp(x, 0, 15) for x in glow[i]])
return glow
glow = make_glows()
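The glow tables above encode a light level that falls off linearly with Manhattan distance from the source, clamped into the valid 0..15 range; a standalone sketch of that falloff rule (the function name is illustrative):

```python
def manhattan_light(strength, dx, dy, dz):
    # Light falls off by one level per step of Manhattan distance from
    # the source, clamped into the valid 0..15 light range.
    distance = abs(dx) + abs(dy) + abs(dz)
    return max(0, min(15, strength + 1 - distance))

assert manhattan_light(14, 0, 0, 0) == 15  # at the source itself
assert manhattan_light(14, 3, 0, 0) == 12  # three blocks away
assert manhattan_light(14, 10, 10, 10) == 0  # far away: fully dark
```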
def composite_glow(target, strength, x, y, z):
"""
Composite a light source onto a lightmap.
The exact operation is not quite unlike an add.
"""
ambient = glow[strength]
xbound, zbound, ybound = 16, CHUNK_HEIGHT, 16
sx = x - strength
sy = y - strength
sz = z - strength
ex = x + strength
ey = y + strength
ez = z + strength
si, sj, sk = 0, 0, 0
ei, ej, ek = strength * 2, strength * 2, strength * 2
if sx < 0:
sx, si = 0, -sx
if sy < 0:
sy, sj = 0, -sy
if sz < 0:
sz, sk = 0, -sz
if ex > xbound:
ex, ei = xbound, ei - ex + xbound
if ey > ybound:
ey, ej = ybound, ej - ey + ybound
if ez > zbound:
ez, ek = zbound, ek - ez + zbound
adim = 2 * strength + 1
# Composite! Apologies for the loops.
for (tx, ax) in zip(range(sx, ex), range(si, ei)):
for (tz, az) in zip(range(sz, ez), range(sk, ek)):
for (ty, ay) in zip(range(sy, ey), range(sj, ej)):
ambient_index = (ax * adim + az) * adim + ay
target[ci(tx, ty, tz)] += ambient[ambient_index]
def iter_neighbors(coords):
"""
Iterate over the chunk-local coordinates surrounding the given
coordinates.
All coordinates are chunk-local.
Coordinates which are not valid chunk-local coordinates will not be
generated.
"""
x, z, y = coords
for dx, dz, dy in (
(1, 0, 0),
(-1, 0, 0),
(0, 1, 0),
(0, -1, 0),
(0, 0, 1),
(0, 0, -1)):
nx = x + dx
nz = z + dz
ny = y + dy
if not (0 <= nx < 16 and
0 <= nz < 16 and
0 <= ny < CHUNK_HEIGHT):
continue
yield nx, nz, ny
def neighboring_light(glow, block):
"""
Calculate the amount of light that should be shone on a block.
``glow`` is the brightest neighboring light. ``block`` is the slot of the
block being illuminated.
The return value is always a valid light value.
"""
return clamp(glow - blocks[block].dim, 0, 15)
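`neighboring_light` subtracts the block's dimming factor from the brightest neighboring light and clamps the result into the 0..15 range. A self-contained sketch with a hypothetical dim table standing in for `blocks[block].dim`:

```python
def clamp(x, low, high):
    # Constrain x to the inclusive range [low, high].
    return min(max(x, low), high)

# Hypothetical per-block dimming values; real ones come from bravo.blocks.
DIM = {"air": 0, "water": 3, "stone": 15}

def neighboring_light(glow, block):
    return clamp(glow - DIM[block], 0, 15)

assert neighboring_light(15, "air") == 15    # air never dims
assert neighboring_light(15, "water") == 12  # water dims by 3
assert neighboring_light(2, "stone") == 0    # never goes negative
```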
class Chunk(object):
"""
A chunk of blocks.
Chunks are large pieces of world geometry (block data). The blocks, light
maps, and associated metadata are stored in chunks. Chunks are
always measured 16xCHUNK_HEIGHTx16 and are aligned on 16x16 boundaries in
the xz-plane.
:cvar bool dirty: Whether this chunk needs to be flushed to disk.
:cvar bool populated: Whether this chunk has had its initial block data
filled out.
"""
all_damaged = False
dirty = True
populated = False
def __init__(self, x, z):
"""
:param int x: X coordinate in chunk coords
:param int z: Z coordinate in chunk coords
:ivar array.array heightmap: Tracks the tallest block in each xz-column.
:ivar bool all_damaged: Flag for forcing the entire chunk to be
damaged. This is for efficiency; past a certain point, it is not
efficient to batch block updates or track damage. Heavily damaged
chunks have their damage represented as a complete resend of the
entire chunk.
"""
self.x = int(x)
self.z = int(z)
self.heightmap = array("B", [0] * (16 * 16))
self.blocklight = array("B", [0] * (16 * 16 * CHUNK_HEIGHT))
self.sections = [Section() for i in range(16)]
self.entities = set()
self.tiles = {}
self.damaged = set()
def __repr__(self):
return "Chunk(%d, %d)" % (self.x, self.z)
__str__ = __repr__
def regenerate_heightmap(self):
"""
Regenerate the height map array.
The height map is merely the position of the tallest block in any
xz-column.
"""
for x in range(16):
for z in range(16):
column = x * 16 + z
for y in range(255, -1, -1):
if self.get_block((x, y, z)):
break
self.heightmap[column] = y
def regenerate_blocklight(self):
lightmap = array("L", [0] * (16 * 16 * CHUNK_HEIGHT))
- for x, y, z in product(xrange(16), xrange(CHUNK_HEIGHT), xrange(16)):
+ for x, z, y in iterchunk():
block = self.get_block((x, y, z))
if block in glowing_blocks:
composite_glow(lightmap, glowing_blocks[block], x, y, z)
self.blocklight = array("B", [clamp(x, 0, 15) for x in lightmap])
def regenerate_skylight(self):
"""
Regenerate the ambient light map.
Each block's individual light comes from two sources. The ambient
light comes from the sky.
The height map must be valid for this method to produce valid results.
"""
# Create an array of skylights, and a mask of dimming blocks.
lights = [0xf] * (16 * 16)
mask = [0x0] * (16 * 16)
# For each y-level, we're going to update the mask, apply it to the
# lights, apply the lights to the section, and then blur the lights
# and move downwards. Since empty sections are full of air, and air
# doesn't ever dim, ignoring empty sections should be a correct way
# to speed things up. Another optimization is that the process ends
# early if the entire slice of lights is dark.
for section in reversed(self.sections):
if not section:
continue
for y in range(15, -1, -1):
# Early-out if there's no more light left.
if not any(lights):
break
# Update the mask.
- for x, z in product(range(16), repeat=2):
+ for x, z in XZ:
offset = x * 16 + z
block = section.get_block((x, y, z))
mask[offset] = blocks[block].dim
# Apply the mask to the lights.
for i, dim in enumerate(mask):
# Keep it positive.
lights[i] = max(0, lights[i] - dim)
# Apply the lights to the section.
- for x, z in product(range(16), repeat=2):
+ for x, z in XZ:
offset = x * 16 + z
section.set_skylight((x, y, z), lights[offset])
# XXX blur the lights
# And continue moving downward.
def regenerate(self):
"""
Regenerate all auxiliary tables.
"""
self.regenerate_heightmap()
self.regenerate_blocklight()
self.regenerate_skylight()
self.dirty = True
def damage(self, coords):
"""
Record damage on this chunk.
"""
if self.all_damaged:
return
x, y, z = coords
self.damaged.add(coords)
# The number 176 represents the threshold at which it is cheaper to
# resend the entire chunk instead of individual blocks.
if len(self.damaged) > 176:
self.all_damaged = True
self.damaged.clear()
def is_damaged(self):
"""
Determine whether any damage is pending on this chunk.
:rtype: bool
:returns: True if any damage is pending on this chunk, False if not.
"""
return self.all_damaged or bool(self.damaged)
def get_damage_packet(self):
"""
Make a packet representing the current damage on this chunk.
This method is not private, but some care should be taken with it,
since it wraps some fairly cryptic internal data structures.
If this chunk is currently undamaged, this method will return an empty
string, which should be safe to treat as a packet. Please check with
`is_damaged()` before doing this if you need to optimize this case.
To avoid extra overhead, this method should really be used in
conjunction with `Factory.broadcast_for_chunk()`.
Do not forget to clear this chunk's damage! Callers are responsible
for doing this.
>>> packet = chunk.get_damage_packet()
>>> factory.broadcast_for_chunk(packet, chunk.x, chunk.z)
>>> chunk.clear_damage()
:rtype: str
:returns: String representation of the packet.
"""
if self.all_damaged:
# Resend the entire chunk!
return self.save_to_packet()
elif not self.damaged:
# Send nothing at all; we don't even have a scratch on us.
return ""
elif len(self.damaged) == 1:
# Use a single block update packet. Find the first (only) set bit
# in the damaged array, and use it as an index.
coords = next(iter(self.damaged))
block = self.get_block(coords)
metadata = self.get_metadata(coords)
x, y, z = coords
return make_packet("block",
x=x + self.x * 16,
y=y,
z=z + self.z * 16,
type=block,
meta=metadata)
else:
# Use a batch update.
records = []
for coords in self.damaged:
block = self.get_block(coords)
metadata = self.get_metadata(coords)
x, y, z = coords
record = x << 28 | z << 24 | y << 16 | block << 4 | metadata
records.append(record)
data = "".join(pack(">I", record) for record in records)
return make_packet("batch", x=self.x, z=self.z,
count=len(records), data=data)
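Each batch record above packs chunk-local coordinates, block type, and metadata into one 32-bit integer: 4 bits each for x and z, 8 for y, 12 for the block, 4 for metadata. A sketch of the packing and its inverse (the unpack helper is illustrative):

```python
def pack_record(x, z, y, block, metadata):
    # x: 4 bits, z: 4 bits, y: 8 bits, block: 12 bits, metadata: 4 bits.
    return x << 28 | z << 24 | y << 16 | block << 4 | metadata

def unpack_record(record):
    # Recover the fields by shifting and masking.
    return (record >> 28 & 0xf, record >> 24 & 0xf, record >> 16 & 0xff,
            record >> 4 & 0xfff, record & 0xf)

r = pack_record(3, 7, 64, 1, 0)
assert unpack_record(r) == (3, 7, 64, 1, 0)
assert r < 2 ** 32  # fits the ">I" struct format used above
```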
def clear_damage(self):
"""
Clear this chunk's damage.
"""
self.damaged.clear()
self.all_damaged = False
def save_to_packet(self):
"""
Generate a chunk packet.
"""
mask = 0
packed = []
ls = segment_array(self.blocklight)
for i, section in enumerate(self.sections):
if any(section.blocks):
mask |= 1 << i
packed.append(section.blocks.tostring())
for i, section in enumerate(self.sections):
if mask & 1 << i:
packed.append(pack_nibbles(section.metadata))
for i, l in enumerate(ls):
if mask & 1 << i:
packed.append(pack_nibbles(l))
for i, section in enumerate(self.sections):
if mask & 1 << i:
packed.append(pack_nibbles(section.skylight))
# Fake the biome data.
packed.append("\x00" * 256)
packet = make_packet("chunk", x=self.x, z=self.z, continuous=True,
primary=mask, add=0x0, data="".join(packed))
return packet
@check_bounds
def get_block(self, coords):
"""
Look up a block value.
:param tuple coords: coordinate triplet
:rtype: int
:returns: int representing block type
"""
x, y, z = coords
index, y = divmod(y, 16)
return self.sections[index].get_block((x, y, z))
@check_bounds
def set_block(self, coords, block):
"""
Update a block value.
:param tuple coords: coordinate triplet
:param int block: block type
"""
x, y, z = coords
index, section_y = divmod(y, 16)
column = x * 16 + z
if self.get_block(coords) != block:
self.sections[index].set_block((x, section_y, z), block)
if not self.populated:
return
# Regenerate heightmap at this coordinate.
if block:
self.heightmap[column] = max(self.heightmap[column], y)
else:
# If we replace the highest block with air, we need to go
# through all blocks below it to find the new top block.
height = self.heightmap[column]
if y == height:
for y in range(height, -1, -1):
if self.get_block((x, y, z)):
break
self.heightmap[column] = y
# Do the blocklight at this coordinate, if appropriate.
if block in glowing_blocks:
composite_glow(self.blocklight, glowing_blocks[block],
x, y, z)
self.blocklight = array("B", [clamp(x, 0, 15) for x in
self.blocklight])
# And the skylight.
glow = max(self.get_skylight((nx, ny, nz))
for nx, nz, ny in iter_neighbors((x, z, y)))
self.set_skylight((x, y, z), neighboring_light(glow, block))
self.dirty = True
self.damage(coords)
@check_bounds
def get_metadata(self, coords):
"""
Look up metadata.
:param tuple coords: coordinate triplet
:rtype: int
"""
x, y, z = coords
index, y = divmod(y, 16)
return self.sections[index].get_metadata((x, y, z))
@check_bounds
def set_metadata(self, coords, metadata):
"""
Update metadata.
:param tuple coords: coordinate triplet
:param int metadata:
"""
if self.get_metadata(coords) != metadata:
x, y, z = coords
index, y = divmod(y, 16)
self.sections[index].set_metadata((x, y, z), metadata)
self.dirty = True
self.damage(coords)
@check_bounds
def get_skylight(self, coords):
"""
Look up skylight value.
:param tuple coords: coordinate triplet
:rtype: int
"""
x, y, z = coords
index, y = divmod(y, 16)
return self.sections[index].get_skylight((x, y, z))
@check_bounds
def set_skylight(self, coords, value):
"""
Update skylight value.
:param tuple coords: coordinate triplet
:param int value: new skylight value
"""
if self.get_skylight(coords) != value:
x, y, z = coords
index, y = divmod(y, 16)
self.sections[index].set_skylight((x, y, z), value)
@check_bounds
def destroy(self, coords):
"""
Destroy the block at the given coordinates.
This may or may not set the block to be full of air; it uses the
block's preferred replacement. For example, ice generally turns to
water when destroyed.
This is safe as a no-op; for example, destroying a block of air with
no metadata is not going to cause state changes.
:param tuple coords: coordinate triplet
"""
block = blocks[self.get_block(coords)]
self.set_block(coords, block.replace)
self.set_metadata(coords, 0)
def height_at(self, x, z):
"""
Get the height of an xz-column of blocks.
:param int x: X coordinate
:param int z: Z coordinate
:rtype: int
:returns: The height of the given column of blocks.
"""
return self.heightmap[x * 16 + z]
def sed(self, search, replace):
"""
Execute a search and replace on all blocks in this chunk.
Named after the ubiquitous Unix tool. Does a semantic
s/search/replace/g on this chunk's blocks.
:param int search: block to find
:param int replace: block to use as a replacement
"""
for section in self.sections:
for i, block in enumerate(section.blocks):
if block == search:
section.blocks[i] = replace
self.all_damaged = True
self.dirty = True
diff --git a/bravo/plugins/generators.py b/bravo/plugins/generators.py
index 850b100..c8a6bf9 100644
--- a/bravo/plugins/generators.py
+++ b/bravo/plugins/generators.py
@@ -1,477 +1,477 @@
from __future__ import division
from array import array
from itertools import combinations, product
from random import Random
from zope.interface import implements
from bravo.blocks import blocks
-from bravo.chunk import CHUNK_HEIGHT
+from bravo.chunk import CHUNK_HEIGHT, XZ, iterchunk
from bravo.ibravo import ITerrainGenerator
from bravo.simplex import octaves2, octaves3, set_seed
from bravo.utilities.maths import morton2
R = Random()
class BoringGenerator(object):
"""
Generates boring slabs of flat stone.
This generator relies on implementation details of ``Chunk``.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Fill the bottom half of the chunk with stone.
"""
# Optimized fill. Fill the bottom eight sections with stone.
stone = array("B", [blocks["stone"].slot] * 16 * 16 * 16)
for section in chunk.sections[:8]:
section.blocks[:] = stone[:]
name = "boring"
before = tuple()
after = tuple()
class SimplexGenerator(object):
"""
Generates waves of stone.
This class uses a simplex noise generator to procedurally generate
organic-looking, continuously smooth terrain.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Make smooth waves of stone.
"""
set_seed(seed)
# And into one end he plugged the whole of reality as extrapolated
# from a piece of fairy cake, and into the other end he plugged his
# wife: so that when he turned it on she saw in one instant the whole
# infinity of creation and herself in relation to it.
factor = 1 / 256
- for x, z in product(xrange(16), repeat=2):
+ for x, z in XZ:
magx = (chunk.x * 16 + x) * factor
magz = (chunk.z * 16 + z) * factor
height = octaves2(magx, magz, 6)
# Normalize around 70. Normalization is scaled according to a
# rotated cosine.
#scale = rotated_cosine(magx, magz, seed, 16 * 10)
height *= 15
height = int(height + 70)
# Make our chunk offset, and render into the chunk.
for y in range(height):
chunk.set_block((x, y, z), blocks["stone"].slot)
name = "simplex"
before = tuple()
after = tuple()
class ComplexGenerator(object):
"""
Generate islands of stone.
This class uses a simplex noise generator to procedurally generate
ridiculous things.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Make smooth islands of stone.
"""
set_seed(seed)
factor = 1 / 256
- for x, z, y in product(xrange(16), xrange(16), xrange(CHUNK_HEIGHT)):
+ for x, z, y in iterchunk():
magx = (chunk.x * 16 + x) * factor
magz = (chunk.z * 16 + z) * factor
sample = octaves3(magx, magz, y * factor, 6)
if sample > 0.5:
chunk.set_block((x, y, z), blocks["stone"].slot)
name = "complex"
before = tuple()
after = tuple()
class WaterTableGenerator(object):
"""
Create a water table.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Generate a flat water table halfway up the map.
"""
for x, z, y in product(xrange(16), xrange(16), xrange(62)):
if chunk.get_block((x, y, z)) == blocks["air"].slot:
chunk.set_block((x, y, z), blocks["spring"].slot)
name = "watertable"
before = tuple()
after = ("trees", "caves")
class ErosionGenerator(object):
"""
Erodes stone surfaces into dirt.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Turn the top few layers of stone into dirt.
"""
chunk.regenerate_heightmap()
- for x, z in product(xrange(16), repeat=2):
+ for x, z in XZ:
y = chunk.height_at(x, z)
if chunk.get_block((x, y, z)) == blocks["stone"].slot:
bottom = max(y - 3, 0)
for i in range(bottom, y + 1):
chunk.set_block((x, i, z), blocks["dirt"].slot)
name = "erosion"
before = ("boring", "simplex")
after = ("watertable",)
class GrassGenerator(object):
"""
Find exposed dirt and grow grass.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Find the top dirt block in each xz-column and turn it into grass.
"""
chunk.regenerate_heightmap()
- for x, z in product(xrange(16), repeat=2):
+ for x, z in XZ:
y = chunk.height_at(x, z)
if (chunk.get_block((x, y, z)) == blocks["dirt"].slot and
(y == 127 or
chunk.get_block((x, y + 1, z)) == blocks["air"].slot)):
chunk.set_block((x, y, z), blocks["grass"].slot)
name = "grass"
before = ("erosion", "complex")
after = tuple()
class BeachGenerator(object):
"""
Generates simple beaches.
Beaches are areas of sand around bodies of water. This generator will form
beaches near all bodies of water regardless of size or composition; it
will form beaches at large seashores and frozen lakes. It will even place
beaches on one-block puddles.
"""
implements(ITerrainGenerator)
above = set([blocks["air"].slot, blocks["water"].slot,
blocks["spring"].slot, blocks["ice"].slot])
replace = set([blocks["dirt"].slot, blocks["grass"].slot])
def populate(self, chunk, seed):
"""
Find blocks within a height range and turn them into sand if they are
dirt and underwater or exposed to air. If the height range is near the
water table level, this creates fairly good beaches.
"""
chunk.regenerate_heightmap()
- for x, z in product(xrange(16), repeat=2):
+ for x, z in XZ:
y = chunk.height_at(x, z)
while y > 60 and chunk.get_block((x, y, z)) in self.above:
y -= 1
if not 60 < y < 66:
continue
if chunk.get_block((x, y, z)) in self.replace:
chunk.set_block((x, y, z), blocks["sand"].slot)
name = "beaches"
before = ("erosion", "complex")
after = ("saplings",)
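The `populate()` logic above walks down through see-through blocks before testing the height band. A self-contained sketch using block names instead of slots (the dict-as-column representation is an assumption for illustration only):

```python
ABOVE = {"air", "water", "spring", "ice"}
REPLACE = {"dirt", "grass"}

def beach_target(column, height):
    """Return the y to turn into sand, or None if the column misses the band.

    column maps y -> block name; missing keys count as solid ground.
    """
    y = height
    # Skip down past air, water, springs, and ice.
    while y > 60 and column.get(y) in ABOVE:
        y -= 1
    # Only columns near the water table (60 < y < 66) become beaches.
    if not 60 < y < 66:
        return None
    if column.get(y) in REPLACE:
        return y
    return None
```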
class OreGenerator(object):
"""
Place ores and clay.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
set_seed(seed)
xzfactor = 1 / 16
yfactor = 1 / 32
- for x, z in product(xrange(16), repeat=2):
+ for x, z in XZ:
for y in range(chunk.height_at(x, z) + 1):
magx = (chunk.x * 16 + x) * xzfactor
magz = (chunk.z * 16 + z) * xzfactor
magy = y * yfactor
sample = octaves3(magx, magz, magy, 3)
if sample > 0.9999:
# Figure out what to place here.
old = chunk.get_block((x, y, z))
new = None
if old == blocks["sand"].slot:
# Sand becomes clay.
new = blocks["clay"].slot
elif old == blocks["dirt"].slot:
# Dirt becomes gravel.
new = blocks["gravel"].slot
elif old == blocks["stone"].slot:
# Stone becomes one of the ores.
if y < 12:
new = blocks["diamond-ore"].slot
elif y < 24:
new = blocks["gold-ore"].slot
elif y < 36:
new = blocks["redstone-ore"].slot
elif y < 48:
new = blocks["iron-ore"].slot
else:
new = blocks["coal-ore"].slot
if new:
chunk.set_block((x, y, z), new)
name = "ore"
before = ("erosion", "complex", "beaches")
after = tuple()
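The depth thresholds above amount to a height-to-ore table: rarer ores sit deeper. Factored out as a sketch (block names mirror the slots used above):

```python
def ore_for_depth(y):
    # Thresholds match the populate() method above.
    if y < 12:
        return "diamond-ore"
    elif y < 24:
        return "gold-ore"
    elif y < 36:
        return "redstone-ore"
    elif y < 48:
        return "iron-ore"
    else:
        return "coal-ore"
```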
class SafetyGenerator(object):
"""
Generates terrain features essential for the safety of clients.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Spread a layer of bedrock along the bottom of the chunk, and clear the
top two layers to avoid players getting stuck at the top.
"""
- for x, z in product(xrange(16), repeat=2):
+ for x, z in XZ:
chunk.set_block((x, 0, z), blocks["bedrock"].slot)
chunk.set_block((x, 126, z), blocks["air"].slot)
chunk.set_block((x, 127, z), blocks["air"].slot)
name = "safety"
before = ("boring", "simplex", "complex", "cliffs", "float", "caves")
after = tuple()
class CliffGenerator(object):
"""
This class/generator creates cliffs by selectively applying an offset of
the noise map to blocks based on height. Feel free to make this more
realistic.
This generator relies on implementation details of ``Chunk``.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Make smooth waves of stone, then compare to current landscape.
"""
set_seed(seed)
factor = 1 / 256
- for x, z in product(xrange(16), repeat=2):
+ for x, z in XZ:
magx = ((chunk.x + 32) * 16 + x) * factor
magz = ((chunk.z + 32) * 16 + z) * factor
height = octaves2(magx, magz, 6)
height *= 15
height = int(height + 70)
current_height = chunk.heightmap[x * 16 + z]
if (-6 < current_height - height < 3 and
current_height > 63 and height > 63):
for y in range(height - 3):
chunk.set_block((x, y, z), blocks["stone"].slot)
for y in range(y, CHUNK_HEIGHT // 2):
chunk.set_block((x, y, z), blocks["air"].slot)
name = "cliffs"
before = tuple()
after = tuple()
class FloatGenerator(object):
"""
Rips chunks out of the map, to create surreal chunks of floating land.
This generator relies on implementation details of ``Chunk``.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Create floating islands.
"""
# Eat moar stone
R.seed(seed)
factor = 1 / 256
- for x, z in product(xrange(16), repeat=2):
+ for x, z in XZ:
magx = ((chunk.x+16) * 16 + x) * factor
magz = ((chunk.z+16) * 16 + z) * factor
height = octaves2(magx, magz, 6)
height *= 15
height = int(height + 70)
if abs(chunk.heightmap[x * 16 + z] - height) < 10:
height = CHUNK_HEIGHT
else:
height = height - 30 + R.randint(-15, 10)
for y in range(height):
chunk.set_block((x, y, z), blocks["air"].slot)
name = "float"
before = tuple()
after = tuple()
class CaveGenerator(object):
"""
Carve caves and seams out of terrain.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Make smooth waves of stone.
"""
sede = seed ^ 0xcafebabe
xzfactor = 1 / 128
yfactor = 1 / 64
- for x, z in product(xrange(16), repeat=2):
+ for x, z in XZ:
magx = (chunk.x * 16 + x) * xzfactor
magz = (chunk.z * 16 + z) * xzfactor
for y in range(CHUNK_HEIGHT):
if not chunk.get_block((x, y, z)):
continue
magy = y * yfactor
set_seed(seed)
should_cave = abs(octaves3(magx, magz, magy, 3))
set_seed(sede)
should_cave *= abs(octaves3(magx, magz, magy, 3))
if should_cave < 0.002:
chunk.set_block((x, y, z), blocks["air"].slot)
name = "caves"
before = ("grass", "erosion", "simplex", "complex", "boring")
after = tuple()
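Carving multiplies two independently seeded noise magnitudes, so air appears only where both fields pass near zero at once; this produces sparse, tunnel-like intersections rather than open caverns. A sketch of the threshold test:

```python
def should_carve(sample_a, sample_b, threshold=0.002):
    # sample_a and sample_b stand in for the two octaves3() calls made
    # under seed and seed ^ 0xcafebabe; both must be near zero to carve.
    return abs(sample_a) * abs(sample_b) < threshold
```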
class SaplingGenerator(object):
"""
Plant saplings at relatively silly places around the map.
"""
implements(ITerrainGenerator)
primes = [401, 409, 419, 421, 431, 433, 439, 443, 449, 457, 461, 463, 467,
479, 487, 491, 499, 503, 509, 521, 523, 541, 547, 557, 563, 569,
571, 577, 587, 593, 599, 601, 607, 613, 617, 619, 631, 641, 643,
647, 653, 659, 661, 673, 677, 683, 691]
"""
A field of prime numbers, used to select factors for trees.
"""
ground = (blocks["grass"].slot, blocks["dirt"].slot)
def populate(self, chunk, seed):
"""
Place saplings.
The algorithm used to pick locations for the saplings is quite
simple, although slightly involved. The basic technique is to
calculate a Morton number for every xz-column in the chunk, and then
use coprime offsets to sprinkle selected points fairly evenly
throughout the chunk.
Saplings are only placed on dirt and grass blocks.
"""
R.seed(seed)
factors = R.choice(list(combinations(self.primes, 3)))
- for x, z in product(xrange(16), repeat=2):
+ for x, z in XZ:
# Make a Morton number.
morton = morton2(chunk.x * 16 + x, chunk.z * 16 + z)
if not all(morton % factor for factor in factors):
# Magic number is how many tree types are available
- species = morton % 4
+ species = morton % 4
# Plant a sapling.
y = chunk.height_at(x, z)
if chunk.get_block((x, y, z)) in self.ground:
chunk.set_block((x, y + 1, z), blocks["sapling"].slot)
chunk.set_metadata((x, y + 1, z), species)
name = "saplings"
before = ("grass", "erosion", "simplex", "complex", "boring")
after = tuple()
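The sapling placement described above can be sketched end to end. The `morton2` reimplementation below is an assumption about what `bravo.utilities.maths.morton2` computes (interleaving the low bits of x and z); the real helper may differ in width or bit order:

```python
from itertools import product

def morton2(x, z):
    # Interleave the low 16 bits of x and z, x in the even bit positions.
    result = 0
    for bit in range(16):
        result |= (x >> bit & 1) << (2 * bit)
        result |= (z >> bit & 1) << (2 * bit + 1)
    return result

# A column is selected when its Morton number divides evenly by any of
# the chosen prime factors; large coprime factors keep picks sparse.
factors = (401, 409, 419)
picked = [(x, z) for x, z in product(range(16), repeat=2)
          if not all(morton2(x, z) % f for f in factors)]
```

For chunk (0, 0) only the origin column qualifies, since every Morton number in the chunk stays below the smallest factor.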
diff --git a/bravo/plugins/web.py b/bravo/plugins/web.py
index d3b371c..e5ffb85 100644
--- a/bravo/plugins/web.py
+++ b/bravo/plugins/web.py
@@ -1,235 +1,235 @@
-from itertools import product
from StringIO import StringIO
import os
import time
from PIL import Image
from twisted.web.resource import Resource
from twisted.web.server import NOT_DONE_YET
from twisted.web.template import flattenString, renderer, Element
from twisted.web.template import XMLString, XMLFile
from twisted.web.http import datetimeToString
from zope.interface import implements
+from bravo import __file__
from bravo.blocks import blocks
from bravo.ibravo import IWorldResource
-from bravo import __file__
+from bravo.utilities.coords import XZ
worldmap_xml = os.path.join(os.path.dirname(__file__), 'plugins',
'worldmap.html')
block_colors = {
blocks["clay"].slot: "rosybrown",
blocks["cobblestone"].slot: 'dimgray',
blocks["dirt"].slot: 'brown',
blocks["grass"].slot: ("forestgreen", 'green', 'darkgreen'),
blocks["lava"].slot: 'red',
blocks["lava-spring"].slot: 'red',
blocks["leaves"].slot: "limegreen",
blocks["log"].slot: "sienna",
blocks["sand"].slot: 'khaki',
blocks["sapling"].slot: "lime",
blocks["snow"].slot: 'snow',
blocks["spring"].slot: 'blue',
blocks["stone"].slot: 'gray',
blocks["water"].slot: 'blue',
blocks["wood"].slot: 'burlywood',
}
default_color = 'black'
# http://en.wikipedia.org/wiki/Web_colors X11 color names
names_to_colors = {
"black": (0, 0, 0),
"blue": (0, 0, 255),
"brown": (165, 42, 42),
"burlywood": (22, 184, 135),
"darkgreen": (0, 100, 0),
"dimgray": (105, 105, 105),
"forestgreen": (34, 139, 34),
"gray": (128, 128, 128),
"green": (0, 128, 0),
"khaki": (240, 230, 140),
"lime": (0, 255, 0),
"limegreen": (50, 255, 50),
"red": (255, 0, 0),
"rosybrown": (188, 143, 143),
"saddlebrown": (139, 69, 19),
"sienna": (160, 82, 45),
"snow": (255, 250, 250),
}
class ChunkIllustrator(Resource):
"""
A helper resource which returns image data for a given chunk.
"""
def __init__(self, factory, x, z):
self.factory = factory
self.x = x
self.z = z
def _cb_render_GET(self, chunk, request):
# If the request finished already, then don't even bother preparing
# the image.
if request._disconnected:
return
request.setHeader('content-type', 'image/png')
i = Image.new("RGB", (16, 16))
pbo = i.load()
- for x, z in product(xrange(16), repeat=2):
+ for x, z in XZ:
y = chunk.height_at(x, z)
block = chunk.blocks[x, z, y]
if block in block_colors:
color = block_colors[block]
if isinstance(color, tuple):
# Switch colors depending on height.
color = color[y / 5 % len(color)]
else:
color = default_color
pbo[x, z] = names_to_colors[color]
data = StringIO()
i.save(data, "PNG")
# cache image for six minutes (max-age of 360 seconds)
request.setHeader("Cache-Control", "public, max-age=360")
request.setHeader("Expires", datetimeToString(time.time() + 360))
request.write(data.getvalue())
request.finish()
def render_GET(self, request):
d = self.factory.world.request_chunk(self.x, self.z)
d.addCallback(self._cb_render_GET, request)
return NOT_DONE_YET
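Entries in `block_colors` may be a single name or a tuple of shades, and the renderer cycles shades every five blocks of height. A sketch of that selection (the original's Python 2 `/` is floor division here, written as `//` to be explicit):

```python
def pick_color(color, y, default="black"):
    # Tuples hold several shades; cycle through them in 5-block bands.
    if isinstance(color, tuple):
        return color[y // 5 % len(color)]
    return color
```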
class WorldMapElement(Element):
"""
Element for the WorldMap plugin.
"""
loader = XMLFile(worldmap_xml)
class WorldMap(Resource):
implements(IWorldResource)
name = "worldmap"
isLeaf = False
def __init__(self):
Resource.__init__(self)
self.element = WorldMapElement()
def getChild(self, name, request):
"""
Make a ``ChunkIllustrator`` for the requested chunk.
"""
x, z = [int(i) for i in name.split(",")]
return ChunkIllustrator(x, z)
def render_GET(self, request):
d = flattenString(request, self.element)
def complete_request(html):
if not request._disconnected:
request.write(html)
request.finish()
d.addCallback(complete_request)
return NOT_DONE_YET
automaton_stats_template = """
<html xmlns:t="http://twistedmatrix.com/ns/twisted.web.template/0.1">
<head>
<title>Automaton Stats</title>
</head>
<body>
<h1>Automatons</h1>
<div nowrap="nowrap" t:render="main" />
</body>
</html>
"""
class AutomatonElement(Element):
"""
An automaton.
"""
loader = XMLString("""
<div xmlns:t="http://twistedmatrix.com/ns/twisted.web.template/0.1">
<h2 t:render="name" />
<ul>
<li t:render="tracked" />
<li t:render="step" />
</ul>
</div>
""")
def __init__(self, automaton):
Element.__init__(self)
self.automaton = automaton
@renderer
def name(self, request, tag):
return tag(self.automaton.name)
@renderer
def tracked(self, request, tag):
if hasattr(self.automaton, "tracked"):
t = self.automaton.tracked
if isinstance(t, dict):
l = sum(len(i) for i in t.values())
else:
l = len(t)
s = "Currently tracking %d blocks" % l
else:
s = "<n/a>"
return tag(s)
@renderer
def step(self, request, tag):
if hasattr(self.automaton, "step"):
s = "Currently processing every %f seconds" % self.automaton.step
else:
s = "<n/a>"
return tag(s)
class AutomatonStatsElement(Element):
"""
Render some information about automatons.
"""
loader = XMLString(automaton_stats_template)
def __init__(self, factory):
self.factory = factory
@renderer
def main(self, request, tag):
return tag(*(AutomatonElement(a) for a in self.factory.automatons))
class AutomatonStats(Resource):
implements(IWorldResource)
name = "automatonstats"
isLeaf = True
def __init__(self, factory):
self.factory = factory
def render_GET(self, request):
d = flattenString(request, AutomatonStatsElement(self.factory))
def complete_request(html):
if not request._disconnected:
request.write(html)
request.finish()
d.addCallback(complete_request)
return NOT_DONE_YET
diff --git a/bravo/policy/seasons.py b/bravo/policy/seasons.py
index d0e7707..37fc52d 100644
--- a/bravo/policy/seasons.py
+++ b/bravo/policy/seasons.py
@@ -1,89 +1,88 @@
-from itertools import product
-
from zope.interface import implements
from bravo.blocks import blocks
from bravo.ibravo import ISeason
+from bravo.utilities.coords import CHUNK_HEIGHT, XZ
snow_resistant = set([
blocks["air"].slot,
blocks["brown-mushroom"].slot,
blocks["cactus"].slot,
blocks["cake-block"].slot,
blocks["crops"].slot,
blocks["fence"].slot,
blocks["fire"].slot,
blocks["flower"].slot,
blocks["glass"].slot,
blocks["ice"].slot,
blocks["iron-door-block"].slot,
blocks["ladder"].slot,
blocks["lava"].slot,
blocks["lava-spring"].slot,
blocks["lever"].slot,
blocks["portal"].slot,
blocks["red-mushroom"].slot,
blocks["redstone-repeater-off"].slot,
blocks["redstone-repeater-on"].slot,
blocks["redstone-torch"].slot,
blocks["redstone-torch-off"].slot,
blocks["redstone-wire"].slot,
blocks["rose"].slot,
blocks["sapling"].slot,
blocks["signpost"].slot,
blocks["snow"].slot,
blocks["spring"].slot,
blocks["single-stone-slab"].slot,
blocks["stone-button"].slot,
blocks["stone-plate"].slot,
blocks["stone-stairs"].slot,
blocks["reed"].slot,
blocks["torch"].slot,
blocks["tracks"].slot,
blocks["wall-sign"].slot,
blocks["water"].slot,
blocks["wooden-door-block"].slot,
blocks["wooden-plate"].slot,
blocks["wooden-stairs"].slot,
])
"""
Blocks which cannot have snow spawned on top of them.
"""
class Winter(object):
implements(ISeason)
def transform(self, chunk):
chunk.sed(blocks["spring"].slot, blocks["ice"].slot)
# Make sure that the heightmap is valid so that we don't spawn
# floating snow.
chunk.regenerate_heightmap()
# Lay snow over anything not already snowed and not snow-resistant.
- for x, z in product(xrange(16), xrange(16)):
+ for x, z in XZ:
height = chunk.height_at(x, z)
- if height == 127:
+ if height == CHUNK_HEIGHT - 1:
continue
top_block = chunk.get_block((x, height, z))
if top_block not in snow_resistant:
chunk.set_block((x, height + 1, z), blocks["snow"].slot)
name = "winter"
day = 0
class Spring(object):
implements(ISeason)
def transform(self, chunk):
chunk.sed(blocks["ice"].slot, blocks["spring"].slot)
chunk.sed(blocks["snow"].slot, blocks["air"].slot)
name = "spring"
day = 90
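Winter's snow pass checks two things per column: the heightmap must leave head-room below the top of the world, and the top block must accept snow. A sketch with block names (the abbreviated resistant set is an illustration, not the full list above):

```python
SNOW_RESISTANT = {"air", "water", "spring", "ice", "snow", "torch"}
CHUNK_HEIGHT = 256

def snow_y(top_block, height):
    # No room for snow at the very top of the world.
    if height == CHUNK_HEIGHT - 1:
        return None
    # Resistant surfaces (water, torches, existing snow...) stay bare.
    if top_block in SNOW_RESISTANT:
        return None
    return height + 1
```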
diff --git a/bravo/tests/plugins/test_generators.py b/bravo/tests/plugins/test_generators.py
index 72f5426..d46e28a 100644
--- a/bravo/tests/plugins/test_generators.py
+++ b/bravo/tests/plugins/test_generators.py
@@ -1,78 +1,79 @@
import unittest
from itertools import product
import bravo.blocks
from bravo.chunk import Chunk, CHUNK_HEIGHT
import bravo.ibravo
import bravo.plugin
+from bravo.utilities.coords import iterchunk
class TestGenerators(unittest.TestCase):
def setUp(self):
self.chunk = Chunk(0, 0)
self.p = bravo.plugin.retrieve_plugins(bravo.ibravo.ITerrainGenerator)
def test_trivial(self):
pass
def test_boring(self):
if "boring" not in self.p:
raise unittest.SkipTest("plugin not present")
plugin = self.p["boring"]
plugin.populate(self.chunk, 0)
- for x, y, z in product(xrange(16), xrange(CHUNK_HEIGHT), xrange(16)):
+ for x, z, y in iterchunk():
if y < CHUNK_HEIGHT // 2:
self.assertEqual(self.chunk.get_block((x, y, z)),
bravo.blocks.blocks["stone"].slot)
else:
self.assertEqual(self.chunk.get_block((x, y, z)),
bravo.blocks.blocks["air"].slot)
def test_beaches_range(self):
if "beaches" not in self.p:
raise unittest.SkipTest("plugin not present")
plugin = self.p["beaches"]
# Prepare chunk.
for i in range(5):
self.chunk.set_block((i, 61 + i, i),
bravo.blocks.blocks["dirt"].slot)
plugin.populate(self.chunk, 0)
for i in range(5):
self.assertEqual(self.chunk.get_block((i, 61 + i, i)),
bravo.blocks.blocks["sand"].slot,
"%d, %d, %d is wrong" % (i, 61 + i, i))
def test_beaches_immersed(self):
"""
Test that beaches still generate properly around pre-existing water
tables.
This test is meant to ensure that the order of beaches and watertable
does not matter.
"""
if "beaches" not in self.p:
raise unittest.SkipTest("plugin not present")
plugin = self.p["beaches"]
# Prepare chunk.
for x, z, y in product(xrange(16), xrange(16), xrange(60, 64)):
self.chunk.set_block((x, y, z),
bravo.blocks.blocks["spring"].slot)
for i in range(5):
self.chunk.set_block((i, 61 + i, i),
bravo.blocks.blocks["dirt"].slot)
plugin.populate(self.chunk, 0)
for i in range(5):
self.assertEqual(self.chunk.get_block((i, 61 + i, i)),
bravo.blocks.blocks["sand"].slot,
"%d, %d, %d is wrong" % (i, 61 + i, i))
diff --git a/bravo/tests/test_chunk.py b/bravo/tests/test_chunk.py
index 49bc33f..6f8b708 100644
--- a/bravo/tests/test_chunk.py
+++ b/bravo/tests/test_chunk.py
@@ -1,230 +1,231 @@
from twisted.trial import unittest
from itertools import product
from bravo.blocks import blocks
from bravo.chunk import Chunk
+from bravo.utilities.coords import XZ
class TestChunkBlocks(unittest.TestCase):
def setUp(self):
self.c = Chunk(0, 0)
def test_trivial(self):
pass
def test_destroy(self):
"""
Test block destruction.
"""
self.c.set_block((0, 0, 0), 1)
self.c.set_metadata((0, 0, 0), 1)
self.c.destroy((0, 0, 0))
self.assertEqual(self.c.get_block((0, 0, 0)), 0)
self.assertEqual(self.c.get_metadata((0, 0, 0)), 0)
def test_sed(self):
"""
``sed()`` should work.
"""
self.c.set_block((1, 1, 1), 1)
self.c.set_block((2, 2, 2), 2)
self.c.set_block((3, 3, 3), 3)
self.c.sed(1, 3)
self.assertEqual(self.c.get_block((1, 1, 1)), 3)
self.assertEqual(self.c.get_block((2, 2, 2)), 2)
self.assertEqual(self.c.get_block((3, 3, 3)), 3)
def test_set_block_heightmap(self):
"""
Heightmaps work.
"""
self.c.populated = True
self.c.set_block((0, 20, 0), 1)
self.assertEqual(self.c.heightmap[0], 20)
def test_set_block_heightmap_underneath(self):
"""
A block placed underneath the highest block will not alter the
heightmap.
"""
self.c.populated = True
self.c.set_block((0, 20, 0), 1)
self.assertEqual(self.c.heightmap[0], 20)
self.c.set_block((0, 10, 0), 1)
self.assertEqual(self.c.heightmap[0], 20)
def test_set_block_heightmap_destroyed(self):
"""
Upon destruction of the highest block, the heightmap will point at the
next-highest block.
"""
self.c.populated = True
self.c.set_block((0, 30, 0), 1)
self.c.set_block((0, 10, 0), 1)
self.c.destroy((0, 30, 0))
self.assertEqual(self.c.heightmap[0], 10)
class TestLightmaps(unittest.TestCase):
def setUp(self):
self.c = Chunk(0, 0)
def test_trivial(self):
pass
def test_boring_skylight_values(self):
# Fill it as if we were the boring generator.
- for x, z in product(xrange(16), repeat=2):
+ for x, z in XZ:
self.c.set_block((x, 0, z), 1)
self.c.regenerate()
# Make sure that all of the blocks at the bottom of the ambient
# lightmap are set to 15 (fully illuminated).
# Note that skylight of a solid block is 0, the important value
is the skylight of the translucent (usually air) block above it.
- for x, z in product(xrange(16), repeat=2):
+ for x, z in XZ:
self.assertEqual(self.c.get_skylight((x, 0, z)), 0xf)
test_boring_skylight_values.todo = "Skylight maths is still broken"
def test_skylight_spread(self):
# Fill it as if we were the boring generator.
- for x, z in product(xrange(16), repeat=2):
+ for x, z in XZ:
self.c.set_block((x, 0, z), 1)
# Put a false floor up to block the light.
for x, z in product(xrange(1, 15), repeat=2):
self.c.set_block((x, 2, z), 1)
self.c.regenerate()
# Test that a gradient emerges.
- for x, z in product(xrange(16), repeat=2):
+ for x, z in XZ:
flipx = x if x > 7 else 15 - x
flipz = z if z > 7 else 15 - z
target = max(flipx, flipz)
self.assertEqual(self.c.get_skylight((x, 1, z)), target,
"%d, %d" % (x, z))
test_skylight_spread.todo = "Skylight maths is still broken"
def test_skylight_arch(self):
"""
Indirect illumination should work.
"""
# Floor.
- for x, z in product(xrange(16), repeat=2):
+ for x, z in XZ:
self.c.set_block((x, 0, z), 1)
# Arch of bedrock, with an empty spot in the middle, which will be our
# indirect spot.
for x, y, z in product(xrange(2), xrange(1, 3), xrange(3)):
self.c.set_block((x, y, z), 1)
self.c.set_block((1, 1, 1), 0)
# Illuminate and make sure that our indirect spot has just a little
# bit of illumination.
self.c.regenerate()
self.assertEqual(self.c.get_skylight((1, 1, 1)), 14)
test_skylight_arch.todo = "Skylight maths is still broken"
def test_skylight_arch_leaves(self):
"""
Indirect illumination with dimming should work.
"""
# Floor.
- for x, z in product(xrange(16), repeat=2):
+ for x, z in XZ:
self.c.set_block((x, 0, z), 1)
# Arch of bedrock, with an empty spot in the middle, which will be our
# indirect spot.
for x, y, z in product(xrange(2), xrange(1, 3), xrange(3)):
self.c.set_block((x, y, z), 1)
self.c.set_block((1, 1, 1), 0)
# Leaves in front of the spot should cause a dimming of 1.
self.c.set_block((2, 1, 1), 18)
# Illuminate and make sure that our indirect spot has just a little
# bit of illumination.
self.c.regenerate()
self.assertEqual(self.c.get_skylight((1, 1, 1)), 13)
test_skylight_arch_leaves.todo = "Skylight maths is still broken"
def test_skylight_arch_leaves_occluded(self):
"""
Indirect illumination with dimming through occluded blocks only should
work.
"""
# Floor.
- for x, z in product(xrange(16), repeat=2):
+ for x, z in XZ:
self.c.set_block((x, 0, z), 1)
# Arch of bedrock, with an empty spot in the middle, which will be our
# indirect spot.
for x, y, z in product(xrange(3), xrange(1, 3), xrange(3)):
self.c.set_block((x, y, z), 1)
self.c.set_block((1, 1, 1), 0)
# Leaves in front of the spot should cause a dimming of 1, but since
# the leaves themselves are occluded, the total dimming should be 2.
self.c.set_block((2, 1, 1), 18)
# Illuminate and make sure that our indirect spot has just a little
# bit of illumination.
self.c.regenerate()
self.assertEqual(self.c.get_skylight((1, 1, 1)), 12)
test_skylight_arch_leaves_occluded.todo = "Skylight maths is still broken"
def test_incremental_solid(self):
"""
Regeneration isn't necessary to correctly light solid blocks.
"""
# Initialize tables and enable set_block().
self.c.regenerate()
self.c.populated = True
# Any solid block with no dimming works. I choose dirt.
self.c.set_block((0, 0, 0), blocks["dirt"].slot)
self.assertEqual(self.c.get_skylight((0, 0, 0)), 0)
test_incremental_solid.todo = "Skylight maths is still broken"
def test_incremental_air(self):
"""
Regeneration isn't necessary to correctly light dug blocks, which
leave behind air.
"""
# Any solid block with no dimming works. I choose dirt.
self.c.set_block((0, 0, 0), blocks["dirt"].slot)
# Initialize tables and enable set_block().
self.c.regenerate()
self.c.populated = True
self.c.set_block((0, 0, 0), blocks["air"].slot)
self.assertEqual(self.c.get_skylight((0, 0, 0)), 15)
diff --git a/bravo/utilities/automatic.py b/bravo/utilities/automatic.py
index 20cdf60..c129964 100644
--- a/bravo/utilities/automatic.py
+++ b/bravo/utilities/automatic.py
@@ -1,32 +1,32 @@
-from itertools import product
+from bravo.utilities.coords import XZ
def naive_scan(automaton, chunk):
"""
Utility function which can be used to implement a naive, slow, but
thorough chunk scan for automatons.
This method is designed to be directly useable on automaton classes to
provide the `scan()` interface.
This function depends on implementation details of ``Chunk``.
"""
for index, section in enumerate(chunk.sections):
if section:
for i, block in enumerate(section.blocks):
# Decode the flat index: x in the low nibble, z in the next
# nibble, and y in the remaining high bits.
coords = i & 0xf, (i >> 8) + index * 16, i >> 4 & 0xf
automaton.feed(coords)
def column_scan(automaton, chunk):
"""
Utility function which provides a chunk scanner which only examines the
tallest blocks in the chunk. This can be useful for automatons which only
care about sunlit or elevated areas.
This method can be used directly in automaton classes to provide `scan()`.
"""
- for x, z in product(range(16), repeat=2):
+ for x, z in XZ:
y = chunk.height_at(x, z)
if chunk.get_block((x, y, z)) in automaton.blocks:
automaton.feed((x + chunk.x * 16, y, z + chunk.z * 16))
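`naive_scan` unpacks a flat section index with bit operations; the implied layout is x in the low nibble, z in the next, and y in the remaining bits (YZX order within one 16-block-tall section). A sketch of the decode:

```python
def decode_index(i, section_index=0):
    # Inverse of i = (y * 16 + z) * 16 + x for one 16x16x16 section;
    # section_index shifts y into the right vertical slice of the chunk.
    x = i & 0xf
    z = i >> 4 & 0xf
    y = (i >> 8) + section_index * 16
    return x, y, z
```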
diff --git a/bravo/utilities/coords.py b/bravo/utilities/coords.py
index f39763b..4df1016 100644
--- a/bravo/utilities/coords.py
+++ b/bravo/utilities/coords.py
@@ -1,140 +1,143 @@
"""
Utilities for coordinate handling and munging.
"""
from itertools import product
from math import floor, ceil
-from bravo.chunk import CHUNK_HEIGHT
+CHUNK_HEIGHT = 256
+"""
+The total height of chunks.
+"""
def polar_round_vector(vector):
"""
Rounds a vector towards zero
"""
if vector[0] >= 0:
calculated_x = floor(vector[0])
else:
calculated_x = ceil(vector[0])
if vector[1] >= 0:
calculated_y = floor(vector[1])
else:
calculated_y = ceil(vector[1])
if vector[2] >= 0:
calculated_z = floor(vector[2])
else:
calculated_z = ceil(vector[2])
return calculated_x, calculated_y, calculated_z
def split_coords(x, z):
"""
Split a pair of coordinates into chunk and subchunk coordinates.
:param int x: the X coordinate
:param int z: the Z coordinate
:returns: a tuple of the X chunk, X subchunk, Z chunk, and Z subchunk
"""
first, second = divmod(int(x), 16)
third, fourth = divmod(int(z), 16)
return first, second, third, fourth
def taxicab2(x1, y1, x2, y2):
"""
Return the taxicab distance between two blocks.
"""
return abs(x1 - x2) + abs(y1 - y2)
def taxicab3(x1, y1, z1, x2, y2, z2):
"""
Return the taxicab distance between two blocks, in three dimensions.
"""
return abs(x1 - x2) + abs(y1 - y2) + abs(z1 - z2)
def adjust_coords_for_face(coords, face):
"""
Adjust a set of coords according to a face.
The face is a standard string descriptor, such as "+x".
The "noop" face is supported.
"""
x, y, z = coords
if face == "-x":
x -= 1
elif face == "+x":
x += 1
elif face == "-y":
y -= 1
elif face == "+y":
y += 1
elif face == "-z":
z -= 1
elif face == "+z":
z += 1
return x, y, z
XZ = list(product(range(16), repeat=2))
"""
The xz-coords for a chunk.
"""
def iterchunk():
"""
Yield an iterable of x, z, y coordinates for an entire chunk.
"""
return product(range(16), range(16), range(256))
def iterneighbors(x, y, z):
"""
Yield an iterable of neighboring block coordinates.
The first item in the iterable is the original coordinates.
Coordinates with invalid Y values are discarded automatically.
"""
for (dx, dy, dz) in (
( 0, 0, 0),
( 0, 0, 1),
( 0, 0, -1),
( 0, 1, 0),
( 0, -1, 0),
( 1, 0, 0),
(-1, 0, 0)):
if 0 <= y + dy < CHUNK_HEIGHT:
yield x + dx, y + dy, z + dz
def itercube(x, y, z, r):
"""
Yield an iterable of coordinates in a cube around a given block.
Coordinates with invalid Y values are discarded automatically.
"""
bx = x - r
tx = x + r + 1
by = max(y - r, 0)
ty = min(y + r + 1, CHUNK_HEIGHT)
bz = z - r
tz = z + r + 1
return product(xrange(bx, tx), xrange(by, ty), xrange(bz, tz))
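`split_coords` relies on `divmod` flooring toward negative infinity, so negative world coordinates still split into the correct chunk and a subchunk offset in 0..15; a standalone copy to illustrate:

```python
def split_coords(x, z):
    # divmod floors: -1 // 16 == -1 with remainder 15, so block -1 lands
    # in chunk -1 at subchunk offset 15.
    first, second = divmod(int(x), 16)
    third, fourth = divmod(int(z), 16)
    return first, second, third, fourth
```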
|
bravoserver/bravo
|
c96a57217f5337526a81c57f281cf35ad9d00e0d
|
utilities/coords: Add a couple more iterators.
|
diff --git a/bravo/utilities/coords.py b/bravo/utilities/coords.py
index be96ec8..f39763b 100644
--- a/bravo/utilities/coords.py
+++ b/bravo/utilities/coords.py
@@ -1,127 +1,140 @@
"""
Utilities for coordinate handling and munging.
"""
from itertools import product
from math import floor, ceil
from bravo.chunk import CHUNK_HEIGHT
def polar_round_vector(vector):
"""
Rounds a vector towards zero
"""
if vector[0] >= 0:
calculated_x = floor(vector[0])
else:
calculated_x = ceil(vector[0])
if vector[1] >= 0:
calculated_y = floor(vector[1])
else:
calculated_y = ceil(vector[1])
if vector[2] >= 0:
calculated_z = floor(vector[2])
else:
calculated_z = ceil(vector[2])
return calculated_x, calculated_y, calculated_z
def split_coords(x, z):
"""
Split a pair of coordinates into chunk and subchunk coordinates.
:param int x: the X coordinate
:param int z: the Z coordinate
:returns: a tuple of the X chunk, X subchunk, Z chunk, and Z subchunk
"""
first, second = divmod(int(x), 16)
third, fourth = divmod(int(z), 16)
return first, second, third, fourth
def taxicab2(x1, y1, x2, y2):
"""
Return the taxicab distance between two blocks.
"""
return abs(x1 - x2) + abs(y1 - y2)
def taxicab3(x1, y1, z1, x2, y2, z2):
"""
Return the taxicab distance between two blocks, in three dimensions.
"""
return abs(x1 - x2) + abs(y1 - y2) + abs(z1 - z2)
def adjust_coords_for_face(coords, face):
"""
Adjust a set of coords according to a face.
The face is a standard string descriptor, such as "+x".
The "noop" face is supported.
"""
x, y, z = coords
if face == "-x":
x -= 1
elif face == "+x":
x += 1
elif face == "-y":
y -= 1
elif face == "+y":
y += 1
elif face == "-z":
z -= 1
elif face == "+z":
z += 1
return x, y, z
+XZ = list(product(range(16), repeat=2))
+"""
+The xz-coords for a chunk.
+"""
+
+def iterchunk():
+ """
+ Yield an iterable of x, z, y coordinates for an entire chunk.
+ """
+
+ return product(range(16), range(16), range(256))
+
+
def iterneighbors(x, y, z):
"""
Yield an iterable of neighboring block coordinates.
The first item in the iterable is the original coordinates.
Coordinates with invalid Y values are discarded automatically.
"""
for (dx, dy, dz) in (
( 0, 0, 0),
( 0, 0, 1),
( 0, 0, -1),
( 0, 1, 0),
( 0, -1, 0),
( 1, 0, 0),
(-1, 0, 0)):
if 0 <= y + dy < CHUNK_HEIGHT:
yield x + dx, y + dy, z + dz
def itercube(x, y, z, r):
"""
Yield an iterable of coordinates in a cube around a given block.
Coordinates with invalid Y values are discarded automatically.
"""
bx = x - r
tx = x + r + 1
by = max(y - r, 0)
ty = min(y + r + 1, CHUNK_HEIGHT)
bz = z - r
tz = z + r + 1
return product(xrange(bx, tx), xrange(by, ty), xrange(bz, tz))
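`iterneighbors` yields the origin first and then the six face neighbors, silently dropping any candidate whose Y is out of range; a standalone sketch with the height constant inlined (the module imports it from `bravo.chunk`):

```python
CHUNK_HEIGHT = 256  # inlined here for a self-contained example

def iterneighbors(x, y, z):
    # Origin first, then the six axis-aligned neighbors; candidates whose
    # Y falls outside [0, CHUNK_HEIGHT) are skipped.
    for dx, dy, dz in ((0, 0, 0), (0, 0, 1), (0, 0, -1),
                       (0, 1, 0), (0, -1, 0), (1, 0, 0), (-1, 0, 0)):
        if 0 <= y + dy < CHUNK_HEIGHT:
            yield x + dx, y + dy, z + dz
```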
|
bravoserver/bravo
|
cca2977a8fd2cc8bb195f091c2c25734db80423a
|
And update locations using 128 as well.
|
diff --git a/bravo/plugins/generators.py b/bravo/plugins/generators.py
index 083356d..850b100 100644
--- a/bravo/plugins/generators.py
+++ b/bravo/plugins/generators.py
@@ -1,477 +1,477 @@
from __future__ import division
from array import array
from itertools import combinations, product
from random import Random
from zope.interface import implements
from bravo.blocks import blocks
from bravo.chunk import CHUNK_HEIGHT
from bravo.ibravo import ITerrainGenerator
from bravo.simplex import octaves2, octaves3, set_seed
from bravo.utilities.maths import morton2
R = Random()
class BoringGenerator(object):
"""
Generates boring slabs of flat stone.
This generator relies on implementation details of ``Chunk``.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Fill the bottom half of the chunk with stone.
"""
# Optimized fill. Fill the bottom eight sections with stone.
stone = array("B", [blocks["stone"].slot] * 16 * 16 * 16)
for section in chunk.sections[:8]:
section.blocks[:] = stone[:]
name = "boring"
before = tuple()
after = tuple()
class SimplexGenerator(object):
"""
Generates waves of stone.
This class uses a simplex noise generator to procedurally generate
organic-looking, continuously smooth terrain.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Make smooth waves of stone.
"""
set_seed(seed)
# And into one end he plugged the whole of reality as extrapolated
# from a piece of fairy cake, and into the other end he plugged his
# wife: so that when he turned it on she saw in one instant the whole
# infinity of creation and herself in relation to it.
factor = 1 / 256
for x, z in product(xrange(16), repeat=2):
magx = (chunk.x * 16 + x) * factor
magz = (chunk.z * 16 + z) * factor
height = octaves2(magx, magz, 6)
# Normalize around 70. Normalization is scaled according to a
# rotated cosine.
#scale = rotated_cosine(magx, magz, seed, 16 * 10)
height *= 15
height = int(height + 70)
# Make our chunk offset, and render into the chunk.
for y in range(height):
chunk.set_block((x, y, z), blocks["stone"].slot)
name = "simplex"
before = tuple()
after = tuple()
class ComplexGenerator(object):
"""
Generate islands of stone.
This class uses a simplex noise generator to procedurally generate
ridiculous things.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Make smooth islands of stone.
"""
set_seed(seed)
factor = 1 / 256
for x, z, y in product(xrange(16), xrange(16), xrange(CHUNK_HEIGHT)):
magx = (chunk.x * 16 + x) * factor
magz = (chunk.z * 16 + z) * factor
sample = octaves3(magx, magz, y * factor, 6)
if sample > 0.5:
chunk.set_block((x, y, z), blocks["stone"].slot)
name = "complex"
before = tuple()
after = tuple()
class WaterTableGenerator(object):
"""
Create a water table.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Generate a flat water table halfway up the map.
"""
for x, z, y in product(xrange(16), xrange(16), xrange(62)):
if chunk.get_block((x, y, z)) == blocks["air"].slot:
chunk.set_block((x, y, z), blocks["spring"].slot)
name = "watertable"
before = tuple()
after = ("trees", "caves")
class ErosionGenerator(object):
"""
Erodes stone surfaces into dirt.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Turn the top few layers of stone into dirt.
"""
chunk.regenerate_heightmap()
for x, z in product(xrange(16), repeat=2):
y = chunk.height_at(x, z)
if chunk.get_block((x, y, z)) == blocks["stone"].slot:
bottom = max(y - 3, 0)
for i in range(bottom, y + 1):
chunk.set_block((x, i, z), blocks["dirt"].slot)
name = "erosion"
before = ("boring", "simplex")
after = ("watertable",)
class GrassGenerator(object):
"""
Find exposed dirt and grow grass.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Find the top dirt block in each xz-column and turn it into grass.
"""
chunk.regenerate_heightmap()
for x, z in product(xrange(16), repeat=2):
y = chunk.height_at(x, z)
if (chunk.get_block((x, y, z)) == blocks["dirt"].slot and
(y == 127 or
chunk.get_block((x, y + 1, z)) == blocks["air"].slot)):
chunk.set_block((x, y, z), blocks["grass"].slot)
name = "grass"
before = ("erosion", "complex")
after = tuple()
class BeachGenerator(object):
"""
Generates simple beaches.
Beaches are areas of sand around bodies of water. This generator will form
beaches near all bodies of water regardless of size or composition; it
will form beaches at large seashores and frozen lakes. It will even place
beaches on one-block puddles.
"""
implements(ITerrainGenerator)
above = set([blocks["air"].slot, blocks["water"].slot,
blocks["spring"].slot, blocks["ice"].slot])
replace = set([blocks["dirt"].slot, blocks["grass"].slot])
def populate(self, chunk, seed):
"""
Find blocks within a height range and turn them into sand if they are
dirt and underwater or exposed to air. If the height range is near the
water table level, this creates fairly good beaches.
"""
chunk.regenerate_heightmap()
for x, z in product(xrange(16), repeat=2):
y = chunk.height_at(x, z)
while y > 60 and chunk.get_block((x, y, z)) in self.above:
y -= 1
if not 60 < y < 66:
continue
if chunk.get_block((x, y, z)) in self.replace:
chunk.set_block((x, y, z), blocks["sand"].slot)
name = "beaches"
before = ("erosion", "complex")
after = ("saplings",)
class OreGenerator(object):
"""
Place ores and clay.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
set_seed(seed)
xzfactor = 1 / 16
yfactor = 1 / 32
for x, z in product(xrange(16), repeat=2):
for y in range(chunk.height_at(x, z) + 1):
magx = (chunk.x * 16 + x) * xzfactor
magz = (chunk.z * 16 + z) * xzfactor
magy = y * yfactor
sample = octaves3(magx, magz, magy, 3)
if sample > 0.9999:
# Figure out what to place here.
old = chunk.get_block((x, y, z))
new = None
if old == blocks["sand"].slot:
# Sand becomes clay.
new = blocks["clay"].slot
elif old == blocks["dirt"].slot:
# Dirt becomes gravel.
new = blocks["gravel"].slot
elif old == blocks["stone"].slot:
# Stone becomes one of the ores.
if y < 12:
new = blocks["diamond-ore"].slot
elif y < 24:
new = blocks["gold-ore"].slot
elif y < 36:
new = blocks["redstone-ore"].slot
elif y < 48:
new = blocks["iron-ore"].slot
else:
new = blocks["coal-ore"].slot
if new:
chunk.set_block((x, y, z), new)
name = "ore"
before = ("erosion", "complex", "beaches")
after = tuple()
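The ore placement above replaces stone according to a simple depth ladder: rarer ores only below certain heights. The thresholds can be read as a lookup table; a sketch of that ladder on its own (block names mirror the slots used above):

```python
# Depth thresholds mirroring OreGenerator's stone-replacement ladder.
ORE_BY_DEPTH = [
    (12, "diamond-ore"),
    (24, "gold-ore"),
    (36, "redstone-ore"),
    (48, "iron-ore"),
]

def ore_for_height(y):
    """Pick the ore that would replace stone at height y."""
    for limit, ore in ORE_BY_DEPTH:
        if y < limit:
            return ore
    return "coal-ore"

print(ore_for_height(10))  # diamond-ore
print(ore_for_height(30))  # redstone-ore
print(ore_for_height(90))  # coal-ore
```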
class SafetyGenerator(object):
"""
Generates terrain features essential for the safety of clients.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Spread a layer of bedrock along the bottom of the chunk, and clear the
top two layers to avoid players getting stuck at the top.
"""
for x, z in product(xrange(16), repeat=2):
chunk.set_block((x, 0, z), blocks["bedrock"].slot)
chunk.set_block((x, 126, z), blocks["air"].slot)
chunk.set_block((x, 127, z), blocks["air"].slot)
name = "safety"
before = ("boring", "simplex", "complex", "cliffs", "float", "caves")
after = tuple()
class CliffGenerator(object):
"""
This generator creates cliffs by selectively applying an offset of the
noise map to blocks, based on height. Feel free to make this more
realistic.
This generator relies on implementation details of ``Chunk``.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Make smooth waves of stone, then compare to current landscape.
"""
set_seed(seed)
factor = 1 / 256
for x, z in product(xrange(16), repeat=2):
magx = ((chunk.x + 32) * 16 + x) * factor
magz = ((chunk.z + 32) * 16 + z) * factor
height = octaves2(magx, magz, 6)
height *= 15
height = int(height + 70)
current_height = chunk.heightmap[x * 16 + z]
if (-6 < current_height - height < 3 and
current_height > 63 and height > 63):
for y in range(height - 3):
chunk.set_block((x, y, z), blocks["stone"].slot)
- for y in range(y, 128):
+ for y in range(y, CHUNK_HEIGHT // 2):
chunk.set_block((x, y, z), blocks["air"].slot)
name = "cliffs"
before = tuple()
after = tuple()
class FloatGenerator(object):
"""
Rips chunks out of the map, to create surreal chunks of floating land.
This generator relies on implementation details of ``Chunk``.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Create floating islands.
"""
# Eat moar stone
R.seed(seed)
factor = 1 / 256
for x, z in product(xrange(16), repeat=2):
magx = ((chunk.x+16) * 16 + x) * factor
magz = ((chunk.z+16) * 16 + z) * factor
height = octaves2(magx, magz, 6)
height *= 15
height = int(height + 70)
if abs(chunk.heightmap[x * 16 + z] - height) < 10:
height = CHUNK_HEIGHT
else:
height = height - 30 + R.randint(-15, 10)
for y in range(height):
chunk.set_block((x, y, z), blocks["air"].slot)
name = "float"
before = tuple()
after = tuple()
class CaveGenerator(object):
"""
Carve caves and seams out of terrain.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Make smooth waves of stone.
"""
sede = seed ^ 0xcafebabe
xzfactor = 1 / 128
yfactor = 1 / 64
for x, z in product(xrange(16), repeat=2):
magx = (chunk.x * 16 + x) * xzfactor
magz = (chunk.z * 16 + z) * xzfactor
- for y in range(128):
+ for y in range(CHUNK_HEIGHT):
if not chunk.get_block((x, y, z)):
continue
magy = y * yfactor
set_seed(seed)
should_cave = abs(octaves3(magx, magz, magy, 3))
set_seed(sede)
should_cave *= abs(octaves3(magx, magz, magy, 3))
if should_cave < 0.002:
chunk.set_block((x, y, z), blocks["air"].slot)
name = "caves"
before = ("grass", "erosion", "simplex", "complex", "boring")
after = tuple()
class SaplingGenerator(object):
"""
Plant saplings at relatively silly places around the map.
"""
implements(ITerrainGenerator)
primes = [401, 409, 419, 421, 431, 433, 439, 443, 449, 457, 461, 463, 467,
479, 487, 491, 499, 503, 509, 521, 523, 541, 547, 557, 563, 569,
571, 577, 587, 593, 599, 601, 607, 613, 617, 619, 631, 641, 643,
647, 653, 659, 661, 673, 677, 683, 691]
"""
A list of prime numbers, used to select factors for trees.
"""
ground = (blocks["grass"].slot, blocks["dirt"].slot)
def populate(self, chunk, seed):
"""
Place saplings.
The algorithm used to pick locations for the saplings is quite
simple, although slightly involved. The basic technique is to
calculate a Morton number for every xz-column in the chunk, and then
use coprime offsets to sprinkle selected points fairly evenly
throughout the chunk.
Saplings are only placed on dirt and grass blocks.
"""
R.seed(seed)
factors = R.choice(list(combinations(self.primes, 3)))
for x, z in product(xrange(16), repeat=2):
# Make a Morton number.
morton = morton2(chunk.x * 16 + x, chunk.z * 16 + z)
if not all(morton % factor for factor in factors):
# Magic number is how many tree types are available
species = morton % 4
# Plant a sapling.
y = chunk.height_at(x, z)
if chunk.get_block((x, y, z)) in self.ground:
chunk.set_block((x, y + 1, z), blocks["sapling"].slot)
chunk.set_metadata((x, y + 1, z), species)
name = "saplings"
before = ("grass", "erosion", "simplex", "complex", "boring")
after = tuple()
diff --git a/bravo/terrain/trees.py b/bravo/terrain/trees.py
index 0eb660c..b98ce83 100644
--- a/bravo/terrain/trees.py
+++ b/bravo/terrain/trees.py
@@ -1,552 +1,554 @@
from __future__ import division
from itertools import product
from math import cos, pi, sin, sqrt
from random import choice, random, randint
from zope.interface import Interface, implements
from bravo.blocks import blocks
+from bravo.chunk import CHUNK_HEIGHT
+
PHI = (sqrt(5) - 1) * 0.5
IPHI = (sqrt(5) + 1) * 0.5
"""
Phi and inverse phi constants.
"""
# add lights in the middle of foliage clusters
# for those huge trees that get so dark underneath
# or for enchanted forests that should glow and stuff
# Only works if SHAPE is "round" or "cone" or "procedural"
# 0 makes just normal trees
# 1 adds one light inside the foliage clusters for a bit of light
# 2 adds two lights around the base of each cluster, for more light
# 4 adds lights all around the base of each cluster for lots of light
LIGHTTREE = 0
def dist_to_mat(cord, vec, matidxlist, world, invert=False, limit=None):
"""
Find the distance from the given coordinates to any of a set of blocks
along a certain vector.
"""
curcord = [i + .5 for i in cord]
iterations = 0
on_map = True
while on_map:
x = int(curcord[0])
y = int(curcord[1])
z = int(curcord[2])
- if not 0 <= y < 128:
+ if not 0 <= y < CHUNK_HEIGHT:
break
block = world.sync_get_block((x, y, z))
if block in matidxlist and not invert:
break
elif block not in matidxlist and invert:
break
else:
curcord = [curcord[i] + vec[i] for i in range(3)]
iterations += 1
if limit and iterations > limit:
break
return iterations
class ITree(Interface):
"""
An ideal Platonic tree.
Trees usually are made of some sort of wood, and are adorned with leaves.
These trees also may have lanterns hanging in their branches.
"""
def prepare(world):
"""
Do any post-__init__() setup.
"""
def make_trunk(world):
"""
Write a trunk to the world.
"""
def make_foliage(world):
"""
Write foliage (leaves, lanterns) to the world.
"""
class Tree(object):
"""
Set up the interface for tree objects. Designed for subclassing.
"""
implements(ITree)
def __init__(self, pos, height=None):
if height is None:
self.height = randint(4, 7)
else:
self.height = height
self.species = 0 # default to oak
self.pos = pos
def prepare(self, world):
pass
def make_trunk(self, world):
pass
def make_foliage(self, world):
pass
class StickTree(Tree):
"""
A large stick or log.
Subclass this to build trees which only require a single-log trunk.
"""
def make_trunk(self, world):
x, y, z = self.pos
for i in xrange(self.height):
world.sync_set_block((x, y, z), blocks["log"].slot)
world.sync_set_metadata((x, y, z), self.species)
y += 1
class NormalTree(StickTree):
"""
A Notchy tree.
This tree will be a single bulb of foliage above a single-width trunk.
The shape is very similar to the default Minecraft tree.
"""
def make_foliage(self, world):
topy = self.pos[1] + self.height - 1
start = topy - 2
end = topy + 2
for y in xrange(start, end):
if y > start + 1:
rad = 1
else:
rad = 2
for xoff, zoff in product(xrange(-rad, rad + 1), repeat=2):
if (random() > PHI and abs(xoff) == abs(zoff) == rad or
xoff == zoff == 0):
continue
x = self.pos[0] + xoff
z = self.pos[2] + zoff
world.sync_set_block((x, y, z), blocks["leaves"].slot)
world.sync_set_metadata((x, y, z), self.species)
class BambooTree(StickTree):
"""
A bamboo-like tree.
Bamboo foliage is sparse and always adjacent to the trunk.
"""
def make_foliage(self, world):
start = self.pos[1]
end = start + self.height + 1
for y in xrange(start, end):
for i in (0, 1):
xoff = choice([-1, 1])
zoff = choice([-1, 1])
x = self.pos[0] + xoff
z = self.pos[2] + zoff
world.sync_set_block((x, y, z), blocks["leaves"].slot)
class PalmTree(StickTree):
"""
A traditional palm tree.
This tree has four tufts of foliage at the top of the trunk. No coconuts,
though.
"""
def make_foliage(self, world):
y = self.pos[1] + self.height
for xoff, zoff in product(xrange(-2, 3), repeat=2):
if abs(xoff) == abs(zoff):
x = self.pos[0] + xoff
z = self.pos[2] + zoff
world.sync_set_block((x, y, z), blocks["leaves"].slot)
class ProceduralTree(Tree):
"""
Base class for larger, complex, procedurally generated trees.
This tree type has roots, a trunk, branches all of varying width, and many
foliage clusters.
This class needs to be subclassed to be useful. Specifically,
foliage_shape must be set. Subclass 'prepare' and 'shapefunc' to make
different shaped trees.
"""
def cross_section(self, center, radius, diraxis, matidx, world):
"""Create a round section of type matidx in world.
Passed values:
center = [x,y,z] for the coordinates of the center block
radius = <number> as the radius of the section. May be a float or int.
diraxis: The list index for the axis to make the section
perpendicular to. 0 indicates the x axis, 1 the y, 2 the z. The
section will extend along the other two axes.
matidx = <int> the integer value to make the section out of.
world = the world object to write blocks and metadata into.
"""
rad = int(radius + PHI)
if rad <= 0:
return None
secidx1 = (diraxis - 1) % 3
secidx2 = (1 + diraxis) % 3
coord = [0] * 3
for off1, off2 in product(xrange(-rad, rad + 1), repeat=2):
thisdist = sqrt((abs(off1) + .5)**2 + (abs(off2) + .5)**2)
if thisdist > radius:
continue
pri = center[diraxis]
sec1 = center[secidx1] + off1
sec2 = center[secidx2] + off2
coord[diraxis] = pri
coord[secidx1] = sec1
coord[secidx2] = sec2
world.sync_set_block(coord, matidx)
world.sync_set_metadata(coord, self.species)
def shapefunc(self, y):
"""
Obtain a radius for the given height.
Subclass this method to customize tree design.
If None is returned, no foliage cluster will be created.
:returns: radius, or None
"""
if random() < 100 / ((self.height)**2) and y < self.trunkheight:
return self.height * .12
return None
def foliage_cluster(self, center, world):
"""
Generate a round cluster of foliage at the location center.
The shape of the cluster is defined by the list self.foliage_shape.
This list must be set in a subclass of ProceduralTree.
"""
x = center[0]
y = center[1]
z = center[2]
for i in self.foliage_shape:
self.cross_section([x, y, z], i, 1, blocks["leaves"].slot, world)
y += 1
def taperedcylinder(self, start, end, startsize, endsize, world, blockdata):
"""
Create a tapered cylinder in world.
start and end are the beginning and ending coordinates of form [x,y,z].
startsize and endsize are the beginning and ending radii.
The material of the cylinder is given by blockdata.
"""
# delta is the coordinate vector for the difference between
# start and end.
delta = [int(e - s) for e, s in zip(end, start)]
# primidx is the index (0,1,or 2 for x,y,z) for the coordinate
# which has the largest overall delta.
maxdist = max(delta, key=abs)
if maxdist == 0:
return None
primidx = delta.index(maxdist)
# secidx1 and secidx2 are the remaining indices out of [0,1,2].
secidx1 = (primidx - 1) % 3
secidx2 = (1 + primidx) % 3
# primsign is the digit 1 or -1 depending on whether the limb is headed
# along the positive or negative primidx axis.
primsign = cmp(delta[primidx], 0) or 1
# secdelta1 and ...2 are the amount the associated values change
# for every step along the prime axis.
secdelta1 = delta[secidx1]
secfac1 = float(secdelta1)/delta[primidx]
secdelta2 = delta[secidx2]
secfac2 = float(secdelta2)/delta[primidx]
# Initialize coord. These values could be anything, since
# they are overwritten.
coord = [0] * 3
# Loop through each crossection along the primary axis,
# from start to end.
endoffset = delta[primidx] + primsign
for primoffset in xrange(0, endoffset, primsign):
primloc = start[primidx] + primoffset
secloc1 = int(start[secidx1] + primoffset*secfac1)
secloc2 = int(start[secidx2] + primoffset*secfac2)
coord[primidx] = primloc
coord[secidx1] = secloc1
coord[secidx2] = secloc2
primdist = abs(delta[primidx])
radius = endsize + (startsize-endsize) * abs(delta[primidx]
- primoffset) / primdist
self.cross_section(coord, radius, primidx, blockdata, world)
def make_foliage(self, world):
"""
Generate the foliage for the tree in world.
Also place lanterns.
"""
foliage_coords = self.foliage_cords
for coord in foliage_coords:
self.foliage_cluster(coord,world)
for x, y, z in foliage_coords:
world.sync_set_block((x, y, z), blocks["log"].slot)
world.sync_set_metadata((x, y, z), self.species)
if LIGHTTREE == 1:
world.sync_set_block((x, y + 1, z), blocks["lightstone"].slot)
elif LIGHTTREE in [2,4]:
world.sync_set_block((x + 1, y, z), blocks["lightstone"].slot)
world.sync_set_block((x - 1, y, z), blocks["lightstone"].slot)
if LIGHTTREE == 4:
world.sync_set_block((x, y, z + 1), blocks["lightstone"].slot)
world.sync_set_block((x, y, z - 1), blocks["lightstone"].slot)
def make_branches(self, world):
"""Generate the branches and enter them in world.
"""
treeposition = self.pos
height = self.height
topy = treeposition[1] + int(self.trunkheight + 0.5)
# endrad is the base radius of the branches at the trunk
endrad = max(self.trunkradius * (1 - self.trunkheight/height), 1)
for coord in self.foliage_cords:
dist = (sqrt(float(coord[0]-treeposition[0])**2 +
float(coord[2]-treeposition[2])**2))
ydist = coord[1]-treeposition[1]
# value is a magic number that weights the probability
# of generating branches properly so that
# you get enough on small trees, but not too many
# on larger trees.
# Very difficult to get right... do not touch!
value = (self.branchdensity * 220 * height)/((ydist + dist) ** 3)
if value < random():
continue
posy = coord[1]
slope = self.branchslope + (0.5 - random()) * .16
if coord[1] - dist*slope > topy:
# Another random rejection, for branches between
# the top of the trunk and the crown of the tree
threshold = 1 / height
if random() < threshold:
continue
branchy = topy
basesize = endrad
else:
branchy = posy-dist*slope
basesize = (endrad + (self.trunkradius-endrad) *
(topy - branchy) / self.trunkheight)
startsize = (basesize * (1 + random()) * PHI *
(dist/height)**PHI)
if startsize < 1.0:
startsize = 1.0
rndr = sqrt(random()) * basesize * PHI
rndang = random() * 2 * pi
rndx = int(rndr*sin(rndang) + 0.5)
rndz = int(rndr*cos(rndang) + 0.5)
startcoord = [treeposition[0]+rndx,
int(branchy),
treeposition[2]+rndz]
endsize = 1.0
self.taperedcylinder(startcoord, coord, startsize, endsize, world,
blocks["log"].slot)
def make_trunk(self, world):
"""
Make the trunk, roots, buttresses, branches, etc.
"""
# In this method, x and z are the position of the trunk.
x, starty, z = self.pos
midy = starty + int(self.trunkheight / (PHI + 1))
topy = starty + int(self.trunkheight + 0.5)
end_size_factor = self.trunkheight / self.height
endrad = max(self.trunkradius * (1 - end_size_factor), 1)
midrad = max(self.trunkradius * (1 - end_size_factor * .5), endrad)
# Make the lower and upper sections of the trunk.
self.taperedcylinder([x,starty,z], [x,midy,z], self.trunkradius,
midrad, world, blocks["log"].slot)
self.taperedcylinder([x,midy,z], [x,topy,z], midrad, endrad, world,
blocks["log"].slot)
#Make the branches.
self.make_branches(world)
def prepare(self, world):
"""
Initialize the internal values for the Tree object.
Primarily, sets up the foliage cluster locations.
"""
treeposition = self.pos
self.trunkradius = PHI * sqrt(self.height)
if self.trunkradius < 1:
self.trunkradius = 1
self.trunkheight = self.height
yend = int(treeposition[1] + self.height)
self.branchdensity = 1.0
foliage_coords = []
ystart = treeposition[1]
num_of_clusters_per_y = int(1.5 + (self.height / 19)**2)
if num_of_clusters_per_y < 1:
num_of_clusters_per_y = 1
# make sure we don't spend too much time off the top of the map
if yend > 255:
yend = 255
if ystart > 255:
ystart = 255
for y in xrange(yend, ystart, -1):
for i in xrange(num_of_clusters_per_y):
shapefac = self.shapefunc(y - ystart)
if shapefac is None:
continue
r = (sqrt(random()) + .328)*shapefac
theta = random()*2*pi
x = int(r*sin(theta)) + treeposition[0]
z = int(r*cos(theta)) + treeposition[2]
foliage_coords += [[x,y,z]]
self.foliage_cords = foliage_coords
class RoundTree(ProceduralTree):
"""
A rounded deciduous tree.
"""
branchslope = 1 / (PHI + 1)
foliage_shape = [2, 3, 3, 2.5, 1.6]
def prepare(self, world):
self.species = 2 # birch wood
ProceduralTree.prepare(self, world)
self.trunkradius *= 0.8
self.trunkheight *= 0.7
def shapefunc(self, y):
twigs = ProceduralTree.shapefunc(self, y)
if twigs is not None:
return twigs
if y < self.height * (.282 + .1 * sqrt(random())) :
return None
radius = self.height / 2
adj = self.height / 2 - y
if adj == 0 :
dist = radius
elif abs(adj) >= radius:
dist = 0
else:
dist = sqrt((radius**2) - (adj**2))
dist *= PHI
return dist
class ConeTree(ProceduralTree):
"""
A conifer.
"""
branchslope = 0.15
foliage_shape = [3, 2.6, 2, 1]
def prepare(self, world):
self.species = 1 # pine wood
ProceduralTree.prepare(self, world)
self.trunkradius *= 0.5
def shapefunc(self, y):
twigs = ProceduralTree.shapefunc(self, y)
if twigs is not None:
return twigs
if y < self.height * (.25 + .05 * sqrt(random())):
return None
# Radius.
return max((self.height - y) / (PHI + 1), 0)
class RainforestTree(ProceduralTree):
"""
A big rainforest tree.
"""
branchslope = 1
foliage_shape = [3.4, 2.6]
def prepare(self, world):
self.species = 3 # jungle wood
# TODO: play with these numbers until jungles look right
self.height = randint(10, 20)
self.trunkradius = randint(5, 15)
ProceduralTree.prepare(self, world)
self.trunkradius /= PHI + 1
self.trunkheight *= .9
def shapefunc(self, y):
# Bottom 4/5 of the tree is probably branch-free.
if y < self.height * 0.8:
twigs = ProceduralTree.shapefunc(self,y)
if twigs is not None and random() < 0.07:
return twigs
return None
else:
width = self.height * 1 / (IPHI + 1)
topdist = (self.height - y) / (self.height * 0.2)
dist = width * (PHI + topdist) * (PHI + random()) * 1 / (IPHI + 1)
return dist
class MangroveTree(RoundTree):
"""
A mangrove tree.
Like the round deciduous tree, but bigger, taller, and generally more
awesome.
"""
branchslope = 1
def prepare(self, world):
RoundTree.prepare(self, world)
self.trunkradius *= PHI
def shapefunc(self, y):
val = RoundTree.shapefunc(self, y)
if val is not None:
val *= IPHI
return val
def make_roots(self, rootbases, world):
"""Generate the roots and enter them in world.
rootbases = [[x,z,base_radius], ...] and is the list of locations
the roots can originate from, and the size of that location.
"""
treeposition = self.pos
height = self.height
for coord in self.foliage_cords:
# First, set the threshold for randomly selecting this
diff --git a/bravo/tests/plugins/test_generators.py b/bravo/tests/plugins/test_generators.py
index de53dfe..72f5426 100644
--- a/bravo/tests/plugins/test_generators.py
+++ b/bravo/tests/plugins/test_generators.py
@@ -1,78 +1,78 @@
import unittest
from itertools import product
import bravo.blocks
from bravo.chunk import Chunk, CHUNK_HEIGHT
import bravo.ibravo
import bravo.plugin
class TestGenerators(unittest.TestCase):
def setUp(self):
self.chunk = Chunk(0, 0)
self.p = bravo.plugin.retrieve_plugins(bravo.ibravo.ITerrainGenerator)
def test_trivial(self):
pass
def test_boring(self):
if "boring" not in self.p:
raise unittest.SkipTest("plugin not present")
plugin = self.p["boring"]
plugin.populate(self.chunk, 0)
for x, y, z in product(xrange(16), xrange(CHUNK_HEIGHT), xrange(16)):
- if y < 128:
+ if y < CHUNK_HEIGHT // 2:
self.assertEqual(self.chunk.get_block((x, y, z)),
bravo.blocks.blocks["stone"].slot)
else:
self.assertEqual(self.chunk.get_block((x, y, z)),
bravo.blocks.blocks["air"].slot)
def test_beaches_range(self):
if "beaches" not in self.p:
raise unittest.SkipTest("plugin not present")
plugin = self.p["beaches"]
# Prepare chunk.
for i in range(5):
self.chunk.set_block((i, 61 + i, i),
bravo.blocks.blocks["dirt"].slot)
plugin.populate(self.chunk, 0)
for i in range(5):
self.assertEqual(self.chunk.get_block((i, 61 + i, i)),
bravo.blocks.blocks["sand"].slot,
"%d, %d, %d is wrong" % (i, 61 + i, i))
def test_beaches_immersed(self):
"""
Test that beaches still generate properly around pre-existing water
tables.
This test is meant to ensure that the order of beaches and watertable
does not matter.
"""
if "beaches" not in self.p:
raise unittest.SkipTest("plugin not present")
plugin = self.p["beaches"]
# Prepare chunk.
for x, z, y in product(xrange(16), xrange(16), xrange(60, 64)):
self.chunk.set_block((x, y, z),
bravo.blocks.blocks["spring"].slot)
for i in range(5):
self.chunk.set_block((i, 61 + i, i),
bravo.blocks.blocks["dirt"].slot)
plugin.populate(self.chunk, 0)
for i in range(5):
self.assertEqual(self.chunk.get_block((i, 61 + i, i)),
bravo.blocks.blocks["sand"].slot,
"%d, %d, %d is wrong" % (i, 61 + i, i))
diff --git a/tools/simplexbench.py b/tools/simplexbench.py
index 0f15cbe..d218bad 100755
--- a/tools/simplexbench.py
+++ b/tools/simplexbench.py
@@ -1,55 +1,56 @@
#!/usr/bin/env python
from functools import wraps
from time import time
from bravo.simplex import set_seed, simplex2, simplex3, octaves2, octaves3
+from bravo.chunk import CHUNK_HEIGHT
print "Be patient; this benchmark takes a minute or so to run each test."
chunk2d = 16 * 16
-chunk3d = chunk2d * 128
+chunk3d = chunk2d * CHUNK_HEIGHT
set_seed(time())
def timed(f):
@wraps(f)
def deco():
before = time()
for i in range(1000000):
f(i)
after = time()
t = after - before
actual = t / 1000
print ("Time taken for %s: %f seconds" % (f, t))
print ("Time for one call: %d ms" % (actual))
print ("Time to fill a chunk by column: %d ms"
% (chunk2d * actual))
print ("Time to fill a chunk by block: %d ms"
% (chunk3d * actual))
print ("Time to fill 315 chunks by column: %d ms"
% (315 * chunk2d * actual))
print ("Time to fill 315 chunks by block: %d ms"
% (315 * chunk3d * actual))
return deco
@timed
def time_simplex2(i):
simplex2(i, i)
@timed
def time_simplex3(i):
simplex3(i, i, i)
@timed
def time_octaves2(i):
octaves2(i, i, 5)
@timed
def time_octaves3(i):
octaves3(i, i, i, 5)
time_simplex2()
time_simplex3()
time_octaves2()
time_octaves3()
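The benchmark above wraps each noise function in a decorator that runs it a million times and extrapolates per-call and per-chunk costs. A smaller Python 3 sketch of the same decorator pattern (the iteration count and names here are illustrative, not the script's):

```python
from functools import wraps
from time import perf_counter

ITERATIONS = 100000  # far fewer than the script's million, for a quick run

def timed(f):
    """Report the per-call cost of f over many iterations."""
    @wraps(f)
    def deco():
        before = perf_counter()
        for i in range(ITERATIONS):
            f(i)
        elapsed = perf_counter() - before
        per_call_ms = elapsed / ITERATIONS * 1000
        print("%s: %f s total, %f ms/call" % (f.__name__, elapsed, per_call_ms))
        return per_call_ms
    return deco

@timed
def square(i):
    return i * i

per_call_ms = square()
```

Because of `wraps`, the decorated function keeps its original name in the report, which matters when several benchmarks print side by side.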
|
bravoserver/bravo
|
60b0b9a29de7449b5ec5bc04003fc487b3ba6f9c
|
Use CHUNK_HEIGHT in place of 256.
|
diff --git a/bravo/beta/protocol.py b/bravo/beta/protocol.py
index af5efc9..d2cbd93 100644
--- a/bravo/beta/protocol.py
+++ b/bravo/beta/protocol.py
@@ -1,1004 +1,1006 @@
# vim: set fileencoding=utf8 :
from itertools import product, chain
from time import time
from urlparse import urlunparse
from twisted.internet import reactor
from twisted.internet.defer import (DeferredList, inlineCallbacks,
maybeDeferred, succeed)
from twisted.internet.protocol import Protocol
from twisted.internet.task import cooperate, deferLater, LoopingCall
from twisted.internet.task import TaskDone, TaskFailed
from twisted.protocols.policies import TimeoutMixin
from twisted.python import log
from twisted.web.client import getPage
from bravo import version
from bravo.beta.structures import BuildData, Settings
from bravo.blocks import blocks, items
+from bravo.chunk import CHUNK_HEIGHT
from bravo.entity import Sign
from bravo.errors import BetaClientError, BuildError
from bravo.ibravo import (IChatCommand, IPreBuildHook, IPostBuildHook,
IWindowOpenHook, IWindowClickHook, IWindowCloseHook,
IPreDigHook, IDigHook, ISignHook, IUseHook)
from bravo.infini.factory import InfiniClientFactory
from bravo.inventory.windows import InventoryWindow
from bravo.location import Location, Orientation, Position
from bravo.motd import get_motd
from bravo.beta.packets import parse_packets, make_packet, make_error_packet
from bravo.plugin import retrieve_plugins
from bravo.policy.dig import dig_policies
from bravo.utilities.coords import adjust_coords_for_face, split_coords
from bravo.utilities.chat import complete, username_alternatives
from bravo.utilities.maths import circling, clamp, sorted_by_distance
from bravo.utilities.temporal import timestamp_from_clock
# States of the protocol.
(STATE_UNAUTHENTICATED, STATE_AUTHENTICATED, STATE_LOCATED) = range(3)
SUPPORTED_PROTOCOL = 60
class BetaServerProtocol(object, Protocol, TimeoutMixin):
"""
The Minecraft Alpha/Beta server protocol.
This class is mostly designed to be a skeleton for featureful clients. It
tries hard to not step on the toes of potential subclasses.
"""
excess = ""
packet = None
state = STATE_UNAUTHENTICATED
buf = ""
parser = None
handler = None
player = None
username = None
settings = Settings("en_US", "normal")
motd = "Bravo Generic Beta Server"
_health = 20
_latency = 0
def __init__(self):
self.chunks = dict()
self.windows = {}
self.wid = 1
self.location = Location()
self.handlers = {
0: self.ping,
2: self.handshake,
3: self.chat,
7: self.use,
9: self.respawn,
10: self.grounded,
11: self.position,
12: self.orientation,
13: self.location_packet,
14: self.digging,
15: self.build,
16: self.equip,
18: self.animate,
19: self.action,
21: self.pickup,
101: self.wclose,
102: self.waction,
106: self.wacknowledge,
107: self.wcreative,
130: self.sign,
203: self.complete,
204: self.settings_packet,
254: self.poll,
255: self.quit,
}
self._ping_loop = LoopingCall(self.update_ping)
self.setTimeout(30)
# Low-level packet handlers
# Try not to hook these if possible, since they offer no convenient
# abstractions or protections.
def ping(self, container):
"""
Hook for ping packets.
By default, this hook will examine the timestamps on incoming pings,
and use them to estimate the current latency of the connected client.
"""
now = timestamp_from_clock(reactor)
then = container.pid
self.latency = now - then
def handshake(self, container):
"""
Hook for handshake packets.
Override this to customize how logins are handled. By default, this
method will only confirm that the negotiated wire protocol is the
correct version, copy data out of the packet and onto the protocol,
and then run the ``authenticated`` callback.
This method will call the ``pre_handshake`` method hook prior to
logging in the client.
"""
self.username = container.username
if container.protocol < SUPPORTED_PROTOCOL:
# Kick old clients.
self.error("This server doesn't support your ancient client.")
return
elif container.protocol > SUPPORTED_PROTOCOL:
# Kick new clients.
self.error("This server doesn't support your newfangled client.")
return
log.msg("Handshaking with client, protocol version %d" %
container.protocol)
if not self.pre_handshake():
log.msg("Pre-handshake hook failed; kicking client")
self.error("You failed the pre-handshake hook.")
return
players = min(self.factory.limitConnections, 20)
self.write_packet("login", eid=self.eid, leveltype="default",
mode=self.factory.mode,
dimension=self.factory.world.dimension,
difficulty="peaceful", unused=0, maxplayers=players)
self.authenticated()
def pre_handshake(self):
"""
Hook called before login. Return whether this client should be logged in.
"""
return True
def chat(self, container):
"""
Hook for chat packets.
"""
def use(self, container):
"""
Hook for use packets.
"""
def respawn(self, container):
"""
Hook for respawn packets.
"""
def grounded(self, container):
"""
Hook for grounded packets.
"""
self.location.grounded = bool(container.grounded)
def position(self, container):
"""
Hook for position packets.
"""
if self.state != STATE_LOCATED:
log.msg("Ignoring unlocated position!")
return
self.grounded(container.grounded)
old_position = self.location.pos
position = Position.from_player(container.position.x,
container.position.y, container.position.z)
altered = False
dx, dy, dz = old_position - position
if any(abs(d) >= 64 for d in (dx, dy, dz)):
# Whoa, slow down there, cowboy. You're moving too fast. We're
# gonna ignore this position change completely, because it's
# either bogus or ignoring a recent teleport.
altered = True
else:
self.location.pos = position
self.location.stance = container.position.stance
# Sanitize location. This handles safety boundaries, illegal stance,
# etc.
altered = self.location.clamp() or altered
# If, for any reason, our opinion on where the client should be
# located is different than theirs, force them to conform to our point
# of view.
if altered:
log.msg("Not updating bogus position!")
self.update_location()
# If our position actually changed, fire the position change hook.
if old_position != position:
self.position_changed()
def orientation(self, container):
"""
Hook for orientation packets.
"""
self.grounded(container.grounded)
old_orientation = self.location.ori
orientation = Orientation.from_degs(container.orientation.rotation,
container.orientation.pitch)
self.location.ori = orientation
if old_orientation != orientation:
self.orientation_changed()
def location_packet(self, container):
"""
Hook for location packets.
"""
self.position(container)
self.orientation(container)
def digging(self, container):
"""
Hook for digging packets.
"""
def build(self, container):
"""
Hook for build packets.
"""
def equip(self, container):
"""
Hook for equip packets.
"""
def pickup(self, container):
"""
Hook for pickup packets.
"""
def animate(self, container):
"""
Hook for animate packets.
"""
def action(self, container):
"""
Hook for action packets.
"""
def wclose(self, container):
"""
Hook for wclose packets.
"""
def waction(self, container):
"""
Hook for waction packets.
"""
def wacknowledge(self, container):
"""
Hook for wacknowledge packets.
"""
def wcreative(self, container):
"""
Hook for creative inventory action packets.
"""
def sign(self, container):
"""
Hook for sign packets.
"""
def complete(self, container):
"""
Hook for tab-completion packets.
"""
def settings_packet(self, container):
"""
Hook for client settings packets.
"""
distance = ["far", "normal", "short", "tiny"][container.distance]
self.settings = Settings(container.locale, distance)
def poll(self, container):
"""
Hook for poll packets.
By default, queries the parent factory for some data, and relays it
in a specific format to the requester. The connection is then closed
at both ends. This functionality is used by Beta 1.8 clients to poll
servers for status.
"""
players = unicode(len(self.factory.protocols))
max_players = unicode(self.factory.limitConnections or 1000000)
data = [
u"§1",
unicode(SUPPORTED_PROTOCOL),
u"Bravo %s" % version,
self.motd,
players,
max_players,
]
response = u"\u0000".join(data)
self.error(response)
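The ``poll`` hook above answers the Beta 1.8 server-list ping: six fields, joined by NUL characters and led by the "\xa71" magic marker. A minimal standalone sketch of the payload format follows; ``make_poll_response`` and its argument values are illustrative, not names from the codebase, and it is written for Python 3 for clarity even though the surrounding code is Python 2.

```python
SUPPORTED_PROTOCOL = 60

def make_poll_response(motd, players, max_players, version="2.0"):
    # The client expects exactly six NUL-separated fields, led by the
    # "section sign" + "1" magic marker.
    fields = [
        "\xa71",
        str(SUPPORTED_PROTOCOL),
        "Bravo %s" % version,
        motd,
        str(players),
        str(max_players),
    ]
    return "\x00".join(fields)

response = make_poll_response("Bravo Generic Beta Server", 3, 20)
```

Sending this string through ``error()`` both delivers the status line and closes the connection, which is exactly what polling clients expect.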
def quit(self, container):
"""
Hook for quit packets.
By default, merely logs the quit message and drops the connection.
Even if the connection is not dropped, it will be lost anyway since
the client will close the connection. It's better to explicitly let it
go here than to have zombie protocols.
"""
log.msg("Client is quitting: %s" % container.message)
self.transport.loseConnection()
# Twisted-level data handlers and methods
# Please don't override these needlessly, as they are pretty solid and
# shouldn't need to be touched.
def dataReceived(self, data):
self.buf += data
packets, self.buf = parse_packets(self.buf)
if packets:
self.resetTimeout()
for header, payload in packets:
if header in self.handlers:
d = maybeDeferred(self.handlers[header], payload)
@d.addErrback
def eb(failure):
log.err("Error while handling packet %d" % header)
log.err(failure)
return None
else:
log.err("Didn't handle parseable packet %d!" % header)
log.err(payload)
def connectionLost(self, reason):
if self._ping_loop.running:
self._ping_loop.stop()
def timeoutConnection(self):
self.error("Connection timed out")
# State-change callbacks
# Feel free to override these, but call them at some point.
def authenticated(self):
"""
Called when the client has successfully authenticated with the server.
"""
self.state = STATE_AUTHENTICATED
self._ping_loop.start(30)
# Event callbacks
# These are meant to be overridden.
def orientation_changed(self):
"""
Called when the client moves.
This callback is only for orientation, not position.
"""
pass
def position_changed(self):
"""
Called when the client moves.
This callback is only for position, not orientation.
"""
pass
# Convenience methods for consolidating code and expressing intent. I
# hear that these are occasionally useful. If a method in this section can
# be used, then *PLEASE* use it; not using it is the same as open-coding
# whatever you're doing, and only hurts in the long run.
def write_packet(self, header, **payload):
"""
Send a packet to the client.
"""
self.transport.write(make_packet(header, **payload))
def update_ping(self):
"""
Send a keepalive to the client.
"""
timestamp = timestamp_from_clock(reactor)
self.write_packet("ping", pid=timestamp)
def update_location(self):
"""
Send this client's location to the client.
Also let other clients know where this client is.
"""
# Don't bother trying to update things if the position's not yet
# synchronized. We could end up jettisoning them into the void.
if self.state != STATE_LOCATED:
return
x, y, z = self.location.pos
yaw, pitch = self.location.ori.to_fracs()
# Inform everybody of our new location.
packet = make_packet("teleport", eid=self.player.eid, x=x, y=y, z=z,
yaw=yaw, pitch=pitch)
self.factory.broadcast_for_others(packet, self)
# Inform ourselves of our new location.
packet = self.location.save_to_packet()
self.transport.write(packet)
def ascend(self, count):
"""
Ascend to the next XZ-plane.
``count`` is the number of ascensions to perform, and may be zero in
order to force this player to not be standing inside a block.
:returns: bool of whether the ascension was successful
This client must be located for this method to have any effect.
"""
if self.state != STATE_LOCATED:
return False
x, y, z = self.location.pos.to_block()
bigx, smallx, bigz, smallz = split_coords(x, z)
chunk = self.chunks[bigx, bigz]
- column = [chunk.get_block((smallx, i, smallz)) for i in range(256)]
+ column = [chunk.get_block((smallx, i, smallz))
+ for i in range(CHUNK_HEIGHT)]
# Special case: Ascend at most once, if the current spot isn't good.
if count == 0:
if (not column[y]) or column[y + 1] or column[y + 2]:
# Yeah, we're gonna need to move.
count += 1
else:
# Nope, we're fine where we are.
return True
for i in xrange(y, CHUNK_HEIGHT - 2):
# Find the next spot above us which has a platform and two empty
# blocks of air.
if column[i] and (not column[i + 1]) and not column[i + 2]:
count -= 1
if not count:
break
else:
return False
self.location.pos = self.location.pos._replace(y=i * 32)
return True
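The scan inside ``ascend()`` looks upward for the next spot with a solid platform under two air blocks. That search can be sketched in isolation; ``find_standing_spot`` is a hypothetical helper (not in the codebase), a column is a plain list of block IDs with 0 meaning air, and the sketch is Python-3-compatible.

```python
def find_standing_spot(column, y, count=1):
    # Look upward from y for the count-th spot that has a solid
    # platform (column[i]) beneath two air blocks (i + 1 and i + 2).
    for i in range(y, len(column) - 2):
        if column[i] and not column[i + 1] and not column[i + 2]:
            count -= 1
            if not count:
                return i
    return None

# A 256-block column: stone from 0-63, a ledge at 70, air elsewhere.
column = [1] * 64 + [0] * 6 + [1] + [0] * 185
spot = find_standing_spot(column, 64)
```

Starting the scan at the player's current height mirrors the method above; a ``None`` result corresponds to the ``for``/``else`` branch that returns ``False``.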
def error(self, message):
"""
Error out.
This method sends ``message`` to the client as a descriptive error
message, then closes the connection.
"""
log.msg("Error: %s" % message)
self.transport.write(make_error_packet(message))
self.transport.loseConnection()
def play_notes(self, notes):
"""
Play some music.
Send a sequence of notes to the player. ``notes`` is a finite iterable
of pairs of instruments and pitches.
There is no way to time notes; if staggered playback is desired (and
it usually is!), then ``play_notes()`` should be called repeatedly at
the appropriate times.
This method turns the block beneath the player into a note block,
plays the requested notes through it, then turns it back into the
original block, all without actually modifying the chunk.
"""
x, y, z = self.location.pos.to_block()
if y:
y -= 1
bigx, smallx, bigz, smallz = split_coords(x, z)
if (bigx, bigz) not in self.chunks:
return
block = self.chunks[bigx, bigz].get_block((smallx, y, smallz))
meta = self.chunks[bigx, bigz].get_metadata((smallx, y, smallz))
self.write_packet("block", x=x, y=y, z=z,
type=blocks["note-block"].slot, meta=0)
for instrument, pitch in notes:
self.write_packet("note", x=x, y=y, z=z, pitch=pitch,
instrument=instrument)
self.write_packet("block", x=x, y=y, z=z, type=block, meta=meta)
# Automatic properties. Assigning to them causes the client to be notified
# of changes.
@property
def health(self):
return self._health
@health.setter
def health(self, value):
if not 0 <= value <= 20:
raise BetaClientError("Invalid health value %d" % value)
if self._health != value:
self.write_packet("health", hp=value, fp=0, saturation=0)
self._health = value
@property
def latency(self):
return self._latency
@latency.setter
def latency(self, value):
# Clamp the value to not exceed the boundaries of the packet. This is
# necessary even though, in theory, a ping this high is bad news.
value = clamp(value, 0, 65535)
# Check to see if this is a new value, and if so, alert everybody.
if self._latency != value:
packet = make_packet("players", name=self.username, online=True,
ping=value)
self.factory.broadcast(packet)
self._latency = value
class KickedProtocol(BetaServerProtocol):
"""
A very simple Beta protocol that helps enforce IP bans, Max Connections,
and Max Connections Per IP.
This protocol disconnects people as soon as they connect, with a helpful
message.
"""
def __init__(self, reason=None):
if reason:
self.reason = reason
else:
self.reason = (
"This server doesn't like you very much."
" I don't like you very much either.")
def connectionMade(self):
self.error("%s" % self.reason)
class BetaProxyProtocol(BetaServerProtocol):
"""
A ``BetaServerProtocol`` that proxies for an InfiniCraft client.
"""
gateway = "server.wiki.vg"
def handshake(self, container):
self.write_packet("handshake", username="-")
def login(self, container):
self.username = container.username
self.write_packet("login", protocol=0, username="", seed=0,
dimension="earth")
url = urlunparse(("http", self.gateway, "/node/0/0/", None, None,
None))
d = getPage(url)
d.addCallback(self.start_proxy)
def start_proxy(self, response):
log.msg("Response: %s" % response)
log.msg("Starting proxy...")
address, port = response.split(":")
self.add_node(address, int(port))
def add_node(self, address, port):
"""
Add a new node to this client.
"""
from twisted.internet.endpoints import TCP4ClientEndpoint
log.msg("Adding node %s:%d" % (address, port))
endpoint = TCP4ClientEndpoint(reactor, address, port, 5)
self.node_factory = InfiniClientFactory()
d = endpoint.connect(self.node_factory)
d.addCallback(self.node_connected)
d.addErrback(self.node_connect_error)
def node_connected(self, protocol):
log.msg("Connected new node!")
def node_connect_error(self, reason):
log.err("Couldn't connect node!")
log.err(reason)
class BravoProtocol(BetaServerProtocol):
"""
A ``BetaServerProtocol`` suitable for serving MC worlds to clients.
This protocol really does need to be hooked up with a ``BravoFactory`` or
something very much like it.
"""
chunk_tasks = None
time_loop = None
eid = 0
last_dig = None
def __init__(self, config, name):
BetaServerProtocol.__init__(self)
self.config = config
self.config_name = "world %s" % name
# Retrieve the MOTD. Only needs to be done once.
self.motd = self.config.getdefault(self.config_name, "motd",
"BravoServer")
def register_hooks(self):
log.msg("Registering client hooks...")
plugin_types = {
"open_hooks": IWindowOpenHook,
"click_hooks": IWindowClickHook,
"close_hooks": IWindowCloseHook,
"pre_build_hooks": IPreBuildHook,
"post_build_hooks": IPostBuildHook,
"pre_dig_hooks": IPreDigHook,
"dig_hooks": IDigHook,
"sign_hooks": ISignHook,
"use_hooks": IUseHook,
}
for t in plugin_types:
setattr(self, t, getattr(self.factory, t))
log.msg("Registering policies...")
if self.factory.mode == "creative":
self.dig_policy = dig_policies["speedy"]
else:
self.dig_policy = dig_policies["notchy"]
log.msg("Registered client plugin hooks!")
def pre_handshake(self):
"""
Set up username and get going.
"""
if self.username in self.factory.protocols:
# This username's already taken; find a new one.
for name in username_alternatives(self.username):
if name not in self.factory.protocols:
self.username = name
break
else:
self.error("Your username is already taken.")
return False
return True
@inlineCallbacks
def authenticated(self):
BetaServerProtocol.authenticated(self)
# Init player, and copy data into it.
self.player = yield self.factory.world.load_player(self.username)
self.player.eid = self.eid
self.location = self.player.location
# Init players' inventory window.
self.inventory = InventoryWindow(self.player.inventory)
# *Now* we are in our factory's list of protocols. Be aware.
self.factory.protocols[self.username] = self
# Announce our presence.
self.factory.chat("%s is joining the game..." % self.username)
packet = make_packet("players", name=self.username, online=True,
ping=0)
self.factory.broadcast(packet)
# Craft our avatar and send it to already-connected other players.
packet = make_packet("create", eid=self.player.eid)
packet += self.player.save_to_packet()
self.factory.broadcast_for_others(packet, self)
# And of course spawn all of those players' avatars in our client as
# well.
for protocol in self.factory.protocols.itervalues():
# Skip over ourselves; otherwise, the client tweaks out and
# usually either dies or locks up.
if protocol is self:
continue
self.write_packet("create", eid=protocol.player.eid)
packet = protocol.player.save_to_packet()
packet += protocol.player.save_equipment_to_packet()
self.transport.write(packet)
# Send spawn and inventory.
spawn = self.factory.world.level.spawn
packet = make_packet("spawn", x=spawn[0], y=spawn[1], z=spawn[2])
packet += self.inventory.save_to_packet()
self.transport.write(packet)
# Send weather.
self.transport.write(self.factory.vane.make_packet())
self.send_initial_chunk_and_location()
self.time_loop = LoopingCall(self.update_time)
self.time_loop.start(10)
def orientation_changed(self):
# Bang your head!
yaw, pitch = self.location.ori.to_fracs()
packet = make_packet("entity-orientation", eid=self.player.eid,
yaw=yaw, pitch=pitch)
self.factory.broadcast_for_others(packet, self)
def position_changed(self):
# Send chunks.
self.update_chunks()
for entity in self.entities_near(2):
if entity.name != "Item":
continue
left = self.player.inventory.add(entity.item, entity.quantity)
if left != entity.quantity:
if left != 0:
# partial collect
entity.quantity = left
else:
packet = make_packet("collect", eid=entity.eid,
destination=self.player.eid)
packet += make_packet("destroy", count=1, eid=[entity.eid])
self.factory.broadcast(packet)
self.factory.destroy_entity(entity)
packet = self.inventory.save_to_packet()
self.transport.write(packet)
def entities_near(self, radius):
"""
Obtain the entities within a radius of this player.
Radius is measured in blocks.
"""
chunk_radius = int(radius // 16 + 1)
chunkx, chaff, chunkz, chaff = split_coords(self.location.pos.x,
self.location.pos.z)
minx = chunkx - chunk_radius
maxx = chunkx + chunk_radius + 1
minz = chunkz - chunk_radius
maxz = chunkz + chunk_radius + 1
for x, z in product(xrange(minx, maxx), xrange(minz, maxz)):
if (x, z) not in self.chunks:
continue
chunk = self.chunks[x, z]
yieldables = [entity for entity in chunk.entities
if self.location.distance(entity.location) <= (radius * 32)]
for i in yieldables:
yield i
def chat(self, container):
if container.message.startswith("/"):
pp = {"factory": self.factory}
commands = retrieve_plugins(IChatCommand, factory=self.factory)
# Register aliases.
for plugin in commands.values():
for alias in plugin.aliases:
commands[alias] = plugin
params = container.message[1:].split(" ")
command = params.pop(0).lower()
if command and command in commands:
def cb(iterable):
for line in iterable:
self.write_packet("chat", message=line)
def eb(error):
self.write_packet("chat", message="Error: %s" %
error.getErrorMessage())
d = maybeDeferred(commands[command].chat_command,
self.username, params)
d.addCallback(cb)
d.addErrback(eb)
else:
self.write_packet("chat",
message="Unknown command: %s" % command)
else:
# Send the message up to the factory to be chatified.
message = "<%s> %s" % (self.username, container.message)
self.factory.chat(message)
def use(self, container):
"""
For each entity in proximity (4 blocks), check whether it is the
target of this packet, and call every hook that registered interest
in this entity type.
"""
nearby_players = self.factory.players_near(self.player, 4)
for entity in chain(self.entities_near(4), nearby_players):
if entity.eid == container.target:
for hook in self.use_hooks[entity.name]:
hook.use_hook(self.factory, self.player, entity,
container.button == 0)
break
@inlineCallbacks
def digging(self, container):
if container.x == -1 and container.z == -1 and container.y == 255:
# Lala-land dig packet. Discard it for now.
return
# Player drops currently holding item/block.
if (container.state == "dropped" and container.face == "-y" and
container.x == 0 and container.y == 0 and container.z == 0):
i = self.player.inventory
holding = i.holdables[self.player.equipped]
if holding:
primary, secondary, count = holding
if i.consume((primary, secondary), self.player.equipped):
dest = self.location.in_front_of(2)
coords = dest.pos._replace(y=dest.pos.y + 1)
self.factory.give(coords, (primary, secondary), 1)
# Re-send inventory.
packet = self.inventory.save_to_packet()
self.transport.write(packet)
# If no items in this slot are left, this player isn't
# holding an item anymore.
if i.holdables[self.player.equipped] is None:
packet = make_packet("entity-equipment",
eid=self.player.eid,
slot=0,
primary=65535,
count=1,
secondary=0
)
self.factory.broadcast_for_others(packet, self)
return
if container.state == "shooting":
self.shoot_arrow()
return
bigx, smallx, bigz, smallz = split_coords(container.x, container.z)
coords = smallx, container.y, smallz
try:
chunk = self.chunks[bigx, bigz]
except KeyError:
self.error("Couldn't dig in chunk (%d, %d)!" % (bigx, bigz))
return
block = chunk.get_block((smallx, container.y, smallz))
if container.state == "started":
# Run pre dig hooks
for hook in self.pre_dig_hooks:
cancel = yield maybeDeferred(hook.pre_dig_hook, self.player,
(container.x, container.y, container.z), block)
if cancel:
return
tool = self.player.inventory.holdables[self.player.equipped]
# Check to see whether we should break this block.
if self.dig_policy.is_1ko(block, tool):
self.run_dig_hooks(chunk, coords, blocks[block])
else:
# Set up a timer for breaking the block later.
dtime = time() + self.dig_policy.dig_time(block, tool)
self.last_dig = coords, block, dtime
elif container.state == "stopped":
# The client thinks it has broken a block. We shall see.
if not self.last_dig:
return
oldcoords, oldblock, dtime = self.last_dig
if oldcoords != coords or oldblock != block:
# Nope!
self.last_dig = None
return
dtime -= time()
# When enough time has elapsed, run the dig hooks.
d = deferLater(reactor, max(dtime, 0), self.run_dig_hooks, chunk,
coords, blocks[block])
d.addCallback(lambda none: setattr(self, "last_dig", None))
def run_dig_hooks(self, chunk, coords, block):
"""
Destroy a block and run the post-destroy dig hooks.
"""
x, y, z = coords
if block.breakable:
chunk.destroy(coords)
l = []
for hook in self.dig_hooks:
l.append(maybeDeferred(hook.dig_hook, chunk, x, y, z, block))
dl = DeferredList(l)
dl.addCallback(lambda none: self.factory.flush_chunk(chunk))
@inlineCallbacks
def build(self, container):
"""
Handle a build packet.
Several things must happen. First, the packet's contents need to be
examined to ensure that the packet is valid. A check is done to see if
the packet is opening a windowed object. If not, then a build is
run.
"""
diff --git a/bravo/chunk.py b/bravo/chunk.py
index 90c13ab..12e925b 100644
--- a/bravo/chunk.py
+++ b/bravo/chunk.py
@@ -1,630 +1,630 @@
from array import array
from functools import wraps
from itertools import product
from struct import pack
from warnings import warn
from bravo.blocks import blocks, glowing_blocks
from bravo.beta.packets import make_packet
from bravo.geometry.section import Section
from bravo.utilities.bits import pack_nibbles
from bravo.utilities.maths import clamp
CHUNK_HEIGHT = 256
"""
The total height of chunks.
"""
class ChunkWarning(Warning):
"""
Somebody did something inappropriate to this chunk, but it probably isn't
lethal, so the chunk is issuing a warning instead of an exception.
"""
def check_bounds(f):
"""
Decorate a function or method to have its first positional argument be
treated as an (x, y, z) tuple which must fit inside chunk boundaries of
- 16, 256, and 16, respectively.
+ 16, CHUNK_HEIGHT, and 16, respectively.
A warning will be raised if the bounds check fails.
"""
@wraps(f)
def deco(chunk, coords, *args, **kwargs):
x, y, z = coords
# Coordinates were out-of-bounds; warn and run away.
- if not (0 <= x < 16 and 0 <= z < 16 and 0 <= y < 256):
+ if not (0 <= x < 16 and 0 <= z < 16 and 0 <= y < CHUNK_HEIGHT):
warn("Coordinates %s are OOB in %s() of %s, ignoring call"
% (coords, f.func_name, chunk), ChunkWarning, stacklevel=2)
# A concession towards where this decorator will be used. The
# value is likely to be discarded either way, but if the value is
# used, we shouldn't horribly die because of None/0 mismatch.
return 0
return f(chunk, coords, *args, **kwargs)
return deco
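The contract of ``check_bounds`` can be exercised with a toy chunk. The sketch below mirrors the decorator's behavior (warn and return 0 on out-of-bounds coordinates, delegate otherwise); ``FakeChunk`` is purely illustrative and the code is written for Python 3.

```python
from functools import wraps
from warnings import warn
import warnings

CHUNK_HEIGHT = 256

class ChunkWarning(Warning):
    pass

def check_bounds(f):
    # Mirror of the decorator above: warn and return 0 when the
    # (x, y, z) tuple falls outside 16 x CHUNK_HEIGHT x 16.
    @wraps(f)
    def deco(chunk, coords, *args, **kwargs):
        x, y, z = coords
        if not (0 <= x < 16 and 0 <= z < 16 and 0 <= y < CHUNK_HEIGHT):
            warn("Coordinates %s are OOB" % (coords,), ChunkWarning,
                 stacklevel=2)
            return 0
        return f(chunk, coords, *args, **kwargs)
    return deco

class FakeChunk(object):
    @check_bounds
    def get_block(self, coords):
        return 42

chunk = FakeChunk()
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    oob = chunk.get_block((99, 0, 0))   # out of bounds: warns, returns 0
ok = chunk.get_block((5, 10, 5))        # in bounds: delegates
```

Returning 0 instead of raising is the concession the comment describes: callers that discard the value are unaffected, and callers that use it get a harmless air block.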
def ci(x, y, z):
"""
Turn an (x, y, z) tuple into a chunk index.
This is really a macro and not a function, but Python doesn't know the
difference. Hopefully this is faster on PyPy than on CPython.
"""
- return (x * 16 + z) * 256 + y
+ return (x * 16 + z) * CHUNK_HEIGHT + y
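The index layout of ``ci()`` puts y in the fastest-varying position, then z, then x. A round-trip check makes the layout concrete; ``unflatten`` is a hypothetical inverse added for illustration only.

```python
CHUNK_HEIGHT = 256

def ci(x, y, z):
    # Same layout as the source: y varies fastest, then z, then x.
    return (x * 16 + z) * CHUNK_HEIGHT + y

def unflatten(i):
    # Hypothetical inverse of ci(), for illustration only.
    rest, y = divmod(i, CHUNK_HEIGHT)
    x, z = divmod(rest, 16)
    return x, y, z

index = ci(3, 42, 7)
```

Because the body is a single arithmetic expression, calling it is cheap even on CPython, which is why the docstring calls it a macro in spirit.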
def segment_array(a):
"""
Chop up a chunk-sized array into sixteen components.
The chops are done in order to produce the smaller chunks preferred by
modern clients.
"""
l = [array(a.typecode) for chaff in range(16)]
index = 0
for i in range(0, len(a), 16):
l[index].extend(a[i:i + 16])
index = (index + 1) % 16
return l
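``segment_array`` deals 16-element runs of a chunk-sized array round-robin into sixteen section-sized arrays. The sketch below mirrors the helper and shows the resulting shape; it is Python-3-compatible, unlike the Python 2 original.

```python
from array import array

def segment_array(a):
    # Mirror of the helper above: deal 16-element runs round-robin
    # into sixteen smaller arrays.
    l = [array(a.typecode) for _ in range(16)]
    index = 0
    for i in range(0, len(a), 16):
        l[index].extend(a[i:i + 16])
        index = (index + 1) % 16
    return l

CHUNK_HEIGHT = 256
full = array("B", [0] * (16 * 16 * CHUNK_HEIGHT))
sections = segment_array(full)
```

Each of the sixteen outputs ends up with 16 * 16 * 16 = 4096 entries, exactly one section's worth of data for the chunked wire format.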
def make_glows():
"""
Set up glow tables.
These tables provide glow maps for illuminated points.
"""
glow = [None] * 16
for i in range(16):
dim = 2 * i + 1
glow[i] = array("b", [0] * (dim**3))
for x, y, z in product(xrange(dim), repeat=3):
distance = abs(x - i) + abs(y - i) + abs(z - i)
glow[i][(x * dim + y) * dim + z] = i + 1 - distance
glow[i] = array("B", [clamp(x, 0, 15) for x in glow[i]])
return glow
glow = make_glows()
def composite_glow(target, strength, x, y, z):
"""
Composite a light source onto a lightmap.
The exact operation is not quite unlike an add.
"""
ambient = glow[strength]
- xbound, zbound, ybound = 16, 256, 16
+ xbound, zbound, ybound = 16, CHUNK_HEIGHT, 16
sx = x - strength
sy = y - strength
sz = z - strength
ex = x + strength
ey = y + strength
ez = z + strength
si, sj, sk = 0, 0, 0
ei, ej, ek = strength * 2, strength * 2, strength * 2
if sx < 0:
sx, si = 0, -sx
if sy < 0:
sy, sj = 0, -sy
if sz < 0:
sz, sk = 0, -sz
if ex > xbound:
ex, ei = xbound, ei - ex + xbound
if ey > ybound:
ey, ej = ybound, ej - ey + ybound
if ez > zbound:
ez, ek = zbound, ek - ez + zbound
adim = 2 * strength + 1
# Composite! Apologies for the loops.
for (tx, ax) in zip(range(sx, ex), range(si, ei)):
for (tz, az) in zip(range(sz, ez), range(sk, ek)):
for (ty, ay) in zip(range(sy, ey), range(sj, ej)):
ambient_index = (ax * adim + az) * adim + ay
target[ci(tx, ty, tz)] += ambient[ambient_index]
def iter_neighbors(coords):
"""
Iterate over the chunk-local coordinates surrounding the given
coordinates.
All coordinates are chunk-local.
Coordinates which are not valid chunk-local coordinates will not be
generated.
"""
x, z, y = coords
for dx, dz, dy in (
(1, 0, 0),
(-1, 0, 0),
(0, 1, 0),
(0, -1, 0),
(0, 0, 1),
(0, 0, -1)):
nx = x + dx
nz = z + dz
ny = y + dy
if not (0 <= nx < 16 and
0 <= nz < 16 and
- 0 <= ny < 256):
+ 0 <= ny < CHUNK_HEIGHT):
continue
yield nx, nz, ny
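Note the (x, z, y) argument order of ``iter_neighbors`` and its silent skipping of out-of-bounds neighbors: a corner block yields only three neighbors, an interior block all six. A Python-3-compatible mirror of the generator:

```python
CHUNK_HEIGHT = 256

def iter_neighbors(coords):
    # Mirror of the generator above; note the (x, z, y) argument order.
    x, z, y = coords
    for dx, dz, dy in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                       (0, -1, 0), (0, 0, 1), (0, 0, -1)):
        nx, nz, ny = x + dx, z + dz, y + dy
        if 0 <= nx < 16 and 0 <= nz < 16 and 0 <= ny < CHUNK_HEIGHT:
            yield nx, nz, ny

corner = list(iter_neighbors((0, 0, 0)))    # only 3 valid neighbors
middle = list(iter_neighbors((8, 8, 64)))   # all 6 neighbors
```

Skipping invalid coordinates here is what lets ``set_block`` take a ``max()`` over the neighbors without bounds checks of its own.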
def neighboring_light(glow, block):
"""
Calculate the amount of light that should be shone on a block.
``glow`` is the brightest neighboring light. ``block`` is the slot of the
block being illuminated.
The return value is always a valid light value.
"""
return clamp(glow - blocks[block].dim, 0, 15)
class Chunk(object):
"""
A chunk of blocks.
Chunks are large pieces of world geometry (block data). The blocks, light
maps, and associated metadata are stored in chunks. Chunks are
- always measured 16x256x16 and are aligned on 16x16 boundaries in
+ always measured 16xCHUNK_HEIGHTx16 and are aligned on 16x16 boundaries in
the xz-plane.
:cvar bool dirty: Whether this chunk needs to be flushed to disk.
:cvar bool populated: Whether this chunk has had its initial block data
filled out.
"""
all_damaged = False
dirty = True
populated = False
def __init__(self, x, z):
"""
:param int x: X coordinate in chunk coords
:param int z: Z coordinate in chunk coords
:ivar array.array heightmap: Tracks the tallest block in each xz-column.
:ivar bool all_damaged: Flag for forcing the entire chunk to be
damaged. This is for efficiency; past a certain point, it is not
efficient to batch block updates or track damage. Heavily damaged
chunks have their damage represented as a complete resend of the
entire chunk.
"""
self.x = int(x)
self.z = int(z)
self.heightmap = array("B", [0] * (16 * 16))
- self.blocklight = array("B", [0] * (16 * 16 * 256))
+ self.blocklight = array("B", [0] * (16 * 16 * CHUNK_HEIGHT))
self.sections = [Section() for i in range(16)]
self.entities = set()
self.tiles = {}
self.damaged = set()
def __repr__(self):
return "Chunk(%d, %d)" % (self.x, self.z)
__str__ = __repr__
def regenerate_heightmap(self):
"""
Regenerate the height map array.
The height map is merely the position of the tallest block in any
xz-column.
"""
for x in range(16):
for z in range(16):
column = x * 16 + z
for y in range(255, -1, -1):
if self.get_block((x, y, z)):
break
self.heightmap[column] = y
def regenerate_blocklight(self):
- lightmap = array("L", [0] * (16 * 16 * 256))
+ lightmap = array("L", [0] * (16 * 16 * CHUNK_HEIGHT))
- for x, y, z in product(xrange(16), xrange(256), xrange(16)):
+ for x, y, z in product(xrange(16), xrange(CHUNK_HEIGHT), xrange(16)):
block = self.get_block((x, y, z))
if block in glowing_blocks:
composite_glow(lightmap, glowing_blocks[block], x, y, z)
self.blocklight = array("B", [clamp(x, 0, 15) for x in lightmap])
def regenerate_skylight(self):
"""
Regenerate the ambient light map.
Each block's individual light comes from two sources. The ambient
light comes from the sky.
The height map must be valid for this method to produce valid results.
"""
# Create an array of skylights, and a mask of dimming blocks.
lights = [0xf] * (16 * 16)
mask = [0x0] * (16 * 16)
# For each y-level, we're going to update the mask, apply it to the
# lights, apply the lights to the section, and then blur the lights
# and move downwards. Since empty sections are full of air, and air
# doesn't ever dim, ignoring empty sections should be a correct way
# to speed things up. Another optimization is that the process ends
# early if the entire slice of lights is dark.
for section in reversed(self.sections):
if not section:
continue
for y in range(15, -1, -1):
# Early-out if there's no more light left.
if not any(lights):
break
# Update the mask.
for x, z in product(range(16), repeat=2):
offset = x * 16 + z
block = section.get_block((x, y, z))
mask[offset] = blocks[block].dim
# Apply the mask to the lights.
for i, dim in enumerate(mask):
# Keep it positive.
lights[i] = max(0, lights[i] - dim)
# Apply the lights to the section.
for x, z in product(range(16), repeat=2):
offset = x * 16 + z
section.set_skylight((x, y, z), lights[offset])
# XXX blur the lights
# And continue moving downward.
def regenerate(self):
"""
Regenerate all auxiliary tables.
"""
self.regenerate_heightmap()
self.regenerate_blocklight()
self.regenerate_skylight()
self.dirty = True
def damage(self, coords):
"""
Record damage on this chunk.
"""
if self.all_damaged:
return
x, y, z = coords
self.damaged.add(coords)
# The number 176 represents the threshold at which it is cheaper to
# resend the entire chunk instead of individual blocks.
if len(self.damaged) > 176:
self.all_damaged = True
self.damaged.clear()
def is_damaged(self):
"""
Determine whether any damage is pending on this chunk.
:rtype: bool
:returns: True if any damage is pending on this chunk, False if not.
"""
return self.all_damaged or bool(self.damaged)
def get_damage_packet(self):
"""
Make a packet representing the current damage on this chunk.
This method is not private, but some care should be taken with it,
since it wraps some fairly cryptic internal data structures.
If this chunk is currently undamaged, this method will return an empty
string, which should be safe to treat as a packet. Please check with
`is_damaged()` before doing this if you need to optimize this case.
To avoid extra overhead, this method should really be used in
conjunction with `Factory.broadcast_for_chunk()`.
Do not forget to clear this chunk's damage! Callers are responsible
for doing this.
>>> packet = chunk.get_damage_packet()
>>> factory.broadcast_for_chunk(packet, chunk.x, chunk.z)
>>> chunk.clear_damage()
:rtype: str
:returns: String representation of the packet.
"""
if self.all_damaged:
# Resend the entire chunk!
return self.save_to_packet()
elif not self.damaged:
# Send nothing at all; we don't even have a scratch on us.
return ""
elif len(self.damaged) == 1:
# Use a single block update packet. Find the first (only) set bit
# in the damaged array, and use it as an index.
coords = next(iter(self.damaged))
block = self.get_block(coords)
metadata = self.get_metadata(coords)
x, y, z = coords
return make_packet("block",
x=x + self.x * 16,
y=y,
z=z + self.z * 16,
type=block,
meta=metadata)
else:
# Use a batch update.
records = []
for coords in self.damaged:
block = self.get_block(coords)
metadata = self.get_metadata(coords)
x, y, z = coords
record = x << 28 | z << 24 | y << 16 | block << 4 | metadata
records.append(record)
data = "".join(pack(">I", record) for record in records)
return make_packet("batch", x=self.x, z=self.z,
count=len(records), data=data)
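Each batch-update record above packs five fields into one 32-bit integer: x and z in 4 bits each, y in 8, the block type in 12, and metadata in 4. A round-trip sketch makes the layout explicit; ``pack_record`` and ``unpack_record`` are illustrative names, not part of the codebase.

```python
from struct import pack

def pack_record(x, z, y, block, metadata):
    # Field widths: x and z get 4 bits, y gets 8, block 12, metadata 4.
    return x << 28 | z << 24 | y << 16 | block << 4 | metadata

def unpack_record(record):
    # Peel the fields back off, lowest bits first.
    metadata = record & 0xf
    block = (record >> 4) & 0xfff
    y = (record >> 16) & 0xff
    z = (record >> 24) & 0xf
    x = (record >> 28) & 0xf
    return x, z, y, block, metadata

record = pack_record(5, 3, 64, 1, 0)
wire = pack(">I", record)  # big-endian, as in the method above
```

The 4-bit x and z fields are enough because the coordinates are chunk-local; the chunk's own x and z travel separately in the batch packet header.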
def clear_damage(self):
"""
Clear this chunk's damage.
"""
self.damaged.clear()
self.all_damaged = False
def save_to_packet(self):
"""
Generate a chunk packet.
"""
mask = 0
packed = []
ls = segment_array(self.blocklight)
for i, section in enumerate(self.sections):
if any(section.blocks):
mask |= 1 << i
packed.append(section.blocks.tostring())
for i, section in enumerate(self.sections):
if mask & 1 << i:
packed.append(pack_nibbles(section.metadata))
for i, l in enumerate(ls):
if mask & 1 << i:
packed.append(pack_nibbles(l))
for i, section in enumerate(self.sections):
if mask & 1 << i:
packed.append(pack_nibbles(section.skylight))
# Fake the biome data.
packed.append("\x00" * 256)
packet = make_packet("chunk", x=self.x, z=self.z, continuous=True,
primary=mask, add=0x0, data="".join(packed))
return packet
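The `primary` bitmask sent with the chunk packet is built from the per-section occupancy test in the first loop above: bit *i* is set if and only if section *i* contains any non-air block. A toy sketch, with bare lists standing in for `Section` objects:

```python
def section_mask(sections):
    # Mirror of the mask-building loop in save_to_packet(): a section
    # contributes its bit when any of its block values is non-zero.
    mask = 0
    for i, section in enumerate(sections):
        if any(section):
            mask |= 1 << i
    return mask

# Sixteen flattened 16x16x16 sections; only sections 1 and 3 hold blocks.
sections = [[0] * 4096 for _ in range(16)]
sections[1][0] = 1
sections[3][42] = 4
```

With only sections 1 and 3 occupied, the mask comes out as `0b1010`, so the metadata, blocklight, and skylight loops emit nibble arrays for exactly those two sections.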
@check_bounds
def get_block(self, coords):
"""
Look up a block value.
:param tuple coords: coordinate triplet
:rtype: int
:returns: int representing block type
"""
x, y, z = coords
index, y = divmod(y, 16)
return self.sections[index].get_block((x, y, z))
@check_bounds
def set_block(self, coords, block):
"""
Update a block value.
:param tuple coords: coordinate triplet
:param int block: block type
"""
x, y, z = coords
index, section_y = divmod(y, 16)
column = x * 16 + z
if self.get_block(coords) != block:
self.sections[index].set_block((x, section_y, z), block)
if not self.populated:
return
# Regenerate heightmap at this coordinate.
if block:
self.heightmap[column] = max(self.heightmap[column], y)
else:
# If we replace the highest block with air, we need to go
# through all blocks below it to find the new top block.
height = self.heightmap[column]
if y == height:
for y in range(height, -1, -1):
if self.get_block((x, y, z)):
break
self.heightmap[column] = y
# Do the blocklight at this coordinate, if appropriate.
if block in glowing_blocks:
composite_glow(self.blocklight, glowing_blocks[block],
x, y, z)
self.blocklight = array("B", [clamp(x, 0, 15) for x in
self.blocklight])
# And the skylight.
glow = max(self.get_skylight((nx, ny, nz))
for nx, nz, ny in iter_neighbors((x, z, y)))
self.set_skylight((x, y, z), neighboring_light(glow, block))
self.dirty = True
self.damage(coords)
@check_bounds
def get_metadata(self, coords):
"""
Look up metadata.
:param tuple coords: coordinate triplet
:rtype: int
"""
x, y, z = coords
index, y = divmod(y, 16)
return self.sections[index].get_metadata((x, y, z))
@check_bounds
def set_metadata(self, coords, metadata):
"""
Update metadata.
:param tuple coords: coordinate triplet
:param int metadata: metadata value
"""
if self.get_metadata(coords) != metadata:
x, y, z = coords
index, y = divmod(y, 16)
self.sections[index].set_metadata((x, y, z), metadata)
self.dirty = True
self.damage(coords)
@check_bounds
def get_skylight(self, coords):
"""
Look up skylight value.
:param tuple coords: coordinate triplet
:rtype: int
"""
x, y, z = coords
index, y = divmod(y, 16)
return self.sections[index].get_skylight((x, y, z))
@check_bounds
def set_skylight(self, coords, value):
"""
Update skylight value.
:param tuple coords: coordinate triplet
:param int value: skylight value
"""
if self.get_skylight(coords) != value:
x, y, z = coords
index, y = divmod(y, 16)
self.sections[index].set_skylight((x, y, z), value)
@check_bounds
def destroy(self, coords):
"""
Destroy the block at the given coordinates.
This may or may not set the block to be full of air; it uses the
block's preferred replacement. For example, ice generally turns to
water when destroyed.
This is safe as a no-op; for example, destroying a block of air with
no metadata is not going to cause state changes.
:param tuple coords: coordinate triplet
"""
block = blocks[self.get_block(coords)]
self.set_block(coords, block.replace)
self.set_metadata(coords, 0)
def height_at(self, x, z):
"""
Get the height of an xz-column of blocks.
:param int x: X coordinate
:param int z: Z coordinate
:rtype: int
:returns: The height of the given column of blocks.
"""
return self.heightmap[x * 16 + z]
def sed(self, search, replace):
"""
Execute a search and replace on all blocks in this chunk.
Named after the ubiquitous Unix tool. Does a semantic
s/search/replace/g on this chunk's blocks.
:param int search: block to find
:param int replace: block to use as a replacement
"""
for section in self.sections:
for i, block in enumerate(section.blocks):
if block == search:
section.blocks[i] = replace
self.all_damaged = True
self.dirty = True
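`sed()` above is a straight linear scan over every section's block array. The same idea in isolation, with plain byte arrays in place of `Section` objects (a sketch, not the class itself):

```python
from array import array

def sed(sections, search, replace):
    # Semantic s/search/replace/g across every section, in place,
    # mirroring Chunk.sed().
    for section in sections:
        for i, block in enumerate(section):
            if block == search:
                section[i] = replace

sections = [array("B", [1, 2, 1, 0]), array("B", [1, 1, 3, 1])]
sed(sections, 1, 9)
```

Note that, like the real method, this touches every block unconditionally, which is why `Chunk.sed()` simply marks the whole chunk damaged afterwards.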
diff --git a/bravo/plugins/commands/warp.py b/bravo/plugins/commands/warp.py
index 7eb6f4a..2b64cb0 100644
--- a/bravo/plugins/commands/warp.py
+++ b/bravo/plugins/commands/warp.py
@@ -1,295 +1,297 @@
import csv
from StringIO import StringIO
from zope.interface import implements
+from bravo.chunk import CHUNK_HEIGHT
from bravo.ibravo import IChatCommand, IConsoleCommand
from bravo.location import Orientation, Position
from bravo.utilities.coords import split_coords
csv.register_dialect("hey0", delimiter=":")
def get_locations(data):
d = {}
for line in csv.reader(StringIO(data), dialect="hey0"):
name, x, y, z, yaw, pitch = line[:6]
x = float(x)
y = float(y)
z = float(z)
yaw = float(yaw)
pitch = float(pitch)
d[name] = (x, y, z, yaw, pitch)
return d
def put_locations(d):
data = StringIO()
writer = csv.writer(data, dialect="hey0")
for name, stuff in d.iteritems():
writer.writerow([name] + list(stuff))
return data.getvalue()
class Home(object):
"""
Warp a player to their home.
"""
implements(IChatCommand, IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def chat_command(self, username, parameters):
data = self.factory.world.serializer.load_plugin_data("homes")
homes = get_locations(data)
protocol = self.factory.protocols[username]
l = protocol.player.location
if username in homes:
yield "Teleporting %s home" % username
x, y, z, yaw, pitch = homes[username]
else:
yield "Teleporting %s to spawn" % username
x, y, z = self.factory.world.level.spawn
yaw, pitch = 0, 0
l.pos = Position.from_player(x, y, z)
l.ori = Orientation.from_degs(yaw, pitch)
protocol.send_initial_chunk_and_location()
yield "Teleportation successful!"
def console_command(self, parameters):
for i in self.chat_command(parameters[0], parameters[1:]):
yield i
name = "home"
aliases = tuple()
usage = ""
class SetHome(object):
"""
Set a player's home.
"""
implements(IChatCommand)
def __init__(self, factory):
self.factory = factory
def chat_command(self, username, parameters):
yield "Saving %s's home..." % username
protocol = self.factory.protocols[username]
x, y, z = protocol.player.location.pos.to_block()
yaw, pitch = protocol.player.location.ori.to_degs()
data = self.factory.world.serializer.load_plugin_data("homes")
d = get_locations(data)
d[username] = x, y, z, yaw, pitch
data = put_locations(d)
self.factory.world.serializer.save_plugin_data("homes", data)
yield "Saved %s!" % username
name = "sethome"
aliases = tuple()
usage = ""
class Warp(object):
"""
Warp a player to a preset location.
"""
implements(IChatCommand, IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def chat_command(self, username, parameters):
data = self.factory.world.serializer.load_plugin_data("warps")
warps = get_locations(data)
if len(parameters) == 0:
yield "Usage: /warp <warpname>"
return
location = parameters[0]
if location in warps:
yield "Teleporting you to %s" % location
protocol = self.factory.protocols[username]
# An explanation might be necessary.
# We are changing the location of the player, but we must
# immediately send a new location packet in order to force the
# player to appear at the new location. However, before we can do
# that, we need to get the chunk loaded for them. This ends up
# being the same sequence of events as the initial chunk and
# location setup, so we call send_initial_chunk_and_location()
# instead of update_location().
l = protocol.player.location
x, y, z, yaw, pitch = warps[location]
l.pos = Position.from_player(x, y, z)
l.ori = Orientation.from_degs(yaw, pitch)
protocol.send_initial_chunk_and_location()
yield "Teleportation successful!"
else:
yield "No warp location %s available" % location
def console_command(self, parameters):
for i in self.chat_command(parameters[0], parameters[1:]):
yield i
name = "warp"
aliases = tuple()
usage = "<location>"
class ListWarps(object):
"""
List preset warp locations.
"""
implements(IChatCommand, IConsoleCommand)
def __init__(self, factory):
self.factory = factory
def dispatch(self):
data = self.factory.world.serializer.load_plugin_data("warps")
warps = get_locations(data)
if warps:
yield "Warp locations:"
for key in sorted(warps.iterkeys()):
yield "~ %s" % key
else:
yield "No warps are set!"
def chat_command(self, username, parameters):
for i in self.dispatch():
yield i
def console_command(self, parameters):
for i in self.dispatch():
yield i
name = "listwarps"
aliases = tuple()
usage = ""
class SetWarp(object):
"""
Set a warp location.
"""
implements(IChatCommand)
def __init__(self, factory):
self.factory = factory
def chat_command(self, username, parameters):
name = "".join(parameters)
yield "Saving warp %s..." % name
protocol = self.factory.protocols[username]
x, y, z = protocol.player.location.pos.to_block()
yaw, pitch = protocol.player.location.ori.to_degs()
data = self.factory.world.serializer.load_plugin_data("warps")
d = get_locations(data)
d[name] = x, y, z, yaw, pitch
data = put_locations(d)
self.factory.world.serializer.save_plugin_data("warps", data)
yield "Saved %s!" % name
name = "setwarp"
aliases = tuple()
usage = "<name>"
class RemoveWarp(object):
"""
Remove a warp location.
"""
implements(IChatCommand)
def __init__(self, factory):
self.factory = factory
def chat_command(self, username, parameters):
name = "".join(parameters)
yield "Removing warp %s..." % name
data = self.factory.world.serializer.load_plugin_data("warps")
d = get_locations(data)
if name in d:
del d[name]
yield "Saving warps..."
data = put_locations(d)
self.factory.world.serializer.save_plugin_data("warps", data)
yield "Removed %s!" % name
else:
yield "No such warp %s!" % name
name = "removewarp"
aliases = tuple()
usage = "<name>"
class Ascend(object):
"""
Warp to a location above the current location.
"""
implements(IChatCommand)
def __init__(self, factory):
self.factory = factory
def chat_command(self, username, parameters):
protocol = self.factory.protocols[username]
success = protocol.ascend(1)
if success:
return ("Ascended!",)
else:
return ("Couldn't find anywhere to ascend!",)
name = "ascend"
aliases = tuple()
usage = ""
class Descend(object):
"""
Warp to a location below the current location.
"""
implements(IChatCommand)
def __init__(self, factory):
self.factory = factory
def chat_command(self, username, parameters):
protocol = self.factory.protocols[username]
l = protocol.player.location
x, y, z = l.pos.to_block()
bigx, smallx, bigz, smallz = split_coords(x, z)
chunk = self.factory.world.sync_request_chunk((x, y, z))
- column = [chunk.get_block((smallx, i, smallz)) for i in range(256)]
+ column = [chunk.get_block((smallx, i, smallz))
+ for i in range(CHUNK_HEIGHT)]
# Find the next spot below us which has a platform and two empty
# blocks of air.
while y > 0:
y -= 1
if column[y] and not column[y + 1] and not column[y + 2]:
break
else:
return ("Couldn't find anywhere to descend!",)
l.pos = l.pos._replace(y=y)
protocol.send_initial_chunk_and_location()
return ("Descended!",)
name = "descend"
aliases = tuple()
usage = ""
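The descent scan in `Descend.chat_command()` walks the block column downward looking for the first level with a solid platform and two blocks of headroom. Pulled out as a standalone helper (`find_descent` is hypothetical, extracted here for illustration):

```python
def find_descent(column, y):
    # Scan downward from y for a solid block with two air blocks above
    # it, exactly as the while-loop in the Descend command does.
    while y > 0:
        y -= 1
        if column[y] and not column[y + 1] and not column[y + 2]:
            return y
    return None  # the command reports "Couldn't find anywhere to descend!"

# Stone at y=0..4 and a floating block at y=10, air everywhere else.
column = [1] * 5 + [0] * 5 + [1] + [0] * 5
```

Starting from the floating block at y=10, the scan skips the empty levels and lands on the top of the stone slab at y=4.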
diff --git a/bravo/plugins/generators.py b/bravo/plugins/generators.py
index 40fa931..083356d 100644
--- a/bravo/plugins/generators.py
+++ b/bravo/plugins/generators.py
@@ -1,476 +1,477 @@
from __future__ import division
from array import array
from itertools import combinations, product
from random import Random
from zope.interface import implements
from bravo.blocks import blocks
+from bravo.chunk import CHUNK_HEIGHT
from bravo.ibravo import ITerrainGenerator
from bravo.simplex import octaves2, octaves3, set_seed
from bravo.utilities.maths import morton2
R = Random()
class BoringGenerator(object):
"""
Generates boring slabs of flat stone.
This generator relies on implementation details of ``Chunk``.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Fill the bottom half of the chunk with stone.
"""
# Optimized fill. Fill the bottom eight sections with stone.
stone = array("B", [blocks["stone"].slot] * 16 * 16 * 16)
for section in chunk.sections[:8]:
section.blocks[:] = stone[:]
name = "boring"
before = tuple()
after = tuple()
class SimplexGenerator(object):
"""
Generates waves of stone.
This class uses a simplex noise generator to procedurally generate
organic-looking, continuously smooth terrain.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Make smooth waves of stone.
"""
set_seed(seed)
# And into one end he plugged the whole of reality as extrapolated
# from a piece of fairy cake, and into the other end he plugged his
# wife: so that when he turned it on she saw in one instant the whole
# infinity of creation and herself in relation to it.
factor = 1 / 256
for x, z in product(xrange(16), repeat=2):
magx = (chunk.x * 16 + x) * factor
magz = (chunk.z * 16 + z) * factor
height = octaves2(magx, magz, 6)
# Normalize around 70. Normalization is scaled according to a
# rotated cosine.
#scale = rotated_cosine(magx, magz, seed, 16 * 10)
height *= 15
height = int(height + 70)
# Make our chunk offset, and render into the chunk.
for y in range(height):
chunk.set_block((x, y, z), blocks["stone"].slot)
name = "simplex"
before = tuple()
after = tuple()
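The normalization step in `SimplexGenerator.populate()`, on its own: `octaves2` yields a sample in roughly [-1, 1], which is scaled by an amplitude of 15 and centred on y=70 (a sketch of just the arithmetic; the noise function itself is not reproduced here):

```python
def normalize_height(noise):
    # Map a noise sample in [-1, 1] to a terrain height centred on 70,
    # matching `height *= 15; height = int(height + 70)` above.
    return int(noise * 15 + 70)
```

So the stone surface undulates between roughly y=55 and y=85, comfortably above the y=62 water table that WaterTableGenerator fills in later.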
class ComplexGenerator(object):
"""
Generate islands of stone.
This class uses a simplex noise generator to procedurally generate
ridiculous things.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Make smooth islands of stone.
"""
set_seed(seed)
factor = 1 / 256
- for x, z, y in product(xrange(16), xrange(16), xrange(256)):
+ for x, z, y in product(xrange(16), xrange(16), xrange(CHUNK_HEIGHT)):
magx = (chunk.x * 16 + x) * factor
magz = (chunk.z * 16 + z) * factor
sample = octaves3(magx, magz, y * factor, 6)
if sample > 0.5:
chunk.set_block((x, y, z), blocks["stone"].slot)
name = "complex"
before = tuple()
after = tuple()
class WaterTableGenerator(object):
"""
Create a water table.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Generate a flat water table halfway up the map.
"""
for x, z, y in product(xrange(16), xrange(16), xrange(62)):
if chunk.get_block((x, y, z)) == blocks["air"].slot:
chunk.set_block((x, y, z), blocks["spring"].slot)
name = "watertable"
before = tuple()
after = ("trees", "caves")
class ErosionGenerator(object):
"""
Erodes stone surfaces into dirt.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Turn the top few layers of stone into dirt.
"""
chunk.regenerate_heightmap()
for x, z in product(xrange(16), repeat=2):
y = chunk.height_at(x, z)
if chunk.get_block((x, y, z)) == blocks["stone"].slot:
bottom = max(y - 3, 0)
for i in range(bottom, y + 1):
chunk.set_block((x, i, z), blocks["dirt"].slot)
name = "erosion"
before = ("boring", "simplex")
after = ("watertable",)
class GrassGenerator(object):
"""
Find exposed dirt and grow grass.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Find the top dirt block in each xz-column and turn it into grass.
"""
chunk.regenerate_heightmap()
for x, z in product(xrange(16), repeat=2):
y = chunk.height_at(x, z)
if (chunk.get_block((x, y, z)) == blocks["dirt"].slot and
(y == 127 or
chunk.get_block((x, y + 1, z)) == blocks["air"].slot)):
chunk.set_block((x, y, z), blocks["grass"].slot)
name = "grass"
before = ("erosion", "complex")
after = tuple()
class BeachGenerator(object):
"""
Generates simple beaches.
Beaches are areas of sand around bodies of water. This generator will form
beaches near all bodies of water regardless of size or composition; it
will form beaches at large seashores and frozen lakes. It will even place
beaches on one-block puddles.
"""
implements(ITerrainGenerator)
above = set([blocks["air"].slot, blocks["water"].slot,
blocks["spring"].slot, blocks["ice"].slot])
replace = set([blocks["dirt"].slot, blocks["grass"].slot])
def populate(self, chunk, seed):
"""
Find blocks within a height range and turn them into sand if they are
dirt and underwater or exposed to air. If the height range is near the
water table level, this creates fairly good beaches.
"""
chunk.regenerate_heightmap()
for x, z in product(xrange(16), repeat=2):
y = chunk.height_at(x, z)
while y > 60 and chunk.get_block((x, y, z)) in self.above:
y -= 1
if not 60 < y < 66:
continue
if chunk.get_block((x, y, z)) in self.replace:
chunk.set_block((x, y, z), blocks["sand"].slot)
name = "beaches"
before = ("erosion", "complex")
after = ("saplings",)
class OreGenerator(object):
"""
Place ores and clay.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
set_seed(seed)
xzfactor = 1 / 16
yfactor = 1 / 32
for x, z in product(xrange(16), repeat=2):
for y in range(chunk.height_at(x, z) + 1):
magx = (chunk.x * 16 + x) * xzfactor
magz = (chunk.z * 16 + z) * xzfactor
magy = y * yfactor
sample = octaves3(magx, magz, magy, 3)
if sample > 0.9999:
# Figure out what to place here.
old = chunk.get_block((x, y, z))
new = None
if old == blocks["sand"].slot:
# Sand becomes clay.
new = blocks["clay"].slot
elif old == blocks["dirt"].slot:
# Dirt becomes gravel.
new = blocks["gravel"].slot
elif old == blocks["stone"].slot:
# Stone becomes one of the ores.
if y < 12:
new = blocks["diamond-ore"].slot
elif y < 24:
new = blocks["gold-ore"].slot
elif y < 36:
new = blocks["redstone-ore"].slot
elif y < 48:
new = blocks["iron-ore"].slot
else:
new = blocks["coal-ore"].slot
if new:
chunk.set_block((x, y, z), new)
name = "ore"
before = ("erosion", "complex", "beaches")
after = tuple()
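The depth-to-ore ladder inside `OreGenerator.populate()` can be expressed as a standalone helper (`ore_for_depth` is hypothetical, mirroring the `elif` chain above):

```python
def ore_for_depth(y):
    # What replaces stone at a given depth when the noise sample
    # crosses the 0.9999 threshold: rarer ores live deeper.
    if y < 12:
        return "diamond-ore"
    elif y < 24:
        return "gold-ore"
    elif y < 36:
        return "redstone-ore"
    elif y < 48:
        return "iron-ore"
    else:
        return "coal-ore"
```

Sand and dirt are handled before this ladder is reached, becoming clay and gravel respectively regardless of depth.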
class SafetyGenerator(object):
"""
Generates terrain features essential for the safety of clients.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Spread a layer of bedrock along the bottom of the chunk, and clear the
top two layers to avoid players getting stuck at the top.
"""
for x, z in product(xrange(16), repeat=2):
chunk.set_block((x, 0, z), blocks["bedrock"].slot)
chunk.set_block((x, 126, z), blocks["air"].slot)
chunk.set_block((x, 127, z), blocks["air"].slot)
name = "safety"
before = ("boring", "simplex", "complex", "cliffs", "float", "caves")
after = tuple()
class CliffGenerator(object):
"""
This class/generator creates cliffs by selectively applying an offset of
the noise map to blocks based on height. Feel free to make this more
realistic.
This generator relies on implementation details of ``Chunk``.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Make smooth waves of stone, then compare to current landscape.
"""
set_seed(seed)
factor = 1 / 256
for x, z in product(xrange(16), repeat=2):
magx = ((chunk.x + 32) * 16 + x) * factor
magz = ((chunk.z + 32) * 16 + z) * factor
height = octaves2(magx, magz, 6)
height *= 15
height = int(height + 70)
current_height = chunk.heightmap[x * 16 + z]
if (-6 < current_height - height < 3 and
current_height > 63 and height > 63):
for y in range(height - 3):
chunk.set_block((x, y, z), blocks["stone"].slot)
for y in range(height - 3, 128):
chunk.set_block((x, y, z), blocks["air"].slot)
name = "cliffs"
before = tuple()
after = tuple()
class FloatGenerator(object):
"""
Rips chunks out of the map, to create surreal chunks of floating land.
This generator relies on implementation details of ``Chunk``.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Create floating islands.
"""
# Eat moar stone
R.seed(seed)
factor = 1 / 256
for x, z in product(xrange(16), repeat=2):
magx = ((chunk.x+16) * 16 + x) * factor
magz = ((chunk.z+16) * 16 + z) * factor
height = octaves2(magx, magz, 6)
height *= 15
height = int(height + 70)
if abs(chunk.heightmap[x * 16 + z] - height) < 10:
- height = 256
+ height = CHUNK_HEIGHT
else:
height = height - 30 + R.randint(-15, 10)
for y in range(height):
chunk.set_block((x, y, z), blocks["air"].slot)
name = "float"
before = tuple()
after = tuple()
class CaveGenerator(object):
"""
Carve caves and seams out of terrain.
"""
implements(ITerrainGenerator)
def populate(self, chunk, seed):
"""
Make smooth waves of stone.
"""
sede = seed ^ 0xcafebabe
xzfactor = 1 / 128
yfactor = 1 / 64
for x, z in product(xrange(16), repeat=2):
magx = (chunk.x * 16 + x) * xzfactor
magz = (chunk.z * 16 + z) * xzfactor
for y in range(128):
if not chunk.get_block((x, y, z)):
continue
magy = y * yfactor
set_seed(seed)
should_cave = abs(octaves3(magx, magz, magy, 3))
set_seed(sede)
should_cave *= abs(octaves3(magx, magz, magy, 3))
if should_cave < 0.002:
chunk.set_block((x, y, z), blocks["air"].slot)
name = "caves"
before = ("grass", "erosion", "simplex", "complex", "boring")
after = tuple()
class SaplingGenerator(object):
"""
Plant saplings at relatively silly places around the map.
"""
implements(ITerrainGenerator)
primes = [401, 409, 419, 421, 431, 433, 439, 443, 449, 457, 461, 463, 467,
479, 487, 491, 499, 503, 509, 521, 523, 541, 547, 557, 563, 569,
571, 577, 587, 593, 599, 601, 607, 613, 617, 619, 631, 641, 643,
647, 653, 659, 661, 673, 677, 683, 691]
"""
A field of prime numbers, used to select factors for trees.
"""
ground = (blocks["grass"].slot, blocks["dirt"].slot)
def populate(self, chunk, seed):
"""
Place saplings.
The algorithm used to pick locations for the saplings is quite
simple, although slightly involved. The basic technique is to
calculate a Morton number for every xz-column in the chunk, and then
use coprime offsets to sprinkle selected points fairly evenly
throughout the chunk.
Saplings are only placed on dirt and grass blocks.
"""
R.seed(seed)
factors = R.choice(list(combinations(self.primes, 3)))
for x, z in product(xrange(16), repeat=2):
# Make a Morton number.
morton = morton2(chunk.x * 16 + x, chunk.z * 16 + z)
if not all(morton % factor for factor in factors):
# Magic number is how many tree types are available
species = morton % 4
# Plant a sapling.
y = chunk.height_at(x, z)
if chunk.get_block((x, y, z)) in self.ground:
chunk.set_block((x, y + 1, z), blocks["sapling"].slot)
chunk.set_metadata((x, y + 1, z), species)
name = "saplings"
before = ("grass", "erosion", "simplex", "complex", "boring")
after = tuple()
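A Morton number interleaves the bits of two coordinates, so nearby columns get numerically close codes, and testing the code against a few coprime factors sprinkles hits fairly evenly across the map. A plain-loop stand-in for `bravo.utilities.maths.morton2` (the real implementation, and possibly its bit order, may differ):

```python
def morton2(x, y):
    # Interleave the low 16 bits of x (even positions) and y (odd
    # positions) into a single Z-order / Morton code.
    result = 0
    for i in range(16):
        result |= (x >> i & 1) << (2 * i)
        result |= (y >> i & 1) << (2 * i + 1)
    return result
```

A column is then selected for a sapling when its Morton code is divisible by at least one of the three chosen primes, which is what `not all(morton % factor for factor in factors)` tests.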
diff --git a/bravo/tests/plugins/test_generators.py b/bravo/tests/plugins/test_generators.py
index 01aa0e3..de53dfe 100644
--- a/bravo/tests/plugins/test_generators.py
+++ b/bravo/tests/plugins/test_generators.py
@@ -1,78 +1,78 @@
import unittest
from itertools import product
import bravo.blocks
-import bravo.chunk
+from bravo.chunk import Chunk, CHUNK_HEIGHT
import bravo.ibravo
import bravo.plugin
class TestGenerators(unittest.TestCase):
def setUp(self):
- self.chunk = bravo.chunk.Chunk(0, 0)
+ self.chunk = Chunk(0, 0)
self.p = bravo.plugin.retrieve_plugins(bravo.ibravo.ITerrainGenerator)
def test_trivial(self):
pass
def test_boring(self):
if "boring" not in self.p:
raise unittest.SkipTest("plugin not present")
plugin = self.p["boring"]
plugin.populate(self.chunk, 0)
- for x, y, z in product(xrange(16), xrange(256), xrange(16)):
+ for x, y, z in product(xrange(16), xrange(CHUNK_HEIGHT), xrange(16)):
if y < 128:
self.assertEqual(self.chunk.get_block((x, y, z)),
bravo.blocks.blocks["stone"].slot)
else:
self.assertEqual(self.chunk.get_block((x, y, z)),
bravo.blocks.blocks["air"].slot)
def test_beaches_range(self):
if "beaches" not in self.p:
raise unittest.SkipTest("plugin not present")
plugin = self.p["beaches"]
# Prepare chunk.
for i in range(5):
self.chunk.set_block((i, 61 + i, i),
bravo.blocks.blocks["dirt"].slot)
plugin.populate(self.chunk, 0)
for i in range(5):
self.assertEqual(self.chunk.get_block((i, 61 + i, i)),
bravo.blocks.blocks["sand"].slot,
"%d, %d, %d is wrong" % (i, 61 + i, i))
def test_beaches_immersed(self):
"""
Test that beaches still generate properly around pre-existing water
tables.
This test is meant to ensure that the order of beaches and watertable
does not matter.
"""
if "beaches" not in self.p:
raise unittest.SkipTest("plugin not present")
plugin = self.p["beaches"]
# Prepare chunk.
for x, z, y in product(xrange(16), xrange(16), xrange(60, 64)):
self.chunk.set_block((x, y, z),
bravo.blocks.blocks["spring"].slot)
for i in range(5):
self.chunk.set_block((i, 61 + i, i),
bravo.blocks.blocks["dirt"].slot)
plugin.populate(self.chunk, 0)
for i in range(5):
self.assertEqual(self.chunk.get_block((i, 61 + i, i)),
bravo.blocks.blocks["sand"].slot,
"%d, %d, %d is wrong" % (i, 61 + i, i))
diff --git a/bravo/utilities/coords.py b/bravo/utilities/coords.py
index eaffde3..be96ec8 100644
--- a/bravo/utilities/coords.py
+++ b/bravo/utilities/coords.py
@@ -1,125 +1,127 @@
"""
Utilities for coordinate handling and munging.
"""
from itertools import product
from math import floor, ceil
+from bravo.chunk import CHUNK_HEIGHT
+
def polar_round_vector(vector):
"""
Round each component of a vector towards zero.
"""
if vector[0] >= 0:
calculated_x = floor(vector[0])
else:
calculated_x = ceil(vector[0])
if vector[1] >= 0:
calculated_y = floor(vector[1])
else:
calculated_y = ceil(vector[1])
if vector[2] >= 0:
calculated_z = floor(vector[2])
else:
calculated_z = ceil(vector[2])
return calculated_x, calculated_y, calculated_z
def split_coords(x, z):
"""
Split a pair of coordinates into chunk and subchunk coordinates.
:param int x: the X coordinate
:param int z: the Z coordinate
:returns: a tuple of the X chunk, X subchunk, Z chunk, and Z subchunk
"""
first, second = divmod(int(x), 16)
third, fourth = divmod(int(z), 16)
return first, second, third, fourth
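Because Python's `divmod` floors toward negative infinity, negative block coordinates fall into the correct chunk while the in-chunk offset stays in 0..15. Replaying the function above on a few inputs:

```python
def split_coords(x, z):
    # Same arithmetic as split_coords() above; divmod floors, so
    # negative coordinates still yield a non-negative offset.
    first, second = divmod(int(x), 16)
    third, fourth = divmod(int(z), 16)
    return first, second, third, fourth
```

For example, block x=-1 belongs to chunk -1 at offset 15, not to chunk 0 with a negative offset, which is exactly what the chunk cache keys require.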
def taxicab2(x1, y1, x2, y2):
"""
Return the taxicab distance between two blocks.
"""
return abs(x1 - x2) + abs(y1 - y2)
def taxicab3(x1, y1, z1, x2, y2, z2):
"""
Return the taxicab distance between two blocks, in three dimensions.
"""
return abs(x1 - x2) + abs(y1 - y2) + abs(z1 - z2)
def adjust_coords_for_face(coords, face):
"""
Adjust a set of coords according to a face.
The face is a standard string descriptor, such as "+x".
The "noop" face is supported.
"""
x, y, z = coords
if face == "-x":
x -= 1
elif face == "+x":
x += 1
elif face == "-y":
y -= 1
elif face == "+y":
y += 1
elif face == "-z":
z -= 1
elif face == "+z":
z += 1
return x, y, z
def iterneighbors(x, y, z):
"""
Yield an iterable of neighboring block coordinates.
The first item in the iterable is the original coordinates.
Coordinates with invalid Y values are discarded automatically.
"""
for (dx, dy, dz) in (
( 0, 0, 0),
( 0, 0, 1),
( 0, 0, -1),
( 0, 1, 0),
( 0, -1, 0),
( 1, 0, 0),
(-1, 0, 0)):
- if 0 <= y + dy < 256:
+ if 0 <= y + dy < CHUNK_HEIGHT:
yield x + dx, y + dy, z + dz
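The Y-bounds filter above silently drops neighbors that would fall below bedrock or above the build limit. A Python 3 sketch of the same generator (with `CHUNK_HEIGHT` assumed to be 256, as in `bravo.chunk`):

```python
CHUNK_HEIGHT = 256  # assumed, matching bravo.chunk

def iterneighbors(x, y, z):
    # Yield the block itself plus its six axis-aligned neighbors,
    # discarding any coordinate with an out-of-bounds Y value.
    for dx, dy, dz in (
            (0, 0, 0), (0, 0, 1), (0, 0, -1),
            (0, 1, 0), (0, -1, 0), (1, 0, 0), (-1, 0, 0)):
        if 0 <= y + dy < CHUNK_HEIGHT:
            yield x + dx, y + dy, z + dz
```

At y=0 the -y neighbor is dropped and only six coordinates come back; in the interior all seven appear.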
def itercube(x, y, z, r):
"""
Yield an iterable of coordinates in a cube around a given block.
Coordinates with invalid Y values are discarded automatically.
"""
bx = x - r
tx = x + r + 1
by = max(y - r, 0)
- ty = min(y + r + 1, 256)
+ ty = min(y + r + 1, CHUNK_HEIGHT)
bz = z - r
tz = z + r + 1
return product(xrange(bx, tx), xrange(by, ty), xrange(bz, tz))
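`itercube` clamps only the Y range before taking the Cartesian product, so a cube near the floor or ceiling simply loses its out-of-bounds layers. A Python 3 sketch (again assuming `CHUNK_HEIGHT` is 256):

```python
from itertools import product

CHUNK_HEIGHT = 256  # assumed, matching bravo.chunk

def itercube(x, y, z, r):
    # Cube of radius r around (x, y, z); X and Z are unbounded, while
    # Y is clamped to the valid [0, CHUNK_HEIGHT) range.
    bx, tx = x - r, x + r + 1
    by, ty = max(y - r, 0), min(y + r + 1, CHUNK_HEIGHT)
    bz, tz = z - r, z + r + 1
    return product(range(bx, tx), range(by, ty), range(bz, tz))
```

A radius-1 cube in the interior covers 3x3x3 = 27 cells; the same cube at y=0 keeps only two Y layers, 3x2x3 = 18 cells.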
diff --git a/bravo/world.py b/bravo/world.py
index 6785202..f76cfd5 100644
--- a/bravo/world.py
+++ b/bravo/world.py
@@ -1,577 +1,577 @@
from array import array
from functools import wraps
from itertools import product
import random
import sys
import weakref
from twisted.internet import reactor
from twisted.internet.defer import (inlineCallbacks, maybeDeferred,
returnValue, succeed)
from twisted.internet.task import LoopingCall
from twisted.python import log
from bravo.beta.structures import Level
-from bravo.chunk import Chunk
+from bravo.chunk import Chunk, CHUNK_HEIGHT
from bravo.entity import Player, Furnace
from bravo.errors import (ChunkNotLoaded, SerializerReadException,
SerializerWriteException)
from bravo.ibravo import ISerializer
from bravo.plugin import retrieve_named_plugins
from bravo.utilities.coords import split_coords
from bravo.utilities.temporal import PendingEvent
from bravo.mobmanager import MobManager
class ImpossibleCoordinates(Exception):
"""
A coordinate could not ever be valid.
"""
def coords_to_chunk(f):
"""
Automatically look up the chunk for the coordinates, and convert world
coordinates to chunk coordinates.
"""
@wraps(f)
def decorated(self, coords, *args, **kwargs):
x, y, z = coords
# Fail early if Y is OOB.
- if not 0 <= y < 256:
+ if not 0 <= y < CHUNK_HEIGHT:
raise ImpossibleCoordinates("Y value %d is impossible" % y)
bigx, smallx, bigz, smallz = split_coords(x, z)
d = self.request_chunk(bigx, bigz)
@d.addCallback
def cb(chunk):
return f(self, chunk, (smallx, y, smallz), *args, **kwargs)
return d
return decorated
def sync_coords_to_chunk(f):
"""
Either get a chunk for the coordinates, or raise an exception.
"""
@wraps(f)
def decorated(self, coords, *args, **kwargs):
x, y, z = coords
# Fail early if Y is OOB.
- if not 0 <= y < 256:
+ if not 0 <= y < CHUNK_HEIGHT:
raise ImpossibleCoordinates("Y value %d is impossible" % y)
bigx, smallx, bigz, smallz = split_coords(x, z)
bigcoords = bigx, bigz
if bigcoords in self.chunk_cache:
chunk = self.chunk_cache[bigcoords]
elif bigcoords in self.dirty_chunk_cache:
chunk = self.dirty_chunk_cache[bigcoords]
else:
raise ChunkNotLoaded("Chunk (%d, %d) isn't loaded" % bigcoords)
return f(self, chunk, (smallx, y, smallz), *args, **kwargs)
return decorated
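The two decorators above share the same coordinate translation: reject impossible Y values early, split world X/Z into chunk and in-chunk parts, then hand the wrapped method the chunk plus translated coordinates. A Twisted-free sketch of that pattern, with a toy `World` whose `chunks` dict is a hypothetical stand-in for the real chunk caches:

```python
from functools import wraps

CHUNK_HEIGHT = 256  # assumed, matching bravo.chunk

def split_coords(x, z):
    first, second = divmod(int(x), 16)
    third, fourth = divmod(int(z), 16)
    return first, second, third, fourth

def sync_coords_to_chunk(f):
    # Simplified decorator: translate world coords to (chunk, in-chunk
    # coords), failing early on an out-of-bounds Y value.
    @wraps(f)
    def decorated(self, coords, *args, **kwargs):
        x, y, z = coords
        if not 0 <= y < CHUNK_HEIGHT:
            raise ValueError("Y value %d is impossible" % y)
        bigx, smallx, bigz, smallz = split_coords(x, z)
        chunk = self.chunks[bigx, bigz]  # hypothetical cache lookup
        return f(self, chunk, (smallx, y, smallz), *args, **kwargs)
    return decorated

class World:
    def __init__(self):
        self.chunks = {(2, -1): "chunk(2,-1)"}

    @sync_coords_to_chunk
    def probe(self, chunk, coords):
        return chunk, coords

w = World()
```

Calling `w.probe((33, 64, -5))` resolves to chunk (2, -1) with in-chunk coordinates (1, 64, 11); the real decorators do the same translation but return Deferreds or raise `ChunkNotLoaded` on a cache miss.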
class World(object):
"""
Object representing a world on disk.
Worlds are composed of levels and chunks, each of which corresponds to
exactly one file on disk. Worlds also contain saved player data.
"""
factory = None
"""
The factory managing this world.
Worlds do not need to be owned by a factory, but will not callback to
surrounding objects without an owner.
"""
season = None
"""
The current `ISeason`.
"""
saving = True
"""
Whether objects belonging to this world may be written out to disk.
"""
async = False
"""
Whether this world is using multiprocessing methods to generate geometry.
"""
dimension = "earth"
"""
The world dimension. Valid values are earth, sky, and nether.
"""
permanent_cache = None
"""
A permanent cache of chunks which are never evicted from memory.
This cache is used to speed up logins near the spawn point.
"""
level = Level(seed=0, spawn=(0, 0, 0), time=0)
"""
The initial level data.
"""
def __init__(self, config, name):
"""
:Parameters:
name : str
The configuration key to use to look up configuration data.
"""
self.config = config
self.config_name = "world %s" % name
self.chunk_cache = weakref.WeakValueDictionary()
self.dirty_chunk_cache = dict()
self._pending_chunks = dict()
def connect(self):
"""
Connect to the world.
"""
world_url = self.config.get(self.config_name, "url")
world_sf_name = self.config.get(self.config_name, "serializer")
# Get the current serializer list, and attempt to connect our
# serializer of choice to our resource.
# This could fail. Each of these lines (well, only the first and
# third) could raise a variety of exceptions. They should *all* be
# fatal.
serializers = retrieve_named_plugins(ISerializer, [world_sf_name])
self.serializer = serializers[0]
self.serializer.connect(world_url)
log.msg("World started on %s, using serializer %s" %
(world_url, self.serializer.name))
def start(self):
"""
Start managing a world.
Connect to the world and turn on all of the timed actions which
continuously manage the world.
"""
self.connect()
# Pick a random number for the seed. Use the configured value if one
# is present.
seed = random.randint(0, sys.maxint)
seed = self.config.getintdefault(self.config_name, "seed", seed)
self.level = self.level._replace(seed=seed)
# Check if we should offload chunk requests to ampoule.
if self.config.getbooleandefault("bravo", "ampoule", False):
try:
import ampoule
if ampoule:
self.async = True
except ImportError:
pass
log.msg("World is %s" %
("read-write" if self.saving else "read-only"))
log.msg("Using Ampoule: %s" % self.async)
# First, try loading the level, to see if there's any data out there
# which we can use. If not, don't worry about it.
d = maybeDeferred(self.serializer.load_level)
@d.addCallback
def cb(level):
self.level = level
log.msg("Loaded level data!")
@d.addErrback
def sre(failure):
failure.trap(SerializerReadException)
log.msg("Had issues loading level data, continuing anyway...")
# And now save our level.
if self.saving:
self.serializer.save_level(self.level)
self.chunk_management_loop = LoopingCall(self.sort_chunks)
self.chunk_management_loop.start(1)
self.mob_manager = MobManager() # XXX Put this in init or here?
self.mob_manager.world = self # XXX Put this in the managers constructor?
def stop(self):
"""
Stop managing the world.
This can be a time-consuming, blocking operation while the world's
data is serialized.
Note to callers: If you want the world time to be accurate, don't
forget to write it back before calling this method!
"""
self.chunk_management_loop.stop()
# Flush all dirty chunks to disk.
for chunk in self.dirty_chunk_cache.itervalues():
self.save_chunk(chunk)
# Evict all chunks.
self.chunk_cache.clear()
self.dirty_chunk_cache.clear()
# Save the level data.
self.serializer.save_level(self.level)
def enable_cache(self, size):
"""
Set the permanent cache size.
Changing the size of the cache sets off a series of events which will
empty or fill the cache to make it the proper size.
For reference, 3 is a large-enough size to completely satisfy the
Notchian client's login demands. 10 is enough to completely fill the
Notchian client's chunk buffer.
:param int size: The taxicab radius of the cache, in chunks
"""
log.msg("Setting cache size to %d, please hold..." % size)
self.permanent_cache = set()
assign = self.permanent_cache.add
x = self.level.spawn[0] // 16
z = self.level.spawn[2] // 16
rx = xrange(x - size, x + size + 1)
rz = xrange(z - size, z + size + 1)
for x, z in product(rx, rz):
log.msg("Adding %d, %d to cache..." % (x, z))
self.request_chunk(x, z).addCallback(assign)
log.msg("Cache size is now %d!" % size)
def sort_chunks(self):
"""
Sort out the internal caches.
This method will always block when there are dirty chunks.
"""
first = True
all_chunks = dict(self.dirty_chunk_cache)
all_chunks.update(self.chunk_cache)
self.chunk_cache.clear()
self.dirty_chunk_cache.clear()
for coords, chunk in all_chunks.iteritems():
if chunk.dirty:
if first:
first = False
self.save_chunk(chunk)
self.chunk_cache[coords] = chunk
else:
self.dirty_chunk_cache[coords] = chunk
else:
self.chunk_cache[coords] = chunk
def save_off(self):
"""
Disable saving to disk.
This is useful for accessing the world on disk without Bravo
interfering, for example when backing up the world.
"""
if not self.saving:
return
d = dict(self.chunk_cache)
self.chunk_cache = d
self.saving = False
def save_on(self):
"""
Enable saving to disk.
"""
if self.saving:
return
d = weakref.WeakValueDictionary(self.chunk_cache)
self.chunk_cache = d
self.saving = True
def postprocess_chunk(self, chunk):
"""
Do a series of final steps to bring a chunk into the world.
This method might be called multiple times on a chunk, but doing so
should be harmless.
"""
# Apply the current season to the chunk.
if self.season:
self.season.transform(chunk)
# Since this chunk hasn't been given to any player yet, there's no
# conceivable way that any meaningful damage has been accumulated;
# anybody loading any part of this chunk will want the entire thing.
# Thus, it should start out undamaged.
chunk.clear_damage()
# Skip some of the spendier scans if we have no factory; for example,
# if we are generating chunks offline.
if not self.factory:
return chunk
# Register the chunk's entities with our parent factory, logging
# through Twisted's log instead of bare print statements.
for entity in chunk.entities:
if hasattr(entity, 'loop'):
log.msg("Started mob!")
self.mob_manager.start_mob(entity)
else:
log.msg("Entity %r has no loop" % entity)
self.factory.register_entity(entity)
# XXX why is this for furnaces only? :T
# Scan the chunk for burning furnaces
for coords, tile in chunk.tiles.iteritems():
# If the furnace was saved while burning ...
if type(tile) == Furnace and tile.burntime != 0:
x, y, z = coords
coords = chunk.x, x, chunk.z, z, y
# ... start its burning loop
reactor.callLater(2, tile.changed, self.factory, coords)
# Return the chunk, in case we are in a Deferred chain.
return chunk
@inlineCallbacks
def request_chunk(self, x, z):
"""
Request a ``Chunk`` to be delivered later.
:returns: ``Deferred`` that will be called with the ``Chunk``
"""
if (x, z) in self.chunk_cache:
returnValue(self.chunk_cache[x, z])
elif (x, z) in self.dirty_chunk_cache:
returnValue(self.dirty_chunk_cache[x, z])
elif (x, z) in self._pending_chunks:
# Rig up another Deferred and wrap it up in a to-go box.
retval = yield self._pending_chunks[x, z].deferred()
returnValue(retval)
try:
chunk = yield maybeDeferred(self.serializer.load_chunk, x, z)
except SerializerReadException:
# Looks like the chunk wasn't already on disk. Guess we're gonna
# need to keep going.
chunk = Chunk(x, z)
if chunk.populated:
self.chunk_cache[x, z] = chunk
self.postprocess_chunk(chunk)
if self.factory:
self.factory.scan_chunk(chunk)
returnValue(chunk)
if self.async:
from ampoule import deferToAMPProcess
from bravo.remote import MakeChunk
generators = [plugin.name for plugin in self.pipeline]
d = deferToAMPProcess(MakeChunk, x=x, z=z, seed=self.level.seed,
generators=generators)
# Get chunk data into our chunk object.
def fill_chunk(kwargs):
chunk.blocks = array("B")
chunk.blocks.fromstring(kwargs["blocks"])
chunk.heightmap = array("B")
chunk.heightmap.fromstring(kwargs["heightmap"])
chunk.metadata = array("B")
chunk.metadata.fromstring(kwargs["metadata"])
chunk.skylight = array("B")
chunk.skylight.fromstring(kwargs["skylight"])
chunk.blocklight = array("B")
chunk.blocklight.fromstring(kwargs["blocklight"])
return chunk
d.addCallback(fill_chunk)
else:
# Populate the chunk the slow way. :c
for stage in self.pipeline:
stage.populate(chunk, self.level.seed)
chunk.regenerate()
d = succeed(chunk)
# Set up our event and generate our return-value Deferred. It has to
be done early because PendingEvents only fire exactly once and it
# might fire immediately in certain cases.
pe = PendingEvent()
# This one is for our return value.
retval = pe.deferred()
# This one is for scanning the chunk for automatons.
if self.factory:
pe.deferred().addCallback(self.factory.scan_chunk)
self._pending_chunks[x, z] = pe
def pp(chunk):
chunk.populated = True
chunk.dirty = True
self.postprocess_chunk(chunk)
self.dirty_chunk_cache[x, z] = chunk
del self._pending_chunks[x, z]
return chunk
# Set up callbacks.
d.addCallback(pp)
d.chainDeferred(pe)
# Because multiple people might be attached to this callback, we're
# going to do something magical here. We will yield a forked version
# of our Deferred. This means that we will wait right here, for a
# long, long time, before actually returning with the chunk, *but*,
# when we actually finish, we'll be ready to return the chunk
# immediately. Our caller cannot possibly care because they only see a
# Deferred either way.
retval = yield retval
returnValue(retval)
def save_chunk(self, chunk):
if not chunk.dirty or not self.saving:
return
d = maybeDeferred(self.serializer.save_chunk, chunk)
@d.addCallback
def cb(none):
chunk.dirty = False
@d.addErrback
def eb(failure):
failure.trap(SerializerWriteException)
log.msg("Couldn't write %r" % chunk)
def load_player(self, username):
"""
Retrieve player data.
:returns: a ``Deferred`` that will be fired with a ``Player``
"""
# Get the player, possibly.
d = maybeDeferred(self.serializer.load_player, username)
@d.addErrback
def eb(failure):
failure.trap(SerializerReadException)
log.msg("Couldn't load player %r" % username)
# Make a player.
player = Player(username=username)
player.location.x = self.level.spawn[0]
player.location.y = self.level.spawn[1]
player.location.stance = self.level.spawn[1]
player.location.z = self.level.spawn[2]
return player
# This Deferred's good to go as-is.
return d
def save_player(self, username, player):
if self.saving:
self.serializer.save_player(player)
# World-level geometry access.
# These methods let external API users refrain from going through the
# standard motions of looking up and loading chunk information.
@coords_to_chunk
def get_block(self, chunk, coords):
"""
Get a block from an unknown chunk.
:returns: a ``Deferred`` with the requested value
"""
return chunk.get_block(coords)
@coords_to_chunk
def set_block(self, chunk, coords, value):
"""
Set a block in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.set_block(coords, value)
@coords_to_chunk
def get_metadata(self, chunk, coords):
"""
Get a block's metadata from an unknown chunk.
:returns: a ``Deferred`` with the requested value
"""
return chunk.get_metadata(coords)
@coords_to_chunk
def set_metadata(self, chunk, coords, value):
"""
Set a block's metadata in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.set_metadata(coords, value)
@coords_to_chunk
def destroy(self, chunk, coords):
"""
Destroy a block in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.destroy(coords)
@coords_to_chunk
def mark_dirty(self, chunk, coords):
"""
Mark an unknown chunk dirty.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.dirty = True
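The world manager above addresses chunks by chunk coordinates, which are block coordinates floor-divided by 16 (as in the ``// 16`` conversions used when flushing chunks). A minimal, self-contained sketch of that conversion; the helper names are illustrative, not part of Bravo:

```python
def block_to_chunk(x, z):
    # Floor division maps block coordinates to the owning chunk;
    # unlike truncation, it is also correct for negative coordinates.
    return x // 16, z // 16

def block_in_chunk(x, z):
    # Chunk-local offsets are the remainders, always in [0, 16),
    # even for negative block coordinates.
    return x % 16, z % 16
```

For example, block (-1, 17) lives in chunk (-1, 1) at local offset (15, 1).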
|
bravoserver/bravo
|
951b251dc99b8733a9a886c3b5c41f978b652037
|
chunk: Introduce a constant for chunk height.
|
diff --git a/bravo/chunk.py b/bravo/chunk.py
index e133cea..90c13ab 100644
--- a/bravo/chunk.py
+++ b/bravo/chunk.py
@@ -1,524 +1,529 @@
from array import array
from functools import wraps
from itertools import product
from struct import pack
from warnings import warn
from bravo.blocks import blocks, glowing_blocks
from bravo.beta.packets import make_packet
from bravo.geometry.section import Section
from bravo.utilities.bits import pack_nibbles
from bravo.utilities.maths import clamp
+CHUNK_HEIGHT = 256
+"""
+The total height of chunks.
+"""
+
class ChunkWarning(Warning):
"""
Somebody did something inappropriate to this chunk, but it probably isn't
lethal, so the chunk is issuing a warning instead of an exception.
"""
def check_bounds(f):
"""
Decorate a function or method to have its first positional argument be
treated as an (x, y, z) tuple which must fit inside chunk boundaries of
16, 256, and 16, respectively.
A warning will be issued if the bounds check fails.
"""
@wraps(f)
def deco(chunk, coords, *args, **kwargs):
x, y, z = coords
# Coordinates were out-of-bounds; warn and run away.
if not (0 <= x < 16 and 0 <= z < 16 and 0 <= y < 256):
warn("Coordinates %s are OOB in %s() of %s, ignoring call"
% (coords, f.func_name, chunk), ChunkWarning, stacklevel=2)
# A concession towards where this decorator will be used. The
# value is likely to be discarded either way, but if the value is
# used, we shouldn't horribly die because of None/0 mismatch.
return 0
return f(chunk, coords, *args, **kwargs)
return deco
def ci(x, y, z):
"""
Turn an (x, y, z) tuple into a chunk index.
This is really a macro and not a function, but Python doesn't know the
difference. Hopefully this is faster on PyPy than on CPython.
"""
return (x * 16 + z) * 256 + y
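The index computed by ci() can be inverted with two divmod calls, which is a handy way to sanity-check the x-major, then z, then y layout. A small sketch; the ci_inverse name is illustrative, not part of Bravo:

```python
def ci(x, y, z):
    # Same layout as Bravo's chunk index: x-major, then z, then y.
    return (x * 16 + z) * 256 + y

def ci_inverse(index):
    # Peel the coordinates back off, innermost axis first.
    xz, y = divmod(index, 256)
    x, z = divmod(xz, 16)
    return x, y, z
```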
def segment_array(a):
"""
Chop up a chunk-sized array into sixteen components.
The chops are done in order to produce the smaller chunks preferred by
modern clients.
"""
l = [array(a.typecode) for chaff in range(16)]
index = 0
for i in range(0, len(a), 16):
l[index].extend(a[i:i + 16])
index = (index + 1) % 16
return l
def make_glows():
"""
Set up glow tables.
These tables provide glow maps for illuminated points.
"""
glow = [None] * 16
for i in range(16):
dim = 2 * i + 1
glow[i] = array("b", [0] * (dim**3))
for x, y, z in product(xrange(dim), repeat=3):
distance = abs(x - i) + abs(y - i) + abs(z - i)
glow[i][(x * dim + y) * dim + z] = i + 1 - distance
glow[i] = array("B", [clamp(x, 0, 15) for x in glow[i]])
return glow
glow = make_glows()
def composite_glow(target, strength, x, y, z):
"""
Composite a light source onto a lightmap.
The exact operation is not quite unlike an add.
"""
ambient = glow[strength]
xbound, ybound, zbound = 16, 256, 16
sx = x - strength
sy = y - strength
sz = z - strength
ex = x + strength
ey = y + strength
ez = z + strength
si, sj, sk = 0, 0, 0
ei, ej, ek = strength * 2, strength * 2, strength * 2
if sx < 0:
sx, si = 0, -sx
if sy < 0:
sy, sj = 0, -sy
if sz < 0:
sz, sk = 0, -sz
if ex > xbound:
ex, ei = xbound, ei - ex + xbound
if ey > ybound:
ey, ej = ybound, ej - ey + ybound
if ez > zbound:
ez, ek = zbound, ek - ez + zbound
adim = 2 * strength + 1
# Composite! Apologies for the loops.
for (tx, ax) in zip(range(sx, ex), range(si, ei)):
for (tz, az) in zip(range(sz, ez), range(sk, ek)):
for (ty, ay) in zip(range(sy, ey), range(sj, ej)):
ambient_index = (ax * adim + az) * adim + ay
target[ci(tx, ty, tz)] += ambient[ambient_index]
def iter_neighbors(coords):
"""
Iterate over the chunk-local coordinates surrounding the given
coordinates.
All coordinates are chunk-local.
Coordinates which are not valid chunk-local coordinates will not be
generated.
"""
x, z, y = coords
for dx, dz, dy in (
(1, 0, 0),
(-1, 0, 0),
(0, 1, 0),
(0, -1, 0),
(0, 0, 1),
(0, 0, -1)):
nx = x + dx
nz = z + dz
ny = y + dy
if not (0 <= nx < 16 and
0 <= nz < 16 and
0 <= ny < 256):
continue
yield nx, nz, ny
def neighboring_light(glow, block):
"""
Calculate the amount of light that should be shone on a block.
``glow`` is the brightest neighboring light. ``block`` is the slot of the
block being illuminated.
The return value is always a valid light value.
"""
return clamp(glow - blocks[block].dim, 0, 15)
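The attenuation above is just a clamped subtraction. A standalone sketch that takes the dim value directly rather than looking it up in bravo.blocks (the dim figures below are assumed for illustration):

```python
def clamp(value, low, high):
    # Pin a value to the inclusive range [low, high].
    return min(max(value, low), high)

def neighboring_light(glow, dim):
    # A block is lit by its brightest neighbor, minus however much
    # the block itself dims light, never leaving [0, 15].
    return clamp(glow - dim, 0, 15)
```

A fully transparent block (dim 0) next to full light stays at 15, while a fully opaque one (dim 15) goes dark.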
class Chunk(object):
"""
A chunk of blocks.
Chunks are large pieces of world geometry (block data). The blocks, light
maps, and associated metadata are stored in chunks. Chunks are
always measured 16x256x16 and are aligned on 16x16 boundaries in
the xz-plane.
:cvar bool dirty: Whether this chunk needs to be flushed to disk.
:cvar bool populated: Whether this chunk has had its initial block data
filled out.
"""
all_damaged = False
dirty = True
populated = False
def __init__(self, x, z):
"""
:param int x: X coordinate in chunk coords
:param int z: Z coordinate in chunk coords
:ivar array.array heightmap: Tracks the tallest block in each xz-column.
:ivar bool all_damaged: Flag for forcing the entire chunk to be
damaged. This is for efficiency; past a certain point, it is not
efficient to batch block updates or track damage. Heavily damaged
chunks have their damage represented as a complete resend of the
entire chunk.
"""
self.x = int(x)
self.z = int(z)
self.heightmap = array("B", [0] * (16 * 16))
self.blocklight = array("B", [0] * (16 * 16 * 256))
self.sections = [Section() for i in range(16)]
self.entities = set()
self.tiles = {}
self.damaged = set()
def __repr__(self):
return "Chunk(%d, %d)" % (self.x, self.z)
__str__ = __repr__
def regenerate_heightmap(self):
"""
Regenerate the height map array.
The height map is merely the position of the tallest block in any
xz-column.
"""
for x in range(16):
for z in range(16):
column = x * 16 + z
for y in range(255, -1, -1):
if self.get_block((x, y, z)):
break
self.heightmap[column] = y
def regenerate_blocklight(self):
lightmap = array("L", [0] * (16 * 16 * 256))
for x, y, z in product(xrange(16), xrange(256), xrange(16)):
block = self.get_block((x, y, z))
if block in glowing_blocks:
composite_glow(lightmap, glowing_blocks[block], x, y, z)
self.blocklight = array("B", [clamp(x, 0, 15) for x in lightmap])
def regenerate_skylight(self):
"""
Regenerate the ambient light map.
Each block's individual light comes from two sources. The ambient
light comes from the sky.
The height map must be valid for this method to produce valid results.
"""
# Create an array of skylights, and a mask of dimming blocks.
lights = [0xf] * (16 * 16)
mask = [0x0] * (16 * 16)
# For each y-level, we're going to update the mask, apply it to the
# lights, apply the lights to the section, and then blur the lights
# and move downwards. Since empty sections are full of air, and air
# doesn't ever dim, ignoring empty sections should be a correct way
# to speed things up. Another optimization is that the process ends
# early if the entire slice of lights is dark.
for section in reversed(self.sections):
if not section:
continue
for y in range(15, -1, -1):
# Early-out if there's no more light left.
if not any(lights):
break
# Update the mask.
for x, z in product(range(16), repeat=2):
offset = x * 16 + z
block = section.get_block((x, y, z))
mask[offset] = blocks[block].dim
# Apply the mask to the lights.
for i, dim in enumerate(mask):
# Keep it positive.
lights[i] = max(0, lights[i] - dim)
# Apply the lights to the section.
for x, z in product(range(16), repeat=2):
offset = x * 16 + z
section.set_skylight((x, y, z), lights[offset])
# XXX blur the lights
# And continue moving downward.
def regenerate(self):
"""
Regenerate all auxiliary tables.
"""
self.regenerate_heightmap()
self.regenerate_blocklight()
self.regenerate_skylight()
self.dirty = True
def damage(self, coords):
"""
Record damage on this chunk.
"""
if self.all_damaged:
return
x, y, z = coords
self.damaged.add(coords)
# The number 176 represents the threshold at which it is cheaper to
# resend the entire chunk instead of individual blocks.
if len(self.damaged) > 176:
self.all_damaged = True
self.damaged.clear()
def is_damaged(self):
"""
Determine whether any damage is pending on this chunk.
:rtype: bool
:returns: True if any damage is pending on this chunk, False if not.
"""
return self.all_damaged or bool(self.damaged)
def get_damage_packet(self):
"""
Make a packet representing the current damage on this chunk.
This method is not private, but some care should be taken with it,
since it wraps some fairly cryptic internal data structures.
If this chunk is currently undamaged, this method will return an empty
string, which should be safe to treat as a packet. Please check with
`is_damaged()` before doing this if you need to optimize this case.
To avoid extra overhead, this method should really be used in
conjunction with `Factory.broadcast_for_chunk()`.
Do not forget to clear this chunk's damage! Callers are responsible
for doing this.
>>> packet = chunk.get_damage_packet()
>>> factory.broadcast_for_chunk(packet, chunk.x, chunk.z)
>>> chunk.clear_damage()
:rtype: str
:returns: String representation of the packet.
"""
if self.all_damaged:
# Resend the entire chunk!
return self.save_to_packet()
elif not self.damaged:
# Send nothing at all; we don't even have a scratch on us.
return ""
elif len(self.damaged) == 1:
# Use a single block update packet. Find the first (only) set bit
# in the damaged array, and use it as an index.
coords = next(iter(self.damaged))
block = self.get_block(coords)
metadata = self.get_metadata(coords)
x, y, z = coords
return make_packet("block",
x=x + self.x * 16,
y=y,
z=z + self.z * 16,
type=block,
meta=metadata)
else:
# Use a batch update.
records = []
for coords in self.damaged:
block = self.get_block(coords)
metadata = self.get_metadata(coords)
x, y, z = coords
record = x << 28 | z << 24 | y << 16 | block << 4 | metadata
records.append(record)
data = "".join(pack(">I", record) for record in records)
return make_packet("batch", x=self.x, z=self.z,
count=len(records), data=data)
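Each batch record above packs five fields into 32 bits: chunk-local x and z (4 bits each), y (8 bits), block type (12 bits), and metadata (4 bits). A sketch of packing and unpacking one record; the helper names are illustrative, not part of Bravo:

```python
def pack_record(x, y, z, block, metadata):
    # Bit layout: x in bits 28-31, z in 24-27, y in 16-23,
    # block in 4-15, metadata in 0-3.
    return x << 28 | z << 24 | y << 16 | block << 4 | metadata

def unpack_record(record):
    # Mask each field back out of the packed integer.
    metadata = record & 0xf
    block = record >> 4 & 0xfff
    y = record >> 16 & 0xff
    z = record >> 24 & 0xf
    x = record >> 28 & 0xf
    return x, y, z, block, metadata
```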
def clear_damage(self):
"""
Clear this chunk's damage.
"""
self.damaged.clear()
self.all_damaged = False
def save_to_packet(self):
"""
Generate a chunk packet.
"""
mask = 0
packed = []
ls = segment_array(self.blocklight)
for i, section in enumerate(self.sections):
if any(section.blocks):
mask |= 1 << i
packed.append(section.blocks.tostring())
for i, section in enumerate(self.sections):
if mask & 1 << i:
packed.append(pack_nibbles(section.metadata))
for i, l in enumerate(ls):
if mask & 1 << i:
packed.append(pack_nibbles(l))
for i, section in enumerate(self.sections):
if mask & 1 << i:
packed.append(pack_nibbles(section.skylight))
# Fake the biome data.
packed.append("\x00" * 256)
packet = make_packet("chunk", x=self.x, z=self.z, continuous=True,
primary=mask, add=0x0, data="".join(packed))
return packet
@check_bounds
def get_block(self, coords):
"""
Look up a block value.
:param tuple coords: coordinate triplet
:rtype: int
:returns: int representing block type
"""
x, y, z = coords
index, y = divmod(y, 16)
return self.sections[index].get_block((x, y, z))
@check_bounds
def set_block(self, coords, block):
"""
Update a block value.
:param tuple coords: coordinate triplet
:param int block: block type
"""
x, y, z = coords
index, section_y = divmod(y, 16)
column = x * 16 + z
if self.get_block(coords) != block:
self.sections[index].set_block((x, section_y, z), block)
if not self.populated:
return
# Regenerate heightmap at this coordinate.
if block:
self.heightmap[column] = max(self.heightmap[column], y)
else:
# If we replace the highest block with air, we need to go
# through all blocks below it to find the new top block.
height = self.heightmap[column]
if y == height:
for y in range(height, -1, -1):
if self.get_block((x, y, z)):
break
self.heightmap[column] = y
# Do the blocklight at this coordinate, if appropriate.
if block in glowing_blocks:
composite_glow(self.blocklight, glowing_blocks[block],
x, y, z)
self.blocklight = array("B", [clamp(x, 0, 15) for x in
self.blocklight])
# And the skylight.
glow = max(self.get_skylight((nx, ny, nz))
for nx, nz, ny in iter_neighbors((x, z, y)))
self.set_skylight((x, y, z), neighboring_light(glow, block))
self.dirty = True
self.damage(coords)
@check_bounds
def get_metadata(self, coords):
"""
Look up metadata.
:param tuple coords: coordinate triplet
:rtype: int
"""
|
bravoserver/bravo
|
8a04b9fa99145afe4a199116e2a12860813953f7
|
motd: Patches welcome!
|
diff --git a/bravo/motd.py b/bravo/motd.py
index 89a77c2..e078275 100644
--- a/bravo/motd.py
+++ b/bravo/motd.py
@@ -1,70 +1,72 @@
from random import choice
motds = """
Open-source!
%
+Patches welcome!
+%
Distribute!
%
Reverse-engineered!
%
Don't look directly at the features!
%
The work of MAD!
%
Made in the USA!
%
Celestial!
%
Asynchronous!
%
Seasons!
%
Sponges!
%
Simplex noise!
%
MIT-licensed!
%
Unit-tested!
%
Documented!
%
Password login!
%
Fluid simulations!
%
Whoo, Bukkit!
%
Whoo, Charged Miners!
%
Whoo, Mineflayer!
%
Whoo, Mineserver!
%
Whoo, craftd!
%
Can't be held by Bukkit!
%
The test that stumped them all!
%
Comfortably numb!
%
The hidden riddle!
%
We are all made of stars!
%
Out of beta and releasing on time!
%
Still alive!
%
"Pentasyllabic" is an autonym!
"""
motds = [i.strip() for i in motds.split("%")]
def get_motd():
"""
Retrieve a random MOTD.
"""
return choice(motds)
|
bravoserver/bravo
|
87a88403d5f9a16aaf939f06a5b982af811b28a8
|
plugins/physics: Fix another OOB access in Fluid.
|
diff --git a/bravo/plugins/physics.py b/bravo/plugins/physics.py
index 347fe06..84d3194 100644
--- a/bravo/plugins/physics.py
+++ b/bravo/plugins/physics.py
@@ -1,346 +1,348 @@
from itertools import chain
from twisted.internet.defer import inlineCallbacks
from twisted.internet.task import LoopingCall
from zope.interface import implements
from bravo.blocks import blocks
from bravo.ibravo import IAutomaton, IDigHook
from bravo.utilities.automatic import naive_scan
from bravo.utilities.coords import itercube, iterneighbors
from bravo.utilities.spatial import Block2DSpatialDict, Block3DSpatialDict
from bravo.world import ChunkNotLoaded
FALLING = 0x8
"""
Flag indicating whether fluid is in freefall.
"""
class Fluid(object):
"""
Fluid simulator.
"""
implements(IAutomaton, IDigHook)
sponge = None
"""
Block that will soak up fluids and springs that are near it.
Defaults to None, which effectively disables this feature.
"""
def __init__(self, factory):
self.factory = factory
self.sponges = Block3DSpatialDict()
self.springs = Block2DSpatialDict()
self.tracked = set()
self.new = set()
self.loop = LoopingCall(self.process)
def start(self):
if not self.loop.running:
self.loop.start(self.step)
def stop(self):
if self.loop.running:
self.loop.stop()
def schedule(self):
if self.tracked:
self.start()
else:
self.stop()
@property
def blocks(self):
retval = [self.spring, self.fluid]
if self.sponge:
retval.append(self.sponge)
return retval
def feed(self, coordinates):
"""
Accept the coordinates and stash them for later processing.
"""
self.tracked.add(coordinates)
self.schedule()
scan = naive_scan
def update_fluid(self, w, coords, falling, level=0):
if not 0 <= coords[1] < 256:
return False
block = w.sync_get_block(coords)
if (block in self.whitespace and not
any(self.sponges.iteritemsnear(coords, 2))):
w.sync_set_block(coords, self.fluid)
if falling:
level |= FALLING
w.sync_set_metadata(coords, level)
self.new.add(coords)
return True
return False
def add_sponge(self, w, x, y, z):
# Track this sponge.
self.sponges[x, y, z] = True
# Destroy the water! Destroy!
for coords in itercube(x, y, z, 2):
try:
target = w.sync_get_block(coords)
if target == self.spring:
if (coords[0], coords[2]) in self.springs:
del self.springs[coords[0],
coords[2]]
w.sync_destroy(coords)
elif target == self.fluid:
w.sync_destroy(coords)
except ChunkNotLoaded:
pass
# And now mark our surroundings so that they can be
# updated appropriately.
for coords in itercube(x, y, z, 3):
if coords != (x, y, z):
self.new.add(coords)
def add_spring(self, w, x, y, z):
# Double-check that we weren't placed inside a sponge. That's just
# not going to work out.
if any(self.sponges.iteritemsnear((x, y, z), 2)):
w.sync_destroy((x, y, z))
return
# Track this spring.
self.springs[x, z] = y
# Neighbors on the xz-level.
neighbors = ((x - 1, y, z), (x + 1, y, z), (x, y, z - 1),
(x, y, z + 1))
# Spawn water from springs.
for coords in neighbors:
try:
self.update_fluid(w, coords, False)
except ChunkNotLoaded:
pass
# Is this water falling down to the next y-level? We don't really
# care, but we'll run the update nonetheless.
if y > 0:
# Our downstairs pal.
below = x, y - 1, z
self.update_fluid(w, below, True)
def add_fluid(self, w, x, y, z):
# Neighbors on the xz-level.
neighbors = ((x - 1, y, z), (x + 1, y, z), (x, y, z - 1),
(x, y, z + 1))
# Our downstairs pal.
below = (x, y - 1, z)
# Double-check that we weren't placed inside a sponge.
if any(self.sponges.iteritemsnear((x, y, z), 2)):
w.sync_destroy((x, y, z))
return
# First, figure out whether or not we should be spreading. Let's see
# if there are any springs nearby which are above us and thus able to
# fuel us.
if not any(springy >= y
for springy in
self.springs.itervaluesnear((x, z), self.levels + 1)):
# Oh noes, we're drying up! We should mark our neighbors and dry
# ourselves up.
self.new.update(neighbors)
if y:
self.new.add(below)
w.sync_destroy((x, y, z))
return
newmd = self.levels + 1
for coords in neighbors:
try:
jones = w.sync_get_block(coords)
if jones == self.spring:
newmd = 0
self.new.update(neighbors)
break
elif jones == self.fluid:
jonesmd = w.sync_get_metadata(coords) & ~FALLING
if jonesmd + 1 < newmd:
newmd = jonesmd + 1
except ChunkNotLoaded:
pass
current_md = w.sync_get_metadata((x,y,z))
if newmd > self.levels and current_md < FALLING:
# We should dry up.
self.new.update(neighbors)
if y:
self.new.add(below)
w.sync_destroy((x, y, z))
return
# Mark any neighbors which should adjust themselves. This will only
# mark lower water levels than ourselves, and only if they are
# definitely too low.
for coords in neighbors:
try:
neighbor = w.sync_get_metadata(coords)
if neighbor & ~FALLING > newmd + 1:
self.new.add(coords)
except ChunkNotLoaded:
pass
# Now, it's time to extend water. Remember, either the water flows
# downward to the next y-level, or it flows out across the xz-level,
# but *not* both.
# Fall down to the next y-level, if possible.
if y and self.update_fluid(w, below, True, newmd):
return
# Clamp our newmd and assign. Also, set ourselves again; we changed
# this time and we might change again.
if current_md < FALLING:
w.sync_set_metadata((x, y, z), newmd)
# If pending block is already above fluid, don't keep spreading.
if neighbor == self.fluid:
return
# Otherwise, just fill our neighbors with water, where applicable, and
# mark them.
if newmd < self.levels:
newmd += 1
for coords in neighbors:
try:
self.update_fluid(w, coords, False, newmd)
except ChunkNotLoaded:
pass
def remove_sponge(self, x, y, z):
# The evil sponge tyrant is gone. Flow, minions, flow!
for coords in itercube(x, y, z, 3):
if coords != (x, y, z):
self.new.add(coords)
def remove_spring(self, x, y, z):
# Neighbors on the xz-level.
neighbors = ((x - 1, y, z), (x + 1, y, z), (x, y, z - 1),
(x, y, z + 1))
- # Our downstairs pal.
- below = (x, y - 1, z)
# Destroyed spring. Add neighbors and below to blocks to update.
del self.springs[x, z]
self.new.update(neighbors)
- self.new.add(below)
+
+ if y:
+ # Our downstairs pal.
+ below = x, y - 1, z
+ self.new.add(below)
def process(self):
w = self.factory.world
for x, y, z in self.tracked:
# Try each block separately. If it can't be done, it'll be
# discarded from the set simply by not being added to the new set
# for the next iteration.
try:
block = w.sync_get_block((x, y, z))
if block == self.sponge:
self.add_sponge(w, x, y, z)
elif block == self.spring:
self.add_spring(w, x, y, z)
elif block == self.fluid:
self.add_fluid(w, x, y, z)
else:
# Hm, why would a pending block not be any of the things
# we care about? Maybe it used to be a spring or
# something?
if (x, z) in self.springs:
self.remove_spring(x, y, z)
elif (x, y, z) in self.sponges:
self.remove_sponge(x, y, z)
except ChunkNotLoaded:
pass
# Flush affected chunks.
to_flush = set()
for x, y, z in chain(self.tracked, self.new):
to_flush.add((x // 16, z // 16))
for x, z in to_flush:
d = self.factory.world.request_chunk(x, z)
d.addCallback(self.factory.flush_chunk)
self.tracked = self.new
self.new = set()
# Prune, and reschedule.
self.schedule()
@inlineCallbacks
def dig_hook(self, chunk, x, y, z, block):
"""
Check for neighboring water that might want to spread.
Also check to see whether we are, for example, dug ice that has turned
back into water.
"""
x += chunk.x * 16
z += chunk.z * 16
# Check for sponges first, since they will mark the entirety of the
# area.
if block == self.sponge:
for coords in itercube(x, y, z, 3):
self.tracked.add(coords)
else:
for coords in iterneighbors(x, y, z):
test_block = yield self.factory.world.get_block(coords)
if test_block in (self.spring, self.fluid):
self.tracked.add(coords)
self.schedule()
before = ("build",)
after = tuple()
class Water(Fluid):
spring = blocks["spring"].slot
fluid = blocks["water"].slot
levels = 7
sponge = blocks["sponge"].slot
whitespace = (blocks["air"].slot, blocks["snow"].slot)
meltables = (blocks["ice"].slot,)
step = 0.2
name = "water"
class Lava(Fluid):
spring = blocks["lava-spring"].slot
fluid = blocks["lava"].slot
levels = 3
whitespace = (blocks["air"].slot, blocks["snow"].slot)
meltables = (blocks["ice"].slot,)
step = 0.5
name = "lava"
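Fluid metadata above packs a level into the low bits alongside the FALLING flag (0x8). A sketch of setting and masking the flag, mirroring what update_fluid and add_fluid do inline (the helper names are illustrative, not part of Bravo):

```python
FALLING = 0x8
# Flag bit in fluid metadata indicating the fluid is in freefall.

def with_falling(level, falling):
    # Combine a fluid level with the freefall flag, as update_fluid does.
    return level | FALLING if falling else level

def level_of(metadata):
    # Strip the freefall flag to recover the raw level, as add_fluid
    # does with `& ~FALLING`.
    return metadata & ~FALLING
```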
|
bravoserver/bravo
|
d7fd2807a408498d73919fee9e258832011705d3
|
plugins/physics: Fix OOB access in Fluid.
|
diff --git a/bravo/plugins/physics.py b/bravo/plugins/physics.py
index 62804e8..347fe06 100644
--- a/bravo/plugins/physics.py
+++ b/bravo/plugins/physics.py
@@ -1,344 +1,346 @@
-from itertools import chain, product
+from itertools import chain
from twisted.internet.defer import inlineCallbacks
from twisted.internet.task import LoopingCall
from zope.interface import implements
from bravo.blocks import blocks
from bravo.ibravo import IAutomaton, IDigHook
from bravo.utilities.automatic import naive_scan
from bravo.utilities.coords import itercube, iterneighbors
from bravo.utilities.spatial import Block2DSpatialDict, Block3DSpatialDict
from bravo.world import ChunkNotLoaded
FALLING = 0x8
"""
Flag indicating whether fluid is in freefall.
"""
class Fluid(object):
"""
Fluid simulator.
"""
implements(IAutomaton, IDigHook)
sponge = None
"""
Block that will soak up fluids and springs that are near it.
Defaults to None, which effectively disables this feature.
"""
def __init__(self, factory):
self.factory = factory
self.sponges = Block3DSpatialDict()
self.springs = Block2DSpatialDict()
self.tracked = set()
self.new = set()
self.loop = LoopingCall(self.process)
def start(self):
if not self.loop.running:
self.loop.start(self.step)
def stop(self):
if self.loop.running:
self.loop.stop()
def schedule(self):
if self.tracked:
self.start()
else:
self.stop()
@property
def blocks(self):
retval = [self.spring, self.fluid]
if self.sponge:
retval.append(self.sponge)
return retval
def feed(self, coordinates):
"""
Accept the coordinates and stash them for later processing.
"""
self.tracked.add(coordinates)
self.schedule()
scan = naive_scan
def update_fluid(self, w, coords, falling, level=0):
-
- if not 0 <= coords[1] < 128:
+ if not 0 <= coords[1] < 256:
return False
block = w.sync_get_block(coords)
if (block in self.whitespace and not
any(self.sponges.iteritemsnear(coords, 2))):
w.sync_set_block(coords, self.fluid)
if falling:
level |= FALLING
w.sync_set_metadata(coords, level)
self.new.add(coords)
return True
return False
def add_sponge(self, w, x, y, z):
# Track this sponge.
self.sponges[x, y, z] = True
# Destroy the water! Destroy!
for coords in itercube(x, y, z, 2):
try:
target = w.sync_get_block(coords)
if target == self.spring:
if (coords[0], coords[2]) in self.springs:
del self.springs[coords[0],
coords[2]]
w.sync_destroy(coords)
elif target == self.fluid:
w.sync_destroy(coords)
except ChunkNotLoaded:
pass
# And now mark our surroundings so that they can be
# updated appropriately.
for coords in itercube(x, y, z, 3):
if coords != (x, y, z):
self.new.add(coords)
def add_spring(self, w, x, y, z):
# Double-check that we weren't placed inside a sponge. That's just
# not going to work out.
if any(self.sponges.iteritemsnear((x, y, z), 2)):
w.sync_destroy((x, y, z))
return
# Track this spring.
self.springs[x, z] = y
# Neighbors on the xz-level.
neighbors = ((x - 1, y, z), (x + 1, y, z), (x, y, z - 1),
(x, y, z + 1))
- # Our downstairs pal.
- below = (x, y - 1, z)
# Spawn water from springs.
for coords in neighbors:
try:
self.update_fluid(w, coords, False)
except ChunkNotLoaded:
pass
# Is this water falling down to the next y-level? We don't really
# care, but we'll run the update nonetheless.
- self.update_fluid(w, below, True)
+ if y > 0:
+ # Our downstairs pal.
+ below = x, y - 1, z
+ self.update_fluid(w, below, True)
def add_fluid(self, w, x, y, z):
# Neighbors on the xz-level.
neighbors = ((x - 1, y, z), (x + 1, y, z), (x, y, z - 1),
(x, y, z + 1))
# Our downstairs pal.
below = (x, y - 1, z)
# Double-check that we weren't placed inside a sponge.
if any(self.sponges.iteritemsnear((x, y, z), 2)):
w.sync_destroy((x, y, z))
return
# First, figure out whether or not we should be spreading. Let's see
# if there are any springs nearby which are above us and thus able to
# fuel us.
if not any(springy >= y
for springy in
self.springs.itervaluesnear((x, z), self.levels + 1)):
# Oh noes, we're drying up! We should mark our neighbors and dry
# ourselves up.
self.new.update(neighbors)
- self.new.add(below)
+ if y:
+ self.new.add(below)
w.sync_destroy((x, y, z))
return
newmd = self.levels + 1
for coords in neighbors:
try:
jones = w.sync_get_block(coords)
if jones == self.spring:
newmd = 0
self.new.update(neighbors)
break
elif jones == self.fluid:
jonesmd = w.sync_get_metadata(coords) & ~FALLING
if jonesmd + 1 < newmd:
newmd = jonesmd + 1
except ChunkNotLoaded:
pass
current_md = w.sync_get_metadata((x,y,z))
if newmd > self.levels and current_md < FALLING:
# We should dry up.
self.new.update(neighbors)
- self.new.add(below)
+ if y:
+ self.new.add(below)
w.sync_destroy((x, y, z))
return
# Mark any neighbors which should adjust themselves. This will only
# mark lower water levels than ourselves, and only if they are
# definitely too low.
for coords in neighbors:
try:
neighbor = w.sync_get_metadata(coords)
if neighbor & ~FALLING > newmd + 1:
self.new.add(coords)
except ChunkNotLoaded:
pass
# Now, it's time to extend water. Remember, either the water flows
# downward to the next y-level, or it flows out across the xz-level,
# but *not* both.
# Fall down to the next y-level, if possible.
- if self.update_fluid(w, below, True, newmd):
+ if y and self.update_fluid(w, below, True, newmd):
return
# Clamp our newmd and assign. Also, set ourselves again; we changed
# this time and we might change again.
if current_md < FALLING:
w.sync_set_metadata((x, y, z), newmd)
# If pending block is already above fluid, don't keep spreading.
if neighbor == self.fluid:
return
# Otherwise, just fill our neighbors with water, where applicable, and
# mark them.
if newmd < self.levels:
newmd += 1
for coords in neighbors:
try:
self.update_fluid(w, coords, False, newmd)
except ChunkNotLoaded:
pass
def remove_sponge(self, x, y, z):
# The evil sponge tyrant is gone. Flow, minions, flow!
for coords in itercube(x, y, z, 3):
if coords != (x, y, z):
self.new.add(coords)
def remove_spring(self, x, y, z):
# Neighbors on the xz-level.
neighbors = ((x - 1, y, z), (x + 1, y, z), (x, y, z - 1),
(x, y, z + 1))
# Our downstairs pal.
below = (x, y - 1, z)
# Destroyed spring. Add neighbors and below to blocks to update.
del self.springs[x, z]
self.new.update(neighbors)
self.new.add(below)
def process(self):
w = self.factory.world
for x, y, z in self.tracked:
# Try each block separately. If it can't be done, it'll be
# discarded from the set simply by not being added to the new set
# for the next iteration.
try:
block = w.sync_get_block((x, y, z))
if block == self.sponge:
self.add_sponge(w, x, y, z)
elif block == self.spring:
self.add_spring(w, x, y, z)
elif block == self.fluid:
self.add_fluid(w, x, y, z)
else:
# Hm, why would a pending block not be any of the things
# we care about? Maybe it used to be a spring or
# something?
if (x, z) in self.springs:
self.remove_spring(x, y, z)
elif (x, y, z) in self.sponges:
self.remove_sponge(x, y, z)
except ChunkNotLoaded:
pass
# Flush affected chunks.
to_flush = set()
for x, y, z in chain(self.tracked, self.new):
to_flush.add((x // 16, z // 16))
for x, z in to_flush:
d = self.factory.world.request_chunk(x, z)
d.addCallback(self.factory.flush_chunk)
self.tracked = self.new
self.new = set()
# Prune, and reschedule.
self.schedule()
@inlineCallbacks
def dig_hook(self, chunk, x, y, z, block):
"""
Check for neighboring water that might want to spread.
Also check to see whether we are, for example, dug ice that has turned
back into water.
"""
x += chunk.x * 16
z += chunk.z * 16
# Check for sponges first, since they will mark the entirety of the
# area.
if block == self.sponge:
for coords in itercube(x, y, z, 3):
self.tracked.add(coords)
else:
for coords in iterneighbors(x, y, z):
test_block = yield self.factory.world.get_block(coords)
if test_block in (self.spring, self.fluid):
self.tracked.add(coords)
self.schedule()
before = ("build",)
after = tuple()
class Water(Fluid):
spring = blocks["spring"].slot
fluid = blocks["water"].slot
levels = 7
sponge = blocks["sponge"].slot
whitespace = (blocks["air"].slot, blocks["snow"].slot)
meltables = (blocks["ice"].slot,)
step = 0.2
name = "water"
class Lava(Fluid):
spring = blocks["lava-spring"].slot
fluid = blocks["lava"].slot
levels = 3
whitespace = (blocks["air"].slot, blocks["snow"].slot)
meltables = (blocks["ice"].slot,)
step = 0.5
name = "lava"
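The fix above has two parts: raising the column-height check in `update_fluid()` from 128 to 256, and guarding every "downstairs pal" computation with `if y:` so the simulator never touches y = -1. The guard pattern can be sketched standalone (the `below_or_none` helper is hypothetical, not a Bravo function):

```python
def below_or_none(x, y, z):
    """Return the coordinates one block down, or None at bedrock.

    Mirrors the guards this commit adds to add_spring() and
    add_fluid(): the block below is only computed and updated when
    y > 0, so the world is never asked about a block at y = -1.
    """
    if y > 0:
        return (x, y - 1, z)
    return None

# Mid-column, flow downward proceeds normally...
assert below_or_none(5, 64, 5) == (5, 63, 5)
# ...but a spring or fluid sitting on bedrock simply skips the
# downward update instead of going out of bounds.
assert below_or_none(5, 0, 5) is None
```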
|
bravoserver/bravo
|
7ffbf6a88816abea089d68851f63ef09de897ac9
|
plugins/automatons: Fix OOB coordinate access in Grass.
|
diff --git a/bravo/plugins/automatons.py b/bravo/plugins/automatons.py
index a4b78e8..3d411e6 100644
--- a/bravo/plugins/automatons.py
+++ b/bravo/plugins/automatons.py
@@ -1,213 +1,213 @@
from __future__ import division
from collections import deque
from itertools import product
from random import randint, random
from twisted.internet import reactor
from twisted.internet.task import LoopingCall
from zope.interface import implements
from bravo.blocks import blocks
from bravo.ibravo import IAutomaton, IDigHook
from bravo.terrain.trees import ConeTree, NormalTree, RoundTree, RainforestTree
from bravo.utilities.automatic import column_scan
from bravo.world import ChunkNotLoaded
class Trees(object):
"""
Turn saplings into trees.
"""
implements(IAutomaton)
blocks = (blocks["sapling"].slot,)
grow_step_min = 15
grow_step_max = 60
trees = [
NormalTree,
ConeTree,
RoundTree,
RainforestTree,
]
def __init__(self, factory):
self.factory = factory
self.tracked = set()
def start(self):
# Noop for now -- this is wrong for several reasons.
pass
def stop(self):
for call in self.tracked:
if call.active():
call.cancel()
def process(self, coords):
try:
metadata = self.factory.world.sync_get_metadata(coords)
# Is this sapling ready to grow into a big tree? We use a bit-trick to
# check.
if metadata >= 12:
# Tree time!
tree = self.trees[metadata % 4](pos=coords)
tree.prepare(self.factory.world)
tree.make_trunk(self.factory.world)
tree.make_foliage(self.factory.world)
# We can't easily tell how many chunks were modified, so we have
# to flush all of them.
self.factory.flush_all_chunks()
else:
# Increment metadata.
metadata += 4
self.factory.world.sync_set_metadata(coords, metadata)
call = reactor.callLater(
randint(self.grow_step_min, self.grow_step_max), self.process,
coords)
self.tracked.add(call)
# Filter tracked set.
self.tracked = set(i for i in self.tracked if i.active())
except ChunkNotLoaded:
pass
def feed(self, coords):
call = reactor.callLater(
randint(self.grow_step_min, self.grow_step_max), self.process,
coords)
self.tracked.add(call)
scan = column_scan
name = "trees"
class Grass(object):
implements(IAutomaton, IDigHook)
blocks = (blocks["dirt"].slot,)
step = 1
def __init__(self, factory):
self.factory = factory
self.tracked = deque()
self.loop = LoopingCall(self.process)
def start(self):
if not self.loop.running:
self.loop.start(self.step, now=False)
def stop(self):
if self.loop.running:
self.loop.stop()
def process(self):
if not self.tracked:
return
# Effectively stop tracking this block. We'll add it back in if we're
# not finished with it.
coords = self.tracked.pop()
# Try to do our neighbor lookups. If it can't happen, don't worry
# about it; we can get to it later. Grass isn't exactly a
# super-high-tension thing that must happen.
try:
current = self.factory.world.sync_get_block(coords)
if current == blocks["dirt"].slot:
# Yep, it's still dirt. Let's look around and see whether it
# should be grassy. Our general strategy is as follows: We
# look at the blocks nearby. If at least eight of them are
# grass, grassiness is guaranteed, but if none of them are
# grass, grassiness just won't happen.
x, y, z = coords
# First things first: Grass can't grow if there's things on
# top of it, so check that first.
above = self.factory.world.sync_get_block((x, y + 1, z))
if above:
return
# The number of grassy neighbors.
grasses = 0
# Intentional shadow.
for x, y, z in product(xrange(x - 1, x + 2),
- xrange(y - 1, y + 4), xrange(z - 1, z + 2)):
+ xrange(max(y - 1, 0), y + 4), xrange(z - 1, z + 2)):
# Early-exit to avoid block lookup if we finish early.
if grasses >= 8:
break
block = self.factory.world.sync_get_block((x, y, z))
if block == blocks["grass"].slot:
grasses += 1
# Randomly determine whether we are finished.
if grasses / 8 >= random():
# Hey, let's make some grass.
self.factory.world.set_block(coords, blocks["grass"].slot)
# And schedule the chunk to be flushed.
x, y, z = coords
d = self.factory.world.request_chunk(x // 16, z // 16)
d.addCallback(self.factory.flush_chunk)
else:
# Not yet; add it back to the list.
self.tracked.appendleft(coords)
except ChunkNotLoaded:
pass
def feed(self, coords):
self.tracked.appendleft(coords)
scan = column_scan
def dig_hook(self, chunk, x, y, z, block):
if y > 0:
block = chunk.get_block((x, y - 1, z))
if block in self.blocks:
# Track it now.
coords = (chunk.x * 16 + x, y - 1, chunk.z * 16 + z)
self.tracked.appendleft(coords)
name = "grass"
before = tuple()
after = tuple()
class Rain(object):
"""
Make it rain.
Rain only occurs during spring.
"""
implements(IAutomaton)
blocks = tuple()
def __init__(self, factory):
self.factory = factory
self.season_loop = LoopingCall(self.check_season)
def scan(self, chunk):
pass
def feed(self, coords):
pass
def start(self):
self.season_loop.start(5 * 60)
def stop(self):
self.season_loop.stop()
def check_season(self):
if self.factory.world.season.name == "spring":
self.factory.vane.weather = "rainy"
reactor.callLater(1 * 60, setattr, self.factory.vane, "weather",
"sunny")
name = "rain"
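The one-line Grass fix clamps the bottom of the scanned y range to zero, so a dirt block at bedrock no longer asks the world for blocks at y = -1. A minimal sketch of the clamped neighborhood (ported to Python 3's `range`; the helper name is illustrative):

```python
from itertools import product

def grass_neighborhood(x, y, z):
    """Yield the coordinates Grass.process() inspects around a dirt
    block when counting grassy neighbors. The y range is clamped at
    zero, matching this commit's fix."""
    return product(range(x - 1, x + 2),
                   range(max(y - 1, 0), y + 4),
                   range(z - 1, z + 2))

# At bedrock the layer below is dropped: 3 * 4 * 3 = 36 coordinates,
# none of them out of bounds.
coords = list(grass_neighborhood(0, 0, 0))
assert len(coords) == 36
assert all(cy >= 0 for _, cy, _ in coords)
# Higher up, the full 3 * 5 * 3 = 45-block box is scanned.
assert len(list(grass_neighborhood(0, 5, 0))) == 45
```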
|
bravoserver/bravo
|
bedb35a4edb1aab1786769ab37e5c67627db04de
|
plugins/physics: Refactor to use itercube().
|
diff --git a/bravo/plugins/physics.py b/bravo/plugins/physics.py
index c3fa44d..62804e8 100644
--- a/bravo/plugins/physics.py
+++ b/bravo/plugins/physics.py
@@ -1,357 +1,344 @@
from itertools import chain, product
from twisted.internet.defer import inlineCallbacks
from twisted.internet.task import LoopingCall
from zope.interface import implements
from bravo.blocks import blocks
from bravo.ibravo import IAutomaton, IDigHook
from bravo.utilities.automatic import naive_scan
-from bravo.utilities.coords import iterneighbors
+from bravo.utilities.coords import itercube, iterneighbors
from bravo.utilities.spatial import Block2DSpatialDict, Block3DSpatialDict
from bravo.world import ChunkNotLoaded
FALLING = 0x8
"""
Flag indicating whether fluid is in freefall.
"""
class Fluid(object):
"""
Fluid simulator.
"""
implements(IAutomaton, IDigHook)
sponge = None
"""
Block that will soak up fluids and springs that are near it.
Defaults to None, which effectively disables this feature.
"""
def __init__(self, factory):
self.factory = factory
self.sponges = Block3DSpatialDict()
self.springs = Block2DSpatialDict()
self.tracked = set()
self.new = set()
self.loop = LoopingCall(self.process)
def start(self):
if not self.loop.running:
self.loop.start(self.step)
def stop(self):
if self.loop.running:
self.loop.stop()
def schedule(self):
if self.tracked:
self.start()
else:
self.stop()
@property
def blocks(self):
retval = [self.spring, self.fluid]
if self.sponge:
retval.append(self.sponge)
return retval
def feed(self, coordinates):
"""
Accept the coordinates and stash them for later processing.
"""
self.tracked.add(coordinates)
self.schedule()
scan = naive_scan
def update_fluid(self, w, coords, falling, level=0):
if not 0 <= coords[1] < 128:
return False
block = w.sync_get_block(coords)
if (block in self.whitespace and not
any(self.sponges.iteritemsnear(coords, 2))):
w.sync_set_block(coords, self.fluid)
if falling:
level |= FALLING
w.sync_set_metadata(coords, level)
self.new.add(coords)
return True
return False
def add_sponge(self, w, x, y, z):
# Track this sponge.
self.sponges[x, y, z] = True
# Destroy the water! Destroy!
- for coords in product(
- xrange(x - 2, x + 3),
- xrange(max(y - 2, 0), min(y + 3, 128)),
- xrange(z - 2, z + 3),
- ):
+ for coords in itercube(x, y, z, 2):
try:
target = w.sync_get_block(coords)
if target == self.spring:
if (coords[0], coords[2]) in self.springs:
del self.springs[coords[0],
coords[2]]
w.sync_destroy(coords)
elif target == self.fluid:
w.sync_destroy(coords)
except ChunkNotLoaded:
pass
# And now mark our surroundings so that they can be
# updated appropriately.
- for coords in product(
- xrange(x - 3, x + 4),
- xrange(max(y - 3, 0), min(y + 4, 128)),
- xrange(z - 3, z + 4),
- ):
+ for coords in itercube(x, y, z, 3):
if coords != (x, y, z):
self.new.add(coords)
def add_spring(self, w, x, y, z):
# Double-check that we weren't placed inside a sponge. That's just
# not going to work out.
if any(self.sponges.iteritemsnear((x, y, z), 2)):
w.sync_destroy((x, y, z))
return
# Track this spring.
self.springs[x, z] = y
# Neighbors on the xz-level.
neighbors = ((x - 1, y, z), (x + 1, y, z), (x, y, z - 1),
(x, y, z + 1))
# Our downstairs pal.
below = (x, y - 1, z)
# Spawn water from springs.
for coords in neighbors:
try:
self.update_fluid(w, coords, False)
except ChunkNotLoaded:
pass
# Is this water falling down to the next y-level? We don't really
# care, but we'll run the update nonetheless.
self.update_fluid(w, below, True)
def add_fluid(self, w, x, y, z):
# Neighbors on the xz-level.
neighbors = ((x - 1, y, z), (x + 1, y, z), (x, y, z - 1),
(x, y, z + 1))
# Our downstairs pal.
below = (x, y - 1, z)
# Double-check that we weren't placed inside a sponge.
if any(self.sponges.iteritemsnear((x, y, z), 2)):
w.sync_destroy((x, y, z))
return
# First, figure out whether or not we should be spreading. Let's see
# if there are any springs nearby which are above us and thus able to
# fuel us.
if not any(springy >= y
for springy in
self.springs.itervaluesnear((x, z), self.levels + 1)):
# Oh noes, we're drying up! We should mark our neighbors and dry
# ourselves up.
self.new.update(neighbors)
self.new.add(below)
w.sync_destroy((x, y, z))
return
newmd = self.levels + 1
for coords in neighbors:
try:
jones = w.sync_get_block(coords)
if jones == self.spring:
newmd = 0
self.new.update(neighbors)
break
elif jones == self.fluid:
jonesmd = w.sync_get_metadata(coords) & ~FALLING
if jonesmd + 1 < newmd:
newmd = jonesmd + 1
except ChunkNotLoaded:
pass
current_md = w.sync_get_metadata((x,y,z))
if newmd > self.levels and current_md < FALLING:
# We should dry up.
self.new.update(neighbors)
self.new.add(below)
w.sync_destroy((x, y, z))
return
# Mark any neighbors which should adjust themselves. This will only
# mark lower water levels than ourselves, and only if they are
# definitely too low.
for coords in neighbors:
try:
neighbor = w.sync_get_metadata(coords)
if neighbor & ~FALLING > newmd + 1:
self.new.add(coords)
except ChunkNotLoaded:
pass
# Now, it's time to extend water. Remember, either the water flows
# downward to the next y-level, or it flows out across the xz-level,
# but *not* both.
# Fall down to the next y-level, if possible.
if self.update_fluid(w, below, True, newmd):
return
# Clamp our newmd and assign. Also, set ourselves again; we changed
# this time and we might change again.
if current_md < FALLING:
w.sync_set_metadata((x, y, z), newmd)
# If pending block is already above fluid, don't keep spreading.
if neighbor == self.fluid:
return
# Otherwise, just fill our neighbors with water, where applicable, and
# mark them.
if newmd < self.levels:
newmd += 1
for coords in neighbors:
try:
self.update_fluid(w, coords, False, newmd)
except ChunkNotLoaded:
pass
def remove_sponge(self, x, y, z):
# The evil sponge tyrant is gone. Flow, minions, flow!
- for coords in product(xrange(x - 3, x + 4),
- xrange(max(y - 3, 0), min(y + 4, 128)), xrange(z - 3, z + 4)):
+ for coords in itercube(x, y, z, 3):
if coords != (x, y, z):
self.new.add(coords)
def remove_spring(self, x, y, z):
# Neighbors on the xz-level.
neighbors = ((x - 1, y, z), (x + 1, y, z), (x, y, z - 1),
(x, y, z + 1))
# Our downstairs pal.
below = (x, y - 1, z)
# Destroyed spring. Add neighbors and below to blocks to update.
del self.springs[x, z]
self.new.update(neighbors)
self.new.add(below)
def process(self):
w = self.factory.world
for x, y, z in self.tracked:
# Try each block separately. If it can't be done, it'll be
# discarded from the set simply by not being added to the new set
# for the next iteration.
try:
block = w.sync_get_block((x, y, z))
if block == self.sponge:
self.add_sponge(w, x, y, z)
elif block == self.spring:
self.add_spring(w, x, y, z)
elif block == self.fluid:
self.add_fluid(w, x, y, z)
else:
# Hm, why would a pending block not be any of the things
# we care about? Maybe it used to be a spring or
# something?
if (x, z) in self.springs:
self.remove_spring(x, y, z)
elif (x, y, z) in self.sponges:
self.remove_sponge(x, y, z)
except ChunkNotLoaded:
pass
# Flush affected chunks.
to_flush = set()
for x, y, z in chain(self.tracked, self.new):
to_flush.add((x // 16, z // 16))
for x, z in to_flush:
d = self.factory.world.request_chunk(x, z)
d.addCallback(self.factory.flush_chunk)
self.tracked = self.new
self.new = set()
# Prune, and reschedule.
self.schedule()
@inlineCallbacks
def dig_hook(self, chunk, x, y, z, block):
"""
Check for neighboring water that might want to spread.
Also check to see whether we are, for example, dug ice that has turned
back into water.
"""
x += chunk.x * 16
z += chunk.z * 16
# Check for sponges first, since they will mark the entirety of the
# area.
if block == self.sponge:
- for coords in product(
- xrange(x - 3, x + 4),
- xrange(max(y - 3, 0), min(y + 4, 128)),
- xrange(z - 3, z + 4),
- ):
+ for coords in itercube(x, y, z, 3):
self.tracked.add(coords)
else:
for coords in iterneighbors(x, y, z):
test_block = yield self.factory.world.get_block(coords)
if test_block in (self.spring, self.fluid):
self.tracked.add(coords)
self.schedule()
before = ("build",)
after = tuple()
class Water(Fluid):
spring = blocks["spring"].slot
fluid = blocks["water"].slot
levels = 7
sponge = blocks["sponge"].slot
whitespace = (blocks["air"].slot, blocks["snow"].slot)
meltables = (blocks["ice"].slot,)
step = 0.2
name = "water"
class Lava(Fluid):
spring = blocks["lava-spring"].slot
fluid = blocks["lava"].slot
levels = 3
whitespace = (blocks["air"].slot, blocks["snow"].slot)
meltables = (blocks["ice"].slot,)
step = 0.5
name = "lava"
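This refactor collapses four hand-rolled `product(xrange(...), ...)` cubes into calls to one shared helper. The equivalence can be checked with a Python 3 reimplementation of that helper (a sketch; the real `itercube` lives in `bravo.utilities.coords` and uses `xrange`):

```python
from itertools import product

def itercube(x, y, z, r, height=256):
    """Sketch of bravo.utilities.coords.itercube: every coordinate
    within Chebyshev distance r of (x, y, z), with the y range clamped
    to the world column [0, height)."""
    return product(range(x - r, x + r + 1),
                   range(max(y - r, 0), min(y + r + 1, height)),
                   range(z - r, z + r + 1))

# The removed hand-rolled form in add_sponge(), spelled out for
# radius 2 around the origin, enumerates exactly the same cube.
legacy = set(product(range(-2, 3),
                     range(max(-2, 0), min(3, 256)),
                     range(-2, 3)))
assert set(itercube(0, 0, 0, 2)) == legacy
```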
|
bravoserver/bravo
|
2129e5551c668a040bccdffbdc8b98db6318dcee
|
utilities/coords: Isolate itercube() and test.
|
diff --git a/bravo/tests/utilities/test_coords.py b/bravo/tests/utilities/test_coords.py
index 9e0fc08..d64cc1c 100644
--- a/bravo/tests/utilities/test_coords.py
+++ b/bravo/tests/utilities/test_coords.py
@@ -1,24 +1,35 @@
import unittest
-from bravo.utilities.coords import adjust_coords_for_face, iterneighbors
+from bravo.utilities.coords import (adjust_coords_for_face, itercube,
+ iterneighbors)
+
class TestAdjustCoords(unittest.TestCase):
def test_adjust_plusx(self):
coords = range(3)
adjusted = adjust_coords_for_face(coords, "+x")
self.assertEqual(adjusted, (1, 1, 2))
+
class TestIterNeighbors(unittest.TestCase):
def test_no_neighbors(self):
x, y, z = 0, -2, 0
self.assertEqual(list(iterneighbors(x, y, z)), [])
def test_above(self):
x, y, z = 0, 0, 0
self.assertTrue((0, 1, 0) in iterneighbors(x, y, z))
+
+
+class TestIterCube(unittest.TestCase):
+
+ def test_no_cube(self):
+ x, y, z, r = 0, -2, 0, 1
+
+ self.assertEqual(list(itercube(x, y, z, r)), [])
diff --git a/bravo/utilities/coords.py b/bravo/utilities/coords.py
index b9c87aa..eaffde3 100644
--- a/bravo/utilities/coords.py
+++ b/bravo/utilities/coords.py
@@ -1,107 +1,125 @@
"""
Utilities for coordinate handling and munging.
"""
+from itertools import product
from math import floor, ceil
def polar_round_vector(vector):
"""
Rounds a vector towards zero
"""
if vector[0] >= 0:
calculated_x = floor(vector[0])
else:
calculated_x = ceil(vector[0])
if vector[1] >= 0:
calculated_y = floor(vector[1])
else:
calculated_y = ceil(vector[1])
if vector[2] >= 0:
calculated_z = floor(vector[2])
else:
calculated_z = ceil(vector[2])
return calculated_x, calculated_y, calculated_z
def split_coords(x, z):
"""
Split a pair of coordinates into chunk and subchunk coordinates.
:param int x: the X coordinate
:param int z: the Z coordinate
:returns: a tuple of the X chunk, X subchunk, Z chunk, and Z subchunk
"""
first, second = divmod(int(x), 16)
third, fourth = divmod(int(z), 16)
return first, second, third, fourth
def taxicab2(x1, y1, x2, y2):
"""
Return the taxicab distance between two blocks.
"""
return abs(x1 - x2) + abs(y1 - y2)
def taxicab3(x1, y1, z1, x2, y2, z2):
"""
Return the taxicab distance between two blocks, in three dimensions.
"""
return abs(x1 - x2) + abs(y1 - y2) + abs(z1 - z2)
def adjust_coords_for_face(coords, face):
"""
Adjust a set of coords according to a face.
The face is a standard string descriptor, such as "+x".
The "noop" face is supported.
"""
x, y, z = coords
if face == "-x":
x -= 1
elif face == "+x":
x += 1
elif face == "-y":
y -= 1
elif face == "+y":
y += 1
elif face == "-z":
z -= 1
elif face == "+z":
z += 1
return x, y, z
def iterneighbors(x, y, z):
"""
Yield an iterable of neighboring block coordinates.
The first item in the iterable is the original coordinates.
Coordinates with invalid Y values are discarded automatically.
"""
for (dx, dy, dz) in (
( 0, 0, 0),
( 0, 0, 1),
( 0, 0, -1),
( 0, 1, 0),
( 0, -1, 0),
( 1, 0, 0),
(-1, 0, 0)):
if 0 <= y + dy < 256:
yield x + dx, y + dy, z + dz
+
+
+def itercube(x, y, z, r):
+ """
+ Yield an iterable of coordinates in a cube around a given block.
+
+ Coordinates with invalid Y values are discarded automatically.
+ """
+
+ bx = x - r
+ tx = x + r + 1
+ by = max(y - r, 0)
+ ty = min(y + r + 1, 256)
+ bz = z - r
+ tz = z + r + 1
+
+ return product(xrange(bx, tx), xrange(by, ty), xrange(bz, tz))
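A quick check of the boundary behavior the new `test_no_cube` asserts, using a Python 3 port of the helper above (`range` in place of `xrange`; otherwise the same clamping arithmetic):

```python
from itertools import product

def itercube(x, y, z, r):
    # Python 3 port of the helper above: clamp y to [0, 256).
    bx, tx = x - r, x + r + 1
    by, ty = max(y - r, 0), min(y + r + 1, 256)
    bz, tz = z - r, z + r + 1
    return product(range(bx, tx), range(by, ty), range(bz, tz))

# A cube centred at y = -2 with radius 1 lies entirely below the world
# column, so the clamped y range is empty and nothing is yielded.
assert list(itercube(0, -2, 0, 1)) == []
# A cube centred at bedrock keeps only its in-world slice: y in {0, 1}.
assert {c[1] for c in itercube(0, 0, 0, 1)} == {0, 1}
```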
|
bravoserver/bravo
|
d3566d687dcbf527523746e4f03803808a8994cd
|
plugins/physics: Use iterneighbors().
|
diff --git a/bravo/plugins/physics.py b/bravo/plugins/physics.py
index 73d9118..c3fa44d 100644
--- a/bravo/plugins/physics.py
+++ b/bravo/plugins/physics.py
@@ -1,363 +1,357 @@
from itertools import chain, product
from twisted.internet.defer import inlineCallbacks
from twisted.internet.task import LoopingCall
from zope.interface import implements
from bravo.blocks import blocks
from bravo.ibravo import IAutomaton, IDigHook
from bravo.utilities.automatic import naive_scan
+from bravo.utilities.coords import iterneighbors
from bravo.utilities.spatial import Block2DSpatialDict, Block3DSpatialDict
from bravo.world import ChunkNotLoaded
FALLING = 0x8
"""
Flag indicating whether fluid is in freefall.
"""
class Fluid(object):
"""
Fluid simulator.
"""
implements(IAutomaton, IDigHook)
sponge = None
"""
Block that will soak up fluids and springs that are near it.
Defaults to None, which effectively disables this feature.
"""
def __init__(self, factory):
self.factory = factory
self.sponges = Block3DSpatialDict()
self.springs = Block2DSpatialDict()
self.tracked = set()
self.new = set()
self.loop = LoopingCall(self.process)
def start(self):
if not self.loop.running:
self.loop.start(self.step)
def stop(self):
if self.loop.running:
self.loop.stop()
def schedule(self):
if self.tracked:
self.start()
else:
self.stop()
@property
def blocks(self):
retval = [self.spring, self.fluid]
if self.sponge:
retval.append(self.sponge)
return retval
def feed(self, coordinates):
"""
Accept the coordinates and stash them for later processing.
"""
self.tracked.add(coordinates)
self.schedule()
scan = naive_scan
def update_fluid(self, w, coords, falling, level=0):
if not 0 <= coords[1] < 128:
return False
block = w.sync_get_block(coords)
if (block in self.whitespace and not
any(self.sponges.iteritemsnear(coords, 2))):
w.sync_set_block(coords, self.fluid)
if falling:
level |= FALLING
w.sync_set_metadata(coords, level)
self.new.add(coords)
return True
return False
def add_sponge(self, w, x, y, z):
# Track this sponge.
self.sponges[x, y, z] = True
# Destroy the water! Destroy!
for coords in product(
xrange(x - 2, x + 3),
xrange(max(y - 2, 0), min(y + 3, 128)),
xrange(z - 2, z + 3),
):
try:
target = w.sync_get_block(coords)
if target == self.spring:
if (coords[0], coords[2]) in self.springs:
del self.springs[coords[0],
coords[2]]
w.sync_destroy(coords)
elif target == self.fluid:
w.sync_destroy(coords)
except ChunkNotLoaded:
pass
# And now mark our surroundings so that they can be
# updated appropriately.
for coords in product(
xrange(x - 3, x + 4),
xrange(max(y - 3, 0), min(y + 4, 128)),
xrange(z - 3, z + 4),
):
if coords != (x, y, z):
self.new.add(coords)
def add_spring(self, w, x, y, z):
# Double-check that we weren't placed inside a sponge. That's just
# not going to work out.
if any(self.sponges.iteritemsnear((x, y, z), 2)):
w.sync_destroy((x, y, z))
return
# Track this spring.
self.springs[x, z] = y
# Neighbors on the xz-level.
neighbors = ((x - 1, y, z), (x + 1, y, z), (x, y, z - 1),
(x, y, z + 1))
# Our downstairs pal.
below = (x, y - 1, z)
# Spawn water from springs.
for coords in neighbors:
try:
self.update_fluid(w, coords, False)
except ChunkNotLoaded:
pass
# Is this water falling down to the next y-level? We don't really
# care, but we'll run the update nonetheless.
self.update_fluid(w, below, True)
def add_fluid(self, w, x, y, z):
# Neighbors on the xz-level.
neighbors = ((x - 1, y, z), (x + 1, y, z), (x, y, z - 1),
(x, y, z + 1))
# Our downstairs pal.
below = (x, y - 1, z)
# Double-check that we weren't placed inside a sponge.
if any(self.sponges.iteritemsnear((x, y, z), 2)):
w.sync_destroy((x, y, z))
return
# First, figure out whether or not we should be spreading. Let's see
# if there are any springs nearby which are above us and thus able to
# fuel us.
if not any(springy >= y
for springy in
self.springs.itervaluesnear((x, z), self.levels + 1)):
# Oh noes, we're drying up! We should mark our neighbors and dry
# ourselves up.
self.new.update(neighbors)
self.new.add(below)
w.sync_destroy((x, y, z))
return
newmd = self.levels + 1
for coords in neighbors:
try:
jones = w.sync_get_block(coords)
if jones == self.spring:
newmd = 0
self.new.update(neighbors)
break
elif jones == self.fluid:
jonesmd = w.sync_get_metadata(coords) & ~FALLING
if jonesmd + 1 < newmd:
newmd = jonesmd + 1
except ChunkNotLoaded:
pass
current_md = w.sync_get_metadata((x,y,z))
if newmd > self.levels and current_md < FALLING:
# We should dry up.
self.new.update(neighbors)
self.new.add(below)
w.sync_destroy((x, y, z))
return
# Mark any neighbors which should adjust themselves. This will only
# mark lower water levels than ourselves, and only if they are
# definitely too low.
for coords in neighbors:
try:
neighbor = w.sync_get_metadata(coords)
if neighbor & ~FALLING > newmd + 1:
self.new.add(coords)
except ChunkNotLoaded:
pass
# Now, it's time to extend water. Remember, either the water flows
# downward to the next y-level, or it flows out across the xz-level,
# but *not* both.
# Fall down to the next y-level, if possible.
if self.update_fluid(w, below, True, newmd):
return
# Clamp our newmd and assign. Also, set ourselves again; we changed
# this time and we might change again.
if current_md < FALLING:
w.sync_set_metadata((x, y, z), newmd)
# If pending block is already above fluid, don't keep spreading.
if neighbor == self.fluid:
return
# Otherwise, just fill our neighbors with water, where applicable, and
# mark them.
if newmd < self.levels:
newmd += 1
for coords in neighbors:
try:
self.update_fluid(w, coords, False, newmd)
except ChunkNotLoaded:
pass
def remove_sponge(self, x, y, z):
# The evil sponge tyrant is gone. Flow, minions, flow!
for coords in product(xrange(x - 3, x + 4),
xrange(max(y - 3, 0), min(y + 4, 128)), xrange(z - 3, z + 4)):
if coords != (x, y, z):
self.new.add(coords)
def remove_spring(self, x, y, z):
# Neighbors on the xz-level.
neighbors = ((x - 1, y, z), (x + 1, y, z), (x, y, z - 1),
(x, y, z + 1))
# Our downstairs pal.
below = (x, y - 1, z)
# Destroyed spring. Add neighbors and below to blocks to update.
del self.springs[x, z]
self.new.update(neighbors)
self.new.add(below)
def process(self):
w = self.factory.world
for x, y, z in self.tracked:
# Try each block separately. If it can't be done, it'll be
# discarded from the set simply by not being added to the new set
# for the next iteration.
try:
block = w.sync_get_block((x, y, z))
if block == self.sponge:
self.add_sponge(w, x, y, z)
elif block == self.spring:
self.add_spring(w, x, y, z)
elif block == self.fluid:
self.add_fluid(w, x, y, z)
else:
# Hm, why would a pending block not be any of the things
# we care about? Maybe it used to be a spring or
# something?
if (x, z) in self.springs:
self.remove_spring(x, y, z)
elif (x, y, z) in self.sponges:
self.remove_sponge(x, y, z)
except ChunkNotLoaded:
pass
# Flush affected chunks.
to_flush = set()
for x, y, z in chain(self.tracked, self.new):
to_flush.add((x // 16, z // 16))
for x, z in to_flush:
d = self.factory.world.request_chunk(x, z)
d.addCallback(self.factory.flush_chunk)
self.tracked = self.new
self.new = set()
# Prune, and reschedule.
self.schedule()
@inlineCallbacks
def dig_hook(self, chunk, x, y, z, block):
"""
Check for neighboring water that might want to spread.
Also check to see whether we are, for example, dug ice that has turned
back into water.
"""
x += chunk.x * 16
z += chunk.z * 16
# Check for sponges first, since they will mark the entirety of the
# area.
if block == self.sponge:
for coords in product(
xrange(x - 3, x + 4),
xrange(max(y - 3, 0), min(y + 4, 128)),
xrange(z - 3, z + 4),
):
self.tracked.add(coords)
else:
- for (dx, dy, dz) in (
- ( 0, 0, 0),
- ( 0, 0, 1),
- ( 0, 0, -1),
- ( 0, 1, 0),
- ( 1, 0, 0),
- (-1, 0, 0)):
- coords = x + dx, y + dy, z + dz
+ for coords in iterneighbors(x, y, z):
test_block = yield self.factory.world.get_block(coords)
if test_block in (self.spring, self.fluid):
self.tracked.add(coords)
self.schedule()
before = ("build",)
after = tuple()
class Water(Fluid):
spring = blocks["spring"].slot
fluid = blocks["water"].slot
levels = 7
sponge = blocks["sponge"].slot
whitespace = (blocks["air"].slot, blocks["snow"].slot)
meltables = (blocks["ice"].slot,)
step = 0.2
name = "water"
class Lava(Fluid):
spring = blocks["lava-spring"].slot
fluid = blocks["lava"].slot
levels = 3
whitespace = (blocks["air"].slot, blocks["snow"].slot)
meltables = (blocks["ice"].slot,)
step = 0.5
name = "lava"
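The sponge bookkeeping in dig_hook() above tracks a 7x7x7 cube of coordinates around a destroyed sponge, clamping the Y range to the world's bounds. A minimal standalone sketch of that region computation (the function name and the height default are illustrative, not part of Bravo's API):

```python
from itertools import product

def sponge_region(x, y, z, height=128):
    # 7x7x7 cube centered on (x, y, z), with the Y range clamped to
    # [0, height) -- mirrors the product() call in dig_hook() above.
    return product(
        range(x - 3, x + 4),
        range(max(y - 3, 0), min(y + 4, height)),
        range(z - 3, z + 4),
    )

# Near bedrock the cube is truncated: only 5 Y layers survive.
print(len(list(sponge_region(0, 1, 0))))  # → 245 (7 * 5 * 7)
```

In the interior of the world the cube is full-sized, yielding 343 coordinates.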
|
bravoserver/bravo
|
105a2955c4b80f61e8603c1ddc4d7c4504f8d6b5
|
utilities/coords: Isolate iterneighbors(), and test.
|
diff --git a/bravo/tests/utilities/test_coords.py b/bravo/tests/utilities/test_coords.py
index 6f29e7a..9e0fc08 100644
--- a/bravo/tests/utilities/test_coords.py
+++ b/bravo/tests/utilities/test_coords.py
@@ -1,12 +1,24 @@
import unittest
-from bravo.utilities.coords import adjust_coords_for_face
+from bravo.utilities.coords import adjust_coords_for_face, iterneighbors
class TestAdjustCoords(unittest.TestCase):
def test_adjust_plusx(self):
coords = range(3)
adjusted = adjust_coords_for_face(coords, "+x")
self.assertEqual(adjusted, (1, 1, 2))
+
+class TestIterNeighbors(unittest.TestCase):
+
+ def test_no_neighbors(self):
+ x, y, z = 0, -2, 0
+
+ self.assertEqual(list(iterneighbors(x, y, z)), [])
+
+ def test_above(self):
+ x, y, z = 0, 0, 0
+
+ self.assertTrue((0, 1, 0) in iterneighbors(x, y, z))
diff --git a/bravo/utilities/coords.py b/bravo/utilities/coords.py
index 81aceb4..b9c87aa 100644
--- a/bravo/utilities/coords.py
+++ b/bravo/utilities/coords.py
@@ -1,79 +1,107 @@
"""
Utilities for coordinate handling and munging.
"""
+
from math import floor, ceil
+
+
def polar_round_vector(vector):
"""
Rounds a vector towards zero
"""
if vector[0] >= 0:
calculated_x = floor(vector[0])
else:
calculated_x = ceil(vector[0])
if vector[1] >= 0:
calculated_y = floor(vector[1])
else:
calculated_y = ceil(vector[1])
if vector[2] >= 0:
calculated_z = floor(vector[2])
else:
calculated_z = ceil(vector[2])
return calculated_x, calculated_y, calculated_z
+
def split_coords(x, z):
"""
Split a pair of coordinates into chunk and subchunk coordinates.
:param int x: the X coordinate
:param int z: the Z coordinate
:returns: a tuple of the X chunk, X subchunk, Z chunk, and Z subchunk
"""
first, second = divmod(int(x), 16)
third, fourth = divmod(int(z), 16)
return first, second, third, fourth
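split_coords() leans on the fact that Python's divmod floors toward negative infinity, so negative world coordinates land in negative chunks with subchunk offsets still in 0..15. A quick standalone check (re-declaring the function so the snippet runs on its own):

```python
def split_coords(x, z):
    # Same divmod trick as above: chunk index plus a 0..15 offset.
    first, second = divmod(int(x), 16)
    third, fourth = divmod(int(z), 16)
    return first, second, third, fourth

print(split_coords(33, -1))  # → (2, 1, -1, 15)
```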
+
def taxicab2(x1, y1, x2, y2):
"""
Return the taxicab distance between two blocks.
"""
return abs(x1 - x2) + abs(y1 - y2)
+
def taxicab3(x1, y1, z1, x2, y2, z2):
"""
Return the taxicab distance between two blocks, in three dimensions.
"""
return abs(x1 - x2) + abs(y1 - y2) + abs(z1 - z2)
+
def adjust_coords_for_face(coords, face):
"""
Adjust a set of coords according to a face.
The face is a standard string descriptor, such as "+x".
The "noop" face is supported.
"""
x, y, z = coords
if face == "-x":
x -= 1
elif face == "+x":
x += 1
elif face == "-y":
y -= 1
elif face == "+y":
y += 1
elif face == "-z":
z -= 1
elif face == "+z":
z += 1
return x, y, z
+
+
+def iterneighbors(x, y, z):
+ """
+ Yield an iterable of neighboring block coordinates.
+
+ The first item in the iterable is the original coordinates.
+
+ Coordinates with invalid Y values are discarded automatically.
+ """
+
+ for (dx, dy, dz) in (
+ ( 0, 0, 0),
+ ( 0, 0, 1),
+ ( 0, 0, -1),
+ ( 0, 1, 0),
+ ( 0, -1, 0),
+ ( 1, 0, 0),
+ (-1, 0, 0)):
+ if 0 <= y + dy < 256:
+ yield x + dx, y + dy, z + dz
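To make the new helper's contract concrete, here is a self-contained copy of iterneighbors() with its edge cases exercised (the copy exists only so the snippet runs outside the repo):

```python
def iterneighbors(x, y, z):
    # Yields the original coordinate first, then the six axis-aligned
    # neighbors, silently dropping anything with an invalid Y value.
    for (dx, dy, dz) in (
            (0, 0, 0), (0, 0, 1), (0, 0, -1),
            (0, 1, 0), (0, -1, 0), (1, 0, 0), (-1, 0, 0)):
        if 0 <= y + dy < 256:
            yield x + dx, y + dy, z + dz

print(list(iterneighbors(0, -2, 0)))       # → [] (wholly out of bounds)
print(len(list(iterneighbors(0, 64, 0))))  # → 7
```

This matches the two unit tests above: a coordinate far below the world has no valid neighbors, and (0, 1, 0) appears among the neighbors of the origin.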
|
bravoserver/bravo
|
af475448d2327293dab3f80b0f2e7ec2f4a2d62b
|
world: Raise a different exception for impossible chunk lookups.
|
diff --git a/bravo/world.py b/bravo/world.py
index 6566028..6785202 100644
--- a/bravo/world.py
+++ b/bravo/world.py
@@ -1,568 +1,582 @@
from array import array
from functools import wraps
from itertools import product
import random
import sys
import weakref
from twisted.internet import reactor
from twisted.internet.defer import (inlineCallbacks, maybeDeferred,
returnValue, succeed)
from twisted.internet.task import LoopingCall
from twisted.python import log
from bravo.beta.structures import Level
from bravo.chunk import Chunk
from bravo.entity import Player, Furnace
from bravo.errors import (ChunkNotLoaded, SerializerReadException,
SerializerWriteException)
from bravo.ibravo import ISerializer
from bravo.plugin import retrieve_named_plugins
from bravo.utilities.coords import split_coords
from bravo.utilities.temporal import PendingEvent
from bravo.mobmanager import MobManager
+class ImpossibleCoordinates(Exception):
+ """
+ A coordinate could not ever be valid.
+ """
+
def coords_to_chunk(f):
"""
Automatically look up the chunk for the coordinates, and convert world
coordinates to chunk coordinates.
"""
@wraps(f)
def decorated(self, coords, *args, **kwargs):
x, y, z = coords
+ # Fail early if Y is OOB.
+ if not 0 <= y < 256:
+ raise ImpossibleCoordinates("Y value %d is impossible" % y)
+
bigx, smallx, bigz, smallz = split_coords(x, z)
d = self.request_chunk(bigx, bigz)
@d.addCallback
def cb(chunk):
return f(self, chunk, (smallx, y, smallz), *args, **kwargs)
return d
return decorated
def sync_coords_to_chunk(f):
"""
Either get a chunk for the coordinates, or raise an exception.
"""
@wraps(f)
def decorated(self, coords, *args, **kwargs):
x, y, z = coords
+ # Fail early if Y is OOB.
+ if not 0 <= y < 256:
+ raise ImpossibleCoordinates("Y value %d is impossible" % y)
+
bigx, smallx, bigz, smallz = split_coords(x, z)
bigcoords = bigx, bigz
+
if bigcoords in self.chunk_cache:
chunk = self.chunk_cache[bigcoords]
elif bigcoords in self.dirty_chunk_cache:
chunk = self.dirty_chunk_cache[bigcoords]
else:
raise ChunkNotLoaded("Chunk (%d, %d) isn't loaded" % bigcoords)
return f(self, chunk, (smallx, y, smallz), *args, **kwargs)
return decorated
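Both decorators now share the same fail-early Y check before any chunk lookup happens. A stripped-down sketch of that pattern, with no Twisted machinery (the decorator and target function names here are illustrative, not Bravo API):

```python
from functools import wraps

class ImpossibleCoordinates(Exception):
    """A coordinate could never be valid."""

def check_y_bounds(f):
    # Reject impossible Y values before doing any real work, as
    # coords_to_chunk and sync_coords_to_chunk do above.
    @wraps(f)
    def decorated(coords, *args, **kwargs):
        x, y, z = coords
        if not 0 <= y < 256:
            raise ImpossibleCoordinates("Y value %d is impossible" % y)
        return f(coords, *args, **kwargs)
    return decorated

@check_y_bounds
def get_block(coords):
    return "stone"

print(get_block((0, 64, 0)))  # → stone
try:
    get_block((0, -1, 0))
except ImpossibleCoordinates as e:
    print(e)  # → Y value -1 is impossible
```

Raising a distinct exception lets callers tell "this chunk isn't loaded yet" (ChunkNotLoaded, retryable) apart from "this coordinate can never exist" (ImpossibleCoordinates, a caller bug).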
class World(object):
"""
Object representing a world on disk.
Worlds are composed of levels and chunks, each of which corresponds to
exactly one file on disk. Worlds also contain saved player data.
"""
factory = None
"""
The factory managing this world.
Worlds do not need to be owned by a factory, but will not callback to
surrounding objects without an owner.
"""
season = None
"""
The current `ISeason`.
"""
saving = True
"""
Whether objects belonging to this world may be written out to disk.
"""
async = False
"""
Whether this world is using multiprocessing methods to generate geometry.
"""
dimension = "earth"
"""
The world dimension. Valid values are earth, sky, and nether.
"""
permanent_cache = None
"""
A permanent cache of chunks which are never evicted from memory.
This cache is used to speed up logins near the spawn point.
"""
level = Level(seed=0, spawn=(0, 0, 0), time=0)
"""
The initial level data.
"""
def __init__(self, config, name):
"""
:Parameters:
name : str
The configuration key to use to look up configuration data.
"""
self.config = config
self.config_name = "world %s" % name
self.chunk_cache = weakref.WeakValueDictionary()
self.dirty_chunk_cache = dict()
self._pending_chunks = dict()
def connect(self):
"""
Connect to the world.
"""
world_url = self.config.get(self.config_name, "url")
world_sf_name = self.config.get(self.config_name, "serializer")
# Get the current serializer list, and attempt to connect our
# serializer of choice to our resource.
# This could fail. Each of these lines (well, only the first and
# third) could raise a variety of exceptions. They should *all* be
# fatal.
serializers = retrieve_named_plugins(ISerializer, [world_sf_name])
self.serializer = serializers[0]
self.serializer.connect(world_url)
log.msg("World started on %s, using serializer %s" %
(world_url, self.serializer.name))
def start(self):
"""
Start managing a world.
Connect to the world and turn on all of the timed actions which
continuously manage the world.
"""
self.connect()
# Pick a random number for the seed. Use the configured value if one
# is present.
seed = random.randint(0, sys.maxint)
seed = self.config.getintdefault(self.config_name, "seed", seed)
self.level = self.level._replace(seed=seed)
# Check if we should offload chunk requests to ampoule.
if self.config.getbooleandefault("bravo", "ampoule", False):
try:
import ampoule
if ampoule:
self.async = True
except ImportError:
pass
log.msg("World is %s" %
("read-write" if self.saving else "read-only"))
log.msg("Using Ampoule: %s" % self.async)
# First, try loading the level, to see if there's any data out there
# which we can use. If not, don't worry about it.
d = maybeDeferred(self.serializer.load_level)
@d.addCallback
def cb(level):
self.level = level
log.msg("Loaded level data!")
@d.addErrback
def sre(failure):
failure.trap(SerializerReadException)
log.msg("Had issues loading level data, continuing anyway...")
# And now save our level.
if self.saving:
self.serializer.save_level(self.level)
self.chunk_management_loop = LoopingCall(self.sort_chunks)
self.chunk_management_loop.start(1)
self.mob_manager = MobManager() # XXX Put this in init or here?
self.mob_manager.world = self # XXX Put this in the managers constructor?
def stop(self):
"""
Stop managing the world.
This can be a time-consuming, blocking operation, while the world's
data is serialized.
Note to callers: If you want the world time to be accurate, don't
forget to write it back before calling this method!
"""
self.chunk_management_loop.stop()
# Flush all dirty chunks to disk.
for chunk in self.dirty_chunk_cache.itervalues():
self.save_chunk(chunk)
# Evict all chunks.
self.chunk_cache.clear()
self.dirty_chunk_cache.clear()
# Save the level data.
self.serializer.save_level(self.level)
def enable_cache(self, size):
"""
Set the permanent cache size.
Changing the size of the cache sets off a series of events which will
empty or fill the cache to make it the proper size.
For reference, 3 is a large-enough size to completely satisfy the
Notchian client's login demands. 10 is enough to completely fill the
Notchian client's chunk buffer.
:param int size: The taxicab radius of the cache, in chunks
"""
log.msg("Setting cache size to %d, please hold..." % size)
self.permanent_cache = set()
assign = self.permanent_cache.add
x = self.level.spawn[0] // 16
z = self.level.spawn[2] // 16
rx = xrange(x - size, x + size)
rz = xrange(z - size, z + size)
for x, z in product(rx, rz):
log.msg("Adding %d, %d to cache..." % (x, z))
self.request_chunk(x, z).addCallback(assign)
log.msg("Cache size is now %d!" % size)
def sort_chunks(self):
"""
Sort out the internal caches.
This method will always block when there are dirty chunks.
"""
first = True
all_chunks = dict(self.dirty_chunk_cache)
all_chunks.update(self.chunk_cache)
self.chunk_cache.clear()
self.dirty_chunk_cache.clear()
for coords, chunk in all_chunks.iteritems():
if chunk.dirty:
if first:
first = False
self.save_chunk(chunk)
self.chunk_cache[coords] = chunk
else:
self.dirty_chunk_cache[coords] = chunk
else:
self.chunk_cache[coords] = chunk
def save_off(self):
"""
Disable saving to disk.
This is useful for accessing the world on disk without Bravo
interfering, for backing up the world.
"""
if not self.saving:
return
d = dict(self.chunk_cache)
self.chunk_cache = d
self.saving = False
def save_on(self):
"""
Enable saving to disk.
"""
if self.saving:
return
d = weakref.WeakValueDictionary(self.chunk_cache)
self.chunk_cache = d
self.saving = True
def postprocess_chunk(self, chunk):
"""
Do a series of final steps to bring a chunk into the world.
This method might be called multiple times on a chunk, but it should
not be harmful to do so.
"""
# Apply the current season to the chunk.
if self.season:
self.season.transform(chunk)
# Since this chunk hasn't been given to any player yet, there's no
# conceivable way that any meaningful damage has been accumulated;
# anybody loading any part of this chunk will want the entire thing.
# Thus, it should start out undamaged.
chunk.clear_damage()
# Skip some of the spendier scans if we have no factory; for example,
# if we are generating chunks offline.
if not self.factory:
return chunk
# XXX slightly icky, print statements are bad
# Register the chunk's entities with our parent factory.
for entity in chunk.entities:
if hasattr(entity,'loop'):
print "Started mob!"
self.mob_manager.start_mob(entity)
else:
print "I have no loop"
self.factory.register_entity(entity)
# XXX why is this for furnaces only? :T
# Scan the chunk for burning furnaces
for coords, tile in chunk.tiles.iteritems():
# If the furnace was saved while burning ...
if type(tile) == Furnace and tile.burntime != 0:
x, y, z = coords
coords = chunk.x, x, chunk.z, z, y
# ... start its burning loop
reactor.callLater(2, tile.changed, self.factory, coords)
# Return the chunk, in case we are in a Deferred chain.
return chunk
@inlineCallbacks
def request_chunk(self, x, z):
"""
Request a ``Chunk`` to be delivered later.
:returns: ``Deferred`` that will be called with the ``Chunk``
"""
if (x, z) in self.chunk_cache:
returnValue(self.chunk_cache[x, z])
elif (x, z) in self.dirty_chunk_cache:
returnValue(self.dirty_chunk_cache[x, z])
elif (x, z) in self._pending_chunks:
# Rig up another Deferred and wrap it up in a to-go box.
retval = yield self._pending_chunks[x, z].deferred()
returnValue(retval)
try:
chunk = yield maybeDeferred(self.serializer.load_chunk, x, z)
except SerializerReadException:
# Looks like the chunk wasn't already on disk. Guess we're gonna
# need to keep going.
chunk = Chunk(x, z)
if chunk.populated:
self.chunk_cache[x, z] = chunk
self.postprocess_chunk(chunk)
if self.factory:
self.factory.scan_chunk(chunk)
returnValue(chunk)
if self.async:
from ampoule import deferToAMPProcess
from bravo.remote import MakeChunk
generators = [plugin.name for plugin in self.pipeline]
d = deferToAMPProcess(MakeChunk, x=x, z=z, seed=self.level.seed,
generators=generators)
# Get chunk data into our chunk object.
def fill_chunk(kwargs):
chunk.blocks = array("B")
chunk.blocks.fromstring(kwargs["blocks"])
chunk.heightmap = array("B")
chunk.heightmap.fromstring(kwargs["heightmap"])
chunk.metadata = array("B")
chunk.metadata.fromstring(kwargs["metadata"])
chunk.skylight = array("B")
chunk.skylight.fromstring(kwargs["skylight"])
chunk.blocklight = array("B")
chunk.blocklight.fromstring(kwargs["blocklight"])
return chunk
d.addCallback(fill_chunk)
else:
# Populate the chunk the slow way. :c
for stage in self.pipeline:
stage.populate(chunk, self.level.seed)
chunk.regenerate()
d = succeed(chunk)
# Set up our event and generate our return-value Deferred. It has to
# be done early because PendingEvents only fire exactly once and it
# might fire immediately in certain cases.
pe = PendingEvent()
# This one is for our return value.
retval = pe.deferred()
# This one is for scanning the chunk for automatons.
if self.factory:
pe.deferred().addCallback(self.factory.scan_chunk)
self._pending_chunks[x, z] = pe
def pp(chunk):
chunk.populated = True
chunk.dirty = True
self.postprocess_chunk(chunk)
self.dirty_chunk_cache[x, z] = chunk
del self._pending_chunks[x, z]
return chunk
# Set up callbacks.
d.addCallback(pp)
d.chainDeferred(pe)
# Because multiple people might be attached to this callback, we're
# going to do something magical here. We will yield a forked version
# of our Deferred. This means that we will wait right here, for a
# long, long time, before actually returning with the chunk, *but*,
# when we actually finish, we'll be ready to return the chunk
# immediately. Our caller cannot possibly care because they only see a
# Deferred either way.
retval = yield retval
returnValue(retval)
def save_chunk(self, chunk):
if not chunk.dirty or not self.saving:
return
d = maybeDeferred(self.serializer.save_chunk, chunk)
@d.addCallback
def cb(none):
chunk.dirty = False
@d.addErrback
def eb(failure):
failure.trap(SerializerWriteException)
log.msg("Couldn't write %r" % chunk)
def load_player(self, username):
"""
Retrieve player data.
:returns: a ``Deferred`` that will be fired with a ``Player``
"""
# Get the player, possibly.
d = maybeDeferred(self.serializer.load_player, username)
@d.addErrback
def eb(failure):
failure.trap(SerializerReadException)
log.msg("Couldn't load player %r" % username)
# Make a player.
player = Player(username=username)
player.location.x = self.level.spawn[0]
player.location.y = self.level.spawn[1]
player.location.stance = self.level.spawn[1]
player.location.z = self.level.spawn[2]
return player
# This Deferred's good to go as-is.
return d
def save_player(self, username, player):
if self.saving:
self.serializer.save_player(player)
# World-level geometry access.
# These methods let external API users refrain from going through the
# standard motions of looking up and loading chunk information.
@coords_to_chunk
def get_block(self, chunk, coords):
"""
Get a block from an unknown chunk.
:returns: a ``Deferred`` with the requested value
"""
return chunk.get_block(coords)
@coords_to_chunk
def set_block(self, chunk, coords, value):
"""
Set a block in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.set_block(coords, value)
@coords_to_chunk
def get_metadata(self, chunk, coords):
"""
Get a block's metadata from an unknown chunk.
:returns: a ``Deferred`` with the requested value
"""
return chunk.get_metadata(coords)
@coords_to_chunk
def set_metadata(self, chunk, coords, value):
"""
Set a block's metadata in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.set_metadata(coords, value)
@coords_to_chunk
def destroy(self, chunk, coords):
"""
Destroy a block in an unknown chunk.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.destroy(coords)
@coords_to_chunk
def mark_dirty(self, chunk, coords):
"""
Mark an unknown chunk dirty.
:returns: a ``Deferred`` that will fire on completion
"""
chunk.dirty = True
@sync_coords_to_chunk
def sync_get_block(self, chunk, coords):
"""
Get a block from an unknown chunk.
|
bravoserver/bravo
|
55d3919ef12579918b870f17115a59c32a51b631
|
Fixed PEP8 E303.
|
diff --git a/bravo/blocks.py b/bravo/blocks.py
index 9042239..43c4c21 100644
--- a/bravo/blocks.py
+++ b/bravo/blocks.py
@@ -42,797 +42,796 @@ class Block(object):
:param tuple drop: The type of block that should be dropped when an
instance of this block is destroyed. Defaults to the block value,
to drop instances of this same type of block. To indicate that
this block does not drop anything, set to air (0, 0).
:param int replace: The type of block to place in the map when
instances of this block are destroyed. Defaults to air.
:param float ratio: The probability of this block dropping a block
on destruction.
:param int quantity: The number of blocks dropped when this block
is destroyed.
:param int dim: How much light dims when passing through this kind
of block. Defaults to 16 = opaque block.
:param bool breakable: Whether this block is diggable, breakable,
bombable, explodable, etc. Only a few blocks actually genuinely
cannot be broken, so the default is True.
:param tuple orientation: The orientation data for a block. See
:meth:`orientable` for an explanation. The data should be in standard
face order.
:param bool vanishes: Whether this block vanishes, or is replaced by,
another block when built upon.
"""
self.slot = slot
self.name = name
self.key = (self.slot, secondary)
if drop is None:
self.drop = self.key
else:
self.drop = drop
self.replace = replace
self.ratio = ratio
self.quantity = quantity
self.dim = dim
self.breakable = breakable
self.vanishes = vanishes
if orientation:
self._o_dict = dict(zip(faces, orientation))
self._f_dict = dict(zip(orientation, faces))
else:
self._o_dict = self._f_dict = {}
def __str__(self):
"""
Fairly verbose explanation of what this block is capable of.
"""
attributes = []
if not self.breakable:
attributes.append("unbreakable")
if self.dim == 0:
attributes.append("transparent")
elif self.dim < 16:
attributes.append("translucent (%d)" % self.dim)
if self.replace:
attributes.append("becomes %d" % self.replace)
if self.ratio != 1 or self.quantity > 1 or self.drop != self.key:
attributes.append("drops %r (key %r, rate %2.2f%%)" %
(self.quantity, self.drop, self.ratio * 100))
if attributes:
attributes = ": %s" % ", ".join(attributes)
else:
attributes = ""
return "Block(%r %r%s)" % (self.key, self.name, attributes)
__repr__ = __str__
def orientable(self):
"""
Whether this block can be oriented.
Orientable blocks are positioned according to the face on which they
are built. They may not be buildable on all faces. Blocks are only
orientable if their metadata can be used to directly and uniquely
determine the face against which they were built.
Ladders are orientable, signposts are not.
:rtype: bool
:returns: True if this block can be oriented, False if not.
"""
return bool(self._o_dict)
def face(self, metadata):
"""
Retrieve the face for given metadata corresponding to an orientation,
or None if the metadata is invalid for this block.
This method only returns valid data for orientable blocks; check
:meth:`orientable` first.
"""
return self._f_dict.get(metadata)
def orientation(self, face):
"""
Retrieve the metadata for a certain orientation, or None if this block
cannot be built against the given face.
This method only returns valid data for orientable blocks; check
:meth:`orientable` first.
"""
return self._o_dict.get(face)
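The orientable-block machinery above boils down to a pair of inverse mappings built with zip() in Block.__init__. A minimal sketch (the face order below is illustrative only; Bravo defines its own canonical ordering):

```python
# face -> metadata and metadata -> face, as Block builds them.
faces = ("-y", "+y", "-z", "+z", "-x", "+x")   # assumed order
orientation = (None, None, 2, 3, 4, 5)          # e.g. chest-style data

o_dict = dict(zip(faces, orientation))  # used by orientation()
f_dict = dict(zip(orientation, faces))  # used by face()

print(o_dict["+x"])   # → 5
print(f_dict[2])      # → -z
print(f_dict.get(9))  # → None (invalid metadata for this block)
```

The .get() lookups are what make face() and orientation() return None for faces a block cannot be built against.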
class Item(object):
"""
An item.
"""
__slots__ = (
"key",
"name",
"slot",
)
def __init__(self, slot, name, secondary=0):
self.slot = slot
self.name = name
self.key = (self.slot, secondary)
def __str__(self):
return "Item(%r %r)" % (self.key, self.name)
__repr__ = __str__
block_names = [
"air", # 0x0
"stone",
"grass",
"dirt",
"cobblestone",
"wood",
"sapling",
"bedrock",
"water",
"spring",
"lava",
"lava-spring",
"sand",
"gravel",
"gold-ore",
"iron-ore",
"coal-ore", # 0x10
"log",
"leaves",
"sponge",
"glass",
"lapis-lazuli-ore",
"lapis-lazuli-block",
"dispenser",
"sandstone",
"note-block",
"bed-block",
"powered-rail",
"detector-rail",
"sticky-piston",
"spider-web",
"tall-grass",
"shrub", # 0x20
"piston",
"",
"wool",
"",
"flower",
"rose",
"brown-mushroom",
"red-mushroom",
"gold",
"iron",
"double-stone-slab",
"single-stone-slab",
"brick",
"tnt",
"bookshelf",
"mossy-cobblestone", # 0x30
"obsidian",
"torch",
"fire",
"mob-spawner",
"wooden-stairs",
"chest",
"redstone-wire",
"diamond-ore",
"diamond-block",
"workbench",
"crops",
"soil",
"furnace",
"burning-furnace",
"signpost",
"wooden-door-block", # 0x40
"ladder",
"tracks",
"stone-stairs",
"wall-sign",
"lever",
"stone-plate",
"iron-door-block",
"wooden-plate",
"redstone-ore",
"glowing-redstone-ore",
"redstone-torch-off",
"redstone-torch",
"stone-button",
"snow",
"ice",
"snow-block", # 0x50
"cactus",
"clay",
"reed",
"jukebox",
"fence",
"pumpkin",
"brimstone",
"slow-sand",
"lightstone",
"portal",
"jack-o-lantern",
"cake-block",
"redstone-repeater-off",
"redstone-repeater-on",
"locked-chest",
"trapdoor", # 0x60
"hidden-silverfish",
"stone-brick",
"huge-brown-mushroom",
"huge-red-mushroom",
"iron-bars",
"glass-pane",
"melon",
"pumpkin-stem",
"melon-stem",
"vine",
"fence-gate",
"brick-stairs",
"stone-brick-stairs",
"mycelium",
"lily-pad",
"nether-brick", # 0x70
"nether-brick-fence",
"nether-brick-stairs",
"nether-wart-block", # 0x73
"",
"",
"",
"",
"",
"",
"",
"",
"",
"double-wooden-slab",
"single-wooden-slab",
"",
"", # 0x80
"emerald-ore",
"",
"",
"",
"emerald-block", # 0x85
]
item_names = [
"iron-shovel", # 0x100
"iron-pickaxe",
"iron-axe",
"flint-and-steel",
"apple",
"bow",
"arrow",
"coal",
"diamond",
"iron-ingot",
"gold-ingot",
"iron-sword",
"wooden-sword",
"wooden-shovel",
"wooden-pickaxe",
"wooden-axe",
"stone-sword", # 0x110
"stone-shovel",
"stone-pickaxe",
"stone-axe",
"diamond-sword",
"diamond-shovel",
"diamond-pickaxe",
"diamond-axe",
"stick",
"bowl",
"mushroom-soup",
"gold-sword",
"gold-shovel",
"gold-pickaxe",
"gold-axe",
"string",
"feather", # 0x120
"sulphur",
"wooden-hoe",
"stone-hoe",
"iron-hoe",
"diamond-hoe",
"gold-hoe",
"seeds",
"wheat",
"bread",
"leather-helmet",
"leather-chestplate",
"leather-leggings",
"leather-boots",
"chainmail-helmet",
"chainmail-chestplate",
"chainmail-leggings", # 0x130
"chainmail-boots",
"iron-helmet",
"iron-chestplate",
"iron-leggings",
"iron-boots",
"diamond-helmet",
"diamond-chestplate",
"diamond-leggings",
"diamond-boots",
"gold-helmet",
"gold-chestplate",
"gold-leggings",
"gold-boots",
"flint",
"raw-porkchop",
"cooked-porkchop", # 0x140
"paintings",
"golden-apple",
"sign",
"wooden-door",
"bucket",
"water-bucket",
"lava-bucket",
"mine-cart",
"saddle",
"iron-door",
"redstone",
"snowball",
"boat",
"leather",
"milk",
"clay-brick", # 0x150
"clay-balls",
"sugar-cane",
"paper",
"book",
"slimeball",
"storage-minecart",
"powered-minecart",
"egg",
"compass",
"fishing-rod",
"clock",
"glowstone-dust",
"raw-fish",
"cooked-fish",
"dye",
"bone", # 0x160
"sugar",
"cake",
"bed",
"redstone-repeater",
"cookie",
"map",
"shears",
"melon-slice",
"pumpkin-seeds",
"melon-seeds",
"raw-beef",
"steak",
"raw-chicken",
"cooked-chicken",
"rotten-flesh",
"ender-pearl", # 0x170
"blaze-rod",
"ghast-tear",
"gold-nugget",
"nether-wart",
"potions",
"glass-bottle",
"spider-eye",
"fermented-spider-eye",
"blaze-powder",
"magma-cream", # 0x17a
"",
"",
"",
"",
"",
"", # 0x180
"",
"",
"",
"emerald", #0x184
]
special_item_names = [
"gold-music-disc",
"green-music-disc",
"blocks-music-disc",
"chirp-music-disc",
"far-music-disc"
]
dye_names = [
"ink-sac",
"red-dye",
"green-dye",
"cocoa-beans",
"lapis-lazuli",
"purple-dye",
"cyan-dye",
"light-gray-dye",
"gray-dye",
"pink-dye",
"lime-dye",
"yellow-dye",
"light-blue-dye",
"magenta-dye",
"orange-dye",
"bone-meal",
]
wool_names = [
"white-wool",
"orange-wool",
"magenta-wool",
"light-blue-wool",
"yellow-wool",
"lime-wool",
"pink-wool",
"gray-wool",
"light-gray-wool",
"cyan-wool",
"purple-wool",
"blue-wool",
"brown-wool",
"dark-green-wool",
"red-wool",
"black-wool",
]
sapling_names = [
"normal-sapling",
"pine-sapling",
"birch-sapling",
"jungle-sapling",
]
log_names = [
"normal-log",
"pine-log",
"birch-log",
"jungle-log",
]
leaf_names = [
"normal-leaf",
"pine-leaf",
"birch-leaf",
"jungle-leaf",
]
coal_names = [
"normal-coal",
"charcoal",
]
step_names = [
"single-stone-slab",
"single-sandstone-slab",
"single-wooden-slab",
"single-cobblestone-slab",
]
drops = {}
# Block -> block drops.
# If the drop block is zero, then it drops nothing.
drops[1] = (4, 0) # Stone -> Cobblestone
drops[2] = (3, 0) # Grass -> Dirt
drops[20] = (0, 0) # Glass
drops[52] = (0, 0) # Mob spawner
drops[60] = (3, 0) # Soil -> Dirt
drops[62] = (61, 0) # Burning Furnace -> Furnace
drops[78] = (0, 0) # Snow
# Block -> item drops.
drops[16] = (263, 0) # Coal Ore Block -> Coal
drops[56] = (264, 0) # Diamond Ore Block -> Diamond
drops[63] = (323, 0) # Sign Post -> Sign Item
drops[68] = (323, 0) # Wall Sign -> Sign Item
drops[83] = (338, 0) # Reed -> Reed Item
drops[89] = (348, 0) # Lightstone -> Lightstone Dust
drops[93] = (356, 0) # Redstone Repeater, on -> Redstone Repeater
drops[94] = (356, 0) # Redstone Repeater, off -> Redstone Repeater
drops[97] = (0, 0) # Hidden Silverfish
drops[110] = (3, 0) # Mycelium -> Dirt
drops[111] = (0, 0) # Lily Pad
drops[115] = (372, 0) # Nether Wart Block -> Nether Wart
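The tables above act as overrides: destroying a block consults drops[] for an exception, and everything else falls back to dropping itself, which is what Block.__init__ earlier in this file encodes as the default. A minimal sketch of that lookup (drop_for is a hypothetical helper, not Bravo API):

```python
# Two representative overrides from the table above.
drops = {
    1: (4, 0),   # Stone -> Cobblestone
    20: (0, 0),  # Glass -> nothing (air)
}

def drop_for(slot, secondary=0):
    # Overridden blocks use the table; everything else drops itself.
    return drops.get(slot, (slot, secondary))

print(drop_for(1))   # → (4, 0)
print(drop_for(3))   # → (3, 0)  (dirt drops itself)
```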
-
unbreakables = set()
unbreakables.add(0) # Air
unbreakables.add(7) # Bedrock
unbreakables.add(10) # Lava
unbreakables.add(11) # Lava spring
# When one of these is targeted and a block is placed, these are replaced
softblocks = set()
softblocks.add(30) # Cobweb
softblocks.add(31) # Tall grass
softblocks.add(70) # Snow
softblocks.add(106) # Vines
dims = {}
dims[0] = 0 # Air
dims[6] = 0 # Sapling
dims[10] = 0 # Lava
dims[11] = 0 # Lava spring
dims[20] = 0 # Glass
dims[26] = 0 # Bed
dims[37] = 0 # Yellow Flowers
dims[38] = 0 # Red Flowers
dims[39] = 0 # Brown Mushrooms
dims[40] = 0 # Red Mushrooms
dims[44] = 0 # Single Step
dims[51] = 0 # Fire
dims[52] = 0 # Mob spawner
dims[53] = 0 # Wooden stairs
dims[55] = 0 # Redstone (Wire)
dims[59] = 0 # Crops
dims[60] = 0 # Soil
dims[63] = 0 # Sign
dims[64] = 0 # Wood door
dims[66] = 0 # Rails
dims[67] = 0 # Stone stairs
dims[68] = 0 # Sign (on wall)
dims[69] = 0 # Lever
dims[70] = 0 # Stone Pressure Plate
dims[71] = 0 # Iron door
dims[72] = 0 # Wood Pressure Plate
dims[78] = 0 # Snow
dims[81] = 0 # Cactus
dims[83] = 0 # Sugar Cane
dims[85] = 0 # Fence
dims[90] = 0 # Portal
dims[92] = 0 # Cake
dims[93] = 0 # redstone-repeater-off
dims[94] = 0 # redstone-repeater-on
blocks = {}
"""
A dictionary of ``Block`` objects.
This dictionary can be indexed by slot number or block name.
"""
def _add_block(block):
blocks[block.slot] = block
blocks[block.name] = block
# Special blocks. Please remember to comment *what* makes the block special;
# most of us don't have all blocks memorized yet.
# Water (both kinds) is unbreakable, and dims by 3.
_add_block(Block(8, "water", breakable=False, dim=3))
_add_block(Block(9, "spring", breakable=False, dim=3))
# Gravel drops flint, with 1 in 10 odds.
_add_block(Block(13, "gravel", drop=(318, 0), ratio=1 / 10))
# Leaves drop saplings, with 1 in 9 odds, and dims by 1.
_add_block(Block(18, "leaves", drop=(6, 0), ratio=1 / 9, dim=1))
# Lapis lazuli ore drops 6 lapis lazuli items.
_add_block(Block(21, "lapis-lazuli-ore", drop=(351, 4), quantity=6))
# Beds are orientable and drop the Bed item.
_add_block(Block(26, "bed-block", drop=(355, 0),
orientation=(None, None, 2, 0, 1, 3)))
# Torches are orientable and don't dim.
_add_block(Block(50, "torch", orientation=(None, 5, 4, 3, 2, 1), dim=0))
# Chests are orientable.
_add_block(Block(54, "chest", orientation=(None, None, 2, 3, 4, 5)))
# Furnaces are orientable.
_add_block(Block(61, "furnace", orientation=(None, None, 2, 3, 4, 5)))
# Wooden Door is orientable and drops Wooden Door item
_add_block(Block(64, "wooden-door-block", drop=(324, 0),
orientation=(None, None, 1, 3, 0, 2)))
# Ladders are orientable and don't dim.
_add_block(Block(65, "ladder", orientation=(None, None, 2, 3, 4, 5), dim=0))
# Levers are orientable and don't dim. Additionally, levers have special hax
# to be orientable two different ways.
_add_block(Block(69, "lever", orientation=(None, 5, 4, 3, 2, 1), dim=0))
blocks["lever"]._f_dict.update(
{13: "+y", 12: "-z", 11: "+z", 10: "-x", 9: "+x"})
# Iron Door is orientable and drops Iron Door item
_add_block(Block(71, "iron-door-block", drop=(330, 0),
orientation=(None, None, 1, 3, 0, 2)))
# Redstone ore drops 5 redstone dusts.
_add_block(Block(73, "redstone-ore", drop=(331, 0), quantity=5))
_add_block(Block(74, "glowing-redstone-ore", drop=(331, 0), quantity=5))
# Redstone torches are orientable and don't dim.
_add_block(Block(75, "redstone-torch-off", orientation=(None, 5, 4, 3, 2, 1), dim=0))
_add_block(Block(76, "redstone-torch", orientation=(None, 5, 4, 3, 2, 1), dim=0))
# Stone buttons are orientable and don't dim.
_add_block(Block(77, "stone-button", orientation=(None, None, 1, 2, 3, 4), dim=0))
# Snow vanishes upon build.
_add_block(Block(78, "snow", vanishes=True))
# Ice drops nothing, is replaced by springs, and dims by 3.
_add_block(Block(79, "ice", drop=(0, 0), replace=9, dim=3))
# Clay drops 4 clay balls.
_add_block(Block(82, "clay", drop=(337, 0), quantity=4))
# Trapdoor is orientable
_add_block(Block(96, "trapdoor", orientation=(None, None, 0, 1, 2, 3)))
# Giant brown mushrooms drop brown mushrooms.
_add_block(Block(99, "huge-brown-mushroom", drop=(39, 0), quantity=2))
# Giant red mushrooms drop red mushrooms.
_add_block(Block(100, "huge-red-mushroom", drop=(40, 0), quantity=2))
# Pumpkin stems drop pumpkin seeds.
_add_block(Block(104, "pumpkin-stem", drop=(361, 0), quantity=3))
# Melon stems drop melon seeds.
_add_block(Block(105, "melon-stem", drop=(362, 0), quantity=3))
for block in blocks.values():
blocks[block.name] = block
blocks[block.slot] = block
items = {}
"""
A dictionary of ``Item`` objects.
This dictionary can be indexed by slot number or block name.
"""
for i, name in enumerate(block_names):
if not name or name in blocks:
continue
kwargs = {}
if i in drops:
kwargs["drop"] = drops[i]
if i in unbreakables:
kwargs["breakable"] = False
if i in dims:
kwargs["dim"] = dims[i]
b = Block(i, name, **kwargs)
_add_block(b)
for i, name in enumerate(item_names):
kwargs = {}
i += 0x100
item = Item(i, name, **kwargs)
items[i] = item
items[name] = item
for i, name in enumerate(special_item_names):
kwargs = {}
i += 0x8D0
item = Item(i, name, **kwargs)
items[i] = item
items[name] = item
_secondary_items = {
items["coal"]: coal_names,
items["dye"]: dye_names,
}
for base_item, names in _secondary_items.iteritems():
for i, name in enumerate(names):
kwargs = {}
item = Item(base_item.slot, name, i, **kwargs)
items[name] = item
_secondary_blocks = {
blocks["leaves"]: leaf_names,
blocks["log"]: log_names,
blocks["sapling"]: sapling_names,
blocks["single-stone-slab"]: step_names,
blocks["wool"]: wool_names,
}
for base_block, names in _secondary_blocks.iteritems():
for i, name in enumerate(names):
kwargs = {}
kwargs["drop"] = base_block.drop
kwargs["breakable"] = base_block.breakable
kwargs["dim"] = base_block.dim
block = Block(base_block.slot, name, i, **kwargs)
_add_block(block)
glowing_blocks = {
blocks["torch"].slot: 14,
blocks["lightstone"].slot: 15,
blocks["jack-o-lantern"].slot: 15,
blocks["fire"].slot: 15,
blocks["lava"].slot: 15,
blocks["lava-spring"].slot: 15,
blocks["locked-chest"].slot: 15,
blocks["burning-furnace"].slot: 13,
blocks["portal"].slot: 11,
blocks["glowing-redstone-ore"].slot: 9,
blocks["redstone-repeater-on"].slot: 9,
blocks["redstone-torch"].slot: 7,
blocks["brown-mushroom"].slot: 1,
}
armor_helmets = (86, 298, 302, 306, 310, 314)
"""
List of slots of helmets.
Note that slot 86 (pumpkin) is a helmet.
"""
armor_chestplates = (299, 303, 307, 311, 315)
"""
List of slots of chestplates.
Note that slot 303 (chainmail chestplate) is a chestplate, even though it is
not normally obtainable.
"""
armor_leggings = (300, 304, 308, 312, 316)
"""
List of slots of leggings.
"""
armor_boots = (301, 305, 309, 313, 317)
"""
List of slots of boots.
"""
"""
List of unstackable items
"""
unstackable = (
items["wooden-sword"].slot,
items["wooden-shovel"].slot,
items["wooden-pickaxe"].slot,
# TODO: update the list
)
"""
List of fuel blocks and items maped to burn time
"""
furnace_fuel = {
items["stick"].slot: 10, # 5s
blocks["sapling"].slot: 10, # 5s
blocks["wood"].slot: 30, # 15s
blocks["fence"].slot: 30, # 15s
blocks["wooden-stairs"].slot: 30, # 15s
blocks["trapdoor"].slot: 30, # 15s
blocks["log"].slot: 30, # 15s
blocks["workbench"].slot: 30, # 15s
blocks["bookshelf"].slot: 30, # 15s
blocks["chest"].slot: 30, # 15s
blocks["locked-chest"].slot: 30, # 15s
blocks["jukebox"].slot: 30, # 15s
blocks["note-block"].slot: 30, # 15s
items["coal"].slot: 160, # 80s
items["lava-bucket"].slot: 2000 # 1000s
}
def parse_block(block):
"""
Get the key for a given block/item.
"""
try:
if block.startswith("0x") and (
(int(block, 16) in blocks) or (int(block, 16) in items)):
return (int(block, 16), 0)
elif (int(block) in blocks) or (int(block) in items):
return (int(block), 0)
else:
raise Exception("Couldn't find block id %s!" % block)
except ValueError:
if block in blocks:
return blocks[block].key
elif block in items:
return items[block].key
else:
raise Exception("Couldn't parse block %s!" % block)
diff --git a/bravo/nbt.py b/bravo/nbt.py
index 413e661..bf6d648 100644
--- a/bravo/nbt.py
+++ b/bravo/nbt.py
@@ -1,409 +1,408 @@
from struct import Struct, error as StructError
from gzip import GzipFile
from UserDict import DictMixin
from bravo.errors import MalformedFileError
TAG_END = 0
TAG_BYTE = 1
TAG_SHORT = 2
TAG_INT = 3
TAG_LONG = 4
TAG_FLOAT = 5
TAG_DOUBLE = 6
TAG_BYTE_ARRAY = 7
TAG_STRING = 8
TAG_LIST = 9
TAG_COMPOUND = 10
class TAG(object):
"""Each Tag needs to take a file-like object for reading and writing.
The file object will be initialised by the calling code."""
id = None
def __init__(self, value=None, name=None):
self.name = name
self.value = value
#Parsers and Generators
def _parse_buffer(self, buffer):
raise NotImplementedError(self.__class__.__name__)
def _render_buffer(self, buffer, offset=None):
raise NotImplementedError(self.__class__.__name__)
#Printing and Formatting of tree
def tag_info(self):
return self.__class__.__name__ + \
('("%s")'%self.name if self.name else "") + \
": " + self.__repr__()
def pretty_tree(self, indent=0):
return ("\t"*indent) + self.tag_info()
class _TAG_Numeric(TAG):
def __init__(self, value=None, name=None, buffer=None):
super(_TAG_Numeric, self).__init__(value, name)
self.size = self.fmt.size
if buffer:
self._parse_buffer(buffer)
#Parsers and Generators
def _parse_buffer(self, buffer, offset=None):
self.value = self.fmt.unpack(buffer.read(self.size))[0]
def _render_buffer(self, buffer, offset=None):
buffer.write(self.fmt.pack(self.value))
#Printing and Formatting of tree
def __repr__(self):
return str(self.value)
#== Value Tags ==#
class TAG_Byte(_TAG_Numeric):
id = TAG_BYTE
fmt = Struct(">b")
class TAG_Short(_TAG_Numeric):
id = TAG_SHORT
fmt = Struct(">h")
class TAG_Int(_TAG_Numeric):
id = TAG_INT
fmt = Struct(">i")
class TAG_Long(_TAG_Numeric):
id = TAG_LONG
fmt = Struct(">q")
class TAG_Float(_TAG_Numeric):
id = TAG_FLOAT
fmt = Struct(">f")
class TAG_Double(_TAG_Numeric):
id = TAG_DOUBLE
fmt = Struct(">d")
class TAG_Byte_Array(TAG):
id = TAG_BYTE_ARRAY
def __init__(self, buffer=None):
super(TAG_Byte_Array, self).__init__()
self.value = ''
if buffer:
self._parse_buffer(buffer)
#Parsers and Generators
def _parse_buffer(self, buffer, offset=None):
length = TAG_Int(buffer=buffer)
self.value = buffer.read(length.value)
def _render_buffer(self, buffer, offset=None):
length = TAG_Int(len(self.value))
length._render_buffer(buffer, offset)
buffer.write(self.value)
#Printing and Formatting of tree
def __repr__(self):
return "[%i bytes]" % len(self.value)
class TAG_String(TAG):
id = TAG_STRING
def __init__(self, value=None, name=None, buffer=None):
super(TAG_String, self).__init__(value, name)
if buffer:
self._parse_buffer(buffer)
#Parsers and Generators
def _parse_buffer(self, buffer, offset=None):
length = TAG_Short(buffer=buffer)
read = buffer.read(length.value)
if len(read) != length.value:
raise StructError()
self.value = unicode(read, "utf-8")
def _render_buffer(self, buffer, offset=None):
save_val = self.value.encode("utf-8")
length = TAG_Short(len(save_val))
length._render_buffer(buffer, offset)
buffer.write(save_val)
#Printing and Formatting of tree
def __repr__(self):
return self.value
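The string wire format implemented above is a big-endian signed short length prefix followed by the UTF-8 payload. A self-contained sketch of the same round trip, assuming the names `dump_nbt_string`/`load_nbt_string` (invented for the example, not part of this module):

```python
# Standalone sketch of the TAG_String wire format: TAG_Short length
# prefix, then UTF-8 bytes, with the same truncation check as above.
from struct import Struct
from io import BytesIO

SHORT = Struct(">h")

def dump_nbt_string(text):
    payload = text.encode("utf-8")
    return SHORT.pack(len(payload)) + payload

def load_nbt_string(buf):
    (length,) = SHORT.unpack(buf.read(SHORT.size))
    read = buf.read(length)
    if len(read) != length:
        raise ValueError("truncated string")
    return read.decode("utf-8")
```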
#== Collection Tags ==#
class TAG_List(TAG):
id = TAG_LIST
def __init__(self, type=None, value=None, name=None, buffer=None):
super(TAG_List, self).__init__(value, name)
if type:
self.tagID = type.id
else: self.tagID = None
self.tags = []
if buffer:
self._parse_buffer(buffer)
if not self.tagID:
raise ValueError("No type specified for list")
#Parsers and Generators
def _parse_buffer(self, buffer, offset=None):
self.tagID = TAG_Byte(buffer=buffer).value
self.tags = []
length = TAG_Int(buffer=buffer)
for x in range(length.value):
self.tags.append(TAGLIST[self.tagID](buffer=buffer))
def _render_buffer(self, buffer, offset=None):
TAG_Byte(self.tagID)._render_buffer(buffer, offset)
length = TAG_Int(len(self.tags))
length._render_buffer(buffer, offset)
for i, tag in enumerate(self.tags):
if tag.id != self.tagID:
raise ValueError("List element %d(%s) has type %d != container type %d" %
(i, tag, tag.id, self.tagID))
tag._render_buffer(buffer, offset)
#Printing and Formatting of tree
def __repr__(self):
return "%i entries of type %s" % (len(self.tags), TAGLIST[self.tagID].__name__)
def pretty_tree(self, indent=0):
output = [super(TAG_List,self).pretty_tree(indent)]
if len(self.tags):
output.append(("\t"*indent) + "{")
output.extend([tag.pretty_tree(indent+1) for tag in self.tags])
output.append(("\t"*indent) + "}")
return '\n'.join(output)
class TAG_Compound(TAG, DictMixin):
id = TAG_COMPOUND
def __init__(self, buffer=None):
super(TAG_Compound, self).__init__()
self.tags = []
if buffer:
self._parse_buffer(buffer)
#Parsers and Generators
def _parse_buffer(self, buffer, offset=None):
while True:
type = TAG_Byte(buffer=buffer)
if type.value == TAG_END:
#print "found tag_end"
break
else:
name = TAG_String(buffer=buffer).value
try:
#DEBUG print type, name
tag = TAGLIST[type.value](buffer=buffer)
tag.name = name
self.tags.append(tag)
except KeyError:
raise ValueError("Unrecognised tag type")
def _render_buffer(self, buffer, offset=None):
for tag in self.tags:
TAG_Byte(tag.id)._render_buffer(buffer, offset)
TAG_String(tag.name)._render_buffer(buffer, offset)
tag._render_buffer(buffer,offset)
buffer.write('\x00') #write TAG_END
# Dict compatibility.
# DictMixin requires at least __getitem__, and for more functionality,
# __setitem__, __delitem__, and keys.
def __getitem__(self, key):
if isinstance(key,int):
return self.tags[key]
elif isinstance(key, str):
for tag in self.tags:
if tag.name == key:
return tag
else:
raise KeyError("A tag with this name does not exist")
else:
raise ValueError("key needs to be either name of tag, or index of tag")
def __setitem__(self, key, value):
if isinstance(key, int):
# Just try it. The proper error will be raised if it doesn't work.
self.tags[key] = value
elif isinstance(key, str):
value.name = key
for i, tag in enumerate(self.tags):
if tag.name == key:
self.tags[i] = value
return
self.tags.append(value)
def __delitem__(self, key):
if isinstance(key, int):
del self.tags[key]
elif isinstance(key, str):
for i, tag in enumerate(self.tags):
if tag.name == key:
del self.tags[i]
return
raise KeyError("A tag with this name does not exist")
else:
raise ValueError("key needs to be either name of tag, or index of tag")
def keys(self):
return [tag.name for tag in self.tags]
-
#Printing and Formatting of tree
def __repr__(self):
return '%i Entries' % len(self.tags)
def pretty_tree(self, indent=0):
output = [super(TAG_Compound,self).pretty_tree(indent)]
if len(self.tags):
output.append(("\t"*indent) + "{")
output.extend([tag.pretty_tree(indent+1) for tag in self.tags])
output.append(("\t"*indent) + "}")
return '\n'.join(output)
TAGLIST = {TAG_BYTE: TAG_Byte, TAG_SHORT: TAG_Short, TAG_INT: TAG_Int,
TAG_LONG: TAG_Long, TAG_FLOAT: TAG_Float, TAG_DOUBLE: TAG_Double,
TAG_BYTE_ARRAY: TAG_Byte_Array, TAG_STRING: TAG_String,
TAG_LIST: TAG_List, TAG_COMPOUND: TAG_Compound}
class NBTFile(TAG_Compound):
"""Represents an NBT file object"""
def __init__(self, filename=None, mode=None, buffer=None, fileobj=None):
super(NBTFile,self).__init__()
self.__class__.__name__ = "TAG_Compound"
self.filename = filename
self.type = TAG_Byte(self.id)
#make a file object
if filename:
self.file = GzipFile(filename, mode)
elif buffer:
self.file = buffer
elif fileobj:
self.file = GzipFile(fileobj=fileobj)
else:
self.file = None
#parse the file given initially
if self.file:
self.parse_file()
if filename and 'close' in dir(self.file):
self.file.close()
self.file = None
def parse_file(self, filename=None, buffer=None, fileobj=None):
if filename:
self.file = GzipFile(filename, 'rb')
elif buffer:
self.file = buffer
elif fileobj:
self.file = GzipFile(fileobj=fileobj)
if self.file:
try:
type = TAG_Byte(buffer=self.file)
if type.value == self.id:
name = TAG_String(buffer=self.file).value
self._parse_buffer(self.file)
self.name = name
self.file.close()
else:
raise MalformedFileError("First record is not a Compound Tag")
except StructError:
raise MalformedFileError("Partial File Parse: file possibly truncated.")
else: raise ValueError("need a file!")
def write_file(self, filename=None, buffer=None, fileobj=None):
if buffer:
self.file = buffer
elif filename:
self.file = GzipFile(filename, "wb")
elif fileobj:
self.file = GzipFile(fileobj=fileobj)
elif self.filename:
self.file = GzipFile(self.filename, "wb")
elif not self.file:
raise ValueError("Need to specify either a filename or a file")
#Render tree to file
TAG_Byte(self.id)._render_buffer(self.file)
TAG_String(self.name)._render_buffer(self.file)
self._render_buffer(self.file)
#make sure the file is complete
if 'flush' in dir(self.file):
self.file.flush()
if filename and 'close' in dir(self.file):
self.file.close()
# Useful utility functions for handling large NBT structures elegantly and
# Pythonically.
def unpack_nbt(tag):
"""
Unpack an NBT tag into a native Python data structure.
"""
if isinstance(tag, TAG_List):
return [unpack_nbt(i) for i in tag.tags]
elif isinstance(tag, TAG_Compound):
return dict((i.name, unpack_nbt(i)) for i in tag.tags)
else:
return tag.value
def pack_nbt(s):
"""
Pack a native Python data structure into an NBT tag. Only the following
structures and types are supported:
* int
* float
* str
* unicode
* dict
Additionally, arbitrary iterables are supported.
Packing is not lossless. In order to avoid data loss, TAG_Long and
TAG_Double are preferred over the less precise numerical formats.
Lists and tuples may become dicts on unpacking if they were not homogeneous
during packing, as a side-effect of NBT's format. Nothing can be done
about this.
Only strings are supported as keys for dicts and other mapping types. If
your keys are not strings, they will be coerced. (Resistance is futile.)
"""
if isinstance(s, int):
return TAG_Long(s)
elif isinstance(s, float):
return TAG_Double(s)
elif isinstance(s, (str, unicode)):
return TAG_String(s)
elif isinstance(s, dict):
tag = TAG_Compound()
for k, v in s.iteritems():
v = pack_nbt(v)
v.name = str(k)
tag.tags.append(v)
return tag
elif hasattr(s, "__iter__"):
# We arrive at a slight quandary. NBT lists must be homogeneous, unlike
# Python lists. NBT compounds work, but require unique names for every
# entry. On the plus side, this technique should work for arbitrary
# iterables as well.
tags = [pack_nbt(i) for i in s]
t = type(tags[0])
# If we're homogenous...
if all(t == type(i) for i in tags):
tag = TAG_List(type=t)
tag.tags = tags
else:
tag = TAG_Compound()
for i, item in enumerate(tags):
item.name = str(i)
tag.tags = tags
return tag
else:
raise ValueError("Couldn't serialise type %s!" % type(s))
diff --git a/bravo/tests/utilities/test_redstone.py b/bravo/tests/utilities/test_redstone.py
index b3bb2c1..07b8e7c 100644
--- a/bravo/tests/utilities/test_redstone.py
+++ b/bravo/tests/utilities/test_redstone.py
@@ -1,419 +1,418 @@
from unittest import TestCase
from bravo.blocks import blocks
from bravo.utilities.redstone import (RedstoneError, Asic, Lever, PlainBlock,
Torch, Wire, bbool, truthify_block)
class TestTruthifyBlock(TestCase):
"""
Truthiness is serious business.
"""
def test_falsify_lever(self):
"""
Levers should be falsifiable without affecting which block they are
attached to.
"""
self.assertEqual(truthify_block(False, blocks["lever"].slot, 0xd),
(blocks["lever"].slot, 0x5))
def test_truthify_lever(self):
"""
Ditto for truthification.
"""
self.assertEqual(truthify_block(True, blocks["lever"].slot, 0x3),
(blocks["lever"].slot, 0xb))
-
def test_wire_idempotence(self):
"""
A wire which is already on shouldn't have its value affected by
``truthify_block()``.
"""
self.assertEqual(
truthify_block(True, blocks["redstone-wire"].slot, 0x9),
(blocks["redstone-wire"].slot, 0x9))
class TestBBool(TestCase):
"""
Blocks are castable to bools, with the help of ``bbool()``.
"""
def test_wire_false(self):
self.assertFalse(bbool(blocks["redstone-wire"].slot, 0x0))
def test_wire_true(self):
self.assertTrue(bbool(blocks["redstone-wire"].slot, 0xc))
def test_lever_false(self):
self.assertFalse(bbool(blocks["lever"].slot, 0x7))
def test_lever_true(self):
self.assertTrue(bbool(blocks["lever"].slot, 0xf))
def test_torch_false(self):
self.assertFalse(bbool(blocks["redstone-torch-off"].slot, 0x0))
def test_torch_true(self):
self.assertTrue(bbool(blocks["redstone-torch"].slot, 0x0))
class TestCircuitPlain(TestCase):
def test_sand_iter_outputs(self):
"""
Sand has several outputs.
"""
sand = PlainBlock((0, 0, 0), blocks["sand"].slot, 0x0)
self.assertTrue((0, 1, 0) in sand.iter_outputs())
class TestCircuitTorch(TestCase):
def test_torch_bad_metadata(self):
"""
Torch circuits know immediately if they have been fed bad metadata.
"""
self.assertRaises(RedstoneError, Torch, (0, 0, 0),
blocks["redstone-torch"].slot, 0x0)
def test_torch_plus_y_iter_inputs(self):
"""
A torch with +y orientation sits on top of a block.
"""
torch = Torch((0, 1, 0), blocks["redstone-torch"].slot,
blocks["redstone-torch"].orientation("+y"))
self.assertTrue((0, 0, 0) in torch.iter_inputs())
def test_torch_plus_z_input_output(self):
"""
A torch with +z orientation accepts input from one block, and sends
output to three blocks around it.
"""
torch = Torch((0, 0, 0), blocks["redstone-torch"].slot,
blocks["redstone-torch"].orientation("+z"))
self.assertTrue((0, 0, -1) in torch.iter_inputs())
self.assertTrue((0, 0, 1) in torch.iter_outputs())
self.assertTrue((1, 0, 0) in torch.iter_outputs())
self.assertTrue((-1, 0, 0) in torch.iter_outputs())
def test_torch_block_change(self):
"""
Torches change block type depending on their status. They don't change
metadata, though.
"""
metadata = blocks["redstone-torch"].orientation("-x")
torch = Torch((0, 0, 0), blocks["redstone-torch"].slot, metadata)
torch.status = False
self.assertEqual(
torch.to_block(blocks["redstone-torch"].slot, metadata),
(blocks["redstone-torch-off"].slot, metadata))
class TestCircuitLever(TestCase):
def test_lever_metadata_extra(self):
"""
Levers have double orientation flags depending on whether they are
flipped. If the extra flag is added, the lever should still be
constructible.
"""
metadata = blocks["lever"].orientation("-x")
Lever((0, 0, 0), blocks["lever"].slot, metadata | 0x8)
class TestCircuitCouplings(TestCase):
def test_sand_torch(self):
"""
A torch attached to a sand block will turn off when the sand block
turns on, and vice versa.
"""
asic = Asic()
sand = PlainBlock((0, 0, 0), blocks["sand"].slot, 0x0)
torch = Torch((1, 0, 0), blocks["redstone-torch"].slot,
blocks["redstone-torch"].orientation("+x"))
sand.connect(asic)
torch.connect(asic)
sand.status = True
torch.update()
self.assertFalse(torch.status)
sand.status = False
torch.update()
self.assertTrue(torch.status)
def test_sand_torch_above(self):
"""
A torch on top of a sand block will turn off when the sand block
turns on, and vice versa.
"""
asic = Asic()
sand = PlainBlock((0, 0, 0), blocks["sand"].slot, 0x0)
torch = Torch((0, 1, 0), blocks["redstone-torch"].slot,
blocks["redstone-torch"].orientation("+y"))
sand.connect(asic)
torch.connect(asic)
sand.status = True
torch.update()
self.assertFalse(torch.status)
sand.status = False
torch.update()
self.assertTrue(torch.status)
def test_lever_sand(self):
"""
A lever attached to a sand block will cause the sand block to have the
same value as the lever.
"""
asic = Asic()
lever = Lever((0, 0, 0), blocks["lever"].slot,
blocks["lever"].orientation("-x"))
sand = PlainBlock((1, 0, 0), blocks["sand"].slot, 0x0)
lever.connect(asic)
sand.connect(asic)
lever.status = False
sand.update()
self.assertFalse(sand.status)
lever.status = True
sand.update()
self.assertTrue(sand.status)
def test_torch_wire(self):
"""
Wires will connect to torches.
"""
asic = Asic()
wire = Wire((0, 0, 0), blocks["redstone-wire"].slot, 0x0)
torch = Torch((0, 0, 1), blocks["redstone-torch"].slot,
blocks["redstone-torch"].orientation("-z"))
wire.connect(asic)
torch.connect(asic)
self.assertTrue(wire in torch.outputs)
self.assertTrue(torch in wire.inputs)
def test_wire_sand_below(self):
"""
Wire will power the plain block beneath it.
"""
asic = Asic()
sand = PlainBlock((0, 0, 0), blocks["sand"].slot, 0x0)
wire = Wire((0, 1, 0), blocks["redstone-wire"].slot, 0x0)
sand.connect(asic)
wire.connect(asic)
wire.status = True
sand.update()
self.assertTrue(sand.status)
wire.status = False
sand.update()
self.assertFalse(sand.status)
class TestAsic(TestCase):
def setUp(self):
self.asic = Asic()
def test_trivial(self):
pass
def test_find_wires_single(self):
wires = set([
Wire((0, 0, 0), blocks["redstone-wire"].slot, 0x0),
])
for wire in wires:
wire.connect(self.asic)
self.assertEqual(wires, self.asic.find_wires(0, 0, 0)[1])
def test_find_wires_plural(self):
wires = set([
Wire((0, 0, 0), blocks["redstone-wire"].slot, 0x0),
Wire((1, 0, 0), blocks["redstone-wire"].slot, 0x0),
])
for wire in wires:
wire.connect(self.asic)
self.assertEqual(wires, self.asic.find_wires(0, 0, 0)[1])
def test_find_wires_many(self):
wires = set([
Wire((0, 0, 0), blocks["redstone-wire"].slot, 0x0),
Wire((1, 0, 0), blocks["redstone-wire"].slot, 0x0),
Wire((2, 0, 0), blocks["redstone-wire"].slot, 0x0),
Wire((2, 0, 1), blocks["redstone-wire"].slot, 0x0),
])
for wire in wires:
wire.connect(self.asic)
self.assertEqual(wires, self.asic.find_wires(0, 0, 0)[1])
def test_find_wires_cross(self):
"""
Finding wires works when the starting point is inside a cluster of
wires.
"""
wires = set([
Wire((0, 0, 0), blocks["redstone-wire"].slot, 0x0),
Wire((1, 0, 0), blocks["redstone-wire"].slot, 0x0),
Wire((-1, 0, 0), blocks["redstone-wire"].slot, 0x0),
Wire((0, 0, 1), blocks["redstone-wire"].slot, 0x0),
Wire((0, 0, -1), blocks["redstone-wire"].slot, 0x0),
])
for wire in wires:
wire.connect(self.asic)
self.assertEqual(wires, self.asic.find_wires(0, 0, 0)[1])
def test_find_wires_inputs_many(self):
inputs = set([
Wire((0, 0, 0), blocks["redstone-wire"].slot, 0x0),
Wire((2, 0, 1), blocks["redstone-wire"].slot, 0x0),
])
wires = set([
Wire((1, 0, 0), blocks["redstone-wire"].slot, 0x0),
Wire((2, 0, 0), blocks["redstone-wire"].slot, 0x0),
])
wires.update(inputs)
torches = set([
Torch((0, 0, 1), blocks["redstone-torch"].slot,
blocks["redstone-torch"].orientation("-z")),
Torch((3, 0, 1), blocks["redstone-torch"].slot,
blocks["redstone-torch"].orientation("-x")),
])
for wire in wires:
wire.connect(self.asic)
for torch in torches:
torch.connect(self.asic)
self.assertEqual(inputs, set(self.asic.find_wires(0, 0, 0)[0]))
def test_find_wires_outputs_many(self):
wires = set([
Wire((0, 0, 0), blocks["redstone-wire"].slot, 0x0),
Wire((2, 0, 0), blocks["redstone-wire"].slot, 0x0),
])
outputs = set([
Wire((1, 0, 0), blocks["redstone-wire"].slot, 0x0),
Wire((3, 0, 0), blocks["redstone-wire"].slot, 0x0),
])
wires.update(outputs)
plains = set([
PlainBlock((1, 0, 1), blocks["sand"].slot, 0x0),
PlainBlock((4, 0, 0), blocks["sand"].slot, 0x0),
])
for wire in wires:
wire.connect(self.asic)
for plain in plains:
plain.connect(self.asic)
self.assertEqual(outputs, set(self.asic.find_wires(0, 0, 0)[2]))
def test_update_wires_single(self):
torch = Torch((0, 0, 0), blocks["redstone-torch-off"].slot,
blocks["redstone-torch"].orientation("-x"))
wire = Wire((1, 0, 0), blocks["redstone-wire"].slot, 0x0)
plain = PlainBlock((2, 0, 0), blocks["sand"].slot, 0x0)
torch.connect(self.asic)
wire.connect(self.asic)
plain.connect(self.asic)
wires, outputs = self.asic.update_wires(1, 0, 0)
self.assertTrue(wire in wires)
self.assertTrue(plain in outputs)
self.assertFalse(wire.status)
self.assertEqual(wire.metadata, 0)
def test_update_wires_single_powered(self):
torch = Torch((0, 0, 0), blocks["redstone-torch"].slot,
blocks["redstone-torch"].orientation("-x"))
wire = Wire((1, 0, 0), blocks["redstone-wire"].slot, 0x0)
plain = PlainBlock((2, 0, 0), blocks["sand"].slot, 0x0)
torch.connect(self.asic)
wire.connect(self.asic)
plain.connect(self.asic)
torch.status = True
wires, outputs = self.asic.update_wires(1, 0, 0)
self.assertTrue(wire in wires)
self.assertTrue(plain in outputs)
self.assertTrue(wire.status)
self.assertEqual(wire.metadata, 15)
def test_update_wires_multiple(self):
torch = Torch((0, 0, 0), blocks["redstone-torch-off"].slot,
blocks["redstone-torch"].orientation("-x"))
wire = Wire((1, 0, 0), blocks["redstone-wire"].slot, 0x0)
wire2 = Wire((1, 0, 1), blocks["redstone-wire"].slot, 0x0)
plain = PlainBlock((2, 0, 0), blocks["sand"].slot, 0x0)
torch.connect(self.asic)
wire.connect(self.asic)
wire2.connect(self.asic)
plain.connect(self.asic)
wires, outputs = self.asic.update_wires(1, 0, 0)
self.assertTrue(wire in wires)
self.assertTrue(wire2 in wires)
self.assertTrue(plain in outputs)
self.assertFalse(wire.status)
self.assertEqual(wire.metadata, 0)
self.assertFalse(wire2.status)
self.assertEqual(wire2.metadata, 0)
def test_update_wires_multiple_powered(self):
torch = Torch((0, 0, 0), blocks["redstone-torch"].slot,
blocks["redstone-torch"].orientation("-x"))
wire = Wire((1, 0, 0), blocks["redstone-wire"].slot, 0x0)
wire2 = Wire((1, 0, 1), blocks["redstone-wire"].slot, 0x0)
plain = PlainBlock((2, 0, 0), blocks["sand"].slot, 0x0)
torch.connect(self.asic)
wire.connect(self.asic)
wire2.connect(self.asic)
plain.connect(self.asic)
torch.status = True
wires, outputs = self.asic.update_wires(1, 0, 0)
self.assertTrue(wire in wires)
self.assertTrue(wire2 in wires)
self.assertTrue(plain in outputs)
self.assertTrue(wire.status)
self.assertEqual(wire.metadata, 15)
self.assertTrue(wire2.status)
self.assertEqual(wire2.metadata, 14)
diff --git a/bravo/utilities/redstone.py b/bravo/utilities/redstone.py
index 729253a..4543c40 100644
--- a/bravo/utilities/redstone.py
+++ b/bravo/utilities/redstone.py
@@ -1,463 +1,462 @@
from collections import deque
from itertools import chain
from operator import not_
from bravo.blocks import blocks
def truthify_block(truth, block, metadata):
"""
Alter a block based on whether it should be true or false (on or off).
This function returns a tuple of the block and metadata, possibly
partially or fully unaltered.
"""
# Redstone torches.
if block in (blocks["redstone-torch"].slot,
blocks["redstone-torch-off"].slot):
if truth:
return blocks["redstone-torch"].slot, metadata
else:
return blocks["redstone-torch-off"].slot, metadata
# Redstone wires.
elif block == blocks["redstone-wire"].slot:
if truth:
# Try to preserve the current wire value.
return block, metadata if metadata else 0xf
else:
return block, 0x0
# Levers.
elif block == blocks["lever"].slot:
if truth:
return block, metadata | 0x8
else:
return block, metadata & ~0x8
# Any other block passes through unchanged.
return block, metadata
def bbool(block, metadata):
"""
Get a Boolean value for a given block and metadata.
"""
if block == blocks["redstone-torch"].slot:
return True
elif block == blocks["redstone-torch-off"].slot:
return False
elif block == blocks["redstone-wire"].slot:
return bool(metadata)
elif block == blocks["lever"].slot:
return bool(metadata & 0x8)
return False
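The lever branch in `truthify_block()`/`bbool()` packs the on/off state into bit 0x8 of the metadata while leaving the orientation bits intact. A minimal standalone sketch of that bit-twiddling (`lever_set`/`lever_get` are hypothetical helper names, not Bravo functions):

```python
# Bit 0x8 carries the lever's on/off state; the low bits keep orientation.
def lever_set(metadata, truth):
    return metadata | 0x8 if truth else metadata & ~0x8

def lever_get(metadata):
    return bool(metadata & 0x8)
```

These match the values exercised in the test suite: truthifying metadata 0x3 yields 0xb, falsifying 0xd yields 0x5.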
class RedstoneError(Exception):
"""
A ghost in the shell.
"""
class Asic(object):
"""
An integrated circuit.
Asics are aware of all of the circuits hooked into them, and store some
additional data for speeding up certain calculations.
The name "asic" comes from the acronym "ASIC", meaning
"application-specific integrated circuit."
"""
level_marker = object()
def __init__(self):
self.circuits = {}
self._wire_cache = {}
def _get_wire_neighbors(self, wire):
for neighbor in chain(wire.iter_inputs(), wire.iter_outputs()):
if neighbor not in self.circuits:
continue
circuit = self.circuits[neighbor]
if circuit.name == "wire":
yield circuit
-
def find_wires(self, x, y, z):
"""
Collate a group of neighboring wires, starting at a certain point.
This function does a simple breadth-first search to find wires.
Returns a tuple of the wires in the group which have inputs, all of the
wires in the group, and the wires in the group which have outputs.
"""
if (x, y, z) not in self.circuits:
raise RedstoneError("Unmanaged coords!")
root = self.circuits[x, y, z]
if root.name != "wire":
raise RedstoneError("Non-wire in find_wires")
d = deque([root])
wires = set()
heads = []
tails = []
while d:
# Breadth-first search. Push on the left, pop on the right. Search
# ends when the deque is empty.
w = d.pop()
for neighbor in self._get_wire_neighbors(w):
if neighbor not in wires:
d.appendleft(neighbor)
# If any additional munging needs to be done, do it here.
wires.add(w)
if w.inputs:
heads.append(w)
if w.outputs:
tails.append(w)
return heads, wires, tails
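The breadth-first flood in `find_wires()` can be sketched over bare coordinate tuples instead of `Circuit` objects. This is a simplified sketch (`flood_wires` is an invented name, and it only walks horizontal neighbors, unlike the real `_get_wire_neighbors()`):

```python
# Push on the left, pop on the right; the search ends when the deque drains.
from collections import deque

def flood_wires(start, wires):
    seen = set()
    d = deque([start])
    while d:
        x, y, z = d.pop()
        seen.add((x, y, z))
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 0, 1), (0, 0, -1)):
            n = (x + dx, y + dy, z + dz)
            if n in wires and n not in seen:
                d.appendleft(n)
    return seen
```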
def update_wires(self, x, y, z):
"""
Find all the wires in a group and update them all, by force if
necessary.
Returns a list of outputs belonging to this wire group, for
convenience.
"""
heads, wires, tails = self.find_wires(x, y, z)
# First, collate our output target blocks. These will be among the
# blocks fired on the tick after this tick.
outputs = set()
for wire in tails:
outputs.update(wire.outputs)
# Save our retvals before we get down to business.
retval = wires.copy(), outputs
# Update all of the head wires, then figure out which ones are
# conveying current and use those as the starters.
for head in heads:
# Wirehax: Use Wire's superclass, Circuit, to do the update,
# because Wire.update() calls this method; Circuit.update()
# contains the actual updating logic.
Circuit.update(head)
starters = [head for head in heads if head.status]
visited = set(starters)
# Breadth-first search, for each glow value, and then flush the
# remaining wires when we finish.
for level in xrange(15, 0, -1):
if not visited:
# Early out. We're out of wires to visit, and we won't be
# getting any more since the next round of visitors is
# completely dependent on this round.
break
to_visit = set()
for wire in visited:
wire.status = True
wire.metadata = level
for neighbor in self._get_wire_neighbors(wire):
if neighbor in wires:
to_visit.add(neighbor)
wires -= visited
visited = to_visit
# Anything left after *that* must have a level of zero.
for wire in wires:
wire.status = False
wire.metadata = 0
return retval
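The glow-level pass above assigns 15 to the powered head wires and one less to each successive breadth-first ring, bottoming out at zero. A self-contained sketch of that decay, assuming a plain adjacency dict in place of the ASIC (`power_levels` and `neighbours` are invented for the example):

```python
# Each breadth-first ring of wires glows one level dimmer than the last,
# starting at 15; wires never reached stay unassigned (i.e. level zero).
def power_levels(starters, neighbours):
    levels = {}
    visited = set(starters)
    for level in range(15, 0, -1):
        if not visited:
            # Early out, as in update_wires(): no more wires to visit.
            break
        to_visit = set()
        for wire in visited:
            levels[wire] = level
            for n in neighbours.get(wire, ()):
                if n not in levels and n not in visited:
                    to_visit.add(n)
        visited = to_visit
    return levels
```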
class Circuit(object):
"""
A block or series of blocks conveying a basic composited transistor.
Circuits form the base of speedily-evaluated redstone. They know their
inputs, their outputs, and how to update themselves.
"""
asic = None
def __new__(cls, coordinates, block, metadata):
"""
Create a new circuit.
This method is special; it will return one of its subclasses depending
on that subclass's preferred blocks.
"""
block_to_circuit = {
blocks["lever"].slot: Lever,
blocks["redstone-torch"].slot: Torch,
blocks["redstone-torch-off"].slot: Torch,
blocks["redstone-wire"].slot: Wire,
}
cls = block_to_circuit.get(block, PlainBlock)
obj = object.__new__(cls)
obj.coords = coordinates
obj.block = block
obj.metadata = metadata
obj.inputs = set()
obj.outputs = set()
obj.from_block(block, metadata)
# If any custom __init__() was added to this class, it'll be run after
# this.
return obj
def __str__(self):
return "<%s(%d, %d, %d, %s)>" % (self.__class__.__name__,
self.coords[0], self.coords[1], self.coords[2], self.status)
__repr__ = __str__
def iter_inputs(self):
"""
Iterate over possible input coordinates.
"""
x, y, z = self.coords
for dx, dy, dz in ((-1, 0, 0), (1, 0, 0), (0, 0, -1), (0, 0, 1),
(0, -1, 0), (0, 1, 0)):
yield x + dx, y + dy, z + dz
def iter_outputs(self):
"""
Iterate over possible output coordinates.
"""
x, y, z = self.coords
for dx, dy, dz in ((-1, 0, 0), (1, 0, 0), (0, 0, -1), (0, 0, 1),
(0, -1, 0), (0, 1, 0)):
yield x + dx, y + dy, z + dz
def connect(self, asic):
"""
Add this circuit to an ASIC.
"""
circuits = asic.circuits
if self.coords in circuits and circuits[self.coords] is not self:
raise RedstoneError("Circuit trace already occupied!")
circuits[self.coords] = self
self.asic = asic
for coords in self.iter_inputs():
if coords not in circuits:
continue
target = circuits[coords]
if self.name in target.traceables:
self.inputs.add(target)
target.outputs.add(self)
for coords in self.iter_outputs():
if coords not in circuits:
continue
target = circuits[coords]
if target.name in self.traceables:
target.inputs.add(self)
self.outputs.add(target)
def disconnect(self, asic):
"""
Remove this circuit from an ASIC.
"""
if self.coords not in asic.circuits:
raise RedstoneError("Circuit can't detach from ASIC!")
if asic.circuits[self.coords] is not self:
raise RedstoneError("Circuit can't detach another circuit!")
for circuit in self.inputs:
circuit.outputs.discard(self)
for circuit in self.outputs:
circuit.inputs.discard(self)
self.inputs.clear()
self.outputs.clear()
del asic.circuits[self.coords]
self.asic = None
def update(self):
"""
Update outputs based on current state of inputs.
"""
if not self.inputs:
return (), ()
inputs = [i.status for i in self.inputs]
status = self.op(*inputs)
if self.status != status:
self.status = status
return (self,), self.outputs
else:
return (), ()
def from_block(self, block, metadata):
self.status = bbool(block, metadata)
def to_block(self, block, metadata):
return truthify_block(self.status, block, metadata)
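The `(changed, affected)` pair returned by `update()` lends itself to a simple worklist loop: keep re-running updates until nothing reports a change. A minimal sketch of that loop, using a hypothetical stub gate (the real `Circuit` needs an ASIC and block metadata to construct):

```python
from collections import deque

class StubGate:
    """Tiny stand-in for Circuit: ORs its inputs, like PlainBlock."""
    def __init__(self, name):
        self.name = name
        self.status = False
        self.inputs = set()
        self.outputs = set()

    def update(self):
        # Same contract as Circuit.update(): return the circuits that
        # changed, and the circuits that should be re-examined next.
        if not self.inputs:
            return (), ()
        status = any(i.status for i in self.inputs)
        if self.status != status:
            self.status = status
            return (self,), self.outputs
        return (), ()

def settle(seeds):
    """Run updates until the network reaches a fixed point."""
    queue = deque(seeds)
    while queue:
        circuit = queue.popleft()
        changed, affected = circuit.update()
        # Unchanged circuits return empty tuples, so this terminates.
        queue.extend(affected)

# A source driving two gates in a chain: source -> a -> b.
source = StubGate("source")
a, b = StubGate("a"), StubGate("b")
a.inputs.add(source); source.outputs.add(a)
b.inputs.add(a); a.outputs.add(b)

source.status = True
settle([a])
print(a.status, b.status)  # True True
```

The fixed point is guaranteed because a circuit whose status did not change returns no affected neighbors, so the queue drains.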
class Wire(Circuit):
"""
The ubiquitous conductor of current.
Wires technically copy all of their inputs to their outputs, but the
operation isn't Boolean. Wires propagate the Boolean sum (OR) of their
inputs to any outputs which are relatively close to those inputs. It's
confusing.
"""
name = "wire"
traceables = ("plain",)
def update(self):
x, y, z = self.coords
return self.asic.update_wires(x, y, z)
@staticmethod
def op(*inputs):
return any(inputs)
def to_block(self, block, metadata):
return block, self.metadata
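The "Boolean sum" mentioned in the docstring is just `any()` over the inputs, which is exactly what `Wire.op` and `PlainBlock.op` compute:

```python
# OR across inputs, matching Wire.op and PlainBlock.op.
assert any([False, False]) is False
assert any([False, True]) is True
assert any([]) is False  # an isolated wire carries no current
```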
class PlainBlock(Circuit):
"""
Any block which doesn't contain redstone. Traditionally, a sand block, but
most blocks work for this.
Plain blocks do an OR operation across their inputs.
"""
name = "plain"
traceables = ("torch",)
@staticmethod
def op(*inputs):
return any(inputs)
class OrientedCircuit(Circuit):
"""
A circuit which cares about its orientation.
Examples include torches and levers.
"""
def __init__(self, coords, block, metadata):
self.orientation = blocks[block].face(metadata)
if self.orientation is None:
raise RedstoneError("Bad metadata %d for %r!" % (metadata, self))
class Torch(OrientedCircuit):
"""
A redstone torch.
Torches do a NOT operation from their input.
"""
name = "torch"
traceables = ("wire",)
op = staticmethod(not_)
def iter_inputs(self):
"""
Provide the input corresponding to the block upon which this torch is
mounted.
"""
x, y, z = self.coords
if self.orientation == "+x":
yield x - 1, y, z
elif self.orientation == "-x":
yield x + 1, y, z
elif self.orientation == "+z":
yield x, y, z - 1
elif self.orientation == "-z":
yield x, y, z + 1
elif self.orientation == "+y":
yield x, y - 1, z
def iter_outputs(self):
"""
Provide the outputs corresponding to the block upon which this torch
is mounted.
"""
x, y, z = self.coords
if self.orientation != "+x":
yield x - 1, y, z
if self.orientation != "-x":
yield x + 1, y, z
if self.orientation != "+z":
yield x, y, z - 1
if self.orientation != "-z":
yield x, y, z + 1
if self.orientation != "+y":
yield x, y - 1, z
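Since `Torch.op` is just `operator.not_`, the torch's inversion behavior can be checked in isolation, without constructing a real `Torch` (which requires block metadata for its orientation):

```python
from operator import not_

# A torch inverts the block it is mounted on:
assert not_(True) is False   # mounted block powered -> torch off
assert not_(False) is True   # mounted block unpowered -> torch lit
```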
class Lever(OrientedCircuit):
"""
A settable lever.
Levers only provide output, to a single block.
"""
name = "lever"
traceables = ("plain",)
def iter_inputs(self):
# Just return an empty tuple. Levers will never take inputs.
return ()
def iter_outputs(self):
"""
Provide the output corresponding to the block upon which this lever is
mounted.
"""
x, y, z = self.coords
if self.orientation == "+x":
yield x - 1, y, z
elif self.orientation == "-x":
yield x + 1, y, z
elif self.orientation == "+z":
yield x, y, z - 1
elif self.orientation == "-z":
yield x, y, z + 1
elif self.orientation == "+y":
yield x, y - 1, z
def update(self):
"""
Specialized update routine just for levers.
This could probably be shared with switches later.
"""
return (self,), self.outputs
|
saturnflyer/radiant-people-extension
|
7989982d2fbe69f6aaa4b7895f3d30177cf5af8b
|
Fix copy-pasta error
|
diff --git a/Rakefile b/Rakefile
index 9220b24..e8c57e4 100644
--- a/Rakefile
+++ b/Rakefile
@@ -1,109 +1,109 @@
# Determine where the RSpec plugin is by loading the boot
unless defined? RADIANT_ROOT
ENV["RAILS_ENV"] = "test"
case
when ENV["RADIANT_ENV_FILE"]
require File.dirname(ENV["RADIANT_ENV_FILE"]) + "/boot"
when File.dirname(__FILE__) =~ %r{vendor/radiant/vendor/extensions}
require "#{File.expand_path(File.dirname(__FILE__) + "/../../../../../")}/config/boot"
else
require "#{File.expand_path(File.dirname(__FILE__) + "/../../../")}/config/boot"
end
end
require 'rake'
require 'rdoc/task'
require 'rake/testtask'
rspec_base = File.expand_path(RADIANT_ROOT + '/vendor/plugins/rspec/lib')
$LOAD_PATH.unshift(rspec_base) if File.exist?(rspec_base)
require 'spec/rake/spectask'
require 'cucumber'
require 'cucumber/rake/task'
# Cleanup the RADIANT_ROOT constant so specs will load the environment
Object.send(:remove_const, :RADIANT_ROOT)
extension_root = File.expand_path(File.dirname(__FILE__))
task :default => [:spec, :features]
task :stats => "spec:statsetup"
desc "Run all specs in spec directory"
Spec::Rake::SpecTask.new(:spec) do |t|
t.spec_opts = ['--options', "\"#{extension_root}/spec/spec.opts\""]
t.spec_files = FileList['spec/**/*_spec.rb']
end
task :features => 'spec:integration'
namespace :spec do
desc "Run all specs in spec directory with RCov"
Spec::Rake::SpecTask.new(:rcov) do |t|
t.spec_opts = ['--options', "\"#{extension_root}/spec/spec.opts\""]
t.spec_files = FileList['spec/**/*_spec.rb']
t.rcov = true
t.rcov_opts = ['--exclude', 'spec', '--rails']
end
desc "Print Specdoc for all specs"
Spec::Rake::SpecTask.new(:doc) do |t|
t.spec_opts = ["--format", "specdoc", "--dry-run"]
t.spec_files = FileList['spec/**/*_spec.rb']
end
[:models, :controllers, :views, :helpers].each do |sub|
desc "Run the specs under spec/#{sub}"
Spec::Rake::SpecTask.new(sub) do |t|
t.spec_opts = ['--options', "\"#{extension_root}/spec/spec.opts\""]
t.spec_files = FileList["spec/#{sub}/**/*_spec.rb"]
end
end
desc "Run the Cucumber features"
Cucumber::Rake::Task.new(:integration) do |t|
t.fork = true
t.cucumber_opts = ['--format', (ENV['CUCUMBER_FORMAT'] || 'pretty')]
# t.feature_pattern = "#{extension_root}/features/**/*.feature"
t.profile = "default"
end
# Setup specs for stats
task :statsetup do
require 'code_statistics'
::STATS_DIRECTORIES << %w(Model\ specs spec/models)
::STATS_DIRECTORIES << %w(View\ specs spec/views)
::STATS_DIRECTORIES << %w(Controller\ specs spec/controllers)
::STATS_DIRECTORIES << %w(Helper\ specs spec/views)
::CodeStatistics::TEST_TYPES << "Model specs"
::CodeStatistics::TEST_TYPES << "View specs"
::CodeStatistics::TEST_TYPES << "Controller specs"
::CodeStatistics::TEST_TYPES << "Helper specs"
::STATS_DIRECTORIES.delete_if {|a| a[0] =~ /test/}
end
namespace :db do
namespace :fixtures do
desc "Load fixtures (from spec/fixtures) into the current environment's database. Load specific fixtures using FIXTURES=x,y"
task :load => :environment do
require 'active_record/fixtures'
ActiveRecord::Base.establish_connection(RAILS_ENV.to_sym)
(ENV['FIXTURES'] ? ENV['FIXTURES'].split(/,/) : Dir.glob(File.join(RAILS_ROOT, 'spec', 'fixtures', '*.{yml,csv}'))).each do |fixture_file|
Fixtures.create_fixtures('spec/fixtures', File.basename(fixture_file, '.*'))
end
end
end
end
end
desc 'Generate documentation for the shows extension.'
RDoc::Task.new(:rdoc) do |rdoc|
rdoc.rdoc_dir = 'rdoc'
- rdoc.title = 'ShowsExtension'
+ rdoc.title = 'PeopleExtension'
rdoc.options << '--line-numbers' << '--inline-source'
rdoc.rdoc_files.include('README')
rdoc.rdoc_files.include('lib/**/*.rb')
end
# Load any custom rakefiles for extension
Dir[File.dirname(__FILE__) + '/tasks/*.rake'].sort.each { |f| require f }
|
saturnflyer/radiant-people-extension
|
d6f10bed1ecd185b01479e74341f5368dff62d9f
|
Add a region for easy partial insertion
|
diff --git a/app/views/admin/people/_form.html.haml b/app/views/admin/people/_form.html.haml
index 475cb5b..7a0d13a 100644
--- a/app/views/admin/people/_form.html.haml
+++ b/app/views/admin/people/_form.html.haml
@@ -1,15 +1,16 @@
-.recordPart
- = f.label :first_name
- = f.text_field :first_name
- = f.error_message_on :first_name
-.recordPart
- = f.label :middle_name
- = f.text_field :middle_name
- = f.error_message_on :middle_name
-.recordPart
- = f.label :last_name
- = f.text_field :last_name
- = f.error_message_on :last_name
-.recordPart
- = f.label :gender
- = f.select :gender, [['male','male'],['female','female']], :include_blank => true
\ No newline at end of file
+- render_region :form, :locals => {:f => f} do |form|
+ .recordPart
+ = f.label :first_name
+ = f.text_field :first_name
+ = f.error_message_on :first_name
+ .recordPart
+ = f.label :middle_name
+ = f.text_field :middle_name
+ = f.error_message_on :middle_name
+ .recordPart
+ = f.label :last_name
+ = f.text_field :last_name
+ = f.error_message_on :last_name
+ .recordPart
+ = f.label :gender
+ = f.select :gender, [['male','male'],['female','female']], :include_blank => true
|
saturnflyer/radiant-people-extension
|
1493808c765afa4a157002f58a690a7721ce3944
|
Add a basic README
|
diff --git a/README b/README
deleted file mode 100644
index 6171a20..0000000
--- a/README
+++ /dev/null
@@ -1,3 +0,0 @@
-= People
-
-Description goes here
\ No newline at end of file
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..49f5139
--- /dev/null
+++ b/README.md
@@ -0,0 +1,5 @@
+# People
+
+An extension for managing people in Radiant CMS.
+
+[Built by Saturn Flyer](http://www.saturnflyer.com)
|
saturnflyer/radiant-people-extension
|
42a5419221bb7c026d086923505949bf1eaaacdb
|
Modernize tests
|
diff --git a/spec/controllers/admin/people_controller_spec.rb b/spec/controllers/admin/people_controller_spec.rb
index 9aafad3..6b6385e 100644
--- a/spec/controllers/admin/people_controller_spec.rb
+++ b/spec/controllers/admin/people_controller_spec.rb
@@ -1,14 +1,14 @@
require File.expand_path("../../../spec_helper", __FILE__)
describe Admin::PeopleController do
describe 'routing' do
it "should map the consolidate path" do
- route_for(:controller => "admin/people", :action => "consolidate").should == "/admin/people/consolidate"
+ route_for(:controller => "admin/people", :action => "show", :id => "consolidate").should == "/admin/people/consolidate"
end
it "should map the path to the action" do
- params_from(:get, "/admin/people/consolidate").should == {:controller => "admin/people", :action => "consolidate"}
+ params_from(:get, "/admin/people/consolidate").should == {:controller => "admin/people", :action => "show", :id => "consolidate"}
end
end
end
diff --git a/spec/helpers/admin/people_helper_spec.rb b/spec/helpers/admin/people_helper_spec.rb
deleted file mode 100644
index 9242f32..0000000
--- a/spec/helpers/admin/people_helper_spec.rb
+++ /dev/null
@@ -1,11 +0,0 @@
-require File.expand_path("../../../spec_helper", __FILE__)
-
-describe Admin::PeopleHelper do
-
- #Delete this example and add some real ones or delete this file
- it "should include the Admin::PeopleHelper" do
- included_modules = self.metaclass.send :included_modules
- included_modules.should include(Admin::PeopleHelper)
- end
-
-end
|
saturnflyer/radiant-people-extension
|
88953c324c8928079470ced394679c7ab269a99d
|
new gemspec
|
diff --git a/radiant-people-extension.gemspec b/radiant-people-extension.gemspec
index 0b67037..c4ec801 100644
--- a/radiant-people-extension.gemspec
+++ b/radiant-people-extension.gemspec
@@ -1,75 +1,77 @@
# Generated by jeweler
# DO NOT EDIT THIS FILE DIRECTLY
# Instead, edit Jeweler::Tasks in Rakefile, and run the gemspec command
# -*- encoding: utf-8 -*-
Gem::Specification.new do |s|
s.name = %q{radiant-people-extension}
- s.version = "1.0.0"
+ s.version = "1.1.0"
s.required_rubygems_version = Gem::Requirement.new(">= 0") if s.respond_to? :required_rubygems_version=
s.authors = ["Jim Gay"]
- s.date = %q{2010-01-19}
+ s.date = %q{2010-07-25}
s.description = %q{A generic and extendable way to manage people in Radiant CMS}
s.email = %q{[email protected]}
s.extra_rdoc_files = [
"README"
]
s.files = [
".gitignore",
"README",
"Rakefile",
"VERSION",
"app/controllers/admin/people_controller.rb",
"app/helpers/admin/people_helper.rb",
"app/models/person.rb",
"app/views/admin/people/_form.html.haml",
"app/views/admin/people/_person.html.haml",
"app/views/admin/people/edit.html.haml",
"app/views/admin/people/index.html.haml",
"app/views/admin/people/new.html.haml",
+ "config/locales/en.yml",
+ "config/routes.rb",
"cucumber.yml",
"db/migrate/20090905004948_create_people.rb",
"features/support/env.rb",
"features/support/paths.rb",
"lib/tasks/people_extension_tasks.rake",
"people_extension.rb",
"radiant-people-extension.gemspec",
"spec/controllers/admin/people_controller_spec.rb",
"spec/helpers/admin/people_helper_spec.rb",
"spec/models/person_spec.rb",
"spec/spec.opts",
"spec/spec_helper.rb"
]
s.homepage = %q{http://github.com/saturnflyer/radiant-people-extension}
s.rdoc_options = ["--charset=UTF-8"]
s.require_paths = ["lib"]
- s.rubygems_version = %q{1.3.5}
+ s.rubygems_version = %q{1.3.7}
s.summary = %q{Manage People in Radiant CMS}
s.test_files = [
"spec/controllers/admin/people_controller_spec.rb",
"spec/helpers/admin/people_helper_spec.rb",
"spec/models/person_spec.rb",
"spec/spec_helper.rb"
]
if s.respond_to? :specification_version then
current_version = Gem::Specification::CURRENT_SPECIFICATION_VERSION
s.specification_version = 3
- if Gem::Version.new(Gem::RubyGemsVersion) >= Gem::Version.new('1.2.0') then
+ if Gem::Version.new(Gem::VERSION) >= Gem::Version.new('1.2.0') then
s.add_runtime_dependency(%q<will_paginate>, [">= 0"])
s.add_runtime_dependency(%q<searchlogic>, [">= 0"])
s.add_runtime_dependency(%q<merger>, [">= 0"])
else
s.add_dependency(%q<will_paginate>, [">= 0"])
s.add_dependency(%q<searchlogic>, [">= 0"])
s.add_dependency(%q<merger>, [">= 0"])
end
else
s.add_dependency(%q<will_paginate>, [">= 0"])
s.add_dependency(%q<searchlogic>, [">= 0"])
s.add_dependency(%q<merger>, [">= 0"])
end
end
|
saturnflyer/radiant-people-extension
|
23b344e08ac941a6f54a78fb2fdd8239fbe72544
|
only clear results when you've searched. version bump
|
diff --git a/VERSION b/VERSION
index 3eefcb9..9084fa2 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-1.0.0
+1.1.0
diff --git a/app/views/admin/people/index.html.haml b/app/views/admin/people/index.html.haml
index d460bca..f54ee09 100644
--- a/app/views/admin/people/index.html.haml
+++ b/app/views/admin/people/index.html.haml
@@ -1,46 +1,47 @@
.outset
- render_region :top do |top|
- top.search do
.search
%h2.label_head Filter
- form_tag admin_people_path, :method => :get do
= label :person, :last_name
= text_field :person, :last_name
= label :person, :first_name
= text_field :person, :first_name
= submit_tag 'Search'
- = link_to 'Clear results...', admin_people_path
+ - if params[:person]
+ = link_to 'Clear results...', admin_people_path
- if params[:person]
- form_for :merge, :url => merge_admin_people_path() do |f|
%table.index
%thead
%tr
- render_region :people_head do |people_head|
- people_head.name_column_head do
%th Name
- people_head.gender_column_head do
%th Gender
%tbody
- unless @people.blank?
- @people.each do |person|
= render_person(person)
%p= submit_tag 'Merge'
- else
%table.index
%thead
%tr
- render_region :people_head do |people_head|
- people_head.name_column_head do
%th Name
- people_head.gender_column_head do
%th Gender
%tbody
- unless @people.blank?
- @people.each do |person|
- @current_person = person # for additions
= render 'person', :person => person
%p= will_paginate @people
#actions
%ul
%li= link_to "Add Person", new_admin_person_path
\ No newline at end of file
|
saturnflyer/radiant-people-extension
|
24093cd921876d8abfb1be91c8a497d402dc0e6f
|
add missing render_person
|
diff --git a/app/helpers/admin/people_helper.rb b/app/helpers/admin/people_helper.rb
index a6e7e7f..94b72e2 100644
--- a/app/helpers/admin/people_helper.rb
+++ b/app/helpers/admin/people_helper.rb
@@ -1,2 +1,6 @@
module Admin::PeopleHelper
+ def render_person(person, locals={})
+ details = locals.merge!({:person => person})
+ render person, details
+ end
end
|
saturnflyer/radiant-people-extension
|
4d77c66aebac5689d1326af153dd38985bb93b7e
|
ignore pkg
|
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..0c6117d
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1 @@
+pkg
\ No newline at end of file
|
saturnflyer/radiant-people-extension
|
f339fb4fd3d34af48078f4ee78ffa285d443748d
|
speedier rendering
|
diff --git a/app/helpers/admin/people_helper.rb b/app/helpers/admin/people_helper.rb
index c238751..a6e7e7f 100644
--- a/app/helpers/admin/people_helper.rb
+++ b/app/helpers/admin/people_helper.rb
@@ -1,6 +1,2 @@
module Admin::PeopleHelper
- def render_person(person)
- @current_person = person
- render 'person', :person => @current_person
- end
end
diff --git a/app/views/admin/people/index.html.haml b/app/views/admin/people/index.html.haml
index 88cf28d..d460bca 100644
--- a/app/views/admin/people/index.html.haml
+++ b/app/views/admin/people/index.html.haml
@@ -1,45 +1,46 @@
.outset
- render_region :top do |top|
- top.search do
.search
%h2.label_head Filter
- form_tag admin_people_path, :method => :get do
= label :person, :last_name
= text_field :person, :last_name
= label :person, :first_name
= text_field :person, :first_name
= submit_tag 'Search'
= link_to 'Clear results...', admin_people_path
- if params[:person]
- form_for :merge, :url => merge_admin_people_path() do |f|
%table.index
%thead
%tr
- render_region :people_head do |people_head|
- people_head.name_column_head do
%th Name
- people_head.gender_column_head do
%th Gender
%tbody
- unless @people.blank?
- @people.each do |person|
= render_person(person)
%p= submit_tag 'Merge'
- else
%table.index
%thead
%tr
- render_region :people_head do |people_head|
- people_head.name_column_head do
%th Name
- people_head.gender_column_head do
%th Gender
%tbody
- unless @people.blank?
- @people.each do |person|
- = render_person(person)
+ - @current_person = person # for additions
+ = render 'person', :person => person
%p= will_paginate @people
#actions
%ul
%li= link_to "Add Person", new_admin_person_path
\ No newline at end of file
|
saturnflyer/radiant-people-extension
|
5bd2432cc3cecfbd98ed5ed4e84897012a799ee0
|
adding gemspec
|
diff --git a/radiant-people-extension.gemspec b/radiant-people-extension.gemspec
new file mode 100644
index 0000000..612d59d
--- /dev/null
+++ b/radiant-people-extension.gemspec
@@ -0,0 +1,74 @@
+# Generated by jeweler
+# DO NOT EDIT THIS FILE DIRECTLY
+# Instead, edit Jeweler::Tasks in Rakefile, and run the gemspec command
+# -*- encoding: utf-8 -*-
+
+Gem::Specification.new do |s|
+ s.name = %q{radiant-people-extension}
+ s.version = "1.0.0"
+
+ s.required_rubygems_version = Gem::Requirement.new(">= 0") if s.respond_to? :required_rubygems_version=
+ s.authors = ["Jim Gay"]
+ s.date = %q{2010-01-18}
+ s.description = %q{A generic and extendable way to manage people in Radiant CMS}
+ s.email = %q{[email protected]}
+ s.extra_rdoc_files = [
+ "README"
+ ]
+ s.files = [
+ "README",
+ "Rakefile",
+ "VERSION",
+ "app/controllers/admin/people_controller.rb",
+ "app/helpers/admin/people_helper.rb",
+ "app/models/person.rb",
+ "app/views/admin/people/_form.html.haml",
+ "app/views/admin/people/_person.html.haml",
+ "app/views/admin/people/consolidate.html.haml",
+ "app/views/admin/people/edit.html.haml",
+ "app/views/admin/people/index.html.haml",
+ "app/views/admin/people/new.html.haml",
+ "cucumber.yml",
+ "db/migrate/20090905004948_create_people.rb",
+ "features/support/env.rb",
+ "features/support/paths.rb",
+ "lib/tasks/people_extension_tasks.rake",
+ "people_extension.rb",
+ "spec/controllers/admin/people_controller_spec.rb",
+ "spec/helpers/admin/people_helper_spec.rb",
+ "spec/models/person_spec.rb",
+ "spec/spec.opts",
+ "spec/spec_helper.rb"
+ ]
+ s.homepage = %q{http://github.com/saturnflyer/radiant-people-extension}
+ s.rdoc_options = ["--charset=UTF-8"]
+ s.require_paths = ["lib"]
+ s.rubygems_version = %q{1.3.5}
+ s.summary = %q{Manage People in Radiant CMS}
+ s.test_files = [
+ "spec/controllers/admin/people_controller_spec.rb",
+ "spec/helpers/admin/people_helper_spec.rb",
+ "spec/models/person_spec.rb",
+ "spec/spec_helper.rb"
+ ]
+
+ if s.respond_to? :specification_version then
+ current_version = Gem::Specification::CURRENT_SPECIFICATION_VERSION
+ s.specification_version = 3
+
+ if Gem::Version.new(Gem::RubyGemsVersion) >= Gem::Version.new('1.2.0') then
+ s.add_runtime_dependency(%q<will_paginate>, [">= 0"])
+ s.add_runtime_dependency(%q<searchlogic>, [">= 0"])
+ s.add_runtime_dependency(%q<merger>, [">= 0"])
+ else
+ s.add_dependency(%q<will_paginate>, [">= 0"])
+ s.add_dependency(%q<searchlogic>, [">= 0"])
+ s.add_dependency(%q<merger>, [">= 0"])
+ end
+ else
+ s.add_dependency(%q<will_paginate>, [">= 0"])
+ s.add_dependency(%q<searchlogic>, [">= 0"])
+ s.add_dependency(%q<merger>, [">= 0"])
+ end
+end
+
|
saturnflyer/radiant-people-extension
|
21dd4f264643f9524f41ebf97928e9b8562f46a7
|
get version from file
|
diff --git a/people_extension.rb b/people_extension.rb
index 3196624..09df845 100644
--- a/people_extension.rb
+++ b/people_extension.rb
@@ -1,48 +1,48 @@
require 'ostruct'
class PeopleExtension < Radiant::Extension
- version "1.0"
+ version "#{File.read(File.expand_path(File.dirname(__FILE__)) + '/VERSION')}"
description "Manage people."
url "http://saturnflyer.com/"
extension_config do |config|
config.gem 'will_paginate'
config.gem 'searchlogic'
config.gem 'merger'
end
define_routes do |map|
map.merge_admin_people '/admin/people/merge.:format', :controller => 'admin/people', :action => 'merge', :conditions => {:method => :post}
map.namespace :admin do |admin|
admin.resources :people, :member => { :remove => :get }
end
end
def activate
Radiant::AdminUI.class_eval do
attr_accessor :people
end
admin.people = load_default_people_regions
tab "People" do
add_item 'All People', "/admin/people"
end
end
def deactivate
end
def load_default_people_regions
returning OpenStruct.new do |people|
people.index = Radiant::AdminUI::RegionSet.new do |index|
index.top.concat %w{search}
index.people_head.concat %w{name_column_head gender_column_head}
index.person.concat %w{name_column gender_column}
end
people.new = Radiant::AdminUI::RegionSet.new do |new|
new.person_info.concat %w{}
new.buttons.concat %w{}
end
people.edit = people.new.clone
end
end
end
|
saturnflyer/radiant-people-extension
|
aa0a49f895a34de3ed5c0a474d32e283c5b11db0
|
Version bump to 1.0.0
|
diff --git a/VERSION b/VERSION
index 77d6f4c..3eefcb9 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-0.0.0
+1.0.0
|
saturnflyer/radiant-people-extension
|
c3e5754b8834447f237ecd13b5b9ff8490f84024
|
Version bump to 0.0.0
|
diff --git a/VERSION b/VERSION
new file mode 100644
index 0000000..77d6f4c
--- /dev/null
+++ b/VERSION
@@ -0,0 +1 @@
+0.0.0
|
saturnflyer/radiant-people-extension
|
97010c943243885b28d83ae7f0246c4d3c2c0c4c
|
move to gems
|
diff --git a/Rakefile b/Rakefile
index 8d3b41f..4365828 100644
--- a/Rakefile
+++ b/Rakefile
@@ -1,123 +1,141 @@
+begin
+ require 'jeweler'
+ Jeweler::Tasks.new do |gem|
+ gem.name = "radiant-people-extension"
+ gem.summary = %Q{Manage People in Radiant CMS}
+ gem.description = %Q{A generic and extendable way to manage people in Radiant CMS}
+ gem.email = "[email protected]"
+ gem.homepage = "http://github.com/saturnflyer/radiant-people-extension"
+ gem.authors = ["Jim Gay"]
+ gem.add_dependency 'will_paginate'
+ gem.add_dependency 'searchlogic'
+ gem.add_dependency 'merger'
+ # gem is a Gem::Specification... see http://www.rubygems.org/read/chapter/20 for additional settings
+ end
+rescue LoadError
+ puts "Jeweler (or a dependency) not available. This is only required if you plan to package gemmy as a gem."
+end
+
# I think this is the one that should be moved to the extension Rakefile template
# In rails 1.2, plugins aren't available in the path until they're loaded.
# Check to see if the rspec plugin is installed first and require
# it if it is. If not, use the gem version.
# Determine where the RSpec plugin is by loading the boot
unless defined? RADIANT_ROOT
ENV["RAILS_ENV"] = "test"
case
when ENV["RADIANT_ENV_FILE"]
require File.dirname(ENV["RADIANT_ENV_FILE"]) + "/boot"
when File.dirname(__FILE__) =~ %r{vendor/radiant/vendor/extensions}
require "#{File.expand_path(File.dirname(__FILE__) + "/../../../../../")}/config/boot"
else
require "#{File.expand_path(File.dirname(__FILE__) + "/../../../")}/config/boot"
end
end
require 'rake'
require 'rake/rdoctask'
require 'rake/testtask'
rspec_base = File.expand_path(RADIANT_ROOT + '/vendor/plugins/rspec/lib')
$LOAD_PATH.unshift(rspec_base) if File.exist?(rspec_base)
require 'spec/rake/spectask'
require 'cucumber'
require 'cucumber/rake/task'
# Cleanup the RADIANT_ROOT constant so specs will load the environment
Object.send(:remove_const, :RADIANT_ROOT)
extension_root = File.expand_path(File.dirname(__FILE__))
task :default => :spec
task :stats => "spec:statsetup"
desc "Run all specs in spec directory"
Spec::Rake::SpecTask.new(:spec) do |t|
t.spec_opts = ['--options', "\"#{extension_root}/spec/spec.opts\""]
t.spec_files = FileList['spec/**/*_spec.rb']
end
task :features => 'spec:integration'
namespace :spec do
desc "Run all specs in spec directory with RCov"
Spec::Rake::SpecTask.new(:rcov) do |t|
t.spec_opts = ['--options', "\"#{extension_root}/spec/spec.opts\""]
t.spec_files = FileList['spec/**/*_spec.rb']
t.rcov = true
t.rcov_opts = ['--exclude', 'spec', '--rails']
end
desc "Print Specdoc for all specs"
Spec::Rake::SpecTask.new(:doc) do |t|
t.spec_opts = ["--format", "specdoc", "--dry-run"]
t.spec_files = FileList['spec/**/*_spec.rb']
end
[:models, :controllers, :views, :helpers].each do |sub|
desc "Run the specs under spec/#{sub}"
Spec::Rake::SpecTask.new(sub) do |t|
t.spec_opts = ['--options', "\"#{extension_root}/spec/spec.opts\""]
t.spec_files = FileList["spec/#{sub}/**/*_spec.rb"]
end
end
desc "Run the Cucumber features"
Cucumber::Rake::Task.new(:integration) do |t|
t.fork = true
t.cucumber_opts = ['--format', (ENV['CUCUMBER_FORMAT'] || 'pretty')]
# t.feature_pattern = "#{extension_root}/features/**/*.feature"
t.profile = "default"
end
# Setup specs for stats
task :statsetup do
require 'code_statistics'
::STATS_DIRECTORIES << %w(Model\ specs spec/models)
::STATS_DIRECTORIES << %w(View\ specs spec/views)
::STATS_DIRECTORIES << %w(Controller\ specs spec/controllers)
::STATS_DIRECTORIES << %w(Helper\ specs spec/views)
::CodeStatistics::TEST_TYPES << "Model specs"
::CodeStatistics::TEST_TYPES << "View specs"
::CodeStatistics::TEST_TYPES << "Controller specs"
::CodeStatistics::TEST_TYPES << "Helper specs"
::STATS_DIRECTORIES.delete_if {|a| a[0] =~ /test/}
end
namespace :db do
namespace :fixtures do
desc "Load fixtures (from spec/fixtures) into the current environment's database. Load specific fixtures using FIXTURES=x,y"
task :load => :environment do
require 'active_record/fixtures'
ActiveRecord::Base.establish_connection(RAILS_ENV.to_sym)
(ENV['FIXTURES'] ? ENV['FIXTURES'].split(/,/) : Dir.glob(File.join(RAILS_ROOT, 'spec', 'fixtures', '*.{yml,csv}'))).each do |fixture_file|
Fixtures.create_fixtures('spec/fixtures', File.basename(fixture_file, '.*'))
end
end
end
end
end
desc 'Generate documentation for the people extension.'
Rake::RDocTask.new(:rdoc) do |rdoc|
rdoc.rdoc_dir = 'rdoc'
rdoc.title = 'PeopleExtension'
rdoc.options << '--line-numbers' << '--inline-source'
rdoc.rdoc_files.include('README')
rdoc.rdoc_files.include('lib/**/*.rb')
end
# For extensions that are in transition
desc 'Test the people extension.'
Rake::TestTask.new(:test) do |t|
t.libs << 'lib'
t.pattern = 'test/**/*_test.rb'
t.verbose = true
end
# Load any custom rakefiles for extension
Dir[File.dirname(__FILE__) + '/tasks/*.rake'].sort.each { |f| require f }
\ No newline at end of file
diff --git a/app/models/person.rb b/app/models/person.rb
index 9f7f718..c0be979 100644
--- a/app/models/person.rb
+++ b/app/models/person.rb
@@ -1,15 +1,16 @@
class Person < ActiveRecord::Base
+ include Merger
validates_presence_of :first_name
validates_presence_of :last_name
default_scope :order => 'last_name, first_name, middle_name DESC'
def full_name(*options)
options = options.extract_options!
if options[:last_name_first]
"#{last_name}, #{first_name} #{middle_name}".squeeze(' ').strip
else
"#{first_name} #{middle_name} #{last_name}".squeeze(' ').strip
end
end
end
diff --git a/app/views/admin/people/index.html.haml b/app/views/admin/people/index.html.haml
index 951235a..88cf28d 100644
--- a/app/views/admin/people/index.html.haml
+++ b/app/views/admin/people/index.html.haml
@@ -1,43 +1,45 @@
-- render_region :top do |top|
- - top.search do
- .search
- %h2.label_head Filter
- - form_tag admin_people_path, :method => :get do
- = label :person, :last_name
- = text_field :person, :last_name
- = label :person, :first_name
- = text_field :person, :first_name
- = submit_tag 'Search'
- = link_to 'Clear results...', admin_people_path
-%h1 People
-- if params[:person]
- - form_for :merge, :url => merge_admin_people_path() do |f|
+.outset
+ - render_region :top do |top|
+ - top.search do
+ .search
+ %h2.label_head Filter
+ - form_tag admin_people_path, :method => :get do
+ = label :person, :last_name
+ = text_field :person, :last_name
+ = label :person, :first_name
+ = text_field :person, :first_name
+ = submit_tag 'Search'
+ = link_to 'Clear results...', admin_people_path
+ - if params[:person]
+ - form_for :merge, :url => merge_admin_people_path() do |f|
+ %table.index
+ %thead
+ %tr
+ - render_region :people_head do |people_head|
+ - people_head.name_column_head do
+ %th Name
+ - people_head.gender_column_head do
+ %th Gender
+ %tbody
+ - unless @people.blank?
+ - @people.each do |person|
+ = render_person(person)
+ %p= submit_tag 'Merge'
+
+ - else
%table.index
%thead
%tr
- render_region :people_head do |people_head|
- people_head.name_column_head do
%th Name
- people_head.gender_column_head do
%th Gender
%tbody
- unless @people.blank?
- @people.each do |person|
= render_person(person)
- %p= submit_tag 'Merge'
-
-- else
- %table.index
- %thead
- %tr
- - render_region :people_head do |people_head|
- - people_head.name_column_head do
- %th Name
- - people_head.gender_column_head do
- %th Gender
- %tbody
- - unless @people.blank?
- - @people.each do |person|
- = render_person(person)
- %p= link_to "Add Person", new_admin_person_path
-%p= will_paginate @people
\ No newline at end of file
+ %p= will_paginate @people
+ #actions
+ %ul
+ %li= link_to "Add Person", new_admin_person_path
\ No newline at end of file
diff --git a/people_extension.rb b/people_extension.rb
index fdafa87..3196624 100644
--- a/people_extension.rb
+++ b/people_extension.rb
@@ -1,45 +1,48 @@
require 'ostruct'
class PeopleExtension < Radiant::Extension
version "1.0"
description "Manage people."
url "http://saturnflyer.com/"
extension_config do |config|
config.gem 'will_paginate'
config.gem 'searchlogic'
+ config.gem 'merger'
end
define_routes do |map|
map.merge_admin_people '/admin/people/merge.:format', :controller => 'admin/people', :action => 'merge', :conditions => {:method => :post}
map.namespace :admin do |admin|
admin.resources :people, :member => { :remove => :get }
end
end
def activate
Radiant::AdminUI.class_eval do
attr_accessor :people
end
admin.people = load_default_people_regions
- admin.tabs.add "People", "/admin/people", :after => "Layouts", :visibility => [:all]
+ tab "People" do
+ add_item 'All People', "/admin/people"
+ end
end
def deactivate
end
def load_default_people_regions
returning OpenStruct.new do |people|
people.index = Radiant::AdminUI::RegionSet.new do |index|
index.top.concat %w{search}
index.people_head.concat %w{name_column_head gender_column_head}
index.person.concat %w{name_column gender_column}
end
people.new = Radiant::AdminUI::RegionSet.new do |new|
new.person_info.concat %w{}
new.buttons.concat %w{}
end
people.edit = people.new.clone
end
end
end
diff --git a/vendor/plugins/merger b/vendor/plugins/merger
deleted file mode 160000
index 079099b..0000000
--- a/vendor/plugins/merger
+++ /dev/null
@@ -1 +0,0 @@
-Subproject commit 079099b5e134ad40b00eed58b780923081560155
|
saturnflyer/radiant-people-extension
|
4f22271dc61fb43ce811d297055037c4ccd31691
|
some updates
|
diff --git a/app/controllers/admin/people_controller.rb b/app/controllers/admin/people_controller.rb
index 1e6f071..4e1d0c2 100644
--- a/app/controllers/admin/people_controller.rb
+++ b/app/controllers/admin/people_controller.rb
@@ -1,11 +1,41 @@
class Admin::PeopleController < Admin::ResourceController
before_filter :load_stylesheets
+ before_filter :add_styles
+
+ def index
+ if params[:person] #search
+ @people = Person.search(params[:person]).paginate(:page => params[:page], :per_page => 50)
+ else
+ @people = Person.all.paginate(:page => params[:page], :per_page => 50)
+ end
+ end
+
+ def merge
+ people_ids = params[:merge][:person].collect{|p| p[0].to_i }.to_a
+ @people = Person.find_all_by_id(people_ids, :order => :id)
+ @person = @people.first
+ @person.merge!(@people)
+ flash[:notice] = "The people you selected have been merged into #{@person.full_name}."
+ redirect_to edit_admin_person_path(@person)
+ end
def announce_saved
flash[:notice] = "#{@person.full_name} saved below."
end
def load_stylesheets
- include_stylesheet 'admin/people'
+ # include_stylesheet 'admin/people'
+ end
+ def add_styles
+ @content_for_page_css ||= ''
+ @content_for_page_css << %{
+.search { background: #eae3c5; border: 1px solid #fff; padding: 10px}
+.label_head { float: left; padding: 0; margin: 0 10px 0 0; }
+.form-area { overflow: hidden;}
+.form-area td { vertical-align: top;}
+h2 { color: #b7b092; margin: 1em 0 0; border-bottom: 2px solid #eae3c5; }
+.personExtras { clear: both;}
+.recordPart { float: left;}
+.recordPart label, .recordPart input { display: block; }}
end
end
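The id-gathering step in the new `merge` action above (`params[:merge][:person].collect{|p| p[0].to_i }`) takes the checked person ids out of the form params hash and loads the matching people ordered by id. A rough Python sketch of that step, with `collect_merge_ids` and the sample params hash purely illustrative:

```python
def collect_merge_ids(merge_person_params):
    # merge_person_params mirrors params[:merge][:person] from the index
    # form: a mapping of person-id strings to checkbox values, e.g.
    # {"3": "1", "7": "1"}. The controller keeps the keys and coerces
    # them to integers; the ascending order here stands in for the
    # :order => :id on the finder.
    return sorted(int(person_id) for person_id in merge_person_params)

print(collect_merge_ids({"7": "1", "3": "1"}))  # [3, 7]
```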
diff --git a/app/models/person.rb b/app/models/person.rb
index 2f13f22..9f7f718 100644
--- a/app/models/person.rb
+++ b/app/models/person.rb
@@ -1,15 +1,15 @@
class Person < ActiveRecord::Base
validates_presence_of :first_name
validates_presence_of :last_name
default_scope :order => 'last_name, first_name, middle_name DESC'
def full_name(*options)
options = options.extract_options!
if options[:last_name_first]
- "#{last_name}, #{first_name} #{middle_name}"
+ "#{last_name}, #{first_name} #{middle_name}".squeeze(' ').strip
else
- "#{first_name} #{middle_name} #{last_name}"
+ "#{first_name} #{middle_name} #{last_name}".squeeze(' ').strip
end
end
end
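The `squeeze(' ').strip` fix to `Person#full_name` above collapses the double space left behind when the middle name is blank. An equivalent Python sketch (the function and its signature are illustrative, not part of the extension):

```python
def full_name(first, middle, last, last_name_first=False):
    # Build the raw name string the same way the model does, then
    # collapse runs of whitespace and trim the ends -- the Python
    # analogue of Ruby's squeeze(' ').strip.
    if last_name_first:
        raw = "%s, %s %s" % (last, first, middle)
    else:
        raw = "%s %s %s" % (first, middle, last)
    return " ".join(raw.split())

print(full_name("Jim", "", "Gay"))                         # Jim Gay
print(full_name("Jim", "M", "Gay", last_name_first=True))  # Gay, Jim M
```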
diff --git a/app/views/admin/people/_form.html.haml b/app/views/admin/people/_form.html.haml
new file mode 100644
index 0000000..475cb5b
--- /dev/null
+++ b/app/views/admin/people/_form.html.haml
@@ -0,0 +1,15 @@
+.recordPart
+ = f.label :first_name
+ = f.text_field :first_name
+ = f.error_message_on :first_name
+.recordPart
+ = f.label :middle_name
+ = f.text_field :middle_name
+ = f.error_message_on :middle_name
+.recordPart
+ = f.label :last_name
+ = f.text_field :last_name
+ = f.error_message_on :last_name
+.recordPart
+ = f.label :gender
+ = f.select :gender, [['male','male'],['female','female']], :include_blank => true
\ No newline at end of file
diff --git a/app/views/admin/people/_person.html.haml b/app/views/admin/people/_person.html.haml
index 042d7e7..d4bd8bd 100644
--- a/app/views/admin/people/_person.html.haml
+++ b/app/views/admin/people/_person.html.haml
@@ -1,6 +1,9 @@
%tr
- render_region :person do |p|
- p.name_column do
- %td= link_to(person.full_name(:last_name_first => true), edit_admin_person_path(person))
+ %td
+ - if params[:person]
+ = check_box_tag "merge[person][#{person.id}]", 1, true
+ = link_to(person.full_name(:last_name_first => true), edit_admin_person_path(person))
- p.gender_column do
%td= person.gender
\ No newline at end of file
diff --git a/app/views/admin/people/consolidate.html.haml b/app/views/admin/people/consolidate.html.haml
new file mode 100644
index 0000000..f1d41f2
--- /dev/null
+++ b/app/views/admin/people/consolidate.html.haml
@@ -0,0 +1 @@
+%h1 Consolidate
\ No newline at end of file
diff --git a/app/views/admin/people/edit.html.haml b/app/views/admin/people/edit.html.haml
index 98ce4e7..11ce444 100644
--- a/app/views/admin/people/edit.html.haml
+++ b/app/views/admin/people/edit.html.haml
@@ -1,27 +1,12 @@
-- include_stylesheet 'admin/people'
%h1= @person.full_name
- form_for ['admin',@person], :method => :put do |f|
.form-area
- @person_form = f
- render_region :person_info do |person_info|
- .recordPart
- = f.label :first_name
- = f.text_field :first_name
- = f.error_message_on :first_name
- .recordPart
- = f.label :middle_name
- = f.text_field :middle_name
- = f.error_message_on :middle_name
- .recordPart
- = f.label :last_name
- = f.text_field :last_name
- = f.error_message_on :last_name
- .recordPart
- = f.label :gender
- = f.select :gender, [['male','male'],['female','female']], :include_blank => true
+ = render 'form', :f => f
%p.buttons
- render_region :buttons do |buttons|
= save_model_button(@person)
= save_model_and_continue_editing_button(@person)
or
= link_to 'Cancel', admin_people_path
\ No newline at end of file
diff --git a/app/views/admin/people/index.html.haml b/app/views/admin/people/index.html.haml
index 44623b5..951235a 100644
--- a/app/views/admin/people/index.html.haml
+++ b/app/views/admin/people/index.html.haml
@@ -1,15 +1,43 @@
-- render_region :top
+- render_region :top do |top|
+ - top.search do
+ .search
+ %h2.label_head Filter
+ - form_tag admin_people_path, :method => :get do
+ = label :person, :last_name
+ = text_field :person, :last_name
+ = label :person, :first_name
+ = text_field :person, :first_name
+ = submit_tag 'Search'
+ = link_to 'Clear results...', admin_people_path
%h1 People
-%table.index
- %thead
- %tr
- - render_region :people_head do |people_head|
- - people_head.name_column_head do
- %th Name
- - people_head.gender_column_head do
- %th Gender
- %tbody
- - unless @people.blank?
- - @people.each do |person|
- = render_person(person)
-%p= link_to "Add Person", new_admin_person_path
\ No newline at end of file
+- if params[:person]
+ - form_for :merge, :url => merge_admin_people_path() do |f|
+ %table.index
+ %thead
+ %tr
+ - render_region :people_head do |people_head|
+ - people_head.name_column_head do
+ %th Name
+ - people_head.gender_column_head do
+ %th Gender
+ %tbody
+ - unless @people.blank?
+ - @people.each do |person|
+ = render_person(person)
+ %p= submit_tag 'Merge'
+
+- else
+ %table.index
+ %thead
+ %tr
+ - render_region :people_head do |people_head|
+ - people_head.name_column_head do
+ %th Name
+ - people_head.gender_column_head do
+ %th Gender
+ %tbody
+ - unless @people.blank?
+ - @people.each do |person|
+ = render_person(person)
+ %p= link_to "Add Person", new_admin_person_path
+%p= will_paginate @people
\ No newline at end of file
diff --git a/app/views/admin/people/new.html.haml b/app/views/admin/people/new.html.haml
index b46967b..05fdfd1 100644
--- a/app/views/admin/people/new.html.haml
+++ b/app/views/admin/people/new.html.haml
@@ -1,27 +1,13 @@
- include_stylesheet 'admin/people'
%h1 New Person
- form_for ['admin',@person] do |f|
.form-area
- @person_form = f
- render_region :person_info do |person_info|
- .recordPart
- = f.label :first_name
- = f.text_field :first_name
- = f.error_message_on :first_name
- .recordPart
- = f.label :middle_name
- = f.text_field :middle_name
- = f.error_message_on :middle_name
- .recordPart
- = f.label :last_name
- = f.text_field :last_name
- = f.error_message_on :last_name
- .recordPart
- = f.label :gender
- = f.select :gender, [['male','male'],['female','female']], :include_blank => true
+ = render 'form', :f => f
%p.buttons
- render_region :buttons do |buttons|
= save_model_button(@person)
= save_model_and_continue_editing_button(@person)
or
= link_to 'Cancel', admin_people_path
\ No newline at end of file
diff --git a/people_extension.rb b/people_extension.rb
index 6ee01b8..fdafa87 100644
--- a/people_extension.rb
+++ b/people_extension.rb
@@ -1,43 +1,45 @@
require 'ostruct'
class PeopleExtension < Radiant::Extension
version "1.0"
description "Manage people."
url "http://saturnflyer.com/"
extension_config do |config|
config.gem 'will_paginate'
+ config.gem 'searchlogic'
end
define_routes do |map|
- map.namespace :admin, :member => { :remove => :get } do |admin|
- admin.resources :people
+ map.merge_admin_people '/admin/people/merge.:format', :controller => 'admin/people', :action => 'merge', :conditions => {:method => :post}
+ map.namespace :admin do |admin|
+ admin.resources :people, :member => { :remove => :get }
end
end
def activate
Radiant::AdminUI.class_eval do
attr_accessor :people
end
admin.people = load_default_people_regions
admin.tabs.add "People", "/admin/people", :after => "Layouts", :visibility => [:all]
end
def deactivate
end
def load_default_people_regions
returning OpenStruct.new do |people|
people.index = Radiant::AdminUI::RegionSet.new do |index|
- index.top.concat %w{}
+ index.top.concat %w{search}
index.people_head.concat %w{name_column_head gender_column_head}
index.person.concat %w{name_column gender_column}
end
people.new = Radiant::AdminUI::RegionSet.new do |new|
new.person_info.concat %w{}
new.buttons.concat %w{}
end
people.edit = people.new.clone
end
end
end
diff --git a/public/stylesheets/admin/people.css b/public/stylesheets/admin/people.css
deleted file mode 100644
index 0f7453d..0000000
--- a/public/stylesheets/admin/people.css
+++ /dev/null
@@ -1,4 +0,0 @@
-.form-area { overflow: hidden;}
-.personExtras { clear: both;}
-.recordPart { float: left;}
-.recordPart label, .recordPart input { display: block; }
\ No newline at end of file
diff --git a/spec/controllers/admin/people_controller_spec.rb b/spec/controllers/admin/people_controller_spec.rb
index 062dc07..728b271 100644
--- a/spec/controllers/admin/people_controller_spec.rb
+++ b/spec/controllers/admin/people_controller_spec.rb
@@ -1,10 +1,14 @@
require File.dirname(__FILE__) + '/../../spec_helper'
describe Admin::PeopleController do
- #Delete this example and add some real ones
- it "should use Admin::PeopleController" do
- controller.should be_an_instance_of(Admin::PeopleController)
+ describe 'routing' do
+ it "should map the consolidate path" do
+ route_for(:controller => "admin/people", :action => "consolidate").should == "/admin/people/consolidate"
+ end
+ it "should map the path to the action" do
+ params_from(:get, "/admin/people/consolidate").should == {:controller => "admin/people", :action => "consolidate"}
+ end
end
end
diff --git a/spec/models/person_spec.rb b/spec/models/person_spec.rb
index d15c8d7..47dd499 100644
--- a/spec/models/person_spec.rb
+++ b/spec/models/person_spec.rb
@@ -1,11 +1,24 @@
require File.dirname(__FILE__) + '/../spec_helper'
describe Person do
before(:each) do
- @person = Person.new
+ @person = Person.new(:first_name => 'Jim', :last_name => 'Gay', :middle_name => 'M')
end
- it "should be valid" do
- @person.should be_valid
+ describe "full_name" do
+ it "should return the concatenated first, middle, and last names" do
+ @person.full_name.should == 'Jim M Gay'
+ end
+ it "should return the concatenated names stripped of extra whitespace" do
+ @person.first_name = ' Jim'
+ @person.last_name = 'Gay '
+ @person.middle_name = ''
+ @person.full_name.should == 'Jim Gay'
+ end
+ context 'with :last_name_first' do
+ it "should return the concatenated last, first, and middle names" do
+ @person.full_name(:last_name_first => true).should == 'Gay, Jim M'
+ end
+ end
end
end
diff --git a/vendor/plugins/merger b/vendor/plugins/merger
new file mode 160000
index 0000000..079099b
--- /dev/null
+++ b/vendor/plugins/merger
@@ -0,0 +1 @@
+Subproject commit 079099b5e134ad40b00eed58b780923081560155
|
saturnflyer/radiant-people-extension
|
1d5b40145203d82670116d9495dca1b33b78ff86
|
initial load
|
diff --git a/README b/README
new file mode 100644
index 0000000..6171a20
--- /dev/null
+++ b/README
@@ -0,0 +1,3 @@
+= People
+
+Description goes here
\ No newline at end of file
diff --git a/Rakefile b/Rakefile
new file mode 100644
index 0000000..8d3b41f
--- /dev/null
+++ b/Rakefile
@@ -0,0 +1,123 @@
+# I think this is the one that should be moved to the extension Rakefile template
+
+# In rails 1.2, plugins aren't available in the path until they're loaded.
+# Check to see if the rspec plugin is installed first and require
+# it if it is. If not, use the gem version.
+
+# Determine where the RSpec plugin is by loading the boot
+unless defined? RADIANT_ROOT
+ ENV["RAILS_ENV"] = "test"
+ case
+ when ENV["RADIANT_ENV_FILE"]
+ require File.dirname(ENV["RADIANT_ENV_FILE"]) + "/boot"
+ when File.dirname(__FILE__) =~ %r{vendor/radiant/vendor/extensions}
+ require "#{File.expand_path(File.dirname(__FILE__) + "/../../../../../")}/config/boot"
+ else
+ require "#{File.expand_path(File.dirname(__FILE__) + "/../../../")}/config/boot"
+ end
+end
+
+require 'rake'
+require 'rake/rdoctask'
+require 'rake/testtask'
+
+rspec_base = File.expand_path(RADIANT_ROOT + '/vendor/plugins/rspec/lib')
+$LOAD_PATH.unshift(rspec_base) if File.exist?(rspec_base)
+require 'spec/rake/spectask'
+require 'cucumber'
+require 'cucumber/rake/task'
+
+# Cleanup the RADIANT_ROOT constant so specs will load the environment
+Object.send(:remove_const, :RADIANT_ROOT)
+
+extension_root = File.expand_path(File.dirname(__FILE__))
+
+task :default => :spec
+task :stats => "spec:statsetup"
+
+desc "Run all specs in spec directory"
+Spec::Rake::SpecTask.new(:spec) do |t|
+ t.spec_opts = ['--options', "\"#{extension_root}/spec/spec.opts\""]
+ t.spec_files = FileList['spec/**/*_spec.rb']
+end
+
+task :features => 'spec:integration'
+
+namespace :spec do
+ desc "Run all specs in spec directory with RCov"
+ Spec::Rake::SpecTask.new(:rcov) do |t|
+ t.spec_opts = ['--options', "\"#{extension_root}/spec/spec.opts\""]
+ t.spec_files = FileList['spec/**/*_spec.rb']
+ t.rcov = true
+ t.rcov_opts = ['--exclude', 'spec', '--rails']
+ end
+
+ desc "Print Specdoc for all specs"
+ Spec::Rake::SpecTask.new(:doc) do |t|
+ t.spec_opts = ["--format", "specdoc", "--dry-run"]
+ t.spec_files = FileList['spec/**/*_spec.rb']
+ end
+
+ [:models, :controllers, :views, :helpers].each do |sub|
+ desc "Run the specs under spec/#{sub}"
+ Spec::Rake::SpecTask.new(sub) do |t|
+ t.spec_opts = ['--options', "\"#{extension_root}/spec/spec.opts\""]
+ t.spec_files = FileList["spec/#{sub}/**/*_spec.rb"]
+ end
+ end
+
+ desc "Run the Cucumber features"
+ Cucumber::Rake::Task.new(:integration) do |t|
+ t.fork = true
+ t.cucumber_opts = ['--format', (ENV['CUCUMBER_FORMAT'] || 'pretty')]
+ # t.feature_pattern = "#{extension_root}/features/**/*.feature"
+ t.profile = "default"
+ end
+
+ # Setup specs for stats
+ task :statsetup do
+ require 'code_statistics'
+ ::STATS_DIRECTORIES << %w(Model\ specs spec/models)
+ ::STATS_DIRECTORIES << %w(View\ specs spec/views)
+ ::STATS_DIRECTORIES << %w(Controller\ specs spec/controllers)
+ ::STATS_DIRECTORIES << %w(Helper\ specs spec/views)
+ ::CodeStatistics::TEST_TYPES << "Model specs"
+ ::CodeStatistics::TEST_TYPES << "View specs"
+ ::CodeStatistics::TEST_TYPES << "Controller specs"
+ ::CodeStatistics::TEST_TYPES << "Helper specs"
+ ::STATS_DIRECTORIES.delete_if {|a| a[0] =~ /test/}
+ end
+
+ namespace :db do
+ namespace :fixtures do
+ desc "Load fixtures (from spec/fixtures) into the current environment's database. Load specific fixtures using FIXTURES=x,y"
+ task :load => :environment do
+ require 'active_record/fixtures'
+ ActiveRecord::Base.establish_connection(RAILS_ENV.to_sym)
+ (ENV['FIXTURES'] ? ENV['FIXTURES'].split(/,/) : Dir.glob(File.join(RAILS_ROOT, 'spec', 'fixtures', '*.{yml,csv}'))).each do |fixture_file|
+ Fixtures.create_fixtures('spec/fixtures', File.basename(fixture_file, '.*'))
+ end
+ end
+ end
+ end
+end
+
+desc 'Generate documentation for the people extension.'
+Rake::RDocTask.new(:rdoc) do |rdoc|
+ rdoc.rdoc_dir = 'rdoc'
+ rdoc.title = 'PeopleExtension'
+ rdoc.options << '--line-numbers' << '--inline-source'
+ rdoc.rdoc_files.include('README')
+ rdoc.rdoc_files.include('lib/**/*.rb')
+end
+
+# For extensions that are in transition
+desc 'Test the people extension.'
+Rake::TestTask.new(:test) do |t|
+ t.libs << 'lib'
+ t.pattern = 'test/**/*_test.rb'
+ t.verbose = true
+end
+
+# Load any custom rakefiles for extension
+Dir[File.dirname(__FILE__) + '/tasks/*.rake'].sort.each { |f| require f }
\ No newline at end of file
diff --git a/app/controllers/admin/people_controller.rb b/app/controllers/admin/people_controller.rb
new file mode 100644
index 0000000..1e6f071
--- /dev/null
+++ b/app/controllers/admin/people_controller.rb
@@ -0,0 +1,11 @@
+class Admin::PeopleController < Admin::ResourceController
+ before_filter :load_stylesheets
+
+ def announce_saved
+ flash[:notice] = "#{@person.full_name} saved below."
+ end
+
+ def load_stylesheets
+ include_stylesheet 'admin/people'
+ end
+end
diff --git a/app/helpers/admin/people_helper.rb b/app/helpers/admin/people_helper.rb
new file mode 100644
index 0000000..c238751
--- /dev/null
+++ b/app/helpers/admin/people_helper.rb
@@ -0,0 +1,6 @@
+module Admin::PeopleHelper
+ def render_person(person)
+ @current_person = person
+ render 'person', :person => @current_person
+ end
+end
diff --git a/app/models/person.rb b/app/models/person.rb
new file mode 100644
index 0000000..2f13f22
--- /dev/null
+++ b/app/models/person.rb
@@ -0,0 +1,15 @@
+class Person < ActiveRecord::Base
+ validates_presence_of :first_name
+ validates_presence_of :last_name
+
+ default_scope :order => 'last_name, first_name, middle_name DESC'
+
+ def full_name(*options)
+ options = options.extract_options!
+ if options[:last_name_first]
+ "#{last_name}, #{first_name} #{middle_name}"
+ else
+ "#{first_name} #{middle_name} #{last_name}"
+ end
+ end
+end
diff --git a/app/views/admin/people/_person.html.haml b/app/views/admin/people/_person.html.haml
new file mode 100644
index 0000000..042d7e7
--- /dev/null
+++ b/app/views/admin/people/_person.html.haml
@@ -0,0 +1,6 @@
+%tr
+ - render_region :person do |p|
+ - p.name_column do
+ %td= link_to(person.full_name(:last_name_first => true), edit_admin_person_path(person))
+ - p.gender_column do
+ %td= person.gender
\ No newline at end of file
diff --git a/app/views/admin/people/edit.html.haml b/app/views/admin/people/edit.html.haml
new file mode 100644
index 0000000..98ce4e7
--- /dev/null
+++ b/app/views/admin/people/edit.html.haml
@@ -0,0 +1,27 @@
+- include_stylesheet 'admin/people'
+%h1= @person.full_name
+- form_for ['admin',@person], :method => :put do |f|
+ .form-area
+ - @person_form = f
+ - render_region :person_info do |person_info|
+ .recordPart
+ = f.label :first_name
+ = f.text_field :first_name
+ = f.error_message_on :first_name
+ .recordPart
+ = f.label :middle_name
+ = f.text_field :middle_name
+ = f.error_message_on :middle_name
+ .recordPart
+ = f.label :last_name
+ = f.text_field :last_name
+ = f.error_message_on :last_name
+ .recordPart
+ = f.label :gender
+ = f.select :gender, [['male','male'],['female','female']], :include_blank => true
+ %p.buttons
+ - render_region :buttons do |buttons|
+ = save_model_button(@person)
+ = save_model_and_continue_editing_button(@person)
+ or
+ = link_to 'Cancel', admin_people_path
\ No newline at end of file
diff --git a/app/views/admin/people/index.html.haml b/app/views/admin/people/index.html.haml
new file mode 100644
index 0000000..44623b5
--- /dev/null
+++ b/app/views/admin/people/index.html.haml
@@ -0,0 +1,15 @@
+- render_region :top
+%h1 People
+%table.index
+ %thead
+ %tr
+ - render_region :people_head do |people_head|
+ - people_head.name_column_head do
+ %th Name
+ - people_head.gender_column_head do
+ %th Gender
+ %tbody
+ - unless @people.blank?
+ - @people.each do |person|
+ = render_person(person)
+%p= link_to "Add Person", new_admin_person_path
\ No newline at end of file
diff --git a/app/views/admin/people/new.html.haml b/app/views/admin/people/new.html.haml
new file mode 100644
index 0000000..b46967b
--- /dev/null
+++ b/app/views/admin/people/new.html.haml
@@ -0,0 +1,27 @@
+- include_stylesheet 'admin/people'
+%h1 New Person
+- form_for ['admin',@person] do |f|
+ .form-area
+ - @person_form = f
+ - render_region :person_info do |person_info|
+ .recordPart
+ = f.label :first_name
+ = f.text_field :first_name
+ = f.error_message_on :first_name
+ .recordPart
+ = f.label :middle_name
+ = f.text_field :middle_name
+ = f.error_message_on :middle_name
+ .recordPart
+ = f.label :last_name
+ = f.text_field :last_name
+ = f.error_message_on :last_name
+ .recordPart
+ = f.label :gender
+ = f.select :gender, [['male','male'],['female','female']], :include_blank => true
+ %p.buttons
+ - render_region :buttons do |buttons|
+ = save_model_button(@person)
+ = save_model_and_continue_editing_button(@person)
+ or
+ = link_to 'Cancel', admin_people_path
\ No newline at end of file
diff --git a/cucumber.yml b/cucumber.yml
new file mode 100644
index 0000000..7a03ee6
--- /dev/null
+++ b/cucumber.yml
@@ -0,0 +1 @@
+default: --format progress features --tags ~@proposed,~@in_progress
\ No newline at end of file
diff --git a/db/migrate/20090905004948_create_people.rb b/db/migrate/20090905004948_create_people.rb
new file mode 100644
index 0000000..09e5c1e
--- /dev/null
+++ b/db/migrate/20090905004948_create_people.rb
@@ -0,0 +1,17 @@
+class CreatePeople < ActiveRecord::Migration
+ def self.up
+ create_table :people do |t|
+ t.string :prefix
+ t.string :first_name
+ t.string :middle_name
+ t.string :last_name
+ t.string :gender
+
+ t.timestamps
+ end
+ end
+
+ def self.down
+ drop_table :people
+ end
+end
diff --git a/features/support/env.rb b/features/support/env.rb
new file mode 100644
index 0000000..ed24071
--- /dev/null
+++ b/features/support/env.rb
@@ -0,0 +1,16 @@
+# Sets up the Rails environment for Cucumber
+ENV["RAILS_ENV"] = "test"
+# Extension root
+extension_env = File.expand_path(File.dirname(__FILE__) + '/../../../../../config/environment')
+require extension_env+'.rb'
+
+Dir.glob(File.join(RADIANT_ROOT, "features", "**", "*.rb")).each {|step| require step}
+
+Cucumber::Rails::World.class_eval do
+ include Dataset
+ datasets_directory "#{RADIANT_ROOT}/spec/datasets"
+ Dataset::Resolver.default = Dataset::DirectoryResolver.new("#{RADIANT_ROOT}/spec/datasets", File.dirname(__FILE__) + '/../../spec/datasets', File.dirname(__FILE__) + '/../datasets')
+ self.datasets_database_dump_path = "#{Rails.root}/tmp/dataset"
+
+ # dataset :people
+end
\ No newline at end of file
diff --git a/features/support/paths.rb b/features/support/paths.rb
new file mode 100644
index 0000000..9e67624
--- /dev/null
+++ b/features/support/paths.rb
@@ -0,0 +1,14 @@
+def path_to(page_name)
+ case page_name
+
+ when /the homepage/i
+ root_path
+
+ when /login/i
+ login_path
+ # Add more page name => path mappings here
+
+ else
+ raise "Can't find mapping from \"#{page_name}\" to a path."
+ end
+end
\ No newline at end of file
diff --git a/lib/tasks/people_extension_tasks.rake b/lib/tasks/people_extension_tasks.rake
new file mode 100644
index 0000000..264717d
--- /dev/null
+++ b/lib/tasks/people_extension_tasks.rake
@@ -0,0 +1,28 @@
+namespace :radiant do
+ namespace :extensions do
+ namespace :people do
+
+ desc "Runs the migration of the People extension"
+ task :migrate => :environment do
+ require 'radiant/extension_migrator'
+ if ENV["VERSION"]
+ PeopleExtension.migrator.migrate(ENV["VERSION"].to_i)
+ else
+ PeopleExtension.migrator.migrate
+ end
+ end
+
+ desc "Copies public assets of the People to the instance public/ directory."
+ task :update => :environment do
+ is_svn_or_dir = proc {|path| path =~ /\.svn/ || File.directory?(path) }
+ puts "Copying assets from PeopleExtension"
+ Dir[PeopleExtension.root + "/public/**/*"].reject(&is_svn_or_dir).each do |file|
+ path = file.sub(PeopleExtension.root, '')
+ directory = File.dirname(path)
+ mkdir_p RAILS_ROOT + directory, :verbose => false
+ cp file, RAILS_ROOT + path, :verbose => false
+ end
+ end
+ end
+ end
+end
diff --git a/people_extension.rb b/people_extension.rb
new file mode 100644
index 0000000..6ee01b8
--- /dev/null
+++ b/people_extension.rb
@@ -0,0 +1,43 @@
+require 'ostruct'
+class PeopleExtension < Radiant::Extension
+ version "1.0"
+ description "Manage people."
+ url "http://saturnflyer.com/"
+
+ extension_config do |config|
+ config.gem 'will_paginate'
+ end
+
+ define_routes do |map|
+ map.namespace :admin, :member => { :remove => :get } do |admin|
+ admin.resources :people
+ end
+ end
+
+ def activate
+ Radiant::AdminUI.class_eval do
+ attr_accessor :people
+ end
+ admin.people = load_default_people_regions
+ admin.tabs.add "People", "/admin/people", :after => "Layouts", :visibility => [:all]
+ end
+
+ def deactivate
+ end
+
+ def load_default_people_regions
+ returning OpenStruct.new do |people|
+ people.index = Radiant::AdminUI::RegionSet.new do |index|
+ index.top.concat %w{}
+ index.people_head.concat %w{name_column_head gender_column_head}
+ index.person.concat %w{name_column gender_column}
+ end
+ people.new = Radiant::AdminUI::RegionSet.new do |new|
+ new.person_info.concat %w{}
+ new.buttons.concat %w{}
+ end
+ people.edit = people.new.clone
+ end
+ end
+
+end
diff --git a/public/stylesheets/admin/people.css b/public/stylesheets/admin/people.css
new file mode 100644
index 0000000..0f7453d
--- /dev/null
+++ b/public/stylesheets/admin/people.css
@@ -0,0 +1,4 @@
+.form-area { overflow: hidden;}
+.personExtras { clear: both;}
+.recordPart { float: left;}
+.recordPart label, .recordPart input { display: block; }
\ No newline at end of file
diff --git a/spec/controllers/admin/people_controller_spec.rb b/spec/controllers/admin/people_controller_spec.rb
new file mode 100644
index 0000000..062dc07
--- /dev/null
+++ b/spec/controllers/admin/people_controller_spec.rb
@@ -0,0 +1,10 @@
+require File.dirname(__FILE__) + '/../../spec_helper'
+
+describe Admin::PeopleController do
+
+ #Delete this example and add some real ones
+ it "should use Admin::PeopleController" do
+ controller.should be_an_instance_of(Admin::PeopleController)
+ end
+
+end
diff --git a/spec/helpers/admin/people_helper_spec.rb b/spec/helpers/admin/people_helper_spec.rb
new file mode 100644
index 0000000..57937b8
--- /dev/null
+++ b/spec/helpers/admin/people_helper_spec.rb
@@ -0,0 +1,11 @@
+require File.dirname(__FILE__) + '/../../spec_helper'
+
+describe Admin::PeopleHelper do
+
+ #Delete this example and add some real ones or delete this file
+ it "should include the Admin::PeopleHelper" do
+ included_modules = self.metaclass.send :included_modules
+ included_modules.should include(Admin::PeopleHelper)
+ end
+
+end
diff --git a/spec/models/person_spec.rb b/spec/models/person_spec.rb
new file mode 100644
index 0000000..d15c8d7
--- /dev/null
+++ b/spec/models/person_spec.rb
@@ -0,0 +1,11 @@
+require File.dirname(__FILE__) + '/../spec_helper'
+
+describe Person do
+ before(:each) do
+ @person = Person.new
+ end
+
+ it "should be valid" do
+ @person.should be_valid
+ end
+end
diff --git a/spec/spec.opts b/spec/spec.opts
new file mode 100644
index 0000000..d8c8db5
--- /dev/null
+++ b/spec/spec.opts
@@ -0,0 +1,6 @@
+--colour
+--format
+progress
+--loadby
+mtime
+--reverse
diff --git a/spec/spec_helper.rb b/spec/spec_helper.rb
new file mode 100644
index 0000000..0adbf9f
--- /dev/null
+++ b/spec/spec_helper.rb
@@ -0,0 +1,36 @@
+unless defined? RADIANT_ROOT
+ ENV["RAILS_ENV"] = "test"
+ case
+ when ENV["RADIANT_ENV_FILE"]
+ require ENV["RADIANT_ENV_FILE"]
+ when File.dirname(__FILE__) =~ %r{vendor/radiant/vendor/extensions}
+ require "#{File.expand_path(File.dirname(__FILE__) + "/../../../../../../")}/config/environment"
+ else
+ require "#{File.expand_path(File.dirname(__FILE__) + "/../../../../")}/config/environment"
+ end
+end
+require "#{RADIANT_ROOT}/spec/spec_helper"
+
+Dataset::Resolver.default << (File.dirname(__FILE__) + "/datasets")
+
+if File.directory?(File.dirname(__FILE__) + "/matchers")
+ Dir[File.dirname(__FILE__) + "/matchers/*.rb"].each {|file| require file }
+end
+
+Spec::Runner.configure do |config|
+ # config.use_transactional_fixtures = true
+ # config.use_instantiated_fixtures = false
+ # config.fixture_path = RAILS_ROOT + '/spec/fixtures'
+
+ # You can declare fixtures for each behaviour like this:
+ # describe "...." do
+ # fixtures :table_a, :table_b
+ #
+ # Alternatively, if you prefer to declare them only once, you can
+ # do so here, like so ...
+ #
+ # config.global_fixtures = :table_a, :table_b
+ #
+ # If you declare global fixtures, be aware that they will be declared
+ # for all of your examples, even those that don't use them.
+end
\ No newline at end of file
|
superisaac/rainy
|
c653315f86e89ab3bb3572b2e5ea7096efc8c900
|
1. Use an integer as an index to get a revision. 2. Added a revision method for OCM, i.e. user.rev()
|
diff --git a/TODO b/TODO
index f965fce..0c5bd63 100644
--- a/TODO
+++ b/TODO
@@ -1,6 +1,7 @@
* Find timeline by time range(page)
* Messages
* Revision functions for model
* Index support, one model several db
* Simple frontend, using django, php, ...
* Support more mimetype, images, video, voice, ...
+ * Protocol buffer support
\ No newline at end of file
diff --git a/lib/couchkit/dbwrapper.py b/lib/couchkit/dbwrapper.py
index b048881..2e701b1 100644
--- a/lib/couchkit/dbwrapper.py
+++ b/lib/couchkit/dbwrapper.py
@@ -1,245 +1,248 @@
import httpc as http
from urllib import quote, urlencode
import simplejson
import settings
class ServerError(Exception):
def __init__(self, status, msg):
self.status = status
self.msg = msg
def __str__(self):
return "Server Error: %s %s" % (self.status, self.msg)
class ServerConnectionError(ServerError):
def __init__(self, cerror):
response = cerror.params.response
super(ServerConnectionError, self).__init__(response.status,
response.reason + '|')
class Server(object):
servers = {}
def __new__(cls, server_url=settings.COUCHDB_SERVER):
if server_url not in cls.servers:
obj = object.__new__(cls)
obj.initialize(server_url)
cls.servers[server_url] = obj
return cls.servers[server_url]
def initialize(self, server_url):
self.server_url = server_url
if self.server_url[-1:] == '/':
self.server_url = self.server_url[:-1]
self.opened_dbs = {}
def handle_response(self, status, msg, body):
if status >= 200 and status < 300:
return simplejson.loads(body)
else:
raise ServerError(status, msg)
def dumps(self, obj):
if obj is None:
return ''
else:
return simplejson.dumps(obj)
def get(self, url='/', **params):
param_str = urlencode(params)
url = quote(url)
if param_str:
url += '?' + param_str
try:
t = http.get_(self.server_url + url,
headers={'Accept':'Application/json'},
)
except http.ConnectionError, e:
raise ServerConnectionError(e)
obj = self.handle_response(*t)
if isinstance(obj, dict):
obj = dict((k.encode(settings.ENCODING), v) for k, v in obj.iteritems())
return obj
def delete(self, url='/', **params):
param_str = urlencode(params)
url = quote(url)
if param_str:
url += '?' + param_str
try:
t = http.delete_(self.server_url + url)
except http.ConnectionError, e:
raise ServerConnectionError(e)
return self.handle_response(*t)
def post(self, url, obj):
url = quote(url)
data = self.dumps(obj)
try:
t = http.post_(self.server_url + url,
data=data,
headers={'content-type': 'application/json',
'accept':'application/json'
},
)
except http.ConnectionError, e:
raise ServerConnectionError(e)
return self.handle_response(*t)
def put(self, url, obj):
url = quote(url)
data = self.dumps(obj)
try:
t = http.put_(self.server_url + url,
data=data,
headers={'content-type': 'application/json',
'accept':'Application/json'},
)
except http.ConnectionError, e:
raise ServerConnectionError(e)
return self.handle_response(*t)
def __getitem__(self, dbname):
if dbname in self.opened_dbs:
return self.opened_dbs[dbname]
dbs = self.dbs()
if dbname in dbs:
db = Database(self, dbname)
self.opened_dbs[dbname] = db
return db
else:
#raise KeyError(dbname)
return None
def __delitem__(self, dbname):
if dbname in self.opened_dbs:
del self.opened_dbs[dbname]
return self.delete('/%s/' % dbname)
def dbs(self):
return self.get('/_all_dbs')
def create_db(self, dbname):
return self.put('/%s/' % dbname, None)
class Database:
def __init__(self, server, dbname):
self.server = server
self.dbname = dbname
self._cache = {}
self.enable_cache = False
def del_cache(self, docid):
if self.enable_cache:
if docid in self._cache:
del self._cache[docid]
def get_cache(self, docid):
if self.enable_cache:
if docid in self._cache:
return self._cache[docid]
def set_cache(self, docid, obj):
if self.enable_cache:
self._cache[docid] = obj
def clean_cache(self):
self._cache = {}
def info(self):
return self.server.get('/%s/' % self.dbname)
def docs(self):
return self.server.get('/%s/_all_docs' % self.dbname)
def get(self, docid, rev=None):
+ if isinstance(rev, (int, long)):
+ revs = self.revs(docid)
+ rev = revs[rev] # look up the revision id at this integer index
params = rev and dict(rev=rev) or {}
obj = self.server.get('/%s/%s' % (self.dbname, docid), **params)
self.set_cache(docid, obj)
return obj
def revs(self, docid):
obj = self.server.get('/%s/%s' % (self.dbname, docid), revs='true')
return obj['_revs']
def fetch(self, docid, absent=None):
try:
obj = self.server.get('/%s/%s/' % (self.dbname, docid))
except ServerError:
return absent
self.set_cache(docid, obj)
return obj
def __getitem__(self, docid):
obj = self.get_cache(docid)
return obj or self.get(docid)
def __setitem__(self, docid, obj):
try:
#self.server.put('/%s/%s' % (self.dbname, docid), obj)
self.set(docid, obj)
except ServerError:
doc = self.get(docid)
rev = doc['_rev']
obj['_rev'] = rev
self.server.put('/%s/%s' % (self.dbname, docid), obj)
self.del_cache(docid)
def set(self, docid, obj):
self.server.put('/%s/%s' % (self.dbname, docid), obj)
def create_doc(self, obj):
""" Create a new document with the id and rev generated by server
:returns
{"ok":true, "id":"123BAC", "rev":"946B7D1C"}
"""
return self.server.post('/%s/' % self.dbname, obj)
def __delitem__(self, docid):
doc = self.get(docid)
rev = doc['_rev']
self.del_cache(docid)
return self.server.delete('/%s/%s' % (self.dbname, docid), rev=rev)
def query(self, map_func, reduce_func=""):
"query temporary view"
view = View(map_func, reduce_func)
return self.server.post('/%s/_temp_view' % (self.dbname), view.query_dict())
def query_view(self, design_name, view_name):
return self.server.get('/%s/_view/%s' % (self.dbname,
design_name
))
def create_or_replace_design(self, name, design):
self[name] = design.query_dict(name)
class View:
def __init__(self, map_func='', reduce_func=''):
self.map_func = map_func
self.reduce_func = reduce_func
def query_dict(self):
query = {}
if self.map_func:
query['map'] = self.map_func
if self.reduce_func:
query['reduce'] = self.reduce_func
return query
class Design:
def __init__(self, **views):
self.views = {}
self.views.update(views)
def query_dict(self, name):
query = {
'_id': '_design/%s' % name,
'language': 'javascript',
'views': {},
}
for name, view in self.views.iteritems():
query['views'][name] = view.query_dict()
return query
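The `get`/`delete` pair above assembles request URLs by quoting the path and appending an urlencoded query string. A minimal sketch of that pattern (the `build_url` helper and example server URL are hypothetical, not part of couchkit; Python 3 module paths differ from the Python 2 `urllib` used here):

```python
from urllib.parse import quote, urlencode

def build_url(server_url, url, **params):
    # Quote the path, then append any query parameters,
    # mirroring how Server.get/delete assemble request URLs.
    param_str = urlencode(params)
    url = quote(url)
    if param_str:
        url += '?' + param_str
    return server_url + url

# e.g. build_url('http://localhost:5984', '/mydb/doc id', rev='946B7D1C')
```

Note that `quote` escapes spaces and other unsafe path characters but leaves slashes alone, so database/document paths pass through unchanged.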
diff --git a/lib/couchkit/ocm.py b/lib/couchkit/ocm.py
index d110520..8556bd9 100644
--- a/lib/couchkit/ocm.py
+++ b/lib/couchkit/ocm.py
@@ -1,251 +1,254 @@
# ocm.py - Object-Couchdb mapping
# Author: Zeng Ke
import time
from dbwrapper import Server, ServerError
from urllib import quote
class Index(object):
""" Couchdb indexes
It is indeed a new database whose key is the indexed value
and value is a list of ids of master objects.
"""
def __init__(self, model, field):
self.db_name = 'idx--%s--%s' % (model.db_name, field.fieldname)
server = Server()
if not server[self.db_name]:
server.create_db(self.db_name)
def db(self):
server = Server()
return server[self.db_name]
def get_ids(self, v):
model_ids = self.db()["%s" % v]['store_ids']
return model_ids
def set(self, v, model_id):
db = self.db()
obj = db.fetch(v)
if obj:
# Already exists
model_ids = set(obj['store_ids'])
model_ids.add(model_id)
db[v] = {'store_ids': list(model_ids)}
else:
db[v] = {'store_ids': [model_id], '_id': quote(v)}
def delete(self, v, model_id):
model_ids = set(self.get_ids(v))
if model_id in model_ids:
model_ids.remove(model_id)
self.db()[v] = {'store_ids': list(model_ids)}
def change(self, old_v, new_v, model_id):
if old_v:
self.delete(old_v, model_id)
if new_v:
self.set(new_v, model_id)
class Field(object):
""" Field that defines the schema of a DB
Much like the field of relation db ORMs
A proxy of a object's attribute
"""
def __init__(self, null=True):
self.null = null # Seems null is not used currently
self._fieldname = None
self.index = None
self.enable_index = False
def _get_fieldname(self):
return self._fieldname
def _set_fieldname(self, v):
self._fieldname = v
fieldname = property(_get_fieldname, _set_fieldname)
def probe_index(self, model):
if self.enable_index:
self.index = Index(model, self)
def default_value(self):
return None
@classmethod
def get_by_index(cls, **kw):
pass
def __get__(self, obj, type=None):
return getattr(obj, '_proxied_%s' % self.fieldname)
def __set__(self, obj, value):
obj.tainted = True
if self.index:
old_value = getattr(obj, '_proxied_%s' % self.fieldname, None)
obj.changed_indices.append((self.index, old_value, value))
setattr(obj, '_proxied_%s' % self.fieldname, value)
def __del__(self, obj):
pass
class ScalaField(Field):
""" Elementry data type: String and number
"""
def __init__(self, default=None, null=True, index=False):
super(ScalaField, self).__init__(null)
self.default = default
self.enable_index = index
def default_value(self):
return self.default
class ListField(Field):
""" This field is an array
"""
def default_value(self):
return []
class DateTimeField(Field):
""" Datetime format
"""
def default_value(self):
return time.time()
class ModelMeta(type):
""" The meta class of Model
Do some registering of Model classes
"""
def __new__(meta, clsname, bases, classdict):
cls = type.__new__(meta, clsname, bases, classdict)
if clsname == 'Model':
return cls
cls.initialize()
return cls
class Model(object):
""" The model of couchdb
A model defines the schema of a database using its fields
Customed model can be defined by subclassing the Model class.
"""
__metaclass__ = ModelMeta
@classmethod
def indices(cls):
""" Generate all indices of a model
"""
for field in cls.fields.itervalues():
if field.index:
yield field, field.index
@classmethod
def initialize(cls):
""" Initialize the necessary stuffs of a model class
Including:
* Gathering fields and indices
* Touch db if not exist.
Called in ModelMeta's __new__
"""
cls.db_name = cls.__name__.lower()
cls.fields = []
for fieldname, v in vars(cls).items():
if isinstance(v, Field):
v.fieldname = fieldname
cls.fields.append(v)
v.probe_index(cls)
server = Server()
if not server[cls.db_name]:
server.create_db(cls.db_name)
@classmethod
def create(cls, **kwargs):
""" Create a new object
"""
model_obj = cls(**kwargs)
return model_obj
@classmethod
def db(cls):
server = Server()
return server[cls.db_name]
@classmethod
def all(cls):
db = cls.db()
return [cls.get(item['id']) for item in db.docs()['rows']]
@classmethod
- def get(cls, id, exc_class=None):
+ def get(cls, id, rev=None, exc_class=None):
db = cls.db()
try:
- user_dict = db[id]
+ user_dict = db.get(id, rev=rev)
except ServerError:
if exc_class:
raise exc_class()
raise
obj = cls(**user_dict)
return obj
+ def revs(self):
+ return self.db().revs(self.id)
@classmethod
def get_by_ids(cls, *ids):
db = cls.db()
for id in ids:
try:
user_dict = db[id]
yield cls(**user_dict)
except ServerError:
pass
def __eq__(self, other):
return self.__class__ == other.__class__ and \
hasattr(self, 'id') and hasattr(other, 'id') and \
self.id == other.id
def __hash__(self):
return hash(getattr(self, 'id', None))
def save(self):
if not self.tainted:
# TODO: revisit this once foreign keys are added
return
db = self.db()
if hasattr(self, 'id'):
db[self.id] = self.get_dict()
else:
res = db.create_doc(self.get_dict())
self.id = res['id']
for index, old_value, new_value in self.changed_indices:
index.change(old_value, new_value, self.id)
self.changed_indices = []
self.tainted = False
def delete(self):
if hasattr(self, 'id'):
del self.db()[self.id]
for field, index in self.indices():
v = getattr(self, field.fieldname)
if v:
index.delete(v, self.id)
def get_dict(self):
""" Get the dict representation of an object's fields
"""
info_dict = {}
for field in self.fields:
info_dict[field.fieldname] = getattr(self, field.fieldname)
if hasattr(self, 'id'):
info_dict['id'] = self.id
return info_dict
def __init__(self, **kwargs):
self.changed_indices = []
for field in self.fields:
setattr(self,
field.fieldname,
kwargs.get(field.fieldname,
field.default_value()))
+ self._rev = kwargs.get('_rev')
if '_id' in kwargs:
self.id = kwargs['_id']
self.tainted = False
else:
self.tainted = True
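The `Index.change` protocol above (drop the id from the old value's bucket, add it to the new value's bucket) is independent of CouchDB itself. A self-contained in-memory sketch of the same bookkeeping (`MemoryIndex` is a hypothetical stand-in for the CouchDB-backed `Index`, with a plain dict in place of the index database):

```python
class MemoryIndex:
    """In-memory mimic of ocm.Index: maps an indexed value to the
    set of model ids that currently hold that value."""
    def __init__(self):
        self.store = {}  # indexed value -> set of model ids

    def set(self, v, model_id):
        self.store.setdefault(v, set()).add(model_id)

    def delete(self, v, model_id):
        self.store.get(v, set()).discard(model_id)

    def change(self, old_v, new_v, model_id):
        # Same shape as Index.change: drop the old entry, add the new.
        if old_v:
            self.delete(old_v, model_id)
        if new_v:
            self.set(new_v, model_id)

idx = MemoryIndex()
idx.change(None, 'alice', 'doc1')   # field set for the first time
idx.change('alice', 'bob', 'doc1')  # field value updated
```

This is the same sequence `Model.save` drives through `changed_indices`: each `(index, old_value, new_value)` tuple recorded by `Field.__set__` becomes one `change` call once the document id is known.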
|
superisaac/rainy
|
a8882f2cbadacd89c400d9294f1a1f7cd34834fb
|
Fixed some bugs on http request
|
diff --git a/lib/couchkit/dbwrapper.py b/lib/couchkit/dbwrapper.py
index 017c5ee..b048881 100644
--- a/lib/couchkit/dbwrapper.py
+++ b/lib/couchkit/dbwrapper.py
@@ -1,234 +1,245 @@
import httpc as http
-from urllib import quote
+from urllib import quote, urlencode
import simplejson
import settings
class ServerError(Exception):
def __init__(self, status, msg):
self.status = status
self.msg = msg
def __str__(self):
return "Server Error: %s %s" % (self.status, self.msg)
class ServerConnectionError(ServerError):
def __init__(self, cerror):
response = cerror.params.response
super(ServerConnectionError, self).__init__(response.status,
response.reason + '|')
class Server(object):
servers = {}
def __new__(cls, server_url=settings.COUCHDB_SERVER):
if server_url not in cls.servers:
obj = object.__new__(cls)
obj.initialize(server_url)
cls.servers[server_url] = obj
return cls.servers[server_url]
def initialize(self, server_url):
self.server_url = server_url
if self.server_url[-1:] == '/':
self.server_url = self.server_url[:-1]
self.opened_dbs = {}
def handle_response(self, status, msg, body):
if status >= 200 and status < 300:
return simplejson.loads(body)
else:
raise ServerError(status, msg)
def dumps(self, obj):
if obj is None:
return ''
else:
return simplejson.dumps(obj)
- def get(self, url='/'):
+ def get(self, url='/', **params):
+ param_str = urlencode(params)
url = quote(url)
+ if param_str:
+ url += '?' + param_str
try:
t = http.get_(self.server_url + url,
headers={'Accept':'Application/json'},
)
except http.ConnectionError, e:
raise ServerConnectionError(e)
obj = self.handle_response(*t)
if isinstance(obj, dict):
obj = dict((k.encode(settings.ENCODING), v) for k, v in obj.iteritems())
return obj
- def delete(self, url='/'):
+ def delete(self, url='/', **params):
+ param_str = urlencode(params)
url = quote(url)
+ if param_str:
+ url += '?' + param_str
try:
t = http.delete_(self.server_url + url)
except http.ConnectionError, e:
raise ServerConnectionError(e)
return self.handle_response(*t)
def post(self, url, obj):
url = quote(url)
data = self.dumps(obj)
try:
t = http.post_(self.server_url + url,
data=data,
headers={'content-type': 'application/json',
'accept':'application/json'
},
)
except http.ConnectionError, e:
raise ServerConnectionError(e)
return self.handle_response(*t)
def put(self, url, obj):
url = quote(url)
data = self.dumps(obj)
try:
t = http.put_(self.server_url + url,
data=data,
headers={'content-type': 'application/json',
'accept':'application/json'},
)
except http.ConnectionError, e:
raise ServerConnectionError(e)
return self.handle_response(*t)
def __getitem__(self, dbname):
if dbname in self.opened_dbs:
return self.opened_dbs[dbname]
dbs = self.dbs()
if dbname in dbs:
db = Database(self, dbname)
self.opened_dbs[dbname] = db
return db
else:
#raise KeyError(dbname)
return None
def __delitem__(self, dbname):
if dbname in self.opened_dbs:
del self.opened_dbs[dbname]
return self.delete('/%s/' % dbname)
def dbs(self):
return self.get('/_all_dbs')
def create_db(self, dbname):
return self.put('/%s/' % dbname, None)
class Database:
def __init__(self, server, dbname):
self.server = server
self.dbname = dbname
self._cache = {}
- self.enable_cache = True
+ self.enable_cache = False
def del_cache(self, docid):
if self.enable_cache:
if docid in self._cache:
del self._cache[docid]
def get_cache(self, docid):
if self.enable_cache:
if docid in self._cache:
return self._cache[docid]
def set_cache(self, docid, obj):
if self.enable_cache:
self._cache[docid] = obj
def clean_cache(self):
self._cache = {}
def info(self):
return self.server.get('/%s/' % self.dbname)
def docs(self):
return self.server.get('/%s/_all_docs' % self.dbname)
- def get(self, docid):
- obj = self.server.get('/%s/%s/' % (self.dbname, docid))
+ def get(self, docid, rev=None):
+ params = rev and dict(rev=rev) or {}
+ obj = self.server.get('/%s/%s' % (self.dbname, docid), **params)
self.set_cache(docid, obj)
return obj
+ def revs(self, docid):
+ obj = self.server.get('/%s/%s' % (self.dbname, docid), revs='true')
+ return obj['_revs']
+
def fetch(self, docid, absent=None):
try:
obj = self.server.get('/%s/%s/' % (self.dbname, docid))
except ServerError:
return absent
self.set_cache(docid, obj)
return obj
def __getitem__(self, docid):
obj = self.get_cache(docid)
return obj or self.get(docid)
def __setitem__(self, docid, obj):
try:
#self.server.put('/%s/%s' % (self.dbname, docid), obj)
self.set(docid, obj)
except ServerError:
doc = self.get(docid)
rev = doc['_rev']
obj['_rev'] = rev
self.server.put('/%s/%s' % (self.dbname, docid), obj)
self.del_cache(docid)
def set(self, docid, obj):
self.server.put('/%s/%s' % (self.dbname, docid), obj)
def create_doc(self, obj):
""" Create a new document with the id and rev generated by server
:returns
{"ok":true, "id":"123BAC", "rev":"946B7D1C"}
"""
return self.server.post('/%s/' % self.dbname, obj)
def __delitem__(self, docid):
doc = self.get(docid)
rev = doc['_rev']
self.del_cache(docid)
- return self.server.delete('/%s/%s/?rev=%s' % (self.dbname, docid, rev))
+ return self.server.delete('/%s/%s' % (self.dbname, docid), rev=rev)
def query(self, map_func, reduce_func=""):
"query temporary view"
view = View(map_func, reduce_func)
return self.server.post('/%s/_temp_view' % (self.dbname), view.query_dict())
def query_view(self, design_name, view_name):
return self.server.get('/%s/_view/%s' % (self.dbname,
design_name
))
def create_or_replace_design(self, name, design):
self[name] = design.query_dict(name)
class View:
def __init__(self, map_func='', reduce_func=''):
self.map_func = map_func
self.reduce_func = reduce_func
def query_dict(self):
query = {}
if self.map_func:
query['map'] = self.map_func
if self.reduce_func:
query['reduce'] = self.reduce_func
return query
class Design:
def __init__(self, **views):
self.views = {}
self.views.update(views)
def query_dict(self, name):
query = {
'_id': '_design/%s' % name,
'language': 'javascript',
'views': {},
}
for name, view in self.views.iteritems():
query['views'][name] = view.query_dict()
return query
diff --git a/lib/couchkit/httpc.py b/lib/couchkit/httpc.py
index 7f68ffd..cd9938b 100755
--- a/lib/couchkit/httpc.py
+++ b/lib/couchkit/httpc.py
@@ -1,663 +1,664 @@
"""\
@file httpc.py
@author Donovan Preston
Copyright (c) 2005-2006, Donovan Preston
Copyright (c) 2007, Linden Research, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
"""
import copy
import datetime
import httplib
import os.path
import os
import time
import urlparse
-
+import settings
_old_HTTPConnection = httplib.HTTPConnection
_old_HTTPSConnection = httplib.HTTPSConnection
-
HTTP_TIME_FORMAT = '%a, %d %b %Y %H:%M:%S GMT'
to_http_time = lambda t: time.strftime(HTTP_TIME_FORMAT, time.gmtime(t))
try:
from mx import DateTime
def from_http_time(t, defaultdate=None):
return int(DateTime.Parser.DateTimeFromString(
t, defaultdate=defaultdate).gmticks())
except ImportError:
import calendar
parse_formats = (HTTP_TIME_FORMAT, # RFC 1123
'%A, %d-%b-%y %H:%M:%S GMT', # RFC 850
'%a %b %d %H:%M:%S %Y') # asctime
def from_http_time(t, defaultdate=None):
for parser in parse_formats:
try:
return calendar.timegm(time.strptime(t, parser))
except ValueError:
continue
return defaultdate
def host_and_port_from_url(url):
"""@brief Simple function to get host and port from an http url.
@return Returns host, port and port may be None.
"""
host = None
port = None
parsed_url = urlparse.urlparse(url)
try:
host, port = parsed_url[1].split(':')
except ValueError:
host = parsed_url[1].split(':')
return host, port
def better_putrequest(self, method, url, skip_host=0, skip_accept_encoding=0):
self.method = method
self.path = url
try:
# Python 2.4 and above
self.old_putrequest(method, url, skip_host, skip_accept_encoding)
except TypeError:
# Python 2.3 and below
self.old_putrequest(method, url, skip_host)
class HttpClient(httplib.HTTPConnection):
"""A subclass of httplib.HTTPConnection that provides a better
putrequest that records the method and path on the request object.
"""
def __init__(self, host, port=None, strict=None):
_old_HTTPConnection.__init__(self, host, port, strict)
old_putrequest = httplib.HTTPConnection.putrequest
putrequest = better_putrequest
class HttpsClient(httplib.HTTPSConnection):
"""A subclass of httplib.HTTPSConnection that provides a better
putrequest that records the method and path on the request object.
"""
old_putrequest = httplib.HTTPSConnection.putrequest
putrequest = better_putrequest
def wrap_httplib_with_httpc():
"""Replace httplib's implementations of these classes with our enhanced ones.
Needed to work around code that uses httplib directly."""
httplib.HTTP._connection_class = httplib.HTTPConnection = HttpClient
httplib.HTTPS._connection_class = httplib.HTTPSConnection = HttpsClient
class FileScheme(object):
"""Retarded scheme to local file wrapper."""
host = '<file>'
port = '<file>'
reason = '<none>'
def __init__(self, location):
pass
def request(self, method, fullpath, body='', headers=None):
self.status = 200
self.msg = ''
self.path = fullpath.split('?')[0]
self.method = method = method.lower()
assert method in ('get', 'put', 'delete')
if method == 'delete':
try:
os.remove(self.path)
except OSError:
pass # don't complain if already deleted
elif method == 'put':
try:
f = file(self.path, 'w')
f.write(body)
f.close()
except IOError, e:
self.status = 500
self.raise_connection_error()
elif method == 'get':
if not os.path.exists(self.path):
self.status = 404
self.raise_connection_error(NotFound)
def connect(self):
pass
def getresponse(self):
return self
def getheader(self, header):
if header == 'content-length':
try:
return os.path.getsize(self.path)
except OSError:
return 0
def read(self, howmuch=None):
if self.method == 'get':
try:
fl = file(self.path, 'r')
if howmuch is None:
return fl.read()
else:
return fl.read(howmuch)
except IOError:
self.status = 500
self.raise_connection_error()
return ''
def raise_connection_error(self, klass=None):
if klass is None:
klass=ConnectionError
raise klass(_Params('file://' + self.path, self.method))
def close(self):
"""We're challenged here, and read the whole file rather than
integrating with this lib. file object already out of scope at this
point"""
pass
class _Params(object):
def __init__(self, url, method, body='', headers=None, dumper=None,
loader=None, use_proxy=False, ok=(), aux=None):
'''
@param connection The connection (as returned by make_connection) to use for the request.
@param method HTTP method
@param url Full url to make request on.
@param body HTTP body, if necessary for the method. Can be any object, assuming an appropriate dumper is also provided.
@param headers Dict of header name to header value
@param dumper Method that formats the body as a string.
@param loader Method that converts the response body into an object.
@param use_proxy Set to True if the connection is to a proxy.
@param ok Set of valid response statuses. If the returned status is not in this list, an exception is thrown.
'''
self.instance = None
self.url = url
self.path = url
self.method = method
self.body = body
if headers is None:
self.headers = {}
else:
self.headers = headers
self.dumper = dumper
self.loader = loader
self.use_proxy = use_proxy
self.ok = ok or (200, 201, 204)
self.orig_body = body
self.aux = aux
-
+ if settings.DEBUG:
+ print '~', self.method, self.url
+
class _LocalParams(_Params):
def __init__(self, params, **kwargs):
self._delegate = params
for k, v in kwargs.iteritems():
setattr(self, k, v)
def __getattr__(self, key):
if key == '__setstate__': return
return getattr(self._delegate, key)
def __reduce__(self):
params = copy.copy(self._delegate)
kwargs = copy.copy(self.__dict__)
assert(kwargs.has_key('_delegate'))
del kwargs['_delegate']
if hasattr(params,'aux'): del params.aux
return (_LocalParams,(params,),kwargs)
def __setitem__(self, k, item):
setattr(self, k, item)
class ConnectionError(Exception):
"""Detailed exception class for reporting on http connection problems.
There are lots of subclasses so you can use closely-specified
exception clauses."""
def __init__(self, params):
self.params = params
Exception.__init__(self)
def location(self):
return self.params.response.msg.dict.get('location')
def expired(self):
# 14.21 Expires
#
# HTTP/1.1 clients and caches MUST treat other invalid date
# formats, especially including the value "0", as in the past
# (i.e., "already expired").
expires = from_http_time(
self.params.response_headers.get('expires', '0'),
defaultdate=DateTime.Epoch)
return time.time() > expires
def __repr__(self):
response = self.params.response
return "%s(url=%r, method=%r, status=%r, reason=%r, body=%r)" % (
self.__class__.__name__, self.params.url, self.params.method,
response.status, response.reason, self.params.body)
__str__ = __repr__
class UnparseableResponse(ConnectionError):
"""Raised when a loader cannot parse the response from the server."""
def __init__(self, content_type, response, url):
self.content_type = content_type
self.response = response
self.url = url
Exception.__init__(self)
def __repr__(self):
return "Could not parse the data at the URL %r of content-type %r\nData:\n%r)" % (
self.url, self.content_type, self.response)
__str__ = __repr__
class Accepted(ConnectionError):
""" 202 Accepted """
pass
class Retriable(ConnectionError):
def retry_method(self):
return self.params.method
def retry_url(self):
return self.location() or self.url()
def retry_(self):
params = _LocalParams(self.params,
url=self.retry_url(),
method=self.retry_method())
return self.params.instance.request_(params)
def retry(self):
return self.retry_()[-1]
class MovedPermanently(Retriable):
""" 301 Moved Permanently """
pass
class Found(Retriable):
""" 302 Found """
pass
class SeeOther(Retriable):
""" 303 See Other """
def retry_method(self):
return 'GET'
class NotModified(ConnectionError):
""" 304 Not Modified """
pass
class TemporaryRedirect(Retriable):
""" 307 Temporary Redirect """
pass
class BadRequest(ConnectionError):
""" 400 Bad Request """
pass
class Unauthorized(ConnectionError):
""" 401 Unauthorized """
pass
class PaymentRequired(ConnectionError):
""" 402 Payment Required """
pass
class Forbidden(ConnectionError):
""" 403 Forbidden """
pass
class NotFound(ConnectionError):
""" 404 Not Found """
pass
class RequestTimeout(ConnectionError):
""" 408 RequestTimeout """
pass
class Gone(ConnectionError):
""" 410 Gone """
pass
class LengthRequired(ConnectionError):
""" 411 Length Required """
pass
class RequestEntityTooLarge(ConnectionError):
""" 413 Request Entity Too Large """
pass
class RequestURITooLong(ConnectionError):
""" 414 Request-URI Too Long """
pass
class UnsupportedMediaType(ConnectionError):
""" 415 Unsupported Media Type """
pass
class RequestedRangeNotSatisfiable(ConnectionError):
""" 416 Requested Range Not Satisfiable """
pass
class ExpectationFailed(ConnectionError):
""" 417 Expectation Failed """
pass
class NotImplemented(ConnectionError):
""" 501 Not Implemented """
pass
class ServiceUnavailable(Retriable):
""" 503 Service Unavailable """
def url(self):
return self.params._delegate.url
class GatewayTimeout(Retriable):
""" 504 Gateway Timeout """
def url(self):
return self.params._delegate.url
class HTTPVersionNotSupported(ConnectionError):
""" 505 HTTP Version Not Supported """
pass
class InternalServerError(ConnectionError):
""" 500 Internal Server Error """
def __repr__(self):
try:
import simplejson
traceback = simplejson.loads(self.params.response_body)
except:
try:
from indra.base import llsd
traceback = llsd.parse(self.params.response_body)
except:
traceback = self.params.response_body
if(isinstance(traceback, dict)
and 'stack-trace' in traceback
and 'description' in traceback):
body = traceback
traceback = "Traceback (most recent call last):\n"
for frame in body['stack-trace']:
traceback += ' File "%s", line %s, in %s\n' % (
frame['filename'], frame['lineno'], frame['method'])
for line in frame['code']:
if line['lineno'] == frame['lineno']:
traceback += ' %s' % (line['line'].lstrip(), )
break
traceback += body['description']
return "The server raised an exception from our request:\n%s %s\n%s %s\n%s" % (
self.params.method, self.params.url, self.params.response.status, self.params.response.reason, traceback)
__str__ = __repr__
status_to_error_map = {
202: Accepted,
301: MovedPermanently,
302: Found,
303: SeeOther,
304: NotModified,
307: TemporaryRedirect,
400: BadRequest,
401: Unauthorized,
402: PaymentRequired,
403: Forbidden,
404: NotFound,
408: RequestTimeout,
410: Gone,
411: LengthRequired,
413: RequestEntityTooLarge,
414: RequestURITooLong,
415: UnsupportedMediaType,
416: RequestedRangeNotSatisfiable,
417: ExpectationFailed,
500: InternalServerError,
501: NotImplemented,
503: ServiceUnavailable,
504: GatewayTimeout,
505: HTTPVersionNotSupported,
}
scheme_to_factory_map = {
'http': HttpClient,
'https': HttpsClient,
'file': FileScheme,
}
def make_connection(scheme, location, use_proxy):
""" Create a connection object to a host:port.
@param scheme Protocol, scheme, whatever you want to call it. http, file, https are currently supported.
@param location Hostname and port number, formatted as host:port or http://host:port if you're so inclined.
@param use_proxy Connect to a proxy instead of the actual location. Uses environment variables to decide where the proxy actually lives.
"""
if use_proxy:
if "http_proxy" in os.environ:
location = os.environ["http_proxy"]
elif "ALL_PROXY" in os.environ:
location = os.environ["ALL_PROXY"]
else:
location = "localhost:3128" #default to local squid
# run a little heuristic to see if location is an url, and if so parse out the hostpart
if location.startswith('http'):
_scheme, location, path, parameters, query, fragment = urlparse.urlparse(location)
result = scheme_to_factory_map[scheme](location)
result.connect()
return result
def connect(url, use_proxy=False):
""" Create a connection object to the host specified in a url. Convenience function for make_connection."""
scheme, location = urlparse.urlparse(url)[:2]
return make_connection(scheme, location, use_proxy)
def make_safe_loader(loader):
if not callable(loader):
return loader
def safe_loader(what):
try:
return loader(what)
except Exception:
import traceback
traceback.print_exc()
return None
return safe_loader
class HttpSuite(object):
def __init__(self, dumper, loader, fallback_content_type):
self.dumper = dumper
self.loader = loader
self.fallback_content_type = fallback_content_type
def request_(self, params):
'''Make an http request to a url, for internal use mostly.'''
params = _LocalParams(params, instance=self)
(scheme, location, path, parameters, query,
fragment) = urlparse.urlparse(params.url)
if params.use_proxy:
if scheme == 'file':
params.use_proxy = False
else:
params.headers['host'] = location
if not params.use_proxy:
params.path = path
if query:
params.path += '?' + query
params.orig_body = params.body
if params.method in ('PUT', 'POST'):
if self.dumper is not None:
params.body = self.dumper(params.body)
# don't set content-length header because httplib does it
# for us in _send_request
else:
params.body = ''
params.response, params.response_body = self._get_response_body(params)
response, body = params.response, params.response_body
if self.loader is not None:
try:
body = make_safe_loader(self.loader(body))
except KeyboardInterrupt:
raise
except Exception, e:
raise UnparseableResponse(self.loader, body, params.url)
return response.status, response.msg, body
def _check_status(self, params):
response = params.response
if response.status not in params.ok:
klass = status_to_error_map.get(response.status, ConnectionError)
raise klass(params)
def _get_response_body(self, params):
connection = connect(params.url, params.use_proxy)
connection.request(params.method, params.path, params.body,
params.headers)
params.response = connection.getresponse()
params.response_body = params.response.read()
connection.close()
self._check_status(params)
return params.response, params.response_body
def request(self, params):
return self.request_(params)[-1]
def head_(self, url, headers=None, use_proxy=False, ok=None, aux=None):
return self.request_(_Params(url, 'HEAD', headers=headers,
loader=self.loader, dumper=self.dumper,
use_proxy=use_proxy, ok=ok, aux=aux))
def head(self, *args, **kwargs):
return self.head_(*args, **kwargs)[-1]
def get_(self, url, headers=None, use_proxy=False, ok=None, aux=None):
if headers is None:
headers = {}
headers['accept'] = self.fallback_content_type+';q=1,*/*;q=0'
return self.request_(_Params(url, 'GET', headers=headers,
loader=self.loader, dumper=self.dumper,
use_proxy=use_proxy, ok=ok, aux=aux))
def get(self, *args, **kwargs):
return self.get_(*args, **kwargs)[-1]
def put_(self, url, data, headers=None, content_type=None, ok=None,
aux=None):
if headers is None:
headers = {}
if 'content-type' not in headers:
if content_type is None:
headers['content-type'] = self.fallback_content_type
else:
headers['content-type'] = content_type
headers['accept'] = headers['content-type']+';q=1,*/*;q=0'
return self.request_(_Params(url, 'PUT', body=data, headers=headers,
loader=self.loader, dumper=self.dumper,
ok=ok, aux=aux))
def put(self, *args, **kwargs):
return self.put_(*args, **kwargs)[-1]
def delete_(self, url, ok=None, aux=None):
return self.request_(_Params(url, 'DELETE', loader=self.loader,
dumper=self.dumper, ok=ok, aux=aux))
def delete(self, *args, **kwargs):
return self.delete_(*args, **kwargs)[-1]
def post_(self, url, data='', headers=None, content_type=None, ok=None,
aux=None):
if headers is None:
headers = {}
if 'content-type' not in headers:
if content_type is None:
headers['content-type'] = self.fallback_content_type
else:
headers['content-type'] = content_type
headers['accept'] = headers['content-type']+';q=1,*/*;q=0'
return self.request_(_Params(url, 'POST', body=data,
headers=headers, loader=self.loader,
dumper=self.dumper, ok=ok, aux=aux))
def post(self, *args, **kwargs):
return self.post_(*args, **kwargs)[-1]
def make_suite(dumper, loader, fallback_content_type):
""" Return a tuple of methods for making http requests with automatic bidirectional formatting with a particular content-type."""
suite = HttpSuite(dumper, loader, fallback_content_type)
return suite.get, suite.put, suite.delete, suite.post
suite = HttpSuite(str, None, 'text/plain')
delete = suite.delete
delete_ = suite.delete_
get = suite.get
get_ = suite.get_
head = suite.head
head_ = suite.head_
post = suite.post
post_ = suite.post_
put = suite.put
put_ = suite.put_
request = suite.request
request_ = suite.request_
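The error handling in httpc.py hinges on `status_to_error_map`: `_check_status` looks the response status up in a dict of exception classes and falls back to the base `ConnectionError`. That dispatch can be sketched in miniature (class and function names here are illustrative stand-ins, renamed to avoid shadowing Python 3's built-in `ConnectionError`):

```python
class HTTPError(Exception):
    """Base class, standing in for httpc.ConnectionError."""

class NotFound(HTTPError):
    """404, as in httpc's NotFound."""

class InternalServerError(HTTPError):
    """500, as in httpc's InternalServerError."""

# Same idea as status_to_error_map: status code -> exception class.
STATUS_TO_ERROR = {404: NotFound, 500: InternalServerError}

def check_status(status, ok=(200, 201, 204)):
    # Mirrors HttpSuite._check_status: unknown bad statuses raise
    # the base class; mapped ones raise their specific subclass.
    if status not in ok:
        raise STATUS_TO_ERROR.get(status, HTTPError)(status)
```

Because every mapped class subclasses the base, callers can catch broadly (`except HTTPError`) or narrowly (`except NotFound`), which is exactly what `Database.fetch` relies on when it turns a `ServerError` into its `absent` default.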
diff --git a/lib/couchkit/ocm.py b/lib/couchkit/ocm.py
index d4aa682..d110520 100644
--- a/lib/couchkit/ocm.py
+++ b/lib/couchkit/ocm.py
@@ -1,251 +1,251 @@
# ocm.py - Object-Couchdb mapping
# Author: Zeng Ke
import time
from dbwrapper import Server, ServerError
from urllib import quote
class Index(object):
""" Couchdb indexes
It is indeed a new database whose key is the indexed value
and value is a list of ids of master objects.
"""
def __init__(self, model, field):
self.db_name = 'idx--%s--%s' % (model.db_name, field.fieldname)
server = Server()
if not server[self.db_name]:
server.create_db(self.db_name)
def db(self):
server = Server()
return server[self.db_name]
def get_ids(self, v):
model_ids = self.db()["%s" % v]['store_ids']
return model_ids
def set(self, v, model_id):
db = self.db()
obj = db.fetch(v)
if obj:
# Already exists
model_ids = set(obj['store_ids'])
model_ids.add(model_id)
db[v] = {'store_ids': list(model_ids)}
else:
db[v] = {'store_ids': [model_id], '_id': quote(v)}
def delete(self, v, model_id):
model_ids = set(self.get_ids(v))
if model_id in model_ids:
model_ids.remove(model_id)
self.db()[v] = {'store_ids': list(model_ids)}
def change(self, old_v, new_v, model_id):
if old_v:
self.delete(old_v, model_id)
if new_v:
self.set(new_v, model_id)
class Field(object):
""" Field that defines the schema of a DB
Much like a field in relational-db ORMs:
a proxy for one of an object's attributes.
"""
def __init__(self, null=True):
self.null = null # Seems null is not used currently
self._fieldname = None
self.index = None
self.enable_index = False
def _get_fieldname(self):
return self._fieldname
def _set_fieldname(self, v):
self._fieldname = v
fieldname = property(_get_fieldname, _set_fieldname)
def probe_index(self, model):
if self.enable_index:
self.index = Index(model, self)
def default_value(self):
return None
@classmethod
def get_by_index(cls, **kw):
pass
def __get__(self, obj, type=None):
return getattr(obj, '_proxied_%s' % self.fieldname)
def __set__(self, obj, value):
obj.tainted = True
if self.index:
old_value = getattr(obj, '_proxied_%s' % self.fieldname, None)
obj.changed_indices.append((self.index, old_value, value))
setattr(obj, '_proxied_%s' % self.fieldname, value)
def __delete__(self, obj):
pass
class ScalaField(Field):
""" Elementry data type: String and number
"""
def __init__(self, default=None, null=True, index=False):
super(ScalaField, self).__init__(null)
self.default = default
self.enable_index = index
def default_value(self):
return self.default
class ListField(Field):
""" This field is an array
"""
def default_value(self):
return []
class DateTimeField(Field):
""" Datetime format
"""
def default_value(self):
return time.time()
class ModelMeta(type):
""" The meta class of Model
Do some registering of Model classes
"""
def __new__(meta, clsname, bases, classdict):
cls = type.__new__(meta, clsname, bases, classdict)
if clsname == 'Model':
return cls
cls.initialize()
return cls
class Model(object):
""" The model of couchdb
A model defines the schema of a database using its fields
Custom models can be defined by subclassing the Model class.
"""
__metaclass__ = ModelMeta
@classmethod
def indices(cls):
""" Generate all indices of a model
"""
for field in cls.fields:
if field.index:
yield field, field.index
@classmethod
def initialize(cls):
""" Initialize the necessary stuffs of a model class
Including:
* Gathering fields and indices
* Touch db if not exist.
Called in ModelMeta's __new__
"""
cls.db_name = cls.__name__.lower()
cls.fields = []
for fieldname, v in vars(cls).items():
if isinstance(v, Field):
v.fieldname = fieldname
cls.fields.append(v)
v.probe_index(cls)
server = Server()
if not server[cls.db_name]:
server.create_db(cls.db_name)
@classmethod
def create(cls, **kwargs):
""" Create a new object
"""
model_obj = cls(**kwargs)
return model_obj
@classmethod
def db(cls):
-
+ server = Server()
return server[cls.db_name]
@classmethod
def all(cls):
db = cls.db()
return [cls.get(item['id']) for item in db.docs()['rows']]
@classmethod
def get(cls, id, exc_class=None):
db = cls.db()
try:
user_dict = db[id]
except ServerError:
if exc_class:
raise exc_class()
raise
obj = cls(**user_dict)
return obj
@classmethod
def get_by_ids(cls, *ids):
db = cls.db()
for id in ids:
try:
user_dict = db[id]
yield cls(**user_dict)
except ServerError:
pass
def __eq__(self, other):
return self.__class__ == other.__class__ and \
hasattr(self, 'id') and hasattr(other, 'id') and \
self.id == other.id
def __hash__(self):
return hash(getattr(self, 'id', None))
def save(self):
if not self.tainted:
# TODO: reconsider this when foreign keys are added
return
db = self.db()
if hasattr(self, 'id'):
db[self.id] = self.get_dict()
else:
res = db.create_doc(self.get_dict())
self.id = res['id']
for index, old_value, new_value in self.changed_indices:
index.change(old_value, new_value, self.id)
self.changed_indices = []
self.tainted = False
def delete(self):
if hasattr(self, 'id'):
del self.db()[self.id]
for field, index in self.indices():
v = getattr(self, field.fieldname)
if v:
index.delete(v, self.id)
def get_dict(self):
""" Get the dict representation of an object's fields
"""
info_dict = {}
for field in self.fields:
info_dict[field.fieldname] = getattr(self, field.fieldname)
if hasattr(self, 'id'):
info_dict['id'] = self.id
return info_dict
def __init__(self, **kwargs):
self.changed_indices = []
for field in self.fields:
setattr(self,
field.fieldname,
kwargs.get(field.fieldname,
field.default_value()))
if '_id' in kwargs:
self.id = kwargs['_id']
self.tainted = False
else:
self.tainted = True
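The interplay between the Field descriptor and the tainted flag above can be shown with a self-contained sketch (`SketchField` and `Doc` are illustrative names that mirror, but are not, the ocm code): every attribute write goes through `_proxied_<name>` and marks the owner dirty, which is what lets `save()` skip clean objects.

```python
class SketchField(object):
    """Minimal stand-in for ocm.Field's descriptor behaviour."""
    def __init__(self, name):
        self.fieldname = name

    def __get__(self, obj, type=None):
        return getattr(obj, '_proxied_%s' % self.fieldname)

    def __set__(self, obj, value):
        obj.tainted = True  # any write dirties the object
        setattr(obj, '_proxied_%s' % self.fieldname, value)


class Doc(object):
    name = SketchField('name')

    def __init__(self):
        # Set the proxied slot directly so construction leaves the object clean.
        self._proxied_name = None
        self.tainted = False
```

A `Doc` starts clean; assigning `doc.name` flips `tainted`, just as assigning a real model field does before `save()`.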
diff --git a/lib/proto/rainyapi.proto b/lib/proto/rainyapi.proto
new file mode 100644
index 0000000..eec4032
--- /dev/null
+++ b/lib/proto/rainyapi.proto
@@ -0,0 +1,5 @@
+package rainyapi;
+
+message Entry {
+ required string content = 1;
+}
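For reference, `Entry` follows the standard protobuf wire format: field number 1 with wire type 2 (length-delimited) gives a tag byte of `0x0a`, followed by a varint length and the UTF-8 bytes. A hand-rolled encoder can confirm this (a sketch assuming message bodies under 128 bytes so the length fits one varint byte; this is not the generated API):

```python
def encode_entry(content):
    """Encode rainyapi.Entry by hand: tag byte, length varint, UTF-8 bytes."""
    body = content.encode('utf-8')
    tag = (1 << 3) | 2           # field number 1, wire type 2 (length-delimited)
    assert len(body) < 128       # keep the length a single varint byte
    return bytes([tag, len(body)]) + body
```

For instance, `encode_entry('hi')` yields the three header-plus-payload bytes `b'\x0a\x02hi'`.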
diff --git a/lib/proto/rainyapi_pb2.py b/lib/proto/rainyapi_pb2.py
new file mode 100755
index 0000000..f347da8
--- /dev/null
+++ b/lib/proto/rainyapi_pb2.py
@@ -0,0 +1,39 @@
+#!/usr/bin/python2.4
+# Generated by the protocol buffer compiler. DO NOT EDIT!
+
+from google.protobuf import descriptor
+from google.protobuf import message
+from google.protobuf import reflection
+from google.protobuf import service
+from google.protobuf import service_reflection
+from google.protobuf import descriptor_pb2
+
+
+
+_ENTRY = descriptor.Descriptor(
+ name='Entry',
+ full_name='rainyapi.Entry',
+ filename='rainyapi.proto',
+ containing_type=None,
+ fields=[
+ descriptor.FieldDescriptor(
+ name='content', full_name='rainyapi.Entry.content', index=0,
+ number=1, type=9, cpp_type=9, label=2,
+ default_value="",
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ options=None),
+ ],
+ extensions=[
+ ],
+ nested_types=[], # TODO(robinson): Implement.
+ enum_types=[
+ ],
+ options=None)
+
+
+
+class Entry(message.Message):
+ __metaclass__ = reflection.GeneratedProtocolMessageType
+ DESCRIPTOR = _ENTRY
+
diff --git a/settings.py b/settings.py
index 89ff0ec..d6d2936 100644
--- a/settings.py
+++ b/settings.py
@@ -1,24 +1,23 @@
# URLs currently provided
URLMAP = (
(r'^/user/signup', 'Signup'),
(r'^/timeline/((?P<user_id>[0-9a-zA-Zu]+)/)?', 'Timeline'),
(r'^/user/timeline/((?P<user_id>[0-9a-zA-Zu]+)/)?', 'UserTimeline'),
(r'^/user/((?P<user_id>[0-9a-zA-Zu]+)/)?', 'UserInfo'),
(r'^/update/', 'Update'),
(r'^/entry/(?P<entry_id>[0-9a-zA-Z]+)/delete', 'EntryDelete'),
(r'^/entry/(?P<entry_id>[0-9a-zA-Z]+)', 'EntryInfo'),
(r'^/$', 'Hello'),
)
+DEBUG = False
# UTF-8 by default
ENCODING = 'utf8'
-
# Connection URL for the back end
COUCHDB_SERVER = 'http://localhost:5984'
# The port the server daemon listens on
SERVER_PORT = 8001
-
# Realm token used in Basic HTTP Authentication
APP_REALM = 'RainyAPI'
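URLMAP pairs each URL regex with a handler class name, so a dispatcher only needs the first pattern that matches. A minimal sketch of that lookup (the `resolve` helper is illustrative, not the server's actual dispatcher):

```python
import re

def resolve(urlmap, path):
    """Return (handler_name, captured_args) for the first matching pattern."""
    for pattern, handler_name in urlmap:
        m = re.match(pattern, path)
        if m:
            return handler_name, m.groupdict()
    return None, {}

# Two entries from URLMAP, repeated here so the sketch is self-contained:
SKETCH_URLMAP = (
    (r'^/entry/(?P<entry_id>[0-9a-zA-Z]+)/delete', 'EntryDelete'),
    (r'^/entry/(?P<entry_id>[0-9a-zA-Z]+)', 'EntryInfo'),
)
```

Order matters: `/entry/abc123/delete` resolves to `EntryDelete` only because its pattern is tried before the bare `/entry/...` pattern, which would also match.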
|
superisaac/rainy
|
3bf95a12408fb5279ad38fd19bfa7ae32b643e5e
|
Added some documents
|
diff --git a/lib/couchkit/ocm.py b/lib/couchkit/ocm.py
index e26dfc6..d4aa682 100644
--- a/lib/couchkit/ocm.py
+++ b/lib/couchkit/ocm.py
@@ -1,215 +1,251 @@
+# ocm.py - Object-Couchdb mapping
+# Author: Zeng Ke
+
import time
from dbwrapper import Server, ServerError
from urllib import quote
class Index(object):
+ """ Couchdb indexes
+ It is implemented as a separate database whose keys are the
+ indexed values and whose values are lists of ids of the master objects.
+ """
def __init__(self, model, field):
self.db_name = 'idx--%s--%s' % (model.db_name, field.fieldname)
server = Server()
if not server[self.db_name]:
server.create_db(self.db_name)
def db(self):
server = Server()
return server[self.db_name]
def get_ids(self, v):
model_ids = self.db()["%s" % v]['store_ids']
return model_ids
def set(self, v, model_id):
db = self.db()
obj = db.fetch(v)
if obj:
# Already exists
model_ids = set(obj['store_ids'])
model_ids.add(model_id)
db[v] = {'store_ids': list(model_ids)}
else:
db[v] = {'store_ids': [model_id], '_id': quote(v)}
def delete(self, v, model_id):
model_ids = set(self.get_ids(v))
if model_id in model_ids:
model_ids.remove(model_id)
self.db()[v] = {'store_ids': list(model_ids)}
def change(self, old_v, new_v, model_id):
if old_v:
self.delete(old_v, model_id)
if new_v:
self.set(new_v, model_id)
class Field(object):
+ """ Field that defines the schema of a DB
+ Much like a field in relational-db ORMs:
+ a proxy for one of an object's attributes.
+ """
def __init__(self, null=True):
self.null = null # Seems null is not used currently
self._fieldname = None
self.index = None
self.enable_index = False
def _get_fieldname(self):
return self._fieldname
def _set_fieldname(self, v):
self._fieldname = v
fieldname = property(_get_fieldname, _set_fieldname)
def probe_index(self, model):
if self.enable_index:
self.index = Index(model, self)
def default_value(self):
return None
@classmethod
def get_by_index(cls, **kw):
pass
def __get__(self, obj, type=None):
- return getattr(obj, 'proxied_%s' % self.fieldname)
+ return getattr(obj, '_proxied_%s' % self.fieldname)
def __set__(self, obj, value):
obj.tainted = True
if self.index:
- old_value = getattr(obj, 'proxied_%s' % self.fieldname, None)
+ old_value = getattr(obj, '_proxied_%s' % self.fieldname, None)
obj.changed_indices.append((self.index, old_value, value))
- setattr(obj, 'proxied_%s' % self.fieldname, value)
+ setattr(obj, '_proxied_%s' % self.fieldname, value)
def __del__(self, obj):
pass
class ScalaField(Field):
+ """ Elementry data type: String and number
+ """
def __init__(self, default=None, null=True, index=False):
super(ScalaField, self).__init__(null)
self.default = default
self.enable_index = index
def default_value(self):
return self.default
class ListField(Field):
+ """ This field is an array
+ """
def default_value(self):
return []
class DateTimeField(Field):
+ """ Datetime format
+ """
def default_value(self):
return time.time()
-
class ModelMeta(type):
+ """ The meta class of Model
+ Do some registering of Model classes
+ """
def __new__(meta, clsname, bases, classdict):
cls = type.__new__(meta, clsname, bases, classdict)
if clsname == 'Model':
return cls
cls.initialize()
return cls
class Model(object):
+ """ The model of couchdb
+ A model defines the schema of a database using its fields
+ Custom models can be defined by subclassing the Model class.
+ """
__metaclass__ = ModelMeta
@classmethod
def indices(cls):
+ """ Generate all indices of a model
+ """
for field in cls.fields.itervalues():
if field.index:
yield field, field.index
@classmethod
def initialize(cls):
+ """ Initialize the necessary stuffs of a model class
+ Including:
+ * Gathering fields and indices
+ * Touch db if not exist.
+ Called in ModelMeta's __new__
+ """
cls.db_name = cls.__name__.lower()
cls.fields = []
for fieldname, v in vars(cls).items():
if isinstance(v, Field):
v.fieldname = fieldname
cls.fields.append(v)
v.probe_index(cls)
server = Server()
if not server[cls.db_name]:
server.create_db(cls.db_name)
@classmethod
def create(cls, **kwargs):
+ """ Create a new object
+ """
model_obj = cls(**kwargs)
return model_obj
+
@classmethod
def db(cls):
- server = Server()
+
return server[cls.db_name]
@classmethod
def all(cls):
db = cls.db()
return [cls.get(item['id']) for item in db.docs()['rows']]
@classmethod
def get(cls, id, exc_class=None):
db = cls.db()
try:
user_dict = db[id]
except ServerError:
if exc_class:
raise exc_class()
raise
obj = cls(**user_dict)
return obj
@classmethod
def get_by_ids(cls, *ids):
db = cls.db()
for id in ids:
try:
user_dict = db[id]
yield cls(**user_dict)
except ServerError:
pass
def __eq__(self, other):
return self.__class__ == other.__class__ and \
hasattr(self, 'id') and hasattr(other, 'id') and \
self.id == other.id
def __hash__(self):
return hash(getattr(self, 'id', None))
def save(self):
if not self.tainted:
# TODO: things should be reconsidered when foreign key added in
return
db = self.db()
if hasattr(self, 'id'):
db[self.id] = self.get_dict()
else:
res = db.create_doc(self.get_dict())
self.id = res['id']
for index, old_value, new_value in self.changed_indices:
index.change(old_value, new_value, self.id)
self.changed_indices = []
self.tainted = False
def delete(self):
if hasattr(self, 'id'):
del self.db()[self.id]
for field, index in self.indices():
v = getattr(self, field.fieldname)
if v:
index.delete(v, self.id)
def get_dict(self):
+ """ Get the dict representation of an object's fields
+ """
info_dict = {}
for field in self.fields:
info_dict[field.fieldname] = getattr(self, field.fieldname)
if hasattr(self, 'id'):
info_dict['id'] = self.id
return info_dict
def __init__(self, **kwargs):
self.changed_indices = []
for field in self.fields:
setattr(self,
field.fieldname,
kwargs.get(field.fieldname,
field.default_value()))
if '_id' in kwargs:
self.id = kwargs['_id']
self.tainted = False
else:
self.tainted = True
diff --git a/settings.py b/settings.py
index 033574b..11c9e5e 100644
--- a/settings.py
+++ b/settings.py
@@ -1,16 +1,23 @@
+# URLs currently provided
URLMAP = (
(r'^/user/signup', 'Signup'),
(r'^/timeline/((?P<user_id>[0-9a-zA-Zu]+)/)?', 'Timeline'),
(r'^/user/timeline/((?P<user_id>[0-9a-zA-Zu]+)/)?', 'UserTimeline'),
(r'^/user/((?P<user_id>[0-9a-zA-Zu]+)/)?', 'UserInfo'),
(r'^/update/', 'Update'),
(r'^/entry/(?P<entry_id>[0-9a-zA-Z]+)/delete', 'EntryDelete'),
(r'^/entry/(?P<entry_id>[0-9a-zA-Z]+)', 'EntryInfo'),
(r'^/$', 'Hello'),
)
+# UTF-8 by default
ENCODING = 'utf8'
+
+# Connection URL for the back end
COUCHDB_SERVER = 'http://localhost:5984'
+
+# The port the server daemon listens on
SERVER_PORT = 8001
+# Realm token used in Basic HTTP Authentication
APP_REALM = 'RainyAPI'
|
superisaac/rainy
|
335498abf2063069d4a633fd2754475f3e78dd8c
|
Index step 3
|
diff --git a/lib/couchkit/ocm.py b/lib/couchkit/ocm.py
index 4974e4c..e26dfc6 100644
--- a/lib/couchkit/ocm.py
+++ b/lib/couchkit/ocm.py
@@ -1,210 +1,215 @@
import time
from dbwrapper import Server, ServerError
+from urllib import quote
class Index(object):
def __init__(self, model, field):
self.db_name = 'idx--%s--%s' % (model.db_name, field.fieldname)
server = Server()
if not server[self.db_name]:
server.create_db(self.db_name)
def db(self):
server = Server()
return server[self.db_name]
def get_ids(self, v):
model_ids = self.db()["%s" % v]['store_ids']
return model_ids
def set(self, v, model_id):
db = self.db()
- try:
- db.create_doc({'store_ids': [model_id], '_id': v})
- except ServerError, e:
- print e.status, e.msg
+ obj = db.fetch(v)
+ if obj:
# Already exists
- model_ids = set(db.fetch(v, [])['store_ids'])
+ model_ids = set(obj['store_ids'])
model_ids.add(model_id)
db[v] = {'store_ids': list(model_ids)}
+ else:
+ db[v] = {'store_ids': [model_id], '_id': quote(v)}
def delete(self, v, model_id):
model_ids = set(self.get_ids(v))
if model_id in model_ids:
model_ids.remove(model_id)
self.db()[v] = {'store_ids': list(model_ids)}
def change(self, old_v, new_v, model_id):
if old_v:
self.delete(old_v, model_id)
if new_v:
self.set(new_v, model_id)
class Field(object):
def __init__(self, null=True):
self.null = null # Seems null is not used currently
self._fieldname = None
self.index = None
self.enable_index = False
def _get_fieldname(self):
return self._fieldname
def _set_fieldname(self, v):
self._fieldname = v
fieldname = property(_get_fieldname, _set_fieldname)
def probe_index(self, model):
if self.enable_index:
self.index = Index(model, self)
def default_value(self):
return None
+ @classmethod
+ def get_by_index(cls, **kw):
+ pass
+
def __get__(self, obj, type=None):
return getattr(obj, 'proxied_%s' % self.fieldname)
def __set__(self, obj, value):
obj.tainted = True
if self.index:
old_value = getattr(obj, 'proxied_%s' % self.fieldname, None)
obj.changed_indices.append((self.index, old_value, value))
setattr(obj, 'proxied_%s' % self.fieldname, value)
def __del__(self, obj):
pass
class ScalaField(Field):
def __init__(self, default=None, null=True, index=False):
super(ScalaField, self).__init__(null)
self.default = default
self.enable_index = index
def default_value(self):
return self.default
class ListField(Field):
def default_value(self):
return []
class DateTimeField(Field):
def default_value(self):
return time.time()
class ModelMeta(type):
def __new__(meta, clsname, bases, classdict):
cls = type.__new__(meta, clsname, bases, classdict)
if clsname == 'Model':
return cls
cls.initialize()
return cls
class Model(object):
__metaclass__ = ModelMeta
@classmethod
def indices(cls):
for field in cls.fields.itervalues():
if field.index:
yield field, field.index
@classmethod
def initialize(cls):
cls.db_name = cls.__name__.lower()
cls.fields = []
for fieldname, v in vars(cls).items():
if isinstance(v, Field):
v.fieldname = fieldname
cls.fields.append(v)
v.probe_index(cls)
server = Server()
if not server[cls.db_name]:
server.create_db(cls.db_name)
@classmethod
def create(cls, **kwargs):
model_obj = cls(**kwargs)
return model_obj
@classmethod
def db(cls):
server = Server()
return server[cls.db_name]
@classmethod
def all(cls):
db = cls.db()
return [cls.get(item['id']) for item in db.docs()['rows']]
@classmethod
def get(cls, id, exc_class=None):
db = cls.db()
try:
user_dict = db[id]
except ServerError:
if exc_class:
raise exc_class()
raise
obj = cls(**user_dict)
return obj
@classmethod
def get_by_ids(cls, *ids):
db = cls.db()
for id in ids:
try:
user_dict = db[id]
yield cls(**user_dict)
except ServerError:
pass
def __eq__(self, other):
return self.__class__ == other.__class__ and \
hasattr(self, 'id') and hasattr(other, 'id') and \
self.id == other.id
def __hash__(self):
return hash(getattr(self, 'id', None))
def save(self):
if not self.tainted:
# TODO: things should be reconsidered when foreign key added in
return
db = self.db()
if hasattr(self, 'id'):
db[self.id] = self.get_dict()
else:
res = db.create_doc(self.get_dict())
self.id = res['id']
for index, old_value, new_value in self.changed_indices:
index.change(old_value, new_value, self.id)
self.changed_indices = []
self.tainted = False
def delete(self):
if hasattr(self, 'id'):
del self.db()[self.id]
for field, index in self.indices():
v = getattr(self, field.fieldname)
if v:
index.delete(v, self.id)
def get_dict(self):
info_dict = {}
for field in self.fields:
info_dict[field.fieldname] = getattr(self, field.fieldname)
if hasattr(self, 'id'):
info_dict['id'] = self.id
return info_dict
def __init__(self, **kwargs):
self.changed_indices = []
for field in self.fields:
setattr(self,
field.fieldname,
kwargs.get(field.fieldname,
field.default_value()))
if '_id' in kwargs:
self.id = kwargs['_id']
self.tainted = False
else:
self.tainted = True
diff --git a/scripts/dbtest.py b/scripts/dbtest.py
index b6da1f8..e47a064 100755
--- a/scripts/dbtest.py
+++ b/scripts/dbtest.py
@@ -1,19 +1,20 @@
#!/usr/bin/env python
import sys
sys.path.insert(0, '.')
from models import User, Entry
from lib.couchkit import Server
-user1 = User.signup('zk', 'passwd', 'hihicom')
-user2 = User.signup('mj', 'passwd1', 'whihicom')
+user1 = User.signup('zk', 'passwd', '[email protected]')
+user2 = User.signup('mj', 'passwd1', '[email protected]')
user1.follow(user2)
user1.update('haha, I am here')
user2.update('yes, you got it')
user2.update('good so good')
for status, user in user1.follow_timeline():
print user, 'says:', status.content, status.created_time
server = Server()
-print server['idx--user--email'].docs()
+print server['idx--user--email']['[email protected]']
+
|
superisaac/rainy
|
771e5ffa3d1012474c118fc013d5506f2dece1b4
|
Index step 2 create index automatically at the schema of database
|
diff --git a/lib/couchkit/ocm.py b/lib/couchkit/ocm.py
index 2d07a54..4974e4c 100644
--- a/lib/couchkit/ocm.py
+++ b/lib/couchkit/ocm.py
@@ -1,210 +1,210 @@
import time
from dbwrapper import Server, ServerError
class Index(object):
def __init__(self, model, field):
- self.db_name = 'idx' + model.db_name + field.fieldname
+ self.db_name = 'idx--%s--%s' % (model.db_name, field.fieldname)
server = Server()
if not server[self.db_name]:
server.create_db(self.db_name)
def db(self):
server = Server()
return server[self.db_name]
def get_ids(self, v):
- model_ids = self.db()["%s" % v]
+ model_ids = self.db()["%s" % v]['store_ids']
return model_ids
def set(self, v, model_id):
db = self.db()
try:
- db.set(v, [model_id])
+ db.create_doc({'store_ids': [model_id], '_id': v})
except ServerError, e:
print e.status, e.msg
# Already exits
- model_ids = set(db.fetch(v, []))
+ model_ids = set(db.fetch(v, [])['store_ids'])
model_ids.add(model_id)
- db[v] = list(model_ids)
+ db[v] = {'store_ids': list(model_ids)}
def delete(self, v, model_id):
model_ids = set(self.get_ids(v))
if model_id in model_ids:
model_ids.remove(model_id)
- self.db()[v] = list(model_ids)
+ self.db()[v] = {'store_ids': list(model_ids)}
def change(self, old_v, new_v, model_id):
if old_v:
self.delete(old_v, model_id)
if new_v:
self.set(new_v, model_id)
class Field(object):
def __init__(self, null=True):
self.null = null # Seems null is not used currently
self._fieldname = None
self.index = None
self.enable_index = False
def _get_fieldname(self):
return self._fieldname
def _set_fieldname(self, v):
self._fieldname = v
fieldname = property(_get_fieldname, _set_fieldname)
def probe_index(self, model):
if self.enable_index:
self.index = Index(model, self)
def default_value(self):
return None
def __get__(self, obj, type=None):
return getattr(obj, 'proxied_%s' % self.fieldname)
def __set__(self, obj, value):
obj.tainted = True
if self.index:
old_value = getattr(obj, 'proxied_%s' % self.fieldname, None)
obj.changed_indices.append((self.index, old_value, value))
setattr(obj, 'proxied_%s' % self.fieldname, value)
def __del__(self, obj):
pass
class ScalaField(Field):
def __init__(self, default=None, null=True, index=False):
super(ScalaField, self).__init__(null)
self.default = default
self.enable_index = index
def default_value(self):
return self.default
class ListField(Field):
def default_value(self):
return []
class DateTimeField(Field):
def default_value(self):
return time.time()
class ModelMeta(type):
def __new__(meta, clsname, bases, classdict):
cls = type.__new__(meta, clsname, bases, classdict)
if clsname == 'Model':
return cls
cls.initialize()
return cls
class Model(object):
__metaclass__ = ModelMeta
@classmethod
def indices(cls):
for field in cls.fields.itervalues():
if field.index:
yield field, field.index
@classmethod
def initialize(cls):
cls.db_name = cls.__name__.lower()
cls.fields = []
for fieldname, v in vars(cls).items():
if isinstance(v, Field):
v.fieldname = fieldname
cls.fields.append(v)
v.probe_index(cls)
server = Server()
if not server[cls.db_name]:
server.create_db(cls.db_name)
@classmethod
def create(cls, **kwargs):
model_obj = cls(**kwargs)
return model_obj
@classmethod
def db(cls):
server = Server()
return server[cls.db_name]
@classmethod
def all(cls):
db = cls.db()
return [cls.get(item['id']) for item in db.docs()['rows']]
@classmethod
def get(cls, id, exc_class=None):
db = cls.db()
try:
user_dict = db[id]
except ServerError:
if exc_class:
raise exc_class()
raise
obj = cls(**user_dict)
return obj
@classmethod
def get_by_ids(cls, *ids):
db = cls.db()
for id in ids:
try:
user_dict = db[id]
yield cls(**user_dict)
except ServerError:
pass
def __eq__(self, other):
return self.__class__ == other.__class__ and \
hasattr(self, 'id') and hasattr(other, 'id') and \
self.id == other.id
def __hash__(self):
return hash(getattr(self, 'id', None))
def save(self):
if not self.tainted:
# TODO: things should be reconsidered when foreign key added in
return
db = self.db()
if hasattr(self, 'id'):
db[self.id] = self.get_dict()
else:
res = db.create_doc(self.get_dict())
self.id = res['id']
- #for index, old_value, new_value in self.changed_indices:
- # index.change(old_value, new_value, self.id)
+ for index, old_value, new_value in self.changed_indices:
+ index.change(old_value, new_value, self.id)
self.changed_indices = []
self.tainted = False
def delete(self):
if hasattr(self, 'id'):
del self.db()[self.id]
for field, index in self.indices():
v = getattr(self, field.fieldname)
if v:
index.delete(v, self.id)
def get_dict(self):
info_dict = {}
for field in self.fields:
info_dict[field.fieldname] = getattr(self, field.fieldname)
if hasattr(self, 'id'):
info_dict['id'] = self.id
return info_dict
def __init__(self, **kwargs):
self.changed_indices = []
for field in self.fields:
setattr(self,
field.fieldname,
kwargs.get(field.fieldname,
field.default_value()))
if '_id' in kwargs:
self.id = kwargs['_id']
self.tainted = False
else:
self.tainted = True
diff --git a/scripts/console b/scripts/console
index 8f5348c..6910f0f 100755
--- a/scripts/console
+++ b/scripts/console
@@ -1,12 +1,13 @@
#!/bin/sh
cat > /tmp/shellee.py <<EOF
import sys
sys.path.insert(0, ".")
from lib.couchkit import *
from models import *
+server = Server()
EOF
python -i /tmp/shellee.py
diff --git a/scripts/dbtest.py b/scripts/dbtest.py
index 07641d2..b6da1f8 100755
--- a/scripts/dbtest.py
+++ b/scripts/dbtest.py
@@ -1,18 +1,19 @@
#!/usr/bin/env python
import sys
sys.path.insert(0, '.')
from models import User, Entry
+from lib.couchkit import Server
user1 = User.signup('zk', 'passwd', 'hihicom')
user2 = User.signup('mj', 'passwd1', 'whihicom')
user1.follow(user2)
user1.update('haha, I am here')
user2.update('yes, you got it')
user2.update('good so good')
for status, user in user1.follow_timeline():
print user, 'says:', status.content, status.created_time
-
-#print server['idx--user--email'].docs()
+server = Server()
+print server['idx--user--email'].docs()
diff --git a/scripts/init_db.py b/scripts/init_db.py
index f598d96..576b02d 100755
--- a/scripts/init_db.py
+++ b/scripts/init_db.py
@@ -1,19 +1,18 @@
#!/usr/bin/env python
import sys, os
sys.path.insert(0, '.')
from lib.couchkit import Server
server = Server()
def reset_db(db_name):
if server[db_name]:
del server[db_name]
server.create_db(db_name)
reset_db('user')
reset_db('user-name')
reset_db('entry')
reset_db('idx--user--email')
-reset_db('idxuseremail')
|
superisaac/rainy
|
abd12a14126cd654ae5eb71408f5a0cbecce04d7
|
Added Index TODO: Make it work
|
diff --git a/lib/couchkit/dbwrapper.py b/lib/couchkit/dbwrapper.py
index 6db314e..017c5ee 100644
--- a/lib/couchkit/dbwrapper.py
+++ b/lib/couchkit/dbwrapper.py
@@ -1,229 +1,234 @@
import httpc as http
+from urllib import quote
import simplejson
import settings
class ServerError(Exception):
def __init__(self, status, msg):
self.status = status
self.msg = msg
def __str__(self):
return "Server Error: %s %s" % (self.status, self.msg)
class ServerConnectionError(ServerError):
def __init__(self, cerror):
response = cerror.params.response
super(ServerConnectionError, self).__init__(response.status,
response.reason + '|')
class Server(object):
servers = {}
def __new__(cls, server_url=settings.COUCHDB_SERVER):
if server_url not in cls.servers:
obj = object.__new__(cls)
obj.initialize(server_url)
cls.servers[server_url] = obj
return cls.servers[server_url]
def initialize(self, server_url):
self.server_url = server_url
if self.server_url[-1:] == '/':
self.server_url = self.server_url[:-1]
self.opened_dbs = {}
def handle_response(self, status, msg, body):
if status >= 200 and status < 300:
return simplejson.loads(body)
else:
raise ServerError(status, msg)
def dumps(self, obj):
if obj is None:
return ''
else:
return simplejson.dumps(obj)
def get(self, url='/'):
+ url = quote(url)
try:
t = http.get_(self.server_url + url,
headers={'Accept':'Application/json'},
)
except http.ConnectionError, e:
raise ServerConnectionError(e)
obj = self.handle_response(*t)
if isinstance(obj, dict):
obj = dict((k.encode(settings.ENCODING), v) for k, v in obj.iteritems())
return obj
def delete(self, url='/'):
+ url = quote(url)
try:
t = http.delete_(self.server_url + url)
except http.ConnectionError, e:
raise ServerConnectionError(e)
return self.handle_response(*t)
def post(self, url, obj):
+ url = quote(url)
data = self.dumps(obj)
try:
t = http.post_(self.server_url + url,
data=data,
headers={'content-type': 'application/json',
'accept':'application/json'
},
)
except http.ConnectionError, e:
raise ServerConnectionError(e)
return self.handle_response(*t)
def put(self, url, obj):
+ url = quote(url)
data = self.dumps(obj)
try:
t = http.put_(self.server_url + url,
data=data,
headers={'content-type': 'application/json',
'accept': 'application/json'},
)
except http.ConnectionError, e:
raise ServerConnectionError(e)
return self.handle_response(*t)
def __getitem__(self, dbname):
if dbname in self.opened_dbs:
return self.opened_dbs[dbname]
dbs = self.dbs()
if dbname in dbs:
db = Database(self, dbname)
self.opened_dbs[dbname] = db
return db
else:
#raise KeyError(dbname)
return None
def __delitem__(self, dbname):
if dbname in self.opened_dbs:
del self.opened_dbs[dbname]
return self.delete('/%s/' % dbname)
def dbs(self):
return self.get('/_all_dbs')
def create_db(self, dbname):
return self.put('/%s/' % dbname, None)
class Database:
def __init__(self, server, dbname):
self.server = server
self.dbname = dbname
self._cache = {}
self.enable_cache = True
def del_cache(self, docid):
if self.enable_cache:
if docid in self._cache:
del self._cache[docid]
def get_cache(self, docid):
if self.enable_cache:
if docid in self._cache:
return self._cache[docid]
def set_cache(self, docid, obj):
if self.enable_cache:
self._cache[docid] = obj
def clean_cache(self):
self._cache = {}
def info(self):
return self.server.get('/%s/' % self.dbname)
def docs(self):
return self.server.get('/%s/_all_docs' % self.dbname)
def get(self, docid):
obj = self.server.get('/%s/%s/' % (self.dbname, docid))
self.set_cache(docid, obj)
return obj
- def fetch(self, docid):
+ def fetch(self, docid, absent=None):
try:
obj = self.server.get('/%s/%s/' % (self.dbname, docid))
except ServerError:
- return None
+ return absent
self.set_cache(docid, obj)
return obj
def __getitem__(self, docid):
obj = self.get_cache(docid)
return obj or self.get(docid)
def __setitem__(self, docid, obj):
try:
#self.server.put('/%s/%s' % (self.dbname, docid), obj)
self.set(docid, obj)
except ServerError:
doc = self.get(docid)
rev = doc['_rev']
obj['_rev'] = rev
self.server.put('/%s/%s' % (self.dbname, docid), obj)
self.del_cache(docid)
def set(self, docid, obj):
self.server.put('/%s/%s' % (self.dbname, docid), obj)
def create_doc(self, obj):
""" Create a new document with the id and rev generated by server
:returns
{"ok":true, "id":"123BAC", "rev":"946B7D1C"}
"""
return self.server.post('/%s/' % self.dbname, obj)
def __delitem__(self, docid):
doc = self.get(docid)
rev = doc['_rev']
self.del_cache(docid)
return self.server.delete('/%s/%s/?rev=%s' % (self.dbname, docid, rev))
def query(self, map_func, reduce_func=""):
"query temporary view"
view = View(map_func, reduce_func)
return self.server.post('/%s/_temp_view' % (self.dbname), view.query_dict())
def query_view(self, design_name, view_name):
return self.server.get('/%s/_view/%s' % (self.dbname,
design_name
))
def create_or_replace_design(self, name, design):
self[name] = design.query_dict(name)
class View:
def __init__(self, map_func='', reduce_func=''):
self.map_func = map_func
self.reduce_func = reduce_func
def query_dict(self):
query = {}
if self.map_func:
query['map'] = self.map_func
if self.reduce_func:
query['reduce'] = self.reduce_func
return query
class Design:
def __init__(self, **views):
self.views = {}
self.views.update(views)
def query_dict(self, name):
query = {
'_id': '_design/%s' % name,
'language': 'javascript',
'views': {},
}
for name, view in self.views.iteritems():
query['views'][name] = view.query_dict()
return query
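The `Server.__new__` override above implements a per-URL singleton: the class keeps a `servers` cache keyed by `server_url`, so every `Server(...)` call with the same URL returns the same object and therefore shares `opened_dbs`. A self-contained sketch of the pattern (`CachedServer` is an illustrative name):

```python
class CachedServer(object):
    """Per-URL singleton, mirroring dbwrapper.Server's __new__ trick."""
    servers = {}

    def __new__(cls, server_url='http://localhost:5984'):
        if server_url not in cls.servers:
            obj = object.__new__(cls)
            # Setup happens only on a cache miss, as in Server.initialize.
            obj.server_url = server_url.rstrip('/')
            obj.opened_dbs = {}
            cls.servers[server_url] = obj
        return cls.servers[server_url]
```

Note that setup lives in `__new__` rather than `__init__`: Python re-runs `__init__` on every construction call, which is why the real `Server` also does its setup in a separate `initialize` method invoked only when the URL is first seen.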
diff --git a/lib/couchkit/ocm.py b/lib/couchkit/ocm.py
index ad13109..2d07a54 100644
--- a/lib/couchkit/ocm.py
+++ b/lib/couchkit/ocm.py
@@ -1,122 +1,210 @@
import time
from dbwrapper import Server, ServerError
+class Index(object):
+ def __init__(self, model, field):
+ self.db_name = 'idx' + model.db_name + field.fieldname
+ server = Server()
+ if not server[self.db_name]:
+ server.create_db(self.db_name)
+
+ def db(self):
+ server = Server()
+ return server[self.db_name]
+
+ def get_ids(self, v):
+ model_ids = self.db()["%s" % v]
+ return model_ids
+
+ def set(self, v, model_id):
+ db = self.db()
+ try:
+ db.set(v, [model_id])
+ except ServerError, e:
+ print e.status, e.msg
+            # Already exists
+ model_ids = set(db.fetch(v, []))
+ model_ids.add(model_id)
+ db[v] = list(model_ids)
+
+ def delete(self, v, model_id):
+ model_ids = set(self.get_ids(v))
+ if model_id in model_ids:
+ model_ids.remove(model_id)
+ self.db()[v] = list(model_ids)
+
+ def change(self, old_v, new_v, model_id):
+ if old_v:
+ self.delete(old_v, model_id)
+ if new_v:
+ self.set(new_v, model_id)
+
class Field(object):
def __init__(self, null=True):
self.null = null # Seems null is not used currently
+ self._fieldname = None
+ self.index = None
+ self.enable_index = False
+ def _get_fieldname(self):
+ return self._fieldname
+ def _set_fieldname(self, v):
+ self._fieldname = v
+ fieldname = property(_get_fieldname, _set_fieldname)
+
+ def probe_index(self, model):
+ if self.enable_index:
+ self.index = Index(model, self)
+
def default_value(self):
return None
def __get__(self, obj, type=None):
return getattr(obj, 'proxied_%s' % self.fieldname)
def __set__(self, obj, value):
obj.tainted = True
+ if self.index:
+ old_value = getattr(obj, 'proxied_%s' % self.fieldname, None)
+ obj.changed_indices.append((self.index, old_value, value))
setattr(obj, 'proxied_%s' % self.fieldname, value)
def __del__(self, obj):
pass
class ScalaField(Field):
- def __init__(self, default=None, null=True):
+ def __init__(self, default=None, null=True, index=False):
super(ScalaField, self).__init__(null)
self.default = default
+ self.enable_index = index
+
def default_value(self):
return self.default
class ListField(Field):
def default_value(self):
return []
class DateTimeField(Field):
def default_value(self):
return time.time()
+
class ModelMeta(type):
def __new__(meta, clsname, bases, classdict):
cls = type.__new__(meta, clsname, bases, classdict)
if clsname == 'Model':
return cls
- cls.db_name = clsname.lower()
+ cls.initialize()
+ return cls
+
+class Model(object):
+ __metaclass__ = ModelMeta
+
+ @classmethod
+ def indices(cls):
+ for field in cls.fields.itervalues():
+ if field.index:
+ yield field, field.index
+
+ @classmethod
+ def initialize(cls):
+ cls.db_name = cls.__name__.lower()
cls.fields = []
- for field, v in vars(cls).items():
+ for fieldname, v in vars(cls).items():
if isinstance(v, Field):
- v.fieldname = field
+ v.fieldname = fieldname
cls.fields.append(v)
+ v.probe_index(cls)
server = Server()
if not server[cls.db_name]:
server.create_db(cls.db_name)
- return cls
-class Model(object):
- __metaclass__ = ModelMeta
@classmethod
def create(cls, **kwargs):
model_obj = cls(**kwargs)
return model_obj
+ @classmethod
+ def db(cls):
+ server = Server()
+ return server[cls.db_name]
@classmethod
def all(cls):
- server = Server()
- db = server[cls.db_name]
+ db = cls.db()
return [cls.get(item['id']) for item in db.docs()['rows']]
@classmethod
def get(cls, id, exc_class=None):
- server = Server()
- db = server[cls.db_name]
+ db = cls.db()
try:
user_dict = db[id]
except ServerError:
if exc_class:
raise exc_class()
raise
obj = cls(**user_dict)
return obj
-
+
+ @classmethod
+ def get_by_ids(cls, *ids):
+ db = cls.db()
+ for id in ids:
+ try:
+ user_dict = db[id]
+ yield cls(**user_dict)
+ except ServerError:
+ pass
def __eq__(self, other):
return self.__class__ == other.__class__ and \
hasattr(self, 'id') and hasattr(other, 'id') and \
self.id == other.id
def __hash__(self):
return hash(getattr(self, 'id', None))
def save(self):
if not self.tainted:
# TODO: things should be reconsidered when foreign key added in
return
- server = Server()
- db = server[self.db_name]
+ db = self.db()
if hasattr(self, 'id'):
db[self.id] = self.get_dict()
else:
res = db.create_doc(self.get_dict())
self.id = res['id']
+ #for index, old_value, new_value in self.changed_indices:
+ # index.change(old_value, new_value, self.id)
+ self.changed_indices = []
+ self.tainted = False
+
def delete(self):
if hasattr(self, 'id'):
- server = Server()
- db = server[self.db_name]
- del db[self.id]
+ del self.db()[self.id]
+
+ for field, index in self.indices():
+ v = getattr(self, field.fieldname)
+ if v:
+ index.delete(v, self.id)
def get_dict(self):
info_dict = {}
for field in self.fields:
info_dict[field.fieldname] = getattr(self, field.fieldname)
if hasattr(self, 'id'):
info_dict['id'] = self.id
return info_dict
def __init__(self, **kwargs):
+ self.changed_indices = []
for field in self.fields:
setattr(self,
field.fieldname,
kwargs.get(field.fieldname,
field.default_value()))
if '_id' in kwargs:
self.id = kwargs['_id']
self.tainted = False
else:
self.tainted = True
diff --git a/models/__init__.py b/models/__init__.py
index 0b271b7..d12f0c8 100644
--- a/models/__init__.py
+++ b/models/__init__.py
@@ -1,7 +1,3 @@
-
from user import User
from entry import Entry
-#Entry.initialize()
-#User.initialize()
-
diff --git a/models/user.py b/models/user.py
index b7a6fea..2c23276 100644
--- a/models/user.py
+++ b/models/user.py
@@ -1,91 +1,91 @@
from lib.couchkit import Server, ServerError
from lib.couchkit.ocm import Model, ScalaField, ListField
from entry import Entry
class UserSignupException(Exception):
pass
class UserLoginException(Exception):
pass
class User(Model):
- email = ScalaField(null=False)
+ email = ScalaField(null=False, index=True)
username = ScalaField()
follows = ListField()
followed_by = ListField()
timeline = ListField()
@classmethod
def signup(cls, username, password, email):
server = Server()
namedb = server['user-name']
try:
namedb.set(username, {'password': password})
except ServerError:
raise UserSignupException('User %s already exists' %
username)
user = cls.create(username=username, email=email)
user.save()
namedb[username] = dict(password=password,
user_id=user.id)
return user
@classmethod
def login(cls, username, password):
server = Server()
namedb = server['user-name']
try:
login_info = namedb[username]
if login_info['password'] == password:
return cls.get(login_info['user_id'])
except ServerError, e:
return None
return None
def delete(self):
server = Server()
db = server['user-name']
del db[self.username]
super(User, self).delete()
def follow(self, other):
other.followed_by.append(self.id)
self.follows.append(other.id)
other.save()
self.save()
def iter_follows(self):
yield self
for user_id in self.follows:
yield User.get(user_id)
def update(self, text):
entry = Entry.create(owner_id=self.id,
content=text)
entry.save()
self.timeline.insert(0, (entry.id,
entry.created_time))
self.save()
def user_timeline(self, time_limit=0):
for entry_id, created_time in self.timeline:
if created_time < time_limit:
break
entry = Entry.get(entry_id)
yield entry
def follow_timeline(self, time_limit=0):
timeline = []
for user in self.iter_follows():
for entry_id, created_time in user.timeline:
if created_time < time_limit:
break
timeline.append((-created_time, entry_id, user))
timeline.sort()
for _, entry_id, user in timeline:
entry = Entry.get(entry_id)
yield entry, user
def __str__(self):
return self.username
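A sketch of the `follow_timeline` merge above: gather `(negated timestamp, entry_id, author)` tuples from each followed user's newest-first timeline, stop per user at `time_limit`, sort, and return entries newest-first across all authors. Plain dicts stand in for the User/Entry objects.

```python
def follow_timeline(users, time_limit=0):
    timeline = []
    for user in users:
        for entry_id, created_time in user['timeline']:
            if created_time < time_limit:
                break  # each per-user timeline is already newest-first
            timeline.append((-created_time, entry_id, user['name']))
    timeline.sort()  # negated times: ascending sort = newest first
    return [(entry_id, name) for _, entry_id, name in timeline]

users = [
    {'name': 'zk', 'timeline': [('e3', 30), ('e1', 10)]},
    {'name': 'mj', 'timeline': [('e2', 20)]},
]
merged = follow_timeline(users)
```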
diff --git a/scripts/dbtest.py b/scripts/dbtest.py
index 5995730..07641d2 100755
--- a/scripts/dbtest.py
+++ b/scripts/dbtest.py
@@ -1,15 +1,18 @@
#!/usr/bin/env python
import sys
sys.path.insert(0, '.')
from models import User, Entry
-user1 = User.signup('zk', 'passwd', '[email protected]')
-user2 = User.signup('mj', 'passwd1', '[email protected]')
+user1 = User.signup('zk', 'passwd', 'hihicom')
+user2 = User.signup('mj', 'passwd1', 'whihicom')
user1.follow(user2)
user1.update('haha, I am here')
user2.update('yes, you got it')
user2.update('good so good')
for status, user in user1.follow_timeline():
print user, 'says:', status.content, status.created_time
+
+#print server['idx--user--email'].docs()
+
diff --git a/scripts/init_db.py b/scripts/init_db.py
index 4d45cf4..f598d96 100755
--- a/scripts/init_db.py
+++ b/scripts/init_db.py
@@ -1,17 +1,19 @@
#!/usr/bin/env python
import sys, os
sys.path.insert(0, '.')
from lib.couchkit import Server
server = Server()
def reset_db(db_name):
if server[db_name]:
del server[db_name]
server.create_db(db_name)
reset_db('user')
reset_db('user-name')
reset_db('entry')
+reset_db('idx--user--email')
+reset_db('idxuseremail')
|
superisaac/rainy
|
a418e93ec95b1cfdba0cecbb94fe36cabda8636b
|
Added __metaclass__ for Model; moved initialize to metaclass's __new__
|
diff --git a/lib/couchkit/ocm.py b/lib/couchkit/ocm.py
index e1fd33a..ad13109 100644
--- a/lib/couchkit/ocm.py
+++ b/lib/couchkit/ocm.py
@@ -1,116 +1,122 @@
import time
from dbwrapper import Server, ServerError
class Field(object):
def __init__(self, null=True):
self.null = null # Seems null is not used currently
def default_value(self):
return None
def __get__(self, obj, type=None):
return getattr(obj, 'proxied_%s' % self.fieldname)
def __set__(self, obj, value):
obj.tainted = True
setattr(obj, 'proxied_%s' % self.fieldname, value)
def __del__(self, obj):
pass
class ScalaField(Field):
def __init__(self, default=None, null=True):
super(ScalaField, self).__init__(null)
self.default = default
def default_value(self):
return self.default
class ListField(Field):
def default_value(self):
return []
class DateTimeField(Field):
def default_value(self):
return time.time()
+
+class ModelMeta(type):
+ def __new__(meta, clsname, bases, classdict):
+ cls = type.__new__(meta, clsname, bases, classdict)
+ if clsname == 'Model':
+ return cls
+ cls.db_name = clsname.lower()
+ cls.fields = []
+ for field, v in vars(cls).items():
+ if isinstance(v, Field):
+ v.fieldname = field
+ cls.fields.append(v)
+ server = Server()
+ if not server[cls.db_name]:
+ server.create_db(cls.db_name)
+ return cls
class Model(object):
+ __metaclass__ = ModelMeta
@classmethod
def create(cls, **kwargs):
model_obj = cls(**kwargs)
return model_obj
@classmethod
def all(cls):
server = Server()
db = server[cls.db_name]
return [cls.get(item['id']) for item in db.docs()['rows']]
@classmethod
def get(cls, id, exc_class=None):
server = Server()
db = server[cls.db_name]
try:
user_dict = db[id]
except ServerError:
if exc_class:
raise exc_class()
raise
obj = cls(**user_dict)
return obj
- @classmethod
- def initialize(cls):
- cls.db_name = cls.__name__.lower()
- cls.fields = []
- for field, v in vars(cls).items():
- if isinstance(v, Field):
- v.fieldname = field
- cls.fields.append(v)
- server = Server()
- if not server[cls.db_name]:
- server.create_db(cls.db_name)
def __eq__(self, other):
return self.__class__ == other.__class__ and \
hasattr(self, 'id') and hasattr(other, 'id') and \
self.id == other.id
def __hash__(self):
return hash(getattr(self, 'id', None))
def save(self):
if not self.tainted:
# TODO: things should be reconsidered when foreign key added in
return
server = Server()
db = server[self.db_name]
if hasattr(self, 'id'):
db[self.id] = self.get_dict()
else:
res = db.create_doc(self.get_dict())
self.id = res['id']
def delete(self):
if hasattr(self, 'id'):
server = Server()
db = server[self.db_name]
del db[self.id]
def get_dict(self):
info_dict = {}
for field in self.fields:
info_dict[field.fieldname] = getattr(self, field.fieldname)
if hasattr(self, 'id'):
info_dict['id'] = self.id
return info_dict
def __init__(self, **kwargs):
for field in self.fields:
setattr(self,
field.fieldname,
kwargs.get(field.fieldname,
field.default_value()))
if '_id' in kwargs:
self.id = kwargs['_id']
self.tainted = False
else:
self.tainted = True
diff --git a/models/__init__.py b/models/__init__.py
index 0cf0f79..0b271b7 100644
--- a/models/__init__.py
+++ b/models/__init__.py
@@ -1,7 +1,7 @@
from user import User
from entry import Entry
-Entry.initialize()
-User.initialize()
+#Entry.initialize()
+#User.initialize()
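A sketch of the ModelMeta pattern this commit introduces (written with Python 3's `metaclass=` syntax instead of `__metaclass__`): subclass creation itself computes `db_name` and collects `Field` attributes, so the explicit `initialize()` calls in `models/__init__.py` become unnecessary. Database creation is omitted; this is the class-dict bookkeeping only.

```python
class Field:
    pass

class ModelMeta(type):
    def __new__(meta, clsname, bases, classdict):
        cls = super().__new__(meta, clsname, bases, classdict)
        if clsname == 'Model':
            return cls  # skip the abstract base, as the diff does
        cls.db_name = clsname.lower()
        cls.fields = []
        for name, value in vars(cls).items():
            if isinstance(value, Field):
                value.fieldname = name
                cls.fields.append(value)
        return cls

class Model(metaclass=ModelMeta):
    pass

class Entry(Model):
    content = Field()
    owner_id = Field()
```

Merely writing the `class Entry(Model)` statement configures it, which is why the `Entry.initialize()` / `User.initialize()` lines could be commented out.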
|
superisaac/rainy
|
c81f16687ddcef0cbe78a597905d64f3bc10dc2a
|
TODO things
|
diff --git a/TODO b/TODO
index 3024251..f965fce 100644
--- a/TODO
+++ b/TODO
@@ -1,5 +1,6 @@
* Find timeline by time range(page)
- * revision functions for model
+ * Messages
+ * Revision functions for model
* Index support, one model several db
* Simple frontend, using django, php, ...
- * Support more mimetype, images, video, voice, ...
\ No newline at end of file
+ * Support more mimetype, images, video, voice, ...
|
superisaac/rainy
|
9cbb36eb1751669dd92ca337ad424a9bcdbeebe6
|
ignore files TODO
|
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..f3d74a9
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,2 @@
+*.pyc
+*~
diff --git a/TODO b/TODO
index f46ee65..3024251 100644
--- a/TODO
+++ b/TODO
@@ -1,6 +1,5 @@
-
-TODO:
* Find timeline by time range(page)
+ * revision functions for model
* Index support, one model several db
* Simple frontend, using django, php, ...
* Support more mimetype, images, video, voice, ...
\ No newline at end of file
diff --git a/app/user.py b/app/user.py
index fb855aa..81c5a85 100644
--- a/app/user.py
+++ b/app/user.py
@@ -1,54 +1,53 @@
from lib.httputil import require_login, AppError, HttpNotFound
from lib.couchkit import ServerError
from lib.util import require_login, force_unicode, ok
from models import User
class Signup:
def handle_POST(self, reactor):
info = reactor.input_data
try:
user = User.signup(info['username'], info['password'], info['email'])
except ServerError, e:
raise AppError(1001, e.msg)
return {'id': user.id,
'message': 'Welcome',
'username': user.username}
class UserInfo:
@require_login
def handle_GET(self, reactor, user_id=None):
if user_id:
user = User.get(user_id, exc_class=HttpNotFound)
else:
user = reactor.user
return {'id': user.id,
'username': user.username,
'follows': user.follows,
'followed_by': user.followed_by
}
class Timeline:
@require_login
def handle_GET(self, reactor, user_id=None):
if user_id:
user = User.get(user_id, exc_class=HttpNotFound)
else:
user = reactor.user
return [timeline.get_dict() for timeline, user in user.follow_timeline()]
-
class UserTimeline:
@require_login
def handle_GET(self, reactor, user_id=None):
if user_id:
user = User.get(user_id, exc_class=HttpNotFound)
else:
user = reactor.user
return [timeline.get_dict() for timeline, user in user.user_timeline()]
class Update:
@require_login
def handle_POST(self, reactor):
content = force_unicode(reactor.input_data['content'])
content = content[:140]
reactor.user.update(content)
return ok("Updated")
|
superisaac/rainy
|
547f3f7530bd5656aeeed11375ee7d38e893ebb3
|
make ocm model's get() method be able to raise specified exception
|
diff --git a/app/entry.py b/app/entry.py
index 00f643d..6200476 100644
--- a/app/entry.py
+++ b/app/entry.py
@@ -1,18 +1,18 @@
-from lib.httputil import require_login, AppError, Forbidden
+from lib.httputil import require_login, AppError, Forbidden, HttpNotFound
from lib.couchkit import ServerError
from lib.util import require_login, force_unicode, ok
from models import User, Entry
class EntryInfo:
@require_login
def handle_GET(self, reactor, entry_id):
- return Entry.get(entry_id).get_dict()
+ return Entry.get(entry_id, exc_class=HttpNotFound).get_dict()
class EntryDelete:
@require_login
def handle_POST(self, reactor, entry_id):
entry = Entry.get(entry_id)
if entry.owner_id != reactor.user.id:
raise Forbidden()
entry.delete()
return ok("Delete OK")
diff --git a/app/user.py b/app/user.py
index 5cf33ab..fb855aa 100644
--- a/app/user.py
+++ b/app/user.py
@@ -1,54 +1,54 @@
-from lib.httputil import require_login, AppError
+from lib.httputil import require_login, AppError, HttpNotFound
from lib.couchkit import ServerError
from lib.util import require_login, force_unicode, ok
from models import User
class Signup:
def handle_POST(self, reactor):
info = reactor.input_data
try:
user = User.signup(info['username'], info['password'], info['email'])
except ServerError, e:
raise AppError(1001, e.msg)
return {'id': user.id,
'message': 'Welcome',
'username': user.username}
class UserInfo:
@require_login
def handle_GET(self, reactor, user_id=None):
if user_id:
- user = User.get(user_id)
+ user = User.get(user_id, exc_class=HttpNotFound)
else:
user = reactor.user
return {'id': user.id,
'username': user.username,
'follows': user.follows,
'followed_by': user.followed_by
}
class Timeline:
@require_login
def handle_GET(self, reactor, user_id=None):
if user_id:
- user = User.get(user_id)
+ user = User.get(user_id, exc_class=HttpNotFound)
else:
user = reactor.user
return [timeline.get_dict() for timeline, user in user.follow_timeline()]
class UserTimeline:
@require_login
def handle_GET(self, reactor, user_id=None):
if user_id:
- user = User.get(user_id)
+ user = User.get(user_id, exc_class=HttpNotFound)
else:
user = reactor.user
return [timeline.get_dict() for timeline, user in user.user_timeline()]
class Update:
@require_login
def handle_POST(self, reactor):
content = force_unicode(reactor.input_data['content'])
content = content[:140]
reactor.user.update(content)
return ok("Updated")
diff --git a/lib/couchkit/__init__.py b/lib/couchkit/__init__.py
index 0990857..0d1f2f9 100644
--- a/lib/couchkit/__init__.py
+++ b/lib/couchkit/__init__.py
@@ -1,3 +1 @@
-
-#from db import *
from dbwrapper import Server, Database, Design, View, ServerError
diff --git a/lib/couchkit/ocm.py b/lib/couchkit/ocm.py
index 2a31733..e1fd33a 100644
--- a/lib/couchkit/ocm.py
+++ b/lib/couchkit/ocm.py
@@ -1,115 +1,116 @@
import time
from dbwrapper import Server, ServerError
-from lib.httputil import HttpNotFound
class Field(object):
def __init__(self, null=True):
self.null = null # Seems null is not used currently
def default_value(self):
return None
def __get__(self, obj, type=None):
return getattr(obj, 'proxied_%s' % self.fieldname)
def __set__(self, obj, value):
obj.tainted = True
setattr(obj, 'proxied_%s' % self.fieldname, value)
def __del__(self, obj):
pass
class ScalaField(Field):
def __init__(self, default=None, null=True):
super(ScalaField, self).__init__(null)
self.default = default
def default_value(self):
return self.default
class ListField(Field):
def default_value(self):
return []
class DateTimeField(Field):
def default_value(self):
return time.time()
class Model(object):
@classmethod
def create(cls, **kwargs):
model_obj = cls(**kwargs)
return model_obj
@classmethod
def all(cls):
server = Server()
db = server[cls.db_name]
return [cls.get(item['id']) for item in db.docs()['rows']]
@classmethod
- def get(cls, id):
+ def get(cls, id, exc_class=None):
server = Server()
db = server[cls.db_name]
try:
user_dict = db[id]
except ServerError:
- raise HttpNotFound()
+ if exc_class:
+ raise exc_class()
+ raise
obj = cls(**user_dict)
return obj
@classmethod
def initialize(cls):
cls.db_name = cls.__name__.lower()
cls.fields = []
for field, v in vars(cls).items():
if isinstance(v, Field):
v.fieldname = field
cls.fields.append(v)
server = Server()
if not server[cls.db_name]:
server.create_db(cls.db_name)
def __eq__(self, other):
return self.__class__ == other.__class__ and \
hasattr(self, 'id') and hasattr(other, 'id') and \
self.id == other.id
def __hash__(self):
return hash(getattr(self, 'id', None))
def save(self):
if not self.tainted:
# TODO: things should be reconsidered when foreign key added in
return
server = Server()
db = server[self.db_name]
if hasattr(self, 'id'):
db[self.id] = self.get_dict()
else:
res = db.create_doc(self.get_dict())
self.id = res['id']
def delete(self):
if hasattr(self, 'id'):
server = Server()
db = server[self.db_name]
del db[self.id]
def get_dict(self):
info_dict = {}
for field in self.fields:
info_dict[field.fieldname] = getattr(self, field.fieldname)
if hasattr(self, 'id'):
info_dict['id'] = self.id
return info_dict
def __init__(self, **kwargs):
for field in self.fields:
setattr(self,
field.fieldname,
kwargs.get(field.fieldname,
field.default_value()))
if '_id' in kwargs:
self.id = kwargs['_id']
self.tainted = False
else:
self.tainted = True
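A sketch of the `exc_class` pattern this commit adds to `Model.get()`: the storage layer's "missing document" error can be translated into whatever exception the caller prefers (e.g. `HttpNotFound` for web handlers), or re-raised untouched. A plain dict stands in for the CouchDB database here, so `KeyError` plays the role of `ServerError`.

```python
class HttpNotFound(Exception):
    pass

def get(db, doc_id, exc_class=None):
    try:
        return db[doc_id]
    except KeyError:
        if exc_class:
            raise exc_class()
        raise  # bare raise preserves the original error for non-HTTP callers

db = {'u1': {'username': 'zk'}}
```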
|
superisaac/rainy
|
198ca90ccc5d344243669cfd59781e30cea18620
|
* Added TODO
* Added mimetype field for Entry
|
diff --git a/TODO b/TODO
new file mode 100644
index 0000000..f46ee65
--- /dev/null
+++ b/TODO
@@ -0,0 +1,6 @@
+
+TODO:
+ * Find timeline by time range(page)
+ * Index support, one model several db
+ * Simple frontend, using django, php, ...
+ * Support more mimetype, images, video, voice, ...
\ No newline at end of file
diff --git a/models/entry.py b/models/entry.py
index c62766e..a712aa6 100644
--- a/models/entry.py
+++ b/models/entry.py
@@ -1,8 +1,8 @@
from lib.couchkit import Server, ServerError
from lib.couchkit.ocm import Model, ScalaField, ListField, DateTimeField
class Entry(Model):
owner_id = ScalaField(null=False)
content = ScalaField()
created_time = DateTimeField()
-
+ mimetype = ScalaField(default="text/plain")
|
superisaac/rainy
|
30deb08cfb6b383e8bf60bded5fda01afa46eba2
|
Project files added in
|
diff --git a/app/__init__.py b/app/__init__.py
new file mode 100644
index 0000000..2cac803
--- /dev/null
+++ b/app/__init__.py
@@ -0,0 +1,3 @@
+from hello import Hello
+from user import Signup, UserInfo, Timeline, UserTimeline, Update
+from entry import EntryInfo, EntryDelete
diff --git a/app/entry.py b/app/entry.py
new file mode 100644
index 0000000..00f643d
--- /dev/null
+++ b/app/entry.py
@@ -0,0 +1,18 @@
+from lib.httputil import require_login, AppError, Forbidden
+from lib.couchkit import ServerError
+from lib.util import require_login, force_unicode, ok
+from models import User, Entry
+
+class EntryInfo:
+ @require_login
+ def handle_GET(self, reactor, entry_id):
+ return Entry.get(entry_id).get_dict()
+
+class EntryDelete:
+ @require_login
+ def handle_POST(self, reactor, entry_id):
+ entry = Entry.get(entry_id)
+ if entry.owner_id != reactor.user.id:
+ raise Forbidden()
+ entry.delete()
+ return ok("Delete OK")
diff --git a/app/hello.py b/app/hello.py
new file mode 100644
index 0000000..63c7012
--- /dev/null
+++ b/app/hello.py
@@ -0,0 +1,6 @@
+from lib.httputil import require_login
+
+class Hello:
+ @require_login
+ def handle_GET(self, reactor):
+ return 'Hello World'
diff --git a/app/user.py b/app/user.py
new file mode 100644
index 0000000..5cf33ab
--- /dev/null
+++ b/app/user.py
@@ -0,0 +1,54 @@
+from lib.httputil import require_login, AppError
+from lib.couchkit import ServerError
+from lib.util import require_login, force_unicode, ok
+from models import User
+
+class Signup:
+ def handle_POST(self, reactor):
+ info = reactor.input_data
+ try:
+ user = User.signup(info['username'], info['password'], info['email'])
+ except ServerError, e:
+ raise AppError(1001, e.msg)
+ return {'id': user.id,
+ 'message': 'Welcome',
+ 'username': user.username}
+
+class UserInfo:
+ @require_login
+ def handle_GET(self, reactor, user_id=None):
+ if user_id:
+ user = User.get(user_id)
+ else:
+ user = reactor.user
+ return {'id': user.id,
+ 'username': user.username,
+ 'follows': user.follows,
+ 'followed_by': user.followed_by
+ }
+
+class Timeline:
+ @require_login
+ def handle_GET(self, reactor, user_id=None):
+ if user_id:
+ user = User.get(user_id)
+ else:
+ user = reactor.user
+ return [timeline.get_dict() for timeline, user in user.follow_timeline()]
+
+class UserTimeline:
+ @require_login
+ def handle_GET(self, reactor, user_id=None):
+ if user_id:
+ user = User.get(user_id)
+ else:
+ user = reactor.user
+ return [timeline.get_dict() for timeline, user in user.user_timeline()]
+
+class Update:
+ @require_login
+ def handle_POST(self, reactor):
+ content = force_unicode(reactor.input_data['content'])
+ content = content[:140]
+ reactor.user.update(content)
+ return ok("Updated")
diff --git a/application.py b/application.py
new file mode 100644
index 0000000..c5c991a
--- /dev/null
+++ b/application.py
@@ -0,0 +1,80 @@
+#!/usr/bin/env python
+import re
+import urllib
+import traceback
+from cStringIO import StringIO
+from simplejson import loads, dumps
+import settings
+from lib.httputil import HttpAuthentication, HttpNotFound, AppError, Forbidden
+import app
+
+RESPONSE_CONTENT_TYPE = 'application/json'
+class Reactor:
+ def __init__(self, env, start_response):
+ self.environ = env
+ self.start = start_response
+ self.status = '200 OK'
+ self.response_headers = [('Content-Type', RESPONSE_CONTENT_TYPE)]
+ self.handleRequest()
+
+ def handleRequest(self):
+ self.params = {}
+ for k, v in re.findall(r'([^&#=]+)=([^&#=]*)', self.environ['QUERY_STRING']):
+ #self.params.setdefault(k, []).append(urllib.unquote(v))
+ self.params[k] = urllib.unquote(v)
+
+ def notfound(self):
+ self.start('404 Not Found', [('Content-Type', RESPONSE_CONTENT_TYPE)])
+ return dumps({'error_code': 404, 'message': '404 Not Found'})
+
+ def authentication(self, realm=settings.APP_REALM):
+ self.start('401 Unauthorized', [('Content-Type', RESPONSE_CONTENT_TYPE),
+ ('WWW-Authenticate', 'Basic realm="%s"' % realm)])
+ return dumps({'error_code': 401, 'message': '401 Unauthorized'})
+
+ def servererror(self, trace_info):
+ self.start('500 Server Error', [('Content-Type', 'text/plain'),])
+ return trace_info
+
+ def apperror(self, error_code, message):
+ self.start('500 Server Error', [('Content-Type', RESPONSE_CONTENT_TYPE),])
+ return dumps({'error_code': error_code, 'message': message})
+
+ def forbidden(self):
+ self.start('403 Forbidden', [('Content-Type', RESPONSE_CONTENT_TYPE),])
+ return dumps({'error_code': 403, 'message': 'Permission Denied'})
+
+ def __iter__(self):
+ yield self.one_request()
+
+ def one_request(self):
+ path = self.environ['PATH_INFO']
+ method = self.environ['REQUEST_METHOD']
+ if method in ('POST', 'PUT'):
+ content_length = int(self.environ['CONTENT_LENGTH'])
+ data = self.environ['wsgi.input'].read(content_length)
+ self.input_data = loads(data)
+ else:
+ self.input_data = None
+ try:
+ for pattern, handler_name in settings.URLMAP:
+ m = re.search(pattern, path)
+ if m:
+ handler = getattr(app, handler_name)()
+ obj = getattr(handler, 'handle_%s' % method)(self, **m.groupdict())
+ response = dumps(obj)
+ self.start(self.status, self.response_headers)
+ return response
+ raise HttpNotFound()
+ except HttpAuthentication:
+ return self.authentication()
+ except HttpNotFound:
+ return self.notfound()
+ except AppError, e:
+ return self.apperror(e.error_code, e.message)
+ except Forbidden:
+ return self.forbidden()
+ except:
+ buf = StringIO()
+ traceback.print_exc(file=buf)
+ return self.servererror(buf.getvalue())
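The URL dispatch loop in `Reactor.one_request` above can be sketched on its own: scan a `(pattern, handler_name)` map and call `handle_<METHOD>` with the pattern's named groups as keyword arguments. `UserInfo`, `URLMAP`, and `HANDLERS` below are illustrative stand-ins for the `app` module and `settings.URLMAP`.

```python
import re

class UserInfo:
    def handle_GET(self, user_id=None):
        return {'id': user_id or 'me'}

URLMAP = [
    (r'^/user/(?P<user_id>\w+)$', 'UserInfo'),
    (r'^/user$', 'UserInfo'),
]
HANDLERS = {'UserInfo': UserInfo}

def dispatch(method, path):
    for pattern, handler_name in URLMAP:
        m = re.search(pattern, path)
        if m:
            handler = HANDLERS[handler_name]()
            # Named groups become keyword arguments, as in one_request().
            return getattr(handler, 'handle_%s' % method)(**m.groupdict())
    raise LookupError(path)  # stands in for HttpNotFound
```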
diff --git a/httpd.py b/httpd.py
new file mode 100755
index 0000000..da2ff93
--- /dev/null
+++ b/httpd.py
@@ -0,0 +1,11 @@
+#!/usr/bin/env python
+
+import settings
+from application import Reactor
+
+def eventlet_server():
+ from eventlet import wsgi, api
+ wsgi.server(api.tcp_listener(('', settings.SERVER_PORT)), Reactor)
+
+if __name__ == '__main__':
+ eventlet_server()
diff --git a/lib/__init__.py b/lib/__init__.py
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/lib/__init__.py
@@ -0,0 +1 @@
+
diff --git a/lib/couchkit/__init__.py b/lib/couchkit/__init__.py
new file mode 100644
index 0000000..0990857
--- /dev/null
+++ b/lib/couchkit/__init__.py
@@ -0,0 +1,3 @@
+
+#from db import *
+from dbwrapper import Server, Database, Design, View, ServerError
diff --git a/lib/couchkit/dbwrapper.py b/lib/couchkit/dbwrapper.py
new file mode 100644
index 0000000..6db314e
--- /dev/null
+++ b/lib/couchkit/dbwrapper.py
@@ -0,0 +1,229 @@
+import httpc as http
+import simplejson
+import settings
+
+class ServerError(Exception):
+ def __init__(self, status, msg):
+ self.status = status
+ self.msg = msg
+
+ def __str__(self):
+ return "Server Error: %s %s" % (self.status, self.msg)
+
+class ServerConnectionError(ServerError):
+ def __init__(self, cerror):
+ response = cerror.params.response
+ super(ServerConnectionError, self).__init__(response.status,
+ response.reason + '|')
+
+class Server(object):
+ servers = {}
+ def __new__(cls, server_url=settings.COUCHDB_SERVER):
+ if server_url not in cls.servers:
+ obj = object.__new__(cls)
+ obj.initialize(server_url)
+ cls.servers[server_url] = obj
+ return cls.servers[server_url]
+
+ def initialize(self, server_url):
+ self.server_url = server_url
+ if self.server_url[-1:] == '/':
+ self.server_url = self.server_url[:-1]
+
+ self.opened_dbs = {}
+
+ def handle_response(self, status, msg, body):
+ if status >= 200 and status < 300:
+ return simplejson.loads(body)
+ else:
+ raise ServerError(status, msg)
+
+ def dumps(self, obj):
+ if obj is None:
+ return ''
+ else:
+ return simplejson.dumps(obj)
+
+ def get(self, url='/'):
+ try:
+ t = http.get_(self.server_url + url,
+ headers={'Accept':'Application/json'},
+ )
+ except http.ConnectionError, e:
+ raise ServerConnectionError(e)
+ obj = self.handle_response(*t)
+ if isinstance(obj, dict):
+ obj = dict((k.encode(settings.ENCODING), v) for k, v in obj.iteritems())
+ return obj
+
+ def delete(self, url='/'):
+ try:
+ t = http.delete_(self.server_url + url)
+ except http.ConnectionError, e:
+ raise ServerConnectionError(e)
+
+ return self.handle_response(*t)
+
+ def post(self, url, obj):
+ data = self.dumps(obj)
+ try:
+ t = http.post_(self.server_url + url,
+ data=data,
+ headers={'content-type': 'application/json',
+ 'accept':'application/json'
+ },
+ )
+ except http.ConnectionError, e:
+ raise ServerConnectionError(e)
+ return self.handle_response(*t)
+
+ def put(self, url, obj):
+ data = self.dumps(obj)
+ try:
+ t = http.put_(self.server_url + url,
+ data=data,
+                           headers={'content-type': 'application/json',
+ 'accept':'Application/json'},
+ )
+ except http.ConnectionError, e:
+ raise ServerConnectionError(e)
+ return self.handle_response(*t)
+
+ def __getitem__(self, dbname):
+ if dbname in self.opened_dbs:
+ return self.opened_dbs[dbname]
+ dbs = self.dbs()
+ if dbname in dbs:
+ db = Database(self, dbname)
+ self.opened_dbs[dbname] = db
+ return db
+ else:
+ #raise KeyError(dbname)
+ return None
+
+ def __delitem__(self, dbname):
+ if dbname in self.opened_dbs:
+ del self.opened_dbs[dbname]
+ return self.delete('/%s/' % dbname)
+
+ def dbs(self):
+ return self.get('/_all_dbs')
+
+ def create_db(self, dbname):
+ return self.put('/%s/' % dbname, None)
+
+class Database:
+ def __init__(self, server, dbname):
+ self.server = server
+ self.dbname = dbname
+ self._cache = {}
+ self.enable_cache = True
+
+ def del_cache(self, docid):
+ if self.enable_cache:
+ if docid in self._cache:
+ del self._cache[docid]
+ def get_cache(self, docid):
+ if self.enable_cache:
+ if docid in self._cache:
+ return self._cache[docid]
+ def set_cache(self, docid, obj):
+ if self.enable_cache:
+ self._cache[docid] = obj
+
+ def clean_cache(self):
+ self._cache = {}
+
+ def info(self):
+ return self.server.get('/%s/' % self.dbname)
+
+ def docs(self):
+ return self.server.get('/%s/_all_docs' % self.dbname)
+
+ def get(self, docid):
+ obj = self.server.get('/%s/%s/' % (self.dbname, docid))
+ self.set_cache(docid, obj)
+ return obj
+
+ def fetch(self, docid):
+ try:
+ obj = self.server.get('/%s/%s/' % (self.dbname, docid))
+ except ServerError:
+ return None
+ self.set_cache(docid, obj)
+ return obj
+
+ def __getitem__(self, docid):
+ obj = self.get_cache(docid)
+ return obj or self.get(docid)
+
+ def __setitem__(self, docid, obj):
+ try:
+ #self.server.put('/%s/%s' % (self.dbname, docid), obj)
+ self.set(docid, obj)
+ except ServerError:
+ doc = self.get(docid)
+ rev = doc['_rev']
+ obj['_rev'] = rev
+ self.server.put('/%s/%s' % (self.dbname, docid), obj)
+ self.del_cache(docid)
+
+ def set(self, docid, obj):
+ self.server.put('/%s/%s' % (self.dbname, docid), obj)
+
+ def create_doc(self, obj):
+ """ Create a new document with the id and rev generated by server
+ :returns
+ {"ok":true, "id":"123BAC", "rev":"946B7D1C"}
+ """
+ return self.server.post('/%s/' % self.dbname, obj)
+
+ def __delitem__(self, docid):
+ doc = self.get(docid)
+ rev = doc['_rev']
+ self.del_cache(docid)
+ return self.server.delete('/%s/%s/?rev=%s' % (self.dbname, docid, rev))
+
+ def query(self, map_func, reduce_func=""):
+ "query temporary view"
+ view = View(map_func, reduce_func)
+ return self.server.post('/%s/_temp_view' % (self.dbname), view.query_dict())
+
+    def query_view(self, design_name, view_name):
+        return self.server.get('/%s/_view/%s/%s' % (self.dbname,
+                                                    design_name,
+                                                    view_name))
+
+ def create_or_replace_design(self, name, design):
+ self[name] = design.query_dict(name)
+
+class View:
+ def __init__(self, map_func='', reduce_func=''):
+ self.map_func = map_func
+ self.reduce_func = reduce_func
+
+ def query_dict(self):
+ query = {}
+ if self.map_func:
+ query['map'] = self.map_func
+ if self.reduce_func:
+ query['reduce'] = self.reduce_func
+ return query
+
+class Design:
+ def __init__(self, **views):
+ self.views = {}
+ self.views.update(views)
+
+ def query_dict(self, name):
+ query = {
+ '_id': '_design/%s' % name,
+ 'language': 'javascript',
+ 'views': {},
+ }
+
+ for name, view in self.views.iteritems():
+ query['views'][name] = view.query_dict()
+ return query
+
+
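Since this dump carries no running CouchDB, the payloads the `View` and `Design` classes above emit can be sketched standalone. This is a Python 3 re-rendering (using `items()` rather than `iteritems()`); the function names below are illustrative helpers, not part of the library:

```python
def view_query_dict(map_func='', reduce_func=''):
    # Mirrors View.query_dict: include only the non-empty functions.
    query = {}
    if map_func:
        query['map'] = map_func
    if reduce_func:
        query['reduce'] = reduce_func
    return query

def design_query_dict(name, **views):
    # Mirrors Design.query_dict: wrap named views in a _design document.
    return {
        '_id': '_design/%s' % name,
        'language': 'javascript',
        'views': {vname: vdict for vname, vdict in views.items()},
    }

doc = design_query_dict(
    'by_owner',
    by_owner=view_query_dict('function(doc){ emit(doc.owner_id, doc); }'))
```

The resulting dict is the JSON shape CouchDB expects when a design document is stored, which is what `create_or_replace_design` above ultimately PUTs.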
diff --git a/lib/couchkit/httpc.py b/lib/couchkit/httpc.py
new file mode 100755
index 0000000..7f68ffd
--- /dev/null
+++ b/lib/couchkit/httpc.py
@@ -0,0 +1,663 @@
+"""\
+@file httpc.py
+@author Donovan Preston
+
+Copyright (c) 2005-2006, Donovan Preston
+Copyright (c) 2007, Linden Research, Inc.
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
+"""
+
+import copy
+import datetime
+import httplib
+import os.path
+import os
+import time
+import urlparse
+
+
+_old_HTTPConnection = httplib.HTTPConnection
+_old_HTTPSConnection = httplib.HTTPSConnection
+
+
+HTTP_TIME_FORMAT = '%a, %d %b %Y %H:%M:%S GMT'
+to_http_time = lambda t: time.strftime(HTTP_TIME_FORMAT, time.gmtime(t))
+
+try:
+
+ from mx import DateTime
+ def from_http_time(t, defaultdate=None):
+ return int(DateTime.Parser.DateTimeFromString(
+ t, defaultdate=defaultdate).gmticks())
+
+except ImportError:
+
+ import calendar
+ parse_formats = (HTTP_TIME_FORMAT, # RFC 1123
+ '%A, %d-%b-%y %H:%M:%S GMT', # RFC 850
+ '%a %b %d %H:%M:%S %Y') # asctime
+ def from_http_time(t, defaultdate=None):
+ for parser in parse_formats:
+ try:
+ return calendar.timegm(time.strptime(t, parser))
+ except ValueError:
+ continue
+ return defaultdate
+
+
+def host_and_port_from_url(url):
+ """@brief Simple function to get host and port from an http url.
+ @return Returns host, port and port may be None.
+ """
+ host = None
+ port = None
+ parsed_url = urlparse.urlparse(url)
+    try:
+        host, port = parsed_url[1].split(':')
+    except ValueError:
+        host = parsed_url[1]
+    return host, port
+
+
+def better_putrequest(self, method, url, skip_host=0, skip_accept_encoding=0):
+ self.method = method
+ self.path = url
+ try:
+ # Python 2.4 and above
+ self.old_putrequest(method, url, skip_host, skip_accept_encoding)
+ except TypeError:
+ # Python 2.3 and below
+ self.old_putrequest(method, url, skip_host)
+
+
+class HttpClient(httplib.HTTPConnection):
+ """A subclass of httplib.HTTPConnection that provides a better
+ putrequest that records the method and path on the request object.
+ """
+ def __init__(self, host, port=None, strict=None):
+ _old_HTTPConnection.__init__(self, host, port, strict)
+
+ old_putrequest = httplib.HTTPConnection.putrequest
+ putrequest = better_putrequest
+
+class HttpsClient(httplib.HTTPSConnection):
+ """A subclass of httplib.HTTPSConnection that provides a better
+ putrequest that records the method and path on the request object.
+ """
+ old_putrequest = httplib.HTTPSConnection.putrequest
+ putrequest = better_putrequest
+
+
+def wrap_httplib_with_httpc():
+ """Replace httplib's implementations of these classes with our enhanced ones.
+
+ Needed to work around code that uses httplib directly."""
+ httplib.HTTP._connection_class = httplib.HTTPConnection = HttpClient
+ httplib.HTTPS._connection_class = httplib.HTTPSConnection = HttpsClient
+
+
+
+class FileScheme(object):
+    """Simple scheme-to-local-file wrapper."""
+ host = '<file>'
+ port = '<file>'
+ reason = '<none>'
+
+ def __init__(self, location):
+ pass
+
+ def request(self, method, fullpath, body='', headers=None):
+ self.status = 200
+ self.msg = ''
+ self.path = fullpath.split('?')[0]
+ self.method = method = method.lower()
+ assert method in ('get', 'put', 'delete')
+ if method == 'delete':
+ try:
+ os.remove(self.path)
+ except OSError:
+ pass # don't complain if already deleted
+ elif method == 'put':
+ try:
+ f = file(self.path, 'w')
+ f.write(body)
+ f.close()
+ except IOError, e:
+ self.status = 500
+ self.raise_connection_error()
+ elif method == 'get':
+ if not os.path.exists(self.path):
+ self.status = 404
+ self.raise_connection_error(NotFound)
+
+ def connect(self):
+ pass
+
+ def getresponse(self):
+ return self
+
+ def getheader(self, header):
+ if header == 'content-length':
+ try:
+ return os.path.getsize(self.path)
+ except OSError:
+ return 0
+
+ def read(self, howmuch=None):
+ if self.method == 'get':
+ try:
+ fl = file(self.path, 'r')
+ if howmuch is None:
+ return fl.read()
+ else:
+ return fl.read(howmuch)
+ except IOError:
+ self.status = 500
+ self.raise_connection_error()
+ return ''
+
+ def raise_connection_error(self, klass=None):
+ if klass is None:
+ klass=ConnectionError
+ raise klass(_Params('file://' + self.path, self.method))
+
+    def close(self):
+        """No-op: read() consumes the whole file, and the file object is
+        already out of scope by this point."""
+        pass
+
+class _Params(object):
+ def __init__(self, url, method, body='', headers=None, dumper=None,
+ loader=None, use_proxy=False, ok=(), aux=None):
+ '''
+ @param connection The connection (as returned by make_connection) to use for the request.
+ @param method HTTP method
+ @param url Full url to make request on.
+ @param body HTTP body, if necessary for the method. Can be any object, assuming an appropriate dumper is also provided.
+ @param headers Dict of header name to header value
+ @param dumper Method that formats the body as a string.
+ @param loader Method that converts the response body into an object.
+ @param use_proxy Set to True if the connection is to a proxy.
+ @param ok Set of valid response statuses. If the returned status is not in this list, an exception is thrown.
+ '''
+ self.instance = None
+ self.url = url
+ self.path = url
+ self.method = method
+ self.body = body
+ if headers is None:
+ self.headers = {}
+ else:
+ self.headers = headers
+ self.dumper = dumper
+ self.loader = loader
+ self.use_proxy = use_proxy
+ self.ok = ok or (200, 201, 204)
+ self.orig_body = body
+ self.aux = aux
+
+
+class _LocalParams(_Params):
+ def __init__(self, params, **kwargs):
+ self._delegate = params
+ for k, v in kwargs.iteritems():
+ setattr(self, k, v)
+
+ def __getattr__(self, key):
+ if key == '__setstate__': return
+ return getattr(self._delegate, key)
+
+ def __reduce__(self):
+ params = copy.copy(self._delegate)
+ kwargs = copy.copy(self.__dict__)
+ assert(kwargs.has_key('_delegate'))
+ del kwargs['_delegate']
+ if hasattr(params,'aux'): del params.aux
+ return (_LocalParams,(params,),kwargs)
+
+ def __setitem__(self, k, item):
+ setattr(self, k, item)
+
+class ConnectionError(Exception):
+ """Detailed exception class for reporting on http connection problems.
+
+ There are lots of subclasses so you can use closely-specified
+ exception clauses."""
+ def __init__(self, params):
+ self.params = params
+ Exception.__init__(self)
+
+ def location(self):
+ return self.params.response.msg.dict.get('location')
+
+ def expired(self):
+ # 14.21 Expires
+ #
+ # HTTP/1.1 clients and caches MUST treat other invalid date
+ # formats, especially including the value "0", as in the past
+ # (i.e., "already expired").
+ expires = from_http_time(
+ self.params.response_headers.get('expires', '0'),
+ defaultdate=DateTime.Epoch)
+ return time.time() > expires
+
+ def __repr__(self):
+ response = self.params.response
+ return "%s(url=%r, method=%r, status=%r, reason=%r, body=%r)" % (
+ self.__class__.__name__, self.params.url, self.params.method,
+ response.status, response.reason, self.params.body)
+
+ __str__ = __repr__
+
+
+class UnparseableResponse(ConnectionError):
+ """Raised when a loader cannot parse the response from the server."""
+ def __init__(self, content_type, response, url):
+ self.content_type = content_type
+ self.response = response
+ self.url = url
+ Exception.__init__(self)
+
+ def __repr__(self):
+        return "Could not parse the data at the URL %r of content-type %r\nData:\n%r" % (
+            self.url, self.content_type, self.response)
+
+ __str__ = __repr__
+
+
+class Accepted(ConnectionError):
+ """ 202 Accepted """
+ pass
+
+
+class Retriable(ConnectionError):
+ def retry_method(self):
+ return self.params.method
+
+ def retry_url(self):
+ return self.location() or self.url()
+
+ def retry_(self):
+ params = _LocalParams(self.params,
+ url=self.retry_url(),
+ method=self.retry_method())
+ return self.params.instance.request_(params)
+
+ def retry(self):
+ return self.retry_()[-1]
+
+
+class MovedPermanently(Retriable):
+ """ 301 Moved Permanently """
+ pass
+
+
+class Found(Retriable):
+ """ 302 Found """
+
+ pass
+
+
+class SeeOther(Retriable):
+ """ 303 See Other """
+
+ def retry_method(self):
+ return 'GET'
+
+
+class NotModified(ConnectionError):
+ """ 304 Not Modified """
+ pass
+
+
+class TemporaryRedirect(Retriable):
+ """ 307 Temporary Redirect """
+ pass
+
+
+class BadRequest(ConnectionError):
+ """ 400 Bad Request """
+ pass
+
+class Unauthorized(ConnectionError):
+ """ 401 Unauthorized """
+ pass
+
+class PaymentRequired(ConnectionError):
+ """ 402 Payment Required """
+ pass
+
+
+class Forbidden(ConnectionError):
+ """ 403 Forbidden """
+ pass
+
+
+class NotFound(ConnectionError):
+ """ 404 Not Found """
+ pass
+
+class RequestTimeout(ConnectionError):
+ """ 408 RequestTimeout """
+ pass
+
+
+class Gone(ConnectionError):
+ """ 410 Gone """
+ pass
+
+class LengthRequired(ConnectionError):
+ """ 411 Length Required """
+ pass
+
+class RequestEntityTooLarge(ConnectionError):
+ """ 413 Request Entity Too Large """
+ pass
+
+class RequestURITooLong(ConnectionError):
+ """ 414 Request-URI Too Long """
+ pass
+
+class UnsupportedMediaType(ConnectionError):
+ """ 415 Unsupported Media Type """
+ pass
+
+class RequestedRangeNotSatisfiable(ConnectionError):
+ """ 416 Requested Range Not Satisfiable """
+ pass
+
+class ExpectationFailed(ConnectionError):
+ """ 417 Expectation Failed """
+ pass
+
+class NotImplemented(ConnectionError):
+ """ 501 Not Implemented """
+ pass
+
+class ServiceUnavailable(Retriable):
+ """ 503 Service Unavailable """
+ def url(self):
+ return self.params._delegate.url
+
+class GatewayTimeout(Retriable):
+ """ 504 Gateway Timeout """
+ def url(self):
+ return self.params._delegate.url
+
+class HTTPVersionNotSupported(ConnectionError):
+ """ 505 HTTP Version Not Supported """
+ pass
+
+class InternalServerError(ConnectionError):
+ """ 500 Internal Server Error """
+ def __repr__(self):
+ try:
+ import simplejson
+ traceback = simplejson.loads(self.params.response_body)
+ except:
+ try:
+ from indra.base import llsd
+ traceback = llsd.parse(self.params.response_body)
+ except:
+ traceback = self.params.response_body
+ if(isinstance(traceback, dict)
+ and 'stack-trace' in traceback
+ and 'description' in traceback):
+ body = traceback
+ traceback = "Traceback (most recent call last):\n"
+ for frame in body['stack-trace']:
+ traceback += ' File "%s", line %s, in %s\n' % (
+ frame['filename'], frame['lineno'], frame['method'])
+ for line in frame['code']:
+ if line['lineno'] == frame['lineno']:
+ traceback += ' %s' % (line['line'].lstrip(), )
+ break
+ traceback += body['description']
+ return "The server raised an exception from our request:\n%s %s\n%s %s\n%s" % (
+ self.params.method, self.params.url, self.params.response.status, self.params.response.reason, traceback)
+ __str__ = __repr__
+
+
+
+status_to_error_map = {
+ 202: Accepted,
+ 301: MovedPermanently,
+ 302: Found,
+ 303: SeeOther,
+ 304: NotModified,
+ 307: TemporaryRedirect,
+ 400: BadRequest,
+ 401: Unauthorized,
+ 402: PaymentRequired,
+ 403: Forbidden,
+ 404: NotFound,
+ 408: RequestTimeout,
+ 410: Gone,
+ 411: LengthRequired,
+ 413: RequestEntityTooLarge,
+ 414: RequestURITooLong,
+ 415: UnsupportedMediaType,
+ 416: RequestedRangeNotSatisfiable,
+ 417: ExpectationFailed,
+ 500: InternalServerError,
+ 501: NotImplemented,
+ 503: ServiceUnavailable,
+ 504: GatewayTimeout,
+ 505: HTTPVersionNotSupported,
+}
+
+scheme_to_factory_map = {
+ 'http': HttpClient,
+ 'https': HttpsClient,
+ 'file': FileScheme,
+}
+
+
+def make_connection(scheme, location, use_proxy):
+ """ Create a connection object to a host:port.
+
+ @param scheme Protocol, scheme, whatever you want to call it. http, file, https are currently supported.
+ @param location Hostname and port number, formatted as host:port or http://host:port if you're so inclined.
+ @param use_proxy Connect to a proxy instead of the actual location. Uses environment variables to decide where the proxy actually lives.
+ """
+ if use_proxy:
+ if "http_proxy" in os.environ:
+ location = os.environ["http_proxy"]
+ elif "ALL_PROXY" in os.environ:
+ location = os.environ["ALL_PROXY"]
+ else:
+ location = "localhost:3128" #default to local squid
+
+ # run a little heuristic to see if location is an url, and if so parse out the hostpart
+ if location.startswith('http'):
+ _scheme, location, path, parameters, query, fragment = urlparse.urlparse(location)
+
+ result = scheme_to_factory_map[scheme](location)
+ result.connect()
+ return result
+
+
+def connect(url, use_proxy=False):
+ """ Create a connection object to the host specified in a url. Convenience function for make_connection."""
+ scheme, location = urlparse.urlparse(url)[:2]
+ return make_connection(scheme, location, use_proxy)
+
+
+def make_safe_loader(loader):
+ if not callable(loader):
+ return loader
+ def safe_loader(what):
+ try:
+ return loader(what)
+ except Exception:
+ import traceback
+ traceback.print_exc()
+ return None
+ return safe_loader
+
+
+class HttpSuite(object):
+ def __init__(self, dumper, loader, fallback_content_type):
+ self.dumper = dumper
+ self.loader = loader
+ self.fallback_content_type = fallback_content_type
+
+ def request_(self, params):
+ '''Make an http request to a url, for internal use mostly.'''
+
+ params = _LocalParams(params, instance=self)
+
+ (scheme, location, path, parameters, query,
+ fragment) = urlparse.urlparse(params.url)
+
+ if params.use_proxy:
+ if scheme == 'file':
+ params.use_proxy = False
+ else:
+ params.headers['host'] = location
+
+ if not params.use_proxy:
+ params.path = path
+ if query:
+ params.path += '?' + query
+
+ params.orig_body = params.body
+
+ if params.method in ('PUT', 'POST'):
+ if self.dumper is not None:
+ params.body = self.dumper(params.body)
+ # don't set content-length header because httplib does it
+ # for us in _send_request
+ else:
+ params.body = ''
+
+ params.response, params.response_body = self._get_response_body(params)
+ response, body = params.response, params.response_body
+
+ if self.loader is not None:
+ try:
+                body = self.loader(body)
+ except KeyboardInterrupt:
+ raise
+ except Exception, e:
+ raise UnparseableResponse(self.loader, body, params.url)
+
+ return response.status, response.msg, body
+
+ def _check_status(self, params):
+ response = params.response
+ if response.status not in params.ok:
+ klass = status_to_error_map.get(response.status, ConnectionError)
+ raise klass(params)
+
+ def _get_response_body(self, params):
+ connection = connect(params.url, params.use_proxy)
+ connection.request(params.method, params.path, params.body,
+ params.headers)
+ params.response = connection.getresponse()
+ params.response_body = params.response.read()
+ connection.close()
+ self._check_status(params)
+
+ return params.response, params.response_body
+
+ def request(self, params):
+ return self.request_(params)[-1]
+
+ def head_(self, url, headers=None, use_proxy=False, ok=None, aux=None):
+ return self.request_(_Params(url, 'HEAD', headers=headers,
+ loader=self.loader, dumper=self.dumper,
+ use_proxy=use_proxy, ok=ok, aux=aux))
+
+ def head(self, *args, **kwargs):
+ return self.head_(*args, **kwargs)[-1]
+
+ def get_(self, url, headers=None, use_proxy=False, ok=None, aux=None):
+ if headers is None:
+ headers = {}
+ headers['accept'] = self.fallback_content_type+';q=1,*/*;q=0'
+ return self.request_(_Params(url, 'GET', headers=headers,
+ loader=self.loader, dumper=self.dumper,
+ use_proxy=use_proxy, ok=ok, aux=aux))
+
+ def get(self, *args, **kwargs):
+ return self.get_(*args, **kwargs)[-1]
+
+ def put_(self, url, data, headers=None, content_type=None, ok=None,
+ aux=None):
+ if headers is None:
+ headers = {}
+ if 'content-type' not in headers:
+ if content_type is None:
+ headers['content-type'] = self.fallback_content_type
+ else:
+ headers['content-type'] = content_type
+ headers['accept'] = headers['content-type']+';q=1,*/*;q=0'
+ return self.request_(_Params(url, 'PUT', body=data, headers=headers,
+ loader=self.loader, dumper=self.dumper,
+ ok=ok, aux=aux))
+
+ def put(self, *args, **kwargs):
+ return self.put_(*args, **kwargs)[-1]
+
+ def delete_(self, url, ok=None, aux=None):
+ return self.request_(_Params(url, 'DELETE', loader=self.loader,
+ dumper=self.dumper, ok=ok, aux=aux))
+
+ def delete(self, *args, **kwargs):
+ return self.delete_(*args, **kwargs)[-1]
+
+ def post_(self, url, data='', headers=None, content_type=None, ok=None,
+ aux=None):
+ if headers is None:
+ headers = {}
+ if 'content-type' not in headers:
+ if content_type is None:
+ headers['content-type'] = self.fallback_content_type
+ else:
+ headers['content-type'] = content_type
+ headers['accept'] = headers['content-type']+';q=1,*/*;q=0'
+ return self.request_(_Params(url, 'POST', body=data,
+ headers=headers, loader=self.loader,
+ dumper=self.dumper, ok=ok, aux=aux))
+
+ def post(self, *args, **kwargs):
+ return self.post_(*args, **kwargs)[-1]
+
+
+def make_suite(dumper, loader, fallback_content_type):
+ """ Return a tuple of methods for making http requests with automatic bidirectional formatting with a particular content-type."""
+ suite = HttpSuite(dumper, loader, fallback_content_type)
+ return suite.get, suite.put, suite.delete, suite.post
+
+
+suite = HttpSuite(str, None, 'text/plain')
+delete = suite.delete
+delete_ = suite.delete_
+get = suite.get
+get_ = suite.get_
+head = suite.head
+head_ = suite.head_
+post = suite.post
+post_ = suite.post_
+put = suite.put
+put_ = suite.put_
+request = suite.request
+request_ = suite.request_
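The fallback branch of `from_http_time` in the file above (used when `mx.DateTime` is unavailable) can be exercised standalone. This is a Python 3 re-rendering of that branch, kept term-for-term with the three formats it tries:

```python
import calendar
import time

HTTP_TIME_FORMAT = '%a, %d %b %Y %H:%M:%S GMT'
PARSE_FORMATS = (HTTP_TIME_FORMAT,               # RFC 1123
                 '%A, %d-%b-%y %H:%M:%S GMT',    # RFC 850
                 '%a %b %d %H:%M:%S %Y')         # asctime

def from_http_time(t, defaultdate=None):
    # Try each historical HTTP date format in turn; fall back to the
    # caller-supplied default when none matches.
    for fmt in PARSE_FORMATS:
        try:
            return calendar.timegm(time.strptime(t, fmt))
        except ValueError:
            continue
    return defaultdate

stamp = from_http_time('Sun, 06 Nov 1994 08:49:37 GMT')
```

The date is the classic example from the HTTP specification; it corresponds to Unix time 784111777.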
diff --git a/lib/couchkit/ocm.py b/lib/couchkit/ocm.py
new file mode 100644
index 0000000..2a31733
--- /dev/null
+++ b/lib/couchkit/ocm.py
@@ -0,0 +1,115 @@
+import time
+from dbwrapper import Server, ServerError
+from lib.httputil import HttpNotFound
+
+class Field(object):
+ def __init__(self, null=True):
+ self.null = null # Seems null is not used currently
+
+ def default_value(self):
+ return None
+
+ def __get__(self, obj, type=None):
+ return getattr(obj, 'proxied_%s' % self.fieldname)
+
+ def __set__(self, obj, value):
+ obj.tainted = True
+ setattr(obj, 'proxied_%s' % self.fieldname, value)
+
+ def __del__(self, obj):
+ pass
+
+class ScalaField(Field):
+ def __init__(self, default=None, null=True):
+ super(ScalaField, self).__init__(null)
+ self.default = default
+ def default_value(self):
+ return self.default
+
+class ListField(Field):
+ def default_value(self):
+ return []
+
+class DateTimeField(Field):
+ def default_value(self):
+ return time.time()
+
+class Model(object):
+ @classmethod
+ def create(cls, **kwargs):
+ model_obj = cls(**kwargs)
+ return model_obj
+
+ @classmethod
+ def all(cls):
+ server = Server()
+ db = server[cls.db_name]
+ return [cls.get(item['id']) for item in db.docs()['rows']]
+
+ @classmethod
+ def get(cls, id):
+ server = Server()
+ db = server[cls.db_name]
+ try:
+ user_dict = db[id]
+ except ServerError:
+ raise HttpNotFound()
+ obj = cls(**user_dict)
+ return obj
+
+ @classmethod
+ def initialize(cls):
+ cls.db_name = cls.__name__.lower()
+ cls.fields = []
+ for field, v in vars(cls).items():
+ if isinstance(v, Field):
+ v.fieldname = field
+ cls.fields.append(v)
+ server = Server()
+ if not server[cls.db_name]:
+ server.create_db(cls.db_name)
+
+ def __eq__(self, other):
+ return self.__class__ == other.__class__ and \
+ hasattr(self, 'id') and hasattr(other, 'id') and \
+ self.id == other.id
+ def __hash__(self):
+ return hash(getattr(self, 'id', None))
+
+ def save(self):
+ if not self.tainted:
+ # TODO: things should be reconsidered when foreign key added in
+ return
+ server = Server()
+ db = server[self.db_name]
+ if hasattr(self, 'id'):
+ db[self.id] = self.get_dict()
+ else:
+ res = db.create_doc(self.get_dict())
+ self.id = res['id']
+
+ def delete(self):
+ if hasattr(self, 'id'):
+ server = Server()
+ db = server[self.db_name]
+ del db[self.id]
+
+ def get_dict(self):
+ info_dict = {}
+ for field in self.fields:
+ info_dict[field.fieldname] = getattr(self, field.fieldname)
+ if hasattr(self, 'id'):
+ info_dict['id'] = self.id
+ return info_dict
+
+ def __init__(self, **kwargs):
+ for field in self.fields:
+ setattr(self,
+ field.fieldname,
+ kwargs.get(field.fieldname,
+ field.default_value()))
+ if '_id' in kwargs:
+ self.id = kwargs['_id']
+ self.tainted = False
+ else:
+ self.tainted = True
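The `Field` descriptor above stores each value under a `proxied_<name>` attribute and flags the owning object as tainted on every write, which is what lets `Model.save` skip unchanged objects. The same pattern can be sketched in Python 3, where `__set_name__` replaces the manual `fieldname` wiring done in `Model.initialize` (the `Doc` class here is a hypothetical stand-in, not part of the library):

```python
class Field:
    # Descriptor storing values under proxied_<name> and marking the
    # owning object dirty on every write.
    def __set_name__(self, owner, name):
        self.fieldname = name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return getattr(obj, 'proxied_%s' % self.fieldname, None)

    def __set__(self, obj, value):
        obj.tainted = True
        setattr(obj, 'proxied_%s' % self.fieldname, value)

class Doc:
    content = Field()

    def __init__(self):
        self.tainted = False

d = Doc()
before = d.tainted     # False: nothing written yet
d.content = 'hello'    # write flips the taint flag
```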
diff --git a/lib/httputil.py b/lib/httputil.py
new file mode 100644
index 0000000..38efd3c
--- /dev/null
+++ b/lib/httputil.py
@@ -0,0 +1,33 @@
+import base64
+import re
+
+class HttpAuthentication(Exception):
+ pass
+
+class HttpNotFound(Exception):
+ pass
+
+class Forbidden(Exception):
+ pass
+
+class AppError(Exception):
+ def __init__(self, error_code, message):
+ self.error_code = error_code
+ self.message = message
+
+def require_login(func):
+ def __decorator(self, reactor, *args, **kwargs):
+ from models import User
+ auth_str = reactor.environ.get('HTTP_AUTHORIZATION')
+ if auth_str:
+ m = re.match(r'[Bb]asic (.+)', auth_str)
+ if m:
+ try:
+ username, password = base64.decodestring(m.group(1)).split(':', 1)
+ reactor.user = User.login(username, password)
+ except:
+ raise HttpAuthentication()
+ if reactor.user:
+ return func(self, reactor, *args, **kwargs)
+ raise HttpAuthentication()
+ return __decorator
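The header handling inside `require_login` above ("Basic " followed by base64 of `user:pass`) can be factored into a standalone parser for illustration. This is a Python 3 sketch (`b64decode` in place of the deprecated `decodestring`); `parse_basic_auth` is a hypothetical helper, not part of the module:

```python
import base64
import re

def parse_basic_auth(auth_str):
    # Mirrors the decorator above: accept "Basic <base64(user:pass)>",
    # case-insensitive on the scheme, and split on the first colon only.
    m = re.match(r'[Bb]asic (.+)', auth_str or '')
    if not m:
        return None
    try:
        decoded = base64.b64decode(m.group(1)).decode('utf8')
        username, password = decoded.split(':', 1)
    except (ValueError, UnicodeDecodeError):
        return None
    return username, password

token = base64.b64encode(b'alice:s3cret').decode()
creds = parse_basic_auth('Basic ' + token)
```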
diff --git a/lib/util.py b/lib/util.py
new file mode 100644
index 0000000..78d75a3
--- /dev/null
+++ b/lib/util.py
@@ -0,0 +1,31 @@
+import base64
+import re
+from models import User
+
+class HttpAuthentication(Exception):
+ pass
+
+class HttpNotFound(Exception):
+ pass
+
+def require_login(func):
+ def __decorator(self, reactor, **kwargs):
+ auth_str = reactor.environ.get('HTTP_AUTHORIZATION')
+ if auth_str:
+        m = re.match(r'[Bb]asic (.+)', auth_str)
+ if m:
+ username, password = base64.decodestring(m.group(1)).split(':', 1)
+ reactor.user = User.login(username, password)
+ if reactor.user:
+ return func(self, reactor, **kwargs)
+ raise HttpAuthentication
+ return __decorator
+
+def force_unicode(data):
+ if isinstance(data, unicode):
+ return data
+ return unicode(data, 'utf8')
+
+def ok(message):
+ return {'result': 'ok',
+ 'message': message}
diff --git a/models/__init__.py b/models/__init__.py
new file mode 100644
index 0000000..0cf0f79
--- /dev/null
+++ b/models/__init__.py
@@ -0,0 +1,7 @@
+
+from user import User
+from entry import Entry
+
+Entry.initialize()
+User.initialize()
+
diff --git a/models/entry.py b/models/entry.py
new file mode 100644
index 0000000..c62766e
--- /dev/null
+++ b/models/entry.py
@@ -0,0 +1,8 @@
+from lib.couchkit import Server, ServerError
+from lib.couchkit.ocm import Model, ScalaField, ListField, DateTimeField
+
+class Entry(Model):
+ owner_id = ScalaField(null=False)
+ content = ScalaField()
+ created_time = DateTimeField()
+
diff --git a/models/user.py b/models/user.py
new file mode 100644
index 0000000..b7a6fea
--- /dev/null
+++ b/models/user.py
@@ -0,0 +1,91 @@
+from lib.couchkit import Server, ServerError
+from lib.couchkit.ocm import Model, ScalaField, ListField
+from entry import Entry
+
+class UserSignupException(Exception):
+ pass
+
+class UserLoginException(Exception):
+ pass
+
+class User(Model):
+ email = ScalaField(null=False)
+ username = ScalaField()
+ follows = ListField()
+ followed_by = ListField()
+ timeline = ListField()
+
+ @classmethod
+ def signup(cls, username, password, email):
+ server = Server()
+ namedb = server['user-name']
+ try:
+ namedb.set(username, {'password': password})
+ except ServerError:
+ raise UserSignupException('User %s already exists' %
+ username)
+ user = cls.create(username=username, email=email)
+ user.save()
+ namedb[username] = dict(password=password,
+ user_id=user.id)
+ return user
+
+ @classmethod
+ def login(cls, username, password):
+ server = Server()
+ namedb = server['user-name']
+ try:
+ login_info = namedb[username]
+ if login_info['password'] == password:
+ return cls.get(login_info['user_id'])
+ except ServerError, e:
+ return None
+ return None
+
+ def delete(self):
+ server = Server()
+ db = server['user-name']
+ del db[self.username]
+ super(User, self).delete()
+
+ def follow(self, other):
+ other.followed_by.append(self.id)
+ self.follows.append(other.id)
+ other.save()
+ self.save()
+
+ def iter_follows(self):
+ yield self
+ for user_id in self.follows:
+ yield User.get(user_id)
+
+ def update(self, text):
+ entry = Entry.create(owner_id=self.id,
+ content=text)
+ entry.save()
+ self.timeline.insert(0, (entry.id,
+ entry.created_time))
+ self.save()
+
+ def user_timeline(self, time_limit=0):
+ for entry_id, created_time in self.timeline:
+ if created_time < time_limit:
+ break
+ entry = Entry.get(entry_id)
+ yield entry
+
+ def follow_timeline(self, time_limit=0):
+ timeline = []
+ for user in self.iter_follows():
+ for entry_id, created_time in user.timeline:
+ if created_time < time_limit:
+ break
+ timeline.append((-created_time, entry_id, user))
+ timeline.sort()
+
+ for _, entry_id, user in timeline:
+ entry = Entry.get(entry_id)
+ yield entry, user
+
+ def __str__(self):
+ return self.username
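`follow_timeline` above merges the newest-first timelines of every followed user by sorting on negated timestamps. The merge logic can be sketched without a database; users here are plain `(name, timeline)` tuples mirroring the `(entry_id, created_time)` pairs that `User.update` inserts at index 0:

```python
def follow_timeline(users, time_limit=0):
    # users: iterable of (name, timeline); each timeline is newest-first.
    merged = []
    for name, timeline in users:
        for entry_id, created_time in timeline:
            if created_time < time_limit:
                break  # newest-first order lets us stop early
            merged.append((-created_time, entry_id, name))
    merged.sort()  # ascending on -created_time == newest entries first
    return [(entry_id, name) for _, entry_id, name in merged]

users = [('zk', [('e3', 30)]),
         ('mj', [('e2', 20), ('e1', 10)])]
timeline = follow_timeline(users)
```

Because each per-user timeline is already sorted newest-first, the `time_limit` cutoff can break out of the inner loop instead of scanning every entry.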
diff --git a/scripts/console b/scripts/console
new file mode 100755
index 0000000..8f5348c
--- /dev/null
+++ b/scripts/console
@@ -0,0 +1,12 @@
+#!/bin/sh
+
+cat > /tmp/shellee.py <<EOF
+import sys
+sys.path.insert(0, ".")
+from lib.couchkit import *
+from models import *
+
+EOF
+
+python -i /tmp/shellee.py
+
diff --git a/scripts/dbtest.py b/scripts/dbtest.py
new file mode 100755
index 0000000..5995730
--- /dev/null
+++ b/scripts/dbtest.py
@@ -0,0 +1,15 @@
+#!/usr/bin/env python
+import sys
+sys.path.insert(0, '.')
+from models import User, Entry
+
+user1 = User.signup('zk', 'passwd', '[email protected]')
+user2 = User.signup('mj', 'passwd1', '[email protected]')
+
+user1.follow(user2)
+user1.update('haha, I am here')
+user2.update('yes, you got it')
+user2.update('good so good')
+
+for status, user in user1.follow_timeline():
+ print user, 'says:', status.content, status.created_time
diff --git a/scripts/init_db.py b/scripts/init_db.py
new file mode 100755
index 0000000..4d45cf4
--- /dev/null
+++ b/scripts/init_db.py
@@ -0,0 +1,17 @@
+#!/usr/bin/env python
+import sys, os
+sys.path.insert(0, '.')
+from lib.couchkit import Server
+
+server = Server()
+def reset_db(db_name):
+ if server[db_name]:
+ del server[db_name]
+ server.create_db(db_name)
+reset_db('user')
+reset_db('user-name')
+reset_db('entry')
+
+
+
+
diff --git a/settings.py b/settings.py
new file mode 100644
index 0000000..033574b
--- /dev/null
+++ b/settings.py
@@ -0,0 +1,16 @@
+URLMAP = (
+ (r'^/user/signup', 'Signup'),
+ (r'^/timeline/((?P<user_id>[0-9a-zA-Zu]+)/)?', 'Timeline'),
+ (r'^/user/timeline/((?P<user_id>[0-9a-zA-Zu]+)/)?', 'UserTimeline'),
+ (r'^/user/((?P<user_id>[0-9a-zA-Zu]+)/)?', 'UserInfo'),
+ (r'^/update/', 'Update'),
+ (r'^/entry/(?P<entry_id>[0-9a-zA-Z]+)/delete', 'EntryDelete'),
+ (r'^/entry/(?P<entry_id>[0-9a-zA-Z]+)', 'EntryInfo'),
+ (r'^/$', 'Hello'),
+ )
+
+ENCODING = 'utf8'
+COUCHDB_SERVER = 'http://localhost:5984'
+SERVER_PORT = 8001
+
+APP_REALM = 'RainyAPI'
|
olivierch/openBarter
|
f424bd78f4ac72b928a5354127c747ea0fff2569
|
vacuum full made by bash
|
diff --git a/doc/doc-ob-user.odt b/doc/doc-ob-user.odt
deleted file mode 100644
index 07d0596..0000000
Binary files a/doc/doc-ob-user.odt and /dev/null differ
diff --git a/doc/doc-ob.odt b/doc/doc-ob.odt
index bc3ad2e..0054fa7 100644
Binary files a/doc/doc-ob.odt and b/doc/doc-ob.odt differ
diff --git a/doc/figure-x.odg b/doc/figure-x.odg
new file mode 100644
index 0000000..6bddbec
Binary files /dev/null and b/doc/figure-x.odg differ
diff --git a/doc/tu_doc.sql b/doc/tu_doc.sql
new file mode 100644
index 0000000..d294104
--- /dev/null
+++ b/doc/tu_doc.sql
@@ -0,0 +1,14 @@
+-- Trilateral exchange between owners with distinct users
+---------------------------------------------------------
+--variables
+--USER:admin
+select * from market.tvar where name != 'OC_CURRENT_OPENED' order by name;
+---------------------------------------------------------
+--USER:test_clienta
+--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
+SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',10);
+--USER:test_clientb
+SELECT * FROM market.fsubmitorder('limit','wb','b', 20,'c',40);
+
+SELECT * FROM market.fsubmitorder('limit','wc','c', 10,'a',20);
+--Trilateral exchange expected
diff --git a/simu/liquid/cliquid.py b/simu/liquid/cliquid.py
index 90b8dd8..1518325 100644
--- a/simu/liquid/cliquid.py
+++ b/simu/liquid/cliquid.py
@@ -1,178 +1,173 @@
#!/usr/bin/python
# -*- coding: utf8 -*-
import distrib
import os.path
DB_NAME='liquid'
DB_USER='olivier'
DB_PWD=''
DB_HOST='localhost'
DB_PORT=5432
PATH_ICI = os.path.dirname(os.path.abspath(__file__))
PATH_SRC= os.path.join(os.path.dirname(PATH_ICI),'src')
PATH_DATA=os.path.join(PATH_ICI,'data')
MAX_TOWNER=10000 # maximum number of owners in towner
MAX_TORDER=1000000 # maximum size of the order book
MAX_TSTACK=100
MAX_QLT=100 # maximum number of qualities
#
QTT_PROV = 10000 # quantity provided
-class Exec1:
- def __init__(self):
- self.NAME = "X1"
- # model
- self.MAXCYCLE=64
- self.MAXPATHFETCHED=1024*5
- self.MAXMVTPERTRANS=128
-
-
-class Exec2:
- def __init__(self):
- self.NAME = "X2"
- # model
- self.MAXCYCLE=32
- self.MAXPATHFETCHED=1024*10
- self.MAXMVTPERTRANS=128
-
-
-class Exec3:
- def __init__(self):
- self.NAME = "X3"
- # model
- self.MAXCYCLE=64
- self.MAXPATHFETCHED=1024*10
- self.MAXMVTPERTRANS=128
-
-
-class Exec4:
- def __init__(self):
- self.NAME = "X4"
- # model
- self.MAXCYCLE=64
- self.MAXPATHFETCHED=1024*10
- self.MAXMVTPERTRANS=256
+class Execu(object):
+ def __init__(self,name,cycle,fetched,mvtpertrans):
+ self.NAME = name
+ self.MAXCYCLE = cycle
+ self.MAXPATHFETCHED = fetched
+ self.MAXMVTPERTRANS = mvtpertrans
+
+exec1 = Execu("X1",64,1024*5,128)
+exec2 = Execu("X2",32,1024*10,128)
+exec3 = Execu("X3",64,1024*10,128)
+exec4 = Execu("X4",64,1024*10,256)
+exec5 = Execu("X5",2,1024*10,256)
+
+class EnvExec(object):
+    def __init__(self,name,nbowner,nbqlt,distribQlt,pas):
+        self.CONF_NAME=name
+
+        self.MAX_OWNER=min(nbowner,MAX_TOWNER) # maximum number of owners
+        self.MAX_QLT=nbqlt # maximum number of qualities
+        """
+        fonction de distribution des qualités
+        """
+        self.distribQlt = distribQlt
+        self.coupleQlt = distrib.couple
-class Exec5:
-    def __init__(self):
-        self.NAME = "E5_Y2P1024_10M256"
-        # model
-        self.MAXCYCLE=2
-        self.MAXPATHFETCHED=1024*10
-        self.MAXMVTPERTRANS=256
-
+        # etendue des tests
+        self.LIQ_PAS = pas
+        self.LIQ_ITER = min(30,MAX_TORDER/self.LIQ_PAS)
+
+envbasic10= EnvExec('1e1uni', 100,10,distrib.uniformQlt,500)
+envbasic100= EnvExec('1e2uni', 100,100,distrib.uniformQlt,500)
+envbasic1000= EnvExec('B3', 100,1000,distrib.betaQlt,500)
+envbasic10000= EnvExec('1e4uni', 100,10000,distrib.betaQlt,500)
+envMoney100= EnvExec('money100', 100,100,distrib.uniformQlt,500)
+'''
class Basic10:
def __init__(self):
self.CONF_NAME='1e1uni'
self.MAX_OWNER=min(100,MAX_TOWNER) # maximum number of owners
self.MAX_QLT=10 # maximum number of qualities
"""
fonction de distribution des qualités
"""
self.distribQlt = distrib.uniformQlt
self.coupleQlt = distrib.couple
# etendue des tests
self.LIQ_PAS = 500
self.LIQ_ITER = min(30,MAX_TORDER/self.LIQ_PAS)
class Basic100:
def __init__(self):
self.CONF_NAME='1e2uni'
self.MAX_OWNER=min(100,MAX_TOWNER) # maximum number of owners
self.MAX_QLT=100 # maximum number of qualities
"""
fonction de distribution des qualités
"""
self.distribQlt = distrib.betaQlt
self.coupleQlt = distrib.couple
# etendue des tests
self.LIQ_PAS = 500
self.LIQ_ITER = min(30,MAX_TORDER/self.LIQ_PAS)
-
+
+
class Basic1000:
def __init__(self):
self.CONF_NAME='B3'
self.MAX_OWNER=min(100,MAX_TOWNER) # maximum number of owners
self.MAX_QLT=1000 # maximum number of qualities
"""
fonction de distribution des qualités
"""
self.distribQlt = distrib.betaQlt
self.coupleQlt = distrib.couple
# etendue des tests
self.LIQ_PAS = 500
self.LIQ_ITER = min(30,MAX_TORDER/self.LIQ_PAS)
-
+
+
class Basic10000:
def __init__(self):
self.CONF_NAME='1e4uni'
self.MAX_OWNER=min(100,MAX_TOWNER) # maximum number of owners
self.MAX_QLT=10000 # maximum number of qualities
"""
fonction de distribution des qualités
"""
self.distribQlt = distrib.betaQlt
self.coupleQlt = distrib.couple
# etendue des tests
self.LIQ_PAS = 500
self.LIQ_ITER = min(30,MAX_TORDER/self.LIQ_PAS)
-
+
+basic100large=EnvExec('1E4UNI',100,10000,distrib.uniformQlt,3000)
class Basic100large:
def __init__(self):
self.CONF_NAME='1E4UNI'
self.MAX_OWNER=min(100,MAX_TOWNER) # maximum number of owners
self.MAX_QLT=10000 # maximum number of qualities
"""
fonction de distribution des qualités
"""
self.distribQlt = distrib.uniformQlt
self.coupleQlt = distrib.couple
# etendue des tests
self.LIQ_PAS = 3000
self.LIQ_ITER = min(30,MAX_TORDER/self.LIQ_PAS)
class Money100:
def __init__(self):
self.CONF_NAME='money100'
self.MAX_OWNER=min(100,MAX_TOWNER) # maximum number of owners
self.MAX_QLT=100 # maximum number of qualities
"""
fonction de distribution des qualités
"""
self.distribQlt = distrib.uniformQlt
self.coupleQlt = distrib.couple_money
# etendue des tests
self.LIQ_PAS = 500
self.LIQ_ITER = min(30,MAX_TORDER/self.LIQ_PAS)
-
+'''
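The refactor above collapses five copy-pasted `Exec*` classes into a single parameterized class plus module-level instances. The pattern in isolation (names and values taken from the diff) looks like this:

```python
class Execu(object):
    """One parameterized class instead of N near-identical Exec* classes."""
    def __init__(self, name, cycle, fetched, mvtpertrans):
        self.NAME = name
        self.MAXCYCLE = cycle
        self.MAXPATHFETCHED = fetched
        self.MAXMVTPERTRANS = mvtpertrans

# module-level instances take the role of the former singleton classes
exec1 = Execu("X1", 64, 1024 * 5, 128)
exec5 = Execu("X5", 2, 1024 * 10, 256)
```

Callers that previously instantiated `cliquid.Exec1()` can simply reference `cliquid.exec1` instead.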
diff --git a/simu/liquid/confex.py b/simu/liquid/confex.py
new file mode 100644
index 0000000..8e914de
--- /dev/null
+++ b/simu/liquid/confex.py
@@ -0,0 +1,77 @@
+#!/usr/bin/python
+# -*- coding: utf8 -*-
+import distrib
+import os.path
+
+DB_NAME='liquid'
+DB_USER='olivier'
+DB_PWD=''
+DB_HOST='localhost'
+DB_PORT=5432
+
+PATH_ICI = os.path.dirname(os.path.abspath(__file__))
+PATH_SRC= os.path.join(os.path.dirname(PATH_ICI),'src')
+PATH_DATA=os.path.join(PATH_ICI,'data')
+
+MAX_TOWNER=10000 # maximum number of owners in towner
+MAX_TORDER=1000000 # maximum size of the order book
+MAX_TSTACK=100
+MAX_QLT=100 # maximum number of qualities
+
+#
+QTT_PROV = 10000 # quantity provided
+
+class Execu(object):
+ def __init__(self,name,cycle,fetched,mvtpertrans):
+ self.NAME = name
+ self.MAXCYCLE = cycle
+ self.MAXPATHFETCHED = fetched
+ self.MAXMVTPERTRANS = mvtpertrans
+
+exec1 = Execu("X1",64,1024*5,128)
+exec2 = Execu("X2",32,1024*10,128)
+exec3 = Execu("X3",64,1024*10,128)
+exec4 = Execu("X4",64,1024*10,256)
+exec5 = Execu("X5",2,1024*10,256)
+
+'''
+class Exec1:
+ def __init__(self):
+ self.NAME = "X1"
+ # model
+ self.MAXCYCLE=64
+ self.MAXPATHFETCHED=1024*5
+ self.MAXMVTPERTRANS=128
+
+class Exec2:
+ def __init__(self):
+ self.NAME = "X2"
+ # model
+ self.MAXCYCLE=32
+ self.MAXPATHFETCHED=1024*10
+ self.MAXMVTPERTRANS=128
+
+class Exec3:
+ def __init__(self):
+ self.NAME = "X3"
+ # model
+ self.MAXCYCLE=64
+ self.MAXPATHFETCHED=1024*10
+ self.MAXMVTPERTRANS=128
+
+class Exec4:
+ def __init__(self):
+ self.NAME = "X4"
+ # model
+ self.MAXCYCLE=64
+ self.MAXPATHFETCHED=1024*10
+ self.MAXMVTPERTRANS=256
+
+class Exec5:
+ def __init__(self):
+ self.NAME = "E5_Y2P1024_10M256"
+ # model
+ self.MAXCYCLE=2
+ self.MAXPATHFETCHED=1024*10
+ self.MAXMVTPERTRANS=256
+'''
diff --git a/simu/liquid/findings/notes.txt b/simu/liquid/findings/notes.txt
index 4b3d51e..64c45b8 100644
--- a/simu/liquid/findings/notes.txt
+++ b/simu/liquid/findings/notes.txt
@@ -1,30 +1,30 @@
Findings
Results presented here are produced by version 7.0.1 of openBarter.
They were obtained to observe variations of four indicators for different sizes of the order book:
the mean execution time of an order (delay),
the number of movements per cycle (nbcycle),
-fluidity of the market is a measurement of it's capacity to drain values out of the order book (fluidity),
+fluidity of the market as a measurement of its capacity to drain values out of the order book (fluidity),
the ratio between omega produced by movements and minimum omega required by the corresponding order (gain).
-The order book is filled with barter limit orders where the ratio qtt_prov/qtt_requ is chosen randomly using a continuous uniform distribution between 0.5 and 1.5, and where the quantity provided (qtt_prov) is 10 000. The order book is filled by producing first a big book with the desired statistic, then extracting the beginning of the book to obtain the chosen size. For this given size, results presented are the means of values obtained by submitting 1000 barter limit orders. This set of order submitted is the same for all size.
+The order book is filled with barter limit orders where the ratio qtt_prov/qtt_requ is chosen randomly using a continuous uniform distribution between 0.5 and 1.5, and where the quantity provided (qtt_prov) is 10 000. The order book is filled by first producing a big book with the desired statistic, then extracting the beginning of the book to obtain the chosen size. For this given size, results presented are the means of values obtained by submitting 1000 barter limit orders. This set of orders submitted is the same for all sizes.
-The fluidity of the market is defined as the ratio between the output flow of values produced by movements and the input flow of values offered by orders submitted to the order book. Flows of value are obtained by summing quantities of values for all qualities. The input flow is 10 000 * 1000 orders, and the output flow is the sum of quantities moved.
+The fluidity of the market is defined as the ratio between the output flow of values produced by movements and the input flow of values offered by orders submitted to the order book. Flows of value are obtained by summing quantities of values for all qualities. Consequently, the input flow is 10 000 * 1000 orders, and the output flow is the sum of quantities moved.
result_1.html
scenario money and uni100 are compared
result_2.html
scenario UNI1000 big order book
result_3.html
scenario uni100 with diverse limits (E1,E2,E3,E4)
result_4.html
Different diversity of qualities are considered.
scenario uni100,uni1000,uni10000
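As a quick check of the fluidity definition above, a minimal sketch using the figures from the text (1000 submitted orders providing qtt_prov = 10 000 each; the sample quantities are illustrative):

```python
QTT_PROV = 10000   # quantity provided by each submitted order
NB_ORDERS = 1000   # number of orders submitted during the measurement

def fluidity(quantities_moved):
    """Output flow (sum of quantities moved) over input flow (quantities offered)."""
    input_flow = QTT_PROV * NB_ORDERS
    return sum(quantities_moved) / float(input_flow)

# movements draining 4 million units out of the 10 million offered
print(fluidity([2000000, 1500000, 500000]))  # -> 0.4
```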
diff --git a/simu/liquid/gen.py b/simu/liquid/gen.py
index fd65948..59740a9 100644
--- a/simu/liquid/gen.py
+++ b/simu/liquid/gen.py
@@ -1,176 +1,179 @@
#!/usr/bin/python
# -*- coding: utf8 -*-
import cliquid
import os
#import cliquid_basic as conf
import random
import distrib
+import molet
"""
pour modifier la conf, modifier l'import
"""
def generate(config):
''' dépend de cliquid et conf
produit trois fichiers
'''
conf = config()
# towner
+ molet.mkdir(cliquid.PATH_DATA,ignoreWarning=True)
fn = os.path.join(cliquid.PATH_DATA,'towner.sql')
- with open(fn,'w') as f:
- for i in range(cliquid.MAX_TOWNER):
- j = i+1
- f.write('%i\town%i\t2013-02-10 16:24:01.651649\t\N\n' % (j,j))
+ if(not os.path.exists(fn)):
+ with open(fn,'w') as f:
+ for i in range(cliquid.MAX_TOWNER):
+ j = i+1
+ f.write('%i\town%i\t2013-02-10 16:24:01.651649\t\N\n' % (j,j))
# torder
fn = os.path.join(cliquid.PATH_DATA,'torder_'+conf.CONF_NAME+'.sql')
with open(fn,'w') as f:
for i in range(cliquid.MAX_TORDER):
j = i+1
w = random.randint(1,conf.MAX_OWNER)
qlt_prov,qlt_requ = conf.coupleQlt(conf.distribQlt,conf.MAX_QLT)
r = random.random()+0.5
qtt_requ = int(cliquid.QTT_PROV * r) # proba(QTT_PROV/qtt_requ < 1) = 0.5
- line = "%s\t(1,%i,%i,%i,%i,qlt%i,%i,qlt%i,%i)\t2013-02-10 16:24:01.651649\t\N\n"
- f.write(line % (cliquid.DB_USER,j,w,j,qtt_requ,qlt_requ,cliquid.QTT_PROV,qlt_prov,cliquid.QTT_PROV))
-
+ #line = "%s\t(1,%i,%i,%i,%i,qlt%i,%i,qlt%i,%i)\t2013-02-10 16:24:01.651649\t\N\n"
+ line = "%s\t(2,%i,%i,%i,%i,qlt%i,%i,qlt%i,%i,\"(0,0),(0,0)\",\"(0,0),(0,0)\",0,\"(1.5707963267949,3.14159265358979),(-1.5707963267949,-3.14159265358979)\")\t2014-04-29 19:40:44.382527\t2014-04-29 19:40:44.448502\n"
+ f.write(line % (cliquid.DB_USER,j,w,j,qtt_requ,qlt_requ,cliquid.QTT_PROV,qlt_prov,cliquid.QTT_PROV))
# tstack
fn = os.path.join(cliquid.PATH_DATA,'tstack_'+conf.CONF_NAME+'.sql')
with open(fn,'w') as f:
for i in range(cliquid.MAX_TSTACK):
j = i+1+cliquid.MAX_TORDER
w = random.randint(1,conf.MAX_OWNER)
qlt_prov,qlt_requ = conf.coupleQlt(conf.distribQlt,conf.MAX_QLT)
r = random.random()+0.5
qtt_requ = int(cliquid.QTT_PROV * r) # proba(QTT_PROV/qtt_requ < 1) = 0.5
line = "%i\t%s\town%i\t\N\t1\tqlt%i\t%i\tqlt%i\t%i\t%i\t100 year\t2013-03-24 22:50:08.300833\n"
f.write(line % (j,cliquid.DB_USER,w,qlt_requ,qtt_requ,qlt_prov,cliquid.QTT_PROV,cliquid.QTT_PROV))
print "conf \'%s\' generated" % (conf.CONF_NAME,)
def test(cexec,conf,size):
"""
truncate towner;
copy towner from '/home/olivier/ob92/simu/liquid/data/towner.sql';
truncate torder;
copy torder(usr,ord,created,updated) from '/home/olivier/ob92/simu/liquid/data/torder_basic.sql';
SELECT setval('torder_id_seq',1,false);
truncate tstack;
copy tstack from '/home/olivier/ob92/simu/liquid/data/tstack_basic.sql';
SELECT setval('tstack_id_seq',10000,true);
truncate tmvt;
SELECT setval('tmvt_id_seq',1,false);
"""
import util
_size = min(size,cliquid.MAX_TORDER)
fn = os.path.join(cliquid.PATH_DATA,'torder_'+conf.CONF_NAME+'.sql')
gn = os.path.join(cliquid.PATH_DATA,'_tmp.sql')
with open(fn,'r') as f:
with open(gn,'w') as g:
for i in range(_size):
g.write(f.readline())
with util.DbConn(cliquid) as dbcon:
with util.DbCursor(dbcon) as cur:
cur.execute("UPDATE tconst SET value=%s WHERE name=%s",[cexec.MAXCYCLE,"MAXCYCLE"])
cur.execute("UPDATE tconst SET value=%s WHERE name=%s",[cexec.MAXPATHFETCHED,"MAXPATHFETCHED"])
cur.execute("UPDATE tconst SET value=%s WHERE name=%s",[cexec.MAXMVTPERTRANS,"MAXMVTPERTRANS"])
cur.execute("truncate towner",[])
fn = os.path.join(cliquid.PATH_DATA,'towner.sql')
cur.execute("copy towner from %s",[fn])
cur.execute("truncate torder",[])
cur.execute("copy torder(usr,ord,created,updated) from %s",[gn])
cur.execute("truncate tstack",[])
fn = os.path.join(cliquid.PATH_DATA,'tstack_'+conf.CONF_NAME+'.sql')
cur.execute("copy tstack from %s",[fn])
cur.execute("SELECT setval('tstack_id_seq',%s,false)",[cliquid.MAX_TORDER+cliquid.MAX_TSTACK+1])
cur.execute("truncate tmvt",[])
cur.execute("SELECT setval('tmvt_id_seq',1,false)",[])
begin = util.now()
_cnt = 1
while(_cnt>=1):
cur.execute("SELECT * from femptystack()",[])
vec = cur.next()
_cnt = vec[0]
duree = util.getDelai(begin)
duree = duree/cliquid.MAX_TSTACK
cur.execute("SELECT sum(qtt) FROM tmvt",[])
vec = cur.next()
if(vec[0] is None):
vol = 0
else:
vol = vec[0]
liqu = float(vol)/float(cliquid.MAX_TSTACK * cliquid.QTT_PROV)
cur.execute("SELECT avg(s.nbc) FROM (SELECT max(nbc) as nbc,grp FROM tmvt group by grp) s",[])
vec = cur.next()
if(vec[0] is None):
nbcm = 0
else:
nbcm = vec[0]
cur.execute("SELECT avg(om_exp/om_rea) FROM tmvt ",[])
vec = cur.next()
if(vec[0] is None):
gain = 0
else:
gain = vec[0]
#print duree,liqu,nbcm
print "test \'%s_%s\' performed with size=%i" % (conf.CONF_NAME,cexec.NAME,_size)
return duree,liqu,nbcm,gain
def perftests():
import concat
cexecs = [cliquid.Exec1(),cliquid.Exec2(),cliquid.Exec3(),cliquid.Exec4()]
confs= [cliquid.Basic1000()] #,cliquid.Basic1000()] #,cliquid.Money100(),cliquid.Basic1000large()]
for conf in confs:
fn = os.path.join(cliquid.PATH_DATA,'tstack_'+conf.CONF_NAME+'.sql')
if(not os.path.exists(fn) or True):
generate(conf)
for cexec in cexecs:
fn = os.path.join(cliquid.PATH_DATA,'result_'+conf.CONF_NAME+'_'+cexec.NAME+'.txt')
with open(fn,'w') as f:
for i in range(conf.LIQ_ITER):
size = (i+1) * conf.LIQ_PAS
duree,liqu,nbcm,gain = test(cexec,conf,size)
f.write("%i;%f;%f;%f;%f;\n" % (size,duree,liqu,nbcm,gain))
concat.makeVis('result_')
def perftests2():
import concat
#confs= [(cliquid.Exec3(),cliquid.Basic100()),(cliquid.Exec5(),cliquid.Money100())] #
confs= [(cliquid.Exec3(),cliquid.Basic100large())] # large order book
for config in confs:
cexec,conf = config
fn = os.path.join(cliquid.PATH_DATA,'tstack_'+conf.CONF_NAME+'.sql')
if(not os.path.exists(fn) or True):
generate(conf)
fn = os.path.join(cliquid.PATH_DATA,'result_'+conf.CONF_NAME+'_'+cexec.NAME+'.txt')
with open(fn,'w') as f:
for i in range(conf.LIQ_ITER):
size = (i+1) * conf.LIQ_PAS
duree,liqu,nbcm,gain = test(cexec,conf,size)
f.write("%i;%f;%f;%f;%f;\n" % (size,duree,liqu,nbcm,gain))
concat.makeVis('result_')
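gen.py draws qtt_requ so that qtt_requ/QTT_PROV is uniform on [0.5, 1.5), which is what makes P(QTT_PROV/qtt_requ < 1) = 0.5 in the inline comment. The draw in isolation:

```python
import random

QTT_PROV = 10000  # quantity provided, as in cliquid.py

def draw_qtt_requ(rng=random):
    # r is uniform on [0.5, 1.5): the order requires between half and
    # one-and-a-half times the quantity it provides
    r = rng.random() + 0.5
    return int(QTT_PROV * r)

sample = [draw_qtt_requ() for _ in range(10000)]
share_above = sum(1 for q in sample if q > QTT_PROV) / float(len(sample))
print(share_above)  # close to 0.5: provided/required < 1 about half of the time
```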
diff --git a/simu/liquid/liquid.py b/simu/liquid/liquid.py
index 9352a46..65d9741 100755
--- a/simu/liquid/liquid.py
+++ b/simu/liquid/liquid.py
@@ -1,105 +1,107 @@
#!/usr/bin/python
# -*- coding: utf8 -*-
"""
objectifs: Analyse de liquidité
Voir à partir de quel volume l'order book devient liquide.
Constantes dimensionnantes: dans cliquid.py
Principes:
l'ob est rempli avec des ordres
passer des trocs
mvt[i].qtt est la somme des quantités des mvts pour une qualité donnée i
troc[.].qtt est la comme des quantités des mvts pour une qualité donnée i
liquidité = sum(mvt[.].qtt) / sum(troc[.].qtt)
détail:
1) créer towner.sql avec 10000 owner
2) pour chaque config,
un fichier config_orderbook.sql contenant torder
un fichier config_stack.sql contenant tstack
3) liquid.py
pour chaque volume
charger une partie de config_orderbook.sql
charger config_stack.sql
vider le stack
calculer la liquidité
enregister les res
les configs sont dans des liquid_conf.py (import liquid_conf as conf)
les *.sql sont dans cliquid.PATH_DATA
les fichiers sont gégérés directement en python par gen.py.
import prims
import sys
import const
import util
import random
import psycopg2
import sys
# import curses
import logging
import scenarii
logging.basicConfig(level=logging.DEBUG,
format='(%(threadName)-10s) %(message)s',
)
"""
import cliquid
import gen
+
def simu(options):
if(options.generate):
try:
- conf = getattr(cliquid,options.generate)
- gen.generate(conf)
+            for _conf in ('Basic10','Basic100','Basic1000'):
+                conf = getattr(cliquid,_conf,None)
+                if conf is None:
+                    raise ImportError(_conf)
+                gen.generate(conf)
except ImportError,e:
print "this configuration is not defined"
return
if(options.test):
gen.perftests()
return
print 'give an option'
return
from optparse import OptionParser
def main():
usage = """usage: %prog [options]
to change config, modify the import in gen.py"""
parser = OptionParser(usage)
parser.add_option("-g", "--generate",type="string",action="store",dest="generate",help="the scenario choosen",default=None)
parser.add_option("-t", "--test",action="store_true", dest="test",help="make the test",default=False)
(options, args) = parser.parse_args()
simu(options)
if __name__ == "__main__":
main()
"""
base simu_r1
./simu.py -i 10000 -t 10
done: {'nbAgreement': 75000L, 'nbMvtAgr': 240865L, 'nbMvtGarbadge': 11152L, 'nbOrder': 13836L}
simu terminated after 28519.220312 seconds (0.285192 secs/oper)
"""
"""
parser.add_option("-i", "--iteration",type="int", dest="iteration",help="number of iteration",default=0)
parser.add_option("-r", "--reset",action="store_true", dest="reset",help="database is reset",default=False)
parser.add_option("-v", "--verif",action="store_true", dest="verif",help="fgeterrs run after",default=False)
parser.add_option("-m", "--maxparams",action="store_true", dest="maxparams",help="print max parameters",default=False)
parser.add_option("-t", "--threads",type="int", dest="threads",help="number of threads",default=1)
parser.add_option("--seed",type="int",dest="seed",help="reset random seed",default=0)
parser.add_option("--MAXCYCLE",type="int",dest="MAXCYCLE",help="reset MAXCYCLE")
parser.add_option("--MAXTRY",type="int",dest="MAXTRY",help="reset MAXTRY")
parser.add_option("--MAXPATHFETCHED",type="int",dest="MAXPATHFETCHED",help="reset MAXPATHFETCHED")
parser.add_option("--CHECKQUALITYOWNERSHIP",action="store_true",dest="CHECKQUALITYOWNERSHIP",help="set CHECK_QUALITY_OWNERSHIP",default=False)
parser.add_option("-s","--scenario",type="string",action="store",dest="scenario",help="the scenario choosen",default="basic")
"""
diff --git a/simu/liquid/molet.py b/simu/liquid/molet.py
new file mode 100644
index 0000000..ea7d8e9
--- /dev/null
+++ b/simu/liquid/molet.py
@@ -0,0 +1,228 @@
+# -*- coding: utf-8 -*-
+class MoletException(Exception):
+ pass
+
+'''---------------------------------------------------------------------------
+sending e-mails
+---------------------------------------------------------------------------'''
+import smtplib
+from email.mime.multipart import MIMEMultipart
+from email.mime.text import MIMEText
+
+def sendMail(subject,body,recipients,smtpServer,
+ smtpSender,smtpPort,smtpPassword,smtpLogin):
+
+    ''' Sends an e-mail to the specified recipients;
+    raises MoletException when sending fails'''
+
+    if(len(recipients)==0):
+        raise MoletException("No recipients were found for the message")
+
+ msg = MIMEMultipart("alternative")
+ msg.set_charset("utf-8")
+
+ msg["Subject"] = subject
+ msg["From"] = smtpSender
+ msg["To"] = ','.join(recipients)
+
+ try:
+ _uniBody = unicode(body,'utf-8','replace') if isinstance(body,str) else body
+ _encBody = _uniBody.encode('utf-8')
+
+ part1 = MIMEText(_encBody,'html',_charset='utf-8')
+ # possible UnicodeDecodeError before this
+
+ msg.attach(part1)
+
+ session = smtplib.SMTP(smtpServer, smtpPort)
+
+ session.ehlo()
+ session.starttls()
+        session.ehlo()
+ session.login(smtpLogin, smtpPassword)
+ # print msg.as_string()
+ session.sendmail(smtpSender, recipients, msg.as_string())
+ session.quit()
+ return True
+
+    except Exception,e:
+        raise MoletException('The message "%s" could not be sent.' % subject)
+
+
+###########################################################################
+# file and directory management
+
+import os,shutil
+def removeFile(f,ignoreWarning = False):
+ """ removes a file """
+ try:
+ os.remove(f)
+ return True
+ except OSError,e:
+ if e.errno!=2:
+ raise e
+ if not ignoreWarning:
+ raise MoletException("path %s could not be removed" % f)
+ return False
+
+def removeTree(path,ignoreWarning = False):
+ try:
+ shutil.rmtree(path)
+ return True
+ except OSError,e:
+ if e.errno!=2:
+ raise e
+ if not ignoreWarning:
+ raise MoletException("directory %s could not be removed" % path)
+ return False
+
+def mkdir(path,mode = 0755,ignoreWarning = False):
+ try:
+ os.mkdir(path,mode)
+ return True
+ except OSError,e:
+ if e.errno!=17: # exists
+ raise e
+ if not ignoreWarning:
+ raise MoletException("directory %s exists" % path)
+ return False
+
+def readIntFile(f):
+ try:
+ if(os.path.exists(f)):
+ with open(f,'r') as f:
+ r = f.readline()
+ i = int(r)
+ return i
+ else:
+ return None
+ except ValueError,e:
+ return None
+
+def writeIntFile(lfile,i):
+ with open(lfile,'w') as f:
+ f.write('%d\n' % i)
+
+###########################################################################
+# driver postgres
+
+import psycopg2
+import psycopg2.extras
+import psycopg2.extensions
+
+class DbCursor(object):
+ '''
+ with statement used to wrap a transaction. The transaction and cursor type
+    is defined by DbData object. The transaction is committed by the wrapper.
+ Several cursors can be opened with the connection.
+ usage:
+    dbData = DbData(dbBO,autocommit=True)
+
+ with DbCursor(dbData) as cur:
+ ... (always close con and cur)
+
+ '''
+ def __init__(self,dbData, dic = False,exit = False):
+ self.dbData = dbData
+ self.cur = None
+ self.dic = dic
+ self.exit = exit
+
+ def __enter__(self):
+ self.cur = self.dbData.getCursor(dic = self.dic)
+ return self.cur
+
+ def __exit__(self, type, value, traceback):
+ exit = self.exit
+
+ if self.cur:
+ self.cur.close()
+
+ if type is None:
+ self.dbData.commit()
+ exit = True
+ else:
+ self.dbData.rollback()
+ self.dbData.exception(value,msg='An exception occured while using the cursor',tipe=type,triceback=traceback)
+            #return False # propagate the exception
+ return exit
+
+import traceback
+class DbData(object):
+    ''' DbData(db,autocommit = True,login = None)
+ db defines DSN.
+ '''
+ def __init__(self,db,autocommit = True,login=None):
+ self.db = db
+ self.aut = autocommit
+ self.login = login
+
+ self.con=psycopg2.connect(self.getDSN())
+
+ if self.aut:
+ self.con.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
+
+ def getDSN(self):
+ if self.login:
+ _login = self.login
+ else:
+ _login = self.db.login
+ return "dbname='%s' user='%s' password='%s' host='%s' port='%s'" % (self.db.name,_login,self.db.password,self.db.host,self.db.port)
+
+ def getCursor(self,dic=False):
+ if dic:
+ cur = self.con.cursor(cursor_factory=psycopg2.extras.RealDictCursor)
+ else:
+ cur = self.con.cursor()
+ return cur
+
+ def commit(self):
+ if self.aut:
+ # raise MoletException("Commit called while autocommit")
+ return
+ self.con.commit()
+
+ def rollback(self):
+ if self.aut:
+ # raise MoletException("Rollback called while autocommit")
+ return
+ try:
+ self.con.rollback()
+ except psycopg2.InterfaceError,e:
+            self.exception(e,msg="Attempt to rollback while the connection was closed")
+
+ def exception(self,e,msg = None,tipe = None,triceback = None):
+ if msg:
+ print e,tipe,traceback.print_tb(triceback)
+ raise MoletException(msg)
+ else:
+ raise e
+
+ def close(self):
+ self.con.close()
+
+
+'''---------------------------------------------------------------------------
+miscellaneous
+---------------------------------------------------------------------------'''
+import datetime
+def utcNow():
+ return datetime.datetime.utcnow()
+
+import os
+import pwd
+import grp
+
+def get_username():
+ return pwd.getpwuid( os.getuid() )[ 0 ]
+
+def get_usergroup(_file):
+ stat_info = os.stat(_file)
+ uid = stat_info.st_uid
+ gid = stat_info.st_gid
+ user = pwd.getpwuid(uid)[0]
+ group = grp.getgrgid(gid)[0]
+ return (user, group)
+
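molet.py tolerates "already exists" and "does not exist" errors by inspecting errno before re-raising. The same idiom in a standalone sketch (Python 3 syntax and symbolic errno names, unlike the Python 2 code above; function names are illustrative):

```python
import errno
import os
import shutil
import tempfile

def mkdir_quiet(path, mode=0o755):
    """Create path; return False instead of raising if it already exists."""
    try:
        os.mkdir(path, mode)
        return True
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise
        return False

def remove_quiet(path):
    """Remove a file; return False instead of raising if it is missing."""
    try:
        os.remove(path)
        return True
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise
        return False

# demo on a throwaway directory
d = tempfile.mkdtemp()
sub = os.path.join(d, "data")
print(mkdir_quiet(sub))                        # True: created
print(mkdir_quiet(sub))                        # False: already there
print(remove_quiet(os.path.join(d, "nope")))   # False: nothing to remove
shutil.rmtree(d)
```

Any other errno (e.g. permission denied) is still propagated, so real failures are not silently swallowed.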
diff --git a/src/flowf--unpackaged--0.1.sql b/src/flowf--unpackaged--0.1.sql
index 217dd11..7e3c20e 100644
--- a/src/flowf--unpackaged--0.1.sql
+++ b/src/flowf--unpackaged--0.1.sql
@@ -1,3 +1,3 @@
-/* contrib/flow/flow--unpackaged--1.1.sql */
+/* contrib/flow/flow--unpackaged--0.1.sql */
diff --git a/src/sql/currencies.sql b/src/sql/currencies.sql
index 2b7f5ca..2d39385 100644
--- a/src/sql/currencies.sql
+++ b/src/sql/currencies.sql
@@ -1,317 +1,318 @@
CREATE TABLE townauth (
name text
);
GRANT SELECT ON townauth TO role_com;
+
COPY townauth (name) FROM stdin;
-google.com
+openbarter.org
\.
CREATE TABLE tcurrency (
name text
);
GRANT SELECT ON tcurrency TO role_com;
COPY tcurrency (name) FROM stdin;
ADP
AED
AFA
AFN
ALK
ALL
AMD
ANG
AOA
AOK
AON
AOR
ARA
ARP
ARS
ARY
ATS
AUD
AWG
AYM
AZM
AZN
BAD
BAM
BBD
BDT
BEC
BEF
BEL
BGJ
BGK
BGL
BGN
BHD
BIF
BMD
BND
BOB
BOP
BOV
BRB
BRC
BRE
BRL
BRN
BRR
BSD
BTN
BUK
BWP
BYB
BYR
BZD
CAD
CDF
CHC
CHE
CHF
CHW
CLF
CLP
CNX
CNY
COP
COU
CRC
CSD
CSJ
CSK
CUC
CUP
CVE
CYP
CZK
DDM
DEM
DJF
DKK
DOP
DZD
ECS
ECV
EEK
EGP
EQE
ERN
ESA
ESB
ESP
ETB
EUR
FIM
FJD
FKP
FRF
GBP
GEK
GEL
GHC
GHP
GHS
GIP
GMD
GNE
GNF
GNS
GQE
GRD
GTQ
GWE
GWP
GYD
HKD
HNL
HRD
HRK
HTG
HUF
IDR
IEP
ILP
ILR
ILS
INR
IQD
IRR
ISJ
ISK
ITL
JMD
JOD
JPY
KES
KGS
KHR
KMF
KPW
KRW
KWD
KYD
KZT
LAJ
LAK
LBP
LKR
LRD
LSL
LSM
LTL
LTT
LUC
LUF
LUL
LVL
LVR
LYD
MAD
MAF
MDL
MGA
MGF
MKD
MLF
MMK
MNT
MOP
MRO
MTL
MTP
MUR
MVQ
MVR
MWK
MXN
MXP
MXV
MYR
MZE
MZM
MZN
NAD
NGN
NIC
NIO
NLG
NOK
NPR
NZD
OMR
PAB
PEH
PEI
PEN
PES
PGK
PHP
PKR
PLN
PLZ
PTE
PYG
QAR
RHD
ROK
ROL
RON
RSD
RUB
RUR
RWF
SAR
SBD
SCR
SDD
SDG
SDP
SEK
SGD
SHP
SIT
SKK
SLL
SOS
SRD
SRG
SSP
STD
SUR
SVC
SYP
SZL
THB
TJR
TJS
TMM
TMT
TND
TOP
TPE
TRL
TRY
TTD
TWD
TZS
UAH
UAK
UGS
UGW
UGX
USD
USN
USS
UYI
UYN
UYP
UYU
UZS
VEB
VEF
VNC
VND
VUV
WST
XAF
XAG
XAU
XBA
XBB
XBC
XBD
XCD
XDR
XEU
XFO
XFU
XOF
XPD
XPF
XPT
XRE
XSU
XTS
XUA
XXX
YDD
YER
YUD
YUM
YUN
ZAL
ZAR
ZMK
ZRN
ZRZ
ZWC
ZWD
ZWL
ZWN
ZWR
\.
diff --git a/src/sql/model.sql b/src/sql/model.sql
index 994e601..5e769f4 100644
--- a/src/sql/model.sql
+++ b/src/sql/model.sql
@@ -1,46 +1,42 @@
\set ECHO none
/* roles.sql must be executed previously */
\set ON_ERROR_STOP on
BEGIN;
drop schema if exists market cascade;
create schema market;
set search_path to market;
--- DROP EXTENSION IF EXISTS flowf;
--- CREATE EXTENSION flowf WITH VERSION '0.1';
-
-
\i sql/roles.sql
GRANT USAGE ON SCHEMA market TO role_com;
\i sql/util.sql
\i sql/tables.sql
\i sql/pushpull.sql
\i sql/prims.sql
\i sql/currencies.sql
\i sql/algo.sql
\i sql/openclose.sql
create view vord as (SELECT
(ord).id,
(ord).oid,
own,
(ord).qtt_requ,
(ord).qua_requ,
CASE WHEN (ord).type=1 THEN 'limit' ELSE 'best' END typ,
(ord).qtt_prov,
(ord).qua_prov,
(ord).qtt,
(ord).own as own_id,
usr
-- duration
FROM market.torder order by (ord).id asc);
select fsetvar('INSTALLED',1);
COMMIT;
select * from fversion();
\echo model installed.
diff --git a/src/sql/openclose.sql b/src/sql/openclose.sql
index 5c27710..698a602 100644
--- a/src/sql/openclose.sql
+++ b/src/sql/openclose.sql
@@ -1,329 +1,449 @@
/*
--------------------------------------------------------------------------------
-- BGW_OPENCLOSE
--------------------------------------------------------------------------------
openclose() is processed by postgres in background using a background worker called
BGW_OPENCLOSE (see src/worker_ob.c).
It performs transitions between daily market phases. Surprisingly, the sequence
of operations does not depend on time and is always performed in the same order.
They are just special operations waiting until the end of the current gphase.
*/
--------------------------------------------------------------------------------
INSERT INTO tvar(name,value) VALUES
- ('OC_CURRENT_PHASE',101); -- phase of the market when it's model is settled
+    ('OC_CURRENT_PHASE',102), -- phase of the market when its model is settled
+ ('OC_BGW_CONSUMESTACK_ACTIVE',1),
+ ('OC_BGW_OPENCLOSE_ACTIVE',0);
CREATE TABLE tmsgdaysbefore(LIKE tmsg);
GRANT SELECT ON tmsgdaysbefore TO role_com;
-- index and unique constraint are not cloned
--------------------------------------------------------------------------------
CREATE FUNCTION openclose() RETURNS int AS $$
/*
The structure of the code is:
_phase := OC_CURRENT_PHASE
CASE _phase
....
WHEN X THEN
dowait := operation_of_phase(X)
OC_CURRENT_PHASE := _next_phase
....
return dowait
This code is executed by the BGW_OPENCLOSE doing following:
while(true)
dowait := market.openclose()
if (dowait >=0):
wait for dowait milliseconds
elif dowait == -100:
VACUUM FULL
else:
error
*/
DECLARE
_phase int;
_dowait int := 0;
_rp yerrorprim;
_stock_id int;
_owner text;
_done boolean;
BEGIN
+ IF(fgetvar('OC_BGW_OPENCLOSE_ACTIVE')=0) THEN
+ _dowait := 60000;
+ -- waits one minute before testing again
+ RETURN _dowait;
+ END IF;
+
_phase := fgetvar('OC_CURRENT_PHASE');
CASE _phase
------------------------------------------------------------------------
-- GPHASE 0 BEGIN OF DAY --
------------------------------------------------------------------------
WHEN 0 THEN -- creating the timetable of the day
PERFORM foc_create_timesum();
-- tmsg is archived to tmsgdaysbefore
WITH t AS (DELETE FROM tmsg RETURNING * )
INSERT INTO tmsgdaysbefore SELECT * FROM t ;
TRUNCATE tmsg;
PERFORM setval('tmsg_id_seq',1,false);
PERFORM foc_next(1,'tmsg archived');
- WHEN 1 THEN -- waiting for opening
+ WHEN 1 THEN -- asking for VACUUM FULL execution
+ _dowait := 60000; -- 1 minute
+ -- vacuum.bash should be called
+
+ WHEN 2 THEN -- VACUUM FULL execution
+ _dowait := 60000; -- 1 minute
+
+ WHEN 3 THEN -- waiting for opening
+ -- vacuum.bash is finished
IF(foc_in_gphase(_phase)) THEN
_dowait := 60000; -- 1 minute
ELSE
PERFORM foc_next(101,'Start opening sequence');
END IF;
------------------------------------------------------------------------
-- GPHASE 1 MARKET OPENED --
------------------------------------------------------------------------
WHEN 101 THEN -- client access opening.
REVOKE role_co_closed FROM role_client;
GRANT role_co TO role_client;
PERFORM foc_next(102,'Client access opened');
WHEN 102 THEN -- market is opened to client access, waiting for closing.
IF(foc_in_gphase(_phase)) THEN
PERFORM foc_clean_outdated_orders();
_dowait := 60000; -- 1 minute
ELSE
PERFORM foc_next(120,'Start closing');
END IF;
WHEN 120 THEN -- market is closing.
REVOKE role_co FROM role_client;
GRANT role_co_closed TO role_client;
PERFORM foc_next(121,'Client access revoked');
WHEN 121 THEN -- waiting until the stack is empty
        -- checks whether BGW_CONSUMESTACK purged the stack
_done := fstackdone();
IF(not _done) THEN
_dowait := 60000;
-- waits one minute before testing again
ELSE
-- the stack is purged
PERFORM foc_next(200,'Last primitives performed');
END IF;
------------------------------------------------------------------------
-- GPHASE 2 - MARKET CLOSED --
------------------------------------------------------------------------
WHEN 200 THEN -- removing orders of the order book
SELECT (o.ord).id,w.name INTO _stock_id,_owner FROM torder o
INNER JOIN towner w ON w.id=(o.ord).own
WHERE (o.ord).oid = (o.ord).id LIMIT 1;
IF(FOUND) THEN
_rp := fsubmitrmorder(_owner,_stock_id);
IF(_rp.error.code != 0 ) THEN
RAISE EXCEPTION 'Error while removing orders %',_rp;
END IF;
            -- repeat until the order book is empty
ELSE
PERFORM foc_next(201,'Order book is emptied');
END IF;
WHEN 201 THEN -- waiting until the stack is empty
        -- checks whether BGW_CONSUMESTACK purged the stack
_done := fstackdone();
IF(not _done) THEN
_dowait := 60000;
-- waits one minute before testing again
ELSE
-- the stack is purged
PERFORM foc_next(202,'rm primitives are processed');
END IF;
WHEN 202 THEN -- truncating tables except tmsg
truncate torder;
truncate tstack;
PERFORM setval('tstack_id_seq',1,false);
PERFORM setval('tmvt_id_seq',1,false);
truncate towner;
PERFORM setval('towner_id_seq',1,false);
        PERFORM foc_next(203,'tables torder,tstack,tmvt,towner are truncated');
- WHEN 203 THEN -- asking for VACUUM FULL execution
-
- _dowait := -100;
- PERFORM foc_next(204,'VACUUM FULL is lauched');
- WHEN 204 THEN -- waiting till the end of the day
+ WHEN 203 THEN -- waiting till the end of the day
IF(foc_in_gphase(_phase)) THEN
_dowait := 60000; -- 1 minute
-- waits before testing again
ELSE
PERFORM foc_next(0,'End of the day');
END IF;
ELSE
RAISE EXCEPTION 'Should not reach this point with phase=%',_phase;
END CASE;
-- RAISE LOG 'Phase=% _dowait=%',_phase,_dowait;
RETURN _dowait;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION openclose() TO role_bo;
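The driver loop described in the comment above (implemented in C by the BGW_OPENCLOSE worker in src/worker_ob.c) can be sketched in Python; `market.openclose()` and the VACUUM FULL step are stubbed here with illustrative callables:

```python
import time

def run_worker(openclose, vacuum_full, max_iters=100, sleep=time.sleep):
    """Poll openclose(); sleep dowait milliseconds, run VACUUM FULL on -100."""
    for _ in range(max_iters):
        dowait = openclose()
        if dowait >= 0:
            sleep(dowait / 1000.0)       # dowait is in milliseconds
        elif dowait == -100:
            vacuum_full()
        else:
            raise RuntimeError("unexpected dowait=%d" % dowait)

# stub: two idle polls, a VACUUM FULL request, then one more poll
script = iter([0, 60000, -100, 0])
events = []
run_worker(lambda: next(script), lambda: events.append("VACUUM FULL"),
           max_iters=4, sleep=lambda s: events.append(s))
print(events)  # [0.0, 60.0, 'VACUUM FULL', 0.0]
```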
+-------------------------------------------------------------------------------
+CREATE FUNCTION fstart_phases(_start int) RETURNS text AS $$
+/* */
+DECLARE
+ _txt text;
+ _phase int;
+BEGIN
+ _phase := fgetvar('OC_CURRENT_PHASE');
+ IF( _start = 1 ) THEN
+
+ IF( fgetconst('DEBUG')!= 0) THEN
+ RAISE EXCEPTION 'Cannot be started while DEBUG=True';
+ END IF;
+
+ IF(fgetconst('QUAPROVUSR')= 0) THEN
+ RAISE EXCEPTION 'Cannot be started while QUAPROVUSR=0';
+ END IF;
+
+ IF(fgetconst('OWNUSR')= 0) THEN
+ RAISE EXCEPTION 'Cannot be started while OWNUSR=0';
+ END IF;
+
+ IF(fgetvar('OC_BGW_OPENCLOSE_ACTIVE')!=0) THEN
+ RETURN 'Already started OC_BGW_OPENCLOSE_ACTIVE=true';
+ END IF;
+
+ UPDATE tvar set VALUE=1 WHERE name = 'OC_BGW_OPENCLOSE_ACTIVE';
+ _txt := 'Started - ' || fget_settings();
+ ELSIF (_start = -1 ) THEN
+
+
+ IF(fgetvar('OC_BGW_OPENCLOSE_ACTIVE')=0) THEN
+ RETURN 'Already stopped OC_BGW_OPENCLOSE_ACTIVE=false';
+ END IF;
+
+ UPDATE tvar set VALUE=0 WHERE name = 'OC_BGW_OPENCLOSE_ACTIVE';
+ _txt := 'Stopped - ' || fget_settings();
+ ELSE
+ _txt := fget_settings();
+ END IF;
+
+ RETURN _txt;
+END;
+$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
+-------------------------------------------------------------------------------
+CREATE FUNCTION fget_settings() RETURNS text AS $$
+/* */
+DECLARE
+ _txt text;
+BEGIN
+ _txt := 'OC_CURRENT_PHASE=' || fgetvar('OC_CURRENT_PHASE')
+     || ' - QUAPROVUSR=' || fgetconst('QUAPROVUSR')
+     || ' - OWNUSR=' || fgetconst('OWNUSR')
+     || ' - OC_BGW_OPENCLOSE_ACTIVE=' || fgetvar('OC_BGW_OPENCLOSE_ACTIVE')
+     || ' - OC_BGW_CONSUMESTACK_ACTIVE=' || fgetvar('OC_BGW_CONSUMESTACK_ACTIVE')
+     || ' - DEBUG=' || fgetconst('DEBUG');
+ RETURN _txt;
+END;
+$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
+-------------------------------------------------------------------------------
+CREATE FUNCTION fset_debug(_debug boolean) RETURNS text AS $$
+/* */
+DECLARE
+ _txt text;
+BEGIN
+ IF(fgetvar('OC_BGW_OPENCLOSE_ACTIVE')!=0) THEN
+ RAISE EXCEPTION 'select fstart_phases(-1) must be called first';
+ END IF;
+
+ IF(_debug) THEN
+ UPDATE tconst set VALUE=0 WHERE name in ('QUAPROVUSR','OWNUSR');
+ UPDATE tconst set VALUE=1 WHERE name = 'DEBUG';
+ UPDATE tvar set VALUE=1 WHERE name = 'OC_BGW_CONSUMESTACK_ACTIVE';
+ _txt := 'Test conditions set. ' || fget_settings();
+ ELSE
+ UPDATE tconst set VALUE=1 WHERE name in ('QUAPROVUSR','OWNUSR');
+ UPDATE tconst set VALUE=0 WHERE name = 'DEBUG';
+ UPDATE tvar set VALUE=1 WHERE name = 'OC_BGW_CONSUMESTACK_ACTIVE';
+ _txt := 'Test conditions reset. ' || fget_settings();
+ END IF;
+ RETURN _txt;
+END;
+$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
-------------------------------------------------------------------------------
CREATE FUNCTION foc_next(_phase int,_msg text) RETURNS void AS $$
BEGIN
PERFORM fsetvar('OC_CURRENT_PHASE',_phase);
RAISE LOG 'MARKET PHASE %: %',_phase,_msg;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION foc_next(int,text) TO role_bo;
--------------------------------------------------------------------------------
INSERT INTO tvar(name,value) VALUES
('OC_CURRENT_OPENED',0); -- sub-state of the opened phase
CREATE FUNCTION foc_clean_outdated_orders() RETURNS void AS $$
/* every 5 calls, cleans outdated orders */
DECLARE
_cnt int;
BEGIN
UPDATE tvar SET value=((value+1) % 5 ) WHERE name='OC_CURRENT_OPENED'
RETURNING value INTO _cnt ;
IF(_cnt !=0) THEN
RETURN;
END IF;
-- delete outdated order from the order book and related sub-orders
DELETE FROM torder o USING torder po
WHERE (o.ord).oid = (po.ord).id -- having a parent order that
AND NOT (po.duration IS NULL) -- have a timeout defined
AND (po.created + po.duration) <= clock_timestamp(); -- and is outdated
RETURN;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION foc_clean_outdated_orders() TO role_bo;
+--------------------------------------------------------------------------------
+/* used by vacuum.bash, which does the following:
+while foc_start_vacuum(true) != 2
+ wait
+VACUUM FULL
+foc_start_vacuum(false)
+*/
+
+CREATE FUNCTION foc_start_vacuum(_start boolean) RETURNS int AS $$
+DECLARE
+ _ret int := 0;
+ _phase int;
+BEGIN
+ _phase := fgetvar('OC_CURRENT_PHASE');
+ IF ( _start) THEN
+ IF (_phase = 1) THEN
+ PERFORM foc_next(2,'VACUUM started');
+ _ret := 2;
+ END IF;
+ ELSE
+ IF (_phase = 2) THEN
+ PERFORM foc_next(3,'VACUUM performed');
+ END IF;
+ END IF;
+
+ RETURN _ret;
+END;
+$$ LANGUAGE PLPGSQL set search_path = market,public;
+
+
--------------------------------------------------------------------------------
CREATE VIEW vmsg3 AS WITH t AS (SELECT * from tmsg WHERE usr = session_user
UNION ALL SELECT * from tmsgdaysbefore WHERE usr = session_user
) SELECT created,id,typ,jso
from t order by created ASC,id ASC;
GRANT SELECT ON vmsg3 TO role_com;
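--------------------------------------------------------------------------------
/* Example (illustrative only, hypothetical session): a client polls its
pending messages through vmsg3:

	SELECT created,id,typ,jso FROM market.vmsg3;

Each 'response' row carries the json of the yj_primitive produced for a
primitive this user submitted. */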
/*
--------------------------------------------------------------------------------
-- TIME-DEPENDENT FUNCTION foc_in_gphase(_phase int)
--------------------------------------------------------------------------------
The day is divided into 3 gphases. The table tdelay defines the durations of these
gphases. When the model is installed, and again each day, the table timesum is built
from tdelay to define the market schedule. foc_in_gphase(_phase) returns true when
the current time falls within the schedule of _phase.
*/
--------------------------------------------------------------------------------
-- delays of the phases
--------------------------------------------------------------------------------
create table tdelay(
id serial,
delay interval
);
GRANT SELECT ON tdelay TO role_com;
/*
NB_GPHASE = 3, with id between [0,NB_GPHASE-1]
delays are defined for [0,NB_GPHASE-2]; the last gphase waits for the end of the day
OC_DELAY_i is the duration of a gphase for i in [0,NB_GPHASE-2]
*/
INSERT INTO tdelay (delay) VALUES
('30 minutes'::interval), -- starts at 0h 30'
('23 hours'::interval) -- stops at 23h 30'
-- sum of delays < 24 hours
;
--------------------------------------------------------------------------------
CREATE FUNCTION foc_create_timesum() RETURNS void AS $$
/* creates the table timesum from tdelay, where each record defines,
for a gphase, the delay between the beginning of the day
and the end of that gphase.
builds timesum with rows (k,ends) such that:
ends[0] = 0
ends[k] = sum(tdelay[i] for i in [0,k])
*/
DECLARE
_inter interval;
_cnt int;
BEGIN
-- DROP TABLE IF EXISTS timesum;
SELECT count(*) INTO STRICT _cnt FROM tdelay;
CREATE TABLE timesum (id,ends) AS
SELECT t.id,sum(d.delay) OVER w FROM generate_series(1,_cnt) t(id)
LEFT JOIN tdelay d ON (t.id=d.id) WINDOW w AS (order by t.id );
INSERT INTO timesum VALUES (0,'0'::interval);
SELECT max(ends) INTO _inter FROM timesum;
IF( _inter >= '24 hours'::interval) THEN
RAISE EXCEPTION 'sum(delay) = % > 24 hours',_inter;
END IF;
END;
$$ LANGUAGE PLPGSQL set search_path = market,public;
GRANT EXECUTE ON FUNCTION foc_create_timesum() TO role_bo;
select market.foc_create_timesum();
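/* With the tdelay rows inserted above ('30 minutes','23 hours'), the
resulting timesum is (derivation, not executed):
	id | ends
	 0 | 00:00:00
	 1 | 00:30:00
	 2 | 23:30:00
*/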
--------------------------------------------------------------------------------
CREATE FUNCTION foc_in_gphase(_phase int) RETURNS boolean AS $$
/* returns TRUE when current time is between the limits of the gphase
gphase is defined as _phase/100 */
DECLARE
_actual_gphase int := _phase /100;
_planned_gphase int;
_time interval;
BEGIN
-- the time since the beginning of the day
_time := now() - date_trunc('day',now());
SELECT max(id) INTO _planned_gphase FROM timesum where ends < _time ;
-- _planned_gphase is such as
-- _time is in the interval (timesum[ _planned_gphase ],timesum[ _planned_gphase+1 ])
IF (_planned_gphase = _actual_gphase) THEN
RETURN true;
ELSE
RETURN false;
END IF;
END;
$$ LANGUAGE PLPGSQL set search_path = market,public;
GRANT EXECUTE ON FUNCTION foc_in_gphase(int) TO role_bo;
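/* Example (derivation, not executed): the gphase of a phase is _phase/100.
With the default delays above, at 12:00 the planned gphase is 1 (the max id of
timesum with ends < '12:00'), so foc_in_gphase(102) returns true while
foc_in_gphase(203) returns false. */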
diff --git a/src/sql/prims.sql b/src/sql/prims.sql
index 650350a..66f39c6 100644
--- a/src/sql/prims.sql
+++ b/src/sql/prims.sql
@@ -1,525 +1,542 @@
/*
--------------------------------------------------------------------------------
-- Market primitives
--------------------------------------------------------------------------------
A function fsubmit<primitive> is used by users to push the primitive onto the stack.
A function fprocess<primitive> describes the checks made before the primitive is pushed
onto the stack, and its processing when it is popped from the stack.
A type yp_<primitive> is defined to json encode the primitive parameters in the stack.
*/
--------------------------------------------------------------------------------
CREATE TYPE eprimphase AS ENUM ('submit', 'execute');
--------------------------------------------------------------------------------
-- order primitive
--------------------------------------------------------------------------------
CREATE TYPE yp_order AS (
kind eprimitivetype,
type eordertype,
owner dtext,
qua_requ dtext,
qtt_requ dqtt,
qua_prov dtext,
qtt_prov dqtt
);
--------------------------------------------------------------------------------
CREATE FUNCTION fsubmitorder(_type eordertype,_owner dtext,
_qua_requ dtext,_qtt_requ dqtt,_qua_prov dtext,_qtt_prov dqtt)
RETURNS yerrorprim AS $$
DECLARE
_res yj_primitive;
_prim yp_order;
BEGIN
_prim := ROW('order',_type,_owner,_qua_requ,_qtt_requ,_qua_prov,_qtt_prov)::yp_order;
_res := fprocessorder('submit',NULL,_prim);
RETURN ROW(_res.id,_res.error)::yerrorprim;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION fsubmitorder(eordertype,dtext,dtext,dqtt,dtext,dqtt) TO role_co;
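--------------------------------------------------------------------------------
/* Example (hypothetical owner and quality names): submitting a limit order
requesting 10 units of 'qB' against 5 provided units of 'qA':

	SELECT * FROM market.fsubmitorder('limit','w_alice','qB',10,'qA',5);

The call only returns a yerrorprim (id,error); the order itself is executed
later by BGW_CONSUMESTACK and the result is delivered as a 'response' row
of tmsg. */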
--------------------------------------------------------------------------------
CREATE FUNCTION fprocessorder(_phase eprimphase, _t tstack,_s yp_order)
RETURNS yj_primitive AS $$
DECLARE
_r yj_error;
_res yj_primitive;
_wid int;
_ir int;
_o yorder;
BEGIN
_r := ROW(0,NULL)::yj_error; -- code,reason
CASE
WHEN (_phase = 'submit') THEN
_r := fchecknameowner(_r,_s.owner,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_r := fcheckquaprovusr(_r,_s.qua_prov,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_res := fpushprimitive(_r,'order',row_to_json(_s));
RETURN _res;
WHEN (_phase = 'execute') THEN
/*
IF(
(_s.duration IS NOT NULL) AND (_s.submitted + _s.duration) < clock_timestamp()
) THEN
_r.reason := 'barter order - the order is too old';
_r.code := -19;
END IF; */
_wid := fgetowner(_s.owner);
_o := ROW(CASE WHEN _s.type='limit' THEN 1 ELSE 2 END,
_t.id,_wid,_t.id,
_s.qtt_requ,_s.qua_requ,_s.qtt_prov,_s.qua_prov,_s.qtt_prov,
box('(0,0)'::point,'(0,0)'::point),box('(0,0)'::point,'(0,0)'::point),
0.0,earth_get_square('(0,0)'::point,0.0)
)::yorder;
_ir := insertorder(_s.owner,_o,_t.usr,_t.submitted,'1 day');
RETURN ROW(_t.id,_r,_t.jso,
row_to_json(ROW(_o.id,_o.qtt,_o.qua_prov,_s.owner,_t.usr)::yj_stock),
NULL
)::yj_primitive;
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
-- child order primitive
--------------------------------------------------------------------------------
CREATE TYPE yp_childorder AS (
kind eprimitivetype,
owner dtext,
qua_requ dtext,
qtt_requ dqtt,
stock_id int
);
--------------------------------------------------------------------------------
CREATE FUNCTION fsubmitchildorder(_owner dtext,_qua_requ dtext,_qtt_requ dqtt,_stock_id int)
RETURNS yerrorprim AS $$
DECLARE
_res yj_primitive;
_prim yp_childorder;
BEGIN
_prim := ROW('childorder',_owner,_qua_requ,_qtt_requ,_stock_id)::yp_childorder;
_res := fprocesschildorder('submit',NULL,_prim);
RETURN ROW(_res.id,_res.error)::yerrorprim;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION fsubmitchildorder(dtext,dtext,dqtt,int) TO role_co;
--------------------------------------------------------------------------------
CREATE FUNCTION fprocesschildorder(_phase eprimphase, _t tstack,_s yp_childorder)
RETURNS yj_primitive AS $$
DECLARE
_r yj_error;
_res yj_primitive;
_wid int;
_otype int;
_ir int;
_o yorder;
_op torder%rowtype;
_sp tstack%rowtype;
BEGIN
_r := ROW(0,NULL)::yj_error; -- code,reason
CASE
WHEN (_phase = 'submit') THEN
_r := fchecknameowner(_r,_s.owner,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_r := fcheckquaprovusr(_r,_s.qua_requ,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_wid := fgetowner(_s.owner);
SELECT * INTO _op FROM torder WHERE
(ord).id = _s.stock_id AND usr = session_user AND (ord).own = _wid;
IF (NOT FOUND) THEN -- not found in the order book
-- but could be found in the stack
SELECT * INTO _sp FROM tstack WHERE
id = _s.stock_id AND usr = session_user AND _s.owner = jso->>'owner' AND kind='order';
IF (NOT FOUND) THEN
_r.code := -200;
_r.reason := 'the order was not found for this user and owner';
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
END IF;
_res := fpushprimitive(_r,'childorder',row_to_json(_s));
RETURN _res;
WHEN (_phase = 'execute') THEN
_wid := fgetowner(_s.owner);
SELECT * INTO _op FROM torder WHERE
(ord).id = _s.stock_id AND usr = session_user AND (ord).own = _wid;
IF (NOT FOUND) THEN
_r.code := -201;
_r.reason := 'the stock is not in the order book';
RETURN ROW(_t.id,_r,_t.jso,NULL,NULL)::yj_primitive;
END IF;
_o := _op.ord;
_o.id := _t.id; -- the stack id becomes the id of the child order
_o.qua_requ := _s.qua_requ;
_o.qtt_requ := _s.qtt_requ;
_ir := insertorder(_s.owner,_o,_t.usr,_t.submitted,_op.duration);
RETURN ROW(_t.id,_r,_t.jso,NULL,NULL)::yj_primitive;
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
-- rm primitive
--------------------------------------------------------------------------------
CREATE TYPE yp_rmorder AS (
kind eprimitivetype,
owner dtext,
stock_id int
);
--------------------------------------------------------------------------------
CREATE FUNCTION fsubmitrmorder(_owner dtext,_stock_id int)
RETURNS yerrorprim AS $$
DECLARE
_res yj_primitive;
_prim yp_rmorder;
BEGIN
_prim := ROW('rmorder',_owner,_stock_id)::yp_rmorder;
_res := fprocessrmorder('submit',NULL,_prim);
RETURN ROW(_res.id,_res.error)::yerrorprim;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION fsubmitrmorder(dtext,int) TO role_co,role_bo;
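--------------------------------------------------------------------------------
/* Example (hypothetical owner and stock id): asking for the removal of a
parent order and its sub-orders from the order book:

	SELECT * FROM market.fsubmitrmorder('w_alice',42);
*/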
--------------------------------------------------------------------------------
CREATE FUNCTION fprocessrmorder(_phase eprimphase, _t tstack,_s yp_rmorder)
RETURNS yj_primitive AS $$
DECLARE
_r yj_error;
_res yj_primitive;
_wid int;
_otype int;
_ir int;
_o yorder;
_opy yorder; -- parent_order
_op torder%rowtype;
_te text;
_pusr text;
_sp tstack%rowtype;
BEGIN
_r := ROW(0,NULL)::yj_error; -- code,reason
CASE
WHEN (_phase = 'submit') THEN
_r := fchecknameowner(_r,_s.owner,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_wid := fgetowner(_s.owner);
SELECT * INTO _op FROM torder WHERE
(ord).id = _s.stock_id AND usr = session_user AND (ord).own = _wid AND (ord).id=(ord).oid;
IF (NOT FOUND) THEN -- not found in the order book
-- but could be found in the stack
SELECT * INTO _sp FROM tstack WHERE
id = _s.stock_id AND usr = session_user AND _s.owner = jso->>'owner' AND kind='order'; -- a stacked parent order always has id=oid
IF (NOT FOUND) THEN
_r.code := -300;
_r.reason := 'the order was not found for this user and owner';
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
END IF;
_res := fpushprimitive(_r,'rmorder',row_to_json(_s));
RETURN _res;
WHEN (_phase = 'execute') THEN
_wid := fgetowner(_s.owner);
SELECT * INTO _op FROM torder WHERE
(ord).id = _s.stock_id AND usr = session_user AND (ord).own = _wid AND (ord).id=(ord).oid;
IF (NOT FOUND) THEN
_r.code := -301;
_r.reason := 'the stock is not in the order book';
RETURN ROW(_t.id,_r,_t.jso,NULL,NULL)::yj_primitive;
END IF;
-- delete order and sub-orders from the book
DELETE FROM torder o WHERE (o.ord).oid = (_op.ord).oid;
-- id,error,primitive,result
RETURN ROW(_t.id,_r,_t.jso,
ROW((_op.ord).id,(_op.ord).qtt,(_op.ord).qua_prov,_s.owner,_op.usr)::yj_stock,
ROW((_op.ord).qua_prov,(_op.ord).qtt)::yj_value
)::yj_primitive;
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
-- quote
--------------------------------------------------------------------------------
CREATE FUNCTION fsubmitquote(_type eordertype,_owner dtext,
_qua_requ dtext,_qtt_requ dqtt,_qua_prov dtext,_qtt_prov dqtt)
RETURNS yerrorprim AS $$
DECLARE
_res yj_primitive;
_prim yp_order;
BEGIN
_prim := ROW('quote',_type,_owner,_qua_requ,_qtt_requ,_qua_prov,_qtt_prov)::yp_order;
_res := fprocessquote('submit',NULL,_prim);
RETURN ROW(_res.id,_res.error)::yerrorprim;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION fsubmitquote(eordertype,dtext,dtext,dqtt,dtext,dqtt) TO role_co;
--------------------------------------------------------------------------------
CREATE FUNCTION fprocessquote(_phase eprimphase, _t tstack,_s yp_order)
RETURNS yj_primitive AS $$
DECLARE
_r yj_error;
_res yj_primitive;
_wid int;
_ir int;
_o yorder;
_type int;
_json_res json;
BEGIN
_r := ROW(0,NULL)::yj_error; -- code,reason
CASE
WHEN (_phase = 'submit') THEN
_r := fchecknameowner(_r,_s.owner,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_r := fcheckquaprovusr(_r,_s.qua_prov,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_res := fpushprimitive(_r,'quote',row_to_json(_s));
RETURN _res;
WHEN (_phase = 'execute') THEN
_wid := fgetowner(_s.owner);
_type := CASE WHEN _s.type='limit' THEN 1 ELSE 2 END;
_o := ROW( _type,
_t.id,_wid,_t.id,
_s.qtt_requ,_s.qua_requ,_s.qtt_prov,_s.qua_prov,_s.qtt_prov,
box('(0,0)'::point,'(0,0)'::point),box('(0,0)'::point,'(0,0)'::point),
0.0,earth_get_square('(0,0)'::point,0.0)
)::yorder;
_json_res := fproducequote(_o,true,false,_s.type='limit',false);
RETURN ROW(_t.id,_r,_t.jso,_json_res,NULL)::yj_primitive;
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
END;
$$ LANGUAGE PLPGSQL;
/*
--------------------------------------------------------------------------------
-- primitive processing
--------------------------------------------------------------------------------
gathers all primitive executions in a single function
*/
--------------------------------------------------------------------------------
CREATE FUNCTION fprocessprimitive( _s tstack)
RETURNS yj_primitive AS $$
DECLARE
_res yj_primitive;
_kind eprimitivetype;
BEGIN
_kind := _s.kind;
CASE
WHEN (_kind = 'order' ) THEN
_res := fprocessorder('execute',_s,json_populate_record(NULL::yp_order,_s.jso));
WHEN (_kind = 'childorder' ) THEN
_res := fprocesschildorder('execute',_s,json_populate_record(NULL::yp_childorder,_s.jso));
WHEN (_kind = 'rmorder' ) THEN
_res := fprocessrmorder('execute',_s,json_populate_record(NULL::yp_rmorder,_s.jso));
WHEN (_kind = 'quote' ) THEN
_res := fprocessquote('execute',_s,json_populate_record(NULL::yp_order,_s.jso));
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
RETURN _res;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
-- check params
-- code in [-9,0]
--------------------------------------------------------------------------------
CREATE FUNCTION
fcheckquaown(_r yj_error,_own dtext,
_qua_requ dtext,_pos_requ point,_qua_prov dtext,_pos_prov point,_dist float8)
RETURNS yj_error AS $$
DECLARE
_i int;
BEGIN
IF(NOT ((yflow_checktxt(_own)&1)=1)) THEN
_r.reason := '_own is empty string';
_r.code := -1;
RETURN _r;
END IF;
IF(_qua_prov IS NULL) THEN
IF(NOT ((yflow_checktxt(_qua_requ)&1)=1)) THEN
_r.reason := '_qua_requ is empty string';
_r.code := -2;
RETURN _r;
END IF;
ELSE
IF(_qua_prov = _qua_requ) THEN
_r.reason := 'qua_prov == qua_requ';
_r.code := -3;
return _r;
END IF;
_i = yflow_checkquaownpos(_own,_qua_requ,_pos_requ,_qua_prov,_pos_prov,_dist);
IF (_i != 0) THEN
_r.reason := 'rejected by yflow_checkquaownpos';
_r.code := _i; -- -9<=i<=-5
return _r;
END IF;
END IF;
RETURN _r;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
CREATE FUNCTION fcheckquaprovusr(_r yj_error,_qua_prov dtext,_usr dtext) RETURNS yj_error AS $$
+/* check the form of quality's name
+when QUAPROVUSR:
+
+ prefix@suffix
+ the suffix is the user's name
+ prefix
+ is a currency
+else:
+ no check
+*/
DECLARE
_QUAPROVUSR boolean := fgetconst('QUAPROVUSR')=1;
_p int;
_suffix text;
BEGIN
IF (NOT _QUAPROVUSR) THEN
RETURN _r;
END IF;
_p := position('@' IN _qua_prov);
IF (_p = 0) THEN
-- without prefix, it should be a currency
SELECT count(*) INTO _p FROM tcurrency WHERE _qua_prov = name;
IF (_p = 1) THEN
RETURN _r;
ELSE
_r.code := -12;
_r.reason := 'a provided quality that is not a currency must be prefixed';
RETURN _r;
END IF;
END IF;
-- with prefix
IF (char_length(substring(_qua_prov FROM 1 FOR (_p-1))) <1) THEN
_r.code := -13;
_r.reason := 'the prefix of the quality provided cannot be empty';
RETURN _r;
END IF;
_suffix := substring(_qua_prov FROM (_p+1));
- _suffix := replace(_suffix,'.','_'); -- change . to _
+ -- _suffix := replace(_suffix,'.','_'); -- change . to _
-- it must be the username
IF ( _suffix!= _usr) THEN
_r.code := -14;
_r.reason := 'the suffix of the quality provided must be the user name';
RETURN _r;
END IF;
- RETURN _r;
+ RETURN _r; -- _r unchanged
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
CREATE FUNCTION fchecknameowner(_r yj_error,_name dtext,_usr dtext) RETURNS yj_error AS $$
+/* check the form of owner's name
+when OWNUSR:
+ prefix@suffix
+ the suffix is the user's name, or a well-known auth provider
+else:
+ no check
+*/
DECLARE
_p int;
_OWNUSR boolean := fgetconst('OWNUSR')=1;
_suffix text;
BEGIN
IF (NOT _OWNUSR) THEN
RETURN _r;
END IF;
_p := position('@' IN _name);
IF (char_length(substring(_name FROM 1 FOR (_p-1))) <1) THEN
_r.code := -20;
_r.reason := 'the owner name has an empty prefix';
RETURN _r;
END IF;
_suffix := substring(_name FROM (_p+1));
SELECT count(*) INTO _p FROM townauth WHERE _suffix = name;
IF (_p = 1) THEN
RETURN _r; --well known auth provider
END IF;
-- change . to _
- _suffix := replace(_suffix,'.','_');
+ -- _suffix := replace(_suffix,'.','_');
IF ( _suffix= _usr) THEN
- RETURN _r; -- owners name suffixed by users name
+ RETURN _r; -- owner's name suffixed by the user's name, _r unchanged
END IF;
_r.code := -21;
_r.reason := 'if the owner name is not suffixed by a well-known auth provider, it must be suffixed by the user name';
RETURN _r;
END;
$$ LANGUAGE PLPGSQL;
diff --git a/src/sql/pushpull.sql b/src/sql/pushpull.sql
index ca91aff..007dffa 100644
--- a/src/sql/pushpull.sql
+++ b/src/sql/pushpull.sql
@@ -1,194 +1,205 @@
/*
--------------------------------------------------------------------------------
-- stack primitive management
--------------------------------------------------------------------------------
The stack is a FIFO (first in first out) used to submit primitives and execute
them in the order of submission.
*/
create table tstack (
id serial UNIQUE not NULL,
usr dtext,
kind eprimitivetype,
jso json,
submitted timestamp not NULL,
PRIMARY KEY (id)
);
comment on table tstack is 'Records the stack of primitives';
comment on column tstack.id is 'id of this primitive. For an order, it''s it is also the id of the order';
comment on column tstack.usr is 'user submitting the primitive';
comment on column tstack.kind is 'type of primitive';
comment on column tstack.jso is 'representation of the primitive';
comment on column tstack.submitted is 'timestamp when the primitive was successfully submitted';
alter sequence tstack_id_seq owned by tstack.id;
GRANT SELECT ON tstack TO role_com;
GRANT SELECT ON tstack_id_seq TO role_com;
INSERT INTO tvar(name,value) VALUES
('STACK_TOP',0), -- last primitive submitted
('STACK_EXECUTED',0); -- last primitive executed
/*
--------------------------------------------------------------------------------
fstackdone returns true when the stack is empty
*/
CREATE FUNCTION fstackdone()
RETURNS boolean AS $$
DECLARE
_top int;
_exe int;
BEGIN
SELECT value INTO _top FROM tvar WHERE name = 'STACK_TOP';
SELECT value INTO _exe FROM tvar WHERE name = 'STACK_EXECUTED';
RETURN (_top = _exe);
END;
$$ LANGUAGE PLPGSQL set search_path to market;
/*
--------------------------------------------------------------------------------
fpushprimitive is used to submit a primitive, recording its type, its parameters
and the name of the user that submits it.
*/
CREATE FUNCTION fpushprimitive(_r yj_error,_kind eprimitivetype,_jso json)
RETURNS yj_primitive AS $$
DECLARE
_tid int;
_ir int;
BEGIN
IF (_r.code!=0) THEN
RAISE EXCEPTION 'Primitive cannot be pushed due to error %: %',_r.code,_r.reason;
END IF;
+ -- serializes pushes: takes a row lock on STACK_TOP (tvar.name has a UNIQUE index)
+ SELECT value INTO _tid FROM tvar WHERE name = 'STACK_TOP' FOR UPDATE;
+
INSERT INTO tstack(usr,kind,jso,submitted)
VALUES (session_user,_kind,_jso,statement_timestamp())
RETURNING id into _tid;
UPDATE tvar SET value=_tid WHERE name = 'STACK_TOP';
RETURN ROW(_tid,_r,_jso,NULL,NULL)::yj_primitive;
END;
$$ LANGUAGE PLPGSQL;
/*
--------------------------------------------------------------------------------
-- consumestack()
--------------------------------------------------------------------------------
consumestack() is processed by postgres in background using a background_worker called
BGW_CONSUMESTACK (see src/worker_ob.c).
It consumes the stack of primitives, executing each primitive in the order of submission.
Each primitive is wrapped in a single transaction by the background_worker.
*/
CREATE FUNCTION consumestack() RETURNS int AS $$
/*
 * This code is executed by BGW_CONSUMESTACK, which does the following:
while(true)
dowait := market.consumestack()
if (dowait):
waits for dowait milliseconds
*/
DECLARE
_s tstack%rowtype;
_res yj_primitive;
_cnt int;
_txt text;
_detail text;
_ctx text;
+ _dowait int;
BEGIN
+
+ IF(fgetvar('OC_BGW_CONSUMESTACK_ACTIVE')=0) THEN
+ _dowait := 60000;
+ -- waits one minute before testing again
+ RETURN _dowait;
+ END IF;
+
DELETE FROM tstack
WHERE id IN (SELECT id FROM tstack ORDER BY id ASC LIMIT 1)
RETURNING * INTO _s;
IF(NOT FOUND) THEN -- if the stack is empty
RETURN 20; -- waits for 20 milliseconds
END IF;
-- else, process it
_res := fprocessprimitive(_s);
-- and records the result in tmsg
INSERT INTO tmsg (usr,typ,jso,created)
VALUES (_s.usr,'response',row_to_json(_res),statement_timestamp());
UPDATE tvar SET value=_s.id WHERE name = 'STACK_EXECUTED';
RETURN 0;
EXCEPTION WHEN OTHERS THEN
GET STACKED DIAGNOSTICS
_txt = MESSAGE_TEXT,
_detail = PG_EXCEPTION_DETAIL,
_ctx = PG_EXCEPTION_CONTEXT;
RAISE WARNING 'market.consumestack() failed:''%'' ''%'' ''%''',_txt,_detail,_ctx;
- RAISE WARNING 'for fprocessprimitive(%)',_s;
+ RAISE WARNING 'while fprocessprimitive(%)',_s;
RETURN 0;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market;
GRANT EXECUTE ON FUNCTION consumestack() TO role_bo;
/*
--------------------------------------------------------------------------------
-- message acknowledgement
--------------------------------------------------------------------------------
The execution result of a primitive submitted by a user is stored in tmsg or tmsgdaysbefore.
ackmsg(_id,_date) is called by this user to acknowledge this message.
*/
CREATE TABLE tmsgack (LIKE tmsg);
GRANT SELECT ON tmsgack TO role_com;
--------------------------------------------------------------------------------
CREATE FUNCTION ackmsg(_id int,_date date) RETURNS int AS $$
/*
If the message is found in tmsg or tmsgdaysbefore,
it is archived and 1 is returned; otherwise 0 is returned
*/
DECLARE
_cnt int;
BEGIN
WITH t AS (
DELETE FROM tmsg
WHERE id = _id and (created::date) = _date AND usr=session_user
RETURNING *
) INSERT INTO tmsgack SELECT * FROM t ;
GET DIAGNOSTICS _cnt = ROW_COUNT;
IF( _cnt = 0 ) THEN
WITH t AS (
DELETE FROM tmsgdaysbefore
WHERE id = _id and (created::date) = _date AND usr=session_user
RETURNING *
) INSERT INTO tmsgack SELECT * FROM t ;
GET DIAGNOSTICS _cnt = ROW_COUNT;
IF( _cnt = 0 ) THEN
RAISE INFO 'The message could not be found';
RETURN 0;
ELSIF(_cnt = 1) THEN
RETURN 1;
ELSE
RAISE EXCEPTION 'Error';
END IF;
ELSIF(_cnt = 1) THEN
RETURN 1;
ELSE
RAISE EXCEPTION 'Error';
END IF;
RETURN _cnt;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER;
GRANT EXECUTE ON FUNCTION ackmsg(int,date) TO role_com;
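--------------------------------------------------------------------------------
/* Example (hypothetical id and date): after reading a message from vmsg3,
the client archives it:

	SELECT market.ackmsg(42,'2014-01-01'::date);

ackmsg returns 1 when the message was found and archived, 0 otherwise. */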
diff --git a/src/sql/roles.sql b/src/sql/roles.sql
index c7b342b..9cb9d7b 100644
--- a/src/sql/roles.sql
+++ b/src/sql/roles.sql
@@ -1,100 +1,105 @@
/*
\set ECHO none
\set ON_ERROR_STOP on
-- script executed for the whole cluster
SET client_min_messages = warning;
SET log_error_verbosity = terse;
BEGIN; */
/* flowf extension */
-- drop extension if exists btree_gin cascade;
-- create extension btree_gin with version '1.0';
DROP EXTENSION IF EXISTS flowf;
CREATE EXTENSION flowf WITH VERSION '0.1';
--------------------------------------------------------------------------------
CREATE FUNCTION _create_role(_role text) RETURNS int AS $$
BEGIN
BEGIN
EXECUTE 'CREATE ROLE ' || _role;
EXCEPTION WHEN duplicate_object THEN
NULL;
END;
EXECUTE 'ALTER ROLE ' || _role || ' NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT NOREPLICATION';
RETURN 0;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
/* definition of roles
-- role_com ----> role_co --------->role_client---->clientA
| \-->clientB
|\-> role_co_closed
|
\-> role_bo---->user_bo
+ \-->user_vacuum
access by clients can be disabled/enabled with a single command:
REVOKE role_co FROM role_client
GRANT role_co TO role_client
opening/closing of market is performed by switching role_client
between role_co and role_co_closed
same thing for role batch with role_bo:
REVOKE role_bo FROM user_bo
GRANT role_bo TO user_bo
*/
--------------------------------------------------------------------------------
select _create_role('prod'); -- owner of market objects
ALTER ROLE prod WITH createrole;
/* so that prod can modify roles at opening and closing. */
select _create_role('role_com');
SELECT _create_role('role_co'); -- when market is opened
GRANT role_com TO role_co;
SELECT _create_role('role_co_closed'); -- when market is closed
GRANT role_com TO role_co_closed;
SELECT _create_role('role_client');
-GRANT role_co_closed TO role_client; -- maket phase 101
+GRANT role_co TO role_client; -- market phase 102
-- role_com ---> role_bo----> user_bo
SELECT _create_role('role_bo');
GRANT role_com TO role_bo;
-- ALTER ROLE role_bo INHERIT;
SELECT _create_role('user_bo');
GRANT role_bo TO user_bo;
-- two connections are allowed for background_workers
-- BGW_OPENCLOSE and BGW_CONSUMESTACK
ALTER ROLE user_bo WITH LOGIN CONNECTION LIMIT 2;
-
--------------------------------------------------------------------------------
-select _create_role('test_clienta');
-ALTER ROLE test_clienta WITH login;
-GRANT role_client TO test_clienta;
-
-select _create_role('test_clientb');
-ALTER ROLE test_clientb WITH login;
-GRANT role_client TO test_clientb;
-select _create_role('test_clientc');
-ALTER ROLE test_clientc WITH login;
-GRANT role_client TO test_clientc;
+CREATE FUNCTION create_client(_role text) RETURNS int AS $$
+BEGIN
+ BEGIN
+ EXECUTE 'CREATE ROLE ' || _role;
+ EXCEPTION WHEN duplicate_object THEN
+ RETURN 0;
+ END;
+ EXECUTE 'ALTER ROLE ' || _role || ' NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT NOREPLICATION';
+ EXECUTE 'ALTER ROLE ' || _role || ' WITH LOGIN';
+ EXECUTE 'GRANT role_client TO ' || _role;
+ RETURN 1;
+END;
+$$ LANGUAGE PLPGSQL;
+--------------------------------------------------------------------------------
-select _create_role('test_clientd');
-ALTER ROLE test_clientd WITH login;
-GRANT role_client TO test_clientd;
+select create_client('test_clienta');
+select create_client('test_clientb');
+select create_client('test_clientc');
+select create_client('test_clientd');
-- COMMIT;
diff --git a/src/sql/tables.sql b/src/sql/tables.sql
index d517a1a..787a8a6 100644
--- a/src/sql/tables.sql
+++ b/src/sql/tables.sql
@@ -1,244 +1,280 @@
--------------------------------------------------------------------------------
ALTER DEFAULT PRIVILEGES REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC;
ALTER DEFAULT PRIVILEGES REVOKE ALL ON TABLES FROM PUBLIC;
ALTER DEFAULT PRIVILEGES REVOKE ALL ON SEQUENCES FROM PUBLIC;
ALTER DEFAULT PRIVILEGES REVOKE ALL ON TYPES FROM PUBLIC;
--------------------------------------------------------------------------------
create domain dqtt AS int8 check( VALUE>0);
create domain dtext AS text check( char_length(VALUE)>0);
--------------------------------------------------------------------------------
-- Main constants of the model
--------------------------------------------------------------------------------
create table tconst(
name dtext UNIQUE not NULL,
value int,
PRIMARY KEY (name)
);
GRANT SELECT ON tconst TO role_com;
--------------------------------------------------------------------------------
INSERT INTO tconst (name,value) VALUES
('MAXCYCLE',64), -- must be less than yflow_get_maxdim()
('MAXPATHFETCHED',1024),-- maximum depth of the graph exploration
('MAXMVTPERTRANS',128), -- maximum number of movements per transaction
-- if this limit is reached, next cycles are not performed but all others
-- are included in the current transaction
('VERSION-X',2),('VERSION-Y',1),('VERSION-Z',0),
-- booleans, 0 == false and !=0 == true
('QUAPROVUSR',0), -- when true, the quality provided by a barter is suffixed by user name
-- 1 prod
('OWNUSR',0), -- when true, the owner is suffixed by user name
-- 1 prod
- ('DEBUG',1); --
+ ('DEBUG',1);
+ --
--------------------------------------------------------------------------------
create table tvar(
name dtext UNIQUE not NULL,
value int,
PRIMARY KEY (name)
);
-- btree index tvar_pkey on name
INSERT INTO tvar (name,value)
VALUES ('INSTALLED',0); -- set to 1 when the model is installed
GRANT SELECT ON tvar TO role_com;
--------------------------------------------------------------------------------
-- TOWNER
--------------------------------------------------------------------------------
create table towner (
id serial UNIQUE not NULL,
name dtext UNIQUE not NULL,
PRIMARY KEY (id)
);
comment on table towner is 'owners of values exchanged';
comment on column towner.id is 'id of this owner';
comment on column towner.name is 'the name of the owner';
alter sequence towner_id_seq owned by towner.id;
create index towner_name_idx on towner(name);
SELECT _reference_time('towner');
GRANT SELECT ON towner TO role_com;
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
CREATE FUNCTION fgetowner(_name text) RETURNS int AS $$
DECLARE
_wid int;
BEGIN
LOOP
SELECT id INTO _wid FROM towner WHERE name=_name;
IF found THEN
return _wid;
END IF;
BEGIN
INSERT INTO towner (name) VALUES (_name) RETURNING id INTO _wid;
return _wid;
EXCEPTION WHEN unique_violation THEN
NULL;
END;
END LOOP;
END;
$$ LANGUAGE PLPGSQL;
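The fgetowner() loop above is a race-safe get-or-create: SELECT first, INSERT on a miss, and when a concurrent transaction wins the insert race (unique_violation), loop back and SELECT again. A minimal Python sketch of the same control flow, assuming a dict standing in for towner and a simulated conflict standing in for unique_violation:

```python
class UniqueViolation(Exception):
    """Stands in for PostgreSQL's unique_violation error."""

def get_or_create(table, name, insert):
    # Mirrors fgetowner(): SELECT first; on miss, INSERT; when a concurrent
    # transaction wins the race (unique_violation), loop and SELECT again.
    while True:
        if name in table:
            return table[name]
        try:
            return insert(table, name)
        except UniqueViolation:
            pass  # another transaction inserted it first; retry the lookup

def plain_insert(table, name):
    # The INSERT succeeds: no concurrent writer.
    table[name] = len(table) + 1
    return table[name]

def racy_insert(table, name):
    # A concurrent transaction commits the same name just before ours:
    # our INSERT raises unique_violation, and the loop retries the SELECT.
    table[name] = len(table) + 1
    raise UniqueViolation(name)

owners = {}
a = get_or_create(owners, 'alice', plain_insert)  # inserts a new row
b = get_or_create(owners, 'alice', plain_insert)  # finds the existing row
c = get_or_create(owners, 'bob', racy_insert)     # loses the race, retries
```

The loop never gives up: either the SELECT finds the row, or our INSERT creates it, or someone else's INSERT did and the next iteration finds it.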
--------------------------------------------------------------------------------
-- MOVEMENTS
--------------------------------------------------------------------------------
CREATE SEQUENCE tmvt_id_seq;
--------------------------------------------------------------------------------
-- ORDER BOOK
--------------------------------------------------------------------------------
-- type = type_flow | type_primitive <<8 | type_mode <<16
create domain dtypeorder AS int check(VALUE >=0 AND VALUE < ((1<<24)-1));
-- type_flow & 3 : 1 = limit order, 2 = best order
-- type_flow & 12 : bits reserved for C internal calculations
--    4 no qttlimit
--    8 ignoreomega
-- yorder.type is a type_flow = type & 255
CREATE TYPE eordertype AS ENUM ('best','limit');
CREATE TYPE eprimitivetype AS ENUM ('order','childorder','rmorder','quote','prequote');
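The dtypeorder packing described above (type = type_flow | type_primitive << 8 | type_mode << 16, with yorder.type keeping only the low byte) can be sketched in Python; the one-byte width of each sub-field is an assumption read off the shift amounts and the domain check:

```python
# Pack the three sub-fields into one int, as the comments above describe:
# type = type_flow | type_primitive << 8 | type_mode << 16
def pack_type(type_flow, type_primitive, type_mode):
    assert 0 <= type_flow < 256 and 0 <= type_primitive < 256 and 0 <= type_mode < 256
    return type_flow | (type_primitive << 8) | (type_mode << 16)

def flow_part(t):
    # yorder.type is a type_flow = type & 255
    return t & 255

t = pack_type(1, 2, 0)       # type_flow & 3 == 1 is a 'limit' order
assert t < (1 << 24) - 1     # fits the dtypeorder domain check
```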
create table torder (
usr dtext,
own dtext,
ord yorder, --defined by the extension flowf
created timestamp not NULL,
updated timestamp,
duration interval
);
comment on table torder is 'Order book';
comment on column torder.usr is 'user that inserted the order ';
comment on column torder.ord is 'the order';
comment on column torder.created is 'time when the order was put on the stack';
comment on column torder.updated is 'time when the (quantity) of the order was updated by the order book';
comment on column torder.duration is 'the life time of the order';
GRANT SELECT ON torder TO role_com;
create index torder_qua_prov_idx on torder(((ord).qua_prov)); -- using gin(((ord).qua_prov) text_ops);
create index torder_id_idx on torder(((ord).id));
create index torder_oid_idx on torder(((ord).oid));
-- id,type,own,oid,qtt_requ,qua_requ,qtt_prov,qua_prov,qtt
create view vorder as
select (o.ord).id as id,(o.ord).type as type,w.name as own,(o.ord).oid as oid,
(o.ord).qtt_requ as qtt_requ,(o.ord).qua_requ as qua_requ,
(o.ord).qtt_prov as qtt_prov,(o.ord).qua_prov as qua_prov,
(o.ord).qtt as qtt, o.created as created, o.updated as updated
from torder o left join towner w on ((o.ord).own=w.id) where o.usr=session_user;
GRANT SELECT ON vorder TO role_com;
-- without dates and without any filter on usr
create view vorder2 as
select (o.ord).id as id,(o.ord).type as type,w.name as own,(o.ord).oid as oid,
(o.ord).qtt_requ as qtt_requ,(o.ord).qua_requ as qua_requ,
(o.ord).qtt_prov as qtt_prov,(o.ord).qua_prov as qua_prov,
(o.ord).qtt as qtt
from torder o left join towner w on ((o.ord).own=w.id);
GRANT SELECT ON vorder2 TO role_com;
-- only parent for all users
create view vbarter as
select (o.ord).id as id,(o.ord).type as type,o.usr as user,w.name as own,
(o.ord).qtt_requ as qtt_requ,(o.ord).qua_requ as qua_requ,
(o.ord).qtt_prov as qtt_prov,(o.ord).qua_prov as qua_prov,
(o.ord).qtt as qtt, o.created as created, o.updated as updated
from torder o left join towner w on ((o.ord).own=w.id) where (o.ord).oid=(o.ord).id;
GRANT SELECT ON vbarter TO role_com;
-- parent and childs for all users, used with vmvto
create view vordero as
select id,
(case when (type & 3=1) then 'limit' else 'best' end)::eordertype as type,
own as owner,
case when id=oid then (qtt::text || ' ' || qua_prov) else '' end as stock,
'(' || qtt_prov::text || '/' || qtt_requ::text || ') ' ||
qua_prov || ' / '|| qua_requ as expected_ω,
case when id=oid then '' else oid::text end as oid
from vorder order by id asc;
GRANT SELECT ON vordero TO role_com;
comment on view vordero is 'order book for all users, to be used with vmvto';
comment on column vordero.id is 'the id of the order';
comment on column vordero.owner is 'the owner';
comment on column vordero.stock is 'for a parent order the stock offered by the owner';
comment on column vordero.expected_ω is 'the ω of the order';
comment on column vordero.oid is 'for a child-order, the id of the parent-order';
--------------------------------------------------------------------------------
-- MSG
--------------------------------------------------------------------------------
CREATE TYPE emsgtype AS ENUM ('response', 'exchange');
create table tmsg (
id serial UNIQUE not NULL,
usr dtext default NULL, -- the user receiver of this message
typ emsgtype not NULL,
jso json default NULL,
created timestamp not NULL
);
alter sequence tmsg_id_seq owned by tmsg.id;
GRANT SELECT ON tmsg TO role_com;
GRANT SELECT ON tmsg_id_seq TO role_com;
SELECT fifo_init('tmsg');
CREATE VIEW vmsg AS select * from tmsg WHERE usr = session_user;
GRANT SELECT ON vmsg TO role_com;
+
+CREATE FUNCTION fmtmvt(_j json) RETURNS text AS $$
+DECLARE
+ _r text;
+BEGIN
+ _r := json_extract_path_text(_j,'own') || ' (' || json_extract_path_text(_j,'qtt') || ' ' || json_extract_path_text(_j,'nat') || ')';
+ RETURN _r;
+END;
+$$ LANGUAGE PLPGSQL SECURITY DEFINER;
+
+-- DROP VIEW IF EXISTS vmsg_mvt;
+CREATE VIEW vmsg_mvt AS SELECT
+json_extract_path_text(jso,'cycle')::int as grp,
+json_extract_path_text(jso,'mvt_from','id')::int as mvt_from_id,
+json_extract_path_text(jso,'stock','own') as own,
+fmtmvt(json_extract_path(jso,'mvt_from')) as gives,
+fmtmvt(json_extract_path(jso,'mvt_to')) as receives,
+id as msg_id,
+json_extract_path_text(jso,'orde','id')::int as order_id,
+json_extract_path_text(jso,'stock','id')::int as stock_id,
+json_extract_path_text(jso,'orig')::int as orig_id,
+-- json_extract_path_text(jso,'stock','qtt')::bigint as stock_remain
+json_extract_path_text(jso,'stock','qtt') || ' ' || json_extract_path_text(jso,'mvt_from','nat') as stock_remains
+ from tmsg WHERE typ='exchange' and usr = session_user order by id;
+
+CREATE VIEW vmsg_resp AS SELECT
+json_extract_path_text(jso,'id')::int as msg_id,
+created::date as date,
+CASE WHEN (json_extract_path_text(jso,'error','reason') IS NULL)THEN '' ELSE json_extract_path_text(jso,'error','reason') END as error,
+json_extract_path_text(jso,'primitive','owner') as owner,
+json_extract_path_text(jso,'primitive','kind') as primitive,
+json_extract_path_text(jso,'result','id') as prim_id,
+json_extract_path_text(jso,'value') as value
+ from tmsg WHERE typ='response' and usr = session_user order by id;
+
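What fmtmvt() and the vmsg_mvt view extract from a jso payload can be sketched in Python with the json module; the payload below is a hypothetical 'exchange' message carrying only the keys the view actually reads:

```python
import json

# Hypothetical 'exchange' message; only the keys read by vmsg_mvt are present.
jso = json.loads("""{
  "cycle": 1,
  "orde": {"id": 1},
  "stock": {"id": 1, "own": "wa", "qtt": 0},
  "mvt_from": {"id": 2, "own": "wa", "qtt": 10, "nat": "b"},
  "mvt_to":   {"id": 1, "own": "wa", "qtt": 10, "nat": "a"},
  "orig": 1
}""")

def fmtmvt(mvt):
    # Same text as the SQL fmtmvt(): own (qtt nat)
    return "%s (%s %s)" % (mvt["own"], mvt["qtt"], mvt["nat"])

# One row of vmsg_mvt, built the same way the view builds it.
row = {
    "grp": int(jso["cycle"]),
    "own": jso["stock"]["own"],
    "gives": fmtmvt(jso["mvt_from"]),
    "receives": fmtmvt(jso["mvt_to"]),
    "stock_remains": "%s %s" % (jso["stock"]["qtt"], jso["mvt_from"]["nat"]),
}
```

json_extract_path_text(jso,'stock','own') in SQL corresponds to jso["stock"]["own"] here; the view just flattens a nested JSON document into tabular columns per session user.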
--------------------------------------------------------------------------------
CREATE TYPE yj_error AS (
code int,
reason text
);
CREATE TYPE yerrorprim AS (
id int,
error yj_error
);
CREATE TYPE yj_value AS (
qtt int8,
nat text
);
CREATE TYPE yj_stock AS (
id int,
qtt int8,
nat text,
own text,
usr text
);
CREATE TYPE yj_ω AS (
id int,
qtt_prov int8,
qtt_requ int8,
type eordertype
);
CREATE TYPE yj_mvt AS (
id int,
cycle int,
orde yj_ω,
stock yj_stock,
mvt_from yj_stock,
mvt_to yj_stock,
orig int
);
CREATE TYPE yj_order AS (
id int,
error yj_error
);
CREATE TYPE yj_primitive AS (
id int,
error yj_error,
primitive json,
result json,
value json
);
diff --git a/src/sql/tu_doc.sql b/src/sql/tu_doc.sql
new file mode 100644
index 0000000..583d715
--- /dev/null
+++ b/src/sql/tu_doc.sql
@@ -0,0 +1,55 @@
+-- Trilateral exchange between owners with distinct users
+---------------------------------------------------------
+--variables
+--USER:admin
+select * from market.tvar where name != 'OC_CURRENT_OPENED' order by name;
+---------------------------------------------------------
+\set ECHO all
+SELECT * FROM fsubmitorder('best','Mallory','Fe',200,'Ni',40);
+SELECT * FROM fsubmitorder('best','Luc','Ni',100,'Co',50);
+SELECT * FROM fsubmitorder('best','Bob','Fe',40,'Co',50);
+
+SELECT * FROM fsubmitorder('best','Alice','Co',80,'Fe',100);
+
+SELECT * from vmsg_mvt;
+
+/*
+at model init and in tests:
+tconst ('QUAPROVUSR',0),('OWNUSR',0),('DEBUG',1)
+tvar ('OC_CURRENT_PHASE',102),('OC_BGW_CONSUMESTACK_ACTIVE',1),('OC_BGW_OPENCLOSE_ACTIVE',0)
+
+in prod - possible when foc_in_gphase(100)
+tconst ('QUAPROVUSR',1),('OWNUSR',1),('DEBUG',0)
+tvar ('OC_CURRENT_PHASE',102),('OC_BGW_CONSUMESTACK_ACTIVE',1),('OC_BGW_OPENCLOSE_ACTIVE',1)
+
+stop the progression of phases
+
+TO CREATE A CLIENT
+select create_client('client_name');
+returns 1 if done, 0 otherwise
+
+TO START THE MARKET
+select fset_debug(false)
+select fstart_phases(1)
+
+TO STOP THE MARKET
+select fstart_phase(-1)
+
+TO RUN TESTS
+select fset_debug(true)
+
+*****************************************************************************************
+TODO
+site www.openbarter.org
+ copy the www.openbarter.org site locally
+ add the news and static pages
+ write the fixed parts
+ write the article announcing the release
+finish the doc
+ * server ssl configuration
+ * start/stop of the BGW
+ * tests
+ * naming of users and qualities
+upload to github
+announce on the list and neil peters, etc
+
diff --git a/src/test/expected/tu_1.res b/src/test/expected/tu_1.res
index 683ab67..abfe0c0 100644
--- a/src/test/expected/tu_1.res
+++ b/src/test/expected/tu_1.res
@@ -1,49 +1,51 @@
-- Bilateral exchange between owners with distinct users (limit)
---------------------------------------------------------
--variables
--USER:admin
select * from market.tvar where name != 'OC_CURRENT_OPENED' order by name;
name value
+---------+---------
INSTALLED 1
+OC_BGW_CONSUMESTACK_ACTIVE 1
+OC_BGW_OPENCLOSE_ACTIVE 0
OC_CURRENT_PHASE 102
STACK_EXECUTED 0
STACK_TOP 0
---------------------------------------------------------
--USER:test_clienta
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',10);
id error
+---------+---------
1 (0,)
--USER:test_clientb
SELECT * FROM market.fsubmitorder('limit','wb','b', 10,'a',20);
id error
+---------+---------
2 (0,)
--Bilateral exchange expected
--------------------------------------------------------------------------------
table: torder
id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
1 1 wa 5 a limit 10 b 0 1 test_clienta
2 2 wb 10 b limit 20 a 10 2 test_clientb
--------------------------------------------------------------------------------
table: tmsg
--------------------------------------------------------------------------------
1:Primitive id:1 from test_clienta: OK
2:Cycle id:1 Exchange id:2 for wa @test_clienta:
2:mvt_from wa @test_clienta : 10 'b' -> wb @test_clientb
1:mvt_to wa @test_clienta : 10 'a' <- wb @test_clientb
stock id:1 remaining after exchange: 0 'b'
3:Cycle id:1 Exchange id:1 for wb @test_clientb:
1:mvt_from wb @test_clientb : 10 'a' -> wa @test_clienta
2:mvt_to wb @test_clientb : 10 'b' <- wa @test_clienta
stock id:2 remaining after exchange: 10 'a'
4:Primitive id:2 from test_clientb: OK
diff --git a/src/test/expected/tu_2.res b/src/test/expected/tu_2.res
index 92e6ba1..7133006 100644
--- a/src/test/expected/tu_2.res
+++ b/src/test/expected/tu_2.res
@@ -1,65 +1,67 @@
-- Bilateral exchange between owners with distinct users (best+limit)
---------------------------------------------------------
--variables
--USER:admin
select * from market.tvar where name != 'OC_CURRENT_OPENED' order by name;
name value
+---------+---------
INSTALLED 1
+OC_BGW_CONSUMESTACK_ACTIVE 1
+OC_BGW_OPENCLOSE_ACTIVE 0
OC_CURRENT_PHASE 102
STACK_EXECUTED 0
STACK_TOP 0
---------------------------------------------------------
--USER:test_clienta
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
SELECT * FROM market.fsubmitorder('best','wa','a', 10,'b',5);
id error
+---------+---------
1 (0,)
--USER:test_clientb
SELECT * FROM market.fsubmitorder('best','wb','b', 20,'a',10);
id error
+---------+---------
2 (0,)
--Bilateral exchange expected
SELECT * FROM market.fsubmitorder('limit','wa','a', 10,'b',5);
id error
+---------+---------
3 (0,)
--USER:test_clientb
SELECT * FROM market.fsubmitorder('best','wb','b', 20,'a',10);
id error
+---------+---------
4 (0,)
--No exchange expected
--------------------------------------------------------------------------------
table: torder
id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
1 1 wa 10 a best 5 b 0 1 test_clienta
2 2 wb 20 b best 10 a 5 2 test_clientb
3 3 wa 10 a limit 5 b 5 1 test_clientb
4 4 wb 20 b best 10 a 10 2 test_clientb
--------------------------------------------------------------------------------
table: tmsg
--------------------------------------------------------------------------------
1:Primitive id:1 from test_clienta: OK
2:Cycle id:1 Exchange id:2 for wa @test_clienta:
2:mvt_from wa @test_clienta : 5 'b' -> wb @test_clientb
1:mvt_to wa @test_clienta : 5 'a' <- wb @test_clientb
stock id:1 remaining after exchange: 0 'b'
3:Cycle id:1 Exchange id:1 for wb @test_clientb:
1:mvt_from wb @test_clientb : 5 'a' -> wa @test_clienta
2:mvt_to wb @test_clientb : 5 'b' <- wa @test_clienta
stock id:2 remaining after exchange: 5 'a'
4:Primitive id:2 from test_clientb: OK
5:Primitive id:3 from test_clientb: OK
6:Primitive id:4 from test_clientb: OK
diff --git a/src/test/expected/tu_3.res b/src/test/expected/tu_3.res
index 6283f8d..79490b8 100644
--- a/src/test/expected/tu_3.res
+++ b/src/test/expected/tu_3.res
@@ -1,61 +1,63 @@
-- Trilateral exchange between owners with distinct users
---------------------------------------------------------
--variables
--USER:admin
select * from market.tvar where name != 'OC_CURRENT_OPENED' order by name;
name value
+---------+---------
INSTALLED 1
+OC_BGW_CONSUMESTACK_ACTIVE 1
+OC_BGW_OPENCLOSE_ACTIVE 0
OC_CURRENT_PHASE 102
STACK_EXECUTED 0
STACK_TOP 0
---------------------------------------------------------
--USER:test_clienta
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',10);
id error
+---------+---------
1 (0,)
--USER:test_clientb
SELECT * FROM market.fsubmitorder('limit','wb','b', 20,'c',40);
id error
+---------+---------
2 (0,)
SELECT * FROM market.fsubmitorder('limit','wc','c', 10,'a',20);
id error
+---------+---------
3 (0,)
--Trilateral exchange expected
--------------------------------------------------------------------------------
table: torder
id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
1 1 wa 5 a limit 10 b 0 1 test_clienta
2 2 wb 20 b limit 40 c 30 2 test_clientb
3 3 wc 10 c limit 20 a 10 3 test_clientb
--------------------------------------------------------------------------------
table: tmsg
--------------------------------------------------------------------------------
1:Primitive id:1 from test_clienta: OK
2:Primitive id:2 from test_clientb: OK
3:Cycle id:1 Exchange id:3 for wb @test_clientb:
3:mvt_from wb @test_clientb : 10 'c' -> wc @test_clientb
2:mvt_to wb @test_clientb : 10 'b' <- wa @test_clienta
stock id:2 remaining after exchange: 30 'c'
4:Cycle id:1 Exchange id:1 for wc @test_clientb:
1:mvt_from wc @test_clientb : 10 'a' -> wa @test_clienta
3:mvt_to wc @test_clientb : 10 'c' <- wb @test_clientb
stock id:3 remaining after exchange: 10 'a'
5:Cycle id:1 Exchange id:2 for wa @test_clienta:
2:mvt_from wa @test_clienta : 10 'b' -> wb @test_clientb
1:mvt_to wa @test_clienta : 10 'a' <- wc @test_clientb
stock id:1 remaining after exchange: 0 'b'
6:Primitive id:3 from test_clientb: OK
diff --git a/src/test/expected/tu_4.res b/src/test/expected/tu_4.res
index 88a3478..ed1c985 100644
--- a/src/test/expected/tu_4.res
+++ b/src/test/expected/tu_4.res
@@ -1,62 +1,64 @@
-- Trilateral exchange between owners with two owners
---------------------------------------------------------
--variables
--USER:admin
select * from market.tvar where name != 'OC_CURRENT_OPENED' order by name;
name value
+---------+---------
INSTALLED 1
+OC_BGW_CONSUMESTACK_ACTIVE 1
+OC_BGW_OPENCLOSE_ACTIVE 0
OC_CURRENT_PHASE 102
STACK_EXECUTED 0
STACK_TOP 0
-- The profit is shared equally between wa and wb
---------------------------------------------------------
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
--USER:test_clienta
SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',20);
id error
+---------+---------
1 (0,)
--USER:test_clientb
SELECT * FROM market.fsubmitorder('limit','wb','b', 20,'c',40);
id error
+---------+---------
2 (0,)
SELECT * FROM market.fsubmitorder('limit','wb','c', 10,'a',20);
id error
+---------+---------
3 (0,)
--Trilateral exchange expected
--------------------------------------------------------------------------------
table: torder
id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
1 1 wa 5 a limit 20 b 0 1 test_clienta
2 2 wb 20 b limit 40 c 20 2 test_clientb
3 3 wb 10 c limit 20 a 0 2 test_clientb
--------------------------------------------------------------------------------
table: tmsg
--------------------------------------------------------------------------------
1:Primitive id:1 from test_clienta: OK
2:Primitive id:2 from test_clientb: OK
3:Cycle id:1 Exchange id:3 for wb @test_clientb:
3:mvt_from wb @test_clientb : 20 'c' -> wb @test_clientb
2:mvt_to wb @test_clientb : 20 'b' <- wa @test_clienta
stock id:2 remaining after exchange: 20 'c'
4:Cycle id:1 Exchange id:1 for wb @test_clientb:
1:mvt_from wb @test_clientb : 20 'a' -> wa @test_clienta
3:mvt_to wb @test_clientb : 20 'c' <- wb @test_clientb
stock id:3 remaining after exchange: 0 'a'
5:Cycle id:1 Exchange id:2 for wa @test_clienta:
2:mvt_from wa @test_clienta : 20 'b' -> wb @test_clientb
1:mvt_to wa @test_clienta : 20 'a' <- wb @test_clientb
stock id:1 remaining after exchange: 0 'b'
6:Primitive id:3 from test_clientb: OK
diff --git a/src/test/expected/tu_5.res b/src/test/expected/tu_5.res
index 4f8e5c3..09127e2 100644
--- a/src/test/expected/tu_5.res
+++ b/src/test/expected/tu_5.res
@@ -1,61 +1,63 @@
-- Trilateral exchange by one owners
---------------------------------------------------------
--variables
--USER:admin
select * from market.tvar where name != 'OC_CURRENT_OPENED' order by name;
name value
+---------+---------
INSTALLED 1
+OC_BGW_CONSUMESTACK_ACTIVE 1
+OC_BGW_OPENCLOSE_ACTIVE 0
OC_CURRENT_PHASE 102
STACK_EXECUTED 0
STACK_TOP 0
---------------------------------------------------------
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
--USER:test_clienta
SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',10);
id error
+---------+---------
1 (0,)
--USER:test_clientb
SELECT * FROM market.fsubmitorder('limit','wa','b', 20,'c',40);
id error
+---------+---------
2 (0,)
SELECT * FROM market.fsubmitorder('limit','wa','c', 10,'a',20);
id error
+---------+---------
3 (0,)
--Trilateral exchange expected
--------------------------------------------------------------------------------
table: torder
id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
1 1 wa 5 a limit 10 b 0 1 test_clienta
2 2 wa 20 b limit 40 c 30 1 test_clientb
3 3 wa 10 c limit 20 a 10 1 test_clientb
--------------------------------------------------------------------------------
table: tmsg
--------------------------------------------------------------------------------
1:Primitive id:1 from test_clienta: OK
2:Primitive id:2 from test_clientb: OK
3:Cycle id:1 Exchange id:3 for wa @test_clientb:
3:mvt_from wa @test_clientb : 10 'c' -> wa @test_clientb
2:mvt_to wa @test_clientb : 10 'b' <- wa @test_clienta
stock id:2 remaining after exchange: 30 'c'
4:Cycle id:1 Exchange id:1 for wa @test_clientb:
1:mvt_from wa @test_clientb : 10 'a' -> wa @test_clienta
3:mvt_to wa @test_clientb : 10 'c' <- wa @test_clientb
stock id:3 remaining after exchange: 10 'a'
5:Cycle id:1 Exchange id:2 for wa @test_clienta:
2:mvt_from wa @test_clienta : 10 'b' -> wa @test_clientb
1:mvt_to wa @test_clienta : 10 'a' <- wa @test_clientb
stock id:1 remaining after exchange: 0 'b'
6:Primitive id:3 from test_clientb: OK
diff --git a/src/test/expected/tu_6.res b/src/test/expected/tu_6.res
index fd9febe..d592f91 100644
--- a/src/test/expected/tu_6.res
+++ b/src/test/expected/tu_6.res
@@ -1,108 +1,110 @@
-- 7-exchange with 7 partners
---------------------------------------------------------
--variables
--USER:admin
select * from market.tvar where name != 'OC_CURRENT_OPENED' order by name;
name value
+---------+---------
INSTALLED 1
+OC_BGW_CONSUMESTACK_ACTIVE 1
+OC_BGW_OPENCLOSE_ACTIVE 0
OC_CURRENT_PHASE 102
STACK_EXECUTED 0
STACK_TOP 0
---------------------------------------------------------
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
--USER:test_clienta
SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',10);
id error
+---------+---------
1 (0,)
--USER:test_clientb
SELECT * FROM market.fsubmitorder('limit','wb','b', 20,'c',40);
id error
+---------+---------
2 (0,)
SELECT * FROM market.fsubmitorder('limit','wc','c', 20,'d',40);
id error
+---------+---------
3 (0,)
SELECT * FROM market.fsubmitorder('limit','wd','d', 20,'e',40);
id error
+---------+---------
4 (0,)
SELECT * FROM market.fsubmitorder('limit','we','e', 20,'f',40);
id error
+---------+---------
5 (0,)
SELECT * FROM market.fsubmitorder('limit','wf','f', 20,'g',40);
id error
+---------+---------
6 (0,)
SELECT * FROM market.fsubmitorder('limit','wg','g', 20,'a',40);
id error
+---------+---------
7 (0,)
--------------------------------------------------------------------------------
table: torder
id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
1 1 wa 5 a limit 10 b 0 1 test_clienta
2 2 wb 20 b limit 40 c 30 2 test_clientb
3 3 wc 20 c limit 40 d 30 3 test_clientb
4 4 wd 20 d limit 40 e 30 4 test_clientb
5 5 we 20 e limit 40 f 30 5 test_clientb
6 6 wf 20 f limit 40 g 30 6 test_clientb
7 7 wg 20 g limit 40 a 30 7 test_clientb
--------------------------------------------------------------------------------
table: tmsg
--------------------------------------------------------------------------------
1:Primitive id:1 from test_clienta: OK
2:Primitive id:2 from test_clientb: OK
3:Primitive id:3 from test_clientb: OK
4:Primitive id:4 from test_clientb: OK
5:Primitive id:5 from test_clientb: OK
6:Primitive id:6 from test_clientb: OK
7:Cycle id:1 Exchange id:7 for wf @test_clientb:
7:mvt_from wf @test_clientb : 10 'g' -> wg @test_clientb
6:mvt_to wf @test_clientb : 10 'f' <- we @test_clientb
stock id:6 remaining after exchange: 30 'g'
8:Cycle id:1 Exchange id:1 for wg @test_clientb:
1:mvt_from wg @test_clientb : 10 'a' -> wa @test_clienta
7:mvt_to wg @test_clientb : 10 'g' <- wf @test_clientb
stock id:7 remaining after exchange: 30 'a'
9:Cycle id:1 Exchange id:2 for wa @test_clienta:
2:mvt_from wa @test_clienta : 10 'b' -> wb @test_clientb
1:mvt_to wa @test_clienta : 10 'a' <- wg @test_clientb
stock id:1 remaining after exchange: 0 'b'
10:Cycle id:1 Exchange id:3 for wb @test_clientb:
3:mvt_from wb @test_clientb : 10 'c' -> wc @test_clientb
2:mvt_to wb @test_clientb : 10 'b' <- wa @test_clienta
stock id:2 remaining after exchange: 30 'c'
11:Cycle id:1 Exchange id:4 for wc @test_clientb:
4:mvt_from wc @test_clientb : 10 'd' -> wd @test_clientb
3:mvt_to wc @test_clientb : 10 'c' <- wb @test_clientb
stock id:3 remaining after exchange: 30 'd'
12:Cycle id:1 Exchange id:5 for wd @test_clientb:
5:mvt_from wd @test_clientb : 10 'e' -> we @test_clientb
4:mvt_to wd @test_clientb : 10 'd' <- wc @test_clientb
stock id:4 remaining after exchange: 30 'e'
13:Cycle id:1 Exchange id:6 for we @test_clientb:
6:mvt_from we @test_clientb : 10 'f' -> wf @test_clientb
5:mvt_to we @test_clientb : 10 'e' <- wd @test_clientb
stock id:5 remaining after exchange: 30 'f'
14:Primitive id:7 from test_clientb: OK
diff --git a/src/test/expected/tu_7.res b/src/test/expected/tu_7.res
index 209f783..c3ad626 100644
--- a/src/test/expected/tu_7.res
+++ b/src/test/expected/tu_7.res
@@ -1,79 +1,81 @@
-- Competition between bilateral and multilateral exchange 1/2
---------------------------------------------------------
--variables
--USER:admin
select * from market.tvar where name != 'OC_CURRENT_OPENED' order by name;
name value
+---------+---------
INSTALLED 1
+OC_BGW_CONSUMESTACK_ACTIVE 1
+OC_BGW_OPENCLOSE_ACTIVE 0
OC_CURRENT_PHASE 102
STACK_EXECUTED 0
STACK_TOP 0
---------------------------------------------------------
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
--USER:test_clientb
SELECT * FROM market.fsubmitorder('limit','wb','b', 80,'a',40);
id error
+---------+---------
1 (0,)
SELECT * FROM market.fsubmitorder('limit','wc','b', 20,'d',40);
id error
+---------+---------
2 (0,)
SELECT * FROM market.fsubmitorder('limit','wd','d', 20,'a',40);
id error
+---------+---------
3 (0,)
--USER:test_clienta
SELECT * FROM market.fsubmitorder('limit','wa','a', 40,'b',80);
id error
+---------+---------
4 (0,)
-- omega better for the trilateral exchange
-- it wins, the rest is used with a bilateral exchange
--------------------------------------------------------------------------------
table: torder
id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
1 1 wb 80 b limit 40 a 20 1 test_clientb
2 2 wc 20 b limit 40 d 0 2 test_clientb
3 3 wd 20 d limit 40 a 0 3 test_clientb
4 4 wa 40 a limit 80 b 0 4 test_clienta
--------------------------------------------------------------------------------
table: tmsg
--------------------------------------------------------------------------------
1:Primitive id:1 from test_clientb: OK
2:Primitive id:2 from test_clientb: OK
3:Primitive id:3 from test_clientb: OK
4:Cycle id:1 Exchange id:3 for wd @test_clientb:
3:mvt_from wd @test_clientb : 40 'a' -> wa @test_clienta
2:mvt_to wd @test_clientb : 40 'd' <- wc @test_clientb
stock id:3 remaining after exchange: 0 'a'
5:Cycle id:1 Exchange id:1 for wa @test_clienta:
1:mvt_from wa @test_clienta : 40 'b' -> wc @test_clientb
3:mvt_to wa @test_clienta : 40 'a' <- wd @test_clientb
stock id:4 remaining after exchange: 40 'b'
6:Cycle id:1 Exchange id:2 for wc @test_clientb:
2:mvt_from wc @test_clientb : 40 'd' -> wd @test_clientb
1:mvt_to wc @test_clientb : 40 'b' <- wa @test_clienta
stock id:2 remaining after exchange: 0 'd'
7:Cycle id:4 Exchange id:5 for wb @test_clientb:
5:mvt_from wb @test_clientb : 20 'a' -> wa @test_clienta
4:mvt_to wb @test_clientb : 40 'b' <- wa @test_clienta
stock id:1 remaining after exchange: 20 'a'
8:Cycle id:4 Exchange id:4 for wa @test_clienta:
4:mvt_from wa @test_clienta : 40 'b' -> wb @test_clientb
5:mvt_to wa @test_clienta : 20 'a' <- wb @test_clientb
stock id:4 remaining after exchange: 0 'b'
9:Primitive id:4 from test_clienta: OK
diff --git a/src/test/py/test_ti.py b/src/test/py/test_ti.py
index 4560875..8578958 100644
--- a/src/test/py/test_ti.py
+++ b/src/test/py/test_ti.py
@@ -1,273 +1,272 @@
# -*- coding: utf-8 -*-
'''
Packages required
-apt-get install python-psycopg2
+ apt-get install python-psycopg2
sudo easy_install simplejson
-
'''
import sys,os,time,logging
import psycopg2
import psycopg2.extensions
import traceback
import srvob_conf
import molet
import utilt
import sys
sys.path = [os.path.abspath(os.path.join(os.path.abspath(__file__),"../../../../simu/liquid"))]+ sys.path
#print sys.path
import distrib
PARLEN=80
prtest = utilt.PrTest(PARLEN,'=')
import random
import csv
import simplejson
import sys
def build_ti(options):
''' build a .sql file with a bunch of submit
options.build is the number of tests to be generated
'''
#print options.build
#return
#conf = srvob_conf.dbBO
curdir,sqldir,resultsdir,expecteddir = utilt.get_paths()
_frs = os.path.join(sqldir,'test_ti.csv')
MAX_OWNER = 10
MAX_QLT = 20
QTT_PROV = 10000
prtest.title('generating tests cases for quotes')
def gen(nborders,frs):
for i in range(nborders):
# choose an owner
w = random.randint(1,MAX_OWNER)
# choose a couple of qualities
qlt_prov,qlt_requ = distrib.couple(distrib.uniformQlt,MAX_QLT)
# choose an omega between 0.5 and 1.5
r = random.random()+0.5
qtt_requ = int(QTT_PROV * r)
# 10% of orders are limit
lb= 'limit' if (random.random()>0.9) else 'best'
frs.writerow(['admin',lb,'w%i'%w,'q%i'%qlt_requ,qtt_requ,'q%i'%qlt_prov,QTT_PROV])
with open(_frs,'w') as f:
spamwriter = csv.writer(f)
gen(options.build,spamwriter)
if(molet.removeFile(os.path.join(expecteddir,'test_ti.res'),ignoreWarning = True)):
prtest.center('test_ti.res removed')
prtest.center('done')
prtest.line()
def test_ti(options):
_reset,titre_test = options.test_ti_reset,''
curdir,sqldir,resultsdir,expecteddir = utilt.get_paths()
prtest.title('running test_ti on database "%s"' % (srvob_conf.DB_NAME,))
dump = utilt.Dumper(srvob_conf.dbBO,options,None)
if _reset:
print '\tReset: Clearing market ...'
titre_test = utilt.exec_script(dump,sqldir,'reset_market.sql')
print '\t\tDone'
fn = os.path.join(sqldir,'test_ti.csv')
if( not os.path.exists(fn)):
raise ValueError('The data %s is not found' % fn)
with open(fn,'r') as f:
spamreader = csv.reader(f)
values_prov = {}
_nbtest = 0
for row in spamreader:
_nbtest +=1
qua_prov,qtt_prov = row[5],row[6]
if not qua_prov in values_prov.keys():
values_prov[qua_prov] = 0
values_prov[qua_prov] = values_prov[qua_prov] + int(qtt_prov)
#print values_prov
cur_login = None
titre_test = None
inst = utilt.ExecInst(dump)
user = None
fmtorder = "SELECT * from market.fsubmitorder('%s','%s','%s',%s,'%s',%s)"
fmtquote = "SELECT * from market.fsubmitquote('%s','%s','%s',%s,'%s',%s)"
fmtjsosr = '''SELECT jso from market.tmsg
where json_extract_path_text(jso,'id')::int=%i and typ='response' '''
fmtjsose = """SELECT json_extract_path_text(jso,'orde','id')::int id,
sum(json_extract_path_text(jso,'mvt_from','qtt')::bigint) qtt_prov,
sum(json_extract_path_text(jso,'mvt_to','qtt')::bigint) qtt_requ
from market.tmsg
where json_extract_path_text(jso,'orig')::int=%i
and json_extract_path_text(jso,'orde','id')::int=%i
and typ='exchange' group by json_extract_path_text(jso,'orde','id')::int """
'''
the order that produced the exchange has the qualities expected
'''
i= 0
if _reset:
print '\tSubmission: sending a series of %i tests where a random set of arguments' % _nbtest
print '\t\tis used to submit a quote, then an order'
with open(fn,'r') as f:
spamreader = csv.reader(f)
compte = utilt.Delai()
for row in spamreader:
user = row[0]
params = tuple(row[1:])
cursor = inst.exe( fmtquote % params,user)
cursor = inst.exe( fmtorder % params,user)
i +=1
if i % 100 == 0:
prtest.progress(i/float(_nbtest))
delai = compte.getSecs()
print '\t\t%i quote & order primitives in %f seconds' % (_nbtest*2,delai)
print '\tExecution: Waiting for end of execution ...'
#utilt.wait_for_true(srvob_conf.dbBO,1000,"SELECT market.fstackdone()",prtest=prtest)
utilt.wait_for_empty_stack(srvob_conf.dbBO,prtest)
delai = compte.getSecs()
print '\t\t Done: mean time per primitive %f seconds' % (delai/(_nbtest*2),)
fmtiter = '''SELECT json_extract_path_text(jso,'id')::int id,json_extract_path_text(jso,'primitive','type') typ
from market.tmsg where typ='response' and json_extract_path_text(jso,'primitive','kind')='quote'
order by id asc limit 10 offset %i'''
i = 0
_notnull,_ko,_limit,_limitko = 0,0,0,0
print '\tChecking: identity of quote result and order result for each %i test cases' % _nbtest
print '\t\tusing the content of market.tmsg'
while True:
cursor = inst.exe( fmtiter % i,user)
vec = []
for re in cursor:
vec.append(re)
l = len(vec)
if l == 0:
break
for idq,_type in vec:
i += 1
if _type == 'limit':
_limit += 1
# result of the quote for idq
_cur = inst.exe(fmtjsosr %idq,user)
res = _cur.fetchone()
res_quote =simplejson.loads(res[0])
expected = res_quote['result']['qtt_give'],res_quote['result']['qtt_reci']
#result of the order for idq+1
_cur = inst.exe(fmtjsose %(idq+1,idq+1),user)
res = _cur.fetchone()
if res is None:
result = 0,0
else:
ido_,qtt_prov_,qtt_reci_ = res
result = qtt_prov_,qtt_reci_
_notnull +=1
if _type == 'limit':
if float(expected[0])/float(expected[1]) < float(qtt_prov_)/float(qtt_reci_):
_limitko +=1
if result != expected:
_ko += 1
print idq,res,res_quote
if i %100 == 0:
prtest.progress(i/float(_nbtest))
'''
if i == 100:
print '\t\t.',
else:
print '.',
sys.stdout.flush()
if(_ko != 0): _errs = ' - %i errors' %_ko
else: _errs = ''
print ('\t\t%i quote & order\t%i quotes returned a result %s' % (i-_ko,_notnull,_errs))
'''
_valuesko = check_values(inst,values_prov,user)
prtest.title('Results checkings')
print ''
print '\t\t%i\torders returned a result different from the previous quote' % _ko
print '\t\t\twith the same arguments\n'
print '\t\t%i\tlimit orders returned a result where the limit is not observed\n' % _limitko
print '\t\t%i\tqualities where the quantity is not preserved by the market\n' % _valuesko
prtest.line()
if(_ko == 0 and _limitko == 0 and _valuesko == 0):
prtest.center('\tAll %i tests passed' % i)
else:
prtest.center('\tSome of %i tests failed' % (i,))
prtest.line()
inst.close()
return titre_test
def check_values(inst,values_input,user):
'''
Values_input: for each quality, the sum of quantities submitted to the market.
Values_remain: for each quality, the sum of quantities remaining in the order book.
Values_output: for each quality, the sum of the mvt_from.qtt quantities in tmsg.
Checks that for each quality q:
Values_input[q] == Values_remain[q] + Values_output[q]
'''
sql = "select (ord).qua_prov,sum((ord).qtt) from market.torder where (ord).oid=(ord).id group by (ord).qua_prov"
cursor = inst.exe( sql,user)
values_remain = {}
for qua_prov,qtt in cursor:
values_remain[qua_prov] = qtt
sql = '''select json_extract_path_text(jso,'mvt_from','nat'),sum(json_extract_path_text(jso,'mvt_from','qtt')::bigint)
from market.tmsg where typ='exchange'
group by json_extract_path_text(jso,'mvt_from','nat')
'''
cursor = inst.exe( sql,user)
values_output = {}
for qua_prov,qtt in cursor:
values_output[qua_prov] = qtt
_errs = 0
for qua,vin in values_input.iteritems():
_out = values_output.get(qua,None)
_remain = values_remain.get(qua,None)
if _out is None or _remain is None:
_errs += 1
continue
if vin != (_out+ _remain):
print qua,vin,_out,_remain
_errs += 1
return _errs
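The conservation invariant that check_values() verifies against the database can be sketched as a pure function (a hypothetical, database-free rendering; check_values itself reads market.torder and market.tmsg):

```python
# Hypothetical, database-free sketch of the invariant verified by
# check_values(): for every quality, the quantity submitted to the market
# (input) must equal what remains in the order book plus what left it
# through exchanges (output).
def count_conservation_errors(values_input, values_remain, values_output):
    """Return the number of qualities for which input != remain + output."""
    errs = 0
    for qua, vin in values_input.items():
        out = values_output.get(qua)
        remain = values_remain.get(qua)
        if out is None or remain is None:
            # the quality vanished from the book without matching movements
            errs += 1
            continue
        if vin != out + remain:
            errs += 1
    return errs
```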
diff --git a/src/test/py/utilt.py b/src/test/py/utilt.py
index 3121063..c3932c7 100644
--- a/src/test/py/utilt.py
+++ b/src/test/py/utilt.py
@@ -1,320 +1,321 @@
# -*- coding: utf-8 -*-
import string
import os.path
import time, sys
import molet
def get_paths():
curdir = os.path.abspath(__file__)
curdir = os.path.dirname(curdir)
curdir = os.path.dirname(curdir)
sqldir = os.path.join(curdir,'sql')
resultsdir,expecteddir = os.path.join(curdir,'results'),os.path.join(curdir,'expected')
molet.mkdir(resultsdir,ignoreWarning = True)
molet.mkdir(expecteddir,ignoreWarning = True)
tup = (curdir,sqldir,resultsdir,expecteddir)
return tup
'''---------------------------------------------------------------------------
---------------------------------------------------------------------------'''
class PrTest(object):
''' results printing '''
def __init__(self,parlen,sep):
self.parlen = parlen+ parlen%2
self.sep = sep
def title(self,title):
_l = len(title)
_p = max(_l%2 +_l,40)
_x = self.parlen -_p
if (_x > 2):
print (_x/2)*self.sep + string.center(title,_p) + (_x/2)*self.sep
else:
print string.center(title,self.parlen)
def line(self):
print self.parlen*self.sep
def center(self,text):
print string.center(text,self.parlen)
def progress(self,progress):
# update_progress() : Displays or updates a console progress bar
## Accepts a float between 0 and 1. Any int will be converted to a float.
## A value under 0 represents a 'halt'.
## A value at 1 or bigger represents 100%
barLength = 40 # Modify this to change the length of the progress bar
status = ""
if isinstance(progress, int):
progress = float(progress)
if not isinstance(progress, float):
progress = 0
status = "error: progress var must be float\r\n"
if progress < 0:
progress = 0
status = "Halt...\r\n"
if progress >= 1:
progress = 1
status = "Done...\r\n"
block = int(round(barLength*progress))
text = "\r\t\t\t[{0}] {1}% {2}".format( "#"*block + "-"*(barLength-block), progress*100, status)
sys.stdout.write(text)
sys.stdout.flush()
'''---------------------------------------------------------------------------
file comparison
---------------------------------------------------------------------------'''
import filecmp
def files_clones(f1,f2):
#res = filecmp.cmp(f1,f2)
return (md5sum(f1) == md5sum(f2))
import hashlib
def md5sum(filename, blocksize=65536):
hash = hashlib.md5()
with open(filename, "r+b") as f:
for block in iter(lambda: f.read(blocksize), ""):
hash.update(block)
return hash.hexdigest()
'''---------------------------------------------------------------------------
---------------------------------------------------------------------------'''
SEPARATOR = '\n'+'-'*80 +'\n'
import json
class Dumper(object):
def __init__(self,conf,options,fdr):
self.options =options
self.conf = conf
self.fdr = fdr
def getConf(self):
return self.conf
def torder(self,cur):
self.write(SEPARATOR)
self.write('table: torder\n')
cur.execute('SELECT * FROM market.vord order by id asc')
self.cur(cur)
'''
yorder not shown:
pos_requ box, -- box (point(lat,lon),point(lat,lon))
pos_prov box, -- box (point(lat,lon),point(lat,lon))
dist float8,
carre_prov box -- carre_prov @> pos_requ
'''
return
def write(self,txt):
if self.fdr:
self.fdr.write(txt)
def cur(self,cur,_len=10):
#print cur.description
if(cur.description is None): return
#print type(cur)
cols = [e.name for e in cur.description]
row_format = ('{:>'+str(_len)+'}')*len(cols)
self.write(row_format.format(*cols)+'\n')
self.write(row_format.format(*(['+'+'-'*(_len-1)]*len(cols)))+'\n')
for res in cur:
self.write(row_format.format(*res)+'\n')
return
def tmsg(self,cur):
self.write(SEPARATOR)
self.write('table: tmsg')
self.write(SEPARATOR)
cur.execute('SELECT id,typ,usr,jso FROM market.tmsg order by id asc')
for res in cur:
_id,typ,usr,jso = res
_jso = json.loads(jso)
if typ == 'response':
if _jso['error']['code']==0:
_msg = 'Primitive id:%i from %s: OK\n' % (_jso['id'],usr)
else:
_msg = 'Primitive id:%i from %s: ERROR(%i,%s)\n' % (_jso['id'],usr,
_jso['error']['code'],_jso['error']['reason'])
elif typ == 'exchange':
_fmt = '''Cycle id:%i Exchange id:%i for %s @%s:
\t%i:mvt_from %s @%s : %i \'%s\' -> %s @%s
\t%i:mvt_to %s @%s : %i \'%s\' <- %s @%s
\tstock id:%i remaining after exchange: %i \'%s\' \n'''
_dat = (
_jso['cycle'],_jso['id'],_jso['stock']['own'],usr,
_jso['mvt_from']['id'],_jso['stock']['own'],usr,_jso['mvt_from']['qtt'],_jso['mvt_from']['nat'],_jso['mvt_from']['own'],_jso['mvt_from']['usr'],
_jso['mvt_to']['id'], _jso['stock']['own'],usr,_jso['mvt_to']['qtt'],_jso['mvt_to']['nat'],_jso['mvt_to']['own'],_jso['mvt_to']['usr'],
_jso['stock']['id'],_jso['stock']['qtt'],_jso['stock']['nat'])
_msg = _fmt %_dat
else:
_msg = str(res)
self.write('\t%i:'%_id+_msg+'\n')
if self.options.verbose:
print jso
return
'''---------------------------------------------------------------------------
wait until a command returns true with timeout
---------------------------------------------------------------------------'''
import molet
import time
def wait_for_true(conf,delai,sql,msg=None):
_i = 0;
_w = 0;
while True:
_i +=1
conn = molet.DbData(conf)
try:
with molet.DbCursor(conn) as cur:
cur.execute(sql)
r = cur.fetchone()
+ # print r
if r[0] == True:
break
finally:
conn.close()
if msg is None:
pass
elif(_i%10)==0:
print msg
_a = 0.1;
_w += _a;
if _w > delai: # seconds
raise ValueError('After %f seconds, %s != True' % (_w,sql))
time.sleep(_a)
'''---------------------------------------------------------------------------
wait for stack empty
---------------------------------------------------------------------------'''
def wait_for_empty_stack(conf,prtest):
_i = 0;
_w = 0;
sql = "SELECT name,value FROM market.tvar WHERE name in ('STACK_TOP','STACK_EXECUTED')"
while True:
_i +=1
conn = molet.DbData(conf)
try:
with molet.DbCursor(conn) as cur:
cur.execute(sql)
re = {}
for r in cur:
re[r[0]] = r[1]
prtest.progress(float(re['STACK_EXECUTED'])/float(re['STACK_TOP']))
if re['STACK_TOP'] == re['STACK_EXECUTED']:
break
finally:
conn.close()
time.sleep(2)
'''---------------------------------------------------------------------------
executes a script
---------------------------------------------------------------------------'''
def exec_script(dump,dirsql,fn):
fn = os.path.join(dirsql,fn)
if( not os.path.exists(fn)):
raise ValueError('The script %s is not found' % fn)
cur_login = None
titre_test = None
inst = ExecInst(dump)
with open(fn,'r') as f:
for line in f:
line = line.strip()
if len(line) == 0:
continue
dump.write(line+'\n')
if line.startswith('--'):
if titre_test is None:
titre_test = line
elif line.startswith('--USER:'):
cur_login = line[7:].strip()
else:
cursor = inst.exe(line,cur_login)
dump.cur(cursor)
inst.close()
return titre_test
'''---------------------------------------------------------------------------
---------------------------------------------------------------------------'''
class ExecInst(object):
def __init__(self,dump):
self.login = None
self.conn = None
self.cur = None
self.dump = dump
def exe(self,sql,login):
#print login
if self.login != login:
self.close()
if self.conn is None:
self.login = login
_login = None if login == 'admin' else login
self.conn = molet.DbData(self.dump.getConf(),login = _login)
self.cur = self.conn.con.cursor()
# print sql
self.cur.execute(sql)
return self.cur
def close(self):
if not(self.conn is None):
if not(self.cur is None):
self.cur.close()
self.conn.close()
self.conn = None
def execinst(dump,cur_login,sql):
if cur_login == 'admin':
cur_login = None
conn = molet.DbData(dump.getConf(),login = cur_login)
try:
with molet.DbCursor(conn,exit = True) as _cur:
_cur.execute(sql)
dump.cur(_cur)
finally:
conn.close()
'''---------------------------------------------------------------------------
---------------------------------------------------------------------------'''
from datetime import datetime
class Delai(object):
def __init__(self):
self.debut = datetime.now()
def getSecs(self):
return self._duree(self.debut,datetime.now())
def _duree(self,begin,end):
""" returns a float; the number of seconds elapsed between begin and end
"""
if(not isinstance(begin,datetime)): raise ValueError('begin is not datetime object')
if(not isinstance(end,datetime)): raise ValueError('end is not datetime object')
duration = end - begin
secs = duration.days*3600*24 + duration.seconds + duration.microseconds/1000000.
return secs
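The hand-rolled seconds computation in Delai._duree() is equivalent to timedelta.total_seconds() (available since Python 2.7); a small sketch:

```python
from datetime import datetime, timedelta

# Sketch showing that Delai._duree() -- days*3600*24 + seconds +
# microseconds/1e6 -- matches timedelta.total_seconds() term by term.
def duree(begin, end):
    d = end - begin
    return d.days * 3600 * 24 + d.seconds + d.microseconds / 1000000.0
```

On Python >= 2.7 the class could therefore return (end - begin).total_seconds() directly.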
diff --git a/src/worker_ob.c b/src/worker_ob.c
index e93b9fc..3349454 100644
--- a/src/worker_ob.c
+++ b/src/worker_ob.c
@@ -1,389 +1,385 @@
/* -------------------------------------------------------------------------
*
* worker_ob.c
* Code based on worker_spi.c
*
* This code connects to a database and launches two background workers.
for i in [0,1], worker i does the following:
while(true)
dowait := market.workeri()
if (dowait):
wait(dowait) // dowait milliseconds
These workers do nothing if the schema market is not installed.
To force a bg_worker to restart, send a SIGHUP signal to the worker process.
*
* -------------------------------------------------------------------------
*/
#include "postgres.h"
/* These are always necessary for a bgworker */
#include "miscadmin.h"
#include "postmaster/bgworker.h"
#include "storage/ipc.h"
#include "storage/latch.h"
#include "storage/lwlock.h"
#include "storage/proc.h"
#include "storage/shmem.h"
/* these headers are used by this particular worker's code */
#include "access/xact.h"
#include "executor/spi.h"
#include "fmgr.h"
#include "lib/stringinfo.h"
#include "pgstat.h"
#include "utils/builtins.h"
#include "utils/snapmgr.h"
#include "tcop/utility.h"
#define BGW_NBWORKERS 2
static char *worker_names[] = {"openclose","consumestack"};
#define BGW_OPENCLOSE 0
#define BGW_CONSUMESTACK 1
// PG_MODULE_MAGIC;
void _PG_init(void);
/* flags set by signal handlers */
static volatile sig_atomic_t got_sighup = false;
static volatile sig_atomic_t got_sigterm = false;
/* GUC variable */
static char *worker_ob_database = "market";
/* others */
static char *worker_ob_user = "user_bo";
/* two connections are allowed for this user */
typedef struct worktable
{
const char *function_name;
int dowait;
} worktable;
/*
* Signal handler for SIGTERM
* Set a flag to tell the main loop to terminate, and set our latch to wake
* it up.
*/
static void
worker_spi_sigterm(SIGNAL_ARGS)
{
int save_errno = errno;
got_sigterm = true;
if (MyProc)
SetLatch(&MyProc->procLatch);
errno = save_errno;
}
/*
* Signal handler for SIGHUP
* Set a flag to tell the main loop to reread the config file, and set
* our latch to wake it up.
*/
static void
worker_spi_sighup(SIGNAL_ARGS)
{
int save_errno = errno;
got_sighup = true;
if (MyProc)
SetLatch(&MyProc->procLatch);
errno = save_errno;
}
static int _spi_exec_select_ret_int(StringInfoData buf) {
int ret;
int ntup;
bool isnull;
ret = SPI_execute(buf.data, true, 1); // read_only -- one row returned
pfree(buf.data);
if (ret != SPI_OK_SELECT)
elog(FATAL, "SPI_execute failed: error code %d", ret);
if (SPI_processed != 1)
elog(FATAL, "not a singleton result");
ntup = DatumGetInt32(SPI_getbinval(SPI_tuptable->vals[0],
SPI_tuptable->tupdesc,
1, &isnull));
if (isnull)
elog(FATAL, "null result");
return ntup;
}
static bool _test_market_installed() {
int ret;
StringInfoData buf;
initStringInfo(&buf);
appendStringInfo(&buf, "select count(*) from pg_namespace where nspname = 'market'");
ret = _spi_exec_select_ret_int(buf);
if(ret == 0)
return false;
initStringInfo(&buf);
appendStringInfo(&buf, "select value from market.tvar where name = 'INSTALLED'");
ret = _spi_exec_select_ret_int(buf);
if(ret == 0)
return false;
return true;
}
+/*
+static bool _test_bgw_active() {
+ int ret;
+ StringInfoData buf;
+
+ initStringInfo(&buf);
+ appendStringInfo(&buf, "select value from market.tvar where name = 'OC_BGW_ACTIVE'");
+ ret = _spi_exec_select_ret_int(buf);
+ if(ret == 0)
+ return false;
+ return true;
+} */
/*
*/
static bool
_worker_ob_installed()
{
bool installed;
SetCurrentStatementStartTimestamp();
StartTransactionCommand();
SPI_connect();
PushActiveSnapshot(GetTransactionSnapshot());
pgstat_report_activity(STATE_RUNNING, "initializing spi_worker");
installed = _test_market_installed();
if (installed)
elog(LOG, "%s starting",MyBgworkerEntry->bgw_name);
else
elog(LOG, "%s waiting for installation",MyBgworkerEntry->bgw_name);
SPI_finish();
PopActiveSnapshot();
CommitTransactionCommand();
pgstat_report_activity(STATE_IDLE, NULL);
return installed;
}
-/*
-static void _openclose_vacuum() {
- // called by openclose bg_worker
- StringInfoData buf;
- int ret;
- initStringInfo(&buf);
- appendStringInfo(&buf,"VACUUM FULL");
- pgstat_report_activity(STATE_RUNNING, buf.data);
- elog(LOG, "vacuum full starting");
-
-
- ret = SPI_exec(buf.data, 0); */
- /* fails with an exception:
+ /* "VACUUM FULL" fails with an exception:
ERROR: VACUUM cannot be executed from a function or multi-command string
CONTEXT: SQL statement "VACUUM FULL"
*/
- /*
- pfree(buf.data);
- elog(LOG, "vacuum full stopped an returned %d",ret);
-
- if (ret != SPI_OK_UTILITY) // SPI_OK_UPDATE_RETURNING,SPI_OK_SELECT
- // TODO CODE DE RETOUR?????
- elog(FATAL, "cannot execute VACUUM FULL: error code %d",ret);
- return;
-} */
static void
worker_ob_main(Datum main_arg)
{
int index = DatumGetInt32(main_arg);
worktable *table;
StringInfoData buf;
bool installed;
table = palloc(sizeof(worktable));
table->function_name = pstrdup(worker_names[index]);
table->dowait = 0;
/* Establish signal handlers before unblocking signals. */
pqsignal(SIGHUP, worker_spi_sighup);
pqsignal(SIGTERM, worker_spi_sigterm);
/* We're now ready to receive signals */
BackgroundWorkerUnblockSignals();
/* Connect to the database */
if(!(worker_ob_database && *worker_ob_database))
elog(FATAL, "database name undefined");
BackgroundWorkerInitializeConnection(worker_ob_database, worker_ob_user);
installed = _worker_ob_installed();
initStringInfo(&buf);
appendStringInfo(&buf,"SELECT %s FROM market.%s()",
table->function_name, table->function_name);
/*
* Main loop: do this until the SIGTERM handler tells us to terminate
*/
while (!got_sigterm)
{
int ret;
int rc;
int _worker_ob_naptime; // = worker_ob_naptime * 1000L;
+ //bool bgw_active;
if(installed) // && !table->dowait)
_worker_ob_naptime = table->dowait;
else
_worker_ob_naptime = 1000L; // 1 second
/*
* Background workers mustn't call usleep() or any direct equivalent:
* instead, they may wait on their process latch, which sleeps as
* necessary, but is awakened if postmaster dies. That way the
* background process goes away immediately in an emergency.
*/
/* done even if _worker_ob_naptime == 0 */
// elog(LOG, "%s start waiting for %i",table->function_name,_worker_ob_naptime);
rc = WaitLatch(&MyProc->procLatch,
WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
_worker_ob_naptime );
ResetLatch(&MyProc->procLatch);
/* emergency bailout if postmaster has died */
if (rc & WL_POSTMASTER_DEATH)
proc_exit(1);
/*
* In case of a SIGHUP, just reload the configuration.
*/
if (got_sighup)
{
got_sighup = false;
ProcessConfigFile(PGC_SIGHUP);
installed = _worker_ob_installed();
}
if( !installed) continue;
/*
* Start a transaction on which we can run queries. Note that each
* StartTransactionCommand() call should be preceded by a
* SetCurrentStatementStartTimestamp() call, which sets both the time
* for the statement we're about to run, and also the transaction
* start time. Also, each other query sent to SPI should probably be
* preceded by SetCurrentStatementStartTimestamp(), so that statement
* start time is always up to date.
*
* The SPI_connect() call lets us run queries through the SPI manager,
* and the PushActiveSnapshot() call creates an "active" snapshot
* which is necessary for queries to have MVCC data to work on.
*
* The pgstat_report_activity() call makes our activity visible
* through the pgstat views.
*/
SetCurrentStatementStartTimestamp();
StartTransactionCommand();
SPI_connect();
PushActiveSnapshot(GetTransactionSnapshot());
- pgstat_report_activity(STATE_RUNNING, buf.data);
-
- /* We can now execute queries via SPI */
- ret = SPI_execute(buf.data, false, 0);
-
- if (ret != SPI_OK_SELECT) // SPI_OK_UPDATE_RETURNING)
- elog(FATAL, "cannot execute market.%s(): error code %d",
- table->function_name, ret);
+ //bgw_active = _test_bgw_active();
+
+ if (true) { // bgw_active) {
+ pgstat_report_activity(STATE_RUNNING, buf.data);
+
+ /* We can now execute queries via SPI */
+ ret = SPI_execute(buf.data, false, 0);
+
+ if (ret != SPI_OK_SELECT) // SPI_OK_UPDATE_RETURNING)
+ elog(FATAL, "cannot execute market.%s(): error code %d",
+ table->function_name, ret);
+
+ if (SPI_processed != 1) // number of tuple returned
+ elog(FATAL, "market.%s() returned %d tuples instead of one",
+ table->function_name, SPI_processed);
+
+ {
+ bool isnull;
+ int32 val;
+
+ val = DatumGetInt32(SPI_getbinval(SPI_tuptable->vals[0],
+ SPI_tuptable->tupdesc,
+ 1, &isnull));
+
+ if (isnull)
+ elog(FATAL, "market.%s() returned null",table->function_name);
+
+ table->dowait = 0;
+ if (val >=0)
+ table->dowait = val;
+ else {
+ if ((index == BGW_OPENCLOSE) && (val == -100))
+ ; // _openclose_vacuum();
+ else
+ elog(FATAL, "market.%s() returned illegal <0",table->function_name);
+ }
- if (SPI_processed != 1) // number of tuple returned
- elog(FATAL, "market.%s() returned %d tuples instead of one",
- table->function_name, SPI_processed);
-
- {
- bool isnull;
- int32 val;
-
- val = DatumGetInt32(SPI_getbinval(SPI_tuptable->vals[0],
- SPI_tuptable->tupdesc,
- 1, &isnull));
-
- if (isnull)
- elog(FATAL, "market.%s() returned null",table->function_name);
-
- table->dowait = 0;
- if (val >=0)
- table->dowait = val;
- else {
- if ((index == BGW_OPENCLOSE) && (val == -100))
- ; // _openclose_vacuum();
- else
- elog(FATAL, "market.%s() returned illegal <0",table->function_name);
}
-
}
/*
* And finish our transaction.
*/
SPI_finish();
PopActiveSnapshot();
CommitTransactionCommand();
pgstat_report_activity(STATE_IDLE, NULL);
}
proc_exit(0);
}
/*
* Entrypoint of this module.
*
* We register more than one worker process here, to demonstrate how that can
* be done.
*/
void
_PG_init(void)
{
BackgroundWorker worker;
unsigned int i;
/* get the configuration */
/*
DefineCustomIntVariable("worker_ob.naptime",
"Minimum duration of wait time (in milliseconds).",
NULL,
&worker_ob_naptime,
100,
1,
INT_MAX,
PGC_SIGHUP,
0,
NULL,
NULL,
NULL); */
DefineCustomStringVariable("worker_ob.database",
"Name of the database.",
NULL,
&worker_ob_database,
"market",
PGC_SIGHUP, 0,
NULL,NULL,NULL);
/* set up common data for all our workers */
worker.bgw_flags = BGWORKER_SHMEM_ACCESS |
BGWORKER_BACKEND_DATABASE_CONNECTION;
worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
worker.bgw_restart_time = 60; // BGW_NEVER_RESTART;
worker.bgw_main = worker_ob_main;
/*
* Now fill in worker-specific data, and do the actual registrations.
*/
for (i = 0; i < BGW_NBWORKERS; i++)
{
snprintf(worker.bgw_name, BGW_MAXLEN, "market.%s", worker_names[i]);
worker.bgw_main_arg = Int32GetDatum(i);
RegisterBackgroundWorker(&worker);
}
}
diff --git a/vacuum.bash b/vacuum.bash
new file mode 100755
index 0000000..c384ca0
--- /dev/null
+++ b/vacuum.bash
@@ -0,0 +1,9 @@
+#! /bin/bash
+# should be executed with postgres superuser privilege
+while [ "$(psql market -t -c 'SELECT market.foc_start_vacuum(true)')" -ne 2 ]
+do
+ sleep 1
+done
+psql market -t -c 'VACUUM FULL'
+psql market -t -c 'SELECT market.foc_start_vacuum(false)'
+echo 'VACUUM FULL done'
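vacuum.bash and wait_for_true() both implement the same poll-until-ready pattern; a minimal, database-free sketch (the predicate stands in for the psql probe):

```python
import time

# Generic poll-until-true with timeout, mirroring wait_for_true() and the
# while loop of vacuum.bash; predicate() stands in for the SQL probe.
def poll_until(predicate, timeout, interval=0.01):
    """Return True as soon as predicate() is truthy, False on timeout."""
    waited = 0.0
    while waited <= timeout:
        if predicate():
            return True
        time.sleep(interval)
        waited += interval
    return False
```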
|
olivierch/openBarter
|
2b5c8cfe5f91f86859ad6e7a7e6c10508286d35c
|
corrections for bison 3.0
|
diff --git a/simu/liquid/cliquid.py b/simu/liquid/cliquid.py
index 5499630..90b8dd8 100644
--- a/simu/liquid/cliquid.py
+++ b/simu/liquid/cliquid.py
@@ -1,180 +1,178 @@
#!/usr/bin/python
# -*- coding: utf8 -*-
import distrib
import os.path
DB_NAME='liquid'
DB_USER='olivier'
DB_PWD=''
DB_HOST='localhost'
DB_PORT=5432
PATH_ICI = os.path.dirname(os.path.abspath(__file__))
-# PATH_SRC="/home/olivier/Bureau/ob92/src"
PATH_SRC= os.path.join(os.path.dirname(PATH_ICI),'src')
-# PATH_DATA="/home/olivier/Bureau/ob92/simu/liquid/data"
PATH_DATA=os.path.join(PATH_ICI,'data')
MAX_TOWNER=10000 # maximum number of owners in towner
MAX_TORDER=1000000 # maximum size of the order book
MAX_TSTACK=100
MAX_QLT=100 # maximum number of qualities
#
QTT_PROV = 10000 # quantity provided
class Exec1:
def __init__(self):
self.NAME = "X1"
# model
self.MAXCYCLE=64
self.MAXPATHFETCHED=1024*5
self.MAXMVTPERTRANS=128
class Exec2:
def __init__(self):
self.NAME = "X2"
# model
self.MAXCYCLE=32
self.MAXPATHFETCHED=1024*10
self.MAXMVTPERTRANS=128
class Exec3:
def __init__(self):
self.NAME = "X3"
# model
self.MAXCYCLE=64
self.MAXPATHFETCHED=1024*10
self.MAXMVTPERTRANS=128
class Exec4:
def __init__(self):
self.NAME = "X4"
# model
self.MAXCYCLE=64
self.MAXPATHFETCHED=1024*10
self.MAXMVTPERTRANS=256
class Exec5:
def __init__(self):
self.NAME = "E5_Y2P1024_10M256"
# model
self.MAXCYCLE=2
self.MAXPATHFETCHED=1024*10
self.MAXMVTPERTRANS=256
class Basic10:
def __init__(self):
self.CONF_NAME='1e1uni'
self.MAX_OWNER=min(100,MAX_TOWNER) # maximum number of owners
self.MAX_QLT=10 # maximum number of qualities
"""
quality distribution function
"""
self.distribQlt = distrib.uniformQlt
self.coupleQlt = distrib.couple
# test range
self.LIQ_PAS = 500
self.LIQ_ITER = min(30,MAX_TORDER/self.LIQ_PAS)
class Basic100:
def __init__(self):
self.CONF_NAME='1e2uni'
self.MAX_OWNER=min(100,MAX_TOWNER) # maximum number of owners
self.MAX_QLT=100 # maximum number of qualities
"""
quality distribution function
"""
self.distribQlt = distrib.betaQlt
self.coupleQlt = distrib.couple
# test range
self.LIQ_PAS = 500
self.LIQ_ITER = min(30,MAX_TORDER/self.LIQ_PAS)
class Basic1000:
def __init__(self):
self.CONF_NAME='B3'
self.MAX_OWNER=min(100,MAX_TOWNER) # maximum number of owners
self.MAX_QLT=1000 # maximum number of qualities
"""
quality distribution function
"""
self.distribQlt = distrib.betaQlt
self.coupleQlt = distrib.couple
# test range
self.LIQ_PAS = 500
self.LIQ_ITER = min(30,MAX_TORDER/self.LIQ_PAS)
class Basic10000:
def __init__(self):
self.CONF_NAME='1e4uni'
self.MAX_OWNER=min(100,MAX_TOWNER) # maximum number of owners
self.MAX_QLT=10000 # maximum number of qualities
"""
quality distribution function
"""
self.distribQlt = distrib.betaQlt
self.coupleQlt = distrib.couple
# test range
self.LIQ_PAS = 500
self.LIQ_ITER = min(30,MAX_TORDER/self.LIQ_PAS)
class Basic100large:
def __init__(self):
self.CONF_NAME='1E4UNI'
self.MAX_OWNER=min(100,MAX_TOWNER) # maximum number of owners
self.MAX_QLT=10000 # maximum number of qualities
"""
quality distribution function
"""
self.distribQlt = distrib.uniformQlt
self.coupleQlt = distrib.couple
# test range
self.LIQ_PAS = 3000
self.LIQ_ITER = min(30,MAX_TORDER/self.LIQ_PAS)
class Money100:
def __init__(self):
self.CONF_NAME='money100'
self.MAX_OWNER=min(100,MAX_TOWNER) # maximum number of owners
self.MAX_QLT=100 # maximum number of qualities
"""
quality distribution function
"""
self.distribQlt = distrib.uniformQlt
self.coupleQlt = distrib.couple_money
# test range
self.LIQ_PAS = 500
self.LIQ_ITER = min(30,MAX_TORDER/self.LIQ_PAS)
diff --git a/simu/liquid/concat.py b/simu/liquid/concat.py
index bb29f00..85eb569 100755
--- a/simu/liquid/concat.py
+++ b/simu/liquid/concat.py
@@ -1,116 +1,115 @@
#!/usr/bin/python
# -*- coding: utf8 -*-
"""
from
https://google-developers.appspot.com/chart/interactive/docs/gallery/linechart
"""
import os
import csv
import cliquid
visplot_tmp ="""
<html>
<head>
<script type="text/javascript" src="https://www.google.com/jsapi"></script>
<script type="text/javascript">
google.load("visualization", "1", {packages:["corechart"]});
google.setOnLoadCallback(drawChart);
function drawChart() {
%s
}
</script>
</head>
<body>
<table>
<tr>
<td><div id="chart_delay" style="width: 450px; height: 250px;"></div></td>
<td><div id="chart_fluidity" style="width: 450px; height: 250px;"></div></td>
</tr>
<tr>
<td><div id="chart_nbcycle" style="width: 450px; height: 250px;"></div></td>
<td><div id="chart_gain" style="width: 450px; height: 250px;"></div></td>
</tr>
</table>
</body>
</html>
"""
visplot_graph ="""
var data = google.visualization.arrayToDataTable(%s);
var options = {
title: '%s',
hAxis: {title: 'Volume of the order book'},
legend: {position: 'out'},
vAxes:[{title:'%s'}]
};
var chart = new google.visualization.LineChart(document.getElementById('chart_%s'));
chart.draw(data, options);
"""
-#PATH_DATA = "/home/olivier/Bureau/ob92/simu/liquid/test"
+
PATH_DATA = cliquid.PATH_DATA
def makeHtml(content):
fn = os.path.join(PATH_DATA,'result.html')
with open(fn,'w') as f:
f.write(visplot_tmp % (content,))
def makeGraph(title,arr,unite):
return (visplot_graph % (arr,title,unite,title))
-
-
+
def makeVis(prefix):
fils = []
for root,dirs,files in os.walk(PATH_DATA):
for fil in files:
if(not fil.endswith('.txt')):
continue
if(fil.startswith(prefix)):
fils.append(os.path.join(root,fil))
content = []
for clef,valeurs in {'delay':(1,'seconds'),'fluidity':(2,'%'),'nbcycle':(3,'nbcycle'),'gain':(4,'%')}.iteritems():
indice,unite = valeurs
resus = {}
for fil in fils:
resu = []
with open(fil,'rb') as f:
reader = csv.reader(f, delimiter=';', quotechar='|')
for lin in reader:
l= [lin[0],lin[indice]]
resu.append(l)
nam = fil.split(prefix)
nam = nam[1].split('.txt')
nam = nam[0]
resus[nam] = resu
keys = resus.keys()
titles = [[clef]+keys,]
mat = []
cnt = len(resu)
begin = True
for k in keys:
if(begin):
begin = False;
for i in range(cnt):
mat.append([resus[k][i][0]])
for i in range(cnt):
lin = mat[i]
lin.append(float(resus[k][i][1]))
mat[i] = lin
content.append(makeGraph(clef,titles+mat,unite))
makeHtml('\n'.join(content))
if __name__ == "__main__":
makeVis('result_')
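The matrix makeVis() assembles for google.visualization.arrayToDataTable -- a header row naming the metric plus one column per result file, followed by one row per order-book volume -- can be sketched as (hypothetical helper, not part of concat.py):

```python
# Hypothetical helper reproducing the shape of the matrix makeVis() builds:
# [['metric', name1, name2, ...], [x, y1, y2, ...], ...]
# series maps a curve name to (x, y) pairs sharing the same x values.
def build_matrix(metric, series):
    names = sorted(series)
    header = [metric] + names
    xs = [x for x, _ in series[names[0]]]
    rows = [[x] + [dict(series[n])[x] for n in names] for x in xs]
    return [header] + rows
```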
diff --git a/src/Makefile b/src/Makefile
index a8637c5..b88b557 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -1,39 +1,39 @@
MODULE_big = flowf
OBJS = flowm.o yflow.o yflowparse.o yorder.o flowc.o earthrad.o worker_ob.o
EXTENSION = flowf
DATA = flowf--0.1.sql flowf--unpackaged--0.1.sql
REGRESS = testflow_1 testflow_2 testflow_2a
EXTRA_CLEAN = yflowparse.c yflowscan.c test/results/*.res test/py/*.pyc ../doc/*.pdf ../simu/liquid/data/*
PGXS := $(shell pg_config --pgxs)
include $(PGXS)
yflowparse.o: yflowscan.c
yflowparse.c: yflowparse.y
ifdef BISON
- $(BISON) $(BISONFLAGS) -o $@ $<
+ $(BISON) $(BISONFLAGS) --name-prefix=yflow_yy -o $@ $<
else
@$(missing) bison $< $@
endif
yflowscan.c: yflowscan.l
ifdef FLEX
$(FLEX) $(FLEXFLAGS) -o'$@' $<
else
@$(missing) flex $< $@
endif
test: installcheck test/py/*.py test/sql/*.sql
cd test; python py/run.py; cd ..
cd test; python py/run.py -i -r;cd ..
doc:
soffice --invisible --norestore --convert-to pdf --outdir ../doc ../doc/*.odt
diff --git a/src/sql/openclose.sql b/src/sql/openclose.sql
index e4281ac..5c27710 100644
--- a/src/sql/openclose.sql
+++ b/src/sql/openclose.sql
@@ -1,329 +1,329 @@
/*
--------------------------------------------------------------------------------
-- BGW_OPENCLOSE
--------------------------------------------------------------------------------
openclose() is executed by postgres in the background, using a background worker called
BGW_OPENCLOSE (see src/worker_ob.c).
It performs transitions between daily market phases. Surprisingly, the sequence
of operations does not depend on time and is always performed in the same order;
some operations simply wait until the end of the current gphase.
*/
--------------------------------------------------------------------------------
INSERT INTO tvar(name,value) VALUES
	('OC_CURRENT_PHASE',101); -- phase of the market when its model is settled
CREATE TABLE tmsgdaysbefore(LIKE tmsg);
GRANT SELECT ON tmsgdaysbefore TO role_com;
-- index and unique constraint are not cloned
--------------------------------------------------------------------------------
CREATE FUNCTION openclose() RETURNS int AS $$
/*
The structure of the code is:
_phase := OC_CURRENT_PHASE
CASE _phase
....
WHEN X THEN
dowait := operation_of_phase(X)
OC_CURRENT_PHASE := _next_phase
....
return dowait
This code is executed by the BGW_OPENCLOSE doing following:
while(true)
dowait := market.openclose()
if (dowait >=0):
wait for dowait milliseconds
elif dowait == -100:
VACUUM FULL
else:
error
*/
DECLARE
_phase int;
_dowait int := 0;
_rp yerrorprim;
_stock_id int;
_owner text;
_done boolean;
BEGIN
_phase := fgetvar('OC_CURRENT_PHASE');
CASE _phase
------------------------------------------------------------------------
-- GPHASE 0 BEGIN OF DAY --
------------------------------------------------------------------------
WHEN 0 THEN -- creating the timetable of the day
PERFORM foc_create_timesum();
-- tmsg is archived to tmsgdaysbefore
WITH t AS (DELETE FROM tmsg RETURNING * )
INSERT INTO tmsgdaysbefore SELECT * FROM t ;
TRUNCATE tmsg;
PERFORM setval('tmsg_id_seq',1,false);
PERFORM foc_next(1,'tmsg archived');
WHEN 1 THEN -- waiting for opening
IF(foc_in_gphase(_phase)) THEN
_dowait := 60000; -- 1 minute
ELSE
PERFORM foc_next(101,'Start opening sequence');
END IF;
------------------------------------------------------------------------
-- GPHASE 1 MARKET OPENED --
------------------------------------------------------------------------
WHEN 101 THEN -- client access opening.
REVOKE role_co_closed FROM role_client;
GRANT role_co TO role_client;
PERFORM foc_next(102,'Client access opened');
WHEN 102 THEN -- market is opened to client access, waiting for closing.
IF(foc_in_gphase(_phase)) THEN
PERFORM foc_clean_outdated_orders();
_dowait := 60000; -- 1 minute
ELSE
PERFORM foc_next(120,'Start closing');
END IF;
WHEN 120 THEN -- market is closing.
REVOKE role_co FROM role_client;
GRANT role_co_closed TO role_client;
PERFORM foc_next(121,'Client access revoked');
WHEN 121 THEN -- waiting until the stack is empty
			-- checks whether BGW_CONSUMESTACK purged the stack
_done := fstackdone();
IF(not _done) THEN
_dowait := 60000;
-- waits one minute before testing again
ELSE
-- the stack is purged
PERFORM foc_next(200,'Last primitives performed');
END IF;
------------------------------------------------------------------------
-- GPHASE 2 - MARKET CLOSED --
------------------------------------------------------------------------
WHEN 200 THEN -- removing orders of the order book
SELECT (o.ord).id,w.name INTO _stock_id,_owner FROM torder o
INNER JOIN towner w ON w.id=(o.ord).own
WHERE (o.ord).oid = (o.ord).id LIMIT 1;
IF(FOUND) THEN
_rp := fsubmitrmorder(_owner,_stock_id);
IF(_rp.error.code != 0 ) THEN
RAISE EXCEPTION 'Error while removing orders %',_rp;
END IF;
				-- repeat until the order book is empty
ELSE
PERFORM foc_next(201,'Order book is emptied');
END IF;
WHEN 201 THEN -- waiting until the stack is empty
			-- checks whether BGW_CONSUMESTACK purged the stack
_done := fstackdone();
IF(not _done) THEN
_dowait := 60000;
-- waits one minute before testing again
ELSE
-- the stack is purged
PERFORM foc_next(202,'rm primitives are processed');
END IF;
WHEN 202 THEN -- truncating tables except tmsg
truncate torder;
truncate tstack;
PERFORM setval('tstack_id_seq',1,false);
PERFORM setval('tmvt_id_seq',1,false);
truncate towner;
PERFORM setval('towner_id_seq',1,false);
			PERFORM foc_next(203,'tables torder,tstack,tmvt,towner are truncated');
WHEN 203 THEN -- asking for VACUUM FULL execution
_dowait := -100;
			PERFORM foc_next(204,'VACUUM FULL is launched');
WHEN 204 THEN -- waiting till the end of the day
IF(foc_in_gphase(_phase)) THEN
_dowait := 60000; -- 1 minute
-- waits before testing again
ELSE
PERFORM foc_next(0,'End of the day');
END IF;
ELSE
RAISE EXCEPTION 'Should not reach this point with phase=%',_phase;
END CASE;
-- RAISE LOG 'Phase=% _dowait=%',_phase,_dowait;
RETURN _dowait;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION openclose() TO role_bo;
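The driver loop described in the comment of openclose() can be written out in Python for clarity. This is only a sketch: `call_openclose` and `vacuum_full` are hypothetical stand-ins for "SELECT market.openclose()" and "VACUUM FULL"; the real worker is C code in src/worker_ob.c.

```python
import time

def bgw_openclose_loop(call_openclose, vacuum_full, sleep=time.sleep, max_iters=None):
    """Sketch of the BGW_OPENCLOSE driver: call openclose(), then either
    wait 'dowait' milliseconds, run VACUUM FULL (dowait == -100), or fail.
    max_iters bounds the loop for testing; the real worker runs forever."""
    i = 0
    while max_iters is None or i < max_iters:
        i += 1
        dowait = call_openclose()
        if dowait >= 0:
            sleep(dowait / 1000.0)      # dowait is in milliseconds
        elif dowait == -100:
            vacuum_full()               # phase 203 requested a VACUUM FULL
        else:
            raise RuntimeError("openclose() returned %d" % dowait)
```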
-------------------------------------------------------------------------------
CREATE FUNCTION foc_next(_phase int,_msg text) RETURNS void AS $$
BEGIN
PERFORM fsetvar('OC_CURRENT_PHASE',_phase);
RAISE LOG 'MARKET PHASE %: %',_phase,_msg;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION foc_next(int,text) TO role_bo;
--------------------------------------------------------------------------------
INSERT INTO tvar(name,value) VALUES
('OC_CURRENT_OPENED',0); -- sub-state of the opened phase
CREATE FUNCTION foc_clean_outdated_orders() RETURNS void AS $$
/* every 5 calls, cleans outdated orders */
DECLARE
_cnt int;
BEGIN
UPDATE tvar SET value=((value+1) % 5 ) WHERE name='OC_CURRENT_OPENED'
RETURNING value INTO _cnt ;
IF(_cnt !=0) THEN
RETURN;
END IF;
	-- delete outdated orders from the order book and related sub-orders
DELETE FROM torder o USING torder po
WHERE (o.ord).oid = (po.ord).id -- having a parent order that
AND NOT (po.duration IS NULL) -- have a timeout defined
AND (po.created + po.duration) <= clock_timestamp(); -- and is outdated
RETURN;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION foc_clean_outdated_orders() TO role_bo;
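The throttle used by foc_clean_outdated_orders() — do the real work only once every 5 calls, via a counter incremented modulo 5 — is a generic pattern. A minimal Python sketch (class and parameter names are illustrative, not part of the project):

```python
class EveryN:
    """Run an action only on every n-th call, mirroring the
    OC_CURRENT_OPENED counter in foc_clean_outdated_orders()."""
    def __init__(self, n, action):
        self.n = n
        self.value = 0          # counterpart of the tvar row
        self.action = action

    def __call__(self):
        self.value = (self.value + 1) % self.n
        if self.value != 0:
            return False        # skipped, like the early RETURN
        self.action()           # every n-th call does the cleanup
        return True
```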
--------------------------------------------------------------------------------
CREATE VIEW vmsg3 AS WITH t AS (SELECT * from tmsg WHERE usr = session_user
UNION ALL SELECT * from tmsgdaysbefore WHERE usr = session_user
) SELECT created,id,typ,jso
from t order by created ASC,id ASC;
GRANT SELECT ON vmsg3 TO role_com;
/*
--------------------------------------------------------------------------------
-- TIME-DEPENDENT FUNCTION foc_in_gphase(_phase int)
--------------------------------------------------------------------------------
The day is divided into 3 gphases. The table tdelay defines the durations of these gphases.
When the model is set up, and again each day, the table timesum is built from tdelay to set
the planning of the market. foc_in_gphase(_phase) returns true when the current time
falls within the planning of the _phase.
*/
--------------------------------------------------------------------------------
-- delays of the phases
--------------------------------------------------------------------------------
create table tdelay(
id serial,
delay interval
);
GRANT SELECT ON tdelay TO role_com;
/*
NB_GPHASE = 3, with id between [0,NB_GPHASE-1]
delays are defined for [0,NB_GPHASE-2]; the last gphase waits until the end of the day
OC_DELAY_i is the duration of a gphase for i in [0,NB_GPHASE-2]
*/
INSERT INTO tdelay (delay) VALUES
('30 minutes'::interval), -- starts at 0h 30'
('23 hours'::interval) -- stops at 23h 30'
-- sum of delays < 24 hours
;
--------------------------------------------------------------------------------
CREATE FUNCTION foc_create_timesum() RETURNS void AS $$
/* creates the table timesum from tdelay, where each record
defines for a gphase the delay between the beginning of the day
and the end of that gphase.
builds timesum with rows (k,ends) such as:
ends[0] = 0
ends[k] = sum(tdelay[i] for i in [0,k])
*/
DECLARE
_inter interval;
_cnt int;
BEGIN
- DROP TABLE IF EXISTS timesum;
+ -- DROP TABLE IF EXISTS timesum;
SELECT count(*) INTO STRICT _cnt FROM tdelay;
CREATE TABLE timesum (id,ends) AS
SELECT t.id,sum(d.delay) OVER w FROM generate_series(1,_cnt) t(id)
LEFT JOIN tdelay d ON (t.id=d.id) WINDOW w AS (order by t.id );
INSERT INTO timesum VALUES (0,'0'::interval);
SELECT max(ends) INTO _inter FROM timesum;
IF( _inter >= '24 hours'::interval) THEN
RAISE EXCEPTION 'sum(delay) = % > 24 hours',_inter;
END IF;
END;
$$ LANGUAGE PLPGSQL set search_path = market,public;
GRANT EXECUTE ON FUNCTION foc_create_timesum() TO role_bo;
select market.foc_create_timesum();
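foc_create_timesum() and foc_in_gphase() amount to a cumulative sum over tdelay followed by a "largest id whose end is below the current time" lookup. A minimal Python sketch with delays expressed in minutes (function names are illustrative):

```python
def build_timesum(delays):
    """ends[0] = 0, ends[k] = sum(delays[0..k-1]); rejects days of 24h or
    more, like foc_create_timesum()."""
    ends = [0]
    for d in delays:
        ends.append(ends[-1] + d)
    if ends[-1] >= 24 * 60:
        raise ValueError("sum(delay) = %d minutes >= 24 hours" % ends[-1])
    return ends

def planned_gphase(ends, minutes_since_midnight):
    """max(id) such that ends[id] < current time, like foc_in_gphase()."""
    return max(i for i, e in enumerate(ends) if e < minutes_since_midnight)
```

With the delays inserted above (30 minutes, then 23 hours), gphase 0 covers the first half hour, gphase 1 the next 23 hours, and gphase 2 the remainder of the day.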
--------------------------------------------------------------------------------
CREATE FUNCTION foc_in_gphase(_phase int) RETURNS boolean AS $$
/* returns TRUE when the current time is between the limits of the gphase;
the gphase is defined as _phase/100 */
DECLARE
_actual_gphase int := _phase /100;
_planned_gphase int;
_time interval;
BEGIN
-- the time since the beginning of the day
_time := now() - date_trunc('day',now());
SELECT max(id) INTO _planned_gphase FROM timesum where ends < _time ;
-- _planned_gphase is such as
-- _time is in the interval (timesum[ _planned_gphase ],timesum[ _planned_gphase+1 ])
IF (_planned_gphase = _actual_gphase) THEN
RETURN true;
ELSE
RETURN false;
END IF;
END;
$$ LANGUAGE PLPGSQL set search_path = market,public;
GRANT EXECUTE ON FUNCTION foc_in_gphase(int) TO role_bo;
diff --git a/src/sql/roles.sql b/src/sql/roles.sql
index 7f97f0a..c7b342b 100644
--- a/src/sql/roles.sql
+++ b/src/sql/roles.sql
@@ -1,99 +1,100 @@
+/*
\set ECHO none
\set ON_ERROR_STOP on
-/* script executed for the whole cluster */
+-- script executed for the whole cluster
SET client_min_messages = warning;
SET log_error_verbosity = terse;
-BEGIN;
+BEGIN; */
/* flowf extension */
-- drop extension if exists btree_gin cascade;
-- create extension btree_gin with version '1.0';
DROP EXTENSION IF EXISTS flowf;
CREATE EXTENSION flowf WITH VERSION '0.1';
--------------------------------------------------------------------------------
CREATE FUNCTION _create_role(_role text) RETURNS int AS $$
BEGIN
BEGIN
EXECUTE 'CREATE ROLE ' || _role;
EXCEPTION WHEN duplicate_object THEN
NULL;
END;
EXECUTE 'ALTER ROLE ' || _role || ' NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT NOREPLICATION';
RETURN 0;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
/* definition of roles
-- role_com ----> role_co --------->role_client---->clientA
       |                                       \-->clientB
       |\-> role_co_closed
       |
        \-> role_bo---->user_bo
access by clients can be disabled/enabled with a single command:
REVOKE role_co FROM role_client
GRANT role_co TO role_client
opening/closing of market is performed by switching role_client
between role_co and role_co_closed
same thing for role batch with role_bo:
REVOKE role_bo FROM user_bo
GRANT role_bo TO user_bo
*/
--------------------------------------------------------------------------------
select _create_role('prod'); -- owner of market objects
ALTER ROLE prod WITH createrole;
/* so that prod can modify roles at opening and closing. */
select _create_role('role_com');
SELECT _create_role('role_co'); -- when market is opened
GRANT role_com TO role_co;
SELECT _create_role('role_co_closed'); -- when market is closed
GRANT role_com TO role_co_closed;
SELECT _create_role('role_client');
GRANT role_co_closed TO role_client; -- market phase 101
-- role_com ---> role_bo----> user_bo
SELECT _create_role('role_bo');
GRANT role_com TO role_bo;
-- ALTER ROLE role_bo INHERIT;
SELECT _create_role('user_bo');
GRANT role_bo TO user_bo;
-- two connections are allowed for background_workers
-- BGW_OPENCLOSE and BGW_CONSUMESTACK
ALTER ROLE user_bo WITH LOGIN CONNECTION LIMIT 2;
--------------------------------------------------------------------------------
select _create_role('test_clienta');
ALTER ROLE test_clienta WITH login;
GRANT role_client TO test_clienta;
select _create_role('test_clientb');
ALTER ROLE test_clientb WITH login;
GRANT role_client TO test_clientb;
select _create_role('test_clientc');
ALTER ROLE test_clientc WITH login;
GRANT role_client TO test_clientc;
select _create_role('test_clientd');
ALTER ROLE test_clientd WITH login;
GRANT role_client TO test_clientd;
-COMMIT;
+-- COMMIT;
diff --git a/src/test/py/run.py b/src/test/py/run.py
index 91b91c4..4b39b3e 100755
--- a/src/test/py/run.py
+++ b/src/test/py/run.py
@@ -1,159 +1,159 @@
# -*- coding: utf-8 -*-
'''
Test framework for tu_* tests
***************************************************************************
execution:
    reset_market.sql
submitted:
    list of t_*.sql primitives
results:
    state of the order book
    state of tmsg
comparison expected/obtained
in src/test/
    run.py
    sql/reset_market.sql
    sql/t_*.sql
    expected/t_*.res
    obtained/t_*.res
loop for each t_*.sql:
    reset_market.sql
    execute t_*.sql
    dump the results into obtained/t_*.res
    compare expected and obtained
'''
import sys,os,time,logging
import psycopg2
import psycopg2.extensions
import traceback
import srvob_conf
import molet
import utilt
import test_ti
-import test_volume
+
import sys
PARLEN=80
prtest = utilt.PrTest(PARLEN,'=')
def tests_tu(options):
titre_test = "UNDEFINED"
curdir,sqldir,resultsdir,expecteddir = utilt.get_paths()
try:
utilt.wait_for_true(srvob_conf.dbBO,0.1,"SELECT value=102,value FROM market.tvar WHERE name='OC_CURRENT_PHASE'",
msg="Waiting for market opening")
except psycopg2.OperationalError,e:
print "Please adjust DB_NAME,DB_USER,DB_PWD,DB_PORT parameters of the file src/test/py/srvob_conf.py"
print "The test program could not connect to the market"
exit(1)
if options.test is None:
_fts = [f for f in os.listdir(sqldir) if f.startswith('tu_') and f[-4:]=='.sql']
_fts.sort(lambda x,y: cmp(x,y))
else:
_nt = options.test + '.sql'
_fts = os.path.join(sqldir,_nt)
if not os.path.exists(_fts):
print 'This test \'%s\' was not found' % _fts
return
else:
_fts = [_nt]
_tok,_terr,_num_test = 0,0,0
prtest.title('running tests on database "%s"' % (srvob_conf.DB_NAME,))
#print '='*PARLEN
print ''
print 'Num\tStatus\tName'
print ''
for file_test in _fts: # iteration over test cases
_nom_test = file_test[:-4]
_terr +=1
_num_test +=1
file_result = file_test[:-4]+'.res'
_fte = os.path.join(resultsdir,file_result)
_fre = os.path.join(expecteddir,file_result)
with open(_fte,'w') as f:
cur = None
dump = utilt.Dumper(srvob_conf.dbBO,options,None)
titre_test = utilt.exec_script(dump,sqldir,'reset_market.sql')
dump = utilt.Dumper(srvob_conf.dbBO,options,f)
titre_test = utilt.exec_script(dump,sqldir,file_test)
utilt.wait_for_true(srvob_conf.dbBO,20,"SELECT market.fstackdone()")
conn = molet.DbData(srvob_conf.dbBO)
try:
with molet.DbCursor(conn) as cur:
dump.torder(cur)
dump.tmsg(cur)
finally:
conn.close()
if(os.path.exists(_fre)):
if(utilt.files_clones(_fte,_fre)):
_tok +=1
_terr -=1
print '%i\tY\t%s\t%s' % (_num_test,_nom_test,titre_test)
else:
print '%i\tN\t%s\t%s' % (_num_test,_nom_test,titre_test)
else:
print '%i\t?\t%s\t%s' % (_num_test,_nom_test,titre_test)
# display global results
print ''
print 'Test status: (Y) expected == results, (N) expected != results, (F) failed, (?) expected undefined'
prtest.line()
if(_terr == 0):
prtest.center('\tAll %i tests passed' % _tok)
else:
prtest.center('\t%i tests KO, %i passed' % (_terr,_tok))
prtest.line()
'''---------------------------------------------------------------------------
arguments
---------------------------------------------------------------------------'''
from optparse import OptionParser
import os
def main():
#global options
usage = """usage: %prog [options]
tests """
parser = OptionParser(usage)
parser.add_option("-t","--test",action="store",type="string",dest="test",help="test",
default= None)
parser.add_option("-v","--verbose",action="store_true",dest="verbose",help="verbose",default=False)
parser.add_option("-b","--build",dest="build",type="int",help="generates random test cases for test_ti",default=0)
parser.add_option("-i","--ti",action="store_true",dest="test_ti",help="execute test_ti",default=False)
parser.add_option("-r","--reset",action="store_true",dest="test_ti_reset",help="clean before execution test_ti",default=False)
(options, args) = parser.parse_args()
# um = os.umask(0177) # u=rw,g=,o=
if options.build:
test_ti.build_ti(options)
elif options.test_ti:
test_ti.test_ti(options)
else:
tests_tu(options)
if __name__ == "__main__":
main()
diff --git a/src/test/py/test_ti.py b/src/test/py/test_ti.py
index 9fe3682..4560875 100644
--- a/src/test/py/test_ti.py
+++ b/src/test/py/test_ti.py
@@ -1,353 +1,273 @@
# -*- coding: utf-8 -*-
+'''
+Packages required:
+    apt-get install python-psycopg2
+    sudo easy_install simplejson
+'''
import sys,os,time,logging
import psycopg2
import psycopg2.extensions
import traceback
import srvob_conf
import molet
import utilt
import sys
sys.path = [os.path.abspath(os.path.join(os.path.abspath(__file__),"../../../../simu/liquid"))]+ sys.path
#print sys.path
import distrib
PARLEN=80
prtest = utilt.PrTest(PARLEN,'=')
import random
import csv
import simplejson
import sys
def build_ti(options):
	''' build a .csv file with a batch of submit primitives
options.build is the number of tests to be generated
'''
#print options.build
#return
#conf = srvob_conf.dbBO
curdir,sqldir,resultsdir,expecteddir = utilt.get_paths()
_frs = os.path.join(sqldir,'test_ti.csv')
MAX_OWNER = 10
MAX_QLT = 20
QTT_PROV = 10000
prtest.title('generating tests cases for quotes')
def gen(nborders,frs):
for i in range(nborders):
# choose an owner
w = random.randint(1,MAX_OWNER)
# choose a couple of qualities
qlt_prov,qlt_requ = distrib.couple(distrib.uniformQlt,MAX_QLT)
# choose an omega between 0.5 and 1.5
r = random.random()+0.5
qtt_requ = int(QTT_PROV * r)
# 10% of orders are limit
lb= 'limit' if (random.random()>0.9) else 'best'
frs.writerow(['admin',lb,'w%i'%w,'q%i'%qlt_requ,qtt_requ,'q%i'%qlt_prov,QTT_PROV])
with open(_frs,'w') as f:
spamwriter = csv.writer(f)
gen(options.build,spamwriter)
if(molet.removeFile(os.path.join(expecteddir,'test_ti.res'),ignoreWarning = True)):
prtest.center('test_ti.res removed')
prtest.center('done')
prtest.line()
def test_ti(options):
_reset,titre_test = options.test_ti_reset,''
curdir,sqldir,resultsdir,expecteddir = utilt.get_paths()
prtest.title('running test_ti on database "%s"' % (srvob_conf.DB_NAME,))
dump = utilt.Dumper(srvob_conf.dbBO,options,None)
if _reset:
print '\tReset: Clearing market ...'
titre_test = utilt.exec_script(dump,sqldir,'reset_market.sql')
print '\t\tDone'
fn = os.path.join(sqldir,'test_ti.csv')
if( not os.path.exists(fn)):
		raise ValueError('The data file %s was not found' % fn)
with open(fn,'r') as f:
spamreader = csv.reader(f)
values_prov = {}
_nbtest = 0
for row in spamreader:
_nbtest +=1
qua_prov,qtt_prov = row[5],row[6]
if not qua_prov in values_prov.keys():
values_prov[qua_prov] = 0
values_prov[qua_prov] = values_prov[qua_prov] + int(qtt_prov)
#print values_prov
cur_login = None
titre_test = None
inst = utilt.ExecInst(dump)
user = None
fmtorder = "SELECT * from market.fsubmitorder('%s','%s','%s',%s,'%s',%s)"
fmtquote = "SELECT * from market.fsubmitquote('%s','%s','%s',%s,'%s',%s)"
fmtjsosr = '''SELECT jso from market.tmsg
where json_extract_path_text(jso,'id')::int=%i and typ='response' '''
fmtjsose = """SELECT json_extract_path_text(jso,'orde','id')::int id,
sum(json_extract_path_text(jso,'mvt_from','qtt')::bigint) qtt_prov,
sum(json_extract_path_text(jso,'mvt_to','qtt')::bigint) qtt_requ
from market.tmsg
where json_extract_path_text(jso,'orig')::int=%i
and json_extract_path_text(jso,'orde','id')::int=%i
and typ='exchange' group by json_extract_path_text(jso,'orde','id')::int """
'''
the order that produced the exchange has the qualities expected
'''
i= 0
if _reset:
print '\tSubmission: sending a series of %i tests where a random set of arguments' % _nbtest
print '\t\tis used to submit a quote, then an order'
with open(fn,'r') as f:
spamreader = csv.reader(f)
compte = utilt.Delai()
for row in spamreader:
user = row[0]
params = tuple(row[1:])
cursor = inst.exe( fmtquote % params,user)
cursor = inst.exe( fmtorder % params,user)
i +=1
if i % 100 == 0:
prtest.progress(i/float(_nbtest))
delai = compte.getSecs()
print '\t\t%i quote & order primitives in %f seconds' % (_nbtest*2,delai)
print '\tExecution: Waiting for end of execution ...'
#utilt.wait_for_true(srvob_conf.dbBO,1000,"SELECT market.fstackdone()",prtest=prtest)
utilt.wait_for_empty_stack(srvob_conf.dbBO,prtest)
delai = compte.getSecs()
print '\t\t Done: mean time per primitive %f seconds' % (delai/(_nbtest*2),)
fmtiter = '''SELECT json_extract_path_text(jso,'id')::int id,json_extract_path_text(jso,'primitive','type') typ
from market.tmsg where typ='response' and json_extract_path_text(jso,'primitive','kind')='quote'
order by id asc limit 10 offset %i'''
i = 0
_notnull,_ko,_limit,_limitko = 0,0,0,0
	print '\tChecking: identity of quote result and order result across %i test cases' % _nbtest
print '\t\tusing the content of market.tmsg'
while True:
cursor = inst.exe( fmtiter % i,user)
vec = []
for re in cursor:
vec.append(re)
l = len(vec)
if l == 0:
break
for idq,_type in vec:
i += 1
if _type == 'limit':
_limit += 1
# result of the quote for idq
_cur = inst.exe(fmtjsosr %idq,user)
res = _cur.fetchone()
res_quote =simplejson.loads(res[0])
expected = res_quote['result']['qtt_give'],res_quote['result']['qtt_reci']
#result of the order for idq+1
_cur = inst.exe(fmtjsose %(idq+1,idq+1),user)
res = _cur.fetchone()
if res is None:
result = 0,0
else:
ido_,qtt_prov_,qtt_reci_ = res
result = qtt_prov_,qtt_reci_
_notnull +=1
if _type == 'limit':
if float(expected[0])/float(expected[1]) < float(qtt_prov_)/float(qtt_reci_):
_limitko +=1
if result != expected:
_ko += 1
print idq,res,res_quote
if i %100 == 0:
prtest.progress(i/float(_nbtest))
'''
if i == 100:
print '\t\t.',
else:
print '.',
sys.stdout.flush()
if(_ko != 0): _errs = ' - %i errors' %_ko
else: _errs = ''
print ('\t\t%i quote & order\t%i quotes returned a result %s' % (i-_ko,_notnull,_errs))
'''
_valuesko = check_values(inst,values_prov,user)
prtest.title('Results checkings')
print ''
print '\t\t%i\torders returned a result different from the previous quote' % _ko
print '\t\t\twith the same arguments\n'
print '\t\t%i\tlimit orders returned a result where the limit is not observed\n' % _limitko
print '\t\t%i\tqualities where the quantity is not preserved by the market\n' % _valuesko
prtest.line()
if(_ko == 0 and _limitko == 0 and _valuesko == 0):
prtest.center('\tAll %i tests passed' % i)
else:
prtest.center('\tSome of %i tests failed' % (i,))
prtest.line()
inst.close()
return titre_test
def check_values(inst,values_input,user):
'''
	Values_input is, for each quality, the sum of quantities submitted to the market
	Values_remain is, for each quality, the sum of quantities remaining in the order book
	Values_output is, for each quality, the sum of quantities of mvt_from.qtt in tmsg
Checks that for each quality q:
Values_input[q] == Values_remain[q] + Values_output[q]
'''
sql = "select (ord).qua_prov,sum((ord).qtt) from market.torder where (ord).oid=(ord).id group by (ord).qua_prov"
cursor = inst.exe( sql,user)
values_remain = {}
for qua_prov,qtt in cursor:
values_remain[qua_prov] = qtt
sql = '''select json_extract_path_text(jso,'mvt_from','nat'),sum(json_extract_path_text(jso,'mvt_from','qtt')::bigint)
from market.tmsg where typ='exchange'
group by json_extract_path_text(jso,'mvt_from','nat')
'''
cursor = inst.exe( sql,user)
values_output = {}
for qua_prov,qtt in cursor:
values_output[qua_prov] = qtt
_errs = 0
for qua,vin in values_input.iteritems():
_out = values_output.get(qua,None)
_remain = values_remain.get(qua,None)
if _out is None or _remain is None:
_errs += 1
continue
if vin != (_out+ _remain):
print qua,vin,_out,_remain
_errs += 1
return _errs
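The invariant enforced by check_values() — for every quality, the quantity submitted equals what remains in the order book plus what moved through exchanges — can be restated compactly. In this sketch, plain dictionaries stand in for the three SQL aggregations:

```python
def conservation_errors(values_input, values_remain, values_output):
    """Count qualities where input != remain + output, mirroring
    check_values(): a quality missing from either side also counts
    as an error, as in the original."""
    errs = 0
    for qua, vin in values_input.items():
        out = values_output.get(qua)
        remain = values_remain.get(qua)
        if out is None or remain is None:
            errs += 1
            continue
        if vin != out + remain:
            errs += 1
    return errs
```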
-
-def test_ti_old(options):
-
- curdir,sqldir,resultsdir,expecteddir = utilt.get_paths()
- prtest.title('running test_ti on database "%s"' % (srvob_conf.DB_NAME,))
-
- dump = utilt.Dumper(srvob_conf.dbBO,options,None)
- titre_test = utilt.exec_script(dump,sqldir,'reset_market.sql')
-
- fn = os.path.join(sqldir,'test_ti.csv')
- if( not os.path.exists(fn)):
- raise ValueError('The data %s is not found' % fn)
-
- cur_login = None
- titre_test = None
-
- inst = utilt.ExecInst(dump)
- quote = False
-
- with open(fn,'r') as f:
- spamreader = csv.reader(f)
- i= 0
- usr = None
- fmtorder = "SELECT * from market.fsubmitorder('%s','%s','%s',%s,'%s',%s)"
- fmtquote = "SELECT * from market.fsubmitquote('%s','%s','%s',%s,'%s',%s)"
- fmtjsosr = "SELECT jso from market.tmsg where json_extract_path_text(jso,'id')::int=%i and typ='response'"
- fmtjsose = """SELECT json_extract_path_text(jso,'orde','id')::int id,
- sum(json_extract_path_text(jso,'mvt_from','qtt')::bigint) qtt_prov,
- sum(json_extract_path_text(jso,'mvt_to','qtt')::bigint) qtt_requ
- from market.tmsg
- where json_extract_path_text(jso,'orde','id')::int=%i
- and typ='exchange' group by json_extract_path_text(jso,'orde','id')::int """
- '''
- the order that produced the exchange has the qualities expected
- '''
- _notnull,_ko = 0,0
- for row in spamreader:
- i += 1
- user = row[0]
- params = tuple(row[1:])
-
- cursor = inst.exe( fmtquote % params,user)
- idq,err = cursor.fetchone()
- if err != '(0,)':
- raise ValueError('Quote returned an error "%s"' % err)
- utilt.wait_for_true(srvob_conf.dbBO,20,"SELECT market.fstackdone()")
- cursor = inst.exe(fmtjsosr %idq,user)
- res = cursor.fetchone() # result of the quote
- res_quote =simplejson.loads(res[0])
- expected = res_quote['result']['qtt_give'],res_quote['result']['qtt_reci']
- #print res_quote
- #print ''
-
- cursor = inst.exe( fmtorder % params,user)
- ido,err = cursor.fetchone()
- if err != '(0,)':
- raise ValueError('Order returned an error "%s"' % err)
- utilt.wait_for_true(srvob_conf.dbBO,20,"SELECT market.fstackdone()")
- cursor = inst.exe(fmtjsose %ido,user)
- res = cursor.fetchone()
-
- if res is None:
- result = 0,0
- else:
- ido_,qtt_prov_,qtt_reci_ = res
- result = qtt_prov_,qtt_reci_
- _notnull +=1
-
- if result != expected:
- _ko += 1
- print qtt_prov_,qtt_reci_,res_quote
-
- if i %100 == 0:
- if(_ko != 0): _errs = ' - %i errors' %_ko
- else: _errs = ''
- print ('\t%i quote & order - %i quotes returned a result %s' % (i-_ko,_notnull,_errs))
-
- if(_ko == 0):
- prtest.title(' all %i tests passed' % i)
- else:
- prtest.title('%i checked %i tests failed' % (i,_ko))
-
-
- inst.close()
- return titre_test
diff --git a/src/yflowparse.y b/src/yflowparse.y
index 0f692e5..1e63481 100644
--- a/src/yflowparse.y
+++ b/src/yflowparse.y
@@ -1,111 +1,105 @@
%{
/* contrib/yflow/yflowparse.y */
-#define YYPARSE_PARAM resultat /* need this to pass a pointer (void *) to yyparse */
+// #define YYPARSE_PARAM resultat /* need this to pass a pointer (void *) to yyparse */
// #define YYSTYPE char *
#define YYDEBUG 1
#include "postgres.h"
#include "wolf.h"
-/*
- * Bison doesn't allocate anything that needs to live across parser calls,
- * so we can easily have it use palloc instead of malloc. This prevents
- * memory leaks if we error out during parsing. Note this only works with
- * bison >= 2.0. However, in bison 1.875 the default is to use alloca()
- * if possible, so there's not really much problem anyhow, at least if
- * you're building with gcc.
- */
+
#define YYMALLOC palloc
#define YYFREE pfree
extern int yflow_yylex(void);
static char *scanbuf;
static int scanbuflen;
-void yflow_yyerror(const char *message);
-int yflow_yyparse(void *resultat);
+extern int yflow_yyparse(Tflow **resultat);
+extern void yflow_yyerror(Tflow **resultat, const char *message);
%}
/* BISON Declarations */
+%parse-param {Tflow **resultat}
%expect 0
-%name-prefix="yflow_yy"
+// %name-prefix="yflow_yy"
%union {
char * text;
double dval;
int64 in;
}
%token <text> O_PAREN
%token <text> C_PAREN
%token <text> O_BRACKET
%token <text> C_BRACKET
%token <text> COMMA
%token <in> FLOWINT
%token <dval> NUMBER
%start box
/* Grammar follows */
%%
box:
O_BRACKET C_BRACKET {
//empty
}
|
O_BRACKET bid_list C_BRACKET {
//((Tflow * )result)->lastRelRefused = $2;
//yflow_compute((Tflow * )result);
}
;
bid_list:
bid {
;
}
|
bid_list COMMA bid {
Tflow **pf = resultat;
if ((*pf)->dim > FLOW_MAX_DIM) {
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("bad yflow representation"),
errdetail("A yflow cannot have more than %d orders.",FLOW_MAX_DIM)));
YYABORT;
}
}
bid:
O_PAREN FLOWINT COMMA FLOWINT COMMA FLOWINT COMMA FLOWINT COMMA FLOWINT COMMA FLOWINT COMMA FLOWINT COMMA NUMBER C_PAREN {
Tflow **pf = resultat;
Tfl s;
		// type,id,oid,own,qtt_requ,qtt_prov,qtt,proba
s.type = (int32) ($2);
if (!ORDER_TYPE_IS_VALID(s.type)) {
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("bad order representation in yflow"),
				errdetail("An order in yflow cannot have type %d.",s.type)));
YYABORT;
}
s.id = (int32) ($4);
s.oid = (int32) ($6);
s.own = (int32) $8;
s.qtt_requ = $10;
s.qtt_prov = $12;
s.qtt = $14;
s.proba = $16;
*pf = flowm_extends(&s,*pf,false);
}
%%
#include "yflowscan.c"
diff --git a/src/yflowscan.l b/src/yflowscan.l
index b9fa00d..238d72d 100644
--- a/src/yflowscan.l
+++ b/src/yflowscan.l
@@ -1,129 +1,129 @@
%{
/*
* A scanner for EMP-style numeric ranges
* contrib/yflow/yflowscan.l
*/
#include "postgres.h"
/* No reason to constrain amount of data slurped */
#define YY_READ_BUF_SIZE 16777216
/*
* flex emits a yy_fatal_error() function that it calls in response to
* critical errors like malloc failure, file I/O errors, and detection of
* internal inconsistency. That function prints a message and calls exit().
* Mutate it to instead call our handler, which jumps out of the parser.
*/
#undef fprintf
#define fprintf(file, fmt, msg) yflow_flex_fatal(msg)
/* Handles to the buffer that the lexer uses internally */
static YY_BUFFER_STATE scanbufhandle;
static char *scanbuf;
static int scanbuflen;
/* flex 2.5.4 doesn't bother with a decl for this */
int yflow_yylex(void);
void yflow_scanner_init(const char *str);
void yflow_scanner_finish(void);
static int yflow_flex_fatal(const char *msg);
/*
([0-9]+|([0-9]*\.[0-9]+)([eE][-+]?[0-9]+)?)
*/
%}
%option 8bit
%option never-interactive
%option nodefault
%option noinput
%option nounput
%option noyywrap
%option prefix="yflow_yy"
%%
([0-9]*\.[0-9]+)([eE][-+]?[0-9]+)? {
yylval.dval = atof(yflow_yytext);
return NUMBER;
}
([0-9]+) {
yylval.in = atoll(yflow_yytext); return FLOWINT;
}
\[ yylval.text = "("; return O_BRACKET;
\] yylval.text = ")"; return C_BRACKET;
\( yylval.text = "("; return O_PAREN;
\) yylval.text = ")"; return C_PAREN;
\, yylval.text = ")"; return COMMA;
[ \t\n\r\f]+ /* discard spaces */
. return yflow_yytext[0]; /* alert parser of the garbage */
%%
void
-yflow_yyerror(const char *message)
+yflow_yyerror(Tflow **resultat, const char *message)
{
if (*yflow_yytext == YY_END_OF_BUFFER_CHAR)
{
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("bad yflow representation"),
/* translator: %s is typically "syntax error" */
errdetail("%s at end of input", message)));
}
else
{
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("bad yflow representation"),
/* translator: first %s is typically "syntax error" */
errdetail("%s at or near \"%s\"", message, yflow_yytext)));
}
}
/*
* Called before any actual parsing is done
*/
void
yflow_scanner_init(const char *str)
{
Size slen = strlen(str);
/*
* Might be left over after ereport()
*/
if (YY_CURRENT_BUFFER)
yy_delete_buffer(YY_CURRENT_BUFFER);
/*
* Make a scan buffer with special termination needed by flex.
*/
scanbuflen = slen;
scanbuf = palloc(slen + 2);
memcpy(scanbuf, str, slen);
scanbuf[slen] = scanbuf[slen + 1] = YY_END_OF_BUFFER_CHAR;
scanbufhandle = yy_scan_buffer(scanbuf, slen + 2);
BEGIN(INITIAL);
}
/*
* Called after parsing is done to clean up after yflow_scanner_init()
*/
void
yflow_scanner_finish(void)
{
yy_delete_buffer(scanbufhandle);
pfree(scanbuf);
}
static int
yflow_flex_fatal(const char *msg)
{
elog(FATAL, "%s\n",msg);
return 0; /* keep compiler quiet */
}
repo: olivierch/openBarter
commit: 29726ebc3f548d2c992de5c0c169fba9b9376c5b
message: all tests are OK
diff --git a/src/sql/algo.sql b/src/sql/algo.sql
index 27cdad4..3cf69b4 100644
--- a/src/sql/algo.sql
+++ b/src/sql/algo.sql
@@ -1,497 +1,495 @@
--------------------------------------------------------------------------------
/* function fcreate_tmp
It is the central query of openbarter
for an order O fcreate_tmp creates a temporary table _tmp of objects.
-Each object represents a chain of orders - a flows - going to O.
+Each object represents a possible chain of orders - a 'flow' - going to O.
The table has columns
debut the first order of the path
path the path
fin the end of the path (O)
depth the exploration depth
cycle a boolean true when the path contains the new order
The number of paths fetched is limited to MAXPATHFETCHED
Among those objects representing chains of orders,
only those making a potential exchange (draft) are recorded.
*/
--------------------------------------------------------------------------------
/*
CREATE VIEW vorderinsert AS
SELECT id,yorder_get(id,own,nr,qtt_requ,np,qtt_prov,qtt) as ord,np,nr
FROM torder ORDER BY ((qtt_prov::double precision)/(qtt_requ::double precision)) DESC; */
--------------------------------------------------------------------------------
CREATE FUNCTION fcreate_tmp(_ord yorder) RETURNS int AS $$
DECLARE
_MAXPATHFETCHED int := fgetconst('MAXPATHFETCHED');
_MAXCYCLE int := fgetconst('MAXCYCLE');
_cnt int;
BEGIN
/* the statement LIMIT would not avoid deep exploration if the condition
was specified on Z in the search_backward WHERE condition */
-- fails when qua_prov == qua_requ
IF((_ord).qua_prov = (_ord).qua_requ) THEN
RAISE EXCEPTION 'quality provided and required are the same: %',_ord;
END IF;
CREATE TEMPORARY TABLE _tmp ON COMMIT DROP AS (
SELECT yflow_finish(Z.debut,Z.path,Z.fin) as cycle FROM (
WITH RECURSIVE search_backward(debut,path,fin,depth,cycle) AS(
SELECT _ord,yflow_init(_ord),
_ord,1,false
-- FROM torder WHERE (ord).id= _ordid
UNION ALL
SELECT X.ord,yflow_grow_backward(X.ord,Y.debut,Y.path),
Y.fin,Y.depth+1,yflow_contains_oid((X.ord).oid,Y.path)
FROM torder X,search_backward Y
WHERE yflow_match(X.ord,Y.debut) -- (X.ord).qua_prov=(Y.debut).qua_requ
AND ((X.duration IS NULL) OR ((X.created + X.duration) > clock_timestamp()))
AND Y.depth < _MAXCYCLE
AND NOT cycle
AND (X.ord).carre_prov @> (Y.debut).pos_requ -- use if gist(carre_prov)
AND NOT yflow_contains_oid((X.ord).oid,Y.path)
) SELECT debut,path,fin from search_backward
LIMIT _MAXPATHFETCHED
) Z WHERE /* (Z.fin).qua_prov=(Z.debut).qua_requ
AND */ yflow_match(Z.fin,Z.debut) -- it is a cycle
AND yflow_is_draft(yflow_finish(Z.debut,Z.path,Z.fin)) -- and a draft
);
RETURN 0;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
-- order unstacked and inserted into torder
/* if the referenced oid is found,
the order is inserted, and the process is launched
else a movement is created
*/
--------------------------------------------------------------------------------
CREATE TYPE yresflow AS (
mvts int[], -- list of movement ids
qtts int8[], -- value.qtt moved
nats text[], -- value.nat moved
grp int,
owns text[],
usrs text[],
ords yorder[]
);
--------------------------------------------------------------------------------
-CREATE FUNCTION insertorder(_owner dtext,_o yorder,_usr dtext,_created timestamp,_duration interval) RETURNS int AS $$
+CREATE FUNCTION insertorder(_owner dtext,_o yorder,_usr dtext,_created timestamp,_duration interval)
+ RETURNS int AS $$
DECLARE
_fmvtids int[];
- -- _first_mvt int;
- -- _err int;
- --_flows json[]:= ARRAY[]::json[];
_cyclemax yflow;
- -- _mvts int[];
_res int8[];
_MAXMVTPERTRANS int := fgetconst('MAXMVTPERTRANS');
_nbmvts int := 0;
_qtt_give int8 := 0;
_qtt_reci int8 := 0;
_cnt int;
_resflow yresflow;
BEGIN
lock table torder in share update exclusive mode NOWAIT;
-- immediately aborts the order if the lock cannot be acquired
INSERT INTO torder(usr,own,ord,created,updated,duration) VALUES (_usr,_owner,_o,_created,NULL,_duration);
_fmvtids := ARRAY[]::int[];
-- _time_begin := clock_timestamp();
_cnt := fcreate_tmp(_o);
-- RAISE WARNING 'insertbarter A % %',_o,_cnt;
LOOP
SELECT yflow_max(cycle) INTO _cyclemax FROM _tmp WHERE yflow_is_draft(cycle);
IF(NOT yflow_is_draft(_cyclemax)) THEN
EXIT; -- from LOOP
END IF;
-- RAISE WARNING 'insertbarter B %',_cyclemax;
_nbmvts := _nbmvts + yflow_dim(_cyclemax);
IF(_nbmvts > _MAXMVTPERTRANS) THEN
EXIT;
END IF;
_resflow := fexecute_flow(_cyclemax);
_cnt := foncreatecycle(_o,_resflow);
_fmvtids := _fmvtids || _resflow.mvts;
_res := yflow_qtts(_cyclemax);
_qtt_reci := _qtt_reci + _res[1];
_qtt_give := _qtt_give + _res[2];
UPDATE _tmp SET cycle = yflow_reduce(cycle,_cyclemax,false);
DELETE FROM _tmp WHERE NOT yflow_is_draft(cycle);
END LOOP;
-- RAISE WARNING 'insertbarter C % % % % %',_qtt_give,_qtt_reci,_o.qtt_prov,_o.qtt_requ,_fmvtids;
IF ( (_qtt_give != 0) AND ((_o.type & 3) = 1) -- ORDER_LIMIT
AND ((_qtt_give::double precision) /(_qtt_reci::double precision)) >
((_o.qtt_prov::double precision) /(_o.qtt_requ::double precision))
) THEN
- RAISE EXCEPTION 'pb: Omega of the flows obtained is not limited by the order limit' USING ERRCODE='YA003';
+ RAISE EXCEPTION 'pb: Omega of the flows obtained is not limited by the order limit'
+ USING ERRCODE='YA003';
END IF;
-- set the number of movements in this transaction
-- UPDATE tmvt SET nbt= array_length(_fmvtids,1) WHERE id = ANY (_fmvtids);
RETURN 0;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
CREATE FUNCTION foncreatecycle(_orig yorder,_r yresflow) RETURNS int AS $$
DECLARE
_usr_src text;
_cnt int;
_i int;
_nbcommit int;
_iprev int;
_inext int;
_o yorder;
BEGIN
_nbcommit := array_length(_r.ords,1);
_i := _nbcommit;
_iprev := _i -1;
FOR _inext IN 1.._nbcommit LOOP
_usr_src := _r.usrs[_i];
_o := _r.ords[_i];
INSERT INTO tmsg (typ,jso,usr,created) VALUES (
'exchange',
row_to_json(ROW(
_r.mvts[_i],
_r.grp,
ROW( -- order
_o.id,
_o.qtt_prov,
_o.qtt_requ,
CASE WHEN _o.type&3 =1 THEN 'limit' ELSE 'best' END
)::yj_order,
ROW( -- stock
_o.oid,
_o.qtt,
_r.nats[_i],
_r.owns[_i],
_r.usrs[_i]
)::yj_stock,
ROW( -- mvt_from
_r.mvts[_i],
_r.qtts[_i],
_r.nats[_i],
_r.owns[_inext],
_r.usrs[_inext]
)::yj_stock,
ROW( --mvt_to
_r.mvts[_iprev],
_r.qtts[_iprev],
_r.nats[_iprev],
_r.owns[_iprev],
_r.usrs[_iprev]
)::yj_stock,
_orig.id -- orig
)::yj_mvt),
_usr_src,statement_timestamp());
_iprev := _i;
_i := _inext;
END LOOP;
RETURN 0;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
/* fexecute_flow used for a barter
from a flow representing a draft, for each order:
inserts a new movement
updates the order book
*/
--------------------------------------------------------------------------------
CREATE FUNCTION fexecute_flow(_flw yflow) RETURNS yresflow AS $$
DECLARE
_i int;
_next_i int;
_prev_i int;
_nbcommit int;
_first_mvt int;
_exhausted boolean;
_mvtexhausted boolean;
_cntExhausted int;
_mvt_id int;
_cnt int;
_resflow yresflow;
--_mvts int[];
--_oids int[];
_qtt int8;
_flowr int8;
_qttmin int8;
_qttmax int8;
_o yorder;
_usr text;
_usrnext text;
-- _own text;
-- _ownnext text;
-- _idownnext int;
-- _pidnext int;
_or torder%rowtype;
_mat int8[][];
_om_exp double precision;
_om_rea double precision;
BEGIN
_nbcommit := yflow_dim(_flw);
-- sanity check
IF( _nbcommit <2 ) THEN
RAISE EXCEPTION 'the flow should be draft:_nbcommit = %',_nbcommit
USING ERRCODE='YA003';
END IF;
_first_mvt := NULL;
_exhausted := false;
-- _resx.nbc := _nbcommit;
_resflow.mvts := ARRAY[]::int[];
_resflow.qtts := ARRAY[]::int8[];
_resflow.nats := ARRAY[]::text[];
_resflow.owns := ARRAY[]::text[];
_resflow.usrs := ARRAY[]::text[];
_resflow.ords := ARRAY[]::yorder[];
_mat := yflow_to_matrix(_flw);
_i := _nbcommit;
_prev_i := _i - 1;
FOR _next_i IN 1 .. _nbcommit LOOP
------------------------------------------------------------------------
_o.id := _mat[_i][1];
_o.own := _mat[_i][2];
_o.oid := _mat[_i][3];
_o.qtt := _mat[_i][6];
_flowr := _mat[_i][7];
-- _idownnext := _mat[_next_i][2];
-- _pidnext := _mat[_next_i][3];
-- sanity check
SELECT count(*),min((ord).qtt),max((ord).qtt) INTO _cnt,_qttmin,_qttmax
FROM torder WHERE (ord).oid = _o.oid;
IF(_cnt = 0) THEN
RAISE EXCEPTION 'the stock % expected does not exist',_o.oid USING ERRCODE='YU002';
END IF;
IF( _qttmin != _qttmax ) THEN
RAISE EXCEPTION 'the value of stock % is not the same value for all orders',_o.oid USING ERRCODE='YU002';
END IF;
_cntExhausted := 0;
_mvtexhausted := false;
IF( _qttmin < _flowr ) THEN
RAISE EXCEPTION 'the stock % is smaller than the flow (% < %)',_o.oid,_qttmin,_flowr USING ERRCODE='YU002';
ELSIF (_qttmin = _flowr) THEN
_cntExhausted := _cnt;
_exhausted := true;
_mvtexhausted := true;
END IF;
-- update all stocks of the order book
UPDATE torder SET ord.qtt = (ord).qtt - _flowr ,updated = statement_timestamp()
WHERE (ord).oid = _o.oid;
GET DIAGNOSTICS _cnt = ROW_COUNT;
IF(_cnt = 0) THEN
RAISE EXCEPTION 'no orders with the stock % exist',_o.oid USING ERRCODE='YU002';
END IF;
SELECT * INTO _or FROM torder WHERE (ord).id = _o.id LIMIT 1; -- child order
-- RAISE WARNING 'ici %',_or.ord;
_om_exp := (((_or.ord).qtt_prov)::double precision) / (((_or.ord).qtt_requ)::double precision);
_om_rea := ((_flowr)::double precision) / ((_mat[_prev_i][7])::double precision);
/*
SELECT name INTO STRICT _ownnext FROM towner WHERE id=_idownnext;
SELECT name INTO STRICT _own FROM towner WHERE id=_o.own;
SELECT usr INTO STRICT _usrnext FROM torder WHERE (ord).id=_pidnext;
INSERT INTO tmvt (nbc,nbt,grp,
xid,usr_src,usr_dst,xoid,own_src,own_dst,qtt,nat,ack,
exhausted,order_created,created,om_exp,om_rea)
VALUES(_nbcommit,1,_first_mvt,
_o.id,_or.usr,_usrnext,_o.oid,_own,_ownnext,_flowr,(_or.ord).qua_prov,_cycleack,
_mvtexhausted,_or.created,statement_timestamp(),_om_exp,_om_rea)
RETURNING id INTO _mvt_id;
*/
SELECT nextval('tmvt_id_seq') INTO _mvt_id;
IF(_first_mvt IS NULL) THEN
_first_mvt := _mvt_id;
_resflow.grp := _mvt_id;
-- _resx.first_mvt := _mvt_id;
-- UPDATE tmvt SET grp = _first_mvt WHERE id = _first_mvt;
END IF;
_resflow.mvts := array_append(_resflow.mvts,_mvt_id);
_resflow.qtts := array_append(_resflow.qtts,_flowr);
_resflow.nats := array_append(_resflow.nats,(_or.ord).qua_prov);
_resflow.owns := array_append(_resflow.owns,_or.own::text);
_resflow.usrs := array_append(_resflow.usrs,_or.usr::text);
_resflow.ords := array_append(_resflow.ords,_or.ord);
_prev_i := _i;
_i := _next_i;
------------------------------------------------------------------------
END LOOP;
IF( NOT _exhausted ) THEN
-- some order should be exhausted
RAISE EXCEPTION 'the cycle should exhaust some order'
USING ERRCODE='YA003';
END IF;
RETURN _resflow;
END;
$$ LANGUAGE PLPGSQL;
CREATE TYPE yr_quote AS (
qtt_reci int8,
qtt_give int8
);
--------------------------------------------------------------------------------
-- quote execution at the output of the stack
--------------------------------------------------------------------------------
CREATE FUNCTION fproducequote(_ord yorder,_isquote boolean,_isnoqttlimit boolean,_islimit boolean,_isignoreomega boolean)
/*
_isquote := true;
it can be a quote or a prequote
_isnoqttlimit := false;
when true the quantity provided is not limited by the stock available
_islimit:= (_t.jso->'type')='limit';
type of the quoted order
_isignoreomega := -- (_t.type & 8) = 8
*/
RETURNS json AS $$
DECLARE
_cnt int;
_cyclemax yflow;
_cycle yflow;
_res int8[];
_firstloop boolean := true;
_freezeOmega boolean;
_mid int;
_nbmvts int;
_wid int;
_MAXMVTPERTRANS int := fgetconst('MAXMVTPERTRANS');
_barter text;
_paths text;
_qtt_reci int8 := 0;
_qtt_give int8 := 0;
_qtt_prov int8 := 0;
_qtt_requ int8 := 0;
_qtt int8 := 0;
_resjso json;
BEGIN
_cnt := fcreate_tmp(_ord);
_nbmvts := 0;
_paths := '';
LOOP
SELECT yflow_max(cycle) INTO _cyclemax FROM _tmp WHERE yflow_is_draft(cycle);
IF(NOT yflow_is_draft(_cyclemax)) THEN
EXIT; -- from LOOP
END IF;
_nbmvts := _nbmvts + yflow_dim(_cyclemax);
IF(_nbmvts > _MAXMVTPERTRANS) THEN
EXIT;
END IF;
_res := yflow_qtts(_cyclemax);
_qtt_reci := _qtt_reci + _res[1];
_qtt_give := _qtt_give + _res[2];
IF(_isquote) THEN
IF(_firstloop) THEN
_qtt_requ := _res[3];
_qtt_prov := _res[4];
END IF;
IF(_isnoqttlimit) THEN
_qtt := _qtt + _res[5];
ELSE
IF(_firstloop) THEN
_qtt := _res[5];
END IF;
END IF;
END IF;
-- for a PREQUOTE they remain 0
_freezeOmega := _firstloop AND _isignoreomega AND _isquote;
/* yflow_reduce:
for all orders except for node with NOQTTLIMIT:
qtt = qtt -flowr
for the last order, if is IGNOREOMEGA:
- omega is set:
_cycle[last].qtt_requ,_cycle[last].qtt_prov
:= _cyclemax[last].qtt_requ,_cyclemax[last].qtt_prov
- if _freezeOmega the IGNOREOMEGA is reset
*/
UPDATE _tmp SET cycle = yflow_reduce(cycle,_cyclemax,_freezeOmega);
DELETE FROM _tmp WHERE NOT yflow_is_draft(cycle);
_firstloop := false;
END LOOP;
IF ( (_qtt_requ != 0)
AND _islimit AND _isquote
AND ((_qtt_give::double precision) /(_qtt_reci::double precision)) >
((_qtt_prov::double precision) /(_qtt_requ::double precision))
) THEN
RAISE EXCEPTION 'pq: Omega of the flows obtained is not limited by the order limit' USING ERRCODE='YA003';
END IF;
_resjso := row_to_json(ROW(_qtt_reci,_qtt_give)::yr_quote);
RETURN _resjso;
END;
$$ LANGUAGE PLPGSQL;
diff --git a/src/sql/currencies.sql b/src/sql/currencies.sql
index eb9dc46..2b7f5ca 100644
--- a/src/sql/currencies.sql
+++ b/src/sql/currencies.sql
@@ -1,317 +1,317 @@
CREATE TABLE townauth (
name text
);
-SELECT _grant_read('townauth');
+GRANT SELECT ON townauth TO role_com;
COPY townauth (name) FROM stdin;
google.com
\.
CREATE TABLE tcurrency (
name text
);
-SELECT _grant_read('tcurrency');
+GRANT SELECT ON tcurrency TO role_com;
COPY tcurrency (name) FROM stdin;
ADP
AED
AFA
AFN
ALK
ALL
AMD
ANG
AOA
AOK
AON
AOR
ARA
ARP
ARS
ARY
ATS
AUD
AWG
AYM
AZM
AZN
BAD
BAM
BBD
BDT
BEC
BEF
BEL
BGJ
BGK
BGL
BGN
BHD
BIF
BMD
BND
BOB
BOP
BOV
BRB
BRC
BRE
BRL
BRN
BRR
BSD
BTN
BUK
BWP
BYB
BYR
BZD
CAD
CDF
CHC
CHE
CHF
CHW
CLF
CLP
CNX
CNY
COP
COU
CRC
CSD
CSJ
CSK
CUC
CUP
CVE
CYP
CZK
DDM
DEM
DJF
DKK
DOP
DZD
ECS
ECV
EEK
EGP
EQE
ERN
ESA
ESB
ESP
ETB
EUR
FIM
FJD
FKP
FRF
GBP
GEK
GEL
GHC
GHP
GHS
GIP
GMD
GNE
GNF
GNS
GQE
GRD
GTQ
GWE
GWP
GYD
HKD
HNL
HRD
HRK
HTG
HUF
IDR
IEP
ILP
ILR
ILS
INR
IQD
IRR
ISJ
ISK
ITL
JMD
JOD
JPY
KES
KGS
KHR
KMF
KPW
KRW
KWD
KYD
KZT
LAJ
LAK
LBP
LKR
LRD
LSL
LSM
LTL
LTT
LUC
LUF
LUL
LVL
LVR
LYD
MAD
MAF
MDL
MGA
MGF
MKD
MLF
MMK
MNT
MOP
MRO
MTL
MTP
MUR
MVQ
MVR
MWK
MXN
MXP
MXV
MYR
MZE
MZM
MZN
NAD
NGN
NIC
NIO
NLG
NOK
NPR
NZD
OMR
PAB
PEH
PEI
PEN
PES
PGK
PHP
PKR
PLN
PLZ
PTE
PYG
QAR
RHD
ROK
ROL
RON
RSD
RUB
RUR
RWF
SAR
SBD
SCR
SDD
SDG
SDP
SEK
SGD
SHP
SIT
SKK
SLL
SOS
SRD
SRG
SSP
STD
SUR
SVC
SYP
SZL
THB
TJR
TJS
TMM
TMT
TND
TOP
TPE
TRL
TRY
TTD
TWD
TZS
UAH
UAK
UGS
UGW
UGX
USD
USN
USS
UYI
UYN
UYP
UYU
UZS
VEB
VEF
VNC
VND
VUV
WST
XAF
XAG
XAU
XBA
XBB
XBC
XBD
XCD
XDR
XEU
XFO
XFU
XOF
XPD
XPF
XPT
XRE
XSU
XTS
XUA
XXX
YDD
YER
YUD
YUM
YUN
ZAL
ZAR
ZMK
ZRN
ZRZ
ZWC
ZWD
ZWL
ZWN
ZWR
\.
diff --git a/src/sql/events.sql b/src/sql/events.sql
deleted file mode 100644
index ffc2f3d..0000000
--- a/src/sql/events.sql
+++ /dev/null
@@ -1,252 +0,0 @@
-/*------------------------------------------------------------------------------
-external interface
-
-this file is useless
-********************************************************************************
-
-------------------------------------------------------------------------------*/
-
-/*------------------------------------------------------------------------------*/
-CREATE FUNCTION fgivetojson(_o torder,_mprev tmvt,_m tmvt,_mnext tmvt)
- RETURNS json AS $$
-DECLARE
- _value json;
-BEGIN
- _value := row_to_json(ROW(
- _m.id,
- _m.grp,
- ROW( -- order
- (_o.ord).id,
- ROW(
- (_o.ord).qtt,
- (_o.ord).qua_prov
- )::yj_value,
- _m.own_src,
- _o.usr
- )::yj_barter,
- ROW( -- given
- _m.id,
- _m.qtt,
- _m.nat,
- _mnext.own_src,
- _mnext.usr_src
- )::yj_gvalue,
- ROW( --received
- _mprev.id,
- _mprev.qtt,
- _mprev.nat,
- _mprev.own_src,
- _mprev.usr_src
- )::yj_rvalue
- )::yj_mvt);
-
- RETURN _value;
-END;
-$$ LANGUAGE PLPGSQL;
-
-/*------------------------------------------------------------------------------
-deletes barter and sub-barter and output a message for this barter
-returns the number of barter+sub-barter deleted from the book
-------------------------------------------------------------------------------*/
-
-CREATE FUNCTION fincancelbarter(_o torder,_final ebarterfinal) RETURNS int AS $$
-DECLARE
- _cnt int;
- _yo yorder%rowtype;
- _te json;
- _owner text;
- _DEBUG boolean := fgetconst('DEBUG')=1;
-BEGIN
- _yo := _o.ord;
-
- IF(_DEBUG) THEN
- SELECT count(*) INTO STRICT _cnt FROM tmvt WHERE xoid = _yo.oid AND cack is NULL;
- if(_cnt != 0) THEN -- some mvt is pending for this barter
- RAISE EXCEPTION 'fincancelbarter called while some mvt are pending';
- END IF;
- END IF;
-
- IF(_final = 'exhausted') AND (_yo.qtt != 0) THEN
- RAISE EXCEPTION 'barter % is exhausted while qtt % remain',_yo.oid,_yo.qtt;
- END IF;
-
- -- delete barter and sub-barter from the book
- DELETE FROM torder o WHERE (o.ord).oid = _yo.oid;
- GET DIAGNOSTICS _cnt = ROW_COUNT;
-
- IF (_cnt != 0) THEN -- notify deletion of the barter
-
- SELECT name INTO STRICT _owner FROM towner WHERE id = _yo.own;
- _te := row_to_json(ROW(_yo.oid,ROW(_yo.qtt,_yo.qua_prov)::yj_value,_owner,_o.usr)::yj_barter);
-
- INSERT INTO tmsg (obj,sta,oid,json,usr,created)
- VALUES ('barter',(_final::text)::eobjstatus,_yo.oid,_te,_o.usr,statement_timestamp());
- END IF;
- -- the value is moved from torder->tmsg
-
- RETURN _cnt;
-
-END;
-$$ LANGUAGE PLPGSQL SECURITY DEFINER;
-
-/*------------------------------------------------------------------------------
-Delete the barter.
-_o is the barter (parent order )
-_final 'exhausted','outdated','cancelled'
-------------------------------------------------------------------------------*/
-
-CREATE FUNCTION fondeletebarter(_o torder,_final ebarterfinal) RETURNS int AS $$
-DECLARE
- _res eresack;
- _cnt int;
- _gid int;
-BEGIN
-
- FOR _gid IN SELECT grp FROM tmvt WHERE xoid = (_o.ord).oid AND (cack is NULL) GROUP BY grp LOOP
-
- -- only cycles pending are considered, not those already accepted or refused
- _res := frefusecycle(_gid,(_o.ord).id);
-
- END LOOP;
-
- -- the quantity of the parent order has changed, _o is updated
- SELECT * INTO STRICT _o FROM torder WHERE (ord).id = (_o.ord).oid;
-
- _cnt := fincancelbarter(_o,_final);
- RETURN 1;
-
-END;
-$$ LANGUAGE PLPGSQL SECURITY DEFINER;
-
-
-/*------------------------------------------------------------------------------
--- used only here
-------------------------------------------------------------------------------*/
-CREATE FUNCTION fgenmsg(_gid int,_status eobjstatus) RETURNS int AS $$
-DECLARE
- _iprev int;
- _inext int;
- _i int;
- _cntgrp int := 0;
- _m tmvt;
- _mprev tmvt;
- _mnext tmvt;
- _am tmvt[] := ARRAY[]::tmvt[];
- _o torder;
-BEGIN
- FOR _m IN SELECT * FROM tmvt WHERE grp=_gid ORDER BY id ASC LOOP
- _am := _am || _m;
- _cntgrp := _cntgrp + 1;
- END LOOP;
-
- _i := _cntgrp;
- _iprev := _i -1;
- FOR _inext IN 1 .. _cntgrp LOOP
-
- _mprev := _am[_iprev];
- _m := _am[_i];
- _mnext := _am[_inext];
-
- SELECT * INTO STRICT _o FROM torder WHERE (ord).id = _m.xoid; -- the stock
-
- INSERT INTO tmsg (obj,sta,oid,json,usr,created)
- VALUES ('movement',_status,_m.id,fgivetojson(_o,_mprev,_m,_mnext),_m.usr_src,statement_timestamp());
- _iprev := _i;
- _i := _inext;
- END LOOP;
-
- RETURN _cntgrp;
-END;
-$$ LANGUAGE PLPGSQL;
-
-/*------------------------------------------------------------------------------
-the cycle is accepted
-for each movement
- > Mm-executed for each movement
- if a barter is exhausted and no other cycle pending:
- cancelbarter(o,exhausted)
- elif a barter is outdated:
- cancelbarter(o,outdated)
-
---the value is moved from tmvt->tmsg: it is an output value
-------------------------------------------------------------------------------*/
-CREATE OR REPLACE FUNCTION fexecutecycle(_gid int) RETURNS eresack AS $$
-DECLARE
- _cnt int;
- _cntgrp int := 0;
- _m tmvt%rowtype;
- _yo yorder%rowtype;
- _o torder%rowtype;
- _oids int[];
- _oid int;
- _te text;
- _cntoutdated int;
- _value json;
- -- _approved boolean := true;
- _status eobjstatus;
- _now timestamp;
-BEGIN
- _oids := ARRAY[]::int[];
- FOR _oid IN SELECT xoid FROM tmvt WHERE grp=_gid LOOP
- IF(NOT _oids @> ARRAY[_oid]) THEN
- _oids := _oids || _oid;
- END IF;
- END LOOP;
-
- _now := clock_timestamp();
-
- -- _oids is the set of parent ids
- SELECT count(*) into strict _cntoutdated from torder WHERE (ord).oid= ANY(_oids)
- AND (NOT(_o.duration IS NULL) ) AND ((_o.created + _o.duration) <= _now);
-
- IF (_cntoutdated != 0) THEN
- _status := 'outdated';
- ELSE
- _status := 'approved';
- END IF;
-
- _cntgrp := fgenmsg(_gid,_status);
-
- IF (_status = 'approved') THEN
- --
- UPDATE tmvt SET cack = true WHERE grp = _gid;
-
- FOR _o IN SELECT * FROM torder WHERE (ord).oid= ANY(_oids) AND (ord).qtt = 0 LOOP
- _cntgrp := fincancelbarter(_o,'exhausted');
- END LOOP;
-
- RETURN 'cycle_approved';
-
- ELSE -- status = 'outdated'
- -- some parent order are obsolete
- UPDATE tmvt SET cack = false WHERE grp = _gid;
-
- FOR _o IN SELECT * FROM torder WHERE (ord).oid= ANY(_oids)
- AND (NOT(_o.duration IS NULL) ) AND ((_o.created + _o.duration) <= _now)
- LOOP
- _cntgrp := fincancelbarter(_o,'outdated');
- END LOOP;
-
- RETURN 'cycle_outdated';
-
- END IF;
-END;
-$$ LANGUAGE PLPGSQL SECURITY DEFINER;
-
-/*------------------------------------------------------------------------------
-When a cycle is created, the following messages are generated:
- movement,pending with qtt
- barter,changed with QTT-qtt
-------------------------------------------------------------------------------*/
-
-
-/* END
- _ownsrcs text[];
- _fmvtids := ARRAY[]::text[];
- IF(NOT _ownsrcs @> ARRAY[_m.ownsrc]) THEN
- _ownsrcs := _ownsrcs || _m.ownsrc;
- END IF;
-*/
-
-
-
diff --git a/src/sql/model.sql b/src/sql/model.sql
index fc1e046..994e601 100644
--- a/src/sql/model.sql
+++ b/src/sql/model.sql
@@ -1,43 +1,46 @@
\set ECHO none
/* roles.sql must be executed previously */
-
+\set ON_ERROR_STOP on
+BEGIN;
drop schema if exists market cascade;
create schema market;
set search_path to market;
-- DROP EXTENSION IF EXISTS flowf;
-- CREATE EXTENSION flowf WITH VERSION '0.1';
\i sql/roles.sql
GRANT USAGE ON SCHEMA market TO role_com;
\i sql/util.sql
\i sql/tables.sql
-\i sql/prims.sql
\i sql/pushpull.sql
+\i sql/prims.sql
\i sql/currencies.sql
\i sql/algo.sql
\i sql/openclose.sql
create view vord as (SELECT
(ord).id,
(ord).oid,
own,
(ord).qtt_requ,
(ord).qua_requ,
CASE WHEN (ord).type=1 THEN 'limit' ELSE 'best' END typ,
(ord).qtt_prov,
(ord).qua_prov,
(ord).qtt,
(ord).own as own_id,
usr
-- duration
FROM market.torder order by (ord).id asc);
select fsetvar('INSTALLED',1);
+COMMIT;
select * from fversion();
\echo model installed.
+
diff --git a/src/sql/openclose.sql b/src/sql/openclose.sql
index e05ee8a..e4281ac 100644
--- a/src/sql/openclose.sql
+++ b/src/sql/openclose.sql
@@ -1,272 +1,329 @@
/*
-manage transitions between daily market phases.
+--------------------------------------------------------------------------------
+-- BGW_OPENCLOSE
+--------------------------------------------------------------------------------
+
+openclose() is processed by postgres in background using a background worker called
+BGW_OPENCLOSE (see src/worker_ob.c).
+
+It performs the transitions between daily market phases. The sequence of
+operations does not depend on time: it always runs in the same order, with some
+operations simply waiting until the end of the current gphase.
+
*/
--------------------------------------------------------------------------------
INSERT INTO tvar(name,value) VALUES
- ('OC_CURRENT_PHASE',101), -- phase of the model when settled
- ('OC_CURRENT_OPENED',0); -- sub-state of the opened phase
+ ('OC_CURRENT_PHASE',101); -- phase of the market when its model is settled
CREATE TABLE tmsgdaysbefore(LIKE tmsg);
-SELECT _grant_read('tmsgdaysbefore');
+GRANT SELECT ON tmsgdaysbefore TO role_com;
-- index and unique constraint are not cloned
--------------------------------------------------------------------------------
CREATE FUNCTION openclose() RETURNS int AS $$
/*
- * This code is executed by the bg_worker 1 doing following:
- while(true)
- status := market.openclose()
- if (status >=0):
- do_wait := status
- elif status == -100:
- VACUUM FULL
- do_wait := 0
-
- wait(do_wait) milliseconds
-
-
+The structure of the code is:
+ _phase := OC_CURRENT_PHASE
+ CASE _phase
+ ....
+ WHEN X THEN
+ dowait := operation_of_phase(X)
+ OC_CURRENT_PHASE := _next_phase
+ ....
+ return dowait
+
+This code is executed by the BGW_OPENCLOSE doing following:
+
+ while(true)
+ dowait := market.openclose()
+ if (dowait >=0):
+ wait for dowait milliseconds
+ elif dowait == -100:
+ VACUUM FULL
+ else:
+ error
*/
DECLARE
- _phase int;
- _dowait int := 0; -- not DOWAIT
- _cnt int;
- _rp yerrorprim;
- _stock_id int;
- _owner text;
- _done boolean;
+
+ _phase int;
+ _dowait int := 0;
+ _rp yerrorprim;
+ _stock_id int;
+ _owner text;
+ _done boolean;
+
BEGIN
- set search_path to market;
- _phase := fgetvar('OC_CURRENT_PHASE');
+
+ _phase := fgetvar('OC_CURRENT_PHASE');
+
CASE _phase
+ ------------------------------------------------------------------------
+ -- GPHASE 0 BEGIN OF DAY --
+ ------------------------------------------------------------------------
+
+ WHEN 0 THEN -- creating the timetable of the day
+
+ PERFORM foc_create_timesum();
+
+ -- tmsg is archived to tmsgdaysbefore
+
+ WITH t AS (DELETE FROM tmsg RETURNING * )
+ INSERT INTO tmsgdaysbefore SELECT * FROM t ;
+ TRUNCATE tmsg;
+ PERFORM setval('tmsg_id_seq',1,false);
+
+ PERFORM foc_next(1,'tmsg archived');
+
+ WHEN 1 THEN -- waiting for opening
+
+ IF(foc_in_gphase(_phase)) THEN
+ _dowait := 60000; -- 1 minute
+ ELSE
+ PERFORM foc_next(101,'Start opening sequence');
+ END IF;
+
+ ------------------------------------------------------------------------
+ -- GPHASE 1 MARKET OPENED --
+ ------------------------------------------------------------------------
+
+ WHEN 101 THEN -- client access opening.
+
+ REVOKE role_co_closed FROM role_client;
+ GRANT role_co TO role_client;
+
+ PERFORM foc_next(102,'Client access opened');
+
+ WHEN 102 THEN -- market is opened to client access, waiting for closing.
+
+ IF(foc_in_gphase(_phase)) THEN
+ PERFORM foc_clean_outdated_orders();
+ _dowait := 60000; -- 1 minute
+ ELSE
+ PERFORM foc_next(120,'Start closing');
+ END IF;
+
+ WHEN 120 THEN -- market is closing.
+
+ REVOKE role_co FROM role_client;
+ GRANT role_co_closed TO role_client;
+
+ PERFORM foc_next(121,'Client access revoked');
+
+ WHEN 121 THEN -- waiting until the stack is empty
+
+ -- checks whether BGW_CONSUMESTACK purged the stack
+ _done := fstackdone();
+
+ IF(not _done) THEN
+ _dowait := 60000;
+ -- waits one minute before testing again
+ ELSE
+ -- the stack is purged
+ PERFORM foc_next(200,'Last primitives performed');
+ END IF;
+
+ ------------------------------------------------------------------------
+ -- GPHASE 2 - MARKET CLOSED --
+ ------------------------------------------------------------------------
+
+ WHEN 200 THEN -- removing orders of the order book
+
+ SELECT (o.ord).id,w.name INTO _stock_id,_owner FROM torder o
+ INNER JOIN towner w ON w.id=(o.ord).own
+ WHERE (o.ord).oid = (o.ord).id LIMIT 1;
+
+ IF(FOUND) THEN
+ _rp := fsubmitrmorder(_owner,_stock_id);
+ IF(_rp.error.code != 0 ) THEN
+ RAISE EXCEPTION 'Error while removing orders %',_rp;
+ END IF;
+ -- repeat until the order book is empty
+ ELSE
+ PERFORM foc_next(201,'Order book is emptied');
+ END IF;
+
+ WHEN 201 THEN -- waiting until the stack is empty
+
+ -- checks whether BGW_CONSUMESTACK purged the stack
+ _done := fstackdone();
+
+ IF(not _done) THEN
+ _dowait := 60000;
+ -- waits one minute before testing again
+ ELSE
+ -- the stack is purged
+ PERFORM foc_next(202,'rm primitives are processed');
+ END IF;
+
+ WHEN 202 THEN -- truncating tables except tmsg
+
+ truncate torder;
+ truncate tstack;
+ PERFORM setval('tstack_id_seq',1,false);
+
+ PERFORM setval('tmvt_id_seq',1,false);
+
+ truncate towner;
+ PERFORM setval('towner_id_seq',1,false);
+
+ PERFORM foc_next(203,'tables torder,tstack,tmvt,towner are truncated');
+
+ WHEN 203 THEN -- asking for VACUUM FULL execution
+
+ _dowait := -100;
+ PERFORM foc_next(204,'VACUUM FULL is launched');
+
+ WHEN 204 THEN -- waiting till the end of the day
+
+ IF(foc_in_gphase(_phase)) THEN
+ _dowait := 60000; -- 1 minute
+ -- waits before testing again
+ ELSE
+ PERFORM foc_next(0,'End of the day');
+ END IF;
+
+ ELSE
+
+ RAISE EXCEPTION 'Should not reach this point with phase=%',_phase;
- /* PHASE 0XX BEGIN OF THE DAY
- */
- WHEN 0 THEN
- /* creates the timetable */
- PERFORM foc_create_timesum();
-
- /* pruge tmsg - single transaction */
- WITH t AS (DELETE FROM tmsg RETURNING * )
- INSERT INTO tmsgdaysbefore SELECT * FROM t ;
- TRUNCATE tmsg;
- PERFORM setval('tmsg_id_seq',1,false);
-
- PERFORM foc_next(1,'tmsg archived');
-
- WHEN 1 THEN
-
- IF(foc_in_phase(_phase)) THEN
- _dowait := 60000; -- 1 minute
- ELSE
- PERFORM foc_next(101,'Start opening sequence');
- END IF;
-
- /* PHASE 1XX -- MARKET OPENED
- */
- WHEN 101 THEN
- /* open client access */
-
- REVOKE role_co_closed FROM role_client;
- GRANT role_co TO role_client;
-
- PERFORM foc_next(102,'Client access opened');
-
- WHEN 102 THEN
- /* market is opened to client access:
- While in phase,
- OC_CURRENT_OPENED <- OC_CURRENT_OPENED % 5
- if 0:
- delete outdated order and sub-orders from the book
- do_wait = 1 minute
- else,
- phase <- 120
- */
-
- IF(foc_in_phase(_phase)) THEN
- UPDATE tvar SET value=((value+1)/5) WHERE name='OC_CURRENT_OPENED'
- RETURNING value INTO _cnt ;
- _dowait := 60000; -- 1 minute
- IF(_cnt =0) THEN
- -- every 5 calls(5 minutes),
- -- delete outdated order and sub-orders from the book
- DELETE FROM torder o USING torder po
- WHERE (o.ord).oid = (po.ord).id
- -- outdated parent orders
- AND (po.ord).oid = (po.ord).oid
- AND NOT (po.duration IS NULL)
- AND (po.created + po.duration) <= clock_timestamp();
- END IF;
- ELSE
- PERFORM foc_next(120,'Start closing');
- END IF;
-
- WHEN 120 THEN
- /* market closing
-
- revoke client access
- */
- REVOKE role_co FROM role_client;
- GRANT role_co_closed TO role_client;
-
- PERFORM foc_next(121,'Client access revoked');
-
- WHEN 121 THEN
- /* wait until the stack is compleatly consumed */
-
- -- waiting worker2 stack purge
- _done := fstackdone();
- -- SELECT count(*) INTO _cnt FROM tstack;
-
- IF(not _done) THEN
- _dowait := 60000; -- 1 minute
- -- wait and test again
- ELSE
- PERFORM foc_next(200,'Last primitives performed');
- END IF;
-
- /* PHASE 2XX MARKET CLOSED */
- WHEN 200 THEN
- /* remove orders of the order book */
-
- SELECT (o.ord).id,w.name INTO _stock_id,_owner FROM torder o
- INNER JOIN towner w ON w.id=(o.ord).own
- WHERE (o.ord).oid = (o.ord).id LIMIT 1;
-
- IF(FOUND) THEN
- _rp := fsubmitrmorder(_owner,_stock_id);
- -- repeate again until order_book is empty
- ELSE
- PERFORM foc_next(201,'Order book is emptied');
- END IF;
-
- WHEN 201 THEN
- /* wait until stack is empty */
-
- -- waiting worker2 stack purge
- _done := fstackdone();
- -- SELECT count(*) INTO _cnt FROM tstack;
-
- IF(not _done) THEN
- _dowait := 60000; -- 1 minute
- -- wait and test again
- ELSE
- PERFORM foc_next(202,'rm primitives are processed');
- END IF;
-
- WHEN 202 THEN
- /* tables truncated except tmsg */
-
- truncate torder;
- truncate tstack;
- PERFORM setval('tstack_id_seq',1,false);
-
- PERFORM setval('tmvt_id_seq',1,false);
-
- truncate towner;
- PERFORM setval('towner_id_seq',1,false);
-
- PERFORM foc_next(203,'tables torder,tsack,tmvt,towner are truncated');
-
- WHEN 203 THEN
-
- _dowait := -100; -- VACUUM FULL; executed by pg_worker 1 openclose
- PERFORM foc_next(204,'VACUUM FULL is lauched');
-
- WHEN 204 THEN
- /* wait till the end of the day */
-
- IF(foc_in_phase(_phase)) THEN
- _dowait := 60000; -- 1 minute
- -- wait and test again
- ELSE
- PERFORM foc_next(0,'End of the day');
- END IF;
-
- ELSE
- RAISE EXCEPTION 'Should not reach this point';
END CASE;
- RETURN _dowait; -- DOWAIT or VACUUM FULL
+
+ -- RAISE LOG 'Phase=% _dowait=%',_phase,_dowait;
+
+ RETURN _dowait;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION openclose() TO role_bo;
-------------------------------------------------------------------------------
CREATE FUNCTION foc_next(_phase int,_msg text) RETURNS void AS $$
BEGIN
- PERFORM fsetvar('OC_CURRENT_PHASE',_phase);
- RAISE LOG 'MARKET PHASE %: %',_phase,_msg;
+ PERFORM fsetvar('OC_CURRENT_PHASE',_phase);
+ RAISE LOG 'MARKET PHASE %: %',_phase,_msg;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION foc_next(int,text) TO role_bo;
--------------------------------------------------------------------------------
-/* access by clients can be disabled/enabled with a single command:
- REVOKE role_co FROM role_client
- GRANT role_co TO role_client
-
- same thing for role batch with role_bo:
- REVOKE role_bo FROM user_bo
- GRANT role_bo TO user_bo
-*/
+INSERT INTO tvar(name,value) VALUES
+ ('OC_CURRENT_OPENED',0); -- sub-state of the opened phase
+CREATE FUNCTION foc_clean_outdated_orders() RETURNS void AS $$
+/* every 5th call, cleans outdated orders */
+DECLARE
+ _cnt int;
+BEGIN
+ UPDATE tvar SET value=((value+1) % 5 ) WHERE name='OC_CURRENT_OPENED'
+ RETURNING value INTO _cnt ;
+ IF(_cnt !=0) THEN
+ RETURN;
+ END IF;
+
+ -- delete outdated orders from the order book, together with their sub-orders
+ DELETE FROM torder o USING torder po
+ WHERE (o.ord).oid = (po.ord).id -- having a parent order that
+ AND NOT (po.duration IS NULL) -- have a timeout defined
+ AND (po.created + po.duration) <= clock_timestamp(); -- and is outdated
+ RETURN;
+END;
+$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
+GRANT EXECUTE ON FUNCTION foc_clean_outdated_orders() TO role_bo;
+--------------------------------------------------------------------------------
CREATE VIEW vmsg3 AS WITH t AS (SELECT * from tmsg WHERE usr = session_user
- UNION ALL SELECT * from tmsgdaysbefore WHERE usr = session_user
- ) SELECT created,id,typ,jso
+ UNION ALL SELECT * from tmsgdaysbefore WHERE usr = session_user
+ ) SELECT created,id,typ,jso
from t order by created ASC,id ASC;
-SELECT _grant_read('vmsg3');
+GRANT SELECT ON vmsg3 TO role_com;
+/*
+--------------------------------------------------------------------------------
+-- TIME DEPENDENT FUNCTION foc_in_gphase(_phase int)
+--------------------------------------------------------------------------------
-/*------------------------------------------------------------------------------
- TIME DEPENDANT FUNCTION
-------------------------------------------------------------------------------*/
-/* the day is shared in NB_PHASE, with id between [0,NB_PHASE-1]
-delay are defined for [0,NB_PHASE-2],the last waits the end of the day
+The day is divided into 3 gphases. The table tdelay defines the durations of these gphases.
+When the model is installed, and again each day, the table timesum is built from tdelay to set
+the planning of the market. foc_in_gphase(_phase) returns true when the current time
+falls within the planned window of the given _phase.
-OC_DELAY_i are number of seconds for a phase for i in [0,NB_PHASE-2]
*/
+--------------------------------------------------------------------------------
+-- delays of the phases
+--------------------------------------------------------------------------------
+create table tdelay(
+ id serial,
+ delay interval
+);
+GRANT SELECT ON tdelay TO role_com;
+/*
+NB_GPHASE = 3, with id between [0,NB_GPHASE-1]
+delays are defined for [0,NB_GPHASE-2]; the last gphase waits for the end of the day
-INSERT INTO tconst (name,value) VALUES
- ('OC_DELAY_0',30*60), -- stops at 0h 30'
- ('OC_DELAY_1',23*60*60) -- stops at 23h 30'
- -- sum of delays < 24*60*60
- ;
+OC_DELAY_i is the duration of a gphase for i in [0,NB_GPHASE-2]
+*/
+INSERT INTO tdelay (delay) VALUES
+    ('30 minutes'::interval), -- ends at 0h 30'
+ ('23 hours'::interval) -- stops at 23h 30'
+ -- sum of delays < 24 hours
+ ;
+--------------------------------------------------------------------------------
CREATE FUNCTION foc_create_timesum() RETURNS void AS $$
+/* creates the table timesum from tdelay, where each record
+defines, for a gphase, the delay between the beginning of the day
+and the end of that gphase.
+
+ builds timesum with rows (k,ends) such as:
+ ends[0] = 0
+ ends[k] = sum(tdelay[i] for i in [0,k])
+*/
DECLARE
- _cnt int;
+ _inter interval;
+ _cnt int;
BEGIN
- DROP TABLE IF EXISTS timesum;
- SELECT count(*) INTO STRICT _cnt FROM tconst WHERE name like 'OC_DELAY_%';
- CREATE TABLE timesum (id,ends) AS
- SELECT t.id+1,sum(d.value) OVER w FROM generate_series(0,_cnt-1) t(id)
- LEFT JOIN tconst d ON (('OC_DELAY_' ||(t.id)::text) = d.name)
- WINDOW w AS (order by t.id );
- INSERT INTO timesum VALUES (0,0);
+
+ DROP TABLE IF EXISTS timesum;
+
+ SELECT count(*) INTO STRICT _cnt FROM tdelay;
+
+ CREATE TABLE timesum (id,ends) AS
+ SELECT t.id,sum(d.delay) OVER w FROM generate_series(1,_cnt) t(id)
+ LEFT JOIN tdelay d ON (t.id=d.id) WINDOW w AS (order by t.id );
+
+ INSERT INTO timesum VALUES (0,'0'::interval);
+
+ SELECT max(ends) INTO _inter FROM timesum;
+ IF( _inter >= '24 hours'::interval) THEN
+ RAISE EXCEPTION 'sum(delay) = % > 24 hours',_inter;
+ END IF;
END;
-$$ LANGUAGE PLPGSQL;
+$$ LANGUAGE PLPGSQL set search_path = market,public;
GRANT EXECUTE ON FUNCTION foc_create_timesum() TO role_bo;
-select foc_create_timesum();
+select market.foc_create_timesum();
--------------------------------------------------------------------------------
-CREATE FUNCTION foc_in_phase(_phase int) RETURNS boolean AS $$
--- returns TRUE when in phase, else returns the suffix of the archive
+CREATE FUNCTION foc_in_gphase(_phase int) RETURNS boolean AS $$
+/* returns TRUE when the current time is between the limits of the gphase;
+   the gphase is defined as _phase/100 */
DECLARE
- _actual_gphase int := _phase /100;
- _planned_gphase int;
+ _actual_gphase int := _phase /100;
+ _planned_gphase int;
+ _time interval;
BEGIN
- -- the number of seconds since the beginning of the day
- -- in the interval (timesum[id],timesum[id+1])
- SELECT max(id) INTO _planned_gphase FROM
- timesum where ends < (EXTRACT(HOUR FROM now()) *60*60)
- + (EXTRACT(MINUTE FROM now()) *60)
- + EXTRACT(SECOND FROM now()) ;
-
- IF (_planned_gphase = _actual_gphase) THEN
- RETURN true;
- ELSE
- RETURN false;
- END IF;
+ -- the time since the beginning of the day
+ _time := now() - date_trunc('day',now());
+
+
+ SELECT max(id) INTO _planned_gphase FROM timesum where ends < _time ;
+ -- _planned_gphase is such as
+ -- _time is in the interval (timesum[ _planned_gphase ],timesum[ _planned_gphase+1 ])
+
+ IF (_planned_gphase = _actual_gphase) THEN
+ RETURN true;
+ ELSE
+ RETURN false;
+ END IF;
END;
-$$ LANGUAGE PLPGSQL;
+$$ LANGUAGE PLPGSQL set search_path = market,public;
+GRANT EXECUTE ON FUNCTION foc_in_gphase(int) TO role_bo;
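The timesum/foc_in_gphase pair above can be modeled outside SQL. The following is a minimal Python sketch (an analogy, not the PL/pgSQL itself) of how the cumulative delays partition the day and how a phase id maps onto its gphase:

```python
from datetime import timedelta

def build_timesum(delays):
    # Mirrors foc_create_timesum(): ends[0] = 0 and
    # ends[k] = sum(delays[i] for i in [0, k-1]).
    ends = [timedelta(0)]
    for d in delays:
        ends.append(ends[-1] + d)
    if ends[-1] >= timedelta(hours=24):
        raise ValueError("sum(delay) >= 24 hours")
    return ends

def in_gphase(phase, time_of_day, timesum):
    # Mirrors foc_in_gphase(): the gphase is phase/100, and the planned
    # gphase is the largest k with timesum[k] < time_of_day.
    # At exactly midnight no row qualifies (NULL in the SQL version).
    actual = phase // 100
    planned = max((k for k, end in enumerate(timesum) if end < time_of_day),
                  default=None)
    return planned == actual
```

With the delays inserted above (30 minutes, then 23 hours), timesum ends up as [0h, 0h30, 23h30]: times before 0h30 fall in gphase 0, times between 0h30 and 23h30 in gphase 1, and times after 23h30 in gphase 2.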
diff --git a/src/sql/prims.sql b/src/sql/prims.sql
index f8a2d77..650350a 100644
--- a/src/sql/prims.sql
+++ b/src/sql/prims.sql
@@ -1,491 +1,525 @@
-
---------------------------------------------------------------------------------
--- check params
--- code in [-9,0]
+/*
+--------------------------------------------------------------------------------
+-- Market primitives
--------------------------------------------------------------------------------
-CREATE FUNCTION
- fcheckquaown(_r yj_error,_own dtext,_qua_requ dtext,_pos_requ point,_qua_prov dtext,_pos_prov point,_dist float8)
- RETURNS yj_error AS $$
-DECLARE
- _r yj_error;
- _i int;
-BEGIN
- IF(NOT ((yflow_checktxt(_own)&1)=1)) THEN
- _r.reason := '_own is empty string';
- _r.code := -1;
- RETURN _r;
- END IF;
- IF(_qua_prov IS NULL) THEN
- IF(NOT ((yflow_checktxt(_qua_requ)&1)=1)) THEN
- _r.reason := '_qua_requ is empty string';
- _r.code := -2;
- RETURN _r;
- END IF;
- ELSE
- IF(_qua_prov = _qua_requ) THEN
- _r.reason := 'qua_prov == qua_requ';
- _r.code := -3;
- return _r;
- END IF;
- _i = yflow_checkquaownpos(_own,_qua_requ,_pos_requ,_qua_prov,_pos_prov,_dist);
- IF (_i != 0) THEN
- _r.reason := 'rejected by yflow_checkquaownpos';
- _r.code := _i; -- -9<=i<=-5
- return _r;
- END IF;
- END IF;
+A function fsubmit<primitive> is used by users to push the primitive on the stack.
-
- RETURN _r;
-END;
-$$ LANGUAGE PLPGSQL;
---------------------------------------------------------------------------------
-CREATE FUNCTION fcheckquaprovusr(_r yj_error,_qua_prov dtext,_usr dtext) RETURNS yj_error AS $$
-DECLARE
- _QUAPROVUSR boolean := fgetconst('QUAPROVUSR')=1;
- _p int;
- _suffix text;
-BEGIN
- IF (NOT _QUAPROVUSR) THEN
- RETURN _r;
- END IF;
- _p := position('@' IN _qua_prov);
- IF (_p = 0) THEN
- -- without prefix, it should be a currency
- SELECT count(*) INTO _p FROM tcurrency WHERE _qua_prov = name;
- IF (_p = 1) THEN
- RETURN _r;
- ELSE
- _r.code := -12;
- _r.reason := 'the quality provided that is not a currency must be prefixed';
- RETURN _r;
- END IF;
- END IF;
+A function fprocess<primitive> describes the checks made before pushing it on the stack
+and its processing when popped from the stack.
- -- with prefix
- IF (char_length(substring(_qua_prov FROM 1 FOR (_p-1))) <1) THEN
- _r.code := -13;
- _r.reason := 'the prefix of the quality provided cannot be empty';
- RETURN _r;
- END IF;
-
- _suffix := substring(_qua_prov FROM (_p+1));
- _suffix := replace(_suffix,'.','_'); -- change . to _
-
- -- it must be the username
- IF ( _suffix!= _usr) THEN
- _r.code := -14;
- _r.reason := 'the prefix of the quality provided must by the user name';
- RETURN _r;
- END IF;
+A type yp_<primitive> is defined to JSON-encode the primitive parameters in the stack.
+*/
- RETURN _r;
-END;
-$$ LANGUAGE PLPGSQL;
-
---------------------------------------------------------------------------------
-CREATE FUNCTION fchecknameowner(_r yj_error,_name dtext,_usr dtext) RETURNS yj_error AS $$
-DECLARE
- _p int;
- _OWNUSR boolean := fgetconst('OWNUSR')=1;
- _suffix text;
-BEGIN
- IF (NOT _OWNUSR) THEN
- RETURN _r;
- END IF;
- _p := position('@' IN _name);
- IF (char_length(substring(_name FROM 1 FOR (_p-1))) <1) THEN
- _r.code := -20;
- _r.reason := 'the owner name has an empty prefix';
- RETURN _r;
- END IF;
- _suffix := substring(_name FROM (_p+1));
- SELECT count(*) INTO _p FROM townauth WHERE _suffix = name;
- IF (_p = 1) THEN
- RETURN _r; --well known auth provider
- END IF;
- -- change . to _
- _suffix := replace(_suffix,'.','_');
- IF ( _suffix= _usr) THEN
- RETURN _r; -- owners name suffixed by users name
- END IF;
- _r.code := -21;
- _r.reason := 'if the owner name is not prefixed by a well know provider, it must be prefixed by user name';
- RETURN _r;
-END;
-$$ LANGUAGE PLPGSQL;
+--------------------------------------------------------------------------------
+CREATE TYPE eprimphase AS ENUM ('submit', 'execute');
--------------------------------------------------------------------------------
-- order primitive
--------------------------------------------------------------------------------
CREATE TYPE yp_order AS (
kind eprimitivetype,
type eordertype,
owner dtext,
qua_requ dtext,
qtt_requ dqtt,
qua_prov dtext,
qtt_prov dqtt
);
-CREATE FUNCTION fsubmitorder(_type eordertype,_owner dtext,_qua_requ dtext,_qtt_requ dqtt,_qua_prov dtext,_qtt_prov dqtt)
+
+--------------------------------------------------------------------------------
+CREATE FUNCTION fsubmitorder(_type eordertype,_owner dtext,
+ _qua_requ dtext,_qtt_requ dqtt,_qua_prov dtext,_qtt_prov dqtt)
RETURNS yerrorprim AS $$
DECLARE
_res yj_primitive;
_prim yp_order;
BEGIN
+
_prim := ROW('order',_type,_owner,_qua_requ,_qtt_requ,_qua_prov,_qtt_prov)::yp_order;
_res := fprocessorder('submit',NULL,_prim);
RETURN ROW(_res.id,_res.error)::yerrorprim;
+
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION fsubmitorder(eordertype,dtext,dtext,dqtt,dtext,dqtt) TO role_co;
--------------------------------------------------------------------------------
CREATE FUNCTION fprocessorder(_phase eprimphase, _t tstack,_s yp_order)
RETURNS yj_primitive AS $$
DECLARE
_r yj_error;
_res yj_primitive;
_wid int;
_ir int;
_o yorder;
BEGIN
_r := ROW(0,NULL)::yj_error; -- code,reason
CASE
WHEN (_phase = 'submit') THEN
_r := fchecknameowner(_r,_s.owner,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_r := fcheckquaprovusr(_r,_s.qua_prov,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_res := fpushprimitive(_r,'order',row_to_json(_s));
RETURN _res;
WHEN (_phase = 'execute') THEN
/*
IF(
(_s.duration IS NOT NULL) AND (_s.submitted + _s.duration) < clock_timestamp()
) THEN
_r.reason := 'barter order - the order is too old';
_r.code := -19;
END IF; */
_wid := fgetowner(_s.owner);
_o := ROW(CASE WHEN _s.type='limit' THEN 1 ELSE 2 END,
_t.id,_wid,_t.id,
_s.qtt_requ,_s.qua_requ,_s.qtt_prov,_s.qua_prov,_s.qtt_prov,
box('(0,0)'::point,'(0,0)'::point),box('(0,0)'::point,'(0,0)'::point),
0.0,earth_get_square('(0,0)'::point,0.0)
)::yorder;
_ir := insertorder(_s.owner,_o,_t.usr,_t.submitted,'1 day');
RETURN ROW(_t.id,_r,_t.jso,
row_to_json(ROW(_o.id,_o.qtt,_o.qua_prov,_s.owner,_t.usr)::yj_stock),
NULL
)::yj_primitive;
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
-
-
+
END;
$$ LANGUAGE PLPGSQL;
+
--------------------------------------------------------------------------------
-- child order primitive
--------------------------------------------------------------------------------
CREATE TYPE yp_childorder AS (
kind eprimitivetype,
owner dtext,
qua_requ dtext,
qtt_requ dqtt,
stock_id int
);
+
+--------------------------------------------------------------------------------
CREATE FUNCTION fsubmitchildorder(_owner dtext,_qua_requ dtext,_qtt_requ dqtt,_stock_id int)
RETURNS yerrorprim AS $$
DECLARE
_res yj_primitive;
_prim yp_childorder;
BEGIN
_prim := ROW('childorder',_owner,_qua_requ,_qtt_requ,_stock_id)::yp_childorder;
_res := fprocesschildorder('submit',NULL,_prim);
RETURN ROW(_res.id,_res.error)::yerrorprim;
+
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION fsubmitchildorder(dtext,dtext,dqtt,int) TO role_co;
--------------------------------------------------------------------------------
CREATE FUNCTION fprocesschildorder(_phase eprimphase, _t tstack,_s yp_childorder)
RETURNS yj_primitive AS $$
DECLARE
_r yj_error;
_res yj_primitive;
_wid int;
_otype int;
_ir int;
_o yorder;
_op torder%rowtype;
_sp tstack%rowtype;
BEGIN
_r := ROW(0,NULL)::yj_error; -- code,reason
CASE
WHEN (_phase = 'submit') THEN
_r := fchecknameowner(_r,_s.owner,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_r := fcheckquaprovusr(_r,_s.qua_requ,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_wid := fgetowner(_s.owner);
SELECT * INTO _op FROM torder WHERE
(ord).id = _s.stock_id AND usr = session_user AND (ord).own = _wid;
- IF (NOT FOUND) THEN
- /* could be found in the stack */
+ IF (NOT FOUND) THEN -- not found in the order book
+ -- but could be found in the stack
SELECT * INTO _sp FROM tstack WHERE
id = _s.stock_id AND usr = session_user AND _s.owner = jso->'owner' AND kind='order';
IF (NOT FOUND) THEN
_r.code := -200;
_r.reason := 'the order was not found for this user and owner';
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
END IF;
_res := fpushprimitive(_r,'childorder',row_to_json(_s));
RETURN _res;
WHEN (_phase = 'execute') THEN
_wid := fgetowner(_s.owner);
SELECT * INTO _op FROM torder WHERE
(ord).id = _s.stock_id AND usr = session_user AND (ord).own = _wid;
IF (NOT FOUND) THEN
_r.code := -201;
_r.reason := 'the stock is not in the order book';
RETURN ROW(_t.id,_r,_t.jso,NULL,NULL)::yj_primitive;
END IF;
+
_o := _op.ord;
_o.id := _id;
_o.qua_requ := _s.qua_requ;
_o.qtt_requ := _s.qtt_requ;
_ir := insertorder(_s.owner,_o,_s.usr,_s.submitted,_op.duration);
RETURN ROW(_t.id,_r,_t.jso,NULL,NULL)::yj_primitive;
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
-- rm primitive
--------------------------------------------------------------------------------
CREATE TYPE yp_rmorder AS (
kind eprimitivetype,
owner dtext,
stock_id int
);
+
+--------------------------------------------------------------------------------
CREATE FUNCTION fsubmitrmorder(_owner dtext,_stock_id int)
RETURNS yerrorprim AS $$
DECLARE
_res yj_primitive;
_prim yp_rmorder;
BEGIN
_prim := ROW('rmorder',_owner,_stock_id)::yp_rmorder;
_res := fprocessrmorder('submit',NULL,_prim);
RETURN ROW(_res.id,_res.error)::yerrorprim;
+
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION fsubmitrmorder(dtext,int) TO role_co,role_bo;
--------------------------------------------------------------------------------
CREATE FUNCTION fprocessrmorder(_phase eprimphase, _t tstack,_s yp_rmorder)
RETURNS yj_primitive AS $$
DECLARE
_r yj_error;
_res yj_primitive;
_wid int;
_otype int;
_ir int;
_o yorder;
_opy yorder; -- parent_order
_op torder%rowtype;
_te text;
_pusr text;
_sp tstack%rowtype;
BEGIN
_r := ROW(0,NULL)::yj_error; -- code,reason
CASE
WHEN (_phase = 'submit') THEN
_r := fchecknameowner(_r,_s.owner,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_wid := fgetowner(_s.owner);
SELECT * INTO _op FROM torder WHERE
(ord).id = _s.stock_id AND usr = session_user AND (ord).own = _wid AND (ord).id=(ord).oid;
- IF (NOT FOUND) THEN
- /* could be found in the stack */
+ IF (NOT FOUND) THEN -- not found in the order book
+ -- but could be found in the stack
SELECT * INTO _sp FROM tstack WHERE
id = _s.stock_id AND usr = session_user AND _s.owner = jso->'owner' AND kind='order' AND (ord).id=(ord).oid;
IF (NOT FOUND) THEN
_r.code := -300;
_r.reason := 'the order was not found for this user and owner';
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
END IF;
_res := fpushprimitive(_r,'rmorder',row_to_json(_s));
RETURN _res;
WHEN (_phase = 'execute') THEN
_wid := fgetowner(_s.owner);
SELECT * INTO _op FROM torder WHERE
(ord).id = _s.stock_id AND usr = session_user AND (ord).own = _wid AND (ord).id=(ord).oid;
IF (NOT FOUND) THEN
_r.code := -301;
_r.reason := 'the stock is not in the order book';
RETURN ROW(_t.id,_r,_t.jso,NULL,NULL)::yj_primitive;
END IF;
-- delete order and sub-orders from the book
DELETE FROM torder o WHERE (o.ord).oid = _yo.oid;
-- id,error,primitive,result
RETURN ROW(_t.id,_r,_t.jso,
ROW((_op.ord).id,(_op.ord).qtt,(_op.ord).qua_prov,_s.owner,_op.usr)::yj_stock,
ROW((_op.ord).qua_prov,(_op.ord).qtt)::yj_value
)::yj_primitive;
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
-- quote
--------------------------------------------------------------------------------
-CREATE FUNCTION fsubmitquote(_type eordertype,_owner dtext,_qua_requ dtext,_qtt_requ dqtt,_qua_prov dtext,_qtt_prov dqtt)
+CREATE FUNCTION fsubmitquote(_type eordertype,_owner dtext,
+ _qua_requ dtext,_qtt_requ dqtt,_qua_prov dtext,_qtt_prov dqtt)
RETURNS yerrorprim AS $$
DECLARE
_res yj_primitive;
_prim yp_order;
BEGIN
_prim := ROW('quote',_type,_owner,_qua_requ,_qtt_requ,_qua_prov,_qtt_prov)::yp_order;
_res := fprocessquote('submit',NULL,_prim);
RETURN ROW(_res.id,_res.error)::yerrorprim;
+
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION fsubmitquote(eordertype,dtext,dtext,dqtt,dtext,dqtt) TO role_co;
--------------------------------------------------------------------------------
CREATE FUNCTION fprocessquote(_phase eprimphase, _t tstack,_s yp_order)
RETURNS yj_primitive AS $$
DECLARE
_r yj_error;
_res yj_primitive;
_wid int;
_ir int;
_o yorder;
_type int;
_json_res json;
BEGIN
_r := ROW(0,NULL)::yj_error; -- code,reason
CASE
WHEN (_phase = 'submit') THEN
_r := fchecknameowner(_r,_s.owner,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_r := fcheckquaprovusr(_r,_s.qua_prov,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_res := fpushprimitive(_r,'quote',row_to_json(_s));
RETURN _res;
WHEN (_phase = 'execute') THEN
_wid := fgetowner(_s.owner);
_type := CASE WHEN _s.type='limit' THEN 1 ELSE 2 END;
_o := ROW( _type,
_t.id,_wid,_t.id,
_s.qtt_requ,_s.qua_requ,_s.qtt_prov,_s.qua_prov,_s.qtt_prov,
box('(0,0)'::point,'(0,0)'::point),box('(0,0)'::point,'(0,0)'::point),
0.0,earth_get_square('(0,0)'::point,0.0)
)::yorder;
-/*fproducequote(_ord yorder,_isquote boolean,_isnoqttlimit boolean,_islimit boolean,_isignoreomega boolean)
-*/
_json_res := fproducequote(_o,true,false,_s.type='limit',false);
RETURN ROW(_t.id,_r,_t.jso,_json_res,NULL)::yj_primitive;
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
END;
$$ LANGUAGE PLPGSQL;
-
+/*
--------------------------------------------------------------------------------
-- primitive processing
--------------------------------------------------------------------------------
-CREATE FUNCTION fprocessprimitive(_phase eprimphase, _s tstack)
+gathers all primitive executions into a single function
+*/
+--------------------------------------------------------------------------------
+CREATE FUNCTION fprocessprimitive( _s tstack)
RETURNS yj_primitive AS $$
DECLARE
_res yj_primitive;
_kind eprimitivetype;
BEGIN
_kind := _s.kind;
CASE
WHEN (_kind = 'order' ) THEN
- _res := fprocessorder(_phase,_s,json_populate_record(NULL::yp_order,_s.jso));
+ _res := fprocessorder('execute',_s,json_populate_record(NULL::yp_order,_s.jso));
WHEN (_kind = 'childorder' ) THEN
- _res := fprocesschildorder(_phase,_s,json_populate_record(NULL::yp_childorder,_s.jso));
+ _res := fprocesschildorder('execute',_s,json_populate_record(NULL::yp_childorder,_s.jso));
WHEN (_kind = 'rmorder' ) THEN
- _res := fprocessrmorder(_phase,_s,json_populate_record(NULL::yp_rmorder,_s.jso));
+ _res := fprocessrmorder('execute',_s,json_populate_record(NULL::yp_rmorder,_s.jso));
WHEN (_kind = 'quote' ) THEN
- _res := fprocessquote(_phase,_s,json_populate_record(NULL::yp_order,_s.jso));
+ _res := fprocessquote('execute',_s,json_populate_record(NULL::yp_order,_s.jso));
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
RETURN _res;
END;
$$ LANGUAGE PLPGSQL;
+--------------------------------------------------------------------------------
+-- check params
+-- code in [-9,0]
+--------------------------------------------------------------------------------
+CREATE FUNCTION
+ fcheckquaown(_r yj_error,_own dtext,
+ _qua_requ dtext,_pos_requ point,_qua_prov dtext,_pos_prov point,_dist float8)
+ RETURNS yj_error AS $$
+DECLARE
+ _r yj_error;
+ _i int;
+BEGIN
+ IF(NOT ((yflow_checktxt(_own)&1)=1)) THEN
+ _r.reason := '_own is empty string';
+ _r.code := -1;
+ RETURN _r;
+ END IF;
+
+ IF(_qua_prov IS NULL) THEN
+ IF(NOT ((yflow_checktxt(_qua_requ)&1)=1)) THEN
+ _r.reason := '_qua_requ is empty string';
+ _r.code := -2;
+ RETURN _r;
+ END IF;
+ ELSE
+ IF(_qua_prov = _qua_requ) THEN
+ _r.reason := 'qua_prov == qua_requ';
+ _r.code := -3;
+ return _r;
+ END IF;
+ _i = yflow_checkquaownpos(_own,_qua_requ,_pos_requ,_qua_prov,_pos_prov,_dist);
+ IF (_i != 0) THEN
+ _r.reason := 'rejected by yflow_checkquaownpos';
+ _r.code := _i; -- -9<=i<=-5
+ return _r;
+ END IF;
+ END IF;
+
+
+ RETURN _r;
+END;
+$$ LANGUAGE PLPGSQL;
+--------------------------------------------------------------------------------
+CREATE FUNCTION fcheckquaprovusr(_r yj_error,_qua_prov dtext,_usr dtext) RETURNS yj_error AS $$
+DECLARE
+ _QUAPROVUSR boolean := fgetconst('QUAPROVUSR')=1;
+ _p int;
+ _suffix text;
+BEGIN
+ IF (NOT _QUAPROVUSR) THEN
+ RETURN _r;
+ END IF;
+ _p := position('@' IN _qua_prov);
+ IF (_p = 0) THEN
+ -- without prefix, it should be a currency
+ SELECT count(*) INTO _p FROM tcurrency WHERE _qua_prov = name;
+ IF (_p = 1) THEN
+ RETURN _r;
+ ELSE
+ _r.code := -12;
+ _r.reason := 'the quality provided that is not a currency must be prefixed';
+ RETURN _r;
+ END IF;
+ END IF;
+
+ -- with prefix
+ IF (char_length(substring(_qua_prov FROM 1 FOR (_p-1))) <1) THEN
+ _r.code := -13;
+ _r.reason := 'the prefix of the quality provided cannot be empty';
+ RETURN _r;
+ END IF;
+
+ _suffix := substring(_qua_prov FROM (_p+1));
+ _suffix := replace(_suffix,'.','_'); -- change . to _
+
+ -- it must be the username
+ IF ( _suffix!= _usr) THEN
+ _r.code := -14;
+ _r.reason := 'the prefix of the quality provided must by the user name';
+ RETURN _r;
+ END IF;
+
+ RETURN _r;
+END;
+$$ LANGUAGE PLPGSQL;
+
+--------------------------------------------------------------------------------
+CREATE FUNCTION fchecknameowner(_r yj_error,_name dtext,_usr dtext) RETURNS yj_error AS $$
+DECLARE
+ _p int;
+ _OWNUSR boolean := fgetconst('OWNUSR')=1;
+ _suffix text;
+BEGIN
+ IF (NOT _OWNUSR) THEN
+ RETURN _r;
+ END IF;
+ _p := position('@' IN _name);
+ IF (char_length(substring(_name FROM 1 FOR (_p-1))) <1) THEN
+ _r.code := -20;
+ _r.reason := 'the owner name has an empty prefix';
+ RETURN _r;
+ END IF;
+ _suffix := substring(_name FROM (_p+1));
+ SELECT count(*) INTO _p FROM townauth WHERE _suffix = name;
+ IF (_p = 1) THEN
+ RETURN _r; --well known auth provider
+ END IF;
+ -- change . to _
+ _suffix := replace(_suffix,'.','_');
+ IF ( _suffix= _usr) THEN
+ RETURN _r; -- owners name suffixed by users name
+ END IF;
+ _r.code := -21;
+ _r.reason := 'if the owner name is not prefixed by a well know provider, it must be prefixed by user name';
+ RETURN _r;
+END;
+$$ LANGUAGE PLPGSQL;
+
+
+
+
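The new fprocessprimitive above is a plain dispatch on the kind column. A hypothetical Python sketch of the same routing (the handler bodies are illustrative stand-ins, not part of the SQL):

```python
import json

# Illustrative handlers standing in for fprocess<primitive>('execute', ...).
def exec_order(jso):      return {"kind": "order", "jso": jso}
def exec_childorder(jso): return {"kind": "childorder", "jso": jso}
def exec_rmorder(jso):    return {"kind": "rmorder", "jso": jso}
def exec_quote(jso):      return {"kind": "quote", "jso": jso}

DISPATCH = {
    "order": exec_order,
    "childorder": exec_childorder,
    "rmorder": exec_rmorder,
    "quote": exec_quote,
}

def process_primitive(kind, jso_text):
    # Decode the JSON payload stored on the stack and route by kind,
    # raising on unknown kinds like the CASE ... ELSE branch does.
    try:
        handler = DISPATCH[kind]
    except KeyError:
        raise RuntimeError("Should not reach this point")
    return handler(json.loads(jso_text))
```

The benefit of the single-dispatch shape is the same in both languages: the worker that pops the stack needs only one entry point, whatever the primitive type.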
diff --git a/src/sql/pushpull.sql b/src/sql/pushpull.sql
index 82974ff..ca91aff 100644
--- a/src/sql/pushpull.sql
+++ b/src/sql/pushpull.sql
@@ -1,114 +1,194 @@
+/*
+--------------------------------------------------------------------------------
+-- stack primitive management
+--------------------------------------------------------------------------------
+
+The stack is a FIFO (first in first out) used to submit primitives and execute
+them in the order of submission.
+
+*/
+
+create table tstack (
+ id serial UNIQUE not NULL,
+ usr dtext,
+ kind eprimitivetype,
+ jso json,
+ submitted timestamp not NULL,
+ PRIMARY KEY (id)
+);
+
+comment on table tstack is 'Records the stack of primitives';
+comment on column tstack.id is 'id of this primitive. For an order, it is also the id of the order';
+comment on column tstack.usr is 'user submitting the primitive';
+comment on column tstack.kind is 'type of primitive';
+comment on column tstack.jso is 'representation of the primitive';
+comment on column tstack.submitted is 'timestamp when the primitive was successfully submitted';
+
+alter sequence tstack_id_seq owned by tstack.id;
+
+GRANT SELECT ON tstack TO role_com;
+GRANT SELECT ON tstack_id_seq TO role_com;
+
+
INSERT INTO tvar(name,value) VALUES
- ('STACK_TOP',0),
- ('STACK_EXECUTED',0); -- last primitive executed
+ ('STACK_TOP',0), -- last primitive submitted
+ ('STACK_EXECUTED',0); -- last primitive executed
+
+/*
--------------------------------------------------------------------------------
+fstackdone returns true when the stack is empty
+*/
CREATE FUNCTION fstackdone()
RETURNS boolean AS $$
DECLARE
_top int;
_exe int;
BEGIN
SELECT value INTO _top FROM tvar WHERE name = 'STACK_TOP';
SELECT value INTO _exe FROM tvar WHERE name = 'STACK_EXECUTED';
RETURN (_top = _exe);
END;
$$ LANGUAGE PLPGSQL set search_path to market;
+/*
--------------------------------------------------------------------------------
+fpushprimitive is used to submit a primitive, recording its type, its parameters
+and the name of the user that submits it.
+*/
CREATE FUNCTION fpushprimitive(_r yj_error,_kind eprimitivetype,_jso json)
RETURNS yj_primitive AS $$
DECLARE
_tid int;
_ir int;
BEGIN
IF (_r.code!=0) THEN
RAISE EXCEPTION 'Primitive cannot be pushed due to error %: %',_r.code,_r.reason;
END IF;
- -- id,usr,kind,jso,submitted
+
INSERT INTO tstack(usr,kind,jso,submitted)
- VALUES (session_user,_kind,_jso,statement_timestamp())
+ VALUES (session_user,_kind,_jso,statement_timestamp())
RETURNING id into _tid;
UPDATE tvar SET value=_tid WHERE name = 'STACK_TOP';
RETURN ROW(_tid,_r,_jso,NULL,NULL)::yj_primitive;
END;
$$ LANGUAGE PLPGSQL;
+/*
--------------------------------------------------------------------------------
-
+-- consumestack()
--------------------------------------------------------------------------------
+
+consumestack() is processed by postgres in background using a background_worker called
+BGW_CONSUMESTACK (see src/worker_ob.c).
+
+It consumes the stack of primitives, executing each primitive in the order of submission.
+Each primitive is wrapped in a single transaction by the background_worker.
+
+*/
CREATE FUNCTION consumestack() RETURNS int AS $$
/*
- * This code is executed by the bg_orker doing following:
+ * This code is executed by BGW_CONSUMESTACK, doing the following:
while(true)
- dowait := market.worker2()
+ dowait := market.consumestack()
if (dowait):
- wait(dowait) -- milliseconds
+ waits for dowait milliseconds
*/
DECLARE
_s tstack%rowtype;
_res yj_primitive;
_cnt int;
_txt text;
_detail text;
_ctx text;
BEGIN
DELETE FROM tstack
WHERE id IN (SELECT id FROM tstack ORDER BY id ASC LIMIT 1)
RETURNING * INTO _s;
- IF(NOT FOUND) THEN
- RETURN 20; -- OB_DOWAIT 20 milliseconds
+ IF(NOT FOUND) THEN -- if the stack is empty
+ RETURN 20; -- waits for 20 milliseconds
END IF;
- _res := fprocessprimitive('execute',_s);
+ -- else, process it
+ _res := fprocessprimitive(_s);
+ -- and records the result in tmsg
INSERT INTO tmsg (usr,typ,jso,created)
VALUES (_s.usr,'response',row_to_json(_res),statement_timestamp());
UPDATE tvar SET value=_s.id WHERE name = 'STACK_EXECUTED';
RETURN 0;
EXCEPTION WHEN OTHERS THEN
GET STACKED DIAGNOSTICS
_txt = MESSAGE_TEXT,
_detail = PG_EXCEPTION_DETAIL,
_ctx = PG_EXCEPTION_CONTEXT;
RAISE WARNING 'market.consumestack() failed:''%'' ''%'' ''%''',_txt,_detail,_ctx;
+ RAISE WARNING 'for fprocessprimitive(%)',_s;
+
RETURN 0;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market;
GRANT EXECUTE ON FUNCTION consumestack() TO role_bo;
+/*
+--------------------------------------------------------------------------------
+-- message acknowledgement
+--------------------------------------------------------------------------------
+The execution result of a primitive submitted by a user is stored in tmsg or tmsgdaysbefore.
+ackmsg(_id,_date) is called by this user to acknowledge this message.
+*/
CREATE TABLE tmsgack (LIKE tmsg);
-SELECT _grant_read('tmsgack');
+GRANT SELECT ON tmsgack TO role_com;
--------------------------------------------------------------------------------
CREATE FUNCTION ackmsg(_id int,_date date) RETURNS int AS $$
+/*
+If the message is found in tmsg or tmsgdaysbefore,
+it is archived and 1 is returned. Otherwise, 0 is returned.
+*/
DECLARE
_cnt int;
BEGIN
WITH t AS (
- DELETE FROM tmsg WHERE id = _id and (created::date) = _date AND usr=session_user
+ DELETE FROM tmsg
+ WHERE id = _id and (created::date) = _date AND usr=session_user
RETURNING *
) INSERT INTO tmsgack SELECT * FROM t ;
GET DIAGNOSTICS _cnt = ROW_COUNT;
IF( _cnt = 0 ) THEN
WITH t AS (
- DELETE FROM tmsgdaysbefore WHERE id = _id and (created::date) = _date AND usr=session_user
+ DELETE FROM tmsgdaysbefore
+ WHERE id = _id and (created::date) = _date AND usr=session_user
RETURNING *
) INSERT INTO tmsgack SELECT * FROM t ;
GET DIAGNOSTICS _cnt = ROW_COUNT;
IF( _cnt = 0 ) THEN
RAISE INFO 'The message could not be found';
+ RETURN 0;
+
+ ELSIF(_cnt = 1) THEN
+ RETURN 1;
+
+ ELSE
+        RAISE EXCEPTION 'ackmsg: more than one message archived';
END IF;
+
+ ELSIF(_cnt = 1) THEN
+ RETURN 1;
+
+ ELSE
+        RAISE EXCEPTION 'ackmsg: more than one message archived';
+
END IF;
RETURN _cnt;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER;
-GRANT EXECUTE ON FUNCTION ackmsg(int,date) TO role_com;
+GRANT EXECUTE ON FUNCTION ackmsg(int,date) TO role_com;
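The archive-on-acknowledge flow above (a `DELETE ... RETURNING` fed into an `INSERT` on tmsgack, first on tmsg and then on tmsgdaysbefore) can be sketched outside SQL. The sketch below is a toy in-memory model, not the server's API; the table and column names mirror the SQL, but the Python data structures are hypothetical.

```python
def ackmsg(msg_id, date, user, tmsg, tmsgdaysbefore, tmsgack):
    """Toy model of ackmsg: each table is a list of row dicts.

    The matching message is removed from tmsg (or, failing that,
    tmsgdaysbefore) and appended to tmsgack.  Returns 1 when a message
    was archived, 0 when none was found -- the same contract as the
    PL/pgSQL function.
    """
    for table in (tmsg, tmsgdaysbefore):
        for i, row in enumerate(table):
            if (row['id'] == msg_id and row['created_date'] == date
                    and row['usr'] == user):
                # DELETE ... RETURNING * -> INSERT INTO tmsgack
                tmsgack.append(table.pop(i))
                return 1
    return 0
```

As in the SQL, acknowledging the same message twice is harmless: the second call finds nothing and returns 0.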
diff --git a/src/sql/quote.sql b/src/sql/quote.sql
index ea1ff48..823d8a7 100644
--- a/src/sql/quote.sql
+++ b/src/sql/quote.sql
@@ -1,325 +1,325 @@
-
+/* not used */
--------------------------------------------------------------------------------
-- prequote -- err_offset -30
--------------------------------------------------------------------------------
CREATE FUNCTION fsubmitprequote(_own text,_qua_requ text,_pos_requ point,_qua_prov text,_pos_prov point,_dist float8)
RETURNS yerrororder AS $$
DECLARE
_r yerrororder%rowtype;
_s tstack%rowtype;
BEGIN
-- ORDER_BEST 2 NOQTTLIMIT 4 IGNOREOMEGA 8 PREQUOTE 64
_s := ROW(NULL,NULL,_own,NULL,2 | 4 | 8 | 64,_qua_requ,NULL,_pos_requ,_qua_prov,NULL,NULL,_pos_prov,_dist,NULL,NULL,NULL,NULL,NULL)::tstack;
_r := fprocessprequote('submitted',_s);
RETURN _r;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER;
-- GRANT EXECUTE ON FUNCTION fsubmitprequote(text,text,point,text,point,float8) TO role_co;
--------------------------------------------------------------------------------
CREATE FUNCTION fprocessprequote(_state eorderstate, _s tstack)
RETURNS yerrororder AS $$
DECLARE
_r yerrororder;
_wid int;
_o yorder;
_tx text;
BEGIN
_r := ROW(_s.id,0,NULL);
CASE
WHEN (_state = 'submitted') THEN -- before stack insertion
_r := fcheckquaown(_r,_s.own,_s.qua_requ,_s.pos_requ,_s.qua_prov,_s.pos_prov,_s.dist);
if(_r.code !=0) THEN
_r.code := _r.code -1000;
RETURN _r;
END IF;
IF (NOT fchecknameowner(_s.own,session_user)) THEN
_r.code := -1001;
_r.reason := 'illegal owner name';
RETURN _r;
END IF;
_r := fpushorder(_r,_s);
RETURN _r;
WHEN (_state = 'pending') THEN -- execution
_wid := fgetowner(_s.own);
_s.oid := _s.id;
IF(_s.qtt_requ IS NULL) THEN _s.qtt_requ := 1; END IF;
IF(_s.qtt_prov IS NULL) THEN _s.qtt_prov := 1; END IF;
IF(_s.qtt IS NULL) THEN _s.qtt := 0; END IF;
_o := ROW(_s.type,_s.id,_wid,_s.oid,_s.qtt_requ,_s.qua_requ,_s.qtt_prov,_s.qua_prov,_s.qtt,
box(_s.pos_requ,_s.pos_requ),box(_s.pos_prov,_s.pos_prov),
_s.dist,earth_get_square(_s.pos_prov,_s.dist))::yorder;
_tx := fproducequote(_o,_s,true);
RETURN _r;
WHEN (_state = 'aborted') THEN -- failure on stack output
return _r;
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER;
--------------------------------------------------------------------------------
-- quote all forms -- err_offset -40
-- type & ~3 == 0
--------------------------------------------------------------------------------
CREATE FUNCTION fsubmitquote(_type dtypeorder,_own text,_qua_requ text,_qtt_requ int8,_pos_requ point,_qua_prov text,_qtt_prov int8,_qtt int8,_pos_prov point,_dist float8)
RETURNS yerrororder AS $$
DECLARE
_r yerrororder%rowtype;
_s tstack%rowtype;
_otype int;
BEGIN
_otype := (_type & 3) | 128; -- QUOTE 128
IF((_qtt IS NULL) AND (_qtt_requ IS NULL) AND (_qtt_prov IS NULL) ) THEN -- quote first form
_otype := _otype | 4 | 8; -- NOQTTLIMIT 4 IGNOREOMEGA 8
ELSIF((_qtt IS NULL) AND (_qtt_requ >0) AND (_qtt_prov >0) ) THEN -- quote second form
_otype := _otype | 4 ; -- NOQTTLIMIT 4
ELSIF((_qtt > 0) AND (_qtt_requ >0) AND (_qtt_prov >0) ) THEN -- quote third form
_otype := _otype ;
ELSE
		_otype := _otype | 32; -- error bit set
END IF;
_s := ROW(NULL,NULL,_own,NULL,_otype,_qua_requ,_qtt_requ,_pos_requ,_qua_prov,_qtt_prov,_qtt,_pos_prov,_dist,NULL,NULL,NULL,NULL,NULL)::tstack;
_r := fprocessquote('submitted',_s);
RETURN _r;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER;
-- GRANT EXECUTE ON FUNCTION fsubmitquote(dtypeorder,text,text,int8,point,text,int8,int8,point,float8) TO role_co;
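The `_otype` arithmetic in fsubmitquote selects one of the three quote forms from which quantity arguments are NULL. A minimal sketch of that flag logic, using the bit values named in the surrounding comments (the function name `quote_otype` is ours, not part of the model):

```python
# Flag values taken from the SQL comments:
ORDER_LIMIT = 1    # type_flow & 3 == 1
ORDER_BEST = 2     # type_flow & 3 == 2
NOQTTLIMIT = 4
IGNOREOMEGA = 8
ERROR_BIT = 32     # illegal parameter combination
QUOTE = 128

def quote_otype(type_flow, qtt_requ, qtt_prov, qtt):
    """Mirror of the _otype computation in fsubmitquote (None plays
    the role of SQL NULL)."""
    otype = (type_flow & 3) | QUOTE
    if qtt is None and qtt_requ is None and qtt_prov is None:
        otype |= NOQTTLIMIT | IGNOREOMEGA   # quote first form
    elif qtt is None and qtt_requ > 0 and qtt_prov > 0:
        otype |= NOQTTLIMIT                  # quote second form
    elif qtt > 0 and qtt_requ > 0 and qtt_prov > 0:
        pass                                 # quote third form
    else:
        otype |= ERROR_BIT                   # error bit set
    return otype
```

fprocessquote later rejects any stack entry whose error bit is set (`(_s.type & 32)=32`), so an invalid combination is detected at submission time but reported uniformly.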
--------------------------------------------------------------------------------
CREATE FUNCTION fsubmitlquote(_own text,_qua_requ text,_qtt_requ int8,_qua_prov text,_qtt_prov int8)
RETURNS yerrororder AS $$
DECLARE
_r yerrororder%rowtype;
_s tstack%rowtype;
_otype int;
BEGIN
_r := ROW(NULL,0,NULL);
_otype := 2 | 128; -- Best 2 QUOTE 128
IF(NOT((_qtt_requ >0) AND (_qtt_prov >0)) ) THEN -- quote third form
_r.code := -2000;
_r.reason := 'quantities required';
return _r;
END IF;
_s := ROW(NULL,NULL,_own,NULL,_otype,_qua_requ,_qtt_requ,'(0,0)'::point,_qua_prov,_qtt_prov,_qtt_prov,'(0,0)'::point,0,NULL,NULL,NULL,NULL,NULL)::tstack;
_r := fprocessquote('submitted',_s);
RETURN _r;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER;
GRANT EXECUTE ON FUNCTION fsubmitlquote(text,text,int8,text,int8) TO role_co;
--------------------------------------------------------------------------------
CREATE FUNCTION fprocessquote(_state eorderstate, _s tstack)
RETURNS yerrororder AS $$
DECLARE
_r yerrororder;
_wid int;
_o yorder;
_tx text;
BEGIN
_r := ROW(_s.id,0,NULL);
CASE
WHEN (_state = 'submitted') THEN -- before stack insertion
_r := fcheckquaown(_r,_s.own,_s.qua_requ,_s.pos_requ,_s.qua_prov,_s.pos_prov,_s.dist);
if(_r.code !=0) THEN
_r.code := _r.code - 1100;
RETURN _r;
END IF;
IF ((_s.type & 32)=32) THEN -- error detected in submit
_r.code := -1110;
_r.reason := 'Illegal parameters for a quote';
return _r;
END IF;
_r := fpushorder(_r,_s);
RETURN _r;
WHEN (_state = 'pending') THEN -- execution
_wid := fgetowner(_s.own);
_s.oid := _s.id;
IF(_s.qtt_requ IS NULL) THEN _s.qtt_requ := 1; END IF;
IF(_s.qtt_prov IS NULL) THEN _s.qtt_prov := 1; END IF;
IF(_s.qtt IS NULL) THEN _s.qtt := 0; END IF;
_o := ROW(_s.type,_s.id,_wid,_s.oid,_s.qtt_requ,_s.qua_requ,_s.qtt_prov,_s.qua_prov,_s.qtt,
box(_s.pos_requ,_s.pos_requ),box(_s.pos_prov,_s.pos_prov),
_s.dist,earth_get_square(_s.pos_prov,_s.dist))::yorder;
_tx := fproducequote(_o,_s,true);
RETURN _r;
WHEN (_state = 'aborted') THEN -- failure on stack output
return _r;
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
/*
CREATE TYPE yquotebarter AS (
type int,
qua_requ text,
qtt_requ int,
qua_prov text,
qtt_prov int,
qtt int
); */
CREATE TYPE yr_quote AS (
qtt_reci int8,
qtt_give int8
);
--------------------------------------------------------------------------------
-- quote execution at the output of the stack
--------------------------------------------------------------------------------
CREATE FUNCTION fproducequote(_ord yorder,_isquote boolean,_isnoqttlimit boolean,_islimit boolean,_isignoreomega boolean)
/*
_isquote := true; -- (_t.type & 128) = 128
it can be a quote or a prequote
_isnoqttlimit := false; -- (_t.type & 4) = 4
when true the quantity provided is not limited by the stock available
_islimit:= (_t.jso->'type')='limit'; -- (_t.type & 3) = 1
type of the quoted order
_isignoreomega := -- (_t.type & 8) = 8
*/
RETURNS json AS $$
DECLARE
_cnt int;
_cyclemax yflow;
_cycle yflow;
_res int8[];
_firstloop boolean := true;
_freezeOmega boolean;
_mid int;
_nbmvts int;
_wid int;
_MAXMVTPERTRANS int := fgetconst('MAXMVTPERTRANS');
_barter text;
_paths text;
_qtt_reci int8 := 0;
_qtt_give int8 := 0;
_qtt_prov int8 := 0;
_qtt_requ int8 := 0;
_qtt int8 := 0;
_resjso json;
BEGIN
_cnt := fcreate_tmp(_ord);
_nbmvts := 0;
_paths := '';
LOOP
SELECT yflow_max(cycle) INTO _cyclemax FROM _tmp WHERE yflow_is_draft(cycle);
IF(NOT yflow_is_draft(_cyclemax)) THEN
EXIT; -- from LOOP
END IF;
_nbmvts := _nbmvts + yflow_dim(_cyclemax);
IF(_nbmvts > _MAXMVTPERTRANS) THEN
EXIT;
END IF;
/*
IF(NOT _firstloop) THEN
_paths := _paths || ',' || chr(10); -- ,\n
END IF;
*/
_res := yflow_qtts(_cyclemax);
-- _res = [qtt_in,qtt_out,qtt_requ,qtt_prov,qtt]
-- _res = [x[f->dim-2].flowr,x[f->dim-1].flowr,x[f->dim-1].qtt_requ,x[f->dim-1].qtt_prov,x[f->dim-1].qtt]
_qtt_reci := _qtt_reci + _res[1];
_qtt_give := _qtt_give + _res[2];
-- _paths := _paths || yflow_to_jsona(_cyclemax);
-- for a QUOTE, set _ro.qtt_requ,_ro.qtt_prov,_ro.qtt
IF(_isquote) THEN -- QUOTE
IF(_firstloop) THEN
_qtt_requ := _res[3]; -- qtt_requ
_qtt_prov := _res[4]; -- qtt_prov
END IF;
IF(_isnoqttlimit) THEN -- NOLIMITQTT
_qtt := _qtt + _res[5]; -- qtt
ELSE
IF(_firstloop) THEN
_qtt := _res[5]; -- qtt
END IF;
END IF;
END IF;
-- for a PREQUOTE they remain 0
/* if _setOmega, for all remaining orders:
- omega is set to _ro.qtt_requ,_ro.qtt_prov
- IGNOREOMEGA is reset
for all updates, yflow_reduce is applied except for node with NOQTTLIMIT
*/
_freezeOmega := _firstloop AND _isignoreomega AND _isquote; --((_t.type & (8|128)) = (8|128));
UPDATE _tmp SET cycle = yflow_reduce(cycle,_cyclemax,_freezeOmega);
_firstloop := false;
DELETE FROM _tmp WHERE NOT yflow_is_draft(cycle);
END LOOP;
IF ( (_qtt_requ != 0)
AND _islimit AND _isquote
AND ((_qtt_give::double precision) /(_qtt_reci::double precision)) >
((_qtt_prov::double precision) /(_qtt_requ::double precision))
) THEN
RAISE EXCEPTION 'pq: Omega of the flows obtained is not limited by the order limit' USING ERRCODE='YA003';
END IF;
/*
_paths := '{"qtt_reci":' || _qtt_reci || ',"qtt_give":' || _qtt_give ||
',"paths":[' || chr(10) || _paths || chr(10) ||']}';
IF((_t.type & (128)) = 128) THEN
_barter := row_to_json(ROW(_t.type&(~3),_t.qua_requ,_qtt_requ,_t.qua_prov,_qtt_prov,_qtt)::yquotebarter);
_paths := '{"object":' || row_to_json(_t)::text || ',"quoted":' || _barter || ',"result":' || _paths || '}';
ELSE -- prequote
_paths := '{"object":' || row_to_json(_t)::text || ',"result":' || _paths || '}';
END IF; */
_resjso := row_to_json(ROW(_qtt_reci,_qtt_give)::yr_quote);
RETURN _resjso;
END;
$$ LANGUAGE PLPGSQL;
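The guard at the end of fproducequote raises `YA003` when, for a 'limit' quote, the realized ratio of the accumulated flows exceeds the limit set by the order. A minimal sketch of just that comparison (the function name `violates_limit` is ours):

```python
def violates_limit(qtt_reci, qtt_give, qtt_requ, qtt_prov):
    """The omega guard of fproducequote: for a 'limit' quote the
    realized ratio qtt_give/qtt_reci must not exceed the order's
    limit ratio qtt_prov/qtt_requ.  A zero qtt_requ disables the
    check, as in the SQL (_qtt_requ != 0 is part of the condition)."""
    if qtt_requ == 0:
        return False
    return (qtt_give / qtt_reci) > (qtt_prov / qtt_requ)
```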
diff --git a/src/sql/roles.sql b/src/sql/roles.sql
index f2194f7..7f97f0a 100644
--- a/src/sql/roles.sql
+++ b/src/sql/roles.sql
@@ -1,95 +1,99 @@
\set ECHO none
+\set ON_ERROR_STOP on
/* script executed for the whole cluster */
SET client_min_messages = warning;
SET log_error_verbosity = terse;
-
+BEGIN;
/* flowf extension */
-- drop extension if exists btree_gin cascade;
-- create extension btree_gin with version '1.0';
DROP EXTENSION IF EXISTS flowf;
CREATE EXTENSION flowf WITH VERSION '0.1';
--------------------------------------------------------------------------------
CREATE FUNCTION _create_role(_role text) RETURNS int AS $$
BEGIN
BEGIN
EXECUTE 'CREATE ROLE ' || _role;
EXCEPTION WHEN duplicate_object THEN
NULL;
END;
EXECUTE 'ALTER ROLE ' || _role || ' NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT NOREPLICATION';
RETURN 0;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
/* definition of roles
-- role_com ----> role_co --------->role_client---->clientA
| \-->clientB
|\-> role_co_closed
|
\-> role_bo---->user_bo
access by clients can be disabled/enabled with a single command:
REVOKE role_co FROM role_client
GRANT role_co TO role_client
opening/closing of market is performed by switching role_client
between role_co and role_co_closed
same thing for role batch with role_bo:
REVOKE role_bo FROM user_bo
GRANT role_bo TO user_bo
*/
--------------------------------------------------------------------------------
select _create_role('prod'); -- owner of market objects
ALTER ROLE prod WITH createrole;
/* so that prod can modify roles at opening and closing. */
select _create_role('role_com');
SELECT _create_role('role_co'); -- when market is opened
GRANT role_com TO role_co;
SELECT _create_role('role_co_closed'); -- when market is closed
GRANT role_com TO role_co_closed;
SELECT _create_role('role_client');
GRANT role_co_closed TO role_client; -- market phase 101
-- role_com ---> role_bo----> user_bo
SELECT _create_role('role_bo');
GRANT role_com TO role_bo;
-- ALTER ROLE role_bo INHERIT;
SELECT _create_role('user_bo');
GRANT role_bo TO user_bo;
--- two connections are allowed for user_bo
+-- two connections are allowed for background_workers
+-- BGW_OPENCLOSE and BGW_CONSUMESTACK
ALTER ROLE user_bo WITH LOGIN CONNECTION LIMIT 2;
--------------------------------------------------------------------------------
select _create_role('test_clienta');
ALTER ROLE test_clienta WITH login;
GRANT role_client TO test_clienta;
select _create_role('test_clientb');
ALTER ROLE test_clientb WITH login;
GRANT role_client TO test_clientb;
select _create_role('test_clientc');
ALTER ROLE test_clientc WITH login;
GRANT role_client TO test_clientc;
select _create_role('test_clientd');
ALTER ROLE test_clientd WITH login;
GRANT role_client TO test_clientd;
+COMMIT;
+
diff --git a/src/sql/tables.sql b/src/sql/tables.sql
index 56e719c..d517a1a 100644
--- a/src/sql/tables.sql
+++ b/src/sql/tables.sql
@@ -1,355 +1,244 @@
--------------------------------------------------------------------------------
ALTER DEFAULT PRIVILEGES REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC;
ALTER DEFAULT PRIVILEGES REVOKE ALL ON TABLES FROM PUBLIC;
ALTER DEFAULT PRIVILEGES REVOKE ALL ON SEQUENCES FROM PUBLIC;
ALTER DEFAULT PRIVILEGES REVOKE ALL ON TYPES FROM PUBLIC;
--------------------------------------------------------------------------------
create domain dqtt AS int8 check( VALUE>0);
create domain dtext AS text check( char_length(VALUE)>0);
--------------------------------------------------------------------------------
--- main constants of the model
+-- Main constants of the model
--------------------------------------------------------------------------------
create table tconst(
name dtext UNIQUE not NULL,
value int,
PRIMARY KEY (name)
);
GRANT SELECT ON tconst TO role_com;
--------------------------------------------------------------------------------
-/* for booleans, 0 == false and !=0 == true
-*/
INSERT INTO tconst (name,value) VALUES
('MAXCYCLE',64), -- must be less than yflow_get_maxdim()
('MAXPATHFETCHED',1024),-- maximum depth of the graph exploration
('MAXMVTPERTRANS',128), -- maximum number of movements per transaction
-- if this limit is reached, next cycles are not performed but all others
-- are included in the current transaction
('VERSION-X',2),('VERSION-Y',1),('VERSION-Z',0),
- ('OWNERINSERT',1), -- boolean when true, owner inserted when not found
- ('QUAPROVUSR',0), -- boolean when true, the quality provided by a barter is suffixed by user name
+ -- booleans, 0 == false and !=0 == true
+
+ ('QUAPROVUSR',0), -- when true, the quality provided by a barter is suffixed by user name
-- 1 prod
- ('OWNUSR',0), -- boolean when true, the owner is suffixed by user name
+ ('OWNUSR',0), -- when true, the owner is suffixed by user name
-- 1 prod
- ('DEBUG',1);
+    ('DEBUG',1);
--------------------------------------------------------------------------------
create table tvar(
name dtext UNIQUE not NULL,
value int,
PRIMARY KEY (name)
);
-INSERT INTO tvar (name,value) VALUES ('INSTALLED',0);
+-- btree index tvar_pkey on name
+INSERT INTO tvar (name,value)
+ VALUES ('INSTALLED',0); -- set to 1 when the model is installed
GRANT SELECT ON tvar TO role_com;
--------------------------------------------------------------------------------
-- TOWNER
--------------------------------------------------------------------------------
create table towner (
id serial UNIQUE not NULL,
name dtext UNIQUE not NULL,
PRIMARY KEY (id)
);
-comment on table towner is 'owners of values exchanged';
+
+comment on table towner is 'owners of values exchanged';
+comment on column towner.id is 'id of this owner';
+comment on column towner.name is 'the name of the owner';
+
alter sequence towner_id_seq owned by towner.id;
create index towner_name_idx on towner(name);
SELECT _reference_time('towner');
-SELECT _grant_read('towner');
+GRANT SELECT ON towner TO role_com;
+--------------------------------------------------------------------------------
+--------------------------------------------------------------------------------
+CREATE FUNCTION fgetowner(_name text) RETURNS int AS $$
+DECLARE
+ _wid int;
+BEGIN
+ LOOP
+ SELECT id INTO _wid FROM towner WHERE name=_name;
+ IF found THEN
+ return _wid;
+ END IF;
+
+ BEGIN
+ INSERT INTO towner (name) VALUES (_name) RETURNING id INTO _wid;
+ return _wid;
+ EXCEPTION WHEN unique_violation THEN
+ NULL;
+ END;
+ END LOOP;
+END;
+$$ LANGUAGE PLPGSQL;
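The fgetowner loop above is the classic select-then-insert upsert: if the `INSERT` loses a race with a concurrent transaction it fails with `unique_violation`, and the loop retries the `SELECT`, which now succeeds. A toy single-threaded model of the same contract (the class `OwnerTable` is ours, standing in for towner):

```python
class OwnerTable:
    """In-memory stand-in for towner, illustrating fgetowner's
    get-or-create contract: the same name always maps to the same id,
    and a new name is assigned the next serial id."""

    def __init__(self):
        self._ids = {}
        self._next = 1   # plays the role of towner_id_seq

    def get_owner(self, name):
        while True:
            # SELECT id FROM towner WHERE name=_name
            wid = self._ids.get(name)
            if wid is not None:
                return wid
            # INSERT ... RETURNING id; in SQL a unique_violation here
            # (a concurrent insert won) would loop back to the SELECT
            wid = self._next
            self._ids[name] = wid
            self._next += 1
            return wid
```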
+--------------------------------------------------------------------------------
+-- MOVEMENTS
+--------------------------------------------------------------------------------
+CREATE SEQUENCE tmvt_id_seq;
--------------------------------------------------------------------------------
-- ORDER BOOK
--------------------------------------------------------------------------------
-- type = type_flow | type_primitive <<8 | type_mode <<16
-create domain dtypeorder AS int check(VALUE >=0 AND VALUE < 16777215); --((1<<24)-1)
-
+create domain dtypeorder AS int check(VALUE >=0 AND VALUE < ((1<<24)-1));
-- type_flow &3 1 order limit,2 order best
--- type_flow &12 bit set for c calculations
+-- type_flow &12 bits reserved for C internal calculations
-- 4 no qttlimit
-- 8 ignoreomega
-- yorder.type is a type_flow = type & 255
--- type_primitive
--- 1 order
--- 2 rmorder
--- 3 quote
--- 4 prequote
CREATE TYPE eordertype AS ENUM ('best','limit');
CREATE TYPE eprimitivetype AS ENUM ('order','childorder','rmorder','quote','prequote');
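The dtypeorder layout described above packs three byte-sized fields into 24 bits: `type = type_flow | type_primitive << 8 | type_mode << 16`. A small sketch of the packing and unpacking (function names are ours):

```python
def pack_order_type(type_flow, type_primitive, type_mode):
    """Pack the three fields of dtypeorder; the result must satisfy
    the domain check 0 <= t < (1 << 24) - 1."""
    t = type_flow | (type_primitive << 8) | (type_mode << 16)
    assert 0 <= t < (1 << 24) - 1
    return t

def unpack_order_type(t):
    """Recover (type_flow, type_primitive, type_mode).  Note that
    yorder.type keeps only the low byte: type & 255."""
    return t & 255, (t >> 8) & 255, (t >> 16) & 255
```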
create table torder (
usr dtext,
own dtext,
ord yorder, --defined by the extension flowf
created timestamp not NULL,
updated timestamp,
duration interval
);
-comment on table torder is 'Order book';
-comment on column torder.usr is 'user that inserted the order ';
-comment on column torder.ord is 'the order';
+comment on table torder is 'Order book';
+comment on column torder.usr is 'user that inserted the order ';
+comment on column torder.ord is 'the order';
comment on column torder.created is 'time when the order was put on the stack';
comment on column torder.updated is 'time when the (quantity) of the order was updated by the order book';
comment on column torder.duration is 'the life time of the order';
-SELECT _grant_read('torder');
+GRANT SELECT ON torder TO role_com;
create index torder_qua_prov_idx on torder(((ord).qua_prov)); -- using gin(((ord).qua_prov) text_ops);
create index torder_id_idx on torder(((ord).id));
create index torder_oid_idx on torder(((ord).oid));
-- id,type,own,oid,qtt_requ,qua_requ,qtt_prov,qua_prov,qtt
create view vorder as
select (o.ord).id as id,(o.ord).type as type,w.name as own,(o.ord).oid as oid,
(o.ord).qtt_requ as qtt_requ,(o.ord).qua_requ as qua_requ,
(o.ord).qtt_prov as qtt_prov,(o.ord).qua_prov as qua_prov,
(o.ord).qtt as qtt, o.created as created, o.updated as updated
from torder o left join towner w on ((o.ord).own=w.id) where o.usr=session_user;
-SELECT _grant_read('vorder');
+GRANT SELECT ON vorder TO role_com;
-- without dates and without the usr filter
create view vorder2 as
select (o.ord).id as id,(o.ord).type as type,w.name as own,(o.ord).oid as oid,
(o.ord).qtt_requ as qtt_requ,(o.ord).qua_requ as qua_requ,
(o.ord).qtt_prov as qtt_prov,(o.ord).qua_prov as qua_prov,
(o.ord).qtt as qtt
from torder o left join towner w on ((o.ord).own=w.id);
-SELECT _grant_read('vorder2');
+GRANT SELECT ON vorder2 TO role_com;
-- only parent for all users
create view vbarter as
select (o.ord).id as id,(o.ord).type as type,o.usr as user,w.name as own,
(o.ord).qtt_requ as qtt_requ,(o.ord).qua_requ as qua_requ,
(o.ord).qtt_prov as qtt_prov,(o.ord).qua_prov as qua_prov,
(o.ord).qtt as qtt, o.created as created, o.updated as updated
from torder o left join towner w on ((o.ord).own=w.id) where (o.ord).oid=(o.ord).id;
-SELECT _grant_read('vbarter');
+GRANT SELECT ON vbarter TO role_com;
-- parent and childs for all users, used with vmvto
create view vordero as
select id,
(case when (type & 3=1) then 'limit' else 'best' end)::eordertype as type,
own as owner,
case when id=oid then (qtt::text || ' ' || qua_prov) else '' end as stock,
'(' || qtt_prov::text || '/' || qtt_requ::text || ') ' ||
    qua_prov || ' / '|| qua_requ as expected_ω,
case when id=oid then '' else oid::text end as oid
from vorder order by id asc;
GRANT SELECT ON vordero TO role_com;
comment on view vordero is 'order book for all users, to be used with vmvto';
comment on column vordero.id is 'the id of the order';
comment on column vordero.owner is 'the owner';
comment on column vordero.stock is 'for a parent order the stock offered by the owner';
comment on column vordero.expected_ω is 'the ω of the order';
comment on column vordero.oid is 'for a child-order, the id of the parent-order';
---------------------------------------------------------------------------------
-
---------------------------------------------------------------------------------
-CREATE FUNCTION fgetowner(_name text) RETURNS int AS $$
-DECLARE
- _wid int;
- _OWNERINSERT boolean := fgetconst('OWNERINSERT')=1;
-BEGIN
- LOOP
- SELECT id INTO _wid FROM towner WHERE name=_name;
- IF found THEN
- return _wid;
- END IF;
- IF (NOT _OWNERINSERT) THEN
- RAISE EXCEPTION 'The owner does not exist' USING ERRCODE='YU001';
- END IF;
- BEGIN
- INSERT INTO towner (name) VALUES (_name) RETURNING id INTO _wid;
- -- RAISE NOTICE 'owner % created',_name;
- return _wid;
- EXCEPTION WHEN unique_violation THEN
- NULL;--
- END;
- END LOOP;
-END;
-$$ LANGUAGE PLPGSQL;
-
---------------------------------------------------------------------------------
--- TMVT
--- id,nbc,nbt,grp,xid,usr_src,usr_dst,xoid,own_src,own_dst,qtt,nat,ack,exhausted,order_created,created
---------------------------------------------------------------------------------
-/*
-create table tmvt (
- id serial UNIQUE not NULL,
- nbc int default NULL,
- nbt int default NULL,
- grp int default NULL,
- xid int default NULL,
- usr_src text default NULL,
- usr_dst text default NULL,
- xoid int default NULL,
- own_src text default NULL,
- own_dst text default NULL,
- qtt int8 default NULL,
- nat text default NULL,
- ack boolean default NULL,
- cack boolean default NULL,
- exhausted boolean default NULL,
- order_created timestamp default NULL,
- created timestamp default NULL,
- om_exp double precision default NULL,
- om_rea double precision default NULL,
-
- CONSTRAINT ctmvt_grp FOREIGN KEY (grp) references tmvt(id) ON UPDATE CASCADE
-);
-
-GRANT SELECT ON tmvt TO role_com;
-
-comment on table tmvt is 'Records ownership changes';
-
-comment on column tmvt.nbc is 'number of movements of the exchange cycle';
-comment on column tmvt.nbt is 'number of movements of the transaction containing several exchange cycles';
-comment on column tmvt.grp is 'references the first movement of the exchange';
-comment on column tmvt.xid is 'references the order.id';
-comment on column tmvt.usr_src is 'usr provider';
-comment on column tmvt.usr_dst is 'usr receiver';
-comment on column tmvt.xoid is 'references the order.oid';
-comment on column tmvt.own_src is 'owner provider';
-comment on column tmvt.own_dst is 'owner receiver';
-comment on column tmvt.qtt is 'quantity of the value moved';
-comment on column tmvt.nat is 'quality of the value moved';
-comment on column tmvt.ack is 'set when movement has been acknowledged';
-comment on column tmvt.cack is 'set when the cycle has been acknowledged';
-comment on column tmvt.exhausted is 'set when the movement exhausted the order providing the value';
-comment on column tmvt.om_exp is 'ω expected by the order';
-comment on column tmvt.om_rea is 'real ω of movement';
-
-alter sequence tmvt_id_seq owned by tmvt.id;
-GRANT SELECT ON tmvt_id_seq TO role_com;
-
-create index tmvt_grp_idx on tmvt(grp);
-create index tmvt_nat_idx on tmvt(nat);
-create index tmvt_own_src_idx on tmvt(own_src);
-create index tmvt_own_dst_idx on tmvt(own_dst);
-
-CREATE VIEW vmvt AS select * from tmvt;
-GRANT SELECT ON vmvt TO role_com;
-
-CREATE VIEW vmvt_tu AS select id,nbc,grp,xid,xoid,own_src,own_dst,qtt,nat,ack,cack,exhausted from tmvt;
-GRANT SELECT ON vmvt_tu TO role_com;
-
-create view vmvto as
- select id,grp,
- usr_src as from_usr,
- own_src as from_own,
- qtt::text || ' ' || nat as value,
- usr_dst as to_usr,
- own_dst as to_own,
-    to_char(om_exp, 'FM999.9999990') as expected_ω,
-    to_char(om_rea, 'FM999.9999990') as actual_ω,
- ack
- from tmvt where cack is NULL order by id asc;
-GRANT SELECT ON vmvto TO role_com;
-*/
-CREATE SEQUENCE tmvt_id_seq;
-
---------------------------------------------------------------------------------
--- STACK id,usr,kind,jso,submitted
---------------------------------------------------------------------------------
-create table tstack (
- id serial UNIQUE not NULL,
- usr dtext,
- kind eprimitivetype,
- jso json, -- representation of the primitive
- submitted timestamp not NULL,
- PRIMARY KEY (id)
-);
-
-comment on table tstack is 'Records the stack of primitives';
-comment on column tstack.id is 'id of this primitive';
-comment on column tstack.usr is 'user submitting the primitive';
-comment on column tstack.kind is 'type of primitive';
-comment on column tstack.jso is 'primitive payload';
-comment on column tstack.submitted is 'timestamp when the primitive was successfully submitted';
-
-alter sequence tstack_id_seq owned by tstack.id;
-
-GRANT SELECT ON tstack TO role_com;
-SELECT fifo_init('tstack');
-GRANT SELECT ON tstack_id_seq TO role_com;
-
-
---------------------------------------------------------------------------------
-CREATE TYPE eprimphase AS ENUM ('submit', 'execute');
-
--------------------------------------------------------------------------------
-- MSG
--------------------------------------------------------------------------------
CREATE TYPE emsgtype AS ENUM ('response', 'exchange');
create table tmsg (
id serial UNIQUE not NULL,
usr dtext default NULL, -- the user receiver of this message
typ emsgtype not NULL,
jso json default NULL,
created timestamp not NULL
);
alter sequence tmsg_id_seq owned by tmsg.id;
-SELECT _grant_read('tmsg');
-SELECT _grant_read('tmsg_id_seq');
+GRANT SELECT ON tmsg TO role_com;
+GRANT SELECT ON tmsg_id_seq TO role_com;
SELECT fifo_init('tmsg');
CREATE VIEW vmsg AS select * from tmsg WHERE usr = session_user;
-SELECT _grant_read('vmsg');
+GRANT SELECT ON vmsg TO role_com;
--------------------------------------------------------------------------------
CREATE TYPE yj_error AS (
code int,
reason text
);
CREATE TYPE yerrorprim AS (
id int,
error yj_error
);
CREATE TYPE yj_value AS (
qtt int8,
nat text
);
CREATE TYPE yj_stock AS (
id int,
qtt int8,
nat text,
own text,
usr text
);
CREATE TYPE yj_ω AS (
id int,
qtt_prov int8,
qtt_requ int8,
type eordertype
);
CREATE TYPE yj_mvt AS (
id int,
cycle int,
  orde yj_ω,
stock yj_stock,
mvt_from yj_stock,
mvt_to yj_stock,
orig int
);
CREATE TYPE yj_order AS (
id int,
error yj_error
);
CREATE TYPE yj_primitive AS (
id int,
error yj_error,
primitive json,
result json,
value json
);
diff --git a/src/sql/util.sql b/src/sql/util.sql
index ca262ea..fcbd287 100644
--- a/src/sql/util.sql
+++ b/src/sql/util.sql
@@ -1,125 +1,125 @@
/* utilities */
--------------------------------------------------------------------------------
-- fetch a constant, and verify consistency
CREATE FUNCTION fgetconst(_name text) RETURNS int AS $$
DECLARE
_ret int;
BEGIN
SELECT value INTO _ret FROM tconst WHERE name=_name;
IF(NOT FOUND) THEN
RAISE EXCEPTION 'the const % is not found',_name USING ERRCODE= 'YA002';
END IF;
IF(_name = 'MAXCYCLE' AND _ret >yflow_get_maxdim()) THEN
        RAISE EXCEPTION 'MAXCYCLE must be <=%',yflow_get_maxdim() USING ERRCODE='YA002';
END IF;
RETURN _ret;
END;
$$ LANGUAGE PLPGSQL STABLE set search_path to market;
--------------------------------------------------------------------------------
CREATE FUNCTION fsetvar(_name text,_value int) RETURNS int AS $$
DECLARE
_ret int;
BEGIN
UPDATE tvar SET value=_value WHERE name=_name;
GET DIAGNOSTICS _ret = ROW_COUNT;
IF(_ret !=1) THEN
RAISE EXCEPTION 'the var % is not found',_name USING ERRCODE= 'YA002';
END IF;
RETURN 0;
END;
$$ LANGUAGE PLPGSQL set search_path to market;
--------------------------------------------------------------------------------
CREATE FUNCTION fgetvar(_name text) RETURNS int AS $$
DECLARE
_ret int;
BEGIN
SELECT value INTO _ret FROM tvar WHERE name=_name;
IF(NOT FOUND) THEN
RAISE EXCEPTION 'the var % is not found',_name USING ERRCODE= 'YA002';
END IF;
RETURN _ret;
END;
$$ LANGUAGE PLPGSQL set search_path to market;
--------------------------------------------------------------------------------
CREATE FUNCTION fversion() RETURNS text AS $$
DECLARE
_ret text;
_x int;
_y int;
_z int;
BEGIN
SELECT value INTO _x FROM tconst WHERE name='VERSION-X';
SELECT value INTO _y FROM tconst WHERE name='VERSION-Y';
SELECT value INTO _z FROM tconst WHERE name='VERSION-Z';
RETURN 'openBarter VERSION-' || ((_x)::text) || '.' || ((_y)::text)|| '.' || ((_z)::text);
END;
$$ LANGUAGE PLPGSQL STABLE set search_path to market;
GRANT EXECUTE ON FUNCTION fversion() TO role_com;
--------------------------------------------------------------------------------
CREATE FUNCTION fifo_init(_name text) RETURNS void AS $$
BEGIN
EXECUTE 'CREATE INDEX ' || _name || '_id_idx ON ' || _name || '((id) ASC)';
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
-- trigger before insert on some tables
--------------------------------------------------------------------------------
CREATE FUNCTION ftime_updated()
RETURNS trigger AS $$
BEGIN
IF (TG_OP = 'INSERT') THEN
NEW.created := statement_timestamp();
ELSE
NEW.updated := statement_timestamp();
END IF;
RETURN NEW;
END;
$$ LANGUAGE PLPGSQL;
comment on FUNCTION ftime_updated() is
'trigger updating fields created and updated';
--------------------------------------------------------------------------------
-- add ftime_updated trigger to the table
--------------------------------------------------------------------------------
CREATE FUNCTION _reference_time(_table text) RETURNS int AS $$
DECLARE
_res int;
_tablem text;
_tl text;
_tr text;
BEGIN
_tablem := _table;
- LOOP -- remplace les points par des tirets
+    LOOP -- replace dots by underscores
_res := position('.' in _tablem);
EXIT WHEN _res=0;
_tl := substring(_tablem for _res-1);
_tr := substring(_tablem from _res+1);
_tablem := _tl || '_' || _tr;
END LOOP;
EXECUTE 'ALTER TABLE ' || _table || ' ADD created timestamp';
EXECUTE 'ALTER TABLE ' || _table || ' ADD updated timestamp';
EXECUTE 'CREATE TRIGGER trig_befa_' || _tablem || ' BEFORE INSERT
OR UPDATE ON ' || _table || ' FOR EACH ROW
EXECUTE PROCEDURE ftime_updated()' ;
RETURN 0;
END;
$$ LANGUAGE PLPGSQL;
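The loop in _reference_time rewrites a qualified table name such as `market.towner` into `market_towner` so it can be embedded in a trigger name. The same character-by-character logic in a short sketch (the function name `trigger_name_base` is ours):

```python
def trigger_name_base(table):
    """Mirror of _reference_time's loop: replace every '.' in the
    table name with '_', one occurrence per iteration, just as the
    PL/pgSQL version does with position() and substring()."""
    while True:
        pos = table.find('.')
        if pos == -1:
            return table
        table = table[:pos] + '_' + table[pos + 1:]
```

In Python this is of course just `table.replace('.', '_')`; the loop form is kept only to match the SQL.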
--------------------------------------------------------------------------------
CREATE FUNCTION _grant_read(_table text) RETURNS void AS $$
/* deprecated, use GRANT SELECT ON _table TO role_com instead */
BEGIN
EXECUTE 'GRANT SELECT ON ' || _table || ' TO role_com';
RETURN;
END;
$$ LANGUAGE PLPGSQL;
---SELECT _grant_read('tconst');
+--GRANT SELECT ON tconst TO role_com;
diff --git a/src/test/expected/tu_1.res b/src/test/expected/tu_1.res
index f6b39be..683ab67 100644
--- a/src/test/expected/tu_1.res
+++ b/src/test/expected/tu_1.res
@@ -1,50 +1,49 @@
-- Bilateral exchange between owners with distinct users (limit)
---------------------------------------------------------
--variables
--USER:admin
-select * from market.tvar order by name;
+select * from market.tvar where name != 'OC_CURRENT_OPENED' order by name;
name value
+---------+---------
INSTALLED 1
-OC_CURRENT_OPENED 0
OC_CURRENT_PHASE 102
STACK_EXECUTED 0
STACK_TOP 0
---------------------------------------------------------
--USER:test_clienta
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',10);
id error
+---------+---------
1 (0,)
--USER:test_clientb
SELECT * FROM market.fsubmitorder('limit','wb','b', 10,'a',20);
id error
+---------+---------
2 (0,)
--Bilateral exchange expected
--------------------------------------------------------------------------------
table: torder
id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
1 1 wa 5 a limit 10 b 0 1test_clienta
2 2 wb 10 b limit 20 a 10 2test_clientb
--------------------------------------------------------------------------------
table: tmsg
--------------------------------------------------------------------------------
1:Primitive id:1 from test_clienta: OK
2:Cycle id:1 Exchange id:2 for wa @test_clienta:
2:mvt_from wa @test_clienta : 10 'b' -> wb @test_clientb
1:mvt_to wa @test_clienta : 10 'a' <- wb @test_clientb
stock id:1 remaining after exchange: 0 'b'
3:Cycle id:1 Exchange id:1 for wb @test_clientb:
1:mvt_from wb @test_clientb : 10 'a' -> wa @test_clienta
2:mvt_to wb @test_clientb : 10 'b' <- wa @test_clienta
stock id:2 remaining after exchange: 10 'a'
4:Primitive id:2 from test_clientb: OK
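The tu_1 trace above can be checked mechanically: for each order, the stock remaining after the exchange is the provided quantity minus what was moved out. A minimal sketch of that invariant, with the quantities transcribed by hand from the expected output above:

```python
# Quantities transcribed from the tu_1 expected output above.
orders = {
    1: {"qtt_prov": 10, "moved_out": 10},  # wa provides 10 'b', all of it moved
    2: {"qtt_prov": 20, "moved_out": 10},  # wb provides 20 'a', 10 moved
}

def remaining(order):
    # stock left after the exchange = provided - moved out
    return order["qtt_prov"] - order["moved_out"]

assert remaining(orders[1]) == 0   # "stock id:1 remaining after exchange: 0 'b'"
assert remaining(orders[2]) == 10  # "stock id:2 remaining after exchange: 10 'a'"
```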
diff --git a/src/test/expected/tu_2.res b/src/test/expected/tu_2.res
index 357662d..92e6ba1 100644
--- a/src/test/expected/tu_2.res
+++ b/src/test/expected/tu_2.res
@@ -1,66 +1,65 @@
-- Bilateral exchange between owners with distinct users (best+limit)
---------------------------------------------------------
--variables
--USER:admin
-select * from market.tvar order by name;
+select * from market.tvar where name != 'OC_CURRENT_OPENED' order by name;
name value
+---------+---------
INSTALLED 1
-OC_CURRENT_OPENED 0
OC_CURRENT_PHASE 102
STACK_EXECUTED 0
STACK_TOP 0
---------------------------------------------------------
--USER:test_clienta
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
SELECT * FROM market.fsubmitorder('best','wa','a', 10,'b',5);
id error
+---------+---------
1 (0,)
--USER:test_clientb
SELECT * FROM market.fsubmitorder('best','wb','b', 20,'a',10);
id error
+---------+---------
2 (0,)
--Bilateral exchange expected
SELECT * FROM market.fsubmitorder('limit','wa','a', 10,'b',5);
id error
+---------+---------
3 (0,)
--USER:test_clientb
SELECT * FROM market.fsubmitorder('best','wb','b', 20,'a',10);
id error
+---------+---------
4 (0,)
--No exchange expected
--------------------------------------------------------------------------------
table: torder
id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
1 1 wa 10 a best 5 b 0 1test_clienta
2 2 wb 20 b best 10 a 5 2test_clientb
3 3 wa 10 a limit 5 b 5 1test_clientb
4 4 wb 20 b best 10 a 10 2test_clientb
--------------------------------------------------------------------------------
table: tmsg
--------------------------------------------------------------------------------
1:Primitive id:1 from test_clienta: OK
2:Cycle id:1 Exchange id:2 for wa @test_clienta:
2:mvt_from wa @test_clienta : 5 'b' -> wb @test_clientb
1:mvt_to wa @test_clienta : 5 'a' <- wb @test_clientb
stock id:1 remaining after exchange: 0 'b'
3:Cycle id:1 Exchange id:1 for wb @test_clientb:
1:mvt_from wb @test_clientb : 5 'a' -> wa @test_clienta
2:mvt_to wb @test_clientb : 5 'b' <- wa @test_clienta
stock id:2 remaining after exchange: 5 'a'
4:Primitive id:2 from test_clientb: OK
5:Primitive id:3 from test_clientb: OK
6:Primitive id:4 from test_clientb: OK
diff --git a/src/test/expected/tu_3.res b/src/test/expected/tu_3.res
index fdf455c..6283f8d 100644
--- a/src/test/expected/tu_3.res
+++ b/src/test/expected/tu_3.res
@@ -1,62 +1,61 @@
-- Trilateral exchange between owners with distinct users
---------------------------------------------------------
--variables
--USER:admin
-select * from market.tvar order by name;
+select * from market.tvar where name != 'OC_CURRENT_OPENED' order by name;
name value
+---------+---------
INSTALLED 1
-OC_CURRENT_OPENED 0
OC_CURRENT_PHASE 102
STACK_EXECUTED 0
STACK_TOP 0
---------------------------------------------------------
--USER:test_clienta
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',10);
id error
+---------+---------
1 (0,)
--USER:test_clientb
SELECT * FROM market.fsubmitorder('limit','wb','b', 20,'c',40);
id error
+---------+---------
2 (0,)
SELECT * FROM market.fsubmitorder('limit','wc','c', 10,'a',20);
id error
+---------+---------
3 (0,)
--Trilateral exchange expected
--------------------------------------------------------------------------------
table: torder
id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
1 1 wa 5 a limit 10 b 0 1test_clienta
2 2 wb 20 b limit 40 c 30 2test_clientb
3 3 wc 10 c limit 20 a 10 3test_clientb
--------------------------------------------------------------------------------
table: tmsg
--------------------------------------------------------------------------------
1:Primitive id:1 from test_clienta: OK
2:Primitive id:2 from test_clientb: OK
3:Cycle id:1 Exchange id:3 for wb @test_clientb:
3:mvt_from wb @test_clientb : 10 'c' -> wc @test_clientb
2:mvt_to wb @test_clientb : 10 'b' <- wa @test_clienta
stock id:2 remaining after exchange: 30 'c'
4:Cycle id:1 Exchange id:1 for wc @test_clientb:
1:mvt_from wc @test_clientb : 10 'a' -> wa @test_clienta
3:mvt_to wc @test_clientb : 10 'c' <- wb @test_clientb
stock id:3 remaining after exchange: 10 'a'
5:Cycle id:1 Exchange id:2 for wa @test_clienta:
2:mvt_from wa @test_clienta : 10 'b' -> wb @test_clientb
1:mvt_to wa @test_clienta : 10 'a' <- wc @test_clientb
stock id:1 remaining after exchange: 0 'b'
6:Primitive id:3 from test_clientb: OK
diff --git a/src/test/expected/tu_4.res b/src/test/expected/tu_4.res
index fdabf22..88a3478 100644
--- a/src/test/expected/tu_4.res
+++ b/src/test/expected/tu_4.res
@@ -1,63 +1,62 @@
-- Trilateral exchange between owners with two owners
---------------------------------------------------------
--variables
--USER:admin
-select * from market.tvar order by name;
+select * from market.tvar where name != 'OC_CURRENT_OPENED' order by name;
name value
+---------+---------
INSTALLED 1
-OC_CURRENT_OPENED 0
OC_CURRENT_PHASE 102
STACK_EXECUTED 0
STACK_TOP 0
-- The profit is shared equally between wa and wb
---------------------------------------------------------
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
--USER:test_clienta
SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',20);
id error
+---------+---------
1 (0,)
--USER:test_clientb
SELECT * FROM market.fsubmitorder('limit','wb','b', 20,'c',40);
id error
+---------+---------
2 (0,)
SELECT * FROM market.fsubmitorder('limit','wb','c', 10,'a',20);
id error
+---------+---------
3 (0,)
--Trilateral exchange expected
--------------------------------------------------------------------------------
table: torder
id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
1 1 wa 5 a limit 20 b 0 1test_clienta
2 2 wb 20 b limit 40 c 20 2test_clientb
3 3 wb 10 c limit 20 a 0 2test_clientb
--------------------------------------------------------------------------------
table: tmsg
--------------------------------------------------------------------------------
1:Primitive id:1 from test_clienta: OK
2:Primitive id:2 from test_clientb: OK
3:Cycle id:1 Exchange id:3 for wb @test_clientb:
3:mvt_from wb @test_clientb : 20 'c' -> wb @test_clientb
2:mvt_to wb @test_clientb : 20 'b' <- wa @test_clienta
stock id:2 remaining after exchange: 20 'c'
4:Cycle id:1 Exchange id:1 for wb @test_clientb:
1:mvt_from wb @test_clientb : 20 'a' -> wa @test_clienta
3:mvt_to wb @test_clientb : 20 'c' <- wb @test_clientb
stock id:3 remaining after exchange: 0 'a'
5:Cycle id:1 Exchange id:2 for wa @test_clienta:
2:mvt_from wa @test_clienta : 20 'b' -> wb @test_clientb
1:mvt_to wa @test_clienta : 20 'a' <- wb @test_clientb
stock id:1 remaining after exchange: 0 'b'
6:Primitive id:3 from test_clientb: OK
diff --git a/src/test/expected/tu_5.res b/src/test/expected/tu_5.res
index fda10b9..4f8e5c3 100644
--- a/src/test/expected/tu_5.res
+++ b/src/test/expected/tu_5.res
@@ -1,62 +1,61 @@
-- Trilateral exchange by one owners
---------------------------------------------------------
--variables
--USER:admin
-select * from market.tvar order by name;
+select * from market.tvar where name != 'OC_CURRENT_OPENED' order by name;
name value
+---------+---------
INSTALLED 1
-OC_CURRENT_OPENED 0
OC_CURRENT_PHASE 102
STACK_EXECUTED 0
STACK_TOP 0
---------------------------------------------------------
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
--USER:test_clienta
SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',10);
id error
+---------+---------
1 (0,)
--USER:test_clientb
SELECT * FROM market.fsubmitorder('limit','wa','b', 20,'c',40);
id error
+---------+---------
2 (0,)
SELECT * FROM market.fsubmitorder('limit','wa','c', 10,'a',20);
id error
+---------+---------
3 (0,)
--Trilateral exchange expected
--------------------------------------------------------------------------------
table: torder
id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
1 1 wa 5 a limit 10 b 0 1test_clienta
2 2 wa 20 b limit 40 c 30 1test_clientb
3 3 wa 10 c limit 20 a 10 1test_clientb
--------------------------------------------------------------------------------
table: tmsg
--------------------------------------------------------------------------------
1:Primitive id:1 from test_clienta: OK
2:Primitive id:2 from test_clientb: OK
3:Cycle id:1 Exchange id:3 for wa @test_clientb:
3:mvt_from wa @test_clientb : 10 'c' -> wa @test_clientb
2:mvt_to wa @test_clientb : 10 'b' <- wa @test_clienta
stock id:2 remaining after exchange: 30 'c'
4:Cycle id:1 Exchange id:1 for wa @test_clientb:
1:mvt_from wa @test_clientb : 10 'a' -> wa @test_clienta
3:mvt_to wa @test_clientb : 10 'c' <- wa @test_clientb
stock id:3 remaining after exchange: 10 'a'
5:Cycle id:1 Exchange id:2 for wa @test_clienta:
2:mvt_from wa @test_clienta : 10 'b' -> wa @test_clientb
1:mvt_to wa @test_clienta : 10 'a' <- wa @test_clientb
stock id:1 remaining after exchange: 0 'b'
6:Primitive id:3 from test_clientb: OK
diff --git a/src/test/expected/tu_6.res b/src/test/expected/tu_6.res
index 71e1776..fd9febe 100644
--- a/src/test/expected/tu_6.res
+++ b/src/test/expected/tu_6.res
@@ -1,109 +1,108 @@
-- 7-exchange with 7 partners
---------------------------------------------------------
--variables
--USER:admin
-select * from market.tvar order by name;
+select * from market.tvar where name != 'OC_CURRENT_OPENED' order by name;
name value
+---------+---------
INSTALLED 1
-OC_CURRENT_OPENED 0
OC_CURRENT_PHASE 102
STACK_EXECUTED 0
STACK_TOP 0
---------------------------------------------------------
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
--USER:test_clienta
SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',10);
id error
+---------+---------
1 (0,)
--USER:test_clientb
SELECT * FROM market.fsubmitorder('limit','wb','b', 20,'c',40);
id error
+---------+---------
2 (0,)
SELECT * FROM market.fsubmitorder('limit','wc','c', 20,'d',40);
id error
+---------+---------
3 (0,)
SELECT * FROM market.fsubmitorder('limit','wd','d', 20,'e',40);
id error
+---------+---------
4 (0,)
SELECT * FROM market.fsubmitorder('limit','we','e', 20,'f',40);
id error
+---------+---------
5 (0,)
SELECT * FROM market.fsubmitorder('limit','wf','f', 20,'g',40);
id error
+---------+---------
6 (0,)
SELECT * FROM market.fsubmitorder('limit','wg','g', 20,'a',40);
id error
+---------+---------
7 (0,)
--------------------------------------------------------------------------------
table: torder
id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
1 1 wa 5 a limit 10 b 0 1test_clienta
2 2 wb 20 b limit 40 c 30 2test_clientb
3 3 wc 20 c limit 40 d 30 3test_clientb
4 4 wd 20 d limit 40 e 30 4test_clientb
5 5 we 20 e limit 40 f 30 5test_clientb
6 6 wf 20 f limit 40 g 30 6test_clientb
7 7 wg 20 g limit 40 a 30 7test_clientb
--------------------------------------------------------------------------------
table: tmsg
--------------------------------------------------------------------------------
1:Primitive id:1 from test_clienta: OK
2:Primitive id:2 from test_clientb: OK
3:Primitive id:3 from test_clientb: OK
4:Primitive id:4 from test_clientb: OK
5:Primitive id:5 from test_clientb: OK
6:Primitive id:6 from test_clientb: OK
7:Cycle id:1 Exchange id:7 for wf @test_clientb:
7:mvt_from wf @test_clientb : 10 'g' -> wg @test_clientb
6:mvt_to wf @test_clientb : 10 'f' <- we @test_clientb
stock id:6 remaining after exchange: 30 'g'
8:Cycle id:1 Exchange id:1 for wg @test_clientb:
1:mvt_from wg @test_clientb : 10 'a' -> wa @test_clienta
7:mvt_to wg @test_clientb : 10 'g' <- wf @test_clientb
stock id:7 remaining after exchange: 30 'a'
9:Cycle id:1 Exchange id:2 for wa @test_clienta:
2:mvt_from wa @test_clienta : 10 'b' -> wb @test_clientb
1:mvt_to wa @test_clienta : 10 'a' <- wg @test_clientb
stock id:1 remaining after exchange: 0 'b'
10:Cycle id:1 Exchange id:3 for wb @test_clientb:
3:mvt_from wb @test_clientb : 10 'c' -> wc @test_clientb
2:mvt_to wb @test_clientb : 10 'b' <- wa @test_clienta
stock id:2 remaining after exchange: 30 'c'
11:Cycle id:1 Exchange id:4 for wc @test_clientb:
4:mvt_from wc @test_clientb : 10 'd' -> wd @test_clientb
3:mvt_to wc @test_clientb : 10 'c' <- wb @test_clientb
stock id:3 remaining after exchange: 30 'd'
12:Cycle id:1 Exchange id:5 for wd @test_clientb:
5:mvt_from wd @test_clientb : 10 'e' -> we @test_clientb
4:mvt_to wd @test_clientb : 10 'd' <- wc @test_clientb
stock id:4 remaining after exchange: 30 'e'
13:Cycle id:1 Exchange id:6 for we @test_clientb:
6:mvt_from we @test_clientb : 10 'f' -> wf @test_clientb
5:mvt_to we @test_clientb : 10 'e' <- wd @test_clientb
stock id:5 remaining after exchange: 30 'f'
14:Primitive id:7 from test_clientb: OK
diff --git a/src/test/expected/tu_7.res b/src/test/expected/tu_7.res
index 9dcab26..209f783 100644
--- a/src/test/expected/tu_7.res
+++ b/src/test/expected/tu_7.res
@@ -1,80 +1,79 @@
-- Competition between bilateral and multilateral exchange 1/2
---------------------------------------------------------
--variables
--USER:admin
-select * from market.tvar order by name;
+select * from market.tvar where name != 'OC_CURRENT_OPENED' order by name;
name value
+---------+---------
INSTALLED 1
-OC_CURRENT_OPENED 0
OC_CURRENT_PHASE 102
STACK_EXECUTED 0
STACK_TOP 0
---------------------------------------------------------
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
--USER:test_clientb
SELECT * FROM market.fsubmitorder('limit','wb','b', 80,'a',40);
id error
+---------+---------
1 (0,)
SELECT * FROM market.fsubmitorder('limit','wc','b', 20,'d',40);
id error
+---------+---------
2 (0,)
SELECT * FROM market.fsubmitorder('limit','wd','d', 20,'a',40);
id error
+---------+---------
3 (0,)
--USER:test_clienta
SELECT * FROM market.fsubmitorder('limit','wa','a', 40,'b',80);
id error
+---------+---------
4 (0,)
-- omega better for the trilateral exchange
-- it wins, the rest is used with a bilateral exchange
--------------------------------------------------------------------------------
table: torder
id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
1 1 wb 80 b limit 40 a 20 1test_clientb
2 2 wc 20 b limit 40 d 0 2test_clientb
3 3 wd 20 d limit 40 a 0 3test_clientb
4 4 wa 40 a limit 80 b 0 4test_clienta
--------------------------------------------------------------------------------
table: tmsg
--------------------------------------------------------------------------------
1:Primitive id:1 from test_clientb: OK
2:Primitive id:2 from test_clientb: OK
3:Primitive id:3 from test_clientb: OK
4:Cycle id:1 Exchange id:3 for wd @test_clientb:
3:mvt_from wd @test_clientb : 40 'a' -> wa @test_clienta
2:mvt_to wd @test_clientb : 40 'd' <- wc @test_clientb
stock id:3 remaining after exchange: 0 'a'
5:Cycle id:1 Exchange id:1 for wa @test_clienta:
1:mvt_from wa @test_clienta : 40 'b' -> wc @test_clientb
3:mvt_to wa @test_clienta : 40 'a' <- wd @test_clientb
stock id:4 remaining after exchange: 40 'b'
6:Cycle id:1 Exchange id:2 for wc @test_clientb:
2:mvt_from wc @test_clientb : 40 'd' -> wd @test_clientb
1:mvt_to wc @test_clientb : 40 'b' <- wa @test_clienta
stock id:2 remaining after exchange: 0 'd'
7:Cycle id:4 Exchange id:5 for wb @test_clientb:
5:mvt_from wb @test_clientb : 20 'a' -> wa @test_clienta
4:mvt_to wb @test_clientb : 40 'b' <- wa @test_clienta
stock id:1 remaining after exchange: 20 'a'
8:Cycle id:4 Exchange id:4 for wa @test_clienta:
4:mvt_from wa @test_clienta : 40 'b' -> wb @test_clientb
5:mvt_to wa @test_clienta : 20 'a' <- wb @test_clientb
stock id:4 remaining after exchange: 0 'b'
9:Primitive id:4 from test_clienta: OK
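The comment "omega better for the trilateral exchange" in tu_7 can be made concrete. Taking the omega of a cycle to be the product of qtt_prov/qtt_requ over its orders (an assumption about how cycles are ranked, not confirmed by this excerpt), the trilateral cycle wa → wc → wd dominates the bilateral wa ↔ wb, which matches the settlement order shown above:

```python
def omega(orders):
    # omega of a cycle: product of provided/required ratios over its orders
    p = 1.0
    for qtt_requ, qtt_prov in orders:
        p *= qtt_prov / float(qtt_requ)
    return p

# (qtt_requ, qtt_prov) pairs transcribed from the tu_7 torder dump above
trilateral = omega([(40, 80), (20, 40), (20, 40)])  # wa -> wc -> wd -> wa
bilateral  = omega([(40, 80), (80, 40)])            # wa <-> wb

assert trilateral == 8.0
assert bilateral == 1.0
assert trilateral > bilateral  # the trilateral cycle is settled first
```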
diff --git a/src/test/py/run.py b/src/test/py/run.py
index 48d3c63..6d8441b 100755
--- a/src/test/py/run.py
+++ b/src/test/py/run.py
@@ -1,488 +1,158 @@
# -*- coding: utf-8 -*-
'''
Test framework for tu_*
***************************************************************************
execution:
reset_market.sql
submitted:
list of t_*.sql primitives
results:
state of the order book
state of tmsg
comparison of expected/obtained
in src/test/
run.py
sql/reset_market.sql
sql/t_*.sql
expected/t_*.res
obtained/t_*.res
loop for each t_*.sql:
reset_market.sql
execute t_*.sql
dump the results into obtained/t_*.res
compare expected and obtained
'''
import sys,os,time,logging
import psycopg2
import psycopg2.extensions
import traceback
import srvob_conf
import molet
import utilt
-
+import test_ti
import sys
-sys.path = [os.path.abspath(os.path.join(os.path.abspath(__file__),"../../../../simu/liquid"))]+ sys.path
-#print sys.path
-import distrib
PARLEN=80
prtest = utilt.PrTest(PARLEN,'=')
-def get_paths():
- curdir = os.path.abspath(__file__)
- curdir = os.path.dirname(curdir)
- curdir = os.path.dirname(curdir)
- sqldir = os.path.join(curdir,'sql')
- resultsdir,expecteddir = os.path.join(curdir,'results'),os.path.join(curdir,'expected')
- molet.mkdir(resultsdir,ignoreWarning = True)
- molet.mkdir(expecteddir,ignoreWarning = True)
- tup = (curdir,sqldir,resultsdir,expecteddir)
- return tup
-
def tests_tu(options):
titre_test = "UNDEFINED"
- curdir,sqldir,resultsdir,expecteddir = get_paths()
+ curdir,sqldir,resultsdir,expecteddir = test_ti.get_paths()
try:
utilt.wait_for_true(srvob_conf.dbBO,0.1,"SELECT value=102,value FROM market.tvar WHERE name='OC_CURRENT_PHASE'",
msg="Waiting for market opening")
except psycopg2.OperationalError,e:
print "Please adjust DB_NAME,DB_USER,DB_PWD,DB_PORT parameters of the file src/test/py/srvob_conf.py"
print "The test program could not connect to the market"
exit(1)
if options.test is None:
_fts = [f for f in os.listdir(sqldir) if f.startswith('tu_') and f[-4:]=='.sql']
_fts.sort(lambda x,y: cmp(x,y))
else:
_nt = options.test + '.sql'
_fts = os.path.join(sqldir,_nt)
if not os.path.exists(_fts):
print 'This test \'%s\' was not found' % _fts
return
else:
_fts = [_nt]
_tok,_terr,_num_test = 0,0,0
prtest.title('running tests on database "%s"' % (srvob_conf.DB_NAME,))
#print '='*PARLEN
print ''
print 'Num\tStatus\tName'
print ''
for file_test in _fts: # itération on test cases
_nom_test = file_test[:-4]
_terr +=1
_num_test +=1
file_result = file_test[:-4]+'.res'
_fte = os.path.join(resultsdir,file_result)
_fre = os.path.join(expecteddir,file_result)
with open(_fte,'w') as f:
cur = None
dump = utilt.Dumper(srvob_conf.dbBO,options,None)
titre_test = utilt.exec_script(dump,sqldir,'reset_market.sql')
dump = utilt.Dumper(srvob_conf.dbBO,options,f)
titre_test = utilt.exec_script(dump,sqldir,file_test)
utilt.wait_for_true(srvob_conf.dbBO,20,"SELECT market.fstackdone()")
conn = molet.DbData(srvob_conf.dbBO)
try:
with molet.DbCursor(conn) as cur:
dump.torder(cur)
dump.tmsg(cur)
finally:
conn.close()
if(os.path.exists(_fre)):
if(utilt.files_clones(_fte,_fre)):
_tok +=1
_terr -=1
print '%i\tY\t%s\t%s' % (_num_test,_nom_test,titre_test)
else:
print '%i\tN\t%s\t%s' % (_num_test,_nom_test,titre_test)
else:
print '%i\t?\t%s\t%s' % (_num_test,_nom_test,titre_test)
# display global results
print ''
print 'Test status: (Y)expected ==results, (N)expected!=results,(F)failed, (?)expected undefined'
prtest.line()
if(_terr == 0):
prtest.center('\tAll %i tests passed' % _tok)
else:
prtest.center('\t%i tests KO, %i passed' % (_terr,_tok))
prtest.line()
-import random
-import csv
-import simplejson
-import sys
-
-MAX_ORDER = 100
-def build_ti(options):
- ''' build a .sql file with a bump of submit
- '''
- #conf = srvob_conf.dbBO
- curdir,sqldir,resultsdir,expecteddir = get_paths()
- _frs = os.path.join(sqldir,'test_ti.csv')
-
- MAX_OWNER = 10
- MAX_QLT = 20
- QTT_PROV = 10000
-
- prtest.title('generating tests cases for quotes')
- def gen(nborders,frs):
- for i in range(nborders):
- w = random.randint(1,MAX_OWNER)
- qlt_prov,qlt_requ = distrib.couple(distrib.uniformQlt,MAX_QLT)
- r = random.random()+0.5
- qtt_requ = int(QTT_PROV * r)
- lb= 'limit' if (random.random()>0.9) else 'best'
- frs.writerow(['admin',lb,'w%i'%w,'q%i'%qlt_requ,qtt_requ,'q%i'%qlt_prov,QTT_PROV])
-
- with open(_frs,'w') as f:
- spamwriter = csv.writer(f)
- gen(MAX_ORDER,spamwriter)
-
- molet.removeFile(os.path.join(expecteddir,'test_ti.res'),ignoreWarning = True)
- prtest.center('done, test_ti.res removed')
- prtest.line()
-
-def test_ti(options):
-
- _reset,titre_test = options.test_ti_reset,''
-
- curdir,sqldir,resultsdir,expecteddir = get_paths()
- prtest.title('running test_ti on database "%s"' % (srvob_conf.DB_NAME,))
-
- dump = utilt.Dumper(srvob_conf.dbBO,options,None)
- if _reset:
- print '\tReset: Clearing market ...'
- titre_test = utilt.exec_script(dump,sqldir,'reset_market.sql')
- print '\t\tDone'
-
- fn = os.path.join(sqldir,'test_ti.csv')
- if( not os.path.exists(fn)):
- raise ValueError('The data %s is not found' % fn)
-
- with open(fn,'r') as f:
- spamreader = csv.reader(f)
- values_prov = {}
- _nbtest = 0
- for row in spamreader:
- _nbtest +=1
- qua_prov,qtt_prov = row[5],row[6]
- if not qua_prov in values_prov.keys():
- values_prov[qua_prov] = 0
- values_prov[qua_prov] = values_prov[qua_prov] + int(qtt_prov)
-
- #print values_prov
-
- cur_login = None
- titre_test = None
-
- inst = utilt.ExecInst(dump)
-
-
- user = None
- fmtorder = "SELECT * from market.fsubmitorder('%s','%s','%s',%s,'%s',%s)"
- fmtquote = "SELECT * from market.fsubmitquote('%s','%s','%s',%s,'%s',%s)"
-
- fmtjsosr = '''SELECT jso from market.tmsg
- where json_extract_path_text(jso,'id')::int=%i and typ='response' '''
-
- fmtjsose = """SELECT json_extract_path_text(jso,'orde','id')::int id,
- sum(json_extract_path_text(jso,'mvt_from','qtt')::bigint) qtt_prov,
- sum(json_extract_path_text(jso,'mvt_to','qtt')::bigint) qtt_requ
- from market.tmsg
- where json_extract_path_text(jso,'orig')::int=%i
- and json_extract_path_text(jso,'orde','id')::int=%i
- and typ='exchange' group by json_extract_path_text(jso,'orde','id')::int """
- '''
- the order that produced the exchange has the qualities expected
- '''
- i= 0
- if _reset:
- print '\tSubmission: sending a series of %i tests where a random set of arguments' % _nbtest
- print '\t\tis used to submit a quote, then an order'
- with open(fn,'r') as f:
-
- spamreader = csv.reader(f)
-
- compte = utilt.Delai()
- for row in spamreader:
- user = row[0]
- params = tuple(row[1:])
-
-
- cursor = inst.exe( fmtquote % params,user)
- cursor = inst.exe( fmtorder % params,user)
- i +=1
- if i % 100 == 0:
- prtest.progress(i/float(_nbtest))
-
- delai = compte.getSecs()
-
- print '\t\t%i quote & order primitives in %f seconds' % (_nbtest*2,delai)
- print '\tExecution: Waiting for end of execution ...'
- #utilt.wait_for_true(srvob_conf.dbBO,1000,"SELECT market.fstackdone()",prtest=prtest)
- utilt.wait_for_empty_stack(srvob_conf.dbBO,prtest)
- delai = compte.getSecs()
- print '\t\t Done: mean time per primitive %f seconds' % (delai/(_nbtest*2),)
-
- fmtiter = '''SELECT json_extract_path_text(jso,'id')::int id,json_extract_path_text(jso,'primitive','type') typ
- from market.tmsg where typ='response' and json_extract_path_text(jso,'primitive','kind')='quote'
- order by id asc limit 10 offset %i'''
- i = 0
- _notnull,_ko,_limit,_limitko = 0,0,0,0
- print '\tChecking: identity of quote result and order result for each %i test cases' % _nbtest
- print '\t\tusing the content of market.tmsg'
- while True:
- cursor = inst.exe( fmtiter % i,user)
-
- vec = []
- for re in cursor:
- vec.append(re)
-
- l = len(vec)
-
- if l == 0:
- break
-
- for idq,_type in vec:
- i += 1
- if _type == 'limit':
- _limit += 1
-
- # result of the quote for idq
- _cur = inst.exe(fmtjsosr %idq,user)
- res = _cur.fetchone()
- res_quote =simplejson.loads(res[0])
- expected = res_quote['result']['qtt_give'],res_quote['result']['qtt_reci']
-
- #result of the order for idq+1
- _cur = inst.exe(fmtjsose %(idq+1,idq+1),user)
- res = _cur.fetchone()
-
- if res is None:
- result = 0,0
- else:
- ido_,qtt_prov_,qtt_reci_ = res
- result = qtt_prov_,qtt_reci_
- _notnull +=1
- if _type == 'limit':
- if float(expected[0])/float(expected[1]) < float(qtt_prov_)/float(qtt_reci_):
- _limitko +=1
-
- if result != expected:
- _ko += 1
- print idq,res,res_quote
-
- if i %100 == 0:
- prtest.progress(i/float(_nbtest))
- '''
- if i == 100:
- print '\t\t.',
- else:
- print '.',
- sys.stdout.flush()
-
- if(_ko != 0): _errs = ' - %i errors' %_ko
- else: _errs = ''
- print ('\t\t%i quote & order\t%i quotes returned a result %s' % (i-_ko,_notnull,_errs))
- '''
- _valuesko = check_values(inst,values_prov,user)
- prtest.title('Results checkings')
-
- print ''
- print '\t\t%i\torders returned a result different from the previous quote' % _ko
- print '\t\t\twith the same arguments\n'
-
- print '\t\t%i\tlimit orders returned a result where the limit is not observed\n' % _limitko
- print '\t\t%i\tqualities where the quantity is not preserved by the market\n' % _valuesko
-
- prtest.line()
-
- if(_ko == 0 and _limitko == 0 and _valuesko == 0):
- prtest.center('\tAll %i tests passed' % i)
- else:
- prtest.center('\tSome of %i tests failed' % (i,))
- prtest.line()
-
- inst.close()
- return titre_test
-
-def check_values(inst,values_input,user):
- '''
- Values_input is for each quality, the sum of quantities submitted to the market
- Values_remain is for each quality, the sum of quantities remaining in the order book
- Values_output is for each quality, the sum of quantities of mvt_from.qtt of tmsg
- Checks that for each quality q:
- Values_input[q] == Values_remain[q] + Values_output[q]
- '''
- sql = "select (ord).qua_prov,sum((ord).qtt) from market.torder where (ord).oid=(ord).id group by (ord).qua_prov"
- cursor = inst.exe( sql,user)
- values_remain = {}
- for qua_prov,qtt in cursor:
- values_remain[qua_prov] = qtt
-
- sql = '''select json_extract_path_text(jso,'mvt_from','nat'),sum(json_extract_path_text(jso,'mvt_from','qtt')::bigint)
- from market.tmsg where typ='exchange'
- group by json_extract_path_text(jso,'mvt_from','nat')
- '''
- cursor = inst.exe( sql,user)
- values_output = {}
- for qua_prov,qtt in cursor:
- values_output[qua_prov] = qtt
-
- _errs = 0
- for qua,vin in values_input.iteritems():
- vexpect = values_output.get(qua,0)+values_remain.get(qua,0)
- if vin != vexpect:
- print qua,vin,values_output.get(qua,0),values_remain.get(qua,0)
- _errs += 1
- # print '%i errors'% _errs
- return _errs
-
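The invariant that check_values enforces — for each quality, the quantity submitted equals what remains in the order book plus what left through exchanges — can be sketched independently of the database. A simplified, dictionary-only version with hypothetical quantities:

```python
def check_conservation(values_input, values_remain, values_output):
    # For each quality q submitted to the market, the provided quantity must
    # either remain in the order book or have left it through an exchange:
    #   values_input[q] == values_remain.get(q, 0) + values_output.get(q, 0)
    errors = 0
    for qua, vin in values_input.items():
        if vin != values_remain.get(qua, 0) + values_output.get(qua, 0):
            errors += 1
    return errors

# hypothetical quantities for qualities 'a' and 'b'
assert check_conservation({"a": 100, "b": 50},
                          {"a": 40}, {"a": 60, "b": 50}) == 0
assert check_conservation({"a": 100}, {"a": 30}, {"a": 60}) == 1  # 10 'a' lost
```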
-def test_ti_old(options):
-
- curdir,sqldir,resultsdir,expecteddir = get_paths()
- prtest.title('running test_ti on database "%s"' % (srvob_conf.DB_NAME,))
-
- dump = utilt.Dumper(srvob_conf.dbBO,options,None)
- titre_test = utilt.exec_script(dump,sqldir,'reset_market.sql')
-
- fn = os.path.join(sqldir,'test_ti.csv')
- if( not os.path.exists(fn)):
- raise ValueError('The data %s is not found' % fn)
-
- cur_login = None
- titre_test = None
-
- inst = utilt.ExecInst(dump)
- quote = False
-
- with open(fn,'r') as f:
- spamreader = csv.reader(f)
- i= 0
- usr = None
- fmtorder = "SELECT * from market.fsubmitorder('%s','%s','%s',%s,'%s',%s)"
- fmtquote = "SELECT * from market.fsubmitquote('%s','%s','%s',%s,'%s',%s)"
- fmtjsosr = "SELECT jso from market.tmsg where json_extract_path_text(jso,'id')::int=%i and typ='response'"
- fmtjsose = """SELECT json_extract_path_text(jso,'orde','id')::int id,
- sum(json_extract_path_text(jso,'mvt_from','qtt')::bigint) qtt_prov,
- sum(json_extract_path_text(jso,'mvt_to','qtt')::bigint) qtt_requ
- from market.tmsg
- where json_extract_path_text(jso,'orde','id')::int=%i
- and typ='exchange' group by json_extract_path_text(jso,'orde','id')::int """
- '''
- the order that produced the exchange has the qualities expected
- '''
- _notnull,_ko = 0,0
- for row in spamreader:
- i += 1
- user = row[0]
- params = tuple(row[1:])
-
- cursor = inst.exe( fmtquote % params,user)
- idq,err = cursor.fetchone()
- if err != '(0,)':
- raise ValueError('Quote returned an error "%s"' % err)
- utilt.wait_for_true(srvob_conf.dbBO,20,"SELECT market.fstackdone()")
- cursor = inst.exe(fmtjsosr %idq,user)
- res = cursor.fetchone() # result of the quote
- res_quote =simplejson.loads(res[0])
- expected = res_quote['result']['qtt_give'],res_quote['result']['qtt_reci']
- #print res_quote
- #print ''
-
- cursor = inst.exe( fmtorder % params,user)
- ido,err = cursor.fetchone()
- if err != '(0,)':
- raise ValueError('Order returned an error "%s"' % err)
- utilt.wait_for_true(srvob_conf.dbBO,20,"SELECT market.fstackdone()")
- cursor = inst.exe(fmtjsose %ido,user)
- res = cursor.fetchone()
-
- if res is None:
- result = 0,0
- else:
- ido_,qtt_prov_,qtt_reci_ = res
- result = qtt_prov_,qtt_reci_
- _notnull +=1
-
- if result != expected:
- _ko += 1
- print qtt_prov_,qtt_reci_,res_quote
-
- if i %100 == 0:
- if(_ko != 0): _errs = ' - %i errors' %_ko
- else: _errs = ''
- print ('\t%i quote & order - %i quotes returned a result %s' % (i-_ko,_notnull,_errs))
-
- if(_ko == 0):
- prtest.title(' all %i tests passed' % i)
- else:
- prtest.title('%i checked %i tests failed' % (i,_ko))
-
-
- inst.close()
- return titre_test
-
-
-
'''---------------------------------------------------------------------------
arguments
---------------------------------------------------------------------------'''
from optparse import OptionParser
import os
def main():
#global options
usage = """usage: %prog [options]
tests """
parser = OptionParser(usage)
parser.add_option("-t","--test",action="store",type="string",dest="test",help="test",
default= None)
parser.add_option("-v","--verbose",action="store_true",dest="verbose",help="verbose",default=False)
- parser.add_option("-b","--build",action="store_true",dest="build",help="generates random test cases for test_ti",default=False)
+ parser.add_option("-b","--build",dest="build",type="int",help="generates random test cases for test_ti",default=0)
parser.add_option("-i","--ti",action="store_true",dest="test_ti",help="execute test_ti",default=False)
parser.add_option("-r","--reset",action="store_true",dest="test_ti_reset",help="clean before execution test_ti",default=False)
+ #parser.add_option("-x","--x",dest="cst",type="int",help="test",default=1)
+
(options, args) = parser.parse_args()
# um = os.umask(0177) # u=rw,g=,o=
if options.build:
- build_ti(options)
+ test_ti.build_ti(options)
elif options.test_ti:
- test_ti(options)
+ test_ti.test_ti(options)
else:
tests_tu(options)
if __name__ == "__main__":
main()
diff --git a/src/test/py/test_ti.py b/src/test/py/test_ti.py
new file mode 100644
index 0000000..fd349e2
--- /dev/null
+++ b/src/test/py/test_ti.py
@@ -0,0 +1,358 @@
+# -*- coding: utf-8 -*-
+
+import sys,os,time,logging
+import psycopg2
+import psycopg2.extensions
+import traceback
+
+import srvob_conf
+import molet
+import utilt
+
+import sys
+sys.path = [os.path.abspath(os.path.join(os.path.abspath(__file__),"../../../../simu/liquid"))]+ sys.path
+#print sys.path
+import distrib
+
+PARLEN=80
+prtest = utilt.PrTest(PARLEN,'=')
+
+def get_paths():
+ curdir = os.path.abspath(__file__)
+ curdir = os.path.dirname(curdir)
+ curdir = os.path.dirname(curdir)
+ sqldir = os.path.join(curdir,'sql')
+ resultsdir,expecteddir = os.path.join(curdir,'results'),os.path.join(curdir,'expected')
+ molet.mkdir(resultsdir,ignoreWarning = True)
+ molet.mkdir(expecteddir,ignoreWarning = True)
+ tup = (curdir,sqldir,resultsdir,expecteddir)
+ return tup
+
+import random
+import csv
+import simplejson
+import sys
+
+def build_ti(options):
+    ''' build a .csv file with a batch of submit primitives
+    options.build is the number of test cases to be generated
+    '''
+ #print options.build
+ #return
+ #conf = srvob_conf.dbBO
+ curdir,sqldir,resultsdir,expecteddir = get_paths()
+ _frs = os.path.join(sqldir,'test_ti.csv')
+
+ MAX_OWNER = 10
+ MAX_QLT = 20
+ QTT_PROV = 10000
+
+ prtest.title('generating tests cases for quotes')
+ def gen(nborders,frs):
+ for i in range(nborders):
+
+ # choose an owner
+ w = random.randint(1,MAX_OWNER)
+
+ # choose a couple of qualities
+ qlt_prov,qlt_requ = distrib.couple(distrib.uniformQlt,MAX_QLT)
+
+ # choose an omega between 0.5 and 1.5
+ r = random.random()+0.5
+ qtt_requ = int(QTT_PROV * r)
+
+ # 10% of orders are limit
+ lb= 'limit' if (random.random()>0.9) else 'best'
+
+ frs.writerow(['admin',lb,'w%i'%w,'q%i'%qlt_requ,qtt_requ,'q%i'%qlt_prov,QTT_PROV])
+
+ with open(_frs,'w') as f:
+ spamwriter = csv.writer(f)
+ gen(options.build,spamwriter)
+
+ if(molet.removeFile(os.path.join(expecteddir,'test_ti.res'),ignoreWarning = True)):
+ prtest.center('test_ti.res removed')
+
+ prtest.center('done')
+ prtest.line()
+
+def test_ti(options):
+
+ _reset,titre_test = options.test_ti_reset,''
+
+ curdir,sqldir,resultsdir,expecteddir = get_paths()
+ prtest.title('running test_ti on database "%s"' % (srvob_conf.DB_NAME,))
+
+ dump = utilt.Dumper(srvob_conf.dbBO,options,None)
+ if _reset:
+ print '\tReset: Clearing market ...'
+ titre_test = utilt.exec_script(dump,sqldir,'reset_market.sql')
+ print '\t\tDone'
+
+ fn = os.path.join(sqldir,'test_ti.csv')
+ if( not os.path.exists(fn)):
+        raise ValueError('The data file %s was not found' % fn)
+
+ with open(fn,'r') as f:
+ spamreader = csv.reader(f)
+ values_prov = {}
+ _nbtest = 0
+ for row in spamreader:
+ _nbtest +=1
+ qua_prov,qtt_prov = row[5],row[6]
+ if not qua_prov in values_prov.keys():
+ values_prov[qua_prov] = 0
+ values_prov[qua_prov] = values_prov[qua_prov] + int(qtt_prov)
+
+ #print values_prov
+
+ cur_login = None
+ titre_test = None
+
+ inst = utilt.ExecInst(dump)
+
+
+ user = None
+ fmtorder = "SELECT * from market.fsubmitorder('%s','%s','%s',%s,'%s',%s)"
+ fmtquote = "SELECT * from market.fsubmitquote('%s','%s','%s',%s,'%s',%s)"
+
+ fmtjsosr = '''SELECT jso from market.tmsg
+ where json_extract_path_text(jso,'id')::int=%i and typ='response' '''
+
+ fmtjsose = """SELECT json_extract_path_text(jso,'orde','id')::int id,
+ sum(json_extract_path_text(jso,'mvt_from','qtt')::bigint) qtt_prov,
+ sum(json_extract_path_text(jso,'mvt_to','qtt')::bigint) qtt_requ
+ from market.tmsg
+ where json_extract_path_text(jso,'orig')::int=%i
+ and json_extract_path_text(jso,'orde','id')::int=%i
+ and typ='exchange' group by json_extract_path_text(jso,'orde','id')::int """
+ '''
+ the order that produced the exchange has the qualities expected
+ '''
+ i= 0
+ if _reset:
+ print '\tSubmission: sending a series of %i tests where a random set of arguments' % _nbtest
+ print '\t\tis used to submit a quote, then an order'
+ with open(fn,'r') as f:
+
+ spamreader = csv.reader(f)
+
+ compte = utilt.Delai()
+ for row in spamreader:
+ user = row[0]
+ params = tuple(row[1:])
+
+
+ cursor = inst.exe( fmtquote % params,user)
+ cursor = inst.exe( fmtorder % params,user)
+ i +=1
+ if i % 100 == 0:
+ prtest.progress(i/float(_nbtest))
+
+ delai = compte.getSecs()
+
+ print '\t\t%i quote & order primitives in %f seconds' % (_nbtest*2,delai)
+ print '\tExecution: Waiting for end of execution ...'
+ #utilt.wait_for_true(srvob_conf.dbBO,1000,"SELECT market.fstackdone()",prtest=prtest)
+ utilt.wait_for_empty_stack(srvob_conf.dbBO,prtest)
+ delai = compte.getSecs()
+ print '\t\t Done: mean time per primitive %f seconds' % (delai/(_nbtest*2),)
+
+ fmtiter = '''SELECT json_extract_path_text(jso,'id')::int id,json_extract_path_text(jso,'primitive','type') typ
+ from market.tmsg where typ='response' and json_extract_path_text(jso,'primitive','kind')='quote'
+ order by id asc limit 10 offset %i'''
+ i = 0
+ _notnull,_ko,_limit,_limitko = 0,0,0,0
+    print '\tChecking: identity of quote and order results for each of the %i test cases' % _nbtest
+ print '\t\tusing the content of market.tmsg'
+ while True:
+ cursor = inst.exe( fmtiter % i,user)
+
+ vec = []
+ for re in cursor:
+ vec.append(re)
+
+ l = len(vec)
+
+ if l == 0:
+ break
+
+ for idq,_type in vec:
+ i += 1
+ if _type == 'limit':
+ _limit += 1
+
+ # result of the quote for idq
+ _cur = inst.exe(fmtjsosr %idq,user)
+ res = _cur.fetchone()
+ res_quote =simplejson.loads(res[0])
+ expected = res_quote['result']['qtt_give'],res_quote['result']['qtt_reci']
+
+ #result of the order for idq+1
+ _cur = inst.exe(fmtjsose %(idq+1,idq+1),user)
+ res = _cur.fetchone()
+
+ if res is None:
+ result = 0,0
+ else:
+ ido_,qtt_prov_,qtt_reci_ = res
+ result = qtt_prov_,qtt_reci_
+ _notnull +=1
+ if _type == 'limit':
+ if float(expected[0])/float(expected[1]) < float(qtt_prov_)/float(qtt_reci_):
+ _limitko +=1
+
+ if result != expected:
+ _ko += 1
+ print idq,res,res_quote
+
+ if i %100 == 0:
+ prtest.progress(i/float(_nbtest))
+ '''
+ if i == 100:
+ print '\t\t.',
+ else:
+ print '.',
+ sys.stdout.flush()
+
+ if(_ko != 0): _errs = ' - %i errors' %_ko
+ else: _errs = ''
+ print ('\t\t%i quote & order\t%i quotes returned a result %s' % (i-_ko,_notnull,_errs))
+ '''
+ _valuesko = check_values(inst,values_prov,user)
+ prtest.title('Results checkings')
+
+ print ''
+ print '\t\t%i\torders returned a result different from the previous quote' % _ko
+ print '\t\t\twith the same arguments\n'
+
+ print '\t\t%i\tlimit orders returned a result where the limit is not observed\n' % _limitko
+ print '\t\t%i\tqualities where the quantity is not preserved by the market\n' % _valuesko
+
+ prtest.line()
+
+ if(_ko == 0 and _limitko == 0 and _valuesko == 0):
+ prtest.center('\tAll %i tests passed' % i)
+ else:
+ prtest.center('\tSome of %i tests failed' % (i,))
+ prtest.line()
+
+ inst.close()
+ return titre_test
+
+def check_values(inst,values_input,user):
+ '''
+    values_input is, for each quality, the sum of quantities submitted to the market.
+    values_remain is, for each quality, the sum of quantities remaining in the order book.
+    values_output is, for each quality, the sum of mvt_from.qtt quantities found in tmsg.
+    Checks that for each quality q:
+        values_input[q] == values_remain[q] + values_output[q]
+ '''
+ sql = "select (ord).qua_prov,sum((ord).qtt) from market.torder where (ord).oid=(ord).id group by (ord).qua_prov"
+ cursor = inst.exe( sql,user)
+ values_remain = {}
+ for qua_prov,qtt in cursor:
+ values_remain[qua_prov] = qtt
+
+ sql = '''select json_extract_path_text(jso,'mvt_from','nat'),sum(json_extract_path_text(jso,'mvt_from','qtt')::bigint)
+ from market.tmsg where typ='exchange'
+ group by json_extract_path_text(jso,'mvt_from','nat')
+ '''
+ cursor = inst.exe( sql,user)
+ values_output = {}
+ for qua_prov,qtt in cursor:
+ values_output[qua_prov] = qtt
+
+ _errs = 0
+ for qua,vin in values_input.iteritems():
+ vexpect = values_output.get(qua,0)+values_remain.get(qua,0)
+ if vin != vexpect:
+ print qua,vin,values_output.get(qua,0),values_remain.get(qua,0)
+ _errs += 1
+ # print '%i errors'% _errs
+ return _errs
+
+def test_ti_old(options):
+
+ curdir,sqldir,resultsdir,expecteddir = get_paths()
+ prtest.title('running test_ti on database "%s"' % (srvob_conf.DB_NAME,))
+
+ dump = utilt.Dumper(srvob_conf.dbBO,options,None)
+ titre_test = utilt.exec_script(dump,sqldir,'reset_market.sql')
+
+ fn = os.path.join(sqldir,'test_ti.csv')
+ if( not os.path.exists(fn)):
+        raise ValueError('The data file %s was not found' % fn)
+
+ cur_login = None
+ titre_test = None
+
+ inst = utilt.ExecInst(dump)
+ quote = False
+
+ with open(fn,'r') as f:
+ spamreader = csv.reader(f)
+ i= 0
+ usr = None
+ fmtorder = "SELECT * from market.fsubmitorder('%s','%s','%s',%s,'%s',%s)"
+ fmtquote = "SELECT * from market.fsubmitquote('%s','%s','%s',%s,'%s',%s)"
+ fmtjsosr = "SELECT jso from market.tmsg where json_extract_path_text(jso,'id')::int=%i and typ='response'"
+ fmtjsose = """SELECT json_extract_path_text(jso,'orde','id')::int id,
+ sum(json_extract_path_text(jso,'mvt_from','qtt')::bigint) qtt_prov,
+ sum(json_extract_path_text(jso,'mvt_to','qtt')::bigint) qtt_requ
+ from market.tmsg
+ where json_extract_path_text(jso,'orde','id')::int=%i
+ and typ='exchange' group by json_extract_path_text(jso,'orde','id')::int """
+ '''
+ the order that produced the exchange has the qualities expected
+ '''
+ _notnull,_ko = 0,0
+ for row in spamreader:
+ i += 1
+ user = row[0]
+ params = tuple(row[1:])
+
+ cursor = inst.exe( fmtquote % params,user)
+ idq,err = cursor.fetchone()
+ if err != '(0,)':
+ raise ValueError('Quote returned an error "%s"' % err)
+ utilt.wait_for_true(srvob_conf.dbBO,20,"SELECT market.fstackdone()")
+ cursor = inst.exe(fmtjsosr %idq,user)
+ res = cursor.fetchone() # result of the quote
+ res_quote =simplejson.loads(res[0])
+ expected = res_quote['result']['qtt_give'],res_quote['result']['qtt_reci']
+ #print res_quote
+ #print ''
+
+ cursor = inst.exe( fmtorder % params,user)
+ ido,err = cursor.fetchone()
+ if err != '(0,)':
+ raise ValueError('Order returned an error "%s"' % err)
+ utilt.wait_for_true(srvob_conf.dbBO,20,"SELECT market.fstackdone()")
+ cursor = inst.exe(fmtjsose %ido,user)
+ res = cursor.fetchone()
+
+ if res is None:
+ result = 0,0
+ else:
+ ido_,qtt_prov_,qtt_reci_ = res
+ result = qtt_prov_,qtt_reci_
+ _notnull +=1
+
+ if result != expected:
+ _ko += 1
+ print qtt_prov_,qtt_reci_,res_quote
+
+ if i %100 == 0:
+ if(_ko != 0): _errs = ' - %i errors' %_ko
+ else: _errs = ''
+ print ('\t%i quote & order - %i quotes returned a result %s' % (i-_ko,_notnull,_errs))
+
+ if(_ko == 0):
+ prtest.title(' all %i tests passed' % i)
+ else:
+ prtest.title('%i checked %i tests failed' % (i,_ko))
+
+
+ inst.close()
+ return titre_test
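The conservation invariant enforced by check_values() above can be sketched independently of the database: everything submitted for a quality must either remain in the order book or appear as an exchange movement. The sketch below mirrors the function's variable names with plain dicts in place of the SQL queries; the sample figures are illustrative, not taken from a real run.

```python
# Sketch of the check in check_values(): for each quality q,
#     values_input[q] == values_remain[q] + values_output[q]
def count_conservation_errors(values_input, values_remain, values_output):
    errors = 0
    for qua, vin in values_input.items():
        expected = values_remain.get(qua, 0) + values_output.get(qua, 0)
        if vin != expected:
            errors += 1
    return errors

# Illustrative figures: quality 'q1' balances, 'q2' leaks 5 units.
values_input  = {'q1': 10000, 'q2': 10000}
values_remain = {'q1':  4000, 'q2':  3000}
values_output = {'q1':  6000, 'q2':  6995}

print(count_conservation_errors(values_input, values_remain, values_output))
```

A non-zero count is what test_ti() reports as "qualities where the quantity is not preserved by the market".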
diff --git a/src/test/sql/tu_1.sql b/src/test/sql/tu_1.sql
index 7218210..ed1041e 100644
--- a/src/test/sql/tu_1.sql
+++ b/src/test/sql/tu_1.sql
@@ -1,12 +1,12 @@
-- Bilateral exchange between owners with distinct users (limit)
---------------------------------------------------------
--variables
--USER:admin
-select * from market.tvar order by name;
+select * from market.tvar where name != 'OC_CURRENT_OPENED' order by name;
---------------------------------------------------------
--USER:test_clienta
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',10);
--USER:test_clientb
SELECT * FROM market.fsubmitorder('limit','wb','b', 10,'a',20);
--Bilateral exchange expected
diff --git a/src/test/sql/tu_2.sql b/src/test/sql/tu_2.sql
index 9730e43..2b978d5 100644
--- a/src/test/sql/tu_2.sql
+++ b/src/test/sql/tu_2.sql
@@ -1,16 +1,16 @@
-- Bilateral exchange between owners with distinct users (best+limit)
---------------------------------------------------------
--variables
--USER:admin
-select * from market.tvar order by name;
+select * from market.tvar where name != 'OC_CURRENT_OPENED' order by name;
---------------------------------------------------------
--USER:test_clienta
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
SELECT * FROM market.fsubmitorder('best','wa','a', 10,'b',5);
--USER:test_clientb
SELECT * FROM market.fsubmitorder('best','wb','b', 20,'a',10);
--Bilateral exchange expected
SELECT * FROM market.fsubmitorder('limit','wa','a', 10,'b',5);
--USER:test_clientb
SELECT * FROM market.fsubmitorder('best','wb','b', 20,'a',10);
--No exchange expected
diff --git a/src/test/sql/tu_3.sql b/src/test/sql/tu_3.sql
index 9aba74b..d294104 100644
--- a/src/test/sql/tu_3.sql
+++ b/src/test/sql/tu_3.sql
@@ -1,14 +1,14 @@
-- Trilateral exchange between owners with distinct users
---------------------------------------------------------
--variables
--USER:admin
-select * from market.tvar order by name;
+select * from market.tvar where name != 'OC_CURRENT_OPENED' order by name;
---------------------------------------------------------
--USER:test_clienta
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',10);
--USER:test_clientb
SELECT * FROM market.fsubmitorder('limit','wb','b', 20,'c',40);
SELECT * FROM market.fsubmitorder('limit','wc','c', 10,'a',20);
--Trilateral exchange expected
diff --git a/src/test/sql/tu_4.sql b/src/test/sql/tu_4.sql
index a530d15..4405fd4 100644
--- a/src/test/sql/tu_4.sql
+++ b/src/test/sql/tu_4.sql
@@ -1,16 +1,16 @@
-- Trilateral exchange between owners with two owners
---------------------------------------------------------
--variables
--USER:admin
-select * from market.tvar order by name;
+select * from market.tvar where name != 'OC_CURRENT_OPENED' order by name;
-- The profit is shared equally between wa and wb
---------------------------------------------------------
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
--USER:test_clienta
SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',20);
--USER:test_clientb
SELECT * FROM market.fsubmitorder('limit','wb','b', 20,'c',40);
SELECT * FROM market.fsubmitorder('limit','wb','c', 10,'a',20);
--Trilateral exchange expected
diff --git a/src/test/sql/tu_5.sql b/src/test/sql/tu_5.sql
index 0b2fa3e..74e511b 100644
--- a/src/test/sql/tu_5.sql
+++ b/src/test/sql/tu_5.sql
@@ -1,15 +1,15 @@
-- Trilateral exchange by one owners
---------------------------------------------------------
--variables
--USER:admin
-select * from market.tvar order by name;
+select * from market.tvar where name != 'OC_CURRENT_OPENED' order by name;
---------------------------------------------------------
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
--USER:test_clienta
SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',10);
--USER:test_clientb
SELECT * FROM market.fsubmitorder('limit','wa','b', 20,'c',40);
SELECT * FROM market.fsubmitorder('limit','wa','c', 10,'a',20);
--Trilateral exchange expected
diff --git a/src/test/sql/tu_6.sql b/src/test/sql/tu_6.sql
index 54868ad..6066214 100644
--- a/src/test/sql/tu_6.sql
+++ b/src/test/sql/tu_6.sql
@@ -1,18 +1,18 @@
-- 7-exchange with 7 partners
---------------------------------------------------------
--variables
--USER:admin
-select * from market.tvar order by name;
+select * from market.tvar where name != 'OC_CURRENT_OPENED' order by name;
---------------------------------------------------------
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
--USER:test_clienta
SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',10);
--USER:test_clientb
SELECT * FROM market.fsubmitorder('limit','wb','b', 20,'c',40);
SELECT * FROM market.fsubmitorder('limit','wc','c', 20,'d',40);
SELECT * FROM market.fsubmitorder('limit','wd','d', 20,'e',40);
SELECT * FROM market.fsubmitorder('limit','we','e', 20,'f',40);
SELECT * FROM market.fsubmitorder('limit','wf','f', 20,'g',40);
SELECT * FROM market.fsubmitorder('limit','wg','g', 20,'a',40);
diff --git a/src/test/sql/tu_7.sql b/src/test/sql/tu_7.sql
index 522b4aa..29c3dc3 100644
--- a/src/test/sql/tu_7.sql
+++ b/src/test/sql/tu_7.sql
@@ -1,17 +1,17 @@
-- Competition between bilateral and multilateral exchange 1/2
---------------------------------------------------------
--variables
--USER:admin
-select * from market.tvar order by name;
+select * from market.tvar where name != 'OC_CURRENT_OPENED' order by name;
---------------------------------------------------------
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
--USER:test_clientb
SELECT * FROM market.fsubmitorder('limit','wb','b', 80,'a',40);
SELECT * FROM market.fsubmitorder('limit','wc','b', 20,'d',40);
SELECT * FROM market.fsubmitorder('limit','wd','d', 20,'a',40);
--USER:test_clienta
SELECT * FROM market.fsubmitorder('limit','wa','a', 40,'b',80);
-- omega better for the trilateral exchange
-- it wins, the rest is used with a bilateral exchange
diff --git a/src/worker_ob.c b/src/worker_ob.c
index a72d2c4..e93b9fc 100644
--- a/src/worker_ob.c
+++ b/src/worker_ob.c
@@ -1,377 +1,389 @@
/* -------------------------------------------------------------------------
*
* worker_ob.c
* Code based on worker_spi.c
*
 * This code connects to a database, launches two background workers.
   for i in [0,1], worker_i does the following:
      while(true)
         dowait := market.worker_i()
         if (dowait):
            wait(dowait) // dowait milliseconds
   These workers do nothing if the schema market is not installed
   To force restarting of a bg_worker, send a SIGHUP signal to the worker process
*
* -------------------------------------------------------------------------
*/
#include "postgres.h"
/* These are always necessary for a bgworker */
#include "miscadmin.h"
#include "postmaster/bgworker.h"
#include "storage/ipc.h"
#include "storage/latch.h"
#include "storage/lwlock.h"
#include "storage/proc.h"
#include "storage/shmem.h"
/* these headers are used by this particular worker's code */
#include "access/xact.h"
#include "executor/spi.h"
#include "fmgr.h"
#include "lib/stringinfo.h"
#include "pgstat.h"
#include "utils/builtins.h"
#include "utils/snapmgr.h"
#include "tcop/utility.h"
#define BGW_NBWORKERS 2
-#define BGW_OPENCLOSE 0
static char *worker_names[] = {"openclose","consumestack"};
+
+#define BGW_OPENCLOSE 0
+#define BGW_CONSUMESTACK 1
// PG_MODULE_MAGIC;
void _PG_init(void);
/* flags set by signal handlers */
static volatile sig_atomic_t got_sighup = false;
static volatile sig_atomic_t got_sigterm = false;
/* GUC variable */
static char *worker_ob_database = "market";
/* others */
-
static char *worker_ob_user = "user_bo";
-
-
+/* two connections are allowed for this user */
typedef struct worktable
{
const char *function_name;
int dowait;
} worktable;
/*
* Signal handler for SIGTERM
* Set a flag to let the main loop to terminate, and set our latch to wake
* it up.
*/
static void
worker_spi_sigterm(SIGNAL_ARGS)
{
int save_errno = errno;
got_sigterm = true;
if (MyProc)
SetLatch(&MyProc->procLatch);
errno = save_errno;
}
/*
* Signal handler for SIGHUP
* Set a flag to tell the main loop to reread the config file, and set
* our latch to wake it up.
*/
static void
worker_spi_sighup(SIGNAL_ARGS)
{
int save_errno = errno;
got_sighup = true;
if (MyProc)
SetLatch(&MyProc->procLatch);
errno = save_errno;
}
static int _spi_exec_select_ret_int(StringInfoData buf) {
int ret;
int ntup;
bool isnull;
ret = SPI_execute(buf.data, true, 1); // read_only -- one row returned
pfree(buf.data);
if (ret != SPI_OK_SELECT)
elog(FATAL, "SPI_execute failed: error code %d", ret);
if (SPI_processed != 1)
elog(FATAL, "not a singleton result");
ntup = DatumGetInt32(SPI_getbinval(SPI_tuptable->vals[0],
SPI_tuptable->tupdesc,
1, &isnull));
if (isnull)
elog(FATAL, "null result");
return ntup;
}
static bool _test_market_installed() {
int ret;
StringInfoData buf;
initStringInfo(&buf);
appendStringInfo(&buf, "select count(*) from pg_namespace where nspname = 'market'");
ret = _spi_exec_select_ret_int(buf);
if(ret == 0)
return false;
initStringInfo(&buf);
appendStringInfo(&buf, "select value from market.tvar where name = 'INSTALLED'");
ret = _spi_exec_select_ret_int(buf);
if(ret == 0)
return false;
return true;
}
/*
*/
static bool
_worker_ob_installed()
{
bool installed;
SetCurrentStatementStartTimestamp();
StartTransactionCommand();
SPI_connect();
PushActiveSnapshot(GetTransactionSnapshot());
pgstat_report_activity(STATE_RUNNING, "initializing spi_worker");
installed = _test_market_installed();
if (installed)
elog(LOG, "%s starting",MyBgworkerEntry->bgw_name);
else
elog(LOG, "%s waiting for installation",MyBgworkerEntry->bgw_name);
SPI_finish();
PopActiveSnapshot();
CommitTransactionCommand();
pgstat_report_activity(STATE_IDLE, NULL);
return installed;
}
-
+/*
static void _openclose_vacuum() {
- /* called by openclose bg_worker */
+ // called by openclose bg_worker
StringInfoData buf;
int ret;
initStringInfo(&buf);
appendStringInfo(&buf,"VACUUM FULL");
pgstat_report_activity(STATE_RUNNING, buf.data);
+ elog(LOG, "vacuum full starting");
+
- ret = SPI_execute(buf.data, false, 0);
+ ret = SPI_exec(buf.data, 0); */
+ /* fails with an exception:
+ ERROR: VACUUM cannot be executed from a function or multi-command string
+ CONTEXT: SQL statement "VACUUM FULL"
+ */
+ /*
pfree(buf.data);
+	elog(LOG, "vacuum full stopped and returned %d",ret);
if (ret != SPI_OK_UTILITY) // SPI_OK_UPDATE_RETURNING,SPI_OK_SELECT
// TODO CODE DE RETOUR?????
elog(FATAL, "cannot execute VACUUM FULL: error code %d",ret);
return;
-}
+} */
static void
worker_ob_main(Datum main_arg)
{
int index = DatumGetInt32(main_arg);
worktable *table;
StringInfoData buf;
- bool installed;
- //char function_name[20];
+ bool installed;
table = palloc(sizeof(worktable));
table->function_name = pstrdup(worker_names[index]);
table->dowait = 0;
/* Establish signal handlers before unblocking signals. */
pqsignal(SIGHUP, worker_spi_sighup);
pqsignal(SIGTERM, worker_spi_sigterm);
/* We're now ready to receive signals */
BackgroundWorkerUnblockSignals();
- /* Connect to our database */
+ /* Connect to the database */
if(!(worker_ob_database && *worker_ob_database))
elog(FATAL, "database name undefined");
BackgroundWorkerInitializeConnection(worker_ob_database, worker_ob_user);
installed = _worker_ob_installed();
initStringInfo(&buf);
appendStringInfo(&buf,"SELECT %s FROM market.%s()",
table->function_name, table->function_name);
/*
* Main loop: do this until the SIGTERM handler tells us to terminate
*/
while (!got_sigterm)
{
int ret;
int rc;
int _worker_ob_naptime; // = worker_ob_naptime * 1000L;
if(installed) // && !table->dowait)
_worker_ob_naptime = table->dowait;
else
_worker_ob_naptime = 1000L; // 1 second
/*
* Background workers mustn't call usleep() or any direct equivalent:
* instead, they may wait on their process latch, which sleeps as
* necessary, but is awakened if postmaster dies. That way the
* background process goes away immediately in an emergency.
*/
/* done even if _worker_ob_naptime == 0 */
+ // elog(LOG, "%s start waiting for %i",table->function_name,_worker_ob_naptime);
rc = WaitLatch(&MyProc->procLatch,
WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
_worker_ob_naptime );
ResetLatch(&MyProc->procLatch);
/* emergency bailout if postmaster has died */
if (rc & WL_POSTMASTER_DEATH)
proc_exit(1);
/*
* In case of a SIGHUP, just reload the configuration.
*/
if (got_sighup)
{
got_sighup = false;
ProcessConfigFile(PGC_SIGHUP);
installed = _worker_ob_installed();
}
if( !installed) continue;
/*
* Start a transaction on which we can run queries. Note that each
* StartTransactionCommand() call should be preceded by a
* SetCurrentStatementStartTimestamp() call, which sets both the time
	 * for the statement we're about to run, and also the transaction
* start time. Also, each other query sent to SPI should probably be
* preceded by SetCurrentStatementStartTimestamp(), so that statement
* start time is always up to date.
*
* The SPI_connect() call lets us run queries through the SPI manager,
* and the PushActiveSnapshot() call creates an "active" snapshot
* which is necessary for queries to have MVCC data to work on.
*
* The pgstat_report_activity() call makes our activity visible
* through the pgstat views.
*/
SetCurrentStatementStartTimestamp();
StartTransactionCommand();
SPI_connect();
PushActiveSnapshot(GetTransactionSnapshot());
pgstat_report_activity(STATE_RUNNING, buf.data);
/* We can now execute queries via SPI */
ret = SPI_execute(buf.data, false, 0);
if (ret != SPI_OK_SELECT) // SPI_OK_UPDATE_RETURNING)
elog(FATAL, "cannot execute market.%s(): error code %d",
table->function_name, ret);
if (SPI_processed != 1) // number of tuple returned
elog(FATAL, "market.%s() returned %d tuples instead of one",
table->function_name, SPI_processed);
+
{
bool isnull;
int32 val;
val = DatumGetInt32(SPI_getbinval(SPI_tuptable->vals[0],
SPI_tuptable->tupdesc,
1, &isnull));
+
+ if (isnull)
+ elog(FATAL, "market.%s() returned null",table->function_name);
+
table->dowait = 0;
- if (!isnull) {
- if (val >=0)
- table->dowait = val;
- else {
- //
- if((val == -100) && (index == BGW_OPENCLOSE))
- _openclose_vacuum();
-
- }
+ if (val >=0)
+ table->dowait = val;
+ else {
+ if ((index == BGW_OPENCLOSE) && (val == -100))
+ ; // _openclose_vacuum();
+ else
+ elog(FATAL, "market.%s() returned illegal <0",table->function_name);
}
+
}
/*
* And finish our transaction.
*/
SPI_finish();
PopActiveSnapshot();
CommitTransactionCommand();
pgstat_report_activity(STATE_IDLE, NULL);
}
proc_exit(0);
}
/*
* Entrypoint of this module.
*
* We register more than one worker process here, to demonstrate how that can
* be done.
*/
void
_PG_init(void)
{
BackgroundWorker worker;
unsigned int i;
/* get the configuration */
/*
DefineCustomIntVariable("worker_ob.naptime",
							"Minimum duration of wait time (in milliseconds).",
NULL,
&worker_ob_naptime,
100,
1,
INT_MAX,
PGC_SIGHUP,
0,
NULL,
NULL,
NULL); */
DefineCustomStringVariable("worker_ob.database",
"Name of the database.",
NULL,
&worker_ob_database,
"market",
PGC_SIGHUP, 0,
NULL,NULL,NULL);
/* set up common data for all our workers */
worker.bgw_flags = BGWORKER_SHMEM_ACCESS |
BGWORKER_BACKEND_DATABASE_CONNECTION;
worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
worker.bgw_restart_time = 60; // BGW_NEVER_RESTART;
worker.bgw_main = worker_ob_main;
/*
* Now fill in worker-specific data, and do the actual registrations.
*/
for (i = 0; i < BGW_NBWORKERS; i++)
{
snprintf(worker.bgw_name, BGW_MAXLEN, "market.%s", worker_names[i]);
worker.bgw_main_arg = Int32GetDatum(i);
RegisterBackgroundWorker(&worker);
}
}
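The polling contract implemented by worker_ob_main() above is simple: call the market function, nap for the number of milliseconds it returns, and treat a negative return as a special code (-100 asks the openclose worker for the currently disabled VACUUM FULL; any other negative value is fatal). A minimal sketch, with the latch/SPI machinery replaced by a plain loop and illustrative helper names:

```python
# Sketch of the bg_worker polling loop in worker_ob_main().
OPENCLOSE_VACUUM = -100  # special code returned by market.openclose()

def run_worker(market_fn, steps):
    """Drive market_fn for a fixed number of steps, recording naps (ms)."""
    naps = []
    for _ in range(steps):
        dowait = market_fn()
        if dowait >= 0:
            naps.append(dowait)   # WaitLatch(..., dowait) in the C code
        elif dowait == OPENCLOSE_VACUUM:
            naps.append(0)        # vacuum request: handled, no extra wait
        else:
            raise RuntimeError("market function returned illegal <0")
    return naps

# Illustrative schedule: busy, busy, idle for a second, vacuum request.
schedule = iter([0, 0, 1000, OPENCLOSE_VACUUM])
print(run_worker(lambda: next(schedule), 4))
```

In the C code the nap is also forced to 1000 ms while the market schema is not yet installed, a detail omitted from this sketch.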
|
olivierch/openBarter
|
7d86b9eae996bb63251d6390109844c1316ba36e
|
end for v2.1.0
|
diff --git a/META.json b/META.json
index 15c2bbb..bcaef26 100644
--- a/META.json
+++ b/META.json
@@ -1,51 +1,51 @@
{
"name": "openbarter",
"abstract": "Multilateral agreement production engine.",
"description": "openBarter is a central limit order book that performs competition on price even if money is not necessary to express them. It is a barter market allowing cyclic exchange between two partners (buyer and seller) or more in a single transaction. It provides a fluidity equivalent to that of a regular market place.",
"version": "0.1.0",
"maintainer": [
"Olivier Chaussavoine <[email protected]>"
],
"license": "gpl_3",
"prereqs": {
"runtime": {
"requires": {
"plpgsql": "1.0",
"PostgreSQL": "9.3.2"
}
}
},
"provides": {
"openbarter": {
"file": "src/sql/model.sql",
"docfile": "doc/doc-ob.odt",
- "version": "2.1.0",
+ "version": "0.0.1",
"abstract": "Multilateral agreement engine."
}
},
"resources": {
"bugtracker": {
"web": "http://github.com/olivierch/openBarter/issues"
},
"repository": {
"url": "git://github.com/olivierch/openBarter.git",
"web": "http://olivierch.github.com/openBarter",
"type": "git"
}
},
"generated_by": "Olivier Chaussavoine",
"meta-spec": {
"version": "1.0.0",
"url": "http://pgxn.org/meta/spec.txt"
},
"tags": [
    "cyclic exchange","non-bilateral","multilateral",
"central","limit","order","book",
"barter",
"market"
],
"release_status": "stable"
}
diff --git a/src/Makefile b/src/Makefile
index edd15db..a8637c5 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -1,39 +1,39 @@
MODULE_big = flowf
OBJS = flowm.o yflow.o yflowparse.o yorder.o flowc.o earthrad.o worker_ob.o
EXTENSION = flowf
-DATA = flowf--2.1.sql flowf--unpackaged--2.1.sql
+DATA = flowf--0.1.sql flowf--unpackaged--0.1.sql
REGRESS = testflow_1 testflow_2 testflow_2a
EXTRA_CLEAN = yflowparse.c yflowscan.c test/results/*.res test/py/*.pyc ../doc/*.pdf ../simu/liquid/data/*
PGXS := $(shell pg_config --pgxs)
include $(PGXS)
yflowparse.o: yflowscan.c
yflowparse.c: yflowparse.y
ifdef BISON
$(BISON) $(BISONFLAGS) -o $@ $<
else
@$(missing) bison $< $@
endif
yflowscan.c: yflowscan.l
ifdef FLEX
$(FLEX) $(FLEXFLAGS) -o'$@' $<
else
@$(missing) flex $< $@
endif
test: installcheck test/py/*.py test/sql/*.sql
cd test; python py/run.py; cd ..
cd test; python py/run.py -i -r;cd ..
doc:
soffice --invisible --norestore --convert-to pdf --outdir ../doc ../doc/*.odt
diff --git a/src/flowf--2.1.sql b/src/flowf--0.1.sql
similarity index 100%
rename from src/flowf--2.1.sql
rename to src/flowf--0.1.sql
diff --git a/src/flowf--unpackaged--2.1.sql b/src/flowf--unpackaged--0.1.sql
similarity index 100%
rename from src/flowf--unpackaged--2.1.sql
rename to src/flowf--unpackaged--0.1.sql
diff --git a/src/flowf.control b/src/flowf.control
index 19135c8..c14b816 100644
--- a/src/flowf.control
+++ b/src/flowf.control
@@ -1,5 +1,5 @@
# flowf extension
comment = 'data type for cycle of orders'
-default_version = '2.1'
+default_version = '0.1'
module_pathname = '$libdir/flowf'
relocatable = true
diff --git a/src/sql/roles.sql b/src/sql/roles.sql
index 9bee753..f2194f7 100644
--- a/src/sql/roles.sql
+++ b/src/sql/roles.sql
@@ -1,95 +1,95 @@
\set ECHO none
/* script executed for the whole cluster */
SET client_min_messages = warning;
SET log_error_verbosity = terse;
/* flowf extension */
-- drop extension if exists btree_gin cascade;
-- create extension btree_gin with version '1.0';
DROP EXTENSION IF EXISTS flowf;
-CREATE EXTENSION flowf VERSION '2.1';
+CREATE EXTENSION flowf WITH VERSION '0.1';
--------------------------------------------------------------------------------
CREATE FUNCTION _create_role(_role text) RETURNS int AS $$
BEGIN
BEGIN
EXECUTE 'CREATE ROLE ' || _role;
EXCEPTION WHEN duplicate_object THEN
NULL;
END;
EXECUTE 'ALTER ROLE ' || _role || ' NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT NOREPLICATION';
RETURN 0;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
/* definition of roles
-- role_com ----> role_co --------->role_client---->clientA
| \-->clientB
|\-> role_co_closed
|
\-> role_bo---->user_bo
access by clients can be disabled/enabled with a single command:
REVOKE role_co FROM role_client
GRANT role_co TO role_client
opening/closing of market is performed by switching role_client
between role_co and role_co_closed
same thing for role batch with role_bo:
REVOKE role_bo FROM user_bo
GRANT role_bo TO user_bo
*/
--------------------------------------------------------------------------------
select _create_role('prod'); -- owner of market objects
ALTER ROLE prod WITH createrole;
/* so that prod can modify roles at opening and closing. */
select _create_role('role_com');
SELECT _create_role('role_co'); -- when market is opened
GRANT role_com TO role_co;
SELECT _create_role('role_co_closed'); -- when market is closed
GRANT role_com TO role_co_closed;
SELECT _create_role('role_client');
GRANT role_co_closed TO role_client; -- market phase 101
-- role_com ---> role_bo----> user_bo
SELECT _create_role('role_bo');
GRANT role_com TO role_bo;
-- ALTER ROLE role_bo INHERIT;
SELECT _create_role('user_bo');
GRANT role_bo TO user_bo;
-- two connections are allowed for user_bo
ALTER ROLE user_bo WITH LOGIN CONNECTION LIMIT 2;
--------------------------------------------------------------------------------
select _create_role('test_clienta');
ALTER ROLE test_clienta WITH login;
GRANT role_client TO test_clienta;
select _create_role('test_clientb');
ALTER ROLE test_clientb WITH login;
GRANT role_client TO test_clientb;
select _create_role('test_clientc');
ALTER ROLE test_clientc WITH login;
GRANT role_client TO test_clientc;
select _create_role('test_clientd');
ALTER ROLE test_clientd WITH login;
GRANT role_client TO test_clientd;
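The open/close mechanism described in the comments of roles.sql amounts to graph reachability: a client can act on the market exactly when role_co is reachable from its login role through the membership grants, and a single GRANT/REVOKE re-parents role_client between role_co and role_co_closed. A small model, with role names taken from the script and a reachability helper that is purely illustrative:

```python
# Grant graph from roles.sql: member -> set of roles granted to it.
grants = {
    'role_co':        {'role_com'},
    'role_co_closed': {'role_com'},
    'role_client':    {'role_co_closed'},   # market currently closed
    'role_bo':        {'role_com'},
    'user_bo':        {'role_bo'},
    'test_clienta':   {'role_client'},
}

def has_role(member, role):
    """True if `member` inherits `role` through the grant graph."""
    seen, stack = set(), [member]
    while stack:
        cur = stack.pop()
        if cur == role:
            return True
        if cur in seen:
            continue
        seen.add(cur)
        stack.extend(grants.get(cur, ()))
    return False

print(has_role('test_clienta', 'role_co'))   # closed market
grants['role_client'] = {'role_co'}          # GRANT role_co TO role_client
print(has_role('test_clienta', 'role_co'))   # open market
```

This is why the script needs only one statement per direction to open or close the market for every client at once.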
|
olivierch/openBarter
|
7e532281da97742c3a08f53997789e09ac59c808
|
preparation for v2.1.0
|
diff --git a/META.json b/META.json
index bcaef26..15c2bbb 100644
--- a/META.json
+++ b/META.json
@@ -1,51 +1,51 @@
{
"name": "openbarter",
"abstract": "Multilateral agreement production engine.",
"description": "openBarter is a central limit order book that performs competition on price even if money is not necessary to express them. It is a barter market allowing cyclic exchange between two partners (buyer and seller) or more in a single transaction. It provides a fluidity equivalent to that of a regular market place.",
"version": "0.1.0",
"maintainer": [
"Olivier Chaussavoine <[email protected]>"
],
"license": "gpl_3",
"prereqs": {
"runtime": {
"requires": {
"plpgsql": "1.0",
"PostgreSQL": "9.3.2"
}
}
},
"provides": {
"openbarter": {
"file": "src/sql/model.sql",
"docfile": "doc/doc-ob.odt",
- "version": "0.0.1",
+ "version": "2.1.0",
"abstract": "Multilateral agreement engine."
}
},
"resources": {
"bugtracker": {
"web": "http://github.com/olivierch/openBarter/issues"
},
"repository": {
"url": "git://github.com/olivierch/openBarter.git",
"web": "http://olivierch.github.com/openBarter",
"type": "git"
}
},
"generated_by": "Olivier Chaussavoine",
"meta-spec": {
"version": "1.0.0",
"url": "http://pgxn.org/meta/spec.txt"
},
"tags": [
"cyclic exchange","non-bilateral","mutilateral",
"central","limit","order","book",
"barter",
"market"
],
"release_status": "stable"
}
diff --git a/src/Makefile b/src/Makefile
index a8637c5..edd15db 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -1,39 +1,39 @@
MODULE_big = flowf
OBJS = flowm.o yflow.o yflowparse.o yorder.o flowc.o earthrad.o worker_ob.o
EXTENSION = flowf
-DATA = flowf--0.1.sql flowf--unpackaged--0.1.sql
+DATA = flowf--2.1.sql flowf--unpackaged--2.1.sql
REGRESS = testflow_1 testflow_2 testflow_2a
EXTRA_CLEAN = yflowparse.c yflowscan.c test/results/*.res test/py/*.pyc ../doc/*.pdf ../simu/liquid/data/*
PGXS := $(shell pg_config --pgxs)
include $(PGXS)
yflowparse.o: yflowscan.c
yflowparse.c: yflowparse.y
ifdef BISON
$(BISON) $(BISONFLAGS) -o $@ $<
else
@$(missing) bison $< $@
endif
yflowscan.c: yflowscan.l
ifdef FLEX
$(FLEX) $(FLEXFLAGS) -o'$@' $<
else
@$(missing) flex $< $@
endif
test: installcheck test/py/*.py test/sql/*.sql
cd test; python py/run.py; cd ..
cd test; python py/run.py -i -r;cd ..
doc:
soffice --invisible --norestore --convert-to pdf --outdir ../doc ../doc/*.odt
diff --git a/src/flowf--0.1.sql b/src/flowf--2.1.sql
similarity index 100%
rename from src/flowf--0.1.sql
rename to src/flowf--2.1.sql
diff --git a/src/flowf--unpackaged--0.1.sql b/src/flowf--unpackaged--2.1.sql
similarity index 100%
rename from src/flowf--unpackaged--0.1.sql
rename to src/flowf--unpackaged--2.1.sql
diff --git a/src/flowf.control b/src/flowf.control
index c14b816..19135c8 100644
--- a/src/flowf.control
+++ b/src/flowf.control
@@ -1,5 +1,5 @@
# flowf extension
comment = 'data type for cycle of orders'
-default_version = '0.1'
+default_version = '2.1'
module_pathname = '$libdir/flowf'
relocatable = true
diff --git a/src/sql/pushpull.sql b/src/sql/pushpull.sql
index 51ee705..82974ff 100644
--- a/src/sql/pushpull.sql
+++ b/src/sql/pushpull.sql
@@ -1,101 +1,114 @@
INSERT INTO tvar(name,value) VALUES
('STACK_TOP',0),
('STACK_EXECUTED',0); -- last primitive executed
--------------------------------------------------------------------------------
CREATE FUNCTION fstackdone()
RETURNS boolean AS $$
DECLARE
_top int;
_exe int;
BEGIN
SELECT value INTO _top FROM tvar WHERE name = 'STACK_TOP';
SELECT value INTO _exe FROM tvar WHERE name = 'STACK_EXECUTED';
RETURN (_top = _exe);
END;
$$ LANGUAGE PLPGSQL set search_path to market;
--------------------------------------------------------------------------------
CREATE FUNCTION fpushprimitive(_r yj_error,_kind eprimitivetype,_jso json)
RETURNS yj_primitive AS $$
DECLARE
_tid int;
_ir int;
BEGIN
IF (_r.code!=0) THEN
RAISE EXCEPTION 'Primitive cannot be pushed due to error %: %',_r.code,_r.reason;
END IF;
-- id,usr,kind,jso,submitted
INSERT INTO tstack(usr,kind,jso,submitted)
VALUES (session_user,_kind,_jso,statement_timestamp())
RETURNING id into _tid;
UPDATE tvar SET value=_tid WHERE name = 'STACK_TOP';
RETURN ROW(_tid,_r,_jso,NULL,NULL)::yj_primitive;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
CREATE FUNCTION consumestack() RETURNS int AS $$
/*
* This code is executed by the bg_worker, doing the following:
while(true)
dowait := market.worker2()
if (dowait):
wait(dowait) -- milliseconds
*/
DECLARE
_s tstack%rowtype;
_res yj_primitive;
_cnt int;
+ _txt text;
+ _detail text;
+ _ctx text;
BEGIN
DELETE FROM tstack
WHERE id IN (SELECT id FROM tstack ORDER BY id ASC LIMIT 1)
RETURNING * INTO _s;
IF(NOT FOUND) THEN
- RETURN 10; -- OB_DOWAIT 10 milliseconds
+ RETURN 20; -- OB_DOWAIT 20 milliseconds
END IF;
_res := fprocessprimitive('execute',_s);
INSERT INTO tmsg (usr,typ,jso,created)
VALUES (_s.usr,'response',row_to_json(_res),statement_timestamp());
UPDATE tvar SET value=_s.id WHERE name = 'STACK_EXECUTED';
RETURN 0;
+
+EXCEPTION WHEN OTHERS THEN
+ GET STACKED DIAGNOSTICS
+ _txt = MESSAGE_TEXT,
+ _detail = PG_EXCEPTION_DETAIL,
+ _ctx = PG_EXCEPTION_CONTEXT;
+
+ RAISE WARNING 'market.consumestack() failed:''%'' ''%'' ''%''',_txt,_detail,_ctx;
+ RETURN 0;
+
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market;
GRANT EXECUTE ON FUNCTION consumestack() TO role_bo;
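The worker loop sketched in the comment inside consumestack() can be modelled in Python. The sketch below is a rough, hypothetical stand-in: an in-memory list replaces tstack, and fprocessprimitive plus the tmsg insert are reduced to appending to a list.

```python
OB_DOWAIT = 20  # milliseconds returned by consumestack() when tstack is empty

def consume(stack, executed):
    """One call of the hypothetical worker: pop and run the oldest primitive."""
    if not stack:
        return OB_DOWAIT        # nothing queued: tell the caller to wait
    prim = stack.pop(0)         # DELETE ... ORDER BY id ASC LIMIT 1
    executed.append(prim)       # stands in for fprocessprimitive + tmsg insert
    return 0

def worker_loop(stack, executed, max_iters=10):
    """The bg_worker loop from the comment: run until told to wait."""
    for _ in range(max_iters):
        dowait = consume(stack, executed)
        if dowait:
            break               # a real worker would sleep dowait ms, then retry
```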
CREATE TABLE tmsgack (LIKE tmsg);
SELECT _grant_read('tmsgack');
--------------------------------------------------------------------------------
CREATE FUNCTION ackmsg(_id int,_date date) RETURNS int AS $$
DECLARE
_cnt int;
BEGIN
WITH t AS (
DELETE FROM tmsg WHERE id = _id and (created::date) = _date AND usr=session_user
RETURNING *
) INSERT INTO tmsgack SELECT * FROM t ;
GET DIAGNOSTICS _cnt = ROW_COUNT;
IF( _cnt = 0 ) THEN
WITH t AS (
DELETE FROM tmsgdaysbefore WHERE id = _id and (created::date) = _date AND usr=session_user
RETURNING *
) INSERT INTO tmsgack SELECT * FROM t ;
GET DIAGNOSTICS _cnt = ROW_COUNT;
IF( _cnt = 0 ) THEN
RAISE INFO 'The message could not be found';
END IF;
END IF;
RETURN _cnt;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER;
GRANT EXECUTE ON FUNCTION ackmsg(int,date) TO role_com;
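The two-stage lookup in ackmsg() (tmsg first, then tmsgdaysbefore, moving the row into tmsgack) can be mimicked with dicts. `ack_msg` below is a hypothetical illustration of that control flow, not part of the schema:

```python
# Model of ackmsg(): a message is moved from tmsg (or, failing that,
# tmsgdaysbefore) into tmsgack; returns the number of rows moved.

def ack_msg(tmsg, tmsgdaysbefore, tmsgack, msg_id):
    for table in (tmsg, tmsgdaysbefore):
        if msg_id in table:
            tmsgack[msg_id] = table.pop(msg_id)  # DELETE ... RETURNING + INSERT
            return 1
    return 0  # 'The message could not be found'
```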
diff --git a/src/sql/roles.sql b/src/sql/roles.sql
index f2194f7..9bee753 100644
--- a/src/sql/roles.sql
+++ b/src/sql/roles.sql
@@ -1,95 +1,95 @@
\set ECHO none
/* script executed for the whole cluster */
SET client_min_messages = warning;
SET log_error_verbosity = terse;
/* flowf extension */
-- drop extension if exists btree_gin cascade;
-- create extension btree_gin with version '1.0';
DROP EXTENSION IF EXISTS flowf;
-CREATE EXTENSION flowf WITH VERSION '0.1';
+CREATE EXTENSION flowf VERSION '2.1';
--------------------------------------------------------------------------------
CREATE FUNCTION _create_role(_role text) RETURNS int AS $$
BEGIN
BEGIN
EXECUTE 'CREATE ROLE ' || _role;
EXCEPTION WHEN duplicate_object THEN
NULL;
END;
EXECUTE 'ALTER ROLE ' || _role || ' NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT NOREPLICATION';
RETURN 0;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
/* definition of roles
-- role_com ----> role_co --------->role_client---->clientA
| \-->clientB
|\-> role_co_closed
|
\-> role_bo---->user_bo
access by clients can be disabled/enabled with a single command:
REVOKE role_co FROM role_client
GRANT role_co TO role_client
opening/closing of market is performed by switching role_client
between role_co and role_co_closed
same thing for role batch with role_bo:
REVOKE role_bo FROM user_bo
GRANT role_bo TO user_bo
*/
--------------------------------------------------------------------------------
select _create_role('prod'); -- owner of market objects
ALTER ROLE prod WITH createrole;
/* so that prod can modify roles at opening and closing. */
select _create_role('role_com');
SELECT _create_role('role_co'); -- when market is opened
GRANT role_com TO role_co;
SELECT _create_role('role_co_closed'); -- when market is closed
GRANT role_com TO role_co_closed;
SELECT _create_role('role_client');
GRANT role_co_closed TO role_client; -- market phase 101
-- role_com ---> role_bo----> user_bo
SELECT _create_role('role_bo');
GRANT role_com TO role_bo;
-- ALTER ROLE role_bo INHERIT;
SELECT _create_role('user_bo');
GRANT role_bo TO user_bo;
-- two connections are allowed for user_bo
ALTER ROLE user_bo WITH LOGIN CONNECTION LIMIT 2;
--------------------------------------------------------------------------------
select _create_role('test_clienta');
ALTER ROLE test_clienta WITH login;
GRANT role_client TO test_clienta;
select _create_role('test_clientb');
ALTER ROLE test_clientb WITH login;
GRANT role_client TO test_clientb;
select _create_role('test_clientc');
ALTER ROLE test_clientc WITH login;
GRANT role_client TO test_clientc;
select _create_role('test_clientd');
ALTER ROLE test_clientd WITH login;
GRANT role_client TO test_clientd;
diff --git a/src/sql/tables.sql b/src/sql/tables.sql
index f25c783..56e719c 100644
--- a/src/sql/tables.sql
+++ b/src/sql/tables.sql
@@ -1,355 +1,355 @@
--------------------------------------------------------------------------------
ALTER DEFAULT PRIVILEGES REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC;
ALTER DEFAULT PRIVILEGES REVOKE ALL ON TABLES FROM PUBLIC;
ALTER DEFAULT PRIVILEGES REVOKE ALL ON SEQUENCES FROM PUBLIC;
ALTER DEFAULT PRIVILEGES REVOKE ALL ON TYPES FROM PUBLIC;
--------------------------------------------------------------------------------
create domain dqtt AS int8 check( VALUE>0);
create domain dtext AS text check( char_length(VALUE)>0);
--------------------------------------------------------------------------------
-- main constants of the model
--------------------------------------------------------------------------------
create table tconst(
name dtext UNIQUE not NULL,
value int,
PRIMARY KEY (name)
);
GRANT SELECT ON tconst TO role_com;
--------------------------------------------------------------------------------
/* for booleans, 0 == false and !=0 == true
*/
INSERT INTO tconst (name,value) VALUES
('MAXCYCLE',64), -- must be less than yflow_get_maxdim()
('MAXPATHFETCHED',1024),-- maximum depth of the graph exploration
('MAXMVTPERTRANS',128), -- maximum number of movements per transaction
-- if this limit is reached, next cycles are not performed but all others
-- are included in the current transaction
- ('VERSION-X',2),('VERSION-Y',0),('VERSION-Z',2),
+ ('VERSION-X',2),('VERSION-Y',1),('VERSION-Z',0),
('OWNERINSERT',1), -- boolean when true, owner inserted when not found
('QUAPROVUSR',0), -- boolean when true, the quality provided by a barter is suffixed by user name
-- 1 prod
('OWNUSR',0), -- boolean when true, the owner is suffixed by user name
-- 1 prod
('DEBUG',1);
--------------------------------------------------------------------------------
create table tvar(
name dtext UNIQUE not NULL,
value int,
PRIMARY KEY (name)
);
INSERT INTO tvar (name,value) VALUES ('INSTALLED',0);
GRANT SELECT ON tvar TO role_com;
--------------------------------------------------------------------------------
-- TOWNER
--------------------------------------------------------------------------------
create table towner (
id serial UNIQUE not NULL,
name dtext UNIQUE not NULL,
PRIMARY KEY (id)
);
comment on table towner is 'owners of values exchanged';
alter sequence towner_id_seq owned by towner.id;
create index towner_name_idx on towner(name);
SELECT _reference_time('towner');
SELECT _grant_read('towner');
--------------------------------------------------------------------------------
-- ORDER BOOK
--------------------------------------------------------------------------------
-- type = type_flow | type_primitive <<8 | type_mode <<16
create domain dtypeorder AS int check(VALUE >=0 AND VALUE < 16777215); --((1<<24)-1)
-- type_flow &3 1 order limit,2 order best
-- type_flow &12 bit set for c calculations
-- 4 no qttlimit
-- 8 ignoreomega
-- yorder.type is a type_flow = type & 255
-- type_primitive
-- 1 order
-- 2 rmorder
-- 3 quote
-- 4 prequote
CREATE TYPE eordertype AS ENUM ('best','limit');
CREATE TYPE eprimitivetype AS ENUM ('order','childorder','rmorder','quote','prequote');
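The dtypeorder bit layout described in the comments above (type = type_flow | type_primitive <<8 | type_mode <<16) can be illustrated with a pair of helpers; `pack_type`/`unpack_type` are hypothetical names used only for this sketch:

```python
# Sketch of the dtypeorder bit layout:
#   type = type_flow | (type_primitive << 8) | (type_mode << 16)

def pack_type(type_flow, type_primitive, type_mode=0):
    return type_flow | (type_primitive << 8) | (type_mode << 16)

def unpack_type(t):
    return t & 255, (t >> 8) & 255, (t >> 16) & 255

# A 'limit' order (type_flow & 3 == 1) submitted as primitive 'order' (1):
packed = pack_type(1, 1)
```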
create table torder (
usr dtext,
own dtext,
ord yorder, --defined by the extension flowf
created timestamp not NULL,
updated timestamp,
duration interval
);
comment on table torder is 'Order book';
comment on column torder.usr is 'user that inserted the order ';
comment on column torder.ord is 'the order';
comment on column torder.created is 'time when the order was put on the stack';
comment on column torder.updated is 'time when the (quantity) of the order was updated by the order book';
comment on column torder.duration is 'the life time of the order';
SELECT _grant_read('torder');
create index torder_qua_prov_idx on torder(((ord).qua_prov)); -- using gin(((ord).qua_prov) text_ops);
create index torder_id_idx on torder(((ord).id));
create index torder_oid_idx on torder(((ord).oid));
-- id,type,own,oid,qtt_requ,qua_requ,qtt_prov,qua_prov,qtt
create view vorder as
select (o.ord).id as id,(o.ord).type as type,w.name as own,(o.ord).oid as oid,
(o.ord).qtt_requ as qtt_requ,(o.ord).qua_requ as qua_requ,
(o.ord).qtt_prov as qtt_prov,(o.ord).qua_prov as qua_prov,
(o.ord).qtt as qtt, o.created as created, o.updated as updated
from torder o left join towner w on ((o.ord).own=w.id) where o.usr=session_user;
SELECT _grant_read('vorder');
-- without dates and without filtering on usr
create view vorder2 as
select (o.ord).id as id,(o.ord).type as type,w.name as own,(o.ord).oid as oid,
(o.ord).qtt_requ as qtt_requ,(o.ord).qua_requ as qua_requ,
(o.ord).qtt_prov as qtt_prov,(o.ord).qua_prov as qua_prov,
(o.ord).qtt as qtt
from torder o left join towner w on ((o.ord).own=w.id);
SELECT _grant_read('vorder2');
-- only parent for all users
create view vbarter as
select (o.ord).id as id,(o.ord).type as type,o.usr as user,w.name as own,
(o.ord).qtt_requ as qtt_requ,(o.ord).qua_requ as qua_requ,
(o.ord).qtt_prov as qtt_prov,(o.ord).qua_prov as qua_prov,
(o.ord).qtt as qtt, o.created as created, o.updated as updated
from torder o left join towner w on ((o.ord).own=w.id) where (o.ord).oid=(o.ord).id;
SELECT _grant_read('vbarter');
-- parent and childs for all users, used with vmvto
create view vordero as
select id,
(case when (type & 3=1) then 'limit' else 'best' end)::eordertype as type,
own as owner,
case when id=oid then (qtt::text || ' ' || qua_prov) else '' end as stock,
'(' || qtt_prov::text || '/' || qtt_requ::text || ') ' ||
qua_prov || ' / ' || qua_requ as expected_ω,
case when id=oid then '' else oid::text end as oid
from vorder order by id asc;
GRANT SELECT ON vordero TO role_com;
comment on view vordero is 'order book for all users, to be used with vmvto';
comment on column vordero.id is 'the id of the order';
comment on column vordero.owner is 'the owner';
comment on column vordero.stock is 'for a parent order the stock offered by the owner';
comment on column vordero.expected_ω is 'the ω of the order';
comment on column vordero.oid is 'for a child-order, the id of the parent-order';
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
CREATE FUNCTION fgetowner(_name text) RETURNS int AS $$
DECLARE
_wid int;
_OWNERINSERT boolean := fgetconst('OWNERINSERT')=1;
BEGIN
LOOP
SELECT id INTO _wid FROM towner WHERE name=_name;
IF found THEN
return _wid;
END IF;
IF (NOT _OWNERINSERT) THEN
RAISE EXCEPTION 'The owner does not exist' USING ERRCODE='YU001';
END IF;
BEGIN
INSERT INTO towner (name) VALUES (_name) RETURNING id INTO _wid;
-- RAISE NOTICE 'owner % created',_name;
return _wid;
EXCEPTION WHEN unique_violation THEN
NULL;--
END;
END LOOP;
END;
$$ LANGUAGE PLPGSQL;
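fgetowner() above follows a classic concurrent-upsert pattern: SELECT, and if not found, INSERT inside a loop that retries when a concurrent session wins the race (unique_violation). The dict-backed sketch below (the class name `OwnerTable` is hypothetical) shows the control flow, though a plain dict cannot reproduce the race the loop guards against:

```python
class OwnerTable:
    """Dict-backed stand-in for towner, mirroring fgetowner()."""

    def __init__(self, owner_insert=True):
        self.rows = {}                    # name -> id
        self.owner_insert = owner_insert  # tconst 'OWNERINSERT' flag

    def get_owner(self, name):
        # fgetowner() loops so that a concurrent INSERT raising
        # unique_violation is resolved by re-running the SELECT; the
        # dict cannot race, so a single pass suffices here.
        if name in self.rows:             # SELECT id FROM towner WHERE name=...
            return self.rows[name]
        if not self.owner_insert:
            raise KeyError('The owner does not exist')  # ERRCODE 'YU001'
        self.rows[name] = len(self.rows) + 1  # INSERT ... RETURNING id
        return self.rows[name]
```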
--------------------------------------------------------------------------------
-- TMVT
-- id,nbc,nbt,grp,xid,usr_src,usr_dst,xoid,own_src,own_dst,qtt,nat,ack,exhausted,order_created,created
--------------------------------------------------------------------------------
/*
create table tmvt (
id serial UNIQUE not NULL,
nbc int default NULL,
nbt int default NULL,
grp int default NULL,
xid int default NULL,
usr_src text default NULL,
usr_dst text default NULL,
xoid int default NULL,
own_src text default NULL,
own_dst text default NULL,
qtt int8 default NULL,
nat text default NULL,
ack boolean default NULL,
cack boolean default NULL,
exhausted boolean default NULL,
order_created timestamp default NULL,
created timestamp default NULL,
om_exp double precision default NULL,
om_rea double precision default NULL,
CONSTRAINT ctmvt_grp FOREIGN KEY (grp) references tmvt(id) ON UPDATE CASCADE
);
GRANT SELECT ON tmvt TO role_com;
comment on table tmvt is 'Records ownership changes';
comment on column tmvt.nbc is 'number of movements of the exchange cycle';
comment on column tmvt.nbt is 'number of movements of the transaction containing several exchange cycles';
comment on column tmvt.grp is 'references the first movement of the exchange';
comment on column tmvt.xid is 'references the order.id';
comment on column tmvt.usr_src is 'usr provider';
comment on column tmvt.usr_dst is 'usr receiver';
comment on column tmvt.xoid is 'references the order.oid';
comment on column tmvt.own_src is 'owner provider';
comment on column tmvt.own_dst is 'owner receiver';
comment on column tmvt.qtt is 'quantity of the value moved';
comment on column tmvt.nat is 'quality of the value moved';
comment on column tmvt.ack is 'set when movement has been acknowledged';
comment on column tmvt.cack is 'set when the cycle has been acknowledged';
comment on column tmvt.exhausted is 'set when the movement exhausted the order providing the value';
comment on column tmvt.om_exp is 'ω expected by the order';
comment on column tmvt.om_rea is 'real ω of the movement';
alter sequence tmvt_id_seq owned by tmvt.id;
GRANT SELECT ON tmvt_id_seq TO role_com;
create index tmvt_grp_idx on tmvt(grp);
create index tmvt_nat_idx on tmvt(nat);
create index tmvt_own_src_idx on tmvt(own_src);
create index tmvt_own_dst_idx on tmvt(own_dst);
CREATE VIEW vmvt AS select * from tmvt;
GRANT SELECT ON vmvt TO role_com;
CREATE VIEW vmvt_tu AS select id,nbc,grp,xid,xoid,own_src,own_dst,qtt,nat,ack,cack,exhausted from tmvt;
GRANT SELECT ON vmvt_tu TO role_com;
create view vmvto as
select id,grp,
usr_src as from_usr,
own_src as from_own,
qtt::text || ' ' || nat as value,
usr_dst as to_usr,
own_dst as to_own,
to_char(om_exp, 'FM999.9999990') as expected_ω,
to_char(om_rea, 'FM999.9999990') as actual_ω,
ack
from tmvt where cack is NULL order by id asc;
GRANT SELECT ON vmvto TO role_com;
*/
CREATE SEQUENCE tmvt_id_seq;
--------------------------------------------------------------------------------
-- STACK id,usr,kind,jso,submitted
--------------------------------------------------------------------------------
create table tstack (
id serial UNIQUE not NULL,
usr dtext,
kind eprimitivetype,
jso json, -- representation of the primitive
submitted timestamp not NULL,
PRIMARY KEY (id)
);
comment on table tstack is 'Records the stack of primitives';
comment on column tstack.id is 'id of this primitive';
comment on column tstack.usr is 'user submitting the primitive';
comment on column tstack.kind is 'type of primitive';
comment on column tstack.jso is 'primitive payload';
comment on column tstack.submitted is 'timestamp when the primitive was successfully submitted';
alter sequence tstack_id_seq owned by tstack.id;
GRANT SELECT ON tstack TO role_com;
SELECT fifo_init('tstack');
GRANT SELECT ON tstack_id_seq TO role_com;
--------------------------------------------------------------------------------
CREATE TYPE eprimphase AS ENUM ('submit', 'execute');
--------------------------------------------------------------------------------
-- MSG
--------------------------------------------------------------------------------
CREATE TYPE emsgtype AS ENUM ('response', 'exchange');
create table tmsg (
id serial UNIQUE not NULL,
usr dtext default NULL, -- the user receiver of this message
typ emsgtype not NULL,
jso json default NULL,
created timestamp not NULL
);
alter sequence tmsg_id_seq owned by tmsg.id;
SELECT _grant_read('tmsg');
SELECT _grant_read('tmsg_id_seq');
SELECT fifo_init('tmsg');
CREATE VIEW vmsg AS select * from tmsg WHERE usr = session_user;
SELECT _grant_read('vmsg');
--------------------------------------------------------------------------------
CREATE TYPE yj_error AS (
code int,
reason text
);
CREATE TYPE yerrorprim AS (
id int,
error yj_error
);
CREATE TYPE yj_value AS (
qtt int8,
nat text
);
CREATE TYPE yj_stock AS (
id int,
qtt int8,
nat text,
own text,
usr text
);
CREATE TYPE yj_ω AS (
id int,
qtt_prov int8,
qtt_requ int8,
type eordertype
);
CREATE TYPE yj_mvt AS (
id int,
cycle int,
orde yj_ω,
stock yj_stock,
mvt_from yj_stock,
mvt_to yj_stock,
orig int
);
CREATE TYPE yj_order AS (
id int,
error yj_error
);
CREATE TYPE yj_primitive AS (
id int,
error yj_error,
primitive json,
result json,
value json
);
|
olivierch/openBarter
|
9c866db15165ed9806cb9c69f844ae36ccc232cd
|
new test suite added
|
diff --git a/src/Makefile b/src/Makefile
index b64dbac..a8637c5 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -1,38 +1,39 @@
MODULE_big = flowf
OBJS = flowm.o yflow.o yflowparse.o yorder.o flowc.o earthrad.o worker_ob.o
EXTENSION = flowf
DATA = flowf--0.1.sql flowf--unpackaged--0.1.sql
REGRESS = testflow_1 testflow_2 testflow_2a
EXTRA_CLEAN = yflowparse.c yflowscan.c test/results/*.res test/py/*.pyc ../doc/*.pdf ../simu/liquid/data/*
PGXS := $(shell pg_config --pgxs)
include $(PGXS)
yflowparse.o: yflowscan.c
yflowparse.c: yflowparse.y
ifdef BISON
$(BISON) $(BISONFLAGS) -o $@ $<
else
@$(missing) bison $< $@
endif
yflowscan.c: yflowscan.l
ifdef FLEX
$(FLEX) $(FLEXFLAGS) -o'$@' $<
else
@$(missing) flex $< $@
endif
test: installcheck test/py/*.py test/sql/*.sql
cd test; python py/run.py; cd ..
+ cd test; python py/run.py -i -r;cd ..
doc:
soffice --invisible --norestore --convert-to pdf --outdir ../doc ../doc/*.odt
diff --git a/src/test/py/run.py b/src/test/py/run.py
index 6362dc0..48d3c63 100755
--- a/src/test/py/run.py
+++ b/src/test/py/run.py
@@ -1,442 +1,488 @@
# -*- coding: utf-8 -*-
'''
Test framework tu_*
***************************************************************************
execution:
reset_market.sql
submitted:
list of primitives t_*.sql
results:
state of the order book
state of tmsg
comparison expected/obtained
in src/test/
run.py
sql/reset_market.sql
sql/t_*.sql
expected/t_*.res
obtained/t_*.res
loop for each t_*.sql:
reset_market.sql
execute t_*.sql
dump the results into obtained/t_.res
compare expected and obtained
'''
import sys,os,time,logging
import psycopg2
import psycopg2.extensions
import traceback
import srvob_conf
import molet
import utilt
import sys
sys.path = [os.path.abspath(os.path.join(os.path.abspath(__file__),"../../../../simu/liquid"))]+ sys.path
#print sys.path
import distrib
PARLEN=80
prtest = utilt.PrTest(PARLEN,'=')
def get_paths():
curdir = os.path.abspath(__file__)
curdir = os.path.dirname(curdir)
curdir = os.path.dirname(curdir)
sqldir = os.path.join(curdir,'sql')
resultsdir,expecteddir = os.path.join(curdir,'results'),os.path.join(curdir,'expected')
molet.mkdir(resultsdir,ignoreWarning = True)
molet.mkdir(expecteddir,ignoreWarning = True)
tup = (curdir,sqldir,resultsdir,expecteddir)
return tup
def tests_tu(options):
titre_test = "UNDEFINED"
curdir,sqldir,resultsdir,expecteddir = get_paths()
try:
utilt.wait_for_true(srvob_conf.dbBO,0.1,"SELECT value=102,value FROM market.tvar WHERE name='OC_CURRENT_PHASE'",
msg="Waiting for market opening")
except psycopg2.OperationalError,e:
print "Please adjust DB_NAME,DB_USER,DB_PWD,DB_PORT parameters of the file src/test/py/srv_conf.py"
print "The test program could not connect to the market"
exit(1)
if options.test is None:
_fts = [f for f in os.listdir(sqldir) if f.startswith('tu_') and f[-4:]=='.sql']
_fts.sort(lambda x,y: cmp(x,y))
else:
_nt = options.test + '.sql'
_fts = os.path.join(sqldir,_nt)
if not os.path.exists(_fts):
print 'This test \'%s\' was not found' % _fts
return
else:
_fts = [_nt]
_tok,_terr,_num_test = 0,0,0
prtest.title('running tests on database "%s"' % (srvob_conf.DB_NAME,))
#print '='*PARLEN
print ''
print 'Num\tStatus\tName'
print ''
for file_test in _fts: # iteration over the test cases
_nom_test = file_test[:-4]
_terr +=1
_num_test +=1
file_result = file_test[:-4]+'.res'
_fte = os.path.join(resultsdir,file_result)
_fre = os.path.join(expecteddir,file_result)
with open(_fte,'w') as f:
cur = None
dump = utilt.Dumper(srvob_conf.dbBO,options,None)
titre_test = utilt.exec_script(dump,sqldir,'reset_market.sql')
dump = utilt.Dumper(srvob_conf.dbBO,options,f)
titre_test = utilt.exec_script(dump,sqldir,file_test)
utilt.wait_for_true(srvob_conf.dbBO,20,"SELECT market.fstackdone()")
conn = molet.DbData(srvob_conf.dbBO)
try:
with molet.DbCursor(conn) as cur:
dump.torder(cur)
dump.tmsg(cur)
finally:
conn.close()
if(os.path.exists(_fre)):
if(utilt.files_clones(_fte,_fre)):
_tok +=1
_terr -=1
print '%i\tY\t%s\t%s' % (_num_test,_nom_test,titre_test)
else:
print '%i\tN\t%s\t%s' % (_num_test,_nom_test,titre_test)
else:
print '%i\t?\t%s\t%s' % (_num_test,_nom_test,titre_test)
# display global results
print ''
print 'Test status: (Y) expected == results, (N) expected != results, (F) failed, (?) expected undefined'
prtest.line()
if(_terr == 0):
prtest.center('\tAll %i tests passed' % _tok)
else:
prtest.center('\t%i tests KO, %i passed' % (_terr,_tok))
prtest.line()
import random
import csv
import simplejson
import sys
-MAX_ORDER = 100000
+MAX_ORDER = 100
def build_ti(options):
''' build a .sql file with a bump of submit
'''
#conf = srvob_conf.dbBO
curdir,sqldir,resultsdir,expecteddir = get_paths()
_frs = os.path.join(sqldir,'test_ti.csv')
MAX_OWNER = 10
MAX_QLT = 20
QTT_PROV = 10000
prtest.title('generating tests cases for quotes')
def gen(nborders,frs):
for i in range(nborders):
w = random.randint(1,MAX_OWNER)
qlt_prov,qlt_requ = distrib.couple(distrib.uniformQlt,MAX_QLT)
r = random.random()+0.5
qtt_requ = int(QTT_PROV * r)
lb= 'limit' if (random.random()>0.9) else 'best'
frs.writerow(['admin',lb,'w%i'%w,'q%i'%qlt_requ,qtt_requ,'q%i'%qlt_prov,QTT_PROV])
with open(_frs,'w') as f:
spamwriter = csv.writer(f)
gen(MAX_ORDER,spamwriter)
molet.removeFile(os.path.join(expecteddir,'test_ti.res'),ignoreWarning = True)
prtest.center('done, test_ti.res removed')
prtest.line()
def test_ti(options):
_reset,titre_test = options.test_ti_reset,''
curdir,sqldir,resultsdir,expecteddir = get_paths()
prtest.title('running test_ti on database "%s"' % (srvob_conf.DB_NAME,))
dump = utilt.Dumper(srvob_conf.dbBO,options,None)
if _reset:
print '\tReset: Clearing market ...'
titre_test = utilt.exec_script(dump,sqldir,'reset_market.sql')
print '\t\tDone'
fn = os.path.join(sqldir,'test_ti.csv')
if( not os.path.exists(fn)):
raise ValueError('The data %s is not found' % fn)
with open(fn,'r') as f:
+ spamreader = csv.reader(f)
+ values_prov = {}
_nbtest = 0
- for row in f:
+ for row in spamreader:
_nbtest +=1
+ qua_prov,qtt_prov = row[5],row[6]
+ if not qua_prov in values_prov.keys():
+ values_prov[qua_prov] = 0
+ values_prov[qua_prov] = values_prov[qua_prov] + int(qtt_prov)
+
+ #print values_prov
cur_login = None
titre_test = None
inst = utilt.ExecInst(dump)
user = None
fmtorder = "SELECT * from market.fsubmitorder('%s','%s','%s',%s,'%s',%s)"
fmtquote = "SELECT * from market.fsubmitquote('%s','%s','%s',%s,'%s',%s)"
fmtjsosr = '''SELECT jso from market.tmsg
where json_extract_path_text(jso,'id')::int=%i and typ='response' '''
fmtjsose = """SELECT json_extract_path_text(jso,'orde','id')::int id,
sum(json_extract_path_text(jso,'mvt_from','qtt')::bigint) qtt_prov,
sum(json_extract_path_text(jso,'mvt_to','qtt')::bigint) qtt_requ
from market.tmsg
where json_extract_path_text(jso,'orig')::int=%i
and json_extract_path_text(jso,'orde','id')::int=%i
and typ='exchange' group by json_extract_path_text(jso,'orde','id')::int """
'''
the order that produced the exchange has the qualities expected
'''
i= 0
if _reset:
print '\tSubmission: sending a series of %i tests where a random set of arguments' % _nbtest
print '\t\tis used to submit a quote, then an order'
with open(fn,'r') as f:
spamreader = csv.reader(f)
compte = utilt.Delai()
for row in spamreader:
user = row[0]
params = tuple(row[1:])
+
cursor = inst.exe( fmtquote % params,user)
cursor = inst.exe( fmtorder % params,user)
i +=1
if i % 100 == 0:
prtest.progress(i/float(_nbtest))
delai = compte.getSecs()
print '\t\t%i quote & order primitives in %f seconds' % (_nbtest*2,delai)
print '\tExecution: Waiting for end of execution ...'
#utilt.wait_for_true(srvob_conf.dbBO,1000,"SELECT market.fstackdone()",prtest=prtest)
utilt.wait_for_empty_stack(srvob_conf.dbBO,prtest)
delai = compte.getSecs()
print '\t\t Done: mean time per primitive %f seconds' % (delai/(_nbtest*2),)
-
-
-
fmtiter = '''SELECT json_extract_path_text(jso,'id')::int id,json_extract_path_text(jso,'primitive','type') typ
from market.tmsg where typ='response' and json_extract_path_text(jso,'primitive','kind')='quote'
order by id asc limit 10 offset %i'''
i = 0
_notnull,_ko,_limit,_limitko = 0,0,0,0
print '\tChecking: identity of quote result and order result for each %i test cases' % _nbtest
print '\t\tusing the content of market.tmsg'
while True:
cursor = inst.exe( fmtiter % i,user)
vec = []
for re in cursor:
vec.append(re)
l = len(vec)
if l == 0:
break
for idq,_type in vec:
i += 1
if _type == 'limit':
_limit += 1
# result of the quote for idq
_cur = inst.exe(fmtjsosr %idq,user)
res = _cur.fetchone()
res_quote =simplejson.loads(res[0])
expected = res_quote['result']['qtt_give'],res_quote['result']['qtt_reci']
#result of the order for idq+1
_cur = inst.exe(fmtjsose %(idq+1,idq+1),user)
res = _cur.fetchone()
if res is None:
result = 0,0
else:
ido_,qtt_prov_,qtt_reci_ = res
result = qtt_prov_,qtt_reci_
_notnull +=1
if _type == 'limit':
if float(expected[0])/float(expected[1]) < float(qtt_prov_)/float(qtt_reci_):
_limitko +=1
if result != expected:
_ko += 1
print idq,res,res_quote
if i %100 == 0:
prtest.progress(i/float(_nbtest))
'''
if i == 100:
print '\t\t.',
else:
print '.',
sys.stdout.flush()
if(_ko != 0): _errs = ' - %i errors' %_ko
else: _errs = ''
print ('\t\t%i quote & order\t%i quotes returned a result %s' % (i-_ko,_notnull,_errs))
'''
- prtest.title('Results of %i checkings' % i)
- if(_ko == 0 and _limitko == 0):
- print '\tall checks ok'
-
- print '\t\t%i orders returned a result different from the previous quote' % _ko
- print '\t\t%i limit orders returned a result where the limit is not observed' % _limitko
+ _valuesko = check_values(inst,values_prov,user)
+ prtest.title('Results checkings')
+ print ''
+ print '\t\t%i\torders returned a result different from the previous quote' % _ko
+ print '\t\t\twith the same arguments\n'
+ print '\t\t%i\tlimit orders returned a result where the limit is not observed\n' % _limitko
+ print '\t\t%i\tqualities where the quantity is not preserved by the market\n' % _valuesko
+ prtest.line()
+ if(_ko == 0 and _limitko == 0 and _valuesko == 0):
+ prtest.center('\tAll %i tests passed' % i)
+ else:
+ prtest.center('\tSome of %i tests failed' % (i,))
+ prtest.line()
+
inst.close()
return titre_test
+def check_values(inst,values_input,user):
+ '''
+ Values_input is, for each quality, the sum of the quantities submitted to the market
+ Values_remain is, for each quality, the sum of the quantities remaining in the order book
+ Values_output is, for each quality, the sum of the mvt_from.qtt quantities found in tmsg
+ Checks that for each quality q:
+ Values_input[q] == Values_remain[q] + Values_output[q]
+ '''
+ sql = "select (ord).qua_prov,sum((ord).qtt) from market.torder where (ord).oid=(ord).id group by (ord).qua_prov"
+ cursor = inst.exe( sql,user)
+ values_remain = {}
+ for qua_prov,qtt in cursor:
+ values_remain[qua_prov] = qtt
+
+ sql = '''select json_extract_path_text(jso,'mvt_from','nat'),sum(json_extract_path_text(jso,'mvt_from','qtt')::bigint)
+ from market.tmsg where typ='exchange'
+ group by json_extract_path_text(jso,'mvt_from','nat')
+ '''
+ cursor = inst.exe( sql,user)
+ values_output = {}
+ for qua_prov,qtt in cursor:
+ values_output[qua_prov] = qtt
+
+ _errs = 0
+ for qua,vin in values_input.iteritems():
+ vexpect = values_output.get(qua,0)+values_remain.get(qua,0)
+ if vin != vexpect:
+ print qua,vin,values_output.get(qua,0),values_remain.get(qua,0)
+ _errs += 1
+ # print '%i errors'% _errs
+ return _errs
+
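The invariant that `check_values` verifies against the database can be sketched standalone; a minimal Python version with hypothetical quantities (the real test reads `market.torder` and `market.tmsg`):

```python
def conserved(values_input, values_remain, values_output):
    """Return the qualities violating input == remain + output."""
    errs = []
    for qua, vin in values_input.items():
        vexpect = values_remain.get(qua, 0) + values_output.get(qua, 0)
        if vin != vexpect:
            errs.append(qua)
    return errs

# every quantity submitted is either still in the order book or was moved
print(conserved({'q1': 100, 'q2': 50}, {'q1': 30}, {'q1': 70, 'q2': 50}))  # []
```

A non-empty result lists the qualities for which the market lost or created quantity.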
def test_ti_old(options):
curdir,sqldir,resultsdir,expecteddir = get_paths()
prtest.title('running test_ti on database "%s"' % (srvob_conf.DB_NAME,))
dump = utilt.Dumper(srvob_conf.dbBO,options,None)
titre_test = utilt.exec_script(dump,sqldir,'reset_market.sql')
fn = os.path.join(sqldir,'test_ti.csv')
if( not os.path.exists(fn)):
raise ValueError('The data %s is not found' % fn)
cur_login = None
titre_test = None
inst = utilt.ExecInst(dump)
quote = False
with open(fn,'r') as f:
spamreader = csv.reader(f)
i= 0
usr = None
fmtorder = "SELECT * from market.fsubmitorder('%s','%s','%s',%s,'%s',%s)"
fmtquote = "SELECT * from market.fsubmitquote('%s','%s','%s',%s,'%s',%s)"
fmtjsosr = "SELECT jso from market.tmsg where json_extract_path_text(jso,'id')::int=%i and typ='response'"
fmtjsose = """SELECT json_extract_path_text(jso,'orde','id')::int id,
sum(json_extract_path_text(jso,'mvt_from','qtt')::bigint) qtt_prov,
sum(json_extract_path_text(jso,'mvt_to','qtt')::bigint) qtt_requ
from market.tmsg
where json_extract_path_text(jso,'orde','id')::int=%i
and typ='exchange' group by json_extract_path_text(jso,'orde','id')::int """
'''
the order that produced the exchange has the qualities expected
'''
_notnull,_ko = 0,0
for row in spamreader:
i += 1
user = row[0]
params = tuple(row[1:])
cursor = inst.exe( fmtquote % params,user)
idq,err = cursor.fetchone()
if err != '(0,)':
raise ValueError('Quote returned an error "%s"' % err)
utilt.wait_for_true(srvob_conf.dbBO,20,"SELECT market.fstackdone()")
cursor = inst.exe(fmtjsosr %idq,user)
res = cursor.fetchone() # result of the quote
res_quote =simplejson.loads(res[0])
expected = res_quote['result']['qtt_give'],res_quote['result']['qtt_reci']
#print res_quote
#print ''
cursor = inst.exe( fmtorder % params,user)
ido,err = cursor.fetchone()
if err != '(0,)':
raise ValueError('Order returned an error "%s"' % err)
utilt.wait_for_true(srvob_conf.dbBO,20,"SELECT market.fstackdone()")
cursor = inst.exe(fmtjsose %ido,user)
res = cursor.fetchone()
if res is None:
result = 0,0
else:
ido_,qtt_prov_,qtt_reci_ = res
result = qtt_prov_,qtt_reci_
_notnull +=1
if result != expected:
_ko += 1
print qtt_prov_,qtt_reci_,res_quote
if i %100 == 0:
if(_ko != 0): _errs = ' - %i errors' %_ko
else: _errs = ''
print ('\t%i quote & order - %i quotes returned a result %s' % (i-_ko,_notnull,_errs))
if(_ko == 0):
prtest.title(' all %i tests passed' % i)
else:
		prtest.title('%i checked, %i tests failed' % (i,_ko))
inst.close()
return titre_test
'''---------------------------------------------------------------------------
arguments
---------------------------------------------------------------------------'''
from optparse import OptionParser
import os
def main():
#global options
usage = """usage: %prog [options]
tests """
parser = OptionParser(usage)
parser.add_option("-t","--test",action="store",type="string",dest="test",help="test",
default= None)
parser.add_option("-v","--verbose",action="store_true",dest="verbose",help="verbose",default=False)
parser.add_option("-b","--build",action="store_true",dest="build",help="generates random test cases for test_ti",default=False)
parser.add_option("-i","--ti",action="store_true",dest="test_ti",help="execute test_ti",default=False)
parser.add_option("-r","--reset",action="store_true",dest="test_ti_reset",help="clean before execution test_ti",default=False)
(options, args) = parser.parse_args()
# um = os.umask(0177) # u=rw,g=,o=
if options.build:
build_ti(options)
elif options.test_ti:
test_ti(options)
else:
tests_tu(options)
if __name__ == "__main__":
main()
diff --git a/src/test/py/utilt.py b/src/test/py/utilt.py
index 457f86d..e3d2e63 100644
--- a/src/test/py/utilt.py
+++ b/src/test/py/utilt.py
@@ -1,306 +1,306 @@
# -*- coding: utf-8 -*-
import string
import os.path
import time, sys
'''---------------------------------------------------------------------------
---------------------------------------------------------------------------'''
class PrTest(object):
''' results printing '''
def __init__(self,parlen,sep):
self.parlen = parlen+ parlen%2
self.sep = sep
def title(self,title):
_l = len(title)
_p = max(_l%2 +_l,40)
_x = self.parlen -_p
if (_x > 2):
print (_x/2)*self.sep + string.center(title,_p) + (_x/2)*self.sep
else:
			print string.center(title,self.parlen)
def line(self):
print self.parlen*self.sep
def center(self,text):
print string.center(text,self.parlen)
def progress(self,progress):
		# progress() : displays or updates a console progress bar
## Accepts a float between 0 and 1. Any int will be converted to a float.
## A value under 0 represents a 'halt'.
## A value at 1 or bigger represents 100%
- barLength = 20 # Modify this to change the length of the progress bar
+ barLength = 40 # Modify this to change the length of the progress bar
status = ""
if isinstance(progress, int):
progress = float(progress)
if not isinstance(progress, float):
progress = 0
status = "error: progress var must be float\r\n"
if progress < 0:
progress = 0
status = "Halt...\r\n"
if progress >= 1:
progress = 1
status = "Done...\r\n"
block = int(round(barLength*progress))
text = "\r\t\t\t[{0}] {1}% {2}".format( "#"*block + "-"*(barLength-block), progress*100, status)
sys.stdout.write(text)
sys.stdout.flush()
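The bar-rendering arithmetic of `PrTest.progress` can be isolated; a minimal sketch (status text and the carriage-return rewrite omitted), assuming the patched `barLength = 40`:

```python
def render_bar(progress, bar_length=40):
    # clamp to [0, 1] and render like PrTest.progress
    progress = max(0.0, min(1.0, float(progress)))
    block = int(round(bar_length * progress))
    return "[{0}] {1}%".format("#" * block + "-" * (bar_length - block), progress * 100)

print(render_bar(0.5))  # [####################--------------------] 50.0%
```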
'''---------------------------------------------------------------------------
file comparison
---------------------------------------------------------------------------'''
import filecmp
def files_clones(f1,f2):
#res = filecmp.cmp(f1,f2)
return (md5sum(f1) == md5sum(f2))
import hashlib
def md5sum(filename, blocksize=65536):
hash = hashlib.md5()
with open(filename, "r+b") as f:
for block in iter(lambda: f.read(blocksize), ""):
hash.update(block)
return hash.hexdigest()
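`files_clones` reduces file comparison to digest equality; a Python 3 variant of the same block-streaming `md5sum` (the original is Python 2 and uses a `""` sentinel), shown with throwaway temp files:

```python
import hashlib
import os
import tempfile

def md5sum(filename, blocksize=65536):
    # stream the file in blocks so large files need constant memory
    h = hashlib.md5()
    with open(filename, "rb") as f:
        for block in iter(lambda: f.read(blocksize), b""):
            h.update(block)
    return h.hexdigest()

# files_clones: two files are considered clones iff their digests match
fd1, p1 = tempfile.mkstemp()
fd2, p2 = tempfile.mkstemp()
os.write(fd1, b"same content")
os.write(fd2, b"same content")
os.close(fd1)
os.close(fd2)
print(md5sum(p1) == md5sum(p2))  # True
os.unlink(p1)
os.unlink(p2)
```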
'''---------------------------------------------------------------------------
---------------------------------------------------------------------------'''
SEPARATOR = '\n'+'-'*80 +'\n'
import json
class Dumper(object):
def __init__(self,conf,options,fdr):
self.options =options
self.conf = conf
self.fdr = fdr
def getConf(self):
return self.conf
def torder(self,cur):
self.write(SEPARATOR)
self.write('table: torder\n')
cur.execute('SELECT * FROM market.vord order by id asc')
self.cur(cur)
'''
yorder not shown:
pos_requ box, -- box (point(lat,lon),point(lat,lon))
pos_prov box, -- box (point(lat,lon),point(lat,lon))
dist float8,
carre_prov box -- carre_prov @> pos_requ
'''
return
def write(self,txt):
if self.fdr:
self.fdr.write(txt)
def cur(self,cur,_len=10):
#print cur.description
if(cur.description is None): return
#print type(cur)
cols = [e.name for e in cur.description]
row_format = ('{:>'+str(_len)+'}')*len(cols)
self.write(row_format.format(*cols)+'\n')
self.write(row_format.format(*(['+'+'-'*(_len-1)]*len(cols)))+'\n')
for res in cur:
self.write(row_format.format(*res)+'\n')
return
def tmsg(self,cur):
self.write(SEPARATOR)
self.write('table: tmsg')
self.write(SEPARATOR)
cur.execute('SELECT id,typ,usr,jso FROM market.tmsg order by id asc')
for res in cur:
_id,typ,usr,jso = res
_jso = json.loads(jso)
if typ == 'response':
if _jso['error']['code']==0:
_msg = 'Primitive id:%i from %s: OK\n' % (_jso['id'],usr)
else:
_msg = 'Primitive id:%i from %s: ERROR(%i,%s)\n' % (_jso['id'],usr,
_jso['error']['code'],_jso['error']['reason'])
elif typ == 'exchange':
_fmt = '''Cycle id:%i Exchange id:%i for %s @%s:
\t%i:mvt_from %s @%s : %i \'%s\' -> %s @%s
\t%i:mvt_to %s @%s : %i \'%s\' <- %s @%s
\tstock id:%i remaining after exchange: %i \'%s\' \n'''
_dat = (
_jso['cycle'],_jso['id'],_jso['stock']['own'],usr,
_jso['mvt_from']['id'],_jso['stock']['own'],usr,_jso['mvt_from']['qtt'],_jso['mvt_from']['nat'],_jso['mvt_from']['own'],_jso['mvt_from']['usr'],
_jso['mvt_to']['id'], _jso['stock']['own'],usr,_jso['mvt_to']['qtt'],_jso['mvt_to']['nat'],_jso['mvt_to']['own'],_jso['mvt_to']['usr'],
_jso['stock']['id'],_jso['stock']['qtt'],_jso['stock']['nat'])
_msg = _fmt %_dat
else:
_msg = str(res)
self.write('\t%i:'%_id+_msg+'\n')
if self.options.verbose:
print jso
return
'''---------------------------------------------------------------------------
wait until a command returns true with timeout
---------------------------------------------------------------------------'''
import molet
import time
def wait_for_true(conf,delai,sql,msg=None):
_i = 0;
_w = 0;
while True:
_i +=1
conn = molet.DbData(conf)
try:
with molet.DbCursor(conn) as cur:
cur.execute(sql)
r = cur.fetchone()
if r[0] == True:
break
finally:
conn.close()
if msg is None:
pass
elif(_i%10)==0:
print msg
_a = 0.1;
_w += _a;
if _w > delai: # seconds
raise ValueError('After %f seconds, %s != True' % (_w,sql))
time.sleep(_a)
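The polling pattern of `wait_for_true` (re-evaluate a condition every 0.1 s, give up after the deadline) can be sketched without the database connection; `predicate` here is a hypothetical stand-in for the SQL check:

```python
import time

def wait_for_true(predicate, delai, interval=0.1):
    """Poll predicate() until it returns True; raise after delai seconds."""
    waited = 0.0
    while True:
        if predicate():
            return waited
        waited += interval
        if waited > delai:
            raise ValueError('After %f seconds, predicate != True' % waited)
        time.sleep(interval)

state = {'calls': 0}
def ready():
    state['calls'] += 1
    return state['calls'] >= 3

wait_for_true(ready, 5)
print(state['calls'])  # 3
```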
'''---------------------------------------------------------------------------
wait for stack empty
---------------------------------------------------------------------------'''
def wait_for_empty_stack(conf,prtest):
_i = 0;
_w = 0;
sql = "SELECT name,value FROM market.tvar WHERE name in ('STACK_TOP','STACK_EXECUTED')"
while True:
_i +=1
conn = molet.DbData(conf)
try:
with molet.DbCursor(conn) as cur:
cur.execute(sql)
re = {}
for r in cur:
re[r[0]] = r[1]
prtest.progress(float(re['STACK_EXECUTED'])/float(re['STACK_TOP']))
if re['STACK_TOP'] == re['STACK_EXECUTED']:
break
finally:
conn.close()
time.sleep(2)
'''---------------------------------------------------------------------------
executes a script
---------------------------------------------------------------------------'''
def exec_script(dump,dirsql,fn):
fn = os.path.join(dirsql,fn)
if( not os.path.exists(fn)):
raise ValueError('The script %s is not found' % fn)
cur_login = None
titre_test = None
inst = ExecInst(dump)
with open(fn,'r') as f:
for line in f:
line = line.strip()
if len(line) == 0:
continue
dump.write(line+'\n')
if line.startswith('--'):
if titre_test is None:
titre_test = line
elif line.startswith('--USER:'):
cur_login = line[7:].strip()
else:
cursor = inst.exe(line,cur_login)
dump.cur(cursor)
inst.close()
return titre_test
'''---------------------------------------------------------------------------
---------------------------------------------------------------------------'''
class ExecInst(object):
def __init__(self,dump):
self.login = None
self.conn = None
self.cur = None
self.dump = dump
def exe(self,sql,login):
#print login
if self.login != login:
self.close()
if self.conn is None:
self.login = login
_login = None if login == 'admin' else login
self.conn = molet.DbData(self.dump.getConf(),login = _login)
self.cur = self.conn.con.cursor()
# print sql
self.cur.execute(sql)
return self.cur
def close(self):
if not(self.conn is None):
if not(self.cur is None):
self.cur.close()
self.conn.close()
self.conn = None
def execinst(dump,cur_login,sql):
if cur_login == 'admin':
cur_login = None
conn = molet.DbData(dump.getConf(),login = cur_login)
try:
with molet.DbCursor(conn,exit = True) as _cur:
_cur.execute(sql)
dump.cur(_cur)
finally:
conn.close()
'''---------------------------------------------------------------------------
---------------------------------------------------------------------------'''
from datetime import datetime
class Delai(object):
def __init__(self):
self.debut = datetime.now()
def getSecs(self):
return self._duree(self.debut,datetime.now())
def _duree(self,begin,end):
""" returns a float; the number of seconds elapsed between begin and end
"""
if(not isinstance(begin,datetime)): raise ValueError('begin is not datetime object')
if(not isinstance(end,datetime)): raise ValueError('end is not datetime object')
duration = end - begin
secs = duration.days*3600*24 + duration.seconds + duration.microseconds/1000000.
return secs
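The duration formula used by `Delai._duree` converts a `timedelta` into float seconds by hand; the same computation as a standalone sketch:

```python
from datetime import datetime, timedelta

def duree(begin, end):
    # float seconds between two datetimes, mirroring Delai._duree
    d = end - begin
    return d.days * 3600 * 24 + d.seconds + d.microseconds / 1000000.0

t0 = datetime(2024, 1, 1)
t1 = t0 + timedelta(days=1, seconds=5, microseconds=250000)
print(duree(t0, t1))  # 86405.25
```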
diff --git a/src/test/sql/test_ti.csv b/src/test/sql/test_ti.csv
new file mode 100644
index 0000000..ac802a1
--- /dev/null
+++ b/src/test/sql/test_ti.csv
@@ -0,0 +1,100 @@
+admin,best,w3,q15,14134,q8,10000
+admin,best,w9,q11,7155,q18,10000
+admin,best,w1,q9,12435,q2,10000
+admin,best,w8,q16,5280,q11,10000
+admin,best,w10,q20,6215,q19,10000
+admin,best,w1,q4,9360,q5,10000
+admin,limit,w10,q16,8270,q13,10000
+admin,best,w7,q13,7423,q6,10000
+admin,best,w9,q10,14132,q4,10000
+admin,best,w8,q14,11822,q10,10000
+admin,best,w5,q14,10806,q3,10000
+admin,best,w6,q6,12505,q12,10000
+admin,best,w3,q13,11537,q14,10000
+admin,best,w7,q20,8310,q15,10000
+admin,best,w7,q18,12289,q11,10000
+admin,best,w9,q8,8835,q9,10000
+admin,best,w7,q19,14255,q7,10000
+admin,best,w3,q19,5697,q2,10000
+admin,best,w1,q2,12609,q15,10000
+admin,best,w5,q14,12217,q17,10000
+admin,best,w2,q6,8468,q9,10000
+admin,best,w6,q3,10305,q7,10000
+admin,best,w10,q8,7657,q17,10000
+admin,best,w6,q1,12041,q15,10000
+admin,limit,w2,q12,7304,q15,10000
+admin,best,w10,q14,7254,q12,10000
+admin,best,w4,q12,5637,q8,10000
+admin,best,w7,q16,10012,q7,10000
+admin,best,w8,q17,13869,q1,10000
+admin,best,w4,q9,11991,q17,10000
+admin,best,w8,q5,12418,q10,10000
+admin,best,w8,q14,10582,q9,10000
+admin,best,w10,q10,9335,q15,10000
+admin,best,w3,q2,14167,q9,10000
+admin,limit,w8,q11,6731,q12,10000
+admin,limit,w7,q17,13861,q5,10000
+admin,best,w7,q2,5263,q12,10000
+admin,best,w6,q7,10110,q5,10000
+admin,best,w3,q8,12969,q18,10000
+admin,best,w9,q15,5598,q1,10000
+admin,best,w7,q2,8439,q8,10000
+admin,best,w10,q14,5361,q4,10000
+admin,best,w1,q1,11483,q15,10000
+admin,best,w7,q7,14851,q13,10000
+admin,best,w8,q11,6336,q17,10000
+admin,best,w6,q7,14533,q3,10000
+admin,best,w9,q6,10860,q17,10000
+admin,best,w7,q7,7627,q9,10000
+admin,best,w7,q13,11397,q5,10000
+admin,best,w7,q20,8304,q12,10000
+admin,best,w8,q6,7863,q7,10000
+admin,best,w10,q14,14871,q20,10000
+admin,best,w3,q5,9078,q8,10000
+admin,limit,w9,q13,13683,q5,10000
+admin,best,w5,q19,6963,q8,10000
+admin,best,w8,q7,11426,q5,10000
+admin,best,w10,q5,8630,q12,10000
+admin,limit,w5,q12,13669,q16,10000
+admin,best,w9,q15,13361,q14,10000
+admin,best,w6,q5,13350,q19,10000
+admin,best,w3,q11,7049,q19,10000
+admin,best,w7,q18,14711,q16,10000
+admin,best,w9,q3,12892,q7,10000
+admin,best,w4,q12,5563,q3,10000
+admin,best,w5,q19,13842,q18,10000
+admin,best,w6,q14,13732,q6,10000
+admin,limit,w9,q2,9606,q4,10000
+admin,best,w8,q18,14648,q14,10000
+admin,best,w3,q4,6248,q13,10000
+admin,best,w5,q20,12301,q12,10000
+admin,best,w4,q13,8267,q6,10000
+admin,best,w6,q10,11178,q9,10000
+admin,best,w3,q10,9985,q16,10000
+admin,best,w3,q2,10922,q14,10000
+admin,best,w5,q8,12457,q9,10000
+admin,best,w8,q6,7870,q7,10000
+admin,best,w2,q15,6094,q20,10000
+admin,best,w2,q6,8917,q17,10000
+admin,best,w6,q18,5932,q14,10000
+admin,limit,w7,q19,5192,q17,10000
+admin,best,w10,q15,5777,q10,10000
+admin,best,w3,q12,9826,q2,10000
+admin,limit,w2,q15,9059,q1,10000
+admin,best,w3,q17,10201,q1,10000
+admin,best,w9,q10,12408,q12,10000
+admin,limit,w10,q8,11131,q1,10000
+admin,best,w7,q4,11783,q12,10000
+admin,best,w6,q16,7244,q6,10000
+admin,best,w2,q6,8045,q3,10000
+admin,best,w1,q2,8369,q7,10000
+admin,best,w5,q9,8463,q10,10000
+admin,best,w4,q10,11701,q3,10000
+admin,best,w5,q7,5286,q14,10000
+admin,best,w6,q20,10261,q19,10000
+admin,best,w4,q6,10580,q19,10000
+admin,best,w6,q1,7312,q10,10000
+admin,best,w9,q15,6041,q2,10000
+admin,best,w6,q7,7751,q6,10000
+admin,best,w9,q20,7169,q13,10000
+admin,best,w3,q12,6650,q15,10000
diff --git a/src/test/sql/test_ti1.csv b/src/test/sql/test_ti1.csv
new file mode 100644
index 0000000..3a8a02c
--- /dev/null
+++ b/src/test/sql/test_ti1.csv
@@ -0,0 +1,3 @@
+admin,limit,w1,q1,500,q2,10000
+admin,limit,w2,q2,500,q3,10000
+admin,limit,w3,q3,500,q1,10000
\ No newline at end of file
|
olivierch/openBarter
|
375c4de2b790d972ab3ea3b8d5ee9ce67d253b2e
|
corrections
|
diff --git a/src/sql/algo.sql b/src/sql/algo.sql
index bca3d32..27cdad4 100644
--- a/src/sql/algo.sql
+++ b/src/sql/algo.sql
@@ -1,496 +1,497 @@
--------------------------------------------------------------------------------
/* function fcreate_tmp
It is the central query of openbarter
for an order O fcreate_tmp creates a temporary table _tmp of objects.
Each object represents a chain of orders - a flows - going to O.
The table has columns
debut the first order of the path
path the path
fin the end of the path (O)
depth the exploration depth
cycle a boolean true when the path contains the new order
The number of paths fetched is limited to MAXPATHFETCHED
Among those objects representing chains of orders,
only those making a potential exchange (draft) are recorded.
*/
--------------------------------------------------------------------------------
/*
CREATE VIEW vorderinsert AS
SELECT id,yorder_get(id,own,nr,qtt_requ,np,qtt_prov,qtt) as ord,np,nr
FROM torder ORDER BY ((qtt_prov::double precision)/(qtt_requ::double precision)) DESC; */
--------------------------------------------------------------------------------
CREATE FUNCTION fcreate_tmp(_ord yorder) RETURNS int AS $$
DECLARE
_MAXPATHFETCHED int := fgetconst('MAXPATHFETCHED');
_MAXCYCLE int := fgetconst('MAXCYCLE');
_cnt int;
BEGIN
/* the statement LIMIT would not avoid deep exploration if the condition
was specified on Z in the search_backward WHERE condition */
-- fails when qua_prov == qua_requ
IF((_ord).qua_prov = (_ord).qua_requ) THEN
RAISE EXCEPTION 'quality provided and required are the same: %',_ord;
END IF;
CREATE TEMPORARY TABLE _tmp ON COMMIT DROP AS (
SELECT yflow_finish(Z.debut,Z.path,Z.fin) as cycle FROM (
WITH RECURSIVE search_backward(debut,path,fin,depth,cycle) AS(
SELECT _ord,yflow_init(_ord),
_ord,1,false
-- FROM torder WHERE (ord).id= _ordid
UNION ALL
SELECT X.ord,yflow_grow_backward(X.ord,Y.debut,Y.path),
Y.fin,Y.depth+1,yflow_contains_oid((X.ord).oid,Y.path)
FROM torder X,search_backward Y
WHERE yflow_match(X.ord,Y.debut) -- (X.ord).qua_prov=(Y.debut).qua_requ
AND ((X.duration IS NULL) OR ((X.created + X.duration) > clock_timestamp()))
AND Y.depth < _MAXCYCLE
AND NOT cycle
AND (X.ord).carre_prov @> (Y.debut).pos_requ -- use if gist(carre_prov)
AND NOT yflow_contains_oid((X.ord).oid,Y.path)
) SELECT debut,path,fin from search_backward
LIMIT _MAXPATHFETCHED
) Z WHERE /* (Z.fin).qua_prov=(Z.debut).qua_requ
AND */ yflow_match(Z.fin,Z.debut) -- it is a cycle
AND yflow_is_draft(yflow_finish(Z.debut,Z.path,Z.fin)) -- and a draft
);
RETURN 0;
END;
$$ LANGUAGE PLPGSQL;
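The recursive CTE above is a bounded backward graph search: paths are grown head-first by matching each candidate's provided quality against the current head's required quality, and kept when they close into a cycle through the new order. A minimal Python sketch, assuming a hypothetical tuple encoding `(id, qua_requ, qua_prov)` (the real query also checks positions, durations, stock reuse via `oid`, and draft feasibility):

```python
def find_cycles(orders, new_order, max_cycle=8, max_fetched=1024):
    """Grow paths backward from new_order; keep those that close into a
    cycle, i.e. new_order provides what the path's first order requires."""
    frontier = [[new_order]]          # paths ending at new_order
    cycles, fetched = [], 0
    while frontier and fetched < max_fetched:
        path = frontier.pop()
        fetched += 1
        debut = path[0]
        if len(path) > 1 and new_order[2] == debut[1]:
            cycles.append(path)       # yflow_match(fin, debut): it is a cycle
            continue                  # AND NOT cycle: stop growing it
        if len(path) >= max_cycle:    # Y.depth < _MAXCYCLE
            continue
        for o in orders:
            # o matches when it provides what the path's head requires
            if o[2] == debut[1] and o not in path:
                frontier.append([o] + path)
    return cycles

# a two-party barter: O gives pears for apples, A gives apples for pears
print(find_cycles([('A', 'pear', 'apple')], ('O', 'apple', 'pear')))
# [[('A', 'pear', 'apple'), ('O', 'apple', 'pear')]]
```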
--------------------------------------------------------------------------------
-- order unstacked and inserted into torder
/* if the referenced oid is found,
	the order is inserted, and the process is launched
else a movement is created
*/
--------------------------------------------------------------------------------
CREATE TYPE yresflow AS (
mvts int[], -- list of id des mvts
qtts int8[], -- value.qtt moved
nats text[], -- value.nat moved
grp int,
owns text[],
usrs text[],
ords yorder[]
);
--------------------------------------------------------------------------------
CREATE FUNCTION insertorder(_owner dtext,_o yorder,_usr dtext,_created timestamp,_duration interval) RETURNS int AS $$
DECLARE
_fmvtids int[];
-- _first_mvt int;
-- _err int;
--_flows json[]:= ARRAY[]::json[];
_cyclemax yflow;
-- _mvts int[];
_res int8[];
_MAXMVTPERTRANS int := fgetconst('MAXMVTPERTRANS');
_nbmvts int := 0;
_qtt_give int8 := 0;
_qtt_reci int8 := 0;
_cnt int;
_resflow yresflow;
BEGIN
lock table torder in share update exclusive mode NOWAIT;
	-- immediately aborts the order if the lock cannot be acquired
INSERT INTO torder(usr,own,ord,created,updated,duration) VALUES (_usr,_owner,_o,_created,NULL,_duration);
_fmvtids := ARRAY[]::int[];
-- _time_begin := clock_timestamp();
_cnt := fcreate_tmp(_o);
-- RAISE WARNING 'insertbarter A % %',_o,_cnt;
LOOP
SELECT yflow_max(cycle) INTO _cyclemax FROM _tmp WHERE yflow_is_draft(cycle);
IF(NOT yflow_is_draft(_cyclemax)) THEN
EXIT; -- from LOOP
END IF;
-- RAISE WARNING 'insertbarter B %',_cyclemax;
_nbmvts := _nbmvts + yflow_dim(_cyclemax);
IF(_nbmvts > _MAXMVTPERTRANS) THEN
EXIT;
END IF;
_resflow := fexecute_flow(_cyclemax);
- _cnt := foncreatecycle(_resflow);
+ _cnt := foncreatecycle(_o,_resflow);
_fmvtids := _fmvtids || _resflow.mvts;
_res := yflow_qtts(_cyclemax);
_qtt_reci := _qtt_reci + _res[1];
_qtt_give := _qtt_give + _res[2];
UPDATE _tmp SET cycle = yflow_reduce(cycle,_cyclemax,false);
DELETE FROM _tmp WHERE NOT yflow_is_draft(cycle);
END LOOP;
-- RAISE WARNING 'insertbarter C % % % % %',_qtt_give,_qtt_reci,_o.qtt_prov,_o.qtt_requ,_fmvtids;
IF ( (_qtt_give != 0) AND ((_o.type & 3) = 1) -- ORDER_LIMIT
AND ((_qtt_give::double precision) /(_qtt_reci::double precision)) >
((_o.qtt_prov::double precision) /(_o.qtt_requ::double precision))
) THEN
RAISE EXCEPTION 'pb: Omega of the flows obtained is not limited by the order limit' USING ERRCODE='YA003';
END IF;
-- set the number of movements in this transaction
-- UPDATE tmvt SET nbt= array_length(_fmvtids,1) WHERE id = ANY (_fmvtids);
RETURN 0;
END;
$$ LANGUAGE PLPGSQL;
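The sanity check at the end of `insertorder` enforces the limit-order invariant: the realized rate `qtt_give/qtt_reci` must not exceed the rate `qtt_prov/qtt_requ` declared by the order. A minimal Python restatement of that ratio test:

```python
def limit_respected(qtt_give, qtt_reci, qtt_prov, qtt_requ):
    # the realized rate must not exceed the rate declared by the limit order
    return (float(qtt_give) / float(qtt_reci)) <= (float(qtt_prov) / float(qtt_requ))

print(limit_respected(5, 10, 1, 2))  # True: realized 0.5 <= declared 0.5
print(limit_respected(6, 10, 1, 2))  # False: realized 0.6 > declared 0.5
```

When the check fails, the PL/pgSQL code raises the `YA003` exception rather than committing the exchange.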
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
-CREATE FUNCTION foncreatecycle(_r yresflow) RETURNS int AS $$
+CREATE FUNCTION foncreatecycle(_orig yorder,_r yresflow) RETURNS int AS $$
DECLARE
_usr_src text;
_cnt int;
_i int;
_nbcommit int;
_iprev int;
_inext int;
_o yorder;
BEGIN
_nbcommit := array_length(_r.ords,1);
_i := _nbcommit;
_iprev := _i -1;
FOR _inext IN 1.._nbcommit LOOP
_usr_src := _r.usrs[_i];
_o := _r.ords[_i];
INSERT INTO tmsg (typ,jso,usr,created) VALUES (
'exchange',
row_to_json(ROW(
_r.mvts[_i],
_r.grp,
ROW( -- order
_o.id,
_o.qtt_prov,
_o.qtt_requ,
CASE WHEN _o.type&3 =1 THEN 'limit' ELSE 'best' END
				)::yj_order,
ROW( -- stock
_o.oid,
_o.qtt,
_r.nats[_i],
_r.owns[_i],
_r.usrs[_i]
)::yj_stock,
ROW( -- mvt_from
_r.mvts[_i],
_r.qtts[_i],
_r.nats[_i],
_r.owns[_inext],
_r.usrs[_inext]
)::yj_stock,
ROW( --mvt_to
_r.mvts[_iprev],
_r.qtts[_iprev],
_r.nats[_iprev],
_r.owns[_iprev],
_r.usrs[_iprev]
- )::yj_stock
+ )::yj_stock,
+ _orig.id -- orig
)::yj_mvt),
_usr_src,statement_timestamp());
_iprev := _i;
_i := _inext;
END LOOP;
RETURN 0;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
/* fexecute_flow used for a barter
from a flow representing a draft, for each order:
inserts a new movement
updates the order book
*/
--------------------------------------------------------------------------------
CREATE FUNCTION fexecute_flow(_flw yflow) RETURNS yresflow AS $$
DECLARE
_i int;
_next_i int;
_prev_i int;
_nbcommit int;
_first_mvt int;
_exhausted boolean;
_mvtexhausted boolean;
_cntExhausted int;
_mvt_id int;
_cnt int;
_resflow yresflow;
--_mvts int[];
--_oids int[];
_qtt int8;
_flowr int8;
_qttmin int8;
_qttmax int8;
_o yorder;
_usr text;
_usrnext text;
-- _own text;
-- _ownnext text;
-- _idownnext int;
-- _pidnext int;
_or torder%rowtype;
_mat int8[][];
_om_exp double precision;
_om_rea double precision;
BEGIN
_nbcommit := yflow_dim(_flw);
-- sanity check
IF( _nbcommit <2 ) THEN
RAISE EXCEPTION 'the flow should be draft:_nbcommit = %',_nbcommit
USING ERRCODE='YA003';
END IF;
_first_mvt := NULL;
_exhausted := false;
-- _resx.nbc := _nbcommit;
_resflow.mvts := ARRAY[]::int[];
_resflow.qtts := ARRAY[]::int8[];
_resflow.nats := ARRAY[]::text[];
_resflow.owns := ARRAY[]::text[];
_resflow.usrs := ARRAY[]::text[];
_resflow.ords := ARRAY[]::yorder[];
_mat := yflow_to_matrix(_flw);
_i := _nbcommit;
_prev_i := _i - 1;
FOR _next_i IN 1 .. _nbcommit LOOP
------------------------------------------------------------------------
_o.id := _mat[_i][1];
_o.own := _mat[_i][2];
_o.oid := _mat[_i][3];
_o.qtt := _mat[_i][6];
_flowr := _mat[_i][7];
-- _idownnext := _mat[_next_i][2];
-- _pidnext := _mat[_next_i][3];
-- sanity check
SELECT count(*),min((ord).qtt),max((ord).qtt) INTO _cnt,_qttmin,_qttmax
FROM torder WHERE (ord).oid = _o.oid;
IF(_cnt = 0) THEN
RAISE EXCEPTION 'the stock % expected does not exist',_o.oid USING ERRCODE='YU002';
END IF;
IF( _qttmin != _qttmax ) THEN
RAISE EXCEPTION 'the value of stock % is not the same value for all orders',_o.oid USING ERRCODE='YU002';
END IF;
_cntExhausted := 0;
_mvtexhausted := false;
IF( _qttmin < _flowr ) THEN
RAISE EXCEPTION 'the stock % is smaller than the flow (% < %)',_o.oid,_qttmin,_flowr USING ERRCODE='YU002';
ELSIF (_qttmin = _flowr) THEN
_cntExhausted := _cnt;
_exhausted := true;
_mvtexhausted := true;
END IF;
-- update all stocks of the order book
UPDATE torder SET ord.qtt = (ord).qtt - _flowr ,updated = statement_timestamp()
WHERE (ord).oid = _o.oid;
GET DIAGNOSTICS _cnt = ROW_COUNT;
IF(_cnt = 0) THEN
RAISE EXCEPTION 'no orders with the stock % exist',_o.oid USING ERRCODE='YU002';
END IF;
SELECT * INTO _or FROM torder WHERE (ord).id = _o.id LIMIT 1; -- child order
-- RAISE WARNING 'ici %',_or.ord;
_om_exp := (((_or.ord).qtt_prov)::double precision) / (((_or.ord).qtt_requ)::double precision);
_om_rea := ((_flowr)::double precision) / ((_mat[_prev_i][7])::double precision);
/*
SELECT name INTO STRICT _ownnext FROM towner WHERE id=_idownnext;
SELECT name INTO STRICT _own FROM towner WHERE id=_o.own;
SELECT usr INTO STRICT _usrnext FROM torder WHERE (ord).id=_pidnext;
INSERT INTO tmvt (nbc,nbt,grp,
xid,usr_src,usr_dst,xoid,own_src,own_dst,qtt,nat,ack,
exhausted,order_created,created,om_exp,om_rea)
VALUES(_nbcommit,1,_first_mvt,
_o.id,_or.usr,_usrnext,_o.oid,_own,_ownnext,_flowr,(_or.ord).qua_prov,_cycleack,
_mvtexhausted,_or.created,statement_timestamp(),_om_exp,_om_rea)
RETURNING id INTO _mvt_id;
*/
SELECT nextval('tmvt_id_seq') INTO _mvt_id;
IF(_first_mvt IS NULL) THEN
_first_mvt := _mvt_id;
_resflow.grp := _mvt_id;
-- _resx.first_mvt := _mvt_id;
-- UPDATE tmvt SET grp = _first_mvt WHERE id = _first_mvt;
END IF;
_resflow.mvts := array_append(_resflow.mvts,_mvt_id);
_resflow.qtts := array_append(_resflow.qtts,_flowr);
_resflow.nats := array_append(_resflow.nats,(_or.ord).qua_prov);
_resflow.owns := array_append(_resflow.owns,_or.own::text);
_resflow.usrs := array_append(_resflow.usrs,_or.usr::text);
_resflow.ords := array_append(_resflow.ords,_or.ord);
_prev_i := _i;
_i := _next_i;
------------------------------------------------------------------------
END LOOP;
IF( NOT _exhausted ) THEN
-- some order should be exhausted
RAISE EXCEPTION 'the cycle should exhaust some order'
USING ERRCODE='YA003';
END IF;
RETURN _resflow;
END;
$$ LANGUAGE PLPGSQL;
CREATE TYPE yr_quote AS (
qtt_reci int8,
qtt_give int8
);
--------------------------------------------------------------------------------
-- quote execution at the output of the stack
--------------------------------------------------------------------------------
CREATE FUNCTION fproducequote(_ord yorder,_isquote boolean,_isnoqttlimit boolean,_islimit boolean,_isignoreomega boolean)
/*
_isquote := true;
it can be a quote or a prequote
_isnoqttlimit := false;
when true the quantity provided is not limited by the stock available
_islimit:= (_t.jso->'type')='limit';
type of the quoted order
_isignoreomega := -- (_t.type & 8) = 8
*/
RETURNS json AS $$
DECLARE
_cnt int;
_cyclemax yflow;
_cycle yflow;
_res int8[];
_firstloop boolean := true;
_freezeOmega boolean;
_mid int;
_nbmvts int;
_wid int;
_MAXMVTPERTRANS int := fgetconst('MAXMVTPERTRANS');
_barter text;
_paths text;
_qtt_reci int8 := 0;
_qtt_give int8 := 0;
_qtt_prov int8 := 0;
_qtt_requ int8 := 0;
_qtt int8 := 0;
_resjso json;
BEGIN
_cnt := fcreate_tmp(_ord);
_nbmvts := 0;
_paths := '';
LOOP
SELECT yflow_max(cycle) INTO _cyclemax FROM _tmp WHERE yflow_is_draft(cycle);
IF(NOT yflow_is_draft(_cyclemax)) THEN
EXIT; -- from LOOP
END IF;
_nbmvts := _nbmvts + yflow_dim(_cyclemax);
IF(_nbmvts > _MAXMVTPERTRANS) THEN
EXIT;
END IF;
_res := yflow_qtts(_cyclemax);
_qtt_reci := _qtt_reci + _res[1];
_qtt_give := _qtt_give + _res[2];
IF(_isquote) THEN
IF(_firstloop) THEN
_qtt_requ := _res[3];
_qtt_prov := _res[4];
END IF;
IF(_isnoqttlimit) THEN
_qtt := _qtt + _res[5];
ELSE
IF(_firstloop) THEN
_qtt := _res[5];
END IF;
END IF;
END IF;
-- for a PREQUOTE they remain 0
_freezeOmega := _firstloop AND _isignoreomega AND _isquote;
/* yflow_reduce:
for all orders except for node with NOQTTLIMIT:
qtt = qtt -flowr
for the last order, if is IGNOREOMEGA:
- omega is set:
_cycle[last].qtt_requ,_cycle[last].qtt_prov
:= _cyclemax[last].qtt_requ,_cyclemax[last].qtt_prov
- if _freezeOmega the IGNOREOMEGA is reset
*/
UPDATE _tmp SET cycle = yflow_reduce(cycle,_cyclemax,_freezeOmega);
DELETE FROM _tmp WHERE NOT yflow_is_draft(cycle);
_firstloop := false;
END LOOP;
IF ( (_qtt_requ != 0)
AND _islimit AND _isquote
AND ((_qtt_give::double precision) /(_qtt_reci::double precision)) >
((_qtt_prov::double precision) /(_qtt_requ::double precision))
) THEN
RAISE EXCEPTION 'pq: Omega of the flows obtained is not limited by the order limit' USING ERRCODE='YA003';
END IF;
_resjso := row_to_json(ROW(_qtt_reci,_qtt_give)::yr_quote);
RETURN _resjso;
END;
$$ LANGUAGE PLPGSQL;
diff --git a/src/sql/prims.sql b/src/sql/prims.sql
index 6560183..f8a2d77 100644
--- a/src/sql/prims.sql
+++ b/src/sql/prims.sql
@@ -1,491 +1,491 @@
--------------------------------------------------------------------------------
-- check params
-- code in [-9,0]
--------------------------------------------------------------------------------
CREATE FUNCTION
fcheckquaown(_r yj_error,_own dtext,_qua_requ dtext,_pos_requ point,_qua_prov dtext,_pos_prov point,_dist float8)
RETURNS yj_error AS $$
DECLARE
_r yj_error;
_i int;
BEGIN
IF(NOT ((yflow_checktxt(_own)&1)=1)) THEN
_r.reason := '_own is empty string';
_r.code := -1;
RETURN _r;
END IF;
IF(_qua_prov IS NULL) THEN
IF(NOT ((yflow_checktxt(_qua_requ)&1)=1)) THEN
_r.reason := '_qua_requ is empty string';
_r.code := -2;
RETURN _r;
END IF;
ELSE
IF(_qua_prov = _qua_requ) THEN
_r.reason := 'qua_prov == qua_requ';
_r.code := -3;
return _r;
END IF;
_i = yflow_checkquaownpos(_own,_qua_requ,_pos_requ,_qua_prov,_pos_prov,_dist);
IF (_i != 0) THEN
_r.reason := 'rejected by yflow_checkquaownpos';
_r.code := _i; -- -9<=i<=-5
return _r;
END IF;
END IF;
RETURN _r;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
CREATE FUNCTION fcheckquaprovusr(_r yj_error,_qua_prov dtext,_usr dtext) RETURNS yj_error AS $$
DECLARE
_QUAPROVUSR boolean := fgetconst('QUAPROVUSR')=1;
_p int;
_suffix text;
BEGIN
IF (NOT _QUAPROVUSR) THEN
RETURN _r;
END IF;
_p := position('@' IN _qua_prov);
IF (_p = 0) THEN
-- without prefix, it should be a currency
SELECT count(*) INTO _p FROM tcurrency WHERE _qua_prov = name;
IF (_p = 1) THEN
RETURN _r;
ELSE
_r.code := -12;
_r.reason := 'the quality provided that is not a currency must be prefixed';
RETURN _r;
END IF;
END IF;
-- with prefix
IF (char_length(substring(_qua_prov FROM 1 FOR (_p-1))) <1) THEN
_r.code := -13;
_r.reason := 'the prefix of the quality provided cannot be empty';
RETURN _r;
END IF;
_suffix := substring(_qua_prov FROM (_p+1));
_suffix := replace(_suffix,'.','_'); -- change . to _
-- it must be the username
IF ( _suffix!= _usr) THEN
_r.code := -14;
		_r.reason := 'the prefix of the quality provided must be the user name';
RETURN _r;
END IF;
RETURN _r;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
CREATE FUNCTION fchecknameowner(_r yj_error,_name dtext,_usr dtext) RETURNS yj_error AS $$
DECLARE
_p int;
_OWNUSR boolean := fgetconst('OWNUSR')=1;
_suffix text;
BEGIN
IF (NOT _OWNUSR) THEN
RETURN _r;
END IF;
_p := position('@' IN _name);
IF (char_length(substring(_name FROM 1 FOR (_p-1))) <1) THEN
_r.code := -20;
_r.reason := 'the owner name has an empty prefix';
RETURN _r;
END IF;
_suffix := substring(_name FROM (_p+1));
SELECT count(*) INTO _p FROM townauth WHERE _suffix = name;
IF (_p = 1) THEN
RETURN _r; --well known auth provider
END IF;
-- change . to _
_suffix := replace(_suffix,'.','_');
IF (_suffix = _usr) THEN
RETURN _r; -- owner's name suffixed by the user's name
END IF;
_r.code := -21;
_r.reason := 'if the owner name is not suffixed by a well-known auth provider, it must be suffixed by the user name';
RETURN _r;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
-- order primitive
--------------------------------------------------------------------------------
CREATE TYPE yp_order AS (
kind eprimitivetype,
type eordertype,
owner dtext,
qua_requ dtext,
qtt_requ dqtt,
qua_prov dtext,
qtt_prov dqtt
);
CREATE FUNCTION fsubmitorder(_type eordertype,_owner dtext,_qua_requ dtext,_qtt_requ dqtt,_qua_prov dtext,_qtt_prov dqtt)
RETURNS yerrorprim AS $$
DECLARE
_res yj_primitive;
_prim yp_order;
BEGIN
_prim := ROW('order',_type,_owner,_qua_requ,_qtt_requ,_qua_prov,_qtt_prov)::yp_order;
_res := fprocessorder('submit',NULL,_prim);
RETURN ROW(_res.id,_res.error)::yerrorprim;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION fsubmitorder(eordertype,dtext,dtext,dqtt,dtext,dqtt) TO role_co;
--------------------------------------------------------------------------------
CREATE FUNCTION fprocessorder(_phase eprimphase, _t tstack,_s yp_order)
RETURNS yj_primitive AS $$
DECLARE
_r yj_error;
_res yj_primitive;
_wid int;
_ir int;
_o yorder;
BEGIN
_r := ROW(0,NULL)::yj_error; -- code,reason
CASE
WHEN (_phase = 'submit') THEN
_r := fchecknameowner(_r,_s.owner,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_r := fcheckquaprovusr(_r,_s.qua_prov,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_res := fpushprimitive(_r,'order',row_to_json(_s));
RETURN _res;
WHEN (_phase = 'execute') THEN
/*
IF(
(_s.duration IS NOT NULL) AND (_s.submitted + _s.duration) < clock_timestamp()
) THEN
_r.reason := 'barter order - the order is too old';
_r.code := -19;
END IF; */
_wid := fgetowner(_s.owner);
_o := ROW(CASE WHEN _s.type='limit' THEN 1 ELSE 2 END,
_t.id,_wid,_t.id,
_s.qtt_requ,_s.qua_requ,_s.qtt_prov,_s.qua_prov,_s.qtt_prov,
box('(0,0)'::point,'(0,0)'::point),box('(0,0)'::point,'(0,0)'::point),
0.0,earth_get_square('(0,0)'::point,0.0)
)::yorder;
_ir := insertorder(_s.owner,_o,_t.usr,_t.submitted,'1 day');
- RETURN ROW(_t.id,NULL,_t.jso,
+ RETURN ROW(_t.id,_r,_t.jso,
row_to_json(ROW(_o.id,_o.qtt,_o.qua_prov,_s.owner,_t.usr)::yj_stock),
NULL
)::yj_primitive;
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
-- child order primitive
--------------------------------------------------------------------------------
CREATE TYPE yp_childorder AS (
kind eprimitivetype,
owner dtext,
qua_requ dtext,
qtt_requ dqtt,
stock_id int
);
CREATE FUNCTION fsubmitchildorder(_owner dtext,_qua_requ dtext,_qtt_requ dqtt,_stock_id int)
RETURNS yerrorprim AS $$
DECLARE
_res yj_primitive;
_prim yp_childorder;
BEGIN
_prim := ROW('childorder',_owner,_qua_requ,_qtt_requ,_stock_id)::yp_childorder;
_res := fprocesschildorder('submit',NULL,_prim);
RETURN ROW(_res.id,_res.error)::yerrorprim;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION fsubmitchildorder(dtext,dtext,dqtt,int) TO role_co;
--------------------------------------------------------------------------------
CREATE FUNCTION fprocesschildorder(_phase eprimphase, _t tstack,_s yp_childorder)
RETURNS yj_primitive AS $$
DECLARE
_r yj_error;
_res yj_primitive;
_wid int;
_otype int;
_ir int;
_o yorder;
_op torder%rowtype;
_sp tstack%rowtype;
BEGIN
_r := ROW(0,NULL)::yj_error; -- code,reason
CASE
WHEN (_phase = 'submit') THEN
_r := fchecknameowner(_r,_s.owner,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_r := fcheckquaprovusr(_r,_s.qua_requ,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_wid := fgetowner(_s.owner);
SELECT * INTO _op FROM torder WHERE
(ord).id = _s.stock_id AND usr = session_user AND (ord).own = _wid;
IF (NOT FOUND) THEN
/* could be found in the stack */
SELECT * INTO _sp FROM tstack WHERE
id = _s.stock_id AND usr = session_user AND _s.owner = jso->>'owner' AND kind='order';
IF (NOT FOUND) THEN
_r.code := -200;
_r.reason := 'the order was not found for this user and owner';
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
END IF;
_res := fpushprimitive(_r,'childorder',row_to_json(_s));
RETURN _res;
WHEN (_phase = 'execute') THEN
_wid := fgetowner(_s.owner);
SELECT * INTO _op FROM torder WHERE
(ord).id = _s.stock_id AND usr = session_user AND (ord).own = _wid;
IF (NOT FOUND) THEN
_r.code := -201;
_r.reason := 'the stock is not in the order book';
RETURN ROW(_t.id,_r,_t.jso,NULL,NULL)::yj_primitive;
END IF;
_o := _op.ord;
_o.id := _t.id;
_o.qua_requ := _s.qua_requ;
_o.qtt_requ := _s.qtt_requ;
_ir := insertorder(_s.owner,_o,_t.usr,_t.submitted,_op.duration);
- RETURN ROW(_t.id,NULL,_t.jso,NULL,NULL)::yj_primitive;
+ RETURN ROW(_t.id,_r,_t.jso,NULL,NULL)::yj_primitive;
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
-- rm primitive
--------------------------------------------------------------------------------
CREATE TYPE yp_rmorder AS (
kind eprimitivetype,
owner dtext,
stock_id int
);
CREATE FUNCTION fsubmitrmorder(_owner dtext,_stock_id int)
RETURNS yerrorprim AS $$
DECLARE
_res yj_primitive;
_prim yp_rmorder;
BEGIN
_prim := ROW('rmorder',_owner,_stock_id)::yp_rmorder;
_res := fprocessrmorder('submit',NULL,_prim);
RETURN ROW(_res.id,_res.error)::yerrorprim;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION fsubmitrmorder(dtext,int) TO role_co,role_bo;
--------------------------------------------------------------------------------
CREATE FUNCTION fprocessrmorder(_phase eprimphase, _t tstack,_s yp_rmorder)
RETURNS yj_primitive AS $$
DECLARE
_r yj_error;
_res yj_primitive;
_wid int;
_otype int;
_ir int;
_o yorder;
_opy yorder; -- parent_order
_op torder%rowtype;
_te text;
_pusr text;
_sp tstack%rowtype;
BEGIN
_r := ROW(0,NULL)::yj_error; -- code,reason
CASE
WHEN (_phase = 'submit') THEN
_r := fchecknameowner(_r,_s.owner,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_wid := fgetowner(_s.owner);
SELECT * INTO _op FROM torder WHERE
(ord).id = _s.stock_id AND usr = session_user AND (ord).own = _wid AND (ord).id=(ord).oid;
IF (NOT FOUND) THEN
/* could be found in the stack */
SELECT * INTO _sp FROM tstack WHERE
id = _s.stock_id AND usr = session_user AND _s.owner = jso->>'owner' AND kind='order' AND (ord).id=(ord).oid;
IF (NOT FOUND) THEN
_r.code := -300;
_r.reason := 'the order was not found for this user and owner';
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
END IF;
_res := fpushprimitive(_r,'rmorder',row_to_json(_s));
RETURN _res;
WHEN (_phase = 'execute') THEN
_wid := fgetowner(_s.owner);
SELECT * INTO _op FROM torder WHERE
(ord).id = _s.stock_id AND usr = session_user AND (ord).own = _wid AND (ord).id=(ord).oid;
IF (NOT FOUND) THEN
_r.code := -301;
_r.reason := 'the stock is not in the order book';
- RETURN ROW(_t.id,_r,_t.json,NULL,NULL)::yj_primitive;
+ RETURN ROW(_t.id,_r,_t.jso,NULL,NULL)::yj_primitive;
END IF;
-- delete order and sub-orders from the book
DELETE FROM torder o WHERE (o.ord).oid = (_op.ord).oid;
-- id,error,primitive,result
- RETURN ROW(_t.id,NULL,_t.json,
+ RETURN ROW(_t.id,_r,_t.jso,
ROW((_op.ord).id,(_op.ord).qtt,(_op.ord).qua_prov,_s.owner,_op.usr)::yj_stock,
ROW((_op.ord).qua_prov,(_op.ord).qtt)::yj_value
)::yj_primitive;
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
-- quote
--------------------------------------------------------------------------------
CREATE FUNCTION fsubmitquote(_type eordertype,_owner dtext,_qua_requ dtext,_qtt_requ dqtt,_qua_prov dtext,_qtt_prov dqtt)
RETURNS yerrorprim AS $$
DECLARE
_res yj_primitive;
_prim yp_order;
BEGIN
_prim := ROW('quote',_type,_owner,_qua_requ,_qtt_requ,_qua_prov,_qtt_prov)::yp_order;
_res := fprocessquote('submit',NULL,_prim);
RETURN ROW(_res.id,_res.error)::yerrorprim;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION fsubmitquote(eordertype,dtext,dtext,dqtt,dtext,dqtt) TO role_co;
--------------------------------------------------------------------------------
CREATE FUNCTION fprocessquote(_phase eprimphase, _t tstack,_s yp_order)
RETURNS yj_primitive AS $$
DECLARE
_r yj_error;
_res yj_primitive;
_wid int;
_ir int;
_o yorder;
_type int;
_json_res json;
BEGIN
_r := ROW(0,NULL)::yj_error; -- code,reason
CASE
WHEN (_phase = 'submit') THEN
_r := fchecknameowner(_r,_s.owner,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_r := fcheckquaprovusr(_r,_s.qua_prov,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_res := fpushprimitive(_r,'quote',row_to_json(_s));
RETURN _res;
WHEN (_phase = 'execute') THEN
_wid := fgetowner(_s.owner);
_type := CASE WHEN _s.type='limit' THEN 1 ELSE 2 END;
_o := ROW( _type,
_t.id,_wid,_t.id,
_s.qtt_requ,_s.qua_requ,_s.qtt_prov,_s.qua_prov,_s.qtt_prov,
box('(0,0)'::point,'(0,0)'::point),box('(0,0)'::point,'(0,0)'::point),
- 0.0,earth_get_square(box('(0,0)'::point,0.0))
+ 0.0,earth_get_square('(0,0)'::point,0.0)
)::yorder;
/*fproducequote(_ord yorder,_isquote boolean,_isnoqttlimit boolean,_islimit boolean,_isignoreomega boolean)
*/
_json_res := fproducequote(_o,true,false,_s.type='limit',false);
- RETURN ROW(_t.id,NULL,_t.json,_tx,NULL,NULL)::yj_primitive;
+ RETURN ROW(_t.id,_r,_t.jso,_json_res,NULL)::yj_primitive;
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
-- primitive processing
--------------------------------------------------------------------------------
CREATE FUNCTION fprocessprimitive(_phase eprimphase, _s tstack)
RETURNS yj_primitive AS $$
DECLARE
_res yj_primitive;
_kind eprimitivetype;
BEGIN
_kind := _s.kind;
CASE
WHEN (_kind = 'order' ) THEN
_res := fprocessorder(_phase,_s,json_populate_record(NULL::yp_order,_s.jso));
WHEN (_kind = 'childorder' ) THEN
_res := fprocesschildorder(_phase,_s,json_populate_record(NULL::yp_childorder,_s.jso));
WHEN (_kind = 'rmorder' ) THEN
_res := fprocessrmorder(_phase,_s,json_populate_record(NULL::yp_rmorder,_s.jso));
WHEN (_kind = 'quote' ) THEN
_res := fprocessquote(_phase,_s,json_populate_record(NULL::yp_order,_s.jso));
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
RETURN _res;
END;
$$ LANGUAGE PLPGSQL;
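fprocessprimitive above is a plain kind-based dispatcher: the stacked row carries an eprimitivetype and a JSON payload, and each kind is routed to its handler in either the 'submit' or 'execute' phase. A minimal Python sketch of that routing (handler bodies are placeholders, not the real functions):

```python
def process_primitive(phase, kind, payload, handlers):
    # handlers maps a primitive kind to a callable taking (phase, payload)
    if kind not in handlers:
        raise ValueError('Should not reach this point')
    return handlers[kind](phase, payload)

# placeholder handlers, one per primitive kind dispatched by the stack
HANDLERS = {k: (lambda phase, payload, k=k: (k, phase))
            for k in ('order', 'childorder', 'rmorder', 'quote')}
```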
diff --git a/src/sql/tables.sql b/src/sql/tables.sql
index 85ccd07..f25c783 100644
--- a/src/sql/tables.sql
+++ b/src/sql/tables.sql
@@ -1,354 +1,355 @@
--------------------------------------------------------------------------------
ALTER DEFAULT PRIVILEGES REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC;
ALTER DEFAULT PRIVILEGES REVOKE ALL ON TABLES FROM PUBLIC;
ALTER DEFAULT PRIVILEGES REVOKE ALL ON SEQUENCES FROM PUBLIC;
ALTER DEFAULT PRIVILEGES REVOKE ALL ON TYPES FROM PUBLIC;
--------------------------------------------------------------------------------
create domain dqtt AS int8 check( VALUE>0);
create domain dtext AS text check( char_length(VALUE)>0);
--------------------------------------------------------------------------------
-- main constants of the model
--------------------------------------------------------------------------------
create table tconst(
name dtext UNIQUE not NULL,
value int,
PRIMARY KEY (name)
);
GRANT SELECT ON tconst TO role_com;
--------------------------------------------------------------------------------
/* for booleans, 0 == false and !=0 == true
*/
INSERT INTO tconst (name,value) VALUES
('MAXCYCLE',64), -- must be less than yflow_get_maxdim()
('MAXPATHFETCHED',1024),-- maximum depth of the graph exploration
('MAXMVTPERTRANS',128), -- maximum number of movements per transaction
-- if this limit is reached, next cycles are not performed but all others
-- are included in the current transaction
('VERSION-X',2),('VERSION-Y',0),('VERSION-Z',2),
('OWNERINSERT',1), -- boolean when true, owner inserted when not found
('QUAPROVUSR',0), -- boolean when true, the quality provided by a barter is suffixed by user name
-- 1 prod
('OWNUSR',0), -- boolean when true, the owner is suffixed by user name
-- 1 prod
('DEBUG',1);
--------------------------------------------------------------------------------
create table tvar(
name dtext UNIQUE not NULL,
value int,
PRIMARY KEY (name)
);
INSERT INTO tvar (name,value) VALUES ('INSTALLED',0);
GRANT SELECT ON tvar TO role_com;
--------------------------------------------------------------------------------
-- TOWNER
--------------------------------------------------------------------------------
create table towner (
id serial UNIQUE not NULL,
name dtext UNIQUE not NULL,
PRIMARY KEY (id)
);
comment on table towner is 'owners of values exchanged';
alter sequence towner_id_seq owned by towner.id;
create index towner_name_idx on towner(name);
SELECT _reference_time('towner');
SELECT _grant_read('towner');
--------------------------------------------------------------------------------
-- ORDER BOOK
--------------------------------------------------------------------------------
-- type = type_flow | type_primitive <<8 | type_mode <<16
create domain dtypeorder AS int check(VALUE >=0 AND VALUE < 16777215); --((1<<24)-1)
-- type_flow &3 1 order limit,2 order best
-- type_flow &12 bit set for c calculations
-- 4 no qttlimit
-- 8 ignoreomega
-- yorder.type is a type_flow = type & 255
-- type_primitive
-- 1 order
-- 2 rmorder
-- 3 quote
-- 4 prequote
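As a reading aid, the dtypeorder packing described in the comments above can be expressed as arithmetic (an illustrative sketch, not part of the schema): type_flow occupies bits 0-7, type_primitive bits 8-15, type_mode bits 16-23, and yorder.type keeps only the flow component.

```python
def pack_type(type_flow, type_primitive, type_mode=0):
    # dtypeorder layout: type_flow | type_primitive << 8 | type_mode << 16
    return type_flow | (type_primitive << 8) | (type_mode << 16)

def flow_part(packed):
    # yorder.type is the type_flow component: type & 255
    return packed & 255
```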
CREATE TYPE eordertype AS ENUM ('best','limit');
CREATE TYPE eprimitivetype AS ENUM ('order','childorder','rmorder','quote','prequote');
create table torder (
usr dtext,
own dtext,
ord yorder, --defined by the extension flowf
created timestamp not NULL,
updated timestamp,
duration interval
);
comment on table torder is 'Order book';
comment on column torder.usr is 'user that inserted the order ';
comment on column torder.ord is 'the order';
comment on column torder.created is 'time when the order was put on the stack';
comment on column torder.updated is 'time when the (quantity) of the order was updated by the order book';
comment on column torder.duration is 'the life time of the order';
SELECT _grant_read('torder');
create index torder_qua_prov_idx on torder(((ord).qua_prov)); -- using gin(((ord).qua_prov) text_ops);
create index torder_id_idx on torder(((ord).id));
create index torder_oid_idx on torder(((ord).oid));
-- id,type,own,oid,qtt_requ,qua_requ,qtt_prov,qua_prov,qtt
create view vorder as
select (o.ord).id as id,(o.ord).type as type,w.name as own,(o.ord).oid as oid,
(o.ord).qtt_requ as qtt_requ,(o.ord).qua_requ as qua_requ,
(o.ord).qtt_prov as qtt_prov,(o.ord).qua_prov as qua_prov,
(o.ord).qtt as qtt, o.created as created, o.updated as updated
from torder o left join towner w on ((o.ord).own=w.id) where o.usr=session_user;
SELECT _grant_read('vorder');
-- without dates and without the usr filter
create view vorder2 as
select (o.ord).id as id,(o.ord).type as type,w.name as own,(o.ord).oid as oid,
(o.ord).qtt_requ as qtt_requ,(o.ord).qua_requ as qua_requ,
(o.ord).qtt_prov as qtt_prov,(o.ord).qua_prov as qua_prov,
(o.ord).qtt as qtt
from torder o left join towner w on ((o.ord).own=w.id);
SELECT _grant_read('vorder2');
-- only parent for all users
create view vbarter as
select (o.ord).id as id,(o.ord).type as type,o.usr as user,w.name as own,
(o.ord).qtt_requ as qtt_requ,(o.ord).qua_requ as qua_requ,
(o.ord).qtt_prov as qtt_prov,(o.ord).qua_prov as qua_prov,
(o.ord).qtt as qtt, o.created as created, o.updated as updated
from torder o left join towner w on ((o.ord).own=w.id) where (o.ord).oid=(o.ord).id;
SELECT _grant_read('vbarter');
-- parent and child orders for all users, used with vmvto
create view vordero as
select id,
(case when (type & 3=1) then 'limit' else 'best' end)::eordertype as type,
own as owner,
case when id=oid then (qtt::text || ' ' || qua_prov) else '' end as stock,
'(' || qtt_prov::text || '/' || qtt_requ::text || ') ' ||
qua_prov || ' / ' || qua_requ as expected_ω,
case when id=oid then '' else oid::text end as oid
from vorder order by id asc;
GRANT SELECT ON vordero TO role_com;
comment on view vordero is 'order book for all users, to be used with vmvto';
comment on column vordero.id is 'the id of the order';
comment on column vordero.owner is 'the owner';
comment on column vordero.stock is 'for a parent order the stock offered by the owner';
comment on column vordero.expected_ω is 'the ω of the order';
comment on column vordero.oid is 'for a child-order, the id of the parent-order';
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
CREATE FUNCTION fgetowner(_name text) RETURNS int AS $$
DECLARE
_wid int;
_OWNERINSERT boolean := fgetconst('OWNERINSERT')=1;
BEGIN
LOOP
SELECT id INTO _wid FROM towner WHERE name=_name;
IF found THEN
return _wid;
END IF;
IF (NOT _OWNERINSERT) THEN
RAISE EXCEPTION 'The owner does not exist' USING ERRCODE='YU001';
END IF;
BEGIN
INSERT INTO towner (name) VALUES (_name) RETURNING id INTO _wid;
-- RAISE NOTICE 'owner % created',_name;
return _wid;
EXCEPTION WHEN unique_violation THEN
NULL;--
END;
END LOOP;
END;
$$ LANGUAGE PLPGSQL;
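The LOOP in fgetowner is a select-or-insert race pattern: read first, insert on miss, and if a concurrent transaction wins the insert (unique_violation), swallow the error and retry the read. A minimal in-memory Python sketch of the same control flow, with a hypothetical DuplicateKey standing in for unique_violation:

```python
class DuplicateKey(Exception):
    """Stands in for PostgreSQL's unique_violation."""

owners = {}  # name -> id, playing the role of towner

def insert_owner(name):
    if name in owners:
        raise DuplicateKey(name)
    owners[name] = len(owners) + 1
    return owners[name]

def get_owner(name, owner_insert=True):
    while True:
        if name in owners:             # SELECT id FROM towner WHERE name=...
            return owners[name]
        if not owner_insert:
            raise KeyError('The owner does not exist')
        try:
            return insert_owner(name)  # INSERT ... RETURNING id
        except DuplicateKey:
            pass                       # lost the race: retry the SELECT
```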
--------------------------------------------------------------------------------
-- TMVT
-- id,nbc,nbt,grp,xid,usr_src,usr_dst,xoid,own_src,own_dst,qtt,nat,ack,exhausted,order_created,created
--------------------------------------------------------------------------------
/*
create table tmvt (
id serial UNIQUE not NULL,
nbc int default NULL,
nbt int default NULL,
grp int default NULL,
xid int default NULL,
usr_src text default NULL,
usr_dst text default NULL,
xoid int default NULL,
own_src text default NULL,
own_dst text default NULL,
qtt int8 default NULL,
nat text default NULL,
ack boolean default NULL,
cack boolean default NULL,
exhausted boolean default NULL,
order_created timestamp default NULL,
created timestamp default NULL,
om_exp double precision default NULL,
om_rea double precision default NULL,
CONSTRAINT ctmvt_grp FOREIGN KEY (grp) references tmvt(id) ON UPDATE CASCADE
);
GRANT SELECT ON tmvt TO role_com;
comment on table tmvt is 'Records ownership changes';
comment on column tmvt.nbc is 'number of movements of the exchange cycle';
comment on column tmvt.nbt is 'number of movements of the transaction containing several exchange cycles';
comment on column tmvt.grp is 'references the first movement of the exchange';
comment on column tmvt.xid is 'references the order.id';
comment on column tmvt.usr_src is 'usr provider';
comment on column tmvt.usr_dst is 'usr receiver';
comment on column tmvt.xoid is 'references the order.oid';
comment on column tmvt.own_src is 'owner provider';
comment on column tmvt.own_dst is 'owner receiver';
comment on column tmvt.qtt is 'quantity of the value moved';
comment on column tmvt.nat is 'quality of the value moved';
comment on column tmvt.ack is 'set when movement has been acknowledged';
comment on column tmvt.cack is 'set when the cycle has been acknowledged';
comment on column tmvt.exhausted is 'set when the movement exhausted the order providing the value';
comment on column tmvt.om_exp is 'ω expected by the order';
comment on column tmvt.om_rea is 'real ω of the movement';
alter sequence tmvt_id_seq owned by tmvt.id;
GRANT SELECT ON tmvt_id_seq TO role_com;
create index tmvt_grp_idx on tmvt(grp);
create index tmvt_nat_idx on tmvt(nat);
create index tmvt_own_src_idx on tmvt(own_src);
create index tmvt_own_dst_idx on tmvt(own_dst);
CREATE VIEW vmvt AS select * from tmvt;
GRANT SELECT ON vmvt TO role_com;
CREATE VIEW vmvt_tu AS select id,nbc,grp,xid,xoid,own_src,own_dst,qtt,nat,ack,cack,exhausted from tmvt;
GRANT SELECT ON vmvt_tu TO role_com;
create view vmvto as
select id,grp,
usr_src as from_usr,
own_src as from_own,
qtt::text || ' ' || nat as value,
usr_dst as to_usr,
own_dst as to_own,
to_char(om_exp, 'FM999.9999990') as expected_ω,
to_char(om_rea, 'FM999.9999990') as actual_ω,
ack
from tmvt where cack is NULL order by id asc;
GRANT SELECT ON vmvto TO role_com;
*/
CREATE SEQUENCE tmvt_id_seq;
--------------------------------------------------------------------------------
-- STACK id,usr,kind,jso,submitted
--------------------------------------------------------------------------------
create table tstack (
id serial UNIQUE not NULL,
usr dtext,
kind eprimitivetype,
jso json, -- representation of the primitive
submitted timestamp not NULL,
PRIMARY KEY (id)
);
comment on table tstack is 'Records the stack of primitives';
comment on column tstack.id is 'id of this primitive';
comment on column tstack.usr is 'user submitting the primitive';
comment on column tstack.kind is 'type of primitive';
comment on column tstack.jso is 'primitive payload';
comment on column tstack.submitted is 'timestamp when the primitive was successfully submitted';
alter sequence tstack_id_seq owned by tstack.id;
GRANT SELECT ON tstack TO role_com;
SELECT fifo_init('tstack');
GRANT SELECT ON tstack_id_seq TO role_com;
--------------------------------------------------------------------------------
CREATE TYPE eprimphase AS ENUM ('submit', 'execute');
--------------------------------------------------------------------------------
-- MSG
--------------------------------------------------------------------------------
CREATE TYPE emsgtype AS ENUM ('response', 'exchange');
create table tmsg (
id serial UNIQUE not NULL,
usr dtext default NULL, -- the user receiver of this message
typ emsgtype not NULL,
jso json default NULL,
created timestamp not NULL
);
alter sequence tmsg_id_seq owned by tmsg.id;
SELECT _grant_read('tmsg');
SELECT _grant_read('tmsg_id_seq');
SELECT fifo_init('tmsg');
CREATE VIEW vmsg AS select * from tmsg WHERE usr = session_user;
SELECT _grant_read('vmsg');
--------------------------------------------------------------------------------
CREATE TYPE yj_error AS (
code int,
reason text
);
CREATE TYPE yerrorprim AS (
id int,
error yj_error
);
CREATE TYPE yj_value AS (
qtt int8,
nat text
);
CREATE TYPE yj_stock AS (
id int,
qtt int8,
nat text,
own text,
usr text
);
CREATE TYPE yj_ω AS (
id int,
qtt_prov int8,
qtt_requ int8,
type eordertype
);
CREATE TYPE yj_mvt AS (
id int,
cycle int,
orde yj_ω,
stock yj_stock,
mvt_from yj_stock,
- mvt_to yj_stock
+ mvt_to yj_stock,
+ orig int
);
CREATE TYPE yj_order AS (
id int,
error yj_error
);
CREATE TYPE yj_primitive AS (
id int,
error yj_error,
primitive json,
result json,
value json
);
diff --git a/src/test/py/molet.py b/src/test/py/molet.py
index 0db2660..ea7d8e9 100644
--- a/src/test/py/molet.py
+++ b/src/test/py/molet.py
@@ -1,227 +1,228 @@
# -*- coding: utf-8 -*-
class MoletException(Exception):
pass
'''---------------------------------------------------------------------------
sending e-mails
---------------------------------------------------------------------------'''
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
def sendMail(subject,body,recipients,smtpServer,
smtpSender,smtpPort,smtpPassword,smtpLogin):
''' Sends an e-mail to the specified recipients
returns False when failed'''
if(len(recipients)==0):
raise MoletException("No recipients were found for the message")
return False
msg = MIMEMultipart("alternative")
msg.set_charset("utf-8")
msg["Subject"] = subject
msg["From"] = smtpSender
msg["To"] = ','.join(recipients)
try:
_uniBody = unicode(body,'utf-8','replace') if isinstance(body,str) else body
_encBody = _uniBody.encode('utf-8')
part1 = MIMEText(_encBody,'html',_charset='utf-8')
# possible UnicodeDecodeError before this
msg.attach(part1)
session = smtplib.SMTP(smtpServer, smtpPort)
session.ehlo()
session.starttls()
session.ehlo()
session.login(smtpLogin, smtpPassword)
# print msg.as_string()
session.sendmail(smtpSender, recipients, msg.as_string())
session.quit()
return True
except Exception,e:
raise MoletException('The message "%s" could not be sent.' % subject )
return False
###########################################################################
# file and directory management
import os,shutil
def removeFile(f,ignoreWarning = False):
""" removes a file """
try:
os.remove(f)
return True
except OSError,e:
if e.errno!=2:
raise e
if not ignoreWarning:
raise MoletException("path %s could not be removed" % f)
return False
def removeTree(path,ignoreWarning = False):
try:
shutil.rmtree(path)
return True
except OSError,e:
if e.errno!=2:
raise e
if not ignoreWarning:
raise MoletException("directory %s could not be removed" % path)
return False
def mkdir(path,mode = 0755,ignoreWarning = False):
try:
os.mkdir(path,mode)
return True
except OSError,e:
if e.errno!=17: # exists
raise e
if not ignoreWarning:
raise MoletException("directory %s exists" % path)
return False
def readIntFile(f):
try:
if(os.path.exists(f)):
with open(f,'r') as f:
r = f.readline()
i = int(r)
return i
else:
return None
except ValueError,e:
return None
def writeIntFile(lfile,i):
with open(lfile,'w') as f:
f.write('%d\n' % i)
###########################################################################
# driver postgres
import psycopg2
import psycopg2.extras
import psycopg2.extensions
class DbCursor(object):
'''
Context manager used to wrap a transaction. The transaction and cursor type
are defined by the DbData object. The transaction is committed by the wrapper.
Several cursors can be opened on the connection.
usage:
dbData = DbData(dbBO,dic=True,autocommit=True)
with DbCursor(dbData) as cur:
... (always close con and cur)
'''
def __init__(self,dbData, dic = False,exit = False):
self.dbData = dbData
self.cur = None
self.dic = dic
self.exit = exit
def __enter__(self):
self.cur = self.dbData.getCursor(dic = self.dic)
return self.cur
-
+
def __exit__(self, type, value, traceback):
exit = self.exit
if self.cur:
self.cur.close()
if type is None:
self.dbData.commit()
exit = True
else:
self.dbData.rollback()
- self.dbData.exception(value,msg='An exception occured while using the cursor')
+ self.dbData.exception(value,msg='An exception occurred while using the cursor',tipe=type,triceback=traceback)
#return False # propagate the exception
return exit
+import traceback
class DbData(object):
''' DbData(db com.srvob_conf.DbInti(),dic = False,autocommit = True)
db defines DSN.
'''
def __init__(self,db,autocommit = True,login=None):
self.db = db
self.aut = autocommit
self.login = login
self.con=psycopg2.connect(self.getDSN())
if self.aut:
self.con.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
def getDSN(self):
if self.login:
_login = self.login
else:
_login = self.db.login
return "dbname='%s' user='%s' password='%s' host='%s' port='%s'" % (self.db.name,_login,self.db.password,self.db.host,self.db.port)
def getCursor(self,dic=False):
if dic:
cur = self.con.cursor(cursor_factory=psycopg2.extras.RealDictCursor)
else:
cur = self.con.cursor()
return cur
def commit(self):
if self.aut:
# raise MoletException("Commit called while autocommit")
return
self.con.commit()
def rollback(self):
if self.aut:
# raise MoletException("Rollback called while autocommit")
return
try:
self.con.rollback()
except psycopg2.InterfaceError,e:
self.exception(e,msg="Attempt to rollback while the connection was closed")
- def exception(self,e,msg = None):
+ def exception(self,e,msg = None,tipe = None,triceback = None):
if msg:
- print e
+ print e,tipe,traceback.print_tb(triceback)
raise MoletException(msg)
else:
raise e
def close(self):
self.con.close()
'''---------------------------------------------------------------------------
miscellaneous
---------------------------------------------------------------------------'''
import datetime
def utcNow():
return datetime.datetime.utcnow()
import os
import pwd
import grp
def get_username():
return pwd.getpwuid( os.getuid() )[ 0 ]
def get_usergroup(_file):
stat_info = os.stat(_file)
uid = stat_info.st_uid
gid = stat_info.st_gid
user = pwd.getpwuid(uid)[0]
group = grp.getgrgid(gid)[0]
return (user, group)
diff --git a/src/test/py/run.py b/src/test/py/run.py
index ba049b8..6362dc0 100755
--- a/src/test/py/run.py
+++ b/src/test/py/run.py
@@ -1,251 +1,442 @@
# -*- coding: utf-8 -*-
'''
Test framework tu_*
***************************************************************************
execution:
reset_market.sql
submitted:
list of t_*.sql primitives
results:
state of the order book
state of tmsg
comparison expected/obtained
in src/test/
run.py
sql/reset_market.sql
sql/t_*.sql
expected/t_*.res
obtained/t_*.res
loop for each t_*.sql:
reset_market.sql
execute t_*.sql
dump the results into obtained/t_.res
compare expected and obtained
'''
import sys,os,time,logging
import psycopg2
import psycopg2.extensions
import traceback
import srvob_conf
import molet
import utilt
import sys
sys.path = [os.path.abspath(os.path.join(os.path.abspath(__file__),"../../../../simu/liquid"))]+ sys.path
#print sys.path
import distrib
PARLEN=80
prtest = utilt.PrTest(PARLEN,'=')
def get_paths():
curdir = os.path.abspath(__file__)
curdir = os.path.dirname(curdir)
curdir = os.path.dirname(curdir)
sqldir = os.path.join(curdir,'sql')
resultsdir,expecteddir = os.path.join(curdir,'results'),os.path.join(curdir,'expected')
molet.mkdir(resultsdir,ignoreWarning = True)
molet.mkdir(expecteddir,ignoreWarning = True)
tup = (curdir,sqldir,resultsdir,expecteddir)
return tup
def tests_tu(options):
titre_test = "UNDEFINED"
curdir,sqldir,resultsdir,expecteddir = get_paths()
try:
utilt.wait_for_true(srvob_conf.dbBO,0.1,"SELECT value=102,value FROM market.tvar WHERE name='OC_CURRENT_PHASE'",
msg="Waiting for market opening")
except psycopg2.OperationalError,e:
print "Please adjust DB_NAME,DB_USER,DB_PWD,DB_PORT parameters of the file src/test/py/srvob_conf.py"
print "The test program could not connect to the market"
exit(1)
if options.test is None:
_fts = [f for f in os.listdir(sqldir) if f.startswith('tu_') and f[-4:]=='.sql']
_fts.sort()
else:
_nt = options.test + '.sql'
_fts = os.path.join(sqldir,_nt)
if not os.path.exists(_fts):
print 'This test \'%s\' was not found' % _fts
return
else:
_fts = [_nt]
_tok,_terr,_num_test = 0,0,0
prtest.title('running tests on database "%s"' % (srvob_conf.DB_NAME,))
#print '='*PARLEN
print ''
print 'Num\tStatus\tName'
print ''
for file_test in _fts: # iterate over the test cases
_nom_test = file_test[:-4]
_terr +=1
_num_test +=1
file_result = file_test[:-4]+'.res'
_fte = os.path.join(resultsdir,file_result)
_fre = os.path.join(expecteddir,file_result)
with open(_fte,'w') as f:
cur = None
dump = utilt.Dumper(srvob_conf.dbBO,options,None)
titre_test = utilt.exec_script(dump,sqldir,'reset_market.sql')
dump = utilt.Dumper(srvob_conf.dbBO,options,f)
titre_test = utilt.exec_script(dump,sqldir,file_test)
utilt.wait_for_true(srvob_conf.dbBO,20,"SELECT market.fstackdone()")
conn = molet.DbData(srvob_conf.dbBO)
try:
with molet.DbCursor(conn) as cur:
dump.torder(cur)
dump.tmsg(cur)
finally:
conn.close()
if(os.path.exists(_fre)):
if(utilt.files_clones(_fte,_fre)):
_tok +=1
_terr -=1
print '%i\tY\t%s\t%s' % (_num_test,_nom_test,titre_test)
else:
print '%i\tN\t%s\t%s' % (_num_test,_nom_test,titre_test)
else:
print '%i\t?\t%s\t%s' % (_num_test,_nom_test,titre_test)
# display global results
print ''
print 'Test status: (Y) expected == results, (N) expected != results, (F) failed, (?) expected undefined'
prtest.line()
if(_terr == 0):
prtest.center('\tAll %i tests passed' % _tok)
else:
prtest.center('\t%i tests KO, %i passed' % (_terr,_tok))
prtest.line()
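tests_tu decides pass/fail by comparing each produced .res file with its expected counterpart through utilt.files_clones, which hashes both files. A minimal standalone sketch of that comparison (mirroring the md5sum/files_clones helpers defined in utilt.py):

```python
import hashlib

def files_clones(f1, f2, blocksize=65536):
    """Compare two files by the MD5 digest of their content, as
    utilt.files_clones/md5sum do; chunked reads keep memory bounded."""
    def md5sum(path):
        h = hashlib.md5()
        with open(path, 'rb') as f:
            for block in iter(lambda: f.read(blocksize), b''):
                h.update(block)
        return h.hexdigest()
    return md5sum(f1) == md5sum(f2)
```

Hash equality is a cheap stand-in for byte equality here; an MD5 collision between a result file and an expected file is not a realistic concern for test output.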
import random
import csv
-MAX_ORDER = 1000
+import simplejson
+import sys
+
+MAX_ORDER = 100000
def build_ti(options):
''' build a .csv file with a batch of submit primitives
'''
#conf = srvob_conf.dbBO
curdir,sqldir,resultsdir,expecteddir = get_paths()
_frs = os.path.join(sqldir,'test_ti.csv')
MAX_OWNER = 10
MAX_QLT = 20
QTT_PROV = 10000
prtest.title('generating tests cases for quotes')
- def gen(nborders,frs,withquote):
+ def gen(nborders,frs):
for i in range(nborders):
w = random.randint(1,MAX_OWNER)
qlt_prov,qlt_requ = distrib.couple(distrib.uniformQlt,MAX_QLT)
r = random.random()+0.5
qtt_requ = int(QTT_PROV * r)
- lb= 'limit' if (random.random()>0.2) else 'best'
+ lb= 'limit' if (random.random()>0.9) else 'best'
frs.writerow(['admin',lb,'w%i'%w,'q%i'%qlt_requ,qtt_requ,'q%i'%qlt_prov,QTT_PROV])
with open(_frs,'w') as f:
spamwriter = csv.writer(f)
- gen(MAX_ORDER,spamwriter,False)
- gen(30,spamwriter,True)
+ gen(MAX_ORDER,spamwriter)
molet.removeFile(os.path.join(expecteddir,'test_ti.res'),ignoreWarning = True)
prtest.center('done, test_ti.res removed')
prtest.line()
def test_ti(options):
+ _reset,titre_test = options.test_ti_reset,''
+
curdir,sqldir,resultsdir,expecteddir = get_paths()
- prtest.title('running tests on database "%s"' % (srvob_conf.DB_NAME,))
+ prtest.title('running test_ti on database "%s"' % (srvob_conf.DB_NAME,))
+
+ dump = utilt.Dumper(srvob_conf.dbBO,options,None)
+ if _reset:
+ print '\tReset: Clearing market ...'
+ titre_test = utilt.exec_script(dump,sqldir,'reset_market.sql')
+ print '\t\tDone'
+
+ fn = os.path.join(sqldir,'test_ti.csv')
+ if( not os.path.exists(fn)):
+ raise ValueError('The data file %s was not found' % fn)
+
+ with open(fn,'r') as f:
+ _nbtest = 0
+ for row in f:
+ _nbtest +=1
+
+ cur_login = None
+ titre_test = None
+
+ inst = utilt.ExecInst(dump)
+
+
+ user = None
+ fmtorder = "SELECT * from market.fsubmitorder('%s','%s','%s',%s,'%s',%s)"
+ fmtquote = "SELECT * from market.fsubmitquote('%s','%s','%s',%s,'%s',%s)"
+
+ fmtjsosr = '''SELECT jso from market.tmsg
+ where json_extract_path_text(jso,'id')::int=%i and typ='response' '''
+
+ fmtjsose = """SELECT json_extract_path_text(jso,'orde','id')::int id,
+ sum(json_extract_path_text(jso,'mvt_from','qtt')::bigint) qtt_prov,
+ sum(json_extract_path_text(jso,'mvt_to','qtt')::bigint) qtt_requ
+ from market.tmsg
+ where json_extract_path_text(jso,'orig')::int=%i
+ and json_extract_path_text(jso,'orde','id')::int=%i
+ and typ='exchange' group by json_extract_path_text(jso,'orde','id')::int """
+ '''
+ the order that produced the exchange has the qualities expected
+ '''
+ i= 0
+ if _reset:
+ print '\tSubmission: sending a series of %i tests where a random set of arguments' % _nbtest
+ print '\t\tis used to submit a quote, then an order'
+ with open(fn,'r') as f:
+
+ spamreader = csv.reader(f)
+
+ compte = utilt.Delai()
+ for row in spamreader:
+ user = row[0]
+ params = tuple(row[1:])
+
+ cursor = inst.exe( fmtquote % params,user)
+ cursor = inst.exe( fmtorder % params,user)
+ i +=1
+ if i % 100 == 0:
+ prtest.progress(i/float(_nbtest))
+
+ delai = compte.getSecs()
+
+ print '\t\t%i quote & order primitives in %f seconds' % (_nbtest*2,delai)
+ print '\tExecution: Waiting for end of execution ...'
+ #utilt.wait_for_true(srvob_conf.dbBO,1000,"SELECT market.fstackdone()",prtest=prtest)
+ utilt.wait_for_empty_stack(srvob_conf.dbBO,prtest)
+ delai = compte.getSecs()
+ print '\t\t Done: mean time per primitive %f seconds' % (delai/(_nbtest*2),)
+
+
+
+
+ fmtiter = '''SELECT json_extract_path_text(jso,'id')::int id,json_extract_path_text(jso,'primitive','type') typ
+ from market.tmsg where typ='response' and json_extract_path_text(jso,'primitive','kind')='quote'
+ order by id asc limit 10 offset %i'''
+ i = 0
+ _notnull,_ko,_limit,_limitko = 0,0,0,0
+ print '\tChecking: identity of quote result and order result for each of the %i test cases' % _nbtest
+ print '\t\tusing the content of market.tmsg'
+ while True:
+ cursor = inst.exe( fmtiter % i,user)
+
+ vec = []
+ for re in cursor:
+ vec.append(re)
+
+ l = len(vec)
+
+ if l == 0:
+ break
+
+ for idq,_type in vec:
+ i += 1
+ if _type == 'limit':
+ _limit += 1
+
+ # result of the quote for idq
+ _cur = inst.exe(fmtjsosr %idq,user)
+ res = _cur.fetchone()
+ res_quote =simplejson.loads(res[0])
+ expected = res_quote['result']['qtt_give'],res_quote['result']['qtt_reci']
+
+ #result of the order for idq+1
+ _cur = inst.exe(fmtjsose %(idq+1,idq+1),user)
+ res = _cur.fetchone()
+
+ if res is None:
+ result = 0,0
+ else:
+ ido_,qtt_prov_,qtt_reci_ = res
+ result = qtt_prov_,qtt_reci_
+ _notnull +=1
+ if _type == 'limit':
+ if float(expected[0])/float(expected[1]) < float(qtt_prov_)/float(qtt_reci_):
+ _limitko +=1
+
+ if result != expected:
+ _ko += 1
+ print idq,res,res_quote
+
+ if i %100 == 0:
+ prtest.progress(i/float(_nbtest))
+ '''
+ if i == 100:
+ print '\t\t.',
+ else:
+ print '.',
+ sys.stdout.flush()
+
+ if(_ko != 0): _errs = ' - %i errors' %_ko
+ else: _errs = ''
+ print ('\t\t%i quote & order\t%i quotes returned a result %s' % (i-_ko,_notnull,_errs))
+ '''
+ prtest.title('Results of %i checkings' % i)
+ if(_ko == 0 and _limitko == 0):
+ print '\tall checks ok'
+
+ print '\t\t%i orders returned a result different from the previous quote' % _ko
+ print '\t\t%i limit orders returned a result where the limit is not observed' % _limitko
+
+
+
+
+ inst.close()
+ return titre_test
+
+def test_ti_old(options):
+
+ curdir,sqldir,resultsdir,expecteddir = get_paths()
+ prtest.title('running test_ti on database "%s"' % (srvob_conf.DB_NAME,))
dump = utilt.Dumper(srvob_conf.dbBO,options,None)
titre_test = utilt.exec_script(dump,sqldir,'reset_market.sql')
fn = os.path.join(sqldir,'test_ti.csv')
if( not os.path.exists(fn)):
raise ValueError('The data file %s was not found' % fn)
cur_login = None
titre_test = None
inst = utilt.ExecInst(dump)
quote = False
with open(fn,'r') as f:
spamreader = csv.reader(f)
i= 0
usr = None
fmtorder = "SELECT * from market.fsubmitorder('%s','%s','%s',%s,'%s',%s)"
fmtquote = "SELECT * from market.fsubmitquote('%s','%s','%s',%s,'%s',%s)"
+ fmtjsosr = "SELECT jso from market.tmsg where json_extract_path_text(jso,'id')::int=%i and typ='response'"
+ fmtjsose = """SELECT json_extract_path_text(jso,'orde','id')::int id,
+ sum(json_extract_path_text(jso,'mvt_from','qtt')::bigint) qtt_prov,
+ sum(json_extract_path_text(jso,'mvt_to','qtt')::bigint) qtt_requ
+ from market.tmsg
+ where json_extract_path_text(jso,'orde','id')::int=%i
+ and typ='exchange' group by json_extract_path_text(jso,'orde','id')::int """
+ '''
+ the order that produced the exchange has the qualities expected
+ '''
+ _notnull,_ko = 0,0
for row in spamreader:
i += 1
-
- if i < 20: #i < MAX_ORDER:
- cursor = inst.exe( fmtorder % tuple(row[1:]),row[0])
+ user = row[0]
+ params = tuple(row[1:])
+
+ cursor = inst.exe( fmtquote % params,user)
+ idq,err = cursor.fetchone()
+ if err != '(0,)':
+ raise ValueError('Quote returned an error "%s"' % err)
+ utilt.wait_for_true(srvob_conf.dbBO,20,"SELECT market.fstackdone()")
+ cursor = inst.exe(fmtjsosr %idq,user)
+ res = cursor.fetchone() # result of the quote
+ res_quote =simplejson.loads(res[0])
+ expected = res_quote['result']['qtt_give'],res_quote['result']['qtt_reci']
+ #print res_quote
+ #print ''
+
+ cursor = inst.exe( fmtorder % params,user)
+ ido,err = cursor.fetchone()
+ if err != '(0,)':
+ raise ValueError('Order returned an error "%s"' % err)
+ utilt.wait_for_true(srvob_conf.dbBO,20,"SELECT market.fstackdone()")
+ cursor = inst.exe(fmtjsose %ido,user)
+ res = cursor.fetchone()
+
+ if res is None:
+ result = 0,0
else:
- cursor = inst.exe( fmtquote % tuple(row[1:]),row[0])
- id,err = cursor.fetchone()
- if err != '(0,)':
- raise ValueError('Order returned an error "%s"' % err)
- utilt.wait_for_true(srvob_conf.dbBO,20,"SELECT market.fstackdone()")
- print id
- cursor = inst.exe('SELECT * from market.tmsg')
- print cursor.fetchone()
+ ido_,qtt_prov_,qtt_reci_ = res
+ result = qtt_prov_,qtt_reci_
+ _notnull +=1
+
+ if result != expected:
+ _ko += 1
+ print qtt_prov_,qtt_reci_,res_quote
- if i >30:
- break
+ if i %100 == 0:
+ if(_ko != 0): _errs = ' - %i errors' %_ko
+ else: _errs = ''
+ print ('\t%i quote & order - %i quotes returned a result %s' % (i-_ko,_notnull,_errs))
+
+ if(_ko == 0):
+ prtest.title(' all %i tests passed' % i)
+ else:
+ prtest.title('%i checked %i tests failed' % (i,_ko))
inst.close()
return titre_test
- prtest.line()
'''---------------------------------------------------------------------------
arguments
---------------------------------------------------------------------------'''
from optparse import OptionParser
import os
def main():
#global options
usage = """usage: %prog [options]
tests """
parser = OptionParser(usage)
parser.add_option("-t","--test",action="store",type="string",dest="test",help="test",
default= None)
parser.add_option("-v","--verbose",action="store_true",dest="verbose",help="verbose",default=False)
- parser.add_option("-b","--build",action="store_true",dest="build",help="build",default=False)
+ parser.add_option("-b","--build",action="store_true",dest="build",help="generates random test cases for test_ti",default=False)
parser.add_option("-i","--ti",action="store_true",dest="test_ti",help="execute test_ti",default=False)
+ parser.add_option("-r","--reset",action="store_true",dest="test_ti_reset",help="clear the market before executing test_ti",default=False)
(options, args) = parser.parse_args()
# um = os.umask(0177) # u=rw,g=,o=
if options.build:
build_ti(options)
elif options.test_ti:
test_ti(options)
else:
tests_tu(options)
if __name__ == "__main__":
main()
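The checking loop in test_ti counts a 'limit' order as a violation (_limitko) when the executed rate exceeds the rate the quote promised. The invariant can be isolated as a small predicate (a sketch; the quote result carries qtt_give/qtt_reci and the summed exchanges give the executed quantities):

```python
def limit_respected(qtt_give, qtt_reci, qtt_prov, qtt_reci_exec):
    """True when the executed rate qtt_prov/qtt_reci_exec does not exceed
    the quoted rate qtt_give/qtt_reci; mirrors the _limitko comparison in
    test_ti (floats avoid Python 2 integer division)."""
    return float(qtt_give) / float(qtt_reci) >= float(qtt_prov) / float(qtt_reci_exec)
```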
diff --git a/src/test/py/utilt.py b/src/test/py/utilt.py
index 40f5d71..457f86d 100644
--- a/src/test/py/utilt.py
+++ b/src/test/py/utilt.py
@@ -1,236 +1,306 @@
# -*- coding: utf-8 -*-
import string
import os.path
+import time, sys
'''---------------------------------------------------------------------------
---------------------------------------------------------------------------'''
class PrTest(object):
''' results printing '''
def __init__(self,parlen,sep):
self.parlen = parlen+ parlen%2
self.sep = sep
def title(self,title):
_l = len(title)
_p = max(_l%2 +_l,40)
_x = self.parlen -_p
if (_x > 2):
print (_x/2)*self.sep + string.center(title,_p) + (_x/2)*self.sep
else:
print string.center(title,self.parlen)
def line(self):
print self.parlen*self.sep
def center(self,text):
print string.center(text,self.parlen)
+ def progress(self,progress):
+ # update_progress() : Displays or updates a console progress bar
+ ## Accepts a float between 0 and 1. Any int will be converted to a float.
+ ## A value under 0 represents a 'halt'.
+ ## A value at 1 or bigger represents 100%
+ barLength = 20 # Modify this to change the length of the progress bar
+ status = ""
+ if isinstance(progress, int):
+ progress = float(progress)
+ if not isinstance(progress, float):
+ progress = 0
+ status = "error: progress var must be float\r\n"
+ if progress < 0:
+ progress = 0
+ status = "Halt...\r\n"
+ if progress >= 1:
+ progress = 1
+ status = "Done...\r\n"
+ block = int(round(barLength*progress))
+ text = "\r\t\t\t[{0}] {1}% {2}".format( "#"*block + "-"*(barLength-block), progress*100, status)
+ sys.stdout.write(text)
+ sys.stdout.flush()
+
'''---------------------------------------------------------------------------
file comparison
---------------------------------------------------------------------------'''
import filecmp
def files_clones(f1,f2):
#res = filecmp.cmp(f1,f2)
return (md5sum(f1) == md5sum(f2))
import hashlib
def md5sum(filename, blocksize=65536):
hash = hashlib.md5()
with open(filename, "r+b") as f:
for block in iter(lambda: f.read(blocksize), ""):
hash.update(block)
return hash.hexdigest()
'''---------------------------------------------------------------------------
---------------------------------------------------------------------------'''
SEPARATOR = '\n'+'-'*80 +'\n'
import json
class Dumper(object):
def __init__(self,conf,options,fdr):
self.options =options
self.conf = conf
self.fdr = fdr
def getConf(self):
return self.conf
def torder(self,cur):
self.write(SEPARATOR)
self.write('table: torder\n')
cur.execute('SELECT * FROM market.vord order by id asc')
self.cur(cur)
'''
yorder not shown:
pos_requ box, -- box (point(lat,lon),point(lat,lon))
pos_prov box, -- box (point(lat,lon),point(lat,lon))
dist float8,
carre_prov box -- carre_prov @> pos_requ
'''
return
def write(self,txt):
if self.fdr:
self.fdr.write(txt)
def cur(self,cur,_len=10):
#print cur.description
if(cur.description is None): return
#print type(cur)
cols = [e.name for e in cur.description]
row_format = ('{:>'+str(_len)+'}')*len(cols)
self.write(row_format.format(*cols)+'\n')
self.write(row_format.format(*(['+'+'-'*(_len-1)]*len(cols)))+'\n')
for res in cur:
self.write(row_format.format(*res)+'\n')
return
def tmsg(self,cur):
self.write(SEPARATOR)
self.write('table: tmsg')
self.write(SEPARATOR)
cur.execute('SELECT id,typ,usr,jso FROM market.tmsg order by id asc')
for res in cur:
_id,typ,usr,jso = res
_jso = json.loads(jso)
if typ == 'response':
- if _jso['error']['code']==None:
+ if _jso['error']['code']==0:
_msg = 'Primitive id:%i from %s: OK\n' % (_jso['id'],usr)
else:
- _msg = 'Primitive id:%i from %s: ERROR(%i,%s)\n' % (usr,_jso['id'],
+ _msg = 'Primitive id:%i from %s: ERROR(%i,%s)\n' % (_jso['id'],usr,
_jso['error']['code'],_jso['error']['reason'])
elif typ == 'exchange':
_fmt = '''Cycle id:%i Exchange id:%i for %s @%s:
\t%i:mvt_from %s @%s : %i \'%s\' -> %s @%s
\t%i:mvt_to %s @%s : %i \'%s\' <- %s @%s
\tstock id:%i remaining after exchange: %i \'%s\' \n'''
_dat = (
_jso['cycle'],_jso['id'],_jso['stock']['own'],usr,
_jso['mvt_from']['id'],_jso['stock']['own'],usr,_jso['mvt_from']['qtt'],_jso['mvt_from']['nat'],_jso['mvt_from']['own'],_jso['mvt_from']['usr'],
_jso['mvt_to']['id'], _jso['stock']['own'],usr,_jso['mvt_to']['qtt'],_jso['mvt_to']['nat'],_jso['mvt_to']['own'],_jso['mvt_to']['usr'],
_jso['stock']['id'],_jso['stock']['qtt'],_jso['stock']['nat'])
_msg = _fmt %_dat
else:
_msg = str(res)
self.write('\t%i:'%_id+_msg+'\n')
if self.options.verbose:
print jso
return
'''---------------------------------------------------------------------------
wait until a command returns true with timeout
---------------------------------------------------------------------------'''
import molet
import time
def wait_for_true(conf,delai,sql,msg=None):
_i = 0;
_w = 0;
while True:
_i +=1
conn = molet.DbData(conf)
try:
with molet.DbCursor(conn) as cur:
cur.execute(sql)
r = cur.fetchone()
- # print r
if r[0] == True:
break
finally:
conn.close()
if msg is None:
pass
elif(_i%10)==0:
print msg
_a = 0.1;
_w += _a;
if _w > delai: # seconds
raise ValueError('After %f seconds, %s != True' % (_w,sql))
time.sleep(_a)
+'''---------------------------------------------------------------------------
+wait for stack empty
+---------------------------------------------------------------------------'''
+def wait_for_empty_stack(conf,prtest):
+ _i = 0;
+ _w = 0;
+ sql = "SELECT name,value FROM market.tvar WHERE name in ('STACK_TOP','STACK_EXECUTED')"
+ while True:
+ _i +=1
+ conn = molet.DbData(conf)
+ try:
+ with molet.DbCursor(conn) as cur:
+ cur.execute(sql)
+ re = {}
+ for r in cur:
+ re[r[0]] = r[1]
+
+ prtest.progress(float(re['STACK_EXECUTED'])/float(re['STACK_TOP']))
+
+ if re['STACK_TOP'] == re['STACK_EXECUTED']:
+ break
+ finally:
+ conn.close()
+ time.sleep(2)
'''---------------------------------------------------------------------------
executes a script
---------------------------------------------------------------------------'''
def exec_script(dump,dirsql,fn):
fn = os.path.join(dirsql,fn)
if( not os.path.exists(fn)):
raise ValueError('The script %s is not found' % fn)
cur_login = None
titre_test = None
inst = ExecInst(dump)
with open(fn,'r') as f:
for line in f:
line = line.strip()
if len(line) == 0:
continue
dump.write(line+'\n')
if line.startswith('--'):
if titre_test is None:
titre_test = line
elif line.startswith('--USER:'):
cur_login = line[7:].strip()
else:
cursor = inst.exe(line,cur_login)
dump.cur(cursor)
inst.close()
return titre_test
'''---------------------------------------------------------------------------
---------------------------------------------------------------------------'''
class ExecInst(object):
def __init__(self,dump):
self.login = None
self.conn = None
self.cur = None
self.dump = dump
def exe(self,sql,login):
#print login
if self.login != login:
self.close()
if self.conn is None:
self.login = login
_login = None if login == 'admin' else login
self.conn = molet.DbData(self.dump.getConf(),login = _login)
self.cur = self.conn.con.cursor()
+ # print sql
self.cur.execute(sql)
return self.cur
def close(self):
if not(self.conn is None):
if not(self.cur is None):
self.cur.close()
self.conn.close()
self.conn = None
def execinst(dump,cur_login,sql):
if cur_login == 'admin':
cur_login = None
conn = molet.DbData(dump.getConf(),login = cur_login)
try:
with molet.DbCursor(conn,exit = True) as _cur:
_cur.execute(sql)
dump.cur(_cur)
finally:
- conn.close()
\ No newline at end of file
+ conn.close()
+
+
+'''---------------------------------------------------------------------------
+---------------------------------------------------------------------------'''
+from datetime import datetime
+
+class Delai(object):
+ def __init__(self):
+ self.debut = datetime.now()
+
+ def getSecs(self):
+ return self._duree(self.debut,datetime.now())
+
+ def _duree(self,begin,end):
+ """ returns a float; the number of seconds elapsed between begin and end
+ """
+        if(not isinstance(begin,datetime)): raise ValueError('begin is not a datetime object')
+        if(not isinstance(end,datetime)): raise ValueError('end is not a datetime object')
+ duration = end - begin
+ secs = duration.days*3600*24 + duration.seconds + duration.microseconds/1000000.
+ return secs
+
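utilt.wait_for_true and wait_for_empty_stack both follow the same poll-until-true-with-timeout pattern against the database. Stripped of the SQL and progress reporting, the pattern reduces to the sketch below (illustrative only; the real helpers poll a query through molet.DbCursor):

```python
import time

def wait_until(predicate, timeout, step=0.1):
    """Poll predicate() until it returns True; raise ValueError once more
    than `timeout` seconds have been spent sleeping, as wait_for_true does."""
    waited = 0.0
    while not predicate():
        if waited > timeout:
            raise ValueError('After %f seconds, condition still False' % waited)
        time.sleep(step)
        waited += step
```

The design keeps the predicate side-effect free from the caller's point of view: the helper owns the sleep/timeout bookkeeping, the caller supplies only the condition.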
|
olivierch/openBarter
|
d3ec6b8d3b1848c6f5ef8ad276eb0caeb0912fc4
|
quote correction
|
diff --git a/src/sql/prims.sql b/src/sql/prims.sql
index f9d2e36..6560183 100644
--- a/src/sql/prims.sql
+++ b/src/sql/prims.sql
@@ -1,491 +1,491 @@
--------------------------------------------------------------------------------
-- check params
-- code in [-9,0]
--------------------------------------------------------------------------------
CREATE FUNCTION
fcheckquaown(_r yj_error,_own dtext,_qua_requ dtext,_pos_requ point,_qua_prov dtext,_pos_prov point,_dist float8)
RETURNS yj_error AS $$
DECLARE
_r yj_error;
_i int;
BEGIN
IF(NOT ((yflow_checktxt(_own)&1)=1)) THEN
_r.reason := '_own is empty string';
_r.code := -1;
RETURN _r;
END IF;
IF(_qua_prov IS NULL) THEN
IF(NOT ((yflow_checktxt(_qua_requ)&1)=1)) THEN
_r.reason := '_qua_requ is empty string';
_r.code := -2;
RETURN _r;
END IF;
ELSE
IF(_qua_prov = _qua_requ) THEN
_r.reason := 'qua_prov == qua_requ';
_r.code := -3;
return _r;
END IF;
_i = yflow_checkquaownpos(_own,_qua_requ,_pos_requ,_qua_prov,_pos_prov,_dist);
IF (_i != 0) THEN
_r.reason := 'rejected by yflow_checkquaownpos';
_r.code := _i; -- -9<=i<=-5
return _r;
END IF;
END IF;
RETURN _r;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
CREATE FUNCTION fcheckquaprovusr(_r yj_error,_qua_prov dtext,_usr dtext) RETURNS yj_error AS $$
DECLARE
_QUAPROVUSR boolean := fgetconst('QUAPROVUSR')=1;
_p int;
_suffix text;
BEGIN
IF (NOT _QUAPROVUSR) THEN
RETURN _r;
END IF;
_p := position('@' IN _qua_prov);
IF (_p = 0) THEN
-- without prefix, it should be a currency
SELECT count(*) INTO _p FROM tcurrency WHERE _qua_prov = name;
IF (_p = 1) THEN
RETURN _r;
ELSE
_r.code := -12;
_r.reason := 'a provided quality that is not a currency must be prefixed';
RETURN _r;
END IF;
END IF;
-- with prefix
IF (char_length(substring(_qua_prov FROM 1 FOR (_p-1))) <1) THEN
_r.code := -13;
_r.reason := 'the prefix of the quality provided cannot be empty';
RETURN _r;
END IF;
_suffix := substring(_qua_prov FROM (_p+1));
_suffix := replace(_suffix,'.','_'); -- change . to _
-- it must be the username
IF ( _suffix!= _usr) THEN
_r.code := -14;
_r.reason := 'the prefix of the quality provided must be the user name';
RETURN _r;
END IF;
RETURN _r;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
CREATE FUNCTION fchecknameowner(_r yj_error,_name dtext,_usr dtext) RETURNS yj_error AS $$
DECLARE
_p int;
_OWNUSR boolean := fgetconst('OWNUSR')=1;
_suffix text;
BEGIN
IF (NOT _OWNUSR) THEN
RETURN _r;
END IF;
_p := position('@' IN _name);
IF (char_length(substring(_name FROM 1 FOR (_p-1))) <1) THEN
_r.code := -20;
_r.reason := 'the owner name has an empty prefix';
RETURN _r;
END IF;
_suffix := substring(_name FROM (_p+1));
SELECT count(*) INTO _p FROM townauth WHERE _suffix = name;
IF (_p = 1) THEN
RETURN _r; -- well-known auth provider
END IF;
-- change . to _
_suffix := replace(_suffix,'.','_');
IF ( _suffix= _usr) THEN
RETURN _r; -- owner's name suffixed by the user's name
END IF;
_r.code := -21;
_r.reason := 'if the owner name is not prefixed by a well-known provider, it must be prefixed by the user name';
RETURN _r;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
-- order primitive
--------------------------------------------------------------------------------
CREATE TYPE yp_order AS (
kind eprimitivetype,
type eordertype,
owner dtext,
qua_requ dtext,
qtt_requ dqtt,
qua_prov dtext,
qtt_prov dqtt
);
CREATE FUNCTION fsubmitorder(_type eordertype,_owner dtext,_qua_requ dtext,_qtt_requ dqtt,_qua_prov dtext,_qtt_prov dqtt)
RETURNS yerrorprim AS $$
DECLARE
_res yj_primitive;
_prim yp_order;
BEGIN
_prim := ROW('order',_type,_owner,_qua_requ,_qtt_requ,_qua_prov,_qtt_prov)::yp_order;
_res := fprocessorder('submit',NULL,_prim);
RETURN ROW(_res.id,_res.error)::yerrorprim;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION fsubmitorder(eordertype,dtext,dtext,dqtt,dtext,dqtt) TO role_co;
--------------------------------------------------------------------------------
CREATE FUNCTION fprocessorder(_phase eprimphase, _t tstack,_s yp_order)
RETURNS yj_primitive AS $$
DECLARE
_r yj_error;
_res yj_primitive;
_wid int;
_ir int;
_o yorder;
BEGIN
_r := ROW(0,NULL)::yj_error; -- code,reason
CASE
WHEN (_phase = 'submit') THEN
_r := fchecknameowner(_r,_s.owner,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_r := fcheckquaprovusr(_r,_s.qua_prov,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_res := fpushprimitive(_r,'order',row_to_json(_s));
RETURN _res;
WHEN (_phase = 'execute') THEN
/*
IF(
(_s.duration IS NOT NULL) AND (_s.submitted + _s.duration) < clock_timestamp()
) THEN
_r.reason := 'barter order - the order is too old';
_r.code := -19;
END IF; */
_wid := fgetowner(_s.owner);
_o := ROW(CASE WHEN _s.type='limit' THEN 1 ELSE 2 END,
_t.id,_wid,_t.id,
_s.qtt_requ,_s.qua_requ,_s.qtt_prov,_s.qua_prov,_s.qtt_prov,
box('(0,0)'::point,'(0,0)'::point),box('(0,0)'::point,'(0,0)'::point),
0.0,earth_get_square('(0,0)'::point,0.0)
)::yorder;
_ir := insertorder(_s.owner,_o,_t.usr,_t.submitted,'1 day');
RETURN ROW(_t.id,NULL,_t.jso,
row_to_json(ROW(_o.id,_o.qtt,_o.qua_prov,_s.owner,_t.usr)::yj_stock),
NULL
)::yj_primitive;
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
-- child order primitive
--------------------------------------------------------------------------------
CREATE TYPE yp_childorder AS (
kind eprimitivetype,
owner dtext,
qua_requ dtext,
qtt_requ dqtt,
stock_id int
);
CREATE FUNCTION fsubmitchildorder(_owner dtext,_qua_requ dtext,_qtt_requ dqtt,_stock_id int)
RETURNS yerrorprim AS $$
DECLARE
_res yj_primitive;
_prim yp_childorder;
BEGIN
_prim := ROW('childorder',_owner,_qua_requ,_qtt_requ,_stock_id)::yp_childorder;
_res := fprocesschildorder('submit',NULL,_prim);
RETURN ROW(_res.id,_res.error)::yerrorprim;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION fsubmitchildorder(dtext,dtext,dqtt,int) TO role_co;
--------------------------------------------------------------------------------
CREATE FUNCTION fprocesschildorder(_phase eprimphase, _t tstack,_s yp_childorder)
RETURNS yj_primitive AS $$
DECLARE
_r yj_error;
_res yj_primitive;
_wid int;
_otype int;
_ir int;
_o yorder;
_op torder%rowtype;
_sp tstack%rowtype;
BEGIN
_r := ROW(0,NULL)::yj_error; -- code,reason
CASE
WHEN (_phase = 'submit') THEN
_r := fchecknameowner(_r,_s.owner,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_r := fcheckquaprovusr(_r,_s.qua_requ,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_wid := fgetowner(_s.owner);
SELECT * INTO _op FROM torder WHERE
(ord).id = _s.stock_id AND usr = session_user AND (ord).own = _wid;
IF (NOT FOUND) THEN
/* could be found in the stack */
SELECT * INTO _sp FROM tstack WHERE
id = _s.stock_id AND usr = session_user AND _s.owner = jso->'owner' AND kind='order';
IF (NOT FOUND) THEN
_r.code := -200;
_r.reason := 'the order was not found for this user and owner';
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
END IF;
_res := fpushprimitive(_r,'childorder',row_to_json(_s));
RETURN _res;
WHEN (_phase = 'execute') THEN
_wid := fgetowner(_s.owner);
SELECT * INTO _op FROM torder WHERE
(ord).id = _s.stock_id AND usr = session_user AND (ord).own = _wid;
IF (NOT FOUND) THEN
_r.code := -201;
_r.reason := 'the stock is not in the order book';
RETURN ROW(_t.id,_r,_t.jso,NULL,NULL)::yj_primitive;
END IF;
_o := _op.ord;
_o.id := _t.id;
_o.qua_requ := _s.qua_requ;
_o.qtt_requ := _s.qtt_requ;
_ir := insertorder(_s.owner,_o,_t.usr,_t.submitted,_op.duration);
RETURN ROW(_t.id,NULL,_t.jso,NULL,NULL)::yj_primitive;
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
-- rm primitive
--------------------------------------------------------------------------------
CREATE TYPE yp_rmorder AS (
kind eprimitivetype,
owner dtext,
stock_id int
);
CREATE FUNCTION fsubmitrmorder(_owner dtext,_stock_id int)
RETURNS yerrorprim AS $$
DECLARE
_res yj_primitive;
_prim yp_rmorder;
BEGIN
_prim := ROW('rmorder',_owner,_stock_id)::yp_rmorder;
_res := fprocessrmorder('submit',NULL,_prim);
RETURN ROW(_res.id,_res.error)::yerrorprim;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION fsubmitrmorder(dtext,int) TO role_co,role_bo;
--------------------------------------------------------------------------------
CREATE FUNCTION fprocessrmorder(_phase eprimphase, _t tstack,_s yp_rmorder)
RETURNS yj_primitive AS $$
DECLARE
_r yj_error;
_res yj_primitive;
_wid int;
_otype int;
_ir int;
_o yorder;
_opy yorder; -- parent_order
_op torder%rowtype;
_te text;
_pusr text;
_sp tstack%rowtype;
BEGIN
_r := ROW(0,NULL)::yj_error; -- code,reason
CASE
WHEN (_phase = 'submit') THEN
_r := fchecknameowner(_r,_s.owner,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_wid := fgetowner(_s.owner);
SELECT * INTO _op FROM torder WHERE
(ord).id = _s.stock_id AND usr = session_user AND (ord).own = _wid AND (ord).id=(ord).oid;
IF (NOT FOUND) THEN
/* could be found in the stack */
SELECT * INTO _sp FROM tstack WHERE
id = _s.stock_id AND usr = session_user AND _s.owner = jso->'owner' AND kind='order' AND (ord).id=(ord).oid;
IF (NOT FOUND) THEN
_r.code := -300;
_r.reason := 'the order was not found for this user and owner';
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
END IF;
_res := fpushprimitive(_r,'rmorder',row_to_json(_s));
RETURN _res;
WHEN (_phase = 'execute') THEN
_wid := fgetowner(_s.owner);
SELECT * INTO _op FROM torder WHERE
(ord).id = _s.stock_id AND usr = session_user AND (ord).own = _wid AND (ord).id=(ord).oid;
IF (NOT FOUND) THEN
_r.code := -301;
_r.reason := 'the stock is not in the order book';
RETURN ROW(_t.id,_r,_t.jso,NULL,NULL)::yj_primitive;
END IF;
-- delete order and sub-orders from the book
DELETE FROM torder o WHERE (o.ord).oid = (_op.ord).oid;
-- id,error,primitive,result
RETURN ROW(_t.id,NULL,_t.jso,
ROW((_op.ord).id,(_op.ord).qtt,(_op.ord).qua_prov,_s.owner,_op.usr)::yj_stock,
ROW((_op.ord).qua_prov,(_op.ord).qtt)::yj_value
)::yj_primitive;
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
-- quote
--------------------------------------------------------------------------------
CREATE FUNCTION fsubmitquote(_type eordertype,_owner dtext,_qua_requ dtext,_qtt_requ dqtt,_qua_prov dtext,_qtt_prov dqtt)
RETURNS yerrorprim AS $$
DECLARE
_res yj_primitive;
_prim yp_order;
BEGIN
_prim := ROW('quote',_type,_owner,_qua_requ,_qtt_requ,_qua_prov,_qtt_prov)::yp_order;
_res := fprocessquote('submit',NULL,_prim);
RETURN ROW(_res.id,_res.error)::yerrorprim;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION fsubmitquote(eordertype,dtext,dtext,dqtt,dtext,dqtt) TO role_co;
--------------------------------------------------------------------------------
CREATE FUNCTION fprocessquote(_phase eprimphase, _t tstack,_s yp_order)
RETURNS yj_primitive AS $$
DECLARE
_r yj_error;
_res yj_primitive;
_wid int;
_ir int;
_o yorder;
_type int;
_json_res json;
BEGIN
_r := ROW(0,NULL)::yj_error; -- code,reason
CASE
WHEN (_phase = 'submit') THEN
_r := fchecknameowner(_r,_s.owner,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_r := fcheckquaprovusr(_r,_s.qua_prov,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_res := fpushprimitive(_r,'quote',row_to_json(_s));
RETURN _res;
WHEN (_phase = 'execute') THEN
_wid := fgetowner(_s.owner);
_type := CASE WHEN _s.type='limit' THEN 1 ELSE 2 END;
_o := ROW( _type,
_t.id,_wid,_t.id,
_s.qtt_requ,_s.qua_requ,_s.qtt_prov,_s.qua_prov,_s.qtt_prov,
box('(0,0)'::point,'(0,0)'::point),box('(0,0)'::point,'(0,0)'::point),
- _s.dist,earth_get_square(box('(0,0)'::point,0.0))
+ 0.0,earth_get_square(box('(0,0)'::point,0.0))
)::yorder;
/*fproducequote(_ord yorder,_isquote boolean,_isnoqttlimit boolean,_islimit boolean,_isignoreomega boolean)
*/
_json_res := fproducequote(_o,true,false,_s.type='limit',false);
RETURN ROW(_t.id,NULL,_t.jso,_json_res,NULL)::yj_primitive;
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
-- primitive processing
--------------------------------------------------------------------------------
CREATE FUNCTION fprocessprimitive(_phase eprimphase, _s tstack)
RETURNS yj_primitive AS $$
DECLARE
_res yj_primitive;
_kind eprimitivetype;
BEGIN
_kind := _s.kind;
CASE
WHEN (_kind = 'order' ) THEN
_res := fprocessorder(_phase,_s,json_populate_record(NULL::yp_order,_s.jso));
WHEN (_kind = 'childorder' ) THEN
_res := fprocesschildorder(_phase,_s,json_populate_record(NULL::yp_childorder,_s.jso));
WHEN (_kind = 'rmorder' ) THEN
_res := fprocessrmorder(_phase,_s,json_populate_record(NULL::yp_rmorder,_s.jso));
WHEN (_kind = 'quote' ) THEN
_res := fprocessquote(_phase,_s,json_populate_record(NULL::yp_order,_s.jso));
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
RETURN _res;
END;
$$ LANGUAGE PLPGSQL;
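fcheckquaprovusr's rules (active when the QUAPROVUSR constant is 1) can be restated outside SQL: an unprefixed provided quality must be a known currency; otherwise the part before '@' must be non-empty and the suffix after '@', with '.' mapped to '_', must equal the session user. A hedged Python restatement of those checks, not the server code itself:

```python
def check_qua_prov(qua_prov, usr, currencies):
    """Return True when qua_prov passes the fcheckquaprovusr rules:
    no '@'  -> must be a registered currency (else code -12);
    '@' at position 0 -> empty prefix, rejected (code -13);
    otherwise the '@' suffix, dots replaced by '_', must be usr (code -14)."""
    pos = qua_prov.find('@')
    if pos == -1:
        return qua_prov in currencies   # unprefixed quality must be a currency
    if pos < 1:
        return False                    # empty prefix before '@'
    suffix = qua_prov[pos + 1:].replace('.', '_')
    return suffix == usr                # suffix must match the user name
```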
|
olivierch/openBarter
|
8f5e274adbf4aa286fa5aab47c9d6623880f1d37
|
worker_ob correction!!
|
diff --git a/src/sql/tables.sql b/src/sql/tables.sql
index 28f744d..85ccd07 100644
--- a/src/sql/tables.sql
+++ b/src/sql/tables.sql
@@ -1,353 +1,354 @@
--------------------------------------------------------------------------------
ALTER DEFAULT PRIVILEGES REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC;
ALTER DEFAULT PRIVILEGES REVOKE ALL ON TABLES FROM PUBLIC;
ALTER DEFAULT PRIVILEGES REVOKE ALL ON SEQUENCES FROM PUBLIC;
ALTER DEFAULT PRIVILEGES REVOKE ALL ON TYPES FROM PUBLIC;
--------------------------------------------------------------------------------
create domain dqtt AS int8 check( VALUE>0);
create domain dtext AS text check( char_length(VALUE)>0);
--------------------------------------------------------------------------------
-- main constants of the model
--------------------------------------------------------------------------------
create table tconst(
name dtext UNIQUE not NULL,
value int,
PRIMARY KEY (name)
);
GRANT SELECT ON tconst TO role_com;
--------------------------------------------------------------------------------
/* for booleans, 0 == false and !=0 == true
*/
INSERT INTO tconst (name,value) VALUES
('MAXCYCLE',64), -- must be less than yflow_get_maxdim()
('MAXPATHFETCHED',1024),-- maximum depth of the graph exploration
('MAXMVTPERTRANS',128), -- maximum number of movements per transaction
-- if this limit is reached, next cycles are not performed but all others
-- are included in the current transaction
('VERSION-X',2),('VERSION-Y',0),('VERSION-Z',2),
('OWNERINSERT',1), -- boolean when true, owner inserted when not found
('QUAPROVUSR',0), -- boolean when true, the quality provided by a barter is suffixed by user name
-- 1 prod
('OWNUSR',0), -- boolean when true, the owner is suffixed by user name
-- 1 prod
('DEBUG',1);
--------------------------------------------------------------------------------
create table tvar(
name dtext UNIQUE not NULL,
value int,
PRIMARY KEY (name)
);
INSERT INTO tvar (name,value) VALUES ('INSTALLED',0);
+GRANT SELECT ON tvar TO role_com;
--------------------------------------------------------------------------------
-- TOWNER
--------------------------------------------------------------------------------
create table towner (
id serial UNIQUE not NULL,
name dtext UNIQUE not NULL,
PRIMARY KEY (id)
);
comment on table towner is 'owners of values exchanged';
alter sequence towner_id_seq owned by towner.id;
create index towner_name_idx on towner(name);
SELECT _reference_time('towner');
SELECT _grant_read('towner');
--------------------------------------------------------------------------------
-- ORDER BOOK
--------------------------------------------------------------------------------
-- type = type_flow | type_primitive <<8 | type_mode <<16
create domain dtypeorder AS int check(VALUE >=0 AND VALUE < 16777215); --((1<<24)-1)
-- type_flow &3 1 order limit,2 order best
-- type_flow &12 bit set for c calculations
-- 4 no qttlimit
-- 8 ignoreomega
-- yorder.type is a type_flow = type & 255
-- type_primitive
-- 1 order
-- 2 rmorder
-- 3 quote
-- 4 prequote
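The bit layout documented above packs three fields into one integer. A small Python sketch (a hypothetical helper, not part of the schema) decodes a dtypeorder value according to those comments:

```python
# Decode the dtypeorder layout described in the comments above:
#   bits 0-7   type_flow  (&3: 1 = limit, 2 = best; &4 no qtt limit; &8 ignore omega)
#   bits 8-15  type_primitive (1 order, 2 rmorder, 3 quote, 4 prequote)
#   bits 16-23 type_mode
def decode_order_type(t: int) -> dict:
    flow = t & 255
    return {
        "flow": "limit" if (flow & 3) == 1 else "best",
        "no_qtt_limit": bool(flow & 4),
        "ignore_omega": bool(flow & 8),
        "primitive": (t >> 8) & 255,
        "mode": (t >> 16) & 255,
    }

# a 'limit' order carried by a 'quote' primitive
print(decode_order_type(1 | (3 << 8)))
```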
CREATE TYPE eordertype AS ENUM ('best','limit');
CREATE TYPE eprimitivetype AS ENUM ('order','childorder','rmorder','quote','prequote');
create table torder (
usr dtext,
own dtext,
ord yorder, --defined by the extension flowf
created timestamp not NULL,
updated timestamp,
duration interval
);
comment on table torder is 'Order book';
comment on column torder.usr is 'user that inserted the order ';
comment on column torder.ord is 'the order';
comment on column torder.created is 'time when the order was put on the stack';
comment on column torder.updated is 'time when the (quantity) of the order was updated by the order book';
comment on column torder.duration is 'the life time of the order';
SELECT _grant_read('torder');
create index torder_qua_prov_idx on torder(((ord).qua_prov)); -- using gin(((ord).qua_prov) text_ops);
create index torder_id_idx on torder(((ord).id));
create index torder_oid_idx on torder(((ord).oid));
-- id,type,own,oid,qtt_requ,qua_requ,qtt_prov,qua_prov,qtt
create view vorder as
select (o.ord).id as id,(o.ord).type as type,w.name as own,(o.ord).oid as oid,
(o.ord).qtt_requ as qtt_requ,(o.ord).qua_requ as qua_requ,
(o.ord).qtt_prov as qtt_prov,(o.ord).qua_prov as qua_prov,
(o.ord).qtt as qtt, o.created as created, o.updated as updated
from torder o left join towner w on ((o.ord).own=w.id) where o.usr=session_user;
SELECT _grant_read('vorder');
-- without dates and without filtering on usr
create view vorder2 as
select (o.ord).id as id,(o.ord).type as type,w.name as own,(o.ord).oid as oid,
(o.ord).qtt_requ as qtt_requ,(o.ord).qua_requ as qua_requ,
(o.ord).qtt_prov as qtt_prov,(o.ord).qua_prov as qua_prov,
(o.ord).qtt as qtt
from torder o left join towner w on ((o.ord).own=w.id);
SELECT _grant_read('vorder2');
-- only parent for all users
create view vbarter as
select (o.ord).id as id,(o.ord).type as type,o.usr as user,w.name as own,
(o.ord).qtt_requ as qtt_requ,(o.ord).qua_requ as qua_requ,
(o.ord).qtt_prov as qtt_prov,(o.ord).qua_prov as qua_prov,
(o.ord).qtt as qtt, o.created as created, o.updated as updated
from torder o left join towner w on ((o.ord).own=w.id) where (o.ord).oid=(o.ord).id;
SELECT _grant_read('vbarter');
-- parent and childs for all users, used with vmvto
create view vordero as
select id,
(case when (type & 3=1) then 'limit' else 'best' end)::eordertype as type,
own as owner,
case when id=oid then (qtt::text || ' ' || qua_prov) else '' end as stock,
'(' || qtt_prov::text || '/' || qtt_requ::text || ') ' ||
qua_prov || ' / '|| qua_requ as expected_ω,
case when id=oid then '' else oid::text end as oid
from vorder order by id asc;
GRANT SELECT ON vordero TO role_com;
comment on view vordero is 'order book for all users, to be used with vmvto';
comment on column vordero.id is 'the id of the order';
comment on column vordero.owner is 'the owner';
comment on column vordero.stock is 'for a parent order the stock offered by the owner';
comment on column vordero.expected_ω is 'the ω of the order';
comment on column vordero.oid is 'for a child-order, the id of the parent-order';
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
CREATE FUNCTION fgetowner(_name text) RETURNS int AS $$
DECLARE
_wid int;
_OWNERINSERT boolean := fgetconst('OWNERINSERT')=1;
BEGIN
LOOP
SELECT id INTO _wid FROM towner WHERE name=_name;
IF found THEN
return _wid;
END IF;
IF (NOT _OWNERINSERT) THEN
RAISE EXCEPTION 'The owner does not exist' USING ERRCODE='YU001';
END IF;
BEGIN
INSERT INTO towner (name) VALUES (_name) RETURNING id INTO _wid;
-- RAISE NOTICE 'owner % created',_name;
return _wid;
EXCEPTION WHEN unique_violation THEN
NULL;--
END;
END LOOP;
END;
$$ LANGUAGE PLPGSQL;
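fgetowner above is the standard concurrent get-or-create pattern: try the SELECT, fall back to INSERT, and if a concurrent transaction wins the race (unique_violation) loop back to the SELECT. A rough Python sketch of the same control flow, with a plain dict standing in for towner (with a dict the conflict branch never fires; a real database driver would raise its unique-violation error there):

```python
class UniqueViolation(Exception):
    """Stands in for the database's unique_violation error."""

def get_or_create(store: dict, name: str, allow_insert: bool = True) -> int:
    # Mirrors fgetowner: loop until the row is found or inserted.
    while True:
        if name in store:              # SELECT id FROM towner WHERE name=_name
            return store[name]
        if not allow_insert:           # the OWNERINSERT constant is disabled
            raise KeyError(name)
        try:
            new_id = len(store) + 1    # INSERT ... RETURNING id
            store[name] = new_id
            return new_id
        except UniqueViolation:
            continue                   # a concurrent insert won: retry the SELECT
```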
--------------------------------------------------------------------------------
-- TMVT
-- id,nbc,nbt,grp,xid,usr_src,usr_dst,xoid,own_src,own_dst,qtt,nat,ack,exhausted,order_created,created
--------------------------------------------------------------------------------
/*
create table tmvt (
id serial UNIQUE not NULL,
nbc int default NULL,
nbt int default NULL,
grp int default NULL,
xid int default NULL,
usr_src text default NULL,
usr_dst text default NULL,
xoid int default NULL,
own_src text default NULL,
own_dst text default NULL,
qtt int8 default NULL,
nat text default NULL,
ack boolean default NULL,
cack boolean default NULL,
exhausted boolean default NULL,
order_created timestamp default NULL,
created timestamp default NULL,
om_exp double precision default NULL,
om_rea double precision default NULL,
CONSTRAINT ctmvt_grp FOREIGN KEY (grp) references tmvt(id) ON UPDATE CASCADE
);
GRANT SELECT ON tmvt TO role_com;
comment on table tmvt is 'Records ownership changes';
comment on column tmvt.nbc is 'number of movements of the exchange cycle';
comment on column tmvt.nbt is 'number of movements of the transaction containing several exchange cycles';
comment on column tmvt.grp is 'references the first movement of the exchange';
comment on column tmvt.xid is 'references the order.id';
comment on column tmvt.usr_src is 'usr provider';
comment on column tmvt.usr_dst is 'usr receiver';
comment on column tmvt.xoid is 'references the order.oid';
comment on column tmvt.own_src is 'owner provider';
comment on column tmvt.own_dst is 'owner receiver';
comment on column tmvt.qtt is 'quantity of the value moved';
comment on column tmvt.nat is 'quality of the value moved';
comment on column tmvt.ack is 'set when movement has been acknowledged';
comment on column tmvt.cack is 'set when the cycle has been acknowledged';
comment on column tmvt.exhausted is 'set when the movement exhausted the order providing the value';
comment on column tmvt.om_exp is 'ω expected by the order';
comment on column tmvt.om_rea is 'realized ω of the movement';
alter sequence tmvt_id_seq owned by tmvt.id;
GRANT SELECT ON tmvt_id_seq TO role_com;
create index tmvt_grp_idx on tmvt(grp);
create index tmvt_nat_idx on tmvt(nat);
create index tmvt_own_src_idx on tmvt(own_src);
create index tmvt_own_dst_idx on tmvt(own_dst);
CREATE VIEW vmvt AS select * from tmvt;
GRANT SELECT ON vmvt TO role_com;
CREATE VIEW vmvt_tu AS select id,nbc,grp,xid,xoid,own_src,own_dst,qtt,nat,ack,cack,exhausted from tmvt;
GRANT SELECT ON vmvt_tu TO role_com;
create view vmvto as
select id,grp,
usr_src as from_usr,
own_src as from_own,
qtt::text || ' ' || nat as value,
usr_dst as to_usr,
own_dst as to_own,
to_char(om_exp, 'FM999.9999990') as expected_ω,
to_char(om_rea, 'FM999.9999990') as actual_ω,
ack
from tmvt where cack is NULL order by id asc;
GRANT SELECT ON vmvto TO role_com;
*/
CREATE SEQUENCE tmvt_id_seq;
--------------------------------------------------------------------------------
-- STACK id,usr,kind,jso,submitted
--------------------------------------------------------------------------------
create table tstack (
id serial UNIQUE not NULL,
usr dtext,
kind eprimitivetype,
jso json, -- representation of the primitive
submitted timestamp not NULL,
PRIMARY KEY (id)
);
comment on table tstack is 'Records the stack of primitives';
comment on column tstack.id is 'id of this primitive';
comment on column tstack.usr is 'user submitting the primitive';
comment on column tstack.kind is 'type of primitive';
comment on column tstack.jso is 'primitive payload';
comment on column tstack.submitted is 'timestamp when the primitive was successfully submitted';
alter sequence tstack_id_seq owned by tstack.id;
GRANT SELECT ON tstack TO role_com;
SELECT fifo_init('tstack');
GRANT SELECT ON tstack_id_seq TO role_com;
--------------------------------------------------------------------------------
CREATE TYPE eprimphase AS ENUM ('submit', 'execute');
--------------------------------------------------------------------------------
-- MSG
--------------------------------------------------------------------------------
CREATE TYPE emsgtype AS ENUM ('response', 'exchange');
create table tmsg (
id serial UNIQUE not NULL,
usr dtext default NULL, -- the user receiver of this message
typ emsgtype not NULL,
jso json default NULL,
created timestamp not NULL
);
alter sequence tmsg_id_seq owned by tmsg.id;
SELECT _grant_read('tmsg');
SELECT _grant_read('tmsg_id_seq');
SELECT fifo_init('tmsg');
CREATE VIEW vmsg AS select * from tmsg WHERE usr = session_user;
SELECT _grant_read('vmsg');
--------------------------------------------------------------------------------
CREATE TYPE yj_error AS (
code int,
reason text
);
CREATE TYPE yerrorprim AS (
id int,
error yj_error
);
CREATE TYPE yj_value AS (
qtt int8,
nat text
);
CREATE TYPE yj_stock AS (
id int,
qtt int8,
nat text,
own text,
usr text
);
CREATE TYPE yj_Ï AS (
id int,
qtt_prov int8,
qtt_requ int8,
type eordertype
);
CREATE TYPE yj_mvt AS (
id int,
cycle int,
orde yj_Ï,
stock yj_stock,
mvt_from yj_stock,
mvt_to yj_stock
);
CREATE TYPE yj_order AS (
id int,
error yj_error
);
CREATE TYPE yj_primitive AS (
id int,
error yj_error,
primitive json,
result json,
value json
);
diff --git a/src/worker_ob.c b/src/worker_ob.c
index af2669e..a72d2c4 100644
--- a/src/worker_ob.c
+++ b/src/worker_ob.c
@@ -1,377 +1,377 @@
/* -------------------------------------------------------------------------
*
* worker_ob.c
* Code based on worker_spi.c
*
* This code connects to a database and launches two background workers.
for each i in [0,1], worker i does the following:
while(true)
dowait := market.workeri()
if (dowait):
wait(dowait) // wait dowait milliseconds
These workers do nothing if the schema market is not installed.
To force a bg_worker to restart, send a SIGHUP signal to its worker process.
*
* -------------------------------------------------------------------------
*/
#include "postgres.h"
/* These are always necessary for a bgworker */
#include "miscadmin.h"
#include "postmaster/bgworker.h"
#include "storage/ipc.h"
#include "storage/latch.h"
#include "storage/lwlock.h"
#include "storage/proc.h"
#include "storage/shmem.h"
/* these headers are used by this particular worker's code */
#include "access/xact.h"
#include "executor/spi.h"
#include "fmgr.h"
#include "lib/stringinfo.h"
#include "pgstat.h"
#include "utils/builtins.h"
#include "utils/snapmgr.h"
#include "tcop/utility.h"
#define BGW_NBWORKERS 2
#define BGW_OPENCLOSE 0
static char *worker_names[] = {"openclose","consumestack"};
// PG_MODULE_MAGIC;
void _PG_init(void);
/* flags set by signal handlers */
static volatile sig_atomic_t got_sighup = false;
static volatile sig_atomic_t got_sigterm = false;
/* GUC variable */
static char *worker_ob_database = "market";
/* others */
static char *worker_ob_user = "user_bo";
typedef struct worktable
{
const char *function_name;
int dowait;
} worktable;
/*
* Signal handler for SIGTERM
* Set a flag to tell the main loop to terminate, and set our latch to wake
* it up.
*/
static void
worker_spi_sigterm(SIGNAL_ARGS)
{
int save_errno = errno;
got_sigterm = true;
if (MyProc)
SetLatch(&MyProc->procLatch);
errno = save_errno;
}
/*
* Signal handler for SIGHUP
* Set a flag to tell the main loop to reread the config file, and set
* our latch to wake it up.
*/
static void
worker_spi_sighup(SIGNAL_ARGS)
{
int save_errno = errno;
got_sighup = true;
if (MyProc)
SetLatch(&MyProc->procLatch);
errno = save_errno;
}
static int _spi_exec_select_ret_int(StringInfoData buf) {
int ret;
int ntup;
bool isnull;
ret = SPI_execute(buf.data, true, 1); // read_only -- one row returned
pfree(buf.data);
if (ret != SPI_OK_SELECT)
elog(FATAL, "SPI_execute failed: error code %d", ret);
if (SPI_processed != 1)
elog(FATAL, "not a singleton result");
ntup = DatumGetInt32(SPI_getbinval(SPI_tuptable->vals[0],
SPI_tuptable->tupdesc,
1, &isnull));
if (isnull)
elog(FATAL, "null result");
return ntup;
}
static bool _test_market_installed() {
int ret;
StringInfoData buf;
initStringInfo(&buf);
appendStringInfo(&buf, "select count(*) from pg_namespace where nspname = 'market'");
ret = _spi_exec_select_ret_int(buf);
if(ret == 0)
return false;
- resetStringInfo(&buf);
+ initStringInfo(&buf);
appendStringInfo(&buf, "select value from market.tvar where name = 'INSTALLED'");
ret = _spi_exec_select_ret_int(buf);
if(ret == 0)
return false;
return true;
}
/*
*/
static bool
_worker_ob_installed()
{
bool installed;
SetCurrentStatementStartTimestamp();
StartTransactionCommand();
SPI_connect();
PushActiveSnapshot(GetTransactionSnapshot());
pgstat_report_activity(STATE_RUNNING, "initializing spi_worker");
installed = _test_market_installed();
if (installed)
elog(LOG, "%s starting",MyBgworkerEntry->bgw_name);
else
elog(LOG, "%s waiting for installation",MyBgworkerEntry->bgw_name);
SPI_finish();
PopActiveSnapshot();
CommitTransactionCommand();
pgstat_report_activity(STATE_IDLE, NULL);
return installed;
}
static void _openclose_vacuum() {
/* called by openclose bg_worker */
StringInfoData buf;
int ret;
initStringInfo(&buf);
appendStringInfo(&buf,"VACUUM FULL");
pgstat_report_activity(STATE_RUNNING, buf.data);
ret = SPI_execute(buf.data, false, 0);
pfree(buf.data);
if (ret != SPI_OK_UTILITY) // SPI_OK_UPDATE_RETURNING,SPI_OK_SELECT
// TODO: check the return code
elog(FATAL, "cannot execute VACUUM FULL: error code %d",ret);
return;
}
static void
worker_ob_main(Datum main_arg)
{
int index = DatumGetInt32(main_arg);
worktable *table;
StringInfoData buf;
bool installed;
//char function_name[20];
table = palloc(sizeof(worktable));
table->function_name = pstrdup(worker_names[index]);
table->dowait = 0;
/* Establish signal handlers before unblocking signals. */
pqsignal(SIGHUP, worker_spi_sighup);
pqsignal(SIGTERM, worker_spi_sigterm);
/* We're now ready to receive signals */
BackgroundWorkerUnblockSignals();
/* Connect to our database */
if(!(worker_ob_database && *worker_ob_database))
elog(FATAL, "database name undefined");
BackgroundWorkerInitializeConnection(worker_ob_database, worker_ob_user);
installed = _worker_ob_installed();
initStringInfo(&buf);
appendStringInfo(&buf,"SELECT %s FROM market.%s()",
table->function_name, table->function_name);
/*
* Main loop: do this until the SIGTERM handler tells us to terminate
*/
while (!got_sigterm)
{
int ret;
int rc;
int _worker_ob_naptime; // = worker_ob_naptime * 1000L;
if(installed) // && !table->dowait)
_worker_ob_naptime = table->dowait;
else
_worker_ob_naptime = 1000L; // 1 second
/*
* Background workers mustn't call usleep() or any direct equivalent:
* instead, they may wait on their process latch, which sleeps as
* necessary, but is awakened if postmaster dies. That way the
* background process goes away immediately in an emergency.
*/
/* done even if _worker_ob_naptime == 0 */
rc = WaitLatch(&MyProc->procLatch,
WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
_worker_ob_naptime );
ResetLatch(&MyProc->procLatch);
/* emergency bailout if postmaster has died */
if (rc & WL_POSTMASTER_DEATH)
proc_exit(1);
/*
* In case of a SIGHUP, just reload the configuration.
*/
if (got_sighup)
{
got_sighup = false;
ProcessConfigFile(PGC_SIGHUP);
installed = _worker_ob_installed();
}
if( !installed) continue;
/*
* Start a transaction on which we can run queries. Note that each
* StartTransactionCommand() call should be preceded by a
* SetCurrentStatementStartTimestamp() call, which sets both the time
* for the statement we're about the run, and also the transaction
* start time. Also, each other query sent to SPI should probably be
* preceded by SetCurrentStatementStartTimestamp(), so that statement
* start time is always up to date.
*
* The SPI_connect() call lets us run queries through the SPI manager,
* and the PushActiveSnapshot() call creates an "active" snapshot
* which is necessary for queries to have MVCC data to work on.
*
* The pgstat_report_activity() call makes our activity visible
* through the pgstat views.
*/
SetCurrentStatementStartTimestamp();
StartTransactionCommand();
SPI_connect();
PushActiveSnapshot(GetTransactionSnapshot());
pgstat_report_activity(STATE_RUNNING, buf.data);
/* We can now execute queries via SPI */
ret = SPI_execute(buf.data, false, 0);
if (ret != SPI_OK_SELECT) // SPI_OK_UPDATE_RETURNING)
elog(FATAL, "cannot execute market.%s(): error code %d",
table->function_name, ret);
if (SPI_processed != 1) // number of tuple returned
elog(FATAL, "market.%s() returned %d tuples instead of one",
table->function_name, SPI_processed);
{
bool isnull;
int32 val;
val = DatumGetInt32(SPI_getbinval(SPI_tuptable->vals[0],
SPI_tuptable->tupdesc,
1, &isnull));
table->dowait = 0;
if (!isnull) {
if (val >=0)
table->dowait = val;
else {
//
if((val == -100) && (index == BGW_OPENCLOSE))
_openclose_vacuum();
}
}
}
/*
* And finish our transaction.
*/
SPI_finish();
PopActiveSnapshot();
CommitTransactionCommand();
pgstat_report_activity(STATE_IDLE, NULL);
}
proc_exit(0);
}
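The value returned by market.workeri() acts as a tiny protocol between SQL and the C loop above: NULL or a non-negative integer is the naptime in milliseconds before the next call, while negative values encode commands (-100 makes the openclose worker run VACUUM FULL). A Python sketch of that dispatch (run_vacuum is a hypothetical stand-in for _openclose_vacuum):

```python
VACUUM_CMD = -100   # matches the (val == -100) test in worker_ob_main

def run_vacuum() -> None:
    """Hypothetical stand-in for _openclose_vacuum()."""

def dispatch(val, is_openclose: bool) -> int:
    # Returns the next naptime (table->dowait) in milliseconds.
    if val is None or val < 0:
        if val == VACUUM_CMD and is_openclose:
            run_vacuum()    # only the openclose worker honours the command
        return 0
    return val
```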
/*
* Entrypoint of this module.
*
* We register more than one worker process here, to demonstrate how that can
* be done.
*/
void
_PG_init(void)
{
BackgroundWorker worker;
unsigned int i;
/* get the configuration */
/*
DefineCustomIntVariable("worker_ob.naptime",
"Minimum duration of wait time (in milliseconds).",
NULL,
&worker_ob_naptime,
100,
1,
INT_MAX,
PGC_SIGHUP,
0,
NULL,
NULL,
NULL); */
DefineCustomStringVariable("worker_ob.database",
"Name of the database.",
NULL,
&worker_ob_database,
"market",
PGC_SIGHUP, 0,
NULL,NULL,NULL);
/* set up common data for all our workers */
worker.bgw_flags = BGWORKER_SHMEM_ACCESS |
BGWORKER_BACKEND_DATABASE_CONNECTION;
worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
worker.bgw_restart_time = 60; // BGW_NEVER_RESTART;
worker.bgw_main = worker_ob_main;
/*
* Now fill in worker-specific data, and do the actual registrations.
*/
for (i = 0; i < BGW_NBWORKERS; i++)
{
snprintf(worker.bgw_name, BGW_MAXLEN, "market.%s", worker_names[i]);
worker.bgw_main_arg = Int32GetDatum(i);
RegisterBackgroundWorker(&worker);
}
}
|
olivierch/openBarter
|
3533fd6366737409bbdacb11314dffbd6bee7d51
|
bg_worker redesign
|
diff --git a/simu/liquid/distrib.py b/simu/liquid/distrib.py
index c1049e0..3d8a1b8 100644
--- a/simu/liquid/distrib.py
+++ b/simu/liquid/distrib.py
@@ -1,46 +1,45 @@
#!/usr/bin/python
# -*- coding: utf8 -*-
import random
-import cliquid
def init():
random.seed()
def couple(f,maxi):
"""
usage: x,y = distrib.couple(distrib.uniform)
"""
a = f(maxi)
b = a
while(a == b):
b = f(maxi)
return a,b
def couple_money(f,maxi):
a = 1
b = a
while(a == b):
b = f(maxi)
if(random.randint(0,1)):
return a,b
else:
return b,a
def uniformQlt(maxi):
return random.randint(1,maxi)
def betaQlt(maxi):
r = random.betavariate(2.0,5.0)
# r in [0,1], probability peaks around 0.2
s = int(r*maxi)+1
if(s<0):
return 0
if(s>maxi):
return maxi
return s
diff --git a/src/sql/algo.sql b/src/sql/algo.sql
index ebb80c1..bca3d32 100644
--- a/src/sql/algo.sql
+++ b/src/sql/algo.sql
@@ -1,492 +1,496 @@
--------------------------------------------------------------------------------
/* function fcreate_tmp
It is the central query of openbarter
for an order O fcreate_tmp creates a temporary table _tmp of objects.
Each object represents a chain of orders - a flow - going to O.
The table has columns
debut the first order of the path
path the path
fin the end of the path (O)
depth the exploration depth
cycle a boolean true when the path contains the new order
The number of paths fetched is limited to MAXPATHFETCHED
Among those objects representing chains of orders,
only those making a potential exchange (draft) are recorded.
*/
--------------------------------------------------------------------------------
/*
CREATE VIEW vorderinsert AS
SELECT id,yorder_get(id,own,nr,qtt_requ,np,qtt_prov,qtt) as ord,np,nr
FROM torder ORDER BY ((qtt_prov::double precision)/(qtt_requ::double precision)) DESC; */
--------------------------------------------------------------------------------
CREATE FUNCTION fcreate_tmp(_ord yorder) RETURNS int AS $$
DECLARE
_MAXPATHFETCHED int := fgetconst('MAXPATHFETCHED');
_MAXCYCLE int := fgetconst('MAXCYCLE');
_cnt int;
BEGIN
/* the statement LIMIT would not avoid deep exploration if the condition
was specified on Z in the search_backward WHERE condition */
+ -- fails when qua_prov == qua_requ
+ IF((_ord).qua_prov = (_ord).qua_requ) THEN
+ RAISE EXCEPTION 'quality provided and required are the same: %',_ord;
+ END IF;
CREATE TEMPORARY TABLE _tmp ON COMMIT DROP AS (
SELECT yflow_finish(Z.debut,Z.path,Z.fin) as cycle FROM (
WITH RECURSIVE search_backward(debut,path,fin,depth,cycle) AS(
SELECT _ord,yflow_init(_ord),
_ord,1,false
-- FROM torder WHERE (ord).id= _ordid
UNION ALL
SELECT X.ord,yflow_grow_backward(X.ord,Y.debut,Y.path),
Y.fin,Y.depth+1,yflow_contains_oid((X.ord).oid,Y.path)
FROM torder X,search_backward Y
WHERE yflow_match(X.ord,Y.debut) -- (X.ord).qua_prov=(Y.debut).qua_requ
AND ((X.duration IS NULL) OR ((X.created + X.duration) > clock_timestamp()))
AND Y.depth < _MAXCYCLE
AND NOT cycle
AND (X.ord).carre_prov @> (Y.debut).pos_requ -- use if gist(carre_prov)
AND NOT yflow_contains_oid((X.ord).oid,Y.path)
) SELECT debut,path,fin from search_backward
LIMIT _MAXPATHFETCHED
) Z WHERE /* (Z.fin).qua_prov=(Z.debut).qua_requ
AND */ yflow_match(Z.fin,Z.debut) -- it is a cycle
AND yflow_is_draft(yflow_finish(Z.debut,Z.path,Z.fin)) -- and a draft
);
RETURN 0;
END;
$$ LANGUAGE PLPGSQL;
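The recursive CTE in fcreate_tmp explores the order book backward from the new order, growing chains where each order provides the quality the next one requires, bounded by MAXCYCLE (depth) and MAXPATHFETCHED (paths fetched). Stripped of positions and draft checks, the search can be sketched in Python (a hypothetical model with orders as plain dicts):

```python
def find_cycles(orders, new_order, max_depth=8, max_paths=1024):
    """Backward search from new_order, like the search_backward CTE:
    grow chains X -> ... -> new_order where each order provides the
    quality the next one requires; a chain closes into a cycle when
    new_order provides what the chain's first order requires."""
    frontier = [[new_order]]                  # paths, each ending at new_order
    cycles, fetched = [], 0
    while frontier and fetched < max_paths:   # MAXPATHFETCHED bound
        path = frontier.pop()
        fetched += 1
        head = path[0]                        # "debut" in the CTE
        if len(path) > 1 and new_order["qua_prov"] == head["qua_requ"]:
            cycles.append(path)               # yflow_match(fin, debut): a cycle
            continue
        if len(path) >= max_depth:            # MAXCYCLE bound
            continue
        for o in orders:                      # grow backward into the head
            if o["qua_prov"] == head["qua_requ"] and o not in path:
                frontier.append([o] + path)
    return cycles

# a two-party barter closes immediately
new = {"id": 1, "qua_requ": "apples", "qua_prov": "bread"}
book = [{"id": 2, "qua_requ": "bread", "qua_prov": "apples"}]
print(len(find_cycles(book, new)))
```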
--------------------------------------------------------------------------------
-- order unstacked and inserted into torder
/* if the referenced oid is found,
the order is inserted, and the matching process is launched
else a movement is created
*/
--------------------------------------------------------------------------------
CREATE TYPE yresflow AS (
mvts int[], -- list of id des mvts
qtts int8[], -- value.qtt moved
nats text[], -- value.nat moved
grp int,
owns text[],
usrs text[],
ords yorder[]
);
--------------------------------------------------------------------------------
CREATE FUNCTION insertorder(_owner dtext,_o yorder,_usr dtext,_created timestamp,_duration interval) RETURNS int AS $$
DECLARE
_fmvtids int[];
-- _first_mvt int;
-- _err int;
--_flows json[]:= ARRAY[]::json[];
_cyclemax yflow;
-- _mvts int[];
_res int8[];
_MAXMVTPERTRANS int := fgetconst('MAXMVTPERTRANS');
_nbmvts int := 0;
_qtt_give int8 := 0;
_qtt_reci int8 := 0;
_cnt int;
_resflow yresflow;
BEGIN
lock table torder in share update exclusive mode NOWAIT;
-- immediately aborts the order if the lock cannot be acquired
INSERT INTO torder(usr,own,ord,created,updated,duration) VALUES (_usr,_owner,_o,_created,NULL,_duration);
_fmvtids := ARRAY[]::int[];
-- _time_begin := clock_timestamp();
_cnt := fcreate_tmp(_o);
-- RAISE WARNING 'insertbarter A % %',_o,_cnt;
LOOP
SELECT yflow_max(cycle) INTO _cyclemax FROM _tmp WHERE yflow_is_draft(cycle);
IF(NOT yflow_is_draft(_cyclemax)) THEN
EXIT; -- from LOOP
END IF;
-- RAISE WARNING 'insertbarter B %',_cyclemax;
_nbmvts := _nbmvts + yflow_dim(_cyclemax);
IF(_nbmvts > _MAXMVTPERTRANS) THEN
EXIT;
END IF;
_resflow := fexecute_flow(_cyclemax);
_cnt := foncreatecycle(_resflow);
_fmvtids := _fmvtids || _resflow.mvts;
_res := yflow_qtts(_cyclemax);
_qtt_reci := _qtt_reci + _res[1];
_qtt_give := _qtt_give + _res[2];
UPDATE _tmp SET cycle = yflow_reduce(cycle,_cyclemax,false);
DELETE FROM _tmp WHERE NOT yflow_is_draft(cycle);
END LOOP;
-- RAISE WARNING 'insertbarter C % % % % %',_qtt_give,_qtt_reci,_o.qtt_prov,_o.qtt_requ,_fmvtids;
IF ( (_qtt_give != 0) AND ((_o.type & 3) = 1) -- ORDER_LIMIT
AND ((_qtt_give::double precision) /(_qtt_reci::double precision)) >
((_o.qtt_prov::double precision) /(_o.qtt_requ::double precision))
) THEN
RAISE EXCEPTION 'pb: Omega of the flows obtained is not limited by the order limit' USING ERRCODE='YA003';
END IF;
-- set the number of movements in this transaction
-- UPDATE tmvt SET nbt= array_length(_fmvtids,1) WHERE id = ANY (_fmvtids);
RETURN 0;
END;
$$ LANGUAGE PLPGSQL;
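The guard at the end of insertorder enforces the limit-order invariant: for an ORDER_LIMIT, the aggregate ratio actually realized (qtt_give/qtt_reci) must not exceed the ratio the order advertised (qtt_prov/qtt_requ), otherwise error YA003 is raised. A numeric sketch (hypothetical figures):

```python
def violates_limit(qtt_give: int, qtt_reci: int,
                   qtt_prov: int, qtt_requ: int) -> bool:
    """True when the realized ratio exceeds the order's limit ratio,
    i.e. the condition that makes insertorder raise YA003."""
    if qtt_give == 0:
        return False               # nothing was exchanged: nothing to check
    return (qtt_give / qtt_reci) > (qtt_prov / qtt_requ)

# an order offering 10 for 5 has a limit ratio of 2.0
print(violates_limit(10, 5, 10, 5))   # realized 2.0 equals the limit: allowed
print(violates_limit(9, 4, 10, 5))    # realized 2.25 > 2.0: rejected
```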
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
CREATE FUNCTION foncreatecycle(_r yresflow) RETURNS int AS $$
DECLARE
_usr_src text;
_cnt int;
_i int;
_nbcommit int;
_iprev int;
_inext int;
_o yorder;
BEGIN
_nbcommit := array_length(_r.ords,1);
_i := _nbcommit;
_iprev := _i -1;
FOR _inext IN 1.._nbcommit LOOP
_usr_src := _r.usrs[_i];
_o := _r.ords[_i];
INSERT INTO tmsg (typ,jso,usr,created) VALUES (
'exchange',
row_to_json(ROW(
_r.mvts[_i],
_r.grp,
ROW( -- order
_o.id,
_o.qtt_prov,
_o.qtt_requ,
CASE WHEN _o.type&3 =1 THEN 'limit' ELSE 'best' END
)::yj_Ï,
ROW( -- stock
_o.oid,
_o.qtt,
_r.nats[_i],
_r.owns[_i],
_r.usrs[_i]
)::yj_stock,
ROW( -- mvt_from
_r.mvts[_i],
_r.qtts[_i],
_r.nats[_i],
_r.owns[_inext],
_r.usrs[_inext]
)::yj_stock,
ROW( --mvt_to
_r.mvts[_iprev],
_r.qtts[_iprev],
_r.nats[_iprev],
_r.owns[_iprev],
_r.usrs[_iprev]
)::yj_stock
)::yj_mvt),
_usr_src,statement_timestamp());
_iprev := _i;
_i := _inext;
END LOOP;
RETURN 0;
END;
$$ LANGUAGE PLPGSQL;
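foncreatecycle walks the exchange cycle with three rotating 1-based indices (_iprev, _i, _inext), so each 'exchange' message can reference the current order together with its upstream and downstream neighbours in the cycle. The index rotation alone can be sketched in Python (a hypothetical helper):

```python
def ring_triples(n: int):
    """Yield the (prev, cur, next) 1-based index triples in the order
    foncreatecycle rotates _iprev, _i and _inext around a cycle of n orders."""
    i, iprev = n, n - 1
    out = []
    for inext in range(1, n + 1):
        out.append((iprev, i, inext))
        iprev, i = i, inext          # _iprev := _i; _i := _inext
    return out

print(ring_triples(3))
```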
--------------------------------------------------------------------------------
/* fexecute_flow used for a barter
from a flow representing a draft, for each order:
inserts a new movement
updates the order book
*/
--------------------------------------------------------------------------------
CREATE FUNCTION fexecute_flow(_flw yflow) RETURNS yresflow AS $$
DECLARE
_i int;
_next_i int;
_prev_i int;
_nbcommit int;
_first_mvt int;
_exhausted boolean;
_mvtexhausted boolean;
_cntExhausted int;
_mvt_id int;
_cnt int;
_resflow yresflow;
--_mvts int[];
--_oids int[];
_qtt int8;
_flowr int8;
_qttmin int8;
_qttmax int8;
_o yorder;
_usr text;
_usrnext text;
-- _own text;
-- _ownnext text;
-- _idownnext int;
-- _pidnext int;
_or torder%rowtype;
_mat int8[][];
_om_exp double precision;
_om_rea double precision;
BEGIN
_nbcommit := yflow_dim(_flw);
-- sanity check
IF( _nbcommit <2 ) THEN
RAISE EXCEPTION 'the flow should be draft:_nbcommit = %',_nbcommit
USING ERRCODE='YA003';
END IF;
_first_mvt := NULL;
_exhausted := false;
-- _resx.nbc := _nbcommit;
_resflow.mvts := ARRAY[]::int[];
_resflow.qtts := ARRAY[]::int8[];
_resflow.nats := ARRAY[]::text[];
_resflow.owns := ARRAY[]::text[];
_resflow.usrs := ARRAY[]::text[];
_resflow.ords := ARRAY[]::yorder[];
_mat := yflow_to_matrix(_flw);
_i := _nbcommit;
_prev_i := _i - 1;
FOR _next_i IN 1 .. _nbcommit LOOP
------------------------------------------------------------------------
_o.id := _mat[_i][1];
_o.own := _mat[_i][2];
_o.oid := _mat[_i][3];
_o.qtt := _mat[_i][6];
_flowr := _mat[_i][7];
-- _idownnext := _mat[_next_i][2];
-- _pidnext := _mat[_next_i][3];
-- sanity check
SELECT count(*),min((ord).qtt),max((ord).qtt) INTO _cnt,_qttmin,_qttmax
FROM torder WHERE (ord).oid = _o.oid;
IF(_cnt = 0) THEN
RAISE EXCEPTION 'the stock % expected does not exist',_o.oid USING ERRCODE='YU002';
END IF;
IF( _qttmin != _qttmax ) THEN
RAISE EXCEPTION 'the value of stock % is not the same value for all orders',_o.oid USING ERRCODE='YU002';
END IF;
_cntExhausted := 0;
_mvtexhausted := false;
IF( _qttmin < _flowr ) THEN
RAISE EXCEPTION 'the stock % is smaller than the flow (% < %)',_o.oid,_qttmin,_flowr USING ERRCODE='YU002';
ELSIF (_qttmin = _flowr) THEN
_cntExhausted := _cnt;
_exhausted := true;
_mvtexhausted := true;
END IF;
-- update all stocks of the order book
UPDATE torder SET ord.qtt = (ord).qtt - _flowr ,updated = statement_timestamp()
WHERE (ord).oid = _o.oid;
GET DIAGNOSTICS _cnt = ROW_COUNT;
IF(_cnt = 0) THEN
RAISE EXCEPTION 'no orders with the stock % exist',_o.oid USING ERRCODE='YU002';
END IF;
SELECT * INTO _or FROM torder WHERE (ord).id = _o.id LIMIT 1; -- child order
-- RAISE WARNING 'ici %',_or.ord;
_om_exp := (((_or.ord).qtt_prov)::double precision) / (((_or.ord).qtt_requ)::double precision);
_om_rea := ((_flowr)::double precision) / ((_mat[_prev_i][7])::double precision);
/*
SELECT name INTO STRICT _ownnext FROM towner WHERE id=_idownnext;
SELECT name INTO STRICT _own FROM towner WHERE id=_o.own;
SELECT usr INTO STRICT _usrnext FROM torder WHERE (ord).id=_pidnext;
INSERT INTO tmvt (nbc,nbt,grp,
xid,usr_src,usr_dst,xoid,own_src,own_dst,qtt,nat,ack,
exhausted,order_created,created,om_exp,om_rea)
VALUES(_nbcommit,1,_first_mvt,
_o.id,_or.usr,_usrnext,_o.oid,_own,_ownnext,_flowr,(_or.ord).qua_prov,_cycleack,
_mvtexhausted,_or.created,statement_timestamp(),_om_exp,_om_rea)
RETURNING id INTO _mvt_id;
*/
SELECT nextval('tmvt_id_seq') INTO _mvt_id;
IF(_first_mvt IS NULL) THEN
_first_mvt := _mvt_id;
_resflow.grp := _mvt_id;
-- _resx.first_mvt := _mvt_id;
-- UPDATE tmvt SET grp = _first_mvt WHERE id = _first_mvt;
END IF;
_resflow.mvts := array_append(_resflow.mvts,_mvt_id);
_resflow.qtts := array_append(_resflow.qtts,_flowr);
_resflow.nats := array_append(_resflow.nats,(_or.ord).qua_prov);
_resflow.owns := array_append(_resflow.owns,_or.own::text);
_resflow.usrs := array_append(_resflow.usrs,_or.usr::text);
_resflow.ords := array_append(_resflow.ords,_or.ord);
_prev_i := _i;
_i := _next_i;
------------------------------------------------------------------------
END LOOP;
IF( NOT _exhausted ) THEN
-- some order should be exhausted
RAISE EXCEPTION 'the cycle should exhaust some order'
USING ERRCODE='YA003';
END IF;
RETURN _resflow;
END;
$$ LANGUAGE PLPGSQL;
CREATE TYPE yr_quote AS (
qtt_reci int8,
qtt_give int8
);
--------------------------------------------------------------------------------
-- quote execution at the output of the stack
--------------------------------------------------------------------------------
CREATE FUNCTION fproducequote(_ord yorder,_isquote boolean,_isnoqttlimit boolean,_islimit boolean,_isignoreomega boolean)
/*
_isquote := true;
it can be a quote or a prequote
_isnoqttlimit := false;
when true the quantity provided is not limited by the stock available
_islimit:= (_t.jso->'type')='limit';
type of the quoted order
_isignoreomega := -- (_t.type & 8) = 8
*/
RETURNS json AS $$
DECLARE
_cnt int;
_cyclemax yflow;
_cycle yflow;
_res int8[];
_firstloop boolean := true;
_freezeOmega boolean;
_mid int;
_nbmvts int;
_wid int;
_MAXMVTPERTRANS int := fgetconst('MAXMVTPERTRANS');
_barter text;
_paths text;
_qtt_reci int8 := 0;
_qtt_give int8 := 0;
_qtt_prov int8 := 0;
_qtt_requ int8 := 0;
_qtt int8 := 0;
_resjso json;
BEGIN
_cnt := fcreate_tmp(_ord);
_nbmvts := 0;
_paths := '';
LOOP
SELECT yflow_max(cycle) INTO _cyclemax FROM _tmp WHERE yflow_is_draft(cycle);
IF(NOT yflow_is_draft(_cyclemax)) THEN
EXIT; -- from LOOP
END IF;
_nbmvts := _nbmvts + yflow_dim(_cyclemax);
IF(_nbmvts > _MAXMVTPERTRANS) THEN
EXIT;
END IF;
_res := yflow_qtts(_cyclemax);
_qtt_reci := _qtt_reci + _res[1];
_qtt_give := _qtt_give + _res[2];
IF(_isquote) THEN
IF(_firstloop) THEN
_qtt_requ := _res[3];
_qtt_prov := _res[4];
END IF;
IF(_isnoqttlimit) THEN
_qtt := _qtt + _res[5];
ELSE
IF(_firstloop) THEN
_qtt := _res[5];
END IF;
END IF;
END IF;
-- for a PREQUOTE they remain 0
_freezeOmega := _firstloop AND _isignoreomega AND _isquote;
/* yflow_reduce:
for all orders except for node with NOQTTLIMIT:
qtt = qtt -flowr
for the last order, if is IGNOREOMEGA:
- omega is set:
_cycle[last].qtt_requ,_cycle[last].qtt_prov
:= _cyclemax[last].qtt_requ,_cyclemax[last].qtt_prov
- if _freezeOmega the IGNOREOMEGA is reset
*/
UPDATE _tmp SET cycle = yflow_reduce(cycle,_cyclemax,_freezeOmega);
DELETE FROM _tmp WHERE NOT yflow_is_draft(cycle);
_firstloop := false;
END LOOP;
IF ( (_qtt_requ != 0)
AND _islimit AND _isquote
AND ((_qtt_give::double precision) /(_qtt_reci::double precision)) >
((_qtt_prov::double precision) /(_qtt_requ::double precision))
) THEN
RAISE EXCEPTION 'pq: Omega of the flows obtained is not limited by the order limit' USING ERRCODE='YA003';
END IF;
_resjso := row_to_json(ROW(_qtt_reci,_qtt_give)::yr_quote);
RETURN _resjso;
END;
$$ LANGUAGE PLPGSQL;
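/* Illustrative sketch, not executed at install time: fproducequote serializes
   a yr_quote with row_to_json, so a successful quote yields a JSON object with
   the two accumulated quantities. The figures below are invented.

```sql
-- hypothetical: _o built as in fprocessquote's 'execute' branch
SELECT fproducequote(_o, true, false, true, false);
-- returns JSON of the form {"qtt_reci": ..., "qtt_give": ...}
```
*/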
diff --git a/src/sql/model.sql b/src/sql/model.sql
index cb646b2..fc1e046 100644
--- a/src/sql/model.sql
+++ b/src/sql/model.sql
@@ -1,47 +1,43 @@
\set ECHO none
/* roles.sql must be executed previously */
drop schema if exists market cascade;
create schema market;
set search_path to market;
-- DROP EXTENSION IF EXISTS flowf;
-- CREATE EXTENSION flowf WITH VERSION '0.1';
\i sql/roles.sql
GRANT USAGE ON SCHEMA market TO role_com;
\i sql/util.sql
\i sql/tables.sql
\i sql/prims.sql
\i sql/pushpull.sql
\i sql/currencies.sql
\i sql/algo.sql
\i sql/openclose.sql
create view vord as (SELECT
(ord).id,
(ord).oid,
own,
-
-
-
-
-
(ord).qtt_requ,
(ord).qua_requ,
CASE WHEN (ord).type=1 THEN 'limit' ELSE 'best' END typ,
(ord).qtt_prov,
(ord).qua_prov,
(ord).qtt,
(ord).own as own_id,
usr
-- duration
FROM market.torder order by (ord).id asc);
+select fsetvar('INSTALLED',1);
select * from fversion();
\echo model installed.
diff --git a/src/sql/openclose.sql b/src/sql/openclose.sql
index f93a0b3..e05ee8a 100644
--- a/src/sql/openclose.sql
+++ b/src/sql/openclose.sql
@@ -1,271 +1,272 @@
/*
manage transitions between daily market phases.
*/
--------------------------------------------------------------------------------
INSERT INTO tvar(name,value) VALUES
('OC_CURRENT_PHASE',101), -- phase of the model when settled
('OC_CURRENT_OPENED',0); -- sub-state of the opened phase
CREATE TABLE tmsgdaysbefore(LIKE tmsg);
SELECT _grant_read('tmsgdaysbefore');
-- index and unique constraint are not cloned
--------------------------------------------------------------------------------
CREATE FUNCTION openclose() RETURNS int AS $$
/*
* This code is executed by bg_worker 1, which does the following:
while(true)
status := market.openclose()
if (status >=0):
do_wait := status
elif status == -100:
VACUUM FULL
do_wait := 0
wait(do_wait) milliseconds
*/
DECLARE
_phase int;
_dowait int := 0; -- not DOWAIT
_cnt int;
_rp yerrorprim;
_stock_id int;
_owner text;
_done boolean;
BEGIN
set search_path to market;
_phase := fgetvar('OC_CURRENT_PHASE');
CASE _phase
/* PHASE 0XX BEGIN OF THE DAY
*/
WHEN 0 THEN
/* creates the timetable */
PERFORM foc_create_timesum();
/* purge tmsg - single transaction */
WITH t AS (DELETE FROM tmsg RETURNING * )
INSERT INTO tmsgdaysbefore SELECT * FROM t ;
TRUNCATE tmsg;
PERFORM setval('tmsg_id_seq',1,false);
PERFORM foc_next(1,'tmsg archived');
WHEN 1 THEN
IF(foc_in_phase(_phase)) THEN
_dowait := 60000; -- 1 minute
ELSE
PERFORM foc_next(101,'Start opening sequence');
END IF;
/* PHASE 1XX -- MARKET OPENED
*/
WHEN 101 THEN
/* open client access */
REVOKE role_co_closed FROM role_client;
GRANT role_co TO role_client;
PERFORM foc_next(102,'Client access opened');
WHEN 102 THEN
/* market is opened to client access:
While in phase,
OC_CURRENT_OPENED <- (OC_CURRENT_OPENED+1) % 5
if 0:
delete outdated order and sub-orders from the book
do_wait = 1 minute
else,
phase <- 120
*/
IF(foc_in_phase(_phase)) THEN
UPDATE tvar SET value=((value+1)%5) WHERE name='OC_CURRENT_OPENED'
RETURNING value INTO _cnt ;
_dowait := 60000; -- 1 minute
IF(_cnt =0) THEN
-- every 5 calls(5 minutes),
-- delete outdated order and sub-orders from the book
DELETE FROM torder o USING torder po
WHERE (o.ord).oid = (po.ord).id
-- outdated parent orders
AND (po.ord).oid = (po.ord).id
AND NOT (po.duration IS NULL)
AND (po.created + po.duration) <= clock_timestamp();
END IF;
ELSE
PERFORM foc_next(120,'Start closing');
END IF;
WHEN 120 THEN
/* market closing
revoke client access
*/
REVOKE role_co FROM role_client;
GRANT role_co_closed TO role_client;
PERFORM foc_next(121,'Client access revoked');
WHEN 121 THEN
/* wait until the stack is completely consumed */
-- waiting worker2 stack purge
_done := fstackdone();
-- SELECT count(*) INTO _cnt FROM tstack;
IF(not _done) THEN
_dowait := 60000; -- 1 minute
-- wait and test again
ELSE
PERFORM foc_next(200,'Last primitives performed');
END IF;
/* PHASE 2XX MARKET CLOSED */
WHEN 200 THEN
/* remove orders of the order book */
SELECT (o.ord).id,w.name INTO _stock_id,_owner FROM torder o
- INNER JOIN town w ON w.id=(o.ord).own
+ INNER JOIN towner w ON w.id=(o.ord).own
WHERE (o.ord).oid = (o.ord).id LIMIT 1;
IF(FOUND) THEN
_rp := fsubmitrmorder(_owner,_stock_id);
-- repeat until the order book is empty
ELSE
PERFORM foc_next(201,'Order book is emptied');
END IF;
WHEN 201 THEN
/* wait until stack is empty */
-- waiting worker2 stack purge
_done := fstackdone();
-- SELECT count(*) INTO _cnt FROM tstack;
IF(not _done) THEN
_dowait := 60000; -- 1 minute
-- wait and test again
ELSE
PERFORM foc_next(202,'rm primitives are processed');
END IF;
WHEN 202 THEN
/* tables truncated except tmsg */
truncate torder;
truncate tstack;
PERFORM setval('tstack_id_seq',1,false);
PERFORM setval('tmvt_id_seq',1,false);
truncate towner;
PERFORM setval('towner_id_seq',1,false);
PERFORM foc_next(203,'tables torder,tstack,towner are truncated');
WHEN 203 THEN
_dowait := -100; -- VACUUM FULL executed by bg_worker 1 (openclose)
PERFORM foc_next(204,'VACUUM FULL is launched');
WHEN 204 THEN
/* wait till the end of the day */
IF(foc_in_phase(_phase)) THEN
_dowait := 60000; -- 1 minute
-- wait and test again
ELSE
PERFORM foc_next(0,'End of the day');
END IF;
+
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
RETURN _dowait; -- DOWAIT or VACUUM FULL
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION openclose() TO role_bo;
-------------------------------------------------------------------------------
CREATE FUNCTION foc_next(_phase int,_msg text) RETURNS void AS $$
BEGIN
PERFORM fsetvar('OC_CURRENT_PHASE',_phase);
RAISE LOG 'MARKET PHASE %: %',_phase,_msg;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION foc_next(int,text) TO role_bo;
--------------------------------------------------------------------------------
/* access by clients can be disabled/enabled with a single command:
REVOKE role_co FROM role_client
GRANT role_co TO role_client
same thing for role batch with role_bo:
REVOKE role_bo FROM user_bo
GRANT role_bo TO user_bo
*/
CREATE VIEW vmsg3 AS WITH t AS (SELECT * from tmsg WHERE usr = session_user
UNION ALL SELECT * from tmsgdaysbefore WHERE usr = session_user
) SELECT created,id,typ,jso
from t order by created ASC,id ASC;
SELECT _grant_read('vmsg3');
/*------------------------------------------------------------------------------
TIME-DEPENDENT FUNCTIONS
------------------------------------------------------------------------------*/
/* the day is divided into NB_PHASE phases, with id in [0,NB_PHASE-1];
delays are defined for phases [0,NB_PHASE-2], the last phase waits until the end of the day.
OC_DELAY_i is the number of seconds of phase i, for i in [0,NB_PHASE-2]
*/
INSERT INTO tconst (name,value) VALUES
('OC_DELAY_0',30*60), -- stops at 0h 30'
('OC_DELAY_1',23*60*60) -- stops at 23h 30'
-- sum of delays < 24*60*60
;
CREATE FUNCTION foc_create_timesum() RETURNS void AS $$
DECLARE
_cnt int;
BEGIN
DROP TABLE IF EXISTS timesum;
SELECT count(*) INTO STRICT _cnt FROM tconst WHERE name like 'OC_DELAY_%';
CREATE TABLE timesum (id,ends) AS
SELECT t.id+1,sum(d.value) OVER w FROM generate_series(0,_cnt-1) t(id)
LEFT JOIN tconst d ON (('OC_DELAY_' ||(t.id)::text) = d.name)
WINDOW w AS (order by t.id );
INSERT INTO timesum VALUES (0,0);
END;
$$ LANGUAGE PLPGSQL;
GRANT EXECUTE ON FUNCTION foc_create_timesum() TO role_bo;
select foc_create_timesum();
--------------------------------------------------------------------------------
CREATE FUNCTION foc_in_phase(_phase int) RETURNS boolean AS $$
-- returns TRUE when the current time falls within the given phase, FALSE otherwise
DECLARE
_actual_gphase int := _phase /100;
_planned_gphase int;
BEGIN
-- the number of seconds since the beginning of the day
-- in the interval (timesum[id],timesum[id+1])
SELECT max(id) INTO _planned_gphase FROM
timesum where ends < (EXTRACT(HOUR FROM now()) *60*60)
+ (EXTRACT(MINUTE FROM now()) *60)
+ EXTRACT(SECOND FROM now()) ;
IF (_planned_gphase = _actual_gphase) THEN
RETURN true;
ELSE
RETURN false;
END IF;
END;
$$ LANGUAGE PLPGSQL;
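/* Worked example, derived from the default OC_DELAY_* values above:
   with OC_DELAY_0 = 1800 and OC_DELAY_1 = 82800, foc_create_timesum() builds
   the cumulative table used by foc_in_phase().

```sql
SELECT * FROM timesum ORDER BY id;
--  id | ends
-- ----+-------
--   0 |     0
--   1 |  1800  -- 0h30
--   2 | 84600  -- 23h30
-- foc_in_phase(_phase) compares _phase/100 with max(id) where ends < the
-- number of seconds since midnight: at 00:10 the planned gphase is 0,
-- at 12:00 it is 1, at 23:45 it is 2.
```
*/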
diff --git a/src/sql/prims.sql b/src/sql/prims.sql
index df6d958..f9d2e36 100644
--- a/src/sql/prims.sql
+++ b/src/sql/prims.sql
@@ -1,485 +1,491 @@
--------------------------------------------------------------------------------
-- check params
-- code in [-9,0]
--------------------------------------------------------------------------------
CREATE FUNCTION
fcheckquaown(_r yj_error,_own dtext,_qua_requ dtext,_pos_requ point,_qua_prov dtext,_pos_prov point,_dist float8)
RETURNS yj_error AS $$
DECLARE
_i int;
BEGIN
IF(NOT ((yflow_checktxt(_own)&1)=1)) THEN
_r.reason := '_own is empty string';
_r.code := -1;
RETURN _r;
END IF;
+
IF(_qua_prov IS NULL) THEN
- --IF (NOT yflow_quacheck(_qua_requ,1)) THEN
IF(NOT ((yflow_checktxt(_qua_requ)&1)=1)) THEN
_r.reason := '_qua_requ is empty string';
_r.code := -2;
RETURN _r;
END IF;
ELSE
+ IF(_qua_prov = _qua_requ) THEN
+ _r.reason := 'qua_prov == qua_requ';
+ _r.code := -3;
+ return _r;
+ END IF;
_i = yflow_checkquaownpos(_own,_qua_requ,_pos_requ,_qua_prov,_pos_prov,_dist);
IF (_i != 0) THEN
_r.reason := 'rejected by yflow_checkquaownpos';
_r.code := _i; -- -9<=i<=-5
return _r;
END IF;
END IF;
+
RETURN _r;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
CREATE FUNCTION fcheckquaprovusr(_r yj_error,_qua_prov dtext,_usr dtext) RETURNS yj_error AS $$
DECLARE
_QUAPROVUSR boolean := fgetconst('QUAPROVUSR')=1;
_p int;
_suffix text;
BEGIN
IF (NOT _QUAPROVUSR) THEN
RETURN _r;
END IF;
_p := position('@' IN _qua_prov);
IF (_p = 0) THEN
-- without prefix, it should be a currency
SELECT count(*) INTO _p FROM tcurrency WHERE _qua_prov = name;
IF (_p = 1) THEN
RETURN _r;
ELSE
_r.code := -12;
_r.reason := 'a quality provided that is not a currency must be prefixed';
RETURN _r;
END IF;
END IF;
-- with prefix
IF (char_length(substring(_qua_prov FROM 1 FOR (_p-1))) <1) THEN
_r.code := -13;
_r.reason := 'the prefix of the quality provided cannot be empty';
RETURN _r;
END IF;
_suffix := substring(_qua_prov FROM (_p+1));
_suffix := replace(_suffix,'.','_'); -- change . to _
-- it must be the username
IF ( _suffix!= _usr) THEN
_r.code := -14;
_r.reason := 'the prefix of the quality provided must be the user name';
RETURN _r;
END IF;
RETURN _r;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
CREATE FUNCTION fchecknameowner(_r yj_error,_name dtext,_usr dtext) RETURNS yj_error AS $$
DECLARE
_p int;
_OWNUSR boolean := fgetconst('OWNUSR')=1;
_suffix text;
BEGIN
IF (NOT _OWNUSR) THEN
RETURN _r;
END IF;
_p := position('@' IN _name);
IF (_p <= 1) THEN -- no '@' in the name, or an empty prefix
_r.code := -20;
_r.reason := 'the owner name has an empty prefix';
RETURN _r;
END IF;
_suffix := substring(_name FROM (_p+1));
SELECT count(*) INTO _p FROM townauth WHERE _suffix = name;
IF (_p = 1) THEN
RETURN _r; -- well-known auth provider
END IF;
-- change . to _
_suffix := replace(_suffix,'.','_');
IF ( _suffix= _usr) THEN
RETURN _r; -- owner name suffixed by the user name
END IF;
_r.code := -21;
_r.reason := 'if the owner name is not prefixed by a well-known provider, it must be prefixed by the user name';
RETURN _r;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
-- order primitive
--------------------------------------------------------------------------------
CREATE TYPE yp_order AS (
kind eprimitivetype,
type eordertype,
owner dtext,
qua_requ dtext,
qtt_requ dqtt,
qua_prov dtext,
qtt_prov dqtt
);
CREATE FUNCTION fsubmitorder(_type eordertype,_owner dtext,_qua_requ dtext,_qtt_requ dqtt,_qua_prov dtext,_qtt_prov dqtt)
RETURNS yerrorprim AS $$
DECLARE
_res yj_primitive;
_prim yp_order;
BEGIN
_prim := ROW('order',_type,_owner,_qua_requ,_qtt_requ,_qua_prov,_qtt_prov)::yp_order;
_res := fprocessorder('submit',NULL,_prim);
RETURN ROW(_res.id,_res.error)::yerrorprim;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION fsubmitorder(eordertype,dtext,dtext,dqtt,dtext,dqtt) TO role_co;
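/* Usage sketch; the owner name, qualities and quantities below are invented.
   A client session holding role_co submits a parent order; the primitive is
   only pushed on the stack here and executed later through fprocessprimitive.

```sql
SELECT * FROM market.fsubmitorder('limit', 'alice', 'eur', 10, 'widget', 5);
-- returns a yerrorprim (id, error); error.code is 0 when the submit-phase
-- checks (fchecknameowner, fcheckquaprovusr) passed
```
*/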
--------------------------------------------------------------------------------
CREATE FUNCTION fprocessorder(_phase eprimphase, _t tstack,_s yp_order)
RETURNS yj_primitive AS $$
DECLARE
_r yj_error;
_res yj_primitive;
_wid int;
_ir int;
_o yorder;
BEGIN
_r := ROW(0,NULL)::yj_error; -- code,reason
CASE
WHEN (_phase = 'submit') THEN
_r := fchecknameowner(_r,_s.owner,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_r := fcheckquaprovusr(_r,_s.qua_prov,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_res := fpushprimitive(_r,'order',row_to_json(_s));
RETURN _res;
WHEN (_phase = 'execute') THEN
/*
IF(
(_s.duration IS NOT NULL) AND (_s.submitted + _s.duration) < clock_timestamp()
) THEN
_r.reason := 'barter order - the order is too old';
_r.code := -19;
END IF; */
_wid := fgetowner(_s.owner);
_o := ROW(CASE WHEN _s.type='limit' THEN 1 ELSE 2 END,
_t.id,_wid,_t.id,
_s.qtt_requ,_s.qua_requ,_s.qtt_prov,_s.qua_prov,_s.qtt_prov,
box('(0,0)'::point,'(0,0)'::point),box('(0,0)'::point,'(0,0)'::point),
0.0,earth_get_square('(0,0)'::point,0.0)
)::yorder;
_ir := insertorder(_s.owner,_o,_t.usr,_t.submitted,'1 day');
RETURN ROW(_t.id,NULL,_t.jso,
row_to_json(ROW(_o.id,_o.qtt,_o.qua_prov,_s.owner,_t.usr)::yj_stock),
NULL
)::yj_primitive;
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
-- child order primitive
--------------------------------------------------------------------------------
CREATE TYPE yp_childorder AS (
kind eprimitivetype,
owner dtext,
qua_requ dtext,
qtt_requ dqtt,
stock_id int
);
CREATE FUNCTION fsubmitchildorder(_owner dtext,_qua_requ dtext,_qtt_requ dqtt,_stock_id int)
RETURNS yerrorprim AS $$
DECLARE
_res yj_primitive;
_prim yp_childorder;
BEGIN
_prim := ROW('childorder',_owner,_qua_requ,_qtt_requ,_stock_id)::yp_childorder;
_res := fprocesschildorder('submit',NULL,_prim);
RETURN ROW(_res.id,_res.error)::yerrorprim;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION fsubmitchildorder(dtext,dtext,dqtt,int) TO role_co;
--------------------------------------------------------------------------------
CREATE FUNCTION fprocesschildorder(_phase eprimphase, _t tstack,_s yp_childorder)
RETURNS yj_primitive AS $$
DECLARE
_r yj_error;
_res yj_primitive;
_wid int;
_otype int;
_ir int;
_o yorder;
_op torder%rowtype;
_sp tstack%rowtype;
BEGIN
_r := ROW(0,NULL)::yj_error; -- code,reason
CASE
WHEN (_phase = 'submit') THEN
_r := fchecknameowner(_r,_s.owner,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_r := fcheckquaprovusr(_r,_s.qua_requ,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_wid := fgetowner(_s.owner);
SELECT * INTO _op FROM torder WHERE
(ord).id = _s.stock_id AND usr = session_user AND (ord).own = _wid;
IF (NOT FOUND) THEN
/* could be found in the stack */
SELECT * INTO _sp FROM tstack WHERE
id = _s.stock_id AND usr = session_user AND _s.owner = jso->>'owner' AND kind='order';
IF (NOT FOUND) THEN
_r.code := -200;
_r.reason := 'the order was not found for this user and owner';
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
END IF;
_res := fpushprimitive(_r,'childorder',row_to_json(_s));
RETURN _res;
WHEN (_phase = 'execute') THEN
_wid := fgetowner(_s.owner);
SELECT * INTO _op FROM torder WHERE
(ord).id = _s.stock_id AND usr = _t.usr AND (ord).own = _wid;
IF (NOT FOUND) THEN
_r.code := -201;
_r.reason := 'the stock is not in the order book';
RETURN ROW(_t.id,_r,_t.jso,NULL,NULL)::yj_primitive;
END IF;
_o := _op.ord;
_o.id := _t.id;
_o.qua_requ := _s.qua_requ;
_o.qtt_requ := _s.qtt_requ;
_ir := insertorder(_s.owner,_o,_t.usr,_t.submitted,_op.duration);
RETURN ROW(_t.id,NULL,_t.jso,NULL,NULL)::yj_primitive;
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
-- rm primitive
--------------------------------------------------------------------------------
CREATE TYPE yp_rmorder AS (
kind eprimitivetype,
owner dtext,
stock_id int
);
CREATE FUNCTION fsubmitrmorder(_owner dtext,_stock_id int)
RETURNS yerrorprim AS $$
DECLARE
_res yj_primitive;
_prim yp_rmorder;
BEGIN
_prim := ROW('rmorder',_owner,_stock_id)::yp_rmorder;
_res := fprocessrmorder('submit',NULL,_prim);
RETURN ROW(_res.id,_res.error)::yerrorprim;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION fsubmitrmorder(dtext,int) TO role_co,role_bo;
--------------------------------------------------------------------------------
CREATE FUNCTION fprocessrmorder(_phase eprimphase, _t tstack,_s yp_rmorder)
RETURNS yj_primitive AS $$
DECLARE
_r yj_error;
_res yj_primitive;
_wid int;
_otype int;
_ir int;
_o yorder;
_opy yorder; -- parent_order
_op torder%rowtype;
_te text;
_pusr text;
_sp tstack%rowtype;
BEGIN
_r := ROW(0,NULL)::yj_error; -- code,reason
CASE
WHEN (_phase = 'submit') THEN
_r := fchecknameowner(_r,_s.owner,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_wid := fgetowner(_s.owner);
SELECT * INTO _op FROM torder WHERE
(ord).id = _s.stock_id AND usr = session_user AND (ord).own = _wid AND (ord).id=(ord).oid;
IF (NOT FOUND) THEN
/* could be found in the stack */
SELECT * INTO _sp FROM tstack WHERE
id = _s.stock_id AND usr = session_user AND _s.owner = jso->>'owner' AND kind='order';
IF (NOT FOUND) THEN
_r.code := -300;
_r.reason := 'the order was not found for this user and owner';
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
END IF;
_res := fpushprimitive(_r,'rmorder',row_to_json(_s));
RETURN _res;
WHEN (_phase = 'execute') THEN
_wid := fgetowner(_s.owner);
SELECT * INTO _op FROM torder WHERE
(ord).id = _s.stock_id AND usr = _t.usr AND (ord).own = _wid AND (ord).id=(ord).oid;
IF (NOT FOUND) THEN
_r.code := -301;
_r.reason := 'the stock is not in the order book';
RETURN ROW(_t.id,_r,_t.jso,NULL,NULL)::yj_primitive;
END IF;
-- delete order and sub-orders from the book
DELETE FROM torder o WHERE (o.ord).oid = (_op.ord).oid;
-- id,error,primitive,result
RETURN ROW(_t.id,NULL,_t.jso,
ROW((_op.ord).id,(_op.ord).qtt,(_op.ord).qua_prov,_s.owner,_op.usr)::yj_stock,
ROW((_op.ord).qua_prov,(_op.ord).qtt)::yj_value
)::yj_primitive;
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
-- quote
--------------------------------------------------------------------------------
CREATE FUNCTION fsubmitquote(_type eordertype,_owner dtext,_qua_requ dtext,_qtt_requ dqtt,_qua_prov dtext,_qtt_prov dqtt)
RETURNS yerrorprim AS $$
DECLARE
_res yj_primitive;
_prim yp_order;
BEGIN
_prim := ROW('quote',_type,_owner,_qua_requ,_qtt_requ,_qua_prov,_qtt_prov)::yp_order;
_res := fprocessquote('submit',NULL,_prim);
RETURN ROW(_res.id,_res.error)::yerrorprim;
END;
$$ LANGUAGE PLPGSQL SECURITY DEFINER set search_path = market,public;
GRANT EXECUTE ON FUNCTION fsubmitquote(eordertype,dtext,dtext,dqtt,dtext,dqtt) TO role_co;
--------------------------------------------------------------------------------
CREATE FUNCTION fprocessquote(_phase eprimphase, _t tstack,_s yp_order)
RETURNS yj_primitive AS $$
DECLARE
_r yj_error;
_res yj_primitive;
_wid int;
_ir int;
_o yorder;
_type int;
_json_res json;
BEGIN
_r := ROW(0,NULL)::yj_error; -- code,reason
CASE
WHEN (_phase = 'submit') THEN
_r := fchecknameowner(_r,_s.owner,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_r := fcheckquaprovusr(_r,_s.qua_prov,session_user);
IF (_r.code!=0) THEN
RETURN ROW(NULL,_r,NULL,NULL,NULL)::yj_primitive;
END IF;
_res := fpushprimitive(_r,'quote',row_to_json(_s));
RETURN _res;
WHEN (_phase = 'execute') THEN
_wid := fgetowner(_s.owner);
_type := CASE WHEN _s.type='limit' THEN 1 ELSE 2 END;
_o := ROW( _type,
- _s.id,_wid,_s.id,
+ _t.id,_wid,_t.id,
_s.qtt_requ,_s.qua_requ,_s.qtt_prov,_s.qua_prov,_s.qtt_prov,
box('(0,0)'::point,'(0,0)'::point),box('(0,0)'::point,'(0,0)'::point),
0.0,earth_get_square('(0,0)'::point,0.0)
)::yorder;
/*fproducequote(_ord yorder,_isquote boolean,_isnoqttlimit boolean,_islimit boolean,_isignoreomega boolean)
*/
_json_res := fproducequote(_o,true,false,_s.type='limit',false);
RETURN ROW(_t.id,NULL,_t.jso,_json_res,NULL)::yj_primitive;
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
-- primitive processing
--------------------------------------------------------------------------------
CREATE FUNCTION fprocessprimitive(_phase eprimphase, _s tstack)
RETURNS yj_primitive AS $$
DECLARE
_res yj_primitive;
_kind eprimitivetype;
BEGIN
_kind := _s.kind;
CASE
WHEN (_kind = 'order' ) THEN
_res := fprocessorder(_phase,_s,json_populate_record(NULL::yp_order,_s.jso));
WHEN (_kind = 'childorder' ) THEN
_res := fprocesschildorder(_phase,_s,json_populate_record(NULL::yp_childorder,_s.jso));
WHEN (_kind = 'rmorder' ) THEN
_res := fprocessrmorder(_phase,_s,json_populate_record(NULL::yp_rmorder,_s.jso));
WHEN (_kind = 'quote' ) THEN
_res := fprocessquote(_phase,_s,json_populate_record(NULL::yp_order,_s.jso));
ELSE
RAISE EXCEPTION 'Should not reach this point';
END CASE;
RETURN _res;
END;
$$ LANGUAGE PLPGSQL;
diff --git a/src/sql/tables.sql b/src/sql/tables.sql
index 574ae9f..28f744d 100644
--- a/src/sql/tables.sql
+++ b/src/sql/tables.sql
@@ -1,351 +1,353 @@
--------------------------------------------------------------------------------
ALTER DEFAULT PRIVILEGES REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC;
ALTER DEFAULT PRIVILEGES REVOKE ALL ON TABLES FROM PUBLIC;
ALTER DEFAULT PRIVILEGES REVOKE ALL ON SEQUENCES FROM PUBLIC;
ALTER DEFAULT PRIVILEGES REVOKE ALL ON TYPES FROM PUBLIC;
--------------------------------------------------------------------------------
create domain dqtt AS int8 check( VALUE>0);
create domain dtext AS text check( char_length(VALUE)>0);
--------------------------------------------------------------------------------
-- main constants of the model
--------------------------------------------------------------------------------
create table tconst(
name dtext UNIQUE not NULL,
value int,
PRIMARY KEY (name)
);
GRANT SELECT ON tconst TO role_com;
--------------------------------------------------------------------------------
/* for booleans, 0 == false and !=0 == true
*/
INSERT INTO tconst (name,value) VALUES
('MAXCYCLE',64), -- must be less than yflow_get_maxdim()
('MAXPATHFETCHED',1024),-- maximum number of paths fetched during the graph exploration
('MAXMVTPERTRANS',128), -- maximum number of movements per transaction
-- if this limit is reached, next cycles are not performed but all others
-- are included in the current transaction
('VERSION-X',2),('VERSION-Y',0),('VERSION-Z',2),
('OWNERINSERT',1), -- boolean when true, owner inserted when not found
('QUAPROVUSR',0), -- boolean when true, the quality provided by a barter is suffixed by user name
-- 1 prod
('OWNUSR',0), -- boolean when true, the owner is suffixed by user name
-- 1 prod
('DEBUG',1);
+
--------------------------------------------------------------------------------
create table tvar(
name dtext UNIQUE not NULL,
value int,
PRIMARY KEY (name)
);
+INSERT INTO tvar (name,value) VALUES ('INSTALLED',0);
--------------------------------------------------------------------------------
-- TOWNER
--------------------------------------------------------------------------------
create table towner (
id serial UNIQUE not NULL,
name dtext UNIQUE not NULL,
PRIMARY KEY (id)
);
comment on table towner is 'owners of values exchanged';
alter sequence towner_id_seq owned by towner.id;
create index towner_name_idx on towner(name);
SELECT _reference_time('towner');
SELECT _grant_read('towner');
--------------------------------------------------------------------------------
-- ORDER BOOK
--------------------------------------------------------------------------------
-- type = type_flow | type_primitive <<8 | type_mode <<16
create domain dtypeorder AS int check(VALUE >=0 AND VALUE < 16777215); --((1<<24)-1)
-- type_flow &3 1 order limit,2 order best
-- type_flow &12 bit set for c calculations
-- 4 no qttlimit
-- 8 ignoreomega
-- yorder.type is a type_flow = type & 255
-- type_primitive
-- 1 order
-- 2 rmorder
-- 3 quote
-- 4 prequote
CREATE TYPE eordertype AS ENUM ('best','limit');
CREATE TYPE eprimitivetype AS ENUM ('order','childorder','rmorder','quote','prequote');
create table torder (
usr dtext,
own dtext,
ord yorder, --defined by the extension flowf
created timestamp not NULL,
updated timestamp,
duration interval
);
comment on table torder is 'Order book';
comment on column torder.usr is 'user that inserted the order ';
comment on column torder.ord is 'the order';
comment on column torder.created is 'time when the order was put on the stack';
comment on column torder.updated is 'time when the (quantity) of the order was updated by the order book';
comment on column torder.duration is 'the life time of the order';
SELECT _grant_read('torder');
create index torder_qua_prov_idx on torder(((ord).qua_prov)); -- using gin(((ord).qua_prov) text_ops);
create index torder_id_idx on torder(((ord).id));
create index torder_oid_idx on torder(((ord).oid));
-- id,type,own,oid,qtt_requ,qua_requ,qtt_prov,qua_prov,qtt
create view vorder as
select (o.ord).id as id,(o.ord).type as type,w.name as own,(o.ord).oid as oid,
(o.ord).qtt_requ as qtt_requ,(o.ord).qua_requ as qua_requ,
(o.ord).qtt_prov as qtt_prov,(o.ord).qua_prov as qua_prov,
(o.ord).qtt as qtt, o.created as created, o.updated as updated
from torder o left join towner w on ((o.ord).own=w.id) where o.usr=session_user;
SELECT _grant_read('vorder');
-- without dates, and without filtering on usr
create view vorder2 as
select (o.ord).id as id,(o.ord).type as type,w.name as own,(o.ord).oid as oid,
(o.ord).qtt_requ as qtt_requ,(o.ord).qua_requ as qua_requ,
(o.ord).qtt_prov as qtt_prov,(o.ord).qua_prov as qua_prov,
(o.ord).qtt as qtt
from torder o left join towner w on ((o.ord).own=w.id);
SELECT _grant_read('vorder2');
-- only parent for all users
create view vbarter as
select (o.ord).id as id,(o.ord).type as type,o.usr as user,w.name as own,
(o.ord).qtt_requ as qtt_requ,(o.ord).qua_requ as qua_requ,
(o.ord).qtt_prov as qtt_prov,(o.ord).qua_prov as qua_prov,
(o.ord).qtt as qtt, o.created as created, o.updated as updated
from torder o left join towner w on ((o.ord).own=w.id) where (o.ord).oid=(o.ord).id;
SELECT _grant_read('vbarter');
-- parent and child orders for all users, used with vmvto
create view vordero as
select id,
(case when (type & 3=1) then 'limit' else 'best' end)::eordertype as type,
own as owner,
case when id=oid then (qtt::text || ' ' || qua_prov) else '' end as stock,
'(' || qtt_prov::text || '/' || qtt_requ::text || ') ' ||
qua_prov || ' / ' || qua_requ as expected_ω,
case when id=oid then '' else oid::text end as oid
from vorder order by id asc;
GRANT SELECT ON vordero TO role_com;
comment on view vordero is 'order book for all users, to be used with vmvto';
comment on column vordero.id is 'the id of the order';
comment on column vordero.owner is 'the owner';
comment on column vordero.stock is 'for a parent order the stock offered by the owner';
comment on column vordero.expected_ω is 'the ω of the order';
comment on column vordero.oid is 'for a child-order, the id of the parent-order';
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
CREATE FUNCTION fgetowner(_name text) RETURNS int AS $$
DECLARE
_wid int;
_OWNERINSERT boolean := fgetconst('OWNERINSERT')=1;
BEGIN
LOOP
SELECT id INTO _wid FROM towner WHERE name=_name;
IF found THEN
return _wid;
END IF;
IF (NOT _OWNERINSERT) THEN
RAISE EXCEPTION 'The owner does not exist' USING ERRCODE='YU001';
END IF;
BEGIN
INSERT INTO towner (name) VALUES (_name) RETURNING id INTO _wid;
-- RAISE NOTICE 'owner % created',_name;
return _wid;
EXCEPTION WHEN unique_violation THEN
NULL;--
END;
END LOOP;
END;
$$ LANGUAGE PLPGSQL;
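/* Design note: the LOOP above is a concurrency-safe get-or-create. If two
   sessions race to insert the same owner, the loser catches unique_violation
   and retries the SELECT, so both end up with the same id.

```sql
-- hypothetical concurrent sessions
SELECT fgetowner('alice');  -- first caller inserts and returns the new id
SELECT fgetowner('alice');  -- second caller finds it and returns the same id
```
*/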
--------------------------------------------------------------------------------
-- TMVT
-- id,nbc,nbt,grp,xid,usr_src,usr_dst,xoid,own_src,own_dst,qtt,nat,ack,exhausted,order_created,created
--------------------------------------------------------------------------------
/*
create table tmvt (
id serial UNIQUE not NULL,
nbc int default NULL,
nbt int default NULL,
grp int default NULL,
xid int default NULL,
usr_src text default NULL,
usr_dst text default NULL,
xoid int default NULL,
own_src text default NULL,
own_dst text default NULL,
qtt int8 default NULL,
nat text default NULL,
ack boolean default NULL,
cack boolean default NULL,
exhausted boolean default NULL,
order_created timestamp default NULL,
created timestamp default NULL,
om_exp double precision default NULL,
om_rea double precision default NULL,
CONSTRAINT ctmvt_grp FOREIGN KEY (grp) references tmvt(id) ON UPDATE CASCADE
);
GRANT SELECT ON tmvt TO role_com;
comment on table tmvt is 'Records ownership changes';
comment on column tmvt.nbc is 'number of movements of the exchange cycle';
comment on column tmvt.nbt is 'number of movements of the transaction containing several exchange cycles';
comment on column tmvt.grp is 'references the first movement of the exchange';
comment on column tmvt.xid is 'references the order.id';
comment on column tmvt.usr_src is 'usr provider';
comment on column tmvt.usr_dst is 'usr receiver';
comment on column tmvt.xoid is 'references the order.oid';
comment on column tmvt.own_src is 'owner provider';
comment on column tmvt.own_dst is 'owner receiver';
comment on column tmvt.qtt is 'quantity of the value moved';
comment on column tmvt.nat is 'quality of the value moved';
comment on column tmvt.ack is 'set when movement has been acknowledged';
comment on column tmvt.cack is 'set when the cycle has been acknowledged';
comment on column tmvt.exhausted is 'set when the movement exhausted the order providing the value';
comment on column tmvt.om_exp is 'ω expected by the order';
comment on column tmvt.om_rea is 'real ω of the movement';
alter sequence tmvt_id_seq owned by tmvt.id;
GRANT SELECT ON tmvt_id_seq TO role_com;
create index tmvt_grp_idx on tmvt(grp);
create index tmvt_nat_idx on tmvt(nat);
create index tmvt_own_src_idx on tmvt(own_src);
create index tmvt_own_dst_idx on tmvt(own_dst);
CREATE VIEW vmvt AS select * from tmvt;
GRANT SELECT ON vmvt TO role_com;
CREATE VIEW vmvt_tu AS select id,nbc,grp,xid,xoid,own_src,own_dst,qtt,nat,ack,cack,exhausted from tmvt;
GRANT SELECT ON vmvt_tu TO role_com;
create view vmvto as
select id,grp,
usr_src as from_usr,
own_src as from_own,
qtt::text || ' ' || nat as value,
usr_dst as to_usr,
own_dst as to_own,
	to_char(om_exp, 'FM999.9999990') as expected_ω,
	to_char(om_rea, 'FM999.9999990') as actual_ω,
ack
from tmvt where cack is NULL order by id asc;
GRANT SELECT ON vmvto TO role_com;
*/
CREATE SEQUENCE tmvt_id_seq;
--------------------------------------------------------------------------------
-- STACK id,usr,kind,jso,submitted
--------------------------------------------------------------------------------
create table tstack (
id serial UNIQUE not NULL,
usr dtext,
kind eprimitivetype,
jso json, -- representation of the primitive
submitted timestamp not NULL,
PRIMARY KEY (id)
);
comment on table tstack is 'Records the stack of primitives';
comment on column tstack.id is 'id of this primitive';
comment on column tstack.usr is 'user submitting the primitive';
comment on column tstack.kind is 'type of primitive';
comment on column tstack.jso is 'primitive payload';
comment on column tstack.submitted is 'timestamp when the primitive was successfully submitted';
alter sequence tstack_id_seq owned by tstack.id;
GRANT SELECT ON tstack TO role_com;
SELECT fifo_init('tstack');
GRANT SELECT ON tstack_id_seq TO role_com;
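`tstack` queues submitted primitives, and `fifo_init` (a project-specific helper) arranges for them to be consumed in submission order. A toy Python sketch of that queue discipline, assuming illustrative names (`submit`, `pop_next`, the `"order"` kind) — only the column names `id`, `usr`, `kind`, `jso` come from the table above:

```python
import json
from collections import deque
from itertools import count

# Toy model of tstack: primitives are appended in submission order
# and consumed first-in-first-out.

_ids = count(1)
stack = deque()

def submit(usr, kind, payload):
    """INSERT INTO tstack (usr, kind, jso, submitted) VALUES (...)"""
    row = {"id": next(_ids), "usr": usr, "kind": kind,
           "jso": json.dumps(payload)}
    stack.append(row)
    return row["id"]

def pop_next():
    """The oldest pending primitive is executed first (FIFO)."""
    return stack.popleft()

submit("test_clienta", "order", {"qua_prov": "b", "qtt_prov": 10})
submit("test_clientb", "order", {"qua_prov": "a", "qtt_prov": 20})
first = pop_next()  # the primitive submitted by test_clienta
```

This ordering is what makes the numbered `Primitive id:N ... OK` lines in the test fixtures deterministic.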
--------------------------------------------------------------------------------
CREATE TYPE eprimphase AS ENUM ('submit', 'execute');
--------------------------------------------------------------------------------
-- MSG
--------------------------------------------------------------------------------
CREATE TYPE emsgtype AS ENUM ('response', 'exchange');
create table tmsg (
id serial UNIQUE not NULL,
	usr dtext default NULL, -- the user receiving this message
typ emsgtype not NULL,
jso json default NULL,
created timestamp not NULL
);
alter sequence tmsg_id_seq owned by tmsg.id;
SELECT _grant_read('tmsg');
SELECT _grant_read('tmsg_id_seq');
SELECT fifo_init('tmsg');
CREATE VIEW vmsg AS select * from tmsg WHERE usr = session_user;
SELECT _grant_read('vmsg');
--------------------------------------------------------------------------------
CREATE TYPE yj_error AS (
code int,
reason text
);
CREATE TYPE yerrorprim AS (
id int,
error yj_error
);
CREATE TYPE yj_value AS (
qtt int8,
nat text
);
CREATE TYPE yj_stock AS (
id int,
qtt int8,
nat text,
own text,
usr text
);
CREATE TYPE yj_ω AS (
id int,
qtt_prov int8,
qtt_requ int8,
type eordertype
);
CREATE TYPE yj_mvt AS (
id int,
cycle int,
	orde yj_ω,
stock yj_stock,
mvt_from yj_stock,
mvt_to yj_stock
);
CREATE TYPE yj_order AS (
id int,
error yj_error
);
CREATE TYPE yj_primitive AS (
id int,
error yj_error,
primitive json,
result json,
value json
);
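The composite types above mirror the JSON payloads stored in `tmsg.jso`. As a sketch, here is how a `response` payload can be rendered, following the `yj_error` shape (`code`, `reason`); the sample JSON strings are illustrative, not real messages:

```python
import json

def decode_response(jso):
    """Render a tmsg 'response' payload; error follows yj_error (code, reason)."""
    msg = json.loads(jso)
    err = msg["error"]
    if err["code"] is None:
        # a NULL code means the primitive was accepted
        return "Primitive id:%i: OK" % msg["id"]
    return "Primitive id:%i: ERROR(%i,%s)" % (msg["id"], err["code"], err["reason"])

ok = decode_response('{"id": 1, "error": {"code": null, "reason": null}}')
bad = decode_response('{"id": 2, "error": {"code": -1, "reason": "rejected"}}')
```

The test harness below (`dump_tmsg`) formats these same payloads when dumping `tmsg` into the `.res` fixtures.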
diff --git a/src/test/expected/tu_1.res b/src/test/expected/tu_1.res
index ebf9dae..f6b39be 100644
--- a/src/test/expected/tu_1.res
+++ b/src/test/expected/tu_1.res
@@ -1,49 +1,50 @@
-- Bilateral exchange between owners with distinct users (limit)
---------------------------------------------------------
--variables
--USER:admin
select * from market.tvar order by name;
name value
+---------+---------
+ INSTALLED 1
OC_CURRENT_OPENED 0
OC_CURRENT_PHASE 102
STACK_EXECUTED 0
STACK_TOP 0
---------------------------------------------------------
--USER:test_clienta
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',10);
id error
+---------+---------
1 (0,)
--USER:test_clientb
SELECT * FROM market.fsubmitorder('limit','wb','b', 10,'a',20);
id error
+---------+---------
2 (0,)
--Bilateral exchange expected
--------------------------------------------------------------------------------
table: torder
id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
1 1 wa 5 a limit 10 b 0 1test_clienta
2 2 wb 10 b limit 20 a 10 2test_clientb
--------------------------------------------------------------------------------
table: tmsg
--------------------------------------------------------------------------------
1:Primitive id:1 from test_clienta: OK
2:Cycle id:1 Exchange id:2 for wa @test_clienta:
2:mvt_from wa @test_clienta : 10 'b' -> wb @test_clientb
1:mvt_to wa @test_clienta : 10 'a' <- wb @test_clientb
stock id:1 remaining after exchange: 0 'b'
3:Cycle id:1 Exchange id:1 for wb @test_clientb:
1:mvt_from wb @test_clientb : 10 'a' -> wa @test_clienta
2:mvt_to wb @test_clientb : 10 'b' <- wa @test_clienta
stock id:2 remaining after exchange: 10 'a'
4:Primitive id:2 from test_clientb: OK
diff --git a/src/test/expected/tu_2.res b/src/test/expected/tu_2.res
index bacfc9c..357662d 100644
--- a/src/test/expected/tu_2.res
+++ b/src/test/expected/tu_2.res
@@ -1,65 +1,66 @@
-- Bilateral exchange between owners with distinct users (best+limit)
---------------------------------------------------------
--variables
--USER:admin
select * from market.tvar order by name;
name value
+---------+---------
+ INSTALLED 1
OC_CURRENT_OPENED 0
OC_CURRENT_PHASE 102
STACK_EXECUTED 0
STACK_TOP 0
---------------------------------------------------------
--USER:test_clienta
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
SELECT * FROM market.fsubmitorder('best','wa','a', 10,'b',5);
id error
+---------+---------
1 (0,)
--USER:test_clientb
SELECT * FROM market.fsubmitorder('best','wb','b', 20,'a',10);
id error
+---------+---------
2 (0,)
--Bilateral exchange expected
SELECT * FROM market.fsubmitorder('limit','wa','a', 10,'b',5);
id error
+---------+---------
3 (0,)
--USER:test_clientb
SELECT * FROM market.fsubmitorder('best','wb','b', 20,'a',10);
id error
+---------+---------
4 (0,)
--No exchange expected
--------------------------------------------------------------------------------
table: torder
id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
1 1 wa 10 a best 5 b 0 1test_clienta
2 2 wb 20 b best 10 a 5 2test_clientb
3 3 wa 10 a limit 5 b 5 1test_clientb
4 4 wb 20 b best 10 a 10 2test_clientb
--------------------------------------------------------------------------------
table: tmsg
--------------------------------------------------------------------------------
1:Primitive id:1 from test_clienta: OK
2:Cycle id:1 Exchange id:2 for wa @test_clienta:
2:mvt_from wa @test_clienta : 5 'b' -> wb @test_clientb
1:mvt_to wa @test_clienta : 5 'a' <- wb @test_clientb
stock id:1 remaining after exchange: 0 'b'
3:Cycle id:1 Exchange id:1 for wb @test_clientb:
1:mvt_from wb @test_clientb : 5 'a' -> wa @test_clienta
2:mvt_to wb @test_clientb : 5 'b' <- wa @test_clienta
stock id:2 remaining after exchange: 5 'a'
4:Primitive id:2 from test_clientb: OK
5:Primitive id:3 from test_clientb: OK
6:Primitive id:4 from test_clientb: OK
diff --git a/src/test/expected/tu_3.res b/src/test/expected/tu_3.res
index 4b22b28..fdf455c 100644
--- a/src/test/expected/tu_3.res
+++ b/src/test/expected/tu_3.res
@@ -1,61 +1,62 @@
-- Trilateral exchange between owners with distinct users
---------------------------------------------------------
--variables
--USER:admin
select * from market.tvar order by name;
name value
+---------+---------
+ INSTALLED 1
OC_CURRENT_OPENED 0
OC_CURRENT_PHASE 102
STACK_EXECUTED 0
STACK_TOP 0
---------------------------------------------------------
--USER:test_clienta
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',10);
id error
+---------+---------
1 (0,)
--USER:test_clientb
SELECT * FROM market.fsubmitorder('limit','wb','b', 20,'c',40);
id error
+---------+---------
2 (0,)
SELECT * FROM market.fsubmitorder('limit','wc','c', 10,'a',20);
id error
+---------+---------
3 (0,)
--Trilateral exchange expected
--------------------------------------------------------------------------------
table: torder
id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
1 1 wa 5 a limit 10 b 0 1test_clienta
2 2 wb 20 b limit 40 c 30 2test_clientb
3 3 wc 10 c limit 20 a 10 3test_clientb
--------------------------------------------------------------------------------
table: tmsg
--------------------------------------------------------------------------------
1:Primitive id:1 from test_clienta: OK
2:Primitive id:2 from test_clientb: OK
3:Cycle id:1 Exchange id:3 for wb @test_clientb:
3:mvt_from wb @test_clientb : 10 'c' -> wc @test_clientb
2:mvt_to wb @test_clientb : 10 'b' <- wa @test_clienta
stock id:2 remaining after exchange: 30 'c'
4:Cycle id:1 Exchange id:1 for wc @test_clientb:
1:mvt_from wc @test_clientb : 10 'a' -> wa @test_clienta
3:mvt_to wc @test_clientb : 10 'c' <- wb @test_clientb
stock id:3 remaining after exchange: 10 'a'
5:Cycle id:1 Exchange id:2 for wa @test_clienta:
2:mvt_from wa @test_clienta : 10 'b' -> wb @test_clientb
1:mvt_to wa @test_clienta : 10 'a' <- wc @test_clientb
stock id:1 remaining after exchange: 0 'b'
6:Primitive id:3 from test_clientb: OK
diff --git a/src/test/expected/tu_4.res b/src/test/expected/tu_4.res
index cf1bcea..fdabf22 100644
--- a/src/test/expected/tu_4.res
+++ b/src/test/expected/tu_4.res
@@ -1,62 +1,63 @@
-- Trilateral exchange between two owners
---------------------------------------------------------
--variables
--USER:admin
select * from market.tvar order by name;
name value
+---------+---------
+ INSTALLED 1
OC_CURRENT_OPENED 0
OC_CURRENT_PHASE 102
STACK_EXECUTED 0
STACK_TOP 0
-- The profit is shared equally between wa and wb
---------------------------------------------------------
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
--USER:test_clienta
SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',20);
id error
+---------+---------
1 (0,)
--USER:test_clientb
SELECT * FROM market.fsubmitorder('limit','wb','b', 20,'c',40);
id error
+---------+---------
2 (0,)
SELECT * FROM market.fsubmitorder('limit','wb','c', 10,'a',20);
id error
+---------+---------
3 (0,)
--Trilateral exchange expected
--------------------------------------------------------------------------------
table: torder
id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
1 1 wa 5 a limit 20 b 0 1test_clienta
2 2 wb 20 b limit 40 c 20 2test_clientb
3 3 wb 10 c limit 20 a 0 2test_clientb
--------------------------------------------------------------------------------
table: tmsg
--------------------------------------------------------------------------------
1:Primitive id:1 from test_clienta: OK
2:Primitive id:2 from test_clientb: OK
3:Cycle id:1 Exchange id:3 for wb @test_clientb:
3:mvt_from wb @test_clientb : 20 'c' -> wb @test_clientb
2:mvt_to wb @test_clientb : 20 'b' <- wa @test_clienta
stock id:2 remaining after exchange: 20 'c'
4:Cycle id:1 Exchange id:1 for wb @test_clientb:
1:mvt_from wb @test_clientb : 20 'a' -> wa @test_clienta
3:mvt_to wb @test_clientb : 20 'c' <- wb @test_clientb
stock id:3 remaining after exchange: 0 'a'
5:Cycle id:1 Exchange id:2 for wa @test_clienta:
2:mvt_from wa @test_clienta : 20 'b' -> wb @test_clientb
1:mvt_to wa @test_clienta : 20 'a' <- wb @test_clientb
stock id:1 remaining after exchange: 0 'b'
6:Primitive id:3 from test_clientb: OK
diff --git a/src/test/expected/tu_5.res b/src/test/expected/tu_5.res
index 8abd0b8..fda10b9 100644
--- a/src/test/expected/tu_5.res
+++ b/src/test/expected/tu_5.res
@@ -1,61 +1,62 @@
-- Trilateral exchange by one owner
---------------------------------------------------------
--variables
--USER:admin
select * from market.tvar order by name;
name value
+---------+---------
+ INSTALLED 1
OC_CURRENT_OPENED 0
OC_CURRENT_PHASE 102
STACK_EXECUTED 0
STACK_TOP 0
---------------------------------------------------------
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
--USER:test_clienta
SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',10);
id error
+---------+---------
1 (0,)
--USER:test_clientb
SELECT * FROM market.fsubmitorder('limit','wa','b', 20,'c',40);
id error
+---------+---------
2 (0,)
SELECT * FROM market.fsubmitorder('limit','wa','c', 10,'a',20);
id error
+---------+---------
3 (0,)
--Trilateral exchange expected
--------------------------------------------------------------------------------
table: torder
id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
1 1 wa 5 a limit 10 b 0 1test_clienta
2 2 wa 20 b limit 40 c 30 1test_clientb
3 3 wa 10 c limit 20 a 10 1test_clientb
--------------------------------------------------------------------------------
table: tmsg
--------------------------------------------------------------------------------
1:Primitive id:1 from test_clienta: OK
2:Primitive id:2 from test_clientb: OK
3:Cycle id:1 Exchange id:3 for wa @test_clientb:
3:mvt_from wa @test_clientb : 10 'c' -> wa @test_clientb
2:mvt_to wa @test_clientb : 10 'b' <- wa @test_clienta
stock id:2 remaining after exchange: 30 'c'
4:Cycle id:1 Exchange id:1 for wa @test_clientb:
1:mvt_from wa @test_clientb : 10 'a' -> wa @test_clienta
3:mvt_to wa @test_clientb : 10 'c' <- wa @test_clientb
stock id:3 remaining after exchange: 10 'a'
5:Cycle id:1 Exchange id:2 for wa @test_clienta:
2:mvt_from wa @test_clienta : 10 'b' -> wa @test_clientb
1:mvt_to wa @test_clienta : 10 'a' <- wa @test_clientb
stock id:1 remaining after exchange: 0 'b'
6:Primitive id:3 from test_clientb: OK
diff --git a/src/test/expected/tu_6.res b/src/test/expected/tu_6.res
index 3355c55..71e1776 100644
--- a/src/test/expected/tu_6.res
+++ b/src/test/expected/tu_6.res
@@ -1,108 +1,109 @@
-- 7-exchange with 7 partners
---------------------------------------------------------
--variables
--USER:admin
select * from market.tvar order by name;
name value
+---------+---------
+ INSTALLED 1
OC_CURRENT_OPENED 0
OC_CURRENT_PHASE 102
STACK_EXECUTED 0
STACK_TOP 0
---------------------------------------------------------
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
--USER:test_clienta
SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',10);
id error
+---------+---------
1 (0,)
--USER:test_clientb
SELECT * FROM market.fsubmitorder('limit','wb','b', 20,'c',40);
id error
+---------+---------
2 (0,)
SELECT * FROM market.fsubmitorder('limit','wc','c', 20,'d',40);
id error
+---------+---------
3 (0,)
SELECT * FROM market.fsubmitorder('limit','wd','d', 20,'e',40);
id error
+---------+---------
4 (0,)
SELECT * FROM market.fsubmitorder('limit','we','e', 20,'f',40);
id error
+---------+---------
5 (0,)
SELECT * FROM market.fsubmitorder('limit','wf','f', 20,'g',40);
id error
+---------+---------
6 (0,)
SELECT * FROM market.fsubmitorder('limit','wg','g', 20,'a',40);
id error
+---------+---------
7 (0,)
--------------------------------------------------------------------------------
table: torder
id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
1 1 wa 5 a limit 10 b 0 1test_clienta
2 2 wb 20 b limit 40 c 30 2test_clientb
3 3 wc 20 c limit 40 d 30 3test_clientb
4 4 wd 20 d limit 40 e 30 4test_clientb
5 5 we 20 e limit 40 f 30 5test_clientb
6 6 wf 20 f limit 40 g 30 6test_clientb
7 7 wg 20 g limit 40 a 30 7test_clientb
--------------------------------------------------------------------------------
table: tmsg
--------------------------------------------------------------------------------
1:Primitive id:1 from test_clienta: OK
2:Primitive id:2 from test_clientb: OK
3:Primitive id:3 from test_clientb: OK
4:Primitive id:4 from test_clientb: OK
5:Primitive id:5 from test_clientb: OK
6:Primitive id:6 from test_clientb: OK
7:Cycle id:1 Exchange id:7 for wf @test_clientb:
7:mvt_from wf @test_clientb : 10 'g' -> wg @test_clientb
6:mvt_to wf @test_clientb : 10 'f' <- we @test_clientb
stock id:6 remaining after exchange: 30 'g'
8:Cycle id:1 Exchange id:1 for wg @test_clientb:
1:mvt_from wg @test_clientb : 10 'a' -> wa @test_clienta
7:mvt_to wg @test_clientb : 10 'g' <- wf @test_clientb
stock id:7 remaining after exchange: 30 'a'
9:Cycle id:1 Exchange id:2 for wa @test_clienta:
2:mvt_from wa @test_clienta : 10 'b' -> wb @test_clientb
1:mvt_to wa @test_clienta : 10 'a' <- wg @test_clientb
stock id:1 remaining after exchange: 0 'b'
10:Cycle id:1 Exchange id:3 for wb @test_clientb:
3:mvt_from wb @test_clientb : 10 'c' -> wc @test_clientb
2:mvt_to wb @test_clientb : 10 'b' <- wa @test_clienta
stock id:2 remaining after exchange: 30 'c'
11:Cycle id:1 Exchange id:4 for wc @test_clientb:
4:mvt_from wc @test_clientb : 10 'd' -> wd @test_clientb
3:mvt_to wc @test_clientb : 10 'c' <- wb @test_clientb
stock id:3 remaining after exchange: 30 'd'
12:Cycle id:1 Exchange id:5 for wd @test_clientb:
5:mvt_from wd @test_clientb : 10 'e' -> we @test_clientb
4:mvt_to wd @test_clientb : 10 'd' <- wc @test_clientb
stock id:4 remaining after exchange: 30 'e'
13:Cycle id:1 Exchange id:6 for we @test_clientb:
6:mvt_from we @test_clientb : 10 'f' -> wf @test_clientb
5:mvt_to we @test_clientb : 10 'e' <- wd @test_clientb
stock id:5 remaining after exchange: 30 'f'
14:Primitive id:7 from test_clientb: OK
diff --git a/src/test/expected/tu_7.res b/src/test/expected/tu_7.res
index 4cb36b2..9dcab26 100644
--- a/src/test/expected/tu_7.res
+++ b/src/test/expected/tu_7.res
@@ -1,79 +1,80 @@
-- Competition between bilateral and multilateral exchange 1/2
---------------------------------------------------------
--variables
--USER:admin
select * from market.tvar order by name;
name value
+---------+---------
+ INSTALLED 1
OC_CURRENT_OPENED 0
OC_CURRENT_PHASE 102
STACK_EXECUTED 0
STACK_TOP 0
---------------------------------------------------------
--SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
--USER:test_clientb
SELECT * FROM market.fsubmitorder('limit','wb','b', 80,'a',40);
id error
+---------+---------
1 (0,)
SELECT * FROM market.fsubmitorder('limit','wc','b', 20,'d',40);
id error
+---------+---------
2 (0,)
SELECT * FROM market.fsubmitorder('limit','wd','d', 20,'a',40);
id error
+---------+---------
3 (0,)
--USER:test_clienta
SELECT * FROM market.fsubmitorder('limit','wa','a', 40,'b',80);
id error
+---------+---------
4 (0,)
-- omega is better for the trilateral exchange
-- it wins; the remainder is matched by a bilateral exchange
--------------------------------------------------------------------------------
table: torder
id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
1 1 wb 80 b limit 40 a 20 1test_clientb
2 2 wc 20 b limit 40 d 0 2test_clientb
3 3 wd 20 d limit 40 a 0 3test_clientb
4 4 wa 40 a limit 80 b 0 4test_clienta
--------------------------------------------------------------------------------
table: tmsg
--------------------------------------------------------------------------------
1:Primitive id:1 from test_clientb: OK
2:Primitive id:2 from test_clientb: OK
3:Primitive id:3 from test_clientb: OK
4:Cycle id:1 Exchange id:3 for wd @test_clientb:
3:mvt_from wd @test_clientb : 40 'a' -> wa @test_clienta
2:mvt_to wd @test_clientb : 40 'd' <- wc @test_clientb
stock id:3 remaining after exchange: 0 'a'
5:Cycle id:1 Exchange id:1 for wa @test_clienta:
1:mvt_from wa @test_clienta : 40 'b' -> wc @test_clientb
3:mvt_to wa @test_clienta : 40 'a' <- wd @test_clientb
stock id:4 remaining after exchange: 40 'b'
6:Cycle id:1 Exchange id:2 for wc @test_clientb:
2:mvt_from wc @test_clientb : 40 'd' -> wd @test_clientb
1:mvt_to wc @test_clientb : 40 'b' <- wa @test_clienta
stock id:2 remaining after exchange: 0 'd'
7:Cycle id:4 Exchange id:5 for wb @test_clientb:
5:mvt_from wb @test_clientb : 20 'a' -> wa @test_clienta
4:mvt_to wb @test_clientb : 40 'b' <- wa @test_clienta
stock id:1 remaining after exchange: 20 'a'
8:Cycle id:4 Exchange id:4 for wa @test_clienta:
4:mvt_from wa @test_clienta : 40 'b' -> wb @test_clientb
5:mvt_to wa @test_clienta : 20 'a' <- wb @test_clientb
stock id:4 remaining after exchange: 0 'b'
9:Primitive id:4 from test_clienta: OK
diff --git a/src/test/py/quote.py b/src/test/py/quote.py
deleted file mode 100644
index b8535b5..0000000
--- a/src/test/py/quote.py
+++ /dev/null
@@ -1,302 +0,0 @@
-# -*- coding: utf-8 -*-
-'''
-Test framework for the quote_* tests
-***************************************************************************
-
-execution:
-reset_market.sql
-submitted:
-	a list of primitives t_*.sql
-results:
-	state of the order book
-	state of tmsg
-comparison of expected/obtained
-
-in src/test/
-	quote.py
-	reset_market.sql
-	sql/t_*.sql
-	expected/t_*.res
-	obtained/t_*.res
-
-loop for each t_*.sql:
-	reset_market.sql
-	execute t_*.sql
-	dump the results into obtained/t_.res
-	compare expected and obtained
-'''
-import sys,os,time,logging
-import psycopg2
-import psycopg2.extensions
-import traceback
-
-import srvob_conf
-import molet
-import utilt
-
-PARLEN=80
-prtest = utilt.PrTest(PARLEN,'=')
-
-titre_test = "UNDEFINED"
-options = None
-
-def tests():
- global titre_test
-
- curdir = os.path.abspath(__file__)
- curdir = os.path.dirname(curdir)
- curdir = os.path.dirname(curdir)
- sqldir = os.path.join(curdir,'sql')
- molet.mkdir(os.path.join(curdir,'results'),ignoreWarning = True)
- molet.mkdir(os.path.join(curdir,'expected'),ignoreWarning = True)
- try:
- wait_for_true(0.1,"SELECT value=102,value FROM market.tvar WHERE name='OC_CURRENT_PHASE'",
- msg="Waiting for market opening")
- except psycopg2.OperationalError,e:
- print "Please adjust DB_NAME,DB_USER,DB_PWD,DB_PORT parameters of the file src/test/py/srv_conf.py"
- print "The test program could not connect to the market"
- exit(1)
-
- if options.test is None:
- _fts = [f for f in os.listdir(sqldir) if f.startswith('tu_') and f[-4:]=='.sql']
- _fts.sort(lambda x,y: cmp(x,y))
- else:
- _nt = options.test + '.sql'
- _fts = os.path.join(curdir,_nt)
- if not os.path.exists(_fts):
- print 'This test \'%s\' was not found' % _fts
- return
- else:
- _fts = [_nt]
-
- _tok,_terr,_num_test = 0,0,0
-
-	prtest.title('running regression tests on %s' % (srvob_conf.DB_NAME,))
- #print '='*PARLEN
-
-
- print ''
- print 'Num\tStatus\tName'
- print ''
-	for file_test in _fts: # iteration over the test cases
- _nom_test = file_test[:-4]
- _terr +=1
- _num_test +=1
-
- file_result = file_test[:-4]+'.res'
- _fte = os.path.join(curdir,'results',file_result)
- _fre = os.path.join(curdir,'expected',file_result)
-
- with open(_fte,'w') as f:
- cur = None
-
- exec_script(cur,'reset_market.sql',None)
- exec_script(cur,file_test,f)
-
- wait_for_true(20,"SELECT market.fstackdone()")
-
- conn = molet.DbData(srvob_conf.dbBO)
- try:
- with molet.DbCursor(conn) as cur:
- dump_torder(cur,f)
- dump_tmsg(cur,f)
-
- finally:
- conn.close()
-
- if(os.path.exists(_fre)):
- if(files_clones(_fte,_fre)):
- _tok +=1
- _terr -=1
- print '%i\tY\t%s\t%s' % (_num_test,_nom_test,titre_test)
- else:
- print '%i\tN\t%s\t%s' % (_num_test,_nom_test,titre_test)
- else:
- print '%i\t?\t%s\t%s' % (_num_test,_nom_test,titre_test)
-
- # display global results
- print ''
- print 'Test status: (Y)expected ==results, (N)expected!=results,(F)failed, (?)expected undefined'
- prtest.line()
-
- if(_terr == 0):
- prtest.center('\tAll %i tests passed' % _tok)
- else:
- prtest.center('\t%i tests KO, %i passed' % (_terr,_tok))
- prtest.line()
-
-def get_prefix_file(fm):
- return '.'.join(fm.split('.')[:-1])
-
-SEPARATOR = '\n'+'-'*PARLEN +'\n'
-
-def exec_script(cur,fn,fdr):
- global titre_test
-
- if( not os.path.exists(fn)):
- raise ValueError('The script %s is not found' % fn)
-
- cur_login = 'admin'
- titre_test = None
-
- with open(fn,'r') as f:
- for line in f:
- line = line.strip()
- if len(line) == 0:
- continue
-
- if line.startswith('--'):
- if titre_test is None:
- titre_test = line
- elif line.startswith('--USER:'):
- cur_login = line[7:].strip()
- if fdr:
- fdr.write(line+'\n')
- else:
-
- if fdr:
- fdr.write(line+'\n')
- execinst(cur_login,line,fdr,cur)
- return
-
-def execinst(cur_login,sql,fdr,cursor):
- if cur_login == 'admin':
- cur_login = None
- conn = molet.DbData(srvob_conf.dbBO,login = cur_login)
- try:
- with molet.DbCursor(conn,exit = True) as _cur:
- _cur.execute(sql)
- if fdr:
- dump_cur(_cur,fdr)
- finally:
- conn.close()
-'''
-yorder not shown:
- pos_requ box, -- box (point(lat,lon),point(lat,lon))
- pos_prov box, -- box (point(lat,lon),point(lat,lon))
- dist float8,
- carre_prov box -- carre_prov @> pos_requ
-'''
-
-def dump_torder(cur,fdr):
- fdr.write(SEPARATOR)
- fdr.write('table: torder\n')
- cur.execute('SELECT * FROM market.vord order by id asc')
- dump_cur(cur,fdr)
-
-def dump_cur(cur,fdr,_len=10):
- if(cur is None): return
- cols = [e.name for e in cur.description]
- row_format = ('{:>'+str(_len)+'}')*len(cols)
- fdr.write(row_format.format(*cols)+'\n')
- fdr.write(row_format.format(*(['+'+'-'*(_len-1)]*len(cols)))+'\n')
- for res in cur:
- fdr.write(row_format.format(*res)+'\n')
- return
-
-import json
-
-def dump_tmsg(cur,fdr):
- fdr.write(SEPARATOR)
- fdr.write('table: tmsg')
- fdr.write(SEPARATOR)
-
- cur.execute('SELECT id,typ,usr,jso FROM market.tmsg order by id asc')
- for res in cur:
- _id,typ,usr,jso = res
- _jso = json.loads(jso)
- if typ == 'response':
- if _jso['error']['code']==None:
- _msg = 'Primitive id:%i from %s: OK\n' % (_jso['id'],usr)
- else:
- _msg = 'Primitive id:%i from %s: ERROR(%i,%s)\n' % (usr,_jso['id'],
- _jso['error']['code'],_jso['error']['reason'])
- elif typ == 'exchange':
-
- _fmt = '''Cycle id:%i Exchange id:%i for %s @%s:
- \t%i:mvt_from %s @%s : %i \'%s\' -> %s @%s
- \t%i:mvt_to %s @%s : %i \'%s\' <- %s @%s
- \tstock id:%i remaining after exchange: %i \'%s\' \n'''
- _dat = (
- _jso['cycle'],_jso['id'],_jso['stock']['own'],usr,
- _jso['mvt_from']['id'],_jso['stock']['own'],usr,_jso['mvt_from']['qtt'],_jso['mvt_from']['nat'],_jso['mvt_from']['own'],_jso['mvt_from']['usr'],
- _jso['mvt_to']['id'], _jso['stock']['own'],usr,_jso['mvt_to']['qtt'],_jso['mvt_to']['nat'],_jso['mvt_to']['own'],_jso['mvt_to']['usr'],
- _jso['stock']['id'],_jso['stock']['qtt'],_jso['stock']['nat'])
- _msg = _fmt %_dat
- else:
- _msg = str(res)
-
- fdr.write('\t%i:'%_id+_msg+'\n')
- if options.verbose:
- print jso
- return
-
-import filecmp
-def files_clones(f1,f2):
- #res = filecmp.cmp(f1,f2)
- return (md5sum(f1) == md5sum(f2))
-
-import hashlib
-def md5sum(filename, blocksize=65536):
- hash = hashlib.md5()
- with open(filename, "r+b") as f:
- for block in iter(lambda: f.read(blocksize), ""):
- hash.update(block)
- return hash.hexdigest()
-
-def wait_for_true(delai,sql,msg=None):
- _i = 0;
- _w = 0;
-
- while True:
- _i +=1
-
- conn = molet.DbData(srvob_conf.dbBO)
- try:
- with molet.DbCursor(conn) as cur:
- cur.execute(sql)
- r = cur.fetchone()
- # print r
- if r[0] == True:
- break
- finally:
- conn.close()
-
- if msg is None:
- pass
- elif(_i%10)==0:
- print msg
-
- _a = 0.1;
- _w += _a;
-
- if _w > delai: # seconds
- raise ValueError('After %f seconds, %s != True' % (_w,sql))
- time.sleep(_a)
-
-'''---------------------------------------------------------------------------
-arguments
----------------------------------------------------------------------------'''
-from optparse import OptionParser
-import os
-
-def main():
- global options
-
- usage = """usage: %prog [options]
- tests """
- parser = OptionParser(usage)
- parser.add_option("-t","--test",action="store",type="string",dest="test",help="test",
- default= None)
- parser.add_option("-v","--verbose",action="store_true",dest="verbose",help="verbose",default=False)
-
- (options, args) = parser.parse_args()
-
- # um = os.umask(0177) # u=rw,g=,o=
-
- tests()
-
-
-if __name__ == "__main__":
- main()
-
diff --git a/src/test/py/run.py b/src/test/py/run.py
index f7514dd..ba049b8 100755
--- a/src/test/py/run.py
+++ b/src/test/py/run.py
@@ -1,203 +1,251 @@
# -*- coding: utf-8 -*-
'''
Test framework for the tu_* tests
***************************************************************************
execution:
reset_market.sql
submitted:
    a list of primitives t_*.sql
results:
    state of the order book
    state of tmsg
comparison of expected/obtained
in src/test/
    run.py
    sql/reset_market.sql
    sql/t_*.sql
    expected/t_*.res
    obtained/t_*.res
loop for each t_*.sql:
    reset_market.sql
    execute t_*.sql
    dump the results into obtained/t_.res
    compare expected and obtained
'''
import sys,os,time,logging
import psycopg2
import psycopg2.extensions
import traceback
import srvob_conf
import molet
import utilt
+import sys
+sys.path = [os.path.abspath(os.path.join(os.path.abspath(__file__),"../../../../simu/liquid"))]+ sys.path
+#print sys.path
+import distrib
+
PARLEN=80
prtest = utilt.PrTest(PARLEN,'=')
-titre_test = "UNDEFINED"
-#options = None
-
-def tests(options):
- global titre_test
-
+def get_paths():
curdir = os.path.abspath(__file__)
curdir = os.path.dirname(curdir)
curdir = os.path.dirname(curdir)
sqldir = os.path.join(curdir,'sql')
- molet.mkdir(os.path.join(curdir,'results'),ignoreWarning = True)
- molet.mkdir(os.path.join(curdir,'expected'),ignoreWarning = True)
+ resultsdir,expecteddir = os.path.join(curdir,'results'),os.path.join(curdir,'expected')
+ molet.mkdir(resultsdir,ignoreWarning = True)
+ molet.mkdir(expecteddir,ignoreWarning = True)
+ tup = (curdir,sqldir,resultsdir,expecteddir)
+ return tup
+
+def tests_tu(options):
+ titre_test = "UNDEFINED"
+
+ curdir,sqldir,resultsdir,expecteddir = get_paths()
+
try:
utilt.wait_for_true(srvob_conf.dbBO,0.1,"SELECT value=102,value FROM market.tvar WHERE name='OC_CURRENT_PHASE'",
msg="Waiting for market opening")
except psycopg2.OperationalError,e:
print "Please adjust DB_NAME,DB_USER,DB_PWD,DB_PORT parameters of the file src/test/py/srv_conf.py"
print "The test program could not connect to the market"
exit(1)
if options.test is None:
_fts = [f for f in os.listdir(sqldir) if f.startswith('tu_') and f[-4:]=='.sql']
_fts.sort(lambda x,y: cmp(x,y))
else:
_nt = options.test + '.sql'
_fts = os.path.join(sqldir,_nt)
if not os.path.exists(_fts):
print 'This test \'%s\' was not found' % _fts
return
else:
_fts = [_nt]
_tok,_terr,_num_test = 0,0,0
prtest.title('running tests on database "%s"' % (srvob_conf.DB_NAME,))
#print '='*PARLEN
print ''
print 'Num\tStatus\tName'
print ''
    for file_test in _fts: # iteration over the test cases
_nom_test = file_test[:-4]
_terr +=1
_num_test +=1
file_result = file_test[:-4]+'.res'
- _fte = os.path.join(curdir,'results',file_result)
- _fre = os.path.join(curdir,'expected',file_result)
+ _fte = os.path.join(resultsdir,file_result)
+ _fre = os.path.join(expecteddir,file_result)
with open(_fte,'w') as f:
cur = None
dump = utilt.Dumper(srvob_conf.dbBO,options,None)
- exec_script(dump,'reset_market.sql')
+ titre_test = utilt.exec_script(dump,sqldir,'reset_market.sql')
dump = utilt.Dumper(srvob_conf.dbBO,options,f)
- exec_script(dump,file_test)
+ titre_test = utilt.exec_script(dump,sqldir,file_test)
utilt.wait_for_true(srvob_conf.dbBO,20,"SELECT market.fstackdone()")
conn = molet.DbData(srvob_conf.dbBO)
try:
with molet.DbCursor(conn) as cur:
dump.torder(cur)
dump.tmsg(cur)
finally:
conn.close()
if(os.path.exists(_fre)):
if(utilt.files_clones(_fte,_fre)):
_tok +=1
_terr -=1
print '%i\tY\t%s\t%s' % (_num_test,_nom_test,titre_test)
else:
print '%i\tN\t%s\t%s' % (_num_test,_nom_test,titre_test)
else:
print '%i\t?\t%s\t%s' % (_num_test,_nom_test,titre_test)
# display global results
print ''
    print 'Test status: (Y) expected == results, (N) expected != results, (F) failed, (?) expected undefined'
prtest.line()
if(_terr == 0):
prtest.center('\tAll %i tests passed' % _tok)
else:
prtest.center('\t%i tests KO, %i passed' % (_terr,_tok))
prtest.line()
-def exec_script(dump,fn):
- global titre_test
+import random
+import csv
+MAX_ORDER = 1000
+def build_ti(options):
+    ''' build a .csv file with a batch of submit orders
+ '''
+ #conf = srvob_conf.dbBO
+ curdir,sqldir,resultsdir,expecteddir = get_paths()
+ _frs = os.path.join(sqldir,'test_ti.csv')
+
+ MAX_OWNER = 10
+ MAX_QLT = 20
+ QTT_PROV = 10000
+
+ prtest.title('generating tests cases for quotes')
+ def gen(nborders,frs,withquote):
+ for i in range(nborders):
+ w = random.randint(1,MAX_OWNER)
+ qlt_prov,qlt_requ = distrib.couple(distrib.uniformQlt,MAX_QLT)
+ r = random.random()+0.5
+ qtt_requ = int(QTT_PROV * r)
+ lb= 'limit' if (random.random()>0.2) else 'best'
+ frs.writerow(['admin',lb,'w%i'%w,'q%i'%qlt_requ,qtt_requ,'q%i'%qlt_prov,QTT_PROV])
+
+ with open(_frs,'w') as f:
+ spamwriter = csv.writer(f)
+ gen(MAX_ORDER,spamwriter,False)
+ gen(30,spamwriter,True)
+
+ molet.removeFile(os.path.join(expecteddir,'test_ti.res'),ignoreWarning = True)
+ prtest.center('done, test_ti.res removed')
+ prtest.line()
+
+def test_ti(options):
+
+ curdir,sqldir,resultsdir,expecteddir = get_paths()
+ prtest.title('running tests on database "%s"' % (srvob_conf.DB_NAME,))
+
+ dump = utilt.Dumper(srvob_conf.dbBO,options,None)
+ titre_test = utilt.exec_script(dump,sqldir,'reset_market.sql')
- fn = os.path.join('sql',fn)
+ fn = os.path.join(sqldir,'test_ti.csv')
if( not os.path.exists(fn)):
- raise ValueError('The script %s is not found' % fn)
+ raise ValueError('The data %s is not found' % fn)
- cur_login = 'admin'
+ cur_login = None
titre_test = None
- with open(fn,'r') as f:
- for line in f:
- line = line.strip()
- if len(line) == 0:
- continue
-
- dump.write(line+'\n')
- if line.startswith('--'):
- if titre_test is None:
- titre_test = line
- elif line.startswith('--USER:'):
- cur_login = line[7:].strip()
-
+ inst = utilt.ExecInst(dump)
+ quote = False
+
+ with open(fn,'r') as f:
+ spamreader = csv.reader(f)
+ i= 0
+ usr = None
+ fmtorder = "SELECT * from market.fsubmitorder('%s','%s','%s',%s,'%s',%s)"
+ fmtquote = "SELECT * from market.fsubmitquote('%s','%s','%s',%s,'%s',%s)"
+ for row in spamreader:
+ i += 1
+
+ if i < 20: #i < MAX_ORDER:
+ cursor = inst.exe( fmtorder % tuple(row[1:]),row[0])
else:
- execinst(dump,cur_login,line)
- return
+ cursor = inst.exe( fmtquote % tuple(row[1:]),row[0])
+ id,err = cursor.fetchone()
+ if err != '(0,)':
+ raise ValueError('Order returned an error "%s"' % err)
+ utilt.wait_for_true(srvob_conf.dbBO,20,"SELECT market.fstackdone()")
+ print id
+ cursor = inst.exe('SELECT * from market.tmsg')
+ print cursor.fetchone()
-def execinst(dump,cur_login,sql):
+ if i >30:
+ break
- if cur_login == 'admin':
- cur_login = None
- conn = molet.DbData(srvob_conf.dbBO,login = cur_login)
- try:
- with molet.DbCursor(conn,exit = True) as _cur:
- # print sql
- _cur.execute(sql)
- dump.cur(_cur)
- finally:
- conn.close()
-'''
-yorder not shown:
- pos_requ box, -- box (point(lat,lon),point(lat,lon))
- pos_prov box, -- box (point(lat,lon),point(lat,lon))
- dist float8,
- carre_prov box -- carre_prov @> pos_requ
-'''
+
+ inst.close()
+ return titre_test
+
'''---------------------------------------------------------------------------
arguments
---------------------------------------------------------------------------'''
from optparse import OptionParser
import os
def main():
#global options
usage = """usage: %prog [options]
tests """
parser = OptionParser(usage)
parser.add_option("-t","--test",action="store",type="string",dest="test",help="test",
default= None)
parser.add_option("-v","--verbose",action="store_true",dest="verbose",help="verbose",default=False)
-
+ parser.add_option("-b","--build",action="store_true",dest="build",help="build",default=False)
+ parser.add_option("-i","--ti",action="store_true",dest="test_ti",help="execute test_ti",default=False)
(options, args) = parser.parse_args()
# um = os.umask(0177) # u=rw,g=,o=
-
- tests(options)
+ if options.build:
+ build_ti(options)
+ elif options.test_ti:
+ test_ti(options)
+ else:
+ tests_tu(options)
if __name__ == "__main__":
main()
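The `build_ti`/`test_ti` pair above generates random submit orders into a CSV file and then replays each row as a `market.fsubmitorder` call. A minimal self-contained sketch of that round-trip (Python 3; the project-specific `distrib.couple` draw is replaced here by `random.sample`, and all bounds are hypothetical):

```python
import csv
import io
import random

# hypothetical bounds, mirroring the constants in the test driver
MAX_OWNER = 10
MAX_QLT = 20
QTT_PROV = 10000

def gen_rows(nborders, rng):
    """Yield one submit-order row per iteration, shaped like test_ti.csv."""
    for _ in range(nborders):
        w = rng.randint(1, MAX_OWNER)
        # two distinct qualities, standing in for distrib.couple(...)
        qlt_prov, qlt_requ = rng.sample(range(1, MAX_QLT + 1), 2)
        qtt_requ = int(QTT_PROV * (rng.random() + 0.5))
        lb = 'limit' if rng.random() > 0.2 else 'best'
        yield ['admin', lb, 'w%i' % w, 'q%i' % qlt_requ, qtt_requ,
               'q%i' % qlt_prov, QTT_PROV]

# write rows to an in-memory CSV, then replay them as SQL statements
buf = io.StringIO()
csv.writer(buf).writerows(gen_rows(5, random.Random(42)))
buf.seek(0)
fmt = "SELECT * from market.fsubmitorder('%s','%s','%s',%s,'%s',%s)"
stmts = [fmt % tuple(row[1:]) for row in csv.reader(buf)]
```

The first CSV column (the submitting user) is kept out of the SQL text, exactly as in `test_ti`, where it selects the connection login instead.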
diff --git a/src/test/py/utilt.py b/src/test/py/utilt.py
index 3d64891..40f5d71 100644
--- a/src/test/py/utilt.py
+++ b/src/test/py/utilt.py
@@ -1,140 +1,236 @@
# -*- coding: utf-8 -*-
import string
-
+import os.path
+'''---------------------------------------------------------------------------
+---------------------------------------------------------------------------'''
class PrTest(object):
''' results printing '''
def __init__(self,parlen,sep):
self.parlen = parlen+ parlen%2
self.sep = sep
def title(self,title):
_l = len(title)
_p = max(_l%2 +_l,40)
_x = self.parlen -_p
if (_x > 2):
print (_x/2)*self.sep + string.center(title,_p) + (_x/2)*self.sep
else:
            print string.center(title,self.parlen)
def line(self):
print self.parlen*self.sep
def center(self,text):
print string.center(text,self.parlen)
-############################################################################
-''' file comparison '''
+'''---------------------------------------------------------------------------
+ file comparison
+---------------------------------------------------------------------------'''
import filecmp
def files_clones(f1,f2):
#res = filecmp.cmp(f1,f2)
return (md5sum(f1) == md5sum(f2))
import hashlib
def md5sum(filename, blocksize=65536):
hash = hashlib.md5()
with open(filename, "r+b") as f:
for block in iter(lambda: f.read(blocksize), ""):
hash.update(block)
return hash.hexdigest()
-############################################################################
+'''---------------------------------------------------------------------------
+---------------------------------------------------------------------------'''
SEPARATOR = '\n'+'-'*80 +'\n'
import json
class Dumper(object):
def __init__(self,conf,options,fdr):
self.options =options
self.conf = conf
self.fdr = fdr
+ def getConf(self):
+ return self.conf
+
def torder(self,cur):
self.write(SEPARATOR)
self.write('table: torder\n')
cur.execute('SELECT * FROM market.vord order by id asc')
self.cur(cur)
+ '''
+ yorder not shown:
+ pos_requ box, -- box (point(lat,lon),point(lat,lon))
+ pos_prov box, -- box (point(lat,lon),point(lat,lon))
+ dist float8,
+ carre_prov box -- carre_prov @> pos_requ
+ '''
return
def write(self,txt):
if self.fdr:
self.fdr.write(txt)
def cur(self,cur,_len=10):
#print cur.description
if(cur.description is None): return
#print type(cur)
cols = [e.name for e in cur.description]
row_format = ('{:>'+str(_len)+'}')*len(cols)
self.write(row_format.format(*cols)+'\n')
self.write(row_format.format(*(['+'+'-'*(_len-1)]*len(cols)))+'\n')
for res in cur:
self.write(row_format.format(*res)+'\n')
return
def tmsg(self,cur):
self.write(SEPARATOR)
self.write('table: tmsg')
self.write(SEPARATOR)
cur.execute('SELECT id,typ,usr,jso FROM market.tmsg order by id asc')
for res in cur:
_id,typ,usr,jso = res
_jso = json.loads(jso)
if typ == 'response':
if _jso['error']['code']==None:
_msg = 'Primitive id:%i from %s: OK\n' % (_jso['id'],usr)
else:
                    _msg = 'Primitive id:%i from %s: ERROR(%i,%s)\n' % (_jso['id'],usr,
_jso['error']['code'],_jso['error']['reason'])
elif typ == 'exchange':
_fmt = '''Cycle id:%i Exchange id:%i for %s @%s:
\t%i:mvt_from %s @%s : %i \'%s\' -> %s @%s
\t%i:mvt_to %s @%s : %i \'%s\' <- %s @%s
\tstock id:%i remaining after exchange: %i \'%s\' \n'''
_dat = (
_jso['cycle'],_jso['id'],_jso['stock']['own'],usr,
_jso['mvt_from']['id'],_jso['stock']['own'],usr,_jso['mvt_from']['qtt'],_jso['mvt_from']['nat'],_jso['mvt_from']['own'],_jso['mvt_from']['usr'],
_jso['mvt_to']['id'], _jso['stock']['own'],usr,_jso['mvt_to']['qtt'],_jso['mvt_to']['nat'],_jso['mvt_to']['own'],_jso['mvt_to']['usr'],
_jso['stock']['id'],_jso['stock']['qtt'],_jso['stock']['nat'])
_msg = _fmt %_dat
else:
_msg = str(res)
self.write('\t%i:'%_id+_msg+'\n')
if self.options.verbose:
print jso
return
-############################################################################
+'''---------------------------------------------------------------------------
+wait until a command returns true with timeout
+---------------------------------------------------------------------------'''
import molet
import time
def wait_for_true(conf,delai,sql,msg=None):
_i = 0;
_w = 0;
while True:
_i +=1
conn = molet.DbData(conf)
try:
with molet.DbCursor(conn) as cur:
cur.execute(sql)
r = cur.fetchone()
# print r
if r[0] == True:
break
finally:
conn.close()
if msg is None:
pass
elif(_i%10)==0:
print msg
_a = 0.1;
_w += _a;
if _w > delai: # seconds
raise ValueError('After %f seconds, %s != True' % (_w,sql))
- time.sleep(_a)
\ No newline at end of file
+ time.sleep(_a)
+
+'''---------------------------------------------------------------------------
+executes a script
+---------------------------------------------------------------------------'''
+def exec_script(dump,dirsql,fn):
+
+ fn = os.path.join(dirsql,fn)
+ if( not os.path.exists(fn)):
+ raise ValueError('The script %s is not found' % fn)
+
+ cur_login = None
+ titre_test = None
+
+ inst = ExecInst(dump)
+
+ with open(fn,'r') as f:
+ for line in f:
+ line = line.strip()
+ if len(line) == 0:
+ continue
+
+ dump.write(line+'\n')
+
+ if line.startswith('--'):
+ if titre_test is None:
+ titre_test = line
+ elif line.startswith('--USER:'):
+ cur_login = line[7:].strip()
+
+ else:
+ cursor = inst.exe(line,cur_login)
+ dump.cur(cursor)
+
+ inst.close()
+ return titre_test
+
+'''---------------------------------------------------------------------------
+---------------------------------------------------------------------------'''
+class ExecInst(object):
+
+ def __init__(self,dump):
+ self.login = None
+ self.conn = None
+ self.cur = None
+ self.dump = dump
+
+ def exe(self,sql,login):
+ #print login
+ if self.login != login:
+ self.close()
+
+ if self.conn is None:
+ self.login = login
+ _login = None if login == 'admin' else login
+ self.conn = molet.DbData(self.dump.getConf(),login = _login)
+ self.cur = self.conn.con.cursor()
+
+ self.cur.execute(sql)
+
+ return self.cur
+
+ def close(self):
+ if not(self.conn is None):
+ if not(self.cur is None):
+ self.cur.close()
+ self.conn.close()
+ self.conn = None
+
+
+def execinst(dump,cur_login,sql):
+
+ if cur_login == 'admin':
+ cur_login = None
+ conn = molet.DbData(dump.getConf(),login = cur_login)
+ try:
+ with molet.DbCursor(conn,exit = True) as _cur:
+ _cur.execute(sql)
+ dump.cur(_cur)
+ finally:
+ conn.close()
\ No newline at end of file
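The `wait_for_true` helper in the `utilt.py` hunk polls a SQL predicate with a fixed sleep until a timeout expires. Stripped of the database plumbing, the pattern reduces to this sketch (the `check`, `timeout`, and `interval` names are illustrative, not from the patch):

```python
import time

def wait_for_true(check, timeout, interval=0.1):
    """Poll check() until it returns True; raise after `timeout` seconds."""
    waited = 0.0
    while not check():
        waited += interval
        if waited > timeout:
            raise ValueError('After %f seconds, condition still not True' % waited)
        time.sleep(interval)
    return waited

# example: a counter-backed condition that becomes true on the third poll
state = {'polls': 0}
def ready():
    state['polls'] += 1
    return state['polls'] >= 3
```

As in the original, the accumulated wait is compared against the budget before each sleep, so a condition that never turns true fails with an error naming the elapsed time rather than hanging the test run.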
diff --git a/src/test/sql/tu_1.res b/src/test/sql/tu_1.res
deleted file mode 100644
index ebf9dae..0000000
--- a/src/test/sql/tu_1.res
+++ /dev/null
@@ -1,49 +0,0 @@
--- Bilateral exchange between owners with distinct users (limit)
----------------------------------------------------------
---variables
---USER:admin
-select * from market.tvar order by name;
- name value
-+---------+---------
-OC_CURRENT_OPENED 0
-OC_CURRENT_PHASE 102
-STACK_EXECUTED 0
- STACK_TOP 0
----------------------------------------------------------
---USER:test_clienta
---SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
-SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',10);
- id error
-+---------+---------
- 1 (0,)
---USER:test_clientb
-SELECT * FROM market.fsubmitorder('limit','wb','b', 10,'a',20);
- id error
-+---------+---------
- 2 (0,)
---Bilateral exchange expected
-
---------------------------------------------------------------------------------
-table: torder
- id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
-+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
- 1 1 wa 5 a limit 10 b 0 1test_clienta
- 2 2 wb 10 b limit 20 a 10 2test_clientb
-
---------------------------------------------------------------------------------
-table: tmsg
---------------------------------------------------------------------------------
- 1:Primitive id:1 from test_clienta: OK
-
- 2:Cycle id:1 Exchange id:2 for wa @test_clienta:
- 2:mvt_from wa @test_clienta : 10 'b' -> wb @test_clientb
- 1:mvt_to wa @test_clienta : 10 'a' <- wb @test_clientb
- stock id:1 remaining after exchange: 0 'b'
-
- 3:Cycle id:1 Exchange id:1 for wb @test_clientb:
- 1:mvt_from wb @test_clientb : 10 'a' -> wa @test_clienta
- 2:mvt_to wb @test_clientb : 10 'b' <- wa @test_clienta
- stock id:2 remaining after exchange: 10 'a'
-
- 4:Primitive id:2 from test_clientb: OK
-
diff --git a/src/test/sql/tu_2.res b/src/test/sql/tu_2.res
deleted file mode 100644
index bacfc9c..0000000
--- a/src/test/sql/tu_2.res
+++ /dev/null
@@ -1,65 +0,0 @@
--- Bilateral exchange between owners with distinct users (best+limit)
----------------------------------------------------------
---variables
---USER:admin
-select * from market.tvar order by name;
- name value
-+---------+---------
-OC_CURRENT_OPENED 0
-OC_CURRENT_PHASE 102
-STACK_EXECUTED 0
- STACK_TOP 0
----------------------------------------------------------
---USER:test_clienta
---SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
-SELECT * FROM market.fsubmitorder('best','wa','a', 10,'b',5);
- id error
-+---------+---------
- 1 (0,)
---USER:test_clientb
-SELECT * FROM market.fsubmitorder('best','wb','b', 20,'a',10);
- id error
-+---------+---------
- 2 (0,)
---Bilateral exchange expected
-SELECT * FROM market.fsubmitorder('limit','wa','a', 10,'b',5);
- id error
-+---------+---------
- 3 (0,)
---USER:test_clientb
-SELECT * FROM market.fsubmitorder('best','wb','b', 20,'a',10);
- id error
-+---------+---------
- 4 (0,)
---No exchange expected
-
---------------------------------------------------------------------------------
-table: torder
- id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
-+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
- 1 1 wa 10 a best 5 b 0 1test_clienta
- 2 2 wb 20 b best 10 a 5 2test_clientb
- 3 3 wa 10 a limit 5 b 5 1test_clientb
- 4 4 wb 20 b best 10 a 10 2test_clientb
-
---------------------------------------------------------------------------------
-table: tmsg
---------------------------------------------------------------------------------
- 1:Primitive id:1 from test_clienta: OK
-
- 2:Cycle id:1 Exchange id:2 for wa @test_clienta:
- 2:mvt_from wa @test_clienta : 5 'b' -> wb @test_clientb
- 1:mvt_to wa @test_clienta : 5 'a' <- wb @test_clientb
- stock id:1 remaining after exchange: 0 'b'
-
- 3:Cycle id:1 Exchange id:1 for wb @test_clientb:
- 1:mvt_from wb @test_clientb : 5 'a' -> wa @test_clienta
- 2:mvt_to wb @test_clientb : 5 'b' <- wa @test_clienta
- stock id:2 remaining after exchange: 5 'a'
-
- 4:Primitive id:2 from test_clientb: OK
-
- 5:Primitive id:3 from test_clientb: OK
-
- 6:Primitive id:4 from test_clientb: OK
-
diff --git a/src/test/sql/tu_3.res b/src/test/sql/tu_3.res
deleted file mode 100644
index 4b22b28..0000000
--- a/src/test/sql/tu_3.res
+++ /dev/null
@@ -1,61 +0,0 @@
--- Trilateral exchange between owners with distinct users
----------------------------------------------------------
---variables
---USER:admin
-select * from market.tvar order by name;
- name value
-+---------+---------
-OC_CURRENT_OPENED 0
-OC_CURRENT_PHASE 102
-STACK_EXECUTED 0
- STACK_TOP 0
----------------------------------------------------------
---USER:test_clienta
---SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
-SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',10);
- id error
-+---------+---------
- 1 (0,)
---USER:test_clientb
-SELECT * FROM market.fsubmitorder('limit','wb','b', 20,'c',40);
- id error
-+---------+---------
- 2 (0,)
-SELECT * FROM market.fsubmitorder('limit','wc','c', 10,'a',20);
- id error
-+---------+---------
- 3 (0,)
---Trilateral exchange expected
-
---------------------------------------------------------------------------------
-table: torder
- id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
-+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
- 1 1 wa 5 a limit 10 b 0 1test_clienta
- 2 2 wb 20 b limit 40 c 30 2test_clientb
- 3 3 wc 10 c limit 20 a 10 3test_clientb
-
---------------------------------------------------------------------------------
-table: tmsg
---------------------------------------------------------------------------------
- 1:Primitive id:1 from test_clienta: OK
-
- 2:Primitive id:2 from test_clientb: OK
-
- 3:Cycle id:1 Exchange id:3 for wb @test_clientb:
- 3:mvt_from wb @test_clientb : 10 'c' -> wc @test_clientb
- 2:mvt_to wb @test_clientb : 10 'b' <- wa @test_clienta
- stock id:2 remaining after exchange: 30 'c'
-
- 4:Cycle id:1 Exchange id:1 for wc @test_clientb:
- 1:mvt_from wc @test_clientb : 10 'a' -> wa @test_clienta
- 3:mvt_to wc @test_clientb : 10 'c' <- wb @test_clientb
- stock id:3 remaining after exchange: 10 'a'
-
- 5:Cycle id:1 Exchange id:2 for wa @test_clienta:
- 2:mvt_from wa @test_clienta : 10 'b' -> wb @test_clientb
- 1:mvt_to wa @test_clienta : 10 'a' <- wc @test_clientb
- stock id:1 remaining after exchange: 0 'b'
-
- 6:Primitive id:3 from test_clientb: OK
-
diff --git a/src/test/sql/tu_4.res b/src/test/sql/tu_4.res
deleted file mode 100644
index cf1bcea..0000000
--- a/src/test/sql/tu_4.res
+++ /dev/null
@@ -1,62 +0,0 @@
--- Trilateral exchange between owners with two owners
----------------------------------------------------------
---variables
---USER:admin
-select * from market.tvar order by name;
- name value
-+---------+---------
-OC_CURRENT_OPENED 0
-OC_CURRENT_PHASE 102
-STACK_EXECUTED 0
- STACK_TOP 0
--- The profit is shared equally between wa and wb
----------------------------------------------------------
---SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
---USER:test_clienta
-SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',20);
- id error
-+---------+---------
- 1 (0,)
---USER:test_clientb
-SELECT * FROM market.fsubmitorder('limit','wb','b', 20,'c',40);
- id error
-+---------+---------
- 2 (0,)
-SELECT * FROM market.fsubmitorder('limit','wb','c', 10,'a',20);
- id error
-+---------+---------
- 3 (0,)
---Trilateral exchange expected
-
---------------------------------------------------------------------------------
-table: torder
- id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
-+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
- 1 1 wa 5 a limit 20 b 0 1test_clienta
- 2 2 wb 20 b limit 40 c 20 2test_clientb
- 3 3 wb 10 c limit 20 a 0 2test_clientb
-
---------------------------------------------------------------------------------
-table: tmsg
---------------------------------------------------------------------------------
- 1:Primitive id:1 from test_clienta: OK
-
- 2:Primitive id:2 from test_clientb: OK
-
- 3:Cycle id:1 Exchange id:3 for wb @test_clientb:
- 3:mvt_from wb @test_clientb : 20 'c' -> wb @test_clientb
- 2:mvt_to wb @test_clientb : 20 'b' <- wa @test_clienta
- stock id:2 remaining after exchange: 20 'c'
-
- 4:Cycle id:1 Exchange id:1 for wb @test_clientb:
- 1:mvt_from wb @test_clientb : 20 'a' -> wa @test_clienta
- 3:mvt_to wb @test_clientb : 20 'c' <- wb @test_clientb
- stock id:3 remaining after exchange: 0 'a'
-
- 5:Cycle id:1 Exchange id:2 for wa @test_clienta:
- 2:mvt_from wa @test_clienta : 20 'b' -> wb @test_clientb
- 1:mvt_to wa @test_clienta : 20 'a' <- wb @test_clientb
- stock id:1 remaining after exchange: 0 'b'
-
- 6:Primitive id:3 from test_clientb: OK
-
diff --git a/src/test/sql/tu_5.res b/src/test/sql/tu_5.res
deleted file mode 100644
index 8abd0b8..0000000
--- a/src/test/sql/tu_5.res
+++ /dev/null
@@ -1,61 +0,0 @@
--- Trilateral exchange by one owners
----------------------------------------------------------
---variables
---USER:admin
-select * from market.tvar order by name;
- name value
-+---------+---------
-OC_CURRENT_OPENED 0
-OC_CURRENT_PHASE 102
-STACK_EXECUTED 0
- STACK_TOP 0
----------------------------------------------------------
---SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
---USER:test_clienta
-SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',10);
- id error
-+---------+---------
- 1 (0,)
---USER:test_clientb
-SELECT * FROM market.fsubmitorder('limit','wa','b', 20,'c',40);
- id error
-+---------+---------
- 2 (0,)
-SELECT * FROM market.fsubmitorder('limit','wa','c', 10,'a',20);
- id error
-+---------+---------
- 3 (0,)
---Trilateral exchange expected
-
---------------------------------------------------------------------------------
-table: torder
- id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
-+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
- 1 1 wa 5 a limit 10 b 0 1test_clienta
- 2 2 wa 20 b limit 40 c 30 1test_clientb
- 3 3 wa 10 c limit 20 a 10 1test_clientb
-
---------------------------------------------------------------------------------
-table: tmsg
---------------------------------------------------------------------------------
- 1:Primitive id:1 from test_clienta: OK
-
- 2:Primitive id:2 from test_clientb: OK
-
- 3:Cycle id:1 Exchange id:3 for wa @test_clientb:
- 3:mvt_from wa @test_clientb : 10 'c' -> wa @test_clientb
- 2:mvt_to wa @test_clientb : 10 'b' <- wa @test_clienta
- stock id:2 remaining after exchange: 30 'c'
-
- 4:Cycle id:1 Exchange id:1 for wa @test_clientb:
- 1:mvt_from wa @test_clientb : 10 'a' -> wa @test_clienta
- 3:mvt_to wa @test_clientb : 10 'c' <- wa @test_clientb
- stock id:3 remaining after exchange: 10 'a'
-
- 5:Cycle id:1 Exchange id:2 for wa @test_clienta:
- 2:mvt_from wa @test_clienta : 10 'b' -> wa @test_clientb
- 1:mvt_to wa @test_clienta : 10 'a' <- wa @test_clientb
- stock id:1 remaining after exchange: 0 'b'
-
- 6:Primitive id:3 from test_clientb: OK
-
diff --git a/src/test/sql/tu_6.res b/src/test/sql/tu_6.res
deleted file mode 100644
index 3355c55..0000000
--- a/src/test/sql/tu_6.res
+++ /dev/null
@@ -1,108 +0,0 @@
--- 7-exchange with 7 partners
----------------------------------------------------------
---variables
---USER:admin
-select * from market.tvar order by name;
- name value
-+---------+---------
-OC_CURRENT_OPENED 0
-OC_CURRENT_PHASE 102
-STACK_EXECUTED 0
- STACK_TOP 0
----------------------------------------------------------
---SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
---USER:test_clienta
-SELECT * FROM market.fsubmitorder('limit','wa','a', 5,'b',10);
- id error
-+---------+---------
- 1 (0,)
---USER:test_clientb
-SELECT * FROM market.fsubmitorder('limit','wb','b', 20,'c',40);
- id error
-+---------+---------
- 2 (0,)
-SELECT * FROM market.fsubmitorder('limit','wc','c', 20,'d',40);
- id error
-+---------+---------
- 3 (0,)
-SELECT * FROM market.fsubmitorder('limit','wd','d', 20,'e',40);
- id error
-+---------+---------
- 4 (0,)
-SELECT * FROM market.fsubmitorder('limit','we','e', 20,'f',40);
- id error
-+---------+---------
- 5 (0,)
-SELECT * FROM market.fsubmitorder('limit','wf','f', 20,'g',40);
- id error
-+---------+---------
- 6 (0,)
-SELECT * FROM market.fsubmitorder('limit','wg','g', 20,'a',40);
- id error
-+---------+---------
- 7 (0,)
-
---------------------------------------------------------------------------------
-table: torder
- id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
-+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
- 1 1 wa 5 a limit 10 b 0 1test_clienta
- 2 2 wb 20 b limit 40 c 30 2test_clientb
- 3 3 wc 20 c limit 40 d 30 3test_clientb
- 4 4 wd 20 d limit 40 e 30 4test_clientb
- 5 5 we 20 e limit 40 f 30 5test_clientb
- 6 6 wf 20 f limit 40 g 30 6test_clientb
- 7 7 wg 20 g limit 40 a 30 7test_clientb
-
---------------------------------------------------------------------------------
-table: tmsg
---------------------------------------------------------------------------------
- 1:Primitive id:1 from test_clienta: OK
-
- 2:Primitive id:2 from test_clientb: OK
-
- 3:Primitive id:3 from test_clientb: OK
-
- 4:Primitive id:4 from test_clientb: OK
-
- 5:Primitive id:5 from test_clientb: OK
-
- 6:Primitive id:6 from test_clientb: OK
-
- 7:Cycle id:1 Exchange id:7 for wf @test_clientb:
- 7:mvt_from wf @test_clientb : 10 'g' -> wg @test_clientb
- 6:mvt_to wf @test_clientb : 10 'f' <- we @test_clientb
- stock id:6 remaining after exchange: 30 'g'
-
- 8:Cycle id:1 Exchange id:1 for wg @test_clientb:
- 1:mvt_from wg @test_clientb : 10 'a' -> wa @test_clienta
- 7:mvt_to wg @test_clientb : 10 'g' <- wf @test_clientb
- stock id:7 remaining after exchange: 30 'a'
-
- 9:Cycle id:1 Exchange id:2 for wa @test_clienta:
- 2:mvt_from wa @test_clienta : 10 'b' -> wb @test_clientb
- 1:mvt_to wa @test_clienta : 10 'a' <- wg @test_clientb
- stock id:1 remaining after exchange: 0 'b'
-
- 10:Cycle id:1 Exchange id:3 for wb @test_clientb:
- 3:mvt_from wb @test_clientb : 10 'c' -> wc @test_clientb
- 2:mvt_to wb @test_clientb : 10 'b' <- wa @test_clienta
- stock id:2 remaining after exchange: 30 'c'
-
- 11:Cycle id:1 Exchange id:4 for wc @test_clientb:
- 4:mvt_from wc @test_clientb : 10 'd' -> wd @test_clientb
- 3:mvt_to wc @test_clientb : 10 'c' <- wb @test_clientb
- stock id:3 remaining after exchange: 30 'd'
-
- 12:Cycle id:1 Exchange id:5 for wd @test_clientb:
- 5:mvt_from wd @test_clientb : 10 'e' -> we @test_clientb
- 4:mvt_to wd @test_clientb : 10 'd' <- wc @test_clientb
- stock id:4 remaining after exchange: 30 'e'
-
- 13:Cycle id:1 Exchange id:6 for we @test_clientb:
- 6:mvt_from we @test_clientb : 10 'f' -> wf @test_clientb
- 5:mvt_to we @test_clientb : 10 'e' <- wd @test_clientb
- stock id:5 remaining after exchange: 30 'f'
-
- 14:Primitive id:7 from test_clientb: OK
-
diff --git a/src/test/sql/tu_7.res b/src/test/sql/tu_7.res
deleted file mode 100644
index 4cb36b2..0000000
--- a/src/test/sql/tu_7.res
+++ /dev/null
@@ -1,79 +0,0 @@
--- Competition between bilateral and multilateral exchange 1/2
----------------------------------------------------------
---variables
---USER:admin
-select * from market.tvar order by name;
- name value
-+---------+---------
-OC_CURRENT_OPENED 0
-OC_CURRENT_PHASE 102
-STACK_EXECUTED 0
- STACK_TOP 0
----------------------------------------------------------
---SELECT * FROM fsubmitorder(type,owner,qua_requ,qtt_requ,qua_prov,qtt_prov);
---USER:test_clientb
-SELECT * FROM market.fsubmitorder('limit','wb','b', 80,'a',40);
- id error
-+---------+---------
- 1 (0,)
-SELECT * FROM market.fsubmitorder('limit','wc','b', 20,'d',40);
- id error
-+---------+---------
- 2 (0,)
-SELECT * FROM market.fsubmitorder('limit','wd','d', 20,'a',40);
- id error
-+---------+---------
- 3 (0,)
---USER:test_clienta
-SELECT * FROM market.fsubmitorder('limit','wa','a', 40,'b',80);
- id error
-+---------+---------
- 4 (0,)
--- omega better for the trilateral exchange
--- it wins, the rest is used with a bilateral exchange
-
---------------------------------------------------------------------------------
-table: torder
- id oid own qtt_requ qua_requ typ qtt_prov qua_prov qtt own_id usr
-+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
- 1 1 wb 80 b limit 40 a 20 1test_clientb
- 2 2 wc 20 b limit 40 d 0 2test_clientb
- 3 3 wd 20 d limit 40 a 0 3test_clientb
- 4 4 wa 40 a limit 80 b 0 4test_clienta
-
---------------------------------------------------------------------------------
-table: tmsg
---------------------------------------------------------------------------------
- 1:Primitive id:1 from test_clientb: OK
-
- 2:Primitive id:2 from test_clientb: OK
-
- 3:Primitive id:3 from test_clientb: OK
-
- 4:Cycle id:1 Exchange id:3 for wd @test_clientb:
- 3:mvt_from wd @test_clientb : 40 'a' -> wa @test_clienta
- 2:mvt_to wd @test_clientb : 40 'd' <- wc @test_clientb
- stock id:3 remaining after exchange: 0 'a'
-
- 5:Cycle id:1 Exchange id:1 for wa @test_clienta:
- 1:mvt_from wa @test_clienta : 40 'b' -> wc @test_clientb
- 3:mvt_to wa @test_clienta : 40 'a' <- wd @test_clientb
- stock id:4 remaining after exchange: 40 'b'
-
- 6:Cycle id:1 Exchange id:2 for wc @test_clientb:
- 2:mvt_from wc @test_clientb : 40 'd' -> wd @test_clientb
- 1:mvt_to wc @test_clientb : 40 'b' <- wa @test_clienta
- stock id:2 remaining after exchange: 0 'd'
-
- 7:Cycle id:4 Exchange id:5 for wb @test_clientb:
- 5:mvt_from wb @test_clientb : 20 'a' -> wa @test_clienta
- 4:mvt_to wb @test_clientb : 40 'b' <- wa @test_clienta
- stock id:1 remaining after exchange: 20 'a'
-
- 8:Cycle id:4 Exchange id:4 for wa @test_clienta:
- 4:mvt_from wa @test_clienta : 40 'b' -> wb @test_clientb
- 5:mvt_to wa @test_clienta : 20 'a' <- wb @test_clientb
- stock id:4 remaining after exchange: 0 'b'
-
- 9:Primitive id:4 from test_clienta: OK
-
diff --git a/src/worker_ob.c b/src/worker_ob.c
index bcc12db..af2669e 100644
--- a/src/worker_ob.c
+++ b/src/worker_ob.c
@@ -1,382 +1,377 @@
/* -------------------------------------------------------------------------
*
* worker_ob.c
* Code based on worker_spi.c
*
 * This code connects to a database and launches two background workers.
 for i in [0,1], worker i does the following:
 while(true)
 dowait := market.workeri()
 if (dowait):
 wait(dowait) // dowait milliseconds
 These workers do nothing if the schema market is not installed.
 To force a restart of a bg_worker, send a SIGHUP signal to the worker process
*
* -------------------------------------------------------------------------
*/
#include "postgres.h"
/* These are always necessary for a bgworker */
#include "miscadmin.h"
#include "postmaster/bgworker.h"
#include "storage/ipc.h"
#include "storage/latch.h"
#include "storage/lwlock.h"
#include "storage/proc.h"
#include "storage/shmem.h"
/* these headers are used by this particular worker's code */
#include "access/xact.h"
#include "executor/spi.h"
#include "fmgr.h"
#include "lib/stringinfo.h"
#include "pgstat.h"
#include "utils/builtins.h"
#include "utils/snapmgr.h"
#include "tcop/utility.h"
-#define BGW_OPENCLOSE 1
-#define BGW_CONSUMESTACK 2
+#define BGW_NBWORKERS 2
+#define BGW_OPENCLOSE 0
+static char *worker_names[] = {"openclose","consumestack"};
// PG_MODULE_MAGIC;
void _PG_init(void);
/* flags set by signal handlers */
static volatile sig_atomic_t got_sighup = false;
static volatile sig_atomic_t got_sigterm = false;
/* GUC variable */
static char *worker_ob_database = "market";
/* others */
-static char *openclose = "openclose",
- *consumestack = "consumestack";
+
static char *worker_ob_user = "user_bo";
typedef struct worktable
{
- const char *schema;
const char *function_name;
- bool installed;
- int dowait;
+ int dowait;
} worktable;
/*
* Signal handler for SIGTERM
* Set a flag to let the main loop to terminate, and set our latch to wake
* it up.
*/
static void
worker_spi_sigterm(SIGNAL_ARGS)
{
int save_errno = errno;
got_sigterm = true;
if (MyProc)
SetLatch(&MyProc->procLatch);
errno = save_errno;
}
/*
* Signal handler for SIGHUP
* Set a flag to tell the main loop to reread the config file, and set
* our latch to wake it up.
*/
static void
worker_spi_sighup(SIGNAL_ARGS)
{
int save_errno = errno;
got_sighup = true;
if (MyProc)
SetLatch(&MyProc->procLatch);
errno = save_errno;
}
-
-static char* _get_worker_function_name(int index) {
- if(index == BGW_OPENCLOSE)
- return openclose;
- else if(index == BGW_CONSUMESTACK)
- return consumestack;
- else
- elog(FATAL, "bg_worker index unknown");
-}
-
-/*
- * Initialize workspace for a worker process: create the schema if it doesn't
- * already exist.
- */
-static void
-initialize_worker_ob(worktable *table)
-{
+static int _spi_exec_select_ret_int(StringInfoData buf) {
int ret;
int ntup;
bool isnull;
- StringInfoData buf;
- SetCurrentStatementStartTimestamp();
- StartTransactionCommand();
- SPI_connect();
- PushActiveSnapshot(GetTransactionSnapshot());
- pgstat_report_activity(STATE_RUNNING, "initializing spi_worker schema");
-
- /* XXX could we use CREATE SCHEMA IF NOT EXISTS? */
- initStringInfo(&buf);
- appendStringInfo(&buf, "select count(*) from pg_namespace where nspname = '%s'",
- table->schema);
-
- ret = SPI_execute(buf.data, true, 0);
+ ret = SPI_execute(buf.data, true, 1); // read_only -- one row returned
+ pfree(buf.data);
if (ret != SPI_OK_SELECT)
elog(FATAL, "SPI_execute failed: error code %d", ret);
+
if (SPI_processed != 1)
elog(FATAL, "not a singleton result");
- ntup = DatumGetInt64(SPI_getbinval(SPI_tuptable->vals[0],
+ ntup = DatumGetInt32(SPI_getbinval(SPI_tuptable->vals[0],
SPI_tuptable->tupdesc,
1, &isnull));
if (isnull)
- elog(FATAL, "null result");
-
- if (ntup != 0) {
- table->installed = true;
- elog(LOG, "%s function %s.%s installed",
- MyBgworkerEntry->bgw_name, table->schema, table->function_name);
- } else {
- table->installed = false;
- elog(LOG, "%s function %s.%s not installed",
- MyBgworkerEntry->bgw_name, table->schema, table->function_name);
- }
+ elog(FATAL, "null result");
+ return ntup;
+}
+
+static bool _test_market_installed() {
+ int ret;
+ StringInfoData buf;
+
+ initStringInfo(&buf);
+ appendStringInfo(&buf, "select count(*) from pg_namespace where nspname = 'market'");
+ ret = _spi_exec_select_ret_int(buf);
+ if(ret == 0)
+ return false;
+
+ resetStringInfo(&buf);
+ appendStringInfo(&buf, "select value from market.tvar where name = 'INSTALLED'");
+ ret = _spi_exec_select_ret_int(buf);
+ if(ret == 0)
+ return false;
+ return true;
+}
+
+/*
+ */
+static bool
+_worker_ob_installed()
+{
+ bool installed;
+
+ SetCurrentStatementStartTimestamp();
+ StartTransactionCommand();
+ SPI_connect();
+ PushActiveSnapshot(GetTransactionSnapshot());
+ pgstat_report_activity(STATE_RUNNING, "initializing spi_worker");
+
+ installed = _test_market_installed();
+
+ if (installed)
+ elog(LOG, "%s starting",MyBgworkerEntry->bgw_name);
+ else
+ elog(LOG, "%s waiting for installation",MyBgworkerEntry->bgw_name);
SPI_finish();
PopActiveSnapshot();
CommitTransactionCommand();
pgstat_report_activity(STATE_IDLE, NULL);
+ return installed;
}
static void _openclose_vacuum() {
- /* called in openclose bg_worker */
+ /* called by openclose bg_worker */
StringInfoData buf;
int ret;
initStringInfo(&buf);
appendStringInfo(&buf,"VACUUM FULL");
+ pgstat_report_activity(STATE_RUNNING, buf.data);
ret = SPI_execute(buf.data, false, 0);
+ pfree(buf.data);
if (ret != SPI_OK_UTILITY) // SPI_OK_UPDATE_RETURNING,SPI_OK_SELECT
		// TODO: check the return code
elog(FATAL, "cannot execute VACUUM FULL: error code %d",ret);
return;
}
static void
worker_ob_main(Datum main_arg)
{
int index = DatumGetInt32(main_arg);
worktable *table;
StringInfoData buf;
+ bool installed;
//char function_name[20];
- table = palloc(sizeof(worktable));
- table->schema = pstrdup("market");
- //sprintf(function_name, "worker%d", index);
- table->function_name = pstrdup(_get_worker_function_name(index));
+ table = palloc(sizeof(worktable));
+
+ table->function_name = pstrdup(worker_names[index]);
+ table->dowait = 0;
/* Establish signal handlers before unblocking signals. */
pqsignal(SIGHUP, worker_spi_sighup);
pqsignal(SIGTERM, worker_spi_sigterm);
/* We're now ready to receive signals */
BackgroundWorkerUnblockSignals();
/* Connect to our database */
if(!(worker_ob_database && *worker_ob_database))
elog(FATAL, "database name undefined");
BackgroundWorkerInitializeConnection(worker_ob_database, worker_ob_user);
-
-
- initialize_worker_ob(table);
-
- /*
- * Quote identifiers passed to us. Note that this must be done after
- * initialize_worker_ob, because that routine assumes the names are not
- * quoted.
- *
- * Note some memory might be leaked here.
- */
- table->schema = quote_identifier(table->schema);
- table->function_name = quote_identifier(table->function_name);
+ installed = _worker_ob_installed();
initStringInfo(&buf);
- appendStringInfo(&buf,"SELECT %s FROM %s.%s()",
- table->function_name, table->schema, table->function_name);
+ appendStringInfo(&buf,"SELECT %s FROM market.%s()",
+ table->function_name, table->function_name);
/*
* Main loop: do this until the SIGTERM handler tells us to terminate
*/
while (!got_sigterm)
{
int ret;
int rc;
int _worker_ob_naptime; // = worker_ob_naptime * 1000L;
- if(table->installed) // && !table->dowait)
+ if(installed) // && !table->dowait)
_worker_ob_naptime = table->dowait;
else
_worker_ob_naptime = 1000L; // 1 second
/*
* Background workers mustn't call usleep() or any direct equivalent:
* instead, they may wait on their process latch, which sleeps as
* necessary, but is awakened if postmaster dies. That way the
* background process goes away immediately in an emergency.
*/
+ /* done even if _worker_ob_naptime == 0 */
rc = WaitLatch(&MyProc->procLatch,
WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
_worker_ob_naptime );
ResetLatch(&MyProc->procLatch);
+
/* emergency bailout if postmaster has died */
if (rc & WL_POSTMASTER_DEATH)
proc_exit(1);
/*
* In case of a SIGHUP, just reload the configuration.
*/
if (got_sighup)
{
got_sighup = false;
ProcessConfigFile(PGC_SIGHUP);
- initialize_worker_ob(table);
+ installed = _worker_ob_installed();
}
- if( !table->installed) continue;
+ if( !installed) continue;
/*
* Start a transaction on which we can run queries. Note that each
* StartTransactionCommand() call should be preceded by a
* SetCurrentStatementStartTimestamp() call, which sets both the time
* for the statement we're about the run, and also the transaction
* start time. Also, each other query sent to SPI should probably be
* preceded by SetCurrentStatementStartTimestamp(), so that statement
* start time is always up to date.
*
* The SPI_connect() call lets us run queries through the SPI manager,
* and the PushActiveSnapshot() call creates an "active" snapshot
* which is necessary for queries to have MVCC data to work on.
*
* The pgstat_report_activity() call makes our activity visible
* through the pgstat views.
*/
SetCurrentStatementStartTimestamp();
StartTransactionCommand();
SPI_connect();
PushActiveSnapshot(GetTransactionSnapshot());
pgstat_report_activity(STATE_RUNNING, buf.data);
/* We can now execute queries via SPI */
ret = SPI_execute(buf.data, false, 0);
if (ret != SPI_OK_SELECT) // SPI_OK_UPDATE_RETURNING)
- elog(FATAL, "cannot execute %s.%s(): error code %d",
- table->schema, table->function_name, ret);
+ elog(FATAL, "cannot execute market.%s(): error code %d",
+ table->function_name, ret);
- if (SPI_processed > 0)
+ if (SPI_processed != 1) // number of tuple returned
+ elog(FATAL, "market.%s() returned %d tuples instead of one",
+ table->function_name, SPI_processed);
{
bool isnull;
int32 val;
val = DatumGetInt32(SPI_getbinval(SPI_tuptable->vals[0],
SPI_tuptable->tupdesc,
1, &isnull));
table->dowait = 0;
if (!isnull) {
- //elog(LOG, "%s: execution of %s.%s() returned %d",
- // MyBgworkerEntry->bgw_name,
- // table->schema, table->function_name, val);
if (val >=0)
table->dowait = val;
else {
//
if((val == -100) && (index == BGW_OPENCLOSE))
_openclose_vacuum();
}
}
}
/*
* And finish our transaction.
*/
SPI_finish();
PopActiveSnapshot();
CommitTransactionCommand();
pgstat_report_activity(STATE_IDLE, NULL);
}
proc_exit(0);
}
/*
* Entrypoint of this module.
*
* We register more than one worker process here, to demonstrate how that can
* be done.
*/
void
_PG_init(void)
{
BackgroundWorker worker;
unsigned int i;
/* get the configuration */
/*
DefineCustomIntVariable("worker_ob.naptime",
				"Minimum duration of wait time (in milliseconds).",
NULL,
&worker_ob_naptime,
100,
1,
INT_MAX,
PGC_SIGHUP,
0,
NULL,
NULL,
NULL); */
DefineCustomStringVariable("worker_ob.database",
"Name of the database.",
NULL,
&worker_ob_database,
"market",
PGC_SIGHUP, 0,
NULL,NULL,NULL);
/* set up common data for all our workers */
worker.bgw_flags = BGWORKER_SHMEM_ACCESS |
BGWORKER_BACKEND_DATABASE_CONNECTION;
worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
worker.bgw_restart_time = 60; // BGW_NEVER_RESTART;
worker.bgw_main = worker_ob_main;
/*
* Now fill in worker-specific data, and do the actual registrations.
*/
- for (i = 1; i <= 2; i++)
+ for (i = 0; i < BGW_NBWORKERS; i++)
{
- snprintf(worker.bgw_name, BGW_MAXLEN, "market.%s", _get_worker_function_name(i));
+ snprintf(worker.bgw_name, BGW_MAXLEN, "market.%s", worker_names[i]);
worker.bgw_main_arg = Int32GetDatum(i);
RegisterBackgroundWorker(&worker);
}
}
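The main loop of `worker_ob_main` above reduces to a small decision: sleep 1000 ms while the market schema is not installed, otherwise use the value returned by `market.<worker>()` as the next wait, with -100 triggering a `VACUUM FULL` in the openclose worker. A hedged sketch of that logic as a pure Python function (names are illustrative, not part of the extension):

```python
def worker_step(installed, returned_val, is_openclose):
    """Decision logic of one pass of worker_ob_main's loop.

    Returns (next_naptime_ms, do_vacuum):
      - schema not installed: poll again in 1000 ms;
      - market.<worker>() returned NULL: no wait;
      - a value >= 0 is the next wait in milliseconds;
      - -100 asks the openclose worker to run VACUUM FULL.
    """
    if not installed:
        return 1000, False
    if returned_val is None:
        return 0, False
    if returned_val >= 0:
        return returned_val, False
    return 0, returned_val == -100 and is_openclose
```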
|
olivierch/openBarter
|
c4d501baf8e9cd602945083f84cd8985ae2f0b99
|
corrections
|
diff --git a/src/test/py/run.py b/src/test/py/run.py
index 405010a..7b4a4ba 100755
--- a/src/test/py/run.py
+++ b/src/test/py/run.py
@@ -1,303 +1,213 @@
# -*- coding: utf-8 -*-
'''
Test framework for tu_* test cases
***************************************************************************
	setup:
		reset_market.sql
	submitted:
		list of primitives t_*.sql
	results:
		state of the order book
		state of tmsg
	comparison expected/obtained
	in src/test/
		run.py
		reset_market.sql
		sql/t_*.sql
		expected/t_*.res
		obtained/t_*.res
	loop for each t_*.sql:
		reset_market.sql
		execute t_*.sql
		dump the results into obtained/t_*.res
		compare expected and obtained
'''
import sys,os,time,logging
import psycopg2
import psycopg2.extensions
import traceback
import srvob_conf
import molet
import utilt
PARLEN=80
prtest = utilt.PrTest(PARLEN,'=')
titre_test = "UNDEFINED"
options = None
def tests():
global titre_test
curdir = os.path.abspath(__file__)
curdir = os.path.dirname(curdir)
curdir = os.path.dirname(curdir)
sqldir = os.path.join(curdir,'sql')
molet.mkdir(os.path.join(curdir,'results'),ignoreWarning = True)
molet.mkdir(os.path.join(curdir,'expected'),ignoreWarning = True)
try:
- wait_for_true(0.1,"SELECT value=102,value FROM market.tvar WHERE name='OC_CURRENT_PHASE'",
+ utilt.wait_for_true(srvob_conf.dbBO,0.1,"SELECT value=102,value FROM market.tvar WHERE name='OC_CURRENT_PHASE'",
msg="Waiting for market opening")
except psycopg2.OperationalError,e:
        print "Please adjust DB_NAME,DB_USER,DB_PWD,DB_PORT parameters of the file src/test/py/srvob_conf.py"
print "The test program could not connect to the market"
exit(1)
if options.test is None:
_fts = [f for f in os.listdir(sqldir) if f.startswith('tu_') and f[-4:]=='.sql']
_fts.sort(lambda x,y: cmp(x,y))
else:
_nt = options.test + '.sql'
_fts = os.path.join(sqldir,_nt)
if not os.path.exists(_fts):
print 'This test \'%s\' was not found' % _fts
return
else:
_fts = [_nt]
_tok,_terr,_num_test = 0,0,0
- prtest.title('running regression tests on %s' % (srvob_conf.DB_NAME,))
+ prtest.title('running tests on database "%s"' % (srvob_conf.DB_NAME,))
#print '='*PARLEN
print ''
print 'Num\tStatus\tName'
print ''
    for file_test in _fts: # iteration over test cases
_nom_test = file_test[:-4]
_terr +=1
_num_test +=1
file_result = file_test[:-4]+'.res'
_fte = os.path.join(curdir,'results',file_result)
_fre = os.path.join(curdir,'expected',file_result)
with open(_fte,'w') as f:
cur = None
exec_script(cur,'reset_market.sql',None)
exec_script(cur,file_test,f)
- wait_for_true(20,"SELECT market.fstackdone()")
+ utilt.wait_for_true(srvob_conf.dbBO,20,"SELECT market.fstackdone()")
+
+ dump = utilt.Dumper(srvob_conf.dbBO,options)
conn = molet.DbData(srvob_conf.dbBO)
try:
with molet.DbCursor(conn) as cur:
- dump_torder(cur,f)
- dump_tmsg(cur,f)
+ dump.torder(cur,f)
+ dump.tmsg(cur,f)
finally:
conn.close()
if(os.path.exists(_fre)):
- if(files_clones(_fte,_fre)):
+ if(utilt.files_clones(_fte,_fre)):
_tok +=1
_terr -=1
print '%i\tY\t%s\t%s' % (_num_test,_nom_test,titre_test)
else:
print '%i\tN\t%s\t%s' % (_num_test,_nom_test,titre_test)
else:
print '%i\t?\t%s\t%s' % (_num_test,_nom_test,titre_test)
# display global results
print ''
    print 'Test status: (Y) expected == results, (N) expected != results, (F) failed, (?) expected undefined'
prtest.line()
if(_terr == 0):
prtest.center('\tAll %i tests passed' % _tok)
else:
prtest.center('\t%i tests KO, %i passed' % (_terr,_tok))
prtest.line()
def get_prefix_file(fm):
return '.'.join(fm.split('.')[:-1])
SEPARATOR = '\n'+'-'*PARLEN +'\n'
def exec_script(cur,fn,fdr):
global titre_test
fn = os.path.join('sql',fn)
if( not os.path.exists(fn)):
raise ValueError('The script %s is not found' % fn)
cur_login = 'admin'
titre_test = None
with open(fn,'r') as f:
for line in f:
line = line.strip()
if len(line) == 0:
continue
if line.startswith('--'):
if titre_test is None:
titre_test = line
elif line.startswith('--USER:'):
cur_login = line[7:].strip()
if fdr:
fdr.write(line+'\n')
else:
if fdr:
fdr.write(line+'\n')
execinst(cur_login,line,fdr,cur)
return
def execinst(cur_login,sql,fdr,cursor):
+ global options
+
if cur_login == 'admin':
cur_login = None
conn = molet.DbData(srvob_conf.dbBO,login = cur_login)
+ dump = utilt.Dumper(srvob_conf.dbBO,options)
try:
with molet.DbCursor(conn,exit = True) as _cur:
_cur.execute(sql)
if fdr:
- dump_cur(_cur,fdr)
+ dump.cur(_cur,fdr)
finally:
conn.close()
'''
yorder not shown:
pos_requ box, -- box (point(lat,lon),point(lat,lon))
pos_prov box, -- box (point(lat,lon),point(lat,lon))
dist float8,
carre_prov box -- carre_prov @> pos_requ
'''
-def dump_torder(cur,fdr):
- fdr.write(SEPARATOR)
- fdr.write('table: torder\n')
- cur.execute('SELECT * FROM market.vord order by id asc')
- dump_cur(cur,fdr)
-
-def dump_cur(cur,fdr,_len=10):
- if(cur is None): return
- cols = [e.name for e in cur.description]
- row_format = ('{:>'+str(_len)+'}')*len(cols)
- fdr.write(row_format.format(*cols)+'\n')
- fdr.write(row_format.format(*(['+'+'-'*(_len-1)]*len(cols)))+'\n')
- for res in cur:
- fdr.write(row_format.format(*res)+'\n')
- return
-
-import json
-
-def dump_tmsg(cur,fdr):
- fdr.write(SEPARATOR)
- fdr.write('table: tmsg')
- fdr.write(SEPARATOR)
-
- cur.execute('SELECT id,typ,usr,jso FROM market.tmsg order by id asc')
- for res in cur:
- _id,typ,usr,jso = res
- _jso = json.loads(jso)
- if typ == 'response':
- if _jso['error']['code']==None:
- _msg = 'Primitive id:%i from %s: OK\n' % (_jso['id'],usr)
- else:
- _msg = 'Primitive id:%i from %s: ERROR(%i,%s)\n' % (usr,_jso['id'],
- _jso['error']['code'],_jso['error']['reason'])
- elif typ == 'exchange':
-
- _fmt = '''Cycle id:%i Exchange id:%i for %s @%s:
- \t%i:mvt_from %s @%s : %i \'%s\' -> %s @%s
- \t%i:mvt_to %s @%s : %i \'%s\' <- %s @%s
- \tstock id:%i remaining after exchange: %i \'%s\' \n'''
- _dat = (
- _jso['cycle'],_jso['id'],_jso['stock']['own'],usr,
- _jso['mvt_from']['id'],_jso['stock']['own'],usr,_jso['mvt_from']['qtt'],_jso['mvt_from']['nat'],_jso['mvt_from']['own'],_jso['mvt_from']['usr'],
- _jso['mvt_to']['id'], _jso['stock']['own'],usr,_jso['mvt_to']['qtt'],_jso['mvt_to']['nat'],_jso['mvt_to']['own'],_jso['mvt_to']['usr'],
- _jso['stock']['id'],_jso['stock']['qtt'],_jso['stock']['nat'])
- _msg = _fmt %_dat
- else:
- _msg = str(res)
-
- fdr.write('\t%i:'%_id+_msg+'\n')
- if options.verbose:
- print jso
- return
-
-import filecmp
-def files_clones(f1,f2):
- #res = filecmp.cmp(f1,f2)
- return (md5sum(f1) == md5sum(f2))
-
-import hashlib
-def md5sum(filename, blocksize=65536):
- hash = hashlib.md5()
- with open(filename, "r+b") as f:
- for block in iter(lambda: f.read(blocksize), ""):
- hash.update(block)
- return hash.hexdigest()
-
-def wait_for_true(delai,sql,msg=None):
- _i = 0;
- _w = 0;
-
- while True:
- _i +=1
-
- conn = molet.DbData(srvob_conf.dbBO)
- try:
- with molet.DbCursor(conn) as cur:
- cur.execute(sql)
- r = cur.fetchone()
- # print r
- if r[0] == True:
- break
- finally:
- conn.close()
-
- if msg is None:
- pass
- elif(_i%10)==0:
- print msg
-
- _a = 0.1;
- _w += _a;
-
- if _w > delai: # seconds
- raise ValueError('After %f seconds, %s != True' % (_w,sql))
- time.sleep(_a)
'''---------------------------------------------------------------------------
arguments
---------------------------------------------------------------------------'''
from optparse import OptionParser
import os
def main():
global options
usage = """usage: %prog [options]
tests """
parser = OptionParser(usage)
parser.add_option("-t","--test",action="store",type="string",dest="test",help="test",
default= None)
parser.add_option("-v","--verbose",action="store_true",dest="verbose",help="verbose",default=False)
(options, args) = parser.parse_args()
# um = os.umask(0177) # u=rw,g=,o=
tests()
if __name__ == "__main__":
main()
diff --git a/src/test/py/utilt.py b/src/test/py/utilt.py
index 61b16ec..47e7052 100644
--- a/src/test/py/utilt.py
+++ b/src/test/py/utilt.py
@@ -1,23 +1,133 @@
# -*- coding: utf-8 -*-
import string
class PrTest(object):
+ ''' results printing '''
+ def __init__(self,parlen,sep):
+ self.parlen = parlen+ parlen%2
+ self.sep = sep
- def __init__(self,parlen,sep):
- self.parlen = parlen+ parlen%2
- self.sep = sep
+ def title(self,title):
+ _l = len(title)
+ _p = max(_l%2 +_l,40)
+ _x = self.parlen -_p
+ if (_x > 2):
+ print (_x/2)*self.sep + string.center(title,_p) + (_x/2)*self.sep
+ else:
+ print string.center(text,self.parlen)
- def title(self,title):
- _l = len(title)
- _p = max(_l%2 +_l,40)
- _x = self.parlen -_p
- if (_x > 2):
- print (_x/2)*self.sep + string.center(title,_p) + (_x/2)*self.sep
- else:
- print string.center(text,self.parlen)
+ def line(self):
+ print self.parlen*self.sep
- def line(self):
- print self.parlen*self.sep
+ def center(self,text):
+ print string.center(text,self.parlen)
- def center(self,text):
- print string.center(text,self.parlen)
\ No newline at end of file
+############################################################################
+''' file comparison '''
+import filecmp
+def files_clones(f1,f2):
+ #res = filecmp.cmp(f1,f2)
+ return (md5sum(f1) == md5sum(f2))
+
+import hashlib
+def md5sum(filename, blocksize=65536):
+ hash = hashlib.md5()
+ with open(filename, "r+b") as f:
+ for block in iter(lambda: f.read(blocksize), ""):
+ hash.update(block)
+ return hash.hexdigest()
+
+############################################################################
+SEPARATOR = '\n'+'-'*80 +'\n'
+import json
+
+class Dumper(object):
+
+ def __init__(self,conf,options):
+ self.options =options
+ self.conf = conf
+
+ def torder(self,cur,fdr):
+ fdr.write(SEPARATOR)
+ fdr.write('table: torder\n')
+ cur.execute('SELECT * FROM market.vord order by id asc')
+ self.cur(cur,fdr)
+ return
+
+ def cur(self,cur,fdr,_len=10):
+ if(cur is None): return
+ cols = [e.name for e in cur.description]
+ row_format = ('{:>'+str(_len)+'}')*len(cols)
+ fdr.write(row_format.format(*cols)+'\n')
+ fdr.write(row_format.format(*(['+'+'-'*(_len-1)]*len(cols)))+'\n')
+ for res in cur:
+ fdr.write(row_format.format(*res)+'\n')
+ return
+
+ def tmsg(self,cur,fdr):
+ fdr.write(SEPARATOR)
+ fdr.write('table: tmsg')
+ fdr.write(SEPARATOR)
+
+ cur.execute('SELECT id,typ,usr,jso FROM market.tmsg order by id asc')
+ for res in cur:
+ _id,typ,usr,jso = res
+ _jso = json.loads(jso)
+ if typ == 'response':
+ if _jso['error']['code']==None:
+ _msg = 'Primitive id:%i from %s: OK\n' % (_jso['id'],usr)
+ else:
+ _msg = 'Primitive id:%i from %s: ERROR(%i,%s)\n' % (usr,_jso['id'],
+ _jso['error']['code'],_jso['error']['reason'])
+ elif typ == 'exchange':
+
+ _fmt = '''Cycle id:%i Exchange id:%i for %s @%s:
+ \t%i:mvt_from %s @%s : %i \'%s\' -> %s @%s
+ \t%i:mvt_to %s @%s : %i \'%s\' <- %s @%s
+ \tstock id:%i remaining after exchange: %i \'%s\' \n'''
+ _dat = (
+ _jso['cycle'],_jso['id'],_jso['stock']['own'],usr,
+ _jso['mvt_from']['id'],_jso['stock']['own'],usr,_jso['mvt_from']['qtt'],_jso['mvt_from']['nat'],_jso['mvt_from']['own'],_jso['mvt_from']['usr'],
+ _jso['mvt_to']['id'], _jso['stock']['own'],usr,_jso['mvt_to']['qtt'],_jso['mvt_to']['nat'],_jso['mvt_to']['own'],_jso['mvt_to']['usr'],
+ _jso['stock']['id'],_jso['stock']['qtt'],_jso['stock']['nat'])
+ _msg = _fmt %_dat
+ else:
+ _msg = str(res)
+
+ fdr.write('\t%i:'%_id+_msg+'\n')
+ if self.options.verbose:
+ print jso
+ return
+
+############################################################################
+import molet
+import time
+def wait_for_true(conf,delai,sql,msg=None):
+ _i = 0;
+ _w = 0;
+
+ while True:
+ _i +=1
+
+ conn = molet.DbData(conf)
+ try:
+ with molet.DbCursor(conn) as cur:
+ cur.execute(sql)
+ r = cur.fetchone()
+ # print r
+ if r[0] == True:
+ break
+ finally:
+ conn.close()
+
+ if msg is None:
+ pass
+ elif(_i%10)==0:
+ print msg
+
+ _a = 0.1;
+ _w += _a;
+
+ if _w > delai: # seconds
+ raise ValueError('After %f seconds, %s != True' % (_w,sql))
+ time.sleep(_a)
\ No newline at end of file
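The polling helper added to utilt.py can be exercised without a database by injecting the check as a callable; a minimal Python 3 rendering (the `check` parameter replacing the hard-wired connection and SQL is an assumption of this sketch):

```python
import time

def wait_for_true(check, delay, poll=0.1, msg=None):
    """Poll check() every `poll` seconds until it returns True.

    Mirrors utilt.wait_for_true: print `msg` every 10th attempt, and
    raise ValueError once more than `delay` seconds have been waited.
    Returns the number of attempts needed.
    """
    waited = 0.0
    attempts = 0
    while True:
        attempts += 1
        if check():
            return attempts
        if msg is not None and attempts % 10 == 0:
            print(msg)
        waited += poll
        if waited > delay:
            raise ValueError('After %f seconds, condition != True' % waited)
        time.sleep(poll)
```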
|
olivierch/openBarter
|
89d3049c009e9a1295cf9d47d991288f15a12c8b
|
end for v2.0.2
|
diff --git a/META.json b/META.json
index 6426f2e..bcaef26 100644
--- a/META.json
+++ b/META.json
@@ -1,51 +1,51 @@
{
"name": "openbarter",
"abstract": "Multilateral agreement production engine.",
   "description": "openBarter is a central limit order book that performs competition on price even when money is not used to express prices. It is a barter market allowing cyclic exchange between two partners (buyer and seller) or more in a single transaction. It provides a fluidity equivalent to that of a regular market place.",
"version": "0.1.0",
"maintainer": [
"Olivier Chaussavoine <[email protected]>"
],
"license": "gpl_3",
"prereqs": {
"runtime": {
"requires": {
"plpgsql": "1.0",
"PostgreSQL": "9.3.2"
}
}
},
"provides": {
"openbarter": {
"file": "src/sql/model.sql",
"docfile": "doc/doc-ob.odt",
- "version": "2.0.2",
+ "version": "0.0.1",
"abstract": "Multilateral agreement engine."
}
},
"resources": {
"bugtracker": {
"web": "http://github.com/olivierch/openBarter/issues"
},
"repository": {
"url": "git://github.com/olivierch/openBarter.git",
"web": "http://olivierch.github.com/openBarter",
"type": "git"
}
},
"generated_by": "Olivier Chaussavoine",
"meta-spec": {
"version": "1.0.0",
"url": "http://pgxn.org/meta/spec.txt"
},
"tags": [
      "cyclic exchange","non-bilateral","multilateral",
"central","limit","order","book",
"barter",
"market"
],
"release_status": "stable"
}
diff --git a/src/Makefile b/src/Makefile
index c291edb..aff7e74 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -1,38 +1,38 @@
MODULE_big = flowf
OBJS = flowm.o yflow.o yflowparse.o yorder.o flowc.o earthrad.o worker_ob.o
EXTENSION = flowf
-DATA = flowf--2.0.sql flowf--unpackaged--2.0.sql
+DATA = flowf--0.1.sql flowf--unpackaged--0.1.sql
REGRESS = testflow_1 testflow_2 testflow_2a
EXTRA_CLEAN = yflowparse.c yflowscan.c test/results/*.res test/py/*.pyc ../doc/*.pdf
PGXS := $(shell pg_config --pgxs)
include $(PGXS)
yflowparse.o: yflowscan.c
yflowparse.c: yflowparse.y
ifdef BISON
$(BISON) $(BISONFLAGS) -o $@ $<
else
@$(missing) bison $< $@
endif
yflowscan.c: yflowscan.l
ifdef FLEX
$(FLEX) $(FLEXFLAGS) -o'$@' $<
else
@$(missing) flex $< $@
endif
test: installcheck test/py/run.py test/*.sql
cd test; python py/run.py; cd ..
doc:
soffice --invisible --norestore --convert-to pdf --outdir ../doc ../doc/*.odt
diff --git a/src/flowf--2.0.sql b/src/flowf--0.1.sql
similarity index 100%
rename from src/flowf--2.0.sql
rename to src/flowf--0.1.sql
diff --git a/src/flowf--unpackaged--2.0.sql b/src/flowf--unpackaged--0.1.sql
similarity index 100%
rename from src/flowf--unpackaged--2.0.sql
rename to src/flowf--unpackaged--0.1.sql
diff --git a/src/flowf.control b/src/flowf.control
index 96c5588..c14b816 100644
--- a/src/flowf.control
+++ b/src/flowf.control
@@ -1,5 +1,5 @@
# flowf extension
comment = 'data type for cycle of orders'
-default_version = '2.0'
+default_version = '0.1'
module_pathname = '$libdir/flowf'
relocatable = true
diff --git a/src/sql/roles.sql b/src/sql/roles.sql
index 504a9c8..f2194f7 100644
--- a/src/sql/roles.sql
+++ b/src/sql/roles.sql
@@ -1,95 +1,95 @@
\set ECHO none
/* script executed for the whole cluster */
SET client_min_messages = warning;
SET log_error_verbosity = terse;
/* flowf extension */
-- drop extension if exists btree_gin cascade;
-- create extension btree_gin with version '1.0';
DROP EXTENSION IF EXISTS flowf;
-CREATE EXTENSION flowf VERSION '2.0';
+CREATE EXTENSION flowf WITH VERSION '0.1';
--------------------------------------------------------------------------------
CREATE FUNCTION _create_role(_role text) RETURNS int AS $$
BEGIN
BEGIN
EXECUTE 'CREATE ROLE ' || _role;
EXCEPTION WHEN duplicate_object THEN
NULL;
END;
EXECUTE 'ALTER ROLE ' || _role || ' NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT NOREPLICATION';
RETURN 0;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
/* definition of roles
-- role_com ----> role_co --------->role_client---->clientA
| \-->clientB
|\-> role_co_closed
|
\-> role_bo---->user_bo
access by clients can be disabled/enabled with a single command:
REVOKE role_co FROM role_client
GRANT role_co TO role_client
opening/closing of market is performed by switching role_client
between role_co and role_co_closed
same thing for role batch with role_bo:
REVOKE role_bo FROM user_bo
GRANT role_bo TO user_bo
*/
--------------------------------------------------------------------------------
select _create_role('prod'); -- owner of market objects
ALTER ROLE prod WITH createrole;
/* so that prod can modify roles at opening and closing. */
select _create_role('role_com');
SELECT _create_role('role_co'); -- when market is opened
GRANT role_com TO role_co;
SELECT _create_role('role_co_closed'); -- when market is closed
GRANT role_com TO role_co_closed;
SELECT _create_role('role_client');
GRANT role_co_closed TO role_client; -- market phase 101
-- role_com ---> role_bo----> user_bo
SELECT _create_role('role_bo');
GRANT role_com TO role_bo;
-- ALTER ROLE role_bo INHERIT;
SELECT _create_role('user_bo');
GRANT role_bo TO user_bo;
-- two connections are allowed for user_bo
ALTER ROLE user_bo WITH LOGIN CONNECTION LIMIT 2;
--------------------------------------------------------------------------------
select _create_role('test_clienta');
ALTER ROLE test_clienta WITH login;
GRANT role_client TO test_clienta;
select _create_role('test_clientb');
ALTER ROLE test_clientb WITH login;
GRANT role_client TO test_clientb;
select _create_role('test_clientc');
ALTER ROLE test_clientc WITH login;
GRANT role_client TO test_clientc;
select _create_role('test_clientd');
ALTER ROLE test_clientd WITH login;
GRANT role_client TO test_clientd;
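As the comment block in roles.sql explains, opening and closing the market amounts to swapping role_client between role_co and role_co_closed with a GRANT/REVOKE pair. A hypothetical helper that emits those statements (the pairing of a REVOKE with each GRANT is inferred, not shown verbatim in the script):

```python
def market_phase_sql(open_market):
    """Return the statements that switch role_client between
    role_co (market open) and role_co_closed (market closed)."""
    if open_market:
        return ["REVOKE role_co_closed FROM role_client",
                "GRANT role_co TO role_client"]
    return ["REVOKE role_co FROM role_client",
            "GRANT role_co_closed TO role_client"]
```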
|
olivierch/openBarter
|
878e838dd08de4f0f2f5db8b74873ce06732dcbc
|
preparation for v2.0.2
|
diff --git a/META.json b/META.json
index bcaef26..6426f2e 100644
--- a/META.json
+++ b/META.json
@@ -1,51 +1,51 @@
{
"name": "openbarter",
"abstract": "Multilateral agreement production engine.",
   "description": "openBarter is a central limit order book that performs competition on price even when money is not used to express prices. It is a barter market allowing cyclic exchange between two partners (buyer and seller) or more in a single transaction. It provides a fluidity equivalent to that of a regular market place.",
"version": "0.1.0",
"maintainer": [
"Olivier Chaussavoine <[email protected]>"
],
"license": "gpl_3",
"prereqs": {
"runtime": {
"requires": {
"plpgsql": "1.0",
"PostgreSQL": "9.3.2"
}
}
},
"provides": {
"openbarter": {
"file": "src/sql/model.sql",
"docfile": "doc/doc-ob.odt",
- "version": "0.0.1",
+ "version": "2.0.2",
"abstract": "Multilateral agreement engine."
}
},
"resources": {
"bugtracker": {
"web": "http://github.com/olivierch/openBarter/issues"
},
"repository": {
"url": "git://github.com/olivierch/openBarter.git",
"web": "http://olivierch.github.com/openBarter",
"type": "git"
}
},
"generated_by": "Olivier Chaussavoine",
"meta-spec": {
"version": "1.0.0",
"url": "http://pgxn.org/meta/spec.txt"
},
"tags": [
      "cyclic exchange","non-bilateral","multilateral",
"central","limit","order","book",
"barter",
"market"
],
"release_status": "stable"
}
diff --git a/src/Makefile b/src/Makefile
index aff7e74..c291edb 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -1,38 +1,38 @@
MODULE_big = flowf
OBJS = flowm.o yflow.o yflowparse.o yorder.o flowc.o earthrad.o worker_ob.o
EXTENSION = flowf
-DATA = flowf--0.1.sql flowf--unpackaged--0.1.sql
+DATA = flowf--2.0.sql flowf--unpackaged--2.0.sql
REGRESS = testflow_1 testflow_2 testflow_2a
EXTRA_CLEAN = yflowparse.c yflowscan.c test/results/*.res test/py/*.pyc ../doc/*.pdf
PGXS := $(shell pg_config --pgxs)
include $(PGXS)
yflowparse.o: yflowscan.c
yflowparse.c: yflowparse.y
ifdef BISON
$(BISON) $(BISONFLAGS) -o $@ $<
else
@$(missing) bison $< $@
endif
yflowscan.c: yflowscan.l
ifdef FLEX
$(FLEX) $(FLEXFLAGS) -o'$@' $<
else
@$(missing) flex $< $@
endif
test: installcheck test/py/run.py test/*.sql
cd test; python py/run.py; cd ..
doc:
soffice --invisible --norestore --convert-to pdf --outdir ../doc ../doc/*.odt
diff --git a/src/flowf--0.1.sql b/src/flowf--2.0.sql
similarity index 100%
rename from src/flowf--0.1.sql
rename to src/flowf--2.0.sql
diff --git a/src/flowf--unpackaged--0.1.sql b/src/flowf--unpackaged--2.0.sql
similarity index 100%
rename from src/flowf--unpackaged--0.1.sql
rename to src/flowf--unpackaged--2.0.sql
diff --git a/src/flowf.control b/src/flowf.control
index c14b816..96c5588 100644
--- a/src/flowf.control
+++ b/src/flowf.control
@@ -1,5 +1,5 @@
# flowf extension
comment = 'data type for cycle of orders'
-default_version = '0.1'
+default_version = '2.0'
module_pathname = '$libdir/flowf'
relocatable = true
diff --git a/src/sql/roles.sql b/src/sql/roles.sql
index f2194f7..504a9c8 100644
--- a/src/sql/roles.sql
+++ b/src/sql/roles.sql
@@ -1,95 +1,95 @@
\set ECHO none
/* script executed for the whole cluster */
SET client_min_messages = warning;
SET log_error_verbosity = terse;
/* flowf extension */
-- drop extension if exists btree_gin cascade;
-- create extension btree_gin with version '1.0';
DROP EXTENSION IF EXISTS flowf;
-CREATE EXTENSION flowf WITH VERSION '0.1';
+CREATE EXTENSION flowf VERSION '2.0';
--------------------------------------------------------------------------------
CREATE FUNCTION _create_role(_role text) RETURNS int AS $$
BEGIN
BEGIN
EXECUTE 'CREATE ROLE ' || _role;
EXCEPTION WHEN duplicate_object THEN
NULL;
END;
EXECUTE 'ALTER ROLE ' || _role || ' NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT NOREPLICATION';
RETURN 0;
END;
$$ LANGUAGE PLPGSQL;
--------------------------------------------------------------------------------
/* definition of roles
-- role_com ----> role_co --------->role_client---->clientA
| \-->clientB
|\-> role_co_closed
|
\-> role_bo---->user_bo
access by clients can be disabled/enabled with a single command:
REVOKE role_co FROM role_client
GRANT role_co TO role_client
opening/closing of market is performed by switching role_client
between role_co and role_co_closed
same thing for role batch with role_bo:
REVOKE role_bo FROM user_bo
GRANT role_bo TO user_bo
*/
--------------------------------------------------------------------------------
select _create_role('prod'); -- owner of market objects
ALTER ROLE prod WITH createrole;
/* so that prod can modify roles at opening and closing. */
select _create_role('role_com');
SELECT _create_role('role_co'); -- when market is opened
GRANT role_com TO role_co;
SELECT _create_role('role_co_closed'); -- when market is closed
GRANT role_com TO role_co_closed;
SELECT _create_role('role_client');
GRANT role_co_closed TO role_client; -- market phase 101
-- role_com ---> role_bo----> user_bo
SELECT _create_role('role_bo');
GRANT role_com TO role_bo;
-- ALTER ROLE role_bo INHERIT;
SELECT _create_role('user_bo');
GRANT role_bo TO user_bo;
-- two connections are allowed for user_bo
ALTER ROLE user_bo WITH LOGIN CONNECTION LIMIT 2;
--------------------------------------------------------------------------------
select _create_role('test_clienta');
ALTER ROLE test_clienta WITH login;
GRANT role_client TO test_clienta;
select _create_role('test_clientb');
ALTER ROLE test_clientb WITH login;
GRANT role_client TO test_clientb;
select _create_role('test_clientc');
ALTER ROLE test_clientc WITH login;
GRANT role_client TO test_clientc;
select _create_role('test_clientd');
ALTER ROLE test_clientd WITH login;
GRANT role_client TO test_clientd;
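
The roles.sql hunk above opens and closes the market by switching `role_client`'s membership between `role_co` and `role_co_closed` with a single REVOKE/GRANT pair. A minimal Python sketch of the statement builder (the function name is illustrative, not part of the repository):

```python
# Sketch: build the SQL that switches the market between open and closed,
# mirroring the role design in src/sql/roles.sql where role_client inherits
# either role_co (open) or role_co_closed (closed), never both.
def market_phase_sql(open_market):
    if open_market:
        grant, revoke = "role_co", "role_co_closed"
    else:
        grant, revoke = "role_co_closed", "role_co"
    return [
        "REVOKE %s FROM role_client;" % revoke,
        "GRANT %s TO role_client;" % grant,
    ]
```

Issuing the two statements in that order means clients never hold both phases at once.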
|
olivierch/openBarter
|
d314fc969d18c521d4f2d3c3bcf7b2e0ec066f45
|
correction of the release script
|
diff --git a/doc/release-script b/doc/release-script
index bc907db..94a7d37 100755
--- a/doc/release-script
+++ b/doc/release-script
@@ -1,65 +1,75 @@
#!/bin/bash
# RELEASE AUTOMATION
# **************************
# update VERSION-X,VERSION-Y,VERSION-Z in src/sql/tables.sql
# here 0.1.3
# cd openbarter
# ./doc/release-script 0.1.2 > ~/.bash/do-release_0.1.3
# chmod u+x ~/.bash/do-release_0.1.3
# do-release_0.1.3
#
# upload to pgxn
# https://manager.pgxn.org/upload
# (olivierch,...)
# upload the file ../release/openbarter-0.1.3
if [ "$#" -ne 1 ]; then echo "one parameter required: the version of the previous release"; exit 1; fi
last="$1"
if [ ! -d "doc" ]; then echo "must be run from the root";exit 1; fi
# read the version from src/sql/tables.sql
VERSION=`cat src/sql/tables.sql | grep "('VERSION-X',[0-9].*),('VERSION-Y',[0-9].*),('VERSION-Z',[0-9].*)," | sed "s/('VERSION-[X-Z]',//g" |sed "s/),/ /g" `
ARR=($VERSION)
VERSION_X=${ARR[0]}
VERSION_Y=${ARR[1]}
VERSION_Z=${ARR[2]}
NEW_VERSION=`echo $VERSION_X.$VERSION_Y.$VERSION_Z`
NEW_VERS=$VERSION_X.$VERSION_Y
echo "cp META.json META.json.orig"
echo "sed 's,0.0.1,$NEW_VERSION,g' META.json.orig > META.json"
+
echo "cp openbarter.control openbarter.control.orig"
echo "sed 's,0.0.1,$NEW_VERSION,g' openbarter.control.orig > openbarter.control"
+
echo "cp src/Makefile src/Makefile.orig"
echo "cat src/Makefile.orig | sed \"s/flowf--0.1.sql flowf--unpackaged--0.1.sql/flowf--$VERSION_X.$VERSION_Y.sql flowf--unpackaged--$VERSION_X.$VERSION_Y.sql/\" > src/Makefile"
+
+echo "cp src/flowf.control src/flowf.control.orig"
+echo "cat src/flowf.control.orig | sed \"s/0.1/$VERSION_X.$VERSION_Y/\" > src/flowf.control"
+
echo "cp src/sql/roles.sql src/sql/roles.sql.orig"
echo "cat src/sql/roles.sql.orig | sed \"s/flowf WITH VERSION '0.1'/flowf VERSION '$VERSION_X.$VERSION_Y'/\" > src/sql/roles.sql"
-echo "git add src/Makefile META.json openbarter.control src/sql/roles.sql"
+
+echo "git add src/Makefile META.json openbarter.control src/sql/roles.sql src/flowf.control"
echo "git mv src/flowf--0.1.sql src/flowf--$VERSION_X.$VERSION_Y.sql"
echo "git mv src/flowf--unpackaged--0.1.sql src/flowf--unpackaged--$VERSION_X.$VERSION_Y.sql"
echo "git commit -a -m 'preparation for v$NEW_VERSION' "
echo "git tag -s v$NEW_VERSION -m 'release v$NEW_VERSION'"
dirnew="../release/openbarter-$NEW_VERSION"
namenew="openbarter-$NEW_VERSION"
echo "mkdir -p $dirnew"
echo "git archive --format=tar --prefix=$namenew/ v$NEW_VERSION | gzip -9 > $dirnew/$namenew.tar.gz"
echo "# git archive --format zip --prefix=$namenew/ v$new --output $dirnew/$namenew.zip"
echo "soffice --invisible --norestore --convert-to pdf --outdir ./doc ./doc/*.odt"
for DOC in `ls doc/*.odt`
do
RAD=`echo $DOC | sed "s/doc\/\([^./]*\).odt/\1/"`
echo "mv doc/$RAD.pdf $dirnew/$namenew-$RAD.pdf"
done
echo "git log --no-merges v$NEW_VERSION ^v$last > $dirnew/ChangeLog-$NEW_VERSION"
echo "git shortlog --no-merges v$NEW_VERSION ^v$last > $dirnew/ShortLog-$NEW_VERSION"
echo "git diff --stat --summary -M v$last v$NEW_VERSION > $dirnew/diffstat-$NEW_VERSION"
echo "# git push --tags"
#
echo "git mv src/flowf--$VERSION_X.$VERSION_Y.sql src/flowf--0.1.sql"
echo "git mv src/flowf--unpackaged--$VERSION_X.$VERSION_Y.sql src/flowf--unpackaged--0.1.sql"
+
echo "mv src/Makefile.orig src/Makefile"
echo "mv openbarter.control.orig openbarter.control"
echo "mv META.json.orig META.json"
echo "mv src/sql/roles.sql.orig src/sql/roles.sql"
-echo "git add src/Makefile openbarter.control META.json src/sql/roles.sql"
+echo "mv src/flowf.control.orig src/flowf.control"
+
+echo "git add src/Makefile openbarter.control META.json src/sql/roles.sql src/flowf.control"
echo "git commit -a -m 'end for v$NEW_VERSION'"
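
The grep/sed pipeline at the top of the release script extracts the three version numbers from src/sql/tables.sql. A hedged Python equivalent, where the layout of the matched line is assumed only from the grep pattern itself:

```python
import re

# Sketch of the version extraction done by the grep/sed pipeline in
# doc/release-script.  The layout of src/sql/tables.sql is assumed from
# the grep pattern: some line contains
#   ('VERSION-X',<x>),('VERSION-Y',<y>),('VERSION-Z',<z>),
_VERSION_RE = re.compile(
    r"\('VERSION-X',(\d+)\),\('VERSION-Y',(\d+)\),\('VERSION-Z',(\d+)\),"
)

def read_version(tables_sql):
    """Return the dotted version string found in the tables.sql text."""
    m = _VERSION_RE.search(tables_sql)
    if m is None:
        raise ValueError("VERSION-X/Y/Z line not found")
    return ".".join(m.groups())
```

Parsing with an anchored regular expression instead of chained sed calls fails loudly when the marker line changes shape.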
|
olivierch/openBarter
|
98094298a11696591467f25ba0fbcc799fff88e8
|
makefile does not remove META.json
|
diff --git a/src/Makefile b/src/Makefile
index f851f0b..aff7e74 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -1,38 +1,38 @@
MODULE_big = flowf
OBJS = flowm.o yflow.o yflowparse.o yorder.o flowc.o earthrad.o worker_ob.o
EXTENSION = flowf
DATA = flowf--0.1.sql flowf--unpackaged--0.1.sql
REGRESS = testflow_1 testflow_2 testflow_2a
-EXTRA_CLEAN = yflowparse.c yflowscan.c test/results/*.res test/py/*.pyc ../doc/*.pdf ../META.json
+EXTRA_CLEAN = yflowparse.c yflowscan.c test/results/*.res test/py/*.pyc ../doc/*.pdf
PGXS := $(shell pg_config --pgxs)
include $(PGXS)
yflowparse.o: yflowscan.c
yflowparse.c: yflowparse.y
ifdef BISON
$(BISON) $(BISONFLAGS) -o $@ $<
else
@$(missing) bison $< $@
endif
yflowscan.c: yflowscan.l
ifdef FLEX
$(FLEX) $(FLEXFLAGS) -o'$@' $<
else
@$(missing) flex $< $@
endif
test: installcheck test/py/run.py test/*.sql
cd test; python py/run.py; cd ..
doc:
soffice --invisible --norestore --convert-to pdf --outdir ../doc ../doc/*.odt
|
olivierch/openBarter
|
e84b64bdc77baacdd6187c0fb2b66f83ad3da0c1
|
correction on 2.0.2
|
diff --git a/src/Makefile b/src/Makefile
index 7f5b7d5..f851f0b 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -1,38 +1,38 @@
MODULE_big = flowf
OBJS = flowm.o yflow.o yflowparse.o yorder.o flowc.o earthrad.o worker_ob.o
EXTENSION = flowf
DATA = flowf--0.1.sql flowf--unpackaged--0.1.sql
-REGRESS = testflow_1 testflow_2 testflow_2a testflow_earth
+REGRESS = testflow_1 testflow_2 testflow_2a
EXTRA_CLEAN = yflowparse.c yflowscan.c test/results/*.res test/py/*.pyc ../doc/*.pdf ../META.json
PGXS := $(shell pg_config --pgxs)
include $(PGXS)
yflowparse.o: yflowscan.c
yflowparse.c: yflowparse.y
ifdef BISON
$(BISON) $(BISONFLAGS) -o $@ $<
else
@$(missing) bison $< $@
endif
yflowscan.c: yflowscan.l
ifdef FLEX
$(FLEX) $(FLEXFLAGS) -o'$@' $<
else
@$(missing) flex $< $@
endif
test: installcheck test/py/run.py test/*.sql
cd test; python py/run.py; cd ..
doc:
soffice --invisible --norestore --convert-to pdf --outdir ../doc ../doc/*.odt
diff --git a/src/expected/testflow_1.out b/src/expected/testflow_1.out
index 59674dc..b2e7eda 100644
--- a/src/expected/testflow_1.out
+++ b/src/expected/testflow_1.out
@@ -1,138 +1,138 @@
drop extension if exists flowf cascade;
NOTICE: extension "flowf" does not exist, skipping
-create extension flowf with version '0.1';
+create extension flowf;
RESET client_min_messages;
RESET log_error_verbosity;
SET client_min_messages = notice;
SET log_error_verbosity = terse;
SELECT '[(1,1,2,3,4,5,6,7.00)]'::yflow;
yflow
-----------------------------------
[(1, 1, 2, 3, 4, 5, 6, 7.000000)]
(1 row)
SELECT yflow_init(ROW(1,1,1,1,100,'|q1',200,'|q2',50,'((1,1),(1,1))'::box,'((1,1),(1,1))'::box,1,'((1,1),(1,1))'::box)::yorder);
yflow_init
-----------------------------------------
[(1, 1, 1, 1, 100, 200, 50, -1.000000)]
(1 row)
SELECT yflow_grow_backward(
ROW(1,2,1,2,100,'|q2',200,'|q1',50,'((1,1),(1,1))'::box,'((1,1),(1,1))'::box,1,'((1,1),(1,1))'::box)::yorder,
ROW(1,1,1,1,100,'|q1',200,'|q2',50,'((1,1),(1,1))'::box,'((1,1),(1,1))'::box,1,'((1,1),(1,1))'::box)::yorder,
yflow_init(
ROW(1,1,1,1,100,'|q1',200,'|q2',50,'((1,1),(1,1))'::box,'((1,1),(1,1))'::box,1,'((1,1),(1,1))'::box)::yorder) );
yflow_grow_backward
------------------------------------------------------------------------------
[(1, 2, 2, 1, 100, 200, 50, 1.000000),(1, 1, 1, 1, 100, 200, 50, -1.000000)]
(1 row)
SELECT yflow_finish( ROW(1,2,1,2,100,'|q2',200,'|q1',50,'((1,1),(1,1))'::box,'((1,1),(1,1))'::box,1,'((1,1),(1,1))'::box)::yorder,
yflow_grow_backward( ROW(1,2,1,2,100,'|q2',200,'|q1',50,'((1,1),(1,1))'::box,'((1,1),(1,1))'::box,1,'((1,1),(1,1))'::box)::yorder,
ROW(1,1,1,1,100,'|q1',200,'|q2',50,'((1,1),(1,1))'::box,'((1,1),(1,1))'::box,1,'((1,1),(1,1))'::box)::yorder,
yflow_init( ROW(1,1,1,1,100,'|q1',200,'|q2',50,'((1,1),(1,1))'::box,'((1,1),(1,1))'::box,1,'((1,1),(1,1))'::box)::yorder) ),
ROW(1,1,1,1,100,'|q1',200,'|q2',50,'((1,1),(1,1))'::box,'((1,1),(1,1))'::box,1,'((1,1),(1,1))'::box)::yorder);
yflow_finish
-----------------------------------------------------------------------------
[(1, 2, 2, 1, 100, 200, 50, 1.000000),(1, 1, 1, 1, 100, 200, 50, 1.000000)]
(1 row)
SELECT yflow_dim(yflow_init(ROW(1,1,1,2,100,'|q1',200,'|q2',50,'((1,1),(1,1))'::box,'((1,1),(1,1))'::box,1,'((1,1),(1,1))'::box)::yorder));
yflow_dim
-----------
1
(1 row)
SELECT yflow_contains_oid(1,yflow_init(ROW(1,1,1,2,100,'|q1',200,'|q2',50,'((1,1),(1,1))'::box,'((1,1),(1,1))'::box,1,'((1,1),(1,1))'::box)::yorder));
yflow_contains_oid
--------------------
f
(1 row)
SELECT yflow_contains_oid(2,yflow_init(ROW(1,1,1,2,100,'|q1',200,'|q2',50,'((1,1),(1,1))'::box,'((1,1),(1,1))'::box,1,'((1,1),(1,1))'::box)::yorder));
yflow_contains_oid
--------------------
t
(1 row)
select yflow_is_draft('[(1,35, 93, 35, 21170, 2685, 2685, 1.000000),(1,636, 50, 636, 12213, 95415, 95415, 1.000000
),(1,389, 68, 389, 23785, 29283, 29283, 1.000000),(1,274, 12, 274, 58834, 80362, 80362, 1.000000),(1,12,
55, 12, 35136, 55490, 55490, 1.000000)]'::yflow);
yflow_is_draft
----------------
t
(1 row)
select yflow_reduce(x.f1,x.f1,true) from (select '[(1,35, 93, 35, 21170, 2685, 2685, 1.000000),(1,636, 50, 636, 12213, 95415, 95415, 1.000000
),(1,389, 68, 389, 23785, 29283, 29283, 1.000000),(1,274, 12, 274, 58834, 80362, 80362, 1.000000),(1,12,
55, 12, 35136, 55490, 55490, 1.000000)]'::yflow as f1) x;
yflow_reduce
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[(1, 35, 93, 35, 21170, 2685, 0, 1.000000),(1, 636, 50, 636, 12213, 95415, 78129, 1.000000),(1, 389, 68, 389, 23785, 29283, 11746, 1.000000),(1, 274, 12, 274, 58834, 80362, 60622, 1.000000),(1, 12, 55, 12, 35136, 55490, 29800, 1.000000)]
(1 row)
select yflow_show('[(1,35, 93, 35, 21170, 2685, 2685, 1.000000),(1,636, 50, 636, 12213, 95415, 95415, 1.000000
),(1,389, 68, 389, 23785, 29283, 29283, 1.000000),(1,274, 12, 274, 58834, 80362, 80362, 1.000000),(1,12,
55, 12, 35136, 55490, 55490, 1.000000)]'::yflow);
yflow_show
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
YFLOW [(1, 35, 93, 35, 21170, 2685, 2685, 1.000000 :2685),(1, 636, 50, 636, 12213, 95415, 95415, 1.000000 :17286),(1, 389, 68, 389, 23785, 29283, 29283, 1.000000 :17537),(1, 274, 12, 274, 58834, 80362, 80362, 1.000000 :19740),(1, 12, 55, 12, 35136, 55490, 55490, 1.000000 :25690)]+
(1 row)
select yflow_to_matrix('[(1,35, 93, 35, 21170, 2685, 2685, 1.000000),(1,636, 50, 636, 12213, 95415, 95415, 1.000000
),(1,389, 68, 389, 23785, 29283, 29283, 1.000000),(1,274, 12, 274, 58834, 80362, 80362, 1.000000),(1,12,
55, 12, 35136, 55490, 55490, 1.000000)]'::yflow);
yflow_to_matrix
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
{{35,35,93,21170,2685,2685,2685},{636,636,50,12213,95415,95415,17286},{389,389,68,23785,29283,29283,17537},{274,274,12,58834,80362,80362,19740},{12,12,55,35136,55490,55490,25690}}
(1 row)
select yflow_show('[(1,35, 93, 35, 21170, 2685, 2685, 1.000000),(1,636, 50, 636, 12213, 95415, 95415, 1.000000
),(1,389, 68, 389, 23785, 29283, 29283, 1.000000),(1,274, 12, 274, 58834, 80362, 80362, 1.000000),(1,12,
55, 12, 35136, 55490, 55490, 1.000000)]'::yflow);
yflow_show
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
YFLOW [(1, 35, 93, 35, 21170, 2685, 2685, 1.000000 :2685),(1, 636, 50, 636, 12213, 95415, 95415, 1.000000 :17286),(1, 389, 68, 389, 23785, 29283, 29283, 1.000000 :17537),(1, 274, 12, 274, 58834, 80362, 80362, 1.000000 :19740),(1, 12, 55, 12, 35136, 55490, 55490, 1.000000 :25690)]+
(1 row)
select yflow_qtts('[(1,35, 93, 35, 21170, 2685, 2685, 1.000000),(1,636, 50, 636, 12213, 95415, 95415, 1.000000
),(1,389, 68, 389, 23785, 29283, 29283, 1.000000),(1,274, 12, 274, 58834, 80362, 80362, 1.000000),(1,12,
55, 12, 35136, 55490, 55490, 1.000000)]'::yflow);
yflow_qtts
---------------------------------
{19740,25690,35136,55490,55490}
(1 row)
select yflow_show('[(1,62, 62, 6, 49210, 60487, 55111, 1.000000),(1,64, 64, 4, 64784, 55162, 53296, 1.000000),(1,52, 52, 14, 34697, 57236, 56208, 1.000000),(1,86, 86, 11, 19239, 28465, 27569, 1.000000),(1,87, 87, 4, 20786, 61473, 60554, 1.000000),(1,33, 33, 16, 828, 12515, 12515, 1.000000),(1,1, 1, 1, 35542, 66945, 17677, 1.000000),(1,102, 102, 15, 87633, 69633, 33223, 1.000000)]'::yflow);
yflow_show
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
YFLOW [(1, 62, 62, 6, 49210, 60487, 55111, 1.000000 :2543),(1, 64, 64, 4, 64784, 55162, 53296, 1.000000 :1500),(1, 52, 52, 14, 34697, 57236, 56208, 1.000000 :1187),(1, 86, 86, 11, 19239, 28465, 27569, 1.000000 :843),(1, 87, 87, 4, 20786, 61473, 60554, 1.000000 :1726),(1, 33, 33, 16, 828, 12515, 12515, 1.000000 :12515),(1, 1, 1, 1, 35542, 66945, 17677, 1.000000 :11310),(1, 102, 102, 15, 87633, 69633, 33223, 1.000000 :4312)]+
(1 row)
select yflow_qtts('[(1,62, 62, 6, 49210, 60487, 55111, 1.000000),(1,64, 64, 4, 64784, 55162, 53296, 1.000000),(1,52, 52, 14, 34697, 57236, 56208, 1.000000),(1,86, 86, 11, 19239, 28465, 27569, 1.000000),(1,87, 87, 4, 20786, 61473, 60554, 1.000000),(1,33, 33, 16, 828, 12515, 12515, 1.000000),(1,1, 1, 1, 35542, 66945, 17677, 1.000000),(1,102, 102, 15, 87633, 69633, 33223, 1.000000)]'::yflow);
yflow_qtts
--------------------------------
{11310,4312,87633,69633,33223}
(1 row)
select yflow_show('[(2, 3298, 3298, 25, 87502, 61824, 20450, 1.000000),(2, 1090, 1090, 91, 41375, 55121, 1, 1.000000),(78, 10013, 10013, 72, 55121, 58560, 58559, 1.000000)]'::yflow);
yflow_show
--------------------------------------------------------------------------------------------------------------------------------------------------------------
YFLOW [(2, 3298, 3298, 25, 87502, 61824, 20450, 1.000000 :1),(2, 1090, 1090, 91, 41375, 55121, 1, 1.000000 :1),(78, 10013, 10013, 72, 1, 2, 2, 1.000000 :2)]+
(1 row)
/* IGNOREOMEGA QTTNOLIMIT */
select yflow_show('[
(2, 3298, 3298, 25, 10, 20, 20, 1.000000),
(2, 1090, 1090, 91, 10, 10, 10, 1.000000),
(78, 10013, 10013, 72, 0, 0, 0, 1.000000)]'::yflow);
yflow_show
---------------------------------------------------------------------------------------------------------------------------------------------------
YFLOW [(2, 3298, 3298, 25, 10, 20, 20, 1.000000 :10),(2, 1090, 1090, 91, 10, 10, 10, 1.000000 :10),(78, 10013, 10013, 72, 10, 5, 5, 1.000000 :5)]+
(1 row)
diff --git a/src/expected/testflow_2.out b/src/expected/testflow_2.out
index 182cbd0..b502dcc 100644
--- a/src/expected/testflow_2.out
+++ b/src/expected/testflow_2.out
@@ -1,83 +1,83 @@
drop extension if exists flowf cascade;
-create extension flowf with version '0.1';
+create extension flowf;
-- order limit
SELECT yflow_show('[(1,1,1,1,10,20,20,7.00),(1,2,2,1,10,20,20,7.00)]'::yflow); -- omega >1
yflow_show
---------------------------------------------------------------------------------------
YFLOW [(1, 1, 1, 1, 10, 20, 20, 7.000000 :20),(1, 2, 2, 1, 10, 20, 20, 7.000000 :20)]+
(1 row)
SELECT yflow_show('[(1,1,1,1,20,20,20,7.00),(1,2,2,1,20,20,20,7.00)]'::yflow); -- omega =1
yflow_show
---------------------------------------------------------------------------------------
YFLOW [(1, 1, 1, 1, 20, 20, 20, 7.000000 :20),(1, 2, 2, 1, 20, 20, 20, 7.000000 :20)]+
(1 row)
SELECT yflow_show('[(1,1,1,1,20,10,10,7.00),(1,2,2,1,20,10,10,7.00)]'::yflow); -- omega <1
yflow_show
---------------------------------------------------------------------------------------
YFLOW [(1, 1, 1, 1, 20, 10, 10, 7.000000 :-1),(1, 2, 2, 1, 20, 10, 10, 7.000000 :-1)]+
(1 row)
-- order best
SELECT yflow_show('[(2,1,1,1,10,20,20,7.00),(2,2,2,1,10,20,20,7.00)]'::yflow); -- omega >1
yflow_show
---------------------------------------------------------------------------------------
YFLOW [(2, 1, 1, 1, 10, 20, 20, 7.000000 :20),(2, 2, 2, 1, 10, 20, 20, 7.000000 :20)]+
(1 row)
SELECT yflow_show('[(2,1,1,1,20,20,20,7.00),(2,2,2,1,20,20,20,7.00)]'::yflow); -- omega =1
yflow_show
---------------------------------------------------------------------------------------
YFLOW [(2, 1, 1, 1, 20, 20, 20, 7.000000 :20),(2, 2, 2, 1, 20, 20, 20, 7.000000 :20)]+
(1 row)
SELECT yflow_show('[(2,1,1,1,20,10,10,7.00),(2,2,2,1,20,10,10,7.00)]'::yflow); -- omega <1
yflow_show
---------------------------------------------------------------------------------------
YFLOW [(2, 1, 1, 1, 20, 10, 10, 7.000000 :10),(2, 2, 2, 1, 20, 10, 10, 7.000000 :10)]+
(1 row)
-- order limit and best
SELECT yflow_show('[(1,1,1,1,20,10,10,7.00),(2,2,2,1,20,10,10,7.00)]'::yflow); -- omega <1
yflow_show
---------------------------------------------------------------------------------------
YFLOW [(1, 1, 1, 1, 20, 10, 10, 7.000000 :-1),(2, 2, 2, 1, 20, 10, 10, 7.000000 :-1)]+
(1 row)
SELECT yflow_show('[(2,1,1,1,20,10,10,7.00),(1,2,2,1,20,10,10,7.00)]'::yflow); -- omega <1
yflow_show
---------------------------------------------------------------------------------------
YFLOW [(2, 1, 1, 1, 20, 10, 10, 7.000000 :-1),(1, 2, 2, 1, 20, 10, 10, 7.000000 :-1)]+
(1 row)
-- order limit lnNoQttLimit (4+1+128)
SELECT yflow_show('[(1,1,1,1,10,20,100,7.00),(133,2,2,1,10,20,0,7.00)]'::yflow); -- omega >1
yflow_show
---------------------------------------------------------------------------------------------
YFLOW [(1, 1, 1, 1, 10, 20, 100, 7.000000 :100),(133, 2, 2, 1, 10, 20, 100, 7.000000 :100)]+
(1 row)
-- order limit lnNoQttLimit+lnIgnoreOmega (8+4+1+128)
SELECT yflow_show('[(1,1,1,1,10,20,100,7.00),(141,2,2,1,0,0,0,7.00)]'::yflow); -- omega >1
yflow_show
--------------------------------------------------------------------------------------------
YFLOW [(1, 1, 1, 1, 10, 20, 100, 7.000000 :100),(141, 2, 2, 1, 100, 50, 50, 7.000000 :50)]+
(1 row)
SELECT yflow_qtts('[(1,1,1,1,10,20,100,7.00),(141,2,2,1,0,0,0,7.00)]'::yflow); -- omega >1
yflow_qtts
--------------------
{100,50,100,50,50}
(1 row)
diff --git a/src/expected/testflow_2a.out b/src/expected/testflow_2a.out
index 777f304..f7ce36c 100644
--- a/src/expected/testflow_2a.out
+++ b/src/expected/testflow_2a.out
@@ -1,19 +1,19 @@
drop extension if exists flowf cascade;
-create extension flowf with version '0.1';
+create extension flowf;
-- yflow ''[(type,id,oid,own,qtt_requ,qtt_prov,qtt,proba), ...]''
-- (type,id,oid,own,qtt_requ,qtt_prov,qtt,proba)
select yflow_show('[(2, 8928, 8928, 72, 49263, 87732, 87732, 1.000000),(1, 515, 515, 69, 53751, 67432, 67432, 1.000000),(141, 10001, 10001, 72, 1, 1, 1, 1.000000)]'::yflow);
yflow_show
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
YFLOW [(2, 8928, 8928, 72, 49263, 87732, 87732, 1.000000 :53751),(1, 515, 515, 69, 53751, 67432, 67432, 1.000000 :67432),(141, 10001, 10001, 72, 67432, 30183, 30183, 1.000000 :30183)]+
(1 row)
select yflow_show('[(2, 8928, 8928, 72, 49263, 87732, 87732, 1.000000),(1, 515, 515, 69, 53751, 67432, 67432, 1.000000),(1, 10001, 10001, 72, 67432, 30183,30183, 1.000000)]'::yflow);
yflow_show
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
YFLOW [(2, 8928, 8928, 72, 49263, 87732, 87732, 1.000000 :53752),(1, 515, 515, 69, 53751, 67432, 67432, 1.000000 :67432),(1, 10001, 10001, 72, 67432, 30183, 30183, 1.000000 :30183)]+
(1 row)
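
The yflow text literal exercised in these tests is a list of order tuples whose field order is given by the comment in testflow_2a: (type,id,oid,own,qtt_requ,qtt_prov,qtt,proba). A small parser sketch for that literal form, written against only what the test fixtures show:

```python
import re
from collections import namedtuple

# Field order taken from the comment in src/sql/testflow_2a.sql:
#   (type,id,oid,own,qtt_requ,qtt_prov,qtt,proba)
YOrder = namedtuple(
    "YOrder", "type id oid own qtt_requ qtt_prov qtt proba"
)

def parse_yflow(text):
    """Parse a yflow literal like '[(1,1,2,3,4,5,6,7.00), ...]'."""
    orders = []
    for tup in re.findall(r"\(([^)]*)\)", text):
        fields = [s.strip() for s in tup.split(",")]
        orders.append(YOrder(*([int(v) for v in fields[:7]] + [float(fields[7])])))
    return orders
```

This only covers the textual representation; the omega checks and `:value` annotations printed by `yflow_show` are computed inside the C extension.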
diff --git a/src/expected/testflow_earth.out b/src/expected/testflow_earth.out
deleted file mode 100644
index b8ced3e..0000000
--- a/src/expected/testflow_earth.out
+++ /dev/null
@@ -1,44 +0,0 @@
-drop extension if exists flowf cascade;
-create extension flowf with version '0.1';
-RESET client_min_messages;
-RESET log_error_verbosity;
-SET client_min_messages = notice;
-SET log_error_verbosity = terse;
-/*
-(48.670828,1.874488)deg ici (0.849466198,0.032715987)rad
-48.670389,1.87415 mimi
-(48.670389,1.87415)deg mimi (0.849458536,0.032710088)rad
-*/
---select earth_dist_points('(48.670828,1.874488)'::point,'(48.670389,1.87415)'::point);
-select earth_dist_points('(0.849466198,0.032715987)'::point,'(0.849458536,0.032710088)'::point);
- earth_dist_points
-----------------------
- 8.59547091253083e-06
-(1 row)
-
--- 0.0547622024263353 km = a*6371.009 (Rterre) => a = 0.0547622024263353/6371.009 = 0,000008596 radians
--- select earth_dist_points('(-91.0,0.0)'::point,'(-30.0,0.0)'::point);
-select earth_dist_points('(-1.588249619,0.0)'::point,'(-0.523598776,0.0)'::point);
-ERROR: attempt to get a distance form points out of range
--- select earth_get_square('(48.670828,1.874488)'::point,1.0);
--- select earth_get_square('(48.670828,1.874488)'::point,0.0);
-select earth_get_square('(0.849466198,0.032715987)'::point,1.0/6371.009);
- earth_get_square
--------------------------------------------------------------------------------
- (0.849623159008845,0.0329536683917499),(0.849309236991155,0.0324783056082495)
-(1 row)
-
-select earth_get_square('(0.849466198,0.032715987)'::point,0.0);
- earth_get_square
--------------------------------------------------------------------------
- (1.5707963267949,3.14159265358979),(-1.5707963267949,-3.14159265358979)
-(1 row)
-
--- d in [0,EARTH_RADIUS * PI/2.[
--- select earth_get_square('(48.670828,1.874488)'::point,(6371.009 * 3.1415926535 *1.001 / 2.0));
--- select earth_get_square('(48.670828,1.874488)'::point,-1.0);
--- d in [0,PI[
-select earth_get_square('(0.849466198,0.032715987)'::point,(3.14159265358979323846 *1.00001));
-ERROR: attempt to get a box form a dist:3.1416240695 rad for a point:(lat=0.8494661980, lon=0.0327159870) out of range
-select earth_get_square('(0.849466198,0.032715987)'::point,-1.0);
-ERROR: attempt to get a box form a dist:-1.0000000000 rad for a point:(lat=0.8494661980, lon=0.0327159870) out of range
diff --git a/src/flowf.control b/src/flowf.control
index d01d1cf..c14b816 100644
--- a/src/flowf.control
+++ b/src/flowf.control
@@ -1,5 +1,5 @@
# flowf extension
comment = 'data type for cycle of orders'
-default_version = '1.3'
+default_version = '0.1'
module_pathname = '$libdir/flowf'
relocatable = true
diff --git a/src/sql/testflow_1.sql b/src/sql/testflow_1.sql
index a6cad09..ad2d0d1 100644
--- a/src/sql/testflow_1.sql
+++ b/src/sql/testflow_1.sql
@@ -1,61 +1,61 @@
drop extension if exists flowf cascade;
-create extension flowf with version '0.1';
+create extension flowf;
RESET client_min_messages;
RESET log_error_verbosity;
SET client_min_messages = notice;
SET log_error_verbosity = terse;
SELECT '[(1,1,2,3,4,5,6,7.00)]'::yflow;
SELECT yflow_init(ROW(1,1,1,1,100,'|q1',200,'|q2',50,'((1,1),(1,1))'::box,'((1,1),(1,1))'::box,1,'((1,1),(1,1))'::box)::yorder);
SELECT yflow_grow_backward(
ROW(1,2,1,2,100,'|q2',200,'|q1',50,'((1,1),(1,1))'::box,'((1,1),(1,1))'::box,1,'((1,1),(1,1))'::box)::yorder,
ROW(1,1,1,1,100,'|q1',200,'|q2',50,'((1,1),(1,1))'::box,'((1,1),(1,1))'::box,1,'((1,1),(1,1))'::box)::yorder,
yflow_init(
ROW(1,1,1,1,100,'|q1',200,'|q2',50,'((1,1),(1,1))'::box,'((1,1),(1,1))'::box,1,'((1,1),(1,1))'::box)::yorder) );
SELECT yflow_finish( ROW(1,2,1,2,100,'|q2',200,'|q1',50,'((1,1),(1,1))'::box,'((1,1),(1,1))'::box,1,'((1,1),(1,1))'::box)::yorder,
yflow_grow_backward( ROW(1,2,1,2,100,'|q2',200,'|q1',50,'((1,1),(1,1))'::box,'((1,1),(1,1))'::box,1,'((1,1),(1,1))'::box)::yorder,
ROW(1,1,1,1,100,'|q1',200,'|q2',50,'((1,1),(1,1))'::box,'((1,1),(1,1))'::box,1,'((1,1),(1,1))'::box)::yorder,
yflow_init( ROW(1,1,1,1,100,'|q1',200,'|q2',50,'((1,1),(1,1))'::box,'((1,1),(1,1))'::box,1,'((1,1),(1,1))'::box)::yorder) ),
ROW(1,1,1,1,100,'|q1',200,'|q2',50,'((1,1),(1,1))'::box,'((1,1),(1,1))'::box,1,'((1,1),(1,1))'::box)::yorder);
SELECT yflow_dim(yflow_init(ROW(1,1,1,2,100,'|q1',200,'|q2',50,'((1,1),(1,1))'::box,'((1,1),(1,1))'::box,1,'((1,1),(1,1))'::box)::yorder));
SELECT yflow_contains_oid(1,yflow_init(ROW(1,1,1,2,100,'|q1',200,'|q2',50,'((1,1),(1,1))'::box,'((1,1),(1,1))'::box,1,'((1,1),(1,1))'::box)::yorder));
SELECT yflow_contains_oid(2,yflow_init(ROW(1,1,1,2,100,'|q1',200,'|q2',50,'((1,1),(1,1))'::box,'((1,1),(1,1))'::box,1,'((1,1),(1,1))'::box)::yorder));
select yflow_is_draft('[(1,35, 93, 35, 21170, 2685, 2685, 1.000000),(1,636, 50, 636, 12213, 95415, 95415, 1.000000
),(1,389, 68, 389, 23785, 29283, 29283, 1.000000),(1,274, 12, 274, 58834, 80362, 80362, 1.000000),(1,12,
55, 12, 35136, 55490, 55490, 1.000000)]'::yflow);
select yflow_reduce(x.f1,x.f1,true) from (select '[(1,35, 93, 35, 21170, 2685, 2685, 1.000000),(1,636, 50, 636, 12213, 95415, 95415, 1.000000
),(1,389, 68, 389, 23785, 29283, 29283, 1.000000),(1,274, 12, 274, 58834, 80362, 80362, 1.000000),(1,12,
55, 12, 35136, 55490, 55490, 1.000000)]'::yflow as f1) x;
select yflow_show('[(1,35, 93, 35, 21170, 2685, 2685, 1.000000),(1,636, 50, 636, 12213, 95415, 95415, 1.000000
),(1,389, 68, 389, 23785, 29283, 29283, 1.000000),(1,274, 12, 274, 58834, 80362, 80362, 1.000000),(1,12,
55, 12, 35136, 55490, 55490, 1.000000)]'::yflow);
select yflow_to_matrix('[(1,35, 93, 35, 21170, 2685, 2685, 1.000000),(1,636, 50, 636, 12213, 95415, 95415, 1.000000
),(1,389, 68, 389, 23785, 29283, 29283, 1.000000),(1,274, 12, 274, 58834, 80362, 80362, 1.000000),(1,12,
55, 12, 35136, 55490, 55490, 1.000000)]'::yflow);
select yflow_show('[(1,35, 93, 35, 21170, 2685, 2685, 1.000000),(1,636, 50, 636, 12213, 95415, 95415, 1.000000
),(1,389, 68, 389, 23785, 29283, 29283, 1.000000),(1,274, 12, 274, 58834, 80362, 80362, 1.000000),(1,12,
55, 12, 35136, 55490, 55490, 1.000000)]'::yflow);
select yflow_qtts('[(1,35, 93, 35, 21170, 2685, 2685, 1.000000),(1,636, 50, 636, 12213, 95415, 95415, 1.000000
),(1,389, 68, 389, 23785, 29283, 29283, 1.000000),(1,274, 12, 274, 58834, 80362, 80362, 1.000000),(1,12,
55, 12, 35136, 55490, 55490, 1.000000)]'::yflow);
select yflow_show('[(1,62, 62, 6, 49210, 60487, 55111, 1.000000),(1,64, 64, 4, 64784, 55162, 53296, 1.000000),(1,52, 52, 14, 34697, 57236, 56208, 1.000000),(1,86, 86, 11, 19239, 28465, 27569, 1.000000),(1,87, 87, 4, 20786, 61473, 60554, 1.000000),(1,33, 33, 16, 828, 12515, 12515, 1.000000),(1,1, 1, 1, 35542, 66945, 17677, 1.000000),(1,102, 102, 15, 87633, 69633, 33223, 1.000000)]'::yflow);
select yflow_qtts('[(1,62, 62, 6, 49210, 60487, 55111, 1.000000),(1,64, 64, 4, 64784, 55162, 53296, 1.000000),(1,52, 52, 14, 34697, 57236, 56208, 1.000000),(1,86, 86, 11, 19239, 28465, 27569, 1.000000),(1,87, 87, 4, 20786, 61473, 60554, 1.000000),(1,33, 33, 16, 828, 12515, 12515, 1.000000),(1,1, 1, 1, 35542, 66945, 17677, 1.000000),(1,102, 102, 15, 87633, 69633, 33223, 1.000000)]'::yflow);
select yflow_show('[(2, 3298, 3298, 25, 87502, 61824, 20450, 1.000000),(2, 1090, 1090, 91, 41375, 55121, 1, 1.000000),(78, 10013, 10013, 72, 55121, 58560, 58559, 1.000000)]'::yflow);
/* IGNOREOMEGA QTTNOLIMIT */
select yflow_show('[
(2, 3298, 3298, 25, 10, 20, 20, 1.000000),
(2, 1090, 1090, 91, 10, 10, 10, 1.000000),
(78, 10013, 10013, 72, 0, 0, 0, 1.000000)]'::yflow);
diff --git a/src/sql/testflow_2.sql b/src/sql/testflow_2.sql
index fed5845..1b4af50 100644
--- a/src/sql/testflow_2.sql
+++ b/src/sql/testflow_2.sql
@@ -1,22 +1,22 @@
drop extension if exists flowf cascade;
-create extension flowf with version '0.1';
+create extension flowf;
-- order limit
SELECT yflow_show('[(1,1,1,1,10,20,20,7.00),(1,2,2,1,10,20,20,7.00)]'::yflow); -- omega >1
SELECT yflow_show('[(1,1,1,1,20,20,20,7.00),(1,2,2,1,20,20,20,7.00)]'::yflow); -- omega =1
SELECT yflow_show('[(1,1,1,1,20,10,10,7.00),(1,2,2,1,20,10,10,7.00)]'::yflow); -- omega <1
-- order best
SELECT yflow_show('[(2,1,1,1,10,20,20,7.00),(2,2,2,1,10,20,20,7.00)]'::yflow); -- omega >1
SELECT yflow_show('[(2,1,1,1,20,20,20,7.00),(2,2,2,1,20,20,20,7.00)]'::yflow); -- omega =1
SELECT yflow_show('[(2,1,1,1,20,10,10,7.00),(2,2,2,1,20,10,10,7.00)]'::yflow); -- omega <1
-- order limit and best
SELECT yflow_show('[(1,1,1,1,20,10,10,7.00),(2,2,2,1,20,10,10,7.00)]'::yflow); -- omega <1
SELECT yflow_show('[(2,1,1,1,20,10,10,7.00),(1,2,2,1,20,10,10,7.00)]'::yflow); -- omega <1
-- order limit lnNoQttLimit (4+1+128)
SELECT yflow_show('[(1,1,1,1,10,20,100,7.00),(133,2,2,1,10,20,0,7.00)]'::yflow); -- omega >1
-- order limit lnNoQttLimit+lnIgnoreOmega (8+4+1+128)
SELECT yflow_show('[(1,1,1,1,10,20,100,7.00),(141,2,2,1,0,0,0,7.00)]'::yflow); -- omega >1
SELECT yflow_qtts('[(1,1,1,1,10,20,100,7.00),(141,2,2,1,0,0,0,7.00)]'::yflow); -- omega >1
diff --git a/src/sql/testflow_2a.sql b/src/sql/testflow_2a.sql
index 8282e74..4c97390 100644
--- a/src/sql/testflow_2a.sql
+++ b/src/sql/testflow_2a.sql
@@ -1,10 +1,10 @@
drop extension if exists flowf cascade;
-create extension flowf with version '0.1';
+create extension flowf;
-- yflow ''[(type,id,oid,own,qtt_requ,qtt_prov,qtt,proba), ...]''
-- (type,id,oid,own,qtt_requ,qtt_prov,qtt,proba)
select yflow_show('[(2, 8928, 8928, 72, 49263, 87732, 87732, 1.000000),(1, 515, 515, 69, 53751, 67432, 67432, 1.000000),(141, 10001, 10001, 72, 1, 1, 1, 1.000000)]'::yflow);
select yflow_show('[(2, 8928, 8928, 72, 49263, 87732, 87732, 1.000000),(1, 515, 515, 69, 53751, 67432, 67432, 1.000000),(1, 10001, 10001, 72, 67432, 30183,30183, 1.000000)]'::yflow);
diff --git a/src/sql/testflow_earth.sql b/src/sql/testflow_earth.sql
deleted file mode 100644
index 39e54d7..0000000
--- a/src/sql/testflow_earth.sql
+++ /dev/null
@@ -1,32 +0,0 @@
-drop extension if exists flowf cascade;
-create extension flowf with version '0.1';
-
-RESET client_min_messages;
-RESET log_error_verbosity;
-SET client_min_messages = notice;
-SET log_error_verbosity = terse;
-
-/*
-(48.670828,1.874488)deg ici (0.849466198,0.032715987)rad
-48.670389,1.87415 mimi
-(48.670389,1.87415)deg mimi (0.849458536,0.032710088)rad
-*/
-
---select earth_dist_points('(48.670828,1.874488)'::point,'(48.670389,1.87415)'::point);
-select earth_dist_points('(0.849466198,0.032715987)'::point,'(0.849458536,0.032710088)'::point);
--- 0.0547622024263353 km = a*6371.009 (Rterre) => a = 0.0547622024263353/6371.009 = 0,000008596 radians
--- select earth_dist_points('(-91.0,0.0)'::point,'(-30.0,0.0)'::point);
-select earth_dist_points('(-1.588249619,0.0)'::point,'(-0.523598776,0.0)'::point);
-
--- select earth_get_square('(48.670828,1.874488)'::point,1.0);
--- select earth_get_square('(48.670828,1.874488)'::point,0.0);
-select earth_get_square('(0.849466198,0.032715987)'::point,1.0/6371.009);
-select earth_get_square('(0.849466198,0.032715987)'::point,0.0);
-
--- d in [0,EARTH_RADIUS * PI/2.[
--- select earth_get_square('(48.670828,1.874488)'::point,(6371.009 * 3.1415926535 *1.001 / 2.0));
--- select earth_get_square('(48.670828,1.874488)'::point,-1.0);
--- d in [0,PI[
-select earth_get_square('(0.849466198,0.032715987)'::point,(3.14159265358979323846 *1.00001));
-select earth_get_square('(0.849466198,0.032715987)'::point,-1.0);
-
diff --git a/src/test/py/srvob_conf.py b/src/test/py/srvob_conf.py
index f1c6528..1090518 100644
--- a/src/test/py/srvob_conf.py
+++ b/src/test/py/srvob_conf.py
@@ -1,33 +1,33 @@
# -*- coding: utf-8 -*-
import os.path,logging
PATH_ICI = os.path.dirname(os.path.abspath(__file__))
class DbInit(object):
def __init__(self,name,login,pwd,host,port,schema = None):
self.name = name
self.login = login
self.password = pwd
self.host = host
self.port = port
self.schema = schema
def __str__(self):
return self.name+" "+self.login
-DB_NAME='test'
+DB_NAME='market'
DB_USER='olivier'
DB_PWD=''
DB_HOST='localhost'
DB_PORT=5432
dbBO = DbInit(DB_NAME,DB_USER,DB_PWD,DB_HOST,DB_PORT)
LOGFILE = 'logger.log'
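For context, a `DbInit` instance like `dbBO` above carries everything needed to build a libpq-style connection string. A sketch of that use; the `dsn` helper is hypothetical and not part of `srvob_conf.py`:

```python
class DbInit(object):
    """Mirror of the DbInit class defined in srvob_conf.py above."""
    def __init__(self, name, login, pwd, host, port, schema=None):
        self.name = name
        self.login = login
        self.password = pwd
        self.host = host
        self.port = port
        self.schema = schema

    def dsn(self):
        # Hypothetical helper: libpq keyword/value connection string.
        return "dbname=%s user=%s host=%s port=%d" % (
            self.name, self.login, self.host, self.port)

dbBO = DbInit('market', 'olivier', '', 'localhost', 5432)
print(dbBO.dsn())  # dbname=market user=olivier host=localhost port=5432
```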
olivierch/openBarter @ 590b71bf93e442d0e7a24ca12cbc4d9261c03ec3
last change before 2.0.2
diff --git a/META.json b/META.json
index 4e7f354..bcaef26 100644
--- a/META.json
+++ b/META.json
@@ -1,51 +1,51 @@
{
"name": "openbarter",
"abstract": "Multilateral agreement production engine.",
"description": "openBarter is a central limit order book that performs competition on price even if money is not necessary to express them. It is a barter market allowing cyclic exchange between two partners (buyer and seller) or more in a single transaction. It provides a fluidity equivalent to that of a regular market place.",
- "version": "0.0.1",
+ "version": "0.1.0",
"maintainer": [
"Olivier Chaussavoine <[email protected]>"
],
"license": "gpl_3",
"prereqs": {
"runtime": {
"requires": {
"plpgsql": "1.0",
"PostgreSQL": "9.3.2"
}
}
},
"provides": {
"openbarter": {
"file": "src/sql/model.sql",
"docfile": "doc/doc-ob.odt",
"version": "0.0.1",
"abstract": "Multilateral agreement engine."
}
},
"resources": {
"bugtracker": {
"web": "http://github.com/olivierch/openBarter/issues"
},
"repository": {
"url": "git://github.com/olivierch/openBarter.git",
"web": "http://olivierch.github.com/openBarter",
"type": "git"
}
},
"generated_by": "Olivier Chaussavoine",
"meta-spec": {
"version": "1.0.0",
"url": "http://pgxn.org/meta/spec.txt"
},
"tags": [
      "cyclic exchange","non-bilateral","multilateral",
"central","limit","order","book",
"barter",
"market"
],
"release_status": "stable"
}
diff --git a/doc/release-process.txt b/doc/release-process.txt
new file mode 100644
index 0000000..2132dd3
--- /dev/null
+++ b/doc/release-process.txt
@@ -0,0 +1,17 @@
+
+
+Here we assume going from version 0.1.1 to version 0.1.3.
+
+Set VERSION-X, VERSION-Y, VERSION-Z to 0.1.3 in src/sql/tables.sql.
+Make sure the tests pass on git.
+
+ # cd openbarter
+ # ./doc/release-script 0.1.1 > ~/.bash/do-release_0.1.3
+ # chmod u+x ~/.bash/do-release_0.1.3
+ # do-release_0.1.3
+ #
+ #
+	# upload to pgxn
+ # https://manager.pgxn.org/upload
+ # (olivierch,...)
+ # upload the file ../release/openbarter-0.1.3
diff --git a/openbarter.control b/openbarter.control
index eaa8d7c..464a509 100644
--- a/openbarter.control
+++ b/openbarter.control
@@ -1,6 +1,6 @@
# openbarter extension
 comment = 'Multilateral agreement engine.'
-default_version = '0.0.1'
+default_version = '0.1.0'
module_pathname = '$libdir/openBarter'
relocatable = true
diff --git a/src/sql/tables.sql b/src/sql/tables.sql
index 5d2b5ea..574ae9f 100644
--- a/src/sql/tables.sql
+++ b/src/sql/tables.sql
@@ -1,351 +1,351 @@
--------------------------------------------------------------------------------
ALTER DEFAULT PRIVILEGES REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC;
ALTER DEFAULT PRIVILEGES REVOKE ALL ON TABLES FROM PUBLIC;
ALTER DEFAULT PRIVILEGES REVOKE ALL ON SEQUENCES FROM PUBLIC;
ALTER DEFAULT PRIVILEGES REVOKE ALL ON TYPES FROM PUBLIC;
--------------------------------------------------------------------------------
create domain dqtt AS int8 check( VALUE>0);
create domain dtext AS text check( char_length(VALUE)>0);
--------------------------------------------------------------------------------
-- main constants of the model
--------------------------------------------------------------------------------
create table tconst(
name dtext UNIQUE not NULL,
value int,
PRIMARY KEY (name)
);
GRANT SELECT ON tconst TO role_com;
--------------------------------------------------------------------------------
/* for booleans, 0 == false and !=0 == true
*/
INSERT INTO tconst (name,value) VALUES
('MAXCYCLE',64), -- must be less than yflow_get_maxdim()
('MAXPATHFETCHED',1024),-- maximum depth of the graph exploration
('MAXMVTPERTRANS',128), -- maximum number of movements per transaction
-- if this limit is reached, next cycles are not performed but all others
-- are included in the current transaction
- ('VERSION-X',2),('VERSION-Y',0),('VERSION-Z',1),
+ ('VERSION-X',2),('VERSION-Y',0),('VERSION-Z',2),
('OWNERINSERT',1), -- boolean when true, owner inserted when not found
('QUAPROVUSR',0), -- boolean when true, the quality provided by a barter is suffixed by user name
-- 1 prod
('OWNUSR',0), -- boolean when true, the owner is suffixed by user name
-- 1 prod
('DEBUG',1);
--------------------------------------------------------------------------------
create table tvar(
name dtext UNIQUE not NULL,
value int,
PRIMARY KEY (name)
);
--------------------------------------------------------------------------------
-- TOWNER
--------------------------------------------------------------------------------
create table towner (
id serial UNIQUE not NULL,
name dtext UNIQUE not NULL,
PRIMARY KEY (id)
);
comment on table towner is 'owners of values exchanged';
alter sequence towner_id_seq owned by towner.id;
create index towner_name_idx on towner(name);
SELECT _reference_time('towner');
SELECT _grant_read('towner');
--------------------------------------------------------------------------------
-- ORDER BOOK
--------------------------------------------------------------------------------
-- type = type_flow | type_primitive <<8 | type_mode <<16
create domain dtypeorder AS int check(VALUE >=0 AND VALUE < 16777215); --((1<<24)-1)
-- type_flow &3 1 order limit,2 order best
-- type_flow &12 bit set for c calculations
-- 4 no qttlimit
-- 8 ignoreomega
-- yorder.type is a type_flow = type & 255
-- type_primitive
-- 1 order
-- 2 rmorder
-- 3 quote
-- 4 prequote
CREATE TYPE eordertype AS ENUM ('best','limit');
CREATE TYPE eprimitivetype AS ENUM ('order','childorder','rmorder','quote','prequote');
create table torder (
usr dtext,
own dtext,
ord yorder, --defined by the extension flowf
created timestamp not NULL,
updated timestamp,
duration interval
);
comment on table torder is 'Order book';
comment on column torder.usr is 'user that inserted the order ';
comment on column torder.ord is 'the order';
comment on column torder.created is 'time when the order was put on the stack';
comment on column torder.updated is 'time when the (quantity) of the order was updated by the order book';
comment on column torder.duration is 'the life time of the order';
SELECT _grant_read('torder');
create index torder_qua_prov_idx on torder(((ord).qua_prov)); -- using gin(((ord).qua_prov) text_ops);
create index torder_id_idx on torder(((ord).id));
create index torder_oid_idx on torder(((ord).oid));
-- id,type,own,oid,qtt_requ,qua_requ,qtt_prov,qua_prov,qtt
create view vorder as
select (o.ord).id as id,(o.ord).type as type,w.name as own,(o.ord).oid as oid,
(o.ord).qtt_requ as qtt_requ,(o.ord).qua_requ as qua_requ,
(o.ord).qtt_prov as qtt_prov,(o.ord).qua_prov as qua_prov,
(o.ord).qtt as qtt, o.created as created, o.updated as updated
from torder o left join towner w on ((o.ord).own=w.id) where o.usr=session_user;
SELECT _grant_read('vorder');
-- without dates or a filter on usr
create view vorder2 as
select (o.ord).id as id,(o.ord).type as type,w.name as own,(o.ord).oid as oid,
(o.ord).qtt_requ as qtt_requ,(o.ord).qua_requ as qua_requ,
(o.ord).qtt_prov as qtt_prov,(o.ord).qua_prov as qua_prov,
(o.ord).qtt as qtt
from torder o left join towner w on ((o.ord).own=w.id);
SELECT _grant_read('vorder2');
-- only parent for all users
create view vbarter as
select (o.ord).id as id,(o.ord).type as type,o.usr as user,w.name as own,
(o.ord).qtt_requ as qtt_requ,(o.ord).qua_requ as qua_requ,
(o.ord).qtt_prov as qtt_prov,(o.ord).qua_prov as qua_prov,
(o.ord).qtt as qtt, o.created as created, o.updated as updated
from torder o left join towner w on ((o.ord).own=w.id) where (o.ord).oid=(o.ord).id;
SELECT _grant_read('vbarter');
-- parent and childs for all users, used with vmvto
create view vordero as
select id,
(case when (type & 3=1) then 'limit' else 'best' end)::eordertype as type,
own as owner,
case when id=oid then (qtt::text || ' ' || qua_prov) else '' end as stock,
'(' || qtt_prov::text || '/' || qtt_requ::text || ') ' ||
	qua_prov || ' / '|| qua_requ as expected_ω,
	case when id=oid then '' else oid::text end as oid
	from vorder order by id asc;
GRANT SELECT ON vordero TO role_com;
comment on view vordero is 'order book for all users, to be used with vmvto';
comment on column vordero.id is 'the id of the order';
comment on column vordero.owner is 'the owner';
comment on column vordero.stock is 'for a parent order the stock offered by the owner';
comment on column vordero.expected_ω is 'the ω of the order';
comment on column vordero.oid is 'for a child-order, the id of the parent-order';
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
CREATE FUNCTION fgetowner(_name text) RETURNS int AS $$
DECLARE
_wid int;
_OWNERINSERT boolean := fgetconst('OWNERINSERT')=1;
BEGIN
LOOP
SELECT id INTO _wid FROM towner WHERE name=_name;
IF found THEN
return _wid;
END IF;
IF (NOT _OWNERINSERT) THEN
RAISE EXCEPTION 'The owner does not exist' USING ERRCODE='YU001';
END IF;
BEGIN
INSERT INTO towner (name) VALUES (_name) RETURNING id INTO _wid;
-- RAISE NOTICE 'owner % created',_name;
return _wid;
EXCEPTION WHEN unique_violation THEN
NULL;--
END;
END LOOP;
END;
$$ LANGUAGE PLPGSQL;
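fgetowner above implements the classic concurrent get-or-insert loop: select, insert on miss, and retry when a concurrent inserter raises unique_violation. The same pattern, sketched in Python against an in-memory dict standing in for the towner table (DuplicateKey and OwnerStore are stand-ins, not real psycopg2 or PostgreSQL APIs):

```python
class DuplicateKey(Exception):
    """Stand-in for PostgreSQL's unique_violation."""

class OwnerStore(object):
    # In-memory stand-in for the towner table and its unique name index.
    def __init__(self):
        self._by_name = {}
        self._next_id = 1

    def select(self, name):
        return self._by_name.get(name)

    def insert(self, name):
        if name in self._by_name:  # simulates the unique index firing
            raise DuplicateKey(name)
        self._by_name[name] = self._next_id
        self._next_id += 1
        return self._by_name[name]

def fgetowner(store, name, owner_insert=True):
    while True:
        wid = store.select(name)
        if wid is not None:
            return wid
        if not owner_insert:
            raise Exception("The owner does not exist")  # ERRCODE YU001
        try:
            return store.insert(name)
        except DuplicateKey:
            pass  # a concurrent inserter won the race: loop and re-select

store = OwnerStore()
assert fgetowner(store, 'alice') == 1
assert fgetowner(store, 'alice') == 1  # second call finds the existing row
```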
--------------------------------------------------------------------------------
-- TMVT
-- id,nbc,nbt,grp,xid,usr_src,usr_dst,xoid,own_src,own_dst,qtt,nat,ack,exhausted,order_created,created
--------------------------------------------------------------------------------
/*
create table tmvt (
id serial UNIQUE not NULL,
nbc int default NULL,
nbt int default NULL,
grp int default NULL,
xid int default NULL,
usr_src text default NULL,
usr_dst text default NULL,
xoid int default NULL,
own_src text default NULL,
own_dst text default NULL,
qtt int8 default NULL,
nat text default NULL,
ack boolean default NULL,
cack boolean default NULL,
exhausted boolean default NULL,
order_created timestamp default NULL,
created timestamp default NULL,
om_exp double precision default NULL,
om_rea double precision default NULL,
CONSTRAINT ctmvt_grp FOREIGN KEY (grp) references tmvt(id) ON UPDATE CASCADE
);
GRANT SELECT ON tmvt TO role_com;
comment on table tmvt is 'Records ownership changes';
comment on column tmvt.nbc is 'number of movements of the exchange cycle';
comment on column tmvt.nbt is 'number of movements of the transaction containing several exchange cycles';
comment on column tmvt.grp is 'references the first movement of the exchange';
comment on column tmvt.xid is 'references the order.id';
comment on column tmvt.usr_src is 'usr provider';
comment on column tmvt.usr_dst is 'usr receiver';
comment on column tmvt.xoid is 'references the order.oid';
comment on column tmvt.own_src is 'owner provider';
comment on column tmvt.own_dst is 'owner receiver';
comment on column tmvt.qtt is 'quantity of the value moved';
comment on column tmvt.nat is 'quality of the value moved';
comment on column tmvt.ack is 'set when movement has been acknowledged';
comment on column tmvt.cack is 'set when the cycle has been acknowledged';
comment on column tmvt.exhausted is 'set when the movement exhausted the order providing the value';
comment on column tmvt.om_exp is 'ω expected by the order';
comment on column tmvt.om_rea is 'real ω of movement';
alter sequence tmvt_id_seq owned by tmvt.id;
GRANT SELECT ON tmvt_id_seq TO role_com;
create index tmvt_grp_idx on tmvt(grp);
create index tmvt_nat_idx on tmvt(nat);
create index tmvt_own_src_idx on tmvt(own_src);
create index tmvt_own_dst_idx on tmvt(own_dst);
CREATE VIEW vmvt AS select * from tmvt;
GRANT SELECT ON vmvt TO role_com;
CREATE VIEW vmvt_tu AS select id,nbc,grp,xid,xoid,own_src,own_dst,qtt,nat,ack,cack,exhausted from tmvt;
GRANT SELECT ON vmvt_tu TO role_com;
create view vmvto as
select id,grp,
usr_src as from_usr,
own_src as from_own,
qtt::text || ' ' || nat as value,
usr_dst as to_usr,
own_dst as to_own,
	to_char(om_exp, 'FM999.9999990') as expected_ω,
	to_char(om_rea, 'FM999.9999990') as actual_ω,
ack
from tmvt where cack is NULL order by id asc;
GRANT SELECT ON vmvto TO role_com;
*/
CREATE SEQUENCE tmvt_id_seq;
--------------------------------------------------------------------------------
-- STACK id,usr,kind,jso,submitted
--------------------------------------------------------------------------------
create table tstack (
id serial UNIQUE not NULL,
usr dtext,
kind eprimitivetype,
jso json, -- representation of the primitive
submitted timestamp not NULL,
PRIMARY KEY (id)
);
comment on table tstack is 'Records the stack of primitives';
comment on column tstack.id is 'id of this primitive';
comment on column tstack.usr is 'user submitting the primitive';
comment on column tstack.kind is 'type of primitive';
comment on column tstack.jso is 'primitive payload';
comment on column tstack.submitted is 'timestamp when the primitive was successfully submitted';
alter sequence tstack_id_seq owned by tstack.id;
GRANT SELECT ON tstack TO role_com;
SELECT fifo_init('tstack');
GRANT SELECT ON tstack_id_seq TO role_com;
--------------------------------------------------------------------------------
CREATE TYPE eprimphase AS ENUM ('submit', 'execute');
--------------------------------------------------------------------------------
-- MSG
--------------------------------------------------------------------------------
CREATE TYPE emsgtype AS ENUM ('response', 'exchange');
create table tmsg (
id serial UNIQUE not NULL,
usr dtext default NULL, -- the user receiver of this message
typ emsgtype not NULL,
jso json default NULL,
created timestamp not NULL
);
alter sequence tmsg_id_seq owned by tmsg.id;
SELECT _grant_read('tmsg');
SELECT _grant_read('tmsg_id_seq');
SELECT fifo_init('tmsg');
CREATE VIEW vmsg AS select * from tmsg WHERE usr = session_user;
SELECT _grant_read('vmsg');
--------------------------------------------------------------------------------
CREATE TYPE yj_error AS (
code int,
reason text
);
CREATE TYPE yerrorprim AS (
id int,
error yj_error
);
CREATE TYPE yj_value AS (
qtt int8,
nat text
);
CREATE TYPE yj_stock AS (
id int,
qtt int8,
nat text,
own text,
usr text
);
CREATE TYPE yj_ω AS (
    id int,
    qtt_prov int8,
    qtt_requ int8,
    type eordertype
);
CREATE TYPE yj_mvt AS (
    id int,
    cycle int,
    orde yj_ω,
stock yj_stock,
mvt_from yj_stock,
mvt_to yj_stock
);
CREATE TYPE yj_order AS (
id int,
error yj_error
);
CREATE TYPE yj_primitive AS (
id int,
error yj_error,
primitive json,
result json,
value json
);
olivierch/openBarter @ 902bb8b5dbe49d7e3f715dfe3f32625ef662d7cb
git ignore related to odt temporary file
diff --git a/.gitignore b/.gitignore
index b097817..147d741 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,10 +1,11 @@
*.[oa]
*.so
*~
+.~*
*.pyc
src/yflowparse.c
src/yflowscan.c
src/results/
src/test/results/
doc/*.pdf