Search Results

Search found 5578 results on 224 pages for 'transport rules'.


  • Apache .shared folder

    - by Kevin
    There are already a bunch of rules in my Apache configuration. What I want to add is the following. There are some shared folders (.shared): /var/www/.shared/, /var/www/.include/.shared/ and /var/www/.include/(.*)/.shared/. Now when someone visits http://domain.com/test.png, Apache should first execute the existing rules and, when the file or folder is not found, look in those .shared folders. So suppose I've got this filesystem: /var/www/.shared/dog.png, /var/www/.shared/test.gif and /var/www/domain.com/dog.png. When someone visits http://domain.com/test.gif, it must load test.gif from the .shared folder. When someone visits http://domain.com/dog.png, it must load dog.png from the domain.com folder (because the existing Apache rules are executed first).

    Read the article

  • Opening a port to make a connection in Windows 7 [closed]

    - by jannes braet
    Possible Duplicate: trouble with opening a port to make a connection. I watched a video on how to open ports in Windows 7. I followed the example by going to the Windows Firewall "Advanced settings" and creating new rules under "Inbound Rules" and "Outbound Rules". I chose to allow connections on all ports, but when I test with canyouseeme, it reports that it cannot see the configured port. Maybe the site is wrong, but I don't really believe so. Could someone tell me how to open my ports so that I and others can connect to them over the internet (given my IP address, of course)?

    Read the article

  • How to configure Unity/Compiz in 12.04 so that Update Manager opens maximized w/screen bigger than 1024×600?

    - by Jani Uusitalo
    In Precise, window auto-maximize is disabled on monitors with a resolution above 1024 × 600 [1]. I have a bigger resolution, but I prefer maximized windows anyway. I want Update Manager to start maximized. What I've tried so far: In Compiz Config Settings Manager, I have Place Windows activated, and 'Windows with fixed placement mode' has windows matching the rule "(name=gnome-terminal) | (name=update-manager)" set to 'Maximize'. With this, Gnome Terminal starts maximized; Update Manager does not. In Compiz Config Settings Manager, I have set a Window Rules [2] rule to match "name=update-manager". Regardless of whether any rules are set, activating Window Rules results in not being able to bring up the Unity Launcher anymore, Alt+Tab window switching becoming slow or entirely nonfunctional, and the screen sporadically freezing completely. Not a viable option, apparently. I've installed Maximus [3] and started it. Update Manager ignores it (or vice versa). I've not tried devilspie and would prefer not to. Having to configure something external for this would seem stupidly redundant with (the no-brainer) Maximus and all these Compiz options already available; I just can't seem to make them work. [1] https://bugs.launchpad.net/ayatana-design/+bug/797808 [2] http://askubuntu.com/a/53657/34756 [3] How to configure my system so that all windows start maximized?

    Read the article

  • Oracle Technology Network Virtual Developer Day: Service Oriented Architecture (SOA)

    - by programmarketingOTN
    Register now! Oracle Technology Network Virtual Developer Day: Service Oriented Architecture (SOA) - Discover the Power of Oracle SOA Suite 11g. Tuesday, July 12, 2011, 9:00 a.m. PT – 1:30 p.m. PT / 12:00 noon EDT – 4:30 p.m. EDT. OTN is proud to host another Virtual Developer Day, this time focusing on SOA (click here to check out the on-demand version of Rich Enterprise Applications and WebLogic). Save yourself and your company some money and join us online for this hands-on virtual workshop. Through developer-focused product presentations and demonstrations delivered by Oracle product and technology experts, there is no faster or more efficient way to jumpstart your Oracle SOA Suite learning. Over the course of the Virtual Developer Day, you will learn how an SOA approach can be implemented, whether starting fresh with new services or reusing existing services. Using Oracle SOA Suite 11g components, you will explore, modify, execute, and monitor an SOA composite application. Topics include SCA, BPEL process execution, adapters, business rules and more. Java and WebLogic experience is not required for the presentations or demonstrations, but it is a plus for the hands-on lab. Come to this event if you are: exploring ways to deliver services faster; integrating packaged and/or legacy applications; developing service orchestration; or planning or starting new development projects. Register online now for this FREE event. Agenda (Tuesday, July 12, 2011, 9:00 a.m. – 1:30 p.m. PT / 12:00 noon – 4:30 p.m. EDT): 9:00 AM Keynote; 9:15 AM Presentation 1, Service Oriented Architecture (SOA) Overview; 9:45 AM Demonstration 1, Mediator and Adapters; 10:15 AM Presentation 2, BPEL Service Orchestration and Business Rules; 10:45 AM Demonstration 2, BPEL Service Orchestration; 11:15 AM Demonstration 3, Oracle Business Rules; 11:45 AM Hands-on Lab time; 1:30 PM Close. Register online now for this FREE event.

    Read the article

  • Juju Zookeeper & Provisioning Agent Not Deployed

    - by Keith Tobin
    I am using juju with the openstack provider, i expected that when i bootstrap that zookeeper and provisioning agent would get deployed on the bootstrap vm in openstack. This dose not seem to be the case. the bootstrap vm gets deployed but it seems that nothing gets deployed to the VM. See logs below, I may be missing something, also how is it possible to log on the bootstrap vm. Could I manual deploy, if so what do I need to do. Juju Bootstrap commend root@cinder01:/home/cinder# juju -v bootstrap 2012-10-12 03:21:20,976 DEBUG Initializing juju bootstrap runtime 2012-10-12 03:21:20,982 WARNING Verification of xxxxS certificates is disabled for this environment. Set 'ssl-hostname-verification' to ensure secure communication. 2012-10-12 03:21:20,982 DEBUG openstack: using auth-mode 'userpass' with xxxx:xxxxxx.10:35357/v2.0/ 2012-10-12 03:21:21,064 DEBUG openstack: authenticated til u'2012-10-13T08:21:13Z' 2012-10-12 03:21:21,064 DEBUG openstack: GET 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors' 2012-10-12 03:21:21,091 DEBUG openstack: 200 '{"flavors": [{"id": "3", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/3", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/3", "rel": "bookmark"}], "name": "m1.medium"}, {"id": "4", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/4", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/4", "rel": "bookmark"}], "name": "m1.large"}, {"id": "1", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/1", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/1", "rel": "bookmark"}], "name": "m1.tiny"}, {"id": "5", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/5", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/5", "rel": "bookmark"}], "name": "m1.xlarge"}, {"id": "2", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/2", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/2", "rel": "bookmark"}], "name": "m1.small"}]}' 2012-10-12 03:21:21,091 INFO Bootstrapping environment 'openstack' (origin: ppa type: openstack)... 2012-10-12 03:21:21,091 DEBUG access object-store @ xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/provider-state 2012-10-12 03:21:21,092 DEBUG openstack: GET 'xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/provider-state' 2012-10-12 03:21:21,165 DEBUG openstack: 200 '{}\n' 2012-10-12 03:21:21,165 DEBUG Verifying writable storage 2012-10-12 03:21:21,165 DEBUG access object-store @ xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/bootstrap-verify 2012-10-12 03:21:21,166 DEBUG openstack: PUT 'xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/bootstrap-verify' 2012-10-12 03:21:21,251 DEBUG openstack: 201 '201 Created\n\n\n\n ' 2012-10-12 03:21:21,251 DEBUG Launching juju bootstrap instance. 
2012-10-12 03:21:21,271 DEBUG access object-store @ xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/juju_master_id 2012-10-12 03:21:21,273 DEBUG access compute @ xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-security-groups 2012-10-12 03:21:21,273 DEBUG openstack: GET 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-security-groups' 2012-10-12 03:21:21,321 DEBUG openstack: 200 '{"security_groups": [{"rules": [{"from_port": -1, "group": {}, "ip_protocol": "icmp", "to_port": -1, "parent_group_id": 1, "ip_range": {"cidr": "0.0.0.0/0"}, "id": 7}, {"from_port": 22, "group": {}, "ip_protocol": "tcp", "to_port": 22, "parent_group_id": 1, "ip_range": {"cidr": "0.0.0.0/0"}, "id": 38}], "tenant_id": "d5f52673953f49e595279e89ddde979d", "id": 1, "name": "default", "description": "default"}]}' 2012-10-12 03:21:21,322 DEBUG Creating juju security group juju-openstack 2012-10-12 03:21:21,322 DEBUG openstack: POST 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-security-groups' 2012-10-12 03:21:21,401 DEBUG openstack: 200 '{"security_group": {"rules": [], "tenant_id": "d5f52673953f49e595279e89ddde979d", "id": 48, "name": "juju-openstack", "description": "juju group for openstack"}}' 2012-10-12 03:21:21,401 DEBUG openstack: POST 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-security-group-rules' 2012-10-12 03:21:21,504 DEBUG openstack: 200 '{"security_group_rule": {"from_port": 22, "group": {}, "ip_protocol": "tcp", "to_port": 22, "parent_group_id": 48, "ip_range": {"cidr": "0.0.0.0/0"}, "id": 54}}' 2012-10-12 03:21:21,504 DEBUG openstack: POST 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-security-group-rules' 2012-10-12 03:21:21,647 DEBUG openstack: 200 '{"security_group_rule": {"from_port": 1, "group": {"tenant_id": "d5f52673953f49e595279e89ddde979d", "name": "juju-openstack"}, "ip_protocol": "tcp", "to_port": 65535, "parent_group_id": 48, "ip_range": {}, "id": 55}}' 2012-10-12 03:21:21,647 DEBUG openstack: POST 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-security-group-rules' 2012-10-12 03:21:21,791 DEBUG openstack: 200 '{"security_group_rule": {"from_port": 1, "group": {"tenant_id": "d5f52673953f49e595279e89ddde979d", "name": "juju-openstack"}, "ip_protocol": "udp", "to_port": 65535, "parent_group_id": 48, "ip_range": {}, "id": 56}}' 2012-10-12 03:21:21,792 DEBUG Creating machine security group juju-openstack-0 2012-10-12 03:21:21,792 DEBUG openstack: POST 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-security-groups' 2012-10-12 03:21:21,871 DEBUG openstack: 200 '{"security_group": {"rules": [], "tenant_id": "d5f52673953f49e595279e89ddde979d", "id": 49, "name": "juju-openstack-0", "description": "juju group for openstack machine 0"}}' 2012-10-12 03:21:21,871 DEBUG access compute @ xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/detail 2012-10-12 03:21:21,871 DEBUG openstack: GET 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/detail' 2012-10-12 03:21:21,906 DEBUG openstack: 200 '{"flavors": [{"vcpus": 2, "disk": 10, "name": "m1.medium", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/3", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/3", "rel": "bookmark"}], "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 40, "ram": 4096, "id": "3", "swap": ""}, {"vcpus": 4, "disk": 10, "name": "m1.large", "links": [{"href": 
"xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/4", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/4", "rel": "bookmark"}], "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 80, "ram": 8192, "id": "4", "swap": ""}, {"vcpus": 1, "disk": 0, "name": "m1.tiny", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/1", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/1", "rel": "bookmark"}], "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "ram": 512, "id": "1", "swap": ""}, {"vcpus": 8, "disk": 10, "name": "m1.xlarge", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/5", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/5", "rel": "bookmark"}], "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 160, "ram": 16384, "id": "5", "swap": ""}, {"vcpus": 1, "disk": 10, "name": "m1.small", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/2", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/2", "rel": "bookmark"}], "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 20, "ram": 2048, "id": "2", "swap": ""}]}' 2012-10-12 03:21:21,907 DEBUG access compute @ xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers 2012-10-12 03:21:21,907 DEBUG openstack: POST 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers' 2012-10-12 03:21:22,284 DEBUG openstack: 202 '{"server": {"OS-DCF:diskConfig": "MANUAL", "id": "a598b402-8678-4447-baeb-59255409a023", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023", "rel": "bookmark"}], "adminPass": "SuFp48cZzdo4"}}' 2012-10-12 03:21:22,284 DEBUG access object-store @ xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/juju_master_id 2012-10-12 03:21:22,285 DEBUG openstack: PUT 'xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/juju_master_id' 2012-10-12 03:21:22,375 DEBUG openstack: 201 '201 Created\n\n\n\n ' 2012-10-12 03:21:27,379 DEBUG Waited for 5 seconds for networking on server u'a598b402-8678-4447-baeb-59255409a023' 2012-10-12 03:21:27,380 DEBUG access compute @ xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023 2012-10-12 03:21:27,380 DEBUG openstack: GET 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023' 2012-10-12 03:21:27,556 DEBUG openstack: 200 '{"server": {"OS-EXT-STS:task_state": "networking", "addresses": {"private": [{"version": 4, "addr": "10.0.0.8"}]}, "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023", "rel": "bookmark"}], "image": {"id": "5bf60467-0136-4471-9818-e13ade75a0a1", "links": [{"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/images/5bf60467-0136-4471-9818-e13ade75a0a1", "rel": "bookmark"}]}, "OS-EXT-STS:vm_state": "building", "OS-EXT-SRV-ATTR:instance_name": "instance-00000060", "flavor": {"id": "1", "links": [{"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/1", 
"rel": "bookmark"}]}, "id": "a598b402-8678-4447-baeb-59255409a023", "user_id": "01610f73d0fb4922aefff09f2627e50c", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 0, "config_drive": "", "status": "BUILD", "updated": "2012-10-12T08:21:23Z", "hostId": "1cdb25708fb8e464d83a69fe4a024dcd5a80baf24a82ec28f9d9f866", "OS-EXT-SRV-ATTR:host": "nova01", "key_name": "", "OS-EXT-SRV-ATTR:hypervisor_hostname": null, "name": "juju openstack instance 0", "created": "2012-10-12T08:21:22Z", "tenant_id": "d5f52673953f49e595279e89ddde979d", "metadata": {}}}' 00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 2012-10-12 03:21:27,557 DEBUG access compute @ xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-floating-ips 2012-10-12 03:21:27,557 DEBUG openstack: GET 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-floating-ips' 2012-10-12 03:21:27,815 DEBUG openstack: 200 '{"floating_ips": [{"instance_id": "a0e0df11-91c0-4801-95b3-62d910d729e9", "ip": "xxxx.35", "fixed_ip": "10.0.0.5", "id": 447, "pool": "nova"}, {"instance_id": "b84f1a42-7192-415e-8650-ebb1aa56e97f", "ip": "xxxx.36", "fixed_ip": "10.0.0.6", "id": 448, "pool": "nova"}, {"instance_id": null, "ip": "xxxx.37", "fixed_ip": null, "id": 449, "pool": "nova"}, {"instance_id": null, "ip": "xxxx.38", "fixed_ip": null, "id": 450, "pool": "nova"}, {"instance_id": null, "ip": "xxxx.39", "fixed_ip": null, "id": 451, "pool": "nova"}, {"instance_id": null, "ip": "xxxx.40", "fixed_ip": null, "id": 452, "pool": "nova"}, {"instance_id": null, "ip": "xxxx.41", "fixed_ip": null, "id": 453, "pool": "nova"}]}' 2012-10-12 03:21:27,815 DEBUG access compute @ xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023/action 2012-10-12 03:21:27,816 DEBUG openstack: POST 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023/action' 2012-10-12 03:21:28,356 DEBUG openstack: 202 '' 2012-10-12 03:21:28,356 DEBUG access object-store @ xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/provider-state 2012-10-12 03:21:28,357 DEBUG openstack: PUT 'xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/provider-state' 2012-10-12 03:21:28,446 DEBUG openstack: 201 '201 Created\n\n\n\n ' 2012-10-12 03:21:28,446 INFO 'bootstrap' command finished successfully Juju Status Command root@cinder01:/home/cinder# juju -v status 2012-10-12 03:23:28,314 DEBUG Initializing juju status runtime 2012-10-12 03:23:28,320 WARNING Verification of xxxxS certificates is disabled for this environment. Set 'ssl-hostname-verification' to ensure secure communication. 2012-10-12 03:23:28,320 DEBUG openstack: using auth-mode 'userpass' with xxxx:xxxxxx.10:35357/v2.0/ 2012-10-12 03:23:28,320 INFO Connecting to environment... 
2012-10-12 03:23:28,403 DEBUG openstack: authenticated til u'2012-10-13T08:23:20Z' 2012-10-12 03:23:28,403 DEBUG access object-store @ xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/provider-state 2012-10-12 03:23:28,403 DEBUG openstack: GET 'xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/provider-state' 2012-10-12 03:23:35,480 DEBUG openstack: 200 'zookeeper-instances: [a598b402-8678-4447-baeb-59255409a023]\n' 2012-10-12 03:23:35,480 DEBUG access compute @ xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023 2012-10-12 03:23:35,480 DEBUG openstack: GET 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023' 2012-10-12 03:23:35,662 DEBUG openstack: 200 '{"server": {"OS-EXT-STS:task_state": null, "addresses": {"private": [{"version": 4, "addr": "10.0.0.8"}, {"version": 4, "addr": "xxxx.37"}]}, "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023", "rel": "bookmark"}], "image": {"id": "5bf60467-0136-4471-9818-e13ade75a0a1", "links": [{"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/images/5bf60467-0136-4471-9818-e13ade75a0a1", "rel": "bookmark"}]}, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000060", "flavor": {"id": "1", "links": [{"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/1", "rel": "bookmark"}]}, "id": "a598b402-8678-4447-baeb-59255409a023", "user_id": "01610f73d0fb4922aefff09f2627e50c", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 1, "config_drive": "", "status": "ACTIVE", "updated": "2012-10-12T08:21:40Z", "hostId": "1cdb25708fb8e464d83a69fe4a024dcd5a80baf24a82ec28f9d9f866", "OS-EXT-SRV-ATTR:host": "nova01", "key_name": "", "OS-EXT-SRV-ATTR:hypervisor_hostname": null, "name": "juju openstack instance 0", "created": "2012-10-12T08:21:22Z", "tenant_id": "d5f52673953f49e595279e89ddde979d", "metadata": {}}}' 2012-10-12 03:23:35,663 DEBUG Connecting to environment using xxxx.37... 2012-10-12 03:23:35,663 DEBUG Spawning SSH process with remote_user="ubuntu" remote_host="xxxx.37" remote_port="2181" local_port="45859". 
2012-10-12 03:23:36,173:4355(0x7fd581973700):ZOO_INFO@log_env@658: Client environment:zookeeper.version=zookeeper C client 3.3.5 2012-10-12 03:23:36,173:4355(0x7fd581973700):ZOO_INFO@log_env@662: Client environment:host.name=cinder01 2012-10-12 03:23:36,174:4355(0x7fd581973700):ZOO_INFO@log_env@669: Client environment:os.name=Linux 2012-10-12 03:23:36,174:4355(0x7fd581973700):ZOO_INFO@log_env@670: Client environment:os.arch=3.2.0-23-generic 2012-10-12 03:23:36,174:4355(0x7fd581973700):ZOO_INFO@log_env@671: Client environment:os.version=#36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 2012-10-12 03:23:36,174:4355(0x7fd581973700):ZOO_INFO@log_env@679: Client environment:user.name=cinder 2012-10-12 03:23:36,174:4355(0x7fd581973700):ZOO_INFO@log_env@687: Client environment:user.home=/root 2012-10-12 03:23:36,175:4355(0x7fd581973700):ZOO_INFO@log_env@699: Client environment:user.dir=/home/cinder 2012-10-12 03:23:36,175:4355(0x7fd581973700):ZOO_INFO@zookeeper_init@727: Initiating client connection, host=localhost:45859 sessionTimeout=10000 watcher=0x7fd57f9146b0 sessionId=0 sessionPasswd= context=0x2c1dab0 flags=0 2012-10-12 03:23:36,175:4355(0x7fd577fff700):ZOO_ERROR@handle_socket_error_msg@1579: Socket [127.0.0.1:45859] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2012-10-12 03:23:39,512:4355(0x7fd577fff700):ZOO_ERROR@handle_socket_error_msg@1579: Socket [127.0.0.1:45859] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2012-10-12 03:23:42,848:4355(0x7fd577fff700):ZOO_ERROR@handle_socket_error_msg@1579: Socket [127.0.0.1:45859] zk retcode=-4, errno=111(Connection refused): server refused to accept the client ^Croot@cinder01:/home/cinder#

    Read the article

  • Evil DRY

    - by StefanSteinegger
    DRY (Don't Repeat Yourself) is a basic software design and coding principle. But there is just no silver bullet. While DRY should increase maintainability by avoiding common design mistakes, it can lead to huge maintenance problems when misunderstood. The root of the problem is most probably that many developers believe DRY means that any piece of code written more than once should be made reusable. But the principle is stated as "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system." The important thing here is "knowledge". Nobody ever said "every piece of code". I will try to give some examples of misusing the DRY principle. Code Repetitions by Coincidence. There is code that is repeated by pure coincidence. It is not the same code because it is based on the same piece of knowledge; it is just the same by coincidence. It's hard to give an example of such a case. Just think of some lines of code about which the developer thinks, "I already wrote something similar." Then he takes the original code, puts it into a public method, or even worse into a base class where none existed before, adds some weird arguments and some if or switch statements to support all the special cases, and calls this "increasing maintainability based on the DRY principle". The resulting "reusable method" is usually something the developer cannot even give a meaningful name, because its contents aren't anything specific; it is just a bunch of code. For the same reason, nobody will really understand this piece of code. Typically this method only makes sense to call after some other method has been called. All the symptoms of really bad design are evident. The fact is, writing this kind of "reusable method" is worse than copy-pasting! Believe me. What will happen when you change this weird piece of code? You can't say, because you can't understand what the code is actually doing. So you'd better not touch it anymore. Maintainability just died. Of course this problem exists with any badly designed code. But because the developer tried to make this method as reusable as possible, large parts of the system become dependent on it. Completely independent parts get tightly coupled by this common piece of code. Changing the single common place will have effects anywhere in the system, a typical symptom of too-tight coupling. Without trying to dogmatically (and wrongly) apply the DRY principle, you would just have had a system with a weak design. Now you have a system which simply can't be maintained anymore. So what can you do against it? When making code reusable, always identify the genuinely reusable parts of it. Find the reason why the code is repeated; find the common "piece of knowledge". If you have to search too far, it's probably not really there. Explain it to a colleague; if you can't explain it, or the explanation is too complicated, it's probably not worth reusing. If you identify the piece of knowledge, don't forget to carefully find the place where it should be implemented. Reusing code is never worth giving up a clean design. Methods always need to do something specific. If you can't give a method a simple and explanatory name, you probably did something weird. If you can't find the common piece of knowledge, try to make the code simpler. For instance, if you have some complicated string or collection operations within this code, write some general-purpose operations into a helper class. If your code gets simple enough, it's not so bad if it can't be reused.
    If you are not able to find anything simple and reasonable, copy-paste it. Put a comment into the code to reference the other copies. You may find a solution later. Requirements Repetitions by Coincidence. Let's assume that you need to implement complex tax calculations for many countries. It's possible that some countries have very similar tax rules. These rules are still completely independent from each other, since every country can change its own. (Assuming that this similarity is actually coincidental and not a result of political membership; there might be basic rules applying to all European countries, etc.) Let's assume that there are similarities between an Asian country and an African country. Moving the common part to a central place will cause problems. What happens if one of the countries changes its rules? Or - more likely - what happens if users in one country complain about an error in the calculation? If there is shared code, it is very risky to change it, even for a bugfix. It is hard to spot requirements that are repeated only by coincidence. In that case there is not much you can do against the repetition of the code. What you really should consider is making the coding of the rules as simple as possible. This independent knowledge, "Tax Rules in Timbuktu" or wherever, should be as pure as possible, without much overhead and without stuff that does not belong to it. Then you can write every independent requirement short and clean. DRYing try-catch and using Blocks. This is a technical issue. Blocks like try-catch or using (e.g. in C#) are very hard to DRY. Imagine complex exception handling, including several catch blocks. When the contents of the try block as well as the contents of the individual catch blocks are trivial, but the whole structure is repeated in many places in the code, there is almost no reasonable way to DRY it. try { // trivial code here using (Thingy thing = new Thingy()) { // trivial, but always different line of code } } catch (FooException foo) { // trivial foo handling } catch (BarException bar) { // trivial bar handling } catch { // trivial common handling } finally { // trivial finally block } The key here is that every block is trivial, so there is nothing to move into a separate method. The only part that differs from case to case is the line of code in the body of the using block (or any other block). The situation is especially interesting if the many occurrences of this structure are completely independent: they appear in classes with no common base class, they don't aggregate each other, and so on. Let's assume that this is a common pattern in service methods within the whole system. Examples of Evil DRYing in this situation: Put an if or switch statement into the method to choose the line of code to execute. There are several reasons why this is not a good idea: the tight coupling of the formerly independent implementations is the strongest one; the readability of the code suffers; and a parameter is used to control the logic. Put everything into a method which takes a delegate as an argument to call. The caller just passes his "specific line of code" to this method. The code will be very unreadable, and the same maintainability problems apply as in any "Code Repetition by Coincidence" situation. Force a base class onto all the classes where this pattern appears and use the template method pattern. This has the same readability and maintainability problems as above, but is additionally complex and tightly coupled because of the base class.
    I would call this "Inheritance by Coincidence", which will not lead to great software design. What can you do against it? Ideally, the individual line of code is a call to a class or interface, which could be made individual by inheritance. If that were the case, it wouldn't be a problem at all; I assume it is not such a trivial case. Consider refactoring the error concept to make error handling easier. The last, but not the worst, option is to keep the replications. Some patterns of code must be kept consistent; there is nothing we can do against that, and there is no reason to make the code unreadable. Conclusion. The DRY principle is an important and basic principle every software developer should master. The key is to identify the "pieces of knowledge". There is code which can't easily be reused, for technical reasons. It requires quite a bit of flexibility and creativity to make code simple and maintainable. That is not the fault of the principle; it is the problem of blindly applying a principle without understanding the problem it should solve. The result is usually much worse than ignoring the principle.
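
    To make the "piece of knowledge" distinction concrete, here is a minimal sketch in Python (rather than the C# used above), with made-up tax rules purely for illustration: repetition by coincidence is kept separate, while a genuinely shared piece of knowledge gets a single authoritative definition.

        # Two countries' tax rules that are identical only by coincidence:
        # keep them separate, so each can change independently when its law changes.
        def tax_timbuktu(income: float) -> float:
            return income * 0.20 + 50.0   # hypothetical current Timbuktu rule

        def tax_elbonia(income: float) -> float:
            return income * 0.20 + 50.0   # same numbers today, but a different law

        # A genuinely shared piece of knowledge: one authoritative definition,
        # reused everywhere it applies.
        EU_BASIC_RATE = 0.21              # assumed value for the example only

        def gross_price(net: float) -> float:
            return net * (1.0 + EU_BASIC_RATE)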

    Read the article

  • Developing a Configurable Pricing Program

    - by Ben DeMott
    The organization I work at has some interesting requirements when it comes to pricing for online commerce. Currently the developers write different 'pricing rules' and those rules can be applied to our items based on attributes of the items. For example: INPUTS: [cost, sug_retail, discontinued, warehouse_qty, orderable_qty, brand, type, days_available, shipping_rate, weight, map_protected, map_discount] MATCH: brand=x, warehouse_qty 1, discontinued=True, map_protected=False SET: retail_price = (sug_retail * 0.95), offer_price1 = (cost * 1.25 + shipping_rate) I am looking to allow the merchandising team to have more control over the pricing and formulas - they are, after all, technical enough to write Excel formulas. I've been looking at writing a desktop application that uses something like numexpr http://code.google.com/p/numexpr/ or http://sympy.org/en/index.html to allow non-programmers to integrate their own logic into our pricing backend. We have multiple price tiers we have to set, for multiple markets, so an elegant solution is needed. It's getting frustrating for the dev team to continually tweak/manage all of the pricing rules (we sell over 200 brands in 3 markets). My question is: does this seem like a decent approach? Can you think of a better way to parse a string mathematical grammar? Can you think of a different way for users to provide formulas to integrate into an automated pricing system? Does anyone know of any examples of existing applications that do this? Excel and Access are out of the question - the volume of data we manipulate has already proven the need to automate it - now we just need some visibility into that automation.
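
    As a rough illustration of the approach being considered, here is a minimal Python sketch that evaluates merchandiser-supplied match conditions and price formulas against an item's attributes. The attribute values and the comparison operator in the match condition are assumptions (the operator is missing from the example above), and a production system would want a real expression parser such as the numexpr or sympy libraries mentioned, rather than a restricted eval.

        # Hypothetical item and rule; field names follow the INPUTS list above.
        item = {"cost": 10.0, "sug_retail": 20.0, "discontinued": True,
                "warehouse_qty": 1, "brand": "x", "map_protected": False,
                "shipping_rate": 2.5}

        rule = {
            "match": "brand == 'x' and warehouse_qty >= 1 and discontinued and not map_protected",
            "set": {
                "retail_price": "sug_retail * 0.95",
                "offer_price1": "cost * 1.25 + shipping_rate",
            },
        }

        def evaluate(expr, attrs):
            # Expressions see only the item's attributes, no builtins.
            return eval(expr, {"__builtins__": {}}, dict(attrs))

        if evaluate(rule["match"], item):
            prices = {name: evaluate(expr, item) for name, expr in rule["set"].items()}
            print(prices)  # {'retail_price': 19.0, 'offer_price1': 15.0}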

    Read the article

  • NetworkManager broken after upgrade to Kubuntu Saucy

    - by queueoverflow
    I had Kubuntu 13.04 on my ThinkPad X220, and I upgraded to 13.10 and I am not able to connect to a wired or wireless connection. The new network tray icon does not show any entries at all. In the menu of the tray icon, there is an error saying: Require NetworkManager 0.9.8, found . I then tried the following: nmcli con ** (process:3695): WARNING **: Could not initialize NMClient /org/freedesktop/NetworkManager: Rejected send message, 3 matched rules; type="method_call", sender=":1.64" (uid=1000 pid=3695 comm="nmcli con ") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination="org.freedesktop.NetworkManager" (uid=0 pid=1116 comm="NetworkManager ") Error: nmcli (0.9.8.0) and NetworkManager (unknown) versions don't match. Force execution using --nocheck, but the results are unpredictable. nmcli dev ** (process:3700): WARNING **: Could not initialize NMClient /org/freedesktop/NetworkManager: Rejected send message, 3 matched rules; type="method_call", sender=":1.65" (uid=1000 pid=3700 comm="nmcli dev ") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination="org.freedesktop.NetworkManager" (uid=0 pid=1116 comm="NetworkManager ") Error: nmcli (0.9.8.0) and NetworkManager (unknown) versions don't match. Force execution using --nocheck, but the results are unpredictable. nm-tool ** (process:3705): WARNING **: Could not initialize NMClient /org/freedesktop/NetworkManager: Rejected send message, 3 matched rules; type="method_call", sender=":1.66" (uid=1000 pid=3705 comm="nm-tool ") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination="org.freedesktop.NetworkManager" (uid=0 pid=1116 comm="NetworkManager ") NetworkManager Tool State: unknown ** (process:3705): WARNING **: error: could not connect to NetworkManager Running those as root works, however. I was also able to run nmcli con up id DHCP which got my DHCP connection working and giving me internet access. That did not work using a Wifi connection though, and I do need those. How can I get networking back to work without a reinstall?

    Read the article

  • How to balance a non-symmetric "extension" based game?

    - by Klaim
    Most strategy games have fixed units and possible behaviours. However, think of a game like Magic: The Gathering: each card is a set of rules, and new sets of card types are created regularly. I remember that the first editions of the game were said to be prohibited in official tournaments because the cards were often too powerful. Later extensions of the game provided more subtle effects/rules on the cards, and they apparently managed to balance the game effectively, even though thousands of different cards are possible. I'm working on a strategy game that is in a similar position: all units are provided by extensions, and the game is intended to be extended for some years, at least. The variety of the units' effects is very large, even with some basic design limitations in place to keep it manageable. Each player chooses a set of units to play with (defining their global strategy) before playing, like choosing a themed deck of Magic cards. As it's a strategy game (you can think of Magic as a strategy game too, from some points of view), it's essentially skirmish based, so the game has to be fair even if the players don't choose the same units before starting to play. So, how do you go about balancing this type of non-symmetric (strategy) game when you know it will always be extended? For the moment, I'm trying to apply these rules, but I'm not sure they are right because I don't have enough design experience to know: each unit provides one unique effect; each unit should have an opposite unit with an opposite effect that cancels it out; some limitations are based on the gameplay; and try to get a lot of beta tests before each extension release. Does it look like I'm in the most complex case?
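
    One small, hedged illustration of the second rule listed (every effect having an opposite that cancels it): units can be kept as plain data, and a balance check can flag effects that nothing in the current extension set counters. All unit names and effects below are made up.

        # Hypothetical unit data: each unit provides one effect and counters another.
        UNITS = [
            {"name": "Pyromancer",  "effect": "burn",   "counters": "freeze"},
            {"name": "Frost Witch", "effect": "freeze", "counters": "burn"},
            {"name": "Field Medic", "effect": "heal",   "counters": "poison"},
        ]

        def uncountered_effects(units):
            # Effects that exist in the pool but that no unit can cancel.
            provided = {u["effect"] for u in units}
            countered = {u["counters"] for u in units}
            return provided - countered

        print(uncountered_effects(UNITS))  # {'heal'} -> nothing cancels healing yet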

    Read the article

  • Error when running debuild on package source

    - by Chris Wilson
    I'm attempting to build the squeak-vm source but am getting an error every time I do so. The output is: dpkg-buildpackage -rfakeroot -D -us -uc dpkg-buildpackage: export CFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export CPPFLAGS from dpkg-buildflags (origin: vendor): dpkg-buildpackage: export CXXFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export FFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export LDFLAGS from dpkg-buildflags (origin: vendor): -Wl,-Bsymbolic-functions dpkg-buildpackage: source package squeak-vm dpkg-buildpackage: source version 1:4.0.3.2202-2 dpkg-buildpackage: source changed by José L. Redrejo Rodríguez <[email protected]> dpkg-source --before-build squeak-vm-4.0.3.2202 dpkg-buildpackage: host architecture i386 fakeroot debian/rules clean dh_testdir dh_testroot rm -f build-stamp configure-stamp rm -f unix/cmake/config.sub unix/cmake/config.guess /usr/bin/make -f debian/rules unpatch make[1]: Entering directory `/home/notgary/Projects/squeak/squeak-vm-4.0.3.2202' QUILT_PATCHES=debian/patches \ quilt --quiltrc /dev/null pop -a -R || test $? = 2 Patch linex.patch does not remove cleanly (refresh it or enforce with -f) make[1]: *** [unpatch] Error 1 make[1]: Leaving directory `/home/notgary/Projects/squeak/squeak-vm-4.0.3.2202' make: *** [clean] Error 2 dpkg-buildpackage: error: fakeroot debian/rules clean gave error exit status 2 debuild: fatal error at line 1337: dpkg-buildpackage -rfakeroot -D -us -uc failed

    Read the article

  • Why is testing MVC Views frowned upon?

    - by Peter Bernier
    I'm currently laying the groundwork for an ASP.NET MVC application and I'm looking into what sort of unit tests I should be prepared to write. I've seen in multiple places people essentially saying 'don't bother testing your views; there's no logic, it's trivial, and it will be covered by an integration test'. I don't understand how this has become the accepted wisdom. Integration tests serve an entirely different purpose than unit tests. If I break something, I don't want to find out half an hour later when my integration tests break; I want to know immediately. Sample scenario: let's say we're dealing with a standard CRUD app with a Customer entity. The customer has a name and an address. At each level of testing, I want to verify that the Customer retrieval logic gets both the name and the address properly. To unit-test the repository, I write an integration test to hit the database. To unit-test the business rules, I mock out the repository, feed the business rules appropriate data, and verify my expected results are returned. What I'd like to do: to unit-test the UI, I mock out the business rules, set up my expected customer instance, render the view, and verify that the view contains the appropriate values for the instance I specified. What I'm stuck doing: to test the UI, I write an integration test, set up an appropriate login, create the required data in the database, open a browser, navigate to the customer, and verify the resulting page contains the appropriate values for the instance I specified. I realize that there is overlap between the two scenarios discussed above, but the key difference is the time and effort required to set up and execute the tests. If I (or another dev) remove the address field from the view, I don't want to wait for the integration test to discover this. I want it discovered and flagged in a unit test that gets run multiple times daily. I get the feeling that I'm just not grasping some key concept. Can someone explain why wanting immediate test feedback on the validity of an MVC view is a bad thing? (Or, if not bad, then not the expected way to get said feedback.)

    Read the article

  • What is the correct way to restart udev in Ubuntu?

    - by zerkms
    I've changed the name of my eth1 interface to eth0. How do I ask udev to re-read the config now? service udev restart and udevadm control --reload-rules don't help. So is there any valid way other than rebooting? (Yes, a reboot fixes this issue.) UPD: yes, I know I should prepend the commands with sudo, but either one I posted above changes nothing in the ifconfig -a output: I still see eth1, not eth0. UPD 2: I just changed the NAME property of the udev rule line. I don't know any reason for this to be ineffective. There is no error when executing either of the commands I posted above, but they just don't change the actual interface name in the ifconfig -a output. If I reboot, the interface name changes as expected. UPD 3: let me explain the whole case better ;-) For development purposes I am writing a script that clones virtual machines (VirtualBox-driven) and pre-sets them up in some way. So I run a command to clone a VM and start it, and since the network interface MAC has changed, udev adds a second rule to the persistent network rules. Right after the machine boots for the first time there are 2 rules: eth0, which does not exist, since its MAC existed only in the original VM image, and eth1, which exists, but all the configuration in all files refers to eth0, so that is not good for me. So with sed I delete the line with eth0 (it is obsolete and useless in the cloned image) and replace eth1 with eth0. So currently I have a valid persistent rule, but there is still eth1 in /dev. The issue: I don't want to reboot the machine (it takes extra time, which is not good at the VM-building stage); I just want to have my /dev rebuilt with some command so I have a ready-to-use VM without any reboots.

    Read the article

  • What ufw allows/denies by default?

    - by mgibsonbr
    I was accessing a server running Ubuntu 12.04 Server using SSH and managed to lock myself out of it. I'm still wondering how that happened: The firewall was enabled by default; sudo ufw status did not show any rules (but I could SSH to the server normally); I tried explicitly allowing ports 80 and 443 using the commands: sudo ufw allow 80 sudo ufw allow 443 sudo ufw status now showed something like: Status: active To Action From -- ------ ---- 80 ALLOW Anywhere 80 ALLOW Anywhere (v6) 443 ALLOW Anywhere 443 ALLOW Anywhere (v6) (Recalling from memory and from some examples; I can't access the server to see the exact output, so I might be mistaken.) After logging out of SSH, I now can't log in anymore (connection timeout). What just happened? There were no DENY rules previously (AFAIK), nor did I introduce any. How could SSH have been available before and not be now? Does ufw (or, more precisely, iptables) allow everything by default until you explicitly allow something, and from then on deny everything else by default? Or did I do something wrong that broke the existing rules somehow?

    Read the article

  • Procedural world generation oriented on gameplay features

    - by Richard Fabian
    In large procedural landscape games, the land seems dull, but that's probably because the real world is largely dull, with only limited places where the scenery is dramatic or tactical. Looking at world generation from this point of view, a landscape generator for a game (that is, not for the sake of scenery, but for the sake of gameplay) needs to follow not the rules of landscaping but rules married to the expectations of the gamer. For example, there could be a choke point / route generator that creates hills, ravines, rivers and mountains between cities, rather than the natural way cities arise, scattered on the land based on resources or conditions generated by the mountains and rainfall patterns. Is there any existing work being done like this? Start with cities or population centres and then add in the terrain afterwards? The reason I'm asking is that I'd previously pondered taking existing maps from fantasy fiction (my own and others'), putting the information into the system as a base point, and then generating a good world to play in from it. This seems covered by existing technology, that is, where the designer puts in all the necessary information such as the city populations, resources, biomes, road networks and rivers, then lets the PCG fill in the gaps. But now I'm wondering if it may be possible to have a content generator also generate the overall design. Generate the cities and population centres, balancing them so that there is a natural-seeming need for commerce, then generate the positions and connectivity, then from the type of city produce the list of necessary resources that must be nearby, and only then, maybe given some rules on how to make the journey between cities both believable and interesting, generate the final content including the roads, the choke points, the bridges and tunnels, ferries, and the terrain including the necessary biomes and coastline. If this has been done before, I'd like to know, and would like to know what went wrong and what went right.
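
    As a very rough sketch of the "gameplay first" ordering described here (cities and their connectivity first, terrain derived from them afterwards), with all numbers and names invented purely for illustration:

        import random

        random.seed(42)

        # 1. Generate population centres first.
        cities = [{"name": f"city_{i}",
                   "pos": (random.uniform(0, 100), random.uniform(0, 100)),
                   "population": random.randint(1_000, 50_000)}
                  for i in range(5)]

        # 2. Derive connectivity from a gameplay need (here: enough combined
        #    population to justify trade), not from pre-existing terrain.
        routes = []
        for i, a in enumerate(cities):
            for b in cities[i + 1:]:
                if a["population"] + b["population"] > 40_000:
                    routes.append({"from": a["name"], "to": b["name"],
                                   "feature": "mountain pass"})  # choke point to place later

        # 3. Terrain generation would then raise mountains around each route and
        #    carve the pass, rather than placing cities after the terrain exists.
        print(len(cities), "cities,", len(routes), "routes needing choke points")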

    Read the article

  • Generic software code style enforcer

    - by FuzziBear
    It seems to me to be a fairly common thing to do, where you have some code that you'd like to automatically run through a code style tool to catch when people break your coding style guide(s). Particularly if you're working on code that has multiple languages (which is becoming more common with web-language-x and javascript), you generally want to apply similar code style guides to both and have them enforced. I've done a bit of research, but I've only been able to find tools to enforce code style guidelines (not necessarily applying the code style, just telling you when you break code style guidelines) for a particular language. It would seem to me a reasonably trivial thing to do by just using current IDE rules for syntax highlighting (so that you don't check style guide rules inside quotes or strings, etc) and a whole lot of regexes to enforce some really generic things. Examples: "if (" rather than "if("; checking lines with only whitespace. Are there any tools that do this kind of really generic style checking? I'd prefer it to be easily configurable for different languages (because like it or not, some things would just not work cross language) and to add new "rules" to check new things.
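
    A minimal Python sketch of the kind of generic, regex-driven checker described here (it only reports violations, it doesn't rewrite anything; the rule set is deliberately simplistic and the handling of string contents is only hinted at in a comment):

        import re
        import sys

        RULES = [
            (re.compile(r"\b(if|for|while)\("), "missing space: use 'if (' rather than 'if('"),
            (re.compile(r"^[ \t]+$"), "line contains only whitespace"),
        ]

        def check_file(path):
            violations = []
            with open(path, encoding="utf-8") as source:
                for lineno, line in enumerate(source, start=1):
                    code = line.rstrip("\n")
                    # A real tool would strip string literals and comments here,
                    # e.g. by reusing the IDE's syntax-highlighting rules.
                    for pattern, message in RULES:
                        if pattern.search(code):
                            violations.append((path, lineno, message))
            return violations

        if __name__ == "__main__":
            for path in sys.argv[1:]:
                for name, lineno, message in check_file(path):
                    print(f"{name}:{lineno}: {message}")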

    Read the article

  • Oracle is Child&rsquo;s Play&hellip;in NSW

    - by divya.malik
    A few weeks ago, my colleague Michael Seback posted a blog entry on Oracle's acquisition of Haley. We recently read an interesting report from Down Under, and here is our press release on the implementation of Oracle's Policy Automation software in New South Wales, which I thought I would share. We always love hearing about our software "at work", especially in the public sector social services area, where it makes a big difference to people's lives. Here are some of the reasons why NSW chose Oracle software: "One of the things Oracle's Policy Automation system is good at is allowing you to take decision trees and rules that are obviously written in English and code them up using very much a natural language approach," said Holling (CIO for Human Services). "So it was quite a short process to translate the final set of rules that were written on paper into business rules that were actually embedded in the system." "Another reason why we chose Oracle's automation tool is because with future versions of Siebel it comes very tightly integrated with that. It allows us then to basically take the results of the Policy Automation survey and actually populate our client management system database with that information," said Holling. According to Surend Dayal, North America VP, Oracle's Policy Automation has applications across a wide range of industries, including the public sector (especially health and human services), financial services, insurance, and even airline rewards programs. In other words, any business process that requires consistent, accurate decision-making where complex legislation and/or internal policies are involved. Click here to read more about Oracle and Haley.

    Read the article

  • How can state changes be batched while adhering to opaque-front-to-back/alpha-blended-back-to-front?

    - by Sion Sheevok
    This is a question I've never been able to find the answer to. Batching objects with similar states is a major performance gain when rendering many objects. However, I've learned various rules for drawing objects in the game world. Draw all opaque objects, front-to-back. Draw all alpha-blended objects, back-to-front. Some of the major parameters to batch by, as I understand it, are textures, vertex buffers, and index buffers. It seems that, as long as you are adhering to the above two rules, there's little to be done with regard to batching. I see one possibility to batch while still adhering to the above two rules. Opaque objects can still be drawn out of depth order, because drawing them front-to-back is merely a fillrate optimization, while state changes may very well be far more expensive than the overdraw of drawing out of depth order. However, non-opaque objects, those that require alpha blending at least, must be drawn back-to-front in order to avoid rendering artifacts. Is the loss of the fillrate optimization for opaques worth the state batching optimization?
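
    A small Python sketch of the trade-off being asked about (sort opaque draw calls primarily by state so they batch, accepting some out-of-depth-order overdraw, while alpha-blended calls stay strictly back-to-front); the field names are illustrative, not from any particular engine:

        from dataclasses import dataclass

        @dataclass
        class DrawCall:
            material_id: int   # stands in for texture / vertex buffer / index buffer state
            depth: float       # distance from the camera
            opaque: bool

        calls = [DrawCall(2, 5.0, True), DrawCall(1, 9.0, True),
                 DrawCall(2, 1.0, True), DrawCall(3, 7.0, False),
                 DrawCall(3, 2.0, False)]

        # Opaque: batch by state first, then front-to-back within each batch.
        opaque = sorted((c for c in calls if c.opaque),
                        key=lambda c: (c.material_id, c.depth))

        # Alpha-blended: strictly back-to-front, regardless of state.
        transparent = sorted((c for c in calls if not c.opaque),
                             key=lambda c: -c.depth)

        for call in opaque + transparent:
            print(call)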

    Read the article

  • When searching in Outlook, encountered this error "Instant Search encountered a problem while trying

    - by Imagineer
    Sometime when searching for certain keyword, I get this error "Instant Search encountered a problem while trying to display search results. Modifying your query may resolve this problem." I have enabled Outlook logging to determine what is the error as suggested by someone in other forum. but I haven't had a clue how to decipher it. 2010.05.11 09:38:10 <<<< Logging Started (level is LTF_TRACE) >>>> 2010.05.11 09:38:10 HELPER::Initialize called 2010.05.11 09:38:10 Initializing: Finding a Transport 2010.05.11 09:38:10 MAPI XP Call: XPProviderInit in EMSMDB.DLL, hr = 0x00000000 2010.05.11 09:38:10 MAPI XP Call: TransportLogon, hr = 0x8004011d 2010.05.11 09:38:10 MAPI XP Call: Shutdown, hr = 0x00000000 2010.05.11 09:38:10 MAPI XP Call: XPProviderInit in EMSMDB.DLL, hr = 0x00000000 2010.05.11 09:38:10 MAPI Status: (-- -- ---/--- -- ---) 2010.05.11 09:38:10 MAPI XP Call: TransportLogon, hr = 0x00000000 2010.05.11 09:38:10 Initializing: Found a transport, Error code = 0x00000000 2010.05.11 09:38:10 MAPI XP Call: AddressTypes, hr = 0x00000000, cAddrs = 4, cUids = 1 2010.05.11 09:38:10 MAPI XP Call: RegisterOptions, hr = 0x00000000, cOptions = 2 2010.05.11 09:38:10 MAPI Status: (IN -- ---/OUT -- ---) 2010.05.11 09:38:10 MAPI XP Call: TransportNotify(BEGIN_IN|BEGIN_OUT), hr = 0x00000000 2010.05.11 09:38:10 HELPER::Initialize done, Error code = 0x00000000 2010.05.11 09:38:10 HELPER::GetCapabilities called, Error code = 0x00000000 2010.05.11 09:38:10 Microsoft Exchange: Synch operation started (flags = 00000031) 2010.05.11 09:38:10 Microsoft Exchange: StartImport(flags = 00000000, max msg = ffffffff): full items 2010.05.11 09:38:10 Microsoft Exchange: UploadItems: 0 messages to send 2010.05.11 09:38:11 Starting the Spooling Cycle 2010.05.11 09:38:11 MAPI Status: (IN fl ---/OUT -- ---) 2010.05.11 09:38:11 MAPI XP Call: FlushQueues, hr = 0x00000000, ulFlushFlags = 0x0000001c 2010.05.11 09:38:11 MAPI XP Call: Poll, hr = 0x00000000, cPollCount = 855 2010.05.11 09:38:11 Progress: Receiving message (message 1 out of 856, size unknown) 2010.05.11 09:38:11 Downloading one message 2010.05.11 09:38:11 Transport tightly coupled with store, download is NOOP 2010.05.11 09:38:11 Downloading done, Error code = 0x8004010f 2010.05.11 09:38:11 MAPI Status: (IN -- ---/OUT -- ---) 2010.05.11 09:38:11 FINISHED MAPI TASK 2010.05.11 09:38:11 Microsoft Exchange: ReportStatus: RSF_COMPLETED, hr = 0x00000000 2010.05.11 09:38:11 Finishing the Spooling Cycle, Error code = 0x00000000 2010.05.11 09:38:11 EXECUTING EndSession MAPI TASK 2010.05.11 09:38:11 Starting the Simplified Transfer Cycle 2010.05.11 09:38:11 MAPI XP Call: Poll, hr = 0x00000000, iMsgsReceived = 0, cPollCount = 855 2010.05.11 09:38:11 Progress: Receiving message (message 1 out of 856, size unknown) 2010.05.11 09:38:11 Downloading one message 2010.05.11 09:38:11 MAPI Status: (IN -- act/OUT -- ---) 2010.05.11 09:38:11 MAPI Status: (IN -- ---/OUT -- ---) 2010.05.11 09:38:11 Downloading done, Error code = 0x8004010f 2010.05.11 09:38:11 Finishing the Spooling Cycle, Error code = 0x00000000 2010.05.11 09:38:11 FINISHED MAPI TASK 2010.05.11 09:38:11 Microsoft Exchange: ReportStatus: RSF_COMPLETED, hr = 0x00000000 2010.05.11 09:38:11 Microsoft Exchange: Synch operation completed 2010.05.11 10:08:15 Microsoft Exchange: Synch operation started (flags = 00000031) 2010.05.11 10:08:15 Microsoft Exchange: StartImport(flags = 00000000, max msg = ffffffff): full items 2010.05.11 10:08:15 Microsoft Exchange: UploadItems: 0 messages to send 2010.05.11 10:08:16 Starting the 
Spooling Cycle 2010.05.11 10:08:16 MAPI Status: (IN fl ---/OUT -- ---) 2010.05.11 10:08:16 MAPI XP Call: FlushQueues, hr = 0x00000000, ulFlushFlags = 0x0000001c 2010.05.11 10:08:16 MAPI XP Call: Poll, hr = 0x00000000, cPollCount = 858 2010.05.11 10:08:16 Progress: Receiving message (message 1 out of 859, size unknown) 2010.05.11 10:08:16 Downloading one message 2010.05.11 10:08:16 Transport tightly coupled with store, download is NOOP 2010.05.11 10:08:16 Downloading done, Error code = 0x8004010f 2010.05.11 10:08:16 MAPI Status: (IN -- ---/OUT -- ---) 2010.05.11 10:08:16 FINISHED MAPI TASK 2010.05.11 10:08:16 Microsoft Exchange: ReportStatus: RSF_COMPLETED, hr = 0x00000000 2010.05.11 10:08:16 Finishing the Spooling Cycle, Error code = 0x00000000 2010.05.11 10:08:16 EXECUTING EndSession MAPI TASK 2010.05.11 10:08:16 Starting the Simplified Transfer Cycle 2010.05.11 10:08:16 MAPI XP Call: Poll, hr = 0x00000000, iMsgsReceived = 0, cPollCount = 858 2010.05.11 10:08:16 Progress: Receiving message (message 1 out of 859, size unknown) 2010.05.11 10:08:16 Downloading one message 2010.05.11 10:08:16 MAPI Status: (IN -- act/OUT -- ---) 2010.05.11 10:08:16 MAPI Status: (IN -- ---/OUT -- ---) 2010.05.11 10:08:16 Downloading done, Error code = 0x8004010f 2010.05.11 10:08:16 Finishing the Spooling Cycle, Error code = 0x00000000 2010.05.11 10:08:16 FINISHED MAPI TASK 2010.05.11 10:08:16 Microsoft Exchange: ReportStatus: RSF_COMPLETED, hr = 0x00000000 2010.05.11 10:08:16 Microsoft Exchange: Synch operation completed 2010.05.11 10:09:48 HELPER::Uninitialize called 2010.05.11 10:09:48 MAPI Status: (-- -- ---/--- -- ---) 2010.05.11 10:09:48 MAPI XP Call: TransportNotify(END_IN|END_OUT), hr = 0x00000000 2010.05.11 10:09:48 MAPI XP Call: TransportLogoff in EMSMDB.DLL, hr = 0x00000000 2010.05.11 10:09:48 MAPI XP Call: Shutdown, hr = 0x00000000 2010.05.11 10:09:48 Resource manager terminated I'm running Outlook 2007 SP1 in Citrix environment and should be running in Cache Mode. In my Outlook Tools-Options-Search Option, there is nothing under indexing. Any help is greatly appreciated! Thank you.

    Read the article

  • Stand-alone Java code formatter/beautifier/pretty printer?

    - by Greg Mattes
    I'm interested in learning about the available choices of high-quality, stand-alone source code formatters for Java. The formatter must be stand-alone, that is, it must support a "batch" mode that is decoupled from any particular development environment. Ideally it should be independent of any particular operating system as well. So, a built-in formatter for the IDE du jour is of little interest here (unless that IDE supports batch mode formatter invocation, perhaps from the command line). A formatter written in closed-source C/C++ that only runs on, say, Windows is not ideal, but is somewhat interesting. To be clear, a "formatter" (or "beautifier") is not the same as a "style checker." A formatter accepts source code as input, applies styling rules, and produces styled source code that is semantically equivalent to the original source code. A style checker also applies styling rules, but it simply reports rule violations without producing modified source code as output. So the picture looks like this:
    Formatter (produces modified source code that conforms to styling rules): Read Source Code → Apply Styling Rules → Write Styled Source Code
    Style Checker (does not produce modified source code): Read Source Code → Apply Styling Rules → Write Rule Violations
    Further Clarifications: Solutions must be highly configurable. I want to be able to specify my own style, not simply select from a canned list. Also, I'm not looking for a general purpose pretty-printer written in Java that can pretty-print many things. I want to style Java code. I'm also not necessarily interested in a grand-unified formatter for many languages. I suppose it might be nice for a solution to have support for languages other than Java, but that is not a requirement. Furthermore, tools that only perform code highlighting are right out. I'm also not interested in a web service. I want a tool that I can run locally. Finally, solutions need not be restricted to open source, public domain, shareware, free software, commercial, or anything else. All forms of licensing are acceptable.

    Read the article

  • XSD: xs:sequence & xs:choice combination for xs:extension elements?

    - by bguiz
    Hi, my question is about defining an XML schema that will validate the following sample XML:

        <rules>
          <other>...</other>
          <bool>...</bool>
          <other>...</other>
          <string>...</string>
          <other>...</other>
        </rules>

    The order of the child nodes does not matter. The cardinality of the child nodes is 0..unbounded. All the child elements of the rules node have a common base type, rule, like so:

        <xs:complexType name="booleanRule">
          <xs:complexContent>
            <xs:extension base="rule">
              ...
            </xs:extension>
          </xs:complexContent>
        </xs:complexType>

        <xs:complexType name="stringFilterRule">
          <xs:complexContent>
            <xs:extension base="filterRule">
              ...
            </xs:extension>
          </xs:complexContent>
        </xs:complexType>

    My current attempt at defining the schema for the rules node is below. However: Can I nest xs:choice within xs:sequence? If so, where do I specify the maxOccurs="unbounded" attribute? Is there a better way to do this, such as an xs:sequence which specifies only the base type of its child elements?

        <xs:element name="rules">
          <xs:complexType>
            <xs:sequence>
              <xs:choice>
                <xs:element name="bool" type="booleanRule" />
                <xs:element name="string" type="stringRule" />
                <xs:element name="other" type="someOtherRule" />
              </xs:choice>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
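    One variation that seems to express the intent more directly (a sketch only, reusing the element and type names above; whether it fits the rest of the real schema is untested) drops the wrapping xs:sequence and puts the occurrence constraints on the xs:choice itself, which then repeats and accepts the children in any order:

        <xs:element name="rules">
          <xs:complexType>
            <!-- a repeating choice: any of the three children, in any order, 0..unbounded times -->
            <xs:choice minOccurs="0" maxOccurs="unbounded">
              <xs:element name="bool" type="booleanRule" />
              <xs:element name="string" type="stringRule" />
              <xs:element name="other" type="someOtherRule" />
            </xs:choice>
          </xs:complexType>
        </xs:element>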

    Read the article

  • Wordpress pages address rewrite

    - by kemp
    UPDATE: I tried using the internal WordPress rewrite. What I have to do is send an address like this:

        http://example.com/galleria/artist-name

    to the gallery.php page with a variable containing the artist name. I used these rules as per WordPress' documentation:

        // REWRITE RULES (per gallery) {{{
        add_filter('rewrite_rules_array', 'wp_insertMyRewriteRules');
        add_filter('query_vars', 'wp_insertMyRewriteQueryVars');
        add_filter('init', 'flushRules'); // Remember to flush_rules() when adding rules

        function flushRules() {
            global $wp_rewrite;
            $wp_rewrite->flush_rules();
        }

        // Adding a new rule
        function wp_insertMyRewriteRules($rules) {
            $newrules = array();
            $newrules['(galleria)/(.*)$'] = 'index.php?pagename=gallery&galleryname=$matches[2]';
            return $newrules + $rules;
        }

        // Adding the id var so that WP recognizes it
        function wp_insertMyRewriteQueryVars($vars) {
            array_push($vars, 'galleryname');
            return $vars;
        }

    What's weird now is that on my local WordPress test install this works fine: the gallery page is called and the galleryname variable is passed. On the real site, on the other hand, the initial URL is accepted (as in, it doesn't go to a 404) BUT it changes to http://example.com/gallery (I mean it actually changes in the browser's address bar) and the variable is not defined in gallery.php. Any idea what could possibly cause this different behavior? Alternatively, any other way I couldn't think of which achieves the same effect described in the first three lines is perfectly fine.

    Old question: What I need to do is rewrite this address:

        (1) http://localhost/wordpress/fake/text-value

    to

        (2) http://localhost/wordpress/gallery?somevar=text-value

    Notes:
        - the remapping must be transparent: the user always has to see address (1)
        - gallery is a permalink to a WordPress page, not a real address
        - I basically need to rewrite the address first (to modify it) and then feed it back to mod_rewrite again (to let WordPress parse it in its own way)

    Problems: if I simply do

        RewriteRule ^fake$ http://localhost/wordpress/gallery [L]

    it works but the address in the browser changes, which is no good; if I do

        RewriteRule ^fake$ /wordpress/gallery [L]

    I get a 404. I tried different flags instead of [L] but to no avail. How can I get this to work? EDIT: full .htaccess:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteRule ^fake$ /wordpress/gallery [R]
        RewriteBase /wordpress/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /wordpress/index.php [L]
        </IfModule>
        # END WordPress
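    An alternative worth considering is WordPress's own add_rewrite_rule()/add_rewrite_tag() API; the sketch below is untested, and while those functions (plus get_query_var() and add_action()) are stock WordPress, the function name, the 'gallery' page slug, and the galleryname variable are simply the ones used above.

        <?php
        // Register the rewrite and the query var in one place, on init.
        function my_gallery_rewrites() {
            // %galleryname% becomes a recognised query var, so no query_vars filter is needed.
            add_rewrite_tag('%galleryname%', '([^&/]+)');
            // Map /galleria/artist-name onto the existing 'gallery' page.
            add_rewrite_rule(
                '^galleria/([^/]+)/?$',
                'index.php?pagename=gallery&galleryname=$matches[1]',
                'top'
            );
        }
        add_action('init', 'my_gallery_rewrites');

        // In the gallery page template:
        // $artist = get_query_var('galleryname');

    The rules still have to be flushed once after they change (re-saving the permalink settings, or a one-off flush_rewrite_rules() on activation); a stale rules cache on the live site is one plausible explanation for the local-works/live-breaks difference described above.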

    Read the article

  • XSD: xs:sequence & xs:choice combination for xs:extension elements of a common base type?

    - by bguiz
    Hi, my question is about defining an XML schema that will validate the following XML:

        <rules>
          <other>...</other>
          <bool>...</bool>
          <other>...</other>
          <string>...</string>
          <other>...</other>
        </rules>

    The order of the child nodes does not matter. The cardinality of the child nodes is 0..unbounded. All the child elements of the rules node have a common base type, rule, like so:

        <xs:complexType name="booleanRule">
          <xs:complexContent>
            <xs:extension base="rule">
              ...
            </xs:extension>
          </xs:complexContent>
        </xs:complexType>

        <xs:complexType name="stringFilterRule">
          <xs:complexContent>
            <xs:extension base="filterRule">
              ...
            </xs:extension>
          </xs:complexContent>
        </xs:complexType>

    My current (feeble) attempt at defining the schema for the rules node is below. However: Can I nest xs:choice within xs:sequence? If so, where do I specify the maxOccurs="unbounded" attribute? Is there a better way to do this, such as an xs:sequence which specifies only the base type of its child elements?

        <xs:element name="rules">
          <xs:complexType>
            <xs:sequence>
              <xs:choice>
                <xs:element name="bool" type="booleanRule" />
                <xs:element name="string" type="stringRule" />
                <xs:element name="other" type="someOtherRule" />
              </xs:choice>
            </xs:sequence>
          </xs:complexType>
        </xs:element>

    Read the article

  • sifr menu list problems, height of object calculated wrong

    - by armyofda12mnkeys
    I am using Drupal and have sIFR set on list items; a and a:hover are also set via Drupal so that the links hover. I looked at the sifr3-rules.js file Drupal's render module creates and it looks good, and in fact my other sIFR items look fine... but the list-item ones are misbehaving for some reason. There is extra space below the list: if I have a list item, and inside of that an unordered list with more list items, the Flash object made for the first li (which will also cover the rest of the sub-list items) is too big, so you see space under the children until the next parent li comes up. (So it looks like extra bottom padding on that portion of the list in IE8; in Firefox it is almost the same, except each sub-item has space at its bottom. With JavaScript turned off, the list items look fine by themselves.) Also, if the parent list item is shorter than the sub-list items' text, the width of the Flash object is set only as long as the first list item and therefore cuts off the rest of the sub-list items' text. Any idea how to resolve either of these? The only unusual thing I see I am doing is setting forceSingleLine and preventWrap (which don't make a difference if taken off). ****Edit: I may just try to figure out how to get Drupal's menu-block module to output a <div> around my hyperlinks in the list items; then I can target the divs with my rule (and the a, a:hover rules will apply), so each menu item gets its own sIFR object instead of sIFR3 trying to figure out how to do the lists and sublists. Very useful to me would be a way to target a hyperlink (an <a> tag) directly and still allow a:hover rules. I know how to do it when the hyperlink sits inside another tag: for example, if I had <h2><a>sometitle</a></h2>, I could have a rule for h2 and then use the a, a:hover rules in the sifr3-rules.js file to target that. So I would need a way to target the hyperlink in the list, but also apply a :hover to it (not sure if this can be done, since it's not underneath, for example, the h2 tag).
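    For reference, the kind of rule this would need looks roughly like the sketch below (sIFR 3 style configuration; the font object, .swf path, class name, and colours are placeholders, and it assumes the menu links do end up wrapped in a targetable div):

        // in sifr-config.js (sIFR 3); font name and swf path are placeholders
        var myfont = { src: '/sites/all/libraries/sifr/myfont.swf' };
        sIFR.activate(myfont);
        sIFR.replace(myfont, {
          // one replacement per wrapping div, so each menu item gets its own Flash object
          selector: 'ul.menu li div.sifr-item',
          css: [
            '.sIFR-root { color: #333333; }',
            'a { text-decoration: none; color: #333333; }',
            'a:hover { color: #990000; }'
          ],
          forceSingleLine: true
        });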

    Read the article

  • Redirect all access requests to a domain and subdomain(s) except from specific IP address? [closed]

    - by Christopher
    This is a self-answered question... After much wrangling I found the magic combination of mod_rewrite rules, so I'm posting it here. My scenario is that I have two domains - domain1.com and domain2.com - both of which are currently serving identical content (by way of a global 301 redirect from domain1 to domain2). Domain1 was then chosen to be repurposed as a 'portal' domain, with a corporate CMS-based site leading off from the front page, and the existing 'retail' domain (domain2) left to serve the main web site. In addition, a staging subdomain was created on domain1 in order to prepare the new corporate site without impinging on the root domain's existing operation. I contemplated just rewriting all requests to domain2 and setting up the new corporate site 'behind the scenes' without using a staging domain, but I usually use subdomains when setting up new sites. Finally, I required access to the 'actual' contents of the domains and subdomains - i.e., to not be redirected like all other visitors - in order that I can develop the new site and test it in the staging environment on the live server, as I'm not using a separate development webserver in this case. I also have another test subdomain on domain1 which needed to be preserved. The way I eventually set it up was as follows (10.2.2.1 would be my home WAN IP):

    .htaccess in root of domain1:

        RewriteEngine On
        RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1
        RewriteCond %{HTTP_HOST} !^staging.domain1.com$ [NC]
        RewriteCond %{HTTP_HOST} !^staging2.domain1.com$ [NC]
        RewriteRule ^(.*)$ http://domain2.com/$1 [R=301]

    .htaccess in the staging subdomain on domain1:

        RewriteEngine On
        RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1
        RewriteCond %{HTTP_HOST} ^staging.domain1.com$ [NC]
        RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L]

    The multiple .htaccess files and multiple rulesets require more processing overhead and longer iteration, as the visitor is potentially redirected twice; however, I find it a more granular method of control, as I can selectively allow more than one IP address access to individual staging subdomain(s) without automatically granting them access to everything else. It also keeps the rulesets fairly simple and easy to read (or re-interpret, because I'm always forgetting how I put rules together!). If anybody can suggest a more efficient way of merging all these rules and conditions into just one main ruleset in the root of domain1, please post! I'm always keen to learn; this post is more my attempt to preserve this information for those who are looking to redirect entire domains for all visitors except themselves (for design/testing purposes), and not just denying specific file access for maintenance mode (there are many good examples of simple mod_rewrite rules for 'maintenance mode' style operation easily findable via Google). You can also extend the IP address detection: firstly by using wildcards such as ^10\.2\.2\..* - the last octet's \..* denotes the usual "." and then "zero or more arbitrary characters", signified by the .* - so you can specify ranges of IPs in a subnet, or entire subnets if you wish. You can also use square brackets: ^10\.2\.[1-255]\.[120-140]; ^10\.2\.[1-9]?[0-9]\.; ^10\.2\.1[0-1][0-9]\. etc. The third way, if you wish to specify multiple discrete IP addresses, is to bracket them in the style of ^(1.1.1.1|2.2.2.2|3.3.3.3)$, and you can of course use square brackets to substitute octets or single digits again.
    NB: if you're using individual RewriteCond lines to specify multiple IPs / ranges, make sure to put [OR] at the end of each one, otherwise mod_rewrite will interpret it as "if IP address matches 1.1.1.1 AND if IP address matches 2.2.2.2...", which is of course impossible! However, as far as I'm aware, this isn't necessary if you're using the ! negator to specify "and is not..." (a short sketch of both forms follows at the end of this post). Kudos also to SE: this older question also came in useful when I was verifying my own knowledge prior to my futzing around with code. This page was helpful, as were the various other links posted below (can't hyperlink them all due to spam protection... other regex checkers are available). The AddedBytes cheat sheet is useful to pin up on your wall. Other referenced URLs:
        internetofficer.com/seo-tool/regex-tester/
        fantomaster.com/faarticles/rewritingurls.txt
        addedbytes.com/cheat-sheets/mod_rewrite-cheat-sheet/
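    To make the [OR] point concrete, here is a quick sketch reusing the addresses and domains from the example above (the rules themselves are illustrative, not part of the original setup):

        # Positive matches: whitelisting two addresses needs the conditions ORed
        RewriteCond %{REMOTE_ADDR} ^10\.2\.2\.1$ [OR]
        RewriteCond %{REMOTE_ADDR} ^10\.2\.2\.2$
        RewriteRule ^ - [L]

        # Negated matches: conditions are ANDed by default, so no [OR] flag is needed
        RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1$
        RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.2$
        RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L]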

    Read the article

  • iptables not allowing mysql connections to aliased ips?

    - by Curtis
    I have a fairly simple iptables firewall on a server that provides MySQL services, but iptables seems to be giving me very inconsistent results. The default policy in the script is as follows:

        iptables -P INPUT DROP

    I can then make MySQL public with the following rule:

        iptables -A INPUT -p tcp --dport 3306 -j ACCEPT

    With this rule in place, I can connect to MySQL from any source IP to any destination IP on the server without a problem. However, when I try to restrict access to just three IPs by replacing the above line with the following, I run into trouble (xxx = masked octet):

        iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.XXX.XXX.184 -j ACCEPT
        iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.XXX.XXX.196 -j ACCEPT
        iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.XXX.XXX.251 -j ACCEPT

    Once the above rules are in place, the following happens: I can connect to the MySQL server from the .184, .196 and .251 hosts just fine, as long as I am connecting to the MySQL server using its default IP address or an IP alias in the same subnet as the default IP address. I am unable to connect to MySQL using IP aliases that are assigned to the server from a different subnet than the server's default IP when I'm coming from the .184 or .196 hosts, but .251 works just fine. From the .184 or .196 hosts, a telnet attempt just hangs:

        # telnet 209.xxx.xxx.22 3306
        Trying 209.xxx.xxx.22...

    If I remove the .251 line (making .196 the last rule added), the .196 host still cannot connect to MySQL using IP aliases, so it's not the order of the rules that is causing the inconsistent behavior. I know this particular test was silly, as it shouldn't matter what order these three rules are added in, but I figured someone might ask. If I switch back to the "public" rule, all hosts can connect to the MySQL server using either the default or aliased IPs (in either subnet):

        iptables -A INPUT -p tcp --dport 3306 -j ACCEPT

    The server is running in a CentOS 5.4 OpenVZ/Proxmox container (2.6.32-4-pve). And, just in case you prefer to see the problem rules in the context of the iptables script, here it is (xxx = masked octet):

        # Flush old rules, old custom tables
        /sbin/iptables --flush
        /sbin/iptables --delete-chain

        # Set default policies for all three default chains
        /sbin/iptables -P INPUT DROP
        /sbin/iptables -P FORWARD DROP
        /sbin/iptables -P OUTPUT ACCEPT

        # Enable free use of loopback interfaces
        /sbin/iptables -A INPUT -i lo -j ACCEPT
        /sbin/iptables -A OUTPUT -o lo -j ACCEPT

        # All TCP sessions should begin with SYN
        /sbin/iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP

        # Accept inbound TCP packets (Do this *before* adding the 'blocked' chain)
        /sbin/iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allow the server's own IP to connect to itself
        /sbin/iptables -A INPUT -i eth0 -s 208.xxx.xxx.178 -j ACCEPT

        # Add the 'blocked' chain *after* we've accepted established/related connections
        # so we remain efficient and only evaluate new/inbound connections
        /sbin/iptables -N BLOCKED
        /sbin/iptables -A INPUT -j BLOCKED

        # Accept inbound ICMP messages
        /sbin/iptables -A INPUT -p ICMP --icmp-type 8 -j ACCEPT
        /sbin/iptables -A INPUT -p ICMP --icmp-type 11 -j ACCEPT

        # ssh (private)
        /sbin/iptables -A INPUT -p tcp --dport 22 -m state --state NEW -s xxx.xxx.xxx.xxx -j ACCEPT

        # ftp (private)
        /sbin/iptables -A INPUT -p tcp --dport 21 -m state --state NEW -s xxx.xxx.xxx.xxx -j ACCEPT

        # www (public)
        /sbin/iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        /sbin/iptables -A INPUT -p tcp --dport 443 -j ACCEPT

        # smtp (public)
        /sbin/iptables -A INPUT -p tcp --dport 25 -j ACCEPT
        /sbin/iptables -A INPUT -p tcp --dport 2525 -j ACCEPT

        # pop (public)
        /sbin/iptables -A INPUT -p tcp --dport 110 -j ACCEPT

        # mysql (private)
        /sbin/iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.xxx.xxx.184 -j ACCEPT
        /sbin/iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.xxx.xxx.196 -j ACCEPT
        /sbin/iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.xxx.xxx.251 -j ACCEPT

    Any ideas? Thanks in advance. :-)
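    A quick way to narrow this down is to watch the per-rule packet counters while a .184 or .196 host retries the connection, and to temporarily log whatever falls through to the end of the chain (standard iptables options; the log prefix is just a placeholder):

        # show rules with packet/byte counters and rule numbers
        /sbin/iptables -L INPUT -n -v --line-numbers

        # temporarily log port-3306 traffic that reaches the end of the chain
        /sbin/iptables -A INPUT -p tcp --dport 3306 -j LOG --log-prefix "mysql-drop: "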

    Read the article
