Search Results

Search found 21530 results on 862 pages for 'service pack'.


  • Diving into OpenStack Network Architecture - Part 2 - Basic Use Cases

    - by Ronen Kofman
      In the previous post we reviewed several network components including Open vSwitch, Network Namespaces, Linux Bridges and veth pairs. In this post we will take three simple use cases and see how those basic components come together to create a complete SDN solution in OpenStack. With those three use cases we will review almost the entire network setup and see how all the pieces work together. The use cases we will use are:
1. Create network – what happens when we create a network, and how we can create multiple isolated networks
2. Launch a VM – once we have networks we can launch VMs and connect them to networks
3. DHCP request from a VM – OpenStack can automatically assign IP addresses to VMs. This is done through a local DHCP service controlled by OpenStack Neutron. We will see how this service runs and what a DHCP request and response look like.
In this post we will show connectivity: we will see how packets get from point A to point B. We first focus on what a configured deployment looks like, and only later discuss how and when the configuration is created. Personally I found it very valuable to see the actual interfaces and how they connect to each other through examples and hands-on experiments. Once the end game is clear and we know how the connectivity works, we will take a step back, in a later post, and explain how Neutron configures the components to provide such connectivity. We are going to get pretty technical shortly and I recommend trying these examples on your own deployment or using the Oracle OpenStack Tech Preview. Understanding these three use cases thoroughly, and knowing how to look at them, will be very helpful when trying to debug a deployment in case something does not work.
Use case #1: Create Network
Creating a network is a simple operation that can be performed from the GUI or the command line. When we create a network in OpenStack, the network is only available to the tenant who created it, or it can be defined as “shared” and then used by all tenants. A network can have multiple subnets, but for the purpose of this demonstration, and for simplicity, we will assume that each network has exactly one subnet.
Creating a network from the command line will look like this: # neutron net-create net1 Created a new network: +---------------------------+--------------------------------------+ | Field                     | Value                                | +---------------------------+--------------------------------------+ | admin_state_up            | True                                 | | id                        | 5f833617-6179-4797-b7c0-7d420d84040c | | name                      | net1                                 | | provider:network_type     | vlan                                 | | provider:physical_network | default                              | | provider:segmentation_id  | 1000                                 | | shared                    | False                                | | status                    | ACTIVE                               | | subnets                   |                                      | | tenant_id                 | 9796e5145ee546508939cd49ad59d51f     | +---------------------------+--------------------------------------+ Creating a subnet for this network will look like this: # neutron subnet-create net1 10.10.10.0/24 Created a new subnet: +------------------+------------------------------------------------+ | Field            | Value                                          | +------------------+------------------------------------------------+ | allocation_pools | {"start": "10.10.10.2", "end": "10.10.10.254"} | | cidr             | 10.10.10.0/24                                  | | dns_nameservers  |                                                | | enable_dhcp      | True                                           | | gateway_ip       | 10.10.10.1                                     | | host_routes      |                                                | | id               | 2d7a0a58-0674-439a-ad23-d6471aaae9bc           | | ip_version       | 4                                              | | name             |                                                | | network_id       | 5f833617-6179-4797-b7c0-7d420d84040c           | | tenant_id        | 9796e5145ee546508939cd49ad59d51f               | +------------------+------------------------------------------------+ We now have a network and a subnet, on the network topology view this looks like this: Now let’s dive in and see what happened under the hood. Looking at the control node we will discover that a new namespace was created: # ip netns list qdhcp-5f833617-6179-4797-b7c0-7d420d84040c   The name of the namespace is qdhcp-<network id> (see above), let’s look into the namespace and see what’s in it: # ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00     inet 127.0.0.1/8 scope host lo     inet6 ::1/128 scope host        valid_lft forever preferred_lft forever 12: tap26c9b807-7c: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN     link/ether fa:16:3e:1d:5c:81 brd ff:ff:ff:ff:ff:ff     inet 10.10.10.3/24 brd 10.10.10.255 scope global tap26c9b807-7c     inet6 fe80::f816:3eff:fe1d:5c81/64 scope link        valid_lft forever preferred_lft forever   We see two interfaces in the namespace, one is the loopback and the other one is an interface called “tap26c9b807-7c”. This interface has the IP address of 10.10.10.3 and it will also serve dhcp requests in a way we will see later. 
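Before tracing that interface any further, it is worth noting that nothing here is specific to net1. The commands above can simply be repeated to create additional isolated networks; the sketch below is an assumption based on the same CLI calls, with "net2" and the 10.10.20.0/24 CIDR being made-up example values (the second namespace ID will of course be whatever UUID Neutron assigns):

# neutron net-create net2
# neutron subnet-create net2 10.10.20.0/24
# ip netns list
qdhcp-5f833617-6179-4797-b7c0-7d420d84040c
qdhcp-<id of net2>

Each network gets its own qdhcp-<network id> namespace and, as we will see next, its own VLAN tag to keep traffic on the shared physical link isolated.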
Let’s trace the connectivity of the “tap26c9b807-7c” interface from the namespace. First stop is OVS; we see that the interface connects to the bridge “br-int” on OVS: # ovs-vsctl show 8a069c7c-ea05-4375-93e2-b9fc9e4b3ca1     Bridge "br-eth2"         Port "br-eth2"             Interface "br-eth2"                 type: internal         Port "eth2"             Interface "eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2"     Bridge br-ex         Port br-ex             Interface br-ex                 type: internal     Bridge br-int         Port "int-br-eth2"             Interface "int-br-eth2"         Port "tap26c9b807-7c"             tag: 1             Interface "tap26c9b807-7c"                 type: internal         Port br-int             Interface br-int                 type: internal     ovs_version: "1.11.0"   In the output above we have a veth pair with two ends called “int-br-eth2” and "phy-br-eth2"; this veth pair is used to connect two OVS bridges, "br-eth2" and "br-int". In the previous post we explained how to check veth connectivity using the ethtool command. It shows that the two are indeed a pair: # ethtool -S int-br-eth2 NIC statistics:      peer_ifindex: 10 . .   # ip link . . 10: phy-br-eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 . . Note that “phy-br-eth2” is connected to a bridge called "br-eth2", and one of this bridge's interfaces is the physical link eth2. This means that the network we have just created has created a namespace which is connected to the physical interface eth2. eth2 is the “VM network”, the physical interface to which all the virtual machines connect. About network isolation: OpenStack supports creation of multiple isolated networks and can use several mechanisms to isolate the networks from one another. The isolation mechanism can be VLANs, VxLANs or GRE tunnels; this is configured as part of the initial setup, and in our deployment we use VLANs. When using VLAN tagging as an isolation mechanism, a VLAN tag is allocated by Neutron from a pre-defined pool of VLAN tags and assigned to the newly created network. By provisioning VLAN tags to the networks Neutron allows creation of multiple isolated networks on the same physical link. The big difference between this and other platforms is that the user does not have to deal with allocating and managing VLANs for networks. The VLAN allocation and provisioning is handled by Neutron, which keeps track of the VLAN tags and is responsible for allocating and reclaiming them. In the example above net1 has the VLAN tag 1000; this means that whenever a VM is created and connected to this network, the packets from that VM will have to be tagged with VLAN tag 1000 to go on this particular network. This is true for the namespace as well: if we would like to connect a namespace to a particular network we have to make sure that the packets to and from the namespace are correctly tagged when they reach the VM network. In the example above we see that the namespace interface “tap26c9b807-7c” has VLAN tag 1 assigned to it; if we examine OVS we see that it has flows which modify VLAN tag 1 to VLAN tag 1000 when a packet goes to the VM network on eth2, and vice versa.
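The pre-defined pool of VLAN tags mentioned above comes from Neutron's plugin configuration rather than from anything the user does per network. As a rough sketch only — the file location, section and value range here are assumptions for an Open vSwitch plugin deployment of this vintage, not this deployment's verified settings — the relevant options would look something like:

[OVS]
tenant_network_type = vlan
network_vlan_ranges = default:1000:2000
bridge_mappings = default:br-eth2

Here "default" matches the provider:physical_network shown in the net-create output, br-eth2 is the bridge holding eth2, and a tag such as 1000 is allocated from the configured range. With the pool defined, the per-packet tag rewriting itself is left to OVS flows.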
We can see this using the dump-flows command on OVS. For packets going to the VM network we see the modification done on br-eth2: #  ovs-ofctl dump-flows br-eth2 NXST_FLOW reply (xid=0x4):  cookie=0x0, duration=18669.401s, table=0, n_packets=857, n_bytes=163350, idle_age=25, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:1000,NORMAL  cookie=0x0, duration=165108.226s, table=0, n_packets=14, n_bytes=1000, idle_age=5343, hard_age=65534, priority=2,in_port=2 actions=drop  cookie=0x0, duration=165109.813s, table=0, n_packets=1671, n_bytes=213304, idle_age=25, hard_age=65534, priority=1 actions=NORMAL   For packets coming from the interface to the namespace we see the following modification: #  ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4):  cookie=0x0, duration=18690.876s, table=0, n_packets=1610, n_bytes=210752, idle_age=1, priority=3,in_port=1,dl_vlan=1000 actions=mod_vlan_vid:1,NORMAL  cookie=0x0, duration=165130.01s, table=0, n_packets=75, n_bytes=3686, idle_age=4212, hard_age=65534, priority=2,in_port=1 actions=drop  cookie=0x0, duration=165131.96s, table=0, n_packets=863, n_bytes=160727, idle_age=1, hard_age=65534, priority=1 actions=NORMAL   To summarize, we can see that when a user creates a network Neutron creates a namespace, and this namespace is connected through OVS to the “VM network”. OVS also takes care of tagging the packets from the namespace to the VM network with the correct VLAN tag, and knows to modify the VLAN for packets coming from the VM network to the namespace. Now let’s see what happens when a VM is launched and how it is connected to the “VM network”. Use case #2: Launch a VM Launching a VM can be done from Horizon or from the command line; from Horizon we simply attach the network and launch. Once the virtual machine is up and running we can see the associated IP using the nova list command: # nova list +--------------------------------------+--------------+--------+------------+-------------+-----------------+ | ID                                   | Name         | Status | Task State | Power State | Networks        | +--------------------------------------+--------------+--------+------------+-------------+-----------------+ | 3707ac87-4f5d-4349-b7ed-3a673f55e5e1 | Oracle Linux | ACTIVE | None       | Running     | net1=10.10.10.2 | +--------------------------------------+--------------+--------+------------+-------------+-----------------+ The nova list command shows us that the VM is running and that the IP 10.10.10.2 is assigned to this VM. Let’s trace the connectivity from the VM to the VM network on eth2, starting with the VM definition file. The configuration files of the VM, including the virtual disk(s) in case of ephemeral storage, are stored on the compute node at /var/lib/nova/instances/<instance-id>/.
Looking into the VM definition file, libvirt.xml, we see that the VM is connected to an interface called “tap53903a95-82” which is connected to a Linux bridge called “qbr53903a95-82”: <interface type="bridge">       <mac address="fa:16:3e:fe:c7:87"/>       <source bridge="qbr53903a95-82"/>       <target dev="tap53903a95-82"/>     </interface>   Looking at the bridge using the brctl show command we see this: # brctl show bridge name     bridge id               STP enabled     interfaces qbr53903a95-82          8000.7e7f3282b836       no              qvb53903a95-82                                                         tap53903a95-82    The bridge has two interfaces, one connected to the VM (“tap53903a95-82”) and another one (“qvb53903a95-82”) connected to the “br-int” bridge on OVS: # ovs-vsctl show 83c42f80-77e9-46c8-8560-7697d76de51c     Bridge "br-eth2"         Port "br-eth2"             Interface "br-eth2"                 type: internal         Port "eth2"             Interface "eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2"     Bridge br-int         Port br-int             Interface br-int                 type: internal         Port "int-br-eth2"             Interface "int-br-eth2"         Port "qvo53903a95-82"             tag: 3             Interface "qvo53903a95-82"     ovs_version: "1.11.0"   As we showed earlier, “br-int” is connected to “br-eth2” on OVS using the veth pair int-br-eth2/phy-br-eth2, and br-eth2 is connected to the physical interface eth2. The whole flow end to end looks like this: VM → tap53903a95-82 (virtual interface) → qbr53903a95-82 (Linux bridge) → qvb53903a95-82 (interface connecting the Linux bridge to OVS bridge br-int) → int-br-eth2 (one end of the veth pair) → phy-br-eth2 (the other end) → eth2 physical interface. The purpose of the Linux bridge connecting to the VM is to allow security group enforcement with iptables. Security groups are enforced at the edge point, which is the interface of the VM; since iptables cannot be applied to OVS bridges we use a Linux bridge to apply them. In the future we hope to see this Linux bridge go away. VLAN tags: As we discussed in the first use case, net1 is using VLAN tag 1000; looking at OVS above we see that qvo53903a95-82 is tagged with VLAN tag 3. The modification from VLAN tag 3 to 1000 as we go to the physical network is done by OVS as part of the packet flow of br-eth2, in the same way we showed before. To summarize, when a VM is launched it is connected to the VM network through a chain of elements as described here. As a packet travels from the VM to the network and back, the VLAN tag is modified. Use case #3: Serving a DHCP request coming from the virtual machine In the previous use cases we have shown that both the namespace called qdhcp-<network id> and the VM end up connecting to the physical interface eth2 on their respective nodes, and both tag their packets with VLAN tag 1000. We saw that the namespace has an interface with IP 10.10.10.3. Since the VM and the namespace are connected to each other and have interfaces on the same subnet, they can ping each other; for example, the VM, which was assigned 10.10.10.2, can ping the namespace interface. The fact that they are connected and can ping each other can become very handy when something doesn’t work right and we need to isolate the problem. In such a case, knowing that we should be able to ping from the VM to the namespace and back can be used to trace the disconnect using tcpdump or other monitoring tools.
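A concrete check along those lines, using the namespace and addresses from this walkthrough (the IDs will of course differ on another deployment), is simply:

# ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c ping -c 3 10.10.10.2

If this ping fails, running tcpdump hop by hop — on the VM's tap device, the qbr Linux bridge, the qvb/qvo veth pair, br-int, br-eth2 and finally eth2 — will usually show exactly where the packets stop.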
To serve DHCP requests coming from VMs on the network, Neutron uses a Linux tool called “dnsmasq”; this is a lightweight DNS and DHCP service, and you can read more about it here. If we look at the dnsmasq process on the control node with the ps command we see this: dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tap26c9b807-7c --except-interface=lo --pid-file=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/host --dhcp-optsfile=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/opts --leasefile-ro --dhcp-range=tag0,10.10.10.0,static,120s --dhcp-lease-max=256 --conf-file= --domain=openstacklocal The service connects to the tap interface in the namespace (“--interface=tap26c9b807-7c”). If we look at the hosts file we see this: # cat  /var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/host fa:16:3e:fe:c7:87,host-10-10-10-2.openstacklocal,10.10.10.2   If you look at the console output above you can see the MAC address fa:16:3e:fe:c7:87, which is the VM's MAC. This MAC address is mapped to IP 10.10.10.2, so when a DHCP request comes with this MAC dnsmasq will return 10.10.10.2. If we look into the namespace at the time we initiate a DHCP request from the VM (this can be done by simply restarting the network service in the VM) we see the following: # ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c tcpdump -n 19:27:12.191280 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:fe:c7:87, length 310 19:27:12.191666 IP 10.10.10.3.bootps > 10.10.10.2.bootpc: BOOTP/DHCP, Reply, length 325   To summarize, the DHCP service is handled by dnsmasq, which is configured by Neutron to listen to the interface in the DHCP namespace. Neutron also configures dnsmasq with the combination of MAC and IP, so that when a DHCP request comes along the VM will receive its assigned IP. Summary In this post we relied on the components described in the previous post and saw how network connectivity is achieved using three simple use cases. These use cases gave a good view of the entire network stack and helped us understand how an end-to-end connection is made between a VM on a compute node and the DHCP namespace on the control node. One conclusion we can draw from what we saw here is that if we launch a VM and it is able to perform a DHCP request and receive a correct IP, then there is reason to believe that the network is working as expected. We saw that a packet has to travel through a long list of components before reaching its destination, and if it has done so successfully this means that many components are functioning properly. In the next post we will look at some more sophisticated services Neutron supports and see how they work. We will see that while there are more components involved, for the most part the concepts are the same. @RonenKofman

    Read the article

  • Incomplete upgrade 12.04 to 12.10

    - by David
    Everything was running smoothly. Everything had been downloaded from the Internet, packages had been installed, and a prompt asked for some obsolete programs/files to be removed or kept. After that the computer crashed and I had to manually force a shutdown. I turned it on again and, surprise, I was on 12.10! Still, the upgrade was not finished! How can I properly finish that upgrade? Here's the output I got in the command line after following posted instructions: i astrill - Astrill VPN client software i dayjournal - Simple, minimal, digital journal. i gambas2-gb-form - A gambas native form component i gambas2-gb-gtk - The Gambas gtk component i gambas2-gb-gtk-ext - The Gambas extended gtk GUI component i gambas2-gb-gui - The graphical toolkit selector component i gambas2-gb-qt - The Gambas Qt GUI component i gambas2-gb-settings - Gambas utilities class i A gambas2-runtime - The Gambas runtime i google-chrome-stable - The web browser from Google i google-talkplugin - Google Talk Plugin i indicator-keylock - Indicator for Lock Keys i indicator-ubuntuone - Indicator for Ubuntu One synchronization s i A language-pack-kde-zh-hans - KDE translation updates for language Simpl i language-pack-kde-zh-hans-base - KDE translations for language Simplified C i libapt-inst1.4 - deb package format runtime library idA libattica0.3 - a Qt library that implements the Open Coll idA libbabl-0.0-0 - Dynamic, any to any, pixel format conversi idA libboost-filesystem1.46.1 - filesystem operations (portable paths, ite idA libboost-program-options1.46.1 - program options library for C++ idA libboost-python1.46.1 - Boost.Python Library idA libboost-regex1.46.1 - regular expression library for C++ i libboost-serialization1.46.1 - serialization library for C++ idA libboost-signals1.46.1 - managed signals and slots library for C++ idA libboost-system1.46.1 - Operating system (e.g.
diagnostics support idA libboost-thread1.46.1 - portable C++ multi-threading i libcamel-1.2-29 - Evolution MIME message handling library i libcmis-0.2-0 - CMIS protocol client library i libcupsdriver1 - Common UNIX Printing System(tm) - Driver l i libdconf0 - simple configuration storage system - runt i libdvdcss2 - Simple foundation for reading DVDs - runti i libebackend-1.2-1 - Utility library for evolution data servers i libecal-1.2-10 - Client library for evolution calendars i libedata-cal-1.2-13 - Backend library for evolution calendars i libedataserver-1.2-15 - Utility library for evolution data servers i libexiv2-11 - EXIF/IPTC metadata manipulation library i libgdu-gtk0 - GTK+ standard dialog library for libgdu i libgdu0 - GObject based Disk Utility Library idA libgegl-0.0-0 - Generic Graphics Library idA libglew1.5 - The OpenGL Extension Wrangler - runtime en i libglew1.6 - OpenGL Extension Wrangler - runtime enviro i libglewmx1.6 - OpenGL Extension Wrangler - runtime enviro i libgnome-bluetooth8 - GNOME Bluetooth tools - support library i libgnomekbd7 - GNOME library to manage keyboard configura idA libgsoap1 - Runtime libraries for gSOAP i libgweather-3-0 - GWeather shared library i libimobiledevice2 - Library for communicating with the iPhone i libkdcraw20 - RAW picture decoding library i libkexiv2-10 - Qt like interface for the libexiv2 library i libkipi8 - library for apps that want to use kipi-plu i libkpathsea5 - TeX Live: path search library for TeX (run i libmagickcore4 - low-level image manipulation library i libmagickwand4 - image manipulation library i libmarblewidget13 - Marble globe widget library idA libmusicbrainz4-3 - Library to access the MusicBrainz.org data i libnepomukdatamanagement4 - Basic Nepomuk data manipulation interface i libnux-2.0-0 - Visual rendering toolkit for real-time app i libnux-2.0-common - Visual rendering toolkit for real-time app i libpoppler19 - PDF rendering library i libqt3-mt - Qt GUI Library (Threaded runtime version), i librhythmbox-core5 - support library for the rhythmbox music pl i libusbmuxd1 - USB multiplexor daemon for iPhone and iPod i libutouch-evemu1 - KernelInput Event Device Emulation Library i libutouch-frame1 - Touch Frame Library i libutouch-geis1 - Gesture engine interface support i libutouch-grail1 - Gesture Recognition And Instantiation Libr idA libx264-120 - x264 video coding library i libyajl1 - Yet Another JSON Library i linux-headers-3.2.0-29 - Header files related to Linux kernel versi i linux-headers-3.2.0-29-generic - Linux kernel headers for version 3.2.0 on i linux-image-3.2.0-29-generic - Linux kernel image for version 3.2.0 on 64 i mplayerthumbs - video thumbnail generator using mplayer i myunity - Unity configurator i A openoffice.org-calc - office productivity suite -- spreadsheet i A openoffice.org-writer - office productivity suite -- word processo i python-brlapi - Python bindings for BrlAPI i python-louis - Python bindings for liblouis i rts-bpp-dkms - rts-bpp driver in DKMS format. i system76-driver - Universal driver for System76 computers. 
i systemconfigurator - Unified Configuration API for Linux Instal i systemimager-client - Utilities for creating an image and upgrad i systemimager-common - Utilities and libraries common to both the i systemimager-initrd-template-am - SystemImager initrd template for amd64 cli i touchpad-indicator - An indicator for the touchpad i ubuntu-tweak - Ubuntu Tweak i A unity-lens-utilities - Unity Utilities lens i A unity-scope-calculator - Calculator engine i unity-scope-cities - Cities engine i unity-scope-rottentomatoes - Unity Scope Rottentomatoes
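What I am planning to try, unless told otherwise, is the standard recovery sequence for an interrupted upgrade (I am assuming the package database itself is intact; these are the usual dpkg/apt steps, not something specific to my machine):

sudo dpkg --configure -a
sudo apt-get -f install
sudo apt-get update && sudo apt-get dist-upgrade
sudo apt-get autoremove

Would that be enough to properly finish the 12.04 to 12.10 upgrade, or is something else needed given the crash?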

    Read the article

  • Office 2010: It&rsquo;s not just DOC(X) and XLS(X)

    - by andrewbrust
    Office 2010 has released to manufacturing.  The bits have left the (product team’s) building.  Will you upgrade? This version of Office is officially numbered 14, a designation that correlates with the various releases, through the years, of Microsoft Word.  There were six major versions of Word for DOS, during whose release cycles came three 16-bit Windows versions.  Then, starting with Word 95 and counting through Word 2007, there have been six more versions – all for the 32-bit Windows platform.  Skip version 13 to ward off folksy bad luck (and, perhaps, the bugs that could come with it) and that brings us to version 14, which includes implementations for both 32- and 64-bit Windows platforms.  We’ve come a long way baby.  Or have we? As it does every three years or so, debate will now start to rage on over whether we need a “14th” version the PC platform’s standard word processor, or a “13th” version of the spreadsheet.  If you accept the premise of that question, then you may be on a slippery slope toward answering it in the negative.  Thing is, that premise is valid for certain customers and not others. The Microsoft Office product has morphed from one that offered core word processing, spreadsheet, presentation and email functionality to a suite of applications that provides unique, new value-added features, and even whole applications, in the context of those core services.  The core apps thus grow in mission: Excel is a BI tool.  Word is a collaborative editorial system for the production of publications.  PowerPoint is a media production platform for for live presentations and, increasingly, for delivering more effective presentations online.  Outlook is a time and task management system.  Access is a rich client front-end for data-driven self-service SharePoint applications.  OneNote helps you capture ideas, corral random thoughts in a semi-structured way, and then tie them back to other, more rigidly structured, Office documents. Google Docs and other cloud productivity platforms like Zoho don’t really do these things.  And there is a growing chorus of voices who say that they shouldn’t, because those ancillary capabilities are over-engineered, over-produced and “under-necessary.”  They might say Microsoft is layering on superfluous capabilities to avoid admitting that Office’s core capabilities, the ones people really need, have become commoditized. It’s hard to take sides in that argument, because different people, and the different companies that employ them, have different needs.  For my own needs, it all comes down to three basic questions: will the new version of Office save me time, will it make the mundane parts of my job easier, and will it augment my services to customers?  I need my time back.  I need to spend more of it with my family, and more of it focusing on my own core capabilities rather than the administrative tasks around them.  And I also need my customers to be able to get more value out of the services I provide. Help me triage my inbox, help me get proposals done more quickly and make them easier to read.  Let me get my presentations done faster, make them more effective and make it easier for me to reuse materials from other presentations.  And, since I’m in the BI and data business, help me and my customers manage data and analytics more easily, both on the desktop and online. Those are my criteria.  And, with those in mind, Office 2010 is looking like a worthwhile upgrade.  
Perhaps it’s not earth-shattering, but it offers a combination of incremental improvements and a few new major capabilities that I think are quite compelling.  I provide a brief roundup of them here.  It’s admittedly arbitrary and not comprehensive, but I think it tells the Office 2010 story effectively. Across the Suite More than any other, this release of Office aims to give collaboration a real workout.  In certain apps, for the first time, documents can be opened simultaneously by multiple users, with colleagues’ changes appearing in near real-time.  Web-browser-based versions of Word, Excel, PowerPoint and OneNote will be available to extend collaboration to contributors who are off the corporate network. The ribbon user interface is now more pervasive (for example, it appears in OneNote and in Outlook’s main window).  It’s also customizable, allowing users to add, easily, buttons and options of their choosing, into new tabs, or into new groups within existing tabs. Microsoft has also taken the File menu (which was the “Office Button” menu in the 2007 release) and made it into a full-screen “Backstage” view where document-wide operations, like saving, printing and online publishing are performed. And because, more and more, heavily formatted content is cut and pasted between documents and applications, Office 2010 makes it easier to manage the retention or jettisoning of that formatting right as the paste operation is performed.  That’s much nicer than stripping it off, or adding it back, afterwards. And, speaking of pasting, a number of Office apps now make it especially easy to insert screenshots within their documents.  I know that’s useful to me, because I often document or critique applications and need to show them in action.  For the vast majority of users, I expect that this feature will be more useful for capturing snapshots of Web pages, but we’ll have to see whether this feature becomes popular.   Excel At first glance, Excel 2010 looks and acts nearly identically to the 2007 version.  But additional glances are necessary.  It’s important to understand that lots of people in the working world use Excel as more of a database, analytics and mathematical modeling tool than merely as a spreadsheet.  And it’s also important to understand that Excel wasn’t designed to handle such workloads past a certain scale.  That all changes with this release. The first reason things change is that Excel has been tuned for performance.  It’s been optimized for multi-threaded operation; previously lengthy processes have been shortened, especially for large data sets; more rows and columns are allowed and, for the first time, Excel (and the rest of Office) is available in a 64-bit version.  For Excel, this means users can take advantage of more than the 2GB of memory that the 32-bit version is limited to. On the analysis side, Excel 2010 adds Sparklines (tiny charts that fit into a single cell and can therefore be presented down an entire column or across a row) and Slicers (a more user-friendly filter mechanism for PivotTables and charts, which visually indicates what the filtered state of a given data member is).  But most important, Excel 2010 supports the new PowerPIvot add-in which brings true self-service BI to Office.  PowerPivot allows users to import data from almost anywhere, model it, and then analyze it.  
Rather than forcing users to build “spreadmarts” or use corporate-built data warehouses, PowerPivot models function as true columnar, in-memory OLAP cubes that can accommodate millions of rows of data and deliver fast drill-down performance. And speaking of OLAP, Excel 2010 now supports an important Analysis Services OLAP feature called write-back.  Write-back is especially useful in financial forecasting scenarios for which Excel is the natural home.  Support for write-back is long overdue, but I’m still glad it’s there, because I had almost given up on it.   PowerPoint This version of PowerPoint marks its progression from a presentation tool to a video and photo editing and production tool.  Whether or not it’s successful in this pursuit, and if offering this is even a sensible goal, is another question. Regardless, the new capabilities are kind of interesting.  A greatly enhanced set of slide transitions with 3D effects; in-product photo and video editing; accommodation of embedded videos from services such as YouTube; and the ability to save a presentation as a video each lay testimony to PowerPoint’s transformation into a media tool and away from a pure presentation tool. These capabilities also recognize the importance of the Web as both a source for materials and a channel for disseminating PowerPoint output. Congruent with that is PowerPoint’s new ability to broadcast a slide presentation, using a quickly-generated public URL, without involving the hassle or expense of a Web meeting service like GoToMeeting or Microsoft’s own LiveMeeting.  Slides presented through this broadcast feature retain full color fidelity and transitions and animations are preserved as well.   Outlook Microsoft’s ubiquitous email/calendar/contact/task management tool gains long overdue speed improvements, especially against POP3 email accounts.  Outlook 2010 also supports multiple Exchange accounts, rather than just one; tighter integration with OneNote; and a new Social Connector providing integration with, and presence information from, online social network services like LinkedIn and Facebook (not to mention Windows Live).  A revamped conversation view now includes messages that are part of a given thread regardless of which folder they may be stored in. I don’t know yet how well the Social Connector will work or whether it will keep Outlook relevant to those who live on Facebook and LinkedIn.  But among the other features, there’s very little not to like.   OneNote To me, OneNote is the part of Office that just keeps getting better.  There is one major caveat to this, which I’ll cover in a moment, but let’s first catalog what new stuff OneNote 2010 brings.  The best part of OneNote, is the way each of its versions have managed hierarchy: Notebooks have sections, sections have pages, pages have sub pages, multiple notes can be contained in either, and each note supports infinite levels of indentation.  None of that is new to 2010, but the new version does make creation of pages and subpages easier and also makes simple work out of promoting and demoting pages from sub page to full page status.  And relationships between pages are quite easy to create now: much like a Wiki, simply typing a page’s name in double-square-brackets (“[[…]]”) creates a link to it. OneNote is also great at integrating content outside of its notebooks.  
With a new Dock to Desktop feature, OneNote becomes aware of what window is displayed in the rest of the screen and, if it’s an Office document or a Web page, links the notes you’re typing, at the time, to it.  A single click from your notes later on will bring that same document or Web page back on-screen.  Embedding content from Web pages and elsewhere is also easier.  Using OneNote’s Windows Key+S combination to grab part of the screen now allows you to specify the destination of that bitmap instead of automatically creating a new note in the Unfiled Notes area.  Using the Send to OneNote buttons in Internet Explorer and Outlook result in the same choice. Collaboration gets better too.  Real-time multi-author editing is better accommodated and determining author lineage of particular changes is easily carried out. My one pet peeve with OneNote is the difficulty using it when I’m not one a Windows PC.  OneNote’s main competitor, Evernote, while I believe inferior in terms of features, has client versions for PC, Mac, Windows Mobile, Android, iPhone, iPad and Web browsers.  Since I have an Android phone and an iPad, I am practically forced to use it.  However, the OneNote Web app should help here, as should a forthcoming version of OneNote for Windows Phone 7.  In the mean time, it turns out that using OneNote’s Email Page ribbon button lets you move a OneNote page easily into EverNote (since every EverNote account gets a unique email address for adding notes) and that Evernote’s Email function combined with Outlook’s Send to OneNote button (in the Move group of the ribbon’s Home tab) can achieve the reverse.   Access To me, the big change in Access 2007 was its tight integration with SharePoint lists.  Access 2010 and SharePoint 2010 continue this integration with the introduction of SharePoint’s Access Services.  Much as Excel Services provides a SharePoint-hosted experience for viewing (and now editing) Excel spreadsheet, PivotTable and chart content, Access Services allows for SharePoint browser-hosted editing of Access data within the forms that are built in the Access client itself. To me this makes all kinds of sense.  Although it does beg the question of where to draw the line between Access, InfoPath, SharePoint list maintenance and SharePoint 2010’s new Business Connectivity Services.  Each of these tools provide overlapping data entry and data maintenance functionality. But if you do prefer Access, then you’ll like  things like templates and application parts that make it easier to get off the blank page.  These features help you quickly get tables, forms and reports built out.  To make things look nice, Access even gets its own version of Excel’s Conditional Formatting feature, letting you add data bars and data-driven text formatting.   Word As I said at the beginning of this post, upgrades to Office are about much more than enhancing the suite’s flagship word processing application. So are there any enhancements in Word worth mentioning?  I think so.  The most important one has to be the collaboration features.  Essentially, when a user opens a Word document that is in a SharePoint document library (or Windows Live SkyDrive folder), rather than the whole document being locked, Word has the ability to observe more granular locks on the individual paragraphs being edited.  Word also shows you who’s editing what and its Save function morphs into a sync feature that both saves your changes and loads those made by anyone editing the document concurrently. 
There’s also a new navigation pane that lets you manage sections in your document in much the same way as you manage slides in a PowerPoint deck.  Using the navigation pane, you can reorder sections, insert new ones, or promote and demote sections in the outline hierarchy.  Not earth shattering, but nice.   Other Apps and Summarized Findings What about InfoPath, Publisher, Visio and Project?  I haven’t looked at them yet.  And for this post, I think that’s fine.  While those apps (and, arguably, Access) cater to specific tasks, I think the apps we’ve looked at in this post service the general purpose needs of most users.  And the theme in those 2010 apps is clear: collaboration is key, the Web and productivity are indivisible, and making data and analytics into a self-service amenity is the way to go.  But perhaps most of all, features are still important, as long as they get you through your day faster, rather than adding complexity for its own sake.  I would argue that this is true for just about every product Microsoft makes: users want utility, not complexity.

    Read the article

  • Openmatics Revolutionizes Fleet Management with Standards-Based Vehicle Telematics Platform

    - by Michael Snow
    Openmatics s.r.o. was founded in 2010 as a subsidiary of ZF Friedrichshafen AG, a global player in driveline and chassis technology.
Oracle Customer: Openmatics s.r.o. | Location: Pilsen, Czech Republic | Industry: Automotive | Employees: 70
Its goal was to develop and operate a flexible, open telematics platform for automotive applications, which is independent from vehicle and component suppliers—recognizing that the fragmented telematics market was not meeting today’s fleet management needs. Openmatics provides a rich product portfolio, and customers can extend the platform, as required, to meet their needs. Partners and third parties can develop their own applications using the Openmatics software development kit and can sell them via the Openmatics app shop. ZF Friedrichshafen AG is a global player in driveline and chassis technology. With 121 production companies and 650 service partners in 26 countries, ZF is among the top 10 largest automotive suppliers worldwide. Founded in 1915 to develop and produce transmissions for airships and vehicles, the group’s product offerings now include transmissions and steering systems as well as chassis components and complete axle systems and modules.
A word from Openmatics s.r.o.: “Oracle WebCenter Portal, together with the underlying Oracle Application Development Framework, provided the fundamental infrastructure for the Openmatics platform. Fleet managers can now reduce fuel consumption and operating costs, and more efficiently manage vehicle usage, maintenance, and safety. The standards-based platform allows third-party suppliers to deploy their own vehicle telematics services as Openmatics apps and creates a de facto standard for the automotive industry, independent from a single manufacturer or service provider.” – Gero Strobel, Head of Development, Openmatics s.r.o.
Challenges
- Create an industry standard for vehicle telematics by establishing a customizable platform that enables access to telematics information, such as current and past fuel consumption, through a web browser to better meet automotive market and customer needs
- Reduce fleet-management costs by eliminating the need to invest in isolated telematics hardware and software solutions per vehicle brand and vehicle component manufacturer
- Establish an open platform where third-party providers—such as original equipment manufacturers (OEM), insurers, fleet operators, and individual developers—can deploy their own vehicle telematics services
- Allow users to purchase targeted telematics services as single apps to reduce costs and ensure rapid growth of telematics services available on the platform
- Enable users to configure their telematics apps with ease to make sure the platform meets individual fleet management requirements, such as analyzing past and current fuel consumption of a truck fleet
Solutions
- Deployed Oracle WebCenter Portal as a foundation for Openmatics, a standards-based automotive telematics platform that provides next-generation fleet management with unified digital communication from and to vehicles on the move
- Used Oracle Application Development Framework as the development framework for Oracle WebCenter Portal’s components and services, providing developers with ready-to-use software development kits with application programming interfaces, design templates, and visual tools that accelerated time to market
- Used Oracle Enterprise Pack for Eclipse to simplify telematics application development in Java
- Enabled fleet monitoring by recording vehicle data—such as fuel consumption information—through onboard units, delivering the information to Oracle Database, and making it accessible through a customizable app portfolio on any web browser
- Stored vehicle telematics data—sent as encrypted information—in Oracle Database, ensuring data integrity and immediate availability for the platform’s telematics applications
- Enabled a wide range of telematics services suppliers, from vehicle component manufacturers to fleet application developers, to offer vehicle telematics services on the Openmatics platform, ensuring platform independence from OEMs
- Provided Openmatics customers with the means to individually select the automotive telematics services that are relevant to their business requirements, eliminating the need to pay for superfluous information and reducing fleet management costs
Oracle Products & Services
- Oracle Application Development Framework
- Oracle WebCenter Portal
- Oracle SOA Suite
- Oracle Enterprise Pack for Eclipse
- Oracle Database
- Oracle Consulting

    Read the article

  • Amazon EC2 Instance - m1.medium Ubuntu 12.04 - Started to crash three days ago

    - by Joy
    The environment: Amazon EC2 Instance - m1.medium, Ubuntu 12.04, Apache 2.2.22 - running a Drupal site using a MySQL DB server. RAM info: ~$ free -gt total used free shared buffers cached Mem: 3 1 2 0 0 0 -/+ buffers/cache: 0 2 Swap: 0 0 0 Total: 3 1 2 Hard drive info: Filesystem Size Used Avail Use% Mounted on /dev/xvda1 7.9G 4.7G 2.9G 62% / udev 1.9G 8.0K 1.9G 1% /dev tmpfs 751M 180K 750M 1% /run none 5.0M 0 5.0M 0% /run/lock none 1.9G 0 1.9G 0% /run/shm /dev/xvdb 394G 199M 374G 1% /mnt The problem: About two days ago the site started failing because the MySQL server was shut down by Apache with the following message: kernel: [2963685.664359] [31716] 106 31716 226946 22748 0 0 0 mysqld kernel: [2963685.664730] Out of memory: Kill process 31716 (mysqld) score 23 or sacrifice child kernel: [2963685.664764] Killed process 31716 (mysqld) total-vm:907784kB, anon-rss:90992kB, file-rss:0kB kernel: [2963686.153608] init: mysql main process (31716) killed by KILL signal kernel: [2963686.169294] init: mysql main process ended, respawning That states that the VM was occupying 0.9GB, but my RAM has 2GB free, so 1GB was still left free. I understand that in Linux applications can allocate more memory than physically available. I don't know if this is the problem; it's the first time that it has started to happen. Obviously, the MySQL server tries to restart, but apparently there's no memory for it and it won't restart. Here is its error log: Plugin 'FEDERATED' is disabled. The InnoDB memory heap is disabled Mutexes and rw_locks use GCC atomic builtins Compressed tables use zlib 1.2.3.4 Initializing buffer pool, size = 128.0M InnoDB: mmap(137363456 bytes) failed; errno 12 Completed initialization of buffer pool Fatal error: cannot allocate memory for the buffer pool Plugin 'InnoDB' init function returned error. Plugin 'InnoDB' registration as a STORAGE ENGINE failed. Unknown/unsupported storage engine: InnoDB [ERROR] Aborting [Note] /usr/sbin/mysqld: Shutdown complete I simply restarted the MySQL service. About two hours later it happened again. I restarted it. Then it happened again 9 hours later. So then I thought of the MaxClients parameter of apache.conf, so I went to check it out. It was set at 150. I decided to drop it down to 60, like so: <IfModule mpm_prefork_module> ... MaxClients 60 </IfModule> <IfModule mpm_worker_module> ... MaxClients 60 </IfModule> <IfModule mpm_event_module> ... MaxClients 60 </IfModule> Once I did that, I restarted the apache2 service and it all went smoothly for 3/4 of a day. But at night the MySQL service shut down once again, and this time it wasn't killed by the Apache2 service. Instead it invoked the OOM killer with the following message: kernel: [3104680.005312] mysqld invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0 kernel: [3104680.005351] [<ffffffff81119795>] oom_kill_process+0x85/0xb0 kernel: [3104680.548860] init: mysql main process (30821) killed by KILL signal Now I'm out of ideas. Some articles state that the ideal thing to do is change the kernel behaviour with the following (include it in the file /etc/sysctl.conf): vm.overcommit_memory = 2 vm.overcommit_ratio = 80 So no overcommits will take place. I'm wondering if this is the way to go? Keep in mind I'm no server administrator; I have only basic knowledge. Thanks a bunch in advance.
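For reference, this is what that kernel tuning would look like if I go that route (the 80% ratio is just the value those articles quote, not something I have validated for this workload), applied without a reboot via sysctl -p. The other options I am considering are adding a small swap file, since the instance has none, and lowering MySQL's innodb_buffer_pool_size below the 128M shown in the error log:

# /etc/sysctl.conf
vm.overcommit_memory = 2
vm.overcommit_ratio = 80

sudo sysctl -p

# optional: 1 GB swap file (size and path are arbitrary examples)
sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

Which of these (if any) is the right direction?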

    Read the article

  • 3 Incredibly Useful Projects to jump-start your Kinect Development.

    - by mbcrump
    I’ve been playing with the Kinect SDK Beta for the past few days and have noticed a few projects on CodePlex worth checking out. I decided to blog about them to help spread awareness. If you want to learn more about the Kinect SDK then check out my “Busy Developer’s Guide to the Kinect SDK Beta”. Let’s get started:
KinectContrib is a set of VS2010 templates that will help you get started building a Kinect project very quickly. Once you have it installed you will have the option to select the following templates: KinectDepth, KinectSkeleton and KinectVideo. Please note that KinectContrib requires the Kinect for Windows SDK beta to be installed. The Kinect templates appear after installing the template pack, and the reference to Microsoft.Research.Kinect is added automatically. Here is a sample of the code for the MainWindow.xaml in the “Video” template:

<Window x:Class="KinectVideoApplication1.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="480" Width="640">
    <Grid>
        <Image Name="videoImage"/>
    </Grid>
</Window>

and MainWindow.xaml.cs:

using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using Microsoft.Research.Kinect.Nui;

namespace KinectVideoApplication1
{
    public partial class MainWindow : Window
    {
        //Instantiate the Kinect runtime. Required to initialize the device.
        //IMPORTANT NOTE: You can pass the device ID here, in case more than one Kinect device is connected.
        Runtime runtime = new Runtime();

        public MainWindow()
        {
            InitializeComponent();
            //Runtime initialization is handled when the window is opened. When the window
            //is closed, the runtime MUST be uninitialized.
            this.Loaded += new RoutedEventHandler(MainWindow_Loaded);
            this.Unloaded += new RoutedEventHandler(MainWindow_Unloaded);
            //Handle the content obtained from the video camera, once received.
            runtime.VideoFrameReady += new EventHandler<Microsoft.Research.Kinect.Nui.ImageFrameReadyEventArgs>(runtime_VideoFrameReady);
        }

        void MainWindow_Unloaded(object sender, RoutedEventArgs e)
        {
            runtime.Uninitialize();
        }

        void MainWindow_Loaded(object sender, RoutedEventArgs e)
        {
            //Since only a color video stream is needed, RuntimeOptions.UseColor is used.
            runtime.Initialize(Microsoft.Research.Kinect.Nui.RuntimeOptions.UseColor);
            //You can adjust the resolution here.
            runtime.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color);
        }

        void runtime_VideoFrameReady(object sender, Microsoft.Research.Kinect.Nui.ImageFrameReadyEventArgs e)
        {
            PlanarImage image = e.ImageFrame.Image;
            BitmapSource source = BitmapSource.Create(image.Width, image.Height, 96, 96, PixelFormats.Bgr32, null, image.Bits, image.Width * image.BytesPerPixel);
            videoImage.Source = source;
        }
    }
}

You will find this template pack is very handy, especially for those new to Kinect development.
Next up is the Coding4Fun Kinect Toolkit, which contains extension methods and a WPF control to help you develop with the Kinect SDK. After downloading the package simply add a reference to the .dll using either the WPF or WinForms version. You will then have access to several extension methods that can, for example, help you save an image. For a full list of extension methods and properties, please visit the site at http://c4fkinect.codeplex.com/.
Kinductor – This is a great application for just learning how to use the Kinect SDK. The project uses MVVM Light and is a great start for those looking to structure their first Kinect application.
Conclusion: Things are already getting easier for those working with the Kinect SDK. I imagine that after a few more months we will see the SDK go out of beta and allow commercial applications to run using it. I am very excited and hope that you continue reading my blog for more Kinect, WPF and Silverlight news.  Subscribe to my feed

    Read the article

  • Approach for packing 2D shapes while minimizing total enclosing area

    - by Dennis
    Not sure on my tags for this question, but in short .... I need to solve a problem of packing industrial parts into crates while minimizing total containing area. These parts are motors, or pumps, or custom-made components, and they have quite unusual shapes. For some, it may be possible to assume that a part === rectangular cuboid, but some are not so simple, i.e. they assume a shape more of that of a hammer or letter T. With those, (assuming 2D shape), by alternating direction of top & bottom, one can pack more objects into the same space, than if all tops were in the same direction. Crude example below with letter "T"-shaped parts: ***** xxxxx ***** x ***** *** ooo * x vs * x vs * x vs * x o * x * xxxxx * x * x o xxxxx xxx Right now we are solving the problem by something like this: using CAD software, make actual models of how things fit in crate boxes make estimates of actual crate dimensions & write them into Excel file (1) is crazy amount of work and as the result we have just a limited amount of possible entries in (2), the Excel file. The good things is that programming this is relatively easy. Given a combination of products to go into crates, we do a lookup, and if entry exists in the Excel (or Database), we bring it out. If it doesn't, we say "sorry, no data!". I don't necessarily want to go full force on making up some crazy algorithm that given geometrical part description can align, rotate, and figure out best part packing into a crate, given its shape, but maybe I do.. Question Well, here is my question: assuming that I can represent my parts as 2D (to be determined how), and that some parts look like letter T, and some parts look like rectangles, which algorithm can I use to give me a good estimate on the dimensions of the encompassing area, while ensuring that the parts are packed in a minimal possible area, to minimize crating/shipping costs? Are there approximation algorithms? Seeing how this can get complex, is there an existing library I could use? My thought / Approach My naive approach would be to define a way to describe position of parts, and place the first part, compute total enclosing area & dimensions. Then place 2nd part in 0 degree orientation, repeat, place it at 180 degree orientation, repeat (for my case I don't think 90 degree rotations will be meaningful due to long lengths of parts). Proceed using brute force "tacking on" other parts to the enclosing area until all parts are processed. I may have to shift some parts a tad (see 3rd pictorial example above with letters T). This adds a layer of 2D complexity rather than 1D. I am not sure how to approach this. One idea I have is genetic algorithms, but I think those will take up too much processing power and time. I will need to look out for shape collisions, as well as adding extra padding space, since we are talking about real parts with irregularities rather than perfect imaginary blocks. I'm afraid this can get geometrically messy fairly fast, and I'd rather keep things simple, if I can. But what if the best (practical) solution is to pack things into different crate boxes rather than just one? This can get a bit more tricky. There is human element involved as well, i.e. like parts can go into same box and are thus a constraint to be considered. Some parts that are not the same are sometimes grouped together for shipping and can be considered as a common grouped item. Sometimes customers want things shipped their way, which adds human element to constraints. 
so there will have to be some customization.

    Read the article

  • What components and IDE add-ins do you install with Delphi?

    - by Mick
    After a clean install of Delphi, what components and IDE add-ins do you make certain that you install? What's your Delphi "rig"? Here's what I install after a clean installation: Delphi 2007 JCL / JVCL - JEDI Code Library and JEDI Visual Code Library (600+ components) JWA / JWSCL - JEDI API Library & Security Code Library GExperts - GExperts is a free set of tools built to increase the productivity of Delphi and C++Builder programmers by adding several features to the IDE. TWM's experimental GExperts code formatter - adds code formatting capabilities to Delphi Virtual TreeView - Virtual Treeview is a treeview control built from ground up. More than 5 years of development made it one of the most flexible and advanced tree controls available today. MustangPeak Components (EasyList View, Virtual ShellTools, etc) - EasyListview is a control that has no dependance on the Microsoft Listview control but has all the features of the latest version from Microsoft. Also includes 'Explorer.exe' like shell components. Synapse lightweight networking components - contains simple low level non-visual objects for easy programming without problems. (no required multi-threaded synchronization, no need for windows message processing,…) Great for command line utilities, visual projects, NT services EurekaLog - EurekaLog is a complete bug resolution tool for Delphi and C++Builder developers that gives your application the power to catch every exception and memory leak, directly on the end user PC, generating a detailed log of the call stack (with file, class, method and line number), optionally sending you a copy of each log entry via email or to a web bug-tracker. DelphiSpeedUp - DelphiSpeedUp is an IDE plugin for Delphi and C++Builder. It improves the IDE’s startup speed and increases the general speed of the whole IDE. DDevExtensions - DDevExtensions extends the Delphi/C++Builder IDE by adding some new productivity features. IDE Fix Pack - The IDE Fix Pack installs is a DLL-Expert that fixes the following RAD Studio 2007 bugs at runtime. All changes are done in memory. No file on disk is modified. TPerlRegex - Regular Expression library for Delphi How about other Delphi developers?

    Read the article

  • Error while dynamically loading mapi32.dll

    - by The_Fox
    Our application uses Simple MAPI to send e-mails. One of our clients has problems sending e-mail from a session on his terminal server. The mapi32.dll is loaded with a call to LoadLibrary, which succeeds, but then our application tries to get the addresses of the functions MAPILogon, MAPILogOff, MAPISendMail, MAPIFreeBuffer and MAPIResolveName. The problem is that GetProcAddress fails for those functions with ERROR_ACCESS_DENIED (code: 5), except for MAPIFreeBuffer. It looks like some sort of security thing. How can I fix this, or should I use another method to send mail? FYI, here is some more information about the OS and the contents of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Messaging Subsystem.
    OS info: 5.2.3790 VER_PLATFORM_WIN32_NT Service Pack 2
    Contents of SOFTWARE\Microsoft\Windows Messaging Subsystem:
        InstallCmd: rundll32 setupapi,InstallHinfSection MSMAIL 132 msmail.inf
        MAPI: 1
        CMCDLLNAME: mapi.dll
        CMCDLLNAME32: mapi32.dll
        CMC: 1
        MAPIX: 1
        MAPIXVER: 1.0.0.1
        OLEMessaging: 1
    Contents of SOFTWARE\Microsoft\Windows Messaging Subsystem\MSMapiApps:
        inetsw95.exe:
        choosusr.dll:
        msab32.dll:
        nwab32.dll:
        outstore.dll: Microsoft Outlook
        CDOEXM.DLL:
        EMSMDB32.DLL:
        EMSABP32.DLL:
        newprof.exe: Microsoft Outlook
        outlook.exe:
        wfxmsrvr.exe: Microsoft Outlook
        msexcimc.exe:
        exchng32.exe:
        schdmapi.dll: Microsoft Outlook
        pilotcfg.exe: Microsoft Outlook
        mailmig.exe: Microsoft Outlook
        admin.exe:
        msspc32.dll: Microsoft Outlook
        cnfnot32.exe: Microsoft Outlook
        ilpilot.exe: Microsoft Outlook
        events.exe:
    I'm on Delphi 7.0, but that shouldn't matter.
    Edit, added version information:
    Fileversion info of C:\WINDOWS\system32\mapi32.dll:
        Fileversion: 6.5.7226.0
        FileDescription=Extended MAPI 1.0 for Windows NT
        CompanyName=Microsoft Corporation
        InternalName=MAPI32
        Comments=Service Pack 1
        LegalCopyRight=Copyright (C) 1986-2003 Microsoft Corp. All rights reserved.
        LegalTradeMarks=Microsoft(R) and Windows(R) are registered trademarks of Microsoft Corporation.
        OriginalFileName=MAPI32.DLL
        ProductName=Microsoft Exchange
        ProductVersion=6.5
    Fileversion info of C:\Program Files\Common Files\SYSTEM\MSMAPI\1043\msmapi32.dll:
        Fileversion: 11.0.5601.0
        FileDescription=Extended MAPI 1.0 for Windows NT
        CompanyName=Microsoft Corporation
        InternalName=MAPI32.DLL
        LegalCopyRight=Copyright © 1995-2003 Microsoft Corporation. All rights reserved.
        OriginalFileName=MAPI32.DLL
        ProductName=MAPI32
        ProductVersion=11.0.5601
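
    For reference, the dynamic-loading pattern the question describes looks roughly like the Delphi sketch below. It is only an assumed probe (the program name, loop and output are not from the post, and the function names are copied from the question as written); its sole purpose is to surface the Windows error that GetProcAddress reports for each export.
        // Sketch: load Simple MAPI dynamically and report which function pointers resolve.
        program MapiProbe;

        {$APPTYPE CONSOLE}

        uses
          Windows, SysUtils;

        const
          MapiFuncs: array[0..4] of string = (
            'MAPILogon', 'MAPILogOff', 'MAPISendMail', 'MAPIFreeBuffer', 'MAPIResolveName');

        var
          hMapi: HMODULE;
          i: Integer;
          p: Pointer;
        begin
          hMapi := LoadLibrary('mapi32.dll');
          if hMapi = 0 then
            RaiseLastOSError;            // LoadLibrary itself failed
          try
            for i := Low(MapiFuncs) to High(MapiFuncs) do
            begin
              p := GetProcAddress(hMapi, PChar(MapiFuncs[i]));
              if p = nil then
                // On the affected terminal server this is where error 5 shows up
                Writeln(MapiFuncs[i], ' -> ', SysErrorMessage(GetLastError))
              else
                Writeln(MapiFuncs[i], ' -> OK');
            end;
          finally
            FreeLibrary(hMapi);
          end;
        end.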

    Read the article

  • Perl unpack in list context

    - by drewk
    A common 'Perlism' is generating a list as something to loop over, in this form:
        for ($str=~/./g) { print "the next character from \"$str\"=$_\n"; }
    In this case the global match regex returns a list that is one character at a time from the string $str, and assigns that value to $_. Instead of a regex, split can be used in the same way, or 'a'..'z', map, etc. I am investigating unpack to generate a field-by-field interpretation of a string. I have always found unpack to be less straightforward for the way my brain works, and I have never really dug that deeply into it. As a simple case, I want to generate a list that has one character of the string in each element, using unpack (yes -- I know I can do it with split(//,$str) and /./g, but I really want to see if unpack can be used this way...). Obviously, I can use a field list for unpack that is unpack("A1" x length($str), $str), but is there some other way that kinda looks like globbing? I.e., can I call unpack(some_format,$str) either in list context or in a loop such that unpack will return the next group of characters in the format group until $str is exhausted? I have read the Perl 5.12 pack pod, the Perl 5.12 pack tutorial and the Perlmonks tutorial. Here is the sample code:
        #!/usr/bin/perl
        use warnings;
        use strict;
        my $str=join('',('a'..'z', 'A'..'Z')); #the alphabet...
        $str=~s/(.{1,3})/$1 /g;                #...in groups of three
        print "str=$str\n\n";
        for ($str=~/./g) { print "regex: = $_\n"; }
        for(split(//,$str)) { print "split: \$_=$_\n"; }
        for(unpack("A1" x length($str), $str)) { print "unpack: \$_=$_\n"; }
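
    One sketch worth trying, offered as a hedged suggestion rather than a definitive answer: since Perl 5.8 the pack/unpack template syntax allows a parenthesised group with a * repeat count, so a small format group can be applied repeatedly until the string is exhausted.
        #!/usr/bin/perl
        use strict;
        use warnings;

        my $str = join '', ('a' .. 'z', 'A' .. 'Z');

        # "(A1)*" repeats the one-character group until the string runs out,
        # so unpack returns one element per character in list context.
        for my $ch ( unpack '(A1)*', $str ) {
            print "unpack group: $ch\n";
        }

        # The same idea works for wider fields, e.g. three characters at a time:
        my @triples = unpack '(A3)*', $str;
        print "first triple: $triples[0]\n";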

    Read the article

  • Scheme homework Black jack help....

    - by octavio
    So I need to make a blackjack game simulator, but I can't seem to figure out what's wrong with the shuffle. It's supposed to take a card randomly from the pack, put it on top of the pack, then delete it from the rest. So:
        (ace)(2)(3)(4)(5)...(k)
    if the random card is, let's say, 5:
        (5)(ace)(2)(3)(4)(5)...(k)
    then it deletes the 2nd 5:
        (5)(ace)(2)(3)(4)(6)...(k)
    Here is the code:
        (define deck '((A . C) (2 . C) (3 . C) (4 . C) (5 . C) (6 . C) (7 . C)
                       (8 . C) (9 . C) (10 . C) (V . C) (Q . C) (K . C)))

        ;auxilliary function for shuffle let you randomly select a card.
        (define shuffAux
          (lambda (t)
            (define cardR (lambda (t) (list-ref t (random 13))))
            (cardR t)))

        ;auxilliary function used to remove the card after the car to prevent you from
        ;removing the randomly selected from the car (begining of the deck).
        (define (removeDupC card deck)
          (delete card (cdr deck)))

        (define shuffle2ndtry
          (lambda (deck seed)
            (define do-shuffle
              (lambda (deck seed)
                (if (> seed 0)
                    ((cons (shuffAux deck) deck)
                     (removeDupC (car deck) deck)
                     (- 1 seed))
                    (write deck))))
            (do-shuffle deck seed)))

        (define (shuffle deck seed)
          (define cards (cons (shuffAux deck) deck))
          (write cards)
          (case (> seed 0)
            [(#t) (removeDupC (car cards) (cdr cards)) (shuffle cards (- seed 1))]
            [(#f) (write cards)]))

        (define random
          (let ((seed 0) (a 3141592653) (c 2718281829) (m (expt 2 35)))
            (lambda (limit)
              (cond ((and (integer? limit))
                     (set! seed (modulo (+ (* seed a) c) m))
                     (quotient (* seed limit) m))
                    (else (/ (* limit (random 34359738368)) 34359738368))))))

        ;function in which you can delete an element from the list.
        (define delete
          (lambda (item list)
            (cond ((equal? item (car list)) (cdr list))
                  (else (cons (car list) (delete item (cdr list)))))))
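
    For comparison, the "pick a random card, move it to the front, drop the duplicate" idea can be written without any in-place tricks. The sketch below is an assumed, simplified version (not the homework's required structure); it reuses the question's delete idea and assumes a (random n) procedure that returns an integer in [0, n), like the question's own random.
        ; Sketch only: repeatedly move a randomly chosen card to the front of the deck.
        (define (remove-one item lst)
          ; remove the first occurrence of item from lst
          (cond ((null? lst) '())
                ((equal? item (car lst)) (cdr lst))
                (else (cons (car lst) (remove-one item (cdr lst))))))

        (define (move-random-to-front deck)
          (let ((card (list-ref deck (random (length deck)))))
            (cons card (remove-one card deck))))

        (define (my-shuffle deck times)
          (if (<= times 0)
              deck
              (my-shuffle (move-random-to-front deck) (- times 1))))

        ; Example:
        ; (my-shuffle '((A . C) (2 . C) (3 . C) (4 . C) (5 . C)) 20)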

    Read the article

  • Java Classpath Issues with Webservices(CXF) and Jboss

    - by JohnC
    I am using CXF(which autogenerates my webservices in my pom.xml from my wsdl) with JBoss(eclipse ide), and I am having some trouble accessing the webservice from my web application. I found this resource: http://blog.progs.be/?p=92 but I am having a really hard time using WSDL_LOCATION = cl.getResource( "my/progam/pack/wsdl/myService.wsdl" ); to work properly in my code. I have my wsdls located in src/main/wsdl and have added the following line to the .classpath file: classpathentry kind="src" path="src/main/wsdl" I also created the folders my,program,pack,wsdl and dropped my wsdls into that location, so it is accessible. However, the classloader.getResource call always returns null no matter what. When I specify getResource( "/wsdl/myService.wsdl" ) it does not return null, but I believe it looks at the full file path and not what I need (considering part of the URL contains the path to the wsdl file all the way through the jboss server directory and includes the WEB-INF dir. Is my .classpath file set up incorrectly or am I missing something else? if the WSDL Location is not correct it always throws a ClassCast Exception like so: java.lang.ClassCastException: org.apache.cxf.jaxws.ServiceImpl at javax.xml.ws.Service.(Service.java:81)
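
    A hedged sketch of the usual "WSDL on the classpath" lookup, with an assumed package path and assumed QName values. For the resource to be found at runtime it has to end up inside the deployed artifact (for example WEB-INF/classes/my/program/pack/wsdl/myService.wsdl), which in a Maven build normally means putting it under src/main/resources or declaring src/main/wsdl as an extra resource directory in the pom rather than only in the Eclipse .classpath.
        import java.net.URL;

        import javax.xml.namespace.QName;
        import javax.xml.ws.Service;

        public class MyServiceLocator {

            public static URL locateWsdl() {
                // No leading slash when using a ClassLoader (as opposed to Class.getResource)
                URL wsdl = Thread.currentThread().getContextClassLoader()
                        .getResource("my/program/pack/wsdl/myService.wsdl");
                if (wsdl == null) {
                    throw new IllegalStateException("myService.wsdl not found on the classpath");
                }
                return wsdl;
            }

            public static void main(String[] args) {
                // The QName values are assumptions; they must match the wsdl:service definition
                QName serviceName = new QName("http://example.com/myservice", "MyService");
                Service service = Service.create(locateWsdl(), serviceName);
                System.out.println("Created service: " + service.getServiceName());
            }
        }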

    Read the article

  • Relative Uri works for BitmapImage, but not for MediaPlayer?

    - by Thomas Stock
    This will be simple for you guys:
        var uri = new Uri("pack://application:,,,/LiftExperiment;component/pics/outside/elevator.jpg");
        imageBitmap = new BitmapImage();
        imageBitmap.BeginInit();
        imageBitmap.UriSource = uri;
        imageBitmap.EndInit();
        image.Source = imageBitmap;
    This works perfectly on a .jpg with Build Action: Content and Copy to Output Directory: Copy always.
        MediaPlayer mp = new MediaPlayer();
        var uri = new Uri("pack://application:,,,/LiftExperiment;component/sounds/DialingTone.wav");
        mp.Open(uri);
        mp.Play();
    This does not work on a .wav with the same build action and copy-to-output setting. I see the file in my /debug/ folder.
        MediaPlayer mp = new MediaPlayer();
        var uri = new Uri(@"E:\projects\LiftExp\_solution\LiftExperiment\bin\Debug\sounds\DialingTone.wav");
        mp.Open(uri);
        mp.Play();
    This works perfectly. So, how do I get the sound to work with a relative path? Why is it not working this way? Let me know if you want more code or screenshots. Thanks.
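
    A commonly suggested workaround, sketched below under the assumption that the .wav is a loose content file copied next to the executable: avoid the pack://application URI for MediaPlayer and point it at the file on disk, either with a relative URI or with an absolute URI built from the application's base directory. Treat this as an assumption to verify against your own project layout, not a confirmed fix from the original thread.
        using System;
        using System.IO;
        using System.Windows.Media;

        public static class SoundHelper
        {
            public static void PlayRelative()
            {
                // Option 1: a relative URI, resolved against the current working directory.
                // This only works while the working directory contains the "sounds" subfolder.
                var mp = new MediaPlayer();
                mp.Open(new Uri(@"sounds\DialingTone.wav", UriKind.Relative));
                mp.Play();
            }

            public static void PlayFromExeFolder()
            {
                // Option 2: build an absolute URI from the executable's folder, which keeps
                // working even if the current directory changes.
                var path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory,
                                        @"sounds\DialingTone.wav");
                var mp = new MediaPlayer();
                mp.Open(new Uri(path, UriKind.Absolute));
                mp.Play();
            }
        }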

    Read the article

  • .NET Test Harness what should it have

    - by Conor
    Hi Folks, We have a software house developing code for us on a project, .NET Web Service (WCF) and we are also paying for a test harness to be built as a separate billable task on a daily rate. I have just joined the company and am reviewing what we are getting from the software house and wanted to know what you guys in industry thought about it? Basically what we got was a WinForm that called the w/s that had an input area (Web Service Request) to drop our XML a Submit button along with a response area for the result of the Web Response and that's it... Our internal BA has created all the xml request documents so there was no logic put into the harness around this. Looking on the Net for a definition of a Test Harness I got this: http://en.wikipedia.org/wiki/Test_harness It states it should have these 3 below things: Automate the testing process. Execute test suites of test cases. Generate associated test reports. Clearly we have got none of this apart from a partial "Automate the testing process" via a WinForm. OK, from my development background I would expect someone to Produce a WinForm as a test harness 5 years ago and really should be using some sort of Tooling around this, I explicitly told the Software House I expected some sort of tooling (NUnit,NBUnit, SOAPIU) so we could create a regression test pack for future use. [Didn’t get it but I asked for this after the requirements were signed off as I wasn’t employed then :)] Would someone be able to clarify with me if my requirement for this is over realistic, I know if I did this, I would use NUnit and TDD and then reuse the test harness as a regression test pack in future? I am interested to see what the community thought. Cheers

    Read the article

  • Can't get SWT Display on Mac OS X.

    - by Mattias Holmqvist
    I'm running Mac OS X Snow Leopard and want to access the Display from the activator in an OSGi bundle. Below is the start method for my activator:
        @Override
        public void start(BundleContext context) throws Exception {
            ExecutorService service = Executors.newSingleThreadExecutor();
            service.execute(new Runnable() {
                @Override
                public void run() {
                    Display display = Display.getDefault();
                    Shell shell = new Shell(display);
                    Text helloText = new Text(shell, SWT.CENTER);
                    helloText.setText("Hello SWT!");
                    helloText.pack();
                    shell.pack();
                    shell.open();
                    while (!shell.isDisposed()) {
                        if (!display.readAndDispatch())
                            display.sleep();
                    }
                    display.dispose();
                }
            });
        }
    Calling this code in a Windows environment works fine, but deploying on Mac OS X I get the following output:
        2009-10-14 17:17:54.050 java[2010:10003] *** __NSAutoreleaseNoPool(): Object 0x101620d20 of class NSCFString autoreleased with no pool in place - just leaking
        2009-10-14 17:17:54.081 java[2010:10003] *** __NSAutoreleaseNoPool(): Object 0x100119240 of class NSCFNumber autoreleased with no pool in place - just leaking
        2009-10-14 17:17:54.084 java[2010:10003] *** __NSAutoreleaseNoPool(): Object 0x1001024b0 of class NSCFString autoreleased with no pool in place - just leaking
        2009-10-14 17:17:54.086 java[2010:10003] *** __NSAutoreleaseNoPool(): Object 0x7fff701d7f70 of class NSCFString autoreleased with no pool in place - just leaking
        2009-10-14 17:17:54.087 java[2010:10003] *** __NSAutoreleaseNoPool(): Object 0x100113330 of class NSCFString autoreleased with no pool in place - just leaking
        2009-10-14 17:17:54.092 java[2010:10003] *** __NSAutoreleaseNoPool(): Object 0x101624540 of class NSCFData autoreleased with no pool in place - just leaking
        . . .
    I've used the -XstartOnFirstThread VM argument without any luck. I'm on 64-bit Cocoa, but I've also tried 32-bit Cocoa. When trying on Carbon I get the following error:
        Invalid memory access of location 00000020 eip=9012337c
    When debugging into the Display class I can see that the Displays[] array only contains null references.
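
    For reference, the canonical SWT event loop used in the snippet above is normally run on the thread the JVM was started on; on Cocoa that is the first thread of the process (which is what -XstartOnFirstThread provides), not a worker thread created by an ExecutorService. A minimal standalone sketch with that assumption spelled out, outside any OSGi activator:
        // Sketch: the same loop, hosted on the launch thread instead of an executor thread.
        import org.eclipse.swt.SWT;
        import org.eclipse.swt.widgets.Display;
        import org.eclipse.swt.widgets.Shell;
        import org.eclipse.swt.widgets.Text;

        public class HelloSwt {
            public static void main(String[] args) {
                // Assumes the JVM was launched with -XstartOnFirstThread on Mac OS X.
                Display display = Display.getDefault();
                Shell shell = new Shell(display);

                Text helloText = new Text(shell, SWT.CENTER);
                helloText.setText("Hello SWT!");
                helloText.pack();

                shell.pack();
                shell.open();
                while (!shell.isDisposed()) {
                    if (!display.readAndDispatch()) {
                        display.sleep();
                    }
                }
                display.dispose();
            }
        }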

    Read the article

  • python - tkinter - update label from variable

    - by Tom
    I wrote a python script that does some stuff to generate and then keep changing some text stored as a string variable. This works, and I can print the string each time it gets changed. Problems have arisen while trying to display that output in a GUI (just as a basic label) using tkinter. I can get the label to display the string for the first time... but it never updates. This is really the first time I have tried to use tkinter, so it's likely I'm making a foolish error. What I've got looks logical to me, but I'm evidently going wrong somewhere! from tkinter import * outputText = 'Ready' counter = int(0) root = Tk() root.maxsize(400, 400) var = StringVar() l = Label(root, textvariable=var, anchor=NW, justify=LEFT, wraplength=398) l.pack() var.set(outputText) while True: counter = counter + 1 #do some stuff that generates string as variable 'result' outputText = result #do some more stuff that generates new string as variable 'result' outputText = result #do some more stuff that generates new string as variable 'result' outputText = result if counter == 5: break root.mainloop() I also tried: from tkinter import * outputText = 'Ready' counter = int(0) root = Tk() root.maxsize(400, 400) var = StringVar() l = Label(root, textvariable=var, anchor=NW, justify=LEFT, wraplength=398) l.pack() var.set(outputText) while True: counter = counter + 1 #do some stuff that generates string as variable 'result' outputText = result var.set(outputText) #do some more stuff that generates new string as variable 'result' outputText = result var.set(outputText) #do some more stuff that generates new string as variable 'result' outputText = result var.set(outputText) if counter == 5: break root.mainloop() In both cases, the label will show 'Ready' but won't update to change that to the strings as they're generated later. After a fair bit of googling and looking through answers on this site, I thought the solution might be to use update_idletasks - I tried putting that in after each time the variable was changed, but it didn't help. It also seems possible I am meant to be using trace and callback somehow to make all this work...but I can't get my head around how that works (I tried, but didn't manage to make anything that even looked like it would do something, let alone actually worked). I'm still very new to both python and especially tkinter, so, any help much appreciated but please spell it out for me if you can :)
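
    A minimal sketch of the pattern that is usually suggested for this, with the real work stood in by a placeholder string (an assumption, since the "do some stuff" code is not shown): let mainloop() run and schedule each step with after(), updating the StringVar from the callback, instead of looping before mainloop() ever starts.
        # Sketch: update the label five times, one second apart, via after().
        from tkinter import *

        root = Tk()
        root.maxsize(400, 400)

        var = StringVar(value="Ready")
        label = Label(root, textvariable=var, anchor=NW, justify=LEFT, wraplength=398)
        label.pack()

        counter = 0

        def do_step():
            global counter
            counter += 1
            result = "step %d finished" % counter   # stand-in for the real work
            var.set(result)                          # the label picks this up automatically
            if counter < 5:
                root.after(1000, do_step)            # schedule the next update in 1 s

        root.after(1000, do_step)
        root.mainloop()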

    Read the article

  • JEditorPanes, Preferred Size, and printing HTML

    - by Ryan Elkins
    I'm trying to print some HTML directly, without displaying anything to the user. It works currently (somewhat) using a custom JEditorPane that implements Printable. The problem I'm having is that it always wants to use a preferred size of 582px x 560px. If I manually change the size using something like setSize(x,y) it will change the size of the pane, but the content renders at the preferred size, not the actual size (so it's still 582x560). I can scale it up to fit the page, but it's basically just an enlarged version where the images are all pixelated and the layout is wrong (based on the smaller window size). Inside the print method of my Printable JEditorPane I used this to try and get the size:
        javax.swing.JWindow wnd = new javax.swing.JWindow();
        wnd.setContentPane(this);
        wnd.setSize(1024,1584);
        wnd.pack();
        Dimension d = wnd.getPreferredSize();
    With or without the setSize and/or pack calls on the JWindow, the preferred size always comes back as 582x560. I do have control over the HTML that I'm trying to print, but I'd rather not have to rewrite all of it to scale it down so that it prints correctly at full size (when scaled up).
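
    A technique often used for off-screen layout, sketched below as an assumed approach rather than the poster's actual fix: give the pane the target page width and an effectively unlimited height so the HTML view reflows to that width, then read back how tall the reflowed content wants to be. The page width and sample HTML are placeholders.
        import java.awt.Dimension;
        import javax.swing.JEditorPane;

        public class HtmlSizer {

            public static Dimension layoutForWidth(String html, int pageWidthPx) {
                JEditorPane pane = new JEditorPane("text/html", html);
                pane.setEditable(false);

                // Force the HTML view to reflow to the target width...
                pane.setSize(pageWidthPx, Integer.MAX_VALUE);

                // ...then ask how tall the reflowed content actually wants to be.
                Dimension preferred = pane.getPreferredSize();
                pane.setSize(pageWidthPx, preferred.height);
                return pane.getSize();
            }

            public static void main(String[] args) {
                Dimension d = layoutForWidth("<html><body><h1>Test</h1><p>Hello</p></body></html>", 1024);
                System.out.println("laid out at: " + d.width + "x" + d.height);
            }
        }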

    Read the article

  • Simple perl program failing to execute

    - by yves Baumes
    Here is a sample that fails:
        #!/usr/bin/perl -w
        # client.pl
        #----------------
        use strict;
        use Socket;
        # initialize host and port
        my $host = shift || 'localhost';
        my $port = shift || 55555;
        my $server = "10.126.142.22";
        # create the socket, connect to the port
        socket(SOCKET,PF_INET,SOCK_STREAM,(getprotobyname('tcp'))[2])
            or die "Can't create a socket $!\n";
        connect( SOCKET, pack( 'Sn4x8', AF_INET, $port, $server ))
            or die "Can't connect to port $port! \n";
        my $line;
        while ($line = <SOCKET>) {
            print "$line\n";
        }
        close SOCKET or die "close: $!";
    It fails with the error:
        Argument "10.126.142.22" isn't numeric in pack at D:\send.pl line 16.
        Can't connect to port 55555!
    I am using this version of Perl:
        This is perl, v5.10.1 built for MSWin32-x86-multi-thread
        (with 2 registered patches, see perl -V for more detail)
        Copyright 1987-2009, Larry Wall
        Binary build 1006 [291086] provided by ActiveState http://www.ActiveState.com
        Built Aug 24 2009 13:48:26
        Perl may be copied only under the terms of either the Artistic License or the GNU General Public License, which may be found in the Perl 5 source kit.
        Complete documentation for Perl, including FAQ lists, should be found on this system using "man perl" or "perldoc perl". If you have access to the Internet, point your browser at http://www.perl.org/, the Perl Home Page.
    I am running the netcat command on the server side, and telnet does work.
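
    The hand-rolled pack('Sn4x8', ...) template expects the address as numeric pieces, which is where the "isn't numeric" warning comes from. A minimal sketch of the usual Socket-module way to build the address structure, reusing the question's host and port values (a hedged illustration, not a guaranteed fix for the connection itself):
        #!/usr/bin/perl
        # Sketch: build the sockaddr with Socket's helpers instead of a raw pack template.
        use strict;
        use warnings;
        use Socket;

        my $server = '10.126.142.22';   # from the question
        my $port   = 55555;

        socket(my $sock, PF_INET, SOCK_STREAM, getprotobyname('tcp'))
            or die "Can't create a socket: $!\n";

        # inet_aton turns the dotted-quad string into its 4-byte form,
        # and sockaddr_in packs it together with the port.
        my $addr = sockaddr_in($port, inet_aton($server));

        connect($sock, $addr) or die "Can't connect to port $port: $!\n";

        while (my $line = <$sock>) {
            print $line;
        }
        close $sock or die "close: $!";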

    Read the article

  • JVM terminates when launching eclipse with J2SE 6.0 on mac os x (need J2SE 6.0 for Oracle enterprise

    - by rooban bajwa
    I know my issue has partly been addressed at this link: http://stackoverflow.com/questions/245803/jvm-terminates-when-launching-eclipse-mat-on-mac-os-with-j2se-60 but that was a year+ ago, plus the link provided there (http://landonf.bikemonkey.org/static/soylatte/) does not seem to be alive any more (I mean the download section on that page no longer provides the 32-bit port of J2SE 6.0 for Mac OS X 10.5). I am trying to run Eclipse 3.5 on Mac OS X 10.5. It works fine with J2SE 5.0, but when I installed the Oracle Enterprise Pack for Eclipse, it requires Eclipse to be started with a J2SE 6.0 JVM, otherwise it gets disabled. Here's the exact message I get from it: "You are running Eclipse on Java VM version: 1.5.0_22. Oracle Enterprise Pack for Eclipse requires Java version 6 or higher. Click next to configure a compatible Java VM." It asks me to point to a J2SE 6.0 JVM; when I do that (i.e. point it to "/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home") it asks to restart Eclipse, and when I do that, Eclipse just bombs with a "JVM terminated" error. So I need to start Eclipse with a J2SE 6.0 JVM, but Eclipse needs Carbon, which is only available in 32 bits, and hence I can't start Eclipse with the J2SE 6.0 JVM, which is only available in 64-bit mode on the Mac. And the site providing the 32-bit port of J2SE 6.0 does not seem to be active anymore. Can someone help me with this issue? Thanks in advance.

    Read the article

  • Visual Studio 2008: Can't connect to known good TFS 2010 beta 2

    - by p.campbell
    A freshly installed TFS 2010 Beta 2 is at http://serverX:8080/tfs. A Windows 7 developer machine with VS 2008 Pro SP1 and the VS2008 Team Explorer (no SP). The TFS 2008 Service Pack 1 didn't work for me - "None of the products that are addressed by this software update are installed on this computer." The developer machine is able to browse the TFS site at the above URL. The Issue is around trying to add the TFS server into the Team Explorer window in Visual Studio 2008. Here's a screenshot showing the error: unable to connect to this Team Foundation Server. Possible reasons for failure include: The Team Foundation Server name, port number or protocol is incorrect. The Team Foundation Server is offline. Password is expired or incorrect. The TFS server is up and running properly. Firewall ports are open, and is accessible via the browser on the dev machine!! larger image Question: how can you connect from VS 2008 Pro to a TFS 2010 Beta 2 server? Resolution Here's how I solved this problem: installed VS 2008 Team Explorer as above. re-install VS 2008 Service Pack 1 when adding a TFS server to Team Explorer, you MUST specify the URL as such: http://[tfsserver]:[port]/[vdir]/[projectCollection] in my case above, it was http://serverX:8080/tfs/AppDev-TestProject you cannot simply add the TFS server name and have VS look for all Project Collections on the server. TFS 2010 has a new URL (by default) and VS 2008 doesn't recognize how to gather that list.

    Read the article

  • Problem with relative path to image in XAML?

    - by Giri
    I am trying to reference a PNG file in my application's working directory through XAML with the following:
        <Image Name="contactImage">
          <Image.Source>
            <BitmapImage UriSource="/Images/contact.png">
          </Image.Source>
        </Image>
    Now in my code-behind I try to get the height of the image with contactImage.Source.Height. This fails with System.IOException - cannot locate resource 'images/contact.png'. If I use something like
        PngBitmapDecoder p = new PngBitmapDecoder(new Uri("./Images/contact.png"), UriKind.Relative, BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.Default);
    everything is happy. How can I reference an image in XAML with a path relative to the working directory of the app? BTW, this is being run on a remote machine (if that makes a difference). I have tried "./Images/contact.png" and ".\Images\contact.png" and several other combinations of back/forward slashes and dots. Here is the primary difference: any time the file is referenced in XAML, it shows up as pack://application:,,,blah blah blah; when I use the PngBitmapDecoder, it shows up correctly as "./Images/contact.png". How do I reference the image file in XAML and get it to show a source of "./Images/contact.png" instead of pack://application:,,,blah blah blah?
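
    A hedged sketch of two options that are commonly used for loose files that live next to the executable rather than being compiled resources; the folder layout is assumed (an "Images" folder marked Content + Copy to Output Directory). In XAML the same "siteoforigin" form can reportedly be used directly as Source="pack://siteoforigin:,,,/Images/contact.png"; verify against your deployment, since the original post does not confirm it.
        using System;
        using System.IO;
        using System.Windows.Controls;
        using System.Windows.Media.Imaging;

        public static class ContactImageLoader
        {
            public static void LoadViaSiteOfOrigin(Image contactImage)
            {
                // "siteoforigin" is resolved against the folder the application was
                // launched from instead of the compiled resources.
                contactImage.Source = new BitmapImage(
                    new Uri("pack://siteoforigin:,,,/Images/contact.png"));
            }

            public static void LoadViaBaseDirectory(Image contactImage)
            {
                // Build an absolute file URI from the application's base directory,
                // which avoids depending on the current working directory.
                var path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, @"Images\contact.png");
                contactImage.Source = new BitmapImage(new Uri(path, UriKind.Absolute));
            }
        }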

    Read the article

  • Help to resolve 'Out of memory' exception when calling DrawImage

    - by Jack Juiceson
    Hi guys, About one percent of our users experience sudden crash while using our application. The logs show below exception, the only thing in common that I've seen so far is that, they all have XP SP3. Thanks in advance Out of memory. at System.Drawing.Graphics.CheckErrorStatus(Int32 status) at System.Drawing.Graphics.DrawImage(Image image, Rectangle destRect, Int32 srcX, Int32 srcY, Int32 srcWidth, Int32 srcHeight, GraphicsUnit srcUnit, ImageAttributes imageAttrs, DrawImageAbort callback, IntPtr callbackData) at System.Drawing.Graphics.DrawImage(Image image, Rectangle destRect, Int32 srcX, Int32 srcY, Int32 srcWidth, Int32 srcHeight, GraphicsUnit srcUnit, ImageAttributes imageAttr, DrawImageAbort callback) at System.Drawing.Graphics.DrawImage(Image image, Rectangle destRect, Int32 srcX, Int32 srcY, Int32 srcWidth, Int32 srcHeight, GraphicsUnit srcUnit, ImageAttributes imageAttr) at System.Windows.Forms.ControlPaint.DrawBackgroundImage(Graphics g, Image backgroundImage, Color backColor, ImageLayout backgroundImageLayout, Rectangle bounds, Rectangle clipRect, Point scrollOffset, RightToLeft rightToLeft) at System.Windows.Forms.Control.PaintBackground(PaintEventArgs e, Rectangle rectangle, Color backColor, Point scrollOffset) at System.Windows.Forms.Control.PaintBackground(PaintEventArgs e, Rectangle rectangle) at System.Windows.Forms.Control.OnPaintBackground(PaintEventArgs pevent) at System.Windows.Forms.ScrollableControl.OnPaintBackground(PaintEventArgs e) at System.Windows.Forms.Control.PaintTransparentBackground(PaintEventArgs e, Rectangle rectangle, Region transparentRegion) at System.Windows.Forms.Control.PaintBackground(PaintEventArgs e, Rectangle rectangle, Color backColor, Point scrollOffset) at System.Windows.Forms.Control.PaintBackground(PaintEventArgs e, Rectangle rectangle) at System.Windows.Forms.Control.OnPaintBackground(PaintEventArgs pevent) at System.Windows.Forms.ScrollableControl.OnPaintBackground(PaintEventArgs e) at System.Windows.Forms.Control.PaintTransparentBackground(PaintEventArgs e, Rectangle rectangle, Region transparentRegion) at System.Windows.Forms.Control.PaintBackground(PaintEventArgs e, Rectangle rectangle, Color backColor, Point scrollOffset) at System.Windows.Forms.Control.PaintBackground(PaintEventArgs e, Rectangle rectangle) at System.Windows.Forms.Control.OnPaintBackground(PaintEventArgs pevent) at System.Windows.Forms.Control.PaintWithErrorHandling(PaintEventArgs e, Int16 layer, Boolean disposeEventArgs) at System.Windows.Forms.Control.WmPaint(Message& m) at System.Windows.Forms.Control.WndProc(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m) at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) Operation System Information ---------------------------- Name = Windows XP Edition = Home Service Pack = Service Pack 3 Version = 5.1.2600.196608 Bits = 32

    Read the article

  • Does GetSystemInfo (on Windows) always return the number of logical processors?

    - by mhughes
    Reading up on this, and specifically reading the Microsoft docs, it looks like it should be returning the number of PHYSICAL processors, and that you should use GetLogicalProcessorInformation to figure out how many LOGICAL processors you have. Here's the doc I found on the SYSTEM_INFO structure: http://msdn.microsoft.com/en-us/library/ms724958(v=VS.85).aspx And here's the doc on GetLogicalProcessorInformation: (spaces added to get through spam filter) http:// msdn.microsoft.com/ en-us/ library/ ms683194.aspx Reading up on it further though, in most of the discussions I've found on this topic, developers say to that GetSystemInfo (and the SYSTEM_INFO structure) report the number of LOGICAL processors. When I search again, I find that MS did release some info on this (and a hot fix), here (spaces added to get through spam filter): http:// support. microsoft.com/ kb/936235 Reading that, it sounds like on Xp, pre-service Pack 3, GetSystemInfo reports the number of LOGICAL processors in the SYSTEM_INFO structure. It also reads to me that on Windows Vista and Windows 7, GetSystemInfo should be reporting the number of PHYSICAL processors (different to Windows XP pre-service Pack 3). Does anyone know what it actually does? Does GetSystemInfo really report the number of physical processors (on the same computer) differently, depending on which OS it's running on?
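
    For what it's worth, the field in question is easy to inspect directly; the sketch below just prints what GetSystemInfo reports on the machine it runs on, and makes no claim about how a given OS or service pack interprets physical versus logical processors (that is exactly what the question is trying to pin down).
        /* Sketch: print dwNumberOfProcessors as reported by GetSystemInfo. */
        #include <stdio.h>
        #include <windows.h>

        int main(void)
        {
            SYSTEM_INFO si;
            GetSystemInfo(&si);
            printf("dwNumberOfProcessors = %lu\n", si.dwNumberOfProcessors);
            return 0;
        }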

    Read the article

  • Problem with setVisible (true)

    - by Jessy
    The two examples shown below are essentially the same. Both are supposed to produce the same result, i.e. generate the coordinates of images displayed on a JPanel. Example 1 works perfectly (it prints the coordinates of the images), however example 2 returns 0 for the coordinates. I was wondering why, because I have put setVisible(true) after adding the panel in both examples. The only difference is that example 1 extends JPanel and example 2 extends JFrame.
    EXAMPLE 1:
        public class Grid extends JPanel {
            public static void main(String[] args) {
                JFrame jf = new JFrame();
                jf.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                final Grid grid = new Grid();
                jf.add(grid);
                jf.pack();
                Component[] components = grid.getComponents();
                for (Component component : components) {
                    System.out.println("Coordinate: " + component.getBounds());
                }
                jf.setVisible(true);
            }
        }
    EXAMPLE 2:
        public class Grid extends JFrame {
            public Grid() {
                setLayout(new GridBagLayout());
                GridBagLayout m = new GridBagLayout();
                Container c = getContentPane();
                c.setLayout(m);
                GridBagConstraints con = new GridBagConstraints();
                //construct the JPanel
                pDraw = new JPanel();
                ...
                m.setConstraints(pDraw, con);
                pDraw.add(new GetCoordinate()); // call new class to generate the coordinate
                c.add(pDraw);
                pack();
                setVisible(true);
            }
            public static void main(String[] args) {
                new Grid();
            }
        }
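
    An assumed illustration (the poster's GetCoordinate class is not shown, so JButtons stand in for it): component bounds are only meaningful after the layout has run, which pack() triggers, and Swing construction is normally done on the event dispatch thread. The sketch queries the child bounds immediately after pack().
        import java.awt.Component;
        import java.awt.GridBagConstraints;
        import java.awt.GridBagLayout;
        import javax.swing.JButton;
        import javax.swing.JFrame;
        import javax.swing.JPanel;
        import javax.swing.SwingUtilities;

        public class GridDemo extends JFrame {

            public GridDemo() {
                setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                JPanel pDraw = new JPanel(new GridBagLayout());
                GridBagConstraints con = new GridBagConstraints();
                pDraw.add(new JButton("a"), con);
                pDraw.add(new JButton("b"), con);
                getContentPane().add(pDraw);

                pack();   // layout happens here, so the bounds below are valid
                for (Component child : pDraw.getComponents()) {
                    System.out.println("Coordinate: " + child.getBounds());
                }
                setVisible(true);
            }

            public static void main(String[] args) {
                SwingUtilities.invokeLater(() -> new GridDemo());
            }
        }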

    Read the article

  • Object-oriented GUI development in python

    - by ptabatt
    Hey guys, new programmer here. I have an assignment for class and I'm stuck... What I need to do is create a GUI that gives someone a basic arithmetic problem in one box, asks the person to answer it, evaluates it, and tells you if you're right or wrong... Basically, what I have is this:
        class Lesson(Frame):
            def __init__(self, parent=None):
                Frame.__init__(self, parent)
                self.pack()
                Lesson.make_widgets(self)
            def make_widgets(self):
                Label(self, text="").pack(side=TOP)
                ent = Entry(self)
                self.a = randrange(1,10)
                self.b = randrange(1,10)
                self.expr = choice(["+","-"])
                ent.insert(END, str(self.a) + str(self.expr) + str(self.a))
    I've broken this down into many little steps and basically, what I'm trying to do right now is insert a default random expression into the first entry widget. When I run this code, I just get a blank Label. Why is that? How can I put something like "7+7" into the box? If you absolutely need background on the problem, it's question #3 at this link: http://reed.cs.depaul.edu/lperkovic/csc242/homeworks/Homework8.html Thanks for all help in advance.
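
    A minimal runnable sketch of just the "show a random expression in an Entry" step: the Entry has to be packed to become visible, and the text is inserted after the widget is created. The names, label text and layout are assumptions for illustration, not the assignment's required structure.
        # Sketch: a Frame that shows a random arithmetic problem in an Entry.
        from random import randrange, choice
        from tkinter import Frame, Label, Entry, END, TOP

        class Lesson(Frame):
            def __init__(self, parent=None):
                Frame.__init__(self, parent)
                self.pack()
                self.make_widgets()

            def make_widgets(self):
                Label(self, text="Solve:").pack(side=TOP)
                self.problem = Entry(self)
                self.problem.pack(side=TOP)     # the Entry is only visible once packed
                a = randrange(1, 10)
                b = randrange(1, 10)
                op = choice(["+", "-"])
                self.problem.insert(END, str(a) + op + str(b))   # e.g. "7+7"
                self.answer = Entry(self)
                self.answer.pack(side=TOP)

        if __name__ == "__main__":
            Lesson().mainloop()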

    Read the article

< Previous Page | 318 319 320 321 322 323 324 325 326 327 328 329  | Next Page >