Search Results

Search found 2396 results on 96 pages for 'rate'.

Page 85/96

  • Poor Customer Service Example

    - by MightyZot
    Lately I have been frustrated by examples of poor customer service. At least one is worth writing about because I don’t think companies realize the effects of their service policies on loyal customers. Bad Customer Service Example #1 Recently, I received an offer in the mail from my cable company, suddenLink. The offer was for an updated TiVo for $12/mo. Normally I ignore offers like this one because I already have the service they’re offering and many times advertisers are offering alternatives to what is already an excellent product offering. I tend to exhibit a high level of loyalty to the products and brands that I use. In this case, we were looking to upgrade our TiVo and this deal is attractive for several reasons: I don’t want to pay a huge amount up-front for the device, so paying a monthly amount for the device is attractive to me. My entertainment is almost all on a single invoice. I’m no longer going to be billed by suddenLink and TiVo. TiVo is still involved, so I am still loyal to the brand I love. I have resisted moving to other DVRs and services for over a decade. I called suddenLink to order the new TiVo and was rewarded with great customer service. In fact, I can’t remember ever getting poor customer service from suddenLink. They are always there to answer my technical support questions and they are very responsive to outages. Then I called TiVo. First of all, I chose the option on the phone system to change or cancel my service, which was consequently met by an inordinate hold time. (I’m calling this time inordinate because I get through very quickly if I want to purchase something.) This is a trend that I’ve noticed with companies – if you want me to be loyal to you, it should be just as easy to cancel your service as it is to purchase it. Because, I should never be cancelling because I am unhappy. And, if you ever want my business again, or more importantly a reference, then you’d better make the exit door open just as easy as the enter door. After quite some time on hold, I talked to “Victor” who was very courteous. Victor canceled my service and then told me that I could keep my current TiVo and transfer recorded programs to it from the new TiVo.  Cool I said, but what about the cost?  He said there was no extra cost.  This was also attractive to me because I paid for my TiVo and it would be good to use it for something at least.  That was four months ago. This month I noticed that TiVo was still charging me for my original service. I was a little upset, but I decided to give them the benefit of the doubt. After all, I am a loyal TiVo customer and I have resisted moving to other solutions for over a decade. I’m sure they will do whatever it takes to keep my business, through TiVo or through suddenLink. After quite some time on hold, I was able to talk to a customer service representative, “Les”. I explained that I am a loyal TiVo customer, but I purchased this deal through my cable provider. I’m still with TiVo, I just wanted a single bill and to take advantage of the pay-over-time option. “Les” told me that he was very sorry to hear that I’m leaving TiVo, to which I responded again that I wasn’t leaving TiVo, I just want one invoice, and to take advantage of the pay-over-time. So, after explaining that I requested a termination of the non-suddenLink account (TiVo can see both of course), I was put on hold again for quite some time while my refund was “approved”.  “Les” said that he could see my cancellation request back in July. 
Note that it is now November, so they have billed me inappropriately four times. After quite some time, he came back on the line and told me that he was able to “get me most of my money back.” He got approval to refund 90 days. Even though I requested cancellation of one of my accounts, TiVo has that cancellation request on file and they admit overbilling me, I am going to get “most” of my money back. To top this experience off, when we were ready to hang up, “Les” told me that he was sorry to see me go and that he hoped I would come back to TiVo again. Again, I explained to “Les” that I have not left TiVo. I am just paying them through suddenLink. At that point, he went into a small dissertation about how this is a special arrangement they have with suddenLink and very few others. He made me feel like I was doing something wrong. Why should I feel that way? TiVo made the deal with suddenLink, not me, and the deal seemed like a good compromise for me to be able to get what I need. Here is what TiVo Customer Service accomplished on those two calls – I no longer feel like I need to be loyal to the TiVo brand or service. If I had been treated better on these two calls, I would still be recommending TiVo to my friends. They would still be getting revenue from a loyal customer, who paid the same rate for over a decade, and this article wouldn’t be here for you to read. Interesting… In my opinion, if you want brand loyalty, be loyal to your customers!

    Read the article

  • Why does calling CreateDXGIFactory prevent my program from exiting?

    - by smoth190
    I'm using CreateDXGIFactory to get the graphics adapters and display modes. When I call it, it works fine and I get all the data. However, when I exit my program, the main Win32 thread exits, but something stays open because it keeps debugging. Does CreateDXGIFactory create an extra thread and I'm not closing it? I don't understand. The only thing I would suspect is that in the documentation it says it doesn't work if it's called from DllMain. It is in a DLL, but it's not called from DllMain. And it doesn't fail, either. I'm using DirectX 11. Here is the function that initializes DirectX. I haven't gotten past retrieving the refresh rate because of this problem. I commented everything out to pinpoint the problem. bool CGraphicsManager::InitDirectX(HWND hWnd, int width, int height) { HRESULT result; IDXGIFactory* factory; IDXGIOutput* output; IDXGIAdapter* adapter; DXGI_MODE_DESC* displayModes; DXGI_ADAPTER_DESC adapterDesc; unsigned int modeCount = 0; unsigned int refreshNum = 0; unsigned int refreshDen = 0; //First, we need to get the monitors refresh rater result = CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory); //if(FAILED(result)) //{ //MemoryUtil::MessageBoxError(TEXT("InitDirectX"), 0, 0, TEXT("Failed to create DXGI factory\nError:\n%s"), DXGetErrorDescription(result)); //return false; //} /*//Create a graphics card adapter result = factory->EnumAdapters(0, &adapter); if(FAILED(result)) { MemoryUtil::MessageBoxError(TEXT("InitDirectX"), 0, 0, TEXT("Failed to get graphics adapters\nError:\n%s"), DXGetErrorDescription(result)); return false; } //Get the output result = adapter->EnumOutputs(0, &output); if(FAILED(result)) { MemoryUtil::MessageBoxError(TEXT("InitDirectX"), 0, 0, TEXT("Failed to get adapter output\nError:\n%s"), DXGetErrorDescription(result)); return false; } //Get the modes result = output->GetDisplayModeList(DXGI_FORMAT_R8G8B8A8_UNORM, DXGI_ENUM_MODES_INTERLACED, &modeCount, 0); if(FAILED(result)) { MemoryUtil::MessageBoxError(TEXT("InitDirectX"), 0, 0, TEXT("Failed to get mode count\nError:\n%s"), DXGetErrorDescription(result)); return false; } displayModes = new DXGI_MODE_DESC[modeCount]; result = output->GetDisplayModeList(DXGI_FORMAT_R8G8B8A8_UNORM, DXGI_ENUM_MODES_INTERLACED, &modeCount, displayModes); if(FAILED(result)) { MemoryUtil::MessageBoxError(TEXT("InitDirectX"), 0, 0, TEXT("Failed to get display modes\nError:\n%s"), DXGetErrorDescription(result)); return false; } //Now we need to find one for our screen size for(unsigned int i = 0; i < modeCount; i++) { if(displayModes[i].Width == (unsigned int)width) { if(displayModes[i].Height == (unsigned int)height) { refreshNum = displayModes[i].RefreshRate.Numerator; refreshDen = displayModes[i].RefreshRate.Denominator; break; } } } //Store the video card data result = adapter->GetDesc(&adapterDesc); if(FAILED(result)) { MemoryUtil::MessageBoxError(TEXT("InitDirectX"), 0, 0, TEXT("Failed to get adapter description\nError:\n%s"), DXGetErrorDescription(result)); return false; } m_videoCard = new CVideoCard(); MemoryUtil::CreateGameObject(m_videoCard); m_videoCard->VideoCardMemory = (unsigned int)(adapterDesc.DedicatedVideoMemory); wcstombs_s(0, m_videoCard->VideoCardDescription, 128, adapterDesc.Description, 128);*/ //ReleaseCOM(output); //ReleaseCOM(adapter); ReleaseCOM(factory); //DeletePointerArray(displayModes); return true; } Also, I don't know if this means anything, but this is some of the output log when the function is commented out: //... 
'LostRock.exe': Loaded 'C:\Windows\SysWOW64\msvcr100d.dll', Symbols loaded. 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\imm32.dll', Cannot find or open the PDB file 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\msctf.dll', Cannot find or open the PDB file 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\uxtheme.dll', Cannot find or open the PDB file 'LostRock.exe': Loaded 'C:\Program Files (x86)\Common Files\microsoft shared\ink\tiptsf.dll', Cannot find or open the PDB file 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\ole32.dll', Cannot find or open the PDB file 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\oleaut32.dll', Cannot find or open the PDB file 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\clbcatq.dll', Cannot find or open the PDB file 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\oleacc.dll', Cannot find or open the PDB file The program '[6560] LostRock.exe: Native' has exited with code 0 (0x0). And when it isn't commented out... //... 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\cfgmgr32.dll', Cannot find or open the PDB file 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\devobj.dll', Cannot find or open the PDB file 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\wintrust.dll', Cannot find or open the PDB file 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\crypt32.dll', Cannot find or open the PDB file 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\msasn1.dll', Cannot find or open the PDB file 'LostRock.exe': Unloaded 'C:\Windows\SysWOW64\setupapi.dll' 'LostRock.exe': Unloaded 'C:\Windows\SysWOW64\devobj.dll' 'LostRock.exe': Unloaded 'C:\Windows\SysWOW64\cfgmgr32.dll' 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\clbcatq.dll', Cannot find or open the PDB file 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\oleacc.dll', Cannot find or open the PDB file The thread 'Win32 Thread' (0xb94) has exited with code 0 (0x0). The program '[8096] LostRock.exe: Native' has exited with code 0 (0x0). //This is called when I click "Stop Debugging" P.S. I know it is CreateDXGIFactory because if I comment it out, the program exits correctly.
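
    For comparison, here is a minimal sketch of the same enumeration written with Microsoft::WRL::ComPtr instead of the MemoryUtil/ReleaseCOM helpers above (an assumption, purely for illustration). Every DXGI interface is released automatically on each early return; a leaked adapter or output is one thing worth ruling out when a DXGI call appears to keep something alive after the main thread returns.

        #include <windows.h>
        #include <dxgi.h>
        #include <wrl/client.h>
        #include <vector>
        #pragma comment(lib, "dxgi.lib")

        using Microsoft::WRL::ComPtr;

        // Find the refresh rate for the requested mode; returns false on any failure.
        bool GetRefreshRate(unsigned int width, unsigned int height,
                            unsigned int& refreshNum, unsigned int& refreshDen)
        {
            ComPtr<IDXGIFactory> factory;
            if (FAILED(CreateDXGIFactory(IID_PPV_ARGS(&factory))))
                return false;

            ComPtr<IDXGIAdapter> adapter;
            if (FAILED(factory->EnumAdapters(0, &adapter)))
                return false;                       // factory released automatically

            ComPtr<IDXGIOutput> output;
            if (FAILED(adapter->EnumOutputs(0, &output)))
                return false;

            UINT modeCount = 0;
            if (FAILED(output->GetDisplayModeList(DXGI_FORMAT_R8G8B8A8_UNORM,
                    DXGI_ENUM_MODES_INTERLACED, &modeCount, nullptr)))
                return false;

            std::vector<DXGI_MODE_DESC> modes(modeCount);
            if (FAILED(output->GetDisplayModeList(DXGI_FORMAT_R8G8B8A8_UNORM,
                    DXGI_ENUM_MODES_INTERLACED, &modeCount, modes.data())))
                return false;

            for (const DXGI_MODE_DESC& mode : modes)
            {
                if (mode.Width == width && mode.Height == height)
                {
                    refreshNum = mode.RefreshRate.Numerator;
                    refreshDen = mode.RefreshRate.Denominator;
                    return true;
                }
            }
            return false;                           // no matching display mode found
        }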

    Read the article

  • Atheros 922 PCI WIFI is disabled in Unity but enabled in terminal - How to get it to work?

    - by zewone
    I am trying to get my PCI Wireless Atheros 922 card to work. It is disabled in Unity: both the network utility and the desktop (see screenshot http://www.amisdurailhalanzy.be/Screenshot%20from%202012-10-25%2013:19:54.png) I tried many different advises on many different forums. Installed 12.10 instead of 12.04, enabled all interfaces... etc. I have read about the aht9 driver... The terminal shows no hw or sw lock for the Atheros card, nevertheless, it is still disabled. Nothing worked so far, the card is still disabled. Any help is much appreciated. Here are more tech details: myuser@adri1:~$ sudo lshw -C network *-network:0 DISABLED description: Wireless interface product: AR922X Wireless Network Adapter vendor: Atheros Communications Inc. physical id: 2 bus info: pci@0000:03:02.0 logical name: wlan1 version: 01 serial: 00:18:e7:cd:68:b1 width: 32 bits clock: 66MHz capabilities: pm bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=ath9k driverversion=3.5.0-17-generic firmware=N/A latency=168 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:18 memory:d8000000-d800ffff *-network:1 description: Ethernet interface product: VT6105/VT6106S [Rhine-III] vendor: VIA Technologies, Inc. physical id: 6 bus info: pci@0000:03:06.0 logical name: eth0 version: 8b serial: 00:11:09:a3:76:4a size: 10Mbit/s capacity: 100Mbit/s width: 32 bits clock: 33MHz capabilities: pm bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=via-rhine driverversion=1.5.0 duplex=half latency=32 link=no maxlatency=8 mingnt=3 multicast=yes port=MII speed=10Mbit/s resources: irq:18 ioport:d300(size=256) memory:d8013000-d80130ff *-network DISABLED description: Wireless interface physical id: 1 bus info: usb@1:8.1 logical name: wlan0 serial: 00:11:09:51:75:36 capabilities: ethernet physical wireless configuration: broadcast=yes driver=rt2500usb driverversion=3.5.0-17-generic firmware=N/A link=no multicast=yes wireless=IEEE 802.11bg myuser@adri1:~$ sudo rfkill list all 0: hci0: Bluetooth Soft blocked: no Hard blocked: no 1: phy1: Wireless LAN Soft blocked: no Hard blocked: yes 2: phy0: Wireless LAN Soft blocked: no Hard blocked: no myuser@adri1:~$ dmesg | grep wlan0 [ 15.114235] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready myuser@adri1:~$ dmesg | egrep 'ath|firm' [ 14.617562] ath: EEPROM regdomain: 0x30 [ 14.617568] ath: EEPROM indicates we should expect a direct regpair map [ 14.617572] ath: Country alpha2 being used: AM [ 14.617575] ath: Regpair used: 0x30 [ 14.637778] ieee80211 phy0: >Selected rate control algorithm 'ath9k_rate_control' [ 14.639410] Registered led device: ath9k-phy0 myuser@adri1:~$ dmesg | grep wlan1 [ 15.119922] IPv6: ADDRCONF(NETDEV_UP): wlan1: link is not ready myuser@adri1:~$ lspci -nn | grep 'Atheros' 03:02.0 Network controller [0280]: Atheros Communications Inc. 
AR922X Wireless Network Adapter [168c:0029] (rev 01) myuser@adri1:~$ sudo ifconfig eth0 Link encap:Ethernet HWaddr 00:11:09:a3:76:4a inet addr:192.168.2.2 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::211:9ff:fea3:764a/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:5457 errors:0 dropped:0 overruns:0 frame:0 TX packets:2548 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3425684 (3.4 MB) TX bytes:282192 (282.1 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:590 errors:0 dropped:0 overruns:0 frame:0 TX packets:590 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:53729 (53.7 KB) TX bytes:53729 (53.7 KB) myuser@adri1:~$ sudo iwconfig wlan0 IEEE 802.11bg ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=off Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:on lo no wireless extensions. eth0 no wireless extensions. wlan1 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=0 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off myuser@adri1:~$ lsmod | grep "ath9k" ath9k 116549 0 mac80211 461161 3 rt2x00usb,rt2x00lib,ath9k ath9k_common 13783 1 ath9k ath9k_hw 376155 2 ath9k,ath9k_common ath 19187 3 ath9k,ath9k_common,ath9k_hw cfg80211 175375 4 rt2x00lib,ath9k,mac80211,ath myuser@adri1:~$ iwlist scan wlan0 Failed to read scan data : Network is down lo Interface doesn't support scanning. eth0 Interface doesn't support scanning. wlan1 Failed to read scan data : Network is down myuser@adri1:~$ lsb_release -d Description: Ubuntu 12.10 myuser@adri1:~$ uname -mr 3.5.0-17-generic i686 ![Schizophrenic Ubuntu](http://www.amisdurailhalanzy.be/Screenshot%20from%202012-10-25%2013:19:54.png) Any help much appreciated... Thanks, Philippe 31-10-2012 ... I have some more updates. When I do the following command it does see my Wifi router... So even if it is still disabled... the card seems to work and see the router (ESSID:"5791BC26-CE9C-11D1-97BF-0000F81E") See below: sudo iwlist wlan1 scanning wlan1 Scan completed : Cell 01 - Address: 00:19:70:8F:B0:EA Channel:10 Frequency:2.457 GHz (Channel 10) Quality=51/70 Signal level=-59 dBm Encryption key:on ESSID:"5791BC26-CE9C-11D1-97BF-0000F81E" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s 9 Mb/s; 12 Mb/s; 18 Mb/s Bit Rates:24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s Mode:Master Extra:tsf=000000025dbf2188 Extra: Last beacon: 108ms ago IE: Unknown: 002035373931424332362D434539432D313144312D393742462D3030303046383145 IE: Unknown: 010882848B960C121824 IE: Unknown: 03010A IE: Unknown: 0706424520010D14 IE: IEEE 802.11i/WPA2 Version 1 Group Cipher : TKIP Pairwise Ciphers (2) : CCMP TKIP Authentication Suites (1) : PSK IE: Unknown: 2A0100 IE: Unknown: 32043048606C IE: Unknown: DD180050F2020101030003A4000027A4000042435E0062322F00 IE: Unknown: DD0900037F01010000FF7F IE: Unknown: DD0A00037F04010000000000
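
    A short diagnostic sequence that may help map which rfkill entry belongs to which interface, sketched with the stock rfkill/ip/iwlist tools. Note that nothing here can clear a hard block; "Hard blocked: yes" comes from the laptop's wireless switch or a BIOS setting, not from software.

        # Map each wlan interface to its phy so the rfkill list can be matched up
        for dev in /sys/class/net/wlan*; do
            echo "$(basename "$dev") -> $(cat "$dev"/phy80211/name)"
        done

        # Clear any soft blocks; a hard block needs the hardware switch or BIOS
        sudo rfkill unblock all
        sudo rfkill list

        # Bring the Atheros interface up and try scanning again
        sudo ip link set wlan1 up
        sudo iwlist wlan1 scanning | head -n 20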

    Read the article

  • Data management in unexpected places

    - by Ashok_Ora
    When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput with low latencies and high availability. Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example, a destination, associated SLAs etc. For each packet, the router has to determine the address of the next “hop” to the destination; it has to determine how to prioritize this packet. If it’s a high priority packet, then it has to be sent on its way before lower priority packets. As a consequence of prioritizing high priority packets, lower priority data packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone’s sure to ask). You have to do all this (and more) while preserving high availability i.e. if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won’t be happy with a “choppy” VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator – the router is most likely to be in a remote location – it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen. How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, kind of packet (e.g. voice vs. data), SLAs associated with the “owner” of the packet etc. It looks up the internal database of “rules” of how to process this packet and handles the packet accordingly. The software might choose to hold on to the packet safely for some period of time, if it’s a low priority packet. Ah – this sounds very much like a database problem. For each packet, you have to minimally · Look up the most efficient next “hop” towards the destination. The “most efficient” next hop can change, depending on latency, availability etc. · Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data ftp) · Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet since a network packet is a small “slice” of a session. The context for the “header” packet needs to be stored in the router, in order to make this work. · If the priority of the packet is low, then “store” the packet temporarily in the router until it is time to forward the packet to the next hop. · Update various statistics about the packet. In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the “send” in a single transaction so that the forwarding address doesn’t change while you’re sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high performance, highly available embeddable database, designed for exactly these kinds of usage scenarios.
Berkeley DB is a robust, reliable, proven solution that is currently being used in these scenarios. First and foremost, Berkeley DB (or BDB for short) is very, very fast. It can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database, or as a disk-persistent database. BDB provides high availability – if one board in the router fails, the system can automatically fail over to another board – no manual intervention required. BDB is self-administering – there’s no need for manual intervention in order to maintain a BDB application. No need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB has been used in over 200 million deployments worldwide over the past two decades for mission-critical applications such as the one described here. You have a choice of spending valuable resources to implement similar functionality, or you could simply embed BDB in your application and off you go! I know what I’d do – choose BDB, so I can focus on my business problem. What will you do?
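
    To make the "single transaction" point concrete, here is a minimal sketch against the public Berkeley DB C API of the kind of lookup described above. The database handle, key layout and helper name are invented for illustration.

        #include <db.h>      /* Berkeley DB public C API */
        #include <string.h>

        /* Look up the next hop for a destination inside one transaction.
         * The "routes" database and its key/value layout are illustrative only. */
        int lookup_next_hop(DB_ENV *env, DB *routes, const char *dest,
                            char *next_hop, size_t next_hop_len)
        {
            DB_TXN *txn = NULL;
            DBT key, data;
            int ret;

            if ((ret = env->txn_begin(env, NULL, &txn, 0)) != 0)
                return ret;

            memset(&key, 0, sizeof(key));
            memset(&data, 0, sizeof(data));
            key.data = (void *)dest;
            key.size = (u_int32_t)strlen(dest) + 1;
            data.data = next_hop;
            data.ulen = (u_int32_t)next_hop_len;
            data.flags = DB_DBT_USERMEM;          /* copy the result into our buffer */

            ret = routes->get(routes, txn, &key, &data, 0);
            if (ret == 0)
                ret = txn->commit(txn, 0);
            else
                txn->abort(txn);

            return ret;   /* 0 on success, DB_NOTFOUND if no route, else an error code */
        }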

    Read the article

  • Why Executives Need Enterprise Project Portfolio Management: 3 Key Considerations to Drive Value Across the Organization

    - by Melissa Centurio Lopes
    By: Guy Barlow, Oracle Primavera Industry Strategy Director. Over the last few years there has been a tremendous shift – some would say tectonic in nature – that has brought project management to the forefront of executive attention. Many factors have been driving this growing awareness, most notably, the global financial crisis, heightened regulatory environments and a need to more effectively operationalize corporate strategy. Executives in India are no exception. In fact, given the phenomenal rate of progress of the country, top of mind for all executives (whether in finance, operations, IT, etc.) is the need to build capacity, ramp up production and ensure that the right resources are in place to capture growth opportunities. This applies across all industries, from asset-intensive – like oil & gas, utilities and mining – to traditional manufacturing and the public sector, while services-based sectors such as financial, telecom and life sciences are also part of the mix. However, compounding matters is a complex interplay between projects – big and small, complex and simple – as companies expand and grow both domestically and internationally. So, having a standardized, enterprise-wide solution for project portfolio management is natural. Failing to do so is akin to having two ERP systems, one to manage “large” invoices and one to manage “small” invoices. It makes no sense and provides no enterprise-wide visibility. Therefore, it is imperative for executives to understand the full range of their business commitments, the benefit to the company, current performance and associated course corrections if needed. Irrespective of industry and regardless of the use case (e.g., building a power plant, launching a new financial service or developing a new automobile), company leaders need to approach the value of enterprise project portfolio management via 3 critical areas: 1. Greater Financial Discipline – Improving financial rigor and results through better governance and control is an imperative given today’s financial uncertainty and greater investment scrutiny. For example, as India plans a US$1 trillion investment in the country’s infrastructure, how do companies ensure costs are managed? How do you control cash flow? Can you easily report this to stakeholders? 2. Improved Operational Excellence – Increase efficiency and reduce costs through robust collaboration and integration. Upwards of 66% of cost variances are driven by poor supplier collaboration. As you execute initiatives, do you have visibility into the performance of your supply base? How are they integrated into the broader program plan? 3.
Enhanced Risk Mitigation – Manage and react to uncertainty through improved transparency and contingency planning. What happens if you’re faced with a skills shortage? How do you plan and account for geo-political or weather related events? In summary, projects are not just the delivery of a product or service to a customer inside a predetermined schedule; they often form a contractual and even moral obligation to shareholders and stakeholders alike. Hence the intimate connection between executives and projects, with the latter providing executives with the platform to demonstrate that their organization has the capabilities and competencies needed to meet and, whenever possible, exceed their customer commitments. Effectively developing and operationalizing corporate strategy is the hallmark of successful executives and enterprise project and portfolio management allows them to achieve this goal. Article was first published for Manage India, an e-newsletter, PMI India.

    Read the article

  • Identity Globe Trotters (Sep Edition): The Social Customer

    - by Tanu Sood
    Welcome to the inaugural edition of our monthly series - Identity Globe Trotters. Starting today, the last Friday of every month, we will explore regional commentary on Identity Management. We will invite guest contributors from around the world to share their opinions and experiences around Identity Management and highlight regional nuances, specific drivers, solutions and more. Today's feature is contributed by Michael Krebs, Head of Business Development at esentri consulting GmbH, a (SOA) specialized Oracle Gold Partner based in Ettlingen, Germany. In his current role, Krebs is dealing with the latest developments in Enterprise Social Networking and the Integration of Social Media within business processes.  By Michael Krebs The relevance of "easy sign-on" in the age of the "Social Customer" With the growth of Social Networks, the time people spend within those closed "eco-systems" is growing year by year. With social networks looking to integrate search engines, like Facebook announced some weeks ago, their relevance will continue to grow in contrast to the more conventional search engines. This is one of the reasons why social network accounts of the users are getting more and more like a virtual fingerprint. With the growing relevance of social networks the importance of a simple way for customers to get in touch with say, customer care or contract departments, will be crucial for sales processes in critical markets. Customers want to have one single point of contact and also an easy "login-method" with no dedicated usernames, passwords or proprietary accounts. The golden rule in the future social media driven markets will be: The lower the complexity of the initial contact, the better a company can profit from social networks. If you, for example, can generate a smart way of how an existing customer can use self-service portals, the cost in providing phone support can be lowered significantly. Recruiting and Hiring of "Digital Natives" Another particular example is "social" recruiting processes. The so called "digital natives" don´t want to type in their profile facts and CV´s in proprietary systems. Why not use the actual LinkedIn profile? In German speaking region, the market in the area of professional social networks is dominated by XING, the equivalent to LinkedIn. A few weeks back, this network also opened up their interfaces for integrating social sign-ons or the usage of profile data for recruiting-purposes. In the European (and especially the German) employment market, where the number of young candidates is shrinking because of the low birth rate in the region, it will become essential to use social-media supported hiring processes to find and on-board the rare talents. In fact, you will see traditional recruiting websites integrated with social hiring to attract the best talents in the market, where the pool of potential candidates has decreased dramatically over the years. Identity Management as a key factor in the Customer Experience process To create the biggest value for customers and also future employees, companies need to connect their HCM or CRM-systems with powerful Identity management solutions. With the highly efficient Oracle (social & mobile enabling) Identity Management solution, enterprises can combine easy sign on with secure connections to the backend infrastructure. This combination enables a "one-stop" service with personalized content for customers and talents. In addition, companies can collect valuable data for the enrichment of their CRM-data. 
The goal is to enrich the so called "Customer Experience" via all available customer channels and contact points. Those systems have already gained importance in the B2C-markets and will gradually spread out to B2B-channels in the near future. Conclusion: Central and "Social" Identity management is key to Customer Experience Management and Talent Management For a seamless delivery of "Customer Experience Management" and a modern way of recruiting the best talent, companies need to integrate Social Sign-on capabilities with modern CX - and Talent management infrastructure. This lowers the barrier for existing and future customers or employees to get in touch with sales, support or human resources. Identity management is the technology enabler and backbone for a modern Customer Experience Infrastructure. Oracle Identity management solutions provide the opportunity to secure Social Applications and connect them with modern CX-solutions. At the end, companies benefit from "best of breed" processes and solutions for enriching customer experience without compromising security. About esentri: esentri is a provider of enterprise social networking and brings the benefits of social network communication into business environments. As one key strength, esentri uses Oracle Identity Management solutions for delivering Social and Mobile access for Oracle’s CRM- and HCM-solutions. …..End Guest Post…. With new and enhanced features optimized to secure the new digital experience, the recently announced Oracle Identity Management 11g Release 2 enables organizations to securely embrace cloud, mobile and social infrastructures and reach new user communities to help further expand and develop their businesses. Additional Resources: Oracle Identity Management 11gR2 release Oracle Identity Management website Datasheet: Mobile and Social Access (pdf) IDM at OOW: Focus on Identity Management Facebook: OracleIDM Twitter: OracleIDM We look forward to your feedback on this post and welcome your suggestions for topics to cover in Identity Globe Trotters. Last Friday, every month!

    Read the article

  • Adjusting server-side tickrate dynamically

    - by Stuart Blackler
    I know nothing of game development/this site, so I apologise if this is completely foobar. Today I experimented with building a small game loop for a network game (think MW3, CSGO etc). I was wondering why they do not build in automatic rate adjustment based on server performance? Would it affect the client that much if the client knew this frame is based on this tickrate? Has anyone attempted this before? Here is what my noobish C++ brain came up with earlier. It will improve the tickrate if it has been stable for x ticks. If it "lags", the tickrate will be reduced down by y amount: // GameEngine.cpp : Defines the entry point for the console application. // #ifdef WIN32 #include <Windows.h> #else #include <sys/time.h> #include <ctime> #endif #include<iostream> #include <dos.h> #include "stdafx.h" using namespace std; UINT64 GetTimeInMs() { #ifdef WIN32 /* Windows */ FILETIME ft; LARGE_INTEGER li; /* Get the amount of 100 nano seconds intervals elapsed since January 1, 1601 (UTC) and copy it * to a LARGE_INTEGER structure. */ GetSystemTimeAsFileTime(&ft); li.LowPart = ft.dwLowDateTime; li.HighPart = ft.dwHighDateTime; UINT64 ret = li.QuadPart; ret -= 116444736000000000LL; /* Convert from file time to UNIX epoch time. */ ret /= 10000; /* From 100 nano seconds (10^-7) to 1 millisecond (10^-3) intervals */ return ret; #else /* Linux */ struct timeval tv; gettimeofday(&tv, NULL); uint64 ret = tv.tv_usec; /* Convert from micro seconds (10^-6) to milliseconds (10^-3) */ ret /= 1000; /* Adds the seconds (10^0) after converting them to milliseconds (10^-3) */ ret += (tv.tv_sec * 1000); return ret; #endif } int _tmain(int argc, _TCHAR* argv[]) { int sv_tickrate_max = 1000; // The maximum amount of ticks per second int sv_tickrate_min = 100; // The minimum amount of ticks per second int sv_tickrate_adjust = 10; // How much to de/increment the tickrate by int sv_tickrate_stable_before_increment = 1000; // How many stable ticks before we increase the tickrate again int sys_tickrate_current = sv_tickrate_max; // Always start at the highest possible tickrate for the best performance int counter_stable_ticks = 0; // How many ticks we have not lagged for UINT64 __startTime = GetTimeInMs(); int ticks = 100000; while(ticks > 0) { int maxTimeInMs = 1000 / sys_tickrate_current; UINT64 _startTime = GetTimeInMs(); // Long code here... cout << "."; UINT64 _timeTaken = GetTimeInMs() - _startTime; if(_timeTaken < maxTimeInMs) { Sleep(maxTimeInMs - _timeTaken); counter_stable_ticks++; if(counter_stable_ticks >= sv_tickrate_stable_before_increment) { // reset the stable # ticks counter counter_stable_ticks = 0; // make sure that we don't go over the maximum tickrate if(sys_tickrate_current + sv_tickrate_adjust <= sv_tickrate_max) { sys_tickrate_current += sv_tickrate_adjust; // let me know in console #DEBUG cout << endl << "Improving tickrate. New tickrate: " << sys_tickrate_current << endl; } } } else if(_timeTaken > maxTimeInMs) { cout << endl; if((sys_tickrate_current - sv_tickrate_adjust) > sv_tickrate_min) { sys_tickrate_current -= sv_tickrate_adjust; } else { if(sys_tickrate_current == sv_tickrate_min) { cout << "Please reduce sv_tickrate_min..." << endl; } else{ sys_tickrate_current = sv_tickrate_min; } } // let me know in console #DEBUG cout << "The server has lag. 
Reduced tickrate to: " << sys_tickrate_current << endl; } ticks--; } UINT64 __timeTaken = GetTimeInMs() - __startTime; cout << endl << endl << "Total time in ms: " << __timeTaken; cout << endl << "Ending tickrate: " << sys_tickrate_current; char test; cin >> test; return 0; }
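
    If C++11 is available, a monotonic clock removes both the Windows/Linux #ifdef and the <dos.h> include. A sketch of drop-in replacements for the two timing helpers used above (names kept the same purely for illustration):

        #include <chrono>
        #include <thread>
        #include <cstdint>

        // Milliseconds from a monotonic clock; unaffected by system clock changes.
        static std::uint64_t GetTimeInMs()
        {
            using namespace std::chrono;
            return duration_cast<milliseconds>(
                steady_clock::now().time_since_epoch()).count();
        }

        // Portable replacement for the Win32 Sleep() call inside the tick loop.
        static void SleepMs(std::uint64_t ms)
        {
            std::this_thread::sleep_for(std::chrono::milliseconds(ms));
        }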

    Read the article

  • Waiting for Windows 8: A Long, Hot Summer

    - by andrewbrust
    Microsoft has revealed some things about Windows 8, and revealed a part of the developer story for new Windows 8 “tailored,” “immersive” applications. In retrospect, very little was shared. The bit that was revealed to us is that those applications can be developed using a combination of HTML 5 and JavaScript. Not much else was said, except that additional details would be revealed at Microsoft’s //Build/ conference in Anaheim, California in September. This has left a lot of people in suspense, and it seems that suspended state is going to last all summer. The problem, of course, is that in the absence of hard information, people fill the void with Speculation, Rumor and Gloom. That’s a bit like Fear, Uncertainty and Doubt, except that it’s self-imposed by the Microsoft community and not planted by Microsoft’s competitors. This is a less-than-perfect situation. Not only is it causing developers to worry about the value of their skill sets, but I am already hearing from consulting shops that customers are getting nervous too and, in extreme cases, opting for non-Microsoft tools for their projects as a result. I’m also hearing from dev tool ISVs that sales have suffered as a result. It’s quite possible that the customers moving off .NET wanted to do so anyway, and it’s also possible that dev tool ISVs are suffering slower sales this year due to a slowed rate of economic recovery. Without hard information, people tend to interpret things negatively. Actually, that’s the major point in all of this. While there is a multitude of opinions about what the Windows 8 development platform will look like once fully revealed, there is an emerging consensus around one thing: it sure would help if Microsoft revealed more of its strategy…just enough to quash absurd rumors, stabilize the .NET ecosystem and get people to stay calm. We’ve had some reassurances thus far: there will be a Windows desktop mode; we’ll still have Windows Explorer, we’ll still run Office, we’ll still have a task bar, and all the skills and tools we use now will still work there. But with reassurances like that…people still feel insecure. Because telling us that Windows 8 will have what is essentially a “classic” mode sure makes it sound like today’s skill sets will soon be “classic” too…and then maybe they’ll just become obsolete. Humans find change scary; it’s natural. And when left alone with their fears – because no one is saying anything to dispel them – people can go from frightened to paranoid, and can start viewing things in a downright conspiratorial light. It would be great if Microsoft stepped into the void now and told us what is coming – especially because whatever they tell us is bound to be at least a little better than what people think they are going to hear. I don’t know what the announcements will be, but I do have it on authority, from a number of sources, that Microsoft isn’t going to talk until //Build/. That means no news until September 13th. Nothing until after Labor Day. You get zippo until after the Back-to-School sales are done. What to do? Try not to let the dark voices of gloom and doom fill your head. Even in the absence of answers, we still have some important facts: The .NET developer community is huge. Microsoft’s customers have major investments in .NET, and in .NET skills. Political infighting in Redmond might make for irrational decisions, but ultimately public companies can’t just alienate their advocates and piss off their customers.
Spite doesn’t trump fiduciary responsibility. The computing device markets are changing, software is changing, software business models are changing and developers are changing.  Microsoft has to keep up. The HTML + JavaScript community is huge too, and it includes many of the “changed” developers. Public companies can’t ignore new markets nor the popular standards that can help them enter those new markets.  Loyalty doesn’t trump fiduciary responsibility either. If Microsoft can appeal to new developers, then it should. If Microsoft can keep catering to its existing developers and customers -- not just through legacy support, but also through empowering futures -- then it probably will. You don’t have to shove your old friends out into the rain to make room for new ones; you can bring those new constituents in under a bigger tent.  I hope Microsoft will enlarge the tent, and I have trouble imagining why it would not.

    Read the article

  • Answers to Conference Revenue Tweet Questions

    - by D'Arcy Lussier
    Originally posted on: http://geekswithblogs.net/dlussier/archive/2014/05/27/156612.aspxI tweeted this the other day… …and I had some people tweet back questioning/asking about the profit number. So here’s how I came to that figure. Total Revenue Let’s talk total revenue first. This conference has a huge list of companies/organizations paying some amount for sponsorship. Platinum ($1500) x 5 = $7500 Gold ($1000) x 3 = $3000 Silver ($500) x 9 = $4500 Bronze ($250) x 13 = $3250 There’s also a title sponsor level but there’s no mention of how much that is…more than $1500 though, so let’s just say $2500. Total Sponsorship Revenue: $20750.00 For registrations, this conference is claiming over 300 attendees. We’ll just calculate at 300 and the discounted “member rate” – $249. Total Registration Revenue: $74700.00 Booth space is also sold for a vendor area, but let’s just leave that out of the calculation. Total Event Revenue: $95450.00 Now that we know how much money we’re playing with, let’s knock out the costs for the event. Total Costs Hard Costs Audio/Visual Services $2000 Conference Rooms (4 Breakouts + Plenary) $2500 Insurance $700 Printing/Signage $1500 Travel/Hotel Rooms $2000 Keynotes $2000 So let’s talk about these hard costs first. First you may be asking about the Audio Visual. Yes those services can be that high, actually higher. But since there’s an A/V company touted as the official A/V provider, I gotta think there’s some discount for being branded as such. Conference rooms are actually an inflated amount of $500 per. Venues make money on the food they sell at events, not on room rentals. The more food, the cheaper the rooms tend to be offered at. Still, for the sake of argument, let’s set the rooms at $500 each knowing that they could be lower. For travel and hotel rooms…it appears that most of the speakers at this conference are local, meaning there’s no travel or hotel cost. But a few of them I wasn’t too sure…so let’s factor in enough to cover two outside speakers (airfare and hotel). There are two keynotes for this event and depending on the event those may be paid gigs. I’m not sure if they are or not, but considering the closing one is a comedian I’m going to add some funds here for that just in case. Total Hard Costs: $10700 Now that the hard costs are out of the way, let’s talk about the food costs. Food Costs The conference is providing a continental breakfast (YEEEESH!), some level of luncheon, and I have to assume coffee breaks in between. Let’s look at those costs. Continental Breakfast $12 per person Lunch Buffet $18 per person Coffee Breaks (2) $6 per person (or $3 a cup) Snacks (2) $10 per person (or $5 each) Note that the lunch buffet assumes a *good* lunch buffet – two entrees, starch, vegetable, salads, and bread. Not sure if there’ll be snacks during coffee breaks but let’s assume so. Total Food Cost Per Person: $46 Food Cost: $14950 Gratuity: $2691 Total Food Cost: $17641 Total food cost is based on the $46 per person cost x 325. 300 for attendance, 12 for speakers, extra 13 for volunteers/organizers. Gratuity is 18%. Grand Totals So let’s sum things up here. Total Costs Hard Costs: $10700.00 Food Costs: $17641.00 Total:          $28341.00 Taxes:         $3685.00 Grand Total  $32026.00 Total Revenue Sponsorship  $20750 Registration   $74700 Grand Total   $95450.00 Total Profit $63424.00 Now what if the registration numbers were lower and they only got 100 people to show up. In that scenario there’d still be a profit of just under $26000. 
Closing Comments A couple of things to note: - I haven’t factored in anything for prizes. Not sure if any will be given out - We didn’t add in the booth space revenue - We’re assuming speakers aren’t getting paid, but even if they were at the high end its $12000 ($1000 per session), which is probably an inflated number for local speakers. - Note that all registrations were set to the “member” discounted price. The non-member registration price is higher. There is also an option for those that just want to show up for the opening keynote. There you have it! Let me know if you have any questions. D

    Read the article

  • Understanding implementation of glu.PickMatrix()

    - by stoney78us
    I am working on an OpenGL project which requires object selection feature. I use OpenTK framework to do this; however OpenTK doesn't support glu.PickMatrix() method to define the picking region. I ended up googling its implementation and here is what i got: void GluPickMatrix(double x, double y, double deltax, double deltay, int[] viewport) { if (deltax <= 0 || deltay <= 0) { return; } GL.Translate((viewport[2] - 2 * (x - viewport[0])) / deltax, (viewport[3] - 2 * (y - viewport[1])) / deltay, 0); GL.Scale(viewport[2] / deltax, viewport[3] / deltay, 1.0); } I totally fail to understand this piece of code. Moreover, this doesn't work with my following code sample: //selectbuffer private int[] _selectBuffer = new int[512]; private void Init() { float[] triangleVertices = new float[] { 0.0f, 1.0f, 0.0f, -1.0f, -1.0f, 0.0f, 1.0f, -1.0f, 0.0f }; float[] _triangleColors = new float[] { 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f }; GL.GenBuffers(2, _vBO); GL.BindBuffer(BufferTarget.ArrayBuffer, _vBO[0]); GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(sizeof(float) * _triangleVertices.Length), _triangleVertices, BufferUsageHint.StaticDraw); GL.VertexPointer(3, VertexPointerType.Float, 0, 0); GL.BindBuffer(BufferTarget.ArrayBuffer, _vBO[1]); GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(sizeof(float) * _triangleColors.Length), _triangleColors, BufferUsageHint.StaticDraw); GL.ColorPointer(3, ColorPointerType.Float, 0, 0); GL.EnableClientState(ArrayCap.VertexArray); GL.EnableClientState(ArrayCap.ColorArray); //Selectbuffer set up GL.SelectBuffer(512, _selectBuffer); } private void glControlWindow_Paint(object sender, PaintEventArgs e) { GL.Clear(ClearBufferMask.ColorBufferBit); GL.Clear(ClearBufferMask.DepthBufferBit); float[] eyes = { 0.0f, 0.0f, -10.0f }; float[] target = { 0.0f, 0.0f, 0.0f }; Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(0.785398163f, 4.0f / 3.0f, 0.1f, 100f); //45 degree = 0.785398163 rads Matrix4 view = Matrix4.LookAt(eyes[0], eyes[1], eyes[2], target[0], target[1], target[2], 0, 1, 0); Matrix4 model = Matrix4.Identity; Matrix4 MV = view * model; //First Clear Buffers GL.Clear(ClearBufferMask.ColorBufferBit); GL.Clear(ClearBufferMask.DepthBufferBit); GL.MatrixMode(MatrixMode.Projection); GL.LoadIdentity(); GL.LoadMatrix(ref projection); GL.MatrixMode(MatrixMode.Modelview); GL.LoadIdentity(); GL.LoadMatrix(ref MV); GL.Viewport(0, 0, glControlWindow.Width, glControlWindow.Height); GL.Enable(EnableCap.DepthTest); //Enable correct Z Drawings GL.DepthFunc(DepthFunction.Less); //Enable correct Z Drawings GL.MatrixMode(MatrixMode.Modelview); GL.PushMatrix(); GL.Translate(3.0f, 0.0f, 0.0f); DrawTriangle(); GL.PopMatrix(); GL.PushMatrix(); GL.Translate(-3.0f, 0.0f, 0.0f); DrawTriangle(); GL.PopMatrix(); //Finally... GraphicsContext.CurrentContext.VSync = true; //Caps frame rate as to not over run GPU glControlWindow.SwapBuffers(); //Takes from the 'GL' and puts into control } private void DrawTriangle() { GL.BindBuffer(BufferTarget.ArrayBuffer, _vBO[0]); GL.VertexPointer(3, VertexPointerType.Float, 0, 0); GL.EnableClientState(ArrayCap.VertexArray); GL.DrawArrays(BeginMode.Triangles, 0, 3); GL.DisableClientState(ArrayCap.VertexArray); } //mouse click event implementation private void glControlWindow_MouseClick(object sender, System.Windows.Forms.MouseEventArgs e) { //Enter Select mode. Pretend drawing. 
GL.RenderMode(RenderingMode.Select); int[] viewport = new int[4]; GL.GetInteger(GetPName.Viewport, viewport); GL.PushMatrix(); GL.MatrixMode(MatrixMode.Projection); GL.LoadIdentity(); GluPickMatrix(e.X, e.Y, 5, 5, viewport); Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(0.785398163f, 4.0f / 3.0f, 0.1f, 100f); // this projection matrix is the same as one in glControlWindow_Paint method. GL.LoadMatrix(ref projection); GL.MatrixMode(MatrixMode.Modelview); int i = 0; int hits; GL.PushMatrix(); GL.Translate(3.0f, 0.0f, 0.0f); GL.PushName(i); DrawTriangle(); GL.PopName(); GL.PopMatrix(); i++; GL.PushMatrix(); GL.Translate(-3.0f, 0.0f, 0.0f); GL.PushName(i); DrawTriangle(); GL.PopName(); GL.PopMatrix(); hits = GL.RenderMode(RenderingMode.Render); .....hits processing code goes here... GL.PopMatrix(); glControlWindow.Invalidate(); } I expect to get only one hit everytime i click inside a triangle, but i always get 2 no matter where i click. I suspect there is something wrong with the implementation of the GluPickMatrix, I haven't figured out yet.
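
    GluPickMatrix simply translates and scales the current matrix so that the small deltax-by-deltay rectangle around (x, y) expands to fill the whole clip volume; anything that survives clipping afterwards counts as a hit. A sketch of how the pick pass is usually arranged, assuming OpenTK exposes GL.MultMatrix(ref Matrix4) alongside the GL.LoadMatrix overload used above: the pick matrix has to be multiplied by the perspective matrix rather than replaced by it, and the mouse y is normally flipped because window coordinates grow downward.

        // Pick-pass projection setup (sketch); the named draw calls stay as in the paint handler.
        GL.MatrixMode(MatrixMode.Projection);
        GL.PushMatrix();                      // push the projection matrix, not the modelview
        GL.LoadIdentity();

        // Window y runs top-down, the GL viewport runs bottom-up, so flip it.
        GluPickMatrix(e.X, viewport[3] - e.Y, 5, 5, viewport);

        Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(
            0.785398163f, 4.0f / 3.0f, 0.1f, 100f);
        GL.MultMatrix(ref projection);        // multiply; GL.LoadMatrix would discard the pick matrix

        GL.MatrixMode(MatrixMode.Modelview);
        // ... push names and draw the two triangles exactly as before ...

        // Restore the projection matrix once the hits have been collected.
        GL.MatrixMode(MatrixMode.Projection);
        GL.PopMatrix();
        GL.MatrixMode(MatrixMode.Modelview);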

    Read the article

  • Tips / techniques for high-performance C# server sockets

    - by McKenzieG1
    I have a .NET 2.0 server that seems to be running into scaling problems, probably due to poor design of the socket-handling code, and I am looking for guidance on how I might redesign it to improve performance. Usage scenario: 50 - 150 clients, high rate (up to 100s / second) of small messages (10s of bytes each) to / from each client. Client connections are long-lived - typically hours. (The server is part of a trading system. The client messages are aggregated into groups to send to an exchange over a smaller number of 'outbound' socket connections, and acknowledgment messages are sent back to the clients as each group is processed by the exchange.) OS is Windows Server 2003, hardware is 2 x 4-core X5355. Current client socket design: A TcpListener spawns a thread to read each client socket as clients connect. The threads block on Socket.Receive, parsing incoming messages and inserting them into a set of queues for processing by the core server logic. Acknowledgment messages are sent back out over the client sockets using async Socket.BeginSend calls from the threads that talk to the exchange side. Observed problems: As the client count has grown (now 60-70), we have started to see intermittent delays of up to 100s of milliseconds while sending and receiving data to/from the clients. (We log timestamps for each acknowledgment message, and we can see occasional long gaps in the timestamp sequence for bunches of acks from the same group that normally go out in a few ms total.) Overall system CPU usage is low (< 10%), there is plenty of free RAM, and the core logic and the outbound (exchange-facing) side are performing fine, so the problem seems to be isolated to the client-facing socket code. There is ample network bandwidth between the server and clients (gigabit LAN), and we have ruled out network or hardware-layer problems. Any suggestions or pointers to useful resources would be greatly appreciated. If anyone has any diagnostic or debugging tips for figuring out exactly what is going wrong, those would be great as well. Note: I have the MSDN Magazine article Winsock: Get Closer to the Wire with High-Performance Sockets in .NET, and I have glanced at the Kodart "XF.Server" component - it looks sketchy at best.
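
    Two things that may be worth checking, sketched below with illustrative names (ClientContext and ParseAndEnqueue stand in for the existing per-client state and parsing code): the Nagle algorithm, in combination with delayed ACKs, can stall bursts of small writes by one or two hundred milliseconds unless Socket.NoDelay is set, and an asynchronous receive loop avoids dedicating one blocked thread to every connection.

        using System;
        using System.Net.Sockets;

        // Illustrative per-client state; the real server already has an equivalent.
        sealed class ClientContext
        {
            public Socket Socket;
            public byte[] Buffer = new byte[8192];
        }

        static void StartClient(ClientContext client)
        {
            client.Socket.NoDelay = true;   // disable Nagle for small, latency-sensitive messages
            client.Socket.BeginReceive(client.Buffer, 0, client.Buffer.Length,
                                       SocketFlags.None, OnReceive, client);
        }

        static void OnReceive(IAsyncResult ar)
        {
            ClientContext client = (ClientContext)ar.AsyncState;
            int bytesRead;
            try { bytesRead = client.Socket.EndReceive(ar); }
            catch (SocketException) { client.Socket.Close(); return; }

            if (bytesRead == 0) { client.Socket.Close(); return; }  // peer closed the connection

            // Hand off to the existing parsing/queueing logic (placeholder name).
            ParseAndEnqueue(client.Buffer, bytesRead);

            // Post the next receive on the same socket.
            client.Socket.BeginReceive(client.Buffer, 0, client.Buffer.Length,
                                       SocketFlags.None, OnReceive, client);
        }

        static void ParseAndEnqueue(byte[] buffer, int count) { /* existing logic */ }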

    Read the article

  • Linux - serial port read returning EAGAIN...

    - by Andre
    Hello all! I am having some trouble reading some data from a serial port I opened the following way. I've used this instance of code plenty of times and all worked fine, but now, for some reason that I cant figure out, I am completely unable to read anything from the serial port. I am able to write and all is correctly received on the other end, but the replies (which are correctly sent) are never received (No, the cables are all ok ;) ) The code I used to open the serial port is the following: fd = open("/dev/ttyUSB0", O_RDWR | O_NONBLOCK | O_NOCTTY); if (fd == -1) { Aviso("Unable to open port"); return (fd); } else { //Get the current options for the port... bzero(&options, sizeof(options)); /* clear struct for new port settings */ tcgetattr(fd, &options); /*-- Set baud rate -------------------------------------------------------*/ if (cfsetispeed(&options, SerialBaudInterp(BaudRate))==-1) perror("On cfsetispeed:"); if (cfsetospeed(&options, SerialBaudInterp(BaudRate))==-1) perror("On cfsetospeed:"); //Enable the receiver and set local mode... options.c_cflag |= (CLOCAL | CREAD); options.c_cflag &= ~PARENB; /* Parity disabled */ options.c_cflag &= ~CSTOPB; options.c_cflag &= ~CSIZE; /* Mask the character size bits */ options.c_cflag |= SerialDataBitsInterp(8); /* CS8 - Selects 8 data bits */ options.c_cflag &= ~CRTSCTS; // disable hardware flow control options.c_iflag &= ~(IXON | IXOFF | IXANY); // disable XON XOFF (for transmit and receive) options.c_cflag |= CRTSCTS; /* enable hardware flow control */ options.c_cc[VMIN] = 0; //min carachters to be read options.c_cc[VTIME] = 0; //Time to wait for data (tenths of seconds) //Set the new options for the port... tcflush(fd, TCIFLUSH); if (tcsetattr(fd, TCSANOW, &options)==-1) { perror("On tcsetattr:"); } PortOpen[ComPort] = fd; } return PortOpen[ComPort]; After the port is initializeed I write some stuff to it through simple write command... int nc = write(hCom, txchar, n); where hCom is the file descriptor (and it's ok), and (as I said) this works. But... when I do a read afterwards, I get a "Resource Temporarily Unavailable" error from errno. I tested select to see when the file descriptor had something t read... but it always times out! I read data like this: ret = read(hCom, rxchar, n); and I always get an EAGAIN and I have no idea why. All help would be appreciated. Cheers
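
    Since the port is opened with O_NONBLOCK and VMIN/VTIME are both 0, read() returning -1 with errno set to EAGAIN just means no byte has arrived yet; it is not an error in itself. Also worth double-checking: the options first clear and then re-enable CRTSCTS, and if the cable has no RTS/CTS lines, leaving hardware flow control on can keep the far end from ever transmitting. A sketch of two common ways to handle the non-blocking read (helper names are illustrative):

        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/select.h>

        /* Wait up to timeout_ms for data, then read; one way to avoid the EAGAIN
         * a non-blocking descriptor returns when no byte is ready. */
        ssize_t read_serial(int fd, void *buf, size_t len, int timeout_ms)
        {
            fd_set readfds;
            struct timeval tv;

            FD_ZERO(&readfds);
            FD_SET(fd, &readfds);
            tv.tv_sec  = timeout_ms / 1000;
            tv.tv_usec = (timeout_ms % 1000) * 1000;

            int ready = select(fd + 1, &readfds, NULL, NULL, &tv);
            if (ready <= 0)
                return ready;                /* 0 = timeout, -1 = error */

            return read(fd, buf, len);       /* data is pending, so EAGAIN is not expected */
        }

        /* Alternative: drop O_NONBLOCK after open() and let VMIN/VTIME pace the reads. */
        void make_blocking(int fd)
        {
            int flags = fcntl(fd, F_GETFL, 0);
            fcntl(fd, F_SETFL, flags & ~O_NONBLOCK);
        }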

    Read the article

  • Getting problem while capturing image through web cam in Java -OpenCV code.

    - by Chetan
    Hi.. I am developing Face detector using OpenCV library in java... I have written sample code for this,but it is giving Error while capturing image. can any one help please. Here is the code import java.awt.; import java.awt.event.; import java.awt.image.MemoryImageSource; import hypermedia.video.OpenCV; @SuppressWarnings("serial") public class FaceDetection extends Frame implements Runnable { ///Main method. public static void main(String[] args) { //System.out.println( "\nOpenCV face detection sample\n" ); new FaceDetection(); } // program execution frame rate (millisecond) final int FRAME_RATE = 1000/30; OpenCV cv = null; // OpenCV Object Thread t = null; // the sample thread // the input video stream image Image frame = null; // list of all face detected area Rectangle[] squares = new Rectangle[0]; //** Setup Frame and Object(s). FaceDetection() { super( "Face Detection" ); // OpenCV setup cv = new OpenCV(); //cv.capture(1, 1, 100); cv.capture( 320, 240 ); cv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT ); // frame setup this.setBounds( 100, 100, cv.width, cv.height ); this.setBackground( Color.BLACK ); this.setVisible( true ); this.addKeyListener( new KeyAdapter() { public void keyReleased( KeyEvent e ) { if ( e.getKeyCode()==KeyEvent.VK_ESCAPE ) { // ESC : release OpenCV resources cv.dispose(); System.exit(0); } } } ); // start running program t = new Thread( this ); t.start(); } // Draw video frame and each detected faces area. public void paint( Graphics g ) { // draw image g.drawImage( frame, 0, 0, null ); // draw squares g.setColor( Color.RED ); for( Rectangle rect : squares ) g.drawRect( rect.x, rect.y, rect.width, rect.height ); } @SuppressWarnings("static-access") public void run() { while( t!=null && cv!=null ) { try { t.sleep( FRAME_RATE ); // grab image from video stream cv.read(); // create a new image from cv pixels data MemoryImageSource mis = new MemoryImageSource( cv.width, cv.height, cv.pixels(), 0, cv.width ); frame = createImage( mis ); // detect faces squares = cv.detect( 1.2f, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 20, 20 ); // of course, repaint repaint(); } catch( InterruptedException e ) {;} } } } Error while starting capture : device 0

    Read the article

  • WPF Animation FPS vs. CPU usage - Am I expecting too much?

    - by Cory Charlton
    I'm working on a screen saver for my wife, http://cchearts.codeplex.com/, and while I've been able to improve FPS on lower end machines (switching from Path to StreamGeometry, using DrawingVisual instead of UserControl, etc.) the CPU usage still seems very high. Here are some numbers from a few 5-minute sampling periods: ~60FPS 35% average CPU on Core 2 Duo T7500 @ 2.2GHz, 3GB ram, NVIDIA Quadro NVS 140M (128MB), Vista [My dev laptop] ~40FPS 50% average CPU on Pentium D @ 3.4GHz, 1.5GB ram, Standard VGA Graphics Adapter (unknown), 2003 Server [A crappy desktop] I can understand the lower frame rate and higher CPU usage on the crappy desktop, but it still seems pretty high, and 35% on my dev laptop seems high as well. I'd really like to analyze the application to get more details, but I'm having issues there as well, so I'm wondering if I'm doing something wrong (never profiled WPF before). WPF Performance Suite: Process Launch Error Unable to attach to process: CCHearts.exe Do you want to kill it? This error message occurs when I click cancel after attempting launch. If I don't click cancel it sits there idle, I guess waiting to attach. Performance Explorer: Could not launch C:\Projects2\CC.Hearts\CC.Hearts\bin\Debug (USEVISUAL)\CCHearts.exe. Previous attempt to profile the application finished unsuccessfully. Please restart the application. Output Window from Performance: Profiling started. Profiling process ID 5360 (CCHearts). Process ID 5360 has exited. Data written to C:\Projects2\CC.Hearts\CCHearts100608.vsp. Profiling finished. PRF0025: No data was collected. Profiling complete. So I'm stuck wanting to improve performance but have no concrete way to determine where the bottleneck is. I have been relatively successful throwing darts at this point, but I'm beyond that now :) PS: Screensaver is hosted at CodePlex if you want to look at the source and missed the link above. Edit: My RenderOptions darts... // NOTE: Grasping at straws here ;-) RenderOptions.SetBitmapScalingMode(newHeart, BitmapScalingMode.LowQuality); RenderOptions.SetCachingHint(newHeart, CachingHint.Cache); RenderOptions.SetEdgeMode(newHeart, EdgeMode.Aliased); I threw those in a little while back and didn't see much difference (not sure if the bitmap scaling even comes into play). Really wish I could get profiling working to know where I should try to optimize. For now I assume there is some overhead in creating a new HeartVisual and the DrawingVisual contained inside. Maybe if I reset and reused the hearts (tossed them in a queue once they completed or something) I'd see an improvement. Shrug. Throwing darts while blindfolded is always fun.

    Read the article

  • Rx Reactive extensions: Unit testing with FromAsyncPattern

    - by Andrew Anderson
    The Reactive Extensions have a sexy little hook to simplify calling async methods: var func = Observable.FromAsyncPattern<InType, OutType>( myWcfService.BeginDoStuff, myWcfService.EndDoStuff); func(inData).ObserveOnDispatcher().Subscribe(x => Foo(x)); I am using this in a WPF project, and it works great at runtime. Unfortunately, when trying to unit test methods that use this technique I am experiencing random failures. About 3 out of every 5 executions of a test that contains this code fail. Here is a sample test (implemented using a Rhino/Unity auto-mocking container): [TestMethod()] public void SomeTest() { // arrange var container = GetAutoMockingContainer(); container.Resolve<IMyWcfServiceClient>() .Expect(x => x.BeginDoStuff(null, null, null)) .IgnoreArguments() .Do( new Func<Specification, AsyncCallback, object, IAsyncResult>((inData, asyncCallback, state) => { return new CompletedAsyncResult(asyncCallback, state); })); container.Resolve<IRepositoryServiceClient>() .Expect(x => x.EndRetrieveAttributeDefinitionsForSorting(null)) .IgnoreArguments() .Do( new Func<IAsyncResult, OutData>((ar) => { return someMockData; })); // act var target = CreateTestSubject(container); target.DoMethodThatInvokesService(); // Run the dispatcher for everything over background priority Dispatcher.CurrentDispatcher.Invoke(DispatcherPriority.Background, new Action(() => { })); // assert Assert.IsTrue(my operation ran as expected); } The problem that I see is that the code that I specified to run when the async action completed (in this case, Foo(x)) is never called. I can verify this by setting breakpoints in Foo and observing that they are never reached. Further, I can force a long delay after calling DoMethodThatInvokesService (which kicks off the async call), and the code is still never run. I do know that the lines of code invoking the Rx framework were called. Other things I've tried: I have attempted to modify the second-last line according to the suggestions here: "Reactive Extensions Rx - unit testing something with ObserveOnDispatcher". No love. I have added .Take(1) to the Rx code as follows: func(inData).ObserveOnDispatcher().Take(1).Subscribe(x => Foo(x)); This improved my failure rate to something like 1 in 5, but failures still occurred. I have rewritten the Rx code to use the plain-Jane async pattern. This works; however, my developer ego really would love to use Rx instead of boring old begin/end. In the end I do have a workaround in hand (i.e. don't use Rx), however I feel that it is not ideal. If anyone has run into this problem in the past and found a solution, I'd dearly love to hear it.

    Read the article

  • Interfacing Android Nexus One with Arduino + BlueSmirf

    - by efgomez
    I'm a bit new to all of this, so bear with me - I'd really appreciate your help. I am trying to link the Android Nexus One with an Arduino (Duemilanove) that is connected to a BlueSmirf. I have a program that is simply outputting the string "Hello Bluetooth" to whatever device the BlueSmirf is connected to. Here is the Arduino program: void setup(){ Serial.begin(115200); int i; } void loop(){Serial.print("Hello Bluetooth!"); delay(1000); } On my computer's BT terminal I can see the message and connect no problem. The trouble is with my Android code. I can connect to the device with Android, but when I look at the log it is not displaying "Hello Bluetooth". Here is the debug log: 04-09 16:27:49.022: ERROR/BTArduino(17288): FireFly-2583 connected 04-09 16:27:49.022: ERROR/BTArduino(17288): STARTING TO CONNECT THE SOCKET 04-09 16:27:55.705: ERROR/BTArduino(17288): Received: 16 04-09 16:27:56.702: ERROR/BTArduino(17288): Received: 1 04-09 16:27:56.712: ERROR/BTArduino(17288): Received: 15 04-09 16:27:57.702: ERROR/BTArduino(17288): Received: 1 04-09 16:27:57.702: ERROR/BTArduino(17288): Received: 15 04-09 16:27:58.704: ERROR/BTArduino(17288): Received: 1 04-09 16:27:58.704: ERROR/BTArduino(17288): Received: 15 etc... Here is the code; I'm trying to include only the relevant code, but if you need more please let me know: private class ConnectThread extends Thread { private final BluetoothSocket mySocket; private final BluetoothDevice myDevice; public ConnectThread(BluetoothDevice device) { myDevice = device; BluetoothSocket tmp = null; try { tmp = device.createRfcommSocketToServiceRecord(MY_UUID); } catch (IOException e) { Log.e(TAG, "CONNECTION IN THREAD DIDNT WORK"); } mySocket = tmp; } public void run() { Log.e(TAG, "STARTING TO CONNECT THE SOCKET"); InputStream inStream = null; boolean run = false; //...More Connection code here... The more relevant code is here: byte[] buffer = new byte[1024]; int bytes; // handle Connection try { inStream = mySocket.getInputStream(); while (run) { try { bytes = inStream.read(buffer); Log.e(TAG, "Received: " + bytes); } catch (IOException e3) { Log.e(TAG, "disconnected"); } } I am reading bytes = inStream.read(buffer). I know bytes is an integer, so I tried sending integers over Bluetooth because "bytes" was an integer, but it still didn't make sense. It almost appears that it is sending at an incorrect baud rate. Could this be true? Any help would be appreciated. Thank you very much.
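    For what it's worth, the log above is consistent with the link actually working: read() returns the number of bytes it copied into the buffer, and 16 is exactly the length of "Hello Bluetooth!" (the 1 + 15 pairs are the same message arriving split across two reads, which is normal for a stream). The sketch below is a hedged variation of the read loop that decodes the buffer instead of logging the count; the names mirror the question and would need to be adapted to the real thread.

        import android.bluetooth.BluetoothSocket;
        import android.util.Log;
        import java.io.IOException;
        import java.io.InputStream;

        // Hedged sketch: decode what read() put into the buffer rather than logging the byte count.
        final class BtReader {
            private static final String TAG = "BTArduino";

            static void readLoop(BluetoothSocket socket) {
                byte[] buffer = new byte[1024];
                StringBuilder message = new StringBuilder();
                try {
                    InputStream in = socket.getInputStream();
                    int bytes;
                    while ((bytes = in.read(buffer)) != -1) {   // read() returns the byte count, -1 on close
                        message.append(new String(buffer, 0, bytes, "US-ASCII"));
                        Log.e(TAG, "Received so far: " + message);
                        // If the Arduino used Serial.println(...) a complete line could be split
                        // off here on '\n'; with Serial.print(...) there is no terminator at all.
                    }
                } catch (IOException e) {
                    Log.e(TAG, "disconnected", e);
                }
            }
        }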

    Read the article

  • Voice Recognition Connection problem

    - by user244190
    I'm trying to work through and test a voice recognition example based on the VoiceRecognition.java example at http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/app/VoiceRecognition.html but when I click on the button to create the activity, I get a dialog that says Connection problem. My manifest file is using the Internet permission, and I understand it passes the audio to the Google servers. Do I need to do anything else to use this? Code is below. UPDATE: OK, I was able to replace my emulator image with one from HTC that appears to come with Google Voice Search; however, now when I run from the emulator, I'm getting an Audio Problem message with Speak Again or Cancel buttons. It appears to make it back to onActivityResult(), but the resultCode is 0. Here is the LogCat output: 03-07 20:21:25.396: INFO/ActivityManager(578): Starting activity: Intent { action=android.speech.action.RECOGNIZE_SPEECH comp={com.google.android.voicesearch/com.google.android.voicesearch.RecognitionActivity} (has extras) } 03-07 20:21:25.406: WARN/ActivityManager(578): Activity is launching as a new task, so cancelling activity result. 03-07 20:21:25.968: WARN/ActivityManager(578): Activity pause timeout for HistoryRecord{434f7850 {com.ikonicsoft.mileagegenie/com.ikonicsoft.mileagegenie.MileageGenie}} 03-07 20:21:26.206: WARN/AudioHardwareInterface(554): getInputBufferSize bad sampling rate: 16000 03-07 20:21:26.256: ERROR/AudioRecord(819): Recording parameters are not supported: sampleRate 16000, channelCount 1, format 1 03-07 20:21:26.696: INFO/ActivityManager(578): Displayed activity com.google.android.voicesearch/.RecognitionActivity: 1295 ms 03-07 20:21:29.890: DEBUG/dalvikvm(806): threadid=3: still suspended after undo (s=1 d=1) 03-07 20:21:29.896: INFO/dalvikvm(806): Uncaught exception thrown by finalizer (will be discarded): 03-07 20:21:29.896: INFO/dalvikvm(806): Ljava/lang/IllegalStateException;: Finalizing cursor android.database.sqlite.SQLiteCursor@435d3c50 on ml_trackdata that has not been deactivated or closed 03-07 20:21:29.896: INFO/dalvikvm(806): at android.database.sqlite.SQLiteCursor.finalize(SQLiteCursor.java:596) 03-07 20:21:29.896: INFO/dalvikvm(806): at dalvik.system.NativeStart.run(Native Method) 03-07 20:21:31.468: DEBUG/dalvikvm(806): threadid=5: still suspended after undo (s=1 d=1) 03-07 20:21:32.436: WARN/IInputConnectionWrapper(806): showStatusIcon on inactive InputConnection I'm still not sure why I'm getting the connection problem on the Droid. I can use Voice Search OK. I also tried clearing the cache and data as described in some posts, but it is still not working. /** * Fire an intent to start the speech recognition activity. */ private void startVoiceRecognitionActivity() { Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH); intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM); intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speech recognition demo"); startActivityForResult(intent, VOICE_RECOGNITION_REQUEST_CODE); } /** * Handle the results from the recognition activity. 
*/ @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { if (requestCode == VOICE_RECOGNITION_REQUEST_CODE && resultCode == RESULT_OK) { // Fill the list view with the strings the recognizer thought it could have heard ArrayList<String> matches = data.getStringArrayListExtra( RecognizerIntent.EXTRA_RESULTS); mList.setAdapter(new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1, matches)); } super.onActivityResult(requestCode, resultCode, data); }
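    One hedged check worth adding (it also appears in the VoiceRecognition.java sample this is based on) is to verify that some activity can service ACTION_RECOGNIZE_SPEECH before firing the intent: a stock emulator image has no recognizer installed, and on a real device the "Connection problem" dialog can also simply mean the recognizer could not reach Google's servers. The speakButton name below is hypothetical.

        import android.app.Activity;
        import android.content.Intent;
        import android.content.pm.ResolveInfo;
        import android.speech.RecognizerIntent;
        import android.widget.Button;
        import java.util.List;

        // Hedged sketch: only enable the button if a recognizer is present on the device.
        final class RecognizerCheck {
            static void enableSpeakButtonIfRecognizerPresent(Activity activity, Button speakButton) {
                List<ResolveInfo> activities = activity.getPackageManager().queryIntentActivities(
                        new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH), 0);
                if (activities.isEmpty()) {
                    // No recognizer installed: fail gracefully instead of showing "Connection problem".
                    speakButton.setEnabled(false);
                    speakButton.setText("Recognizer not present");
                }
            }
        }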

    Read the article

  • OCR an RSA key fob (security token)

    - by user130582
    I put together a quick WinForm/embedded IE browser control which logs into our company's bank website each morning and scrapes/exports the desired deposit information (the bank is a smallish regional bank). Since we have a few dozen "pseudoaccounts" that draw from the same master account, this actually takes 10-15 minutes to retrieve. Anyway, the only problem is that our business bank account requires an RSA security token (http://www.rsa.com/node.aspx?id=1156) -- if you are not familiar, it is a small device which shows a random 6-digit number every 15(?) seconds, so I have to prompt for this value before starting. This is on top of the website's login-based security model, so even if you create a read-only account that can't do anything, you still have to put the RSA number in. We have 5 of these tokens for different people in the company. From our perspective this is nuisance security. I was joking about using a web camera to OCR the digits from the key fob so they didn't have to type it in -- mainly so that the scraping/export would be done before anyone arrives in the morning. Well, they asked if I could really do it. So now I ask you: how hard (how many hours) do you think it would take to OCR these digits reliably from a JPEG image produced by the camera? I already know I can get the JPEG easily. I think you get 3 tries to log in, so it really needs to hit a 99% accuracy rate. I could work on this on my off time, but they don't want me to put more than a few hours into it, so I want to leverage as much existing code as possible. This is a 7-segment display (like an alarm clock), so it's not exactly the kind of text an OCR package would be accustomed to seeing. Also -- there is a countdown timer on the side of the display; typically when it is down to 1 bar, you wait until the next number appears and it starts over at 5 bars (like signal strength on your cell phone). So this would need to be OCR'd as well, but it is not text. Anyway, the more I think about it as I type this, the less convinced I am that I can truly get this right, so maybe I should just work on it in my spare time?
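    Because it is a seven-segment display with a fixed camera position, a general OCR package may be overkill: thresholding the image and sampling one point per segment gives a 7-bit pattern that maps directly to a digit. The sketch below is a rough illustration of that idea rather than a finished reader - all coordinates are placeholder values that would have to be measured once from a reference photo - and the same sampling trick would work for the countdown bars.

        import java.awt.image.BufferedImage;
        import java.io.File;
        import javax.imageio.ImageIO;

        // Hedged sketch: read a six-digit seven-segment display by sampling one pixel per segment.
        public class SevenSegmentReader {

            // Segment order: top, top-left, top-right, middle, bottom-left, bottom-right, bottom.
            private static final String[] PATTERNS = {
                "1110111", // 0
                "0010010", // 1
                "1011101", // 2
                "1011011", // 3
                "0111010", // 4
                "1101011", // 5
                "1101111", // 6
                "1010010", // 7
                "1111111", // 8
                "1111011", // 9
            };

            public static void main(String[] args) throws Exception {
                BufferedImage img = ImageIO.read(new File(args[0]));
                // Placeholder geometry: left edge of the first digit, digit pitch, and the
                // relative offsets of the seven sample points inside one digit cell.
                int firstDigitX = 40, digitPitch = 30, topY = 20;
                int[][] segmentOffsets = { {12, 2}, {3, 8}, {21, 8}, {12, 16}, {3, 24}, {21, 24}, {12, 30} };

                StringBuilder code = new StringBuilder();
                for (int d = 0; d < 6; d++) {                       // six digits on the fob
                    StringBuilder pattern = new StringBuilder();
                    for (int[] off : segmentOffsets) {
                        int x = firstDigitX + d * digitPitch + off[0];
                        int y = topY + off[1];
                        pattern.append(isDark(img, x, y) ? '1' : '0');
                    }
                    code.append(lookup(pattern.toString()));
                }
                System.out.println("Token: " + code);
            }

            // A segment counts as "lit" when its pixel is darker than a fixed threshold (dark
            // digits on a light LCD background); averaging a small neighbourhood is more robust.
            private static boolean isDark(BufferedImage img, int x, int y) {
                int rgb = img.getRGB(x, y);
                int luminance = ((rgb >> 16 & 0xFF) + (rgb >> 8 & 0xFF) + (rgb & 0xFF)) / 3;
                return luminance < 100;
            }

            private static char lookup(String pattern) {
                for (int i = 0; i < PATTERNS.length; i++) {
                    if (PATTERNS[i].equals(pattern)) return (char) ('0' + i);
                }
                return '?'; // unrecognised pattern: retry with the next frame from the camera
            }
        }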

    Read the article

  • Using Artificial Intelligence (AI) to predict Stock Prices

    - by akaphenom
    I have a set of data very similar to the Motley Fool CAPS system, where individual users enter BUY and SELL recommendations on various equities. What I would like to do is show each recommendation and somehow rate it (1-5) as to whether it was a good predictor (i.e. correlation coefficient = 1) of the future stock price (or EPS or whatever), a horrible predictor (i.e. correlation coefficient = -1), or somewhere in between. Each recommendation is tagged to a particular user, so that can be tracked over time. I can also track market direction (bullish / bearish) based off of something like the S&P 500 price. The components I think would make sense in the model would be: user direction (long/short), market direction, sector of stock. The thought is that some users are better in bull markets than bear (and vice versa), and some are better at shorts than longs - and then a combination of the above. I can automatically tag the market direction and sector (based off the market at the time and the equity being recommended). The thought is that I could present a series of screens and allow myself to rank each individual recommendation by displaying the available data: absolute, market, and sector outperformance for a specific time period out. I would follow a detailed list for ranking the stocks so that the ranking is as objective as possible. My assumption is that a single user is right no more than 57% of the time - but who knows. I could load the system and say "Let's rank the recommendation as a predictor of stock value 90 days forward", and that would represent a very explicit set of rankings. NOW here is the crux - I want to create some sort of machine learning algorithm that can identify patterns over time so that, as recommendations stream into the application, we maintain a ranking of that stock (i.e. similar to a correlation coefficient) as to the likelihood that the recommendation (in addition to the past series of recommendations) will affect the price. Now here is the super crux. I have never taken an AI class or read an AI book, never mind anything specific to machine learning. So I am looking for guidance - a sample or description of a similar system I could adapt, places to look for info, or any general help. Or even push me in the right direction to get started... My hope is to implement this with F#, be able to impress my friends with a new F# skill set and an implementation of machine learning, and potentially have something (application / source) I can include in a tech portfolio or blog space. Thank you for any advice in advance.
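    As a very modest starting point (well short of real machine learning), one could keep an exponentially weighted score per user per context (market direction x sector), updated each time a recommendation can be graded against the realised 90-day outcome. The sketch below is written in Java purely for illustration - the same few lines translate directly to F# - and every name in it is hypothetical.

        import java.util.HashMap;
        import java.util.Map;

        // Hedged sketch: per-(user, market direction, sector) exponentially weighted score,
        // where each grade is assumed to be a correlation-style value in [-1, +1].
        public class RecommenderScores {
            private final Map<String, Double> score = new HashMap<String, Double>();
            private final double alpha; // learning rate: how quickly old evidence is forgotten

            public RecommenderScores(double alpha) { this.alpha = alpha; }

            private static String key(String user, String marketDirection, String sector) {
                return user + "|" + marketDirection + "|" + sector;
            }

            /** Fold one graded recommendation into the user's bucket. */
            public void update(String user, String marketDirection, String sector, double grade) {
                String k = key(user, marketDirection, sector);
                double old = score.containsKey(k) ? score.get(k) : 0.0; // neutral prior
                score.put(k, (1 - alpha) * old + alpha * grade);
            }

            /** Current estimate of how predictive this user has been in this context. */
            public double estimate(String user, String marketDirection, String sector) {
                String k = key(user, marketDirection, sector);
                return score.containsKey(k) ? score.get(k) : 0.0;
            }

            public static void main(String[] args) {
                RecommenderScores scores = new RecommenderScores(0.1);
                scores.update("someUser", "bullish", "tech", 0.8);
                scores.update("someUser", "bullish", "tech", -0.2);
                System.out.println(scores.estimate("someUser", "bullish", "tech"));
            }
        }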

    Read the article

  • Is this a good way to do a game loop for an iPhone game?

    - by Danny Tuppeny
    Hi all, I'm new to iPhone dev, but trying to build a 2D game. I was following a book, but the game loop it created basically said: function gameLoop update() render() sleep(1/30th second) gameLoop The reasoning was that this would run at 30fps. However, this seemed a little mental, because if my frame took 1/30th second, then it would run at 15fps (since it'll spend as much time sleeping as updating). So, I did some digging and found the CADisplayLink class, which would sync calls to my gameLoop function to the refresh rate (or a fraction of it). I can't find many samples of it, so I'm posting here for a code review :-) It seems to work as expected, and it includes passing the elapsed (frame) time into the Update method so my logic can be framerate-independent (however I can't actually find in the docs what CADisplayLink would do if my frame took more than its allowed time to run - I'm hoping it just does its best to catch up, and doesn't crash!). // // GameAppDelegate.m // // Created by Danny Tuppeny on 10/03/2010. // Copyright Danny Tuppeny 2010. All rights reserved. // #import "GameAppDelegate.h" #import "GameViewController.h" #import "GameStates/gsSplash.h" @implementation GameAppDelegate @synthesize window; @synthesize viewController; - (void) applicationDidFinishLaunching:(UIApplication *)application { // Create an instance of the first GameState (Splash Screen) [self doStateChange:[gsSplash class]]; // Set up the game loop displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(gameLoop)]; [displayLink setFrameInterval:2]; [displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode]; } - (void) gameLoop { // Calculate how long has passed since the previous frame CFTimeInterval currentFrameTime = [displayLink timestamp]; CFTimeInterval elapsed = 0; // For the first frame, we want to pass 0 (since we haven't elapsed any time), so only // calculate this in the case where we're not the first frame if (lastFrameTime != 0) { elapsed = currentFrameTime - lastFrameTime; } // Keep track of this frame's time (so we can calculate this next time) lastFrameTime = currentFrameTime; NSLog([NSString stringWithFormat:@"%f", elapsed]); // Call update, passing the elapsed time in [((GameState*)viewController.view) Update:elapsed]; } - (void) doStateChange:(Class)state { // Remove the previous GameState if (viewController.view != nil) { [viewController.view removeFromSuperview]; [viewController.view release]; } // Create the new GameState viewController.view = [[state alloc] initWithFrame:CGRectMake(0, 0, IPHONE_WIDTH, IPHONE_HEIGHT) andManager:self]; // Now set as visible [window addSubview:viewController.view]; [window makeKeyAndVisible]; } - (void) dealloc { [viewController release]; [window release]; [super dealloc]; } @end Any feedback would be appreciated :-) PS. Bonus points if you can tell me why all the books use "viewController.view" but for everything else seem to use "[object name]" format. Why not [viewController view]?

    Read the article

  • How to quickly acquire and process real time screen output

    - by Akusete
    I am trying to write a program to play a full-screen PC game for fun (as an experiment in Computer Vision and Artificial Intelligence). For this experiment I am assuming the game has no underlying API for AI players (nor is the source available), so I intend to process the visual information rendered by the game on the screen. The game runs in full-screen mode on a win32 system (DirectX, I assume). Currently I am using the win32 functions #include <windows.h> #include <cvaux.h> class Screen { public: HWND windowHandle; HDC windowContext; HBITMAP buffer; HDC bufferContext; CvSize size; uchar* bytes; int channels; Screen () { windowHandle = GetDesktopWindow(); windowContext = GetWindowDC (windowHandle); size = cvSize (GetDeviceCaps (windowContext, HORZRES), GetDeviceCaps (windowContext, VERTRES)); buffer = CreateCompatibleBitmap (windowContext, size.width, size.height); bufferContext = CreateCompatibleDC (windowContext); SelectObject (bufferContext, buffer); channels = 4; bytes = new uchar[size.width * size.height * channels]; } ~Screen () { ReleaseDC(windowHandle, windowContext); DeleteDC(bufferContext); DeleteObject(buffer); delete[] bytes; } void CaptureScreen (IplImage* img) { BitBlt(bufferContext, 0, 0, size.width, size.height, windowContext, 0, 0, SRCCOPY); int n = size.width * size.height; int imgChannels = img->nChannels; GetBitmapBits (buffer, n * channels, bytes); uchar* src = bytes; uchar* dest = (uchar*) img->imageData; uchar* end = dest + n * imgChannels; while (dest < end) { dest[0] = src[0]; dest[1] = src[1]; dest[2] = src[2]; dest += imgChannels; src += channels; } } }; The rate at which I can process frames using this approach is much too slow. Is there a better way to acquire screen frames?

    Read the article

  • Drawing performance with CGImageCreateWithJPEGDataProvider?

    - by Rnegi
    I'm actually curious about this for the iPhone. I am getting an MJPEG stream from a server and trying to render it natively on the iPhone (without the use of the Safari class). The reason for this is that the Safari class, while it CAN render MJPEG natively, does not do so at the framerate I would like. So I tried drawing it natively, but I've come up with performance issues, namely a syncing issue between what I'm getting from the server and what I am able to draw onto the screen of the phone. (There should be a little lag, but the drift gets really bad, which is what I want to avoid.) So I have a connection set up to my server and I do get the JPEGs. It's just data I insert into an NSMutableArray buffer: CFMutableDataRef _t_data_ref = (CFMutableDataRef)[_buffer_array objectAtIndex:0]; //CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData (_cf_buffer_data); CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData(_t_data_ref); CGImageRef image = CGImageCreateWithJPEGDataProvider(imgDataProvider, NULL, true, kCGRenderingIntentDefault); CGImageRef imgRef = image; CGContextDrawImage(context, CGRectMake(0, 17, 380, 285), imgRef); CGImageRelease(image); CGDataProviderRelease(imgDataProvider); Please note this is the gist of my code, but it should summarize what I am trying to accomplish with regard to drawing. Also, in order to get the framerate in sync, I had to detach a separate thread that sleeps X seconds and calls [self setNeedsDisplay]. NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; // Top-level pool while(1) { //[NSThread sleepForTimeInterval:TIMER_REFRESH_VALUE]; //sleep(unsigned int ); usleep(MICRO_REFRESH_VALUE); if ([_buffer_array count] > 10) { //NSLog(@"stuff %d", [_buffer_array count]); //[self setNeedsDisplay]; [self performSelectorOnMainThread:@selector(setNeedsDisplay) withObject:nil waitUntilDone:NO]; } } [pool release]; // Release the objects in the pool. My buffer of JPEG data actually fills up quite quickly, but I can't seem to consume what I'm getting at the same rate - it's actually much slower. Are there any documents that describe what kind of performance tuning I can do to make it go faster when rendering the JPEG to the screen? Or am I kind of stuck here? Thanks!

    Read the article

  • How do I prevent the concurrent execution of a javascript function?

    - by RyanV
    I am making a ticker similar to the "From the AP" one at The Huffington Post, using jQuery. The ticker rotates through a ul, either by user command (clicking an arrow) or by an auto-scroll. Each list-item is display:none by default. It is revealed by the addition of a "showHeadline" class which is display:list-item. HTML for the UL Looks like this: <ul class="news" id="news"> <li class="tickerTitle showHeadline">Test Entry</li> <li class="tickerTitle">Test Entry2</li> <li class="tickerTitle">Test Entry3</li> </ul> When the user clicks the right arrow, or the auto-scroll setTimeout goes off, it runs a tickForward() function: function tickForward(){ var $active = $('#news li.showHeadline'); var $next = $active.next(); if($next.length==0) $next = $('#news li:first'); $active.stop(true, true); $active.fadeOut('slow', function() {$active.removeClass('showHeadline');}); setTimeout(function(){$next.fadeIn('slow', function(){$next.addClass('showHeadline');})}, 1000); if(isPaused == true){ } else{ startScroll() } }; This is heavily inspired by Jon Raasch's A Simple jQuery Slideshow. Basically, find what's visible, what should be visible next, make the visible thing fade and remove the class that marks it as visible, then fade in the next thing and add the class that makes it visible. Now, everything is hunky-dory if the auto-scroll is running, kicking off tickForward() once every three seconds. But if the user clicks the arrow button repeatedly, it creates two negative conditions: Rather than advance quickly through the list for just the number of clicks made, it continues scrolling at a faster-than-normal rate indefinitely. It can produce a situation where two (or more) list items are given the .showHeadline class, so there's overlap on the list. I can see these happening (especially #2) because the tickForward() function can run concurrently with itself, producing different sets of $active and $next. So I think my question is: What would be the best way to prevent concurrent execution of the tickForward() method? Some things I have tried or considered: Setting a Flag: When tickForward() runs, it sets an isRunning flag to true, and sets it back to false right before it ends. The logic for the event handler is set to only call tickForward() if isRunning is false. I tried a simple implementation of this, and isRunning never appeared to be changed. The jQuery queue(): I think it would be useful to queue up the tickForward() commands, so if you clicked it five times quickly, it would still run as commanded but wouldn't run concurrently. However, in my cursory reading on the subject, it appears that a queue has to be attached to the object its queue applies to, and since my tickForward() method affects multiple lis, I don't know where I'd attach it.

    Read the article

  • Why does PostgreSQL query performance drop over time, but get restored when rebuilding the index?

    - by Jim Rush
    According to this page in the manual, indexes don't need to be maintained. However, we are running with a PostgreSQL table that has a continuous rate of updates, deletes and inserts, and over time (a few days) we see a significant query degradation. If we delete and recreate the index, query performance is restored. We are using out-of-the-box settings. The table in our test is currently starting out empty and grows to half a million rows. It has a fairly large row (lots of text fields). Our search is based on an index, not the primary key (I've confirmed the index is being used, at least under normal conditions). The table is being used as a persistent store for a single process. Using PostgreSQL on Windows with a Java client. I'm willing to give up insert and update performance to keep up the query performance. We are considering rearchitecting the application so that data is spread across various dynamic tables in a manner that allows us to drop and rebuild indexes periodically without impacting the application. However, as always, there is a time crunch to get this to work and I suspect we are missing something basic in our configuration or usage. We have considered forcing vacuuming and index rebuilds to run at certain times, but I suspect the locking period for such an action would cause our query to block. This may be an option, but there are some real-time (windows of 3-5 seconds) implications that require other changes in our code. Additional information: Table and index CREATE TABLE icl_contacts ( id bigint NOT NULL, campaignfqname character varying(255) NOT NULL, currentstate character(16) NOT NULL, xmlscheduledtime character(23) NOT NULL, ... 25 or so other fields. Most of them fixed or varying character fields ... CONSTRAINT icl_contacts_pkey PRIMARY KEY (id) ) WITH (OIDS=FALSE); ALTER TABLE icl_contacts OWNER TO postgres; CREATE INDEX icl_contacts_idx ON icl_contacts USING btree (xmlscheduledtime, currentstate, campaignfqname); EXPLAIN ANALYZE: Limit (cost=0.00..3792.10 rows=750 width=32) (actual time=48.922..59.601 rows=750 loops=1) -> Index Scan using icl_contacts_idx on icl_contacts (cost=0.00..934580.47 rows=184841 width=32) (actual time=48.909..55.961 rows=750 loops=1) Index Cond: ((xmlscheduledtime < '2010-05-20T13:00:00.000'::bpchar) AND (currentstate = 'SCHEDULED'::bpchar) AND ((campaignfqname)::text = '.main.ee45692a-6113-43cb-9257-7b6bf65f0c3e'::text)) And, yes, I am aware that there are a variety of things we could do to normalize and improve the design of this table. Some of these options may be available to us. My focus in this question is about understanding how PostgreSQL is managing the index and query over time (understand why, not just fix). If it were to be done over or significantly refactored, there would be a lot of changes.
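    Since the client is Java, one hedged option - if the periodic-rebuild route is acceptable at all - is to schedule the rebuild from the same client during a known-quiet window, as sketched below. REINDEX locks out queries against the index while it runs, so this only fits if the 3-5 second real-time window can tolerate the pause; making (auto)vacuum run more aggressively on this table is usually the first thing to try, since heavy update/delete churn bloats btree indexes, and that bloat is exactly what deleting and recreating the index clears out. The connection string and credentials below are placeholders.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        // Hedged sketch: rebuild the bloated index and refresh planner statistics from the Java client.
        public class IndexMaintenance {
            public static void main(String[] args) throws Exception {
                Class.forName("org.postgresql.Driver");
                Connection conn = DriverManager.getConnection(
                        "jdbc:postgresql://localhost/mydb", "postgres", "secret"); // placeholder DSN
                try {
                    Statement st = conn.createStatement();
                    st.execute("REINDEX INDEX icl_contacts_idx"); // rebuild just the bloated index
                    st.execute("ANALYZE icl_contacts");           // refresh planner statistics
                    st.close();
                } finally {
                    conn.close();
                }
            }
        }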

    Read the article

  • Non-linear regression models in PostgreSQL using R

    - by Dave Jarvis
    Background I have climate data (temperature, precipitation, snow depth) for all of Canada between 1900 and 2009. I have written a basic website, and the simplest page allows users to choose a category and city. They then get back a very simple report (without the parameters and calculations section): The primary purpose of the web application is to provide a simple user interface so that the general public can explore the data in meaningful ways. (A list of numbers is not meaningful to the general public, nor is a website that provides too many inputs.) The secondary purpose of the application is to provide climatologists and other scientists with deeper ways to view the data. (Using too many inputs, of course.) Tool Set The database is PostgreSQL with R (mostly) installed. The reports are written using iReport and generated using JasperReports. Poor Model Choice Currently, a linear regression model is applied against annual averages of daily data. The linear regression model is calculated within a PostgreSQL function as follows: SELECT regr_slope( amount, year_taken ), regr_intercept( amount, year_taken ), corr( amount, year_taken ) FROM temp_regression INTO STRICT slope, intercept, correlation; The results are returned to JasperReports using: SELECT year_taken, amount, year_taken * slope + intercept, slope, intercept, correlation, total_measurements INTO result; JasperReports calls into PostgreSQL using the following parameterized analysis function: SELECT year_taken, amount, measurements, regression_line, slope, intercept, correlation, total_measurements, execute_time FROM climate.analysis( $P{CityId}, $P{Elevation1}, $P{Elevation2}, $P{Radius}, $P{CategoryId}, $P{Year1}, $P{Year2} ) ORDER BY year_taken This is not an optimal solution because it gives the false impression that the climate is changing at a slow but steady rate. Questions Using functions that take two parameters (e.g., year [X] and amount [Y]), such as PostgreSQL's regr_slope: What is a better regression model to apply? What CRAN packages provide such models? (Installable, ideally, using apt-get.) How can the R functions be called within a PostgreSQL function? If no such functions exist: What parameters should I try to obtain for functions that will produce the desired fit? How would you recommend showing the best-fit curve? Keep in mind that this is a web app for use by the general public. If the only way to analyse the data is from an R shell, then the purpose has been defeated. (I know this is not the case for most R functions I have looked at so far.) Thank you!

    Read the article

< Previous Page | 81 82 83 84 85 86 87 88 89 90 91 92  | Next Page >