Search Results

Search found 3489 results on 140 pages for 'clock rate'.


  • Computer Networks UNISA - Chap 15 – Network Management

    - by MarkPearl
After reading this section you should be able to:

- Understand network management and the importance of documentation, baseline measurements, policies, and regulations to assess and maintain a network's health
- Manage a network's performance using SNMP-based network management software, system and event logs, and traffic-shaping techniques
- Identify the reasons for and elements of an asset management system
- Plan and follow regular hardware and software maintenance routines

Fundamentals of Network Management

Network management refers to the assessment, monitoring, and maintenance of all aspects of a network, including checking for hardware faults, ensuring high QoS, maintaining records of network assets, etc. The scope of network management differs depending on the size and requirements of the network. All subtopics of network management share the goals of enhancing efficiency and performance while preventing costly downtime or loss.

Documentation

The way documentation is stored may vary, but to adequately manage a network one should record at least the following:

- Physical topology (types of LAN and WAN topologies – ring, star, hybrid)
- Access method (does it use Ethernet 802.3, token ring, etc.)
- Protocols
- Devices (switches, routers, etc.)
- Operating systems
- Applications
- Configurations (operating system versions and config files for server/client software)

Baseline Measurements

A baseline is a report of the network's current state of operation. Baseline measurements might include the utilization rate of your network backbone, the number of users logged on per day, etc. Baseline measurements allow you to compare future performance increases or decreases caused by network changes or events with past network performance. Obtaining baseline measurements is the only way to know for certain whether a pattern of usage has changed, or whether a network upgrade has made a difference. There are various tools available for measuring baseline performance on a network.

Policies, Procedures, and Regulations

Following rules helps limit chaos, confusion, and possibly downtime. The following policies, procedures, and regulations make for sound network management:

- Media installation and management (includes designing the physical layout of cable, etc.)
- Network addressing policies (includes choosing and applying an addressing scheme)
- Resource sharing and naming conventions (includes rules for logon IDs)
- Security-related policies
- Troubleshooting procedures
- Backup and disaster recovery procedures

In addition to internal policies, a network manager must consider external regulatory rules.

Fault and Performance Management

After documenting every aspect of your network and following policies and best practices, you are ready to assess your network's status on an ongoing basis. This process includes both performance management and fault management.

Network Management Software

To accomplish both fault and performance management, organizations often use enterprise-wide network management software. There are various software packages that do this; each collects data from multiple networked devices at regular intervals, in a process called polling. Each managed device runs a network management agent. So as not to affect the performance of a device while collecting information, agents do not demand significant processing resources. The definitions of managed devices and their data are collected in a MIB (Management Information Base).
Agents communicate information about managed devices via any of several application layer protocols. On modern networks most agents use SNMP, which is part of the TCP/IP suite and typically runs over UDP on port 161. Because of their flexibility and sophistication, network management applications are a challenge to configure and fine-tune. One needs to be careful to collect only relevant information and not cause performance issues (e.g. polling a device every 5 seconds can be a problem with thousands of devices). MRTG (Multi Router Traffic Grapher) is a simple command line utility that uses SNMP to poll devices and collects data in a log file. MRTG can be used with Windows, UNIX and Linux.

System and Event Logs

Virtually every condition recognized by an operating system can be recorded. This is typically done using event logs. In Windows there is a GUI event log viewer. Similar information is recorded in UNIX and Linux in a system log. Much of the information collected in event logs and syslog files does not point to a problem, even if it is marked with a warning, so it is important to filter your logs appropriately to reduce the noise.

Traffic Shaping

When a network must handle high volumes of traffic, users benefit from a performance management technique called traffic shaping. Traffic shaping involves manipulating certain characteristics of packets, data streams, or connections to manage the type and amount of traffic traversing a network or interface at any moment. Its goals are to assure timely delivery of the most important traffic while offering the best possible performance for all users. Several types of traffic prioritization exist, including prioritizing traffic according to any of the following characteristics:

- Protocol
- IP address
- User group
- DiffServ
- VLAN tag in a Data Link layer frame
- Service or application

Caching

In addition to traffic shaping, a network or host might use caching to improve performance. Caching is the local storage of frequently needed files that would otherwise be obtained from an external source. By keeping files close to the requester, caching allows the user to access those files quickly. The most common type of caching is Web caching, in which Web pages are stored locally. To an ISP, caching is much more than just a convenience. It prevents a significant volume of WAN traffic, thus improving performance and saving money.

Asset Management

Another key component in managing networks is identifying and tracking its hardware. This is called asset management. The first step in asset management is to take an inventory of each node on the network. You will also want to keep records of every piece of software purchased by your organization. Asset management simplifies maintaining and upgrading the network, chiefly because you know what the system includes. In addition, asset management provides network administrators with information about the costs and benefits of certain types of hardware or software.

Change Management

Networks are always in a state of flux, with changes including:

- Software changes and patches
- Client upgrades
- Shared application upgrades
- NOS upgrades
- Hardware and physical plant changes
- Cabling upgrades
- Backbone upgrades

For a detailed explanation of each of these, read the textbook (Page 750 – 761).
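To make the idea of polling concrete, here is a small C++ sketch of a poller's main loop. The queryDevice helper is a hypothetical stand-in for a real SNMP GET (a real poller would use an SNMP library such as net-snmp), and the five-minute interval matches MRTG's default polling interval.

```cpp
// Minimal polling-loop sketch. queryDevice() is a hypothetical stand-in
// for an SNMP GET of a counter from a device's MIB; a production poller
// would use an SNMP library and poll asynchronously.
#include <chrono>
#include <iostream>
#include <string>
#include <thread>
#include <vector>

// Hypothetical stand-in for an SNMP request to UDP port 161.
long queryDevice(const std::string& address) {
    return 0; // a real implementation would return e.g. an interface counter
}

int main() {
    std::vector<std::string> devices = {"10.0.0.1", "10.0.0.2"};
    const auto interval = std::chrono::minutes(5); // MRTG's default interval

    for (;;) {
        for (const auto& addr : devices) {
            long value = queryDevice(addr);
            // MRTG appends each sample to a log file and graphs it later.
            std::cout << addr << " counter: " << value << '\n';
        }
        std::this_thread::sleep_for(interval); // poll at intervals, don't hammer
    }
}
```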

    Read the article

  • Big Data – Buzz Words: What is HDFS – Day 8 of 21

    - by Pinal Dave
In yesterday’s blog post we learned what MapReduce is. In this article we will take a quick look at one of the four most important buzz words which go around Big Data – HDFS.

What is HDFS?

HDFS stands for Hadoop Distributed File System and it is the primary storage system used by Hadoop. It provides high performance access to data across Hadoop clusters. It is usually deployed on low-cost commodity hardware. In commodity hardware deployments, server failures are very common. For the same reason, HDFS is built to have high fault tolerance. The data transfer rate between compute nodes in HDFS is very high, which leads to a reduced risk of failure. HDFS creates smaller pieces of the big data and distributes them to different nodes. It also copies each smaller piece multiple times to different nodes. Hence, when any node with the data crashes, the system is automatically able to use the data from a different node and continue the process. This is the key feature of the HDFS system.

Architecture of HDFS

The architecture of HDFS is a master/slave architecture. An HDFS cluster always consists of a single NameNode. This single NameNode is a master server, and it manages the file system as well as regulating access to various files. In addition to the NameNode there are multiple DataNodes. There is always one DataNode for each data server. In HDFS a big file is split into one or more blocks, and those blocks are stored in a set of DataNodes. The primary task of the NameNode is to open, close or rename files and directories and regulate access to the file system, whereas the primary task of the DataNode is to read from and write to the file system. The DataNode is also responsible for the creation, deletion or replication of the data, based on instruction from the NameNode. In reality, NameNode and DataNode are software designed to run on commodity machines, built in the Java language.

Visual Representation of HDFS Architecture

Let us understand how HDFS works with the help of the diagram. The Client App (HDFS client) connects to the NameNode as well as to the DataNodes. Client App access to a DataNode is regulated by the NameNode: the NameNode grants the Client App a connection to a DataNode, and the Client App then connects to that DataNode directly. A big data file is divided into multiple data blocks (let us assume that those data chunks are A, B, C and D). The Client App will later on write data blocks directly to the DataNode. The Client App does not have to write directly to all the nodes. It just has to write to any one of the nodes, and the NameNode will decide on which other DataNodes it will have to replicate the data. In our example the Client App directly writes to DataNode 1 and DataNode 3. However, data chunks are automatically replicated to the other nodes. All the information about which data block is placed on which DataNode is written back to the NameNode.

High Availability During Disaster

Now, as multiple DataNodes have the same data blocks, in the case of any DataNode facing a disaster the entire process will continue, as another DataNode will assume the role of serving the specific data blocks which were on the failed node. This system provides very high tolerance to disaster and provides high availability. If you notice, there is only a single NameNode in our architecture. If that node fails, our entire Hadoop application will stop performing, as it is the single node where we store all the metadata. As this node is very critical, it is usually replicated on another cluster as well as on another data rack.
Though that replicated node is not operational in the architecture, it has all the necessary data to perform the task of the NameNode in case the NameNode fails. The entire Hadoop architecture is built to function smoothly even when there are node failures or hardware malfunctions. It is built on the simple concept that the data is so big that it is impossible to come up with a single piece of hardware which can manage it properly. We need lots of commodity (cheap) hardware to manage our big data, and hardware failure is part of using commodity servers. To reduce the impact of hardware failure, the Hadoop architecture is built to overcome the limitation of non-functioning hardware.

Tomorrow

In tomorrow’s blog post we will discuss the importance of the relational database in Big Data.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
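To make the block-and-replica idea concrete, here is a small conceptual sketch in C++ of how a file might be split into fixed-size blocks, with each block assigned to several DataNodes. The 64 MB block size and replication factor of 3 match Hadoop's classic defaults, but the round-robin placement below is a simplified stand-in, not the actual HDFS placement policy (which is rack-aware and decided by the NameNode).

```cpp
// Conceptual sketch only: splits a file size into blocks and assigns each
// block to `replication` DataNodes round-robin. Real HDFS placement is
// rack-aware; the NameNode records the block-to-DataNode mapping.
#include <cstdint>
#include <iostream>

int main() {
    const std::uint64_t blockSize = 64ull * 1024 * 1024; // classic 64 MB default
    const int replication = 3;                           // classic default factor
    const int dataNodes = 5;                             // hypothetical cluster size
    const std::uint64_t fileSize = 300ull * 1024 * 1024; // a 300 MB file

    std::uint64_t blocks = (fileSize + blockSize - 1) / blockSize; // ceiling division
    for (std::uint64_t b = 0; b < blocks; ++b) {
        std::cout << "block " << b << " -> DataNodes:";
        for (int r = 0; r < replication; ++r)
            std::cout << ' ' << (b + r) % dataNodes;     // three replicas per block
        std::cout << '\n';
    }
}
```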

    Read the article

  • Why does calling CreateDXGIFactory prevent my program from exiting?

    - by smoth190
I'm using CreateDXGIFactory to get the graphics adapters and display modes. When I call it, it works fine and I get all the data. However, when I exit my program, the main Win32 thread exits, but something stays open because it keeps debugging. Does CreateDXGIFactory create an extra thread that I'm not closing? I don't understand. The only thing I would suspect is that the documentation says it doesn't work if it's called from DllMain. It is in a DLL, but it's not called from DllMain. And it doesn't fail, either. I'm using DirectX 11. Here is the function that initializes DirectX. I haven't gotten past retrieving the refresh rate because of this problem. I commented everything out to pinpoint the problem.

```cpp
bool CGraphicsManager::InitDirectX(HWND hWnd, int width, int height)
{
    HRESULT result;
    IDXGIFactory* factory;
    IDXGIOutput* output;
    IDXGIAdapter* adapter;
    DXGI_MODE_DESC* displayModes;
    DXGI_ADAPTER_DESC adapterDesc;
    unsigned int modeCount = 0;
    unsigned int refreshNum = 0;
    unsigned int refreshDen = 0;

    //First, we need to get the monitor's refresh rate
    result = CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory);
    //if(FAILED(result))
    //{
    //    MemoryUtil::MessageBoxError(TEXT("InitDirectX"), 0, 0, TEXT("Failed to create DXGI factory\nError:\n%s"), DXGetErrorDescription(result));
    //    return false;
    //}

    /*//Create a graphics card adapter
    result = factory->EnumAdapters(0, &adapter);
    if(FAILED(result))
    {
        MemoryUtil::MessageBoxError(TEXT("InitDirectX"), 0, 0, TEXT("Failed to get graphics adapters\nError:\n%s"), DXGetErrorDescription(result));
        return false;
    }

    //Get the output
    result = adapter->EnumOutputs(0, &output);
    if(FAILED(result))
    {
        MemoryUtil::MessageBoxError(TEXT("InitDirectX"), 0, 0, TEXT("Failed to get adapter output\nError:\n%s"), DXGetErrorDescription(result));
        return false;
    }

    //Get the modes
    result = output->GetDisplayModeList(DXGI_FORMAT_R8G8B8A8_UNORM, DXGI_ENUM_MODES_INTERLACED, &modeCount, 0);
    if(FAILED(result))
    {
        MemoryUtil::MessageBoxError(TEXT("InitDirectX"), 0, 0, TEXT("Failed to get mode count\nError:\n%s"), DXGetErrorDescription(result));
        return false;
    }

    displayModes = new DXGI_MODE_DESC[modeCount];
    result = output->GetDisplayModeList(DXGI_FORMAT_R8G8B8A8_UNORM, DXGI_ENUM_MODES_INTERLACED, &modeCount, displayModes);
    if(FAILED(result))
    {
        MemoryUtil::MessageBoxError(TEXT("InitDirectX"), 0, 0, TEXT("Failed to get display modes\nError:\n%s"), DXGetErrorDescription(result));
        return false;
    }

    //Now we need to find one for our screen size
    for(unsigned int i = 0; i < modeCount; i++)
    {
        if(displayModes[i].Width == (unsigned int)width)
        {
            if(displayModes[i].Height == (unsigned int)height)
            {
                refreshNum = displayModes[i].RefreshRate.Numerator;
                refreshDen = displayModes[i].RefreshRate.Denominator;
                break;
            }
        }
    }

    //Store the video card data
    result = adapter->GetDesc(&adapterDesc);
    if(FAILED(result))
    {
        MemoryUtil::MessageBoxError(TEXT("InitDirectX"), 0, 0, TEXT("Failed to get adapter description\nError:\n%s"), DXGetErrorDescription(result));
        return false;
    }

    m_videoCard = new CVideoCard();
    MemoryUtil::CreateGameObject(m_videoCard);
    m_videoCard->VideoCardMemory = (unsigned int)(adapterDesc.DedicatedVideoMemory);
    wcstombs_s(0, m_videoCard->VideoCardDescription, 128, adapterDesc.Description, 128);*/

    //ReleaseCOM(output);
    //ReleaseCOM(adapter);
    ReleaseCOM(factory);
    //DeletePointerArray(displayModes);

    return true;
}
```

Also, I don't know if this means anything, but this is some of the output log when the function is commented out:

```
//...
'LostRock.exe': Loaded 'C:\Windows\SysWOW64\msvcr100d.dll', Symbols loaded.
'LostRock.exe': Loaded 'C:\Windows\SysWOW64\imm32.dll', Cannot find or open the PDB file
'LostRock.exe': Loaded 'C:\Windows\SysWOW64\msctf.dll', Cannot find or open the PDB file
'LostRock.exe': Loaded 'C:\Windows\SysWOW64\uxtheme.dll', Cannot find or open the PDB file
'LostRock.exe': Loaded 'C:\Program Files (x86)\Common Files\microsoft shared\ink\tiptsf.dll', Cannot find or open the PDB file
'LostRock.exe': Loaded 'C:\Windows\SysWOW64\ole32.dll', Cannot find or open the PDB file
'LostRock.exe': Loaded 'C:\Windows\SysWOW64\oleaut32.dll', Cannot find or open the PDB file
'LostRock.exe': Loaded 'C:\Windows\SysWOW64\clbcatq.dll', Cannot find or open the PDB file
'LostRock.exe': Loaded 'C:\Windows\SysWOW64\oleacc.dll', Cannot find or open the PDB file
The program '[6560] LostRock.exe: Native' has exited with code 0 (0x0).
```

And when it isn't commented out...

```
//...
'LostRock.exe': Loaded 'C:\Windows\SysWOW64\cfgmgr32.dll', Cannot find or open the PDB file
'LostRock.exe': Loaded 'C:\Windows\SysWOW64\devobj.dll', Cannot find or open the PDB file
'LostRock.exe': Loaded 'C:\Windows\SysWOW64\wintrust.dll', Cannot find or open the PDB file
'LostRock.exe': Loaded 'C:\Windows\SysWOW64\crypt32.dll', Cannot find or open the PDB file
'LostRock.exe': Loaded 'C:\Windows\SysWOW64\msasn1.dll', Cannot find or open the PDB file
'LostRock.exe': Unloaded 'C:\Windows\SysWOW64\setupapi.dll'
'LostRock.exe': Unloaded 'C:\Windows\SysWOW64\devobj.dll'
'LostRock.exe': Unloaded 'C:\Windows\SysWOW64\cfgmgr32.dll'
'LostRock.exe': Loaded 'C:\Windows\SysWOW64\clbcatq.dll', Cannot find or open the PDB file
'LostRock.exe': Loaded 'C:\Windows\SysWOW64\oleacc.dll', Cannot find or open the PDB file
The thread 'Win32 Thread' (0xb94) has exited with code 0 (0x0).
The program '[8096] LostRock.exe: Native' has exited with code 0 (0x0). //This is called when I click "Stop Debugging"
```

P.S. I know it is CreateDXGIFactory because if I comment it out, the program exits correctly.
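One aside while tidying this up: if the commented-out block is ever restored, `factory`, `adapter`, `output`, and the `displayModes` array are all leaked on the early-return paths. A minimal RAII wrapper along these lines keeps every DXGI interface released no matter where the function exits. This is a hand-rolled sketch; Microsoft's CComPtr or Microsoft::WRL::ComPtr do the same job properly, and none of this is claimed to be the cause of the exit hang itself.

```cpp
// Minimal RAII holder for COM interfaces (a hand-rolled stand-in for
// CComPtr / Microsoft::WRL::ComPtr). Releases the interface when the
// holder goes out of scope, including on early returns.
template <typename T>
class ScopedCom {
public:
    ScopedCom() : m_ptr(0) {}
    ~ScopedCom() { if (m_ptr) m_ptr->Release(); }
    T** operator&() { return &m_ptr; }       // for CreateDXGIFactory-style out-params
    T* operator->() const { return m_ptr; }  // call through to the interface
private:
    ScopedCom(const ScopedCom&);             // non-copyable
    ScopedCom& operator=(const ScopedCom&);
    T* m_ptr;
};

// Usage sketch:
//   ScopedCom<IDXGIFactory> factory;
//   HRESULT hr = CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory);
//   ...early returns are now leak-free; no ReleaseCOM calls needed.
```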

    Read the article

  • Poor Customer Service Example

    - by MightyZot
Lately I have been frustrated by examples of poor customer service. At least one is worth writing about, because I don’t think companies realize the effects of their service policies on loyal customers.

Bad Customer Service Example #1

Recently, I received an offer in the mail from my cable company, suddenLink. The offer was for an updated TiVo for $12/mo. Normally I ignore offers like this one because I already have the service they’re offering, and many times advertisers are offering alternatives to what is already an excellent product offering. I tend to exhibit a high level of loyalty to the products and brands that I use. In this case, we were looking to upgrade our TiVo, and this deal is attractive for several reasons:

- I don’t want to pay a huge amount up-front for the device, so paying a monthly amount for the device is attractive to me.
- My entertainment is almost all on a single invoice. I’m no longer going to be billed separately by suddenLink and TiVo.
- TiVo is still involved, so I am still loyal to the brand I love. I have resisted moving to other DVRs and services for over a decade.

I called suddenLink to order the new TiVo and was rewarded with great customer service. In fact, I can’t remember ever getting poor customer service from suddenLink. They are always there to answer my technical support questions and they are very responsive to outages.

Then I called TiVo. First of all, I chose the option on the phone system to change or cancel my service, which was consequently met by an inordinate hold time. (I’m calling this time inordinate because I get through very quickly if I want to purchase something.) This is a trend that I’ve noticed with companies – if you want me to be loyal to you, it should be just as easy to cancel your service as it is to purchase it. Because I should never be cancelling because I am unhappy. And if you ever want my business again, or more importantly a reference, then you’d better make the exit door open just as easily as the enter door.

After quite some time on hold, I talked to “Victor,” who was very courteous. Victor canceled my service and then told me that I could keep my current TiVo and transfer recorded programs to it from the new TiVo. Cool, I said, but what about the cost? He said there was no extra cost. This was also attractive to me because I paid for my TiVo and it would be good to use it for something at least. That was four months ago.

This month I noticed that TiVo was still charging me for my original service. I was a little upset, but I decided to give them the benefit of the doubt. After all, I am a loyal TiVo customer and I have resisted moving to other solutions for over a decade. I’m sure they will do whatever it takes to keep my business, through TiVo or through suddenLink. After quite some time on hold, I was able to talk to a customer service representative, “Les”. I explained that I am a loyal TiVo customer, but I purchased this deal through my cable provider. I’m still with TiVo, I just wanted a single bill and to take advantage of the pay-over-time option. “Les” told me that he was very sorry to hear that I’m leaving TiVo, to which I responded again that I wasn’t leaving TiVo, I just want one invoice and to take advantage of the pay-over-time. So, after explaining that I requested a termination of the non-suddenLink account (TiVo can see both of course), I was put on hold again for quite some time while my refund was “approved”. “Les” said that he could see my cancellation request back in July.

Note that it is now November, so they have billed me inappropriately four times. After quite some time, he came back on the line and told me that he was able to “get me most of my money back.” He got approval to refund 90 days. Even though I requested cancellation of one of my accounts, TiVo has that cancellation request on file, and they admit overbilling me, I am only going to get “most” of my money back. To top this experience off, when we were ready to hang up, “Les” told me that he was sorry to see me go and that he hoped I would come back to TiVo again. Again, I explained to “Les” that I have not left TiVo. I am just paying them through suddenLink. At that point, he went into a small dissertation about how this is a special arrangement they have with suddenLink and very few others. He made me feel like I was doing something wrong. Why should I feel that way? TiVo made the deal with suddenLink, not me, and the deal seemed like a good compromise for me to be able to get what I need.

Here is what TiVo Customer Service accomplished on those two calls – I no longer feel like I need to be loyal to the TiVo brand or service. If I had been treated better on these two calls, I would still be recommending TiVo to my friends. They would still be getting revenue from a loyal customer, who paid the same rate for over a decade, and this article wouldn’t be here for you to read. Interesting… In my opinion, if you want brand loyalty, be loyal to your customers!

    Read the article

  • SQL Server Developer Tools – Codename Juneau vs. Red-Gate SQL Source Control

    - by Ajarn Mark Caldwell
    So how do the new SQL Server Developer Tools (previously code-named Juneau) stack up against SQL Source Control?  Read on to find out. At the PASS Community Summit a couple of weeks ago, it was announced that the previously code-named Juneau software would be released under the name of SQL Server Developer Tools with the release of SQL Server 2012.  This replacement for Database Projects in Visual Studio (also known in a former life as Data Dude) has some great new features.  I won’t attempt to describe them all here, but I will applaud Microsoft for making major improvements.  One of my favorite changes is the way database elements are broken down.  Previously every little thing was in its own file.  For example, indexes were each in their own file.  I always hated that.  Now, SSDT uses a pattern similar to Red-Gate’s and puts the indexes and keys into the same file as the overall table definition. Of course there are really cool features to keep your database model in sync with the actual source scripts, and the rename refactoring feature is now touted as being more than just a search and replace, but rather a “semantic-aware” search and replace.  Funny, it reminds me of SQL Prompt’s Smart Rename feature.  But I’m not writing this just to criticize Microsoft and argue that they are late to the party with this feature set.  Instead, I do see it as a viable alternative for folks who want all of their source code to be version controlled, but there are a couple of key trade-offs that you need to know about when you choose which tool set to use. First, the basics Both tool sets integrate with a wide variety of source control systems including the most popular: Subversion, GIT, Vault, and Team Foundation Server.  Both tools have integrated functionality to produce objects to upgrade your target database when you are ready (DACPACs in SSDT, integration with SQL Compare for SQL Source Control).  If you regularly live in Visual Studio or the Business Intelligence Development Studio (BIDS) then SSDT will likely be comfortable for you.  Like BIDS, SSDT is a Visual Studio Project Type that comes with SQL Server, and if you don’t already have Visual Studio installed, it will install the shell for you.  If you already have Visual Studio 2010 installed, then it will just add this as an available project type.  On the other hand, if you regularly live in SQL Server Management Studio (SSMS) then you will really enjoy the SQL Source Control integration from within SSMS.  Both tool sets store their database model in script files.  In SSDT, these are on your file system like other source files; in SQL Source Control, these are stored in the folder structure in your source control system, and you can always GET them to your file system if you want to browse them directly. For me, the key differentiating factors are 1) a single, unified check-in, and 2) migration scripts.  How you value those two features will likely make your decision for you. Unified Check-In If you do a continuous-integration (CI) style of development that triggers an automated build with unit testing on every check-in of source code, and you use Visual Studio for the rest of your development, then you will want to really consider SSDT.  Because it is just another project in Visual Studio, it can be added to your existing Solution, and you can then do a complete, or unified single check-in of all changes whether they are application or database changes.  
This is simply not possible with SQL Source Control because it is in a different development tool (SSMS instead of Visual Studio) and there is no way to do one unified check-in between the two.  You CAN do really fast back-to-back check-ins, but there is the possibility that the automated build that is triggered from the first check-in will cause your unit tests to fail and the CI tool to report that you broke the build.  Of course, the automated build that is triggered from the second check-in which contains the “other half” of your changes should pass and so the amount of time that the build was broken may be very, very short, but if that is very, very important to you, then SQL Source Control just won’t work; you’ll have to use SSDT. Refactoring and Migrations If you work on a mature system, or on a not-so-mature but also not-so-well-designed system, where you want to refactor the database schema as you go along, but you can’t have data suddenly disappearing from your target system, then you’ll probably want to go with SQL Source Control.  As I wrote previously, there are a number of changes which you can make to your database that the comparison tools (both from Microsoft and Red Gate) simply cannot handle without the possibility (or probability) of data loss.  Currently, SSDT only offers you the ability to inject PRE and POST custom deployment scripts.  There is no way to insert your own script in the middle to override the default behavior of the tool.  In version 3.0 of SQL Source Control (Early Access version now available) you have that ability to create your own custom migration script to take the place of the commands that the tool would have done, and ensure the preservation of your data.  Or, even if the default tool behavior would have worked, but you simply know a better way then you can take control and do things your way instead of theirs. You Decide In the environment I work in, our automated builds are not triggered off of check-ins, but off of the clock (currently once per night) and so there is no point at which the automated build and unit tests will be triggered without having both sides of the development effort already checked-in.  Therefore having a unified check-in, while handy, is not critical for us.  As for migration scripts, these are critically important to us.  We do a lot of new development on systems that have already been in production for years, and it is not uncommon for us to need to do a refactoring of the database.  Because of the maturity of the existing system, that often involves data migrations or other additional SQL tasks that the comparison tools just can’t detect on their own.  Therefore, the ability to create a custom migration script to override the tool’s default behavior is very important to us.  And so, you can see why we will continue to use Red Gate SQL Source Control for the foreseeable future.

    Read the article

  • An error occurred synchronizing Windows with time.windows.com

    - by Killrawr
Okay, so I've tried stopping and re-registering the w32time service on this Windows Server 2008 Enterprise computer:

```
C:\Users\Administrator>net stop w32time
The Windows Time service is stopping.
The Windows Time service was stopped successfully.

C:\Users\Administrator>w32tm /unregister
The following error occurred: Access is denied. (0x80070005)

C:\Users\Administrator>w32tm /unregister
W32Time successfully unregistered.

C:\Users\Administrator>w32tm /register
W32Time successfully registered.

C:\Users\Administrator>net start w32time
The Windows Time service is starting.
The Windows Time service was started successfully.
```

(Source: http://social.technet.microsoft.com/Forums/en-US/winserverDS/thread/9bdfc2cc-4775-4435-8868-57d214e1e3ba/)

And I still get this error from the Date and Time, Internet Time tab (after also following the steps here). I've even tried the Atomic Time Clock Worldtimeserver, and I get the error: The following error occurred: The specified module could not be found. (0x8007007E). I've also disabled the Windows Firewall, which might have been blocking the synchronization. I've done a file scan with sfc /scannow that came back with no errors:

```
C:\Users\Administrator>sfc /scannow

Beginning system scan.  This process will take some time.
Beginning verification phase of system scan.
Verification 100% complete.
Windows Resource Protection did not find any integrity violations.
```

But I'm not having much luck. Is there any way to solve this? Or are the time.windows.com servers unsupported because the software is from 2008? (I really don't know.)

My ping result for time.windows.com:

```
C:\Users\Administrator>ping time.windows.com

Pinging time.microsoft.akadns.net [65.55.21.22] with 32 bytes of data:
Request timed out.
Request timed out.
Request timed out.
Request timed out.

Ping statistics for 65.55.21.22:
    Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
```

And the tracert result:

```
C:\Users\Administrator>tracert time.windows.com

Tracing route to time.microsoft.akadns.net [65.55.21.24]
over a maximum of 30 hops:

  1     1 ms    <1 ms    <1 ms  192.168.1.1
  2    32 ms    31 ms    32 ms  be2-100.bras1wtc.wlg.vf.net.nz [203.109.129.113]
  3    31 ms    32 ms    31 ms  be5-100.ppnzwtc01.wlg.vf.net.nz.129.109.203.in-addr.arpa [203.109.129.114]
  4    31 ms    31 ms    31 ms  gi0-2-0-3.ppnzwtc01.wlg.vf.net.nz.180.109.203.in-addr.arpa [203.109.180.210]
  5    31 ms    31 ms    30 ms  gi0-2-0-3.ppnzwtc02.wlg.vf.net.nz [203.109.180.209]
  6   167 ms   166 ms   166 ms  ip-141.199.31.114.VOCUS.net.au [114.31.199.141]
  7   175 ms   175 ms   175 ms  microsoft.com.any2ix.coresite.com [206.223.143.143]
  8   177 ms   180 ms   176 ms  xe-7-0-2-0.by2-96c-1a.ntwk.msn.net [207.46.42.176]
  9   205 ms   205 ms   204 ms  xe-10-0-2-0.co1-96c-1b.ntwk.msn.net [207.46.45.31]
 10     *        *        *     Request timed out.
 11     *        *        *     Request timed out.
 12     *        *        *     Request timed out.
 13     *        *        *     Request timed out.
 14     *        *        *     Request timed out.
 15     *        *        *     Request timed out.
 16  ^C
```

And nslookup:

```
C:\Users\Administrator>nslookup time.windows.com
Server:  UnKnown
Address:  192.168.1.1

Non-authoritative answer:
Name:    time.microsoft.akadns.net
Address:  65.55.21.22
Aliases:  time.windows.com
```
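For reference, I know the service can be pointed at a different NTP server instead of time.windows.com; something like the following (syntax from w32tm's built-in help, and I haven't confirmed it cures this particular 0x8007007E error) is what I'd try next:

```
C:\Users\Administrator>w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual /update
C:\Users\Administrator>net stop w32time
C:\Users\Administrator>net start w32time
C:\Users\Administrator>w32tm /resync /rediscover
```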

    Read the article

  • Data management in unexpected places

    - by Ashok_Ora
When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput with low latencies and high availability. Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router?

Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example, a destination, associated SLAs, etc. For each packet, the router has to determine the address of the next “hop” to the destination; it has to determine how to prioritize this packet. If it’s a high priority packet, then it has to be sent on its way before lower priority packets. As a consequence of prioritizing high priority packets, lower priority data packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone’s sure to ask). You have to do all this (and more) while preserving high availability, i.e. if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won’t be happy with a “choppy” VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator – the router is most likely in a remote location – it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen.

How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, the kind of packet (e.g. voice vs. data), the SLAs associated with the “owner” of the packet, etc. It looks up the internal database of “rules” for how to process this packet and handles the packet accordingly. The software might choose to hold on to the packet safely for some period of time, if it’s a low priority packet. Ah – this sounds very much like a database problem. For each packet, you have to minimally:

- Look up the most efficient next “hop” towards the destination. The “most efficient” next hop can change, depending on latency, availability, etc.
- Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data ftp).
- Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet, since a network packet is a small “slice” of a session. The context for the “header” packet needs to be stored in the router in order to make this work.
- If the priority of the packet is low, then “store” the packet temporarily in the router until it is time to forward the packet to the next hop.
- Update various statistics about the packet.

In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the “send” in a single transaction so that the forwarding address doesn’t change while you’re sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high performance, highly available embeddable database, designed for exactly these kinds of usage scenarios.

Berkeley DB is a robust, reliable, proven solution that is currently being used in these scenarios. First and foremost, Berkeley DB (or BDB for short) is very, very fast. It can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database, or as a disk-persistent database. BDB provides high availability – if one board in the router fails, the system can automatically fail over to another board – no manual intervention required. BDB is self-administering – there’s no need for manual intervention in order to maintain a BDB application. No need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB has been used in over 200 million deployments worldwide over the past two decades for mission-critical applications such as the one described here. You have a choice of spending valuable resources to implement similar functionality, or you could simply embed BDB in your application and off you go! I know what I’d do – choose BDB, so I can focus on my business problem. What will you do?
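As a flavor of what "embed BDB" looks like in practice, here is a minimal sketch using the Berkeley DB C API (callable from C++) to store and fetch a next-hop route entry. Error handling is elided and the key/value layout is invented for illustration; a real router would add transactions, replication, and possibly a pure in-memory configuration.

```cpp
// Minimal Berkeley DB sketch: a btree keyed by destination prefix, valued
// by next-hop address. Error handling elided; key/value layout hypothetical.
#include <db.h>       // Berkeley DB header
#include <cstdio>
#include <cstring>

int main() {
    DB *dbp = 0;
    db_create(&dbp, 0, 0);                          // allocate a DB handle
    dbp->open(dbp, 0, "routes.db", 0,               // file-backed btree; BDB can
              DB_BTREE, DB_CREATE, 0664);           // also run purely in memory

    const char dest[] = "10.1.0.0/16";
    const char hop[]  = "192.168.7.1";

    DBT key, data;
    std::memset(&key, 0, sizeof key);
    std::memset(&data, 0, sizeof data);
    key.data  = (void *)dest; key.size  = sizeof dest;
    data.data = (void *)hop;  data.size = sizeof hop;
    dbp->put(dbp, 0, &key, &data, 0);               // store the route

    DBT found;
    std::memset(&found, 0, sizeof found);
    if (dbp->get(dbp, 0, &key, &found, 0) == 0)     // look up the next hop
        std::printf("next hop for %s: %s\n", dest, (char *)found.data);

    dbp->close(dbp, 0);
    return 0;
}
```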

    Read the article

  • Why Executives Need Enterprise Project Portfolio Management: 3 Key Considerations to Drive Value Across the Organization

    - by Melissa Centurio Lopes
By: Guy Barlow, Oracle Primavera Industry Strategy Director

Over the last few years there has been a tremendous shift – some would say tectonic in nature – that has brought project management to the forefront of executive attention. Many factors have been driving this growing awareness, most notably the global financial crisis, heightened regulatory environments and a need to more effectively operationalize corporate strategy. Executives in India are no exception. In fact, given the phenomenal rate of progress of the country, top of mind for all executives (whether in finance, operations, IT, etc.) is the need to build capacity, ramp up production and ensure that the right resources are in place to capture growth opportunities. This applies across all industries, from asset-intensive – like oil & gas, utilities and mining – to traditional manufacturing and the public sector; services-based sectors such as financial, telecom and life sciences are also part of the mix.

However, compounding matters is a complex interplay between projects – big and small, complex and simple – as companies expand and grow both domestically and internationally. So having a standardized, enterprise-wide solution for project portfolio management is natural. Failing to do so is akin to having two ERP systems, one to manage “large” invoices and one to manage “small” invoices. It makes no sense and provides no enterprise-wide visibility. Therefore, it is imperative for executives to understand the full range of their business commitments, the benefit to the company, current performance and associated course corrections if needed. Irrespective of industry and regardless of the use case (e.g., building a power plant, launching a new financial service or developing a new automobile), company leaders need to approach the value of enterprise project portfolio management via 3 critical areas:

1. Greater Financial Discipline – Improving financial rigor and results through better governance and control is an imperative given today’s financial uncertainty and greater investment scrutiny. For example, as India plans a US$1 trillion investment in the country’s infrastructure, how do companies ensure costs are managed? How do you control cash flow? Can you easily report this to stakeholders?

2. Improved Operational Excellence – Increase efficiency and reduce costs through robust collaboration and integration. Upwards of 66% of cost variances are driven by poor supplier collaboration. As you execute initiatives, do you have visibility into the performance of your supply base? How are they integrated into the broader program plan?

3. Enhanced Risk Mitigation – Manage and react to uncertainty through improved transparency and contingency planning. What happens if you’re faced with a skills shortage? How do you plan and account for geo-political or weather related events?

In summary, projects are not just the delivery of a product or service to a customer inside a predetermined schedule; they often form a contractual and even moral obligation to shareholders and stakeholders alike. Hence the intimate connection between executives and projects, with the latter providing executives with the platform to demonstrate that their organization has the capabilities and competencies needed to meet and, whenever possible, exceed their customer commitments. Effectively developing and operationalizing corporate strategy is the hallmark of successful executives, and enterprise project and portfolio management allows them to achieve this goal.

Article was first published for Manage India, an e-newsletter, PMI India.

    Read the article

  • Identity Globe Trotters (Sep Edition): The Social Customer

    - by Tanu Sood
Welcome to the inaugural edition of our monthly series - Identity Globe Trotters. Starting today, the last Friday of every month, we will explore regional commentary on Identity Management. We will invite guest contributors from around the world to share their opinions and experiences around Identity Management and highlight regional nuances, specific drivers, solutions and more. Today's feature is contributed by Michael Krebs, Head of Business Development at esentri consulting GmbH, an SOA-specialized Oracle Gold Partner based in Ettlingen, Germany. In his current role, Krebs is dealing with the latest developments in Enterprise Social Networking and the integration of social media within business processes.

By Michael Krebs

The relevance of "easy sign-on" in the age of the "Social Customer"

With the growth of social networks, the time people spend within those closed "eco-systems" is growing year by year. With social networks looking to integrate search engines, as Facebook announced some weeks ago, their relevance will continue to grow in contrast to the more conventional search engines. This is one of the reasons why users' social network accounts are becoming more and more like a virtual fingerprint. With the growing relevance of social networks, a simple way for customers to get in touch with, say, customer care or contract departments will be crucial for sales processes in critical markets. Customers want to have one single point of contact and also an easy "login method" with no dedicated usernames, passwords or proprietary accounts. The golden rule in the future social-media-driven markets will be: the lower the complexity of the initial contact, the better a company can profit from social networks. If you, for example, can generate a smart way for an existing customer to use self-service portals, the cost of providing phone support can be lowered significantly.

Recruiting and hiring of "Digital Natives"

Another particular example is "social" recruiting processes. The so-called "digital natives" don't want to type their profile facts and CVs into proprietary systems. Why not use the actual LinkedIn profile? In the German-speaking region, the market for professional social networks is dominated by XING, the equivalent of LinkedIn. A few weeks back, this network also opened up its interfaces for integrating social sign-ons or the usage of profile data for recruiting purposes. In the European (and especially the German) employment market, where the number of young candidates is shrinking because of the low birth rate in the region, it will become essential to use social-media-supported hiring processes to find and on-board the rare talents. In fact, you will see traditional recruiting websites integrated with social hiring to attract the best talents in the market, where the pool of potential candidates has decreased dramatically over the years.

Identity Management as a key factor in the Customer Experience process

To create the biggest value for customers and also future employees, companies need to connect their HCM or CRM systems with powerful identity management solutions. With the highly efficient Oracle (social and mobile enabling) Identity Management solution, enterprises can combine easy sign-on with secure connections to the backend infrastructure. This combination enables a "one-stop" service with personalized content for customers and talents. In addition, companies can collect valuable data for the enrichment of their CRM data.
The goal is to enrich the so called "Customer Experience" via all available customer channels and contact points. Those systems have already gained importance in the B2C-markets and will gradually spread out to B2B-channels in the near future. Conclusion: Central and "Social" Identity management is key to Customer Experience Management and Talent Management For a seamless delivery of "Customer Experience Management" and a modern way of recruiting the best talent, companies need to integrate Social Sign-on capabilities with modern CX - and Talent management infrastructure. This lowers the barrier for existing and future customers or employees to get in touch with sales, support or human resources. Identity management is the technology enabler and backbone for a modern Customer Experience Infrastructure. Oracle Identity management solutions provide the opportunity to secure Social Applications and connect them with modern CX-solutions. At the end, companies benefit from "best of breed" processes and solutions for enriching customer experience without compromising security. About esentri: esentri is a provider of enterprise social networking and brings the benefits of social network communication into business environments. As one key strength, esentri uses Oracle Identity Management solutions for delivering Social and Mobile access for Oracle’s CRM- and HCM-solutions. …..End Guest Post…. With new and enhanced features optimized to secure the new digital experience, the recently announced Oracle Identity Management 11g Release 2 enables organizations to securely embrace cloud, mobile and social infrastructures and reach new user communities to help further expand and develop their businesses. Additional Resources: Oracle Identity Management 11gR2 release Oracle Identity Management website Datasheet: Mobile and Social Access (pdf) IDM at OOW: Focus on Identity Management Facebook: OracleIDM Twitter: OracleIDM We look forward to your feedback on this post and welcome your suggestions for topics to cover in Identity Globe Trotters. Last Friday, every month!

    Read the article

  • Adjusting server-side tickrate dynamically

    - by Stuart Blackler
I know nothing of game development/this site, so I apologise if this is completely foobar. Today I experimented with building a small game loop for a network game (think MW3, CSGO etc). I was wondering why they do not build in automatic rate adjustment based on server performance. Would it affect the client that much if the client knew each frame is based on a given tickrate? Has anyone attempted this before? Here is what my noobish C++ brain came up with earlier. It will improve the tickrate if it has been stable for x ticks. If it "lags", the tickrate will be reduced by y amount:

```cpp
// GameEngine.cpp : Defines the entry point for the console application.
//
#ifdef WIN32
#include <Windows.h>
#else
#include <sys/time.h>
#include <ctime>
#endif
#include <iostream>
#include <dos.h>
#include "stdafx.h"

using namespace std;

UINT64 GetTimeInMs()
{
#ifdef WIN32
    /* Windows */
    FILETIME ft;
    LARGE_INTEGER li;

    /* Get the amount of 100 nano seconds intervals elapsed since January 1, 1601 (UTC)
     * and copy it to a LARGE_INTEGER structure. */
    GetSystemTimeAsFileTime(&ft);
    li.LowPart = ft.dwLowDateTime;
    li.HighPart = ft.dwHighDateTime;

    UINT64 ret = li.QuadPart;
    ret -= 116444736000000000LL; /* Convert from file time to UNIX epoch time. */
    ret /= 10000; /* From 100 nano seconds (10^-7) to 1 millisecond (10^-3) intervals */

    return ret;
#else
    /* Linux */
    struct timeval tv;
    gettimeofday(&tv, NULL);

    uint64 ret = tv.tv_usec;
    /* Convert from micro seconds (10^-6) to milliseconds (10^-3) */
    ret /= 1000;
    /* Adds the seconds (10^0) after converting them to milliseconds (10^-3) */
    ret += (tv.tv_sec * 1000);

    return ret;
#endif
}

int _tmain(int argc, _TCHAR* argv[])
{
    int sv_tickrate_max = 1000;  // The maximum amount of ticks per second
    int sv_tickrate_min = 100;   // The minimum amount of ticks per second
    int sv_tickrate_adjust = 10; // How much to de/increment the tickrate by
    int sv_tickrate_stable_before_increment = 1000; // How many stable ticks before we increase the tickrate again
    int sys_tickrate_current = sv_tickrate_max; // Always start at the highest possible tickrate for the best performance
    int counter_stable_ticks = 0; // How many ticks we have not lagged for

    UINT64 __startTime = GetTimeInMs();
    int ticks = 100000;
    while(ticks > 0)
    {
        int maxTimeInMs = 1000 / sys_tickrate_current;
        UINT64 _startTime = GetTimeInMs();

        // Long code here...
        cout << ".";

        UINT64 _timeTaken = GetTimeInMs() - _startTime;
        if(_timeTaken < maxTimeInMs)
        {
            Sleep(maxTimeInMs - _timeTaken);
            counter_stable_ticks++;
            if(counter_stable_ticks >= sv_tickrate_stable_before_increment)
            {
                // reset the stable # ticks counter
                counter_stable_ticks = 0;

                // make sure that we don't go over the maximum tickrate
                if(sys_tickrate_current + sv_tickrate_adjust <= sv_tickrate_max)
                {
                    sys_tickrate_current += sv_tickrate_adjust;
                    // let me know in console #DEBUG
                    cout << endl << "Improving tickrate. New tickrate: " << sys_tickrate_current << endl;
                }
            }
        }
        else if(_timeTaken > maxTimeInMs)
        {
            cout << endl;
            if((sys_tickrate_current - sv_tickrate_adjust) > sv_tickrate_min)
            {
                sys_tickrate_current -= sv_tickrate_adjust;
            }
            else
            {
                if(sys_tickrate_current == sv_tickrate_min)
                {
                    cout << "Please reduce sv_tickrate_min..." << endl;
                }
                else
                {
                    sys_tickrate_current = sv_tickrate_min;
                }
            }
            // let me know in console #DEBUG
            cout << "The server has lag. Reduced tickrate to: " << sys_tickrate_current << endl;
        }
        ticks--;
    }

    UINT64 __timeTaken = GetTimeInMs() - __startTime;
    cout << endl << endl << "Total time in ms: " << __timeTaken;
    cout << endl << "Ending tickrate: " << sys_tickrate_current;

    char test;
    cin >> test;
    return 0;
}
```

    Read the article

  • Waiting for Windows 8: A Long, Hot Summer

    - by andrewbrust
Microsoft has revealed some things about Windows 8, and revealed a part of the developer story for new Windows 8 “tailored,” “immersive” applications.  In retrospect, very little was shared.  The bit that was revealed to us is that those applications can be developed using a combination of HTML 5 and JavaScript.  Not much else was said, except that additional details would be revealed at Microsoft’s //Build/ conference in Anaheim, California in September.

This has left a lot of people in suspense, and it seems that suspended state is going to last all summer.  The problem, of course, is that in the absence of hard information, people fill the void with Speculation, Rumor and Gloom.  That’s a bit like Fear, Uncertainty and Doubt, except that it’s self-imposed by the Microsoft community and not planted by Microsoft’s competitors.

This is a less-than-perfect situation.  Not only is it causing developers to worry about the value of their skill sets, but I am already hearing from consulting shops that customers are getting nervous too and, in extreme cases, opting for non-Microsoft tools for their projects as a result.  I’m also hearing from dev tool ISVs that sales have suffered as a result.  It’s quite possible that the customers moving off .NET wanted to do so anyway, and it’s also possible that dev tool ISVs are suffering slower sales this year due to a slowed rate of economic recovery.  Without hard information, people tend to interpret things negatively.  Actually, that’s the major point in all of this.  While there is a multitude of opinions about what the Windows 8 development platform will look like once fully revealed, there is an emerging consensus around one thing: it sure would help if Microsoft revealed more of its strategy…just enough to quash absurd rumors, stabilize the .NET ecosystem and get people to stay calm.

We’ve had some reassurances thus far: there will be a Windows desktop mode; we’ll still have Windows Explorer, we’ll still run Office, we’ll still have a task bar, and all the skills and tools we use now will still work there.  But with reassurances like that…people still feel insecure.  Because telling us that Windows 8 will have what is essentially a “classic” mode sure makes it sound like today’s skill sets will soon be “classic” too…and then maybe they’ll just become obsolete.

Humans find change scary; it’s natural.  And when left alone with their fears – because no one is saying anything to dispel them – people can go from frightened to paranoid, and can start viewing things in a downright conspiratorial light.  It would be great if Microsoft stepped into the void now and told us what is coming – especially because whatever they tell us is bound to be at least a little better than what people think they are going to hear.

I don’t know what the announcements will be, but I do have it on authority, from a number of sources, that Microsoft isn’t going to talk until //Build/.  That means no news until September 13th.  Nothing until after Labor Day.  You get zippo until after the Back-to-School sales are done.

What to do?  Try not to let the dark voices of gloom and doom fill your head.  Even in the absence of answers, we still have some important facts:

- The .NET developer community is huge.
- Microsoft’s customers have major investments in .NET, and in .NET skills.
- Political infighting in Redmond might make for irrational decisions, but ultimately public companies can’t just alienate their advocates and piss off their customers.  Spite doesn’t trump fiduciary responsibility.
- The computing device markets are changing, software is changing, software business models are changing and developers are changing.  Microsoft has to keep up.
- The HTML + JavaScript community is huge too, and it includes many of the “changed” developers.
- Public companies can’t ignore new markets nor the popular standards that can help them enter those new markets.  Loyalty doesn’t trump fiduciary responsibility either.

If Microsoft can appeal to new developers, then it should.  If Microsoft can keep catering to its existing developers and customers -- not just through legacy support, but also through empowering futures -- then it probably will.  You don’t have to shove your old friends out into the rain to make room for new ones; you can bring those new constituents in under a bigger tent.  I hope Microsoft will enlarge the tent, and I have trouble imagining why it would not.

    Read the article

  • Answers to Conference Revenue Tweet Questions

    - by D'Arcy Lussier
Originally posted on: http://geekswithblogs.net/dlussier/archive/2014/05/27/156612.aspx

I tweeted this the other day… …and I had some people tweet back questioning/asking about the profit number. So here’s how I came to that figure.

Total Revenue

Let’s talk total revenue first. This conference has a huge list of companies/organizations paying some amount for sponsorship:

- Platinum ($1500) x 5 = $7500
- Gold ($1000) x 3 = $3000
- Silver ($500) x 9 = $4500
- Bronze ($250) x 13 = $3250

There’s also a title sponsor level, but there’s no mention of how much that is…more than $1500 though, so let’s just say $2500.

Total Sponsorship Revenue: $20750.00

For registrations, this conference is claiming over 300 attendees. We’ll just calculate at 300 and the discounted “member rate” – $249.

Total Registration Revenue: $74700.00

Booth space is also sold for a vendor area, but let’s just leave that out of the calculation.

Total Event Revenue: $95450.00

Now that we know how much money we’re playing with, let’s knock out the costs for the event.

Total Costs

Hard Costs

- Audio/Visual Services: $2000
- Conference Rooms (4 Breakouts + Plenary): $2500
- Insurance: $700
- Printing/Signage: $1500
- Travel/Hotel Rooms: $2000
- Keynotes: $2000

So let’s talk about these hard costs first. First you may be asking about the audio/visual. Yes, those services can be that high, actually higher. But since there’s an A/V company touted as the official A/V provider, I gotta think there’s some discount for being branded as such. Conference rooms are actually an inflated amount of $500 per. Venues make money on the food they sell at events, not on room rentals. The more food, the cheaper the rooms tend to be offered at. Still, for the sake of argument, let’s set the rooms at $500 each, knowing that they could be lower. For travel and hotel rooms…it appears that most of the speakers at this conference are local, meaning there’s no travel or hotel cost. But a few of them I wasn’t too sure about…so let’s factor in enough to cover two outside speakers (airfare and hotel). There are two keynotes for this event, and depending on the event those may be paid gigs. I’m not sure if they are or not, but considering the closing one is a comedian, I’m going to add some funds here for that just in case.

Total Hard Costs: $10700

Now that the hard costs are out of the way, let’s talk about the food costs.

Food Costs

The conference is providing a continental breakfast (YEEEESH!), some level of luncheon, and I have to assume coffee breaks in between. Let’s look at those costs:

- Continental Breakfast: $12 per person
- Lunch Buffet: $18 per person
- Coffee Breaks (2): $6 per person (or $3 a cup)
- Snacks (2): $10 per person (or $5 each)

Note that the lunch buffet assumes a *good* lunch buffet – two entrees, starch, vegetable, salads, and bread. Not sure if there’ll be snacks during coffee breaks, but let’s assume so.

Total Food Cost Per Person: $46
Food Cost: $14950
Gratuity: $2691
Total Food Cost: $17641

Total food cost is based on the $46 per person cost x 325: 300 for attendance, 12 for speakers, and an extra 13 for volunteers/organizers. Gratuity is 18%.

Grand Totals

So let’s sum things up here.

Total Costs
- Hard Costs: $10700.00
- Food Costs: $17641.00
- Total: $28341.00
- Taxes: $3685.00
- Grand Total: $32026.00

Total Revenue
- Sponsorship: $20750.00
- Registration: $74700.00
- Grand Total: $95450.00

Total Profit: $63424.00

Now what if the registration numbers were lower and they only got 100 people to show up? In that scenario there’d still be a profit of just under $26000.
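For anyone who wants to poke at the assumptions, here's a quick back-of-napkin check in C++ that reproduces the figures above. The 13% tax rate is my inference from the $3685 tax line; everything else comes straight from the numbers in this post.

```cpp
// Back-of-napkin check of the conference numbers above. The 13% tax rate
// is inferred from the $3685 figure; all other constants come from the post.
#include <iostream>

int main() {
    // Revenue
    double sponsorship  = 1500*5 + 1000*3 + 500*9 + 250*13 + 2500; // $20750
    int attendees       = 300;
    double registration = attendees * 249.0;                       // $74700
    double revenue      = sponsorship + registration;              // $95450

    // Costs
    double hardCosts    = 2000 + 2500 + 700 + 1500 + 2000 + 2000;  // $10700
    int heads           = attendees + 12 + 13;  // attendees + speakers + volunteers
    double foodPerHead  = 12 + 18 + 6 + 10;     // breakfast, lunch, coffee, snacks
    double food         = foodPerHead * heads * 1.18;              // 18% gratuity
    double costs        = (hardCosts + food) * 1.13;               // 13% tax (inferred)

    std::cout << "Revenue: $" << revenue << "\n"
              << "Costs:   $" << costs   << "\n"
              << "Profit:  $" << revenue - costs << "\n";          // ~ $63424
}
```

Dropping attendees to 100 in the same sketch gives a profit of roughly $25900 – the “just under $26000” scenario above.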
Closing Comments

A couple of things to note:

- I haven't factored in anything for prizes. Not sure if any will be given out.
- We didn't add in the booth space revenue.
- We're assuming speakers aren't getting paid, but even if they were, at the high end it's $12000 ($1000 per session), which is probably an inflated number for local speakers.
- Note that all registrations were set to the "member" discounted price. The non-member registration price is higher. There is also an option for those that just want to show up for the opening keynote.

There you have it! Let me know if you have any questions. D
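Editor's addition: the math above holds together; here is a quick C# sketch that re-runs it end to end. The only assumed input is the tax rate, inferred at roughly 13% from the stated $3685 tax on $28341 of costs (the post rounds the tax up a dollar, hence its $63424).

    using System;

    class EventMath
    {
        static void Main()
        {
            // revenue, straight from the post's figures
            int sponsorship  = 1500 * 5 + 1000 * 3 + 500 * 9 + 250 * 13 + 2500; // $20750
            int registration = 300 * 249;                                       // $74700
            int revenue      = sponsorship + registration;                      // $95450

            // costs: hard costs plus catering for 325 heads (300 + 12 + 13)
            int hardCosts = 2000 + 2500 + 700 + 1500 + 2000 + 2000;             // $10700
            double food   = 46.0 * (300 + 12 + 13) * 1.18;                      // $17641 incl. 18% tip
            double costs  = (hardCosts + food) * 1.13;                          // assumed ~13% tax

            Console.WriteLine("Profit: {0:C}", revenue - costs);                // about $63425
        }
    }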

    Read the article

  • Understanding implementation of glu.PickMatrix()

    - by stoney78us
I am working on an OpenGL project which requires an object selection feature. I use the OpenTK framework to do this; however, OpenTK doesn't support the glu.PickMatrix() method to define the picking region. I ended up googling its implementation, and here is what I got:

void GluPickMatrix(double x, double y, double deltax, double deltay, int[] viewport) { if (deltax <= 0 || deltay <= 0) { return; } GL.Translate((viewport[2] - 2 * (x - viewport[0])) / deltax, (viewport[3] - 2 * (y - viewport[1])) / deltay, 0); GL.Scale(viewport[2] / deltax, viewport[3] / deltay, 1.0); }

I totally fail to understand this piece of code. Moreover, it doesn't work with the following code sample:

//selectbuffer private int[] _selectBuffer = new int[512]; private void Init() { float[] triangleVertices = new float[] { 0.0f, 1.0f, 0.0f, -1.0f, -1.0f, 0.0f, 1.0f, -1.0f, 0.0f }; float[] _triangleColors = new float[] { 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f }; GL.GenBuffers(2, _vBO); GL.BindBuffer(BufferTarget.ArrayBuffer, _vBO[0]); GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(sizeof(float) * _triangleVertices.Length), _triangleVertices, BufferUsageHint.StaticDraw); GL.VertexPointer(3, VertexPointerType.Float, 0, 0); GL.BindBuffer(BufferTarget.ArrayBuffer, _vBO[1]); GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(sizeof(float) * _triangleColors.Length), _triangleColors, BufferUsageHint.StaticDraw); GL.ColorPointer(3, ColorPointerType.Float, 0, 0); GL.EnableClientState(ArrayCap.VertexArray); GL.EnableClientState(ArrayCap.ColorArray); //Selectbuffer set up GL.SelectBuffer(512, _selectBuffer); }

private void glControlWindow_Paint(object sender, PaintEventArgs e) { GL.Clear(ClearBufferMask.ColorBufferBit); GL.Clear(ClearBufferMask.DepthBufferBit); float[] eyes = { 0.0f, 0.0f, -10.0f }; float[] target = { 0.0f, 0.0f, 0.0f }; Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(0.785398163f, 4.0f / 3.0f, 0.1f, 100f); //45 degree = 0.785398163 rads Matrix4 view = Matrix4.LookAt(eyes[0], eyes[1], eyes[2], target[0], target[1], target[2], 0, 1, 0); Matrix4 model = Matrix4.Identity; Matrix4 MV = view * model; //First Clear Buffers GL.Clear(ClearBufferMask.ColorBufferBit); GL.Clear(ClearBufferMask.DepthBufferBit); GL.MatrixMode(MatrixMode.Projection); GL.LoadIdentity(); GL.LoadMatrix(ref projection); GL.MatrixMode(MatrixMode.Modelview); GL.LoadIdentity(); GL.LoadMatrix(ref MV); GL.Viewport(0, 0, glControlWindow.Width, glControlWindow.Height); GL.Enable(EnableCap.DepthTest); //Enable correct Z Drawings GL.DepthFunc(DepthFunction.Less); //Enable correct Z Drawings GL.MatrixMode(MatrixMode.Modelview); GL.PushMatrix(); GL.Translate(3.0f, 0.0f, 0.0f); DrawTriangle(); GL.PopMatrix(); GL.PushMatrix(); GL.Translate(-3.0f, 0.0f, 0.0f); DrawTriangle(); GL.PopMatrix(); //Finally... GraphicsContext.CurrentContext.VSync = true; //Caps frame rate as to not over run GPU glControlWindow.SwapBuffers(); //Takes from the 'GL' and puts into control }

private void DrawTriangle() { GL.BindBuffer(BufferTarget.ArrayBuffer, _vBO[0]); GL.VertexPointer(3, VertexPointerType.Float, 0, 0); GL.EnableClientState(ArrayCap.VertexArray); GL.DrawArrays(BeginMode.Triangles, 0, 3); GL.DisableClientState(ArrayCap.VertexArray); }

//mouse click event implementation private void glControlWindow_MouseClick(object sender, System.Windows.Forms.MouseEventArgs e) { //Enter Select mode. Pretend drawing. GL.RenderMode(RenderingMode.Select); int[] viewport = new int[4]; GL.GetInteger(GetPName.Viewport, viewport); GL.PushMatrix(); GL.MatrixMode(MatrixMode.Projection); GL.LoadIdentity(); GluPickMatrix(e.X, e.Y, 5, 5, viewport); Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(0.785398163f, 4.0f / 3.0f, 0.1f, 100f); // this projection matrix is the same as one in glControlWindow_Paint method. GL.LoadMatrix(ref projection); GL.MatrixMode(MatrixMode.Modelview); int i = 0; int hits; GL.PushMatrix(); GL.Translate(3.0f, 0.0f, 0.0f); GL.PushName(i); DrawTriangle(); GL.PopName(); GL.PopMatrix(); i++; GL.PushMatrix(); GL.Translate(-3.0f, 0.0f, 0.0f); GL.PushName(i); DrawTriangle(); GL.PopName(); GL.PopMatrix(); hits = GL.RenderMode(RenderingMode.Render); .....hits processing code goes here... GL.PopMatrix(); glControlWindow.Invalidate(); }

I expect to get only one hit every time I click inside a triangle, but I always get two no matter where I click. I suspect there is something wrong with the implementation of GluPickMatrix, but I haven't figured it out yet.
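Editor's note, not from the original question: the GluPickMatrix body is the standard gluPickMatrix math - it translates and scales the projection so that only the deltax-by-deltay region around (x, y) survives clipping. The catch is in how it's used: GL.LoadMatrix replaces the current matrix, so calling GL.LoadMatrix(ref projection) right after GluPickMatrix throws the pick transform away. With the pick region effectively the whole viewport, every named object produces a hit record - hence two hits on every click. The classic pattern multiplies the perspective matrix onto the pick matrix instead; also, WinForms mouse Y grows downward while OpenGL window Y grows upward. A hedged sketch, assuming OpenTK exposes the compatibility-profile GL.MultMatrix overload:

    GL.MatrixMode(MatrixMode.Projection);
    GL.PushMatrix();                        // save the normal projection
    GL.LoadIdentity();
    // flip Y: WinForms origin is top-left, GL window origin is bottom-left
    GluPickMatrix(e.X, viewport[3] - e.Y, 5, 5, viewport);
    Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(0.785398163f, 4.0f / 3.0f, 0.1f, 100f);
    GL.MultMatrix(ref projection);          // multiply - don't LoadMatrix over the pick transform
    GL.MatrixMode(MatrixMode.Modelview);

    // ... draw the named triangles as above ...

    GL.MatrixMode(MatrixMode.Projection);
    GL.PopMatrix();                         // restore the normal projection
    GL.MatrixMode(MatrixMode.Modelview);
    int hits = GL.RenderMode(RenderingMode.Render);

With the pick matrix actually in effect, only the triangle under the 5x5 pixel region around the cursor should generate a hit record.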

    Read the article

  • Tips / techniques for high-performance C# server sockets

    - by McKenzieG1
I have a .NET 2.0 server that seems to be running into scaling problems, probably due to poor design of the socket-handling code, and I am looking for guidance on how I might redesign it to improve performance.

Usage scenario: 50 - 150 clients, high rate (up to 100s / second) of small messages (10s of bytes each) to / from each client. Client connections are long-lived - typically hours. (The server is part of a trading system. The client messages are aggregated into groups to send to an exchange over a smaller number of 'outbound' socket connections, and acknowledgment messages are sent back to the clients as each group is processed by the exchange.) OS is Windows Server 2003, hardware is 2 x 4-core X5355.

Current client socket design: A TcpListener spawns a thread to read each client socket as clients connect. The threads block on Socket.Receive, parsing incoming messages and inserting them into a set of queues for processing by the core server logic. Acknowledgment messages are sent back out over the client sockets using async Socket.BeginSend calls from the threads that talk to the exchange side.

Observed problems: As the client count has grown (now 60-70), we have started to see intermittent delays of up to 100s of milliseconds while sending and receiving data to/from the clients. (We log timestamps for each acknowledgment message, and we can see occasional long gaps in the timestamp sequence for bunches of acks from the same group that normally go out in a few ms total.) Overall system CPU usage is low (< 10%), there is plenty of free RAM, and the core logic and the outbound (exchange-facing) side are performing fine, so the problem seems to be isolated to the client-facing socket code. There is ample network bandwidth between the server and clients (gigabit LAN), and we have ruled out network or hardware-layer problems.

Any suggestions or pointers to useful resources would be greatly appreciated. If anyone has any diagnostic or debugging tips for figuring out exactly what is going wrong, those would be great as well.

Note: I have the MSDN Magazine article Winsock: Get Closer to the Wire with High-Performance Sockets in .NET, and I have glanced at the Kodart "XF.Server" component - it looks sketchy at best.
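One redesign direction worth sketching (an editor's suggestion, not from the original post): drop the thread-per-client blocking reads in favor of the async pattern, so the 60-70 dedicated reader threads - and the scheduling hiccups they invite - go away. A minimal shape of the receive loop; ClientState and ParseAndEnqueue are hypothetical stand-ins for the existing buffer and queue logic:

    using System;
    using System.Net.Sockets;

    class ClientState
    {
        public Socket Socket;
        public byte[] Buffer = new byte[8192];
    }

    class ClientReceiver
    {
        public void StartReceive(ClientState client)
        {
            client.Socket.BeginReceive(client.Buffer, 0, client.Buffer.Length,
                SocketFlags.None, OnReceive, client);
        }

        void OnReceive(IAsyncResult ar)
        {
            ClientState client = (ClientState)ar.AsyncState;
            int read = client.Socket.EndReceive(ar);
            if (read > 0)
            {
                ParseAndEnqueue(client.Buffer, read); // existing parse/queue logic goes here
                StartReceive(client);                 // immediately post the next read
            }
        }

        void ParseAndEnqueue(byte[] buffer, int count) { /* hypothetical */ }
    }

Separately, it is worth a quick check that Socket.NoDelay is set to true on the client sockets: with hundreds of sub-100-byte messages per second, Nagle's algorithm interacting with delayed ACKs can produce exactly the kind of intermittent 100-200 ms bunching described.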

    Read the article

  • Linux - serial port read returning EAGAIN...

    - by Andre
Hello all! I am having some trouble reading some data from a serial port I opened the following way. I've used this instance of code plenty of times and all worked fine, but now, for some reason that I can't figure out, I am completely unable to read anything from the serial port. I am able to write, and all is correctly received on the other end, but the replies (which are correctly sent) are never received. (No, the cables are all ok ;) )

The code I used to open the serial port is the following:

fd = open("/dev/ttyUSB0", O_RDWR | O_NONBLOCK | O_NOCTTY); if (fd == -1) { Aviso("Unable to open port"); return (fd); } else { //Get the current options for the port... bzero(&options, sizeof(options)); /* clear struct for new port settings */ tcgetattr(fd, &options); /*-- Set baud rate -------------------------------------------------------*/ if (cfsetispeed(&options, SerialBaudInterp(BaudRate))==-1) perror("On cfsetispeed:"); if (cfsetospeed(&options, SerialBaudInterp(BaudRate))==-1) perror("On cfsetospeed:"); //Enable the receiver and set local mode... options.c_cflag |= (CLOCAL | CREAD); options.c_cflag &= ~PARENB; /* Parity disabled */ options.c_cflag &= ~CSTOPB; options.c_cflag &= ~CSIZE; /* Mask the character size bits */ options.c_cflag |= SerialDataBitsInterp(8); /* CS8 - Selects 8 data bits */ options.c_cflag &= ~CRTSCTS; // disable hardware flow control options.c_iflag &= ~(IXON | IXOFF | IXANY); // disable XON XOFF (for transmit and receive) options.c_cflag |= CRTSCTS; /* enable hardware flow control */ options.c_cc[VMIN] = 0; //min characters to be read options.c_cc[VTIME] = 0; //Time to wait for data (tenths of seconds) //Set the new options for the port... tcflush(fd, TCIFLUSH); if (tcsetattr(fd, TCSANOW, &options)==-1) { perror("On tcsetattr:"); } PortOpen[ComPort] = fd; } return PortOpen[ComPort];

After the port is initialized, I write some stuff to it through a simple write command: int nc = write(hCom, txchar, n); where hCom is the file descriptor (and it's ok), and (as I said) this works.

But... when I do a read afterwards, I get a "Resource Temporarily Unavailable" error from errno. I tested select to see when the file descriptor had something to read... but it always times out! I read data like this: ret = read(hCom, rxchar, n); and I always get an EAGAIN and I have no idea why. All help would be appreciated. Cheers
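Two things stand out in that init code (an editor's observation, not part of the original question). First, hardware flow control is disabled and then re-enabled a few lines later; the later options.c_cflag |= CRTSCTS; wins, so if nothing on the other end drives RTS/CTS the driver may simply never deliver data. Second, the termios struct comes straight from tcgetattr and c_lflag is never touched: if ICANON is still set, input is only handed up on line endings. And since the port was opened with O_NONBLOCK and VMIN/VTIME are 0, read() returning -1 with EAGAIN just means "no byte available yet", not a hard failure. A hedged fix sketch in C:

    #include <stdio.h>
    #include <termios.h>

    /* after tcgetattr(fd, &options) and the cfset*speed calls: */
    cfmakeraw(&options);                      /* raw mode: no canonical buffering, no echo */
    options.c_cflag |= (CLOCAL | CREAD);
    options.c_cflag &= ~CRTSCTS;              /* pick ONE flow-control setting and keep it */
    options.c_iflag &= ~(IXON | IXOFF | IXANY);
    options.c_cc[VMIN] = 0;                   /* nonblocking-style reads */
    options.c_cc[VTIME] = 0;
    tcflush(fd, TCIOFLUSH);
    if (tcsetattr(fd, TCSANOW, &options) == -1)
        perror("On tcsetattr");

If select() still times out after that, the bytes genuinely aren't reaching the driver, which points back at flow control or wiring.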

    Read the article

  • Getting problem while capturing image through web cam in Java -OpenCV code.

    - by Chetan
Hi. I am developing a face detector using the OpenCV library in Java. I have written sample code for this, but it is giving an error while capturing the image. Can anyone help, please? Here is the code:

import java.awt.*; import java.awt.event.*; import java.awt.image.MemoryImageSource; import hypermedia.video.OpenCV; @SuppressWarnings("serial") public class FaceDetection extends Frame implements Runnable { ///Main method. public static void main(String[] args) { //System.out.println( "\nOpenCV face detection sample\n" ); new FaceDetection(); } // program execution frame rate (milliseconds) final int FRAME_RATE = 1000/30; OpenCV cv = null; // OpenCV Object Thread t = null; // the sample thread // the input video stream image Image frame = null; // list of all face detected area Rectangle[] squares = new Rectangle[0]; //** Setup Frame and Object(s). FaceDetection() { super( "Face Detection" ); // OpenCV setup cv = new OpenCV(); //cv.capture(1, 1, 100); cv.capture( 320, 240 ); cv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT ); // frame setup this.setBounds( 100, 100, cv.width, cv.height ); this.setBackground( Color.BLACK ); this.setVisible( true ); this.addKeyListener( new KeyAdapter() { public void keyReleased( KeyEvent e ) { if ( e.getKeyCode()==KeyEvent.VK_ESCAPE ) { // ESC : release OpenCV resources cv.dispose(); System.exit(0); } } } ); // start running program t = new Thread( this ); t.start(); } // Draw video frame and each detected faces area. public void paint( Graphics g ) { // draw image g.drawImage( frame, 0, 0, null ); // draw squares g.setColor( Color.RED ); for( Rectangle rect : squares ) g.drawRect( rect.x, rect.y, rect.width, rect.height ); } @SuppressWarnings("static-access") public void run() { while( t!=null && cv!=null ) { try { t.sleep( FRAME_RATE ); // grab image from video stream cv.read(); // create a new image from cv pixels data MemoryImageSource mis = new MemoryImageSource( cv.width, cv.height, cv.pixels(), 0, cv.width ); frame = createImage( mis ); // detect faces squares = cv.detect( 1.2f, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 20, 20 ); // of course, repaint repaint(); } catch( InterruptedException e ) {;} } } }

Error while starting capture : device 0

    Read the article

  • WPF Animation FPS vs. CPU usage - Am I expecting too much?

    - by Cory Charlton
Working on a screen saver for my wife, http://cchearts.codeplex.com/, and while I've been able to improve FPS on lower end machines (switch from Path to StreamGeometry, use DrawingVisual instead of UserControl, etc.), the CPU usage still seems very high. Here are some numbers from a few 5-minute sampling periods:

~60FPS 35% average CPU on Core 2 Duo T7500 @ 2.2GHz, 3GB ram, NVIDIA Quadro NVS 140M (128MB), Vista [My dev laptop]
~40FPS 50% average CPU on Pentium D @ 3.4GHz, 1.5GB ram, Standard VGA Graphics Adapter (unknown), 2003 Server [A crappy desktop]

I can understand the lower frame rate and higher CPU usage on the crappy desktop, but it still seems pretty high, and 35% on my dev laptop seems high as well. I'd really like to analyze the application to get more details, but I'm having issues there as well, so I'm wondering if I'm doing something wrong (never profiled WPF before).

WPF Performance Suite: Process Launch Error Unable to attach to process: CCHearts.exe Do you want to kill it? This error message occurs when I click cancel after attempting launch. If I don't click cancel it sits there idle, I guess waiting to attach.

Performance Explorer: Could not launch C:\Projects2\CC.Hearts\CC.Hearts\bin\Debug (USEVISUAL)\CCHearts.exe. Previous attempt to profile the application finished unsuccessfully. Please restart the application.

Output Window from Performance: Profiling started. Profiling process ID 5360 (CCHearts). Process ID 5360 has exited. Data written to C:\Projects2\CC.Hearts\CCHearts100608.vsp. Profiling finished. PRF0025: No data was collected. Profiling complete.

So I'm stuck wanting to improve performance but have no concrete way to determine where the bottleneck is. I have been relatively successful throwing darts at this point, but I'm beyond that now :)

PS: Screensaver is hosted at CodePlex if you want to look at the source and missed the link above.

Edit: My RenderOptions darts...

// NOTE: Grasping at straws here ;-) RenderOptions.SetBitmapScalingMode(newHeart, BitmapScalingMode.LowQuality); RenderOptions.SetCachingHint(newHeart, CachingHint.Cache); RenderOptions.SetEdgeMode(newHeart, EdgeMode.Aliased);

I threw those in a little while back and didn't see much difference (not sure if the bitmap scaling even comes into play). I really wish I could get profiling working so I'd know where I should try to optimize. For now I assume there is some overhead in creating a new HeartVisual and the DrawingVisual contained inside. Maybe if I reset and reused the hearts (tossed them in a queue once they completed or something) I'd see an improvement. Shrug. Throwing darts while blindfolded is always fun.
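The reuse idea in that last paragraph is probably the right dart (editor's note). Another low-risk one: WPF makes defensive copies of unfrozen Freezables, so freezing the shared geometry and brushes removes per-instance copying and lets the render thread cache them. A hedged sketch; BuildHeartGeometry is a hypothetical stand-in for however the heart shape is built today:

    // build once, freeze, and share across every heart visual
    StreamGeometry heartGeometry = BuildHeartGeometry(); // hypothetical helper
    heartGeometry.Freeze();                              // immutable: no defensive copies

    SolidColorBrush fill = new SolidColorBrush(Colors.Red);
    fill.Freeze();

    using (DrawingContext dc = heartVisual.RenderOpen())
    {
        dc.DrawGeometry(fill, null, heartGeometry);      // cheap: all inputs are frozen
    }

Combined with pooling completed HeartVisual instances instead of constructing new ones, the allocation and GC pressure per spawned heart should drop noticeably.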

    Read the article

  • Rx Reactive extensions: Unit testing with FromAsyncPattern

    - by Andrew Anderson
The Reactive Extensions have a sexy little hook to simplify calling async methods:

var func = Observable.FromAsyncPattern<InType, OutType>( myWcfService.BeginDoStuff, myWcfService.EndDoStuff); func(inData).ObserveOnDispatcher().Subscribe(x => Foo(x));

I am using this in a WPF project, and it works great at runtime. Unfortunately, when trying to unit test methods that use this technique I am experiencing random failures: roughly three out of every five executions of a test that contains this code fail. Here is a sample test (implemented using a Rhino/Unity auto-mocking container):

[TestMethod()] public void SomeTest() { // arrange var container = GetAutoMockingContainer(); container.Resolve<IMyWcfServiceClient>() .Expect(x => x.BeginDoStuff(null, null, null)) .IgnoreArguments() .Do( new Func<Specification, AsyncCallback, object, IAsyncResult>((inData, asyncCallback, state) => { return new CompletedAsyncResult(asyncCallback, state); })); container.Resolve<IRepositoryServiceClient>() .Expect(x => x.EndRetrieveAttributeDefinitionsForSorting(null)) .IgnoreArguments() .Do( new Func<IAsyncResult, OutData>((ar) => { return someMockData; })); // act var target = CreateTestSubject(container); target.DoMethodThatInvokesService(); // Run the dispatcher for everything over background priority Dispatcher.CurrentDispatcher.Invoke(DispatcherPriority.Background, new Action(() => { })); // assert Assert.IsTrue(my operation ran as expected); }

The problem that I see is that the code that I specified to run when the async action completed (in this case, Foo(x)) is never called. I can verify this by setting breakpoints in Foo and observing that they are never reached. Further, I can force a long delay after calling DoMethodThatInvokesService (which kicks off the async call), and the code is still never run. I do know that the lines of code invoking the Rx framework were called.

Other things I've tried:

I have attempted to modify the second-last line according to the suggestions here: Reactive Extensions Rx - unit testing something with ObserveOnDispatcher. No love.

I have added .Take(1) to the Rx code as follows: func(inData).ObserveOnDispatcher().Take(1).Subscribe(x => Foo(x)); This improved my failure rate to something like 1 in 5, but failures still occurred.

I have rewritten the Rx code to use the plain jane async pattern. This works, however my developer ego really would love to use Rx instead of boring old begin/end.

In the end I do have a workaround in hand (i.e. don't use Rx), however I feel that it is not ideal. If anyone has run into this problem in the past and found a solution, I'd dearly love to hear it.
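Editor's note on a pattern that tends to make these tests deterministic: the flakiness is consistent with a race between the mocked async completion and the single dispatcher pump - if the ObserveOnDispatcher() callback is queued after the DispatcherPriority.Background invoke has already drained, Foo never runs. One common fix is to inject the scheduler rather than hard-coding the dispatcher, then use a synchronous scheduler in tests. A hedged sketch (the injected _scheduler field and the CreateTestSubject overload are assumptions):

    // production code: _scheduler is the dispatcher scheduler at runtime
    var func = Observable.FromAsyncPattern<InType, OutType>(
        myWcfService.BeginDoStuff, myWcfService.EndDoStuff);
    func(inData).ObserveOn(_scheduler).Subscribe(x => Foo(x));

    // in the test: pass Scheduler.Immediate so the subscription runs synchronously,
    // with no dispatcher pumping involved at all
    var target = CreateTestSubject(container, Scheduler.Immediate);
    target.DoMethodThatInvokesService();
    // assert immediately - there is no race left to lose

Rx's TestScheduler (in the Reactive testing library) goes a step further and lets the test advance virtual time explicitly.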

    Read the article

  • Interfacing Android Nexus One with Arduino + BlueSmirf

    - by efgomez
I'm a bit new to all of this, so bear with me - I'd really appreciate your help. I am trying to link the Android Nexus One with an Arduino (Duemilanove) that is connected to a BlueSmirf. I have a program that is simply outputting the string "Hello Bluetooth" to whatever device the BlueSmirf is connected to. Here is the Arduino program:

void setup(){ Serial.begin(115200); int i; } void loop(){Serial.print("Hello Bluetooth!"); delay(1000); }

On my computer's BT terminal I can see the message and connect no problem. The trouble is with my Android code. I can connect to the device with Android, but when I look at the log it is not displaying "Hello Bluetooth". Here is the debug log:

04-09 16:27:49.022: ERROR/BTArduino(17288): FireFly-2583 connected 04-09 16:27:49.022: ERROR/BTArduino(17288): STARTING TO CONNECT THE SOCKET 04-09 16:27:55.705: ERROR/BTArduino(17288): Received: 16 04-09 16:27:56.702: ERROR/BTArduino(17288): Received: 1 04-09 16:27:56.712: ERROR/BTArduino(17288): Received: 15 04-09 16:27:57.702: ERROR/BTArduino(17288): Received: 1 04-09 16:27:57.702: ERROR/BTArduino(17288): Received: 15 04-09 16:27:58.704: ERROR/BTArduino(17288): Received: 1 04-09 16:27:58.704: ERROR/BTArduino(17288): Received: 15 etc...

Here is the code. I'm trying to include only the relevant code, but if you need more please let me know:

private class ConnectThread extends Thread { private final BluetoothSocket mySocket; private final BluetoothDevice myDevice; public ConnectThread(BluetoothDevice device) { myDevice = device; BluetoothSocket tmp = null; try { tmp = device.createRfcommSocketToServiceRecord(MY_UUID); } catch (IOException e) { Log.e(TAG, "CONNECTION IN THREAD DIDNT WORK"); } mySocket = tmp; } public void run() { Log.e(TAG, "STARTING TO CONNECT THE SOCKET"); InputStream inStream = null; boolean run = false; //...More Connection code here...

The more relevant code is here:

byte[] buffer = new byte[1024]; int bytes; // handle Connection try { inStream = mySocket.getInputStream(); while (run) { try { bytes = inStream.read(buffer); Log.e(TAG, "Received: " + bytes); } catch (IOException e3) { Log.e(TAG, "disconnected"); } }

I am reading bytes = inStream.read(buffer). I know bytes is an integer, so I tried sending integers over Bluetooth because "bytes" was an integer, but it still didn't make sense. It almost appears that it is sending at an incorrect baud rate. Could this be true? Any help would be appreciated. Thank you very much.
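The log may be less mysterious than it looks (editor's note, not from the original thread): bytes holds the return value of InputStream.read() - the number of bytes read - not the payload, so the log is printing counts. "Received: 16" is exactly the length of "Hello Bluetooth!", and the 1/15 pairs are the same message arriving split across two reads. The baud rate is fine; the data just isn't being decoded. A hedged fix:

    int bytes = inStream.read(buffer);
    if (bytes > 0) {
        // decode only the bytes actually read, instead of logging the count
        String received = new String(buffer, 0, bytes);
        Log.e(TAG, "Received: " + received);
    }

Because stream reads can split or merge messages, anything beyond logging should accumulate bytes until a delimiter (e.g. a newline added on the Arduino side) before parsing.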

    Read the article

  • Voice Recognition Connection problem

    - by user244190
I'm trying to work through and test a voice recognition example based on the VoiceRecognition.java example at http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/app/VoiceRecognition.html but when I click on the button to create the activity, I get a dialog that says "Connection problem". My manifest file is using the Internet permission, and I understand it passes the audio to the Google servers. Do I need to do anything else to use this? Code below.

UPDATE: OK, I was able to replace my emulator image with one from HTC that appears to come with Google Voice Search; however, now when I run from the emulator, I'm getting an "Audio Problem" message with Speak Again or Cancel buttons. It appears to make it back to onActivityResult(), but the resultCode is 0. Here is the LogCat output:

03-07 20:21:25.396: INFO/ActivityManager(578): Starting activity: Intent { action=android.speech.action.RECOGNIZE_SPEECH comp={com.google.android.voicesearch/com.google.android.voicesearch.RecognitionActivity} (has extras) } 03-07 20:21:25.406: WARN/ActivityManager(578): Activity is launching as a new task, so cancelling activity result. 03-07 20:21:25.968: WARN/ActivityManager(578): Activity pause timeout for HistoryRecord{434f7850 {com.ikonicsoft.mileagegenie/com.ikonicsoft.mileagegenie.MileageGenie}} 03-07 20:21:26.206: WARN/AudioHardwareInterface(554): getInputBufferSize bad sampling rate: 16000 03-07 20:21:26.256: ERROR/AudioRecord(819): Recording parameters are not supported: sampleRate 16000, channelCount 1, format 1 03-07 20:21:26.696: INFO/ActivityManager(578): Displayed activity com.google.android.voicesearch/.RecognitionActivity: 1295 ms 03-07 20:21:29.890: DEBUG/dalvikvm(806): threadid=3: still suspended after undo (s=1 d=1) 03-07 20:21:29.896: INFO/dalvikvm(806): Uncaught exception thrown by finalizer (will be discarded): 03-07 20:21:29.896: INFO/dalvikvm(806): Ljava/lang/IllegalStateException;: Finalizing cursor android.database.sqlite.SQLiteCursor@435d3c50 on ml_trackdata that has not been deactivated or closed 03-07 20:21:29.896: INFO/dalvikvm(806): at android.database.sqlite.SQLiteCursor.finalize(SQLiteCursor.java:596) 03-07 20:21:29.896: INFO/dalvikvm(806): at dalvik.system.NativeStart.run(Native Method) 03-07 20:21:31.468: DEBUG/dalvikvm(806): threadid=5: still suspended after undo (s=1 d=1) 03-07 20:21:32.436: WARN/IInputConnectionWrapper(806): showStatusIcon on inactive InputConnection

I'm still not sure why I'm getting the Connection problem on the Droid. I can use Voice Search OK. I also tried clearing the cache and data as described in some posts, but it's still not working.

/** * Fire an intent to start the speech recognition activity. */ private void startVoiceRecognitionActivity() { Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH); intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM); intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speech recognition demo"); startActivityForResult(intent, VOICE_RECOGNITION_REQUEST_CODE); }

/** * Handle the results from the recognition activity. */ @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { if (requestCode == VOICE_RECOGNITION_REQUEST_CODE && resultCode == RESULT_OK) { // Fill the list view with the strings the recognizer thought it could have heard ArrayList<String> matches = data.getStringArrayListExtra( RecognizerIntent.EXTRA_RESULTS); mList.setAdapter(new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1, matches)); } super.onActivityResult(requestCode, resultCode, data); }
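Two observations (editorial, not from the original post). The LogCat lines "getInputBufferSize bad sampling rate: 16000" and "Recording parameters are not supported: sampleRate 16000" say the emulator's audio input can't record at the 16 kHz the recognizer requests, which matches the "Audio Problem" dialog and the resultCode of 0; voice input generally needs a real device, or at least an emulator image whose audio capture works. Separately, the original ApiDemos sample guards against the recognizer being missing before enabling the button, and that check is worth keeping - a sketch of the same pattern (speakButton as in the sample):

    // disable the speak button if no activity can service the recognition intent
    PackageManager pm = getPackageManager();
    List<ResolveInfo> activities = pm.queryIntentActivities(
            new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH), 0);
    if (activities.size() == 0) {
        speakButton.setEnabled(false);
        speakButton.setText("Recognizer not present");
    }

On a real device, "Connection problem" usually points at the recognition service failing to reach its backend, so it is also worth confirming the device has working network access and that Google Voice Search itself runs.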

    Read the article

  • Using Artificial Intelligence (AI) to predict Stock Prices

    - by akaphenom
Given a set of data very similar to the Motley Fool CAPS system, where individual users enter BUY and SELL recommendations on various equities: what I would like to do is show each recommendation and somehow rate it (1-5) as to whether it was a good predictor (i.e., correlation coefficient = 1) of the future stock price (or EPS or whatever), or a horrible predictor (i.e., correlation coefficient = -1), or somewhere in between.

Each recommendation is tagged to a particular user, so that can be tracked over time. I can also track market direction (bullish/bearish) based off of something like the S&P 500 price. The components I think would make sense in the model are:

user
direction (long/short)
market direction
sector of stock

The thought is that some users are better in bull markets than bear (and vice versa), and some are better at shorts than longs - and then a combination of the above. I can automatically tag the market direction and sector (based off the market at the time and the equity being recommended).

The thought is that I could present a series of screens and rank each individual recommendation by displaying the available data: absolute, market, and sector outperformance for a specific time period out. I would follow a detailed list for ranking the stocks so that the ranking is as objective as possible. My assumption is that a single user is right no more than 57% of the time - but who knows. I could load the system and say "Let's rank the recommendation as a predictor of stock value 90 days forward", and that would represent a very explicit set of rankings.

NOW here is the crux - I want to create some sort of machine learning algorithm that can identify patterns over a series of time, so that as recommendations stream into the application we maintain a ranking of that stock (i.e., similar to a correlation coefficient) as to the likelihood that the recommendation (in addition to the past series of recommendations) will affect the price.

Now here is the super crux: I have never taken an AI class or read an AI book, never mind anything specific to machine learning. So I am looking for guidance - a sample or description of a similar system I could adapt, places to look for info, or any general help. Or even a push in the right direction to get started... My hope is to implement this with F# and be able to impress my friends with a new skill set in F#, with an implementation of machine learning and potentially something (application/source) I can include in a tech portfolio or blog space. Thank you for any advice in advance.
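Not a full machine-learning answer, but one cheap baseline to start from (an editor's sketch, names illustrative): keep an exponentially weighted success score per (user, market direction, sector) bucket. Each resolved recommendation - say, whether it beat the benchmark 90 days out - nudges its bucket's score, recent calls outweigh old ones, and the score doubles as the correlation-like ranking described above:

    using System;
    using System.Collections.Generic;

    class RecommendationRanker
    {
        const double Alpha = 0.1; // weight of the newest outcome; smaller = longer memory
        readonly Dictionary<string, double> scores = new Dictionary<string, double>();

        static string Key(string user, string market, string sector)
        {
            return user + "|" + market + "|" + sector;
        }

        // outcome: +1 if the pick beat the benchmark at the horizon, -1 if it didn't
        public void RecordOutcome(string user, string market, string sector, double outcome)
        {
            string key = Key(user, market, sector);
            double prior;
            scores.TryGetValue(key, out prior);
            scores[key] = (1 - Alpha) * prior + Alpha * outcome;
        }

        // current score in [-1, 1]: the weight to give this user's next call in this regime
        public double Rank(string user, string market, string sector)
        {
            double score;
            scores.TryGetValue(Key(user, market, sector), out score);
            return score;
        }
    }

The same idea ports directly to F# (a Map folded over the recommendation stream), and once it works it also produces labeled training data for anything fancier, such as logistic regression over the same features.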

    Read the article

  • Is this a good way to do a game loop for an iPhone game?

    - by Danny Tuppeny
Hi all, I'm new to iPhone dev, but trying to build a 2D game. I was following a book, but the game loop it created basically said:

function gameLoop update() render() sleep(1/30th second) gameLoop

The reasoning was that this would run at 30fps. However, this seemed a little mental, because if my frame took 1/30th of a second, then it would run at 15fps (since it would spend as much time sleeping as updating).

So, I did some digging and found the CADisplayLink class, which syncs calls to my gameLoop function to the refresh rate (or a fraction of it). I can't find many samples of it, so I'm posting here for a code review :-) It seems to work as expected, and it includes passing the elapsed (frame) time into the Update method so my logic can be framerate-independent (however, I can't actually find in the docs what CADisplayLink would do if my frame took more than its allowed time to run - I'm hoping it just does its best to catch up, and doesn't crash!).

// // GameAppDelegate.m // // Created by Danny Tuppeny on 10/03/2010. // Copyright Danny Tuppeny 2010. All rights reserved. // #import "GameAppDelegate.h" #import "GameViewController.h" #import "GameStates/gsSplash.h" @implementation GameAppDelegate @synthesize window; @synthesize viewController; - (void) applicationDidFinishLaunching:(UIApplication *)application { // Create an instance of the first GameState (Splash Screen) [self doStateChange:[gsSplash class]]; // Set up the game loop displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(gameLoop)]; [displayLink setFrameInterval:2]; [displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode]; } - (void) gameLoop { // Calculate how long has passed since the previous frame CFTimeInterval currentFrameTime = [displayLink timestamp]; CFTimeInterval elapsed = 0; // For the first frame, we want to pass 0 (since we haven't elapsed any time), so only // calculate this in the case where we're not the first frame if (lastFrameTime != 0) { elapsed = currentFrameTime - lastFrameTime; } // Keep track of this frames time (so we can calculate this next time) lastFrameTime = currentFrameTime; NSLog([NSString stringWithFormat:@"%f", elapsed]); // Call update, passing the elapsed time in [((GameState*)viewController.view) Update:elapsed]; } - (void) doStateChange:(Class)state { // Remove the previous GameState if (viewController.view != nil) { [viewController.view removeFromSuperview]; [viewController.view release]; } // Create the new GameState viewController.view = [[state alloc] initWithFrame:CGRectMake(0, 0, IPHONE_WIDTH, IPHONE_HEIGHT) andManager:self]; // Now set as visible [window addSubview:viewController.view]; [window makeKeyAndVisible]; } - (void) dealloc { [viewController release]; [window release]; [super dealloc]; } @end

Any feedback would be appreciated :-)

PS. Bonus points if you can tell me why all the books use "viewController.view" but for everything else seem to use "[object name]" format. Why not [viewController view]?
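A couple of editorial notes on the questions raised above. CADisplayLink won't crash on a slow frame; it just fires the next callback late (dropping refreshes in between), so elapsed simply comes out larger. That is exactly why framerate-independent logic benefits from a clamp, sketched here as a hedged tweak to gameLoop:

    // inside gameLoop, after computing elapsed (an assumed addition):
    if (elapsed > 0.1) {
        elapsed = 0.1; // cap a stalled frame at 100 ms so Update: never sees a huge jump
    }

As for the bonus question: viewController.view is Objective-C 2.0 dot syntax for property access. The compiler translates it into exactly [viewController view] (or the setter on assignment), so the two forms are interchangeable for declared properties; books tend to use dots for properties and brackets for ordinary messages.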

    Read the article

  • How to quickly acquire and process real time screen output

    - by Akusete
I am trying to write a program to play a full-screen PC game for fun (as an experiment in computer vision and artificial intelligence). For this experiment I am assuming the game has no underlying API for AI players (nor is the source available), so I intend to process the visual information rendered by the game on the screen. The game runs in full-screen mode on a win32 system (DirectX, I assume). Currently I am using the win32 functions:

#include <windows.h> #include <cvaux.h> class Screen { public: HWND windowHandle; HDC windowContext; HBITMAP buffer; HDC bufferContext; CvSize size; uchar* bytes; int channels; Screen () { windowHandle = GetDesktopWindow(); windowContext = GetWindowDC (windowHandle); size = cvSize (GetDeviceCaps (windowContext, HORZRES), GetDeviceCaps (windowContext, VERTRES)); buffer = CreateCompatibleBitmap (windowContext, size.width, size.height); bufferContext = CreateCompatibleDC (windowContext); SelectObject (bufferContext, buffer); channels = 4; bytes = new uchar[size.width * size.height * channels]; } ~Screen () { ReleaseDC(windowHandle, windowContext); DeleteDC(bufferContext); DeleteObject(buffer); delete[] bytes; } void CaptureScreen (IplImage* img) { BitBlt(bufferContext, 0, 0, size.width, size.height, windowContext, 0, 0, SRCCOPY); int n = size.width * size.height; int imgChannels = img->nChannels; GetBitmapBits (buffer, n * channels, bytes); uchar* src = bytes; uchar* dest = (uchar*) img->imageData; uchar* end = dest + n * imgChannels; while (dest < end) { dest[0] = src[0]; dest[1] = src[1]; dest[2] = src[2]; dest += imgChannels; src += channels; } }

The rate at which I can process frames using this approach is much too slow. Is there a better way to acquire screen frames?
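One GDI-level speedup worth sketching (editor's note): replacing the CreateCompatibleBitmap plus GetBitmapBits pair with a DIB section lets BitBlt write straight into CPU-visible memory, so the per-frame copy loop disappears and an IplImage header can wrap the bits in place. A hedged sketch:

    // setup, done once: a top-down 32-bit DIB whose pixels BitBlt fills directly
    BITMAPINFO bmi = {0};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = size.width;
    bmi.bmiHeader.biHeight      = -size.height;   // negative height = top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void* bits = NULL;
    HBITMAP dib = CreateDIBSection(windowContext, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);
    SelectObject(bufferContext, dib);

    // per frame: one blit, no GetBitmapBits copy
    BitBlt(bufferContext, 0, 0, size.width, size.height, windowContext, 0, 0, SRCCOPY);
    IplImage* view = cvCreateImageHeader(size, IPL_DEPTH_8U, 4);
    cvSetData(view, bits, size.width * 4);        // wrap the DIB memory in place

One caveat: if the game renders through exclusive-mode DirectX, GDI capture may return black frames no matter how fast it is, in which case a DirectX-level capture (reading the back buffer) is the more reliable route.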

    Read the article

  • Drawing performance with CGImageCreateWithJPEGDataProvider?

    - by Rnegi
I'm actually curious about this for the iPhone. I am getting an MJPEG stream from a server and trying to render it natively on the iPhone (without the use of the Safari class). The reason for this is that while the Safari class CAN render MJPEG natively, it does not do so at the framerate I would like. So I tried drawing it natively, but I've come up against performance issues, namely a syncing issue between what I'm getting from the server and what I am able to draw onto the screen of the phone. (There should be a little lag, but the drift gets really bad, which is what I want to avoid.)

So I have a connection set up to my server and I do get the JPEGs. It's just data I insert into an NSMutableArray buffer:

CFMutableDataRef _t_data_ref = (CFMutableDataRef)[_buffer_array objectAtIndex:0]; //CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData (_cf_buffer_data); CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData(_t_data_ref); CGImageRef image = CGImageCreateWithJPEGDataProvider(imgDataProvider, NULL, true, kCGRenderingIntentDefault); CGImageRef imgRef = image; CGContextDrawImage(context, CGRectMake(0, 17, 380, 285), imgRef); CGImageRelease(image); CGDataProviderRelease(imgDataProvider);

Please note this is the gist of my code, but it should summarize what I am trying to accomplish with regards to drawing. Also, in order to keep the framerate in sync, I had to detach a separate thread that sleeps X seconds and calls [self setNeedsDisplay]:

NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; // Top-level pool while(1) { //[NSThread sleepForTimeInterval:TIMER_REFRESH_VALUE]; //sleep(unsigned int ); usleep(MICRO_REFRESH_VALUE); if ([_buffer_array count] > 10) { //NSLog(@"stuff %d", [_buffer_array count]); //[self setNeedsDisplay]; [self performSelectorOnMainThread:@selector(setNeedsDisplay) withObject:nil waitUntilDone:NO]; } } [pool release]; // Release the objects in the pool.

My buffer of JPEG data actually fills up quite quickly, but I can't seem to consume what I'm getting at the same rate; I render much more slowly than frames arrive. Are there any documents that describe what kind of performance tuning I can do to make rendering the JPEGs to the screen go faster? Or am I kind of stuck here? Thanks!
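If decoding can't keep up with the stream, queueing guarantees drift (editor's note): the backlog only ever grows. For live video the usual answer is to drop frames - always render the newest JPEG and discard the rest - so the display stays pinned to the stream even when individual frames are skipped. A hedged sketch of the consumer side, assuming _buffer_array is shared with the network thread:

    // render the newest frame and throw away the backlog
    @synchronized (_buffer_array) {
        if ([_buffer_array count] > 1) {
            id newest = [[_buffer_array lastObject] retain];
            [_buffer_array removeAllObjects];
            [_buffer_array addObject:newest];
            [newest release];
        }
    }
    [self performSelectorOnMainThread:@selector(setNeedsDisplay)
                           withObject:nil
                        waitUntilDone:NO];

Moving the CGImageCreateWithJPEGDataProvider decode onto the background thread (handing only the finished CGImageRef to the main thread) also takes the JPEG decode off the drawing path.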

    Read the article

  • How do I prevent the concurrent execution of a javascript function?

    - by RyanV
    I am making a ticker similar to the "From the AP" one at The Huffington Post, using jQuery. The ticker rotates through a ul, either by user command (clicking an arrow) or by an auto-scroll. Each list-item is display:none by default. It is revealed by the addition of a "showHeadline" class which is display:list-item. HTML for the UL Looks like this: <ul class="news" id="news"> <li class="tickerTitle showHeadline">Test Entry</li> <li class="tickerTitle">Test Entry2</li> <li class="tickerTitle">Test Entry3</li> </ul> When the user clicks the right arrow, or the auto-scroll setTimeout goes off, it runs a tickForward() function: function tickForward(){ var $active = $('#news li.showHeadline'); var $next = $active.next(); if($next.length==0) $next = $('#news li:first'); $active.stop(true, true); $active.fadeOut('slow', function() {$active.removeClass('showHeadline');}); setTimeout(function(){$next.fadeIn('slow', function(){$next.addClass('showHeadline');})}, 1000); if(isPaused == true){ } else{ startScroll() } }; This is heavily inspired by Jon Raasch's A Simple jQuery Slideshow. Basically, find what's visible, what should be visible next, make the visible thing fade and remove the class that marks it as visible, then fade in the next thing and add the class that makes it visible. Now, everything is hunky-dory if the auto-scroll is running, kicking off tickForward() once every three seconds. But if the user clicks the arrow button repeatedly, it creates two negative conditions: Rather than advance quickly through the list for just the number of clicks made, it continues scrolling at a faster-than-normal rate indefinitely. It can produce a situation where two (or more) list items are given the .showHeadline class, so there's overlap on the list. I can see these happening (especially #2) because the tickForward() function can run concurrently with itself, producing different sets of $active and $next. So I think my question is: What would be the best way to prevent concurrent execution of the tickForward() method? Some things I have tried or considered: Setting a Flag: When tickForward() runs, it sets an isRunning flag to true, and sets it back to false right before it ends. The logic for the event handler is set to only call tickForward() if isRunning is false. I tried a simple implementation of this, and isRunning never appeared to be changed. The jQuery queue(): I think it would be useful to queue up the tickForward() commands, so if you clicked it five times quickly, it would still run as commanded but wouldn't run concurrently. However, in my cursory reading on the subject, it appears that a queue has to be attached to the object its queue applies to, and since my tickForward() method affects multiple lis, I don't know where I'd attach it.
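A flag does work here (editor's sketch below); the usual catch is resetting it synchronously, before the asynchronous fades have finished, which may be why isRunning never seemed to change. Clearing the flag inside the final animation callback fixes the re-entrancy, and clearing the pending auto-scroll timer before scheduling a new one stops the "faster and faster" compounding, since each manual click otherwise starts another startScroll chain. A hedged version, reusing the names from the post:

    var isTicking = false;
    var scrollTimer = null;

    function scheduleScroll() {
        clearTimeout(scrollTimer);              // never let two timer chains run at once
        scrollTimer = setTimeout(tickForward, 3000);
    }

    function tickForward() {
        if (isTicking) return;                  // drop re-entrant calls mid-animation
        isTicking = true;
        var $active = $('#news li.showHeadline');
        var $next = $active.next().length ? $active.next() : $('#news li:first');
        $active.fadeOut('slow', function () {
            $active.removeClass('showHeadline');
            $next.fadeIn('slow', function () {
                $next.addClass('showHeadline');
                isTicking = false;              // release only after the full cycle
                if (!isPaused) scheduleScroll();
            });
        });
    }

Because the flag is cleared in the fadeIn completion callback, rapid clicks during an animation are simply ignored, which also prevents two list items from ending up with the showHeadline class at the same time.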

    Read the article

< Previous Page | 123 124 125 126 127 128 129 130 131 132 133 134  | Next Page >