Search Results

Search found 858 results on 35 pages for 'micro isvs'.


  • 6 Reasons Why You Can’t Move Your Cell Phone To Any Carrier You Want

    - by Chris Hoffman
    You can buy a laptop or Wi-Fi tablet and use it on Wi-Fi anywhere in the world, so why are cell phones and devices with mobile data not portable between different cellular networks in the same country? Unlike with Wi-Fi, there are many different competing cellular network standards — both around the world and within countries. Cellular carriers also like locking you to their specific network and making it difficult to move. That’s what contracts are for.

    Phone Locking
    Many phones are sold locked to a specific network. When you buy a phone from a cellular carrier, they often lock that phone to their network so you can’t take it to a competitor’s network. That’s why you’ll often need to unlock a phone before you can move it to a different cellular provider, or take it to a different country and use it on a local provider instead of roaming. Cellular carriers will generally unlock your phone for you as long as you’re no longer in a contract with them. However, unlocking a cell phone you’ve paid for without your carrier’s permission is currently a crime in the USA.

    GSM vs. CDMA
    Some cellular networks use the GSM (Global System for Mobile Communications) standard, while some use CDMA (Code Division Multiple Access). Worldwide, most cellular networks use GSM. In the USA, both GSM and CDMA are popular. Verizon, Sprint, and other carriers that use their networks use CDMA. AT&T, T-Mobile, and other carriers that use their networks use GSM. These are two competing standards and are not interoperable. This means you can’t simply take a phone from Verizon to T-Mobile, or from AT&T to Sprint. These carriers have incompatible phones.

    CDMA Restrictions
    CDMA is more restricted than GSM. GSM phones have SIM cards. Simply open the phone, pop out the SIM card, and pop in a new SIM card to switch carriers. (In reality, it’s more complicated thanks to phone locking and other factors.) CDMA phones don’t have removable modules like this. All CDMA phones ship locked to a specific network, and you’d have to get both your old carrier and your new carrier to cooperate to switch a phone between them. In reality, many people just consider CDMA phones eternally locked to a specific carrier.

    Frequencies
    Different cellular networks throughout the USA and the rest of the world use different frequencies. These radio frequencies have to be supported by your phone’s hardware, or your phone simply can’t work on a network using those frequencies. Many GSM phones support three or four bands of frequencies — 900/1800/1900 MHz, 850/1800/1900 MHz, or 850/900/1800/1900 MHz. These are sometimes called “world phones” because they allow easier roaming. This allows the manufacturer to produce a phone that will support all GSM networks in the world and allows their customers to travel with those phones. If your phone doesn’t support the appropriate frequencies, it won’t work on certain networks.

    LTE Bands
    When it comes to newer, faster LTE networks, different frequencies are still a concern. LTE frequencies are generally known as “LTE bands.” To use a smartphone on a certain LTE network, that smartphone has to support that LTE network’s frequency. Different models of phones are often created to work on different LTE networks around the world. However, phones are supporting more and more LTE networks and becoming more interoperable over time.

    SIM Card Sizes
    The SIM cards used in GSM phones come in different sizes. Newer phones use smaller SIM cards to save space and be more compact. This isn’t a big obstacle, as the different sizes of SIM cards (full-size SIM, mini-SIM, micro-SIM, and nano-SIM) are actually compatible. The only difference between them is the size of the plastic card surrounding the SIM’s chip. The chip itself is the same size on all of them. This means you can take an old SIM card and cut the plastic off until it becomes a smaller-size SIM card that fits in a modern phone. Or, you can take a smaller-size SIM card and insert it into a tray so that it becomes a larger-size SIM card that fits in an older phone. Be aware that it’s very possible to damage your SIM card and make it not work properly by cutting it to the wrong dimensions. Your cellular carrier will often be able to cut your SIM card for you, or give you a new one if you want to use an old SIM card in a new phone. Hopefully they won’t overcharge you for this service.

    Be sure to check what types of networks, frequencies, and LTE bands your phone supports before trying to move it between networks. You may have to buy a new phone when moving between certain cellular carriers.

    Image Credit: Morgan on Flickr, 22n on Flickr

    Read the article

  • AWS: setting up auto-scale for EC2 instances

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/16/aws-setting-up-auto-scale-for-ec2-instances.aspx

    With Amazon Web Services, there’s no direct equivalent to Azure Worker Roles – no Elastic Beanstalk-style application for .NET background workers. But you can get the auto-scale part by configuring an auto-scaling group for your EC2 instance. This is a step-by-step guide that shows you how to create the auto-scaling configuration (which for EC2 you need to do with the command line) and then link your scaling policies to CloudWatch alarms in the Web console. I’m using queue size as my metric for CloudWatch, which is a good fit if your background workers are pulling messages from a queue and processing them. If the queue is getting too big, the “high” alarm will fire and spin up a new instance to share the workload. If the queue is draining down, the “low” alarm will fire and shut down one of the instances.

    To start with, you need to manually set up your app in an EC2 VM; for a background worker that would mean hosting your code in a Windows Service (I always use Topshelf). If you’re dual-running Azure and AWS, you can isolate your logic in one library with a generic entry point that has Start() and Stop() functions, so your Worker Role and Windows Service are essentially using the same code.

    When you have your instance set up with the Windows Service running automatically, and you’ve tested it starts up and works properly from a reboot, shut the machine down and take an image of the VM, using Create Image (EBS AMI) from the Web Console. When that completes, you’ll have your own AMI which you can use to spin up new instances, and you’re ready to create your auto-scaling group. You need to dip into the command-line tools for this, so follow this guide to set up the AWS autoscale command line tool. Now we’re ready to go.

    1. Create a launch configuration

    The launch configuration tells AWS what to do when a new instance needs to be spun up. You create it with the as-create-launch-config command, which looks like this:

    as-create-launch-config sc-xyz-launcher  # name of the launch config
      --image-id ami-7b9e9f12                # id of the AMI you extracted from your VM
      --region eu-west-1                     # which region the new instance gets created in
      --instance-type t1.micro               # size of the instance to create
      --group quicklaunch-1                  # security group for the new instance

    2. Create an auto-scaling group

    The auto-scaling group links to the launch config and defines the overall configuration of the collection of instances:

    as-create-auto-scaling-group sc-xyz-asg  # auto-scaling group name
      --region eu-west-1                     # region to create in
      --launch-configuration sc-xyz-launcher # name of the launch config to invoke for new instances
      --min-size 1                           # minimum number of nodes in the group
      --max-size 5                           # maximum number of nodes in the group
      --default-cooldown 300                 # period to wait (in seconds) after each scaling event, before checking if another scaling event is required
      --availability-zones eu-west-1a eu-west-1b eu-west-1c  # which availability zones you want your instances to be allocated in – multiple entries mean EC2 will use any of them

    3. Create a scale-up policy

    The policy dictates what will happen in response to a scaling event being triggered from a “high” alarm being breached. It links to the auto-scaling group; this sample results in one additional node being spun up:

    as-put-scaling-policy scale-up-policy    # policy name
      -g sc-psod-woker-asg                   # auto-scaling group the policy works with
      --adjustment 1                         # size of the adjustment
      --region eu-west-1                     # region
      --type ChangeInCapacity                # type of adjustment; this specifies a fixed number of nodes, but you can use PercentChangeInCapacity to make an adjustment relative to the current number of nodes, e.g. increasing by 50%

    4. Create a scale-down policy

    The policy dictates what will happen in response to a scaling event being triggered from a “low” alarm being breached. It links to the auto-scaling group; this sample results in one node from the group being taken offline:

    as-put-scaling-policy scale-down-policy
      -g sc-psod-woker-asg
      "--adjustment=-1"                      # in Windows, use double-quotes to surround a negative adjustment value
      --type ChangeInCapacity
      --region eu-west-1

    5. Create a “high” CloudWatch alarm

    We’re done with the command line now. In the Web Console, open up the CloudWatch view and create a new alarm. This alarm will monitor your metrics and invoke the scale-up policy from your auto-scaling group when the group is working too hard. Configure your metric – this example will fire the alarm if there are more than 10 messages in my queue for over a minute – then link the alarm to the scale-up policy in your group.

    6. Create a “low” CloudWatch alarm

    The opposite of step 5: this alarm will trigger when the instances in your group don’t have enough work to do (e.g. fewer than 2 messages in the queue for 1 minute), and will invoke the scale-down policy.

    And that’s it. You don’t need your original VM, as the auto-scale group maintains a minimum number of nodes. You can test out the scaling by flexing your CloudWatch metric – in this example, filling up a queue from a stub publisher – and watching AWS create new nodes as required, then stopping the publisher and watching AWS kill off the spare nodes.
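
    To make the dual-running idea concrete, here is a minimal sketch of what that generic Start()/Stop() entry point can look like when hosted with Topshelf. The QueueWorker class and the service name are hypothetical placeholders; the wiring follows Topshelf’s standard HostFactory sample:

    using Topshelf;

    // Hypothetical worker class; in the dual-running setup it would live in a
    // shared library used by both the Azure Worker Role and this Windows Service.
    public class QueueWorker
    {
        public void Start() { /* begin pulling messages from the queue */ }
        public void Stop()  { /* finish in-flight work and shut down cleanly */ }
    }

    public class Program
    {
        public static void Main()
        {
            HostFactory.Run(host =>
            {
                host.Service<QueueWorker>(svc =>
                {
                    svc.ConstructUsing(settings => new QueueWorker());
                    svc.WhenStarted(worker => worker.Start()); // service start -> generic Start()
                    svc.WhenStopped(worker => worker.Stop());  // service stop  -> generic Stop()
                });
                host.RunAsLocalSystem();
                host.SetServiceName("QueueWorkerService"); // hypothetical name
            });
        }
    }

    Topshelf then lets you register the same exe as a Windows Service using its install verb, so the identical worker code runs in both clouds.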

    Read the article

  • Oracle SQL Developer v3.2.1 Now Available

    - by thatjeffsmith
    Oracle SQL Developer version 3.2.1 is now available. I recommend that everyone now upgrade to this release. It features more than 200 bug fixes, tweaks, and polish applied to the 3.2 edition. The high-profile bug fixes submitted by customers and users on our forums are listed in all their glory for your review. I want to highlight a few of the changes, though, as I recognize many of you lack the time and/or patience to ‘read the docs.’ That would include me, which is why I enjoy writing these kinds of blog posts. I’m lazy – just like you!

    No more artificial line breaks between CREATE OR REPLACE and your PL/SQL
    In versions 3.2 and older, when you pull up your stored procedural objects in our editor, you would see a line break inserted between the CREATE OR REPLACE and the body of your code. In version 3.2.1, we have removed the line break.

    Trivia – Did You Know?
    The database doesn’t store the ‘CREATE’ or ‘CREATE OR REPLACE’ bit of your PL/SQL code. If we look at the USER_SOURCE view, we can see that the code begins with the object name. So the CREATE OR REPLACE bit is ‘artificial.’ The intent is to give you the code necessary to recreate your object – and have it ‘compile’ into the database. We pretty much HAVE to add the ‘CREATE OR REPLACE.’ From now on it will appear inline with the first line of your code.

    Exporting Tables & Views
    When exporting data from your tables or views, previous versions of SQL Developer presented a 3-step wizard. It allowed you to choose your columns and apply data filters for what is exported. This was kind of redundant. The grids already allowed you to select your columns and apply filters. Wouldn’t it be more intuitive AND efficient to just make the grids behave in a What You See Is What You Get (WYSIWYG) fashion? In version 3.2.1, that is exactly what happens. The wizard now only has two steps, and the grid will export the data and columns as defined in the visible grid. Let the grid properties define what is actually exported! And here is what is pasted into my worksheet:

    "BREWERY"|"CITY"
    "3 Brewers Restaurant Micro-Brewery"|"Toronto"
    "Amsterdam Brewing Co."|"Toronto"
    "Ball Brewing Company Ltd."|"Toronto"
    "Big Ram Brewing Company"|"Toronto"
    "Black Creek Historic Brewery"|"Toronto"
    "Black Oak Brewing"|"Toronto"
    "C'est What?"|"Toronto"
    "Cool Beer Brewing Company"|"Toronto"
    "Denison's Brewing"|"Toronto"
    "Duggan's Brewery"|"Toronto"
    "Feathers"|"Toronto"
    "Fermentations! - Danforth"|"Toronto"
    "Fermentations! - Mount Pleasant"|"Toronto"
    "Granite Brewery & Restaurant"|"Toronto"
    "Labatt's Breweries of Canada"|"Toronto"
    "Mill Street Brew Pub"|"Toronto"
    "Mill Street Brewery"|"Toronto"
    "Molson Breweries of Canada"|"Toronto"
    "Molson Brewery at Air Canada Centre"|"Toronto"
    "Pioneer Brewery Ltd."|"Toronto"
    "Post-Production Bistro"|"Toronto"
    "Rotterdam Brewing"|"Toronto"
    "Steam Whistle Brewing"|"Toronto"
    "Strand Brasserie"|"Toronto"
    "Upper Canada Brewing"|"Toronto"

    JUST what I wanted.

    And One Last Thing
    Speaking of export, sometimes I want to send data to Excel. And sometimes I want to send multiple objects to Excel – to a single Excel file, that is. In version 3.2.1 you can now do that. Let’s export the bulk of the HR schema to Excel, with each table going to its own worksheet in the same workbook. If you try this in previous versions of SQL Developer, it will just write the first table to the Excel file. This is one of the bugs we addressed in v3.2.1. The output Excel file now contains many tables, each on its own worksheet in a single workbook. I have a sneaking suspicion that this will be a frequently used feature going forward. Excel seems to be the cornerstone of many of our popular features. Imagine that!

    Read the article

  • How do I prove or disprove "god" objects are wrong?

    - by honestduane
    Problem Summary: Long story short, I inherited a code base and a development team I am not allowed to replace, and the use of God Objects is a big issue. Going forward, I want us to refactor things, but I am getting push-back from the teams, who want to do everything with God Objects "because it's easier," and this means I would not be allowed to refactor. I pushed back citing my years of dev experience and that I'm the new boss who was hired to know these things, etc., and so did the third-party offshore company's account sales rep, and this is now at the executive level. My meeting is tomorrow, and I want to go in with a lot of technical ammo to advocate best practices, because I feel it will be cheaper in the long run for the company (and I personally feel that is what the third party is worried about). My issue is that from a technical level, I know it's good long term, but I'm having trouble with the ultra-short term and the 6-month term, and while it's something I "know," I can't prove it with references and cited resources outside of one person (Robert C. Martin, aka Uncle Bob). I have been told that having data from one person, and only one person, is not a good enough argument.

    Question: What are some resources I can cite directly (title, year published, page number, quote) by well-known experts in the field that explicitly say this use of "God" Objects/Classes/Systems is bad (or good, since we are looking for the most technically valid solution)?

    Research I have already done: I have a number of books here, and I have searched their indexes for the words "god object" and "god class". I found that, oddly, the term is almost never used; the copy of the GoF book I have, for example, never uses it (at least according to the index in front of me). I have found it in the 2 books listed below, but I want more I can use. I also checked the Wikipedia page for "God Object", and it's currently a stub with few reference links, so although I personally agree with what it says, it doesn't have much I can use in an environment where personal experience is not considered valid. The book it cites is also considered too old to be valid by the people I am debating these technical points with; the argument they are making is that "it was once thought to be bad but nobody could prove it, and now modern software says 'god' objects are good to use". I personally believe that this statement is incorrect, but I want to prove the truth, whatever it is.

    In Robert C. Martin's "Agile Principles, Patterns, and Practices in C#" (ISBN: 0-13-185725-8, hardcover), page 266 states: "Everybody knows that god classes are a bad idea. We don't want to concentrate all the intelligence of a system into a single object or a single function. One of the goals of OOD is the partitioning and distribution of behavior into many classes and many functions." -- and then goes on to say that sometimes it's better to use God Classes anyway (citing micro-controllers as an example).

    In Robert C. Martin's "Clean Code: A Handbook of Agile Software Craftsmanship", page 136 (and only this page) talks about the "God class" and calls it out as a prime example of a violation of the "classes should be small" rule he uses to promote the Single Responsibility Principle, starting on page 138.

    The problem I have is that all my references and citations come from the same person (Robert C. Martin), from the same single source. I am being told that because he is just one guy, my desire to not use "God Classes" is invalid and not accepted as a standard best practice in the software industry. Is this true? Am I doing things wrong from a technical perspective by trying to keep to the teachings of Uncle Bob?

    God Objects and Object-Oriented Programming and Design: The more I think about this, the more I think it is something you learn when you study OOP that is never explicitly called out; it's implicit to good design, is my thinking (feel free to correct me, please, as I want to learn). The problem is that I "know" this, but not everybody does, so in this case it's not considered a valid argument, because I am effectively calling it out as universal truth when in fact most people are statistically ignorant of it, since statistically most people are not programmers.

    Conclusion: I am at a loss on what to search for to get the best additional results to cite, since they are making a technical claim and I want to know the truth and be able to prove it with citations like a real engineer/scientist, even if I am biased against god objects due to my personal experience with code that used them. Any assistance or citations would be deeply appreciated.
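
    Purely as an illustrative sketch (hypothetical names, not drawn from any of the books cited above), the "partitioning and distribution of behavior" in the Martin quote contrasts roughly like this:

    // A "god class": one object concentrating several unrelated responsibilities.
    public class StoreManager
    {
        public void ChargeCreditCard()  { /* payment logic */ }
        public void UpdateInventory()   { /* stock logic */ }
        public void EmailReceipt()      { /* notification logic */ }
        public void RenderInvoiceHtml() { /* presentation logic */ }
    }

    // The alternative the quote argues for: behavior partitioned and distributed
    // into many small classes, each with a single responsibility.
    public class PaymentProcessor { public void ChargeCreditCard() { /* payment logic */ } }
    public class Inventory        { public void UpdateStock()      { /* stock logic */ } }
    public class ReceiptMailer    { public void EmailReceipt()     { /* notification logic */ } }
    public class InvoiceRenderer  { public void RenderHtml()       { /* presentation logic */ } }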

    Read the article

  • How to make creating viewmodels at runtime less painful

    - by Mr Happy
    I apologize for the long question; it reads a bit like a rant, but I promise it's not! I've summarized my question(s) below.

    In the MVC world, things are straightforward. The Model has state, the View shows the Model, and the Controller does stuff to/with the Model (basically); a controller has no state. To do stuff, the Controller has some dependencies on web services, repositories, the lot. When you instantiate a controller, you care about supplying those dependencies, nothing else. When you execute an action (a method on the Controller), you use those dependencies to retrieve or update the Model or call some other domain service. If there's any context, say some user wants to see the details of a particular item, you pass the Id of that item as a parameter to the Action. Nowhere in the Controller is there any reference to any state. So far so good.

    Enter MVVM. I love WPF, I love data binding. I love frameworks that make data binding to ViewModels even easier (using Caliburn Micro a.t.m.). I feel things are less straightforward in this world though. Let's do the exercise again: the Model has state, the View shows the ViewModel, and the ViewModel does stuff to/with the Model (basically); a ViewModel does have state! (To clarify: maybe it delegates all the properties to one or more Models, but that means it must have a reference to the Model one way or another, which is state in itself.) To do stuff, the ViewModel has some dependencies on web services, repositories, the lot. When you instantiate a ViewModel, you care about supplying those dependencies, but also the state. And this, ladies and gentlemen, annoys me to no end.

    Whenever you need to instantiate a ProductDetailsViewModel from the ProductSearchViewModel (from which you called the ProductSearchWebService, which in turn returned IEnumerable<ProductDTO>, everybody still with me?), you can do one of these things:

    - Call new ProductDetailsViewModel(productDTO, _shoppingCartWebService /* dependency */);. This is bad: imagine 3 more dependencies; this means the ProductSearchViewModel needs to take on those dependencies as well. Also, changing the constructor is painful.
    - Call _myInjectedProductDetailsViewModelFactory.Create().Initialize(productDTO);. The factory is just a Func, easily generated by most IoC frameworks. I think this is bad because Init methods are a leaky abstraction. You also can't use the readonly keyword for fields that are set in the Init method. I'm sure there are a few more reasons.
    - Call _myInjectedProductDetailsViewModelAbstractFactory.Create(productDTO);. So... this is the pattern (abstract factory) that is usually recommended for this type of problem; a sketch of it follows below. I thought it was genius, since it satisfies my craving for static typing, until I actually started using it. The amount of boilerplate code is, I think, too much (you know, apart from the ridiculous variable names I get to use). For each ViewModel that needs runtime parameters you get two extra files (factory interface and implementation), and you need to type the non-runtime dependencies like 4 extra times. And each time the dependencies change, you get to change the factory as well. It feels like I don't even use a DI container anymore. (I think Castle Windsor has some kind of solution for this [with its own drawbacks, correct me if I'm wrong].)
    - Do something with anonymous types or a dictionary. I like my static typing.

    So, yeah. Mixing state and behavior in this way creates a problem which doesn't exist at all in MVC. And I feel like there currently isn't a really adequate solution for this problem. Now I'd like to observe some things:

    - People actually use MVVM. So they either don't care about all of the above, or they have some brilliant other solution.
    - I haven't found an in-depth example of MVVM with WPF. For example, the NDDD-sample project immensely helped me understand some DDD concepts. I'd really like it if someone could point me in the direction of something similar for MVVM/WPF.
    - Maybe I'm doing MVVM all wrong and I should turn my design upside down. Maybe I shouldn't have this problem at all. Well, I know other people have asked the same question, so I think I'm not the only one.

    To summarize:

    - Am I correct to conclude that having the ViewModel be an integration point for both state and behavior is the reason for some difficulties with the MVVM pattern as a whole?
    - Is using the abstract factory pattern the only/best way to instantiate a ViewModel in a statically typed way?
    - Is there something like an in-depth reference implementation available?
    - Is having a lot of ViewModels with both state/behavior a design smell?
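
    For concreteness, here is a minimal sketch of the abstract-factory option described above, with hypothetical types standing in for the real DTOs and web services:

    // Hypothetical stand-ins for the types mentioned in the question.
    public class ProductDTO { }
    public interface IShoppingCartWebService { }

    public class ProductDetailsViewModel
    {
        private readonly ProductDTO _product;                   // runtime state
        private readonly IShoppingCartWebService _shoppingCart; // injected dependency

        public ProductDetailsViewModel(ProductDTO product, IShoppingCartWebService shoppingCart)
        {
            _product = product;
            _shoppingCart = shoppingCart;
        }
    }

    // The factory takes the runtime state as a Create() parameter, while the
    // non-runtime dependencies are injected once into the factory itself.
    public interface IProductDetailsViewModelFactory
    {
        ProductDetailsViewModel Create(ProductDTO product);
    }

    public class ProductDetailsViewModelFactory : IProductDetailsViewModelFactory
    {
        private readonly IShoppingCartWebService _shoppingCart;

        public ProductDetailsViewModelFactory(IShoppingCartWebService shoppingCart)
        {
            _shoppingCart = shoppingCart;
        }

        public ProductDetailsViewModel Create(ProductDTO product)
        {
            // usage from the search ViewModel: _factory.Create(productDTO);
            return new ProductDetailsViewModel(product, _shoppingCart);
        }
    }

    Note that every new dependency has to be added to the factory's constructor, stored, and threaded into Create(); that is exactly the boilerplate being complained about.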

    Read the article

  • Adjusting server-side tickrate dynamically

    - by Stuart Blackler
    I know nothing of game development/this site, so I apologise if this is completely foobar. Today I experimented with building a small game loop for a network game (think MW3, CS:GO etc). I was wondering why they do not build in automatic rate adjustment based on server performance? Would it affect the client that much if the client knew each frame is based on this tickrate? Has anyone attempted this before? Here is what my noobish C++ brain came up with earlier. It will improve the tickrate if it has been stable for x ticks. If it "lags", the tickrate will be reduced by y amount:

    // GameEngine.cpp : Defines the entry point for the console application.
    #include "stdafx.h"   // must come first for MSVC precompiled headers
    #ifdef WIN32
    #include <Windows.h>
    #else
    #include <sys/time.h>
    #include <ctime>
    #endif
    #include <iostream>

    using namespace std;

    UINT64 GetTimeInMs()
    {
    #ifdef WIN32
        /* Windows: get the number of 100-nanosecond intervals elapsed since
         * January 1, 1601 (UTC) and copy it to a LARGE_INTEGER structure. */
        FILETIME ft;
        LARGE_INTEGER li;
        GetSystemTimeAsFileTime(&ft);
        li.LowPart = ft.dwLowDateTime;
        li.HighPart = ft.dwHighDateTime;
        UINT64 ret = li.QuadPart;
        ret -= 116444736000000000LL; /* convert from file time to UNIX epoch time */
        ret /= 10000;                /* from 100 ns (10^-7) to 1 ms (10^-3) intervals */
        return ret;
    #else
        /* Linux */
        struct timeval tv;
        gettimeofday(&tv, NULL);
        UINT64 ret = tv.tv_usec;
        ret /= 1000;                 /* convert microseconds (10^-6) to milliseconds (10^-3) */
        ret += (tv.tv_sec * 1000);   /* add the seconds (10^0), converted to milliseconds */
        return ret;
    #endif
    }

    int _tmain(int argc, _TCHAR* argv[])
    {
        int sv_tickrate_max = 1000;     // maximum ticks per second
        int sv_tickrate_min = 100;      // minimum ticks per second
        int sv_tickrate_adjust = 10;    // how much to de/increment the tickrate by
        int sv_tickrate_stable_before_increment = 1000; // stable ticks required before the tickrate is increased again
        int sys_tickrate_current = sv_tickrate_max; // always start at the highest tickrate for the best performance
        int counter_stable_ticks = 0;   // how many ticks we have not lagged for

        UINT64 __startTime = GetTimeInMs();
        int ticks = 100000;
        while (ticks > 0)
        {
            int maxTimeInMs = 1000 / sys_tickrate_current;
            UINT64 _startTime = GetTimeInMs();
            // Long code here...
            cout << ".";
            UINT64 _timeTaken = GetTimeInMs() - _startTime;
            if (_timeTaken < maxTimeInMs)
            {
                Sleep(maxTimeInMs - _timeTaken);
                counter_stable_ticks++;
                if (counter_stable_ticks >= sv_tickrate_stable_before_increment)
                {
                    // reset the stable-tick counter
                    counter_stable_ticks = 0;
                    // make sure that we don't go over the maximum tickrate
                    if (sys_tickrate_current + sv_tickrate_adjust <= sv_tickrate_max)
                    {
                        sys_tickrate_current += sv_tickrate_adjust;
                        // let me know in console #DEBUG
                        cout << endl << "Improving tickrate. New tickrate: " << sys_tickrate_current << endl;
                    }
                }
            }
            else if (_timeTaken > maxTimeInMs)
            {
                cout << endl;
                if ((sys_tickrate_current - sv_tickrate_adjust) > sv_tickrate_min)
                {
                    sys_tickrate_current -= sv_tickrate_adjust;
                }
                else
                {
                    if (sys_tickrate_current == sv_tickrate_min)
                    {
                        cout << "Please reduce sv_tickrate_min..." << endl;
                    }
                    else
                    {
                        sys_tickrate_current = sv_tickrate_min;
                    }
                }
                // let me know in console #DEBUG
                cout << "The server has lag. Reduced tickrate to: " << sys_tickrate_current << endl;
            }
            ticks--;
        }
        UINT64 __timeTaken = GetTimeInMs() - __startTime;
        cout << endl << endl << "Total time in ms: " << __timeTaken;
        cout << endl << "Ending tickrate: " << sys_tickrate_current;
        char test;
        cin >> test;
        return 0;
    }

    Read the article

  • Why won't USB 3.0 external hard drive run at USB 3.0 speeds?

    - by jgottula
    I recently purchased a PCI Express x1 USB 3.0 controller card (containing the NEC USB 3.0 controller) with the intent of using a USB 3.0 external hard drive with my Linux box. I installed the card in an empty PCIe slot on my motherboard, connected the card to a power cable, strung a USB 3.0 cable between one of the new ports and my external HDD, and connected the HDD to a wall socket for power.

    Booting the system, the drive works 100% as intended, with the one exception of throughput: rather than using SuperSpeed 4.8 Gbps connectivity, it seems to be falling back to High Speed 480 Mbps USB 2.0-style throughput. Disk Utility shows it as a 480 Mbps device, and running a couple Disk Utility and dd benchmarks confirms that the drive fails to exceed ~40 MB/s (the approximate limit of USB 2.0), despite it being an SSD capable of far more than that.

    When I connect my USB 3.0 HDD, dmesg shows this:

    [ 3923.280018] usb 3-2: new high speed USB device using ehci_hcd and address 6

    where I would expect to find this:

    [ 3923.280018] usb 3-2: new SuperSpeed USB device using xhci_hcd and address 6

    My system was running on kernel 2.6.35-25-generic at the time. Then, I stumbled upon this forum thread by an individual who found that a bug, which was present in kernels prior to 2.6.37-rc5, could be the culprit for this type of problem. Consequently, I installed the 2.6.37-generic mainline Ubuntu kernel to determine if the problem would go away. It didn't, so I tried 2.6.38-rc3-generic, and even the 2.6.38 nightly from 2010.02.01, to no avail. In short, I'm trying to determine why, with USB 3.0 support in the kernel, my USB 3.0 drive fails to run at full SuperSpeed throughput. See the comments under this question for additional details.

    Output that might be relevant to the problem (when booting from 2.6.38-rc3):

    Relevant lines from dmesg:

    [ 19.589491] xhci_hcd 0000:03:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
    [ 19.589512] xhci_hcd 0000:03:00.0: setting latency timer to 64
    [ 19.589516] xhci_hcd 0000:03:00.0: xHCI Host Controller
    [ 19.589623] xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 12
    [ 19.650492] xhci_hcd 0000:03:00.0: irq 17, io mem 0xf8100000
    [ 19.650556] xhci_hcd 0000:03:00.0: irq 47 for MSI/MSI-X
    [ 19.650560] xhci_hcd 0000:03:00.0: irq 48 for MSI/MSI-X
    [ 19.650563] xhci_hcd 0000:03:00.0: irq 49 for MSI/MSI-X
    [ 19.653946] xHCI xhci_add_endpoint called for root hub
    [ 19.653948] xHCI xhci_check_bandwidth called for root hub

    Relevant section of sudo lspci -v:

    03:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03) (prog-if 30)
        Flags: bus master, fast devsel, latency 0, IRQ 17
        Memory at f8100000 (64-bit, non-prefetchable) [size=8K]
        Capabilities: [50] Power Management version 3
        Capabilities: [70] MSI: Enable- Count=1/8 Maskable- 64bit+
        Capabilities: [90] MSI-X: Enable+ Count=8 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number ff-ff-ff-ff-ff-ff-ff-ff
        Capabilities: [150] #18
        Kernel driver in use: xhci_hcd
        Kernel modules: xhci-hcd

    Relevant section of sudo lsusb -v:

    Bus 012 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Device Descriptor:
      bLength                18
      bDescriptorType         1
      bcdUSB               3.00
      bDeviceClass            9 Hub
      bDeviceSubClass         0 Unused
      bDeviceProtocol         3
      bMaxPacketSize0         9
      idVendor           0x1d6b Linux Foundation
      idProduct          0x0003 3.0 root hub
      bcdDevice            2.06
      iManufacturer           3 Linux 2.6.38-020638rc3-generic xhci_hcd
      iProduct                2 xHCI Host Controller
      iSerial                 1 0000:03:00.0
      bNumConfigurations      1
      Configuration Descriptor:
        bLength                 9
        bDescriptorType         2
        wTotalLength           25
        bNumInterfaces          1
        bConfigurationValue     1
        iConfiguration          0
        bmAttributes         0xe0
          Self Powered
          Remote Wakeup
        MaxPower                0mA
        Interface Descriptor:
          bLength                 9
          bDescriptorType         4
          bInterfaceNumber        0
          bAlternateSetting       0
          bNumEndpoints           1
          bInterfaceClass         9 Hub
          bInterfaceSubClass      0 Unused
          bInterfaceProtocol      0 Full speed (or root) hub
          iInterface              0
          Endpoint Descriptor:
            bLength                 7
            bDescriptorType         5
            bEndpointAddress     0x81  EP 1 IN
            bmAttributes            3
              Transfer Type            Interrupt
              Synch Type               None
              Usage Type               Data
            wMaxPacketSize     0x0004  1x 4 bytes
            bInterval              12
    Hub Descriptor:
      bLength                 9
      bDescriptorType        41
      nNbrPorts               4
      wHubCharacteristic 0x0009
        Per-port power switching
        Per-port overcurrent protection
        TT think time 8 FS bits
      bPwrOn2PwrGood         10 * 2 milli seconds
      bHubContrCurrent        0 milli Ampere
      DeviceRemovable      0x00
      PortPwrCtrlMask      0xff
    Hub Port Status:
      Port 1: 0000.0100 power
      Port 2: 0000.0100 power
      Port 3: 0000.0100 power
      Port 4: 0000.0100 power
    Device Status:     0x0003
      Self Powered
      Remote Wakeup Enabled

    Full, non-verbose lsusb:

    Bus 012 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Bus 011 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 010 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 009 Device 003: ID 04d9:0702 Holtek Semiconductor, Inc.
    Bus 009 Device 002: ID 046d:c068 Logitech, Inc. G500 Laser Mouse
    Bus 009 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 003 Device 006: ID 174c:5106 ASMedia Technology Inc.
    Bus 003 Device 004: ID 0bda:0151 Realtek Semiconductor Corp. Mass Storage Device (Multicard Reader)
    Bus 003 Device 002: ID 058f:6366 Alcor Micro Corp. Multi Flash Reader
    Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 002 Device 006: ID 1687:0163 Kingmax Digital Inc.
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 001 Device 002: ID 046d:081b Logitech, Inc.
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    Full output: full dmesg, full lspci, full lsusb

    Read the article

  • The Dreaded Startup Repair Loop on Win 7

    - by HighAltitudeCoder
    For most people, upgrading to Windows 7 has been a relatively painless process. Not me. I am in the unlucky 1% or less who had a somewhat less pleasant experience.

    First, I cloned my entire system onto a larger (and much faster) solid state hard drive, only experiencing minimal problems. Then, I bought the Retail version of Windows 7 Ultimate, took a deep breath and... oh yeah, I almost forgot - BACK UP THE COMPUTER. The next morning I upgraded to Win 7 and everything seemed fine, until... I rebooted the system, the nice Windows 7 launch graphics came up, it was about to launch and AWWW, are you kidding me?!? Back to the BIOS splash screen? Next comes the sequence of failure - attempt repair - unable to repair - do-you-want-to-wipe-your-hard-drive decisions.

    Because I purchased the retail version, a number is provided where I could call Microsoft tech support. When I did, they instructed me to click "Install" from my installation CD, which did not work. When I tried the "Upgrade" option, it reached an impasse, telling me that I had a newer version of Win 7 and thus could not upgrade. If you choose "Install" you will lose everything... files, programs, EVERYTHING. Or at least this is what it tells you. I was not willing to take the risk.

    To make things worse, I had installed a new antivirus application (Trend Micro Titanium Internet Security) before I realized my system was unstable, and this was causing additional problems.

    One interesting thing, and the only saving grace as it turns out, was that my system WOULD successfully reboot into the OS if I chose to restart it, rather than shut it down. If I chose to shut down, I would have to go through the loop again until I was given the option to restart.

    As it turned out, I needed to update my BIOS settings. Since I had updated my BIOS a long time ago to settings that were stable under Windows Vista Ultimate x64, I incorrectly expected Win 7 to adopt the same settings and didn't expect there to be any problems. WRONG.

    My BIOS had a setting to halt the boot cycle if various kinds of errors were detected. Windows Vista didn't care about this, but forget it under Windows 7. I immediately corrected that BIOS setting. Next, there were two separate BIOS settings: enable USB mouse and enable USB keyboard. The only sequence of events that would work was to start my reboot process over from scratch with a hard-wired, non-USB keyboard and mouse. When the system booted under these settings, it didn't detect any errors due to either the mouse or keyboard, and it actually booted for the first time in a long while (let me tell ya, that's an amazing experience after fiddling with settings for two entire weekends!).

    Next step: leave your old mouse and keyboard connected, but also connect your two other devices (mouse, keyboard) that use USB connections. During the boot cycle, the operating system will not fail due to missing requirements during startup, and it will then pick up the new drivers necessary to use your new hardware.

    If you think you are in the clear here, you are wrong. The next VERY IMPORTANT step is to remember to change your settings in the BIOS upon the next startup. Specifically, you will need to change your BIOS to enable USB mouse and USB keyboard input. If you don't, Windows will detect an incompatibility upon the next startup, and you will be stuck once again in the endless cycle of reboot/Startup Repair/reboot/Startup Repair, without ever reaching a successful boot.

    Here's the thing: the BIOS and the drivers registered in Win 7 need to match. If they don't, you're going to lose another weekend worrying and fiddling, all the while wondering if you've permanently damaged your hard drive beyond repair. (Sigh.) In the end, things worked out.

    I must note that it is saddening to see how many posts there are out there that recommend just doing a clean install, as if it's the only option. How many countless poor souls have lost their data, their backups, their pictures and videos, all for nothing other than the fact that the person giving advice just didn't know what to do at that point? My advice to you: try having a look at your BIOS settings first, making sure Win 7 can find your BIOS settings, and also disabling in your BIOS anything that might halt your system boot-up process if it encounters errors.

    Read the article

  • Introducing… SharePress!

    - by Bil Simser
    For those that follow me, I’ve been away from blogging and twittering for a couple of months. This is the reason. For the last few months I’ve been working with a cross-functional team putting together a new product from the people that run WordPress, the free premiere blogging platform. The result is a new product we call SharePress, a highly extensible blogging and content management platform with the usability of WordPress and the power of SharePoint combined into a single product. SharePress gives you SharePoint sites that are SEO-friendly, delivered with a Web 2.0 ease of use, leveraging all of the existing abilities of SharePoint and WordPress that we know today.

    The Reason
    Back in December I was approached by the WordPress team about building a new platform that took advantage of the power of SharePoint but the ease of WordPress. I’m no stranger to WordPress and its 5-minute no-holds-barred install (I’ve always wanted SharePoint to do this!), and I run my personal blog on WordPress, as does my better half, Princess Jenn. There’s always been a pitch by so-called Web 2.0 applications to deliver the power of SharePoint but the ease of [insert product here] over the past year or so. I checked each and every one of them out, but they fell woefully short when it came to SharePoint’s document management, versioning, and customization. They try, but it’s never been up to par in my books. On the flip side, SharePoint has always been tops in collaboration in the Enterprise, but it’s painful to develop web parts, UI customization can be tricky, and there’s just no user community for something as simple as themes and designs.

    The Product
    Enter SharePress. Is it SharePoint? Is it WordPress? It’s both, and neither. Everything you like about both products is there, but this is a bold new product that is positioned to bring SharePoint to the masses while maintaining the fidelity of an Enterprise 2.0 collaboration platform. SharePress delivers on all fronts, including:

    - The ability to leverage any WordPress/Joomla/Drupal/DotNetNuke themes and skins inside of SharePoint
    - Any WordPress/Drupal/Joomla/DotNetNuke/SharePoint plug-in/module/web part/feature works out of the box with SharePress
    - SEO-friendly URLs and pages
    - Permalinks for all content
    - All the features of SharePoint Server 2010 (including InfoPath, Excel, and Access services) included in the price
    - Small deployment footprint. You decide how much to deploy and where.
    - Independent Database Abstraction Layer (iDal) that allows you to deploy to SQL Server 2005/2008, MySQL, and PostgreSQL
    - Portable Rendering Engine Layer (PREL) so you can host .NET or PHP on Apache or IIS (version 7 or higher)

    The install feature is built around WordPress and its famous 5-minute install (actually, it’s never taken me more than 1 minute). SharePress installs with two screens after the files are uploaded to your server (which can be done entirely using FTP). After you enter two fields of information, click “Install SharePress” and you’ll be done. No mess, no fuss, no complicated dependencies, and no server access required! How much simpler could this be?

    The Technology
    WordPress plug-ins and themes working with SharePoint? Of course! The answer is IronPython, which has now reached a maturity level capable of doing on-the-fly code language conversions. SharePress is a brand new product, not built on top of any previous platform, but it leverages all the power of each of those applications through a patent-pending technique called SharePress Multi-plAtfoRm Technology (SMART). SMART will convert PHP code on the fly into Python (using SWIG as an intermediate processor), which is then compiled to MSIL and delivered back as an ASP.NET MVC application (output is C# or VB.NET, but you can build your own SMART converter to output a different language). Sound complicated? It is, but it’s all behind the scenes and you don’t have to worry about a thing. An image in the original post illustrates the technology stack and process.

    So users can load up out-of-the-box PHP themes and plug-ins from the WordPress/Joomla/Drupal community into the SMART converter and output MSIL that is used by the SharePress engine and rendered on the fly to the end user. Supported PHP versions are 4.xx and 5.xx, with version 6 support to come when it’s released. Similarly, you can take any .NET application, DotNetNuke module, SharePoint web part, or event handler and feed it into the converter to output the same. Everything is reverse-compiled into MSIL so it becomes technology agnostic. No source code access is needed, and the SMART converter can handle obfuscated .NET assemblies that were built with .NET 1.0, 1.1, 2.0, 3.5, and 4.0. With this technology you can also, with the flip of a switch, have the output create PHP pages for you. This allows you to run SharePress on Unix-based systems running PHP and MySQL, letting you deliver your SharePoint-like experience to your users with a $0 infrastructure footprint.

    Here’s SharePress with the default WordPress post imported, then a stock SharePoint collaboration site imported, with the default Kubrick theme from WordPress applied to the site.

    The Features
    - Deploy any of the freely available 100,000 WordPress/Joomla/Drupal themes instantly to your runtime SharePress environment and preview or activate them right from your browser.
    - Built-in Web 2.0 jQuery-enabled end user and administrator web interface. Never have to remote into a server again!
    - Run any SharePoint web part or event handler directly, without modification or access to source code.
    - Use any WordPress/Joomla/Drupal plug-in directly in SharePress; no local admin or access to the server. Just upload and activate.
    - Upload and activate any SharePoint Solution Package to any site remotely. No rebuilding. Changes made to sites require no compiling or rebuilding and are published immediately.
    - Password-protected content. You can give passwords to individual posts, articles, pages, documents, forms, and list items. A powerful polymorphic Captcha system backs the security interface, and vendors can easily tie into smart card readers, fingerprint readers, and retina scanners for authorization and identification. OpenID, Windows Live, and Windows Authentication are supported out of the box.
    - Infinitely customizable and extensible. You can leverage plug-ins from the open source community to do practically anything, all configured and uploaded via the browser. Additionally, the developer API (available soon) allows you to build extensions in .NET, PHP, and Python with little effort.
    - Easy importing. We have importers for Blogger, WordPress, Drupal, Joomla, DotNetNuke, and SharePoint, so you can populate your site quickly and easily, with full metadata modeling and creation.
    - Banner management. It’s easy to set up banners for your web site, complete with impression numbers, special URLs, and more.
    - Menu Manager. The Menu Manager allows you to create as many menus as you want; each one can be associated with specific audiences or roles and then styled across multiple contexts, including the same menu delivered as a fly-out, rollover, drop-down, and just about any navigation you can think of.
    - Collaborative ShareBook. Our exclusive book feature allows you to set up a “book” and then authorize individuals to contribute content.
    - Permalinks. All content in SharePress has a permanent or “perma” link associated with it, so people can link to it freely without fear of broken links.
    - Apache or IIS, Unix / Linux / BSD / Solaris / Windows / Mac OS X support. Deliver SharePress the way *you* want from the platform *you* decide.
    - Database independence. We know people want to run on any database platform, so SharePress is built on top of a database abstraction layer that allows you to run on SQL Server, MySQL, or PostgreSQL. Other databases can be supported by writing a supporting database script consisting of fourteen function calls. The script can be written in Perl, Python, AWK, PowerShell, Unix shell scripts, VBA, or simple DOS batch files.

    The Team
    SharePress is the work of a lot of people in both the WordPress and SharePoint communities. I worked with a lot of SharePoint MVPs to create this new product, as we really wanted to deliver the most compatible and feature-rich system in a product that we would be proud of. Many thanks go out to Eli Bleeker, Todd Robillard, Scot Larson, Daniel Hillier, Shane Fox, Box Peran, Amanda English, and Bill Murray for doing the heavy lifting and for all of their expertise and innovative thinking to get this product out.

    Licensing and Pricing
    SharePress is still in the final stages for pricing, but we’re looking at a price point somewhere between $99 and $100 to make it affordable for everyone. We plan to announce final pricing sometime in the next few weeks. There are no additional charges for Enterprise versions or additional features. Everything you see is what’s available, and it’s just a matter of lighting up your site with whatever feature you want to enable. The product will not be open source, but source code licenses will be available to ISVs who are interested in interfacing with the API at a low level. Cost will be $25,000 USD per developer and gives you complete access to the source code of the SharePress Foundation System and the .NET 4.0 Framework source code.

    Conclusion
    We hope you enjoy the launch of SharePress as the new premium blogging and content management platform for both intranets and the Internet. We think we’ve built the best-of-breed solution here and made it easy for anyone to get started with minimal infrastructure, while allowing the scalability of SharePress to shine through in the Enterprise 2.0 world. We encourage your feedback, so please leave comments as to what you’re looking for in this system, as we’re always evolving it to make it a better product for everyone.

    Read the article

  • Windows 8 Will be Here Tomorrow; but Should Silverlight be Gone Today?

    - by andrewbrust
The software industry lives within an interesting paradox. IT in the enterprise moves slowly and cautiously, upgrading only when safe and necessary. IT interests intentionally live in the past. On the other hand, developers and Independent Software Vendors (ISVs) not only want to use the latest and greatest technologies, but this constituency prides itself on gauging tech's future and basing its present-day strategy upon it. Normally, we as an industry manage this paradox with a shrug of the shoulder and musings along the lines of "it takes all kinds." Different subcultures have different tendencies. So be it. Microsoft, with its Windows operating system (OS), can't take such a laissez-faire view of the world though. Redmond relies on IT to deploy Windows and (at the very least) influence its procurement, but it also relies on developers to build software for Windows, especially software that has a dependency on features in new versions of the OS. It must indulge and nourish developers' fetish for an early birthing of the next generation of software, even as it acknowledges the IT reality that the next wave will arrive on schedule in Redmond and will travel very slowly to end users. With the move to Windows 8, and the corresponding shift in application development models, this paradox is certainly in place. On the one hand, the next version of Windows is widely expected sometime in 2012, and its full-scale deployment will likely push into 2014 or even later. Meanwhile, there's a technology that runs on today's Windows 7, will continue to run in the desktop mode of Windows 8 (the next version's codename), and provides absolutely the best architectural bridge to the Windows 8 Metro-style application development stack. That technology is Silverlight. And given what we now know about Windows 8, one might think, as I do, that Microsoft ecosystem developers should be flocking to it. But because developers are trying to get a jump on the future, and since many of them believe the impending v5.0 release of Silverlight will be the technology's last, not everyone is flocking to it; in fact some are fleeing from it. Is this sensible? Is it not unprecedented? What options does it lead to? What's the right way to think about the situation? Is v5.0 really the last major version of the technology called Silverlight? We don't know. But Scott Guthrie, the "father" and champion of the technology, left the Developer Division of Microsoft months ago to work on the Windows Azure team, and he took his people with him. John Papa, who was a very influential Redmond-based evangelist for Silverlight (and is a Visual Studio Magazine author), left Microsoft completely. About a year ago, when initial suspicion of Silverlight's demise reached significant magnitude, Papa interviewed Guthrie on video and their discussion served to dispel developers' fears; but now they've moved on. So read into that what you will, and let's suppose, for the sake of argument, that the speculation is correct and Silverlight's days of major revision and iteration are over. Let's assume the shine and glimmer have dimmed. Let's assume that any Silverlight application written today, and therefore any investment of financial and human resources made in Silverlight development today, is destined for rework and extra investment in a few years, if the application's platform needs to stay current. Is this really so different from any technology investment we make?
Every framework, language, runtime and operating system is subject to change, to improvement, to flux and, yes, to obsolescence. What differs from project to project is how near-term that obsolescence is and how disruptive the change will be. The shift from .NET 1.1 to 2.0 was incremental. Some of the further changes were too. But the switch from Windows Forms to WPF was major, and the change from ASP.NET Web Services (asmx) to Windows Communication Foundation (WCF) was downright fundamental. Meanwhile, the transition to the .NET development model for Windows 8 Metro-style applications is actually quite gentle. The finer points of this subject are covered nicely in Magenic's excellent white paper "Assessing the Windows 8 Development Platform." As the authors of that paper (including Rocky Lhotka) point out, Silverlight code won't just "port" to Windows 8. And, no, Silverlight user interfaces won't either; Metro also supports XAML, but that relationship is not commutative. But the concepts, the syntax, the architecture and developers' skills map from Silverlight to Windows 8 Metro and the Windows Runtime (WinRT) very nicely. That's not a coincidence. It's not an accident. This is a protected transition. It's not a slap in the face. There are a few things that are unnerving about this transition, which make it seem markedly different from others:

1. The assumed end of the road for Silverlight is something many think they can see. Instead of being ignorant of the technology's expiration date, we believe we know it. If ignorance is bliss, it would seem our situation lacks it.
2. The new technology involving WinRT and Metro involves a name change from Silverlight.
3. .NET, which underlies both Silverlight and the XAML approach to WinRT development, has just about reached 10 years of age. That's equivalent to 80 in human years, or so many fear.

My take is that the combination of these three factors has contributed to what for many is a psychologically compelling case that Silverlight should be abandoned today and HTML 5 (the agnostic kind, not the Windows RT variety) should be embraced in its stead. I understand the logic behind that. I appreciate the preemptive, proactive, vigilant conscientiousness involved in its calculus. But for a great many scenarios, I don't agree with it. HTML 5 clients, no matter how impressive their interactivity and the emulation of native application interfaces they present may be, are still second-class clients. They are getting better, especially when hardware acceleration and fast processors are involved. But they still lag. They still feel like they're emulating something, like they're prototypes, like they're not comfortable in their own skins. They are based on compromise, and they feel compromised too. HTML 5/JavaScript development tools are getting better, and will get better still, but they are not as productive as tools for other environments, like Flash, like Silverlight, or even the more primitive tooling for iOS or Android. HTML's roots as a document markup language, rather than an application interface, create a disconnect that impedes productivity. I do not necessarily think that problem is insurmountable, but it's here today. If you're building line-of-business applications, you need a first-class client and you need productivity. Lack of productivity increases your costs and worsens your backlog. A second-class client will erode user satisfaction, which is never good.
Worse yet, this erosion will be inconspicuous, rather than easily identified and diagnosed, because the inferiority of an HTML 5 client relative to a native one is hard to identify and, notably, doing so at this juncture in the industry is unpopular. Why would you fault a technology that everyone believes is revolutionary? Instead, user disenchantment will remain latent and yet will add to the malaise caused by slower development. If you're an ISV and you're coveting the reach of running multi-platform, it's a different story. You've likely wanted to move to HTML 5 already, and the uncertainty around Silverlight may be the only remaining momentum or pretext you need to make the shift. You're deploying many more copies of your application than a line-of-business developer is anyway; this makes the economic hit from lower productivity less impactful, and the wider potential installed base might even make it profitable. But no matter who you are, it's important to take stock of the situation and do it accurately. Continued but merely incremental changes in a development model lead to conservatism and a general lack of innovation in the underlying platform. Periods of stability and equilibrium are necessary, but permanence in that equilibrium leads to loss of platform relevance, market share and utility. Arguably, that's already happened to Windows. The change Windows 8 brings is necessary and overdue. The marked changes in using .NET if we're to build applications for the new OS are inevitable. We will ultimately benefit from the change, and what we can reasonably hope for in the interim is a migration path for our code and skills that is navigable, logical and conceptually comfortable. That path takes us to a place called WinRT, rather than a place called Silverlight. But considering everything that is changing for the good, the number of disruptive changes is impressively minimal. The name may be changing, and there may even be some significance to that in terms of Microsoft's internal management of products and technologies. But as the consumer, you should care about the ingredients, not the name. Turkish coffee and Greek coffee are much the same. Although you'll find plenty of interested parties who find the names significant, drinkers of the beverage should enjoy either one. It's all coffee, it's all sweet, and you can tell your fortune from the grounds that are left at the end. Back on the software side, it's all XAML, and C# or VB .NET, and you can make your fortune from the product that comes out at the end. Coffee drinkers wouldn't switch to tea. Why should XAML developers switch to HTML?
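To make the "skills map nicely" argument concrete, here is a hedged illustration (not from the article): a view model written only against types that exist both in Silverlight 5 and in the .NET profile for Metro-style apps can compile unchanged on either side; only the XAML host around it differs. All names here are hypothetical.

    using System.Collections.ObjectModel;
    using System.ComponentModel;

    // Hypothetical view model: INotifyPropertyChanged and ObservableCollection
    // are available to both Silverlight 5 and .NET Metro-style apps, so this
    // class carries over as-is; it is the surrounding shell that changes.
    public class CustomerViewModel : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        private string _name;
        public string Name
        {
            get { return _name; }
            set
            {
                if (_name == value) return;
                _name = value;
                OnPropertyChanged("Name");
            }
        }

        public ObservableCollection<string> Orders { get; private set; }

        public CustomerViewModel()
        {
            Orders = new ObservableCollection<string>();
        }

        protected void OnPropertyChanged(string propertyName)
        {
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }

The binding expressions in the XAML ({Binding Name}, {Binding Orders}) would look the same in both worlds, which is the continuity the article is describing.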

    Read the article

  • Top 5 Developer Enabling Nuggets in MySQL 5.6

    - by Rob Young
MySQL 5.6 is truly a better MySQL and reflects Oracle's commitment to the evolution of the most popular and widely used open source database on the planet. The feature-complete 5.6 release candidate was announced at MySQL Connect in late September, and the production-ready, generally available ("GA") product should be available in early 2013. While the message around 5.6 has been focused mainly on mass appeal, advanced topics like performance/scale, high availability, and self-healing replication clusters, MySQL 5.6 also provides many developer-friendly nuggets that are designed to enable those who are building the next generation of web-based and embedded applications and services. Boiling the 5.6 feature set down into a smaller set of simple, easy-to-use goodies designed with developer agility in mind, these things deserve a quick look:

Subquery Optimizations

Using semi-JOINs and late materialization, the MySQL 5.6 Optimizer delivers greatly improved subquery performance. Specifically, the optimizer is now more efficient in handling subqueries in the FROM clause; materialization of subqueries in the FROM clause is now postponed until their contents are needed during execution. Additionally, the optimizer may add an index to derived tables during execution to speed up row retrieval. Internal tests run using the DBT-3 benchmark Query #13, shown below, demonstrate an order-of-magnitude improvement in execution times (from days to seconds) over previous versions.

    select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
    from customer, orders, lineitem
    where o_orderkey in (
            select l_orderkey
            from lineitem
            group by l_orderkey
            having sum(l_quantity) > 313
        )
        and c_custkey = o_custkey
        and o_orderkey = l_orderkey
    group by c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice
    order by o_totalprice desc, o_orderdate
    LIMIT 100;

What does this mean for developers? For starters, simplified subqueries can now be coded instead of complex joins for cross-table lookups:

    SELECT title FROM film
    WHERE film_id IN (SELECT film_id FROM film_actor GROUP BY film_id HAVING count(*) > 12);

And, even more importantly, subqueries embedded in packaged applications no longer need to be re-written into joins. This is good news for both ISVs and their customers who have access to the underlying queries and who have spent development cycles writing, testing and maintaining their own versions of re-written queries across updated versions of a packaged app. The details are in the MySQL 5.6 docs.

Online DDL Operations

Today's web-based applications are designed to rapidly evolve and adapt to meet business and revenue-generation requirements. As a result, development SLAs are now most often measured in minutes vs. days or weeks. For example, when an application must quickly support new product lines or new products within existing product lines, the backend database schema must adapt in kind, most commonly while the application remains available for normal business operations. MySQL 5.6 supports this level of online schema flexibility and agility by providing the following new ALTER TABLE online DDL syntax additions:

- CREATE INDEX
- DROP INDEX
- Change AUTO_INCREMENT value for a column
- ADD/DROP FOREIGN KEY
- Rename COLUMN
- Change ROW FORMAT, KEY_BLOCK_SIZE for a table
- Change COLUMN NULL, NOT_NULL
- Add, drop, reorder COLUMN

Again, the details are in the MySQL 5.6 docs.
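For a sense of what online DDL looks like from application code, here is a hedged sketch using MySQL Connector/NET (the MySql.Data driver); the orders table, index name, and connection string are placeholders, not anything from the article:

    using System;
    using MySql.Data.MySqlClient; // MySQL Connector/NET (assumed installed)

    class OnlineDdlDemo
    {
        static void Main()
        {
            // Placeholder connection string.
            using (var conn = new MySqlConnection(
                "Server=localhost;Database=test;Uid=appuser;Pwd=secret;"))
            {
                conn.Open();
                // MySQL 5.6 online DDL: ask for an in-place, non-locking
                // operation so reads and writes continue while the index builds.
                var ddl = "ALTER TABLE orders ADD INDEX idx_customer (customer_id), " +
                          "ALGORITHM=INPLACE, LOCK=NONE";
                using (var cmd = new MySqlCommand(ddl, conn))
                {
                    cmd.ExecuteNonQuery();
                }
                Console.WriteLine("Index added without blocking application DML.");
            }
        }
    }

If the requested ALGORITHM or LOCK level cannot be honored for a given operation, the server rejects the statement rather than silently falling back, which makes the behavior easy to verify in testing.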
Key-Value Access to InnoDB via the Memcached API

Many of the next generation of web, cloud, social and mobile applications require fast operations against simple key/value pairs. At the same time, they must retain the ability to run complex queries against the same data, as well as ensure the data is protected with ACID guarantees. With the new NoSQL API for InnoDB, developers have all the benefits of a transactional RDBMS, coupled with the performance capabilities of a key/value store. MySQL 5.6 provides simple key-value interaction with InnoDB data via the familiar Memcached API. Implemented via a new Memcached daemon plug-in to mysqld, the new Memcached protocol is mapped directly to the native InnoDB API and enables developers to use existing Memcached clients to bypass the expense of query parsing and go directly to InnoDB data for lookups and transaction-compliant updates. The API makes it possible to re-use standard Memcached libraries and clients, while extending Memcached functionality by integrating a persistent, crash-safe, transactional database back-end. So does this option provide a performance benefit over SQL? Internal performance benchmarks using a customized Java application and test harness show some very promising results, with a 9X improvement in overall throughput for SET/INSERT operations. You can follow the InnoDB team blog for the methodology, implementation and internal test cases that generated these results, as well as a guide on how to get started with the Memcached API to InnoDB.

New Instrumentation in Performance Schema

The MySQL Performance Schema was introduced in MySQL 5.5 and is designed to provide point-in-time metrics for key performance indicators. MySQL 5.6 improves the Performance Schema in answer to the most common DBA and developer problems. New instrumentation includes:

- Statements/Stages: What are my most resource-intensive queries? Where do they spend time?
- Table/Index I/O, Table Locks: Which application tables/indexes cause the most load or contention?
- Users/Hosts/Accounts: Which application users, hosts, and accounts are consuming the most resources?
- Network I/O: What is the network load like? How long do sessions idle?
- Summaries: Aggregated statistics grouped by statement, thread, user, host, account or object.

The MySQL 5.6 Performance Schema is now enabled by default in the my.cnf file with optimized and auto-tuned settings that minimize overhead (< 5%, but mileage will vary), so using the Performance Schema on a production server to monitor the most common application use cases is less of an issue. In addition, new atomic levels of instrumentation enable the capture of granular levels of resource consumption by users, hosts, accounts, applications, etc. for billing and chargeback purposes in cloud computing environments. The MySQL docs are an excellent resource for all that is available and all that can be done with the 5.6 Performance Schema.

Better Condition Handling - GET DIAGNOSTICS

MySQL 5.6 enables developers to easily check for error conditions and code for exceptions by introducing the new MySQL Diagnostics Area and the corresponding GET DIAGNOSTICS interface command.
The Diagnostics Area can be populated via multiple options and provides two kinds of information:

- Statement - which provides the affected row count and the number of conditions that occurred
- Condition - which provides error codes and messages for all conditions that were returned by a previous operation

The new GET DIAGNOSTICS command provides a standard interface into the Diagnostics Area and can be used via the CLI or from within application code to easily retrieve and handle the results of the most recent statement execution. An example of how it is used might be:

    mysql> DROP TABLE test.no_such_table;
    ERROR 1051 (42S02): Unknown table 'test.no_such_table'
    mysql> GET DIAGNOSTICS CONDITION 1
        -> @p1 = RETURNED_SQLSTATE, @p2 = MESSAGE_TEXT;
    mysql> SELECT @p1, @p2;
    +-------+------------------------------------+
    | @p1   | @p2                                |
    +-------+------------------------------------+
    | 42S02 | Unknown table 'test.no_such_table' |
    +-------+------------------------------------+

Options for leveraging the MySQL Diagnostics Area and GET DIAGNOSTICS are detailed in the MySQL docs. While the above is a summary of some of the key developer-enabling 5.6 features, it is by no means exhaustive. You can dig deeper into what MySQL 5.6 has to offer by reading the developer zone article or checking out "What's New in MySQL 5.6" in the MySQL docs.

BONUS ALERT! If you are developing on Windows or are considering MySQL as an alternative to SQL Server for your next project, application or shipping product, you should check out the MySQL Installer for Windows. The installer includes the MySQL 5.6 RC database, all drivers, Visual Studio and Excel plugins, tray monitor and development tools, all in a single download and GUI installer.

So what are your next steps?

- Register for the Dec. 13 "MySQL 5.6: Building the Next Generation of Web-Based Applications and Services" live web event. Hurry! Seats are limited.
- Download the MySQL 5.6 Release Candidate (look under the Development Releases tab)
- Provide feedback at http://bugs.mysql.com/
- Join the developer discussion on the MySQL Forums
- Explore all MySQL Products and Developer Tools

As always, thanks for your continued support of MySQL!
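As a hedged post-script tying two of the sections above together, here is a minimal C# sketch (MySQL Connector/NET assumed; the connection string is a placeholder) that pulls the most expensive statements from the 5.6 Performance Schema digest table, and shows how MySQL error conditions surface to client code as driver exceptions:

    using System;
    using MySql.Data.MySqlClient; // MySQL Connector/NET (assumed installed)

    class TopStatements
    {
        static void Main()
        {
            // Placeholder credentials; point at a 5.6 server, where the
            // Performance Schema is enabled by default.
            var cs = "Server=localhost;Database=performance_schema;Uid=dba;Pwd=secret;";
            try
            {
                using (var conn = new MySqlConnection(cs))
                {
                    conn.Open();
                    var sql = @"SELECT DIGEST_TEXT, COUNT_STAR, SUM_TIMER_WAIT
                                FROM events_statements_summary_by_digest
                                WHERE DIGEST_TEXT IS NOT NULL
                                ORDER BY SUM_TIMER_WAIT DESC
                                LIMIT 5";
                    using (var cmd = new MySqlCommand(sql, conn))
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // SUM_TIMER_WAIT is reported in picoseconds.
                            Console.WriteLine("{0} execs, {1} ps total: {2}",
                                reader.GetUInt64(1), reader.GetUInt64(2),
                                reader.GetString(0));
                        }
                    }
                }
            }
            catch (MySqlException ex)
            {
                // The same condition information GET DIAGNOSTICS exposes in SQL
                // arrives here as the driver's error number and message.
                Console.WriteLine("Error {0}: {1}", ex.Number, ex.Message);
            }
        }
    }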

    Read the article

  • int, short, byte performance in back-to-back for-loops

    - by runrunraygun
(background: http://stackoverflow.com/questions/1097467/why-should-i-use-int-instead-of-a-byte-or-short-in-c) To satisfy my own curiosity about the pros and cons of using the "appropriate size" integer vs. the "optimized" integer, I wrote the following code, which reinforced what I previously held true about int performance in .NET (and which is explained in the link above), which is that it is optimized for int performance rather than short or byte.

    DateTime t;
    long a, b, c;

    t = DateTime.Now;
    for (int index = 0; index < 127; index++)
    {
        Console.WriteLine(index.ToString());
    }
    a = DateTime.Now.Ticks - t.Ticks;

    t = DateTime.Now;
    for (short index = 0; index < 127; index++)
    {
        Console.WriteLine(index.ToString());
    }
    b = DateTime.Now.Ticks - t.Ticks;

    t = DateTime.Now;
    for (byte index = 0; index < 127; index++)
    {
        Console.WriteLine(index.ToString());
    }
    c = DateTime.Now.Ticks - t.Ticks;

    Console.WriteLine(a.ToString());
    Console.WriteLine(b.ToString());
    Console.WriteLine(c.ToString());

This gives roughly consistent results in the area of...

    ~950000
    ~2000000
    ~1700000

...which is in line with what I would expect to see. However, when I try repeating the loops for each data type like this...

    t = DateTime.Now;
    for (int index = 0; index < 127; index++)
    {
        Console.WriteLine(index.ToString());
    }
    for (int index = 0; index < 127; index++)
    {
        Console.WriteLine(index.ToString());
    }
    for (int index = 0; index < 127; index++)
    {
        Console.WriteLine(index.ToString());
    }
    a = DateTime.Now.Ticks - t.Ticks;

...the numbers are more like...

    ~4500000
    ~3100000
    ~300000

...which I find puzzling. Can anyone offer an explanation? NOTE: In the interest of comparing like for like, I've limited the loops to 127 because of the range of the byte value type. Also, this is an act of curiosity, not production-code micro-optimization.
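A hedged measurement sketch (not the poster's code): DateTime.Now advances in coarse increments (roughly 10-15 ms on Windows), and Console.WriteLine inside the timed region means the test largely measures console I/O. Using System.Diagnostics.Stopwatch, warming up the JIT first, and keeping I/O out of the loop body gives much steadier numbers:

    using System;
    using System.Diagnostics;

    class LoopTiming
    {
        const int Reps = 1000000; // repeat the tiny loop enough times to measure

        static void Main()
        {
            long sink = 0; // accumulated and printed so the JIT can't drop the loops

            // Warm-up pass so JIT compilation isn't counted in the first timing.
            for (int i = 0; i < 127; i++) sink += i;

            var sw = Stopwatch.StartNew();
            for (int r = 0; r < Reps; r++)
                for (int i = 0; i < 127; i++) sink += i;
            sw.Stop();
            Console.WriteLine("int:   {0} ms", sw.ElapsedMilliseconds);

            sw = Stopwatch.StartNew();
            for (int r = 0; r < Reps; r++)
                for (short i = 0; i < 127; i++) sink += i;
            sw.Stop();
            Console.WriteLine("short: {0} ms", sw.ElapsedMilliseconds);

            sw = Stopwatch.StartNew();
            for (int r = 0; r < Reps; r++)
                for (byte i = 0; i < 127; i++) sink += i;
            sw.Stop();
            Console.WriteLine("byte:  {0} ms", sw.ElapsedMilliseconds);

            Console.WriteLine(sink); // keep the work observable
        }
    }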

    Read the article

  • Am I "wasting" my time learning C and other low level stuff ?

    - by Andreas Grech
I have just recently started learning C, and the reason I did that was because, frankly, I consider myself to be "less" of a developer than the people who know and work with C. Thus I planned to start learning ASM, C, and C++, bought the K&R book, and started pushing myself to learn the C programming language, and up till now I'm doing great - learning about arrays the low-level way (i.e. the pointer + offset thing), pointers and all that, and obviously asking questions on Stack Overflow for guidance. My problem is that sometimes I get thinking that instead of learning this low-level stuff, maybe I should spend more time learning newer, more widely used technologies - basically, more web stuff. Now I am well versed in both C# and ASP.NET, and currently that's what I do for a living, but still there exist Microsoft technologies that I haven't quite touched upon, such as ASP.NET MVC, the Entity Framework, etc. And those are only Microsoft technologies; obviously there is other stuff that I would like to touch upon - stuff like Ruby, which would lead me to Ruby on Rails, or Python for Django, or even Java and J2EE, or maybe even PHP; i.e., basically mainly web stuff. Mind you, I did touch upon some of the stuff I mentioned earlier on, such as PHP and Java, but I am still not as versed in them as I am in C# and ASP.NET. But still, I think that learning other languages that are used in the web environment will broaden my horizons, both as a developer who loves learning, and also career-wise. My point is, am I really using my time correctly by learning older, lower-level stuff? Stuff that for my current line of work I will most probably never use, but that is still interesting to know? To be frankly honest, I am also learning C so that I could, maybe someday, get into electronics and microcontroller programming, but that is a whole new world for me and, if I choose to go there, will take some time to get adjusted to. And even then, I don't know if I can get a career in that line of work. But I still wonder about this question over and over: am I doing the right thing by learning C instead of something (web stuff) that will most probably be more useful for me career-wise? I'm sorry for asking such a long and most probably boring question, but I feel as if this is the only place where I can ask such a question and get an honest answer from experts in the field. Thank you for your time.
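As an aside, the "pointer + offset" view of arrays mentioned above can even be expressed in C#, which makes a nice bridge between the two worlds. A hedged toy sketch (requires compiling with the /unsafe switch):

    using System;

    class PointerOffsetDemo
    {
        unsafe static void Main()
        {
            int[] numbers = { 10, 20, 30, 40 };
            fixed (int* p = numbers) // pin the array so we can treat it like an int* in C
            {
                for (int i = 0; i < numbers.Length; i++)
                {
                    // *(p + i) is exactly the C idiom behind numbers[i]
                    Console.WriteLine(*(p + i));
                }
            }
        }
    }

The same low-level model the K&R book teaches is what the managed runtime is hiding, which is one argument that the C knowledge transfers rather than expires.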

    Read the article

  • C socket programming: connect() hangs

    - by Fantastic Fourier
Hey all, I'm about to rip my hair out. I have this client that tries to connect to a server; everything seems to be fine, using gethostbyname(), socket(), bind(), but when trying to connect() it just hangs there and the server doesn't see anything from the client. I know that the server works because another client (also in C) can connect just fine. What causes the server to not see this incoming connection? I'm at the end of my wits here. The two different clients are pretty similar too, so I'm even more lost.

    if (argc == 2) {
      host = argv[1]; // server address
    } else {
      printf("plz read the manual\n");
      exit(1);
    }
    hserver = gethostbyname(host);
    if (hserver) {
      printf("host found: %p\n", hserver);
      printf("host found: %s\n", hserver->h_name);
    } else {
      printf("host not found\n");
      exit(1);
    }
    bzero((char *) &server_address, sizeof(server_address)); // copy zeroes into string
    server_address.sin_family = AF_INET;
    server_address.sin_addr.s_addr = htonl(hserver->h_addr);
    server_address.sin_port = htons(SERVER_PORT);
    bzero((char *) &client_address, sizeof(client_address)); // copy zeroes into string
    client_address.sin_family = AF_INET;
    client_address.sin_addr.s_addr = htonl(INADDR_ANY);
    client_address.sin_port = htons(SERVER_PORT);
    sockfd = socket(AF_INET, SOCK_STREAM, 0);
    if (sockfd < 0)
      exit(1);
    else {
      printf("socket is opened: %i \n", sockfd);
      info.sock_fd = sockfd;
      rv = fcntl(sockfd, F_SETFL, O_NONBLOCK); // socket set to NONBLOCK
      if (rv < 0)
        printf("nonblock failed: %i %s\n", errno, strerror(errno));
      else
        printf("socket is set nonblock\n");
    }
    timeout.tv_sec = 0;       // seconds
    timeout.tv_usec = 500000; // microseconds (0.5 seconds)
    setsockopt(sockfd, SOL_SOCKET, SO_RCVTIMEO, &timeout, sizeof(struct timeval));
    rv = bind(sockfd, (struct sockaddr *) &client_address, sizeof(client_address));
    if (rv < 0) {
      printf("MAIN: ERROR bind() %i: %s\n", errno, strerror(errno));
      exit(1);
    } else
      printf("socket is bound\n");
    rv = connect(sockfd, (struct sockaddr *) &server_address, sizeof(server_address));
    printf("rv = %i\n", rv);
    if (rv < 0) {
      printf("MAIN: ERROR connect() %i: %s\n", errno, strerror(errno));
      exit(1);
    } else
      printf("connected\n");

Any thoughts or insights are deeply, greatly, humongously appreciated. -Fourier

    Read the article

  • The 80 column limit, still useful?

    - by Tim Post
Related: While coding, how many columns do you format for? Is there a valid reason for enforcing a maximum width of 80 characters in a code file, this day and age? I mostly use C; however, this question is language agnostic. It's also subjective, so I'll tag it as such. Many individual projects set their own various coding standards, a guide to adjust your coding style. Many enforce an 80-column limit on code - i.e., don't force a dumb 80 x 25 terminal to wrap your lines in someone else's editor of choice if they are stuck with such a display; don't force them to turn off wrapping. Both private and open source projects usually have some style guidelines. My question is, in this day and age, is that requirement more of a pest than a helper? Does anyone still log in via the local console with no framebuffer and actually edit code? If so, how often, and why can't you use SSH? I help to manage a few open source projects, and I was considering extending this limit to 110 columns, but I wanted to get feedback first. So, any feedback is appreciated. I can see the need to make certain OUTPUT of programs (i.e., a --help / -h display) 80 columns or less, but I really don't see the need to force people to break up code under 110 columns long into 2 lines, when it's easier to read on one line. I can also see the case for adhering to an 80-column limit if you're writing code that will be used on microcontrollers that have to be serviced in the field with a god-knows-what terminal emulator. Beyond that, what are your thoughts? Edit: This is not an exact duplicate. I am asking very specific questions, such as how many people are actually still using such a display. I am also not asking "what is a good column limit"; I'm proposing one and hoping to gather feedback. Beyond that, I'm also citing cases where the 80-column limit is still a good idea. I don't want a guide to my own "C style"; I'm hoping to adjust standards for several projects. If the duplicate in question had answered all of my questions, I would not have posted this one :) That will teach me to mention it next time. Edit 2: question |= COMMUNITY_WIKI
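If a project did settle on a 110-column limit, enforcement is easy to script rather than police by eye. A hedged toy checker (the default file path and the limit are placeholders):

    using System;
    using System.IO;
    using System.Linq;

    // Toy line-length linter: reports source lines longer than a chosen limit.
    class ColumnCheck
    {
        static void Main(string[] args)
        {
            var path = args.Length > 0 ? args[0] : "Program.cs"; // placeholder default
            const int limit = 110; // the limit proposed in the question

            foreach (var hit in File.ReadLines(path)
                                    .Select((text, i) => new { text, line = i + 1 })
                                    .Where(l => l.text.Length > limit))
            {
                Console.WriteLine("{0}:{1}: {2} chars", path, hit.line, hit.text.Length);
            }
        }
    }

Wired into a commit hook or build step, a check like this turns the style debate into a mechanical, zero-argument rule.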

    Read the article

  • Controlling the USB from Windows

    - by b-gen-jack-o-neill
Hi, I know this probably is not the easiest thing to do, but I am trying to connect a microcontroller and a PC using USB. I don't want to use the internal USART of the microcontroller or a USB-to-RS232 converter; it's a project intended to help me understand various principles. So, getting the communication done from the microcontroller side is a piece of cake - I mean, when I know the protocol, it's relatively easy to implement on the micro, because I am in direct control of everything, even precise timing. But this is not the case on the PC. I am not very familiar with the concept of how Windows handles connected devices. In one of my previous questions I asked how Windows works with devices through drivers. I understood that, for internal use by Windows, drivers must have some default set of functions available to the OS. I mean, when the OS wants to access the HDD, it calls the HDD driver (which is probably internal to the OS) with specific "questions", so that means the HDD driver has to be written to cooperate with Windows, to have the write function in the proper place to be called by the OS. Something similar holds for the GPU, even DirectX; I mean, DirectX must call specific functions from drivers, so drivers must be written to work with DX. I know many functions from WinAPI work on their own, but even a "simple" window must in the end be written into the framebuffer, using MMIO at addresses specified by drivers. Am I right? So, I expected that Windows has internal functions, parts of WinAPI, designed to work with certain commonly used things, to call manufacturer-designed drivers. But this seems to not be entirely true, because Windows has no way to communicate through the parallel port. I mean, there is no function in the WinAPI to work with the serial port, but there are functions to work with the HDD, GPU and so on. But now comes the part I am getting very lost at. So, I think Windows must have some built-in functions to communicate through USB, because, for example, it handles USB flash memory. So, is there any WinAPI function designed to let the user operate USB through that function, or when I want to use USB myself, do I have to call the desired USB driver function myself? Because all you need to send to the USB controller is the device address and the information, right? I mean, I don't have to write any new drivers, am I right? Just call a WinAPI function if there is such a thing, or directly call the original USB driver. Does any of this make some sense?
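One common practical route - offered here as a hedged sketch with placeholder names, not necessarily what the poster wants - is to have the microcontroller's USB firmware enumerate as a CDC device (a virtual COM port). The PC side then reduces to .NET's built-in System.IO.Ports.SerialPort, with no custom driver to write:

    using System;
    using System.IO.Ports; // built-in .NET serial API

    class UsbSerialDemo
    {
        static void Main()
        {
            // Assumes the microcontroller shows up as a virtual COM port
            // (USB CDC class); "COM3" and the protocol bytes are placeholders.
            using (var port = new SerialPort("COM3", 9600, Parity.None, 8, StopBits.One))
            {
                port.ReadTimeout = 1000; // ms
                port.Open();
                port.Write(new byte[] { 0x01, 0x02 }, 0, 2); // send a command
                int reply = port.ReadByte();                 // read one reply byte
                Console.WriteLine("Device replied: 0x{0:X2}", reply);
            }
        }
    }

For raw, non-CDC USB endpoints, the usual path on Windows is a generic kernel driver such as WinUSB (or a vendor library) exposed to user mode, rather than a single documented "send to USB" WinAPI call.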

    Read the article

  • Why do GPUs overheat?

    - by JAD
About a year ago, I added a 9800GT (1 GB version) and a Corsair CX500 PSU to an HP M8000N computer. A few weeks ago, the HDD overheated and I decided to transfer the GPU & PSU to a new build, which consists of:

    i3 @ 3.3GHz
    Gigabyte H61 Micro ATX mobo
    4GB RAM
    500GB WD HDD
    DVD RW drive
    Cooler Master Elite 430 tower

Once I had Win7 up and running, I installed all the essential drivers that came with the Gigabyte mobo CD. However, whenever I tried installing the Graphics Media Accelerator driver, the computer would crash and enter an endless boot sequence on the next startup. I skipped installing this driver and installed the CD driver for the 9800GT, which by now is a year old. Everything was working fine; WEI rated my GPU at 6.6 for graphics and Aero performance. However, after updating my Nvidia drivers to the latest, the WEI dropped my rating to 3.3 for Aero and 4.7 for graphics performance. Just to make sure that everything was OK, I ran Bad Company 2 on medium settings. The first few minutes ran just fine at a smooth framerate, so I dismissed this as Windows being Windows. About 6 hours later, I ran BC2 again. This time I averaged anywhere from 2-5 FPS. I checked the GPU temperature through GPU-Z, and it came back as 120°C. The problem with this is that the computer had been on for six hours up to that point. Wouldn't the card have experienced a reactor core meltdown a lot sooner than that? Granted, the computer was "sleeping" some of the time, but still... The next day I took out a temperature gun and ran some tests. I would point the laser at a very specific area on the reverse side of the card (not the fan or "front"), and compare the temp reading with GPU-Z. After leaving the system on idle for a few minutes, I ran BC2 twice. Here are the results:

    GPU-Z Reading / Temp Gun Reading / Time
    Null    / 22.3°C / Comp is off
    53°C    / 33.5°C / 1:49
    78°C    / 46°C   / 1:53  (First BC2 run; good framerate)
    102°C   / 64.6°C / 2:01  (System is again on idle)
    113°C   / 64.8°C / 2:10
    119°C   / 71.8°C / 2:17  (Second BC2 run; poor framerate)

I should also mention that I took a temp recording of another part of the GPU from 2:01-2:17. The temp in this area jumped from 75°C to 82.9°C in that time frame. This pretty much confirms that GPU-Z is reporting the temperature accurately, and the card is overheating. But I'd like to know why; the card is doing nothing and still the temperature climbs at a steady rate. I thoroughly cleaned the GPU and PSU with a can of compressed air when I salvaged them from the old HP M8000N computer, so dust can't be the issue. Similarly, the rest of the computer is brand new. I installed various Nvidia drivers, but no luck. It seems strange to me that a year-old card is suddenly failing on me; aren't they supposed to last at least two years? Could this be a driver issue? Is the motherboard faulty? Could the PSU be overfeeding the card on voltage? Neither case seems likely, as the CPU, RAM, and the rest of the comp have worked flawlessly and stayed well within respectable temp ranges (the i3 lingers around 50°C, the HDD stays at 30°C, and so does the PSU). How can I pinpoint the issue?

    Read the article

  • Bash can't start a programme that's there and has all the right permissions

    - by Rory
This is a gentoo server. There's a programme prog that can't execute. (Yes, the execute permission is set.)

About the file

    $ ls
    prog
    $ ./prog
    bash: ./prog: No such file or directory
    $ file prog
    prog: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.2.5, dynamically linked (uses shared libs), not stripped
    $ pwd
    /usr/local/bin
    $ /usr/local/bin/prog
    bash: /usr/local/bin/prog: No such file or directory
    $ less prog | head
    ELF Header:
      Magic:   7f 45 4c 46 01 01 01 00 00 00 00 00 00 00 00 00
      Class:                             ELF32
      Data:                              2's complement, little endian
      Version:                           1 (current)
      OS/ABI:                            UNIX - System V
      ABI Version:                       0
      Type:                              EXEC (Executable file)
      Machine:                           Intel 80386
      Version:                           0x1

I have a fancy less, to show that it's an actual executable; here's some more data:

    $ xxd prog | head
    0000000: 7f45 4c46 0101 0100 0000 0000 0000 0000  .ELF............
    0000010: 0200 0300 0100 0000 c092 0408 3400 0000  ............4...
    0000020: 0401 0a00 0000 0000 3400 2000 0700 2800  ........4. ...(.
    0000030: 2600 2300 0600 0000 3400 0000 3480 0408  &.#.....4...4...
    0000040: 3480 0408 e000 0000 e000 0000 0500 0000  4...............
    0000050: 0400 0000 0300 0000 1401 0000 1481 0408  ................
    0000060: 1481 0408 1300 0000 1300 0000 0400 0000  ................
    0000070: 0100 0000 0100 0000 0000 0000 0080 0408  ................
    0000080: 0080 0408 21f1 0500 21f1 0500 0500 0000  ....!...!.......
    0000090: 0010 0000 0100 0000 40f1 0500 4081 0a08  ........@...@...

and

    $ ls -l prog
    -rwxrwxr-x 1 1000 devs 725706 Aug 6 2007 prog
    $ ldd prog
    not a dynamic executable
    $ strace ./prog
    1249403877.639076 execve("./prog", ["./prog"], [/* 27 vars */]) = -1 ENOENT (No such file or directory)
    1249403877.640645 dup(2) = 3
    1249403877.640875 fcntl(3, F_GETFL) = 0x8002 (flags O_RDWR|O_LARGEFILE)
    1249403877.641143 fstat(3, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 0), ...}) = 0
    1249403877.641484 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b3b8954a000
    1249403877.641747 lseek(3, 0, SEEK_CUR) = -1 ESPIPE (Illegal seek)
    1249403877.642045 write(3, "strace: exec: No such file or dir"..., 40strace: exec: No such file or directory
    ) = 40
    1249403877.642324 close(3) = 0
    1249403877.642531 munmap(0x2b3b8954a000, 4096) = 0
    1249403877.642735 exit_group(1) = ?

About the server

FTR the server is a xen domU, and the programme is a closed-source Linux application. This VM is a copy of another VM that has the same root filesystem (including this programme), that works fine. I've tried all the above as root - same problem. Did I mention the root filesystem is mounted over NFS? However, it's mounted 'defaults,nosuid', which should include execute. Also, I am able to run many other programmes from that mounted drive.

/proc/cpuinfo:

    processor : 0
    vendor_id : GenuineIntel
    cpu family : 15
    model : 4
    model name : Intel(R) Xeon(TM) CPU 3.00GHz
    stepping : 1
    cpu MHz : 2992.692
    cache size : 1024 KB
    fpu : yes
    fpu_exception : yes
    cpuid level : 5
    wp : yes
    flags : fpu tsc msr pae mce cx8 apic mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl cid cx16 xtpr
    bogmips : 5989.55
    clflush size : 64
    cache_alignment : 128
    address sizes : 36 bits physical, 48 bits virtual
    power management:

Example of a file that I can run

I can run other programmes on that mounted filesystem on that server.
For example:

    $ ls -l ls
    -rwxr-xr-x 1 root root 105576 Jul 25 17:14 ls
    $ file ls
    ls: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared libs), stripped
    $ ./ls
    attr cat cut echo getfacl ln more ... (you get the idea) ... rmdir sort tty
    $ less ls | head
    ELF Header:
      Magic:   7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
      Class:                             ELF64
      Data:                              2's complement, little endian
      Version:                           1 (current)
      OS/ABI:                            UNIX - System V
      ABI Version:                       0
      Type:                              EXEC (Executable file)
      Machine:                           Advanced Micro Devices X86-64
      Version:                           0x1

    Read the article

  • virtual host setup: can't access wordpress site without www

    - by two7s_clash
I would like to access my site both with and without using the www. Currently it only works with the www; leaving out the www just goes to a blank page. Also, wp-admin just loads a blank page too. I have set an A record for mysite.com and www.mysite.com, both pointing to my static Bitnami IP. I also have a subdomain mapped to another directory that is working just fine (conference.mysite.com and www.conference.mysite.com). I'm using a Bitnami stack on an AWS EC2 micro instance. Here is my httpd.conf:

    ServerRoot "/opt/bitnami/apache2"
    Listen 80
    LoadModule authn_file_module modules/mod_authn_file.so
    blah blah blah....
    LoadModule php5_module modules/libphp5.so
    <IfModule !mpm_netware_module>
      <IfModule !mpm_winnt_module>
        User daemon
        Group daemon
      </IfModule>
    </IfModule>
    ServerAdmin [email protected]
    ServerName localhost:80
    DocumentRoot "/opt/bitnami/apps/wordpress1/htdocs/"
    <Directory />
      Options FollowSymLinks
      AllowOverride None
      Order deny,allow
      Allow from all
    </Directory>
    <Directory "/opt/bitnami/apps/wordpress1/htdocs/">
      Options Indexes MultiViews +FollowSymLinks
      LanguagePriority en
      AllowOverride All
      Order allow,deny
      Allow from all
    </Directory>
    <IfModule dir_module>
      DirectoryIndex index.html index.php
    </IfModule>
    <FilesMatch "^\.ht">
      Order allow,deny
      Deny from all
      Satisfy All
    </FilesMatch>
    ErrorLog "logs/error_log"
    LogLevel warn
    <IfModule log_config_module>
      LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
      LogFormat "%h %l %u %t \"%r\" %>s %b" common
      <IfModule logio_module>
        LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
      </IfModule>
      CustomLog "logs/access_log" common
    </IfModule>
    <IfModule alias_module>
      ScriptAlias /cgi-bin/ "/opt/bitnami/apache2/cgi-bin/"
    </IfModule>
    <Directory "/opt/bitnami/apache2/cgi-bin">
      AllowOverride None
      Options None
      Order allow,deny
      Allow from all
    </Directory>
    DefaultType text/plain
    <IfModule mime_module>
      TypesConfig conf/mime.types
      AddType application/x-compress .Z
      AddType application/x-gzip .gz .tgz
    </IfModule>
    Include conf/extra/httpd-mpm.conf
    <IfModule ssl_module>
      SSLRandomSeed startup builtin
      SSLRandomSeed connect builtin
    </IfModule>
    AddType application/x-httpd-php .php .phtml
    LoadModule wsgi_module modules/mod_wsgi.so
    WSGIPythonHome /opt/bitnami/python
    ServerSignature Off
    ServerTokens Prod
    AddType application/x-httpd-php .php
    PHPIniDir "/opt/bitnami/php/etc"
    Include "/opt/bitnami/apps/phpmyadmin/conf/phpmyadmin.conf"
    ExtendedStatus On
    <Location /server-status>
      SetHandler server-status
      Order Deny,Allow
      Deny from all
      Allow from localhost
    </Location>
    Include "/opt/bitnami/apache2/conf/bitnami/httpd.conf"
    Include "/opt/bitnami/apps/virtualhost.conf"

Here is my virtual hosts file:

    NameVirtualHost *:80
    <VirtualHost *:80>
      ServerAdmin xx
      DocumentRoot "/opt/bitnami/apps/wordpress1/htdocs"
      ServerName mbird.com
      ServerAlias www.mbird.com
      ErrorLog "logs/wordpress-error_log"
      CustomLog "logs/wordpress-access_log" common
    </VirtualHost>
    <Directory "/opt/bitnami/apps/wordpress1/htdocs">
      Options Indexes MultiViews +FollowSymLinks
      AllowOverride All
      Order allow,deny
      Allow from all
    </Directory>

    ### WordPress conference.mbird.com configuration ###
    <VirtualHost *:80>
      ServerAdmin [email protected]
      DocumentRoot "/opt/bitnami/apps/wordpress/htdocs"
      ServerName conference.mbird.com
      ServerAlias www.conference.mbird.com
      ErrorLog "logs/confwordpress-error_log"
      CustomLog "logs/confwordpress-access_log" common
    </VirtualHost>
    <Directory "/opt/bitnami/apps/wordpress/htdocs">
      Options Indexes MultiViews +FollowSymLinks
      AllowOverride All
      Order allow,deny
      Allow from all
    </Directory>
    ###

    Read the article

  • Displaying an image on a LED matrix with a Netduino

    - by Bertrand Le Roy
In the previous post, we've been flipping bits manually on three ports of the Netduino to simulate the data, clock and latch pins that a shift register expected. We did all that in order to control one line of a LED matrix and create a simple Knight Rider effect. It was rightly pointed out in the comments that the Netduino has built-in knowledge of the sort of serial protocol that this shift register understands through a feature called SPI. That will of course make our code a whole lot simpler, but it will also make it a whole lot faster: writing to the Netduino ports is actually not that fast, whereas SPI is very, very fast. Unfortunately, the Netduino documentation for SPI is severely lacking. Instead, we've been reliably using the documentation for the Fez, another .NET microcontroller. To send data through SPI, we'll just need to move a few wires around and update the code. SPI uses pin D11 for writing, pin D12 for reading (which we won't do) and pin D13 for the clock. The latch pin is a parameter that can be set by the user. This is very close to the wiring we had before (data on D11, clock on D12 and latch on D13). We just have to move the latch from D13 to D10, and the clock from D12 to D13. The code that controls the shift register has slimmed down considerably with that change. Here is the new version, which I invite you to compare with what we had before:

    public class ShiftRegister74HC595
    {
        protected SPI Spi;

        public ShiftRegister74HC595(Cpu.Pin latchPin)
            : this(latchPin, SPI.SPI_module.SPI1)
        {
        }

        public ShiftRegister74HC595(Cpu.Pin latchPin, SPI.SPI_module spiModule)
        {
            var spiConfig = new SPI.Configuration(
                SPI_mod: spiModule,
                ChipSelect_Port: latchPin,
                ChipSelect_ActiveState: false,
                ChipSelect_SetupTime: 0,
                ChipSelect_HoldTime: 0,
                Clock_IdleState: false,
                Clock_Edge: true,
                Clock_RateKHz: 1000
            );
            Spi = new SPI(spiConfig);
        }

        public void Write(byte buffer)
        {
            Spi.Write(new[] {buffer});
        }
    }

All we have to do here is configure SPI. The write method couldn't be any simpler. Everything is now handled in hardware by the Netduino. We set the frequency to 1MHz, which is largely sufficient for what we'll be doing, but it could potentially go much higher. The shift register addresses the columns of the matrix. The rows are directly wired to ports D0 to D7 of the Netduino. The code writes to only one of those eight lines at a time, which will make it fast enough. The way an image is displayed is that we light the lines one after the other so fast that persistence of vision will give the illusion of a stable image:

    foreach (var bitmap in matrix.MatrixBitmap)
    {
        matrix.OnRow(row, bitmap, true);
        matrix.OnRow(row, bitmap, false);
        row++;
    }

Now there is a twist here: we need to run this code as fast as possible in order to display the image with as little flicker as possible, but we'll eventually have other things to do. In other words, we need the code driving the display to run in the background, except when we want to change what's being displayed. Fortunately, the .NET Micro Framework supports multithreading. In our implementation, we've added an Initialize method that spins a new thread that is tied to the specific instance of the matrix it's being called on.

    public LedMatrix Initialize()
    {
        DisplayThread = new Thread(() => DoDisplay(this));
        DisplayThread.Start();
        return this;
    }

I quite like this way to spin a thread. As you may know, there is another, built-in way to contextualize a thread by passing an object into the Start method.
For the method to work, the thread must have been constructed with a ParameterizedThreadStart delegate, which takes one parameter of type object. I like to use object as little as possible, so instead I'm constructing a closure with a lambda, currying it with the current instance. This way, everything remains strongly typed and there's no casting to do. Note that this method would extend perfectly to several parameters. Of note as well is the return value of Initialize, a common technique to add some fluency to the API, enabling the matrix to be instantiated and initialized in a single line:

    using (var matrix = new LedMS88SR74HC595().Initialize())

The "using" in the previous line is because we have implemented IDisposable so that the matrix kills the thread and clears the display when the user code is done with it:

    public void Dispose()
    {
        Clear();
        DisplayThread.Abort();
    }

Thanks to the multi-threaded version of the matrix driver class, we can treat the display as a simple bitmap with a very synchronous programming model:

    matrix.Set(someimage);
    while (button.Read())
    {
        Thread.Sleep(10);
    }

Here, the call into Set returns immediately, and from the moment the bitmap is set, the background display thread will constantly continue refreshing no matter what happens in the main thread. That enables us to wait or read a button's port on the main thread, knowing that the current image will continue displaying unperturbed and without requiring manual refreshing. We've effectively hidden the implementation of the display behind a convenient, synchronous-looking API. Pretty neat, eh? Before I wrap up this post, I want to talk about one small caveat of using SPI rather than driving the shift register directly: when we got to the point where we could actually display images, we noticed that they were a mirror image of what we were sending in. Oh noes! Well, the reason for it is that SPI is sending the bits in a big-endian fashion, in other words backwards. Now sure, you could fix that in software by writing some bit-level code to reverse the bits we're sending in, but there is a far more efficient solution than that. We are doing hardware here, so we can simply reverse the order in which the outputs of the shift register are connected to the columns of the matrix. That's switching 8 wires around once, as compared to doing bit operations every time we send a line to display. All right, so bringing it all together, here is the code we need to write to display two images in succession, separated by a press on the board's button:

    var button = new InputPort(Pins.ONBOARD_SW1, false, Port.ResistorMode.Disabled);
    using (var matrix = new LedMS88SR74HC595().Initialize())
    {
        // Oh, prototype is so sad!
        var sad = new byte[] { 0x66, 0x24, 0x00, 0x18, 0x00, 0x3C, 0x42, 0x81 };
        DisplayAndWait(sad, matrix, button);

        // Let's make it smile!
        var smile = new byte[] { 0x42, 0x18, 0x18, 0x81, 0x7E, 0x3C, 0x18, 0x00 };
        DisplayAndWait(smile, matrix, button);
    }

And here is a video of the prototype running: The prototype in action. I've added an artificial delay between the display of each row of the matrix to clearly show what's otherwise happening very fast. This way, you can clearly see each of the two images being displayed line by line. Next time, we'll do no hardware changes, focusing instead on building a nice programming model for the matrix, with sprites, text and hardware scrolling. Fun stuff. By the way, can any of my readers guess where we're going with all that?
The code for this prototype can be downloaded here: http://weblogs.asp.net/blogs/bleroy/Samples/NetduinoLedMatrixDriver.zip
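A footnote on the closure-based thread start described in the post - a toy, hedged recap with hypothetical names, showing why the lambda approach stays strongly typed compared to ParameterizedThreadStart:

    using System.Threading;

    public class Worker
    {
        private Thread _thread;

        public Worker Initialize()
        {
            // The lambda captures 'this' in a closure, so Run receives a
            // strongly typed Worker instead of ParameterizedThreadStart's
            // single object parameter - no casting required.
            _thread = new Thread(() => Run(this));
            _thread.Start();
            return this; // fluent: var w = new Worker().Initialize();
        }

        private static void Run(Worker self)
        {
            // background work against 'self' goes here...
        }
    }

The same shape extends naturally to several captured parameters, which is exactly the point made above about the pattern.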

    Read the article

  • Thursday Community Keynote: "By the Community, For the Community"

    - by Janice J. Heiss
Sharat Chander, JavaOne Community Chairperson, began Thursday's Community Keynote. As part of the morning's theme of "By the Community, For the Community," Chander noted that 60% of the material at the 2012 JavaOne conference was presented by Java Community members. "So next year, when the call for papers starts, put in your submissions," he urged. From there, Gary Frost, Principal Member of Technical Staff, AMD, expanded upon Sunday's Strategy Keynote exploration of Project Sumatra, an OpenJDK project targeted at bringing Java to heterogeneous computing platforms (which combine the CPU and the parallel processor of the GPU into a single piece of silicon). Sumatra entails enhancing the JVM to make maximum use of these advanced platforms. Within this development space, AMD created the Aparapi API, which converts Java bytecode into OpenCL for execution on such GPU devices. The Aparapi API was open sourced in September 2011. Whether it was zooming in on a Mandelbrot set, "the game of life," or a swarm of 10,000 Dukes in a space-bound gravitational dance, Frost's demos, using an Aparapi/OpenCL implementation, produced stunningly faster display results. He indicated that the Java 9 timeframe is where they see Project Sumatra coming to ultimate fruition, employing the Lambdas of Java 8. Returning to the theme of the keynote, Donald Smith, Director, Java Product Management, Oracle, explored a mind-map graphic demonstrating the importance of community in terms of fostering innovation. "It's the sharing and mixing of culture, the diversity, and the rapid prototyping," he said. Within this topic, Smith brought up a panel of representatives from Cloudera, Eclipse, Eucalyptus, Perrone Robotics, and Twitter - ideal manifestations of community and innovation in the world of Java. Marten Mickos, CEO, Eucalyptus Systems, explored his company's open source cloud software platform, written in Java and used by gaming companies, technology companies, media companies, and more. Chris Aniszczyk, Operations Engineering, Twitter, noted the importance of the JVM in terms of their multiple-language development environment. Mike Olson, CEO, Cloudera, described his company's Apache Hadoop-based software, support, and training. Mike Milinkovich, Executive Director, Eclipse Foundation, noted that they have about 270 tools projects at Eclipse, with 267 of them written in Java. Milinkovich added that Eclipse will even be going into space in 2013, as part of the control software on various experiments aboard the International Space Station. Lastly, Paul Perrone, CEO, Perrone Robotics, detailed his company's robotics and automation software platform built 100% on Java, including Java SE and Java ME - "on rat, to cat, to elephant-sized systems." Milinkovich noted that communities are by nature so good at innovation because of their very openness - "The more open you make your innovation process, the more ideas are challenged, and the more developers are focused on justifying their choices all the way through the process." From there, Georges Saab, VP Development Java SE OpenJDK, continued the topic of innovation and helping the Java Community to "Make the Future Java." Martijn Verburg, representing the London Java Community (winner of a Duke's Choice Award 2012 for their activity in OpenJDK and the JCP), soon joined Saab onstage. Verburg detailed the LJC's "Adopt a JSR" program - "to get day-to-day developers more involved in the innovation that's happening around them."
From its London launching pad, the innovative program has spread to Brazil, Morocco, Latvia, India, and more.Other active participants in the program joined Verburg onstage--Ben Evans, London Java Community; James Gough, Stackthread; Bruno Souza, SOUJava; Richard Warburton, jClarity; and Cecelia Borg, Oracle--OpenJDK Onboarding. Together, the group explored the goals and tasks inherent in the Adopt a JSR program--from organizing hack days (testing prototype implementations), to managing mailing lists and forums, to triaging issues, to evangelism—all with the goal of fostering greater community/developer involvement, but equally importantly, building better open standards. “Come join us, and make your ecosystem better!" urged Verburg.Paul Perrone returned to profile the latest in his company's robotics work around Java--including the AARDBOTS family of smaller robotic vehicles, running the Perrone MAX platform on top of the Java JVM. Perrone took his "Rumbles" four-wheeled robot out for a spin onstage--a roaming, ARM-based security-bot vehicle, complete with IR, ultrasonic, and "cliff" sensors (the latter, for the raised stage at JavaOne). As an ultimate window into the future of robotics, Perrone displayed a "head-set" controller--a sensor directed at the forehead to monitor brainwaves, for the someday-implementation of brain-to-robot control.Then, just when it seemed this might be the end of the day's futuristic offerings, a mystery voice from offstage pronounced "I've got some toys"--proving to be guest-visitor James Gosling, there to explore his cutting-edge work with Liquid Robotics. While most think of robots as something with wheels or arms or lasers, Gosling explained, the Liquid Robotics vehicle is an entirely new and innovative ocean-going 'bot. Looking like a floating surfboard, with an attached set of underwater wings, the autonomous devices roam the oceans using only the energy of ocean waves to propel them, and a single actuated rudder to steer. "We have to accomplish all guidance just by wiggling the rudder," Gosling said. The devices offer applications from self-installing weather buoy, to pollution monitoring station, to marine mammal monitoring device, to climate change data gathering, to even ocean life genomic sampling. The early versions of the vehicle used C code on very tiny industrial micro controllers, where they had to "count the bytes one at a time."  But the latest generation vehicles, which just hit the water a week or so ago, employ an ARM processor running Linux and the ARM version of JDK 7. Gosling explained that vehicle communication from remote locations is achieved via the Iridium satellite network. But because of the costs of this communication path, the data must be sent in very small bursts--using SBD short burst data. "It costs $1/kb, so that rules everything in the software design,” said Gosling. “If you were trying to stream a Netflix video over this, it would cost a million dollars a movie. …We don't have a 'big data' problem," he quipped. There are currently about 150 Liquid Robotics vehicles out traversing the oceans. Gosling demonstrated real time satellite tracking of several vehicles currently at sea, noting that Java is actually particularly good at AI applications--due to the language having garbage collection, which facilitates complex data structures. 
To close out his time onstage, Gosling of course participated in the ceremonial Java tee-shirt toss to the audience… In parting, Chander passed the JavaOne Community Chairperson baton to Stephen Chin, Java Technology Evangelist, Oracle. Onstage in full motorcycle gear, Chin noted that he'll soon be touring Europe by motorcycle, meeting Java Community members and streaming live via UStream - the ultimate manifestation of community and technology! He also reminded attendees of the upcoming JavaOne Latin America 2012, São Paulo, Brazil (December 4-6, 2012), and stated that the CFP (call for papers) for the conference has been extended for one more week. "Remember, December is summer in Brazil!" Chin said.

    Read the article

  • [GEEK SCHOOL] Network Security 4: Windows Firewall: Your System’s Best Defense

    - by Ciprian Rusen
If you have your computer connected to a network, or directly to your Internet connection, then having a firewall is an absolute necessity. In this lesson we will discuss the Windows Firewall - one of the best security features available in Windows! The Windows Firewall made its debut in Windows XP. Prior to that, Windows systems needed to rely on third-party solutions or dedicated hardware to protect them from network-based attacks. Over the years, Microsoft has done a great job with it, and it is one of the best firewalls you will ever find for Windows operating systems. Seriously, it is so good that some commercial vendors have decided to piggyback on it! Let's talk about what you will learn in this lesson. First, you will learn about what the Windows Firewall is, what it does, and how it works. Afterward, you will start to get your hands dirty and edit the list of apps, programs, and features that are allowed to communicate through the Windows Firewall, depending on the type of network you are connected to. Moving on from there, you will learn how to add new apps or programs to the list of allowed items and how to remove the apps and programs that you want to block. Last but not least, you will learn how to enable or disable the Windows Firewall, for only one type of network or for all network connections. By the end of this lesson, you should know enough about the Windows Firewall to use and manage it effectively.

What is the Windows Firewall?

Windows Firewall is an important security application that's built into Windows. One of its roles is to block unauthorized access to your computer. The second role is to permit authorized data communications to and from your computer. Windows Firewall does these things with the help of rules and exceptions that are applied to both inbound and outbound traffic. They are applied depending on the type of network you are connected to and the location you have set for it in Windows when connecting to the network. Based on your choice, the Windows Firewall automatically adjusts the rules and exceptions applied to that network. This makes the Windows Firewall a product that's silent and easy to use. It bothers you only when it doesn't have any rules and exceptions for what you are trying to do or what the programs running on your computer are trying to do. If you need a refresher on the concept of network locations, we recommend you read our How-To Geek School class on Windows Networking. Another benefit of the Windows Firewall is that it is so tightly and nicely integrated into Windows and all its networking features that some commercial vendors decided to piggyback onto it and use it in their security products. For example, products from companies like Trend Micro or F-Secure no longer provide their proprietary firewall modules but use the Windows Firewall instead. Except for a few wording differences, the Windows Firewall works the same in Windows 7 and Windows 8.x. The only notable difference is that in Windows 8.x you will see the word "app" being used instead of "program".

Where to Find the Windows Firewall

By default, the Windows Firewall is turned on and you don't need to do anything special in order for it to work. You will see it displaying some prompts once in a while, but they show up so rarely that you might forget it is even working. If you want to access it and configure the way it works, go to the Control Panel, then go to "System and Security" and select "Windows Firewall".
    Now you will see the Windows Firewall window, where you can get a quick glimpse of whether it is turned on and the type of network you are connected to: private networks or public networks. For the network type that you are connected to, you will see additional information like:

    - The state of the Windows Firewall
    - How the Windows Firewall deals with incoming connections
    - The active network
    - When the Windows Firewall will notify you

    You can easily expand the other section and view the default settings that apply when connecting to networks of that type. If you have installed a third-party security application that also includes a firewall module, chances are that the Windows Firewall has been disabled, in order to avoid performance issues and conflicts between the two security products. If that is the case for your computer or device, you won’t be able to view any information in the Windows Firewall window and you won’t be able to configure the way it works. Instead, you will see a warning that says: “These settings are being managed by vendor application – Application Name”. In the screenshot below you can see an example of how this looks.

    How to Allow Desktop Applications Through the Windows Firewall

    Windows Firewall has a very comprehensive set of rules, and most Windows programs that you install add their own exceptions to the Windows Firewall so that they receive network and Internet access. This means that you will see prompts from the Windows Firewall on occasion, generally when you install programs that do not add their own exceptions to the Windows Firewall’s list. In a Windows Firewall prompt, you are asked to select the network locations to which you allow access for that program: private networks or public networks. By default, Windows Firewall selects the checkbox that’s appropriate for the network you are currently using. You can decide to allow access for both types of network locations or just one of them. To apply your setting, press “Allow access”. If you want to block network access for that program, press “Cancel” and the program will be set as blocked for both network locations.

    At this step you should note that only administrators can set exceptions in the Windows Firewall. If you are using a standard account without administrator permissions, the programs that do not comply with the Windows Firewall rules and exceptions are automatically blocked, without any prompts being shown. You should also note that in Windows 8.x you will never see any Windows Firewall prompts related to apps from the Windows Store. They are automatically given access to the network and the Internet, based on the assumption that you are aware of the permissions they require from the information displayed by the Windows Store. Windows Firewall rules and exceptions are automatically created for each app that you install from the Windows Store. However, you can easily block access to the network and the Internet for any app, using the instructions in the next section.

    How to Customize the Rules for Allowed Apps

    Windows Firewall allows any user with an administrator account to change the list of rules and exceptions applied to apps and desktop programs. In order to do this, first start the Windows Firewall. In the column on the left, click or tap “Allow an app or feature through Windows Firewall” (in Windows 8.x) or “Allow a program or feature through Windows Firewall” (in Windows 7). Now you see the list of apps and programs that are allowed to communicate through the Windows Firewall.
At this point, the list is grayed out and you can only view which apps, features, and programs have rules that are enabled in the Windows Firewall.
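    If you would rather script these exceptions than click through the list, the same kind of allow/block rules can be managed from an elevated Command Prompt with netsh. A minimal sketch; the rule name and program path below are hypothetical placeholders, so substitute your own:

        rem Allow a hypothetical program through the firewall, on private networks only
        netsh advfirewall firewall add rule name="My App" dir=in action=allow program="C:\MyApp\MyApp.exe" profile=private

        rem Verify the rule exists, then remove it again
        netsh advfirewall firewall show rule name="My App"
        netsh advfirewall firewall delete rule name="My App"

        rem Turn the firewall off for one network type only, or back on everywhere
        netsh advfirewall set privateprofile state off
        netsh advfirewall set allprofiles state on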

    Read the article

  • CodePlex Daily Summary for Sunday, May 02, 2010

    CodePlex Daily Summary for Sunday, May 02, 2010

    New Projects

    AdventureWorks in Access: AdventureWorks database in Access format. Data has been ported to Access starting from the Adventure Works database for SQL Server 2008.
    amplifi: This project is still under construction. We will add more information here as soon as it is available.
    ASP.NET MVC Bug Tracker: Bug tracker written in C# ASP.NET MVC 2
    BigDecimal: BigDecimal is an attempt to create a number class that can have large precision. It is developed in VB.NET (.NET 4).
    CBM-Command: Coming soon....
    Chuyou: Chuyou
    CMinus: A C Minus Compiler!
    Complex and advanced mathematical functions: Mathematics toolkit is a class library project which helps programmers calculate mathematical functions easily.
    Confuser: Confuser is an obfuscator for .NET. It is developed in C# and uses Mono.Cecil for assembly manipulation.
    easypos: Micro point of sale that allows express clothing sales and integrates easily and transparently with the Click One ERP.
    Elmech Address Book: Web-based address book for maintaining details of your business clients. This project targets Suppliers - Traders - Manufacturers - users. Applicat...
    Feed Viewer: Feed Viewer is able to synchronize subscribed feeds and read news among all the computers you are using. It understands both the RSS and Atom formats. It can ...
    Google URL Shortener, C#: Implementation in C# of generating short URLs via the Goo.gl service (Google URL Shortener)
    MARS - Medical Assistant Record System: MARS - Medical Assistant Record System
    Rx Contrib: Rx Contrib is a library which contains extensions for the Rx framework
    Simple Service Administration Tool: A simple tool to start/stop/restart a service of a WinNT-based system. The tool is placed in the task bar as a notify icon, so the specified servic...
    Vis3D: Visual 3D controls for Silverlight.
    VisContent: XML content controls for ASP.NET.
    Windows Phone 7 database: This project implements an Isolated Storage (IsolatedStorage) based database for Windows Phone 7. The database consists of table objects, each one s...

    New Releases

    $log$ / Keyword Substitution / Expansion Check-In Policy (TFS - LogSubstPol): LogSubstPol_v1.2010.0.4 (VS2010): LogSubstPol is a TFS check-in policy which inserts the check-in comments and other keywords into your source code, so you can keep track of the ch...
    Bojinx: Bojinx Core V4.5.1: The following new features were added: You can now use either BojinxMXMLContext or ContextModule to configure your application or module context. ...
    CBM-Command: Initial Public Demonstration: Initial public demonstration version. Can browse attached drives and display the directory of any attached drive. A common question is “How does it w...
    Confuser: Confuser v1.0: It is the Confuser v1.0 that is used to confuse the reverse-engineers :)
    Font Family Name Retrieval: 2nd Release: Added the new MKV Font Extractor application to showcase the library. MKV Font Extractor depends on MKVToolnix being installed before it will work. R...
    Google URL Shortener, C#: Goo.gl-CS v1 Beta: Extract the ZIP file to any location. The two files have to be in the same folder!
    HouseFly controls: HouseFly controls alpha 0.9.6.1: HouseFly controls release 0.9.6.1 alpha
    IsWiX: IsWiX 1.0.261.0: Build 1.0.261.0 - built against Fireworks 1.0.264.0. Adds support for VS2010 integration to support the WiX 3.5 beta releases.
    Managed Extensibility Framework (MEF) Contrib: MefContrib 0.9.2.0: Added a conventions-based catalog (read more at http://www.thecodejunkie.com/2010/03/bringing-convention-based-registration.html) MEF + Unity integ...
    MARS - Medical Assistant Record System: license: license
    NSIS Autorun: NSIS Autorun 0.1.5: This release includes source code, executable binary, files, and example materials.
    PHP.net: Release 0.0.0.1: This is the first release of PHP.Net. The features available in this release are: New File, Save File, Save As, and Open File. In the rar file is th...
    Rx Contrib: V1: Rx Contrib is an ongoing effort for community additions for Rx. Current features are: ReactiveQueue: an ISubject that does not lose values if there are ...
    Silverlight 4.0 Popup Menu: Context Menu for Silverlight 4 v1.0: - Added a margin for icon display. - Added the PopupMenuItem class, which is a derivative of the DockPanel. - Find* methods can now drill down the v...
    Silverlight 4.0 Popup Menu: Context Menu for Silverlight 4 v1.1 Beta: - Added a margin for icon display. - Added the PopupMenuItem class, which is a derivative of the DockPanel. - Added an AddSeperator method. - The Fin...
    Simple Service Administration Tool: SSATool 0.1.3: New Simple Service Administration Tool version 0.1.3, compiled with Visual Studio .NET 2010.
    sMAPedit: sMAPedit v0.7a + Map-Pack: Requires the additional Map-Pack. Added: height setting by color picker (shift+leftclick)
    sMAPedit: sMAPedit v0.7b: Fixed: force a garbage collection update to prevent the pictureBox’s memory leak
    sqwarea: Sqwarea 0.0.228.0 (alpha): This release corrects a critical bug in the ConnexityNotifier service. We strongly recommend you upgrade to this version. Known bugs: if you open...
    StackOverflow Desktop Client in C# and WPF: StackOverflow Client 0.1: Source code for the sample.
    TortoiseHg: TortoiseHg 1.0.2: This is a bug-fix release; we recommend all users upgrade to 1.0.2
    VCC: Latest build, v2.1.30501.0: Automatic drop of latest build
    VidCoder: 0.4.0: Changes: Added the ability to queue up multiple video files or titles at once. These queued jobs will use the currently selected encoding settings. Mul...
    WabbitStudio Z80 Software Tools: Wabbitemu 32-bit Test Release: Wabbitemu Visual Studio build for testing purposes
    Windows Phone 7 database: Initial Release v1.0: This project implements an Isolated Storage (IsolatedStorage) based database for Windows Phone 7. The usage of this software is very simple. You cre...
    YouTubeEmbeddedVideo WebControl for ASP.NET: VideoControls version 1: This zip file contains the VideoControls.dll, version 1.

    Most Popular Projects

    Rawr
    WBFS Manager
    AJAX Control Toolkit
    patterns & practices – Enterprise Library
    Microsoft SQL Server Product Samples: Database
    Silverlight Toolkit
    Windows Presentation Foundation (WPF)
    iTuner - The iTunes Companion
    ASP.NET
    DotNetNuke® Community Edition

    Most Active Projects

    patterns & practices – Enterprise Library
    Rawr
    Ionics Isapi Rewrite Filter
    HydroServer - CUAHSI Hydrologic Information System Server
    patterns & practices: Azure Security Guidance
    TinyProject
    NB_Store - Free DotNetNuke Ecommerce Catalog Module
    BlogEngine.NET
    Dambach Linear Algebra Framework
    Facebook Developer Toolkit

    Read the article
