Search Results

Search found 5436 results on 218 pages for 'transfer rate'.

Page 95/218

  • iOS 6.0.1 - 5W happiness for ALL.

    - by Barry Shulam
    In my first iPad Gen1 and iPad Gen2 I received a 10W charger. When you plugged one of those devices into a lower-rated source, it would display a message box telling you the device could not charge from that source. It seems the latest update now permits the iPad mini's larger brother, the iPad 2, to charge from the 5W charger. It will take longer! However, you are no longer stranded by a strict division of which charger can charge which Apple device - they all can, just at different rates now. Try it yourself and let me know if you can charge the iPad 3 and 4 with the 5W iPhone/iPod adapters. Peace, Kosher Koder.

    Read the article

  • How to cull liquids

    - by Cyral
    I use culling on the tiles in my 2D tile-based platformer, so only the ones needed are drawn on screen. That's easy to do. However, my liquid tiles (water, lava, etc.) require an Update method as well as the normal Draw; the update does checks against tiles, makes the liquid flow, and so on. So how should I cull liquid updates in my game? Not culling is too slow; culling only what's on screen looks awkward when you move. What do you think would be best for the player? Maybe some way of culling the visible tiles plus a margin around the viewport, so tiles start updating far enough ahead of the player that it doesn't look awkward when moving? (Not sure how to do this, though - something with the max speed of the player and the width of the screen.)
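
    Something like this rough sketch is what I have in mind (Python-style pseudocode; viewport, max_speed, and update_interval are placeholder names of mine, not from any engine):

        def liquid_update_bounds(viewport, max_speed, update_interval):
            # Expand the visible rectangle by the farthest the camera can
            # travel before the next culling pass, so liquids start flowing
            # just off screen instead of popping in as you move.
            margin = max_speed * update_interval
            return (viewport.left - margin, viewport.top - margin,
                    viewport.right + margin, viewport.bottom + margin)

        def update_liquids(liquid_tiles, bounds):
            left, top, right, bottom = bounds
            for tile in liquid_tiles:
                if left <= tile.x <= right and top <= tile.y <= bottom:
                    tile.update()  # flow checks against neighboring tiles, etc.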

    Read the article

  • Are there any examples of a temporal field/object updater?

    - by Bryan Agee
    The system in question has numerous examples of temporal objects and fields - ones whose value depends on the point in time at which you ask. An example would be someone's rate of pay: there are different answers depending on when you ask and what the constraints might be; e.g., can there ever be more than one of a certain temporal object concurrently, etc. Ideally, there would be an object that handles those constraints when a new state/stateful object is introduced; when a new value is set, it would prevent creating negative ranges and overlaps. Martin Fowler has written some great material on this (such as his description of Temporal Objects), but what I've found of it tends to be entirely theoretical, with no concrete implementations. PHP is the target language, but examples in any language would be most helpful.
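
    A rough sketch of the kind of constraint-enforcing updater I'm after (Python rather than PHP, purely illustrative; TemporalProperty and its methods are names I made up):

        import bisect
        import datetime

        class TemporalProperty:
            """Holds (effective_date, value) pairs, e.g. someone's rate of pay."""

            def __init__(self):
                self._changes = []  # kept sorted by effective date

            def set(self, effective_date, value):
                # Enforce the constraint: at most one value may take effect
                # on any given date, so ranges can never overlap.
                dates = [d for d, _ in self._changes]
                i = bisect.bisect_left(dates, effective_date)
                if i < len(dates) and dates[i] == effective_date:
                    raise ValueError("a value already takes effect on %s" % effective_date)
                self._changes.insert(i, (effective_date, value))

            def get(self, as_of):
                # The value in force is the latest change on or before as_of.
                dates = [d for d, _ in self._changes]
                i = bisect.bisect_right(dates, as_of)
                if i == 0:
                    raise LookupError("no value in force at %s" % as_of)
                return self._changes[i - 1][1]

        pay = TemporalProperty()
        pay.set(datetime.date(2012, 1, 1), 20.00)
        pay.set(datetime.date(2012, 6, 1), 22.50)
        assert pay.get(datetime.date(2012, 3, 15)) == 20.00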

    Read the article

  • Track those visitors who come through a particular link

    - by busybee235
    I want to track visitors who come to my site through a particular link. For example, for visitors coming from http://www.domain.com/abc123, I can get their pageviews, time on site, bounce rate, referrer, pages per visit, etc. After that I can store that info in my database on a daily basis. Can anyone suggest a service, API, or piece of software for this? I have used Google Analytics utm tags, which work perfectly well for my requirement, but I don't know how many links I can track with them. I have around 80-100 links to track a day, and the number will be increasing. I couldn't find any documentation regarding a limit on campaigns in GA; if there's no such limit, I can start this project. Thanks
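
    For reference, the kind of tagged links I mean (the utm values are made-up examples; utm_source, utm_medium, and utm_campaign are the standard Google Analytics campaign parameters):

        http://www.domain.com/abc123?utm_source=newsletter&utm_medium=email&utm_campaign=spring_promo
        http://www.domain.com/abc123?utm_source=twitter&utm_medium=social&utm_campaign=spring_promo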

    Read the article

  • What ethical problems realistically arise in programming?

    - by Fishtoaster
    When I co-oped during college, I had to fill out an evaluation of the co-op afterwards. One metric I always had to rate was how much the company required me to "Make ethical decisions related to your profession." This always seemed kinda silly - I mean, my first co-op was writing Java apps to manage industrial radios. There wasn't much moral ambiguity going on. Anyway, I'm wondering what sort of ethical dilemmas one might actually encounter in software development. Edit: It should be noted that no ethically-trained software engineer would ever consent to write a DestroyBaghdad procedure. Basic professional ethics would require him to write a DestroyCity procedure, to which Baghdad could be given as a parameter. - Nathaniel Borenstein

    Read the article

  • Interpolation gives the appearance of collisions

    - by Akroy
    I'm implementing a simple 2D platformer with a constant-rate update of the game logic, but with rendering done as fast as the machine can handle. I interpolate positions between actual game updates by just using the position and velocity of objects at the last update. This makes things look really smooth in general, but when something hits a wall or floor, it appears to go through the wall for a moment before being positioned correctly. This is because the interpolator does not take walls into account, so it guesses the position into walls until the actual game update fixes it. Are there any particularly elegant solutions for this? Simply increasing the update rate seems like a band-aid, and I'm trying to avoid raising the system requirements. I could also check for collisions in the interpolator itself, but that seems like heavy overhead, and then I'm no longer separating the drawing from the game updating.
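
    What I'm picturing for that last option is something like this (Python-style pseudocode; pos, vel, and clamp_to_walls are placeholder names of mine):

        def render_position(pos, vel, alpha, dt, world):
            # Guess where the object sits between fixed updates, as now...
            guess = pos + vel * alpha * dt
            # ...but never let the guess pass through solid tiles. A cheap
            # swept check against walls only, not a full physics step.
            return world.clamp_to_walls(pos, guess)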

    Read the article

  • Oracle Fusion HCM Gains Traction and Customer Recognition

    - by Scott Ewart
    Oracle Fusion HCM gained traction and customer recognition at the HRO Summit Europe in Barcelona, where the audience voted Oracle Fusion Human Capital Management best in Most Reliable, Most Innovative, and Best in Class. During the annual European HRO Summit in Barcelona, HRO buyers, service providers, third-party advisors and other attendees were visibly impressed with the Fusion HCM product stack. Following the "present-off" among four technology vendors, Oracle was voted first in the following categories: which technology could best suit the needs of your company; which technology came across as the most reliable; which technology offers the most innovation; and, based on what you heard today, which technology presentation would you rate as best in class. Oracle was voted second in the two remaining categories. Click here for the full article ==> http://bit.ly/sxC3tX

    Read the article

  • Geek Deals: Cheap SSDs, Discounted Monitors, and Free Apps

    - by Jason Fitzpatrick
    Looking to save some cash while stocking up on computers, peripherals, apps, and other goodies? Hit up our deal list for discounts on all manner of geeky gear. We've combed the net and grabbed some fresh-off-the-press deals for you to take advantage of. Unlike traditional brick-and-mortar sales, internet deals are fast and furious, so don't be surprised if, by the time you get to a particularly hot deal, the stock is gone or the uses-per-coupon limit has been exceeded.

    Read the article

  • How do I correctly group albums in Banshee?

    - by Kevin
    I use Banshee to manage and play my music (most of which came from eMusic and CDs). I consistently have problems getting songs from the same album to appear as one album. This is particularly a problem with compilations, i.e. multiple artists on the same album. When I transfer songs to my Android phone (Nexus S), the issue follows. How do I edit files so that Banshee correctly identifies all the songs as belonging to the same album? I am using the latest Banshee via the unstable PPA.
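
    From what I've read, the usual fix for compilations is giving every track identical album and album-artist tags (e.g. "Various Artists"). The sort of script I'm considering for that (Python with the mutagen library; the path and album name are just examples):

        import glob
        from mutagen.easyid3 import EasyID3

        for path in glob.glob("/home/kevin/Music/compilation/*.mp3"):
            tags = EasyID3(path)
            tags["album"] = "Example Compilation"    # identical on every track
            tags["albumartist"] = "Various Artists"  # lets players group the album
            tags.save()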

    Read the article

  • What is a business problem?

    - by Juha Untinen
    Can you give an example or two of what a business problem is? I hear this term thrown around a lot, but even after searching, I cannot find a clear answer, especially when it comes to software development. Are any of these classified as a business problem: converting an invoice from format A to format B; sending data from Company A's database to Company B's database; creating a program for time reporting; making the financial software retrieve data faster; generating a weekly data-transfer-volume statistic? Or is a business problem something else?

    Read the article

  • Install package with dependencies offline

    - by ArtemStorozhuk
    Right now I have 2 computers. The first has a connection to the internet and has package A installed. The second doesn't have a connection to the web, and on it I need to install package A. I decided to download all the needed packages using the first PC and transfer them to the second PC via USB. I searched for how to get all the packages needed for a deb installation, and here's what I've found. But when I run: apt-get --print-uris --yes install A | grep ^\' | cut -d\' -f2 > downloads.list on the first PC, I get an empty file, because the package is already installed there (and I don't want to uninstall it). Also, package A is very complicated: it depends on package B, which depends on package C, and package C is not installed on the second PC. So how can I download all the needed packages? Or is there another way of installing it? Thanks for the help.
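
    The closest recipe I've found so far (untested; it assumes the apt-rdepends package is installed on the first PC, and apt-get download may complain about a few virtual packages, which I gather can be ignored):

        # On the PC with internet access: fetch A and everything it
        # recursively depends on, as .deb files, into the current directory.
        apt-rdepends A | grep -v "^ " | xargs apt-get download

        # Copy the .deb files to the second PC via USB, then install there:
        sudo dpkg -i *.deb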

    Read the article

  • What is the best way to promote a programming blog?

    - by paul
    (The guys from 'Programmers' referred me here...) How do you promote your programming blog? I recently started http://blackforestcoder.blogspot.com/ to record my progress working with new technologies and ideas. The main aim being to provide a list of pitfalls and solutions and also to get feedback from readers. Since I set it up 10 days ago I have only had about 2-3 hits even though Google is supposed to be indexing it. How might I boost the hit rate?

    Read the article

  • Texture artifacts on iPad

    - by MrDatabase
    I'm porting an iPhone game to the iPad. When I move textures "quickly" (5.0 pixels every update at a rate of 60 Hz), I start to see little "artifacts" or remnants of where the texture used to be. I'm not sure I know the correct terminology for this... imagine a texture at some location on the screen... then next to it the same texture, but faded a bit... then the same texture again, faded a bit more. I'm using CADisplayLink to drive my update loop, if that helps. Also, I didn't see this issue on the 3G or the iPhone 4. Any ideas? Cheers!

    Read the article

  • Why do mp3 files ripped with Lame always have 128 kbit/s irrespective of settings?

    - by Takkat
    Using Sound Juicer I am able to rip CDs very conveniently. I would like to rip them at about 256 kbit/s variable bitrate. To accomplish this I have defined the settings for mp3 in gnome-audio-profiles-properties as follows: audio/x-raw-int,rate=44100,channels=2 ! lame name=enc mode=0 vbr-quality=0 ! id3v2mux where vbr-quality=0 should give me a variable bitrate averaging 245 kbit/s. The resulting files, however, always say they are 128 kbit/s. Is this only a tagging bug, or is the bitrate indeed that low? How could I find out?
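
    One thing I plan to try (a guess on my part: the GStreamer lame element appears to default to constant bitrate, so vbr-quality may be ignored unless a VBR mode is switched on, e.g. vbr=4 for "new"-style VBR):

        audio/x-raw-int,rate=44100,channels=2 ! lame name=enc mode=0 vbr=4 vbr-quality=0 ! id3v2mux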

    Read the article

  • Frame timing for GLFW versus GLUT

    - by linello
    I need a library which ensures that the timing between frames is as constant as possible during an experiment in visual psychophysics. This is usually done by synchronizing the main loop with the refresh rate of the screen. For example, if my monitor runs at 60 Hz, I would like to specify that frequency to my framework. For example, if my game loop is the following:

        void gameloop() {
            // do some computation
            printDeltaT();
            // flip buffers
        }

    I would like to have a constant time interval printed. Is this possible with GLFW?
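
    The closest thing I've spotted in GLFW so far is its vsync hook (whether the driver actually honors it is another matter):

        // after creating the window/context:
        glfwSwapInterval(1);  // lock buffer swaps to the monitor refresh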

    Read the article

  • The danger of changing the domain of your portfolio

    - by Mervin
    So I have an online portfolio that is available at mervin-ux-portfolio.com, but I am planning to change hosts since the current one is hitting me with a very high yearly renewal rate. When I inquired about domain transfers, they told me that since I had not initiated the transfer within 14 days of the domain's expiry, they cannot do it immediately and it would take about two weeks to release the domain name. Since I don't like the idea of my site being down for two weeks, I was wondering whether I should start afresh with a new domain on a new host, and what the potential dangers of that would be (I have a backup of the entire site, so creating a replica on the new host won't be hard). I also won't be losing any business or work, since I currently work full time, but I was wondering about the challenges of getting the new domain name to the top of search results and generally getting it out there, assuming I go the new-domain approach. I know this is strictly not a UX question, but I was hoping people could give some suggestions on what I should do.

    Read the article

  • Move Joomla website to new folder

    - by Jon
    I currently have a website. I have created a new folder on it called V2. Under this folder I have installed Joomla and configured my new-look site. I now want to make V2 the default website. I could point the website at the V2 directory, but I have other folders under the current root that I need to keep. How can I transfer V2 to the root of my website? Is it just a case of copying all the files?

    Read the article

  • Is it possible for Windows viruses downloaded through Ubuntu to affect my Windows OS?

    - by fr33c0untry
    I know that Ubuntu is immune to Windows viruses, so there is no question of it getting infected while browsing the net. However, I frequently transfer files from my pendrive (which I get back from other, virus-infested computers) to my own laptop and save them on the data drive that is shared by both Windows and Ubuntu. I would like to know if there is a chance that Windows viruses get saved this way and then infect Windows whenever I switch to it later on. It's ironic that I scan my pendrive using Avast on Windows and then save all my files to my hard drive to keep my laptop free of viruses, even though I have Ubuntu. Can anyone suggest an alternative? Thanks in advance.

    Read the article

  • What has been your experience with paid support from Canonical ?

    - by gabkdlly
    I am considering buying "Ubuntu Desktop Support" from Canonical for two reasons: I have a couple of issues that I would like professional help with (specifically, a recurring kernel panic and a slow wireless connection), and I would like to lend a helping hand toward supporting Ubuntu financially. However, I am a bit worried that once I transfer the money, they will just end up referring me to the bug tracker on Launchpad. Also, free support options like this site have the pleasant property that they are open to the internet, meaning that if my issue gets fixed, it is more likely to help others with the same problem. What does paying for support from Canonical actually get you?

    Read the article

  • Performance triage

    - by Dave
    Folks often ask me how to approach a suspected performance issue. My personal strategy is informed by the fact that I work on concurrency issues. (When you have a hammer everything looks like a nail, but I'll try to keep this general.) A good starting point is to ask yourself if the observed performance matches your expectations. Expectations might be derived from known system performance limits, prototypes, and other software or environments that are comparable to your particular system-under-test. Some simple comparisons and microbenchmarks can be useful at this stage. It's also useful to write some very simple programs to validate some of the reported or expected system limits. Can that disk controller really tolerate and sustain 500 reads per second? To reduce the number of confounding factors it's better to try to answer that question with a very simple targeted program. And finally, nothing beats having familiarity with the technologies underlying your particular layer.

    On the topic of confounding factors: as our technology stacks become deeper and less transparent, we often find our own technology working against us in some unexpected way to choke performance, rather than simply running into some fundamental system limit. A good example is the warm-up time needed by just-in-time compilers in Java Virtual Machines. I won't delve too far into that particular hole except to say that it's rare to find good benchmarks and methodology for Java code. Another example is power management on x86. Power management is great, but it can take a while for the CPUs to throttle up from low(er) frequencies to full throttle. And while I love "turbo" mode, it makes benchmarking applications with multiple threads a chore, as you have to remember to turn it off and then back on; otherwise short single-threaded runs may look abnormally fast compared to runs with higher thread counts. In general, for performance characterization I disable turbo mode and fix the power governor at the "performance" state. Another source of complexity is the scheduler, which I've discussed in prior blog entries.

    Let's say I have a running application and I want to better understand its behavior and performance. We'll presume it's warmed up, is under load, and is in an execution mode representative of what we think the norm would be. It should be in steady state, if a steady-state mode even exists. On Solaris the very first thing I'll do is take a set of "pstack" samples. Pstack briefly stops the process and walks each of the stacks, reporting symbolic information (if available) for each frame. For Java, pstack has been augmented to understand Java frames, and even report inlining. A few pstack samples can provide powerful insight into what's actually going on inside the program. You'll be able to see calling patterns, which threads are blocked on what system calls or synchronization constructs, memory allocation, etc. If your code is CPU-bound then you'll get a good sense of where the cycles are being spent. (I should caution that normal C/C++ inlining can diffuse an otherwise "hot" method into other methods. This is a rare instance where pstack sampling might not immediately point to the key problem.) At this point you'll need to reconcile what you're seeing with pstack against your mental model of what you think the program should be doing. They're often rather different. And generally, if there's a key performance issue, you'll spot it with a moderate number of samples.
    I'll also use OS-level observability tools to look for bottlenecks where threads contend for locks, other situations where threads are blocked, and the distribution of threads over the system. On Solaris some good tools are mpstat and, to a lesser degree, vmstat. Try running "mpstat -a 5" in one window while the application program runs concurrently. One key measure is the voluntary context switch rate, "vctx" or "csw", which reflects threads descheduling themselves. It's also good to look at the user, system, and idle CPU percentages. This can give a broad but useful indication of whether your threads are mostly parked or mostly running. For instance, if your program makes heavy use of malloc/free, then it might be the case that you're contending on the central malloc lock in the default allocator. In that case you'd see malloc calling lock in the stack traces, observe a high csw/vctx rate as threads block on the malloc lock, and your "usr" time would be less than expected.

    Solaris dtrace is a wonderful and invaluable performance tool as well, but in a sense you have to frame and articulate a meaningful and specific question to get a useful answer, so I tend not to use it for first-order screening of problems. It's also most effective for OS- and software-level performance issues, as opposed to HW-level issues. For that reason I recommend mpstat and pstack as the first step in performance triage. If some other OS-level issue is evident, then it's good to switch to dtrace to drill more deeply into the problem. Only after I've ruled out OS-level issues do I switch to using hardware performance counters to look for architectural impediments.
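
    For instance, a first-pass screening session along the lines described above might look like this (Solaris; 1234 stands in for the pid of the application under test):

        # A few stack snapshots to see what the threads are actually doing:
        pstack 1234 > pstack.1 ; sleep 5
        pstack 1234 > pstack.2 ; sleep 5
        pstack 1234 > pstack.3

        # Meanwhile, watch voluntary context switches and usr/sys/idle:
        mpstat -a 5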

    Read the article

  • Content Manager Assistant PSVita Linux Does NOT Recognize USB Port

    - by Nicky Bailuc
    I have an external copy of Windows 7 alongside Quantal, and I installed the Content Manager Assistant on it. I was able to start the program by finding its executable file in the Windows program folder and running it in Wine; however, Wine didn't recognize my PS Vita, which was connected through one of my USB ports. Is there any way to configure Wine to properly recognize the Vita? Content Manager Assistant is a Windows- and Mac-only program that allows you to transfer files between your PC and PS Vita, kinda like iTunes for the iPod.

    Read the article

  • Reality Check Webinar: How Does Your End User Adoption Fare Against 300 other Companies?

    - by Di Seghposs
    Gain insight into Neochange's 2012 Adoption Insight Report and compare your end-user adoption rate and strategy! Discover why user adoption is a key factor in your IT investment's success and how Oracle UPK Professional can help ensure it. Join us as Chris Dowse, CEO of Neochange, and Beth Renstrom, Manager of Oracle UPK Outbound Product Management, reveal the results of the user adoption survey, in which the user service models and productivity levels of 300 organizations are discussed in detail to identify trends that deliver higher business productivity. See how your organization's productivity and service model match up to those of companies who are getting the most out of their IT investment. Thursday, April 5, 2012 -- 2:00 pm ET. Click here to register for the webcast.

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved.

    Background

    Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options: shares for proportional CPU allocation (if you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles); dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use (you can say that a zone or project "owns" 8 CPUs on a 32-CPU machine, for example); and capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets. For example, you can throttle an application to 0.125 of a CPU. (This isn't meant to be an exhaustive list of Solaris RM controls.)

    Workload management

    Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance - and wastefully size every server for the peak workload it might experience), that's not really workload management. With resource management we control the resources, and hope that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if it didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much, and we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "It won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "Oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.)

    Prior art

    There are some operating environments that take a stab at workload management (rather than resource management), but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to make a target rate of service units consumed per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so am getting enough service units), but not enough of the resource that's needed to remove the bottleneck?

    Actual workload management

    That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly.
    If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words: don't hold the number of CPU shares constant and watch the achievement of the service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or amount of other resources like RAM or I/O bandwidth) in order to meet the objective.

    Instrumenting non-instrumented applications

    There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like the number of CPU seconds received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing an application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit", so I can get transaction rate and response time. DTrace (and tools like prstat) let me see where latency is being added to an application, so I know which resource to adjust.

    Summary

    After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success in meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove the delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.
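
    As a toy illustration of that control loop (Python pseudocode; workload, measure_response_time, and set_cpu_shares stand in for the DTrace instrumentation and the resource controls):

        import time

        TARGET_MS = 50.0    # service level objective: 50 ms response time
        DEADBAND = 0.10     # ignore deviations within 10% of the objective
        MAX_SHARES = 1024

        def workload_manager(workload):
            shares = workload.cpu_shares
            while True:
                observed = measure_response_time(workload)  # externally visible metric
                error = (observed - TARGET_MS) / TARGET_MS
                if error > DEADBAND:
                    shares = min(shares * 2, MAX_SHARES)    # starved: grant more CPU
                elif error < -DEADBAND:
                    shares = max(shares // 2, 1)            # overserved: hand some back
                set_cpu_shares(workload, shares)            # hold the SLO, vary the shares
                time.sleep(10)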

    Read the article

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale.

    However, at extreme scale, sizing becomes a bit more complicated as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members).

    The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance.

    As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state).
    The finest granularity of this recovery is a single partition, and the single service thread cannot accept new requests during the recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution of course is to mitigate each of those factors, but in many cases this may be challenging.

    Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level.

    In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.
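
    As a toy example of the linear extrapolation mentioned above (Python; the measurements are invented numbers):

        # CPU utilization measured at two known request rates
        measurements = [(1000, 0.20), (2000, 0.38)]

        (x1, y1), (x2, y2) = measurements
        slope = (y2 - y1) / (x2 - x1)   # CPU cost per additional request/sec
        intercept = y1 - slope * x1     # fixed overhead independent of load

        def predicted_cpu(load):
            """Linear model: reasonable mid-range, suspect at extreme scale."""
            return intercept + slope * load

        print(predicted_cpu(3000))      # ~0.56, before cluster-management overhead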

    Read the article

  • Silverlight XAP Signing Certificate promotion from Thawte

    And the offers keep coming in! Another of our key partners for testing XAP signing for trusted applications was Thawte. Their group helped provide us with valid certificates to verify that their process and signing worked as expected (and verified) for Silverlight 4. Today I got an email from their marketing department saying they would like to offer Silverlight developers a discount on Thawte code-signing certificates: $89 for 1 year, about 70% off their current rate. That's pretty amazing of...

    Read the article
