Search Results

Search found 7212 results on 289 pages for 'cumulative updates'.


  • About online game servers and how to handle data

    - by TreantBG
    So my question isn't about which technology to use or how to do this or that, but something more general. I'm currently developing an action third-person shooter with RPG elements: weapon and armor upgrades, and items. Players will be able to create new games or join existing ones. My question is how to structure the game server that players will play on. I have two ideas in mind:

      1. The player who created the game acts as the server. All data passes through him, and he sends that data on to a central server that updates the players' database with their XP points, kill/death scores and so on.
      2. My host machine is the server. The player who creates a game just opens a new instance on my host and connects like any other client. All players send their input data to the host; the host updates the game and sends responses back to the clients with any changes, such as where the enemy is.

    If I choose option 1, is there a chance the host can tamper with the game content and manipulate the results? (I think there is, but I'm not sure.) And if I choose option 2, doesn't that increase response time and potentially cause lag? Or maybe there is another option?
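
    A minimal sketch of the server-authoritative shape of option 2 (all names below are hypothetical, not from the question): clients send raw input only, and the server derives kills and XP from its own simulation, which is precisely what a player-owned host in option 1 cannot be trusted to do.

      import java.util.HashMap;
      import java.util.Map;

      // Authoritative match server: clients send raw input only; the server
      // simulates outcomes itself, so a modified client cannot fabricate kills.
      public class MatchServer {
          static class PlayerStats { int kills; }
          static class InputPacket { }                  // raw movement/fire input
          static class SimulationResult {
              final boolean kill;
              SimulationResult(boolean kill) { this.kill = kill; }
          }

          private final Map<String, PlayerStats> stats = new HashMap<>();

          public void onPlayerInput(String playerId, InputPacket input) {
              SimulationResult result = simulate(playerId, input);
              if (result.kill) {
                  stats.computeIfAbsent(playerId, id -> new PlayerStats()).kills++;
              }
              broadcast(result);                        // push authoritative state to clients
          }

          private SimulationResult simulate(String playerId, InputPacket input) {
              return new SimulationResult(false);       // placeholder for real hit detection
          }

          private void broadcast(SimulationResult result) {
              // placeholder: send the update to every connected client
          }
      }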

    Read the article

  • CUDA & MSI GT60 with Optimus-enabled GTX 670M?

    - by user1076693
    I have an MSI GT60 laptop with an Optimus-enabled GTX 670M GPU, and I have been trying to get CUDA going in an Ubuntu 12.04 environment. I realize that Optimus is not supported on Linux, but I have read the following post suggesting that CUDA works for hybrid GPUs: How can I get nVidia CUDA or OpenCL working on a laptop with nVidia discrete card/Intel Integrated Graphics? I installed the NVIDIA driver via

      sudo add-apt-repository ppa:ubuntu-x-swat/x-updates
      sudo apt-get update
      sudo apt-get install nvidia-current

    The resulting driver version is 302.17, and the GTX 670M has supposedly been supported since 295.59. I also downloaded CUDA 4.2 from the NVIDIA site and compiled it against the nvidia-current libraries. Unfortunately, when I run deviceQuery from the CUDA SDK, I get the following output:

      cudaGetDeviceCount returned 38
      -> no CUDA-capable device is detected

    Checking /proc/driver/nvidia/gpus/0/information gives the following:

      Model:           GeForce GTX 670M
      IRQ:             16
      GPU UUID:        GPU-????????-????-????-????-????????????
      Video BIOS:      ??.??.??.??.??
      Bus Type:        PCI-E
      DMA Size:        32 bits
      DMA Mask:        0xffffffffff
      Bus Location:    0000:01.00.0

    Here is the output of "lspci | grep VGA":

      00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09)
      01:00.0 VGA compatible controller: NVIDIA Corporation Device 1213 (rev ff)

    So... what am I doing wrong? Thanks!

    Read the article

  • Making Modular, Reusable and Loosely Coupled MVC Components

    - by Dusan
    I am building an MVC3 application and need some general guidelines on how to manage complex client-side interaction between my components. Here is my general definition of a component: a component has its own controller, model and view. All of the component's logic is placed inside these three parts, and the component is sort of "standalone": it contains its own form, the data needed for interaction, updates itself with Ajax, and so on. Besides this internal logic and behavior, the component needs to be able to "talk" to the outside world. By this I mean it should expose data and events (of a sort), so that when the component is embedded in a page it can notify other components, which can then update based on the current state and data. I have an idea to use a client-side ViewModel (in JavaScript) which would hook up all relevant components on a page and control the interaction between them. This would make components reusable and modular: independent of the context in which they are used. How would you do this? I am a bit stuck, as I do not know whether this is a good approach and whether it is technically possible using JavaScript/jQuery. The confusing part is updates via Ajax: how do I ensure that a component stays properly linked to the ViewModel when the component is updated via Ajax (or, even worse, removed or dynamically added)? Also, how should this ViewModel be constructed, and which techniques should be used here and in the components so they work in synergy? On the web, I have found various examples of similar approaches, but they are either oversimplified (even for dummies) or overly specific, and do not provide a valuable resource or a general solution for this kind of implementation. If you have some serious examples, that would also be very helpful. Note: my aim is to make interactions between many components on the same page simpler, more robust and more elegant.

    Read the article

  • Alternatives to Pessimistic Locking in Cluster Applications

    - by amphibient
    I am researching alternatives to database-level pessimistic locking to achieve transaction isolation in a cluster of Java applications going against the same database. Synchronizing concurrent access in the application tier is clearly not a solution in the present configuration, because the same database transaction can be invoked from multiple JVMs concurrently. Currently, we are subject to occasional race conditions which, due to the optimistic locking we have in place via Hibernate, cause a StaleObjectStateException and data loss. I have a moderately large transaction within the scope of my refactoring project. Let's describe it as updating one top-level table row and then making various related inserts and/or updates to several of its child entities. I would like to ensure exclusive access to the top-level table row and all of the children to be affected, but I would like to stay away from pessimistic locking at the database level, mostly for performance reasons. We use Hibernate for ORM. Does it make sense to stand up a single (perhaps synchronous) message queue application into which this method could be moved, to ensure synchronized access, as opposed to each cluster node using its own, which is a clear race condition hazard? I mention this approach even though I am not confident in it, because both the top-level table row and its children could also be updated from other system calls, not just the mentioned transaction. So I am seeking to design a solution where the top-level table row and its children will all somehow be pseudo-locked (exclusive transaction isolation), but at the application level rather than the database level. I am open to ideas and suggestions; I understand this is not a very cut-and-dried challenge.
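
    To make the queue idea concrete, here is a single-JVM sketch (hypothetical names; not code from the question): serialize all mutations of a given top-level row through one lane. In a cluster this only helps if there is exactly one such serializer for the whole cluster, e.g. one message-queue consumer per aggregate key, which is the asker's point about per-node queues being a race hazard.

      import java.util.concurrent.ConcurrentHashMap;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;

      // Funnel every mutation of a given top-level row through one
      // single-threaded lane, so writes to the same aggregate never interleave.
      // Demo only: a real implementation would bound and shut down the lanes.
      public class AggregateSerializer {
          private final ConcurrentHashMap<Long, ExecutorService> lanes =
                  new ConcurrentHashMap<>();

          public void submit(long topLevelRowId, Runnable transactionalWork) {
              lanes.computeIfAbsent(topLevelRowId, id -> Executors.newSingleThreadExecutor())
                   .submit(transactionalWork);  // runs after all earlier work for this row
          }
      }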

    Read the article

  • Network connection fails when downloading data

    - by Guus
    In the past few weeks, I've noticed my network connection becoming unstable while downloading files, and I'm not sure how to diagnose the issue. My network connections appear to become temporarily unresponsive (for periods of up to a minute or so) while I'm downloading a file (one that's large enough not to be downloaded instantly). The problem occurs when downloading data through a web browser, but also when using SCP to download data from a remote location. During the period in which network connections are unresponsive, every resource that I try to access over the network is unavailable. This includes:

      - The download itself (SCP reports a 'stalled' download)
      - Web pages (won't load; browsers report 'resource unavailable')
      - SSH sessions (CLI freezes)
      - VPN connections (connections terminate)
      - IM client connections (client starts reconnection attempts)
      - ... (everything is pretty much dead)

    I've noticed that during such a period, I cannot even access the (web-based) administrative console of the router on my home LAN (although it remains reachable for other devices). The problem occurs when I'm connected to my home network, but also when I'm in the office. Devices other than my laptop are not affected. Given the above two characteristics, I assume the cause lies within my laptop, not the network infrastructure. I'm running Ubuntu 11.10. Apart from applying automatic updates from Ubuntu, I can't think of a change I applied to the OS that could have started the problems. I'm absolutely positive that this issue did not occur until a few weeks ago (it's very noticeable, and only started to annoy me in the last few days). When the problem occurs, applications that make use of a network connection fail visibly (I get popups telling me that a VPN connection is broken, for instance). The network manager does not report any issues related to my wifi connection, though. Help?

    Read the article

  • Public Solaris/SPARC roadmap until 2015

    - by Karim Berrah
    It's now public, and gives you a nice overview of what's going on and where Oracle is heading with Solaris and the SPARC processors. It's available from here. What can we learn from this roadmap? Well, if you look carefully:

      - Oracle is announcing Solaris 11 this year. For the release date ... check OOW11.
      - Solaris 10 updates should still be released in 2012 (remember, it was released in 2005). Check the Solaris lifecycle to understand how long Solaris 10 is to stay side by side with Solaris 11.
      - In 2011, a great 3x single-strand improvement for the T-Series. Something great is under preparation, probably to be revealed at Oracle Open World 2011. Good news for ISVs!
      - In 2012, a great 6x throughput improvement for the M-Series! How can this be done? ...

    Nearly everything at the SPARC/Solaris level is said through the public roadmap, but as you know, the devil is in the details ;)

    Read the article

  • Simple multi-seat

    - by Oli
    I've asked about multiseat before. The answer (for 10.04) involved doing it the proper way (e.g. through gdm, with multiple server layouts). The problem was that gdm needs to be patched or reverted to 2.20 for multiseat. It's an ugly hack that, worst of all, will hold up future updates. As a result, I didn't do anything. I still have a spare video card. I still have the monitor, keyboard and mouse all sitting waiting to jump into action. And I still want to be able to turn that into a simple desktop. My needs don't seem complicated. I have a second video card, a USB hub, and anything connected to that USB hub that I want to be dedicated to another X server. I don't need a login screen (I'm happy hard-coding in an auto-login, and I'd be happy with the user starting the X server if that's possible). This is so simple in my head that I only need two questions answered:

      1. How can I explicitly start an X server from the command line on an unused video adapter (passing it whatever configuration I need to)?
      2. Can I have this new X session load a desktop environment on load?

    This seems like something you should be able to write as a little upstart script within 10 minutes. That would be perfect for me, as I'd then have nice start/stop control over the secondary desktop from the main desktop (which I want to leave unscathed!). I'm thinking of something as simple as this for the payload:

      su other_user -c "startx -- localhost hardware-information"

    And use .xinitrc to load openbox or something...

    Read the article

  • Using 12.04 installation as a persistent pen drive

    - by Cawas
    Disclaimer: I aim to build a self-contained pen drive with my application inside, so never mind updates. Maybe I'm looking at the wrong Linux distribution for this... please let me know if you think so. I've tried Knoppix and even Lubuntu, but they don't come with enough "drivers" for Unity3D to work. Creating a custom live persistent pen drive is a real pain, and I've been trying for a day without any success. Sure, being able to do it would probably be ideal and would occupy the minimum space. Using the installation image on a pen drive, however, is good enough and is really easy to create. We can even do it from any OS, using UNetbootin, LiLi USB Creator or some other method. Straightforward. Some recommend installing Ubuntu onto a pen drive, but that requires a lot of space, and I believe it won't behave as well as something meant to be installed on a USB disk, because of memory management. So there are only a few negative points to using the installation image that I can think of. The question is how to remove these drawbacks:

      1. Having to press "Try Ubuntu". That's the big one; I couldn't find out how.
      2. Being unable to load everything into memory and keep running without the pen drive (like this).
      3. Being unable to remove the "Install Ubuntu 12.04 LTS" app.
      4. Setting the ISO to use the maximum amount of space for the OS leaves the pen drive with zero space left, and any file saved to it from Ubuntu is inaccessible from the outside (when plugging the pen drive in without booting from it).

    Am I missing something? Can these points be fixed?

    Read the article

  • Accenture Foundation Platform for Oracle (AFPO)

    - by Lionel Dubreuil
    The Accenture Foundation Platform for Oracle (AFPO) is a pre-built, tested reference application, common services framework and development accelerator for Oracle's Fusion Middleware 11g product suite that can help to reduce development time and cost by up to 30 percent. AFPO is a unique accelerator that includes documentation, day-one deliverables and quick-start virtual machine images, along with access to a skilled team of resources, to reduce risk and cost while improving project quality. It can be delivered all at once or in stages, on-site, hosted, or as a cloud solution. Accenture recently released AFPO v5 for use with their clients. Accenture added significant updates in v5, including day-one images and documentation for WebCenter and ADF Mobile that are integrated with 30 other Oracle Middleware products, significantly reducing the services effort needed to stand these products up. AFPO v5 also features rapid configuration and implementation capabilities for SOA/BPM integrated with Oracle WebCenter Portal, Oracle WebCenter Content, Oracle Business Intelligence, Oracle Identity Management and Oracle ADF Mobile. AFPO v5 also delivers a starter kit for Oracle SOA Suite, which builds upon the integration methodology, leading practices and extended tooling contained within the Oracle Foundation Pack. The combination of the AFPO starter kit and Foundation Pack jump-starts and streamlines Oracle SOA Suite implementation initiatives, helping to reduce the risk of deploying new technologies and making architectural decisions, so clients can ultimately reduce the cost, risk and time needed for an implementation. You'll find more information at:

      - Accenture's website: www.accenture.com/afpo
      - YouTube AFPO telestration: http://www.youtube.com/watch?v=_x429DcHEJs
      - Press release
      - Brochure
      - Contacts: [email protected], Patrick J Sullivan (Accenture, Global Oracle Technology Lead), [email protected]

    Read the article

  • Ubuntu 12.04.1 LTS and Nvidia driver (304.51) 64bit: problem 640x480

    - by nibianaswen
    I have a problem with this configuration: an Asus K55V, Ubuntu 12.04 LTS and NVIDIA driver 304.51. I removed the nouveau driver with:

      apt-get --purge remove xserver-xorg-video-nouveau

    I installed the official NVIDIA driver (from www.nvidia.com), but when I reboot the PC the screen resolution is only 640x480 and the monitor image is resized. There was no solution to this problem even when I changed xorg.conf. Now I have uninstalled the NVIDIA driver and reinstalled it with:

      sudo apt-get purge nvidia-current
      sudo apt-add-repository ppa:ubuntu-x-swat/x-updates
      sudo apt-get update
      sudo apt-get install nvidia-current

    When I reboot, the screen resolution and size are OK, but if I start nvidia-settings I receive the message:

      You do not appear to be using the NVIDIA X driver.

    and with the command "sudo lshw -c display | grep driver" I receive:

      configuration: driver=i915 latency=0

    This sounds like the system is using the Intel card. When I launch "lspci | grep VGA" the output is:

      00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09)
      01:00.0 VGA compatible controller: NVIDIA Corporation Device 1058 (rev ff)

    And there is no /etc/X11/xorg.conf. I have read a lot of guides on the internet, but without success. How can I use the NVIDIA card with the driver that I have installed?

    Read the article

  • Need ideas for an innovation week

    - by slandau
    So four times a year we have an innovation week (to even out the odd sprint releases). The whole week is dedicated to experimenting with new technology and ideas that could potentially help progress the software department or the company as a whole, and it serves as a starting point for new ideas and brainstorming. For example, the last one contained a lot of projects. One was the redesign of our web app into more of a Web 2.0 look and feel using jQuery and a lot of cool CSS tricks. Another was a proposal for new bug-tracking software, as opposed to the clearly outdated one we use, and another was a very cool jQuery/JS design that could show the same page to multiple users on different computers and allow each of them to take "charge" of the page, disabling the others from doing anything, and vice versa, with everyone seeing all updates in real time: sort of like NetMeeting through JS. Well, this is my first one as a new employee, so I wanted to think of something cool. We get one week (anywhere from 40-60 hours or so), and we usually pair up or do this in groups of 3-4, depending on how many projects there are. Projects have to get approved, but usually that doesn't prove too difficult. We are in the financial analysis software industry, if the domain leads you to think of anything helpful. I am primarily working on a web app in MVC 2 at the moment, using a lot of jQuery and a C# backend. Do you have any ideas for something that would be cool, beneficial, or worth it?

    Read the article

  • Dual monitor setup can cause boot problem?

    - by kriszpontaz
    I have a two-monitor setup: one 22" (1680x1050, 16:10) and another 19" (1280x1024, 5:4). I installed Ubuntu 11.10 beta 2 x86; the installation worked fine and the system booted successfully. I then upgraded Ubuntu from the main server, and after a restart, booting with kernel 3.0.0-13, my system hangs on a purple screen and then nothing happens (the system boots successfully with kernel image 3.0.0-8). The current NVIDIA drivers are not installed, but if I install them the situation is the same. I have an NVIDIA 9600GT. I tried to boot with one screen attached, and I tried each port, but no luck at all. With kernel image 3.0.0-8 the system boots successfully with each display attached, but the later kernels (3.0.0-11, 3.0.0-12, etc.) all freeze, whether one display or multiple are attached. I have two systems with Ubuntu installed, and the other (with an ATI HD 2400XT and the latest closed drivers) doesn't have any issues like the one I wrote about. Update: the problem was solved by reinstalling the operating system with only one monitor attached, without automatically installing updates during the install. After completing the installation and a clean reboot, I installed the closed NVIDIA drivers. After all that, I found it's safe to connect another monitor to the system; it's not causing any problems. The situation will probably stay like this.

    Read the article

  • Pushing complete notifications to client

    - by ton.yeung
    So with CQRS, we accept that consistency is eventual. However, that doesn't mean that the user has to continually poll, or that "eventual" means an update has to take more than 500 ms to sync. For the sake of UX, we want to at least give the illusion of consistency or, if that's not possible, be as transparent as possible. With that in mind, I have this setup: an AngularJS web client, which consumes Web API RESTful services, which send commands to NServiceBus command handlers, which save to NEventStore, which dispatches events to NServiceBus event handlers, which send messages to a SignalR hub, which pushes notifications to the AngularJS web client. So with that setup, theoretically:

      1. Someone initiates a request.
      2. The server validates the request and sends out the necessary commands.
      3. In the meantime, the client gets a 200 response and updates the view: "working on it".
      4. Sometime later, the client gets a message: "done, here's the updated data".

    Here's where things get interesting: each command can spawn multiple events. Not sure if this is a serious no-no or not, but that's how it is currently. For example, a new customer spawns CustomerIDCreated, CustomerNameUpdated, CustomerAddressUpdated, etc. Which event handler needs to notify the client? Should all of them, in a progress-bar-style update?
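
    One way to answer the final question is to correlate events back to the command that produced them and notify once, when the last expected event lands. A minimal sketch of that bookkeeping (hypothetical names, and written in Java for brevity even though the stack above is .NET; the shape carries over to an NServiceBus saga):

      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;
      import java.util.concurrent.atomic.AtomicInteger;

      // Track completion per command: the command handler registers how many
      // events it will emit; each event handler decrements the counter, and only
      // the final one triggers the single "done" push to the client.
      public class CommandCompletionTracker {
          private final Map<String, AtomicInteger> pending = new ConcurrentHashMap<>();

          public void expect(String commandId, int eventCount) {
              pending.put(commandId, new AtomicInteger(eventCount));
          }

          /** Called by each event handler; returns true only for the last event. */
          public boolean eventHandled(String commandId) {
              AtomicInteger remaining = pending.get(commandId);
              if (remaining != null && remaining.decrementAndGet() == 0) {
                  pending.remove(commandId);
                  return true;  // caller pushes the single "done" notification now
              }
              return false;
          }
      }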

    Read the article

  • Windows Randomly Stop Accepting Input from Bluetooth Keyboard

    - by dragonmantank
    I've got a laptop running Ubuntu 11.10 with the most recent updates and an Apple Wireless Keyboard that syncs via Bluetooth. The Ubuntu box is also a Synergy server, using QuickSynergy to run Synergy. I'm using xmodmap to swap the Option and Command keys, but nothing else. Throughout the day, windows that have been running for a long time just stop accepting input. For example, I leave gnome-terminal up and running almost 24 hours a day. If it sits for a while, it just stops accepting input. It doesn't matter if I'm ssh'd into another machine or sitting in a local tty session; it just stops accepting input. If I open a new tab or window, those work fine; the 'broken' tabs stay broken. I'm also running Turpial (a Twitter client), which does the same thing: I tend to use the arrow keys to navigate, and it just stops accepting input. Closing it and reopening it makes it work fine again. I don't seem to have the problem in Chrome, but I tend to open new tabs when I go somewhere instead of using existing tabs. I've updated all the packages and rebooted, and the only thing that seems to cure it is typing on the built-in keyboard, after which the window starts to accept text from the Bluetooth keyboard again (until it stops again). I don't think the keyboard is disassociating from the laptop, because this can happen while I'm using the keyboard; it seems more linked to windows that have sat idle for a long time. As an example, I'm typing in Chrome with the Bluetooth keyboard right now, but I have a terminal window that won't accept input.

    Read the article

  • HTG Explains: What Is RSS and How Can I Benefit From Using It?

    - by Jason Fitzpatrick
    If you're trying to keep up with news and content on multiple web sites, you're faced with the never-ending task of visiting those sites to check for new content. Read on to learn about RSS and how it can deliver the content right to your digital doorstep. In many ways, content on the internet is beautifully linked together and accessible, but despite the interconnectivity of it all, we still frequently find ourselves visiting this site, then that site, then another site, all in an effort to check for updates and get the content we want. That's not particularly efficient, and there's a much better way to go about it. Imagine, if you will, a simple hypothetical situation. You're a fan of a web comic, a few tech sites, an infrequently updated but excellent blog about an obscure music genre you're a fan of, and you like to keep an eye on announcements from your favorite video game vendor. If you rely on manually visiting all those sites (and, let's be honest, our hypothetical example has a scant half-dozen sites, while the average person would have many, many more), then you're either going to waste a lot of time checking the sites every day for new content, or you're going to miss out on content as you either forget to visit the sites or find the content after it's no longer useful or relevant to you. RSS can break you free from that cycle of either over-checking or under-finding content by delivering the content to you as it is published. Let's take a look at what RSS is and how it can help.

    Read the article

  • Extension GLX missing...on a desktop PC

    - by Bart van Heukelom
    I just installed Ubuntu 12.10 on a new PC with an Nvidia GTX 560 graphics card, but after installing the Nvidia proprietary drivers (either -current or -current-updates), Unity won't start. When trying to start it manually I get the message "extension GLX missing". I've searched around and found results like this question, which point out that it's a problem with Nvidia Optimus laptops. However, I don't have this problem on a laptop, but on a desktop PC. lshw output for the graphics card:

      *-display
           description: VGA compatible controller
           product: GF114 [GeForce GTX 560 SE]
           vendor: NVIDIA Corporation
           physical id: 0
           bus info: pci@0000:01:00.0
           version: a1
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
           configuration: driver=nouveau latency=0
           resources: irq:16 memory:f4000000-f5ffffff memory:e0000000-e7ffffff memory:e8000000-ebffffff ioport:e000(size=128) memory:f6000000-f607ffff

    and CPU:

      *-cpu
           description: CPU
           product: Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz
           vendor: Intel Corp.
           physical id: 40
           bus info: cpu@0
           version: Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz
           slot: SOCKET 0
           size: 1600MHz
           capacity: 3800MHz
           width: 64 bits
           clock: 100MHz
           capabilities: x86-64 fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms cpufreq
           configuration: cores=4 enabledcores=4 threads=4

    Read the article

  • What is a correct/polite way to inherit from an abandoned open-source project for a new open-source project?

    - by Kabumbus
    My team just tried to contact some guys from an old open source project hosted on code.google.com. We told them that we'd like to join their project and commit to it — at least to some branch of it — but no one responded to us. We tried everyone, owners and committers; no one was in any way active, and no one replied. But we have some code to commit and we really would love to continue work on that project. So we need to create a new project. We came up with a name for it which is close to but not a duplicate of the name of the project we want to inherit from. How should we do our first commit, and what should the commit message be? Should we just copy their code to our repository with a comment like "we inherited this code, we found it here under such and such a license ... now we're upgrading it to this more/less strict license ..."? Or should we just use their code as our first commit, with updates saying "we inherited from ... we made such and such changes ..."?

    Read the article

  • Weird system freeze. Nothing works: keyboard/mouse/reset button - Ubuntu 12.04 64-bit [closed]

    - by Simon
    I have a fresh PC:

      - i5-3570K with onboard Intel HD 4000 graphics
      - ASRock Z77 Extreme4-M
      - 8GB RAM - Adata 1333MHz
      - 1TB Seagate drive, 7200rpm

    I also have fresh systems (Ubuntu/Win7), and today something strange happened: Ubuntu just froze, twice. Everything stopped; even the keyboard and mouse weren't responding. Even the RESET button didn't work; only a long press of the reset button rebooted the PC. Right now memtest is running, but I'd like to know where else I can look for the cause. Can it be a software fault if even reset isn't working? I'm a bit confused. Or should I test components: CPU, motherboard, disk? Which logs in Ubuntu should I check to diagnose the cause? I've had a few adventures with this PC already: the shipped motherboard was broken (an ASRock Z68 Extreme3) and had an old BIOS, so I had to contact the reseller, replace it, and in the end decided on the Z77, but everything took 3 weeks, so I have bad feelings... Edit: both freezes happened while editing something in gedit (it could be coincidence) and after a few updates today. When memtest finishes, I'll check what was updated.

    Read the article

  • Visual Studio 2010 Service Pack 1 Released

    - by krislankford
    The VS 2010 SP1 release was simultaneous with the release of TFS 2010 SP1 and includes support for the Project Server Integration Feature Pack and updates to .NET Framework 4.0. The complete Visual Studio SP1 list, including Test and Lab Manager: http://support.microsoft.com/kb/983509. The release addresses some of the most requested features from customers of Visual Studio 2010, like:

      - better help support
      - IntelliTrace support for 64-bit and SharePoint
      - Silverlight 4 Tools in the box
      - unit testing support on .NET 3.5
      - a new performance wizard for Silverlight

    Another major addition is the announcement of unlimited load testing for Visual Studio 2010 Ultimate with MSDN subscribers! The benefits of the Visual Studio 2010 Load Test Feature Pack, and useful links:

      - Improved overall software quality through early lifecycle performance testing: lets you stress test your application early and throughout its development lifecycle with realistically modeled simulated load. By integrating performance validations early into your applications, you can ensure that your solution copes with real-world demands and behaves in a predictable manner, effectively increasing overall software quality.
      - Higher productivity and reduced TCO with the ability to scale without incremental costs: development teams no longer have to purchase Visual Studio Load Test Virtual User Pack 2010.
      - Download the Visual Studio 2010 Load Test Feature Pack Deployment Guide
      - Get started with stress and performance testing with Visual Studio 2010 Ultimate: Quality Solutions Best Practice: Enabling Performance and Stress Testing throughout the Application Lifecycle
      - Hands-On-Lab: Introduction to Load Testing with ASP.NET Profile in Visual Studio 2010
      - How-Do-I videos: Use ASP.NET Profiler in Load Tests; Use Network Emulation in Load Tests
      - VHD/VPC walkthrough: Getting Started with Load and Performance Testing
      - Best practice guidance: Visual Studio Performance Testing Quick Reference Guide

    Read the article

  • ASP.NET MVC 3 (C#) Software Architecture

    - by ryanzec
    I am starting on a relatively large and ambitious ASP.NET MVC 3 project and am thinking about the best way to organize my code. The project is basically going to be a general management system capable of supporting any type of management system, whether it be a blogging system, CMS, reservation system, wiki, forum, project management system, etc., each of them being just a separate 'module'. You can read more about it on my blog post here: http://www.ryanzec.com/index.php/blog/details/8 (forgive me, the style of the site kinda sucks). For those who don't want to read the long blog post, the basic idea is that the core system itself is nothing more than a users system with an admin interface to manage it. Then you just add on modules as you need them. The module I will be creating first is a simple blog, to test things out before I move on to the big module, which is a project management system. Now I am trying to think of the best way to structure this so that it is easy for users to add in their own modules, but easy for me to update the core system without worrying about users modifying the core code. I think the ideal way would be to have a number of core projects that the user is specifically told not to modify, otherwise the system may become unstable and future updates would not work. When users want to add in their own modules, they would just add a new project (or multiple projects). The thing is, I am not sure it is even possible to use multiple projects, each with its own controllers, Razor view templates, CSS, JavaScript, etc., in one web application. Ideally each module would have some of its own Razor view templates, CSS, JavaScript and image files, and would also need access to some of the core Razor view templates, CSS, JavaScript and image files, which would be in a separate project. Is it possible to have one web application run off of controllers, Razor view templates, CSS, JavaScript and image files that are stored in multiple projects? Is there a better way to structure this to allow the user to easily add in modules without having to modify the core code?

    Read the article

  • LiveMeeting VC PowerShell PASS – Troubleshooting SQL Server with PowerShell

    - by Laerte Junior
    Guys, join me on Wednesday, July 18th at 12 noon EDT (GMT-4) for a presentation called Troubleshooting SQL Server with PowerShell. It will be in English, so please make allowances for this. I'm sure you're aware that my English is not perfect, but it is not so bad. I will do my best, you can be sure. The registration link will be available soon from PowerShell.sqlpass.org, so I hope to see you there. It will be a session without slides: just code, pure PowerShell code. Trust me, we will see a lot of COOL stuff. Big thanks to Aaron Nelson (@sqlvariant) for the opportunity! Here are some more details about the presentation, 'Troubleshooting SQL Server with PowerShell - The Next Level': It is normal for us to have to face poorly performing queries or even complete failures in our SQL Server environments. This can happen for a variety of reasons, including poor database designs, hardware failure, improperly configured systems, and OS updates applied without testing. As database administrators, we need to take precautions to minimize the impact of these problems when they occur, and so we need the tools and methodology required to identify and solve issues quickly. In this session we will use PowerShell to explore some common troubleshooting techniques used in our day-to-day work as DBAs. This will include a variety of activities, including gathering performance counters on several servers at the same time using background jobs, identifying blocked sessions, and reading and filtering the SQL error log even if the instance is offline. The approach will use some advanced PowerShell techniques that allow us to scale the code for multiple servers and run the data collection in asynchronous mode.

    Read the article

  • Oracle E-Business Suite is Helping to Save Lives at the National Marrow Donor Program

    - by Di Seghposs
    To improve the management of its life-saving operations, the National Marrow Donor Program recently modernized its financial and procurement operations by upgrading to Oracle E-Business Suite 12.1.   As the global leader in bone marrow and umbilical cord blood transplants, the NMDP manages a complex ecosystem of donor, patient, hospital, and biological data. “Maintaining accurate data and having an efficient matching process is essential, particularly as our global database of bone marrow patients grows and donor lists expand,” says Bruce Schmaltz, director of finance/controller. “We rely on the Oracle E-Business Suite to ensure our procurement and financial management processes meet the highest standards, enabling our growing non-profit to work swiftly and efficiently to help improve and save lives.” As the non-profit organization and its registry grew larger, NMDP needed a modern platform to store and integrate its financial information and complicated procurement process. It selected Oracle E-Business Suite for its ability to fit seamlessly into NMDP’s enterprise architecture. NMDP initially implemented Oracle E-Business Suite release 12 by leveraging Oracle Business Accelerators, which are rapid implementation tools and templates that help reduce implementation time and costs. With Oracle Financial Management and Oracle Procurement, NMDP has streamlined back-office processes and integrated its procure-to-pay business processes by leveraging industry leading accounts payable, accounts receivable, and general ledger modules. NMDP is currently rolling out Oracle Hyperion Performance Management applications and plans to implement Oracle Order Management and Oracle Advanced Pricing by the end of 2012. Read more details about NMDP’s modernization efforts.  For more updates on Oracle Financial Management Solutions, view our November 2012 Oracle Information InDepth Financial Management newsletter. Subscribe Now. 

    Read the article

  • Fast determination of whether objects are onscreen in 2D

    - by Ben Ezard
    So currently, I have this in each object's renderer's update method:

      float a = transform.position.x * Main.scale;
      float b = transform.position.y * Main.scale;
      float c = Camera.main.transform.position.x * Main.scale;
      float d = Camera.main.transform.position.y * Main.scale;
      onscreen = a + width - c > 0 && a - c < GameView.width &&
                 b + height - d > 0 && b - d < GameView.height;

    transform.position is a 2D vector containing the game engine's definition of where the object is; this is then multiplied by Main.scale to translate that coordinate into actual screen space. Similarly, Camera.main.transform.position is the in-engine representation of where the main camera is, and this is also multiplied by Main.scale. The problem is, as my game is tile-based, thousands of these updates get called every frame just to determine whether or not each object should be drawn. How can I improve this, please?
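
    A sketch of the first easy win (plain Java with hypothetical names rather than the engine's API): the camera terms are identical for every object, so compute them once per frame and run the same test in a tight loop, instead of re-deriving c and d thousands of times.

      // Cull all objects against a camera rectangle computed once per frame.
      public class CullingPass {
          public static void update(GameObject[] objects, Camera camera,
                                    float scale, float viewWidth, float viewHeight) {
              float camX = camera.x * scale;   // computed once per frame, not per object
              float camY = camera.y * scale;
              for (GameObject obj : objects) {
                  float x = obj.x * scale;
                  float y = obj.y * scale;
                  obj.onscreen = x + obj.width  - camX > 0 && x - camX < viewWidth
                              && y + obj.height - camY > 0 && y - camY < viewHeight;
              }
          }

          static class Camera { float x, y; }
          static class GameObject { float x, y, width, height; boolean onscreen; }
      }

    For a tile-based world the larger win is usually spatial: bucket objects into a grid keyed by tile coordinates and test only the buckets that overlap the camera rectangle.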

    Read the article

  • Simple Interactive Search with jQuery and ASP.Net MVC

    - by Doug Lampe
    Google now has a feature where the search updates as you type in the search box. You can implement the same feature on your MVC site with a little jQuery and MVC Ajax. Here's how:

      1. Create a JavaScript global variable to hold the previous value of your search box.
      2. Use setTimeout to check whether the search box has changed after some interval. (Note: don't use setInterval, since we don't want to have to turn the timer off while Ajax is processing.)
      3. Submit the form if the value has changed.
      4. Set the update target to display your results.
      5. Set the on-success callback to "start" the timer again. This, along with step 2 above, makes sure that you don't send multiple requests until the initial request has finished processing.

    Here is the code:

      <script type="text/javascript">
          var searchValue = $('#Search').val();
          $(function () {
              setTimeout(checkSearchChanged, 100); // delay is in milliseconds
          });
          function checkSearchChanged() {
              var currentValue = $('#Search').val();
              if ((currentValue) && currentValue != searchValue && currentValue != '') {
                  searchValue = $('#Search').val();
                  $('#submit').click();
              }
              else {
                  setTimeout(checkSearchChanged, 100);
              }
          }
      </script>

      <h2>Search</h2>
      <% using (Ajax.BeginForm("SearchResults", new AjaxOptions { UpdateTargetId = "searchResults", OnSuccess = "checkSearchChanged" })) { %>
          Search: <%= Html.TextBox("Search", null, new { @class = "wide" })%><input id="submit" type="submit" value="Search" />
      <% } %>
      <div id="searchResults"></div>

    That's it!

    Read the article

  • Lucene best practice

    - by Dragos
    I am trying to understand how Lucene should be used. From what I have read, creating an IndexReader is costly, so using a SearcherManager should be the right choice. However, a SearcherManager should be produced by an NRTManager (which, by the way, should replace the IndexWriter for every add or delete operation performed). But in order to have an NRTManager, I should first have an IndexWriter, and here comes my problem. The documentation says:

      - an IndexWriter is thread-safe
      - the constructor of this class takes a Directory object, so it seems creating an instance should be costly (as in the case of an IndexReader)
      - all changes are buffered and flushed periodically (so they seem to encourage using a single instance)

    but:

      - the changes, although flushed, will only be visible after commit or close
      - after finishing making updates (add/delete), the instance should be closed

    I also found this: http://stackoverflow.com/questions/5374419/forgot-to-close-the-lucene-indexwriter-after-adding-documents-to-the-index, where it is said that not closing a writer might ruin everything. So what am I really supposed to do? Is having a single IndexWriter instance a good idea (only ever commit, and never close it)? EDIT: What's more, if I use NRTManager, how can I make a commit? Is it even possible?
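
    For reference, a minimal sketch of the pattern most answers converge on (Lucene 4.x-era API; verify class and method names against the version in use): one long-lived IndexWriter per index, a SearcherManager built on top of it for near-real-time reads, periodic commit() for durability, and close() only at shutdown.

      import org.apache.lucene.analysis.standard.StandardAnalyzer;
      import org.apache.lucene.document.Document;
      import org.apache.lucene.index.IndexWriter;
      import org.apache.lucene.index.IndexWriterConfig;
      import org.apache.lucene.search.IndexSearcher;
      import org.apache.lucene.search.SearcherFactory;
      import org.apache.lucene.search.SearcherManager;
      import org.apache.lucene.store.Directory;
      import org.apache.lucene.store.RAMDirectory;
      import org.apache.lucene.util.Version;

      public class NrtSketch {
          public static void main(String[] args) throws Exception {
              Directory dir = new RAMDirectory();
              IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(
                      Version.LUCENE_40, new StandardAnalyzer(Version.LUCENE_40)));
              SearcherManager manager =
                      new SearcherManager(writer, true, new SearcherFactory());

              writer.addDocument(new Document());  // index as usual
              writer.commit();                     // durable on disk
              manager.maybeRefresh();              // visible to newly acquired searchers

              IndexSearcher searcher = manager.acquire();
              try {
                  // run queries against 'searcher'
              } finally {
                  manager.release(searcher);       // release, never close, the searcher
              }

              manager.close();
              writer.close();                      // only at application shutdown
          }
      }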

    Read the article
