Search Results

Search found 22041 results on 882 pages for 'kill process'.

  • Parsing HTML Documents with the Html Agility Pack

    Screen scraping is the process of programmatically accessing and processing information from an external website. For example, a price comparison website might screen scrape a variety of online retailers to build a database of products and what various retailers are selling them for. Typically, screen scraping is performed by mimicking the behavior of a browser - namely, by making an HTTP request from code and then parsing and analyzing the returned HTML. The .NET Framework offers a variety of classes for accessing data from a remote website, most notably the WebClient and HttpWebRequest classes. These classes are useful for making an HTTP request to a remote website and pulling down the markup from a particular URL, but they offer no assistance in parsing the returned HTML. Instead, developers commonly rely on string parsing methods like String.IndexOf and String.Substring, or on regular expressions. Another option for parsing HTML documents is to use the Html Agility Pack, a free, open-source library designed to simplify reading from and writing to HTML documents. The Html Agility Pack constructs a Document Object Model (DOM) view of the HTML document being parsed. With a few lines of code, developers can walk through the DOM, moving from a node to its children, or vice versa. The Html Agility Pack can also return specific nodes in the DOM through the use of XPath expressions. (The Html Agility Pack also includes a class for downloading an HTML document from a remote website; this means you can both download and parse an external web page using the Html Agility Pack.) This article shows how to get started using the Html Agility Pack and includes a number of real-world examples that illustrate this library's utility. A complete, working demo is available for download at the end of this article. Read on to learn more!
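    To make the "few lines of code" claim concrete, here is a minimal sketch of downloading a page and pulling out all of its hyperlinks with the Html Agility Pack. The URL and the XPath expression are placeholders, not anything taken from the article's demo.

    ```csharp
    using System;
    using HtmlAgilityPack; // reference HtmlAgilityPack.dll from the library download

    class Scraper
    {
        static void Main()
        {
            // HtmlWeb downloads the remote page and returns a parsed HtmlDocument (the DOM view).
            var web = new HtmlWeb();
            HtmlDocument doc = web.Load("http://example.com/products"); // placeholder URL

            // Select nodes with an XPath expression; SelectNodes returns null when nothing matches.
            HtmlNodeCollection links = doc.DocumentNode.SelectNodes("//a[@href]");
            if (links != null)
            {
                foreach (HtmlNode link in links)
                {
                    Console.WriteLine("{0} -> {1}",
                        link.InnerText.Trim(),
                        link.GetAttributeValue("href", string.Empty));
                }
            }
        }
    }
    ```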

  • Why does tracerpt use up all of my Sql Server's memory?

    - by Cypher
    We have an MS SQL Server 2008 machine with 12 GB of RAM... twice now within the last week this server was knocked on its backside by a process called "tracerpt.exe", which was found to have taken up ALL of the system's memory, leaving nothing for sqlserver. I've done my homework and figured out what this program is... but I still have no idea why it's hogging so much RAM (though I have an idea), nor what application is actually executing it. This server is the back-end to a Microsoft Dynamics CRM 4.0 application which is hosted on a separate server, and it is our production database used for just about everything. If this program is necessary, I would like to be able to find the application that is executing this thing and remove it or disable whatever feature is causing this quite annoying occurrence. Any ideas?
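    One way to find out which application launches tracerpt.exe is to catch it while it is running and read its parent process ID from WMI. The following C# sketch is my own illustration (it assumes a reference to System.Management.dll), not something taken from the question or its answers:

    ```csharp
    using System;
    using System.Management; // add a reference to System.Management.dll

    class FindTracerptParent
    {
        static void Main()
        {
            // Query WMI for any running tracerpt.exe instance and its parent process ID.
            var searcher = new ManagementObjectSearcher(
                "SELECT ProcessId, ParentProcessId, CommandLine FROM Win32_Process WHERE Name = 'tracerpt.exe'");

            foreach (ManagementObject proc in searcher.Get())
            {
                uint parentId = (uint)proc["ParentProcessId"];
                Console.WriteLine("tracerpt PID {0}, command line: {1}", proc["ProcessId"], proc["CommandLine"]);
                try
                {
                    var parent = System.Diagnostics.Process.GetProcessById((int)parentId);
                    Console.WriteLine("Launched by: {0} (PID {1})", parent.ProcessName, parentId);
                }
                catch (ArgumentException)
                {
                    Console.WriteLine("Parent process {0} has already exited.", parentId);
                }
            }
        }
    }
    ```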

  • Has anyone else had problems with the miserable kb980408 update?

    - by Dan
    I hadn't experienced any problems before installing KB980408 for Windows 7 x86. The update made it impossible for me to log into my computer: it hangs after displaying a "Setting up personalized settings for Windows Desktop Update" message; after killing that task, a "Windows Explorer is not responding - Windows is checking for a solution to the problem" message appears, and then it waits forever... If I killed that task, I was not able to continue the boot process; if I ran explorer.exe manually, the same thing happened. The solution, by the way, was to boot into safe mode and use Add/Remove Programs in the Control Panel to uninstall this wretched piece of code. Startup Repair and/or System Restore will not help you; you need to uninstall it.
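    For reference, the same update can usually also be removed from an elevated command prompt with the Windows Update Standalone Installer. This is a general Windows 7 technique rather than something from the original post, and the servicing stack may not be available in safe mode, so the Control Panel route described above remains the safer bet there.

    ```
    rem Remove update KB980408 (run from an elevated command prompt)
    wusa /uninstall /kb:980408
    ```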

  • IIS mystery: "Deadlock detected" periodically makes site unavailable

    - by jskunkle
    A few times a day, our VB.NET (IIS 6.0) website randomly throws the following error and becomes completely unavailable for 5-15 minutes at a time while the application is recycled: ISAPI 'c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll' reported itself as unhealthy for the following reason: 'Deadlock detected'. The website ran for months on the exact same server in beta without problems - but the problem started over the weekend when we made the site live. The live site is under some load, but less than many of our other production websites. How should I attack this problem? I've looked into orphaning the worker process and creating a dump file - but I'm not sure how to analyze that. Any advice or information is appreciated. Thanks, Shane
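    For the "not sure how to analyze that" part, here is a hedged sketch of the usual Debugging Tools for Windows workflow for a hung w3wp.exe. The dump folder is a placeholder and this is general guidance, not anything specific to this site:

    ```
    rem 1) Capture a hang-mode dump of the IIS worker process (Debugging Tools for Windows):
    adplus -hang -pn w3wp.exe -o C:\dumps

    rem 2) Open the dump in WinDbg, then run:
    rem    .loadby sos mscorwks   (load the .NET 2.0 SOS debugging extension)
    rem    !threadpool            (thread pool utilization - deadlocks often show all worker threads busy)
    rem    !syncblk               (which locks are held and by which threads)
    rem    ~*e !clrstack          (managed call stack for every thread)
    ```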

  • Apache+Tomcat VS Stand Alone Tomcat or GlassFish

    - by TonyZ
    Hi, I am setting up a Debian server to serve Java web applications. I have done quite a bit of research for several weeks now. Tomcat's web site says it is better to use standalone Tomcat for speed if you are not clustering. However, I have seen many people suggest that using Apache + Tomcat gives you better security and protection against attacks. Please assume that the process will be running on port 80 as an unprivileged user. I would assume that if you are running a firewall in front of the server, Tomcat should be fine. If, however, you just want to run an exposed web server using the Linux firewall, what is the best option? Or maybe someone can recommend another open source web server. I am trying to keep the solution as light as possible, as these webapps will be running in containers. All opinions welcome and valued. Thanks, Tony Z
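    One common way to satisfy the "port 80 as an unprivileged user" requirement is to leave Tomcat on its default unprivileged port and redirect port 80 to it with the Linux firewall. This is a general technique, not something from the question, and it assumes Tomcat's default HTTP connector on 8080:

    ```
    # Redirect incoming TCP port 80 to Tomcat's unprivileged listener on 8080
    iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
    # Optional: do the same for local (loopback) connections
    iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-port 8080
    ```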

  • SQLAuthority News – Downloads Available for Microsoft SQL Server Compact 3.5

    - by pinaldave
    There are a few downloads released for Microsoft SQL Server Compact 3.5. Here is a quick list of them.
    - Microsoft SQL Server Compact 3.5 Service Pack 2 for Windows Desktop: SQL Server Compact 3.5 SP2 is an embedded database that allows developers to build robust applications for Windows desktops and mobile devices. The download contains the files for installing SQL Server Compact 3.5 SP2 and Synchronization Services for ADO.NET version 1.0 SP1 on the Windows desktop.
    - Microsoft SQL Server Compact 3.5 Service Pack 2 Server Tools: The SQL Server Compact 3.5 SP2 Server Tools Windows Installer (MSI) file installs replication components on the computer running Internet Information Services (IIS) for synchronizing data with SQL Server 2005, SQL Server 2008 and SQL Server 2008 R2 November CTP.
    - Microsoft SQL Server Compact 3.5 Service Pack 2 Books Online: SQL Server Compact 3.5 is a small-footprint, in-process database engine that allows developers to build robust applications for Windows desktops and mobile devices. This download contains the Books Online for the SP2 version of SQL Server Compact 3.5.
    Note: The brief description for each download is taken from the respective download page. Reference: Pinal Dave (http://blog.sqlauthority.com)
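    Since both descriptions emphasize that SQL Server Compact is an embedded, in-process engine, a small illustrative sketch may help. The file name, table, and data below are made up, and the code assumes a project reference to System.Data.SqlServerCe.dll from SQL Server Compact 3.5:

    ```csharp
    using System;
    using System.Data.SqlServerCe; // ships with SQL Server Compact 3.5
    using System.IO;

    class CompactDemo
    {
        static void Main()
        {
            const string file = "demo.sdf"; // hypothetical local database file
            const string connectionString = "Data Source=" + file;

            if (!File.Exists(file))
            {
                // The engine runs in-process; creating a database is just creating a file.
                using (var engine = new SqlCeEngine(connectionString))
                    engine.CreateDatabase();

                using (var connection = new SqlCeConnection(connectionString))
                {
                    connection.Open();
                    new SqlCeCommand("CREATE TABLE Products (Id int, Name nvarchar(100))", connection).ExecuteNonQuery();
                    new SqlCeCommand("INSERT INTO Products (Id, Name) VALUES (1, 'Widget')", connection).ExecuteNonQuery();
                }
            }

            using (var connection = new SqlCeConnection(connectionString))
            {
                connection.Open();
                using (SqlCeDataReader reader = new SqlCeCommand("SELECT Id, Name FROM Products", connection).ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
                }
            }
        }
    }
    ```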

  • SQLAuthority News – Download Whitepaper – Understanding and Controlling Parallel Query Processing in SQL Server

    - by pinaldave
    My recent article SQL SERVER – Reducing CXPACKET Wait Stats for High Transactional Database has received many good comments regarding MAXDOP 1 and MAXDOP 0. I really enjoyed reading the comments, as they came from industry leaders and gurus. I was researching the subject further and ended up at the following white paper written by Microsoft. Understanding and Controlling Parallel Query Processing in SQL Server: Data warehousing and general reporting applications tend to be CPU intensive because they need to read and process a large number of rows. To facilitate quick data processing for queries that touch a large amount of data, Microsoft SQL Server exploits the power of multiple logical processors to provide parallel query processing operations such as parallel scans. Through extensive testing, we have learned that, for most large queries that are executed in a parallel fashion, SQL Server can deliver linear or nearly linear response time speedup as the number of logical processors increases. However, some queries in high parallelism scenarios perform suboptimally. There are also some parallelism issues that can occur in a multi-user parallel query workload. This white paper describes parallel performance problems you might encounter when you run such queries and workloads, and it explains why these issues occur. In addition, it presents how data warehouse developers can detect these issues, and how they can work around them or mitigate them. To review the document, please download the Understanding and Controlling Parallel Query Processing in SQL Server Word document. Note: The above abstract has been taken from here. The real question is: have parallel queries made the DBA's life much simpler, or are they viewed as a potential source of performance degradation? Reference: Pinal Dave (http://blog.sqlauthority.com)
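    For readers following the MAXDOP discussion, here is a small T-SQL sketch of the two levels at which the degree of parallelism is commonly controlled. The values shown are only examples for illustration, not recommendations from the article or the white paper, and the table name is hypothetical:

    ```sql
    -- Server-wide setting: 0 lets SQL Server use all available schedulers, 1 suppresses parallel plans
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 1;
    RECONFIGURE;

    -- Per-query override: limit (or allow) parallelism for a single statement with a query hint
    SELECT COUNT(*)
    FROM Sales.SalesOrderDetail   -- hypothetical table
    OPTION (MAXDOP 1);
    ```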

  • How do I get this Cisco VPN client to connect?

    - by WebWeasel
    I've got Ubuntu 10.10 64-bit and installed network-manager-vpnc and configured the connection, but I keep getting this:

        NetworkManager[1217]: <info> Starting VPN service 'org.freedesktop.NetworkManager.vpnc'...
        NetworkManager[1217]: <info> VPN service 'org.freedesktop.NetworkManager.vpnc' started (org.freedesktop.NetworkManager.vpnc), PID 4420
        NetworkManager[1217]: <info> VPN service 'org.freedesktop.NetworkManager.vpnc' appeared, activating connections
        NetworkManager[1217]: <info> VPN plugin state changed: 1
        NetworkManager[1217]: <info> VPN plugin state changed: 3
        NetworkManager[1217]: <info> VPN connection 'CSI' (Connect) reply received.
        modem-manager: (net/tun0): could not get port's parent device
        NetworkManager[1217]: SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/tun0, iface: tun0)
        NetworkManager[1217]: SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/tun0, iface: tun0): no ifupdown configuration found.
        kernel: [ 2281.723506] tun0: Disabled Privacy Extensions
        avahi-daemon[1109]: Withdrawing workstation service for tun0.
        NetworkManager[1217]: SCPlugin-Ifupdown: devices removed (path: /sys/devices/virtual/net/tun0, iface: tun0)
        NetworkManager[1217]: <warn> VPN plugin failed: 1
        NetworkManager[1217]: <info> VPN plugin state changed: 6
        NetworkManager[1217]: <info> VPN plugin state change reason: 0
        NetworkManager[1217]: <warn> error disconnecting VPN: Could not process the request because no VPN connection was active.
        NetworkManager[1217]: <info> Policy set 'Auto eth0' (eth0) as default for IPv4 routing and DNS.
        NetworkManager[1217]: <info> Starting VPN service 'org.freedesktop.NetworkManager.vpnc'...
        NetworkManager[1217]: <info> VPN service 'org.freedesktop.NetworkManager.vpnc' started (org.freedesktop.NetworkManager.vpnc), PID 4547
        NetworkManager[1217]: <info> VPN service 'org.freedesktop.NetworkManager.vpnc' appeared, activating connections
        NetworkManager[1217]: <info> VPN plugin state changed: 1
        NetworkManager[1217]: <info> VPN plugin state changed: 3

    I've seen a couple of bugs on Launchpad that could be the same thing - or have I done something wrong?
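    One troubleshooting step (my suggestion, not part of the question) is to take NetworkManager out of the loop and run vpnc directly; if that connects, the problem is likely in the NetworkManager-vpnc plugin rather than in vpnc itself or the gateway. A minimal configuration sketch with placeholder values:

    ```
    # /etc/vpnc/example.conf  (placeholder values - use your real gateway, group and user)
    IPSec gateway vpn.example.com
    IPSec ID mygroup
    IPSec secret mygrouppassword
    Xauth username myuser

    # then run:
    #   sudo apt-get install vpnc
    #   sudo vpnc example
    # and disconnect with:
    #   sudo vpnc-disconnect
    ```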

  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
    Overview
    · With HPC 2008 R2 SP1 you can add Azure worker roles as compute nodes in a local Windows HPC Server cluster.
    · The subscription for Windows Azure is charged like any other Azure service - for the time that the role instances are available, as well as for the compute and storage services that are used on the nodes.
    · Win-win? Azure charges the compute hour cost (according to VM size) amortized over a month - so you save on purchasing compute node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud.
    · Blob storage is used to hold the input & output files of each job. I can see how Parametric Sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC Job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure Queues.
    · This is not the end of the story for Azure Grid Computing. If MS requires you to purchase a local HPC license (and administrate it), what's to stop a 3rd party from doing this and encapsulating and exposing the HPC WCF Broker Service to you for managing compute nodes? If MS doesn't provide the head node as a service, someone else will!

    Process
    · Requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure + an availability policy for the worker nodes.
    · After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs.
    · A Windows Azure worker role instance runs an HPC-compatible Azure guest operating system which runs on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released - all role instances defined by your service will run on the guest operating system version that you specify. See Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549).
    · Use the hpcpack command to upload file packages and install files to run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

    Requirements
    · Assuming you have an Azure subscription account and the HPC head node installed and configured.
    · Install HPC Pack 2008 R2 SP1 - see Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812).
    · Configure the head node to connect to the Internet - connectivity is provided by the connection of the head node to the enterprise network. You may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported.
    · Configure the firewall - allow outbound TCP traffic on the following ports: 80, 443, 5901, 5902, 7998, 7999.
    · Note: HPC Server uses Admin Mode (Elevated Privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes.
    · Obtain a Windows Azure subscription certificate - the Windows Azure subscription must be configured with a public subscription (API) certificate: a valid X.509 certificate with a key size of at least 2048 bits. Generate a self-signed certificate & upload a .cer file to the Windows Azure Portal Account page > Manage my API Certificates link. See Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526).
    · Import the certificate with an associated private key on the HPC cluster head node - into the trusted root store of the local computer account.

    Obtain Windows Azure Connection Information for HPC Server
    · Required for each worker node template.
    · Copy it from the Azure portal - get it from: navigation pane > Hosted Services > Storage Accounts & CDN.
    · Subscription ID - a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In the Properties pane.
    · Subscription certificate thumbprint - a 40-char hex string (you need to remove spaces). In Management Certificates > Properties pane.
    · Service name - the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane.
    · Blob Storage account name - the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane.

    Import the Azure Subscription Certificate on the HPC Head Node
    · Enables the services for Windows HPC Server to authenticate properly with the Windows Azure subscription.
    · Use the Certificates MMC snap-in to import the certificate to the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (.pfx or .p12 file) with a private key that is protected by a password.
    · See Certificates (http://go.microsoft.com/fwlink/?LinkId=163918).
    · To open the Certificates snap-in: Run > mmc. File > Add/Remove Snap-in > Certificates > Computer account > Local Computer.
    · To import the certificate via the wizard: Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import.
    · After the certificate is imported, it appears in the details pane in the Certificates snap-in. You can open the certificate to check its status.

    Configure a Proxy Client on the HPC Head Node
    · The following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker.

    Create a Windows Azure Worker Node Template
    · Edit HPC node templates in HPC Node Template Editor.
    · Specify: 1) Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster + 2) worker node availability policy - rules for deploying / removing worker role instances in Windows Azure.
      o HPC Cluster Manager > Configuration > Navigation Pane > Node Templates > Actions pane > New -> Create Node Template Wizard, or Edit -> Node Template Editor
      o Choose Node Template Type page - Windows Azure worker node template
      o Specify Template Name page - template name & description
      o Provide Connection Information page - Azure Subscription ID (text) & Subscription certificate (browse)
      o Provide Service Information page - Azure service name + blob storage account name (optionally click Retrieve Connection Information to get the list available from Azure - possible LRT)
      o Configure Azure Availability Policy page - how Windows Azure worker nodes start / stop (online / offline the worker role instance - add / remove) - manual / automatic
      o For automatic - in the Configure Windows Azure Worker Availability Policy dialog, select days and hours for worker nodes to start / stop.
    · To validate the Windows Azure connection information, on the template's Connection Information tab > Validate connection information.
    · You can upload a file package to the storage account that is specified in the template - e.g. upload application or service files that will run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

    Add Azure Worker Nodes to the HPC Cluster
    · Use the Add Node Wizard - specify: 1) the worker node template, 2) the number of worker nodes (within the quota of role instances in the Azure subscription), and 3) the VM size of the worker nodes: ExtraSmall, Small, Medium, Large, or ExtraLarge.
    · To add worker nodes of different sizes, you must run the Add Node Wizard separately for each size.
    · All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes.
    · To add Windows Azure worker nodes:
      o In HPC Cluster Manager: Node Management > Actions pane > Add Node -> Add Node Wizard
      o Select Deployment Method page - Add Azure Worker nodes
      o Specify New Nodes page - select a worker node template, specify the number and size of the worker nodes
    · After you add worker nodes to the cluster, they are in the Not-Deployed state, and they have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online.
    · Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN - this is non-configurable.

    Deploying Windows Azure Worker Nodes
    · To deploy the role instances in Windows Azure, start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure Worker Node Template - Azure Availability Policy - to be automatic or manual.
    · The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates).
    · Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure.
    · To start worker nodes manually and bring them online:
      o In HPC Node Management > Navigation Pane > Nodes > List / Heat Map view - select one or more worker nodes.
      o Actions pane > Start - in the Start Azure Worker Nodes dialog, select a node template.
      o The state of the worker nodes changes from Not Deployed; to track the provisioning progress, see the worker node Details Pane > Provisioning Log tab.
      o If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes.
      o After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online.
    · Troubleshooting:
      o Check the node template.
      o Use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999
      o Check node status - deployment status information appears in the service account information in the Windows Azure Portal - HPC queries this - see node status information for any failed nodes in HPC Node Management.
    · When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).
    · To remove a set of role instances in Windows Azure, stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed.
    · Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added. However, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.
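    To make the certificate requirement above a little more concrete, here is a hedged command-line sketch of creating a self-signed 2048-bit certificate and placing a copy in the local computer's Trusted Root store. The subject name and file names are placeholders, and the article itself only prescribes the MMC/PFX route, so treat this as an illustrative alternative rather than the documented procedure:

    ```
    rem Create a self-signed, exportable 2048-bit certificate (Windows SDK makecert tool; names are placeholders)
    makecert -r -pe -n "CN=AzureHpcSubscription" -len 2048 -ss My -sr LocalMachine azurehpc.cer

    rem Upload azurehpc.cer as a management (API) certificate in the Windows Azure Portal, then import it
    rem (or the exported .pfx with its private key, as the article describes) into the Trusted Root store:
    certutil -addstore Root azurehpc.cer
    ```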

  • Tulsa - Launch 2010 Highlight Events

    - by dmccollough
    Tuesday, May 04, 2010
    Renaissance Tulsa Hotel and Convention Center, Seville II and III
    6808 S. 107th East Avenue, Tulsa, Oklahoma 74133

    For the Developer, 1:00 PM - 5:00 PM
    Event Overview - MSDN Events Present: Launch 2010 Highlights. Join your local Microsoft Developer Evangelism team to find out first-hand how the latest features in Microsoft® Visual Studio® 2010 can help boost your development creativity and performance. Learn how to improve the process of refactoring your existing code base and drive tighter collaboration with testers. Explore innovative web technologies and frameworks that can help you build dynamic web applications and scale them to the cloud. And learn about the wide variety of rich application platforms that Visual Studio 2010 supports, including Windows 7, the Web, Windows Azure, SQL Server, and Windows Phone 7 Series. Click here to register.

    For the IT Professional, 8:00 AM - 12:00 PM
    Event Overview - TechNet Events Present: Launch 2010 Highlights. Join your local Microsoft IT Pro Evangelism team to find out first-hand what Microsoft® Office® 2010 and SharePoint® 2010 mean for the productivity of you and your people - across PC, phone, and browser. Learn how this latest wave of technologies provides a revolutionary user experience and how it takes us into a future of greater productivity. Come and explore the tools that will help you optimize desktop deployment. Click here to register.

  • Day 2 - Game Design Documentation

    - by dapostolov
    So yesterday I didn't cut any code for my game but I was able to do a tiny bit of research on the XNA Game Development Technology and the communities out there and do you know what? I feel I'm a bit closer to my goal. The bad news is today I didn't cut code either. However, not all is lost because I wanted to get my ideas on paper and today I just did that.  Today, I began to jot down notes about the game and how I felt the visual elements would interact with each other. Unlike my workplace, my personal level of documentation is nothing more than a task list or a mind map of my ideas; it helps me streamline my solutions quiet effectively and circumvent the long process of articulating each thought to the n-th degree. I truly dislike documentation (because I have an extremely hard time articulating my thought and solutions); however, because I tend to do a really good job with documentation I tend to get stuck writing the buggers. But as a generalist remark: 'No Developer likes documentation.' For now let's stick with my basic notes and call this post a living document. Here are my notes, fresh, from after watching the new first episode of Merlin second season! Actually, a quick recommendation to anyone who is reading this (if anyone is): I truly recommend you envelope yourself in the medium or task you're trying to tackle. Be one with moment and feel it! For instance: Are you writing a fantasy script / game? What would the music of the genre sound like? For me the Conan the Barbarian soundtrack by Basil Poledouris is frackin awesome. There are many other good CD's out there, which I listen to (some who even use medival instruments, but Conan I keep returning to. It's a creative trigger for me. Ask yourself what would the imagery look like? Time to surf google for artist renditions of fantasy! What would the game feel like? Start playing some of your favorite games that inspire you, be wary though, have some self control and don't let it absorb your time. Anyhow, onto the documentation... Screens, Scenes, and Sprites. Oh My! (groan...) The first thing that came to mind were the screens, I thought the following would suffice: Menu Screen Character Customisation Screen Loading Screen? Battle Ground The Menu Screen Ok. So, the thought here is when the game loads a huge title is displayed: Wizard Wars. The player is prompted with 3 menu items: 1 Player Game, 2 Player Game, and Exit. Since I'm targetting the PC platform, as a non-networked game to start, I picture myself running my mouse over each menu option and the visual element of the menu item changes, along with a sound to indicate that I am over a curent menu item. And as I move my mouse away, it changes back, and possibly an exit mouse sound. Maybe on the screen somewhere is a brazier alit with a magical tome open right beside it, OR, maybe the tome is the menu! I hear the menu music as mellow, not obtrusive or piercing. On a menu item select, a confirmation sound bellows to indicate the players selection. The Esc key will always return me to the previous screens or desktop. The menu screen must feel...dark, like a really important ritual is about to happen and thus the music should build up. 
1 Player Game - > Customize Character(s) 2 Player Game - > Customize Character(s) Exit - > Back to Windows Notes: So the first thing I pick up here are a couple things: First and foremost, my artistic abilities suck crap, so I may have to hire an artist (now that i've said that, lets get techy) graphical objects will be positioned within a scene on each screen / window. Menu items will be represented grapically, possibly animated, and have sound / animation effects triggered by user input or a time line. I have an animated scene involving a brazier or fire on a stick IF I was to move this game to the xbox, I'd have to track which menu item is currently selected (unless I do a mouse pointer type thing.) WindowObject has a scene A Scene has many GameObjects GameObject has a position graphic or animation MenuObject is a GameObject which has a mouse in, mouse out, and click event which either does something graphically (animation), does something with sound, or moves to another screen.  Character Customisation Screen With either the 1 or 2 player option selected, both selections will come to this screen; a wizard requires a name, powers, and vestements of course! Player one will configure his character first and then player two. I considered a split screen for PC but to have two people fighting over a keyboard would probably suck. For XBox, a split screen could work; maybe when I get into the networking portion (phase 2 blog?) of this game I will remove the 2 player option for PC and provide only multiplayer and I will leave 2 player for xbox...hmm... Anyhow...I picture the creation process as follows: Name: (textbox / keyboard entry) - for xbox, this would have to be different. Robe Color: (color box, or something) Stats: Speed, Oomph, and Health. (as sliders) 1 as minimum and 10 as maximum. Ok, Back, and Cancel buttons / options. Each stat has a benefit which are listed below. The idea is the player decides if he wants his wizard to run fast, be a tank and ... hit with a purse.Regardless, the player will have a pool of 12 points to use. Ideally, A balanced wizard will have 5 in each attribute. Spells? The only spell of choice is a ball of fire which comes without question. The music and screen should still feel like a ritual. The Character Speed Basically, how fast your character moves and casts. Oomph (Best Monster Truck Voice): PURE POWAH!!! The damage output of your fireball. Health How much damage you can take. Notes: I realise the game dynamics may sound uninteresting at the moment; but I think after a couple releases, we could have some other grand ideas such as: saved profiles, gold to upgrade arsenal of spells, talents, etc...but for now...a vanilla fireball thrower mage will suffice for this experiment. OK. So... a MenuObject  may need to be loosely coupled to allow future items such as networking? may be a button? a CharacterObject has a name speed oomph health and a funky robe color. cap on the three stats (1-10) an arsenal of 1 spell (possibly could expand this) The Loading Screen As is. The Battleground Screen For now, I'm keeping the screen as max resolution for the PC. The screen isn't going to move or even be a split screen. I'm not aiming high here because I want to see what level of change is involved when new features / concepts are added to game content. I'm interested to find out if we could apply techniques such as MVC or MVVM to this type of development or is it too tightly coupled? 
This reminds me when when my best friend and I were brainstorming our game idea (this is going back a while...1994, 6?) and he cringed at the thought of bringing business technology into games, especially when I suggested a database to store character information and COM / DCOM as the medium, but it seems I wasn't far off (reflecting); just like his implementation of a xml "config file" for dynamic direct-x menus back before .net in 1999...anyhow...i digress... The Battle One screen, two characters lobing balls of fire at each other...It doesn't get better than that. Every so often a scroll appears...and the fireballs bounce off walls, or the wizard has rapid fire, or even scrolls of healing! The scroll options are endless. Two bars at the top, each the color of the wizard (with their name beside the bar) indicate how much health they have. Possibly the appearance of the scrolls means the battle is taking too long? I'm thinking 1 player controls: up, down, left, right and space to fire the button. Or even possibly, mouse click and shift - mouse button to fire a spell in the direction they are facing. Two player controls: a, s, d, f and space AND arrows (up, down, left, right) and Del key or Crtl. The game ends when a player has 0 health and a dialog box appears asking for a rematch / reconfigure / exit. Health goes down when a fireball (friendly or not), connects with a wizard. When a wizard connects with a scroll, a countdown clock / icon appears near the health bar and the wizard begins to glow. For the most part, a wizard can have only scroll 1 effect on him at a time. Notes: Ok, there's alot to cover here. a CharacterObject is a GameObject it travels at a set velocity it travels in a direction it has sounds (walking, running, casting, impact, dying, laughing, whistling, other?) it has animations (walking, running, casting, impact, dying, laughing, idle, other?) it has a lifespan (determined by health) it is alive or dead it has a position a ScrollObject is a GameObject it carries a transferance of points "damage" (or healing, bad scroll effect?) (determinde by caster) it carries a transferance of "other" it is stationary it has a sound on impact it has a stationary animation it has an impact animation / or transfers an impact animation it has a fade animation? it has a lifespan (determined by game) it is alive or dead it has a position a WallObject is a GameObject it has a sound on fireball impact? it is a still image / stationary it has an impact animation / or transfers an impact animation it is dead it has a position A FireBall is a GameObject it carries a transferance of poinst "damage" (or healing, bad scroll effect?) (determinde by caster) it travels at a set velocity it travels in a direction it has a sound it has a travel animation it has an impact animation / or transfers an impact animation it has a fade animation? it has a lifespan (determined by caster) it is alive or dead it has a position As I look at this, I can see some common attributes in each object that I can carry up to the GameObject. I think I'm going to end the documentation here, it's taken me a bit of time to type this all out, tomorrow. I'll load up my IDE and my paint studio to get some good old fashioned cowboy hacking going!   D.
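    Reading the notes above, the object model practically writes itself. Here is a small C#/XNA-style sketch of how the common attributes identified in the notes (position, velocity, lifespan, alive/dead) could be pulled up into a GameObject base class, with CharacterObject and FireBall specializing it. The class and property names are my own illustration, not code from the author's project:

    ```csharp
    using Microsoft.Xna.Framework; // Vector2, Color, GameTime; any 2D vector type would do

    // Common attributes shared by everything in a scene, as listed in the notes.
    public abstract class GameObject
    {
        public Vector2 Position { get; set; }
        public Vector2 Velocity { get; set; }
        public bool IsAlive { get; set; }

        public virtual void Update(GameTime gameTime)
        {
            // Default behavior: drift along the current velocity.
            Position += Velocity * (float)gameTime.ElapsedGameTime.TotalSeconds;
        }
    }

    // A wizard: name, robe color, and the three stats capped at 1-10.
    public class CharacterObject : GameObject
    {
        public string Name { get; set; }
        public Color RobeColor { get; set; }
        public int Speed { get; set; }   // movement / cast speed
        public int Oomph { get; set; }   // fireball damage output
        public int Health { get; set; }  // damage the wizard can take
    }

    // A fireball: carries damage from its caster and dies when its lifespan runs out.
    public class FireBall : GameObject
    {
        public CharacterObject Caster { get; set; }
        public int Damage { get; set; }
        public float LifespanSeconds { get; set; }

        public override void Update(GameTime gameTime)
        {
            base.Update(gameTime);
            LifespanSeconds -= (float)gameTime.ElapsedGameTime.TotalSeconds;
            if (LifespanSeconds <= 0) IsAlive = false;
        }
    }
    ```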

  • Video Recording Not Working in ICS

    - by Nirav Ranpara
    I have implement code Record video in Android Phone . This code is working in 2.2 , 2.3 . not in ICS But when I checked in ICS code is not working ? here I posted code and xml file. videorecord.java import java.io.File; import java.io.IOException; import android.app.Activity; import android.app.AlertDialog; import android.content.Context; import android.content.DialogInterface; import android.content.Intent; import android.content.SharedPreferences; import android.hardware.Camera; import android.media.CamcorderProfile; import android.media.MediaRecorder; import android.os.Bundle; import android.os.CountDownTimer; import android.os.Environment; import android.util.Log; import android.view.Display; import android.view.KeyEvent; import android.view.SurfaceHolder; import android.view.SurfaceView; import android.view.View; import android.widget.EditText; import android.widget.FrameLayout; import android.widget.ImageView; import android.widget.LinearLayout; import android.widget.TextView; import android.widget.Toast; public class videorecord extends Activity{ SharedPreferences.Editor pre; String filename; CountDownTimer t; private Camera myCamera; private MyCameraSurfaceView myCameraSurfaceView; private MediaRecorder mediaRecorder; Integer cnt=0; LinearLayout myButton; TextView myButton1; SurfaceHolder surfaceHolder; boolean recording; private TextView txtcount; private ImageView btnplay; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); recording = false; setContentView(R.layout.videorecord); init(); myCamera = getCameraInstance(); if(myCamera == null){ } myCameraSurfaceView = new MyCameraSurfaceView(this, myCamera); FrameLayout myCameraPreview = (FrameLayout)findViewById(R.id.videoview); Display display = getWindowManager().getDefaultDisplay(); int width = display.getWidth(); int height = display.getHeight(); myCameraSurfaceView.setLayoutParams(new LinearLayout.LayoutParams(width, height-60)); myCameraPreview.addView(myCameraSurfaceView); myButton = (LinearLayout)findViewById(R.id.mybutton); btnplay.setOnClickListener(myButtonOnClickListener); } private void init() { txtcount = (TextView) findViewById(R.id.txtcounter); //myButton1 = (TextView) findViewById(R.id.mybutton1); btnplay = (ImageView)findViewById(R.id.btnplay); t = new CountDownTimer( Long.MAX_VALUE , 1000) { @Override public void onTick(long millisUntilFinished) { cnt++; String time = new Integer(cnt).toString(); long millis = cnt; int seconds = (int) (millis / 60); int minutes = seconds / 60; seconds = seconds % 60; txtcount.setText(String.format("%d:%02d:%02d", minutes, seconds,millis)); } @Override public void onFinish() { } }; } @Override public boolean onKeyDown(int keyCode, KeyEvent event) { if ((keyCode == KeyEvent.KEYCODE_BACK)) { if(recording) { new AlertDialog.Builder(videorecord.this).setTitle("Do you want to save Video ?") .setPositiveButton("OK", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int which) { filename(); //finish(); } }).setNegativeButton("Cancle", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int which) { // TODO Auto-generated method stub } }).show(); } else { if ((keyCode == KeyEvent.KEYCODE_BACK)) { //Intent homeIntent= new Intent(Intent.ACTION_MAIN); //homeIntent.addCategory(Intent.CATEGORY_HOME); //homeIntent.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP); //startActivity(homeIntent); //this.finishActivity(1); finish(); } //moveTaskToBack(true); // finish(); return 
super.onKeyDown(keyCode, event); } } else { // Toast.makeText(getApplicationContext(), "asd", Toast.LENGTH_LONG).show(); android.os.Process.killProcess(android.os.Process.myPid()) ; } return super.onKeyDown(keyCode, event); } ImageView.OnClickListener myButtonOnClickListener = new ImageView.OnClickListener(){ public void onClick(View v) { if(recording){ Log.e("Record error", "error in recording ."); mediaRecorder.stop(); t.cancel(); filename(); releaseMediaRecorder(); }else{ releaseCamera(); Log.e("Record Stop error", "error in recording ."); // if(!prepareMediaRecorder()){ prepareMediaRecorder(); finish(); } mediaRecorder.start(); recording = true; // myButton1.setText("STOP Recording"); // btnplay.setImageResource(android.R.drawable.ic_media_pause); btnplay.setImageResource(R.drawable.stoprec); t.start(); } }}; private Camera getCameraInstance(){ Camera c = null; try { c = Camera.open(); } catch (Exception e){ } return c; } private void filename() { AlertDialog.Builder alert = new AlertDialog.Builder(this); alert.setTitle("Save Video"); alert.setMessage("Enter File Name"); final EditText input = new EditText(this); alert.setView(input); alert.setPositiveButton("Ok", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int whichButton) { if(input.getText().length()>=1) { filename = input.getText().toString(); File sdcard = new File(Environment.getExternalStorageDirectory() + "/VideoRecord"); File from = new File(sdcard,"null.mp4"); File to = new File(sdcard,filename+".mp4"); from.renameTo(to); SharedPreferences sp = videorecord.this.getSharedPreferences("data", MODE_WORLD_WRITEABLE); pre = sp.edit(); pre.clear(); pre.commit(); pre.putString("lastvideo", filename+".mp4"); pre.commit(); //btnplay.setImageResource(android.R.drawable.ic_media_play); btnplay.setImageResource(R.drawable.startrec); // Intent intent = new Intent(videorecord.this,StopVidoWatch_Activity.class); // startActivity(intent); Intent myIntent = new Intent(getApplicationContext(), StopVidoWatch_Activity.class).setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP); startActivity(myIntent); } else { filename(); } } }); alert.setNegativeButton("Cancel", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int whichButton) { // Intent intent = new Intent(videorecord.this,StopVidoWatch_Activity.class); // startActivity(intent); File file = new File(Environment.getExternalStorageDirectory() + "/VideoRecord/null.mp4"); //boolean deleted = file.delete(); file.delete(); finish(); } }); alert.show(); } private boolean prepareMediaRecorder(){ myCamera = getCameraInstance(); mediaRecorder = new MediaRecorder(); myCamera.unlock(); mediaRecorder.setCamera(myCamera); mediaRecorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER); mediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA); mediaRecorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH)); File folder = new File(Environment.getExternalStorageDirectory() + "/VideoRecord"); boolean success = false; if (!folder.exists()) { success = folder.mkdir(); } if (!success) { } else { } mediaRecorder.setOutputFile("/sdcard/VideoRecord/"+filename+".mp4"); mediaRecorder.setMaxDuration(60000); mediaRecorder.setMaxFileSize(5000000); Display display = getWindowManager().getDefaultDisplay(); int width = display.getHeight(); int height = display.getWidth(); String s = new String(); s= s.valueOf(width); String s1 = new String(); s1= s1.valueOf(height); // Toast.makeText(videorecord.this, "Width : " + s , 
Toast.LENGTH_LONG).show(); // Toast.makeText(videorecord.this, "Height : " + s1 , Toast.LENGTH_LONG).show(); mediaRecorder.setVideoSize(height, width); mediaRecorder.setPreviewDisplay(myCameraSurfaceView.getHolder().getSurface()); try { mediaRecorder.prepare(); } catch (IllegalStateException e) { releaseMediaRecorder(); return false; } catch (IOException e) { releaseMediaRecorder(); return false; } return true; } @Override protected void onPause() { super.onPause(); releaseMediaRecorder(); releaseCamera(); } private void releaseMediaRecorder() { if (mediaRecorder != null) { mediaRecorder.reset(); mediaRecorder.release(); mediaRecorder = null; myCamera.lock(); } } private void releaseCamera(){ if (myCamera != null){ myCamera.release(); myCamera = null; } } public class MyCameraSurfaceView extends SurfaceView implements SurfaceHolder.Callback{ private SurfaceHolder mHolder; private Camera mCamera; public MyCameraSurfaceView(Context context, Camera camera) { super(context); mCamera = camera; mHolder = getHolder(); mHolder.addCallback(this); mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS); } public void surfaceChanged(SurfaceHolder holder, int format, int weight, int height) { if (mHolder.getSurface() == null){ return; } try { mCamera.stopPreview(); } catch (Exception e){ } try { mCamera.setPreviewDisplay(mHolder); mCamera.startPreview(); } catch (Exception e){ } } public void surfaceCreated(SurfaceHolder holder) { try { mCamera.setPreviewDisplay(holder); mCamera.startPreview(); } catch (IOException e) { } } public void surfaceDestroyed(SurfaceHolder holder) { } } } videorecord.xml <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" > <FrameLayout android:layout_width="fill_parent" android:layout_height="fill_parent" > <FrameLayout android:id="@+id/videoview" android:layout_width="fill_parent" android:layout_height="fill_parent"></FrameLayout> <LinearLayout android:id="@+id/mybutton" android:layout_width="fill_parent" android:layout_marginBottom="0dip" android:layout_height="wrap_content" android:orientation="horizontal" android:layout_weight="0" > <!-- <TextView android:text="START Recording" android:id="@+id/mybutton1" android:layout_height="wrap_content" android:layout_width="wrap_content" style="@style/savestyle" android:layout_weight="1" android:gravity="left" > </TextView> --> <ImageView android:layout_height="wrap_content" android:id="@+id/btnplay" android:padding="5dip" android:background="#A0000000" android:textColor="#ffffffff" android:layout_width="wrap_content" android:src="@drawable/startrec" /> </LinearLayout> <TextView android:text="00:00:00" android:id="@+id/txtcounter" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="right|bottom" android:padding="5dip" android:background="#A0000000" android:textColor="#ffffffff" /> </FrameLayout> <RelativeLayout android:layout_width="fill_parent" android:layout_height="fill_parent" android:background="@color/bgcolor" > <LinearLayout android:layout_above="@+id/mybutton" android:orientation="horizontal" android:layout_width="fill_parent" android:layout_height="fill_parent" > </LinearLayout> </RelativeLayout> </LinearLayout>

  • List of resources for database continuous integration

    - by David Atkinson
    Because there is so little information on database continuous integration out in the wild, I've taken it upon myself to aggregate as much as possible and post the links to this blog. Because it's my area of expertise, this will focus on SQL Server and Red Gate tooling, although I am keen to include any quality articles that discuss the topic in general terms. Please let me know if you find a resource that I haven't listed!
    General database Continuous Integration
    · What is Database Continuous Integration? (David Atkinson)
    · Continuous Integration for SQL Server Databases (Troy Hunt)
    · Installing NAnt to drive database continuous integration (David Atkinson)
    · Continuous Integration Tip #3 - Version your Databases as part of your automated build (Doug Rathbone)
    · How the "migrations" approach makes database continuous integration possible (David Atkinson)
    · Continuous Integration for the Database (Keith Bloom)
    Setting up Continuous Integration with Red Gate tools
    · Continuous integration for databases using Red Gate tools - A technical overview (White Paper, Roger Hart and David Atkinson)
    · Continuous integration for databases using Red Gate SQL tools (Product pages)
    · Database continuous integration step by step (David Atkinson)
    · Database Continuous Integration with Red Gate Tools (video, David Atkinson)
    · Database schema synchronisation with RedGate (Vincent Brouillet)
    · Database continuous integration and deployment with Red Gate tools (David Duffett)
    · Automated database releases with TeamCity and Red Gate (Troy Hunt)
    · How to build a database from source control (David Atkinson)
    · Continuous Integration Automated Database Update Process (Lance Lyons)
    Other
    · Evolutionary Database Design (Martin Fowler)
    · Recipes for Continuous Database Integration: Evolutionary Database Development (book, Pramod J Sadalage)
    · Recipes for Continuous Database Integration (book, Pramod Sadalage)
    · The Red Gate Guide to SQL Server Team-based Development (book, Phil Factor, Grant Fritchey, Alex Kuznetsov, Mladen Prajdic)
    · Using SQL Test Database Unit Testing with TeamCity Continuous Integration (Dave Green)
    · Continuous Database Integration (covers MySQL, Pearson Education)

  • Agile PLM Partners and OCS offer valuable services

    - by Shane Goodwin
    One of the amazing benefits of using Agile is the ability to tap into a broad number of Partners and Oracle Consulting Services to leverage very talented people. Many of these people respond to postings on the Agile PLM Yahoo usergroups (WRAU and IAUG) as well as in the Oracle Support Community. As you are managing your Agile PLM project, either a new implementation or an upgrade, these Partners and OCS can jumpstart your internal resources with information and experience which takes years to develop. They can also offer other capabilities which add to your Agile experience. As an example, GoEngineer recently announced a new offering to host Agile PLM databases for companies who do not want to run their own Agile PLM system. This offering provides our smaller customers with more options on how to implement Agile PLM. The Oracle Consulting Services group has two new offerings, one for Rapid Deployments of new customers and another for Process and Technical Diagnostics of existing customers in order to better leverage the PLM investment. Contact your Oracle Sales representative to learn more. In addition, there are many other partners who are helping our customers get the most from Agile PLM. Some examples are below. In the coming months, the Oracle Partner Network will be working with partners to certify them as specialists in Agile PLM. Some of our other current partners working on Agile PLM are: Kalypso J Squared Domain Systems Sierra Atlantic Minerva

  • CodePlex Daily Summary for Saturday, March 10, 2012

    CodePlex Daily Summary for Saturday, March 10, 2012Popular ReleasesPlayer Framework by Microsoft: Player Framework for Windows 8 Metro (BETA): Player Framework for HTML/JavaScript and XAML/C# Metro Style Applications.WPF Application Framework (WAF): WAF for .NET 4.5 (Experimental): Version: 2.5.0.440 (Experimental): This is an experimental release! It can be used to investigate the new .NET Framework 4.5 features. The ideas shown in this release might come in a future release (after 2.5) of the WPF Application Framework (WAF). More information can be found in this dicussion post. Requirements .NET Framework 4.5 (The package contains a solution file for Visual Studio 11) The unit test projects require Visual Studio 11 Professional Changelog All: Upgrade all proje...SSH.NET Library: 2012.3.9: There are still few outstanding issues I wanted to include in this release but since its been a while and there are few new features already I decided to create a new release now. New Features Add SOCKS4, SOCKS5 and HTTP Proxy support when connecting to remote server. For silverlight only IP address can be used for server address when using proxy. Add dynamic port forwarding support using ForwardedPortDynamic class. Add new ShellStream class to work with SSH Shell. Add supports for mu...Test Case Import Utilities for Visual Studio 2010 and Visual Studio 11 Beta: V1.2 RTM: This release (V1.2 RTM) includes: Support for connecting to Hosted Team Foundation Server Preview. Support for connecting to Team Foundation Server 11 Beta. Fix to issue with read-only attribute being set for LinksMapping-ReportFile which may have led to problems when saving the report file. Fix to issue with “related links” not being set properly in certain conditions. Fix to ensure that tool works fine when the Excel file contained rich text data. Note: Data is still imported in pl...Audio Pitch & Shift: Audio Pitch And Shift 3.5.0: Modules (mod, xm, it, etc..) supportcallisto: callisto 2.0.19: BUG FIX: Autorun.load() function in scripting now has sandboxed path (Thanks Mikey!) BUG FIX: UserObject.Name property now allows full 20 byte string replacements. FEATURE REQUEST: File.* script functions now allow file extensions.EntitiesToDTOs - Entity Framework DTO Generator: EntitiesToDTOs.v2.1: Changelog Fixed template file access issue on Win7. Fix on configuration load when target project was not found and "Use project default namespace" was checked. Minor fix on loading latest configuration at startup. Minor fix in VisualStudioHelper class. DTO's properties accessors are now in one line. Improvements in PropertyHelper to get a cleaner and more performant code. Added Website project type as a not supported project type. Using Error List pane from VS IDE to show Enti...DotNetNuke® Community Edition CMS: 06.01.04: Major Highlights Fixed issue with loading the splash page skin in the login, privacy and terms of use pages Fixed issue when searching for words with special characters in them Fixed redirection issue when the user does not have permissions to access a resource Fixed issue when clearing the cache using the ClearHostCache() function Fixed issue when displaying the site structure in the link to page feature Fixed issue when inline editing the title of modules Fixed issue with ...Mayhem: Mayhem Developer Preview: This is the developer preview of Mayhem. 
Enjoy!Team Foundation Server Process Template Customization Guidance: v1 - For Visual Studio 11: Welcome to the BETA release of the Team Foundation Server Process Template Customization preview. As this is a BETA release and the quality bar for the final Release has not been achieved, we value your candid feedback and recommend that you do not use or deploy these BETA artifacts in a production environment. Quality-Bar Details Documentation has been reviewed by Visual Studio ALM Rangers Documentation has not been through an independent technical review Documentation has not been rev...Magelia WebStore Open-source Ecommerce software: Magelia WebStore 1.2: Medium trust compliant lot of small change for medium trust compliance full refactoring of user management refactoring of Client Refactoring of user management Magelia.WebStore.Client no longer reference Magelia.WebStore.Services.Contract Refactoring page category multi parent category added copy category feature added Refactoring page catalog copy catalog feature added variant management improvement ability to define a default variant for a variable product ability to ord...PDFsharp - A .NET library for processing PDF: PDFsharp and MigraDoc Foundation 1.32: PDFsharp and MigraDoc Foundation 1.32 is a stable version that fixes a few bugs that were found with version 1.31. Version 1.32 includes solutions for Visual Studio 2010 only (but it should be possible to add the project files to existing solutions for VS 2005 or VS 2008). Users of VS 2005 or VS 2008 can still download version 1.31 with the solutions for those versions that allow them to easily try the samples that are included. While it may create smaller PDF files than version 1.30 because...Terminals: Version 2.0 - Release: Changes since version 1.9a:New art works New usability in Organize favorites window Improved usability of imports/exports and scans Large number of fixes Improvements in single instance mode Comparing November beta 4, this corrects: New application icons Doesn't show Logon error codes Fixed command line arguments exception for single instance mode Fixed detaching of tabs improved usability in detached window Fixed option settings for Capture manager Fixed system tray noti...MFCMAPI: March 2012 Release: Build: 15.0.0.1032 Full release notes at SGriffin's blog. If you just want to run the MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64 bit builds will only work on a machine with Outlook 2010 64 bit installed. All other machines should use the 32 bit builds, regardless of the operating system. Facebook BadgeTortoiseHg: TortoiseHg 2.3.1: bugfix releaseSimple Injector: Simple Injector v1.4.1: This release adds two small improvements to the SimpleInjector.Extensions.dll. No changes have been made to the core library. New features and improvements in this release for the SimpleInjector.Extensions.dll The RegisterManyForOpenGeneric extension methods now accept non-generic decorator, as long as they implement the given open generic service type. GetTypesToRegister methods added to the OpenGenericBatchRegistrationExtensions class which allows to customize the behavior. Note that the...CommonLibrary: Code: CodeVidCoder: 1.3.1: Updated HandBrake core to 0.9.6 release (svn 4472). Removed erroneous "None" container choice. Change some logic and help text to stop assuming you have to pick the VIDEO_TS folder for a DVD scan. 
This should make previewing DVD titles on the Queue Multiple Titles window possible when you've picked the root DVD directory.Google Books Downloader for Windows: Google Books Downloader: Google Books Downloader 1.8ExtAspNet: ExtAspNet v3.1.0: ExtAspNet - ?? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ?????????? ExtAspNet ????? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ??????????。 ExtAspNet ??????? JavaScript,?? CSS,?? UpdatePanel,?? ViewState,?? WebServices ???????。 ??????: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+ ????:Apache License 2.0 (Apache) ??:http://extasp.net/ ??:http://bbs.extasp.net/ ??:http://extaspnet.codeplex.com/ ??:http://sanshi.cnblogs.com/ ????: +2012-03-04 v3.1.0 -??Hidden???????(〓?〓)。 -?PageManager??...New ProjectsAres Backup: Ares Backup is a Backup software which can save bytediffs and provides several storage plugins.BackItUp: Backup-Tool für Visual Studio Projektebinbin domain: binbin domain Blexus Service Plattform: Some cool stuff about Wcf Services. - can communicate files - can communicate xaml objects (generate dynamically Gui) CardPlay - a Solitaire Framework for .Net: CardPlay is a C# framework for developing Solitaire card games. The solution includes a sample WPF client along with over 100 games.Cloud Files Upload: Windows application to script cloud file uploads.Code First API Library, Scaffolding & Guidance for Coded UI Tests: Code first Coded UI Tests for web apps. Library, Scaffolding and Guidance.CPEBook by FMUG & TPAY: CPEBook by MUG & TPAY Projet dot NET CPEBookDot Net Application String Resources Viewer: Dot Net Application String Resources ViewerFilter for SharePoint Web Settings Page: This solution show a simple way to integrate a filter box by using jquery, a global farm feature and a simple delegate control for AdditionalPageHead.Google Books Downloader for Windows: Save Google books in PDF, JPEG or PNG format.GUIToolkit: C++ Windowless GUI,DirectUIHarvest Sports: Harvest SportsInfoPath Analyzer: InfoPath Analyzer makes InfoPath form development and troubleshooting much easier. You're easy to find the relationship between controls and data fields, search data fields or controls by name, edit InfoPath inner html directly.Kinectsignlanguage project: This project will help kinect be used for sign language to speech so that sign language people can be understood while talking to important people. LotteryVote: ????manager123: Trying out CodePlexMcRegister: McRegister is an asp.net mvc 3 razor website that enables you to register users on your minecraft server it works in conjunction with a minecraft mod called EasyAuth.MetroTipi: HelloTipi Sous l'interface Metro de Windows 8Microsoft AppFactory: AppFactory is a powerful data-driven build system for Windows Phone (and soon Windows 8) projects. Its purpose is to help developers start with template projects and turn them into suites of applications.MiniStock: MiniStock is an experimental reference architecture for scalable cloud-based architectures. Implemented in .net.online book shopping: online book shoppingScopa: Carousel Team Scopa per WP7 XNA testtom03092012tfs01: testtom03092012tfs01UsingLib: A library of automatically removed utilities: 1.Changing cursor to hourglass in Windows Forms 2.Logging It's developed in C#WHMCS Library: WHMCS is an all-in-one client management, billing & support solution for online businesses. Handling everything from signup to termination, WHMCS is a powerful business automation tool that puts you firmly in control. 
The WHMCS Library is a .NET wrapper for the WHMCS API. Written entirely in C# but really easy to port over to VB.NET. Coming from a VB.NET background we tried hard to make sure porting would be simple for VB.NET community members. ZoneEditService: Windows service to update ZoneEdit for dynamic dns.

    Read the article

  • Turning a board game idea into a browser based, slow paced gameplay

    - by guillaume31
    Suppose I want to create a strategy game with global mutable state shared between all players (think game board). But unlike a board game, I don't want it to be real-time action and/or turn-based. Instead, players should be able to log in at any time of the day and spend a fixed number of action points per day as they wish. As opposed to a few hours, game sessions would run over a few weeks. This is meant to reward good strategy rather than time spent playing (hardcore players could always play multiple games in parallel instead), and to avoid all kinds of issues related to live play, like disconnections and synchronization. The game should remain addictive yet have a low time-investment footprint for casual players. So far so good, but this still leaves open the question of when to resolve actions and when they should become visible. I want to avoid "ninja play" like doing all your moves just a few minutes before the daily point reset to take other players by surprise, or people spamming F5 to place a well-timed action, which would defeat the whole point of a non-real-time game. I thought of a couple of approaches to that:

    1. Resolve all events in a single scheduled process running once a day. This basically means "blind" gameplay where players can take actions but don't see their results immediately. The thing is, I played a similar browser game a few years ago and didn't like the fact that you feel disconnected and powerless until there's that deus ex machina telling you what really happened during all that time. You see the world evolve in large increments of one day, which often doesn't seem like seeing it evolve at all.
    2. For actions that have a big impact on the game or on other players (attacks, big achievements), make them visible to everyone immediately but delay their effect by something like 24 hours. Opposing players could be notified when such an event happens, so that they can react to it.

    Do you have any other ideas about how I could go about solving this? Are there any known approaches in similar existing games?
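    A minimal sketch of the second approach (delayed effects), assuming a relational action queue with invented table and column names: actions become visible the moment they are submitted, but a periodic job only applies those older than 24 hours.

        import sqlite3
        from datetime import datetime, timedelta

        DELAY = timedelta(hours=24)

        def resolve_due_actions(conn, now=None):
            """Apply every queued action submitted more than DELAY ago.

            Assumes a hypothetical table:
              actions(id, player_id, kind, payload, submitted_at, resolved)
            """
            now = now or datetime.utcnow()
            cutoff = (now - DELAY).isoformat()
            rows = conn.execute(
                "SELECT id, player_id, kind, payload FROM actions "
                "WHERE resolved = 0 AND submitted_at <= ? "
                "ORDER BY submitted_at",
                (cutoff,),
            ).fetchall()
            for action_id, player_id, kind, payload in rows:
                apply_to_game_state(conn, player_id, kind, payload)  # game-specific rules
                conn.execute("UPDATE actions SET resolved = 1 WHERE id = ?",
                             (action_id,))
            conn.commit()

        def apply_to_game_state(conn, player_id, kind, payload):
            # Placeholder for attack resolution, achievement checks, notifications, ...
            pass

    Run from cron every few minutes, something like this avoids the single daily "deus ex machina" moment while still giving opponents a fixed reaction window.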

    Read the article

  • Remove a Digital Camera’s IR Filter for IR Photography on the Cheap

    - by Jason Fitzpatrick
    Whether you have a DSLR or a point-and-shoot, this simple hack allows you to shoot awesome IR photographs without the expense of a high-quality IR filter (or the accompanying loss of light that comes with using it). How does it work? You’ll need to take apart your camera and remove a single fragile layer of IR-blocking glass from the CCD inside the camera body. After doing so, you’ll have a camera that sees infrared light by default, no special add-on filters necessary. Because it sees the IR light without the filters, you’ll also skip the light loss that comes with an add-on IR filter. The downside? You’re altering the camera in a permanent and warranty-voiding way. This is most definitely not a hack for your brand new $2,000 DSLR, but it is a really fun hack to try out on an old point-and-shoot camera or your circa-2004 depreciated DSLR. Hit up the link below to see the process performed on an old Canon point-and-shoot – we’d strongly recommend searching for a breakdown guide for your specific camera model before attempting the trick on your own gear. Are You Brave Enough to IR-ize Your Camera [DIY Photography]

    Read the article

  • Best Practices vs Reality

    - by RonHill
    On a scale depicting how closely best practices are followed, with "always" on one end and "never" on the other, my current company falls uncomfortably close to the latter. Just a couple of trivial examples:

    - We have no code review process.
    - There is very little documentation despite a very large code base (and some of it is blatantly incorrect/misleading).
    - Untested/buggy/uncompilable code is frequently checked in to source control.
    - It is comically complicated to create a debuggable build for some of our components because of their underlying architecture.
    - Unhandled exceptions are not uncommon in our releases.
    - Empty catch { } blocks are everywhere.

    Now, with the understanding that it's neither practical nor realistic to follow ALL best practices ALL the time, my question is this: how closely have commonly accepted best practices been followed at the companies you've worked for? I'm kind of a noob--this is only the second company I've worked for--so I'm not sure if I'm just more of an anal-retentive coder or if I've just ended up at mediocre companies. My guess (hope?) is the latter, but a coworker with way more experience than me says every company he's ever worked for is like this. Given the obvious benefits of following most best practices most of the time, I find it hard to believe it's like this everywhere. Am I wrong?

    Read the article

  • BAD ARCHIVE MIRROR using PXE BOOT method

    - by omkar
    I'm trying to automatically install Ubuntu on a client PC using the PXE boot method (I'm following the steps given in this link: installation using PXE BOOT). My objectives are below:

    1. The server will have a kickstart config file which contains the parameters for the OS installation and the files which are required for the installation.
    2. The client will have to detect this configuration along with the setup files and complete the installation without any input from the user.

    On my server I have installed dhcp3-server, Apache2 and TFTP to help with the installation. I have nearly achieved my first objective: I am able to boot my client using the files stored on the server, but during the installation stage it asks me to "CHOOSE A MIRROR of UBUNTU ARCHIVE". I gave the server's IP address and the path on the server where the files are located, but it still gives me the error "BAD ARCHIVE MIRROR". So, instead of downloading all the files from the internet and storing them on my disk, is it possible to use the files which come with the Ubuntu CD, and in what format should I store these files on the disk (should I zip them)? Secondly, I am also generating the ks.cfg which I want to give to the client for automatic installation of the OS, so how should the configuration file be passed to the installation process?
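    One way this error is commonly avoided is to serve the contents of the install media over HTTP and hand the installer that location as its "mirror". Note that only the alternate/server CD images carry the dists/ and pool/ archive layout the installer checks for; a desktop live CD does not, in which case you would need to build a real partial mirror (e.g. with apt-mirror or debmirror). A rough sketch, with purely illustrative paths and addresses:

        # On the install server: publish the CD contents through Apache
        sudo mkdir -p /var/www/ubuntu
        sudo mount -o loop /srv/isos/ubuntu-alternate-i386.iso /var/www/ubuntu

        # Sanity check - the installer expects to find dists/ and pool/ here
        ls /var/www/ubuntu/dists /var/www/ubuntu/pool

        # At the "choose a mirror" prompt (or in ks.cfg) point at that share:
        #   hostname:  192.168.1.10
        #   directory: /ubuntu
        # or, in kickstart syntax:
        #   url --url http://192.168.1.10/ubuntu

    The ks.cfg itself is usually fetched over the same web server by adding something like ks=http://192.168.1.10/ks.cfg to the kernel append line in pxelinux.cfg/default – again, the names and addresses here are only examples.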

    Read the article

  • Trouble using Upstart to launch Redis as redis user

    - by Chris
    I'm trying to launch redis-server as a user (called redis) via Upstart. My /etc/init/redis-server.conf looks like this:

        description "redis server"
        start on runlevel [23]
        stop on shutdown
        exec sudo -u redis /usr/local/bin/redis-server /var/lib/redis/redis.conf

    Looks good, right? I start redis-server using:

        $ start redis-server
        redis-server start/running, process 16808
        $ redis-cli
        Could not connect to Redis at 127.0.0.1:6379: Connection refused
        $ ps ax | grep 168
        16810 tty1     R+     0:00 ps ax
        16811 tty1     S+     0:00 grep 168

    So redis-server definitely isn't running. Let's try executing the Upstart command by hand, shall we?

        $ exec sudo -u redis /usr/local/bin/redis-server /var/lib/redis/redis.conf
        [16852] 19 Jun 10:37:21 # Can't chdir to './': Permission denied
        Connection to 10.19.2.94 closed.

    And then I get logged off. I'm at a loss. Any ideas?
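    The chdir error is a strong hint: redis.conf normally contains "dir ./", so the daemon tries to change into its working directory, which under this job is somewhere the redis user cannot enter. A sketch of a revised /etc/init/redis-server.conf – the data directory is an assumption, use whatever directory the redis user actually owns:

        description "redis server"

        start on runlevel [23]
        stop on shutdown

        script
            # give redis a working directory it owns, so "dir ./" resolves
            # somewhere writable (or set "dir /var/lib/redis" in redis.conf)
            cd /var/lib/redis
            exec sudo -u redis /usr/local/bin/redis-server /var/lib/redis/redis.conf
        end script

    It is also worth checking that /var/lib/redis is chowned to the redis user and that redis.conf keeps daemonize set to no, since Upstart expects to supervise a foreground process.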

    Read the article

  • Confusion of the "stack" in Assembly-level programming

    - by Bigyellow Bastion
    What is the "stack" exactly? I've read articles and tried to make sense of it through my understanding, experience, and educated guesses about programming and computers, but I'm a bit perplexed here. Is the "stack" a region in RAM, or is it some other space I'm not aware of? Does the processor push values from its registers onto the stack in RAM, or do I have that wrong? And does the processor move values from RAM into registers in order to "process" them, for example to do a compare, arithmetic, etc.? What would really help is some visual or verbal description (or both) of how the idea of a "stack" is actually implemented here. Is "the stack" the same thing as the "machine stack", meaning it lives in RAM? I'm sorry, I don't want to solicit debate or arguments, but I could really use some help if anyone can straighten things out. TO ADD: I know what a software stack is. I know about LIFO, FIFO, etc. I just want to gain a better understanding of the assembly-level stack: what it is, where it is, how exactly it works, etc. Thanks for reading!
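    In short: yes, the call stack is an ordinary region of RAM; what makes it "the stack" is that the CPU keeps the address of its current top in a register (ESP/RSP on x86) and a handful of instructions adjust that register and memory together. A tiny illustrative fragment (32-bit x86, Intel syntax):

        push eax            ; esp = esp - 4, then store eax at [esp] (the stack grows downward)
        push 42             ; arguments are just values written to that RAM
        call my_function    ; pushes the return address, then jumps

        my_function:
            push ebp            ; save the caller's frame pointer - on the stack, in RAM
            mov  ebp, esp       ; establish this function's frame
            sub  esp, 16        ; "allocate" 16 bytes of locals by moving the pointer
            mov  eax, [ebp+8]   ; read the first argument back out of RAM into a register
            mov  esp, ebp       ; tear the frame down
            pop  ebp            ; restore the caller's frame pointer
            ret                 ; pops the return address into eip

    So "the stack" and the "machine stack" are the same thing here: plain memory, addressed through a register, used in LIFO order by convention and by these instructions.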

    Read the article

  • SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28

    - by pinaldave
    This guest post is submitted by Feodor. Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. In this article Feodor explains the server-client-server process, and concentrates on the mutual waits between client and SQL Server. This is essential in grasping the concept of waits in a ‘global’ application plan.

    Recently I was asked to write a blog post about the wait statistics in SQL Server and since I had been thinking about writing it for quite some time now, here it is. It is a widespread idea that the wait statistics in SQL Server will tell you everything about your performance. Well, almost. Or should I say – barely. The reason for this is that SQL Server is always a part of a bigger system – there are always other players in the game: whether it is a client application, web service, any other kind of data import/export process and so on. In short, the SQL Server surroundings look like this: This means that SQL Server, aside from its internal waits, also depends on external waits and settings. As we can see in the picture above, SQL Server needs to have an interface in order to communicate with the surrounding clients over the network. For this communication, SQL Server uses protocol interfaces. I will not go into detail about which protocols are best, but you can read this article. Also, review the information about the TDS (Tabular data stream). As we all know, our system is only as fast as its slowest component. This means that when we look at our environment as a whole, the SQL Server might be a victim of external pressure, no matter how well we have tuned our database server performance.

    Let’s dive into an example: let’s say that we have a web server, hosting a web application which is using data from our SQL Server, hosted on another server. The network card of the web server for some reason is malfunctioning (think of a hardware failure, driver failure, or just improper setup) and does not send/receive data faster than 10Mbs. On the other end, our SQL Server will not be able to send/receive data at a faster rate either. This means that the application users will notify the support team and will say: “My data is coming very slow.” Now, let’s move on to a bit more exciting example: imagine that there is a similar setup as the example above – one web server and one database server, and the application is not using any stored procedure calls, but instead for every user request the application is sending an 80kb query over the network to the SQL Server. (I really thought this does not happen in real life until I saw it one day.) So, what happens in this case? To make things worse, let’s say that the 80kb query text is submitted from the application to the SQL Server at least 100 times per minute, and as often as 300 times per minute in peak times. Here is what happens: in order for this query to reach the SQL Server, it will have to be broken into a number of network packets (according to the packet size settings) – and will travel over the network.
    On the other side, our SQL Server network card will receive the packets, will pass them to our network layer, the packets will get assembled, and eventually SQL Server will start processing the query – parsing, algebrizing, generating the query execution plan and so on. So far, we have already had a serious network overhead by waiting for the packets to reach our Database Engine. There will certainly be some processing overhead – until the database engine deals with the 80kb query and its 20 subqueries. The waits you see in the DMVs are actually collected from the point the query reaches the SQL Server and the packets are assembled. Let’s say that our query is processed and it finally returns 15000 rows. These rows have a certain size as well, depending on the data types returned. This means that the data will have to be converted to packets (depending on the network packet size settings) and will have to reach the application server. There will also be waits, however, this time you will be able to see a wait type in the DMVs called ASYNC_NETWORK_IO. What this wait type indicates is that the client is not consuming the data fast enough and the network buffers are filling up.

    Recently Pinal Dave posted a blog on Client Statistics. What Client Statistics does is capture the physical flow characteristics of the query between the client (Management Studio, in this case) and the server and back to the client. As you see in the image, there are three categories: Query Profile Statistics, Network Statistics and Time Statistics.

    - Number of server roundtrips – a roundtrip consists of a request sent to the server and a reply from the server to the client. For example, if your query has three select statements, and they are separated by the ‘GO’ command, then there will be three different roundtrips.
    - TDS packets sent from the client – TDS (tabular data stream) is the language which SQL Server speaks, and in order for applications to communicate with SQL Server, they need to pack the requests in TDS packets. TDS packets sent from the client is the number of packets sent from the client; in case the request is large, then it may need more buffers, and eventually might even need more server roundtrips.
    - TDS packets received from server – is the number of TDS packets sent by the server to the client during the query execution.
    - Bytes sent from client – is the volume of the data sent to our SQL Server, measured in bytes; i.e. how big a query we have sent to the SQL Server. This is why it is best to use stored procedures, since the reusable code (which already exists as an object in the SQL Server) will only be called as a name of procedure + parameters, and this will minimize the network pressure.
    - Bytes received from server – is the amount of data the SQL Server has sent to the client, measured in bytes. Depending on the number of rows and the datatypes involved, this number will vary. But still, think about the network load when you request data from SQL Server.
    - Client processing time – is the amount of time spent in milliseconds between the first received response packet and the last received response packet by the client.
    - Wait time on server replies – is the time in milliseconds between the last request packet which left the client and the first response packet which came back from the server to the client.
    - Total execution time – is the sum of client processing time and wait time on server replies (the SQL Server internal processing time).

    Here is an illustration of the client-server communication model which should help you understand the mutual waits in a client-server environment. Keep in mind that a query with a large ‘wait time on server replies’ means the server took a long time to produce the very first row. This is usual on queries that have operators that need the entire sub-query to evaluate before they proceed (for example, sort and top operators). However, a query with a very short ‘wait time on server replies’ means that the query was able to return the first row fast. However, a long ‘client processing time’ does not necessarily imply the client spent a lot of time processing and the server was blocked waiting on the client. It can simply mean that the server continued to return rows from the result and this is how long it took until the very last row was returned.

    The bottom line is that developers and DBAs should work together and think carefully about the resource utilization in the client-server environment. From experience I can say that so far I have seen only cases when the application developers and the database developers are on their own and do not ask questions about the other party’s world. I would recommend using the Client Statistics tool during new development to track the performance of the queries, and also to find a synchronous way of utilizing resources between the client – server – client.

    Here is another example: think about a similar setup as above, but add another server to the game. Let’s say that we keep our media on a separate server, and together with the data from our SQL Server we need to display some images on the webpage requested by our user. No matter how simple or complicated the logic to get the images is, if the images are 500kb each our users will get the page slowly and they will still think that there is something wrong with our data. Anyway, I don’t mean to get carried away too far from SQL Server. Instead, what I would like to say is that DBAs should also be aware of ‘the big picture’. I wrote a blog post a while back on this topic, and if you are interested, you can read it here about the big picture.

    And finally, here are some guidelines for monitoring the network performance and improving it:

    - Run a trace and outline all queries that return more than 1000 rows (in Profiler you can actually filter and sort the captured trace by number of returned rows). This is not a set number; it is more of a guideline. The general thought is that no application user can consume that many rows at once. Ask yourself and your fellow developers: ‘why?’.
    - Monitor your network counters in Perfmon: Network Interface:Output queue length, Redirector:Network errors/sec, TCPv4: Segments retransmitted/sec and so on.
    - Make sure to establish a good friendship with your network administrator (buy them coffee, for example) and get into a conversation about the network settings. Have them explain to you how the network cards are set up – are they standalone, are they ‘teamed’, what are the settings – full duplex and so on.
    - Find some time to read a bit about networking.

    In this short blog post I hope I have turned your attention to ‘the big picture’ and the fact that there are other factors affecting our SQL Server, aside from its internal workings.
    For further reading I would still highly recommend the Wait Stats series on this blog, and I would also recommend having the coffee-break conversation with your network admin as soon as possible. This guest post is written by Feodor Georgiev. Read all the posts in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com)
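    As a quick companion to the article, the cumulative network-side wait Feodor describes can be read straight from the wait-statistics DMV; a rough check (the numbers are cumulative since the last service restart or counter reset):

        -- How much time has this instance spent waiting for clients
        -- to consume result sets (network buffers filling up)?
        SELECT  wait_type,
                waiting_tasks_count,
                wait_time_ms,
                signal_wait_time_ms
        FROM    sys.dm_os_wait_stats
        WHERE   wait_type = N'ASYNC_NETWORK_IO';

    The Client Statistics pane itself is toggled per query window in Management Studio (Query menu > Include Client Statistics) and accumulates the roundtrip, TDS packet and byte counters described above across successive executions.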

    Read the article

  • Token based Authentication for WCF HTTP/REST Services: Authorization

    - by Your DisplayName here!
    In the previous post I showed how token-based authentication can be implemented for WCF HTTP based services. Authentication is the process of finding out who the user is – this includes anonymous users. Then it is up to the service to decide under which circumstances the client has access to the service as a whole or to individual operations. This is called authorization. By default my framework does not allow anonymous users and will deny access right in the service authorization manager. You can however turn anonymous access on – technically that means that instead of denying access, an anonymous principal is placed on Thread.CurrentPrincipal. You can flip that switch in the configuration class that you can pass into the service host/factory:

        var configuration = new WebTokenWebServiceHostConfiguration
        {
            AllowAnonymousAccess = true
        };

    But this is not enough; in addition you also need to decorate the individual operations to allow anonymous access as well, e.g.:

        [AllowAnonymousAccess]
        public string GetInfo()
        {
            ...
        }

    Inside these operations you might have an authenticated or an anonymous principal on Thread.CurrentPrincipal, and it is up to your code to decide what to do. Side note: Being a security guy, I like this opt-in approach to anonymous access much better than all those opt-out approaches out there (like the Authorize attribute – or this.).

    Claims-based Authorization

    Since there is a ClaimsPrincipal available, you can use the standard WIF claims authorization manager infrastructure – either declaratively via ClaimsPrincipalPermission or programmatically (see also here).

        [ClaimsPrincipalPermission(SecurityAction.Demand,
            Resource = "Claims",
            Operation = "View")]
        public ViewClaims GetClientIdentity()
        {
            return new ServiceLogic().GetClaims();
        }

    In addition you can also turn off per-request authorization (see here for background) via the config and just use the “domain specific” instrumentation. While the code is not 100% done, you can download the current solution here. HTH (Wanna learn more about federation, WIF, claims, tokens etc.? Click here.)
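    For the programmatic route, the usual WIF pattern is a custom ClaimsAuthorizationManager registered in the <microsoft.identityModel> config section. A minimal sketch against the pre-.NET 4.5 Microsoft.IdentityModel namespaces – the "Claims"/"View" pair mirrors the attribute above, while the "Auditor" role is made up for illustration:

        using System.Linq;
        using Microsoft.IdentityModel.Claims;

        public class ClaimsViewAuthorizationManager : ClaimsAuthorizationManager
        {
            public override bool CheckAccess(AuthorizationContext context)
            {
                // resource/action as demanded by ClaimsPrincipalPermission
                var resource = context.Resource.First().Value;
                var action = context.Action.First().Value;

                if (resource == "Claims" && action == "View")
                {
                    // only callers carrying a matching role claim get through
                    return context.Principal.Identities.Any(id =>
                        id.Claims.Any(c => c.ClaimType == ClaimTypes.Role &&
                                           c.Value == "Auditor"));
                }

                return false; // deny by default
            }
        }

    Registering it is a one-liner in config (a claimsAuthorizationManager element inside the service section), which keeps the operation code free of authorization plumbing.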

    Read the article

  • hosting database on separate server

    - by Amit Aggarwal
    Hello experts, we have an enterprise web app to which our clients post/upload lots of documents [mainly images and PDF files] via a web interface, an iPhone app, etc. We are also using Imagick to split PDF documents into images. Also, a large number of MySQL SELECT/UPDATE/DELETE queries happen all day. Currently, all of this is happening on the same server, and we are planning to split the setup into three parts: 1) a server only for the database, 2) a server only for documents (document upload, splitting, etc.), and 3) a server for the main PHP web app. Is there any drawback with this kind of structure compared to hosting everything on the same server? Please guide. Thanks, Amit

    Read the article

  • Ubuntu hibernate resume fails: "PM: Resume from disk failed"

    - by Neil
    I just upgraded to Ubuntu 10.04 from 9.10, and it's now hiding the hibernate and suspend options. How do I get them back? The way you do this is to make sure that your swap partition is in /etc/fstab, that swap is enabled, and that it is big enough. Look at /proc/swaps to see if anything is listed. Now I'm getting this error when I boot after suspending:

        init: ureadahead-other main process (705) terminated with status 4

    Does anyone know how to fix this? I'm using Ubuntu with kernel 2.6.32-22-generic.
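    If the swap partition itself checks out, the next thing worth verifying on a 10.04-era install is that the initramfs resume hint actually points at that swap partition; roughly (device names and UUIDs are examples):

        # which swap device is active, and what is its UUID?
        cat /proc/swaps
        sudo blkid | grep -i swap

        # the resume hint baked into the initramfs
        cat /etc/initramfs-tools/conf.d/resume
        #   RESUME=UUID=<uuid-of-the-swap-partition>

        # after correcting the UUID, rebuild the initramfs and reboot
        sudo update-initramfs -u

    The ureadahead-other message, for what it's worth, is widely reported as harmless boot noise and is unlikely to be what breaks resume.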

    Read the article

< Previous Page | 461 462 463 464 465 466 467 468 469 470 471 472  | Next Page >