Search Results

Search found 31269 results on 1251 pages for 'process management'.


  • What is the best way to code the XNA Game Server for an FPS game?

    - by AgentFire
    I'm writing an FPS XNA game. It's going to be multiplayer, so I came up with the following: I'm making two different assemblies - one for the game logic and a second for drawing it and the game-irrelevant stuff (like rocket trails). The connection type is client-server (not peer-to-peer), so every client first connects to the server and then the game begins. I have decided to use the XNA.Framework.Game class for the clients, to run their game in a window (or fullscreen), and the GameComponent/DrawableGameComponent classes to store the game objects and update and draw them on each frame. Next, I want an answer to the question: what should I do on the server side? I have a few options: create my own Game class on the server, which will process all the game logic (only - no graphics); the reason I am not using the standard Game class is that when I call Game.Run() a white window appears and I can't figure out how to get rid of it. Or somehow use the original XNA Game class, which already has the GameComponent collection and the Update event (60 times per second, just what I need). UPDATE: I have more questions. First, which socket mode should I use, TCP or UDP? And how do I actually let the client know that one packet is meant to be processed after another? Second, if I am going to use the GameComponent class for the game objects that are stored and processed on the server, how do I make them drawable on the client? Inherit from them (while they are combined into one assembly)? Something else?

    Read the article

  • cannot get upstart to run user job

    - by dre
    I am trying to get upstart to start a user job during the boot of my machine. I have my conky.conf upstart config file in my $HOME/.init directory. When I run "start conky" I get this error:

        dre@dre-laptop:~$ start conky
        start: Rejected send message, 1 matched rules; type="method_call", sender=":1.76" (uid=1000 pid=2843 comm="start conky ") interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply="0" destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init")

    I (think I) know that this error has to do with the D-Bus system (authentication). I have also read (http://upstart.ubuntu.com/cookbook/#id96) that Ubuntu 12.10 already has the right configuration in the D-Bus config file /etc/dbus-1/system.d/Upstart.conf to allow normal users to use upstart.

        dre@dre-laptop:~/.init$ cat conky.conf
        description "conky, a system monitor applet"
        start on lightdm
        stop on shutdown
        # Automatically restart process if crashed
        respawn
        # Essentially lets upstart know the process will detach itself to the background
        #expect fork
        # Start conky
        exec /usr/bin/conky

    So, who knows what I am doing wrong? Greetings, Andre. (No one? Please, give me your best shot.)
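    Since the rejection comes from D-Bus rather than from the job file itself, a quick sanity check (a hedged suggestion - the exact policy block varies by release) is to confirm that the system D-Bus policy really grants unprivileged users the Start method, and that the running upstart is new enough for user jobs:

        # does the default policy mention the Job interface at all?
        grep -B2 -A2 'com.ubuntu.Upstart0_6.Job' /etc/dbus-1/system.d/Upstart.conf
        # user jobs need a reasonably recent upstart (1.3 or later)
        initctl version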

    Read the article

  • Webcast: What's New in Microsoft Visual Studio 2010, Tuesday, June 22

    Come discover the new features of Microsoft Visual Studio 2010 on Tuesday, June 22, 2010, from 11:00 to 12:00, in a web presentation. Microsoft will walk you through the evolutions of the new Microsoft Visual Studio 2010 release, with an expert placed at your disposal. You will have the opportunity to familiarize yourself with the Test features: IntelliTrace, Lab Management, and more. Microsoft Visual Studio 2010 also invites you to discover Team Foundation Server and collaboration across your development teams, including Parallel, Cloud and SharePoint development.

    Read the article

  • Finding Stuff in SQL Server Database DDL

    You'd have thought that nothing would be easier than using SQL Server Management Studio (SSMS) for searching through the DDL for both the names and definitions of the structural metadata of your databases, for the occurrence of a particular string of letters. Not so easy, it turns out, though Phil Factor is able to come up with various methods for various purposes.
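    As a taste of the problem, the quickest programmatic route (a minimal sketch - server, database and search string are illustrative, and it only covers module definitions, not all object names) is to query the catalog views from the command line:

        # search every module definition in MyDb for the string 'needle'
        sqlcmd -S myserver -d MyDb -Q "SELECT OBJECT_SCHEMA_NAME(object_id), OBJECT_NAME(object_id) FROM sys.sql_modules WHERE definition LIKE '%needle%'"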

    Read the article

  • Why would someone want to take over control of my domain name?

    - by mike jones
    I was approached by a person wanting to help me set up a website. In order to do this he has requested that I allow him to transfer my domain name to his account, for easier management. I would retain the right of usage and he would pay the bill for maintaining the name. This sounds fishy, but I can't figure out what he hopes to gain if this is a scam. Is this a common practice among 'Administrative Contacts'?

    Read the article

  • 10gR2 Transportable Tablespaces Certified for EBS 11i

    - by Steven Chan
    Database migration across platforms of different "endian" (byte ordering) formats using the Cross Platform Transportable Tablespaces (XTTS) process is now certified for Oracle E-Business Suite Release 11i (11.5.10.2) with Oracle Database 10g Release 2. This process is sometimes also referred to as transportable tablespaces (TTS).
    What is the Cross-Platform Transportable Tablespace Feature?
    The Cross-Platform Transportable Tablespace feature allows users to move a user tablespace across Oracle databases. It's an efficient way to move bulk data between databases. If the source platform and the target platform are of different endianness, then an additional conversion step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.
    Moving data using transportable tablespaces can be much faster than performing either an export/import or unload/load of the same data. This is because transporting a tablespace only requires the copying of datafiles from the source to the destination and then integrating the tablespace structural information. You can also use transportable tablespaces to move both table and index data, thereby avoiding the index rebuilds you would have to perform when importing or loading table data.
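    For a sense of what the conversion step looks like in practice, here is a minimal sketch of a cross-endian transport (tablespace name, platform string, directory object and paths are illustrative; the certified EBS 11i procedure in the note involves considerably more steps):

        # 1. make the tablespace read-only, then export its metadata
        sqlplus / as sysdba <<EOF
        ALTER TABLESPACE users READ ONLY;
        EOF
        expdp system DIRECTORY=stage_dir DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=users
        # 2. convert the datafiles to the target endian format with RMAN
        rman target / <<EOF
        CONVERT TABLESPACE users TO PLATFORM 'Linux x86 64-bit' FORMAT '/stage/%U';
        EOF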

    Read the article

  • BizTalk 2009 - SQL Server Job Configuration

    - by StuartBrierley
    Following the installation of BizTalk Server 2009 on my development laptop, I used the BizTalk Server Best Practices Analyser, which highlighted the fact that two of the SQL Server Agent jobs that BizTalk relies on were not running successfully. Upon investigation it turned out that these jobs needed to be configured before they would run successfully. To configure these jobs, open SQL Server Management Studio, expand SQL Server Agent > Jobs and double click on the appropriate job. Select Steps and then edit the appropriate entries.
    Backup BizTalk Server (BizTalkMgmtDb)
    This job is comprised of three steps: BackupFull, MarkAndBackupLog and ClearBackupHistory.
    BackupFull:

        exec [dbo].[sp_BackupAllFull_Schedule] 'd' /* Frequency */, 'BTS' /* Name */, '<destination path>' /* location of backup files */

    The frequency here is set/left as daily and the name is left as BTS. You must provide a full destination path for the backup files to be stored. There are also two optional parameters: a flag that controls whether the job forces a full backup if a partial backup fails, and a parameter to control the time of day to run the full backup (the default is midnight UTC). For example:

        exec [dbo].[sp_BackupAllFull_Schedule] 'd' /* Frequency */, 'BTS' /* Name */, '<destination path>' /* location of backup files */, 0, 22

    MarkAndBackupLog:

        exec [dbo].[sp_MarkAll] 'BTS' /* Log mark name */, '<destination path>' /* location of backup files */

    You must provide a destination path for the log backups. Optionally, you can also add an extra parameter that tells the procedure to use local time:

        exec [dbo].[sp_MarkAll] 'BTS' /* Log mark name */, '<destination path>' /* location of backup files */, 1

    ClearBackupHistory:

        exec [dbo].[sp_DeleteBackupHistory] @DaysToKeep=7

    This will clear out the instances in the MarkLog table older than 7 days.
    DTA Purge and Archive (BizTalkDTADb)
    This job is comprised of a single step, Archive and Purge:

        exec dtasp_BackupAndPurgeTrackingDatabase
            0,    --@nLiveHours tinyint,
            1,    --@nLiveDays tinyint = 0,
            30,   --@nHardDeleteDays tinyint = 0,
            null, --@nvcFolder nvarchar(1024) = null,
            null, --@nvcValidatingServer sysname = null,
            0     --@fForceBackup int = 0

    Any completed instance that is older than the live days plus live hours will be deleted, as will any associated data. Any data older than HardDeleteDays will also be deleted - this means that long-running orchestration instances that would otherwise never be purged will at some point have their data cleared down, while allowing the instance to continue, thus preventing the DTA database from growing indefinitely. This should always be greater than the soft purge window. The @nvcFolder parameter is the path for the backup files; if this is null the job will not run, failing with the error:

        DTA Purge and Archive (BizTalkDTADb) Job failed
        (SQL Server Management Studio, Job Activity Monitor, view history)
        The @nvcFolder parameter cannot be null. Archive and Purge step

    How long you choose to keep instances in the Tracking Database is really up to you. For development I have set this up as:

        exec dtasp_BackupAndPurgeTrackingDatabase 0, 1, 30, '<destination path>', null, 0

    On a live server you may want to adjust these figures:

        exec dtasp_BackupAndPurgeTrackingDatabase 0, 15, 20, '<destination path>', null, 0

    Read the article

  • is it worth defragging an iPod

    - by alimack
    Essentially my 5G iPod was cutting tracks off and generally misbehaving, so I did the following:
    1) Ran DiskWarrior - heavy directory fragmentation, which it fixed;
    2) Ran iDefrag - some fragmentation, but it kept halting as it couldn't move files;
    3) Tried to write out the drive with Disk Utility - got a warning from DU, so I gave up before I started;
    4) Completely restored using iTunes;
    5) Reran DiskWarrior - still heavy directory fragmentation;
    6) Reran iDefrag - still fragmentation, although limited to two bands.
    The iPod is much quicker to traverse menus and there is no more track skipping. My question is this: is defragging worth it, or does the heat generated by the process kill the drive and make it a self-defeating exercise? Does anyone have any metrics/figures? Clearly it's a bad idea for solid-state drives like the Nano and Touch.

    Read the article

  • Outstanding SQL Saturday

    - by merrillaldrich
    I had the privilege of attending the SQL Saturday held in Redmond today, and it was really outstanding. Among the many sessions, I especially enjoyed and took a lot of useful information away from Greg Larsen's Dynamic Management Views session; Kalen Delaney's compression session (I am planning to implement 2008 Enterprise compression on my company's data warehouse later this year); Remus Rusanu's session on using Service Broker to process NAP data; and Matt Masson's presentation on high-performance SSIS.

    Read the article

  • Where could one go to suggest new packages for Ubuntu

    - by Luis Alvarado
    Sometimes one finds a package that looks very good and could help in some ways, but the package is not found in the Ubuntu repositories or on any PPA site. Apart from creating a PPA, how and where can one suggest a package to be included in a present or future version of Ubuntu? Some quick examples might be: Komodo (another alternative to Notepad++): http://www.liberiangeek.net/2012/01/komodo-edit-the-best-notepad-alternative-in-ubuntu/ and mkahawa (Cafe Management System): http://sourceforge.net/projects/mkahawa/ - and many others that could help make users use Ubuntu more.

    Read the article

  • unable to run OpenOffice from text mode

    - by dilip
    I have installed OpenOffice on Red Hat 5 in text mode. When I tried to start OpenOffice using the command:

        sudo /usr/lib/openoffice/program/soffice "-accept=socket,host=localhost,port=8100;urp;StarOffice.ServiceManager" -nologo -headless -nofirststartwizard

    it showed an error saying:

        javaldx: Could not find a Java Runtime Environment!

    So I installed a JRE on the system. After that I did not get any error, but OpenOffice still does not start. I also checked the running processes and haven't seen any process related to OpenOffice. Can anyone help me fix this issue?
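    A quick way to verify whether the headless instance actually came up (a hedged sketch; port 8100 is just the value requested in the command above):

        # is any soffice process running?
        ps aux | grep -i '[s]office'
        # is anything listening on the requested socket?
        netstat -tlnp 2>/dev/null | grep 8100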

    Read the article

  • Scalable solution for website polling

    - by Tom Irving
    I'm looking to add push notifications to one of my iOS apps. The app is a client for a website which doesn't offer push notifications. What I've come up with so far: the app sends a message to the home server when transitioning to the background, asking the server to start polling the website for the logged-in user. The home server starts a new process to poll for that user. Polling happens every so many seconds/minutes. When the user returns to the iOS app, the app sends a message to the home server to stop polling, and the home server kills the process polling for that user. Repeat. The problem is that this soon becomes stupid: hundreds of users mean hundreds of different processes. It's just not scalable in the slightest. What I've written so far is in PHP, using cURL to do the polling, and I only started with PHP a few days ago, so maybe I'm missing something obvious that could help me with this. Some advice would be great.
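    To illustrate the core fix - one poller iterating over all active users instead of one process per user - here is a rough shell sketch (not the asker's PHP; the URL and the process_updates handler are hypothetical):

        # single long-running poller: a constant number of processes regardless of user count
        while true; do
            while read -r user; do
                curl -s "https://example.com/api/updates?user=$user" | ./process_updates "$user"
            done < active_users.txt
            sleep 60
        done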

    Read the article

  • Automating and deploying new linux servers

    - by luckytaxi
    I'm in the process of developing a method to automate bringing new virtual machines into my environment. 90% of our machines are virtual, but the process is similar for both physical and VMware-based images. What I do now: I use Cobbler to install the base OS, and the kickstart script has post hooks to modify the yum repo and install puppet and func. Once the servers are running, I manually add them into Nagios and sign the certificate via the puppetmaster. I've since migrated most of the resources to use MySQL as the backend. I wanted to see what others are doing. My goal for 2011 is to have puppet inventory the hardware into MySQL, and then I'll write a Python script to have Nagios grab the info and automatically add the machine for monitoring purposes. It's kind of tedious to have to add each new server into Nagios, puppet's dashboard, munin, etc.
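    The certificate step is the easiest one to script away; a hedged sketch (hostname illustrative, and blanket signing should be limited to trusted provisioning runs):

        # on the puppetmaster: list pending agent certificates, then sign one
        puppet cert list
        puppet cert sign newvm01.example.com
        # or, for bulk runs where every pending host is known good:
        puppet cert sign --all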

    Read the article

  • How to create a virtual network with Azure Connect

    - by Herve Roggero
    If you are trying to establish a virtual network between machines located in disparate networks, you can use VPN, Virtual Network or Azure Connect. If you want to establish a connection between machines located in Windows Azure, you should consider using the Virtual Network service. If you want to establish a connection between local machines and Virtual Machines in Windows Azure, you may be able to use your existing VPN device (assuming you have one), as long as the device is supported by Microsoft. If the VPN device you are using isn't supported, or if you are trying to create a virtual network between machines from disparate networks (such as machines located at another cloud provider), you can use Azure Connect. This blog post explains how Azure Connect can help you create virtual networks between multiple servers in the cloud, various servers in different cloud environments, and on-premise. Note: Azure Connect is currently in Technical Preview.
    About Azure Connect
    Let's do a quick review of Azure Connect. This technology implements an IPSec tunnel from machines to a relay service located in the Microsoft cloud (Azure). So in essence, Azure Connect doesn't provide a point-to-point connection between machines; the network communication is tunneled through the relay service. The relay service in turn offers a mechanism to enforce basic communication rules that you define through Groups, which we will review later. You could network two or more VMs in the Azure cloud (although you should consider using a Virtual Network if you go this route), or servers in the Azure cloud and other machines in the Amazon cloud for example, or even two or more on-premise servers located in different locations for which a direct network connection is not an option. You can place any number of machines in your topology. Azure Connect gives you great flexibility in how you build your virtual network across various environments. So Azure Connect makes sense when you want to:
    - Connect machines located at different cloud providers
    - Connect on-premise machines running in different locations
    - Connect Azure VMs with on-premise machines (if you do not have a VPN device, or if your device is not supported)
    - Connect Azure Roles (Worker Roles, Web Roles) with on-premise servers or with machines at other cloud providers
    The diagram below shows a high-level network topology that involves machines in the Windows Azure cloud, other cloud providers and on-premise. Note that the only required component in this diagram is the Relay itself; the other machines are optional (although your network is useful only if you have two or more machines involved). Relay agents are currently available in three geographic areas: US, Europe and Asia. You can change which region you want to use in the Windows Azure management portal.
    High Level Network Topology With Azure Connect
    Azure Connect Agent
    Azure Connect establishes a virtual network and creates virtual adapters on your machines; these virtual adapters communicate through the Relay using IPSec. This is achieved by installing an agent (the Azure Connect Agent) on all the machines you want in your network topology. However, you do not need to install the agent on Worker Roles and Web Roles; the agent is already installed for you. Any other machine, including Virtual Machines in Windows Azure, needs the agent installed. To install the agent, simply go to your Windows Azure portal (http://windows.azure.com) and click on Networks in the bottom left panel. You will see a list of subscriptions under Connect. If you select a subscription, you will be able to click on the Install Local Endpoint icon on top. Clicking on this icon will begin the download and installation process for the agent.
    Activating Roles for Azure Connect
    As previously mentioned, you do not need to install the Azure Connect Agent on Worker Roles and Web Roles because it is already loaded. However, you do need to activate them if you want the roles to participate in your network topology. To do this, click on the Get Activation Token icon. The activation token must then be copied and placed in the configuration file of your roles. For more information on how to perform this step, visit MSDN at http://msdn.microsoft.com/en-us/library/windowsazure/gg432964.aspx.
    Firewall Rules
    Note that specific firewall rules must exist to allow the agent to communicate through the Relay. You will need to allow TCP 443 and ICMPv6. For additional information, please visit MSDN at http://msdn.microsoft.com/en-us/library/windowsazure/gg433061.aspx.
    CA Certificates
    You can optionally require agents to sign their activation request with the Relay using a trusted certificate issued by a Certificate Authority (CA). Click on Activation Options to learn more.
    Groups
    To create your network topology you must first create a group. A group represents a logical container of endpoints (or machines) that can communicate through the Relay. You can create multiple groups, allowing you to manage network communication differently. For example, you could create a DEVELOPMENT group and a PRODUCTION group. To add an endpoint you must first install an agent that will create a virtual adapter on the machine on which it is installed (as discussed in the previous section). Once you have created a group and installed the agents, the machines will appear in the Windows Azure management portal and you can start assigning machines to groups. The next figure shows that I created a group called LocalGroup and assigned two machines (both on-premise) to that group.
    Groups and Computers in Azure Connect
    As mentioned previously, you can allow these machines to establish a network connection. To do this, you must enable the Interconnected option in the group. The following diagram shows the definition of the group. In this topology I chose to include local machines only, but I could also add worker roles and web roles in the Azure Roles section (you must first activate your roles, as discussed previously). You could also add other groups, allowing you to manage inter-group communication.
    Defining a Group in Azure Connect
    Testing the Connection
    Now that my agents have been installed on my two machines, the group is defined and the Interconnected option is checked, I can test the connection between my machines. The next screenshot shows that I sent a PING request to DEVLAP02 from DEVDSK02. The PING request was successful. Note however that the round-trip time is in the hundreds of milliseconds on average. That is to be expected, because the machines connect through the Relay located in the cloud; going through the Relay introduces an extra hop in the communication chain, so if your systems rely on high performance, you may want to conduct some basic performance tests.
    Sending a PING Request Through The Relay
    Conclusion
    As you can see, creating a network topology between machines using the Azure Connect service is simple. It took me less than five minutes to create the above configuration, including the time it took to install the Azure Connect agents on the two machines. The flexibility of Azure Connect allows you to create a virtual network between disparate environments, as long as your operating systems are supported by the agent. For more information on Azure Connect, visit the MSDN website at http://msdn.microsoft.com/en-us/library/windowsazure/gg432997.aspx.
    About Herve Roggero
    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.
    Special Thanks
    I would like to thank those that helped me figure out how Azure Connect works:
    Marcel Meijer - http://blogs.msmvps.com/marcelmeijer/
    Michael Wood - Http://www.mvwood.com
    Glenn Block - http://www.codebetter.com/glennblock
    Yves Goeleven - http://cloudshaper.wordpress.com/
    Sandrino Di Mattia - http://fabriccontroller.net/
    Mike Martin - http://techmike2kx.wordpress.com

    Read the article

  • Portal and Content - Components, part 3 – Applied Customization Framework (4 of 7)

    - by Stefan Krantz
    Have you ever been challenged with a situation where your work task asks you to implement functionality in the WebCenter Portal, and you browse through the Resource Catalog (Business Dictionary) and find the functionality you need? However, when you get started there are small shortcomings, and you ask yourself:
    - How can I re-use what is out of the box?
    - What code do I need to produce similar functions and include my new requirements?
    - Must I write a new taskflow?
    The answer to the above questions is, in many cases, simply that you can do a taskflow customization to the out-of-the-box taskflows. In this post I will help you understand how to do such a customization. It is best described as a 4-step process; see the image flow below for illustration. Just to clarify a few naming confusions that might occur when going through the above process:
    - Customization Role is a function within JDeveloper that allows you to implement view and flow customizations to existing taskflows.
    - WebCenter Portal - Spaces Taskflow Customization Framework: this technology scope does not refer only to WebCenter Spaces; it also includes WebCenter Portal/Framework.
    - A taskflow customization does not overwrite or replace any code; it just creates an additional tip view of the taskflow in the MDS for the current application (WebCenter Portal or WebCenter Spaces).
    To sum up this simple procedure, I would also like to help you find your way around the main topic for this post series, which focuses primarily on content integration with WebCenter Portal. So where can you find content-related taskflows in the WebCenter libraries? The list below mentions some useful locations for taskflows and each taskflow's page fragments.
    Library Reference - WebCenter Document Library Service View
    Content Presenter
    Path: oracle.webcenter.doclib.view.jsf.taskflows.presenter
    Taskflow: contentPresenter.xml - The Content Presenter taskflow
    Taskflow: contentPresenterWizard.xml - The publishing wizard to select content, select a template and preview, including contribution
    Document Manager
    Path: oracle.webcenter.doclib.view.jsf.taskflows.docManager
    Taskflow: documentManager.xml - The Document Manager taskflow, which includes references to document management features including browsing, download, uploading and viewing
    For more information on taskflow customizations, please see the following documentation: http://docs.oracle.com/cd/E23943_01/webcenter.1111/e10148/jpsdg_taskflows.htm#BACIEGJD

    Read the article

  • asus n550jv audio problem: no sound from notebook's speakers

    - by skywalker
    Ubuntu 13.10. The problem is: the internal speakers don't work. I have no problem when I'm using the headphones. There is no hardware issue, since in Windows 8 everything works perfectly (external subwoofer included). I'm trying to modify /etc/modprobe.d/alsa-base.conf but I can't find the correct model to put into:

        options snd-hda-intel model=

    The file HD-Audio-Models.txt doesn't contain the model for ALC668. Some info:

        :~$ sudo aplay -l
        **** List of PLAYBACK Hardware Devices ****
        card 0: MID [HDA Intel MID], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: MID [HDA Intel MID], device 7: HDMI 1 [HDMI 1]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: MID [HDA Intel MID], device 8: HDMI 2 [HDMI 2]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 1: PCH [HDA Intel PCH], device 0: ALC668 Analog [ALC668 Analog]
          Subdevices: 0/1
          Subdevice #0: subdevice #0

        :~$ sudo lspci -v | grep -A7 -i "audio"
        00:03.0 Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06)
          Subsystem: Intel Corporation Device 2010
          Flags: bus master, fast devsel, latency 0, IRQ 52
          Memory at f7a14000 (64-bit, non-prefetchable) [size=16K]
          Capabilities: [50] Power Management version 2
          Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit-
          Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
          Kernel driver in use: snd_hda_intel
        --
        00:1b.0 Audio device: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller (rev 04)
          Subsystem: ASUSTeK Computer Inc. Device 11cd
          Flags: bus master, fast devsel, latency 0, IRQ 53
          Memory at f7a10000 (64-bit, non-prefetchable) [size=16K]
          Capabilities: [50] Power Management version 2
          Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+
          Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
          Capabilities: [100] Virtual Channel

    PS - more info:

        :~$ amixer -c 0
        Simple mixer control 'IEC958',0
          Capabilities: pswitch pswitch-joined
          Playback channels: Mono
          Mono: Playback [on]
        Simple mixer control 'IEC958',1
          Capabilities: pswitch pswitch-joined
          Playback channels: Mono
          Mono: Playback [on]
        Simple mixer control 'IEC958',2
          Capabilities: pswitch pswitch-joined
          Playback channels: Mono
          Mono: Playback [on]

        :~$ pacmd dump-volumes
        Welcome to PulseAudio! Use "help" for usage information.
        Sink 0: reference = 0: 76% 1: 76%, real = 0: 76% 1: 76%, soft = 0: 100% 1: 100%, current_hw = 0: 76% 1: 76%, save = yes
        Input 8: volume = 0: 100% 1: 100%, reference_ratio = 0: 100% 1: 100%, real_ratio = 0: 100% 1: 100%, soft = 0: 100% 1: 100%, volume_factor = 0: 100% 1: 100%, volume_factor_sink = 0: 100% 1: 100%, save = no
        Source 0: reference = 0: 100% 1: 100%, real = 0: 100% 1: 100%, soft = 0: 100% 1: 100%, current_hw = 0: 100% 1: 100%, save = no
        Source 1: reference = 0: 16% 1: 16%, real = 0: 16% 1: 16%, soft = 0: 100% 1: 100%, current_hw = 0: 16% 1: 16%, save = yes
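    Whatever model string ends up being the right one, alsa-base.conf edits only take effect after the driver reloads; a quick test cycle (hedged - "auto" is just a placeholder value, and ALC668 support may simply need a newer kernel or a patched driver):

        # append a candidate model line (placeholder value shown)
        echo "options snd-hda-intel model=auto" | sudo tee -a /etc/modprobe.d/alsa-base.conf
        # reload the ALSA driver and re-test the speakers
        sudo alsa force-reload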

    Read the article

  • SharePoint 2010 and Windows Server Backup

    - by Enrique Lima
    A couple of months ago, a friend found a bit of information on TechNet that has proven to be quite useful. See, I am of the opinion SharePoint allows for smaller deployments to be made, and with that said, I am talking about SharePoint Foundation 2010 being used for the most part. But truly the point here is not to discuss whether or not a deployment of SharePoint Foundation 2010 or SharePoint Server 2010 is right. The fact is they do take place and happen, and information will reside there. Now, the point of this post is to raise awareness of options available for companies that have implemented it and maybe are a bit "iffy" on how to protect the information being placed in libraries and lists. In many cases I have found SharePoint comes first and business continuity becomes an afterthought. The documentation piece from TechNet states:
    "You can register SharePoint Server 2010 with Windows Server Backup by using the stsadm.exe -o registerwsswriter operation to configure the Volume Shadow Copy Service (VSS) writer for SharePoint Server. Windows Server Backup then includes SharePoint Server 2010 in server-wide backups. When you restore from a Windows Server backup, you can select Microsoft SharePoint Foundation (no matter which version of SharePoint 2010 Products is installed), and all components reported by the VSS writer for SharePoint Server 2010 on that server at the time of the backup will be restored. Windows Server Backup is recommended only for use with single-server deployments."
    Even in the event of single-server deployments you will have options to safeguard your data. The process requires that, after you have executed the stsadm command above, you use Windows Server Backup to do a Full Server Backup. Then, when the restore operation is needed, you will be able to select specifically the section that has the SharePoint technologies backup.
    The restore process:
    Hope you find this to be a helpful post. I have found this to be especially handy in SharePoint deployments that are part of a Team Foundation Server deployment and that are isolated from any other SharePoint farm.
    Credits: Sean McDonough for passing along the information available on TechNet.
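    For reference, the registration itself is a one-liner from an elevated command prompt on the SharePoint server (the path shown is the standard 2010 hive location; adjust if your installation differs):

        cd /d "%CommonProgramFiles%\Microsoft Shared\Web Server Extensions\14\BIN"
        stsadm.exe -o registerwsswriter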

    Read the article

  • In Windows XP, how can I set the default browser from Chrome to IE via command line, without admin privs

    - by Bugmage
    Situation:
    1. Need to set the default browser to IE via cmd (the problem)
    2. Need to do a Citrix login via IE (amounts to loading a URL), because it won't run in Google Chrome
    3. Then set the default browser back to Chrome
    Environment: Windows XP, no admin privs. No admin privs means I can't touch the registry.
    Basic steps I'm doing, in a bat file:
    1. Set default browser to IE
    2. Run a Citrix SSO login via IE (not compatible with Chrome)
    3. Set default browser to Chrome
    4. Kill IE
    5. Live long and prosper
    So I have it all running except "Set default browser to IE". I can set the default browser to Chrome by using portable Chrome's command-line argument --make-default-browser, but I can't undo that process. If I launch IE, it pops up that 'make IE default browser' window, which stops the SSO process. So if I could disable that check via bat file, that would also work for me. Things I've tried that didn't work: shmgrate.exe OCInstallReinstallIE. We are using IE8. Maybe someone can find a Chrome switch that undoes default browser: http://peter.sh/experiments/chromium-command-line-switches/ Thanks for the help, guys.

    Read the article

  • Learn more about SPARC by listening to our newly recorded podcasts

    - by Cinzia Mascanzoni
    Please listen to our newly recorded series of four podcasts focused on SPARC. The topics are: How SPARC T4 Servers Open New Opportunities SPARC Roadmap and SPARC T4 Architecture Highlights SPARC T4 For Installed Base Refresh and Consolidation SPARC T4 – How Does it Stack up Against the Competition? Rob Ludeman, from SPARC Product Management, and Thomas Ressler, WWA&C Alliances Consultant, are your hosts. The intent is to continue to help you understand how to position and sell SPARC/T4 into your customer architecture.Details on how to access these podcasts can be found here.

    Read the article

  • Webcast: What's New in Microsoft Visual Studio 2010, hosted by SOS Developers, Tuesday, June 22

    Comsoft / SOS Developer's invites you to discover the new features of Microsoft Visual Studio 2010 on Tuesday, June 22, 2010, from 11:00 to 12:00, in a web presentation. Microsoft will walk you through the evolutions of the new Microsoft Visual Studio 2010 release, with an expert placed at your disposal. You will have the opportunity to familiarize yourself with the Test features: IntelliTrace, Lab Management, and more. Microsoft Visual Studio 2010 also invites you to discover Team Foundation Server and collaboration across your development teams, including Parallel, Cloud and SharePoint development.

    Read the article

  • Find files containing a string on the whole filesystem

    - by Fabio
    I need to find all the instances of a given string in the whole filesystem, because I don't remember in which configuration files, scripts or other programs I put it, and I need to update that string with a new one. I tried the following command:

        grep -nr 'needle' / --exclude-dir=.svn | mail [email protected] -s 'References on xxx'

    If I run this command on a small directory it gives me the output I need, in the form:

        /path1/:nn:line containing needle
        /path2/:nn:line containing needle

    where /path1 is the full path of the file, nn is the row containing the needle, and the last field is the content of the line. However, when I run the command on the root directory, the grep process hangs after a while. I ran this script about 8 hours ago, and even on a small filesystem (less than 5GB) it doesn't end; if I run top or ps, the process seems to be sleeping:

        root 24909 0.0 0.1 3772 1520 pts/1 S+ Feb10 0:15 grep -nr needle / --exclude-dir=.svn

    Why doesn't it end? Is there any better way to do this? (It's a one-time job, I don't need to execute it more than once.) Thanks.
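    The likely culprit is not the amount of data but pseudo-files: a recursive grep of / will happily open FIFOs and device nodes (for example under /dev and /proc) and block forever waiting for input. A variant that skips them (GNU grep; adjust the excluded directories to taste):

        # -D skip: skip devices, FIFOs and sockets instead of blocking on them
        grep -rn -D skip 'needle' / \
            --exclude-dir=.svn --exclude-dir=proc --exclude-dir=sys --exclude-dir=dev \
            2>/dev/null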

    Read the article

  • Updating physics for animated models

    - by Mathias Hölzl
    For a new game we have to set up a scene with a minimum of 30 bone-animated models (a shooter). The problem is that the update process for the animated models takes too long. This is what I do: each character has ~30 bones, and on every update tick the animation gets calculated and every bone fires an event with the new matrix. The physics receives the event with the new matrix and updates the collision shape for that bone. The time it takes to build the animation isn't that bad (0.2ms for 30 bones, 6ms for 30 models). But the main problem is that the physics engine (Bullet) uses a different matrix for transformation, so it is necessary to convert it.
    Code for matrix conversion (~0.005ms):

        btTransform CLEAR_PHYSICS_API Mat_to_btTransform( Mat mat )
        {
            btMatrix3x3 bulletRotation;
            btVector3 bulletPosition;
            XMFLOAT4X4 matData = mat.GetStorage();
            // copy the rotation matrix (transposed between the two matrix conventions)
            for ( int row = 0; row < 3; ++row )
                for ( int column = 0; column < 3; ++column )
                    bulletRotation[row][column] = matData.m[column][row];
            // translation lives in the fourth row of the XMFLOAT4X4
            for ( int column = 0; column < 3; ++column )
                bulletPosition[column] = matData.m[3][column];
            return btTransform( bulletRotation, bulletPosition );
        }

    The function for updating the transform (physics):

        void CLEAR_PHYSICS_API BulletPhysics::VKinematicMove( Mat mat, ActorId aid )
        {
            if ( btRigidBody * const body = FindActorBody( aid ) )
            {
                btTransform tmp = Mat_to_btTransform( mat );
                body->setWorldTransform( tmp );
            }
        }

    The real problem is the function FindActorBody(id):

        ActorIDToBulletActorMap::const_iterator found = m_actorBodies.find( id );
        if ( found != m_actorBodies.end() )
            return found->second;

    All physics actors are stored in m_actorBodies, and that's why the updating process takes so long. But I have no idea how I could avoid this.
    Friendly greetings, Mathias

    Read the article

  • Inspiration

    - by Oracle Campus Blog
    Once again, I find myself back in Seoul - ASEM Tower, 16th floor, in a mobile room. I'm busy preparing for the interview process that is about to take place for Oracle Korea's 7th GIP (Graduate Intake Program): scheduling the first-round interviews, organizing interview guidelines, educating interviewers on the process and framework, and getting all the logistics ready for the first round. Seoul, or Korea rather, is a fascinating place: highly efficient, with the utmost respect for seniors, and results-orientated. When students came in for an interview, at first it was hard to tell them apart - there seems to be an accepted interview attire that must be worn when attending an interview. Males and females all dress in black suits with white shirts underneath, the males wearing simple, dark-colored ties. During the interview, they all sit very upright, bow when entering the room, place their hands on their laps, and very often hold minimal eye contact. They project their voices to convey confidence, they speak in the formal Korean dialect at all times, and they treat every question, every moment, with extreme clarity and the utmost professionalism. When the interview concludes, they all stand, hands by their sides, bow 90 degrees, and thank all the interviewers for their precious time and the opportunity. As soon as they leave the interview room, I can hear their sighs of relief as they commend each other on their efforts. The more I learn about Korean culture, the more it inspires me: the patriotism, the respect for each other, the values, the appreciation, the motivation, the desire and passion. It truly was an experience for me, even as a recruiter, and I can't help but feel impressed and motivated to live for every moment.
    Philip Yi, Oracle Campus Recruiter

    Read the article
