Search Results

Search found 19365 results on 775 pages for 'machine vision'.

Page 151/775 | < Previous Page | 147 148 149 150 151 152 153 154 155 156 157 158  | Next Page >

  • Add the Vista Style Sidebar Back to Windows 7

    - by Mysticgeek
    If you are moving from Vista to Windows 7, you might miss the Sidebar, which was introduced in Vista. Today we take a look at a couple of options for getting a Sidebar back in Windows 7.
    Copy Files from Vista
    Note: In this example we are using 32-bit versions of Vista and Windows 7. Make sure you are logged in with Administrator credentials. If you have a Vista machine running, you can copy the Windows Sidebar files over to the Windows 7 machine. On the Vista machine navigate to C:\Program Files and copy the Windows Sidebar folder and all of its contents over to a flash drive or network location. On the Windows 7 machine go to C:\Program Files and rename the Windows Sidebar folder to something like Windows Sidebar_old. Now copy the Vista Windows Sidebar folder into C:\Program Files… You will now have both folders…Windows Sidebar and Windows Sidebar_old…in your C:\Program Files folder. Right-click on the desktop and select Gadgets. There you are…the original Vista Sidebar is back and will act as it did in Vista.
    Move Sidebar Gadgets
    Another workaround, if you don’t have a copy of Vista, is to simply move the Desktop Gadgets you want over to the right side of the screen; they will stay there…no dock needed. Type gadgets into the Search box in the Windows Start Menu and click on Desktop Gadgets. Then drag the included Gadgets you want over to the right side of the screen, or click on the link to Get more gadgets online to find more. Once you have them where you want them, they will still be in the same location each time you reboot. This holds true no matter where you place them on your desktop.
    Install Desktop Sidebar
    If you want an enhanced sidebar that includes a lot of different features, and don’t have a copy of Vista, you might want to check out Desktop Sidebar Beta (link below). This is a freeware application that works with Windows XP, Vista, and Windows 7. After installation you can access it from the Start Menu… Here is how it will look after you launch it… It includes several pre-installed panels including a clock, Media Player, Search Bar, Slideshow, Messenger, Outlook inbox, Tasks, Quick Launch, Performance…and a lot more. It is highly customizable and allows you to change skins, add various levels of transparency, and much more. One caveat with Desktop Sidebar is that we didn’t find a way to add Windows Gadgets to it (though there might be a plugin for it that we’re not aware of). But there are so many options, you may not mind. And you can still use the desktop gadgets as you normally would in Windows 7. Believe it or not, some people actually prefer the Vista-style Sidebar and would like it back in Windows 7. With these options you can get the Vista Sidebar back if you have a copy of Vista, place the Gadgets on the desktop, or go the freeware route.
Download Desktop Sidebar (freeware)

    Read the article

  • Make Text and Images Easier to Read with the Windows 7 Magnifier

    - by DigitalGeekery
    Do you have impaired vision or find it difficult to read small print on your computer screen? Today, we’ll take a closer look at how to magnify that hard-to-read content with the Magnifier in Windows 7. Magnifier was available in previous versions of Windows, but the Windows 7 version comes with some notable improvements. There are now three screen modes in Magnifier. Full Screen and Lens mode, however, require Windows Aero to be enabled. If your computer doesn’t support Aero, or if you’re not using an Aero theme, Magnifier will only work in Docked mode.
    Using Magnifier in Windows 7
    You can find the Magnifier by going to Start > All Programs > Accessories > Ease of Access > Magnifier. Alternately, you can type magnifier into the Search box in the Start Menu and hit Enter. On the Magnifier toolbar, choose your View mode by clicking Views and choosing from the available options. Clicking the plus (+) and minus (-) buttons will zoom in or out. You can change the zoom in/out percentage by adjusting the slider bar. You can also enable color inversion and select tracking options. Click OK when finished to save your settings. After a brief period, the Magnifier toolbar will switch to a magnifying glass icon. Simply click the magnifying glass to display the Magnifier toolbar again.
    Docked Mode
    In Docked mode, a portion of the screen is magnified and docked at the top of the screen. The rest of your desktop will remain in its normal state. You can then control which area of the screen is magnified by moving your mouse.
    Full Screen Mode
    This magnifies your entire screen and follows your mouse as you move it around. If you lose track of where you are on the screen, use the Ctrl + Alt + Spacebar shortcut to preview where your mouse pointer is on the screen.
    Lens Mode
    The Lens screen mode is similar to holding a magnifying glass up to your screen. Lens mode magnifies the area around the mouse, and the magnified area moves around the screen with your mouse.
    Shortcut Keys
    Windows key + (+) to zoom in
    Windows key + (-) to zoom out
    Windows key + ESC to exit
    Ctrl + Alt + F – Full screen mode
    Ctrl + Alt + L – Lens mode
    Ctrl + Alt + D – Dock mode
    Ctrl + Alt + R – Resize the lens
    Ctrl + Alt + Spacebar – Preview full screen
    Conclusion
    Windows Magnifier is a nice little tool if you have impaired vision or just need to make items on the screen easier to read.

    Read the article

  • Migrating SQL Server Compact Edition (SQL CE) database to SQL Server using Web Matrix

    - by Harish Ranganathan
    One of the things that is keeping us busy is the Web Camps we are delivering across 5 cities. If you are a reader of this blog and also attended one of these web camps, there is a good chance that you have seen me, since I have been at all the venues so far. The topics that we cover include Visual Studio 2010 SP1, SQL CE, ASP.NET MVC & HTML5. Whenever I talk about SQL CE, the immediate response is that people are wowed that Microsoft has shipped a FREE compact edition database — an embedded database that can be x-copy deployed. You might think, well, didn’t Microsoft ship SQL Express, which is also FREE? The difference is that SQL Express runs as a service on the machine (if you open SQL Configuration Manager, you can see SQL Express running as a service alongside your SQL Server engine, if you have it installed). This means that even if you are willing to use SQL Express when you deploy your application, it needs to be installed on the production machine (hosting provider) and it needs to run as a service. Many hosters don’t allow such services to run on their space. SQL CE, by contrast, comes as an x-copy deployable database with just a few DLLs required to run it, and they don’t even need to be installed in the GAC on the production machine. In fact, if you have Visual Studio 2010 SP1 installed, you can use the “Add Deployable Dependencies” option in Project Properties and it will detect that SQL CE is something you would probably want to add as a deployable dependency for your project. With that, it bundles the required DLLs as part of the “_bin_deployableAssemblies” folder, so your project can be x-copy deployed and just works. However, SQL CE has a limit of 4GB of storage space. Real-world applications often require more than 4GB of data storage, and it often turns out that people would like to use SQL CE for the development/ramp-up stages but migrate to a full-fledged SQL Server after a while. So it’s only natural that the question arises: “How do I move my SQL CE database to SQL Server?” Honestly, this isn’t supported in a straightforward way. I was talking to Ambrish Mishra (PM in the SQL CE team, Hyderabad), since I got this question in almost all the places where we talked about SQL CE. He was kind enough to demonstrate how this can be accomplished using Web Matrix. Open Web Matrix (Web Matrix can be installed for free from www.microsoft.com/web) and click on “Site from Template”. Click on the “Bakery” template (since by default it uses a SQL CE database and has all the required sample data) and click “Ok”. In the project, you can navigate to the Database tab and will find that the Bakery site uses a SQL CE database, “bakery.sdf”. Select “bakery.sdf” and you will see the “Migrate” button on the top right. Once you click the “Migrate” button, a popup wizard opens, configured by default for SQL Express. You can edit it to point to your local SQL Server instance, or a remote server. Upon filling in the server name, username and password, when you click “Ok”, a couple of things happen. 1. The database is migrated to SQL Server (local or remote — subject to permissions on the remote server). You can open SQL Server Management Studio and connect to the server to verify that the “bakery” database exists under the “Databases” node. 2. You will also notice in Web Matrix that when you navigate to the “Files” tab and open the web.config file, the connection string now points to the SQL Server instance (yes, the Migrate button was smart enough to make this change too). And there it is: your SQL Server Compact Edition database, now migrated to SQL Server!! In a future post, I will explain the steps involved when using Visual Studio. Cheers !!!
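    If you want to sanity-check the migration from code, a minimal sketch along these lines should work. Note this is illustrative only: the server name, credentials and the Products table name are assumptions (Products is a guess at a table in the Bakery sample), not something shown in the Migrate dialog above.

        using System;
        using System.Data.SqlClient;

        class MigrationSmokeTest
        {
            static void Main()
            {
                // Hypothetical connection details; substitute the values
                // you supplied in Web Matrix's Migrate dialog.
                var connectionString =
                    "Server=localhost;Database=bakery;User Id=myUser;Password=myPassword;";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand("SELECT COUNT(*) FROM Products", connection))
                {
                    connection.Open();
                    // If the migration succeeded, this returns the row count
                    // copied over from bakery.sdf.
                    Console.WriteLine("Rows migrated: " + command.ExecuteScalar());
                }
            }
        }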

    Read the article

  • Running Multiple WebLogic and OSB Domains

    - by jeff.x.davies
    I have any number of OSB domains created on my machine at any point in time. For example, I have different domains for different versions of Oracle Service Bus and Oracle SOA Suite. I also have different domains for different purposes: I have a demo domain and another domain for the projects in my blog. Starting with OSB 11g and the Apache Derby server, there is a small "gotcha" if you want to create multiple domains on a development machine. When you create a new domain for OSB 11g, it will use the same database info for all databases, and this will cause an error when starting the admin server of the second domain (the first domain doesn't have to be running for this error to occur). Here is an example of the error message in the server console:
    ####<Mar 8, 2011 2:58:48 PM PST> <Critical> <JTA> <jeff-laptop> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1299625128464> <BEA-110482> <A logging last resource failed during initialization. The server cannot boot unless all configured logging last resources (LLRs) initialize. Failing reason: weblogic.transaction.loggingresource.LoggingResourceException: java.sql.SQLException: JDBC LLR, table verify failed for table 'WL_LLR_ADMINSERVER', row 'JDBC LLR Domain//Server' record had unexpected value 'osb11gR1PS3//AdminServer' expected 'OSBCIM//AdminServer' *** ONLY the original domain and server that creates an LLR table may access it ***
    The solution is to create a database instance for each of your domains, and this is very simple to do. After you create a domain using the Configuration Wizard, locate the wlsbjmsrpDataSource-jdbc.xml file under the DOMAIN_HOME/config/jdbc directory. Near the top of the file you will see the following entry:
    <url>jdbc:derby://localhost:1527/osbexamples;create=true;ServerName=localhost;databaseName=osbexamples</url>
    You need to modify this entry with a different and unique database name. The easiest way to do this is to substitute the name of your domain. For example:
    <url>jdbc:derby://localhost:1527/mydomain;create=true;ServerName=localhost;databaseName=mydomain</url>
    will create a database named mydomain. Now, when you restart the admin server for the domain, it will create the new database for you. Do this for each domain you create on your development machine and you'll have no trouble. The process is even simpler if you handle it while creating the domain with the Configuration Wizard: simply name the database when you get to the Configure JDBC Component Schema step, select the OSB JMS Reporting Provider, and set the name in the DBMS/Service field to whatever name you like, as shown in Figure 1 below. Figure 1 – Configuring the JDBC Component Schema. That is all there is to it. Now you can create as many domains on your laptop or development machine as you like and not have to worry about them conflicting with each other.

    Read the article

  • DAC pack up all your troubles

    - by Tony Davis
    Visual Studio 2010, or perhaps its apparently-forthcoming sister, "SQL Studio", is being geared up to become the natural way for developers to create databases. Central to this drive is the introduction of 'data-tier application components', or DACs. Applications are developed as normal, but when it comes to deployment, instead of supplying the DBA with a bunch of scripts to create the required database objects, the developer creates a single DAC Package ("DAC Pack"): a zipped XML file containing all the database objects needed by the application, along with versioning information, policies for deployment, and so on. It's an intriguing prospect. Developers can work on their development database using their existing tools and source control, and then package up the changes into a single DACPAC for deployment and management. DBAs get an "application level view" of how their instances are being used and the ability to collectively, rather than individually, manage the objects. The DBA needing to manage a large number of relatively small databases can use "DAC snapshots" to get a quick overview of what has changed across all the databases they manage. The reason that DAC packs haven't caused more excitement is that they can only be pushed to SQL Server 2008 R2, and they must be developed or inspected using Visual Studio 2010. Furthermore, what we see right now in VS2010 is more of a 'work-in-progress' or 'vision of the future', with serious shortcomings and restrictions that render it unsuitable for anything but small 'non-critical' departmental databases. The first problem is that DAC packs support a limited set of schema objects (corresponding closely to the features available on 'Azure'). This means that Service Broker queues, CLR objects, and perhaps most critically security (permissions, certificates etc.), are off-limits. Applications that require these objects will need to add them via a post-deployment TSQL script, rather defeating the whole idea. More worrying still is the process for altering a database with a DAC pack. The grand 'collective' philosophy, whereby a single XML file can be used for deploying and managing builds and changes, extends, unfortunately, to database upgrades. Any change to a database object will result in the creation of a new database, copying the data from the old version, nuking the previous one, and then renaming the new one. Simple, eh? The problem is that even something as trivial as adding a comment to a stored procedure in a 5GB database will require the server to find at least twice as much space, as well as sufficient elbow-room in the transaction log for copying the largest table. Of course, you'll need to take the database offline for the full course of the deployment, which is likely to take a long time if there is a lot of data. This upgrade/rename process breaks the log chain, makes any subsequent full restore operation highly complicated, and will also break log shipping. As with any grand vision, the devil is always in the detail. It's hard to fathom why Microsoft hasn't used a SQL Compare-style approach to the upgrade process, altering a database with a change script, and this will surely be adopted in the near future. Something had to be in place for VS2010, but right now DAC packs only make sense for Azure. For this, they're cute, but hardly compelling. Nevertheless, DBAs would do well to get familiar with VS 2010 and DAC packs. Like it or not, they're both coming. Cheers, Tony.

    Read the article

  • New Rapid Install StartCD 12.2.0.48 for EBS 12.2 Now Available

    - by Max Arderius
    A new Rapid Install startCD (Patch 18086193) for Oracle E-Business Suite Release 12.2 is now available. We recommend that all EBS customers installing or upgrading to EBS 12.2 use this latest update. The startCD updates are distributed to customers via a My Oracle Support patch, which can be uncompressed on top of any previous 12.2 startCD under the main staging area. This patch replaces any previous startCDs.
    What's New in This Update?
    This new startCD version 12.2.0.48 includes important fixes for multi-node installs, RAC, pre-install checks, platform-specific issues, and upgrade scenario failures:
    18703814 - QREP:122:RI:ISSUE WITH CHECKOS.CMD
    18689527 - QREP:122:RI:ISSUE WITH FNDCORE.DLL SHIPPED AS PART OF R122 PACKAGE
    18548485 - QREP1224:4:JAR SIGNER ISSUE DUE TO THE RI UPGRADE AUTOCONFIG CHANGES
    18535812 - QREP:1220.48_4: 12.2.0 UPGRADE FILE SYSTEM LAY OUT IS AFFECTING THE DB TABLES
    18507545 - WIN: UNABLE TO LAY DOWN FS PRIOR TO 12.2 UPGRADE WITHOUT AFFECTING RUNNING DB
    18476041 - UNABLE TO LAY DOWN FS PRIOR TO 12.2 UPGRADE WITHOUT AFFECTING PRODUCTION DB
    18459887 - R12.2 INSTALLATION FAILURE - OPMNCTL: NOT FOUND
    18436053 - START CD 48_4 - ISSUES WITH TEMP SPACE CHECK
    18424747 - QREP1224.3:ADD SERVER BROWSE BUTTON NOT WORKING
    18421132 - *RW-50010: ERROR: - SCRIPT HAS RETURNED AN ERROR: 1
    18403700 - QREP122.48:RI:UPGRADE RI PRECHECK HUNG IN SPLIT TIER APPS NODE ( NO SILENT )
    18383075 - ADD VERBOSE OPTION TO RAC VALIDATION
    18363584 - UPTAKE INSTALL SCRIPTS FOR XB48_4
    18336093 - QREP:122:RI:PATCH FS ADMIN SERVICE RUNNING AFTER RI UPGRADE CONFIGURE MODE
    18320278 - QREP:1224.3:PLATFORM SPECIFIC SYNTAX ERRORS WITH DATE COMMAND IN DB CHECKER
    18314643 - DISABLE SID=DB_NAME FOR RI UPGRADE FLOW IN RAC
    18298977 - RI: EXCEPTION WHILE CLICKING RAC NODES BUTTON ON A NON-RAC SERVER
    18286816 - QREP122:STARTCD48_3:TRAVERSING FROM VISION PASSW SCREEN TO PROD
    18286371 - QREP122:STARTCD48_3:AMBIGUOUS MESSAGE DURING STAGE AREA CHECK ON HP
    18275403 - QREP122:48:RI UPGRADE WITH EOH POST CHECKS HANGS IN SPLIT TIER DB NODE
    18270631 - QREP122.48:MULTI-NODE RI USING NON-DEFAULT PASSWORDS NOT WORKING
    18266046 - QREP122:48:RI NOT ALLOWING TO IGNORE THE RAC PRE-CHECK FAILURE
    18242201 - UPTAKE TXK INSTALL SCRIPTS AND PLATFORMS.ZIP INTO STARTCD XB48_3
    18236428 - QREP122.47:RI UPGRADE EXISTING OH FOR NON-DEFAULT APPS PASSWORD NOT WORKING
    18220640 - INCONSISTENT DATABASE PORTS DURING EBS 12.2 INSTALLATION FOR STARTCD 12.2.0.47
    18138796 - QREP122:47:RI 10.1.2 TECHSTACK NOT WORKING IF WE RUN RI FROM NEW STARTCD LOC
    18138396 - TST1220: CONTROL FILE NAMING IN RAPID INSTALL SEEMS TO HAVE ISSUES
    18124144 - IMPROVE HANDLING ERRORS FOUND IN CLUVFY LOG DURING PREINSTALL CHECKS
    18111361 - VALIDATE ASM DB DATA FILES PATH AS +<DATA GROUP>/<PATH>
    18102504 - QREP1220.47_5: UNZIP PANEL DOES NOT CREATE THE CORRECT STAGE
    18083342 - 12.2 UPGRADE JAVA.NET.BINDEXCEPTION: CANNOT ASSIGN REQUESTED ADDRESS
    18082140 - QREP122:47:RAC DB VALIDATION IS FAILS WITH EXIT STATUS IS 6
    18062350 - 12.2.3 UPG: 12.2.0 INSTALLATION LOGS
    18050840 - RI: UPGRADE WITH EXISTING RAC OH:SECONDARY DB NODE NAME IS BLANK
    18049813 - RAC LOV DEFAULTS NOT SAVED UNLESS "SELECT" IS CLICKED
    18003592 - TST1220:ADDITIONAL FREE SPACE CHECK FOR RI NEEDS TO BE CHECKED
    17981471 - REMOVE ASM SPACE CHECK FROM RACVALIDATIONS.SH
    17942179 - R12.2 INSTALL FAILING AT ADRUN11G.SH WITH ERRORS RW-50004 & RW-50010
    17893583 - QREP1220.47:VALIDATION OF O.S IN RAPIDWIZ IN THE DB NODE CONFIGURATION SCREEN
    17886258 - CLEANUP FND_NODES DURING UPGRADE FLOW
    17858010 - RI POST INSTALL CHECKS (SSH VERIFICATION) STEP IS FAILING
    17799807 - GEOHR: 12.2.0 - ERRORS IN RAPIDWIZ AND ADCONFIG LOGS
    17786162 - QREP1223.4:RI:SERVICE_NAMES IS PRINTED AS SERVICE_NAME IN RI SCREEN
    17782455 - RI: CONFIRM DEFAULT APPS PASSWORD IN SILENT MODE KICKOFF
    17778130 - RI:ADMIN SERVER TO BE UP ON PRIMARY MID-TIER IN MULTI-NODE UPGRADE FS CREATION
    17773989 - UN-SUPPORTED PLATFORM SHOWS 32 BIT AS HARD-CODED
    17772655 - RELEVANT MESSAGE DURING THE RAPDIWIZ -TECHSTACK
    17759279 - VERIFICATION PANEL DOES NOT EXPAND TECHNOLOGY STACK
    17759183 - BUILDSTAGE SCRIPT MENU NEEDS TO BE ADJUSTED
    17737186 - DATABASE PRE-REQ CHECK INCORRECTLY REPORTS SUCCESS ON AIX
    17708082 - 12.2 INSTALLATION - OS PRE-REQUISITES CHECK
    17701676 - TST122: GENERATE WRONG S_DBSID FOR PATCH FILE SYSTEM AT PHASE PREPARE
    17630972 - /TMP PRE-REQ INSTALLATION CHECK
    17617245 - 12.2 VISION INSTALL FAILS ON AIX
    17603342 - OMCS: DB STAGING COMPLAINS WHILE MOVING IT TO FINAL LOCATION
    17591171 - OMCS: DB STAGING FAILS WITH FRESH INSTALL R12.2
    17588765 - CHECKER VERSION AND PLUGIN VERSION
    17561747 - BUILDSTAGE.SH FAILS WITH ERROR WHEN STAGE HOSTED ON 32BIT LINUX
    17539198 - RAPID INSTALL NEEDS TO IGNORE NON-REQUIRED STAGE ELEMENTS
    17272808 - APPS USERS THAT HAVE DEFAULT PASSWORD AFTER 12.2 RAPID INSTALL
    References
    12.2 Documentation Library
    1581299.1 : EBS 12.2 Product Information Center
    1320300.1 : Oracle E-Business Suite Release Notes, Release 12.2
    1606170.1 : Oracle E-Business Suite Technology Stack and Applications DBA Release Notes for Release 12.2.3
    1624423.1 : Oracle E-Business Suite Technology Stack and Applications DBA Release Notes for R12.TXK.C.Delta.4 and R12.AD.C.Delta.4
    1594274.1 : Oracle E-Business Suite Release 12.2: Consolidated List of Patches and Technology Bug Fixes
    Related Articles
    Oracle E-Business Suite 12.2 Now Available
    startCD options to install Oracle E-Business Suite Release 12.2

    Read the article

  • Free RAM disappears - Memory leak?

    - by Izzy
    On a freshly started system, free reports about 1.5G of used RAM (8G RAM altogether, Ubuntu 12.04 with lightdm and the Plasma desktop, one Konsole window started). With the apps I use running, it still consumes no more than 2G. However, after the system has been running for a couple of days, more and more of my free RAM disappears -- without showing up in the list of used apps: while smem --pie=name reports less than 20% used (and 80% available), everything else says differently. free -m, for example, reports the following on about day 7:
                 total       used       free     shared    buffers     cached
    Mem:          7459       7013        446          0        178        997
    -/+ buffers/cache:       5836       1623
    Swap:         9536        296       9240
    (so you can see, it's not the buffers or the cache). Today this finally ended with the system crashing completely: the window manager gone, apps "hanging in the air" (frameless) -- and a popup notifying me about "too many open files". Syslog reports:
    kernel: [856738.020829] VFS: file-max limit 752838 reached
    So I closed the applications I was able to close, and killed X using Ctrl-Alt-Backspace. X tried to come up again after that with failsafeX, but was unable to do so as it could no longer detect its configuration. So I switched to a console using Ctrl-Alt-F2, captured all the information I could think of (vmstat, free, smem, /proc/meminfo, lsof, ps aux), and finally rebooted. X again came up with failsafeX; this time I told it to "recover from my backed-up configuration", then switched to a console and successfully used startx to bring up the graphical environment. I have no real clue as to what is causing this issue -- though it must have to do either with X itself, or with some user processes running on X -- as after killing X, free -m output looked like this (~3.5GB freed):
                 total       used       free     shared    buffers     cached
    Mem:          7459       2677       4781          0         62        419
    -/+ buffers/cache:       2195       5263
    Swap:         9536         59       9477
    Compare with the output after a fresh start:
                 total       used       free     shared    buffers     cached
    Mem:          7459       1483       5975          0         63        730
    -/+ buffers/cache:        689       6769
    Swap:         9536          0       9536
    Two more helpful outputs are provided by memstat -u. Shortly before the crash:
    User        Count     Swap      USS      PSS      RSS
    mail            1        0      200      207      616
    whoopsie        1      764      740      817     2300
    colord          1     3200      836      894     2156
    root           62    70404   352996   382260   569920
    izzy           80   177508  1465416  1519266  1851840
    After having X killed:
    User        Count     Swap      USS      PSS      RSS
    mail            1        0      184      188      356
    izzy            1     1400      708      739     1080
    whoopsie        1      848      668      826     1772
    colord          1     3204      804      888     1728
    root           62    54876   131708   149950   267860
    And after a restart, back in X:
    User        Count     Swap      USS      PSS      RSS
    mail            1        0      212      217      628
    whoopsie        1        0     1536     1880     5096
    colord          1        0     3740     4217     7936
    root           54        0   148668   180911   345132
    izzy           47        0   370928   437562   915056
    Edit: Just added two graphs from my monitoring system. Interesting to see: every time there's a "jump" in memory consumption, CPU peaks as well. I also just found something that reminds me of another indicator pointing to X itself: often when returning to my machine and unlocking the screen, I found something doing heavy work on my CPU. Checking with top, it always turned out to be /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch -background none. So after this long explanation, finally my questions:
    1. What could be the possible causes?
    2. How can I better identify the processes/applications involved?
    3. What steps could be taken to avoid this behaviour -- short of rebooting the machine every X days?
    I was running 8.04 (Hardy) for about 5 years on my old machine, never experiencing anything like this (always more than 100 days of uptime before rebooting for e.g. kernel updates). This is a completely new machine with a fresh install of 12.04. In case it matters, some specs: AMD A4-3400 APU with Radeon(tm) HD Graphics, using the open-source ati/radeon driver (so no fglrx installed), 8GB RAM, WDC WD1002FAEX-0 hdd (1TB), Asus F1A75-V Evo mainboard. Ubuntu 12.04 64-bit with KDE4/Plasma. Apps usually open more or less permanently include Evolution, Firefox, Konsole (with Midnight Commander running inside, about 4 tabs), and LibreOffice -- plus occasionally Calibre, Gimp and Moneyplex (banking software I've been using for almost 20 years now, in a version which did fine on Hardy).

    Read the article

  • MPI Cluster Debugger launch integration in VS2010

    Let's assume that you have all the HPC bits installed and that you have existing MPI code (or you created a "Hello World" project using the MPI project template). Of course, you create a single MPI application, and at runtime it will correspond to multiple processes (of the same app) launched on multiple nodes (i.e. machines) on the cluster. So how do you debug such a situation by simply hitting the familiar "F5" keystroke (i.e. Debug - Start Debugging)?
    WATCH IT INSTEAD OF READING ABOUT IT
    If you can't bear to read through all the details below, just watch this 19-minute screencast explaining this VS2010 feature. Alternatively, or even additionally, keep on reading.
    REQUIREMENT
    When you debug an MPI application, you would want the copying of resources from your client machine (where Visual Studio is installed) to each compute node (where Windows HPC Server is installed) to take place automatically for you. 'Resources' in the previous sentence includes your application binary, plus any binary or data dependencies it may have, plus PDBs if needed, plus the debug CRT of the correct bitness, plus msvsmon for remote debugging to work. You would also want, after copying is complete, to have your app and msvsmon launched and attached so that you can hit breakpoints back in Visual Studio on your client machine. All these things that you would want are delivered in VS2010.
    STEPS TO F5
    1. In your MPI project where you have placed a breakpoint, go to Project Properties - Configuration Properties - Debugging. Ensure the "Debugger to launch" combo box value is set to MPI Cluster Debugger.
    2. There are a whole bunch of properties here, and typically you can ignore all of them except one: Run Environment. By default it is set to run 1 process on your local machine, and if you change the number after that to, for example, 4 it will launch 4 processes of your app on your local machine. You want this to run on your cluster though, so go to the dropdown arrow at the end of the Run Environment cell and open it to expose the "Edit Hpc node" menu, which opens the Node Selector dialog. In this dialog you can enter (or pick from a list) the cluster head node name and then the number of processes you want to execute on the cluster, and then hit OK and… you are done.
    3. Press F5 and watch your breakpoint get hit (after giving it some time for copying, remote execution, attachment and symbol resolution to take place).
    GOING DEEPER
    In the MPI Cluster Debugger project properties above, you can see many additional properties besides the Run Environment. They are all optional, but you may want to understand them in order to fine-tune your cluster debugging. Read all about each one of these on the MSDN page Configuration Properties for the MPI Cluster Debugger. In the Node Selector dialog above you can see more options than just the Head Node name and Number of Processes to run. They should be self-explanatory, but I also cover them in depth in my screencast, showing you an example of why you would choose to schedule processes per core versus per node. You can also read about these options on MSDN as part of the page How to: Configure and Launch the MPI Cluster Debugger. To read through an example that touches on MPI project creation, project properties, node selector, and also usage of MPI with OpenMP plus MPI with PPL, read the MSDN page Walkthrough: Launching the MPI Cluster Debugger in Visual Studio 2010.
    Happy MPI debugging! Comments about this post welcome at the original blog.

    Read the article

  • Walkthrough: Scheduling jobs using Quartz.net &ndash; Part 1: What is Quartz.Net?

    - by Tarun Arora
    Quartz.NET is a full-featured, open source enterprise job scheduling system written for the .NET platform that can be used in anything from the smallest apps to large-scale enterprise systems. What is the problem we are trying to address? I want to schedule the execution of a task, but only when something happens. Let's call that something a trigger, so... if the trigger is met => execute the task. Sounds simple, so why not use Windows Task Scheduler for this? Well, Windows Task Scheduler is great for tasks where the trigger can be easily defined. But with Windows Task Scheduler, would you be able to schedule a task to run on every working day according to the UK calendar (excluding all weekends & bank holidays) without either writing the day-check logic into the task or a wrapper script calling into the task? The task should just contain the execution logic and should not have anything to do with the schedule for execution; Quartz.net allows you to achieve this and lots more. A Quartz.net trigger gives you the flexibility for task invocation based on the following triggers (a short code sketch at the end of this post shows a couple of these in action):
    1. at a certain time of day (to the millisecond)
    2. on certain days of the week
    3. on certain days of the month
    4. on certain days of the year
    5. not on certain days listed within a registered Calendar (such as business holidays)
    6. repeated a specific number of times
    7. repeated until a specific time/date
    8. repeated indefinitely
    9. repeated with a delay interval
    Did 8 – repeated indefinitely – just ring a bell? I'll be covering that in a future post.
    Using Quartz.net as a windows service
    You can have Quartz.net run as a standalone instance within its own .NET virtual machine instance via .NET Remoting. Let's take a look at a typical application architecture. In the figure below, I have the application tier set up on Machine 1, the database set up on Machine 2 and Quartz.net set up on Machine 3, which is normally the architecture for most (if not all) enterprise applications. Figure 1 - Typical application architecture while using Quartz.net as a windows service
    What other options do I have if I don't want to use Quartz.net?
    Quartz.net is just one of many job scheduling services. Have a look at this comprehensive list of free and paid enterprise job scheduling software along with their feature comparison: http://en.wikipedia.org/wiki/List_of_job_scheduler_software
    This was the first in a series of posts on enterprise scheduling using Quartz.net; in the next post I'll be covering how to install Quartz.net as a windows service. Thank you for taking the time out to read this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!
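    As promised above, here is a minimal sketch of the job/trigger separation, assuming the Quartz.NET 2.x fluent API; the HelloJob class and the cron expression (noon on weekdays) are illustrative placeholders, not part of the original post:

        using Quartz;
        using Quartz.Impl;

        public class HelloJob : IJob
        {
            // Called by the scheduler each time the trigger fires;
            // the job holds only the execution logic.
            public void Execute(IJobExecutionContext context)
            {
                System.Console.WriteLine("Hello from Quartz.NET!");
            }
        }

        public static class SchedulerDemo
        {
            public static void Run()
            {
                // Build a scheduler from the default factory and start it.
                IScheduler scheduler = StdSchedulerFactory.GetDefaultScheduler();
                scheduler.Start();

                IJobDetail job = JobBuilder.Create<HelloJob>()
                    .WithIdentity("helloJob", "demoGroup")
                    .Build();

                // The trigger owns the schedule: here, 12:00 on Mon-Fri.
                ITrigger trigger = TriggerBuilder.Create()
                    .WithIdentity("weekdayNoon", "demoGroup")
                    .WithCronSchedule("0 0 12 ? * MON-FRI")
                    .Build();

                scheduler.ScheduleJob(job, trigger);
            }
        }

    Excluding bank holidays (item 5 in the list) would additionally use a registered Quartz calendar, attached to the trigger via ModifiedByCalendar.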

    Read the article

  • Top 5 characteristics Recruiters are looking for

    - by Maria Sandu
    Of course, many of the skills and characteristics recruiters look for are job specific. But whether you are a graduate fresh out of college or seasoned in the workplace, recruiters are also looking for generic skills and attitude to see whether you are a good fit for the company. So make sure you prepare, and show through examples that you have these skills.
    1. Drive/passion
    Liking the job you are applying for is paramount and something recruiters always look for. Show and prove your drive for the role and/or the field you are applying for. Always be prepared to pitch yourself; this shows your drive for the role you are applying for.
    2. Communication skills
    People often make the mistake of thinking this skill is only about how well they can talk about their background and expertise. This is important, but at least as important is listening well to the questions that are asked. Make sure you answer to the point and ask for clarification if you need it. This shows your interest in the role and the ability to communicate clearly. It also helps you build trust with the recruiter every time you speak to him/her.
    3. Confidence
    Recruiters are looking for the best candidate for the job. So if you don't think you are the best candidate, why should the recruiter? Show with confidence, without being arrogant (think about building trust), why you are the right person for the job. Confidence also shows in your answers to difficult questions. Be confident enough to explain why some experiences went wrong and how you learnt from them. If you don't have an immediate answer to a question, it is better to ask for a second to think than to give a random answer.
    4. Vision
    The main reason many companies hire graduates is that graduates are perceived to be flexible. The organisation will train and upskill you in the direction best suited to the organisation. However, the most intense learning path is realised when you also know where you want to go. Companies are often happy to accommodate you with training and development, but if you don't have a clear vision of what you want to achieve for yourself and what value you bring to the company, recruiters may decide you are not the right candidate, as they fear you aren't going to stay with the company.
    5. Business awareness
    For every job you apply for, you will be challenged on your knowledge of, and interest in, the market and business the company is in. All companies add value in different ways in their respective markets. So make sure you are aware of what a company does, what its goal is, why and how it exists, and how you can add value for the company in the role you are applying for.

    Read the article

  • BI&EPM in Focus November 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    Customers
    · San Diego Unified School District Harnesses Attendance, Procurement, and Operational Data with Oracle Exalytics, Generating $4.4 Million in Savings
    · NilsonGroup chooses Oracle Exalytics In-Memory Machine as their solution to access critical data to keep its stores competitive with real-time Mobile BI: Video
    · Nykredit, in the Danish Financial Sector, describes their experiences from testing the Exalytics Business Intelligence Machine: Video
    · Sodexo chose Oracle Exalytics as their business analytics platform: Video
    · AstraZeneca (US, Canada, MedImmune) Improves Insight, Analytics, and Reporting, Enterprisewide with Unified Planning on a Single Platform
    · Experian Consolidates Reporting Systems for One, Global View of Financial Data and Improves Planning for Continued Growth
    · Munchkin Gives its Line of Children’s Products Plenty of Room to Grow in an Upgraded Enterprise Application Environment
    · Top 20 EPM Customer Snapshots, in One Handy Booklet (link)
    · Customer and Partner Successes: Link to Complete Archive
    Enterprise Performance Management
    · Nov 15: Is Hope and Email the Core of your Reconciliation Process? (link)
    · Replay: Integrated Business Planning, Featuring Leggett & Platt (link)
    · Whitepaper: The New Competitive Advantage - Strategic CIO's Embrace the Cloud (link)
    · Press: Oracle Enterprise Performance Management Driving Significant Improvements in Budget Management and Reporting for Public Sector Organizations (link)
    · Enterprise Performance Management Video Feature Overviews, Now Available on YouTube (link)
    · NEW Solution Brief - Oracle Hyperion Planning on the Oracle Exalytics In-Memory Machine (link)
    · For Insurance sector: Datasheet for new release V2.0 - Oracle Quantitative Management and Reporting for Solvency II (link)
    · Whitepaper FSN 2012: Managing Risk and Uncertainty, an Executive's Guide to Integrated Business Planning (link)
    · NEW Datasheet for Oracle Planning and Budgeting Cloud Service (link)
    · Blog: Planning in the Cloud - For Real Business Intelligence
    · Press: Latest Release of Oracle Exalytics In-Memory Machine Software Enables Customers to View and Analyze Data at the Speed of Business (link)
    · Press: New Release (11.1.1.6.2BP1) of Oracle Business Intelligence Enables Users to Quickly Access and Analyze Key Business Information, Anytime, Anywhere (link)
    · Mark Hurd Interviewed on USA Today about Big Data & Analytics (link)
    · Whitepaper: Mastering Big Data - CFO Strategies to Transform Insight into Opportunity (link)
    · Nov 15: Improve Asset Utilization. Achieve Greater Profitability: Oracle Enterprise Asset Management Analytics (link)
    · Replay: Oracle Enterprise Asset Mgmt Analytics and Oracle Manufacturing Analytics (link)
    · Overload to Impact: An Industry Scorecard on Big Data Business Challenges (link)
    · Webcast Replay: Overview of Oracle Endeca Informational Discovery (link)
    · OBIEE 11g: Required and Recommended Patches and Patch Sets (link)

    Read the article

  • James - mail server configuration help needed

    - by Chaitanya
    Hi, I am trying to set up the James mail server on a Linux machine. The Linux machine has a public static IP address assigned. I installed James and, in config.xml, added the servername as mydomain.com. In the DNS for mydomain.com, I have created an A record, say mx.mydomain.com, which corresponds to the IP address of the above mail server machine. I then added mx.mydomain.com as the MX record for mydomain.com. In James, I have created a new user, test. Then from Gmail, I sent a mail to [email protected]. The mail never arrives, and it didn't even bounce back. The Linux machine is behind a firewall with only ports 22, 80, and 8080 open to the external network. My question here is: do I need to open any other ports on the firewall so that the mail I send from Gmail arrives at James? If it's not a port problem, any views on solving this issue? I don't want to send mails from this server; it's only for receiving mail.
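    For context: inbound SMTP is delivered over TCP port 25, so one hedged way to check the setup is a reachability sketch like the one below, run from outside the firewall (the host name is taken from the example above; this is purely illustrative):

        using System;
        using System.Net.Sockets;

        class SmtpReachabilityCheck
        {
            static void Main()
            {
                // Gmail's servers connect to the host named in the MX record
                // on TCP port 25; if this connect fails from outside the
                // firewall, inbound SMTP is being blocked before James sees it.
                using (var client = new TcpClient())
                {
                    client.Connect("mx.mydomain.com", 25);
                    Console.WriteLine("Port 25 reachable: " + client.Connected);
                }
            }
        }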

    Read the article

  • CultureInfo on a IValueConverter implementation

    - by slugster
    When a ValueConverter is used as part of a binding, one of the parameters to the Convert function is a System.Globalization.CultureInfo object. Can anyone tell me where this culture object gets its info from? I have some code that formats a date based on that culture. When I access my Silverlight control, which is hosted on my machine, it formats the date correctly (using the d/MM/yyyy format, which is set as the short date format on my machine). When I access the same control hosted on a different server (from my client machine), the date is formatted as MM/dd/yyyy hh:mm:ss — which is totally wrong. Coincidentally, the regional settings on the server are set to the same as on my client machine. This is the code for my value converter:
    public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
    {
        if (value is DateTime)
        {
            if (parameter != null && !string.IsNullOrEmpty(parameter.ToString()))
                return ((DateTime)value).ToString(parameter.ToString());
            else
                return ((DateTime)value).ToString(culture.DateTimeFormat.ShortDatePattern);
        }
        return value;
    }
    Basically, a specific format can be specified as the converter parameter, but if it isn't, then the short date pattern of the culture object is used.
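    For background: in Silverlight the culture parameter typically comes from the framework element's Language property (defaulting to en-US) rather than from the OS regional settings, which would explain the difference between hosts. A hedged workaround sketch — not necessarily the ideal fix — is to ignore the supplied culture and format with the client machine's current culture instead:

        using System;
        using System.Globalization;
        using System.Windows.Data;

        public class DateConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter,
                                  CultureInfo culture)
            {
                if (value is DateTime)
                {
                    // Ignore the binding-supplied culture and use the culture
                    // the client machine is actually configured with.
                    var current = CultureInfo.CurrentCulture;
                    if (parameter != null && !string.IsNullOrEmpty(parameter.ToString()))
                        return ((DateTime)value).ToString(parameter.ToString(), current);
                    return ((DateTime)value).ToString(
                        current.DateTimeFormat.ShortDatePattern, current);
                }
                return value;
            }

            public object ConvertBack(object value, Type targetType, object parameter,
                                      CultureInfo culture)
            {
                throw new NotImplementedException();
            }
        }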

    Read the article

  • SSL Slow in IE 8.0.7600.16385IC

    - by discovery.jerrya
    I'm having a performance problem on my company's web site when using a specific version of IE 8 to load a page over HTTPS. Here's what I know.
    Server: virtual machine running on VMware ESX; Windows Server 2003 Enterprise Edition SP2; Tomcat 6.0.16.
    Client: Windows XP and Windows 7; Internet Explorer 8.0.7600.16385IC.
    The page loads/refreshes in under 1 second using HTTP, but takes 15-16 seconds over HTTPS in this version of IE. The problem is reproduced on multiple client machines with the same IE version, and on client machines with different Windows versions (XP and 7). There is no performance problem using Chrome, Firefox, Opera, or Safari from the same machine, and no performance problem using other versions of IE 8 on other machines. The slow load causes virtually no CPU, memory, or I/O spike on the server or the client machine, and there is no performance problem on other sites using HTTPS from the same client machine. The pages in question use JavaScript and innerHTML to replace the contents of div elements to create a collapsible menu, and an iframe to display some content. A couple of the div elements contain images. If I remove the iframe and the JavaScript, the performance issues go away. However, rewriting the entire site to make these changes would be very time consuming. We're in the process of replacing the whole site, but it may be 2-3 months before we do so, and we really cannot live with this slowdown that long. I've already looked at several IE tuning options, such as disabling add-ons, running IE-rereg, and resetting IE, with no luck. Does anyone have any suggestions?

    Read the article

  • ODBC continually prompts for password

    - by doublej92
    I have an application built in Access 2003 that uses a system DSN ODBC to connect to a SQL Server. The ODBC uses SQL authentication. When the application is started, the user is prompted to authenticate into the database. I have another computer set up within the same domain that has Access 2007 installed on it. I log in using the same credentials that I use to get on the machine that has Access 2003. I converted my application to Access 2007 format and everything works fine. However, when other users try to use the application, they are prompted to enter the database password every time a table is accessed. Thinking it was a problem with my ODBC, I confirmed that the connections were set up the same way on both of my machines, and the user's machine. Here is the interesting part, when the user logged into my machine, it started prompting for the password every time. When I logged into the user's machine, the application worked fine. Anyone have any ideas? All help is appreciated!

    Read the article

  • WCF NetTcpBinding Security - how does it work?

    - by RemotecUk
    Hi, I've encountered the following problems trying to work through the quagmire of settings in WCF. I created a WCF client/server service using a NetTcp binding. I didn't make any changes to the security settings, and when running on one machine it works very nicely. However, when I ran my client from another machine, it complained that the server didn't like the security credentials that were sent. I understand now that NetTcp is "secured" by default and that my client would have been passing the wrong security details — namely the Windows user name and password (or some form of domain authentication) — to my server, which, as they are not running on the same domain, it would not have liked. However, what I don't understand is as follows:
    1. I haven't specified any security in my binding — do the standard settings expect a Windows user name or password to be sent?
    2. I don't have any certificate installed on my server — I understand that NetTcp bindings need some form of public/private key to protect the credentials — yet this seemed to work when both client and server were on the same machine. How was the data getting encrypted? Or was it that WCF knew it was on the same machine and encryption wasn't needed?
    3. I have had to set the security mode on my client and server to "none" now, and they connect nicely. However, is there any way to encrypt my data without a certificate?
    4. Finally... what is the difference between Transport and Message security? To check my understanding (excuse the scenario!): Message security is like sending a letter from person A to person B and encoding my handwriting so that if anyone intercepts it, they cannot read it? Transport security is having my letter sent by armed transport so that no one can get at it along the way?
    Is it possible to have any form of encryption in WCF without a certificate? My project is a private project and I don't want to purchase a certificate, and the data isn't that sensitive anyway, so it's just for my own knowledge. Thanks in advance.
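    For reference, here is a minimal sketch of the two binding configurations discussed above, expressed in code rather than config (the method names are illustrative):

        using System.ServiceModel;

        static class NetTcpBindings
        {
            // The default: Transport security with Windows credentials. This works
            // when client and server can authenticate against the same domain or
            // machine accounts, which is why the same-machine test succeeded.
            public static NetTcpBinding SecuredDefault()
            {
                var binding = new NetTcpBinding(SecurityMode.Transport);
                binding.Security.Transport.ClientCredentialType =
                    TcpClientCredentialType.Windows;
                return binding;
            }

            // No security: no credentials are sent and nothing is encrypted
            // on the wire (the configuration the poster fell back to).
            public static NetTcpBinding Unsecured()
            {
                return new NetTcpBinding(SecurityMode.None);
            }
        }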

    Read the article

  • Deployed Qt5 Application Doesn't Print or Show Print Dialog

    - by MustacheMcLimey
    I'm experiencing Qt4-to-Qt5 troubles. In my application, when the user clicks the print button two things should happen: one is that a PDF gets written to disk (which still works fine in the new version, so I know that some of the printing functions are working properly), and the other is that a QPrintDialog should exec() and then send to a connected printer. I see the dialog when I launch from my development machine. The application launches on the deployed machine, but the QPrintDialog never shows and the document never prints. I am including print support:
    QT += core gui network webkitwidgets widgets printsupport
    I have been using Process Explorer to see what DLLs the application uses on my development machine, and I believe that everything is present. My application bundle includes:
    {myAppPath}\MyApp[MyApp.exe, Qt5PrintSupport.dll, ...]
    {myAppPath}\plugins\printsupport\windowsprintersupport.dll
    {myAppPath}\plugins\imageformats[ qgif.dll, qico.dll, qjpeg.dll, qmng.dll, qtga.dll, qtiff.dll, qwbmp.dll ]
    The following is the relevant code snippet:
    void PrintableForm::printFile()
    {
        // Writes the PDF to disk in every environment
        pdfCopy();
        // Paper copy only works on my dev machine
        QPrinter paperPrinter;
        QPrintDialog printDialog(&paperPrinter, this);
        if( printDialog.exec() == QDialog::Accepted )
        {
            view->print(&paperPrinter);
        }
        this->accept();
    }
    My first thought is that the relevant DLLs are not being found come print time, which would mean my application file structure is incorrect, but I have not found anything that shows me a different file structure. Am I on the right track, or is there something else wrong with this setup?

    Read the article

  • Using Nemerle in asp.net App_Code directory

    - by Andrew Davey
    I want to use Nemerle in an ASP.NET application — specifically, putting .n files into App_Code. I added this to my web.config system.codedom/compilers section:
    <compiler language="n;Nemerle" extension=".n" type="Nemerle.Compiler.NemerleCodeProvider, Nemerle.Compiler"/>
    When running I get this exception:
    The assembly '' is already loaded in another appdomain. Setting <deployment retail="true" /> in machine.config can help solve this issue.
    Stack trace:
    [HttpException (0x80004005): The assembly '' is already loaded in another appdomain. Setting <deployment retail="true" /> in machine.config can help solve this issue.]
    System.Web.Compilation.CodeDirectoryCompiler.GetCodeDirectoryAssembly(VirtualPath virtualDir, CodeDirectoryType dirType, String assemblyName, StringSet excludedSubdirectories, Boolean isDirectoryAllowed) +8809675
    System.Web.Compilation.BuildManager.CompileCodeDirectory(VirtualPath virtualDir, CodeDirectoryType dirType, String assemblyName, StringSet excludedSubdirectories) +128
    System.Web.Compilation.BuildManager.CompileCodeDirectories() +265
    System.Web.Compilation.BuildManager.EnsureTopLevelFilesCompiled() +320
    [HttpException (0x80004005): The assembly '' is already loaded in another appdomain. Setting <deployment retail="true" /> in machine.config can help solve this issue.]
    System.Web.Compilation.BuildManager.ReportTopLevelCompilationException() +58
    System.Web.Compilation.BuildManager.EnsureTopLevelFilesCompiled() +512
    System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters) +729
    [HttpException (0x80004005): The assembly '' is already loaded in another appdomain. Setting <deployment retail="true" /> in machine.config can help solve this issue.]
    System.Web.HttpRuntime.FirstRequestInit(HttpContext context) +8890735
    System.Web.HttpRuntime.EnsureFirstRequestInit(HttpContext context) +85
    System.Web.HttpRuntime.ProcessRequestInternal(HttpWorkerRequest wr) +259
    What am I doing wrong?

    Read the article

  • Using Entity Framework with an SQL Compact Private Installation

    - by David Veeneman
    I am using Entity Framework 4 in a desktop application with SQL Compact. I want to use a private installation of SQL Compact with my application, so that my installer can install SQL Compact without giving the user a second installation to do. It also avoids versioning hassles down the road. My development machine has SQL Compact 3.5 SP1 installed as a public installation, so my app runs fine there, as one would expect. But it's not running on my test machine, which does not have SQL Compact installed. I get this error:
    The specified store provider cannot be found in the configuration, or is not valid.
    I know some people have had difficulty with SQL Compact private installations, but I have used them for a while, and I really like them. Unfortunately, my regular private installation approach isn't working. I have checked the version numbers on my SQL CE files, and they are all 3.8.8078.0, which is the SP2 RC version. Here are the files I have included in my private installation:
    sqlcecompact35.dll
    sqlceer35EN.dll
    sqlceme35.dll
    sqlceqp35.dll
    sqlcese35.dll
    System.Data.SqlServerCe.dll
    System.Data.SqlServerCe.Entity.dll
    I have added a reference to System.Data.SqlServerCe to my project, and I have verified that all of the files listed above are being copied to the application folder on the installation machine. Here is the code I use to configure an EntityConnectionStringBuilder when I open a SQL Compact file:
    var sqlCompactConnectionString = string.Format("Data Source={0}", filePath);
    // Set Builder properties
    builder.Metadata = string.Format("res://*/{0}.csdl|res://*/{0}.ssdl|res://*/{0}.msl", edmName);
    builder.Provider = "System.Data.SqlServerCe.3.5";
    builder.ProviderConnectionString = sqlCompactConnectionString;
    var edmConnectionString = builder.ToString();
    Am I missing a file? Am I missing a configuration step needed to tell Entity Framework where to find my SQL Compact DLLs? Any other suggestions why EF isn't finding my SQL Compact DLLs on the installation machine? Thanks for your help.
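    One hedged diagnostic is to check whether ADO.NET can resolve the SQL Compact provider invariant on the test machine; Entity Framework looks the provider up by that name, and on the dev box the SQL Compact installer registers it in machine.config. If the sketch below throws on the test machine, the provider isn't registered there, and registering it in the application's own config would be the next thing to try:

        using System;
        using System.Data.Common;

        class ProviderCheck
        {
            static void Main()
            {
                try
                {
                    // Entity Framework resolves the store provider by this
                    // invariant name.
                    DbProviderFactory factory =
                        DbProviderFactories.GetFactory("System.Data.SqlServerCe.3.5");
                    Console.WriteLine("Provider resolved: " + factory.GetType().FullName);
                }
                catch (Exception ex)
                {
                    // On a machine without SQL Compact installed, this is the
                    // failure that surfaces as "store provider cannot be found".
                    Console.WriteLine("Provider missing: " + ex.Message);
                }
            }
        }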

    Read the article

  • Packaging a Qt application compiled with shared libraries

    - by Surjya Narayana Padhi
    Hi geeks, I recently downloaded the Qt embedded demo source code onto my Linux machine. Here is what happened when I ran the program:

    1. I compiled it statically on my x86 machine, and the application runs fine there. But when I took the statically compiled binary to another machine with an Atom platform, it ran with some widgets missing. I found that plugins can't be ported with static compilation. Can anybody tell me whether that is true? If not, can anybody tell me the steps to do it?

    2. I compiled it dynamically with shared libraries and got an executable on Linux. I ran "ldd MyAppName", and it shows me the shared library files the executable is using, but I don't know how to package them. Can anybody tell me the steps to package it?

    I checked the article on deploying Qt applications on X11/Linux platforms, but it's not complete. Can anybody give me the detailed steps? Any help will be appreciated.
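    For the shared-library case, the approach described in Qt's X11 deployment guide is to copy the Qt libraries reported by ldd into a directory next to the executable and launch through a wrapper script that sets LD_LIBRARY_PATH. A minimal sketch, assuming the .so files have been copied into a lib subdirectory beside the binary:

        #!/bin/sh
        # Launch wrapper: point the loader at the bundled Qt libraries, then run the app.
        appname=MyAppName
        dirname=`dirname "$0"`
        LD_LIBRARY_PATH="$dirname/lib:$LD_LIBRARY_PATH"
        export LD_LIBRARY_PATH
        exec "$dirname/$appname" "$@"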

    Read the article

  • VS2010 / Target Framework = 3.5 / Building on Continuous Integration Server

    - by granadaCoder
    I'm checking into upgrading to VS2010. Our production servers only have the 3.5 Framework, and it will be 6-9 months before they are updated. We also have a Continuous Integration server running CruiseControl.NET (CC.NET). It has the 3.5 Framework on it as well. Our implementation of CC.NET mainly calls msbuild.exe MySolution.msbuild. (We encapsulate most of the build logic into .msbuild files, FYI.) Inside the .msbuild file, the "Build" target looks like this:

        <Target Name="Build" DependsOnTargets="Checkout">
          <MSBuild Projects="$(WorkingCheckout)\MySolution.sln"
                   Targets="Build"
                   Properties="Configuration=$(Configuration)">
            <Output TaskParameter="TargetOutputs" ItemName="TargetOutputsItemName" />
          </MSBuild>
        </Target>

    I know VS2010 can target the 3.5 Framework. My question is what happens when I have a VS2010 dev machine and I check the VS2010 .sln and .csproj files into source control (svn, btw): will the CC.NET machine, which only has the 3.5 Framework installed on it, be able to build the .sln? I guess I could test it, but the catch-22 is that I don't have VS2010 (yet), so I'm asking before I try the trial or a real install. Any ideas what will happen? I guess the crux question is what happens when I run:

        c:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe "MyVS2010SolutionFile.sln"

    My hopeful goal would be to allow the developers to have VS2010 (now!) and still have everything be OK for the CC.NET machine and the production servers, which will only have the 3.5 Framework on them for the foreseeable future. Just to be clear, developers NEVER create deployable builds. Only the CC.NET machine produces builds that will be pushed as production builds. Any help?
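    For what it's worth: VS2010 solution and project files use the 4.0 tools version, so the 3.5 MSBuild generally cannot parse them. The usual workaround is to install the .NET 4 runtime on the build server, invoke the 4.0 MSBuild, and keep targeting 3.5. A hedged sketch of what the task might look like; the ToolsVersion attribute and the TargetFrameworkVersion override are the standard knobs, but this assumes .NET 4 is present on the CC.NET box.

        <!-- Sketch: build a VS2010 .sln under MSBuild 4.0 while still targeting .NET 3.5. -->
        <Target Name="Build" DependsOnTargets="Checkout">
          <MSBuild Projects="$(WorkingCheckout)\MySolution.sln"
                   Targets="Build"
                   ToolsVersion="4.0"
                   Properties="Configuration=$(Configuration);TargetFrameworkVersion=v3.5">
            <Output TaskParameter="TargetOutputs" ItemName="TargetOutputsItemName" />
          </MSBuild>
        </Target>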

    Read the article

  • Turing Model vs. Von Neumann Model

    - by Santhosh
    First, some background (based on my understanding). The Von Neumann architecture describes the stored-program computer, where instructions and data are stored in memory and the machine works by changing its internal state: an instruction operates on some data and modifies it. So inherently, there is state maintained in the system. The Turing machine architecture works by manipulating symbols on a tape: a tape with an infinite number of slots exists, and at any one point in time the Turing machine is at a particular slot. Based on the symbol read at that slot, the machine changes the symbol and moves to a different slot. All of this is deterministic. My questions are:

    1. Is there any relation between these two models (was the Von Neumann model based on, or inspired by, the Turing model)?
    2. Can we say that the Turing model is a superset of the Von Neumann model?
    3. Does functional programming fit into the Turing model? If so, how? (I assume FP does not lend itself nicely to the Von Neumann model.)
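    To make the state question concrete, here is a small illustrative sketch (not from the question) of a Turing-machine step written as a pure function in C#: the whole configuration is passed in and a fresh one is returned, nothing is modified in place, which is one way functional programming lines up more naturally with the Turing-style view than with in-place Von Neumann state.

        using System;
        using System.Collections.Generic;

        // A machine configuration: current state, tape contents, head position.
        sealed class Config
        {
            public readonly string State;
            public readonly Dictionary<int, char> Tape;
            public readonly int Head;
            public Config(string state, Dictionary<int, char> tape, int head)
            {
                State = state; Tape = tape; Head = head;
            }
        }

        static class Turing
        {
            // delta maps (state, readSymbol) to (nextState, symbolToWrite, headMove).
            public static Config Step(Config c,
                Func<string, char, Tuple<string, char, int>> delta)
            {
                char read = c.Tape.ContainsKey(c.Head) ? c.Tape[c.Head] : '_'; // '_' = blank
                Tuple<string, char, int> t = delta(c.State, read);
                var tape = new Dictionary<int, char>(c.Tape); // copy; the input is never mutated
                tape[c.Head] = t.Item2;
                return new Config(t.Item1, tape, c.Head + t.Item3);
            }
        }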

    Read the article

  • The RSA key container could not be opened

    - by TenaciousImpy
    Hi, I've been developing an ASP.NET site on an older machine running XP Home. I recently got a new Win 7 PC and moved all my project files across. When I try to run the project, I get this error message: "Failed to decrypt using provider 'MyRsaProtectedConfigurationProvider'. Error message from the provider: The RSA key container could not be opened." I realised that I had encrypted parts of my web.config file using RSA encryption, and this is where the problem now lies: I'm not sure how to get that key working again so that I can use it on my new machine. I exported the key from the older machine and imported it using:

        aspnet_regiis -pi "RSAProviderName" "C:\RSA_configkey.xml"

    This was imported successfully. I then ran the project, but the same error message came up. I figured it might be a permission thing, so I ran:

        aspnet_regiis -pa "RSAProviderName" "\Desktop" -full

    This was also successful, but I still get the error. From reading around, I've seen people use "ASPNET" instead of "\Desktop" (Desktop is my machine name). However, when I try to use "ASPNET", I get: "No mapping between account names and security IDs was done. (Exception from HRESULT: 0x80070534)" I can't work on the project until this is fixed, so any help is much appreciated. Thanks!
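    One likely snag: the ASPNET account only exists on XP; on Windows 7 the site typically runs as NETWORK SERVICE or an IIS application-pool identity, so the -pa grant has to name that account rather than ASPNET or the machine name. A sketch of the commands, with the caveat that the right account depends on the IIS configuration:

        REM Import the exported container, then grant the worker-process identity access.
        aspnet_regiis -pi "RSAProviderName" "C:\RSA_configkey.xml"
        aspnet_regiis -pa "RSAProviderName" "NT AUTHORITY\NETWORK SERVICE" -full
        REM On IIS 7.5 with application-pool identities, it would be something like:
        aspnet_regiis -pa "RSAProviderName" "IIS APPPOOL\DefaultAppPool" -full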

    Read the article

  • Using DPAPI / ProtectedData in a web farm environment with the User Store

    - by Lachman
    I was wondering if anyone had successfully used DPAPI with the user store in a web farm environment? Because our application was recently converted from 1.1 to 2.0 ASP.NET, we're using a custom wrapper which directly calls the CryptUnprotect methods, but this should be the same as the ProtectedData method available in the 2.0 framework. Because we are operating in a web farm environment, we can't guarantee that the machine that did the encryption is going to be the one decrypting it. (Also because machine failures shouldn't destroy our encrypted data.) So what we have is a serviced component that runs in a service under a particular user account on each one of our web boxes. This user is set up to have a roaming profile, as per the recommendation. The problem we have is that info encrypted on one machine cannot be decrypted on another; this fails with the Win32 error "Key not valid for use in specified state". I suspect that this is because I've made a mistake by having the encryption service running as the user on multiple machines, hence keeping the user logged in on more than one machine at the same time. If this is the problem, how are others using DPAPI with the user store in a web farm environment?
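    For reference, the 2.0-framework wrapper mentioned above is System.Security.Cryptography.ProtectedData (in System.Security.dll); DataProtectionScope.CurrentUser selects the user store. A minimal sketch follows; with CurrentUser scope, decrypting on a different machine only works if the same user profile, and with it the DPAPI master keys, has roamed there intact.

        using System.Security.Cryptography;
        using System.Text;

        static class DpapiSample
        {
            // Encrypt with the user store: the key derives from the calling user's profile.
            public static byte[] Protect(string secret)
            {
                return ProtectedData.Protect(
                    Encoding.UTF8.GetBytes(secret),
                    null,                             // no additional entropy
                    DataProtectionScope.CurrentUser); // user store, not machine store
            }

            // Decryption succeeds only where the same user's DPAPI master keys are available.
            public static string Unprotect(byte[] blob)
            {
                byte[] plain = ProtectedData.Unprotect(
                    blob, null, DataProtectionScope.CurrentUser);
                return Encoding.UTF8.GetString(plain);
            }
        }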

    Read the article

< Previous Page | 147 148 149 150 151 152 153 154 155 156 157 158  | Next Page >