Search Results

Search found 28957 results on 1159 pages for 'single instance'.


  • Cloud Computing Pricing - It's like a Hotel

    - by BuckWoody
    I normally don't go into the economics or pricing side of Distributed Computing, but I've had a few friends that have been surprised by a bill lately and I wanted to quickly address at least one aspect of it. Most folks are used to buying software and owning it outright - like buying a car. We pay a lot for the car, and then we use it whenever we want. We think of the "cloud" services as a taxi - we'll just pay for the ride we take and no more. But it's not quite like that. It's actually more like a hotel.

    When you subscribe to Azure using a free offering like the MSDN subscription, you don't have to pay anything for the service. But when you create an instance of a Web or Compute Role, Storage, that sort of thing, you can think of it as checking into a hotel room. You get the key, you pay for the room. For Azure, using bandwidth, CPU and so on is billed just like it states in the Azure Portal. So in effect there is a cost for the service and then a cost to use it, like water or power or any other utility.

    Where this bit some folks is that they created an instance, played around with it, and then left it running. No one was using it, no one was on - so they thought they wouldn't be charged. But they were. It wasn't much, but it was a surprise. They had the hotel room key, but they weren't in the room, so to speak. To add to their frustration, they had to talk to someone on the phone to cancel the account. I understand the frustration. Although we have all this spelled out in the sign-up area, not everyone has the time to read through all that. I get that. So why not make this easier?

    As an explanation, we bill for that time because the instance is still running, and we have to tie up resources to be available the second you want them, and that costs money. As far as being able to cancel from the portal, that's also something that needs to be clearer. You may not be aware that you can spin up instances using code - and so cancelling from the Portal would allow you to do the same thing. Since a mistake in code could erase all of your instances and the account, we make you call to make sure you're you and you really want to take it down. Not a perfect system by any means, but we'll evolve this as time goes on. For now, I wanted to make sure you're aware of what you should do.

    By the way, you don't have to cancel your whole account to avoid being billed. Just delete the instance from the portal and you won't be charged. You don't have to call anyone for that. And just FYI - you can download the SDK for Azure and never even hit the online version at all for learning and playing around. No sign-up, no credit card, PO, nothing like that. In fact, that's how I demo Azure all the time. Everything runs right on your laptop in an emulated environment.

    Read the article

  • ConcurrentDictionary<TKey,TValue> used with Lazy<T>

    - by Reed
    In a recent thread on the MSDN forum for the TPL, Stephen Toub suggested mixing ConcurrentDictionary<T,U> with Lazy<T>. This provides a fantastic model for creating a thread-safe dictionary of values where the construction of the value type is expensive. This is an incredibly useful pattern for many operations, such as value caches.

    The ConcurrentDictionary<TKey, TValue> class was added in .NET 4, and provides a thread-safe, lock-free collection of key value pairs. While this is a fantastic replacement for Dictionary<TKey, TValue>, it has a potential flaw when used with values where construction of the value class is expensive. The typical way this is used is to call a method such as GetOrAdd to fetch or add a value to the dictionary. It handles all of the thread safety for you, but as a result, if two threads call this simultaneously, two instances of TValue can easily be constructed. If TValue is very expensive to construct, or worse, has side effects if constructed too often, this is less than desirable. While you can easily work around this with locking, Stephen Toub provided a very clever alternative – using Lazy<TValue> as the value in the dictionary instead.

    This looks like the following. Instead of calling:

      MyValue value = dictionary.GetOrAdd( key, () => new MyValue(key));

    We would instead use a ConcurrentDictionary<TKey, Lazy<TValue>>, and write:

      MyValue value = dictionary.GetOrAdd( key, () => new Lazy<MyValue>( () => new MyValue(key))) .Value;

    This simple change dramatically changes how the operation works. Now, if two threads call this simultaneously, instead of constructing two MyValue instances, we construct two Lazy<MyValue> instances. However, the Lazy<T> class is very cheap to construct. Unlike “MyValue”, we can safely afford to construct this twice and “throw away” one of the instances. We then call Lazy<T>.Value at the end to fetch our “MyValue” instance. At this point, GetOrAdd will always return the same instance of Lazy<MyValue>. Since Lazy<T> doesn’t construct the MyValue instance until requested, the actual MyValue instance returned is only constructed once.

    Read the article

  • XNA WP7 Texture memory and ContentManager

    - by jlongstreet
    I'm trying to get my WP7 XNA game's memory under control and under the 90MB limit for submission. One target I identified was UI textures, especially fullscreen ones, that would never get unloaded. So I moved UI texture loads to their own ContentManager so I can unload them. However, this doesn't seem to affect the value of Microsoft.Phone.Info.DeviceExtendedProperties.GetValue("ApplicationCurrentMemoryUsage"), so it doesn't look like the memory is actually being released. Example: splash screens. In Game.LoadContent(): Application.Instance.SetContentManager("UI"); // set the current content manager for (int i = 0; i < mSplashTextures.Length; ++i) { // load the splash textures mSplashTextures[i] = Application.Instance.Content.Load<Texture2D>(msSplashTextureNames[i]); } // set the content manager back to the global one Application.Instance.SetContentManager("Global"); When the initial load is done and the title screen starts, I do: Application.Instance.GetContentManager("UI").Unload(); The three textures take about 6.5 MB of memory. Before unloading the UI ContentManager, I see that ApplicationCurrentMemoryUsage is at 34.29 MB. After unloading the ContentManager (and doing a full GC.Collect()), it's still at 34.29 MB. But after that, I load another fullscreen texture (for the title screen background) and memory usage still doesn't change. Could it be keeping the memory for these textures allocated and reusing it? edit: very simple test: protected override void LoadContent() { // Create a new SpriteBatch, which can be used to draw textures. spriteBatch = new SpriteBatch(GraphicsDevice); PrintMemUsage("Before texture load: "); // TODO: use this.Content to load your game content here red = this.Content.Load<Texture2D>("Untitled"); PrintMemUsage("After texture load: "); } private void PrintMemUsage(string tag) { Debug.WriteLine(tag + Microsoft.Phone.Info.DeviceExtendedProperties.GetValue("ApplicationCurrentMemoryUsage")); } protected override void Update(GameTime gameTime) { // Allows the game to exit if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed) this.Exit(); // TODO: Add your update logic here if (count++ == 100) { GC.Collect(); GC.WaitForPendingFinalizers(); PrintMemUsage("Before CM unload: "); this.Content.Dispose(); GC.Collect(); GC.WaitForPendingFinalizers(); PrintMemUsage("After CM unload: "); red = null; spriteBatch.Dispose(); GC.Collect(); GC.WaitForPendingFinalizers(); PrintMemUsage("After SpriteBatch Dispose(): "); } base.Update(gameTime); } protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); // TODO: Add your drawing code here if (red != null) { spriteBatch.Begin(); spriteBatch.Draw(red, new Vector2(0,0), Color.White); spriteBatch.End(); } base.Draw(gameTime); } This prints something like (it changes every time): Before texture load: 7532544 After texture load: 10727424 Before CM unload: 9875456 After CM unload: 9953280 After SpriteBatch Dispose(): 9953280

    Read the article

  • Using WKA in Large Coherence Clusters (Disabling Multicast)

    - by jpurdy
    Disabling hardware multicast (by configuring well-known addresses, aka WKA) will place significant stress on the network. For messages that must be sent to multiple servers, rather than having a server send a single packet to the switch and having the switch broadcast that packet to the rest of the cluster, the server must send a packet to each of the other servers. While hardware varies significantly, consider that a server with a single gigabit connection can send at most ~70,000 packets per second. To continue with some concrete numbers, in a cluster with 500 members, that means that each server can send at most 140 cluster-wide messages per second. And if there are 10 cluster members on each physical machine, that number shrinks to 14 cluster-wide messages per second (or with only mild hyperbole, roughly zero). It is also important to keep in mind that network I/O is not only expensive in terms of the network itself, but also in the consumption of CPU required to send (or receive) a message (due to things like copying the packet bytes, processing an interrupt, etc).

    Fortunately, Coherence is designed to rely primarily on point-to-point messages, but there are some features that are inherently one-to-many:

      - Announcing the arrival or departure of a member
      - Updating partition assignment maps across the cluster
      - Creating or destroying a NamedCache
      - Invalidating a cache entry from a large number of client-side near caches
      - Distributing a filter-based request across the full set of cache servers (e.g. queries, aggregators and entry processors)
      - Invoking clear() on a NamedCache

    The first few of these are operations that are primarily routed through a single senior member, and also occur infrequently, so they usually are not a primary consideration. There are cases, however, where the load from introducing new members can be substantial (to the point of destabilizing the cluster). Consider the case where the cluster in the first paragraph grows from 500 members to 1000 members (holding the number of physical machines constant). During this period, there will be 500 new member introductions, each of which may consist of several cluster-wide operations (for the cluster membership itself as well as the partitioned cache services, replicated cache services, invocation services, management services, etc). Note that all of these introductions will route through that one senior member, which is sharing its network bandwidth with several other members (which will be communicating to a lesser degree with other members throughout this process). While each service may have a distinct senior member, there's a good chance during initial startup that a single member will be the senior for all services (if those services start on the senior before the second member joins the cluster). It's obvious that this could cause CPU and/or network starvation.

    In the current release of Coherence (3.7.1.3 as of this writing), the pure unicast code path also has less sophisticated flow-control for cluster-wide messages (compared to the multicast-enabled code path), which may also result in significant heap consumption on the senior member's JVM (from the message backlog). This is almost never a problem in practice, but with sufficient CPU or network starvation, it could become critical.

    For the non-operational concerns (near caches, queries, etc), the application itself will determine how much load is placed on the cluster. Applications intended for deployment in a pure unicast environment should be careful to avoid excessive dependence on these features. Even in an environment with multicast support, these operations may scale poorly, since even with a constant request rate, the underlying workload will increase at roughly the same rate as the underlying resources are added. Unless there is an infrastructural requirement to the contrary, multicast should be enabled. If it can't be enabled, care should be taken to ensure the added overhead doesn't lead to performance or stability issues. This is particularly crucial in large clusters.
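
    A quick back-of-the-envelope sketch of the arithmetic above (not from the original post; the 70,000 packets/second and the cluster sizes are the post's assumed ballpark figures, not measurements):

      public class WkaFanOutEstimate {
          public static void main(String[] args) {
              double packetsPerSecondPerNic = 70_000;  // rough ceiling for a single gigabit connection
              int clusterMembers = 500;
              int membersPerMachine = 10;

              // Without multicast, one cluster-wide message costs roughly one unicast
              // packet per other cluster member.
              double packetsPerClusterWideMessage = clusterMembers - 1;

              double messagesPerSecondPerNic = packetsPerSecondPerNic / packetsPerClusterWideMessage;
              double messagesPerSecondPerMember = messagesPerSecondPerNic / membersPerMachine;

              System.out.printf("~%.0f cluster-wide messages/s per NIC, ~%.0f per member when %d members share a machine%n",
                      messagesPerSecondPerNic, messagesPerSecondPerMember, membersPerMachine);
          }
      }

    Doubling the member count (as in the 500-to-1000 example) roughly halves these figures again, on top of the one-off member-introduction traffic described above.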

    Read the article

  • AWS Amazon EC2 - password-less SSH login for non-root users using PEM keypairs

    - by Mark White
    We've got a couple of clusters running on AWS (HAProxy/Solr, PGPool/PostgreSQL) and we've set up scripts to allow new slave instances to be auto-included into the clusters by adding their IPs to config files held on S3, then SSHing to the master instance to kick them to download the revised config and restart the service. It's all working nicely, but in testing we're using our master pem for SSH, which means it needs to be stored on an instance. Not good. I want a non-root user that can use an AWS keypair and will have sudo access to run the download-config-and-restart scripts, but nothing else. rbash seems to be the way to go, but I understand this can be insecure unless set up correctly. So what security holes are there in this approach:

      - New AWS keypair created for user.pem (not really called 'user')
      - New user on instances: user
      - Public key for user is in ~user/.ssh/authorized_keys (taken by creating new instance with user.pem, and copying it from /root/.ssh/authorized_keys)
      - Private key for user is in ~user/.ssh/user.pem
      - 'user' has login shell of /home/user/bin/rbash
      - ~user/bin/ contains symbolic links to /bin/rbash and /usr/bin/sudo
      - /etc/sudoers has entry "user ALL=(root) NOPASSWD:
      - ~user/.bashrc sets PATH to /home/user/bin/ only
      - ~user/.inputrc has 'set disable-completion on' to prevent double tabbing from 'sudo /' to find paths.
      - ~user/ -R is owned by root with read-only access to user, except for ~user/.ssh which has write access for user (for writing known_hosts), and ~user/bin/* which are +x
      - Inter-instance communication uses 'ssh -o StrictHostKeyChecking=no -i ~user/.ssh/user.pem user@ sudo '

    Any thoughts would be welcome. Mark...

    Read the article

  • GIT repository layout for server with multiple projects

    - by Paul Alexander
    One of the things I like about the way I have Subversion set up is that I can have a single main repository with multiple projects. When I want to work on a project I can check out just that project. Like this:

      \main
        \ProductA
        \ProductB
        \Shared

    then: svn checkout http://.../main/ProductA

    As a new user to git I want to explore a bit of best practice in the field before committing to a specific workflow. From what I've read so far, git stores everything in a single .git folder at the root of the project tree. So I could do one of two things:

      - Set up a separate project for each Product.
      - Set up a single massive project and store products in sub folders.

    There are dependencies between the products, so the single massive project seems appropriate. We'll be using a server where all the developers can share their code. I've already got this working over SSH & HTTP and that part I love. However, the repositories in SVN are already many GB in size so dragging around the entire repository on each machine seems like a bad idea - especially since we're billed for excessive network bandwidth. I'd imagine that the Linux kernel project repositories are equally large so there must be a proper way of handling this with Git but I just haven't figured it out yet. Are there any guidelines or best practices for working with very large multi-project repositories?

    Read the article

  • SQL Server Express 2008 R2 Installation error at Windows 7

    - by Shai Sherman
    Hello, I created install script that will install SQL Server 2008 R2 on windows XP SP3, windows vista and windows 7. One of the command that i used in the installation is for silent installation of SQL Server 2008 R2. When i install it on windows XP everything works just fine but when i try to install it on Windows 7 i get an error. What am I doing wrong? Here is the command line that i use: "Setup.exe /ConfigurationFile=Mysetup.ini" Mysetup.ini file: -------------------------------------Start of ini file --------------------------------- ;SQL SERVER 2008 R2 Configuration File ;Version 1.0, 5 May 2010 ; [SQLSERVER2008] ; Specify the Instance ID for the SQL Server features you have specified. SQL Server directory structure, registry structure, and service names will reflect the instance ID of the SQL Server instance. INSTANCEID="MSSQLSERVER" ; Specifies a Setup work flow, like INSTALL, UNINSTALL, or UPGRADE. This is a required parameter. ACTION="Install" ; Specifies features to install, uninstall, or upgrade. The list of top-level features include SQL, AS, RS, IS, and Tools. The SQL feature will install the database engine, replication, and full-text. The Tools feature will install Management Tools, Books online, Business Intelligence Development Studio, and other shared components. FEATURES=SQLENGINE ; Displays the command line parameters usage HELP="False" ; Specifies that the detailed Setup log should be piped to the console. INDICATEPROGRESS="False" ; Setup will not display any user interface. QUIET="False" ; Setup will display progress only without any user interaction. QUIETSIMPLE="True" ; Specifies that Setup should install into WOW64. This command line argument is not supported on an IA64 or a 32-bit system. ;X86="False" ; Specifies the path to the installation media folder where setup.exe is located. ;MEDIASOURCE="z:\" ; Detailed help for command line argument ENU has not been defined yet. ENU="True" ; Parameter that controls the user interface behavior. Valid values are Normal for the full UI, and AutoAdvance for a simplied UI. ; UIMODE="Normal" ; Specify if errors can be reported to Microsoft to improve future SQL Server releases. Specify 1 or True to enable and 0 or False to disable this feature. ERRORREPORTING="False" ; Specify the root installation directory for native shared components. ;INSTALLSHAREDDIR="D:\Program Files\Microsoft SQL Server" ; Specify the root installation directory for the WOW64 shared components. ;INSTALLSHAREDWOWDIR="D:\Program Files (x86)\Microsoft SQL Server" ; Specify the installation directory. ;INSTANCEDIR="D:\Program Files\Microsoft SQL Server" ; Specify that SQL Server feature usage data can be collected and sent to Microsoft. Specify 1 or True to enable and 0 or False to disable this feature. SQMREPORTING="False" ; Specify a default or named instance. MSSQLSERVER is the default instance for non-Express editions and SQLExpress for Express editions. This parameter is required when installing the SQL Server Database Engine (SQL), Analysis Services (AS), or Reporting Services (RS). INSTANCENAME="SQLEXPRESS" SECURITYMODE=SQL SAPWD=SystemAdmin ; Agent account name AGTSVCACCOUNT="NT AUTHORITY\NETWORK SERVICE" ; Auto-start service after installation. AGTSVCSTARTUPTYPE="Manual" ; Startup type for Integration Services. ;ISSVCSTARTUPTYPE="Automatic" ; Account for Integration Services: Domain\User or system account. ;ISSVCACCOUNT="NT AUTHORITY\NetworkService" ; Controls the service startup type setting after the service has been created. 
;ASSVCSTARTUPTYPE="Automatic" ; The collation to be used by Analysis Services. ;ASCOLLATION="Latin1_General_CI_AS" ; The location for the Analysis Services data files. ;ASDATADIR="Data" ; The location for the Analysis Services log files. ;ASLOGDIR="Log" ; The location for the Analysis Services backup files. ;ASBACKUPDIR="Backup" ; The location for the Analysis Services temporary files. ;ASTEMPDIR="Temp" ; The location for the Analysis Services configuration files. ;ASCONFIGDIR="Config" ; Specifies whether or not the MSOLAP provider is allowed to run in process. ;ASPROVIDERMSOLAP="1" ; A port number used to connect to the SharePoint Central Administration web application. ;FARMADMINPORT="0" ; Startup type for the SQL Server service. SQLSVCSTARTUPTYPE="Automatic" ; Level to enable FILESTREAM feature at (0, 1, 2 or 3). FILESTREAMLEVEL="0" ; Set to "1" to enable RANU for SQL Server Express. ENABLERANU="1" ; Specifies a Windows collation or an SQL collation to use for the Database Engine. SQLCOLLATION="SQL_Latin1_General_CP1_CI_AS" ; Account for SQL Server service: Domain\User or system account. SQLSVCACCOUNT="NT Authority\System" ; Default directory for the Database Engine user databases. ;SQLUSERDBDIR="K:\Microsoft SQL Server\MSSQL\Data" ; Default directory for the Database Engine user database logs. ;SQLUSERDBLOGDIR="L:\Microsoft SQL Server\MSSQL\Data\Logs" ; Directory for Database Engine TempDB files. ;SQLTEMPDBDIR="T:\Microsoft SQL Server\MSSQL\Data" ; Directory for the Database Engine TempDB log files. ;SQLTEMPDBLOGDIR="T:\Microsoft SQL Server\MSSQL\Data\Logs" ; Provision current user as a Database Engine system administrator for SQL Server 2008 R2 Express. ADDCURRENTUSERASSQLADMIN="True" ; Specify 0 to disable or 1 to enable the TCP/IP protocol. TCPENABLED="1" ; Specify 0 to disable or 1 to enable the Named Pipes protocol. NPENABLED="0" ; Startup type for Browser Service. BROWSERSVCSTARTUPTYPE="Automatic" ; Specifies how the startup mode of the report server NT service. When ; Manual - Service startup is manual mode (default). ; Automatic - Service startup is automatic mode. ; Disabled - Service is disabled ;RSSVCSTARTUPTYPE="Automatic" ; Specifies which mode report server is installed in. ; Default value: “FilesOnly” ;RSINSTALLMODE="FilesOnlyMode" ; Accept SQL Server 2008 R2 license terms IACCEPTSQLSERVERLICENSETERMS="TRUE" ;setup.exe /CONFIGURATIONFILE=Mysetup.ini /INDICATEPROGRESS --------------------------- End of ini file ------------------------------------- And i get this error: 2010-08-31 18:05:53 Slp: Error result: -2068119551 2010-08-31 18:05:53 Slp: Result facility code: 1211 2010-08-31 18:05:53 Slp: Result error code: 1 2010-08-31 18:05:53 Slp: Sco: Attempting to create base registry key HKEY_LOCAL_MACHINE, machine 2010-08-31 18:05:53 Slp: Sco: Attempting to open registry subkey 2010-08-31 18:05:53 Slp: Sco: Attempting to open registry subkey Software\Microsoft\PCHealth\ErrorReporting\DW\Installed 2010-08-31 18:05:53 Slp: Sco: Attempting to get registry value DW0200 2010-08-31 18:05:53 Slp: Submitted 1 of 1 failures to the Watson data repository What the meaning of this? What do i need to do to fix that problem? Here is the Summary file: Overall summary: Final result: SQL Server installation failed. To continue, investigate the reason for the failure, correct the problem, uninstall SQL Server, and then rerun SQL Server Setup. Exit code (Decimal): -2068119551 Exit facility code: 1211 Exit error code: 1 Exit message: SQL Server installation failed. 
To continue, investigate the reason for the failure, correct the problem, uninstall SQL Server, and then rerun SQL Server Setup. Start time: 2010-08-31 18:03:44 End time: 2010-08-31 18:05:51 Requested action: Install Log with failure: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20100831_180236\Detail.txt Exception help link: http%3a%2f%2fgo.microsoft.com%2ffwlink%3fLinkId%3d20476%26ProdName%3dMicrosoft%2bSQL%2bServer%26EvtSrc%3dsetup.rll%26EvtID%3d50000%26ProdVer%3d10.50.1600.1%26EvtType%3d0x6121810A%400xC24842DB Machine Properties: Machine name: NVR Machine processor count: 2 OS version: Windows 7 OS service pack: OS region: United States OS language: English (United States) OS architecture: x86 Process architecture: 32 Bit OS clustered: No Product features discovered: Product Instance Instance ID Feature Language Edition Version Clustered Package properties: Description: SQL Server Database Services 2008 R2 ProductName: SQL Server 2008 R2 Type: RTM Version: 10 SPLevel: 0 Installation location: C:\Disk1\setupsql\x86\setup\ Installation edition: EXPRESS User Input Settings: ACTION: Install ADDCURRENTUSERASSQLADMIN: True AGTSVCACCOUNT: NT AUTHORITY\NETWORK SERVICE AGTSVCPASSWORD: * AGTSVCSTARTUPTYPE: Disabled ASBACKUPDIR: Backup ASCOLLATION: Latin1_General_CI_AS ASCONFIGDIR: Config ASDATADIR: Data ASDOMAINGROUP: ASLOGDIR: Log ASPROVIDERMSOLAP: 1 ASSVCACCOUNT: ASSVCPASSWORD: * ASSVCSTARTUPTYPE: Automatic ASSYSADMINACCOUNTS: ASTEMPDIR: Temp BROWSERSVCSTARTUPTYPE: Automatic CONFIGURATIONFILE: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20100831_180236\ConfigurationFile.ini CUSOURCE: ENABLERANU: True ENU: True ERRORREPORTING: False FARMACCOUNT: FARMADMINPORT: 0 FARMPASSWORD: * FEATURES: SQLENGINE FILESTREAMLEVEL: 0 FILESTREAMSHARENAME: FTSVCACCOUNT: FTSVCPASSWORD: * HELP: False IACCEPTSQLSERVERLICENSETERMS: True INDICATEPROGRESS: False INSTALLSHAREDDIR: C:\Program Files\Microsoft SQL Server\ INSTALLSHAREDWOWDIR: C:\Program Files\Microsoft SQL Server\ INSTALLSQLDATADIR: INSTANCEDIR: C:\Program Files\Microsoft SQL Server\ INSTANCEID: MSSQLSERVER INSTANCENAME: SQLEXPRESS ISSVCACCOUNT: NT AUTHORITY\NetworkService ISSVCPASSWORD: * ISSVCSTARTUPTYPE: Automatic NPENABLED: 0 PASSPHRASE: * PCUSOURCE: PID: * QUIET: False QUIETSIMPLE: True ROLE: AllFeatures_WithDefaults RSINSTALLMODE: FilesOnlyMode RSSVCACCOUNT: NT AUTHORITY\NETWORK SERVICE RSSVCPASSWORD: * RSSVCSTARTUPTYPE: Automatic SAPWD: * SECURITYMODE: SQL SQLBACKUPDIR: SQLCOLLATION: SQL_Latin1_General_CP1_CI_AS SQLSVCACCOUNT: NT Authority\System SQLSVCPASSWORD: * SQLSVCSTARTUPTYPE: Automatic SQLSYSADMINACCOUNTS: SQLTEMPDBDIR: SQLTEMPDBLOGDIR: SQLUSERDBDIR: SQLUSERDBLOGDIR: SQMREPORTING: False TCPENABLED: 1 UIMODE: AutoAdvance X86: False Configuration file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20100831_180236\ConfigurationFile.ini Detailed results: Feature: Database Engine Services Status: Failed: see logs for details MSI status: Passed Configuration status: Failed: see details below Configuration error code: 0x0A2FBD17@1211@1 Configuration error description: The process cannot access the file because it is being used by another process. Configuration log: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20100831_180236\Detail.txt Rules with failures: Global rules: Scenario specific rules: Rules report file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20100831_180236\SystemConfigurationCheck_Report.htm What should I do and why does this problem occur? 
Thanks, Shai.

    Read the article

  • Suggestions for displaying code on webpages, MUST use <br> for newline

    - by bguiz
    Hi, I want to post code snippets online (wordpress.com blog) - and have their whitespace formatted nicely. See the answers suggested by this other SO question: Those would be OK, except that I like to copy code to the clipboard or clip entire pages using Evernote - and they use either the <pre> tag or <table> (or both) to format the code. So I end up with text whose newlines and white spaces are ignored, e.g. string url = "<a href=\"" + someObj.getUrl() + "\" target=\"_blank\">"; // single line comments // second single line override protected void OnLoad(EventArgs e) { if(Attributes["class"] != null) { //_year.CssClass = _month.CssClass = _day.CssClass = Attributes["class"]; } base.OnLoad(e); } Which I find rather annoying myself. I find that if the code was formatted using <br> tags, they copy/clip properly, e.g. string url = "<a href=\"" + someObj.getUrl() + "\" target=\"_blank\">"; // single line comments // second single line override protected void OnLoad(EventArgs e) { if(Attributes["class"] != null) { //_year.CssClass = _month.CssClass = _day.CssClass = Attributes["class"]; } base.OnLoad(e); } I find this annoying myself, so I don't want to inflict it upon others when I post my own code. Please suggest methods of posting code snippets online that are able to do this. I would like to emphasise that syntax highlighting capability is secondary to correct white space markup. Thank you

    Read the article

  • jQuery .each() with multiple selectors - skip, then .empty() elements

    - by joe
    I'd like to remove all matching elements, but skip the first instance of each match: // Works as expected: removes all but first instance of .a jQuery ('.a', '#scope') .each ( function (i) { if (i > 0) jQuery (this).empty(); }); // Expected: removal of all but first instance of .a and .b // Result: removal of *all* instances of both .a and .b jQuery ('.a, .b', '#scope') .each ( function (i) { if (i > 1) jQuery (this).empty(); }); <div id="scope"> <!-- Want to keep the first instance of .a and .b --> <div class="a">[bla]</div> <div class="b">[bla]</div> <!-- Want to remove all the others --> <div class="a">[bla]</div> <div class="b">[bla]</div> <div class="a">[bla]</div> <div class="b">[bla]</div> ... </div> Any suggestions?

      - Using jQuery() rather than $() because of conflict with 'legacy' code
      - Using .empty() because .a contains JS I'd like to disable
      - Stuck with jQuery 1.2.3

    Thank you!

    Read the article

  • IIS 7 can't connect to SQLServer 2008

    - by Nicolas Cadilhac
    Sorry if this is the most seen question on the web, but this is my turn. I am trying to publish my asp.net mvc app on IIS 7 under MS Sql Server 2008. I am on a Windows Server 2008 virtual machine. I get the following classical error: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) Under SQLServer, Allow remote connections is checked. My connection string is: Data Source=.\MSSQLSERVER;Initial Catalog=mydbname;User Id=sa;Password=mypassword I also tried with no username/password and "Integrated Security=true". There is only one instance of SQLServer installed. I tried to access my web page locally and remotely. There is no active firewall on the virtual machine. Thx for your help.

    Read the article

  • How do you keep your business rules DRY?

    - by Mario
    I periodically ponder how to best design an application whose every business rule exists in just a single location. (While I know there is no proverbial “best way” and that designs are situational, people must have a leaning toward one practice or another.) I work for a shop where they prefer to house as much of the business rules as possible in the database. This requires developers in many cases to perform identical front-end validations to avoid sending data to the database that will result in an exception—not very DRY. It grates me anytime I find myself duplicating any kind of logic—even lowly validation logic. I am a single-point-of-truth purist to an anal degree. On the other end of the spectrum, I know of shops that create dumb databases (the Rails community leans in this direction) and handle all of the business logic in a separate tier (in Rails the models would house “most” of this). Note the word “most” which implies that some business logic does end up spilling into other places (in Rails it might spill over into the controllers). In a way, a clean separation of concerns where all business logic exists in a single core location is a Utopian fantasy that’s hard to uphold (n-tiered architecture or not). Furthermore, I see the “Database as a fortress” view and would agree that it should be built on constraints that cause it to reject bad data. As such, I hold principles that cause a degree of angst as I attempt to balance them. How do you balance the database-as-a-fortress view with the desire to have a single-point-of-truth?

    Read the article

  • Call asp.net mvc Html Helper within custom html helper with expression parameter

    - by Frank Michael Kraft
    I am trying to write an html helper extension within the asp.net mvc framework. public static MvcHtmlString PlatformNumericTextBoxFor<TModel>(this HtmlHelper instance, TModel model, Expression<Func<TModel,double>> selector) where TModel : TableServiceEntity { var viewModel = new PlatformNumericTextBox(); var func = selector.Compile(); MemberExpression memExpession = (MemberExpression)selector.Body; string name = memExpession.Member.Name; var message = instance.ValidationMessageFor<TModel, double>(selector); viewModel.name = name; viewModel.value = func(model); viewModel.validationMessage = String.Empty; var result = instance.Partial(typeof(PlatformNumericTextBox).Name, viewModel); return result; } The line var message = instance.ValidationMessageFor<TModel, double>(selector); has a syntax error. But I do not understand it. The error is: Error 2: "System.Web.Mvc.HtmlHelper" does not contain a definition for "ValidationMessageFor", and the best overloaded extension method "System.Web.Mvc.Html.ValidationExtensions.ValidationMessageFor(System.Web.Mvc.HtmlHelper, System.Linq.Expressions.Expression)" has some invalid arguments. C:\Projects\WorkstreamPlatform\WorkstreamPlatform_WebRole\Extensions\PlatformHtmlHelpersExtensions.cs 97 27 WorkstreamPlatform_WebRole So according to the message, the parameter is invalid. But the method is actually declared like this: public static MvcHtmlString ValidationMessageFor<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression); So actually it should work.

    Read the article

  • Amazon EC2 RSA key stopped authenticating - Permission denied (publickey)

    - by shedd
    Authenticating to our Ubuntu EC2 instance worked fine until a little while ago. All of a sudden, the key is being rejected. When we create a new instance with the keypair, we're able to connect to the instance perfectly, so it appears to be an issue with the existing instance. Port 22 is open. Any suggestions on what to look at from a configuration standpoint so we can fix this? Any thoughts on how we can get into the box? Here is the SSH debug output. Is there anything obviously amiss? Thanks so much! $ ssh -v -i ~/zzz.pem ubuntu@###.###.###.### OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009 debug1: Reading configuration data /etc/ssh_config debug1: Connecting to ###.###.###.### [###.###.###.###] port 22. debug1: Connection established. debug1: identity file zzz.pem type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-6ubuntu2 debug1: match: OpenSSH_5.1p1 Debian-6ubuntu2 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.2 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host '###.###.###.###' is known and matches the RSA host key. debug1: Found key in /zzz/.ssh/known_hosts:18 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Offering public key: /zzz/.ssh/id_rsa debug1: Authentications that can continue: publickey debug1: Offering public key: zzz.txt debug1: Authentications that can continue: publickey debug1: Trying private key: zzz.pem debug1: read PEM private key done: type RSA debug1: Authentications that can continue: publickey debug1: No more authentication methods to try. Permission denied (publickey).

    Read the article

  • externalizing junit stub objects.

    - by Ajay
    Hi! In my project we created stub files for JUnit tests in Java itself (as factories). However, we have to externalize these stubs. After seeing a number of serializers/deserializers, we settled on using XStream to serialize and deserialize these stub objects. XStream works like a charm. It's pretty good at what it claims to be. Previously, we had a single factory class, say AFactory, which produced all the stubs needed for testing different test cases. Now, when externalizing each of the generated stubs, we hit a roadblock. We had to create one XML file for each stub produced by the factory. For example: public final class AFactory{ public static A createStub1(){ /*Code here */} public static A createStub2(){ /*Code here */} public static A createStub3(){ /*Code here */} } Now, when trying to move these stubs to external files, we had to create one XML file for each stub created (A-stub1.xml, A-stub2.xml and A-stub3.xml). The problem with this approach is that it leads to a proliferation of XML stub files. I was thinking, how about keeping all the stubs related to a single bean class in a single XML file: <?xml version="1.0"?> <stubs class="A"> <stub id="stub1"> <!-- Here comes the externalized xml stub representation --> </stub> <stub id="stub2"> </stub> </stubs> Is there a framework which allows you to keep all the stubs in XML representation in a single XML file, as above? Or what do you guys suggest should be the right approach?
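
    For reference, a minimal, self-contained sketch of the XStream round trip described above (the bean and its fields are illustrative assumptions, not the project's real classes):

      import com.thoughtworks.xstream.XStream;

      public class StubSerializationSketch {

          // A trivial bean standing in for the real stub type "A".
          public static class A {
              String name;
              int value;
              A() { }
              A(String name, int value) { this.name = name; this.value = value; }
          }

          public static void main(String[] args) {
              XStream xstream = new XStream();
              xstream.alias("a", A.class);                  // emit <a> instead of the fully qualified class name
              xstream.allowTypes(new Class[] { A.class });  // needed by the security framework in newer XStream releases

              A stub = new A("stub1", 42);
              String xml = xstream.toXML(stub);               // serialize the stub (written to a file in practice)
              A restored = (A) xstream.fromXML(xml);          // read it back for use in a test

              System.out.println(xml);
              System.out.println(restored.name + " / " + restored.value);
          }
      }

    Each <stub> element in the proposed combined file would simply wrap one of these serialized forms, keyed by its id attribute.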

    Read the article

  • NSBundle loading a NSViewController

    - by Staros
    Hey all, I'm looking into a project that would upload custom NSBundles that include a NSViewController. In my main program I've got this code to deal with the bundle after it's been loaded... id principalClass = [loadedBundle principalClass]; id instance = [[principalClass alloc] init]; [localLabel setStringValue:[instance name]]; NSView *incomingView = [[instance viewController] view]; [localView addSubview:incomingView]; [localView display]; And the principal classes init method in the bundle looks like this... -(id) init { if(self = [super init]){ name = @"My Plugin"; viewController = [[ViewController alloc] initWithNibName:@"View" bundle:nil]; } return self; } View.nib is a nib located in the bundles project. But whenever I load the bundle I get this error... 2010-05-27 09:11:18.423 PluginLoader[45032:a0f] unable to find nib named: View in bundle path: (null) 2010-05-27 09:11:18.424 PluginLoader[45032:a0f] -[NSViewController loadView] could not load the "View" nib. I know I've got everything wired up because the line [label setStringValue:[instance name]]; sets the label text correctly. Plus, if I take all of the clases in the bundle and load them into my main applications project everything works as expect. Any thoughts on how I can correctly reference "View" in my bundle? Thanks!

    Read the article

  • JAXB: How to avoid repeated namespace definition for xmlns:xsi

    - by sfussenegger
    I have a JAXB setup where I use a @XmlJavaTypeAdapter to replace objects of type Person with objects of type PersonRef that only contains the person's UUID. This works perfectly fine. However, the generated XML redeclares the same namespace (xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance") every time it's used. While this is generally okay, it just doesn't feel right. How can I configure JAXB to declare xmlns:xsi at the very beginning of the document? Can I manually add namespace declarations to the root element? Here's an example of what I want to achieve:

    Current: <person uuid="6ec0cf24-e880-431b-ada0-a5835e2a565a"> <relation type="CHILD"> <to xsi:type="personRef" uuid="56a930c0-5499-467f-8263-c2a9f9ecc5a0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"/> </relation> <relation type="CHILD"> <to xsi:type="personRef" uuid="6ec0cf24-e880-431b-ada0-a5835e2a565a" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"/> </relation> <!-- SNIP: some more relations --> </person>

    Wanted: <person uuid="6ec0cf24-e880-431b-ada0-a5835e2a565a" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <relation type="CHILD"> <to xsi:type="personRef" uuid="56a930c0-5499-467f-8263-c2a9f9ecc5a0"/> </relation> <relation type="CHILD"> <to xsi:type="personRef" uuid="6ec0cf24-e880-431b-ada0-a5835e2a565a"/> </relation> <!-- SNIP: some more relations --> </person>
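
    For context, a rough sketch of the kind of adapter wiring described above (the class shapes and property names are assumptions; the original code is not shown in the post):

      import javax.xml.bind.annotation.XmlAttribute;
      import javax.xml.bind.annotation.XmlRootElement;
      import javax.xml.bind.annotation.adapters.XmlAdapter;
      import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;

      // The lightweight reference that should appear in the XML instead of a full Person.
      class PersonRef {
          @XmlAttribute
          public String uuid;
      }

      @XmlRootElement(name = "person")
      class Person {
          @XmlAttribute
          public String uuid;

          // Referenced persons are marshalled as a PersonRef instead of being inlined.
          @XmlJavaTypeAdapter(PersonRefAdapter.class)
          public Person relatedTo;
      }

      // XmlAdapter<ValueType, BoundType>: marshal() maps Person -> PersonRef for output;
      // unmarshal() would need a lookup by UUID to resolve the real Person, omitted here.
      class PersonRefAdapter extends XmlAdapter<PersonRef, Person> {
          @Override
          public PersonRef marshal(Person p) {
              PersonRef ref = new PersonRef();
              ref.uuid = p.uuid;
              return ref;
          }

          @Override
          public Person unmarshal(PersonRef ref) {
              Person p = new Person();
              p.uuid = ref.uuid;
              return p;
          }
      }

    In a setup like this, the xsi:type attribute emitted on each adapted element is what drags in the per-element xmlns:xsi declaration shown in the "Current" output, which is exactly the declaration the question wants hoisted to the root.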

    Read the article

  • Amazon EC2 and jbossws

    - by avjaz
    Hi - I've deployed a webservice to a Jboss instance running on Amazon EC2. The webservice works fine locally, but when I deploy on EC2, and go to the /jbossws/services page the Endpoint Address for the webservice is the private DNS of the ec2 instance (domU-X-X-X-X etc...), not the public dns (which I would like it to be). I've tried loading the wsdl by changing the private hostname to the public IP; that works, but when I try to call any of the operations I get a HostNotFoundException, I'm guessing due to the fact that the generated wsdl has the stanza: <service name='XXXService'> <port binding='tns:XXXBinding' name='XXXPort'> <soap:address location='http://domU-XX-XX-XX-XX-XX-XX.compute-1.internal:8080/xx/xx/xx'/> </port> </service> where http://domU-XX-XX-XX-XX-XX-XX.compute-1.internal is the internal dns of the ec2 instance. The wsdl is auto generated - Is there a JAXB annotation I can use so that I can force the generated wsdl to use the public dns of the EC2 instance? Many thanks -

    Read the article

  • Asynchronous NSURLConnection Throws EXC_BAD_ACCESS

    - by Sheehan Alam
    I'm not really sure why my code is throwing a EXC_BAD_ACCESS, I have followed the guidelines in Apple's documentation: -(void)getMessages:(NSString*)stream{ NSString* myURL = [NSString stringWithFormat:@"http://www.someurl.com"]; NSURLRequest *theRequest = [NSURLRequest requestWithURL:[NSURL URLWithString:myURL]]; NSURLConnection *theConnection=[[NSURLConnection alloc] initWithRequest:theRequest delegate:self]; if (theConnection) { receivedData = [[NSMutableData data] retain]; } else { NSLog(@"Connection Failed!"); } } And my delegate methods #pragma mark NSURLConnection Delegate Methods - (void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response { // This method is called when the server has determined that it // has enough information to create the NSURLResponse. // It can be called multiple times, for example in the case of a // redirect, so each time we reset the data. // receivedData is an instance variable declared elsewhere. [receivedData setLength:0]; } - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data { // Append the new data to receivedData. // receivedData is an instance variable declared elsewhere. [receivedData appendData:data]; } - (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error { // release the connection, and the data object [connection release]; // receivedData is declared as a method instance elsewhere [receivedData release]; // inform the user NSLog(@"Connection failed! Error - %@ %@", [error localizedDescription], [[error userInfo] objectForKey:NSErrorFailingURLStringKey]); } - (void)connectionDidFinishLoading:(NSURLConnection *)connection { // do something with the data // receivedData is declared as a method instance elsewhere NSLog(@"Succeeded! Received %d bytes of data",[receivedData length]); // release the connection, and the data object [connection release]; [receivedData release]; } I get an EXC_BAD_ACCESS on didReceiveData. Even if that method simply contains an NSLog, I get the error. Note: receivedData is an NSMutableData* in my header file

    Read the article

  • Multiple Out-of-Browser Applications in One Application

    - by Otaku
    I'm looking at a scenario where I need to create a single "master" Silverlight application and then add "child" applications for an out-of-browser Silverlight application. The scenario is something like this. A user will visit a gameboard web site and choose a game to play. Let's call it Checkers. He likes it, so then he installs the out-of-browser app to his desktop. He then finds Chess, and installs that too. For both games, while played on the site, he has stats (games played, win/loss records, etc.). For each game on the site, he navigates to a different page. But now he wants to play offline and view his stats and other cross-games information. He wants to have a single app to launch to play either game. From his single out-of-browser app, he sees that Go is also available, and he places a checkmark against it to download on his next connection. Does anyone have any experience at developing multiple out-of-browser Silverlight apps that reside within a single master app? What considerations need to be had for this type of design? How would this work in terms of install experience from different web pages?

    Read the article

  • In-document schema declarations and lxml

    - by shylent
    As per the official documentation of lxml, if one wants to validate an xml document against an xml schema document, one has to:

      - construct the XMLSchema object (basically, parse the schema document)
      - construct the XMLParser, passing the XMLSchema object as its schema argument
      - parse the actual xml document (instance document) using the constructed parser

    There can be variations, but the essence is pretty much the same no matter how you do it: the schema is specified 'externally' (as opposed to specifying it inside the actual xml document). If you follow this procedure, the validation occurs, sure enough, but if I understand it correctly, that completely ignores the whole idea of the schemaLocation and noNamespaceSchemaLocation attributes from xsi. This introduces a whole bunch of limitations, starting with the fact that you have to deal with the instance<-schema relation all by yourself (either store it externally or write some hack to retrieve the schema location from the root element of the instance document), you cannot validate the document using multiple schemata (say, when each schema governs its own namespace) and so on. So the question is: maybe I am missing something completely trivial or doing it wrong? Or are my statements about lxml's limitations regarding schema validation true? To recap, I'd like to be able to:

      - have the parser use the schema location declarations in the instance document at parse/validation time
      - use multiple schemata to validate an xml document
      - declare schema locations on non-root elements (not of extreme importance)

    Maybe I should look for a different library? Although that'd be a real shame: lxml is the de-facto xml processing library for Python and is regarded by everyone as the best one in terms of performance/features/convenience (and rightfully so, to a certain extent)

    Read the article

  • JavaScript and JQuery - Encoding HTML

    - by user70192
    Hello, I have a web page that has a textarea defined on it like so: <textarea id="myTextArea" rows="6" cols="75"></textarea> There is a chance that a user may enter single and double quotes in this field. For instance, I have been testing with the following string: Just testin' using single and double "quotes". I'm hoping the end of this task is comin'. Additionally, the user may enter HTML code, which I would prefer to prevent. Regardless, I am passing the contents of this textarea onto a web service. I must encode the contents of the textarea in JavaScript before I can send it on. Currently, I'm trying the following: var contents = $('<div/>').text($("#myTextArea").val()).html(); alert(contents); I was expecting contents to display Just testin&#39; using single and double &#34;quotes&#34;. I&#39;m hoping the end of this task is comin&#39;. Instead, the original string is printed out. Beyond just double and single quotes, there are a variety of entities to consider. Because of this, I was assuming there would be a way to encode HTML before passing it on. Can someone please tell me how to do this? Thank you,

    Read the article

  • Flex - Issues retrieving an XMLList

    - by BS_C3
    Hello! I'm having a problem retrieving a XMLList and I don't understand why. I have an application that is running properly. It uses some data from two xml files called division.xml and store.xml. I noticed that I have some data in division.xml that should be in store.xml, so I did a copy/paste of the data from one file to the other. This is the data I copied: <stores> <store> <odeis>101</odeis> <name></name> <password></password> <currency></currency> <currSymbol></currSymbol> </store> <store> <odeis>102</odeis> <name></name> <password></password> <currency></currency> <currSymbol></currSymbol> </store> </stores> In the application, I list all odeis codes and I need to retrieve the block store corresponding to the selected odeis code. Before moving the data into store.xml, this is how I retrieved the block: var node:XMLList = divisionData.division.(@name==HomePageData.instance.divisionName).stores.store.(odeis == HomePageData.instance.storeCodeOdeis) This is how I retrieve it after copying the data into store.xml: var node:XMLList = storeData.stores.(@name==HomePageData.instance.divisionName).store.(odeis == HomePageData.instance.storeCodeOdeis) And I'm currently getting the following error: ReferenceError: Error #1065: The variable odeis is not defined. Could anyone enlighten me? Cause I really have no clue of why it is not working... Thanks for any tips you can give. Regards, BS_C3

    Read the article

  • Help to understand the issue with protected method

    - by zeroed
    I'm reading the Sybex Complete Java 2 Certification Study Guide, April 2005 (ISBN 0782144195). This book is for Java developers who want to pass the Java certification. After a chapter about access modifiers (along with other modifiers) I found the following question (#17): True or false: If class Y extends class X, the two classes are in different packages, and class X has a protected method called abby(), then any instance of Y may call the abby() method of any other instance of Y. This question confused me a little. As far as I know you can call a protected method on any variable of the same class (or subclasses). You cannot call it on variables that are higher in the hierarchy than you (e.g. interfaces that you implement). For example, you cannot clone any object just because you inherit it. But the question says nothing about variable type, only about instance type. I was confused a little and answered "true". The answer in the book is False. An object that inherits a protected method from a superclass in a different package may call that method on itself but not on other instances of the same class. There is nothing here about variable type, only about instance type. This is very strange, I do not understand it. Can anybody explain what is going on here?

    Read the article

  • [Smalltalk] Store List of Instruction

    - by Luciano Lorenti
    Hi all, I have a design problem. I have a Drawer class which invokes a series of methods of a kind-of-brush class, and I have predefined shapes which I want to draw. Each shape uses a list of instance methods from the drawer. I can have more than one brush object. I want to add custom shapes at runtime in the drawer instance, specifying the list of methods of the new shape. I've created a class method for every predefined shape that returns a BlockClosure with the instructions. Obviously I have to give each BlockClosure the brush object as a parameter. I imagine a collection with all the BlockClosures in each instance of the Drawer class. Maybe I can inherit from SequenceableCollection and make an instruction collection. Each element of the collection is an instruction, and I give the brush object when I instantiate this new collection. I really don't know the best way to store these steps. (Maybe a shared variable?)

    Read the article

  • Wordpress thumbnail creation question

    - by Will Ashworth
    I'm trying to use Wordpress' built-in thumbnailing and image re-sizing in my Wordpress 2.9.2 installation. I'm trying to get various sizes (post listing/results 160x160 & "single.php" 618x150) and for some reason the single.php one works, but only half way. Not sure if I'm doing something wrong here. I have it working…sorta. I’m totally stuck and there seems to be a lack of documentation on the Codex for this feature so here goes. The small 160×160 thumbnail for article listings/search views works fine. It crops it, all’s groovy. The issue comes when I go to format the image for the single.php article details view. It crops, but then scales down even further for some reason. Screenshot: http://c1319072.cdn.cloudfiles.rackspacecloud.com/4-15-2010%204-56-46%20PM.png NOTE: every time I re-test this I’m completely deleting the image from the media section and re-uploading the image entirely. I also have the re-create thumbnails plugin so I know it’s not caching. Here is my code included in "functions.php". This will help in debugging. add_theme_support( 'post-thumbnails' ); set_post_thumbnail_size( 160, 160, true ); // Normal post thumbnails add_image_size( 'single-post-thumbnail', 618, 150, true ); // Permalink thumbnail size

    Read the article
