Search Results

Search found 25585 results on 1024 pages for 'multiple variables'.

Page 38/1024 | < Previous Page | 34 35 36 37 38 39 40 41 42 43 44 45  | Next Page >

  • Keep it Professional – Multiple Environments

    - by AjarnMark
    I have certainly been reading blogs a whole lot more than writing them the last several weeks, and it’s about time I got back to writing. I have been collecting several topics and references for blog posts…some of which will probably just never get written as the timeliness of the topics fades over time. Nonetheless, I’m back, and I think it is time to revive my Doing Business Right series, this time coming from the slant of managing a development team rather than the previous angle of being self-employed. First up: separating Dev, Test, and Prod.

    A few months ago, Colin Stasiuk (@BenchmarkIT) wrote a great post about separating your Dev, Test/UAT, and Prod environments. This post covers all the important points, such as removing Developer access from both PROD and UAT, and the importance of proper deployment (a.k.a. promotion) procedures. I won’t repeat it all here; go read the original! But what I do want to address is what I believe to be the #1 excuse people use for not having separate environments: Money. I discussed this briefly in my comment on Colin’s post at the time, but let me repeat it here and expand on it a bit.

    Don’t let the size of your company or the size of its budget dictate whether you do things professionally or not. I am convinced that most developers and development teams would agree that it is a best practice to have separate environments for development, testing, and production (a.k.a. Live). So why don’t they? Because they think that it means separate servers, which means more money. While having separate physical servers for the different environments would be ideal, it is not an absolute requirement in order to make this work. Here are a few ideas:

    Use multiple instances of SQL Server and multiple Web Sites with Host Headers or Ports. For no additional fees* you can install multiple instances of SQL Server on the same machine. This gives you a nice separation, allowing you to even use the same database names as will appear in PROD, yet isolating the data and security access. And in IIS, you can create multiple Web Sites on the same server just by using Host Headers or different port numbers to separate them. This approach does still pose the risk of non-Prod environments impacting performance on Prod, but when your application is busy enough for that to be a concern, you can probably afford one of the other options.

    Use desktop PCs instead of servers. Instead of investing in full server-grade hardware, you can mimic the separate environments on old desktop PCs and at least get functional equivalency, if not performance matching. The last time I checked, Microsoft did not require separate licensing for SQL Server if that installation was used exclusively for dev or test purposes*. There may be some version or performance differences between this approach and what you have in Prod, but you have isolated test from impacting Prod resources this way.

    Virtualization. This is of course one of the hot topics of the day, and I would be remiss if I did not suggest this. It is quite easy these days to set up virtual machines so that, again, your environments are fairly isolated from one another, and you retain all the security and procedural benefits of having separate environments.

    So the point is, keep your high professional standards intact. You don’t need to compromise on using proper procedure just because you work in a small company with a small budget. Keep doing things the right way!

    By the way, where I work, our DEV environment is not on a server. All development is done on the developer’s individual workstation, where it can be isolated from other developers’ work for the duration of writing the code, but also where the developers have to reconcile (merge) differences in code under concurrent development. This usually means that each change is executed multiple times (once per developer to update their environments with the latest changes from others), giving us an extra, informal test deployment before even going to the Test/UAT server. It also means that if the network goes down, the developers can continue to hum along because they are not dependent on networked resources. In fact, they will likely be even more productive because they aren’t being interrupted by email…but that’s another post I need to write.

    * I am not a lawyer, nor a licensing specialist, but it appeared to be so the last time I checked. When in doubt, consult an expert on the topic.
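
    As a rough illustration of the multiple-instance idea above (not from the original post; the instance names, config key, and helper class below are hypothetical), the application code can stay identical across Dev, Test, and Prod when each environment's web.config simply points at a different named SQL Server instance:

    using System.Configuration;

    // Hypothetical helper: the "Environment" appSetting and the connection string
    // names are illustrative only.
    public static class EnvironmentDb
    {
        // Each environment's web.config would carry entries such as:
        //   <add name="Dev"  connectionString="Server=.\DEV;Database=Sales;Integrated Security=SSPI" />
        //   <add name="Test" connectionString="Server=.\TEST;Database=Sales;Integrated Security=SSPI" />
        //   <add name="Prod" connectionString="Server=PRODSRV;Database=Sales;Integrated Security=SSPI" />
        public static string GetConnectionString()
        {
            // An appSetting selects which connection string entry to use, so promoting
            // code between environments is a config change rather than a code change.
            string environment = ConfigurationManager.AppSettings["Environment"] ?? "Dev";
            return ConfigurationManager.ConnectionStrings[environment].ConnectionString;
        }
    }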

    Read the article

  • PowerShell Script to Deploy Multiple VM on Azure in Parallel #azure #powershell

    - by Marco Russo (SQLBI)
    This blog is usually dedicated to Business Intelligence and SQL Server, but I couldn’t easily find simple PowerShell scripts on the web to help me deploy a number of virtual machines on Azure that I use for testing and development. Since I need to deploy, start, stop and remove many virtual machines created from a common image I built (you know, Tabular is not part of the standard images provided by Microsoft…), I wanted to minimize the time required to execute every operation from my Windows Azure PowerShell console (though I suggest using Windows PowerShell ISE), so I also wanted to fire the commands as soon as possible in parallel, without losing the results in the console. In order to execute multiple commands in parallel, I used the Start-Job cmdlet, and with Get-Job and Receive-Job I wait for job completion and display the messages generated during background command execution. This technique allows me to reduce execution time when I have to deploy, start, stop or remove virtual machines. Please note that a few operations on Azure acquire an exclusive lock and cannot really be executed in parallel, but only one part of their execution time is subject to this lock. Thus, you obtain a better response time in these scenarios too (this is the case for the provisioning of a new VM). Finally, when you remove the VMs you still have the disks containing the virtual machines to remove. This cannot be done just after the VM removal, because you have to wait until the removal operation is completed on Azure. So I wrote a script that you run a few minutes after removing the VMs; it deletes the disks (and VHDs) no longer related to a VM. I just check that the disks were associated with the original image name used to provision the VMs (so I don’t remove other disks deployed by other batches that I might want to preserve). These examples are specific to my scenario; if you need more complex configurations you have to change and adapt the code. But if your need is to create multiple instances of the same VM running in a workgroup, these scripts should be good enough. I prepared the following PowerShell scripts:
    ProvisionVMs: provisions many VMs in parallel starting from the same image. It creates one service for each VM.
    RemoveVMs: removes all the VMs in parallel – it also removes the service created for each VM.
    StartVMs: starts all the VMs in parallel.
    StopVMs: stops all the VMs in parallel.
    RemoveOrphanDisks: removes all the disks no longer used by any VM. Run this script a few minutes after the RemoveVMs script.
    ProvisionVMs

    # Name of subscription
    $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

    # Name of storage account (where VMs will be deployed)
    $StorageAccount = "Copy the Label property you get from Get-AzureStorageAccount"

    function ProvisionVM( [string]$VmName ) {
        Start-Job -ArgumentList $VmName {
            param($VmName)
            $Location = "Copy the Location property you get from Get-AzureStorageAccount"
            $InstanceSize = "A5"        # You can use any other instance, such as Large, A6, and so on
            $AdminUsername = "UserName" # Write the name of the administrator account in the new VM
            $Password = "Password"      # Write the password of the administrator account in the new VM
            $Image = "Copy the ImageName property you get from Get-AzureVMImage"
            # You can list your own images using the following command:
            # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }
            New-AzureVMConfig -Name $VmName -ImageName $Image -InstanceSize $InstanceSize |
                Add-AzureProvisioningConfig -Windows -Password $Password -AdminUsername $AdminUsername |
                New-AzureVM -Location $Location -ServiceName "$VmName" -Verbose
        }
    }

    # Set the proper storage - you might remove this line if you have only one storage account in the subscription
    Set-AzureSubscription -SubscriptionName $SubscriptionName -CurrentStorageAccount $StorageAccount

    # Select the subscription - this line is fundamental if you have access to multiple subscriptions
    # You might remove this line if you have only one subscription
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # Every line in the following list provisions one VM using the name specified in the argument
    # You can change the number of lines - use a unique name for every VM - don't reuse names
    # already used by other VMs already deployed
    ProvisionVM "test10"
    ProvisionVM "test11"
    ProvisionVM "test12"
    ProvisionVM "test13"
    ProvisionVM "test14"
    ProvisionVM "test15"
    ProvisionVM "test16"
    ProvisionVM "test17"
    ProvisionVM "test18"
    ProvisionVM "test19"
    ProvisionVM "test20"

    # Wait for all to complete
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }

    # Display output from all jobs
    Get-Job | Receive-Job

    # Cleanup of jobs
    Remove-Job *

    # Displays batch completed
    echo "Provisioning VM Completed"

    RemoveVMs

    # Name of subscription
    $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

    function RemoveVM( [string]$VmName ) {
        Start-Job -ArgumentList $VmName {
            param($VmName)
            Remove-AzureService -ServiceName $VmName -Force -Verbose
        }
    }

    # Select the subscription - this line is fundamental if you have access to multiple subscriptions
    # You might remove this line if you have only one subscription
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # Every line in the following list removes one VM using the name specified in the argument
    # You can change the number of lines - use a unique name for every VM - don't reuse names
    # already used by other VMs already deployed
    RemoveVM "test10"
    RemoveVM "test11"
    RemoveVM "test12"
    RemoveVM "test13"
    RemoveVM "test14"
    RemoveVM "test15"
    RemoveVM "test16"
    RemoveVM "test17"
    RemoveVM "test18"
    RemoveVM "test19"
    RemoveVM "test20"

    # Wait for all to complete
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }

    # Display output from all jobs
    Get-Job | Receive-Job

    # Cleanup
    Remove-Job *

    # Displays batch completed
    echo "Remove VM Completed"

    StartVMs

    # Name of subscription
    $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

    function StartVM( [string]$VmName ) {
        Start-Job -ArgumentList $VmName {
            param($VmName)
            Start-AzureVM -Name $VmName -ServiceName $VmName -Verbose
        }
    }

    # Select the subscription - this line is fundamental if you have access to multiple subscriptions
    # You might remove this line if you have only one subscription
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # Every line in the following list starts one VM using the name specified in the argument
    # You can change the number of lines - use a unique name for every VM - don't reuse names
    # already used by other VMs already deployed
    StartVM "test10"
    StartVM "test11"
    StartVM "test12"
    StartVM "test13"
    StartVM "test14"
    StartVM "test15"
    StartVM "test16"
    StartVM "test17"
    StartVM "test18"
    StartVM "test19"
    StartVM "test20"

    # Wait for all to complete
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }

    # Display output from all jobs
    Get-Job | Receive-Job

    # Cleanup
    Remove-Job *

    # Displays batch completed
    echo "Start VM Completed"

    StopVMs

    # Name of subscription
    $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

    function StopVM( [string]$VmName ) {
        Start-Job -ArgumentList $VmName {
            param($VmName)
            Stop-AzureVM -Name $VmName -ServiceName $VmName -Verbose -Force
        }
    }

    # Select the subscription - this line is fundamental if you have access to multiple subscriptions
    # You might remove this line if you have only one subscription
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # Every line in the following list stops one VM using the name specified in the argument
    # You can change the number of lines - use a unique name for every VM - don't reuse names
    # already used by other VMs already deployed
    StopVM "test10"
    StopVM "test11"
    StopVM "test12"
    StopVM "test13"
    StopVM "test14"
    StopVM "test15"
    StopVM "test16"
    StopVM "test17"
    StopVM "test18"
    StopVM "test19"
    StopVM "test20"

    # Wait for all to complete
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }

    # Display output from all jobs
    Get-Job | Receive-Job

    # Cleanup
    Remove-Job *

    # Displays batch completed
    echo "Stop VM Completed"

    RemoveOrphanDisks

    $ImageName = "Copy the ImageName property you get from Get-AzureVMImage"
    # You can list your own images using the following command:
    # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }

    # Remove all orphan disks coming from the image specified in $ImageName
    Get-AzureDisk |
        Where-Object {$_.AttachedTo -eq $null -and $_.SourceImageName -eq $ImageName} |
        Remove-AzureDisk -DeleteVHD -Verbose

    Read the article

  • Open Multiple Sites Without Reopening the Menus in Firefox

    - by Asian Angel
    Are you frustrated with having to reopen your menus for each website that you need or want to view? Now you can keep those menus open while opening multiple websites with the Stay-Open Menu extension for Firefox. Stay-Open Menu in Action You can start using the extension as soon as you have installed it…simply access your favorite links in the “Bookmarks Menu, Bookmarks Toolbar, Awesome Bar, or History Menu” and middle click on the appropriate entries. Here you can see our browser opening the Productive Geek website and that the “Bookmarks Menu” is still open. As soon as you left click on a link or click outside the menus they will close normally like before. Note: Middle clicked links open in new tabs. The only time during our tests that a newly opened link “remained in the background” was for any links opened from the “Awesome Bar”. But as soon as the “Awesome Bar” was closed the new tabs automatically focused to the front. A link being opened from the “History Menu”…still open while the webpage is loading. Options The options are simple to sort through…enable or disable the additional “stay open” functions and enable automatic menu closing if desired. Conclusion If you get frustrated with having to reopen menus to access multiple webpages at one time then you might want to give this extension a try. Links Download the Stay-Open Menu extension (Mozilla Add-ons)

    Read the article

  • Quickly Add Watermark To Multiple PDF Files Using “Batch PDF Watermark”

    - by Kavitha
    Want to add a watermark to your PDF files with a single click? You can use the freeware Batch PDF Watermark. Batch PDF Watermark is a super cool application that lets you add image or text watermarks to multiple files at a time. The Office 2010 style ribbon user interface of the application is very easy to use and provides many options to configure watermark properties like font styles, positioning, transparency levels, rotation of the watermark image, scaling of the watermark image, and so on. Before running the watermark process, you can even preview it. To select multiple PDF files to watermark you can use the “Add Files” option to hand pick the required files or the “Add Folder” option to choose all the PDF files available in a folder. Download Batch PDF Watermark [via liferocks]

    Read the article

  • no dual screens with 11.10 and Asus m4A89 GTD Pro

    - by Alex
    I'm having an issue getting dual monitors working in Kubuntu 11.10. I have an Asus M4A89 GTD Pro/USB3 motherboard with an integrated ATI HD4290 graphics chip. When I try to enable multiple monitors through the system settings, it says "This module is only for configuring systems with a single desktop spread across multiple monitors. You do not appear to have this configuration." I had previously attempted to fix this problem with another installation of Ubuntu 11.10, but ended up having to reinstall Ubuntu because I messed up the Software Center dependencies. After I installed Ubuntu the first time, a notification showed up asking me to install an ATI graphics driver. I installed this driver, then restarted, and dual monitors did not work. That was when I went to the ATI site and attempted to install the fglrx driver. When I tried to run the shell script for the fglrx driver, it said I had a previous version of an fglrx driver installed, and needed to remove it in order to install the new one. So I looked up a tutorial on how to remove it and found an apt-get remove command, which I ran. Then I was able to install the new driver. Dual monitors still did not work, and I couldn't use the Software Center any more because it was corrupted and was unable to repair itself. So I just reinstalled Ubuntu, and now I'm trying to go about this the correct way. Does anyone have this same configuration, and which driver works for you?

    Read the article

  • Help with complex MVVM (multiple views)

    - by jsjslim
    I need help creating view models for the following scenario:
    • Deep, hierarchical data
    • Multiple views for the same set of data
    • Each view is a single, dynamically-changing view, based on the active selection
    • Depending on the value of a property, display different types of tabs in a tab control
    My questions: Should I create a view-model representation for each view (VM1, VM2, etc.)?
    1. Yes:
       a. Should I model the entire hierarchical relationship? (i.e., SubVM1, HouseVM1, RoomVM1)
       b. How do I keep all hierarchies in sync? (e.g., adding/removing nodes)
    2. No:
       a. Do I use a huge, single view model that caters for all views?
    Here's an example of a single view.
    Figure 1: Multiple views updated based on the active room. Notice the tab control.
    Figure 2: Different active room. Multiple views updated. Tab control items changed based on the object's property.
    Figure 3: Different selection type. Entire view changes.
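
    For what it's worth, here is a minimal sketch of what the first option could look like (hypothetical names, deliberately simplified, not a prescribed answer): each view gets its own thin view model, but all of them wrap the same underlying model instance, so "keeping the hierarchies in sync" largely reduces to sharing references and listening to INotifyPropertyChanged.

    using System.ComponentModel;

    // Hypothetical shared model object; every view model wraps the same instance.
    public class Room : INotifyPropertyChanged
    {
        private string name;
        public string Name
        {
            get { return name; }
            set { name = value; OnPropertyChanged("Name"); }
        }

        public event PropertyChangedEventHandler PropertyChanged;
        protected void OnPropertyChanged(string propertyName)
        {
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }

    // One thin view model per view, all wrapping the same Room. Because they expose
    // the model's own properties (or values derived from them), a change made through
    // one view shows up in the others without manual synchronization code.
    public class RoomSummaryViewModel
    {
        public Room Model { get; private set; }
        public RoomSummaryViewModel(Room model) { Model = model; }
        public string Title { get { return "Summary of " + Model.Name; } }
    }

    public class RoomDetailViewModel
    {
        public Room Model { get; private set; }
        public RoomDetailViewModel(Room model) { Model = model; }
        // Tab items for the detail view would be built here from Model's properties.
    }

    The alternative in option 2 (one huge view model) avoids the duplication but tends to grow without bound; wrapping a shared model keeps each view model small while still giving every view the same data.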

    Read the article

  • Development environment to manage multiple Oracle databases

    - by jkohlhepp
    I am in an enterprise environment where we have applications that need to run against multiple Oracle databases. Developers may need to manage multiple vintages of these databases to support different test data or diagnose bugs against different versions of the code. Right now, we have a limited set of test environments set up on "real" Oracle servers within the data center. We juggle these among development and QA groups, and there are a lot of conflicts and inefficiencies that arise because of it. I am taking a look at Oracle Express Edition, which would allow me to spin up a local Oracle database. This is similar to the workflow I most often see with SQL Server: devs work on their local machine until they are ready to integrate, and then they push their DB changes to integration/QA environments. However, from what I read it seems that Oracle XE only supports one database instance at a time. So if I have an application that utilizes two different databases, I can't have both of them running on my local machine. Is that correct? Do the Oracle Standard or Personal editions get around this limitation? If I had one of those installed locally, how difficult would it be to get multiple databases working on the same development machine? How do dev shops handle developing against Oracle where they need to be using several different Oracle instances for their applications?

    Read the article

  • Send Multiple InMemory Attachments Using FileUpload Controls

    - by bullpit
    I wanted to give users the ability to send multiple attachments from the web application. I did not want anything fancy, just a few FileUpload controls on the page and then send the email. So I dropped five FileUpload controls on the web page and created a function to send email with multiple attachments. Here’s the code:

    public static void SendMail(string fromAddress, string toAddress, string subject, string body, HttpFileCollection fileCollection)
    {
        // CREATE THE MailMessage OBJECT
        MailMessage mail = new MailMessage();

        // SET ADDRESSES
        mail.From = new MailAddress(fromAddress);
        mail.To.Add(toAddress);

        // SET CONTENT
        mail.Subject = subject;
        mail.Body = body;
        mail.IsBodyHtml = false;

        // ATTACH FILES FROM HttpFileCollection
        for (int i = 0; i < fileCollection.Count; i++)
        {
            HttpPostedFile file = fileCollection[i];
            if (file.ContentLength > 0)
            {
                Attachment attachment = new Attachment(file.InputStream, Path.GetFileName(file.FileName));
                mail.Attachments.Add(attachment);
            }
        }

        // SEND MESSAGE
        SmtpClient smtp = new SmtpClient("127.0.0.1");
        smtp.Send(mail);
    }

    And here’s how you call the method:

    protected void uxSendMail_Click(object sender, EventArgs e)
    {
        HttpFileCollection fileCollection = Request.Files;
        string fromAddress = "[email protected]";
        string toAddress = "[email protected]";
        string subject = "Multiple Mail Attachment Test";
        string body = "Mail Attachments Included";
        HelperClass.SendMail(fromAddress, toAddress, subject, body, fileCollection);
    }

    Read the article

  • Is using multiple canvas objects a good practice?

    - by user1818924
    We're developing a jump-and-run game with HTML5 and JavaScript and have to build our own game framework for this. Here we have some difficulties and would like to ask you for some advice. We have a "Stage" object, which represents the root of our game and is a global div wrapper. The stage can contain multiple "Scenes", which are also div elements. We would implement a Scene for the playing task, one for pause, etc., and switch between them. Each scene can therefore contain multiple "Layers", each representing a canvas. These Layers contain "ObjectEntities", which represent images or other shapes like rectangles. Each ObjectEntity has its own temporaryCanvas, to be able to draw images for one entity while another contains a rectangle. We set an activeScene in our Stage, so when the game is played, just the active scene is drawn. Calling activeScene.draw() calls all sublayers to draw, which draw their entities (calling drawImage(entity.canvas)). But is this some kind of good practice? Having multiple canvases to draw? Each game loop every layer context is cleared and drawn again. E.g. we just have a still Background-Layer…wouldn't it be more useful to draw this once and not clear it and redraw it every time? Or should we use a global canvas, for example in the Stage, and just use this canvas to draw? But we thought this would be too expensive...

    Read the article

  • ATI Radeon HD with Catalyst driver stuck mirroring screens

    - by Mike Axiak
    In 11.10 I replaced my aging Nvidia card with a new Radeon HD 6970 card. The single card has two DVI output ports, which I've connected to two monitors. I installed Catalyst version 11.9 and I cannot get multiple monitors set up the way I want. I tried: $ sudo amdcccle and setting the mode to single desktop multiple monitors, and whenever I do that Unity crashes and I get back to the login screen. Nothing shows up in the Xorg.*.log files for me to post here. There's only one card so I don't think xinerama would be any help here. Anyone have any ideas? EDIT: Here's my xorg.conf file:

    Section "ServerLayout"
        Identifier "aticonfig Layout"
        Screen 0 "aticonfig-Screen[0]-0" 0 0
    EndSection

    Section "Module"
    EndSection

    Section "Monitor"
        Identifier "aticonfig-Monitor[0]-0"
        Option "VendorName" "ATI Proprietary Driver"
        Option "ModelName" "Generic Autodetecting Monitor"
        Option "DPMS" "true"
    EndSection

    Section "Monitor"
        Identifier "0-DFP3"
        Option "VendorName" "ATI Proprietary Driver"
        Option "ModelName" "Generic Autodetecting Monitor"
        Option "DPMS" "true"
        Option "PreferredMode" "1280x1024"
        Option "TargetRefresh" "60"
        Option "Position" "0 0"
        Option "Rotate" "normal"
        Option "Disable" "false"
    EndSection

    Section "Monitor"
        Identifier "0-CRT1"
        Option "VendorName" "ATI Proprietary Driver"
        Option "ModelName" "Generic Autodetecting Monitor"
        Option "DPMS" "true"
        Option "PreferredMode" "1280x1024"
        Option "TargetRefresh" "75"
        Option "Position" "0 0"
        Option "Rotate" "normal"
        Option "Disable" "false"
    EndSection

    Section "Device"
        Identifier "aticonfig-Device[0]-0"
        Driver "fglrx"
        Option "Monitor-DFP3" "0-DFP3"
        Option "Monitor-CRT1" "0-CRT1"
        BusID "PCI:5:0:0"
    EndSection

    Section "Device"
        Identifier "amdcccle-Device[5]-1"
        Driver "fglrx"
        Option "Monitor-DFP3" "0-DFP3"
        BusID "PCI:5:0:0"
        Screen 1
    EndSection

    Section "Screen"
        Identifier "aticonfig-Screen[0]-0"
        Device "aticonfig-Device[0]-0"
        DefaultDepth 24
        SubSection "Display"
        EndSubSection
    EndSection

    Section "Screen"
        Identifier "amdcccle-Screen[5]-1"
        Device "amdcccle-Device[5]-1"
        DefaultDepth 24
        SubSection "Display"
            Viewport 0 0
            Depth 24
        EndSubSection
    EndSection

    Read the article

  • Need efficient way to keep enemy from getting hit multiple times by same source

    - by TenFour04
    My game's a simple 2D one, but this probably applies to many types of scenarios. Suppose my player has a sword, or a gun that shoots a projectile that can pass through and hit multiple enemies. While the sword is swinging, there is a duration where I am checking for the sword making contact with any enemy on every frame. But once an enemy is hit by that sword, I don't want him to continue getting hit over and over as the sword follows through. (I do want the sword to continue checking whether it is hitting other enemies.) I've thought of a couple different approaches (below), but they don't seem like good ones to me. I'm looking for a way that doesn't force cross-referencing (I don't want the enemy to have to send a message back to the sword/projectile). And I'd like to avoid generating/resetting multiple array lists with every attack. Each time the sword swings it generates a unique id (maybe by just incrementing a global static long). Every enemy keeps a list of id's of swipes or projectiles that have already hit them, so the enemy knows not to get hurt by something multiple times. Downside is that every enemy may have a big list to compare to. So projectiles and sword swipes would have to broadcast their end-of-life to all enemies and cause a search and remove on every enemy's array list. Seems kind of slow. Each sword swipe or projectile keeps its own list of enemies that it has already hit so it knows not to apply damage. Downsides: Have to generate a new list (probably pull from a pool and clear one) every time a sword is swung or a projectile shot. Also, this breaks down modularity, because now the sword has to send a message to the enemy, and the enemy has to send a message back to the sword. Seems to me that two-way streets like this are a great way to create very difficult-to-find bugs.
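
    For illustration only, here is a minimal C# sketch of the second approach described above (hypothetical class names, not tied to any particular engine): the attack object owns a set of enemies it has already damaged, which is cleared when the swing ends, so neither per-enemy id lists nor an end-of-life broadcast is needed.

    using System.Collections.Generic;

    public class Enemy
    {
        public void TakeDamage(int amount) { /* reduce health, play effects, ... */ }
    }

    public class MeleeAttack
    {
        // Enemies already damaged by this particular swing or projectile.
        private readonly HashSet<Enemy> alreadyHit = new HashSet<Enemy>();
        public int Damage = 10;

        // Called once per frame for every enemy currently overlapping the hitbox.
        public void OnContact(Enemy enemy)
        {
            if (alreadyHit.Add(enemy))      // Add returns false if the enemy was hit before
                enemy.TakeDamage(Damage);
        }

        // Reset when the swing ends so a pooled attack object can be reused.
        public void Reset()
        {
            alreadyHit.Clear();
        }
    }

    With this shape the swing or projectile can be pooled and reused by calling Reset(), which goes some way toward the "generating/resetting multiple lists" concern, since a cleared HashSet typically keeps its capacity and does not reallocate until it grows.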

    Read the article

  • Single complex or multiple simple autoload functions [on hold]

    - by Tyson of the Northwest
    Using spl_autoload_register(), should I use a single autoload function that contains all the logic to determine where the include files are, or should I break each include grouping into its own function with its own logic to include the files for the called function? As the places where include files may reside expand, so too will the logic of a single function. If I break it into multiple functions I can add functions as new groupings are added, but the functions will be copy/pastes of each other with minor alterations. Currently I have a tool with a single registered autoload function that picks apart the class name and tries to predict where the file is and then includes it. Due to naming conventions for the project this has been pretty simple:
        if has namespace
            if in template namespace
                look in Root\Templates
            else
                look in Root\Modules\Namespace
        else
            look in Root\System
        if file exists
            include
    But we are starting to include Interfaces and Traits in our codebase, and it hurts me to include the type of a thing in its name. So instead of a single autoload function that digs through the class name, looks for the file, and has increasingly complex logic, we are looking at having multiple autoload functions registered. But each one follows the same pattern, and any time I see that I get paranoid about code copying:
        function systemAutoloadFunc
            logic to create probable filename
            if filename exists in system
                include it and return true
            else
                return false
        function moduleAutoloadFunc
            logic to create probable filename
            if filename exists in modules
                include it and return true
            else
                return false
    Every autoload function will follow that pattern, and the last part of each function (if filename exists, include and return true, else return false) is going to be identical code. This makes me paranoid about having to update it later across the board if the file_exists/include pattern we are using ever changes. Or is it just that, paranoia, and multiple functions with some identical code are the best option?

    Read the article

  • How should VertexBuffers be used with Multiple Monitors in DirectX 9

    - by Joshua C
    I am currently using DirectX 9 on a machine with two GPUs and three monitors. I am currently trying to draw a triangle on each monitor using vertex buffers; a DirectX hello world with multiple monitors, if you will. I am familiar with some DirectX coding, but new to multiple-monitor DirectX coding. I may be going about this the wrong way, so please do correct me if I'm doing something wrong. I have created a Direct3D Device for each enumerated adapter sharing the same Form handle. This allows me to successfully use all three monitors in full-screen mode.

    For Each Adapter In Direct3D.Adapters
        Dim PresentParameters As New PresentParameters
        'Setup PresentParameters
        PresentParameters.Windowed = False
        PresentParameters.DeviceWindowHandle = MainForm.Handle
        Dim Device As New Device(Direct3D, Adapter.Adapter, DeviceType.Hardware, PresentParameters.DeviceWindowHandle, CreateFlags.HardwareVertexProcessing, PresentParameters)
        Device.SetRenderState(RenderState.Lighting, False)
        Devices.Add(Device)
    Next

    I can also draw text to each device successfully using a different Font for each Device. When I render a triangle using a different VertexBuffer for each Device, only two monitors display the triangle. One of the two monitors on the same GPU, and the monitor on its own GPU, display properly.

    VertexBuffer = New VertexBuffer(Device, 4 * Marshal.SizeOf(GetType(ColoredVertex)), Usage.WriteOnly, VertexFormat.None, Pool.Managed)
    Dim Verts = VertexBuffer.Lock(0, 0, LockFlags.None)
    Verts.WriteRange({
        New ColoredVertex(-.5, -.5, 1, ForeColor),
        New ColoredVertex(0, .5, 1, ForeColor),
        New ColoredVertex(.5, -.5, 1, ForeColor)
    })
    VertexBuffer.Unlock()
    VertexDeclaration = New VertexDeclaration(Device, {
        New VertexElement(0, 0, DeclarationType.Float3, DeclarationMethod.Default, DeclarationUsage.Position, 0),
        New VertexElement(0, 12, DeclarationType.Color, DeclarationMethod.Default, DeclarationUsage.Color, 0),
        VertexElement.VertexDeclarationEnd
    })

    Render code:

    Device.SetStreamSource(0, VertexBuffer, 0, Marshal.SizeOf(GetType(ColoredVertex)))
    Device.VertexDeclaration = VertexDeclaration
    Device.DrawPrimitives(PrimitiveType.TriangleList, 0, 1)

    I have to assume the fact that they share the same physical card comes into play. Should I use multiple buffers on the same card, and if so, how? Or what is the way I should access the VertexBuffer across Devices? Another thought I had was that the non-working monitor acts like there are no lights. Is turning off lighting on each device on the same card causing issues somehow?

    Read the article

  • how to code multiple button navigation with java activities [migrated]

    - by user1738212
    Question 1: I have 2 activities and was wondering how to optimize them. I can either create 2 activities with multiple listeners, or create multiple Java files, one for each button's onClick listener. Question 2: I have tried to create multiple listeners in one Java file but can only get one button to work. What is the syntax for multiple listeners in one Java file? Here is my updated code; now the issue is that no matter which button is clicked, it leads to the same page.

    package install.fineline;

    import android.app.Activity;
    import android.content.Context;
    import android.content.Intent;
    import android.os.Bundle;
    import android.widget.Button;
    import android.view.View;
    import android.view.View.OnClickListener;

    public class Activity1 extends Activity2 {
        Button Button1;
        Button Button2;
        Button Button3;
        Button Button4;
        Button Button5;
        Button Button6;

        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.fineline);
            addListenerOnButton();
        }

        public void addListenerOnButton() {
            final Context context = this;

            Button1 = (Button) findViewById(R.id.autobody);
            Button1.setOnClickListener(new OnClickListener() {
                public void onClick(View arg0) {
                    Intent intent = new Intent(context, Activity1.class);
                    startActivity(intent);
                }
            });

            Button2 = (Button) findViewById(R.id.glass);
            Button2.setOnClickListener(new OnClickListener() {
                @Override
                public void onClick(View arg0) {
                    Intent intent = new Intent(context, Activity1.class);
                    startActivity(intent);
                }
            });

            Button3 = (Button) findViewById(R.id.wheels);
            Button3.setOnClickListener(new OnClickListener() {
                @Override
                public void onClick(View arg0) {
                    Intent intent = new Intent(context, Activity1.class);
                    startActivity(intent);
                }
            });

            Button4 = (Button) findViewById(R.id.speedy);
            Button4.setOnClickListener(new OnClickListener() {
                @Override
                public void onClick(View arg0) {
                    Intent intent = new Intent(context, Activity1.class);
                    startActivity(intent);
                }
            });

            Button5 = (Button) findViewById(R.id.sevan);
            Button5.setOnClickListener(new OnClickListener() {
                @Override
                public void onClick(View arg0) {
                    Intent intent = new Intent(context, Activity1.class);
                    startActivity(intent);
                }
            });

            Button6 = (Button) findViewById(R.id.towing);
            Button6.setOnClickListener(new OnClickListener() {
                @Override
                public void onClick(View arg0) {
                    Intent intent = new Intent(context, Activity1.class);
                    startActivity(intent);
                }
            });
        }
    }

    activity2.java

    package install.fineline;

    import android.app.Activity;
    import android.os.Bundle;
    import android.widget.Button;

    public class Activity2 extends Activity {
        Button Button1;
        public void onCreate1(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.autobody);
        }

        Button Button2;
        public void onCreate2(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.glass);
        }

        Button Button3;
        public void onCreate3(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.wheels);
        }

        Button button4;
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.speedy);
        }

        Button Button5;
        public void onCreate5(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.sevan);
        }

        Button Button6;
        public void onCreate6(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.towing);
        }
    }

    Read the article

  • How best to implement support for multiple devices in a web application.

    - by Kabeer
    Hello. My client would like a business application to support 'every possible device'. The application in question is essentially a web application and 'every possible device', I believe encompasses mobile phones, netbooks, ipad, other browser supporting devices, etc. The application is somewhat complex w.r.t. the data it captures and other functions it performs (reporting). If I continue to honor increasing complexity in the application, I guess there are more chances of it not working on other devices. I'd like to know how web applications support multiple devices conventionally? Are there multiple versions of presentation layer (like many times I find m.website.com dedicated for mobile devices)? Further, if my application is to take advantage of Java Script, RIA (Flash, SilverLight) then what are the consequences and workarounds? Mine is a .Net based application and the stack also contains Ext JS Java Script library. While I would like to use it for sure, considering that I would be doing a lot of work in Java Script rather than HTML, this could be a problem. The answer to the above could be descriptive. If there is something already prescribed out there, please share the link(s). Thanks.

    Read the article

  • How to configure a NSPopupButton for displaying multiple values in a TableView?

    - by jekmac
    Hi there! I'm using two entities A and B with a many-to-many relationship. Let's say I have an entity A with attribute aAttrib and a to-many relationship aRelat to another entity B with attribute bAttrib and a to-many relationship bRelat back to entity A. Now I am building an interface with two tables, one for entity A and another for entity B. The table for entity B has two columns, one for bAttrib and one for the relationship aRelat. The aRelat column should be an NSPopupButtonCell to display multiple aAttrib values. I'd like to set all the bindings in Interface Builder. Table column bindings:
    -- I have two NSArrayControllers, one for each entity: Object Controller Mode: Entity; Array Controller Bindings: Parameters: Managed Object Context bound to File's Owner.
    -- One table column with a PopUpButtonCell. Table column bindings: Content bound to Entity A with ControllerKey arrangedObjects; Content Values bound to Entity A with ModelKeyPath aAttrib; Selected Object bound to Entity B with ModelKeyPath bRelat.
    I know that this configuration doesn't allow multiple value setting, but I don't know how to do the right one. I'm getting the following message: HIToolbox: ignoring exception 'Unacceptable type of value for to-many relationship: property = "bRelat"; desired type = NSSet; given type = NSCFString; value = testValue.' that raised inside Carbon event dispatch... Does anyone have any idea?

    Read the article

  • Azure, don't give me multiple VMs, give me one elastic VM

    - by FransBouma
    Yesterday, Microsoft revealed new major features for Windows Azure (see ScottGu's post). It all looks shiny and great, but after reading most of the material describing the new features, I still find the overall idea behind all of it flawed: why should I care on how much VMs my web app runs? Isn't that a problem to solve for the Windows Azure engineers / software? And what if I need the file system, why can't I simply get a virtual filesystem ? To illustrate my point, let's use a real example: a product website with a customer system/database and next to it a support site with accompanying database. Both are written in .NET, using ASP.NET and use a SQL Server database each. The product website offers files to download by customers, very simple. You have a couple of options to host these websites: Buy a server, place it in a rack at an ISP and run the sites on that server Use 'shared hosting' with an ISP, which means your sites' appdomains are running on the same machine, as well as the files stored, and the databases are hosted in the same server as the other shared databases. Hire a VM, install your OS of choice at an ISP, and host the sites on that VM, basically the same as the first option, except you don't have a physical server At some cloud-vendor, either host the sites 'shared' or in a VM. See above. With all of those options, scalability is a problem, even the cloud-based ones, though not due to the same reasons: The physical server solution has the obvious problem that if you need more power, you need to buy a bigger server or more servers which requires you to add replication and other overhead Shared hosting solutions are almost always capped on memory usage / traffic and database size: if your sites get too big, you have to move out of the shared hosting environment and start over with one of the other solutions The VM solution, be it a VM at an ISP or 'in the cloud' at e.g. Windows Azure or Amazon, in theory allows scaling out by simply instantiating more VMs, however that too introduces the same overhead problems as with the physical servers: suddenly more than 1 instance runs your sites. If a cloud vendor offers its services in the form of VMs, you won't gain much over having a VM at some ISP: the main problems you have to work around are still there: when you spin up more than one VM, your application must be completely stateless at any moment, including the DB sub system, because what's in memory in instance 1 might not be in memory in instance 2. This might sounds trivial but it's not. A lot of the websites out there started rather small: they were perfectly runnable on a single machine with normal memory and CPU power. After all, you don't need a big machine to run a website with even thousands of users a day. Moving these sites to a multi-VM environment will cause a problem: all the in-memory state they use, all the multi-page transitions they use while keeping state across the transition, they can't do that anymore like they did that on a single machine: state is something of the past, you have to store every byte of state in either a DB or in a viewstate or in a cookie somewhere so with the next request, all state information is available through the request, as nothing is kept in-memory. Our example uses a bunch of files in a file system. Using multiple VMs will require that these files move to a cloud storage system which is mounted in each VM so we don't have to store the files on each VM. 
This might require different file paths, but this change should be minor. What's perhaps less minor is the maintenance procedure in place on the new type of cloud storage used: instead of ftp-ing into a VM, you might have to update the files using different ways / tools. All in all this makes moving an existing website which was written for an environment that's based around a VM (namely .NET with its CLR) overly cumbersome and problematic: it forces you to refactor your website system to be able to be used 'in the cloud', which is caused by the limited way how e.g. Windows Azure offers its cloud services: in blocks of VMs. Offer a scalable, flexible VM which extends with my needs Instead, cloud vendors should offer simply one VM to me. On that VM I run the websites, store my DB and my files. As it's a virtual machine, how this machine is actually ran on physical hardware (e.g. partitioned), I don't care, as that's the problem for the cloud vendor to solve. If I need more resources, e.g. I have more traffic to my server, way more visitors per day, the VM stretches, like I bought a bigger box. This frees me from the problem which comes with multiple VMs: I don't have any refactoring to do at all: I can simply build my website as if it runs on my local hardware server, upload it to the VM offered by the cloud vendor, install it on the VM and I'm done. "But that might require changes to windows!" Yes, but Microsoft is Windows. Windows Azure is their service, they can make whatever change to what they offer to make it look like it's windows. Yet, they're stuck, like Amazon, in thinking in VMs, which forces developers to 'think ahead' and gamble whether they would need to migrate to a cloud with multiple VMs in the future or not. Which comes down to: gamble whether they should invest time in code / architecture which they might never need. (YAGNI anyone?) So the VM we're talking about, is that a low-level VM which runs a guest OS, or is that VM a different kind of VM? The flexible VM: .NET's CLR ? My example websites are ASP.NET based, which means they run inside a .NET appdomain, on the .NET CLR, which is a VM. The only physical OS resource the sites need is the file system, however this too is accessed through .NET. In short: all the websites see is what .NET allows the websites to see, the world as the websites know it is what .NET shows them and lets them access. How the .NET appdomain is run physically, that's the concern of .NET, not mine. This begs the question why Windows Azure doesn't offer virtual appdomains? Or better: .NET environments which look like one machine but could be physically multiple machines. In such an environment, no change has to be made to the websites to migrate them from a local machine or own server to the cloud to get proper scaling: the .NET VM will simply scale with the need: more memory needed, more CPU power needed, it stretches. What it offers to the application running inside the appdomain is simply increasing, but not fragmented: all resources are available to the application: this means that the problem of how to scale is back to where it should be: with the cloud vendor. "Yeah, great, but what about the databases?" The .NET application communicates with the database server through a .NET ADO.NET provider. Where the database is located is not a problem of the appdomain: the ADO.NET provider has to solve that. 
I.o.w.: we can host the databases in an environment which offers itself as a single resource and is accessible through one connection string without replication overhead on the outside, and use that environment inside the .NET VM as if it was a single DB. But what about memory replication and other problems? This environment isn't simple, at least not for the cloud vendor. But it is simple for the customer who wants to run his sites in that cloud: no work needed. No refactoring needed of existing code. Upload it, run it. Perhaps I'm dreaming and what I described above isn't possible. Yet, I think if cloud vendors don't move into that direction, what they're offering isn't interesting: it doesn't solve a problem at all, it simply offers a way to instantiate more VMs with the guest OS of choice at the cost of me needing to refactor my website code so it can run in the straight jacket form factor dictated by the cloud vendor. Let's not kid ourselves here: most of us developers will never build a website which needs a truck load of VMs to run it: almost all websites created by developers can run on just a few VMs at most. Yet, the most expensive change is right at the start: moving from one to two VMs. As soon as you have refactored your website code to run across multiple VMs, adding another one is just as easy as clicking a mouse button. But that first step, that's the problem here and as it's right there at the beginning of scaling the website, it's particularly strange that cloud vendors refuse to solve that problem and leave it to the developers to solve that. Which makes migrating 'to the cloud' particularly expensive.

    Read the article

  • How to allow multiple inputs from user using R?

    - by Juan
    For example, if I need the user to specify the number of rows and columns of a matrix:
    PROMPT: Number of rows?:
    USER INPUT: [a number]
    I need R to 'wait' for the input, then save [a number] into a variable v1. Next:
    PROMPT: Number of columns?:
    USER INPUT: [another number]
    Also save [another number] into a variable v2. At the end, I will have two variables (v1, v2) that will be used in the rest of the code. "readline" only works for one input at a time; I can't run the two lines together:
    v1 <- readline("Number of rows?: ")
    v2 <- readline("Number of columns?: ")
    Any ideas or suggestions? Thank you in advance.

    Read the article

  • Force www. on multi domain site and retain http or https

    - by John Isaacks
    I am using CakePHP, which already contains an .htaccess file that looks like:
    <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteRule ^$ app/webroot/ [L]
        RewriteRule (.*) app/webroot/$1 [L]
    </IfModule>
    I want to force www. (unless it is a subdomain) to avoid duplicate content penalties. It needs to retain http or https. Also, this application will have multiple domains pointing to it, so the code needs to be able to work with any domain.

    Read the article

  • How do I give a Byobu session a name?

    - by Ashimema
    Is there a way to create identifiable Byobu sessions so that when I've got multiple sessions running, the byobu-select-session menu gives me a list of sessions I can recognize, as opposed to nondescript tmux port numbers? In an ideal world, it would be great to be able both to start a session giving it a name and to modify such a session to change its name if it's already running. Is this possible, and if so, how?

    Read the article

  • environment variables generated by at command

    - by Jordan Arseno
    I'm inspecting /var/spool/cron/atjobs/a001cf01570e44 with cat, after running the at command from PHP using exec(). It looks like at has prepended the script with lots of APACHE environment variables.

    #!/bin/sh
    # atrun uid=33 gid=33
    # mail www-data 0
    umask 22
    APACHE_RUN_DIR=/var/run/apache2; export APACHE_RUN_DIR
    APACHE_PID_FILE=/var/run/apache2.pid; export APACHE_PID_FILE
    PATH=/usr/local/bin:/usr/bin:/bin; export PATH
    APACHE_LOCK_DIR=/var/lock/apache2; export APACHE_LOCK_DIR
    LANG=C; export LANG
    APACHE_RUN_USER=www-data; export APACHE_RUN_USER
    APACHE_RUN_GROUP=www-data; export APACHE_RUN_GROUP
    APACHE_LOG_DIR=/var/log/apache2; export APACHE_LOG_DIR
    PWD=/home/jordanarseno/webroot/public_html/myapp; export PWD
    cd /home/jordanarseno/webroot/public\_html/myapp || {
        echo 'Execution directory inaccessible' >&2
        exit 1
    }
    curl -k http://localhost/myapp/crons/this_action/3

    The last line is the only real command I sent along with at via stdin. What is the purpose of these variables? Where is this procedure stored?

    Read the article

  • IIS no longer saving session variables

    - by John
    I'm running IIS v7 on a Win7 development machine. I have PHP code that saves session variables and calls them back later. This has been working on this machine for some time. For some reason, the session variables now disappear immediately after saving. Code that used to work fine on http://localhost/ suddenly does not. I have tested different browsers - the vars disappear regardless of browser. I have tested identical code on different servers. The problem exists only on this development machine. I tried some code that saves a session var, then reads it back and displays it, then shows a link to click on to read it back and display it again. What happens is the session var DOES get written, read back, and displayed OK. But when you click the link to view it again, it's gone. I don't recall making any changes to IIS, but I did run several malware scanners and clean-up tools. Is anyone aware of any setting in IIS that disallows session vars? Any other thoughts?

    Read the article

  • Forcing programs to be installed to another drive

    - by zyboxenterprises
    I have an SSD as my main Windows drive, and a 640GB 2.5" HDD, partitioned to store programs and user settings, and also to act as backup (it's the only thing I had lying around at the time of building my PC). The task was to make the PC as fast as possible, while having an increased storage capacity available to store normal user data, and to assist in my small data recovery business. The problem is that whenever I install a program, it installs to C:\Program Files\ (or C:\Program Files (x86)\ for the 32-bit programs), although I have changed the environment variables. This wouldn't normally be an issue, however every installation program points its shortcut to my 640GB HDD. The root layout of both drives: To clarify: program files get installed to C:\, and program shortcuts are always pointed to Z:\, my 640GB HDD. Modifying the relevant environment variables doesn't do anything. I looked at this, but it only talks about modifying the registry and environment variables, which I have already done. I install to the Z:\ drive if the installation program lets me change the installation path, but the installation programs sometimes don't let me change this. Is there a way that I can force every program to install to the relevant location on Z:\? Perhaps I'm missing something here? Edit: Found this program; would it be appropriate to use in my case? I would be able to move the entire Program Files (and its x86 version) to Z:\, without impacting the performance.

    Read the article

  • TEMP environment variable occasionally set incorrectly

    - by Roger Lipscombe
    Occasionally, I find my TEMP and TMP environment variables set to C:\Windows\TEMP. They should be set to %USERPROFILE%\AppData\Local\Temp, and are configured correctly in System Properties. This manifests itself as error messages like the following: ---> System.InvalidOperationException: Unable to generate a temporary class (result=1). error CS2001: Source file 'C:\Windows\TEMP\gb_pz65v.0.cs' could not be found error CS2008: No inputs specified ...which occurs in various .NET applications (in particular Visual Studio 2010 or SQL Server Management Studio). Alternatively, SQL Server Management Studio will report: Value cannot be null. Parameter name: viewInfo (Microsoft.SqlServer.Management.SqlStudio.Explorer) If I run PowerShell elevated, then $env:TEMP is set correctly. If I run PowerShell non-elevated, then it's not. I believe that it should be set correctly in both cases. If not, it's the wrong way round. The same is true for CMD.EXE. Rebooting fixes it, temporarily, until something breaks it again. Presumably something loaded into Explorer.exe is messing with its environment variables, but what? The values in the registry are correct, even while this is happening: HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment has TEMP = %SYSTEMROOT%\Temp HKCU\Environment has TEMP = %USERPROFILE%\AppData\Local\Temp By setting a breakpoint on shell32!RegenerateUserEnvironment, I'm able to trap it when it happens, but I still don't know why explorer.exe is reading the wrong environment variables. I can reproduce it consistently by broadcasting a WM_SETTINGCHANGE message (I wrote a one-line C++ program to do this). Watching the activity in Process Monitor shows that explorer.exe doesn't even look at HKCU\Environment. What is going on?
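
    For reference, the WM_SETTINGCHANGE broadcast mentioned above (the "one-line C++ program") can be reproduced with a short C# equivalent; this is a hedged sketch of the trigger only, not the asker's actual code and not a fix for the underlying problem.

    using System;
    using System.Runtime.InteropServices;

    // Broadcast WM_SETTINGCHANGE with lParam "Environment" so running processes
    // are told to re-read the environment block.
    class BroadcastSettingChange
    {
        const int HWND_BROADCAST = 0xffff;
        const uint WM_SETTINGCHANGE = 0x001A;
        const uint SMTO_ABORTIFHUNG = 0x0002;

        [DllImport("user32.dll", SetLastError = true, CharSet = CharSet.Auto)]
        static extern IntPtr SendMessageTimeout(IntPtr hWnd, uint msg, UIntPtr wParam,
            string lParam, uint flags, uint timeout, out UIntPtr result);

        static void Main()
        {
            UIntPtr result;
            SendMessageTimeout((IntPtr)HWND_BROADCAST, WM_SETTINGCHANGE, UIntPtr.Zero,
                "Environment", SMTO_ABORTIFHUNG, 5000, out result);
        }
    }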

    Read the article
