Search Results

Search found 15376 results on 616 pages for 'once'.


  • How do I package this vbscript as an MSI for Group Policy

    - by TheCleaner
    I had a developer who is no longer with us create an MSI to do this for me, but the package is outdated now and we need to deploy new files. Basically I need to take the commands below and deploy them to all users as a software installation package in Group Policy. I don't want to use a computer startup script, because I don't want this to run at every login; it should run just once, install, and be done. How can I turn the following into an MSI for deployment through GPO?

        @echo off
        del "C:\Windows\Downloaded Program Files\jdeexpimp.inf"
        del "C:\Windows\Downloaded Program Files\jdeexpimpU.ocx"
        del "C:\Windows\Downloaded Program Files\jdewebctls.inf"
        del "C:\Windows\Downloaded Program Files\jdewebctlsU.ocx"
        copy "\\tuldc01\EOneActiveXapplets\ActiveX898\jdeexpimpU\*" "C:\Windows\Downloaded Program Files\"
        copy "\\tuldc01\EOneActiveXapplets\ActiveX898\jdewebctlsU\*" "C:\Windows\Downloaded Program Files\"
        regsvr32 "C:\Windows\Downloaded Program Files\jdeexpimpU.ocx"
        regsvr32 "C:\Windows\Downloaded Program Files\jdewebctlsU.ocx"
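    If building a proper MSI turns out to be more than is needed, one workaround (not from the original question) is to keep a computer startup script but guard it with a marker file so the copy/register steps run only once per machine. A sketch reusing the paths above; the marker filename is my own invention:

        @echo off
        rem Skip everything if this machine has already been updated.
        if exist "C:\Windows\Downloaded Program Files\activex898.done" goto :eof

        rem ...the del / copy lines from the question go here...

        rem /s suppresses the regsvr32 success dialogs so nothing pops up at startup.
        regsvr32 /s "C:\Windows\Downloaded Program Files\jdeexpimpU.ocx"
        regsvr32 /s "C:\Windows\Downloaded Program Files\jdewebctlsU.ocx"

        rem Leave a marker so later startups skip the work.
        echo done> "C:\Windows\Downloaded Program Files\activex898.done"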

    Read the article

  • Debian date jumping, causing complete lockup

    - by artfulrobot
    I have a Debian Squeeze VM that has suddenly chosen to jump its date forward by just over a month, which seems to confuse it no end and causes it to require a hard reset (yikes!). There is nothing unusual in the logs, except that the datestamp suddenly jumps (today it went back to 2005). It has happened three times, so I don't think it's the leap-second issue, as the last one of those was in July. When it happened once I spent ages checking things but could not find anything, and decided to forget about it. But three times is becoming an issue on a production server. Edits providing information requested in comments (thanks!): I do not have control over the hypervisor; it is a hired VM.

        # cat /sys/devices/system/clocksource/clocksource0/current_clocksource
        kvm-clock
        # ntpq -p
             remote           refid      st t when poll reach   delay   offset  jitter
        ==============================================================================
        +grendel.exizten 130.149.17.8     2 u   29   64   77   14.811    1.778   1.744
        *panoramix.linoc 193.67.79.202    2 u   32   64   77   19.729   -0.419   1.691
        +robert.elnounch 213.251.128.249  2 u   27   64   77   17.762    0.600   1.722
        -janetzki.eu     83.169.43.165    3 u   31   64   77   27.214    3.575   1.638
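    Not from the post, but a common next step when kvm-clock misbehaves in a guest is to check which other clocksources the kernel offers and temporarily force one, to see whether the jumps follow the clocksource. A sketch; acpi_pm is only an example and must actually appear in the first command's output:

        # list the clocksources available to this guest kernel
        cat /sys/devices/system/clocksource/clocksource0/available_clocksource
        # switch away from kvm-clock for a test (takes effect immediately, not persistent)
        echo acpi_pm > /sys/devices/system/clocksource/clocksource0/current_clocksource
        # to make it persistent, append clocksource=acpi_pm to GRUB_CMDLINE_LINUX in /etc/default/grub, then:
        update-grub

    If the jumps stop, the problem lies with kvm-clock or the host's timekeeping and is worth raising with the VM provider.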

    Read the article

  • What can be done to decrease the number of live issues with applications?

    - by User Smith
    First off, I have seen this post, which is slightly similar to my question: What can you do to decrease the number of deployment bugs of a live website? Let me lay out the situation. The team of programmers I belong to has metrics associated with our code, and over the last several months the errors in our live system have increased considerably. We require that updates to applications be tested by at least one other programmer before going live. I am personally against this as the only gate, because I think applications should also be tested by end users; end users are much better testers than programmers. I am not against programmers testing (obviously programmers need to test code), but most of the time they are too close to the code. The reason I think end users should test in our scenario is that we don't have business analysts, just programmers; I come from a background where BAs took care of all the testing once programmers signed off that it was ready to go live. We do have a staging environment that is a clone of the live environment, which we use to catch issues between development and live, and it does catch some bugs. But we don't really do end-user testing at all; nobody tests our code except programmers, which I think is what gets us into this mess (ideally we would have BAs, QA, or professional testers). We don't have a QA team or anything of that nature, and we don't have fully laid-out test cases for our projects. I am just a peon programmer at the bottom of the rung, but I am probably more tired of these issues than the managers complaining about them, so I don't have the standing to tell them they are doing it all wrong; I have tried gentle pushes in the right direction. Any advice or suggestions on how to alleviate this issue?

    Read the article

  • 12.04 LTS boot hangs at "SP5100 TCO timer: mmio address 0xfec000f0 already in use", didn't yesterday

    - by DarkIron112
    Dual-booting Windows 7 and Ubuntu 12.04 LTS. I went to reboot from Windows to Ubuntu and found a few interesting things:
    My POST screen is covered in blocks of epileptic colors until I hit GRUB, and this continues when I try to boot into Ubuntu. These color blocks don't appear when I use my on-board VGA, so I'll just attribute them to the card.
    GRUB dimensions are swapped (card vs. onboard, probably), but when using onboard VGA the GRUB timeout counter works, and when using my card it does not (see "[!!!]" below for more information).
    Booting into Ubuntu directly causes the error: SP5100 TCO timer: mmio address 0xfec000f0 already in use
    Booting into recovery mode, meanwhile, and then "resuming normal boot" gets me to the desktop, but without the native 1440x900 resolution, and the graphics driver can't tell which monitor it's looking at (I assume this is because it's not a full graphical boot, and as it says, some drivers won't run?).
    [!!!] When I reboot after going into recovery mode, the countdown timer works ONCE, puts me back into the default Ubuntu boot, and then does not work again until after another recovery-mode boot.
    Windows 7 boots perfectly, with no issues whatsoever from the color blocks or driver detection. This makes me wonder /why/ the POST screen can't handle my video card anymore. Amidst all the diagnostics, I opened my case and re-seated the video card securely, ensuring it wasn't a loose connection, but this did nothing to help.
    Hardware: I am running an NVidia GeForce GTX 8800 video card in a PCI slot. I have 4.8 GiB of memory and an AMD Athlon II quad-core 640 processor on an MSI K9N6GM series motherboard. Onboard video is an NVidia GeForce MCP61(V/S/P).
    Note: I did not have any of these problems yesterday. I have been using Ubuntu intensively for a week, and it has been working flawlessly for months. I've recently been using it to mod my Android phone; perhaps I messed something up in the file system?
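    For what it's worth (not from the post), the SP5100 TCO timer line is often only a warning. A common experiment to rule the graphics/KMS path in or out is to boot once with nomodeset and, if that helps, make it stick via GRUB; a sketch:

        # one-off test: at the GRUB menu press 'e' and append nomodeset to the line starting with "linux"
        # to make it permanent, edit /etc/default/grub so that:
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
        # then regenerate the configuration and reboot
        sudo update-grub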

    Read the article

  • Missing Package: header, Problem with MergeList, The package lists or status file could not be parsed or opened

    - by Inbar Rose
    THIS IS NOT A DUPLICATE OF SIMILAR QUESTIONS (like this). I just had to write that first: there are tons of questions similar to this, and all of them redirect to the same answer, which does not solve my problem, because I don't have the same problem, just the same symptom. I write tests for my company's application. One of these tests tries to upgrade the application from a previous version to a new version to make sure nothing breaks. When I am installing an old version of the application, some weird stuff starts to happen. Sometimes everything goes okay and nothing is wrong; other times, when trying to install, I get this message (company app name censored):

        E: Encountered a section with no Package: header
        E: Problem with MergeList /var/lib/apt/lists/XXX-amd64_Packages
        E: The package lists or status file could not be parsed or opened.

    The solutions provided in the questions similar to this one (like this) do not help, and the problem keeps repeating once it has happened the first time. This has led me to believe something is wrong on the apt server where the package is being created, but searching for these errors yields no information beyond the "fix" suggested in the question I linked, and the only other source of information I could find also did not help (here). So I am asking for information: What is the actual problem? What causes the problem? What can fix the problem? I hope this question is well formatted; if there is a problem or information missing, I can move to chat.
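    For reference (not part of the post), the usual client-side reset is to clear the cached lists, and on the repository side this error typically means a stanza in the generated Packages index is empty or truncated. A sketch of both checks, with the repository path as a placeholder:

        # client side: throw away the cached indexes and re-download them
        sudo rm -rf /var/lib/apt/lists/*
        sudo apt-get update
        # repository side (hypothetical path): regenerate the index and sanity-check the stanzas
        dpkg-scanpackages /path/to/repo/pool /dev/null | gzip -9c > /path/to/repo/Packages.gz
        zcat /path/to/repo/Packages.gz | grep -c '^Package:'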

    Read the article

  • Amazon EC2, fastest way to get a node into an existing cluster

    - by imaginative
    I'm new to Amazon AWS. A lot of the time I hear about folks spawning instances and almost instantly putting them behind a load balancer and into an existing cluster. In the traditional world of managed machines, this would involve provisioning hardware, installing an OS, configuring the network on the machine and, once the network is available, using a tool of your choice such as CFEngine, Puppet or Chef to bootstrap the machine based on its class. It seems like there are "shortcuts" that can get a server of a particular class up and running in Amazon EC2. If I have a particular stack running on my server, such as Erlang, Tomcat 6, etc., what's the fastest way to get these up and running and hooked into Amazon's load balancer? From the network, to the software stack, to kernel tuning? Is it a combination of creating an AMI and then running a tool like Puppet against the new instance? Any ideas?
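    One common pattern (a sketch under my own assumptions, not from the question): bake an AMI with the base stack already installed, then pass a small user-data script at launch so cloud-init finishes the node-specific bootstrap, for example by running Puppet. The hostname below is a placeholder:

        #!/bin/bash
        # user-data handed to the instance at launch; cloud-init runs it on first boot
        apt-get update
        apt-get install -y puppet
        # point the agent at the (hypothetical) puppetmaster, which assigns this node's class
        puppet agent --server puppetmaster.example.com --waitforcert 60 --test

    Once the instance reports healthy, registering it with the Elastic Load Balancer is a single API call or console action, so the time from launch to serving traffic is mostly the length of this script.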

    Read the article

  • Google Chrome IE Tab login pages

    - by Jeff Storey
    Hi, I'm using Google Chrome, and for certain sites I need to use IE. I've installed IE Tab Classic, but I've noticed that when I open pages that require an Active Directory popup login, Chrome will prompt me for the username/password and then switch over to IE. IE then always shows a message indicating that a connection to the page could not be made, and I have to press the "Refresh the page" link, get prompted again for the username/password (this time inside IE), and only then will the login work. Does anyone know why this happens and how I can log in just once? Thanks, Jeff

    Read the article

  • Working with multiple interfaces on a single mock.

    - by mehfuzh
    Today, I will cover a very simple topic which can be useful in cases where we want to mock different interfaces on our expected mock object. Our target interface is simple and looks like this:

        public interface IFoo : IDisposable
        {
            void Do();
        }

    Now, as we can see, our target interface implements IDisposable; in a normal class the language rules would require us to implement that member as well [no doubt about it]. Whether or not there are more complex cases, we want to ensure that, rather than needing an extra call (..As()) or extra constructs to prepare it for us, we can do it in the simplest way possible. Therefore, keeping that in mind, first we create a mock of IFoo:

        var foo = Mock.Create<IFoo>();

    Then, as we are interested in IDisposable, we simply do:

        var iDisposable = foo as IDisposable;

    Finally, we proceed with our existing mock code. Considering the current context, I will check whether the Dispose method has invoked our mock code successfully:

        bool called = false;

        Mock.Arrange(() => iDisposable.Dispose()).DoInstead(() => called = true);

        iDisposable.Dispose();

        Assert.True(called);

    Further, we assert our expectation as follows:

        Mock.Assert(() => iDisposable.Dispose(), Occurs.Once());

    Hopefully that will help a bit, and stay tuned. Enjoy!!

    Read the article

  • OpenWrt Backfire 10.03 Frequently Becoming Unresponsive (Bridged Client)

    - by Christopher Parker
    I have a Linksys WRT54G version 2 that I've flashed with OpenWrt Backfire 10.03. It's acting as a bridged client using the wl.o driver to give me network access in my home office, which is in a far corner of my house in a position that would make it exceedingly difficult to fish network cabling in through the walls. I have three network-ready devices attached to the device that don't currently support WiFi, including a networked printer. Ever since I migrated from WhiteRussian, which was also set up as a bridged client, to Backfire, the device has been becoming unresponsive, as though the OS itself has crashed or frozen. The WLAN light becomes completely solid and the LAN lights stay mostly solid, blipping off and then back on again maybe once a second or so. They all blink more or less in unison. Is there some way I can diagnose why this is happening so I can fix it? Right now, the only way to fix it is to unplug the device and plug it back in.
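    Since a frozen router keeps no history of its own, one cheap diagnostic (not from the question; the IP is a placeholder for a PC running a syslog listener) is to ship the log off the device so the last messages before a hang survive:

        # send OpenWrt's syslog to a remote host on the LAN
        uci set system.@system[0].log_ip=192.168.1.10
        uci commit system
        reboot
        # after it comes back up, the in-memory log is still readable locally too
        logread | tail -n 50

    If the last lines before each freeze point at the wl driver, that narrows the problem to the proprietary wireless module rather than Backfire itself.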

    Read the article

  • MySQL on a laptop for remote workers - MyISAM keeps corrupting

    - by Jonathon
    We have an application that is used by remote, mobile workers. It installs WAMP (Server2Go) on a laptop and uses MySQL to store data locally. All tables are MyISAM. Once a day, the workers sync the database to our central server via HTTP scripts that query the data and post it to our site. The problem is that many of these laptop database tables are continually becoming corrupted. It appears that MySQL acts as if it saves the information (I don't get any query errors), but the table gets corrupted. I have to repair the tables constantly (which removes several rows of data in the process). Does anyone have any ideas about how to work around this problem? Would it be wise to switch to InnoDB on the laptops? How about a different database system altogether? I have looked at MySQL Embedded, but it appears to be the same engine as regular MySQL.
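    Two things commonly tried here (a sketch, not from the question; the table name is a placeholder): moving the laptop tables to InnoDB, which recovers from abrupt power loss via its transaction log, and checking tables before the daily sync:

        -- convert a laptop table from MyISAM to the crash-safe InnoDB engine (repeat per table)
        ALTER TABLE sync_data ENGINE=InnoDB;
        -- confirm the engine afterwards
        SHOW TABLE STATUS WHERE Name = 'sync_data';
        -- before syncing, a quick integrity check
        CHECK TABLE sync_data;

    Improper shutdowns of the laptops (lid closed, battery pulled) are the usual cause of MyISAM corruption, so InnoDB plus making sure Server2Go shuts MySQL down cleanly tends to address it.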

    Read the article

  • NginX & PHP-FPM, random 502.

    - by pestaa
    2010/09/19 14:52:07 [error] 1419#0: *10220 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: [...], server: [...], request: "POST /[...] HTTP/1.1", upstream: "fastcgi://unix:/server/php-fpm.sock:", host: "[...]", referrer: "[...]"
    This is the error I'm receiving randomly. 95% of the time my setup works perfectly, but once in a while I get a 502 for 3-4 subsequent requests. I'm using a Unix socket between the server and the PHP process, as you can see, and have also set up the FastCGI params (SCRIPT_FILENAME) etc. correctly. What can I do to strengthen the connection between these services? Thank you very much in advance.
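    A "Connection reset by peer" from the upstream usually means the PHP-FPM worker serving the request died or was recycled mid-request. A sketch (my own, with placeholder values) of the pool and proxy settings that are commonly checked first:

        ; php-fpm pool settings (placeholder values to tune against real memory and traffic)
        pm = dynamic
        pm.max_children = 20              ; too high and workers get killed for memory, too low and requests queue
        pm.max_requests = 500             ; recycle workers periodically to contain leaks instead of letting them crash
        request_terminate_timeout = 60s   ; kill runaway scripts cleanly rather than by crash

        ; corresponding nginx directive (in the location block): fastcgi_read_timeout 60;

    The PHP-FPM log (including any segfault entries in it) is usually the quickest confirmation of which side dropped the socket.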

    Read the article

  • Which language meets my needs? [closed]

    - by Gerald Goward
    I am a junior C# developer and have been working for half a year now. In my company I work on enterprise projects, and after doing that for quite some time I have realized that I don't like enterprise projects. I have my own browser game written in PHP + MySQL with some simple HTML + CSS, and it currently has 300 active players (those who have entered the game at least once in the last 5 days) :) After thinking about it for quite some time, I understand that I am interested in: 1) web development, 2) standalone programs (but not enterprise ones), and 3) development for mobile platforms, Android/iOS, which is also nice. The 1st and 2nd categories are what I want most; Android/iOS is good too. I am NOT interested in big systems that are hard to integrate, and I am not interested in enterprise systems. In the future I would like to start my own business/projects: create my own products or/and build a small programmers' company to create and release our own products. Please tell me which programming language(s)/technologies you would advise for this. Thanks a lot! UPD: This is NOT a "which language is better" question or a flame/holy-war topic, since I am asking for a language that suits my EXACT needs. I believe C++ is better for low-level coding, while PHP is good for web development and Objective-C is made for iOS. I am still a newbie at programming, so please don't hate me.

    Read the article

  • Relationship between Repository and Unit of Work

    - by NullOrEmpty
    I am going to implement a repository, and I would like to use the UoW pattern, since the consumer of the repository could perform several operations and I want to commit them at once. After reading several articles on the matter, I still don't get how to relate these two elements; depending on the article, it is done one way or another. Sometimes the UoW is something internal to the repository:

        public class Repository
        {
            UnitOfWork _uow;

            public Repository()
            {
                _uow = IoC.Get<UnitOfWork>();
            }

            public void Save(Entity e)
            {
                _uow.Track(e);
            }

            public void SubmitChanges()
            {
                SaveInStorage(_uow.GetChanges());
            }
        }

    And sometimes it is external:

        public class Repository
        {
            public void Save(Entity e, UnitOfWork uow)
            {
                uow.Track(e);
            }

            public void SubmitChanges(UnitOfWork uow)
            {
                SaveInStorage(uow.GetChanges());
            }
        }

    Other times, it is the UoW that references the repository:

        public class UnitOfWork
        {
            Repository _repository;

            public UnitOfWork(Repository repository)
            {
                _repository = repository;
            }

            public void Save(Entity e)
            {
                this.Track(e);
            }

            public void SubmitChanges()
            {
                _repository.Save(this.GetChanges());
            }
        }

    How are these two elements related? The UoW tracks the elements that need to be changed, and the repository contains the logic to persist those changes, but... who calls whom? Does the last option make more sense? Also, who manages the connection? If several operations have to be done in the repository, I think using the same connection and even the same transaction is more sound, so maybe putting the connection object inside the UoW, and the UoW inside the repository, makes sense as well. Cheers
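    A common way to reconcile these (a sketch based on my own assumptions, not an answer from the article): make the Unit of Work the outer object that owns the connection and transaction and hands out repositories, so the consumer can group several repository calls under a single commit. OrderRepository and the SQL are placeholders; Entity is the placeholder type from the question:

        using System;
        using System.Data;

        public class UnitOfWork : IDisposable
        {
            private readonly IDbConnection _connection;
            private readonly IDbTransaction _transaction;

            public OrderRepository Orders { get; private set; }  // hypothetical repository exposed by the UoW

            public UnitOfWork(IDbConnection connection)
            {
                _connection = connection;
                _connection.Open();
                _transaction = _connection.BeginTransaction();   // one transaction spans all repository calls
                Orders = new OrderRepository(_connection, _transaction);
            }

            public void Commit() { _transaction.Commit(); }

            public void Dispose()
            {
                _transaction.Dispose();
                _connection.Dispose();
            }
        }

        public class OrderRepository
        {
            private readonly IDbConnection _connection;
            private readonly IDbTransaction _transaction;

            public OrderRepository(IDbConnection connection, IDbTransaction transaction)
            {
                _connection = connection;
                _transaction = transaction;
            }

            public void Save(Entity e)
            {
                IDbCommand cmd = _connection.CreateCommand();
                cmd.Transaction = _transaction;                  // every command joins the UoW's transaction
                cmd.CommandText = "INSERT INTO Orders (...) VALUES (...)";  // placeholder SQL built from e
                cmd.ExecuteNonQuery();
            }
        }

    Consumer code then becomes using (var uow = new UnitOfWork(new SqlConnection(connStr))) { uow.Orders.Save(e); uow.Commit(); }, and the repository never opens its own connection.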

    Read the article

  • Drawing text from update method in XNA

    - by Sigh-AniDe
    I am having a problem drawing the "Game Over!" text once the user is on the last tile. This is what I have. The Update and drawText methods are in a class named turtle:

        public void Update(float scalingFactor, int[,] map, SpriteBatch batch, SpriteFont font)
        {
            if (isMovable(mapX, mapY - 1, map))
            {
                position.Y = position.Y - (int)scalingFactor;
                angle = 0.0f;
                Program.form.direction = "";
                if (mapX == 17 && mapY == 1)  // This is the last tile (tested)
                {
                    Program.form.BackColor = System.Drawing.Color.Red;
                    drawText(batch, font);
                }
            }
        }

        public void drawText(SpriteBatch spritebatch, SpriteFont spriteFont)
        {
            textPosition.X = 200;  // textPosition is a Vector2
            textPosition.Y = 200;
            spritebatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend);
            spritebatch.DrawString(spriteFont, "Game Over!!!", textPosition, Color.Red);
            spritebatch.End();
        }

    This Update is called from the Game1 class:

        protected override void Update(GameTime gameTime)
        {
            // Allows the game to exit
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                this.Exit();
            turtle.Update(scalingFactor, map, spriteBatch, font);
            base.Update(gameTime);
        }

    I have also loaded the font content in LoadContent:

        font = Content.Load<SpriteFont>("fontType");

    What am I doing wrong? Why does the text not show on game completion? If I call turtle.draw() in the main Draw method, the "Game Over" text stays on screen from the beginning. What am I missing? Thanks
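    A common fix (a sketch, not the asker's final code) is to record the game-over state in Update and do all drawing from Draw, gated on that flag; the field and method names here are my own:

        // inside the turtle class
        private bool gameOver = false;

        public void Update(float scalingFactor, int[,] map)
        {
            // ...existing movement logic...
            if (mapX == 17 && mapY == 1)      // the last tile, as in the question
                gameOver = true;
        }

        // called from Game1.Draw(), between spriteBatch.Begin() and spriteBatch.End()
        public void DrawGameOver(SpriteBatch spriteBatch, SpriteFont font)
        {
            if (gameOver)
                spriteBatch.DrawString(font, "Game Over!!!", new Vector2(200, 200), Color.Red);
        }

    That way the text is drawn every frame once the flag is set, instead of for a single frame from inside Update.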

    Read the article

  • Eventtriggers frequency

    - by holian
    Masters, I am trying to set up some event tasks on Windows Server 2003, following this tutorial: http://www.petri.co.il/how-to-use-eventtriggersexe-to-send-e-mail-based-on-event-ids.htm My problem is that when I set up an event such as "if event ID 528 appears in the security log, then send an e-mail", the event trigger fires the task continuously and I get the mail over and over. Any suggestions on how to set up eventtriggers.exe to send the e-mail only once after the event occurs in the event log? Thank you.
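    For reference, a trigger created with eventtriggers.exe runs its task on every matching event by design. A common workaround (a sketch, not from the tutorial) is to have the triggered script guard itself with a marker file so the mail goes out only once until you reset it; the script path and filenames are placeholders:

        eventtriggers /create /tr "Mail_on_528" /l security /eid 528 /tk "c:\scripts\mail528.cmd"

        rem c:\scripts\mail528.cmd (hypothetical): send the mail only the first time
        @echo off
        if exist c:\scripts\mail528.sent goto :eof
        echo sent> c:\scripts\mail528.sent
        rem call your mail sending tool here (e.g. blat.exe or a script)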

    Read the article

  • Ubuntu 13.10 installer making no changes to partition, even after completing, didn't yesterday

    - by dragonhart6505
    Trying to install Ubuntu 13.10 (x64 package) on an HP ProBook 4430s from a USB stick made with UNetbootin.
    Intel Celeron B810 dual-core x64, 1.6 GHz
    4 GB RAM
    Intel HD Graphics 2000
    320 GB HDD, 3 partitions (1 with backup files, 40 GB; 2 Win7 partitions that were dual-boot but no longer boot after attempting to install, 55 GB and 222 GB)
    I am fine with losing the data on the 222 GB partition, but the installer only shows the 55 GB and the 222 GB partitions, and the 222 GB one is not really 222 GB: it includes the 40 GB backup. Whatever, I went through with the installation anyway; the files can be replaced (just backed-up games anyway). The installation appears to run without a hitch on the now 222 GB/262 GB partition, formatted to ext4 by the installer itself. It asks to reboot to begin using the system. Upon rebooting, I get the GNU GRUB boot selection screen, press Enter on "Ubuntu", and get a "Gave up booting from root..." error, or something like that. I reboot and load the "Try without installing" option from the USB. Once booted, nothing has changed! All 3 partitions are still present, all files intact. But now I can't boot my 55 GB Win7 partition. EVERYTHING in the "Try..." environment works perfectly: Bluetooth, WiFi, display adapter, SD card reader, HDMI out, DVD drive, USB ports; it even reads correct battery data. Help?

    Read the article

  • MySQL Database synchronizing with local and remote with c#

    - by Neo
    I've posted this here as it's more of a MySQL question than a C# one. I have written some software that runs a local instance of MySQL when it first starts. Once MySQL is up, I would like to synchronize the data between one remote database table and the local database table that the software uses (it shouldn't sync any other databases/tables, as there are a lot of them). I have replication set up to synchronize the entire database to another server, which works, except that if the server goes down the replication never comes back up; based on that, I don't think replication will work here, since closing the software also shuts down MySQL. So what would be the best method of synchronizing the remote and local databases?
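    One replication-free approach (a sketch under my own assumptions; the table and column names are placeholders) is to keep a last_modified timestamp on the one table and copy only newer rows in each direction with Connector/NET:

        using System;
        using MySql.Data.MySqlClient;   // MySQL Connector/NET

        class TableSync
        {
            // Copies rows changed since 'since' from source to destination.
            // Assumes a table app_data(id INT PRIMARY KEY, payload TEXT, last_modified DATETIME).
            static void PushChanges(string sourceConn, string destConn, DateTime since)
            {
                using (var src = new MySqlConnection(sourceConn))
                using (var dst = new MySqlConnection(destConn))
                {
                    src.Open();
                    dst.Open();
                    var read = new MySqlCommand(
                        "SELECT id, payload, last_modified FROM app_data WHERE last_modified > @since", src);
                    read.Parameters.AddWithValue("@since", since);
                    using (var r = read.ExecuteReader())
                    {
                        while (r.Read())
                        {
                            // upsert into the destination; REPLACE keeps the sketch short
                            var write = new MySqlCommand(
                                "REPLACE INTO app_data (id, payload, last_modified) VALUES (@id, @p, @m)", dst);
                            write.Parameters.AddWithValue("@id", r.GetInt32(0));
                            write.Parameters.AddWithValue("@p", r.GetString(1));
                            write.Parameters.AddWithValue("@m", r.GetDateTime(2));
                            write.ExecuteNonQuery();
                        }
                    }
                }
            }
        }

    Running this in both directions at sync time (local to remote, then remote to local) gives a simple last-writer-wins scheme without depending on replication surviving outages.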

    Read the article

  • newbie hard drive upgrade question

    - by musoNic80
    I have an Acer Aspire 3500WLMi laptop. It currently has a 40 GB hard drive which I would like to upgrade. Could someone talk me through the process? I've listed my concerns/queries:
    Can I buy and install any 2.5" SATA or IDE hard drive in this machine?
    Should I buy some sort of USB caddy and clone my existing drive onto the new one via USB, then physically swap the drives over?
    My current disk is partitioned to include a small amount of space for an Ubuntu install. Will a clone keep the current partition sizes, or is it best for me to repartition once I've cloned?
    Many thanks.

    Read the article

  • Is there an application that can do a blue screen effect with a webcam?

    - by Axxmasterr
    Background: this is not the blue screen of death I am speaking of, but the process called "blue screening", which removes a particular colored background from an image so that it can be superimposed on some other video or still picture. If you have ever seen a weatherman stand in front of the map, then you have seen someone using a blue screen technique. I would like to be able to capture video from my webcam, then send that video to a blue screen program which removes the white (or other color) background and inserts a background of my own choosing (think of the dead guy in Freejack who was calling from all the different places on earth). Then, once the image is superimposed, I would like to pipe it into Skype for video conferencing. Anyone have a good way to do this?

    Read the article

  • Google Image Search Quick Fix

    - by Asian Angel
    Are you tired of unneeded webpage loading and extra link clicking just to access an image found using Google Image Search? Now you can jump directly to the image itself with the clickGOOGLEview extension for Google Chrome.
    The Problem
    When you find an image that you like using Google Image Search, you always have to go through extra hassle just to get to the image itself. First an entire webpage loads in your browser, and then you have to click through that irritating "See full size image" link. All that you need is the image, right?
    Problem Fixed
    Once you have installed the clickGOOGLEview extension you will absolutely love the result. Find an image that you like, click the link, and there is your image without any of the hassle or extra link clicking. Big or small, having direct access to the image is how it should have been from the beginning.
    Conclusion
    The clickGOOGLEview extension does one thing and does it extremely well: it gets you to those images without the extra hassle or additional link clicking.
    Links
    Download the clickGOOGLEview extension (Google Chrome Extensions)

    Read the article

  • What packages can I use to roll out a complete store with customer service?

    - by acidzombie24
    I haven't bought the server yet (possibly a VPS), but I am thinking about using Linux with Apache and Mono for ASP.NET support. I don't know much about this. What packages can I use together to have a store with customer support? What I would like is:
    1) A store to purchase one item (it's digital). More may be possible, but they are likely to be add-ons which need the first item.
    2) Have the store send messages to my app, which will generate a registration key and deliver the digital item.
    3) Create an account for that customer on a support site used for tickets.
    4) A forum. I'd like a private forum for customers, and I may want their accounts to be disabled when their product license has expired.
    5) A mailing list. I'd like non-customers to be able to subscribe to a list, and I'd like to know which subscribers are customers so I can send different emails to each group if desired.
    Are there packages that make any of these easy? I don't mind writing glue code if I need to, but I haven't tried any store, mailing list, or ticket system, and have only installed a forum once, long ago. My mail server will likely be through Google Apps.

    Read the article

  • Read-only lock on a SharePoint site collection, or Why can't I edit anymore?

    - by PeterBrunone
    Monday morning, the calls started.  For some reason, long-time users were unable to edit list items.  I figured we had a permissions issue, so I popped in to look at the Site Settings -- and found that I couldn't.  A quick trip to Central Administration showed that I was still listed as a Site Collection Administrator, but I had no power at all on the site collection in question. A quick glance at the logs told me that the server had recently shut down unexpectedly (this is a Hyper-V virtual machine).  Apparently, in the confusion, somehow SharePoint decided to lock the site collection as Read Only.  This can be remedied in one of two ways:
    1) In Central Administration, go to Application Management -> SharePoint Site Management -> Site collection quotas and locks.  Once you have arrived, select the correct application and site collection, and you will have the opportunity to view and set the lock status of the collection (it most likely will be set to "Read-only", and you'll want to move that radio button to "Not locked").
    2) Fire up stsadm and issue the following command:
        stsadm -o setsitelock -url http://myportalsitecollection -lock none

    Read the article

  • Detecting Browser Types?

    - by Mike Schinkel
    My client has asked me to implement a browser detection system for the admin login with the following criteria; allow these:
    Internet Explorer 8 or newer
    Firefox 3.6 or newer
    Safari 5 or newer, for Mac only
    Everything else should be blocked, and they want me to implement a page telling the user which browser they need to upgrade or switch to in order to access the CMS. Basically I need to know the best way to detect these browsers with PHP, distinct from any other browsers, and I've read that browser sniffing per se is not a good idea. The CMS is WordPress, but this is not a WordPress question (FYI, I am a moderator on the WordPress Answers site). Once I figure out the right technique to detect the browser, I'm fully capable of making WordPress react as my client wants; I just need to know the best ways with PHP (or, worst case, jQuery, though I much prefer to do it on the server) to figure out what works and what doesn't. Please understand that "Don't do it" is not an acceptable answer for this question. I know this client too well, and when they ask me to implement something I need to do it (they are a really good client, so I'm happy to do what they ask). Thanks in advance for your expertise. -Mike
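    A minimal server-side sketch of such an allow-list, under my own assumptions (the User-Agent can be spoofed or missing, the version regexes are simplified, and the redirect target is a hypothetical page):

        <?php
        // Rough allow-list based on the User-Agent header.
        function browser_is_allowed($ua) {
            if (preg_match('/MSIE (\d+)/', $ua, $m)) {               // Internet Explorer 8+
                return (int)$m[1] >= 8;
            }
            if (preg_match('/Firefox\/(\d+\.\d+)/', $ua, $m)) {      // Firefox 3.6+
                return version_compare($m[1], '3.6', '>=');
            }
            if (strpos($ua, 'Macintosh') !== false &&                // Safari 5+ on Mac, excluding Chrome
                strpos($ua, 'Chrome') === false &&
                preg_match('/Version\/(\d+)/', $ua, $m)) {
                return (int)$m[1] >= 5;
            }
            return false;  // everything else gets the upgrade page
        }

        if (!browser_is_allowed($_SERVER['HTTP_USER_AGENT'])) {
            wp_redirect(home_url('/unsupported-browser/'));  // hypothetical WordPress page
            exit;
        }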

    Read the article

  • MySQL keeps crashing OS server.. Please help adjust my.ini!

    - by TruMan1
    I have MySQL 5.0 installed on a Windows 2008 machine (3 GB RAM). My server crashes on a regular basis (almost once a day) with this error:
        Changed limits: max_open_files: 2048 max_connections: 800 table_cache: 619
    I did not use the heavy InnoDB .ini file, although I am now rethinking whether I should have. I am worried that big configuration changes will make my current sites stop working. What should I do? Here are my current ini settings:
        default-character-set=latin1
        default-storage-engine=INNODB
        max_connections=800
        query_cache_size=84M
        table_cache=1520
        tmp_table_size=30M
        thread_cache_size=38
        myisam_max_sort_file_size=100G
        myisam_sort_buffer_size=30M
        key_buffer_size=129M
        read_buffer_size=64K
        read_rnd_buffer_size=256K
        sort_buffer_size=256K
        innodb_additional_mem_pool_size=6M
        innodb_flush_log_at_trx_commit=1
        innodb_log_buffer_size=3M
        innodb_buffer_pool_size=250M
        innodb_log_file_size=50M
        innodb_thread_concurrency=10
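    For orientation only (a sketch with placeholder numbers, not a tuned recommendation): the "Changed limits" warning means MySQL lowered its own settings to fit the open-files limit, and on a 3 GB box the per-connection buffers multiplied by max_connections, plus the InnoDB pool, are what usually get revisited first:

        # rough starting points for a mostly-InnoDB workload on a 3 GB Windows machine
        max_connections=300              # 800 threads x per-thread buffers can exhaust RAM
        table_cache=1024
        innodb_buffer_pool_size=512M     # main InnoDB cache; leave headroom for the OS and other services
        innodb_log_file_size=64M         # changing this requires a clean shutdown and removing the old ib_logfile* files
        key_buffer_size=64M              # only MyISAM indexes use this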

    Read the article

  • Meet the New Windows Azure

    - by ScottGu
    Today we are releasing a major set of improvements to Windows Azure.  Below is a short summary of just a few of them:

    New Admin Portal and Command Line Tools

    Today’s release comes with a new Windows Azure portal that will enable you to manage all features and services offered on Windows Azure in a seamless, integrated way.  It is very fast and fluid, supports filtering and sorting (making it much easier to use for large deployments), works on all browsers, and offers a lot of great new features – including built-in VM, Web site, Storage, and Cloud Service monitoring support. The new portal is built on top of a REST-based management API within Windows Azure – and everything you can do through the portal can also be programmed directly against this Web API. We are also today releasing command-line tools (which, like the portal, call the REST Management APIs) to make it even easier to script and automate your administration tasks.  We are offering both a PowerShell (for Windows) and Bash (for Mac and Linux) set of tools to download.  Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license.

    Virtual Machines

    Windows Azure now supports the ability to deploy and run durable VMs in the cloud.  You can easily create these VMs using a new Image Gallery built into the new Windows Azure Portal, or alternatively upload and run your own custom-built VHD images. Virtual Machines are durable (meaning anything you install within them persists across reboots) and you can use any OS with them.  Our built-in image gallery includes both Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions).  Once you create a VM instance you can easily Terminal Server or SSH into it in order to configure and customize the VM however you want (and optionally capture your own image snapshot of it to use when creating new VM instances).  This provides you with the flexibility to run pretty much any workload within Windows Azure. The new Windows Azure Portal provides a rich set of management features for Virtual Machines – including the ability to monitor and track resource utilization within them.  Our new Virtual Machine support also enables the ability to easily attach multiple data disks to VMs (which you can then mount and format as drives).  You can optionally enable geo-replication support on these – which will cause Windows Azure to continuously replicate your storage to a secondary data-center at least 400 miles away from your primary data-center as a backup. We use the same VHD format that is supported with Windows virtualization today (and which we’ve released as an open spec), which enables you to easily migrate existing workloads you might already have virtualized into Windows Azure.  We also make it easy to download VHDs from Windows Azure, which provides the flexibility to easily migrate cloud-based VM workloads to an on-premise environment.  All you need to do is download the VHD file and boot it up locally; no import/export steps are required.

    Web Sites

    Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale up as your traffic grows. You can create a new web site in Azure and have it ready to deploy to in under 10 seconds. The new Windows Azure Portal provides built-in administration support for web sites – including the ability to monitor and track resource utilization in real time. You can deploy to web sites in seconds using FTP, Git, TFS and Web Deploy.  We are also releasing tooling updates today for both Visual Studio and WebMatrix that enable developers to seamlessly deploy ASP.NET applications to this new offering.  The VS and WebMatrix publishing support includes the ability to deploy SQL databases as part of web site deployment – as well as the ability to incrementally update database schema with a later deployment. You can integrate web application publishing with source control by selecting the “Set up TFS publishing” or “Set up Git publishing” links on a web site’s dashboard. Doing so will enable integration with our new TFS online service (which enables a full TFS workflow – including elastic build and testing support), or create a Git repository that you can reference as a remote and push deployments to.  Once you push a deployment using TFS or Git, the deployments tab will keep track of the deployments you make, and enable you to select an older (or newer) deployment and quickly redeploy your site to that snapshot of the code.  This provides a very powerful DevOps workflow experience. Windows Azure now allows you to deploy up to 10 web sites into a free, shared/multi-tenant hosting environment (where a site you deploy will be one of multiple sites running on a shared set of server resources).  This provides an easy way to get started on projects at no cost. You can then optionally upgrade your sites to run in a “reserved mode” that isolates them so that you are the only customer within a virtual machine, and you can elastically scale the amount of resources your sites use – allowing you to increase your reserved instance capacity as your traffic scales. Windows Azure automatically handles load balancing traffic across VM instances, and you get the same, super fast deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis – which allows you to scale your resources up and down to match only what you need.

    Cloud Services and Distributed Caching

    Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scale to extremely large deployments.  Previously we referred to this capability as “hosted services” – with this week’s release we are now referring to this capability as “cloud services”.  We are also enabling a bunch of new features with them.

    Distributed Cache

    One of the really cool new features being enabled with cloud services is a new distributed cache capability that enables you to set up and use a low-latency, in-memory distributed cache within your applications.  This cache is isolated for use just by your applications, and does not have any throttling limits. This cache can dynamically grow and shrink elastically (without you having to redeploy your app or make code changes), and supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more). In addition to supporting the AppFabric Cache Server API, it also now supports the Memcached protocol – allowing you to point code written against Memcached at it (no code changes required). The new distributed cache can be set up to run in one of two ways:
    1) Using a co-located approach.  In this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and the cache joins that memory into one large distributed cache.  Any data put into the cache by one role instance can be accessed by other role instances in your application – regardless of whether the cached data is stored on it or another role.  The big benefit of the “co-located” option is that it is free (you don’t have to pay anything to enable it) and it allows you to use what might otherwise have been unused memory within your application VMs.
    2) Alternatively, you can add “cache worker roles” to your cloud service that are used solely for caching.  These will also be joined into one large distributed cache ring that other roles within your application can access.  You can use these roles to cache 10s or 100s of GBs of data in-memory very effectively – and the cache can be elastically increased or decreased at runtime within your application.

    New SDKs and Tooling Support

    We have updated all of the Windows Azure SDKs with today’s release to include new features and capabilities.  Our SDKs are now available for multiple languages, and all of the source in them is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure has in particular seen a bunch of great improvements with today’s release, and now includes tooling support for both VS 2010 and the VS 2012 RC. We are also now shipping Windows, Mac and Linux SDK downloads for languages that are offered on all of these systems – allowing developers to develop Windows Azure applications using any development operating system.

    Much, Much More

    The above is just a short list of some of the improvements that are shipping in either preview or final form today – there is a LOT more in today’s release.  These include new Virtual Private Networking capabilities, new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new Data Centers, significantly upgraded network and storage hardware, SQL Reporting Services, new Identity features, support within 40+ new countries and territories, and much, much more. You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com.  You can also watch a live keynote I’m giving at 1pm June 7th (later today) where I’ll walk through all of the new features.  We will be opening up the new features I discussed above for public usage a few hours after the keynote concludes.  We are really excited to see the great applications you build with them.

    Hope this helps,

    Scott
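    As a concrete illustration of the Git publishing flow described above (a sketch; the remote URL and site name are placeholders supplied by the portal once Git publishing is set up):

        # one-time: add the remote shown on the web site's "Set up Git publishing" page
        git remote add azure https://user@mysite.scm.azurewebsites.net/mysite.git
        # every subsequent deployment is just a push of the branch you want live
        git push azure master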

    Read the article
