Search Results

Search found 16499 results on 660 pages for 'off rhoden'.


  • Webcast: Leveraging Mobile And Social Commerce To Deliver A Complete Customer Experience

    - by Michael Hylton
      Mobile and social media are emerging as new channels for customers to interact and transact with brands. Mobile users demand experiences that are relevant and engaging and are designed with the capabilities and constraints of devices in mind. Just having a mobile app or mobile-specific website is not a long-term strategy. Brands must invest in an optimized experience, especially as mobile becomes critical to an overall digital commerce strategy.

      Debating the merits of using Facebook or not is missing the point when it comes to social media. True innovators are thinking beyond the social channel and are building programs that leverage Facebook data to drive conversions and engagement both on and off Facebook. Learn how to be more strategic about mobile and social commerce in this informative editorial webcast.

      Attend this webcast and you will learn:
      - How to leverage mobile and social touchpoints in digital commerce
      - Why having a Facebook page or a mobile app is not enough
      - The benefits of a consistent, personalized and relevant customer experience
      - Strategies for integrating mobile and social into an overall digital commerce strategy

      Featured Speakers:
      - Peter Sheldon, Senior Analyst, eBusiness & Channel Strategy Professionals, Forrester Research
      - Brenna Johnson, Product Manager, Oracle Commerce

      Click here to register.

    Read the article

  • Squid closing the connection on long HTTP GET requests

    - by Rhys
    Hello, When running a database query on a specific external site we use, Squid seems to cut off the connection after a consistent period of time (just over a minute). The query is submitted through a standard web form that uses GET to query their database. Firefox 3 just displays a blank page. Internet Explorer throws a 'Page Cannot Be Displayed' error (tested in v6 and v8). When we perform the same query on the same machine, but bypass the Squid proxy, it works fine. The query takes about two and a half minutes to complete. There are a few timeout settings in Squid, but I honestly don't know which one I should be looking at. Any possible solutions would be much appreciated. Cheers
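
    For reference, these are the squid.conf timeout directives that seem relevant to this kind of cut-off; the values below are what I believe are the documented defaults, so treat them as illustrative rather than a recommendation:

        # how long Squid waits for data from the origin server on an established connection
        read_timeout 15 minutes
        # overall limit on how long Squid will spend trying to satisfy a single request
        forward_timeout 4 minutes
        # how long Squid waits for the client to finish sending the request
        request_timeout 5 minutes
        # how long Squid waits when establishing the TCP connection to the server
        connect_timeout 1 minute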

    Read the article

  • Resolution independence - resize on the fly or ship all sizes?

    - by RecursiveCall
    My game relies heavily on textures of various sizes, with some being full-screen. The game is targeted at multiple resolutions. I found that resizing textures (downsizing) works quite well for this game's art style (it's not pixel art or anything like that). I asked my artist to ensure that all textures at the edges of the screen are created in such a way that they can safely "overflow" off screen; this means that aspect ratio is not an issue. So with no aspect-ratio issues, I figured that I would simply ask my artist to create assets in very high resolution and then resize them down to the appropriate screen resolution. The question is, when and how do I do that? Do I pre-resize everything to common resolutions in Photoshop and package all the assets in the final product (increasing the size of the download the user has to deal with), then select the appropriate asset based on the detected resolution? Or do I ship with the largest set of textures, detect the resolution on load, set a render target, draw all the downsized assets to it, and use that? Or, for the latter, do I use some sort of CPU-side algorithm to resize on game load?
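
    For the render-target option, here is a rough sketch of what I have in mind, assuming an XNA/MonoGame-style API; the texture and size names are only placeholders:

        // downsizes a high-res asset once at load time; 'highResTexture', 'screenWidth', 'screenHeight' are placeholders
        Texture2D DownsizeAtLoad(GraphicsDevice device, SpriteBatch spriteBatch,
                                 Texture2D highResTexture, int screenWidth, int screenHeight)
        {
            var scaled = new RenderTarget2D(device, screenWidth, screenHeight);
            device.SetRenderTarget(scaled);
            device.Clear(Color.Transparent);
            spriteBatch.Begin();
            // let the GPU filter while drawing the big texture into the smaller target
            spriteBatch.Draw(highResTexture, new Rectangle(0, 0, screenWidth, screenHeight), Color.White);
            spriteBatch.End();
            device.SetRenderTarget(null);
            highResTexture.Dispose();   // free the full-size asset once the scaled copy exists
            // note: render target contents can be lost on a device reset and would need recreating
            return scaled;              // RenderTarget2D derives from Texture2D, so it draws like any texture
        }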

    Read the article

  • On the frontier between work and home

    - by MPelletier
    I think we've all been there: you hear someone say "hey, wouldn't it be nice if platform X had feature Y?" You look around (on SO!), and the feature really doesn't exist, even though it would probably be useful in many contexts. So it's pretty generic. Your mind wanders for a bit. "How tough would it be? Well, it'd probably be just a snippet. And an ad-hoc function. And maybe a wrapper." And boom, before you know it, you've spent a dozen hours of your free time implementing a FooFeature that's really neat and generic. The kind of code you might not even have the time to spit-and-polish at work, that would be a bit rushed and not so documented. So now you wonder, "wouldn't this be useful to others?" And you've got your blog, maybe a CodeProject account, and the colleague who asked if FooFeature exists might haphazardly come across that blog entry, had it existed before they told you. On the other hand, there's the NDA. It's sort of vague and general. It doesn't forbid you from coding at home, but it's clear on sharing company code: that's a big NO. But this isn't company code. Or is it? Or will it be? So, what do you do with code (that's more than just a snippet) that you wrote in your off time with universality in mind, but from an idea that came from work, and that will most likely be used at work? Can it be published?

    Read the article

  • Can I temporarily leave other roles active when converting a 2008 R2 server to a hyper-v host?

    - by Eric
    I currently have a Windows 2008 R2 box that I want to convert to a Hyper-V host. Currently I have the following running on it: File Services, Web Server (IIS), and SQL Server 2008 R2 with a few different DBs. In the long term, I'd like to move all of those over to the VMs that will be created. However, in the short term I'd like to leave them running on the server. Are there any complications/problems with leaving them running on the Hyper-V server for a day or two? How long will those services typically go down during the addition of the Hyper-V role? Right now my plan is to back up everything, enable the Hyper-V role, set up VMs, and migrate stuff off of the host. However, if there is a good probability that something may cease to function I might need to migrate everything over to a temporary host before doing the conversion.

    Read the article

  • Am I missing a pattern?

    - by Ryan Pedersen
    I have a class that is a singleton, and off of the singleton are properties that hold the instances of all the performance counters in my application.

        public interface IPerformanceCounters
        {
            IPerformanceCounter AccountServiceCallRate { get; }
            IPerformanceCounter AccountServiceCallDuration { get; }
            // ...
        }

    Above is an incomplete snippet of the interface for the class "PerformanceCounters" that is the singleton. I really don't like the plural part of the name and thought about changing it to "PerformanceCounterCollection", but stopped because it really isn't a collection. I also thought about "PerformanceCounterFactory", but it isn't really a factory either. After failing with these two names and a couple more that aren't worth mentioning, I thought that I might be missing a pattern. Is there a name that makes sense, or a change that I could make towards a standardized pattern, that would help me put some polish on this object and get rid of the plural name? I understand that I might be splitting hairs here, but that is why I thought that the "Programmers" exchange was the place for this kind of thing. If it is not... I am sorry and I will not make that mistake again. Thanks!

    Read the article

  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
     In MySQL 5.6 we continued our development of InnoDB Memcached and completed a few widely desired features that make InnoDB Memcached competitive in more scenarios. Notably, they are: 1) support for multiple table mappings, 2) a background thread to auto-commit long-running transactions, and 3) enhancements in binlog performance. Let's go over each of these features one by one, and in the last section we will go over a couple of internally performed performance tests.

     Support for multiple table mappings

     In our earlier release, all InnoDB Memcached operations were mapped to a single InnoDB table. In real life, users might want to use the InnoDB Memcached feature on different tables. Being able to access different tables at run time, and to have different mappings for different connections, is therefore a very desirable feature, and in this GA release both are possible. We will discuss the key concepts and key steps in using this feature.

     1) The "mapping name" in the "get" and "set" commands

     In order to allow InnoDB Memcached to map to a new table, the user (DBA) is still required to "pre-register" the table(s) in the InnoDB Memcached "containers" table (there is a security consideration behind this requirement). If you would like to know more about the "containers" table, please refer to my earlier blogs on blogs.innodb.com. Once registered, InnoDB Memcached will be able to look up such a table when it is referred to. Each registered table has a unique "registration name" (or mapping_name) corresponding to the "name" field in the "containers" table. To access these tables, the user includes the "registration name" in their get or set commands, in the form "get @@new_mapping_name.key"; the prefix "@@" is required to signal a mapped-table change. The key and the "mapping name" are separated by a configurable delimiter, by default ".". So the syntax is:

         get [@@mapping_name.]key_name
         set [@@mapping_name.]key_name

     or

         get @@mapping_name
         set @@mapping_name

     Here is an example. Let's set up three tables in the "containers" table. The first is a map to the InnoDB table "test/demo_test" with mapping name "setup_1":

         INSERT INTO containers VALUES ("setup_1", "test", "demo_test", "c1", "c2", "c3", "c4", "c5", "PRIMARY");

     Similarly, we set up table mappings for table "test/new_demo" with name "setup_2" and for table "my_database/my_demo" with name "setup_3":

         INSERT INTO containers VALUES ("setup_2", "test", "new_demo", "c1", "c2", "c3", "c4", "c5", "secondary_index_x");
         INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo", "c1", "c2", "c3", "c4", "c5", "idx");

     To switch to table "my_database/my_demo" and get the value corresponding to "key_a", the user issues:

         get @@setup_3.key_a   (this also outputs the value corresponding to key "key_a")

     or simply

         get @@setup_3

     Once this is done, the connection is switched to the "my_database/my_demo" table until another table-mapping switch is requested, so it can continue issuing regular commands such as:

         get key_b
         set key_c 0 0 7

     These DMLs will all be directed to the "my_database/my_demo" table. This also implies that different connections can have different bindings (to different tables).

     2) Delimiter

     For the delimiter "." that separates the "mapping name" and the key value, we also added a configuration option in the "config_options" system table with the name "table_map_delimiter":

         INSERT INTO config_options VALUES("table_map_delimiter", ".");

     So if users want a different delimiter, they can change it in the config_options table.

     3) Default mapping

     Once there are multiple table mappings, there should always be a "default" mapping. We decided that if a mapping named "default" exists, it is chosen as the default mapping; otherwise, the first row of the containers table is chosen as the default. Please note that user tables can be repeated in the "containers" table (for example, if a user wants to access different columns of the table in different settings), as long as they use different mapping/configuration names in the first column, which is enforced by a unique index.

     4) The bind command

     In addition, we extended the protocol and added a bind command. Its usage is fairly straightforward: to switch to the "setup_3" mapping above, you simply issue:

         bind setup_3

     This switches the connection's InnoDB table to "my_database/my_demo". In summary, with this feature you can now access different tables from different sessions, and even within a single connection you can query different tables.

     Background thread to auto-commit long-running transactions

     This feature is related to the "batch" concept discussed in earlier blogs. The "batch" feature lets us batch read and write operations and commit them only after a certain number of calls; the batch sizes are controlled by the configuration parameters "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size". This can significantly boost performance, but it also comes with some disadvantages: you will not be able to view "uncommitted" operations from the SQL end unless you set the transaction isolation level to READ UNCOMMITTED, and certain row locks are held for extended periods of time, which can reduce concurrency.

     To deal with this, we introduced a background thread that auto-commits transactions if they have been idle for a certain amount of time (by default, 5 seconds). The background thread wakes up every second, loops through every "connection" opened by Memcached, and checks for idle transactions. If a transaction has been idle longer than the limit and is not in use, it is committed. The limit is configurable through "innodb_api_bk_commit_interval"; its default value is 5 seconds, the minimum is 1 second, and the maximum is 1073741824 seconds. With the help of this background thread, you no longer need to worry about long-running uncommitted transactions when setting daemon_memcached_w_batch_size and daemon_memcached_r_batch_size to large numbers. It also reduces the number of locks held by long-running transactions, further increasing concurrency.

     Enhancement in binlog performance

     As you may know, binlog operations are not done by the InnoDB storage engine; they are handled in the MySQL layer. In order to support binlog operations through InnoDB Memcached, we have to artificially create some MySQL constructs in order to access the binlog handler APIs. In the previous lab release, for simplicity, we opened and destroyed these MySQL constructs (such as THD) for each operation. This required us to force the "batch" size to 1 whenever the binlog was on, no matter what "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size" were configured to. This put a big restriction on our ability to scale, and the overhead of creating and destroying those constructs also bogged performance down.

     With this release, we made the necessary changes to keep the MySQL constructs for as long as they are valid for a particular connection, so there are no longer repeated and redundant open and close (table) calls. Now, even with the binlog enabled (innodb_api_enable_binlog), we can still batch transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, and thus scale write/read performance. Although there is still overhead that keeps InnoDB Memcached from performing quite as fast as when the binlog is turned off, it is much better than the previous release, and we continue to optimize this area to improve performance as much as possible.

     Performance study

     Amerandra of our System QA team conducted some performance studies on queries issued through our InnoDB Memcached connection and through the plain SQL end, and they show some interesting results. The test was conducted on a "Linux 2.6.32-300.7.1.el6uek.x86_64 ix86 (64)" machine with 16 GB of memory, Intel Xeon 2.0 GHz X86_64 CPUs (2 CPUs, 4 cores each), and 2 RAID disks (1027 GB, 733.9 GB). Results are shown in the following tables.

     Table 1: Performance comparison on Set operations

         Connections | 5.6.7-RC Memcached plugin, Set (QPS), memcached-threads=8*** | 5.6.7-RC* Set (QPS)** | X faster
         8           | 30,000                                                       | 5,600                 | 5.36
         32          | 59,000                                                       | 13,000                | 4.54
         128         | 68,000                                                       | 8,000                 | 8.50
         512         | 63,000                                                       | 6,800                 | 9.23

     * mysql-5.6.7-rc-linux2.6-x86_64
     ** The "set" operation as implemented in InnoDB Memcached involves a couple of DMLs: it first queries the table to see whether the "key" exists; if it does not, the new key/value pair is inserted, and if it does exist, the "value" field of the matching row (matched by key) is updated. So for the SQL side of the comparison above, this logic is a precompiled stored procedure, and the query simply executes that procedure.
     *** added "--daemon_memcached_option=-t8" (the default is 4 threads)

     So we can see that with this "set" workload, InnoDB Memcached runs 4.5 to 9 times faster than the MySQL server.

     Table 2: Performance comparison on Get operations

         Connections | 5.6.7-RC Memcached plugin, Get (QPS), memcached-threads=8 | 5.6.7-RC Get (QPS) | X faster
         8           | 42,000                                                    | 27,000             | 1.56
         32          | 101,000                                                   | 55,000             | 1.83
         128         | 117,000                                                   | 52,000             | 2.25
         512         | 109,000                                                   | 52,000             | 2.10

     With the "get" query (or the select query), Memcached performs 1.5 to 2 times faster than normal SQL.

     Summary

     In summary, we added several much-desired features to InnoDB Memcached in this release, allowing users to operate on different tables through the Memcached interface. We now also provide a background commit thread for long-running idle transactions, so users can configure large batch sizes for writes and reads without worrying about large numbers of row locks being held or not being able to see (uncommitted) data. We also greatly enhanced performance when the binlog is enabled. We will continue working on both performance and functionality to make InnoDB Memcached a good demo case for our InnoDB APIs.

     Jimmy Yang, September 29, 2012

    Read the article

  • WPF Applications – Handling the Unhandled

    - by David Totzke
      Instead of just letting your application crash, you can attach a handler to the DispatcherUnhandledException event and one to AppDomain.CurrentDomain.UnhandledException. You wire these up in the code-behind of your application, which by default is App.xaml.cs. You can log these errors or throw up a message Don Box and tell the user what happened. Then you shut down the app gracefully. You shut it down because something bad happened that you weren't expecting, and at this point there is no guarantee as to the state of the stack or memory or anything really. All bets are off.

      If, on the other hand, the method for the UnhandledException is empty, and the DispatcherUnhandledException handler ends up in a call to a method called LogError(), and the LogError() method is FUCKING EMPTY, and you just swallow the exceptions and keep on running, then, not so much. I spent nearly a day trying to track down a bug that would have been obvious had something been logged or if it just crashed. It's my own fault I suppose. I knew these were hooked up. I just never suspected that there wouldn't be any implementation at all. Live and learn.

      Customs Man at Heathrow: Anything to declare, Sir?
      Jekyll and Hyde: Man has not evolved an inch from the slime that spawned him.
      Customs Man at Heathrow: Very Good, Sir.
      I tend to agree.

      Dave
      Just because I can…
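
      For reference, a minimal sketch of wiring these up in App.xaml.cs looks something like this; LogError here is a placeholder that should actually write somewhere durable:

          using System;
          using System.Windows;
          using System.Windows.Threading;

          public partial class App : Application
          {
              protected override void OnStartup(StartupEventArgs e)
              {
                  base.OnStartup(e);

                  // exceptions thrown on the UI (dispatcher) thread
                  DispatcherUnhandledException += (sender, args) =>
                  {
                      LogError(args.Exception);
                      MessageBox.Show("Something unexpected happened. The application will now close.");
                      args.Handled = true;   // keep WPF from rethrowing while we shut down
                      Shutdown(-1);          // shut down gracefully; the app state is no longer trustworthy
                  };

                  // everything else (worker threads, finalizers, ...)
                  AppDomain.CurrentDomain.UnhandledException += (sender, args) =>
                      LogError(args.ExceptionObject as Exception);
              }

              private static void LogError(Exception ex)
              {
                  // placeholder: write to a file, the event log, your logging framework - anything but an empty body
              }
          }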

    Read the article

  • What happens when Oracle's Enterprise Single-Sign-On database goes down? [migrated]

    - by Unai
    We're working on setting up Oracle's Enterprise Single Sign-On with High Availability. At the moment every component provides HA except our database backend (i.e. we have just one instance). While conducting some kick-the-plug tests we learnt that the ESSO system keeps working even with the database turned OFF. This was a nice surprise, but now we need to understand the implications of a database failure; sure, the sessions might be handled on the application servers and the policies might have been cached, but... for how long? How big is this cache? What is the role of the database? I would appreciate it if anyone shares her/his experience and/or points to documentation that covers this. Thank you so much.

    Read the article

  • Can I put /tmp and /var/log in a ramdisk on OS X?

    - by kbyrd
    For non-critical Linux systems, I often move things like /tmp and /var/log to tmpfs to save on some disk writing. I've been doing this for a year or so, and if I ever need the logs across reboots, I just comment out a line in /etc/fstab and then start debugging. In any case, I would like to do the same thing on OS X. I've seen posts on creating a ramdisk for OS X, but I'm looking for a more permanent solution. I always want /tmp and /var/log mounted in a ramdisk, with the ability to turn that off with a bit of command-line editing in vi if I have to.
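
    For reference, the way I currently create a ramdisk by hand looks roughly like this (the size and volume name are arbitrary); the part I'm still missing is making it permanent and mounting it over /tmp and /var/log at boot, e.g. from a startup script:

        # 1048576 512-byte sectors = 512 MB; hdiutil prints the /dev/diskN it creates
        DEV=$(hdiutil attach -nomount ram://1048576)
        diskutil erasevolume HFS+ "RamDisk" $DEV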

    Read the article

  • Backup to disk, encrypted, without any installed local software

    - by user30064
    Hi, OK, this is a tough one, and it might not even be possible, but no harm in asking I guess. I have a Buffalo TeraStation file server that I use for network-attached storage. After a couple of phone calls to customer services I realised that there is no way to back up to disk encrypted. In effect, I would be carrying unencrypted company data off-site daily, which is obviously unacceptable. I had a go at TrueCrypt, EncFS, and a few others, and as far as I could see all of them require that you install some software on the machine that is to use the file system, which makes sense. Unfortunately the firmware on the TeraStation is closed and I cannot install any software (and I can't build from source either, since Buffalo didn't include a compiler). Are there any ways to copy files to disk such that, as soon as they are written to the disk, they are transparently encrypted, without having to install additional software? I'm not sure it matters too much, but the TeraStation firmware is Linux-based, although, as I mentioned, closed. Many thanks, Andreas

    Read the article

  • Updated Intel display driver causing errors when booting

    - by cdysthe
    I upgraded to Graphics Installer 1.0.6 on my Ubuntu 14.04 and installed the drivers using the Intel Graphics Installer. The laptop is Intel Ivy Bridge powered with Intel HD graphics. It's Optimus, but I have disabled the Nvidia card in the BIOS. The Intel Graphics Installer installs the package i915-3.15-3.13-dkms.deb, which I assume is the updated driver. It causes a bunch of error messages when I boot. Here are the relevant errors from dmesg:

        [ 7.206151] drm: module verification failed: signature and/or required key missing - tainting kernel
        [ 7.208045] drm: module has bad taint, not creating trace events
        [ 7.336470] fb: conflicting fb hw usage inteldrmfb vs VESA VGA - removing generic driver
        [ 7.393854] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013)
        [ 7.393855] [drm] Driver supports precise vblank timestamp query.
        [ 7.393921] vgaarb: device changed decodes: PCI:0000:00:02.0,olddecodes=io+mem,decodes=io+mem:owns=io+mem
        [ 7.505798] [drm] GMBUS [i915 gmbus dpb] timed out, falling back to bit banging on pin 5
        [ 7.507233] init: Failed to obtain startpar-bridge instance: Unknown parameter: INSTANCE
        [ 7.944183] [drm:cpt_serr_int_handler] ERROR PCH transcoder A FIFO underrun
        [ 8.368479] i915 0000:00:02.0: fb0: inteldrmfb frame buffer device
        [ 8.368480] i915 0000:00:02.0: registered panic notifier
        [ 8.818416] [drm] Enabling RC6 states: RC6 on, RC6p on, RC6pp off

    What could the problem be, and will it affect performance? I tried to remove the package and the errors went away, but then I assume I'm running an older driver?

    Read the article

  • Policy Administration is the Top 2011 IT Priority for Insurers

    - by helen.pitts(at)oracle.com
    The current issue of Insurance Networking News includes an interesting column by Novarica's Matt Josefowicz. Recent research by the firm revealed that policy administration replacement or extension is the most common strategic IT project for insurers this year. The article goes on to note that insurers are keenly focused on the business capabilities that can be delivered once the system is in production, as well as the ability to leverage agile development methodologies and true business/IT collaboration during implementation.

    The results are not too surprising given that policy administration is a mission-critical system for life and annuity insurers. As Josefowicz notes, "Core systems are called core for a reason--they are at the heart of the insurer's ability to function. Replacing them is not to be done lightly, but failing to replace them can mean diminishing the ability to compete or function effectively as a company." Insurers can no longer rely on inflexible policy administration systems that impede their ability to rapidly configure and bring to market innovative new products, add riders, support changing business processes and take advantage of market opportunities. The ability to leverage the policy administration system to better service customers and distribution channels by providing real-time access to policy information throughout the policy lifecycle is also critical to sustain loyalty and further fuel growth.

    Insurers can benefit from a modern, adaptive policy administration system, like Oracle Insurance Policy Administration for Life and Annuity. You can learn more about this highly advanced system, unmatched for its flexible, rules-based configurability, performance and extensibility, as well as global market industry trends, by viewing a complimentary, on-demand Webcast, Adapt, Transform and Grow: Accelerate Speed to Market with Adaptive Insurance Policy Administration.

    Data conversions can be a daunting process for many insurers when deciding to modernize, in particular when consolidating from multiple, disparate legacy policy administration systems to a single new platform. Migrating from a legacy system requires a well-thought-out approach that builds on the industry's best thinking from previous modernization efforts and takes data migration off the critical path by leveraging proven methodology and tools to capitalize on the new system's capabilities. We'll discuss more about this approach in a future Oracle Insurance blog.

    Helen Pitts is senior product marketing manager for Oracle Insurance's life and annuities solutions.

    Read the article

  • Rip authentication from LDAP to local

    - by oxinabox
    We are taking a small portion of our network offline and running a separate network using that portion. (By small portion I mean 2 servers, which will be connected to 30-odd boxes that aren't usually part of our network and don't need to authenticate.) I intend to create a VM on one of the servers to provide general user services: an IRC server, remote shell, etc. And I would like the users to be able to use their usual server login details. The problem is that the LDAP server that normally checks those details is not one of these servers. So I need to be able to somehow take their details off LDAP and put them on the server that is coming along. One suggestion I had was to set up an LDAP server on the VM locally and clone the LDAP database onto it (using something called slapcat). Is this the best way? Or can I change the LDAP data into local authentication data?
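
    If cloning onto a local LDAP server on the VM turns out to be the way to go, my understanding is that the slapcat/slapadd pair would look roughly like this (the file name is arbitrary, and slapadd should run while the new slapd is stopped):

        # on the existing LDAP server: dump the directory to LDIF
        slapcat -l users.ldif
        # on the VM, with slapd installed and configured with the same suffix:
        slapadd -l users.ldif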

    Read the article

  • How to deal with colleagues who refuse to follow practices?

    - by Adrian Shum
    I was discussing with another colleague what should be used when a DB entity refers to another. I don't think there is any good reason to break the practice of putting the primary key in the referring entity. However, one of my colleagues says: "You should use a surrogate key in the entity, but it is better to put the human-readable natural key in the referring entity. As long as it is unique, it is fine, and it is easier when you are doing support or maintenance work." I know it will work, but obviously it is not good practice to use a non-PK unique column as a "foreign key" just to gain a bit of ease in writing SQL during support because we need fewer table joins. Though I mentioned that his approach is conceptually incorrect and causes practical problems too, he seems to prefer trading off correctness in the data model in exchange for ease of maintenance. And he said: "I know it is not good practice, but good practice is not a golden rule." Honestly, I feel frustrated when dealing with something like this. I know there are always cases where we should break some rule or practice, but this is clearly not such a case. What would you do when facing a situation like this? Please assume you are a senior developer who is expected to contribute to overall development direction and conventions.

    Read the article

  • How can I find the program making a harmonica sound?

    - by Josh
    A friend has a Windows XP SP3 machine that plays a harmonica sound for about 5 seconds throughout the day at what seems to be random intervals (every couple hours). My question is how can I find the program making this sound? Is there a Windows API hook for monitoring audio access? I've gone through and checked all the standard Windows sounds in the Control Panel and right now the theme is set to no sounds and I personally checked to make sure none of the events have a sound specified. I also checked the Task Scheduler to make sure there wasn't something scheduled to go off every couple hours. Any ideas on how to go about finding the bugger?

    Read the article

  • Best way to transfer files across unstable LAN?

    - by JamesTheAwesomeDude
    This is very similar to Question 326211, but in this case, the LAN is an unstable Wi-Fi connection. I need to transfer about 11 GiB of files between two computers, both running Linux (although one may be rebooted into Windows.) Their connection is both slow and unstable (due to Linux's awful Wi-Fi support,) but removable media (such as a flash drive or external hard drive) is not an option at this time. Right now, I'm slowly transferring the files, one by one, across SFTP, but I have to reconnect each computer approximately every 90 seconds, and the computers are not very close to each other, so this is not feasible. This is not a duplicate of Question 30186; that one specifically concerns Windows 7, and all the proposed solutions involve closed-source, Windows-only programs (which are all spyware IMHO, and are all off the table even if I trusted them - one of the computers is Linux-only.)
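
    One thing I'm considering is a retry loop around rsync so transfers resume over the flaky link instead of starting over; roughly like this (the host and paths are placeholders):

        # keep retrying until rsync finishes cleanly; --partial keeps half-transferred files for resume
        while ! rsync -avz --partial --timeout=30 /data/to/send/ user@otherbox:/data/received/; do
            echo "connection dropped, retrying in 10s..."
            sleep 10
        done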

    Read the article

  • Flash AS3 sidescrolling tiles optimization

    - by Galvanize
    I'm trying to make a sidescrolling game in Flash that will run on a low-performance laptop. While studying the subject from Tonypa I saw that he builds a Bitmap by copying the BitmapData of each tile from the tile sheet and placing it on a bigger Bitmap the size of the screen. But when I came to think about how to scroll my map I ran into some optimization doubts. I came up with two choices: 1) Create a MovieClip, place in it a Bitmap instance for each tile that is shown on the screen plus one extra row, then move them all; when a tile runs off the screen, move it to the end of the MovieClip and replace its BitmapData with the next row in my map. 2) Use a Bitmap with copies of each tile in it (as shown in Tonypa's tutorial) but with one extra row, move the whole Bitmap, and when it comes time to replace rows, redraw the whole Bitmap and move it back to the origin position. The first idea is what a co-worker of mine suggested, the second one is my own, but neither of us has enough technical knowledge to be sure which technique would perform best. Can anyone help?

    Read the article

  • I made an .htaccess template; is there anything else that should be added or changed?

    - by purpler
     # DEFAULTS
     ServerSignature Off
     AddDefaultCharset UTF-8
     DefaultLanguage en-US
     SetEnv Europe/Belgrade
     SetEnv SERVER_ADMIN [email protected]

     # Rewrites
     RewriteEngine On
     RewriteBase /

     # Redirect to WWW
     RewriteCond %{HTTP_HOST} ^serpentineseo.com
     RewriteRule (.*) http://www.serpentineseo.com/$1 [R=301,L]

     # Cache media files
     <filesMatch "\.(gif|jpg|jpeg|png|ico|swf|js)$">
     Header set Cache-Control "max-age=2592000, public"
     </filesMatch>
     <FilesMatch "\.(js|css|pdf|swf)$">
     Header set Cache-Control "max-age=604800"
     </FilesMatch>
     <FilesMatch "\.(html|htm|txt)$">
     Header set Cache-Control "max-age=600"
     </FilesMatch>

     # DONT CACHE
     <FilesMatch "\.(pl|php|cgi|spl|scgi|fcgi)$">
     Header unset Cache-Control
     </FilesMatch>

     # Deny access to .htaccess
     <Files .htaccess>
     order allow,deny
     deny from all
     </Files>

    Read the article

  • Should we be able to deploy a single package to the SSIS Catalog?

    - by jamiet
    My buddy Sutha Thiru sent me an email recently asking my opinion on a particular nuance of the project deployment model in SQL Server Integration Services (SSIS) 2012, and I'd like to share my response as I think it warrants a wider discussion. Sutha asked:

        Jamie
        What is your take on this? http://www.mattmasson.com/index.php/2012/07/can-i-deploy-a-single-ssis-package-from-my-project-to-the-ssis-catalog/
        Overnight I was talking to Matt, who confirmed that they have no plans to change the deployment model. For example, if we have the following scenario, how do we deploy?
        Sprint 1: Pkg1, 2 & 3 have been developed and deployed to UAT. Once signed off, they've been deployed to Live.
        Sprint 2: Pkg 4 & 5 have been developed. During this time users raised a bug on Pkg2. We want to make the change to Pkg2 and deploy that to UAT and eventually to LIVE without releasing Pkg 4 & 5. How do we do it?
        Matt pointed me to his blog entry, which I have seen before: http://www.mattmasson.com/index.php/2012/02/thoughts-on-branching-strategies-for-ssis-projects/
        Thanks
        Sutha

    My response:

        Personally, even though I've experienced the exact problem you just outlined, I agree with the current approach. I steadfastly believe that there should not be a way for an unscrupulous developer to slide in a new version of a package under the covers. Deploying .ispac files brings a degree of rigour to your operational processes. Yes, that means that we as SSIS developers are going to have to get better at using source control and branching properly, but that is no bad thing in my opinion. Claiming to be proper "developers" is a bit of a cheap claim if we don't even do the fundamentals correctly.

    I would be interested in the thoughts of others who have used the project deployment model. Do you agree with my point of view?
    @Jamiet

    Read the article

  • P2V - Cold Clone ISO

    - by jlehtinen
    I need to cold clone a physical box in a VMware environment. What are people using for this these days? My preference is for VMware's vConverter cold-clone ISO, but it appears that this was discontinued; it's no longer available for download on their site from what I can tell (even under old versions). I found one guy who appears to have an ISO for version 3.0.3 of vConverter posted to his site for download, but I'm eternally skeptical about downloading this type of software from random strangers: http://thatcouldbeaproblem.com/?p=584 I also found some mention of using MOA, but I've never used it and have no idea how effective it is as a vConverter replacement. http://www.sanbarrow.com/moa.html One other option seems to be using Acronis - booting off an Acronis disk to capture a .tib, then using a standard installation of vConverter to push the .tib to ESXi.

    Read the article

  • Apache Errordocument (custom 503 page) works intermittently

    - by jimmycavnz
    We have Apache 2.2 running on Windows 2k3 and 2k8 R2 as a reverse proxy to downstream applications. Some of these applications may go offline during off-peak hours, so we've implemented a custom 503 page like so:

        ErrorDocument 503 /error/serverTimeout.html
        ErrorDocument 504 /error/serverTimeout.html

    (The error directory is in Apache's htdocs folder.) If I make these changes, restart Apache and then access the down application in Firefox, I see the custom page as expected. I then access it using my IE browser; it also works. If I close my IE browser and access it again, I get Apache's standard "Service Temporarily Unavailable" message instead of my custom page. Once I receive the standard error message, I never get the custom page again until I restart Apache. I've set the server's log level to debug and I can't see any difference between the requests which return the custom error page and the requests which return the standard error message. Is there some weird proxy setting which is messing with the ErrorDocument configuration? Any ideas?
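
    For context, a sketch of the sort of configuration involved (the backend URL is a placeholder); one thing I'm wondering is whether /error needs to be excluded from the proxy so Apache can always serve those pages itself:

        # exclusions must appear before the catch-all ProxyPass
        ProxyPass /error !
        ProxyPass / http://backend.example.com/
        ProxyPassReverse / http://backend.example.com/
        ErrorDocument 503 /error/serverTimeout.html
        ErrorDocument 504 /error/serverTimeout.html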

    Read the article

  • Versioning millions of files with distributed SCM

    - by C. Lawrence Wenham
    I'm looking into the feasibility of using off-the-shelf distributed SCMs such as Git or Mercurial to manage millions of XML files. Each file would be a commercial transaction, such as a purchase order, that would be updated perhaps 10 times during the lifecycle of the transaction until it is "done" and changes no more. And by "manage", I mean that the SCM would be used not just to version the files, but also to replicate them to other machines for redundancy and transfer of IP. Let's suppose, for the sake of example, that a goal is to provide good performance if it were handling the volume of orders that Amazon.com claimed to have at its peak in December 2010: about 150,000 orders per minute. We're expecting the system to be distributed over many servers in order to get reasonable performance. We're also planning to use solid-state drives exclusively. There is a reason why we don't want to use an RDBMS for primary storage, but it's a bit beyond the scope of this question. Does anyone have first-hand experience with the performance of distributed SCMs under such a load, and what strategies were used? Open-source preferred, since the final product is to be FOSS, too.

    Read the article

  • Proper 16:9 video size for non-HD 4:3 video (for youtube/vimeo)

    - by Xeoncross
    Since High Definition video came to all the online sites, the default aspect ratio of the player has changed from 4:3 to 16:9. This means that people posting SD video have to resize some of their videos to get them to fit right. For example, NTSC DVD quality (aka 480i/p) is 720x480 pixels (width x height), while low-end High Definition (720i/p) is 1280x720. Anyway, now that the video players are built for HD, you will find that uploading standard-quality videos results in videos that are "letterboxed", which means they have extra black bars on the top and bottom (or sides). Correct me if I'm wrong, but in order to get a 720x480 video to fit a box that is designed for HD, the best practice would be to crop some of it off so that it fits as 720x405, since:

        16 / 9 = 1.78 (1.7777777777778)
        720 / 405 = 1.78
        405 x 1.78 = 720.9

    The same would stand for 640x480 (old TV quality) video, which would need to be 640x360, correct? I'm asking because I'm not sure about all this and whether this is the proper way to fix these letterboxing/display problems.

    Read the article
