Search Results

Search found 19299 results on 772 pages for 'off topic'.

Page 379 of 772

  • Updated Intel display driver causing errors when booting

    - by cdysthe
    I upgraded to Graphics Installer 1.0.6 on my Ubuntu 14.04 machine and installed the drivers using the Intel Graphics Installer. The laptop is Intel Ivy Bridge powered with Intel HD graphics. It is an Optimus machine, but I have disabled the Nvidia card in the BIOS. The Intel Graphics Installer installs the package i915-3.15-3.13-dkms.deb, which I assume is the updated driver. It causes a bunch of error messages when I boot. Here are the relevant errors from dmesg:

      [ 7.206151] drm: module verification failed: signature and/or required key missing - tainting kernel
      [ 7.208045] drm: module has bad taint, not creating trace events
      [ 7.336470] fb: conflicting fb hw usage inteldrmfb vs VESA VGA - removing generic driver
      [ 7.393854] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013)
      [ 7.393855] [drm] Driver supports precise vblank timestamp query.
      [ 7.393921] vgaarb: device changed decodes: PCI:0000:00:02.0,olddecodes=io+mem,decodes=io+mem:owns=io+mem
      [ 7.505798] [drm] GMBUS [i915 gmbus dpb] timed out, falling back to bit banging on pin 5
      [ 7.507233] init: Failed to obtain startpar-bridge instance: Unknown parameter: INSTANCE
      [ 7.944183] [drm:cpt_serr_int_handler] ERROR PCH transcoder A FIFO underrun
      [ 8.368479] i915 0000:00:02.0: fb0: inteldrmfb frame buffer device
      [ 8.368480] i915 0000:00:02.0: registered panic notifier
      [ 8.818416] [drm] Enabling RC6 states: RC6 on, RC6p on, RC6pp off

    What could the problem be, and will it affect performance? I tried removing the package and the errors went away, but then I'm running an older driver, I assume?

    Read the article

  • Policy Administration is the Top 2011 IT Priority for Insurers

    - by helen.pitts(at)oracle.com
    The current issue of Insurance Networking News includes an interesting column by Novarica's Matt Josefowicz. Recent research by the firm revealed that policy administration replacement or extension is the most common strategic IT project for insurers this year. The article goes on to note that insurers are keenly focused on the business capabilities that can be delivered once the system is in production, as well as the ability to leverage agile development methodologies and true business/IT collaboration during implementation.

    The results are not too surprising given that policy administration is a mission-critical system for life and annuity insurers. As Josefowicz notes, "Core systems are called core for a reason--they are at the heart of the insurer's ability to function. Replacing them is not to be done lightly, but failing to replace them can mean diminishing the ability to compete or function effectively as a company."

    Insurers can no longer rely on inflexible policy administration systems that impede their ability to rapidly configure and bring to market innovative new products, add riders, support changing business processes and take advantage of market opportunities. The ability to leverage the policy administration system to better service customers and distribution channels by providing real-time access to policy information throughout the policy lifecycle is also critical to sustaining loyalty and further fueling growth.

    Insurers can benefit from a modern, adaptive policy administration system, like Oracle Insurance Policy Administration for Life and Annuity. You can learn more about this advanced, rules-based system, which is unmatched for its highly flexible configurability, performance and extensibility, as well as about global market industry trends, by viewing a complimentary, on-demand Webcast, Adapt, Transform and Grow: Accelerate Speed to Market with Adaptive Insurance Policy Administration.

    Data conversions can be a daunting process for many insurers when deciding to modernize, in particular when consolidating from multiple, disparate legacy policy administration systems to a single new platform. Migrating from a legacy system requires a well-thought-out approach that builds on the industry's best thinking from previous modernization efforts and takes data migration off the critical path by leveraging proven methodology and tools to capitalize on the new system's capabilities. We'll discuss more about this approach in a future Oracle Insurance blog.

    Helen Pitts is senior product marketing manager for Oracle Insurance's life and annuities solutions.

    Read the article

  • Can I put /tmp and /var/log in a ramdisk on OS X?

    - by kbyrd
    For non-critical Linux systems, I often move things like /tmp and /var/log to tmpfs to save on some disk writing. I've been doing this for a year or so, and if I ever need the logs across reboots, I just comment out a line in /etc/fstab and then start debugging. In any case, I would like to do the same thing on OS X. I've seen posts on creating a ramdisk for OS X, but I'm looking for a more permanent solution. I always want /tmp and /var/log mounted in a ramdisk, with the ability to turn that off with a bit of command-line editing in vi if I have to.
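
    As a starting point, here is a minimal sketch of just the ramdisk-creation step using the stock hdiutil and diskutil tools; the size and volume name are arbitrary, and making it permanent (for example via a LaunchDaemon) and actually pointing /tmp and /var/log at the volume are left out.

      # Sketch: create and mount a RAM disk on OS X with the stock
      # hdiutil/diskutil tools. Size and volume name are arbitrary choices.
      import subprocess

      SIZE_MB = 512
      SECTORS = SIZE_MB * 1024 * 1024 // 512      # ram:// takes 512-byte sectors
      VOLUME_NAME = "RamDisk"

      def create_ramdisk():
          # Allocate the RAM-backed device; hdiutil prints e.g. "/dev/disk3"
          dev = subprocess.check_output(
              ["hdiutil", "attach", "-nomount", "ram://%d" % SECTORS]
          ).decode().strip()
          # Format and mount it at /Volumes/<VOLUME_NAME>
          subprocess.check_call(["diskutil", "erasevolume", "HFS+", VOLUME_NAME, dev])
          return dev

      if __name__ == "__main__":
          device = create_ramdisk()
          print("RAM disk mounted at /Volumes/%s (device %s)" % (VOLUME_NAME, device))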

    Read the article

  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
    In MySQL 5.6 we continued our development on InnoDB Memcached and completed a few widely desired features that make InnoDB Memcached competitive in more scenarios. Notably, they are: 1) support for multiple table mappings, 2) a background thread to auto-commit long-running transactions, and 3) enhancements in binlog performance. Let's go over each of these features one by one. In the last section, we will go over a couple of internally performed performance tests.

    Support multiple table mapping

    In our earlier release, all InnoDB Memcached operations were mapped to a single InnoDB table. In real life, users might want to use the InnoDB Memcached feature on different tables. Being able to access different tables at run time, and having different mappings for different connections, therefore becomes a very desirable feature. In this GA release, we allow users to do both. We will discuss the key concepts and key steps in using this feature.

    1) "Mapping name" in the "get" and "set" commands. In order to allow InnoDB Memcached to map to a new table, the user (DBA) is still required to "pre-register" the table(s) in the InnoDB Memcached "containers" table (there is a security consideration behind this requirement). If you would like to know more about the "containers" table, please refer to my earlier blogs on blogs.innodb.com. Once registered, InnoDB Memcached will be able to look up such tables when they are referred to. Each registered table has a unique "registration name" (or mapping_name) corresponding to the "name" field in the "containers" table. To access these tables, users include the "registration name" in their get or set commands, in the form of "get @@new_mapping_name.key"; the prefix "@@" is required to signal a mapped-table change. The key and the "mapping name" are separated by a configurable delimiter, by default ".". So the syntax is:

      get [@@mapping_name.]key_name
      set [@@mapping_name.]key_name

    or

      get @@mapping_name
      set @@mapping_name

    Here is an example. Let's set up three tables in the "containers" table. The first is a map to the InnoDB table "test/demo_test" with mapping name "setup_1":

      INSERT INTO containers VALUES ("setup_1", "test", "demo_test", "c1", "c2", "c3", "c4", "c5", "PRIMARY");

    Similarly, we set up table mappings for table "test/new_demo" with name "setup_2" and for table "my_database/my_demo" with name "setup_3":

      INSERT INTO containers VALUES ("setup_2", "test", "new_demo", "c1", "c2", "c3", "c4", "c5", "secondary_index_x");
      INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo", "c1", "c2", "c3", "c4", "c5", "idx");

    To switch to table "my_database/my_demo" and get the value corresponding to "key_a", the user does:

      get @@setup_3.key_a (this also outputs the value corresponding to key "key_a")

    or simply:

      get @@setup_3

    Once this is done, the connection stays switched to the "my_database/my_demo" table until another table-mapping switch is requested, so it can continue issuing regular commands like:

      get key_b
      set key_c 0 0 7

    These DMLs will all be directed to the "my_database/my_demo" table. This also implies that different connections can have different bindings (to different tables).

    2) Delimiter. For the delimiter "." that separates the "mapping name" and the key value, we also added a configuration option in the "config_options" system table with the name "table_map_delimiter":

      INSERT INTO config_options VALUES ("table_map_delimiter", ".");

    So if users want a different delimiter, they can change it in the config_options table.

    3) Default mapping. Once we have multiple table mappings, there should always be a "default" mapping. For this, we decided that if there exists a mapping with the name "default", it will be chosen as the default mapping; otherwise, the first row of the containers table is chosen as the default. Please note that user tables can be repeated in the "containers" table (for example, if a user wants to access different columns of the table in different settings), as long as they use different mapping/configuration names in the first column, which is enforced by a unique index.

    4) The bind command. In addition, we also extended the protocol and added a bind command. Its usage is fairly straightforward: to switch to the "setup_3" mapping above, you simply issue:

      bind setup_3

    This switches the connection's InnoDB table to "my_database/my_demo".

    In summary, with this feature you can now access different tables from different sessions, and even within a single connection you can query different tables.

    Background thread to auto-commit long running transactions

    This feature is related to the "batch" concept we discussed in earlier blogs. The "batch" feature allows us to batch read and write operations, and commit them only after a certain number of calls. The batch size is controlled by the configuration parameters "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size". This can significantly boost performance. However, it also comes with some disadvantages: for example, you will not be able to view "uncommitted" operations from the SQL end unless you set the transaction isolation level to READ UNCOMMITTED, and in addition it holds certain row locks for an extended period of time, which can reduce concurrency.

    To deal with this, we introduced a background thread that auto-commits transactions if they have been idle for a certain amount of time (by default 5 seconds). The background thread wakes up every second, loops through every "connection" opened by Memcached, and checks for idle transactions. If a transaction has been idle longer than a certain limit and is not being used, it commits that transaction. This limit is configurable through "innodb_api_bk_commit_interval"; its default value is 5 seconds, the minimum is 1 second, and the maximum is 1073741824 seconds.

    With the help of this background thread, you no longer need to worry about long-running uncommitted transactions when daemon_memcached_w_batch_size and daemon_memcached_r_batch_size are set to a large number. It also reduces the number of locks that could be held due to long-running transactions, which further increases concurrency.

    Enhancement in binlog performance

    As you may know, binlog operations are not done by the InnoDB storage engine; they are handled in the MySQL layer. In order to support binlog operations through InnoDB Memcached, we have to artificially create some MySQL constructs in order to access the binlog handler APIs. In the previous lab release, for simplicity, we opened and destroyed these MySQL constructs (such as THD) for each operation. This required us to set the "batch" size to 1 whenever the binlog was on, no matter what "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size" were configured to. This put a big restriction on our ability to scale, and there was also quite a bit of overhead in creating and destroying these constructs, which bogged performance down.

    With this release, we made the necessary changes to keep the MySQL constructs for as long as they are valid for a particular connection, so there are no longer repeated and redundant open and close (table) calls. Now, even with the binlog option enabled (innodb_api_enable_binlog), we can still batch transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, and thus scale write/read performance. Although there is still overhead that keeps InnoDB Memcached from performing as fast as when the binlog is turned off, it is much better than the previous release, and we are continuing to optimize this area to improve performance as much as possible.

    Performance Study

    Amerandra of our System QA team has conducted some performance studies on queries through our InnoDB Memcached connection and through the plain SQL end, and they show some interesting results. The test was conducted on a "Linux 2.6.32-300.7.1.el6uek.x86_64 ix86 (64)" machine with 16 GB of memory, an Intel Xeon 2.0 GHz x86_64 CPU (2 CPUs, 4 cores each), and 2 RAID disks (1027 GB, 733.9 GB). The results are shown in the following tables.

    Table 1: Performance comparison on Set operations

      Connections | 5.6.7-RC Memcached plugin, Set (QPS), memcached-threads=8*** | 5.6.7-RC*, Set** (QPS) | X faster
      8           | 30,000                                                       | 5,600                  | 5.36
      32          | 59,000                                                       | 13,000                 | 4.54
      128         | 68,000                                                       | 8,000                  | 8.50
      512         | 63,000                                                       | 6,800                  | 9.23

    * mysql-5.6.7-rc-linux2.6-x86_64
    ** The "set" operation, when implemented in InnoDB Memcached, involves a couple of DMLs: it first queries the table to see whether the "key" exists; if it does not, the new key/value pair is inserted, and if it does, the "value" field of the matching row (by key) is updated. When used in the above comparison, it is a precompiled stored procedure, and the query just executes that procedure.
    *** added "--daemon_memcached_option=-t8" (the default is 4 threads)

    So we can see that with this "set" query, InnoDB Memcached runs 4.5 to 9 times faster than the MySQL server.

    Table 2: Performance comparison on Get operations

      Connections | 5.6.7-RC Memcached plugin, Get (QPS), memcached-threads=8 | 5.6.7-RC*, Get (QPS) | X faster
      8           | 42,000                                                    | 27,000               | 1.56
      32          | 101,000                                                   | 55,000               | 1.83
      128         | 117,000                                                   | 52,000               | 2.25
      512         | 109,000                                                   | 52,000               | 2.10

    With the "get" query (or the select query), Memcached performs 1.5 to 2 times faster than normal SQL.

    Summary

    In summary, we added several much-desired features to InnoDB Memcached in this release, allowing users to operate on different tables through the Memcached interface. We also now provide a background commit thread to commit long-running idle transactions, which allows users to configure large batch writes/reads without worrying about a large number of row locks being held or not being able to see (uncommitted) data. We also greatly enhanced performance when the binlog is enabled. We will continue making efforts in both performance enhancement and functionality to make InnoDB Memcached a good demo case for our InnoDB APIs.

    Jimmy Yang, September 29, 2012
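
    As a footnote to the table-mapping section above, here is a rough client-side sketch of the "@@mapping_name.key" syntax, using the third-party pymemcache library against the plugin's default port 11211; the host, keys and values are assumptions for the example, and "setup_3" is the hypothetical mapping registered in the containers example.

      # Sketch: talking to the InnoDB Memcached plugin with table-mapping names.
      # Assumes the plugin is listening on the default port 11211 and that the
      # "setup_3" mapping from the containers example has been registered.
      from pymemcache.client.base import Client

      client = Client(("127.0.0.1", 11211))

      # Switch this connection to the my_database/my_demo table and read key_a.
      value_a = client.get("@@setup_3.key_a")

      # The connection now stays bound to setup_3, so plain keys work too.
      client.set("key_c", "0050000")        # maps to an INSERT or UPDATE on my_demo
      value_b = client.get("key_b")

      print(value_a, value_b)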

    Read the article

  • WPF Applications – Handling the Unhandled

    - by David Totzke
    Instead of just letting your application crash, you can attach a handler to Application.DispatcherUnhandledException and one to AppDomain.CurrentDomain.UnhandledException.  You wire these up in the code-behind of your application, which by default is App.xaml.cs.  You can log these errors or throw up a message Don Box and tell the user what happened.  Then you shut down the app gracefully.  You shut it down because something bad happened that you weren't expecting, and at this point there is no guarantee as to the state of the stack or memory or anything really.  All bets are off. If, on the other hand, the method for UnhandledException is empty, and the method for DispatcherUnhandledException ends up in a call to a method called LogError(), and the LogError() method is FUCKING EMPTY, and you just swallow the exceptions and keep on running, then, not so much.  I spent nearly a day trying to track down a bug that would have been obvious had something been logged or if it just crashed.  It's my own fault I suppose.  I knew these were hooked up.  I just never suspected that there wouldn't be any implementation at all.  Live and learn.

    Customs Man at Heathrow: Anything to declare, Sir?
    Jekyll and Hyde: Man has not evolved an inch from the slime that spawned him.
    Customs Man at Heathrow: Very Good, Sir.

    I tend to agree.

    Dave
    Just because I can…

    Read the article

  • What happens when Oracle's Enterprise Single-Sign-On database goes down? [migrated]

    - by Unai
    We're working on setting up Oracle's Enterprise Single Sign-On with High Availability. At the moment every component provides HA except our database backend (i.e. we have just one instance). While conducting some kick-the-plug tests we learnt that the ESSO system works even with the database turned OFF. This was a nice surprise, but now we need to understand the implications of a database failure; sure, the sessions might be handled on the application servers and the policies might have been cached, but... for how long? How big is this cache? What is the role of the database? I would appreciate it if anyone shares her/his experience and/or points to documentation that covers this. Thank you so much.

    Read the article

  • Backup to disk, encrypted, without any installed local software

    - by user30064
    Hi, Ok, this is a tough one, and it might not even be possible, but no harm in asking I guess. I have a Buffalo Terastation file server that I use for network attached storage. After a couple of phone calls to customer services I realised that there is no way to backup to disk encrypted. In effect, I would be carrying unencrypted company data off-site daily, which is obviously unacceptable. I had a go at TrueCrypt, EncFS, and a few others, and as far as I could see all of them required that you install some software on the machine that is to use the file system, which makes sense. Unfortunately the firmware on the Terastation is closed and I cannot install any software (and I can't build from source either, since Buffalo didn't include a compiler). Are there any ways to copy files to disk, where as soon as they are written to the disk they are transparently encrypted, without having to install additional software? I'm not sure it matters too much, but the Terastation firmware is Linux based, although as I mentioned, closed. Many thanks, Andreas

    Read the article

  • How to deal with colleagues who refuse to follow practices?

    - by Adrian Shum
    I was discussing with another colleague what we should use when a DB entity refers to another. I don't think there is any good reason to break the practice of putting the primary key in the referring entity. However, one of my colleagues says: "You should use a surrogate key in the entity, but it is better to put the human-readable natural key in the referring entity. As long as it is unique, it is fine, and it is easier when you are doing support or maintenance work." I know it will work, but obviously it is not good practice to put a non-PK unique column in as the "foreign key" just to gain a bit of ease in writing SQL during support because it saves a table join. Though I mentioned that his approach is conceptually incorrect, and causes practical problems too, he seems happy to trade off correctness in the data model in exchange for ease of maintenance. And he said: "I know it is not good practice, but good practice is not a golden rule." Honestly, I feel frustrated when dealing with something like this. I know there are always cases where we should break some rule or practice, but doubtless this is not such a case. What would you do when facing a situation like this? Please assume you are a senior developer who is expected to contribute to overall development direction and conventions.
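
    To make the two positions concrete, here is a small sketch using Python's built-in sqlite3 module; the table and column names are invented for illustration. It shows the practice being defended: the referring table stores the surrogate primary key, while the human-readable natural key stays unique on the referenced table and is still reachable for support queries with one join.

      # Sketch: surrogate PK as the foreign key; natural key unique but not referenced.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("PRAGMA foreign_keys = ON")

      conn.executescript("""
      CREATE TABLE customer (
          customer_id   INTEGER PRIMARY KEY,      -- surrogate key
          customer_code TEXT NOT NULL UNIQUE,     -- human-readable natural key
          name          TEXT NOT NULL
      );
      CREATE TABLE customer_order (
          order_id    INTEGER PRIMARY KEY,
          customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
          total       REAL NOT NULL
      );
      """)

      cur = conn.execute(
          "INSERT INTO customer (customer_code, name) VALUES ('ACME-001', 'Acme Corp')")
      conn.execute(
          "INSERT INTO customer_order (customer_id, total) VALUES (?, 99.50)",
          (cur.lastrowid,))

      # Support queries still get the readable code with a single join:
      row = conn.execute("""
          SELECT c.customer_code, o.total
          FROM customer_order o JOIN customer c ON c.customer_id = o.customer_id
      """).fetchone()
      print(row)   # ('ACME-001', 99.5)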

    Read the article

  • How can I find the program making a harmonica sound?

    - by Josh
    A friend has a Windows XP SP3 machine that plays a harmonica sound for about 5 seconds throughout the day at what seems to be random intervals (every couple hours). My question is how can I find the program making this sound? Is there a Windows API hook for monitoring audio access? I've gone through and checked all the standard Windows sounds in the Control Panel and right now the theme is set to no sounds and I personally checked to make sure none of the events have a sound specified. I also checked the Task Scheduler to make sure there wasn't something scheduled to go off every couple hours. Any ideas on how to go about finding the bugger?

    Read the article

  • Rip authentication from LDAP to local

    - by oxinabox
    We are taking a small portion of our network offline and running a separate network using that portion. (By small portion I mean 2 servers, which will be connected to 30-odd boxes that aren't usually part of our network and don't need to authenticate.) I intend to create a VM on one of the servers to provide general user services: an IRC server, remote shell, etc. And I would like the users to be able to use their usual server login details. The problem is that the LDAP server that normally checks those details is not one of these servers. So I need to be able to somehow take their details off LDAP and put them on the server that is coming. One suggestion I had was to set up an LDAP server on the VM locally and clone the LDAP database onto it (using something called slapcat). Is this the best way? Or can I change the LDAP data into local authentication data?

    Read the article

  • Best way to transfer files across unstable LAN?

    - by JamesTheAwesomeDude
    This is very similar to Question 326211, but in this case the LAN is an unstable Wi-Fi connection. I need to transfer about 11 GiB of files between two computers, both running Linux (although one may be rebooted into Windows). Their connection is both slow and unstable (due to Linux's awful Wi-Fi support), but removable media (such as a flash drive or external hard drive) is not an option at this time. Right now I'm slowly transferring the files, one by one, over SFTP, but I have to reconnect each computer approximately every 90 seconds, and the computers are not very close to each other, so this is not feasible. This is not a duplicate of Question 30186; that one specifically concerns Windows 7, and all the proposed solutions involve closed-source, Windows-only programs (which are all spyware IMHO, and are all off the table even if I trusted them - one of the computers is Linux-only).
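
    Since the files are already moving over SFTP, one hedged, low-tech option is to wrap each per-file transfer in a reconnect-and-retry loop so a dropped link only costs the file in flight rather than the whole session. The sketch below assumes the third-party paramiko library, working SSH key auth, and made-up host and directory names; it restarts (rather than resumes) partially transferred files.

      # Sketch: per-file SFTP copy with reconnect-and-retry.
      import os
      import time
      import paramiko

      HOST, USER = "192.168.1.50", "james"          # hypothetical destination
      SRC_DIR, DST_DIR = "/data/export", "/data/import"

      def connect():
          # Keep trying until the flaky Wi-Fi lets us in.
          while True:
              try:
                  ssh = paramiko.SSHClient()
                  ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
                  ssh.connect(HOST, username=USER)  # uses ssh-agent / default keys
                  return ssh, ssh.open_sftp()
              except (OSError, paramiko.SSHException):
                  time.sleep(5)

      def copy_all():
          ssh, sftp = connect()
          for name in sorted(os.listdir(SRC_DIR)):
              src, dst = os.path.join(SRC_DIR, name), DST_DIR + "/" + name
              while True:
                  try:
                      sftp.put(src, dst)            # re-sent from scratch on failure
                      break
                  except (OSError, EOFError, paramiko.SSHException):
                      try:
                          ssh.close()
                      except Exception:
                          pass
                      ssh, sftp = connect()
          ssh.close()

      if __name__ == "__main__":
          copy_all()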

    Read the article

  • Flash AS3 sidescrolling tiles optimization

    - by Galvanize
    I'm trying to make a sidescrolling game in Flash that will run on a low-performance laptop. While studying the subject from Tonypa, I saw that he builds a Bitmap by making copies of the BitmapData of each tile from the tile sheet and placing them on a bigger Bitmap the size of the screen. But when I came to think about how to scroll my map, I ran into some optimization doubts. I came up with two choices: 1) Create a MovieClip and place in it a Bitmap instance for each tile shown on the screen, plus one extra row, then move them all. When a tile runs off the screen, move it to the other end of the MovieClip and replace its BitmapData with the next row of my map. 2) Use a Bitmap with copies of each tile in it (as shown in Tonypa's tutorial) but with one extra row, move the whole Bitmap, and when the time comes to replace rows, redraw the whole Bitmap and move it back to its origin position. The first idea is what a co-worker of mine suggested; the second one is my own, but neither of us has enough technical knowledge to be sure which technique would be optimal in performance. Can anyone help?
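
    For what it's worth, here is a language-agnostic sketch (written in Python rather than ActionScript, with invented names) of the recycling idea behind the first option: keep one spare strip of tiles, move everything, and when a strip scrolls off the edge, jump it to the far side and swap in the next map data. The question talks about rows; the sketch uses columns for a horizontal scroller, but the idea is the same.

      # Sketch: ring-buffer tile columns for a sidescroller. Only the bookkeeping
      # is shown; in Flash the "reuse" step would swap in new BitmapData.
      TILE_SIZE = 32
      SCREEN_COLS = 20

      class TileColumn:
          def __init__(self, x, map_col):
              self.x = x                 # pixel position of the column
              self.map_col = map_col     # which column of the map it displays

      def scroll(columns, speed, next_map_col):
          """Move every column left by `speed` px; recycle columns that left the screen."""
          for col in columns:
              col.x -= speed
          for col in columns:
              if col.x <= -TILE_SIZE:                       # fully off the left edge
                  col.x += (SCREEN_COLS + 1) * TILE_SIZE    # jump to the right end
                  col.map_col = next_map_col                # reuse it for the new map column
                  next_map_col += 1
          return next_map_col

      # Usage: 21 columns cover a 20-column screen plus one spare for the seam.
      columns = [TileColumn(i * TILE_SIZE, i) for i in range(SCREEN_COLS + 1)]
      next_col = SCREEN_COLS + 1
      for frame in range(100):
          next_col = scroll(columns, speed=4, next_map_col=next_col)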

    Read the article

  • Apache Errordocument (custom 503 page) works intermittently

    - by jimmycavnz
    We have Apache 2.2 running on Windows 2k3 and 2k8 R2 as a reverse proxy to downstream applications. Some of these applications may go offline during off-peak hours, so we've implemented a custom 503 page like so:

      ErrorDocument 503 /error/serverTimeout.html
      ErrorDocument 504 /error/serverTimeout.html

    (The error directory is in Apache's htdocs folder.) If I make these changes, restart Apache and then access the down application in Firefox, I see the custom page as expected. I then access it using my IE browser, and it also works. If I close my IE browser and access it again, I get Apache's standard "Service Temporarily Unavailable" message instead of my custom page. Once I receive the standard error message, I never get the custom page again until I restart Apache. I've put the server on debug and I can't see any difference between the requests which return the custom error page and the requests which return the standard error message. Is there some weird proxy setting which is messing with the ErrorDocument configuration? Any ideas?

    Read the article

  • Should we be able to deploy a single package to the SSIS Catalog?

    - by jamiet
    My buddy Sutha Thiru sent me an email recently asking my opinion on a particular nuance of the project deployment model in SQL Server Integration Services (SSIS) 2012, and I'd like to share my response as I think it warrants a wider discussion. Sutha asked:

    "Jamie, what is your take on this? http://www.mattmasson.com/index.php/2012/07/can-i-deploy-a-single-ssis-package-from-my-project-to-the-ssis-catalog/ Overnight I was talking to Matt, who confirmed that they have no plans to change the deployment model. For example, if we have the following scenario, how do we deploy? Sprint 1: Pkg1, 2 & 3 have been developed and deployed to UAT. Once signed off, they have been deployed to Live. Sprint 2: Pkg 4 & 5 have been developed. During this time users raised a bug on Pkg2. We want to make the change to Pkg2 and deploy that to UAT and eventually to LIVE without releasing Pkg 4 & 5. How do we do it? Matt pointed me to his blog entry, which I have seen before: http://www.mattmasson.com/index.php/2012/02/thoughts-on-branching-strategies-for-ssis-projects/ Thanks, Sutha"

    My response: Personally, even though I've experienced the exact problem you just outlined, I agree with the current approach. I steadfastly believe that there should not be a way for an unscrupulous developer to slide in a new version of a package under the covers. Deploying .ispac files brings a degree of rigour to your operational processes. Yes, that means that we as SSIS developers are going to have to get better at using source control and branching properly, but that is no bad thing in my opinion. Claiming to be proper "developers" is a bit of a cheap claim if we don't even do the fundamentals correctly.

    I would be interested in the thoughts of others who have used the project deployment model. Do you agree with my point of view? @Jamiet

    Read the article

  • P2V - Cold Clone ISO

    - by jlehtinen
    I need to cold clone a physical box in a VMware environment. What are people using for this these days? My preference is for VMware's vConverter ISO, but it appears that this was discontinued. It's no longer available for download on their site from what I can tell (even under old versions). I found one guy who appears to have an ISO for version 3.0.3 of vConverter posted to his site for download, but I'm eternally skeptical about downloading this type of software from random strangers: http://thatcouldbeaproblem.com/?p=584 I also found some mention of using MOA, but I've never used it and have no idea how effective it is as a vConverter replacement. http://www.sanbarrow.com/moa.html One other option seems to be using Acronis - booting off an Acronis disk to capture a .tib, then using a standard installation of vConverter to push the .tib to ESXi.

    Read the article

  • I made an .htaccess template; is there anything else that should be added or changed?

    - by purpler
      # DEFAULTS
      ServerSignature Off
      AddDefaultCharset UTF-8
      DefaultLanguage en-US
      SetEnv Europe/Belgrade
      SetEnv SERVER_ADMIN [email protected]

      # Rewrites
      RewriteEngine On
      RewriteBase /

      # Redirect to WWW
      RewriteCond %{HTTP_HOST} ^serpentineseo.com
      RewriteRule (.*) http://www.serpentineseo.com/$1 [R=301,L]

      # Cache media files
      <filesMatch "\.(gif|jpg|jpeg|png|ico|swf|js)$">
        Header set Cache-Control "max-age=2592000, public"
      </filesMatch>
      <FilesMatch "\.(js|css|pdf|swf)$">
        Header set Cache-Control "max-age=604800"
      </FilesMatch>
      <FilesMatch "\.(html|htm|txt)$">
        Header set Cache-Control "max-age=600"
      </FilesMatch>

      # DONT CACHE
      <FilesMatch "\.(pl|php|cgi|spl|scgi|fcgi)$">
        Header unset Cache-Control
      </FilesMatch>

      # Deny access to .htaccess
      <Files .htaccess>
        order allow,deny
        deny from all
      </Files>

    Read the article

  • Proper 16:9 video size for non-HD 4:3 video (for youtube/vimeo)

    - by Xeoncross
    Since high-definition video came out on all the online sites, the default aspect ratio of the player has changed from 4:3 to 16:9. This means that for people posting SD video, you have to resize some of your videos to get them to fit right. For example, NTSC DVD quality (aka 480i/p) is 720x480 pixels (width x height). However, low-end high definition (720i/p) is 1280x720. Anyway, now that the video players are built for HD, you will find that uploading standard-quality videos results in videos that are "letterboxed", which means they have extra black bars on the top and bottom (or sides). Correct me if I'm wrong, but in order to get a 720x480 video to fit a box that is designed for HD, the best practice would be to crop some of it off so that it fits as 720x404, since:

      16/9 = 1.78 (1.7777777777778)
      720/405 = 1.78
      405 x 1.78 = 720.9

    The same would stand for 640x480 (old TV quality) video, which would need to be 640x360, correct? I'm asking because I'm not sure about all this and whether this is the proper way to fix these letterboxing/display problems.
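
    For reference, a tiny sketch of the arithmetic; the round-down-to-even step is an assumption added here (most encoders want even pixel dimensions, which is why 405 often becomes 404) and is not part of the original question.

      # Sketch: exact 16:9 height for a given width, plus an even-height variant.
      def height_for_16_9(width):
          exact = width * 9 / 16.0          # exact 16:9 height
          even = int(exact) // 2 * 2        # rounded down to an even pixel count
          return exact, even

      for width in (720, 640):
          exact, even = height_for_16_9(width)
          print("%d wide -> %.1f exact, %d even" % (width, exact, even))
      # 720 wide -> 405.0 exact, 404 even
      # 640 wide -> 360.0 exact, 360 even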

    Read the article

  • Versioning millions of files with distributed SCM

    - by C. Lawrence Wenham
    I'm looking into the feasibility of using off-the-shelf distributed SCMs such as Git or Mercurial to manage millions of XML files. Each file would be a commercial transaction, such as a purchase order, that would be updated perhaps 10 times during the lifecycle of the transaction until it is "done" and changes no more. And by "manage", I mean that the SCM would be used to not just version the files, but also to replicate them to other machines for redundancy and transfer of IP. Let's suppose, for the sake of example, that a goal is to provide good performance if it was handling the volume of orders that Amazon.com claimed to have at its peak in December 2010: about 150,000 orders per minute. We're expecting the system to be distributed over many servers in order to get reasonable performance. We're also planning to use solid-state drives exclusively. There is a reason why we don't want to use an RDBMS for primary storage, but it's a bit beyond the scope of this question. Does anyone have first-hand experience with the performance of distributed SCMs under such a load, and what strategies were used? Open-source preferred, since the final product is to be FOSS, too.

    Read the article

  • Convert a gray PNG with alpha to a 1-bit black rectangle with 8-bit alpha

    - by jcayzac
    I use a tool to render LaTeX equations as PNG. The resulting images are in RGBA8888 format. I would like to extract the luminance (grayscale from RGB channels, multiplied by the A channel) as my new alpha channel, set the picture fully black, and save the result in Gray1Alpha8 (G1A8) format. So far I've only managed to get G1A4 or G8A8 but not G1A8. Also, the resulting picture looks like it's not multiplied correctly…

      convert original.png \
        \( -clone 0 -alpha extract \) \
        \( -clone 0 -clone 1 -compose multiply -composite \) \
        -delete 0 +swap -alpha off -compose copy_opacity -composite -colorspace Gray -depth 4 result.png

    What am I missing?
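
    Not an answer to what the convert command is missing, but the same luminance-times-alpha result can be sketched with the Pillow imaging library, with the caveat that Pillow's LA mode keeps 8 bits for the gray channel rather than the 1-bit gray requested:

      # Sketch: build a fully black image whose alpha is luminance(RGB) * A.
      # Output is 8-bit gray + 8-bit alpha (LA), not 1-bit gray.
      from PIL import Image, ImageChops

      src = Image.open("original.png").convert("RGBA")

      luma = src.convert("L")               # grayscale from the RGB channels
      alpha = src.getchannel("A")           # original alpha channel
      new_alpha = ImageChops.multiply(luma, alpha)   # luminance scaled by alpha

      black = Image.new("L", src.size, 0)   # the picture itself becomes fully black
      Image.merge("LA", (black, new_alpha)).save("result.png")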

    Read the article

  • Converting a VMware Player image from non-persistent to persistent

    - by Journeyman Geek
    I'm pondering setting up a virtual machine to generate App-V (since I got MDOP from school) or ThinApp application virtualisation packages. Ideally, with either of these I should work off a fresh system, do the install, then copy out the packages produced. I'd like to not have to reinstall Windows per package to get a fresh, unmodified stock copy of Windows. I'm aware that it's possible to make a hard disk image persistent in VMware Player (one of my lecturers did it, but he's in another country). I'm wondering how I would convert a persistent image to a non-persistent one? I'm currently running VMware Player 4, with a Windows 7 guest and host.

    Read the article

  • Seizing the Moment with Mobility

    - by Kathryn Perry
    A guest post by Hernan Capdevila, Vice President, Oracle Fusion Apps

    Mobile devices are forcing a paradigm shift in the workplace – they're changing the way businesses can do business and the type of cultures they can nurture. As our customers talk about their mobile needs, we hear them saying they want instant-on access to enterprise data so workers can be more effective at their jobs anywhere, anytime. They also are interested in being more cost effective from an IT point of view. The mobile revolution – with the idea of BYOD (bring your own device) – has added an interesting dynamic because previously IT was driving the employee device strategy and ecosystem. That's been turned on its head with the consumerization of IT. Now employees are figuring out how to use their personal devices for work purposes and IT has to figure out how to adapt.

    Blurring the Lines between Work and Personal Life

    My vision of where businesses will be five years from now is that our work lives and personal lives will be more interwoven together. In turn, enterprises will have to determine how to make employees' work lives fit more into the fabric of their personal lives. And personal devices like smartphones are going to drive significant business value because they let us accomplish things very incrementally. I can be sitting on a train or in a taxi and be productive. At the end of any meeting, I can capture ideas and tasks or follow up with people in real time. Mobile devices enable this notion of seizing the moment – capitalizing on opportunities that might otherwise have slipped away because we're not connected. For the industry shapers out there, this is game changing.

    The lean and agile workforce is definitely the future. This notion of the board sitting down with the executive team to lay out strategic objectives for a three- to five-year plan, bringing in HR to determine how they're going to staff the strategic activities, kicking off the execution, and then revisiting the plan in three to five years to create another three- to five-year plan is yesterday's model. Businesses that continue to approach innovating in that way are in the dinosaur age. Today it's about incremental planning and incremental execution, which requires a lot of cohesion and synthesis within the workforce. There needs to be this interweaving notion within the workforce about how ideas cascade down, how people engage, how they stay connected, and how insights are shared.

    How to Survive and Thrive in Today's Marketplace

    The notion of Facebook isn't new. We lived it pre-Internet days with America Online and Prodigy – Facebook is just the renaissance of these services in a more viral and pervasive way. And given the trajectory of the consumerization of IT with people bringing their personal tooling to work, the enterprise has no option but to adapt. The sooner that businesses realize this from a top-down point of view, the sooner they will be able to really drive significant innovation and adapt to the marketplace. There are a small number of companies right now (I think it's closer to 20% rather than 80%, but the number is expanding) that are able to really innovate in this incremental marketplace. So from a competitive point of view, there's no choice but to be social and stay connected. By far the majority of users on Facebook and LinkedIn are mobile users – people on iPhones, smartphones, Android phones, and tablets. It's not the couch people, right? It's the on-the-go people – those people at the coffee shops. Usually when you're sitting at your desk on a big desktop computer, typically you have better things to do than to be on Facebook.

    This is a topic I'm extremely passionate about because I think mobile devices are game changing. Mobility delivers significant value to businesses – it also brings dramatic simplification from a functional point of view and transforms our work life experience.

    Hernan Capdevila
    Vice President, Oracle Applications Development

    Read the article

  • What's upcoming in the GlassFish Webinar Series

    - by pieter.humphrey
    2011 is kicking off with the return of the GlassFish Webinar series as you've never seen it before. It's going to be packed with information about Java EE 6 and how simplicity, testability and convention-over-configuration are winning the hearts and minds of enterprise Java developers. Don't miss these industry-leading speakers and topics reviewing the cutting edge of Java EE 6 implementations, tools, and much more. Note: future dates are subject to change.

    Jan 20th: GlassFish & NetBeans
    Jan 27th: Building a Simple Web Application with Java EE
    Feb 15th: Java EE Developer Tools 'shootout' with GlassFish
    Feb 24th: What's New in GlassFish 3.1 (Clustering & HA, Admin Console, Coherence Web Integration, Security, Microkernel Architecture)
    March 15th: GlassFish 3.1 - clustering deep dive
    March 29th: GlassFish 3.1 - Admin Console & Productivity Features
    April 5th: GlassFish 3.1 - Coherence Web Integration deep dive
    Possible "Tech cast live" event: April (date TBC): Special Guest Adam Bien
    April 19th: GlassFish 3.1 - Security deep dive with Byron Nevins & TBD
    May 3rd: GlassFish 3.1 - Microkernel Architecture deep dive
    Possible "Tech cast live" event: May 17th: "Upgrading to 3.1 from existing GlassFish installations"
    May 31st: Embedded GlassFish

    del.icio.us Tags: glassfish, development, java, java ee, java ee6, OTN, NetBeans, JDeveloper, Enterprise Pack for Eclipse
    Technorati Tags: glassfish, development, java, java ee, java ee6, OTN, NetBeans, JDeveloper, Enterprise Pack for Eclipse

    Read the article

  • Different Flavors of Leases Back On

    - by Theresa Hickman
    Given the continued interest regarding the proposed changes to lease accounting, I decided to write another entry on this controversial topic, with colorful commentary from our resident accounting expert, Seamus Moran.

    Background (A History Lesson)

    Back in 1976, the FASB issued FAS 13, "Accounting for Leases", which permitted leases to be either an operating lease or a capital (finance) lease. In substance, operating leases are a form of off-balance-sheet financing. According to Seamus, operating leases date back to the launch of the Boeing 707 in the 1950s. Because the aircraft was so much more expensive than previous aircraft, the industry came up with the operating lease concept to accommodate the jet liners that dominated air transport. How it worked was that the bank would buy the plane and lease it to the airline. Because the bank never controlled or flew the plane, it never placed the asset on its balance sheet, and because the airline never owned the plane, it didn't place it on its balance sheet either. It simply treated the monthly lease payments as rental expenses on the P&L.

    August 2010: Original Lease Accounting Changes

    In August 2010, FASB and IASB decided to overhaul lease accounting as part of their joint commitment "to insure that investors and other users of financial statements are provided useful, transparent, and complete information about leasing transactions in the financial statements." Some say that the current lease accounting standards are broken because they keep assets off the balance sheet, hidden from investors' view. The original proposal abolished operating leases and only permitted capital leases, where all leases would be recorded on the balance sheet as assets and liabilities. The asset side would reflect the right to use the asset for the leased term, and the liability side would reflect the obligation to make lease payments.

    Why Companies Were Freaking Out

    According to the SEC, the financial impact of the aforementioned lease changes was estimated to add more than $1.3 trillion of operating lease obligations to corporate balance sheets. Many companies in various industries, especially retail, are concerned because the changes are significant and there is no grandfather clause for existing operating leases. Of course, the banks and airlines I mentioned earlier really hate this because neither wants to report the airplane (now costing around $60M) as an asset. Regular companies were concerned that they would have to report routine short-term leases of real estate or equipment as fixed assets, even though they were really just longer-term rentals. One company we spoke to leased roadside billboards, and really did not consider them to be fixed assets in any way. Obviously, these changes would have had a profound and lasting effect on a company's financial and real estate strategies and significantly impact its financial statements. Financial statements would show higher depreciation and interest expense with significantly higher total assets and debt. Financial metrics are negatively impacted as well: the changes would raise a company's debt-to-capital ratio to reflect the higher debt compared to equity, hurt return on assets because companies would appear more asset-intensive, and decrease EPS, lowering shareholder ROI.

    Feb. 2011: Recent Update

    The comment period on leases closed in December 2010. The FASB and the IASB have met several times since then and published their initial responses to the input they received from the various interested parties. They are "redeliberating" the principles involved in lease accounting. Some of the issues they are looking at include:

    - The core definition of a lease. This will articulate principles on what is a lease and what is "not-a-lease." One theory or supposition is that they might define a lease as the transfer of certain but not all major ownership attributes for a certain period of time. So a year's lease of an aircraft might be a "lease," but a year's lease of half a floor in an office building would be "not-a-lease." The ownership attributes transferred from the core owner to the user are different; the airline must maintain, paint, and do whatever it needs to do on the aircraft. However, the office renter will have strictly limited rights in respect to the rented space.
    - The differences between a lease contract and a service contract. Even if they call them "leases" for the purpose of commercial law, a service contract might not be accounted for as a lease.
    - The accounting to be done by the lessee. They would define when the bank or landlord would retain the asset on their balance sheet, and perhaps by implication, when the lessor would not need to include the asset on theirs. So if the finance house keeps the airplane or office on their balance sheet, the tenant doesn't need to. I'm not sure that I can draw the opposite conclusion, where the finance house doesn't report but the tenant must.
    - The difference, if any, between a financing lease and other leases, and the implications for the accounting.
    - The present value calculation when renewable terms exist. They have reduced the circumstances in which one must look at the renewable terms of a lease in calculating the present value. In most circumstances, you will use the lease term rather than the potential renewable term.

    Their latest discussion was this past week, and its contents were not available at the time of writing this entry. For more details, the results of the discussions are posted on both the FASB and the IASB websites.

    Implied Software Changes

    Whatever the final rules turn out to be, all ERP systems, such as Oracle E-Business Suite, PeopleSoft Enterprise, JD Edwards, and Oracle Hyperion, will need to change their software to accommodate the new rules. The following lists some changes that might have to be made to accounting software, depending on what the final standards will be in June 2011:

    - Lease tracking may require modifications, with tracking of additional lease details that might require a centralized repository to maintain
    - Accounting may need to be modified, as there are many changes to how capital leases and the new "other than finance" leases are accounted for on both the lessee and lessor side. For example, valuation, amortization, and disclosure will be considerably different, requiring different types of data to be captured.
    - Companies may need to modify their chart of accounts depending on how they want to track leases, which could then impact financial reporting and consolidation
    - Business processes may require changes, which could then impact internal controls
    - Software applications may need to perform more advanced computations on leases
    - Reports and KPIs may need to reflect new operating metrics

    Hold Onto Your Seats

    Before you redo all your lease agreements and call your software vendors asking when the changes to the software will be made, remember that the rules are not finalized yet and, from appearances, will not reflect the proposals in the exposure draft. Not only are there objections to putting the operating lease assets on anyone's balance sheet, there are lots of objections to the subjectivity and the data required for the valuation. According to Seamus, there is huge opposition from New York bankers, the airlines, the EU, the Communist Party of China (since it impacts their exporting business), and Republicans (hearing complaints from small and large businesses). Even if everyone can agree on the proposed changes, 2013 might be the earliest that companies would need to change how they report leases. The Boards will finish their deliberations in April, May or June 2011. As we've seen with other exposure drafts, if the changes are minor and the principles meet the general acceptance consensus criteria, the standard could be finalized at that time. However, if substantial changes are made, a fresh exposure draft, comment period, and review period might be involved, too.

    Seamus added an interesting perspective: even if the proposed changes do pass, don't you think our customers, such as Boeing, GE Capital, United Airlines, etc., will be clever enough to come up with a new kind of financing arrangement that complies with the new accounting? How about the large retail customers, such as Best Buy and Macerich? Don't you think they might simply cut deals around retail locations with new contracts that prevent their leases from being capital leases? Instead of blindly adapting the software to meet the principles outlined in the final standard, our software needs to accommodate how businesses will respond to the new rules. We cannot know our customers' responses until the rules are finalized.

    Oracle is aware of the potential changes and is staying abreast of the developments through our domain expertise staff, our relationship with customers, our market awareness, and, of course, our relationships with the Big 4. This is part of our normal process with respect to worldwide regulatory compliance. Oracle products have been IFRS and GAAP compliant for years and we will continue to maintain those standards going forward.

    Read the article

  • Where does amavisd-new log on Ubuntu by default?

    - by Bojan Markovic
    I have a misbehaving amavisd on a mail server. It starts and then silently stops. The init script reports a successful start when invoked manually; however, the next second I check the status and the service is off. Nothing is listening on its ports. I googled. I checked out the configuration files, I checked out the init.d script. It starts, then it stops. And I have no idea what's going on, since I can't find the log file (no, it's not /var/log/amavis.log or anything logical like that).

    Read the article

  • Getting Started with StreamInsight 2.1

    - by Roman Schindlauer
    If you're just beginning to get familiar with StreamInsight, you may be looking for a way to get started. What are the basics? How can I get my first StreamInsight application running so I can see how it works? Where is the 'front door' that will get me going? If that describes you, then this blog entry might be just what you need. If you're already a StreamInsight wiz, keep reading anyway - you may find some helpful links here that you weren't aware of. But here's what we'd like from you experienced readers in particular: if you know of other good resources that we missed, please feel free to add them in the comments below. We appreciate you sharing your expertise.

    The Book

    The basic documentation for StreamInsight is located in the MSDN Library (Microsoft StreamInsight 2.1). You'll notice that previous versions of StreamInsight are still there (1.2 and 2.0), but if you're just getting started you can stick to the 2.1 section. The documentation has been organized to function as reference material, which is fine after you're familiar with the technology. But if you're trying to learn the basics, you might want to take a different path instead of just starting at the top. The following is one map you can use.

    What Is StreamInsight?

    Here is a sequence of topics that should give you a good overview of what StreamInsight is and how it works:

    - Overview: answers the question, "what is it?"
    - StreamInsight Server Architecture: gives you a quick look at a high-level architectural drawing
    - StreamInsight Concepts: lays out an overview of the basic components
    - Deploying StreamInsight Entities to a StreamInsight Server: describes the mechanics of how these components work together

    Getting an Example Running

    Once you have this background, go ahead and install StreamInsight and get a basic example up and running:

    - Installation: download and install the software
    - StreamInsight Examples: walk through a set of 3 simple StreamInsight applications that work together to demonstrate what you learned in the topics above; you can copy and paste the code into Visual Studio, compile, and run

    That's it - you now have a real, functioning StreamInsight system! Now that you have a handle on the basics, you might want to start digging deeper.

    Digging Deeper

    Here's a suggested path through the documentation to help you understand the next layer of StreamInsight technologies:

    - Using Event Sources and Event Sinks: sources supply data and sinks consume it; this topic gives you an overview of how they work
    - Publishing and Connecting to the StreamInsight Server: practical details on how to set up a StreamInsight server
    - A Hitchhiker's Guide to StreamInsight 2.1 Queries: queries are the heart of how StreamInsight performs data analytics, and this whitepaper will help you really understand how they work
    - Using StreamInsight LINQ: root through this section for technical details on specific query components
    - Using the StreamInsight Event Flow Debugger: in addition to troubleshooting, the debugger is a great way to learn more about what goes on inside a StreamInsight application

    And Even Deeper

    Finally, to get a handle on some of the more complex things you can do with StreamInsight, dig into these:

    - Input and Output Adapters: adapters can be useful for handling more complex sources and sinks
    - Building Resilient StreamInsight Applications: a resilient application is able to recover from system failures
    - Operations: this section will help you monitor and troubleshoot a running StreamInsight system

    The StreamInsight Community

    As you're designing and developing your StreamInsight solutions, you probably will find it helpful to see working examples or to learn tips and tricks from others. Or maybe you need a place to post a vexing question. Here are some community resources that we have found useful. If you know of others, please add them in the comments below.

    Code samples and tools

    - Official StreamInsight code samples
    - Introduction to LinqPad Driver for StreamInsight 2.1 - LinqPad is a very useful tool for developing queries
    - The following case studies are based on earlier versions of StreamInsight, but they are still useful examples:
      - Microsoft Media Analytics - real-time monitoring and analytics
      - Edgenet - responding to information from multiple sources
      - ICONICS - managing energy usage

    Blogs

    - Microsoft StreamInsight
    - Ruminations of J.net
    - Richard Seroter's Architecture Musings
    - pluralsight

    Forums

    - MSDN StreamInsight Forum
    - stackoverflow

    Training

    - Microsoft StreamInsight Fundamentals ("Introducing StreamInsight" is free) from pluralsight

    Twitter

    - @streaminsight

    You're a StreamInsight Expert

    That should get you going. Please add any other resources you have found useful in the comments below.

    Regards,
    The StreamInsight Team

    Read the article
