Search Results

Search found 365 results on 15 pages for 'slice'.

Page 8/15 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • How to ship a server internationally?

    - by devians
    I have a fileserver that will need to be shipped internationally soon. I'm looking for advice on packing and recommendations on methods/companies. Should it be shipped whole, or in parts? How should it be packed, and what precautions should I take? Any way you slice it, it's going to be very heavy. Will this cause problems? What's the best way to protect it from shock? It would be pointless for it to arrive with broken HDDs. If you've done this before, your hindsight would be invaluable.

    Read the article

  • Solaris mounting partitions

    - by Benco
    I'm trying to mount a partition in Solaris 10... bash-3.00# mount /dev/dsk/c0t0d0s3 /data mount: /dev/dsk/c0t0d0s3 is already mounted or /data is busy As far as I know c0t0d0s3 isn't already mounted elsewhere, so what's really going on here? From /etc/mnttab: /dev/dsk/c1t0d0s0 / ufs rw,intr,largefiles,logging,xattr,onerror=panic,dev=780000 1285811136 /devices /devices devfs dev=4840000 1285811125 ctfs /system/contract ctfs dev=48c0001 1285811125 proc /proc proc dev=4880000 1285811125 mnttab /etc/mnttab mntfs dev=4900001 1285811125 swap /etc/svc/volatile tmpfs xattr,dev=4940001 1285811125 objfs /system/object objfs dev=4980001 1285811125 sharefs /etc/dfs/sharetab sharefs dev=49c0001 1285811125 /usr/lib/libc/libc_hwcap1.so.1 /lib/libc.so.1 lofs dev=780000 1285811131 fd /dev/fd fd rw,dev=4b40001 1285811136 swap /tmp tmpfs xattr,dev=4940002 1285811137 swap /var/run tmpfs xattr,dev=4940003 1285811137 -hosts /net autofs nosuid,indirect,ignore,nobrowse,dev=4c00001 1285811148 auto_home /home autofs indirect,ignore,nobrowse,dev=4c00002 1285811148 cordb:vold(pid530) /vol nfs ignore,noquota,dev=4bc0001 1285811149 I suspect the problem is not related to the mount point, but rather the disk slice I'm trying to mount: bash-3.00# newfs -v /dev/dsk/c0t0d0s3 /dev/rdsk/c0t0d0s3: Device busy

    Read the article

  • MySQL takes forever after running certain php scripts... optimization help?

    - by DFischer
    When I run a couple of scripts from the vBulletin software (like uninstalling a plugin) it takes forever. Monitoring the memory usage shows this: -/+ buffers/cache: 158 381 Swap: 255 10 245 It seems that MySQL only uses a certain amount, and once it reaches that it starts using swap instead? I have a 512MB slice, and right now my key buffer is at 16M and max_allowed_packet is at 16M. Is there something else I should increase, or can I increase those variables and still be safe? Thanks.

    Read the article

  • useful JMX metrics for monitoring WebSphere Application Server (and apps inside it)?

    - by Justin Grant
    When managing custom Java applications hosted inside WebSphere Application Server, what JMX metrics do you find most useful for monitoring performance, monitoring availability, and troubleshooting problems? And how do you prefer to slice and visualize those metrics (e.g. chart by top 10 hosts, graph by app, etc.). The more details I can get, the better, as I need to specify a standard set of reports which IT can offer to owners of applications hosted by IT, which those owners can customize but many won't bother. So I'll need to come up with a bunch of generally-applicable reports which most groups can use out-of-the-box. Obviously there's no one perfect answer to this question, so I'll accept the answer with the most comprehensive details and I'll be generous about upvoting any other useful answer. My question is WebSphere-specific, but I realize that most JMX metrics are equally applicable across any container, so feel free to give an answer for JBoss, Tomcat, WebLogic, etc.

    Read the article

  • Can we save image slices in sub folders in Photoshop?

    - by bobo
    I am using Photoshop CS5. By default, Photoshop saves image slices in a folder named images. This is fine, but when there are lots of images, saving them all in a single folder becomes quite messy. What I would like is for Photoshop to let me specify which sub-folder under the default images folder each slice should go into. For example, in a website template PSD file, I would like to save the image slices that appear in the header under images\header\, and maybe the images in the footer under images\footer\. Of course, I can group them manually after the slices are extracted, but if possible I would like to define this in Photoshop, so that every time the image slices are extracted they each go to the specified sub-folder automatically. Is this possible in Photoshop?

    Read the article

  • Pages partially load on rapid refresh

    - by user101570
    I recently set up a VPS slice with 256MB to run a LAMP stack (Ubuntu 11.04, Apache2, Mysql, PHP5). So far I'm only running a simple Wordpress site on an IP-based virtual host I set up. The performance is excellent, but I've noticed that if I send multiple HTTP requests from the same IP in a short time period, only partial pages are rendered. Then if I wait a bit and refresh the page, the entire page loads again. I noticed this behaviour when accessing the site from two browsers from my office desktop, but it also presents itself if I quickly navigate the site from a single browser (any browser). I'm guessing this is an Apache phenomenon, as the pages are rendered correctly except under the conditions above, but perhaps I'm wrong here. Could it be my hosting company with some kind of DOS protection in place? As a relative Linux/server noob, I'd really appreciate any insight into what settings in Apache could explain this behaviour, and how I might go about changing it.

    Read the article

  • Develop an iPad app with MonoTouch, C#, and .NET.

    - by Wallym
    Article url: http://www.devproconnections.com/article/mobile-development/ipad-ios-app-monotouch-143636 Since its release in March 2010, the iPad has taken the world by storm. Each new iPad release has launched the device further and further into our lives. Here are some interesting facts that we have seen over the past few years along with some market share analysis: The school that my teenagers attend here in Knoxville, Tennessee, was the first school in the United States to integrate the iPad into its teaching program and its curriculum. Many schools have since followed suit. There are a healthy number of applications in a variety of market segments. In fact, there is a market for iPad point-of-sale systems. The iPad is a popular device and growing more so each day. comScore states that one in four smartphone owners also owns a tablet. We also know that the iPad comprises 68 percent of the tablet market. eMarketer recently issued a report stating that the number of iPad users is expected to grow by 90 percent in the US in 2012. No matter how you slice the data, tablet usage is growing, and the iPad is currently leading the pack. The question for .NET developers is “How can I get me some of that?” Xamarin’s MonoTouch offers help, by providing the means for .NET developers to leverage their C# and .NET coding skills to develop iPad applications. MonoTouch has supported the iPad since the initial release of the iOS 3.2 beta SDK all the way up to the most recent iOS SDK, which supports the iPad. In this article, we’ll look at targeting the iPad and how we can take advantage of iPad-specific features in our iOS applications written with MonoTouch. I hope you enjoy the article and you find it helpful.

    Read the article

  • Dim an Overly Bright Alarm Clock with a Binder Divider

    - by ETC
    Love your alarm clock but hate how eye-searingly bright it is? Slice up a plastic binder divider to dim your alarm clock (or any other aggressively bright monochromatic display). At DIY site Curbly, Chris Job shares a simple alarm clock hack. For years he had an alarm clock with a nice dim display. When it broke he went in search of a replacement but failed to find one that wasn’t overly bright. His solution came in the form of a sliced-up binder divider (the clear, usually tinted, plastic tabs you put in between paper in a binder). He sliced one to fit the display, spritzed it with a little water to help it cling to the plastic, and pressed it in place. The plastic dims the display enough–as seen in the before/after picture above–that he doesn’t need to cover it up to get a good night’s sleep. Calm Your Alarm Clock’s Display and Sleep Better for 25¢ [Curbly]

    Read the article

  • Getting Query Parameters in Javascript

    - by PhubarBaz
    I find myself needing to get query parameters that are passed into a web app on the URL quite often. At first I wrote a function that creates an associative array (aka object) with all of the parameters as keys and returns it. But then I was looking at the revealing module pattern, a nice javascript design pattern designed to hide private functions, and came up with a way to do this without even calling a function. What I came up with was this nice little object that automatically initializes itself into the same associative array that the function call did previously. // Creates associative array (object) of query params var QueryParameters = (function() {     var result = {};     if (window.location.search)     {         // split up the query string and store in an associative array         var params = window.location.search.slice(1).split("&");         for (var i = 0; i < params.length; i++)         {             var tmp = params[i].split("=");             result[tmp[0]] = unescape(tmp[1]);         }     }     return result; }()); Now all you have to do to get the query parameters is just reference them from the QueryParameters object. There is no need to create a new object or call any function to initialize it. var debug = (QueryParameters.debug === "true"); or if (QueryParameters["debug"]) doSomeDebugging(); or loop through all of the parameters. for (var param in QueryParameters) var value = QueryParameters[param]; Hope you find this object useful.

    Read the article

  • Common Live Upgrade problems

    - by user12611829
    As I have worked with customers deploying Live Upgrade in their environments, several problems seem to surface over and over. With this blog article, I will try to collect these troubles, as well as suggest some workarounds. If this sounds like the beginnings of a Wiki, you would be right. At present, there is not enough material for one, so we will use this blog for the time being. I do expect new material to be posted on occasion, so if you wish to bookmark it for future reference, a permanent link can be found here. Live Upgrade copies over ZFS root clone This was introduced in Solaris 10 10/09 (u8) and the root of the problem is a duplicate entry in the source boot environments ICF configuration file. Prior to u8, a ZFS root file system was not included in /etc/vfstab, since the mount is implicit at boot time. Starting with u8, the root file system is included in /etc/vfstab, and when the boot environment is scanned to create the ICF file, a duplicate entry is recorded. Here's what the error looks like. # lucreate -n s10u9-baseline Checking GRUB menu... System has findroot enabled GRUB Analyzing system configuration. Comparing source boot environment file systems with the file system(s) you specified for the new boot environment. Determining which file systems should be in the new boot environment. Updating boot environment description database on all BEs. Updating system configuration files. Creating configuration for boot environment . Source boot environment is . Creating boot environment . Creating file systems on boot environment . Creating file system for in zone on . The error indicator ----- /usr/lib/lu/lumkfs: test: unknown operator zfs Populating file systems on boot environment . Checking selection integrity. Integrity check OK. Populating contents of mount point . This should not happen ------ Copying. Ctrl-C and cleanup If you weren't paying close attention, you might not even know this is an error. The symptoms are lucreate times that are way too long due to the extraneous copy, or the one that alerted me to the problem, the root file system is filling up - again thanks to a redundant copy. This problem has already been identified and corrected, and a patch (121431-58 or later for x86, 121430-57 for SPARC) is available. Unfortunately, this patch has not yet made it into the Solaris 10 Recommended Patch Cluster. Applying the prerequisite patches from the latest cluster is a recommendation from the Live Upgrade Survival Guide blog, so an additional step will be required until the patch is included. Let's see how this works. # patchadd -p | grep 121431 Patch: 121429-13 Obsoletes: Requires: 120236-01 121431-16 Incompatibles: Packages: SUNWluzone Patch: 121431-54 Obsoletes: 121436-05 121438-02 Requires: Incompatibles: Packages: SUNWlucfg SUNWluu SUNWlur # unzip 121431-58 # patchadd 121431-58 Validating patches... Loading patches installed on the system... Done! Loading patches requested to install. Done! Checking patches that you specified for installation. Done! Approved patches will be installed in this order: 121431-58 Checking installed patches... Executing prepatch script... Installing patch packages... Patch 121431-58 has been successfully installed. See /var/sadm/patch/121431-58/log for details Executing postpatch script... Patch packages installed: SUNWlucfg SUNWlur SUNWluu # lucreate -n s10u9-baseline Checking GRUB menu... System has findroot enabled GRUB Analyzing system configuration. INFORMATION: Unable to determine size or capacity of slice . 
Comparing source boot environment file systems with the file system(s) you specified for the new boot environment. Determining which file systems should be in the new boot environment. INFORMATION: Unable to determine size or capacity of slice . Updating boot environment description database on all BEs. Updating system configuration files. Creating configuration for boot environment . Source boot environment is . Creating boot environment . Cloning file systems from boot environment to create boot environment . Creating snapshot for on . Creating clone for on . Setting canmount=noauto for in zone on . Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev. Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev. Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev. File propagation successful Copied GRUB menu from PBE to ABE No entry for BE in GRUB menu Population of boot environment successful. Creation of boot environment successful. This time it took just a few seconds. A cursory examination of the offending ICF file (/etc/lu/ICF.3 in this case) shows that the duplicate root file system entry is now gone. # cat /etc/lu/ICF.3 s10u8-baseline:-:/dev/zvol/dsk/panroot/swap:swap:8388608 s10u8-baseline:/:panroot/ROOT/s10u8-baseline:zfs:0 s10u8-baseline:/vbox:pandora/vbox:zfs:0 s10u8-baseline:/setup:pandora/setup:zfs:0 s10u8-baseline:/export:pandora/export:zfs:0 s10u8-baseline:/pandora:pandora:zfs:0 s10u8-baseline:/panroot:panroot:zfs:0 s10u8-baseline:/workshop:pandora/workshop:zfs:0 s10u8-baseline:/export/iso:pandora/iso:zfs:0 s10u8-baseline:/export/home:pandora/home:zfs:0 s10u8-baseline:/vbox/HardDisks:pandora/vbox/HardDisks:zfs:0 s10u8-baseline:/vbox/HardDisks/WinXP:pandora/vbox/HardDisks/WinXP:zfs:0 Solaris 10 9/10 introduces new autoregistration file This one is actually mentioned in the Oracle Solaris 9/10 release notes. I know, I hate it when that happens too. Here's what the "error" looks like. # luupgrade -u -s /mnt -n s10u9-baseline System has findroot enabled GRUB No entry for BE in GRUB menu Copying failsafe kernel from media. 61364 blocks miniroot filesystem is Mounting miniroot at ERROR: The auto registration file does not exist or incomplete. The auto registration file is mandatory for this upgrade. Use -k argument along with luupgrade command. autoreg_file is path to auto registration information file. See sysidcfg(4) for a list of valid keywords for use in this file. The format of the file is as follows. oracle_user=xxxx oracle_pw=xxxx http_proxy_host=xxxx http_proxy_port=xxxx http_proxy_user=xxxx http_proxy_pw=xxxx For more details refer "Oracle Solaris 10 9/10 Installation Guide: Planning for Installation and Upgrade". As with the previous problem, this is also easy to work around. Assuming that you don't want to use the auto-registration feature at upgrade time, create a file that contains just autoreg=disable and pass the filename on to luupgrade. Here is an example. # echo "autoreg=disable" > /var/tmp/no-autoreg # luupgrade -u -s /mnt -k /var/tmp/no-autoreg -n s10u9-baseline System has findroot enabled GRUB No entry for BE in GRUB menu Copying failsafe kernel from media. 61364 blocks miniroot filesystem is Mounting miniroot at ####################################################################### NOTE: To improve products and services, Oracle Solaris communicates configuration data to Oracle after rebooting.
You can register your version of Oracle Solaris to capture this data for your use, or the data is sent anonymously. For information about what configuration data is communicated and how to control this facility, see the Release Notes or www.oracle.com/goto/solarisautoreg. INFORMATION: After activated and booted into new BE , Auto Registration happens automatically with the following Information autoreg=disable ####################################################################### Validating the contents of the media . The media is a standard Solaris media. The media contains an operating system upgrade image. The media contains version . Constructing upgrade profile to use. Locating the operating system upgrade program. Checking for existence of previously scheduled Live Upgrade requests. Creating upgrade profile for BE . Checking for GRUB menu on ABE . Saving GRUB menu on ABE . Checking for x86 boot partition on ABE. Determining packages to install or upgrade for BE . Performing the operating system upgrade of the BE . CAUTION: Interrupting this process may leave the boot environment unstable or unbootable. The Live Upgrade operation now proceeds as expected. Once the system upgrade is complete, we can manually register the system. If you want to do a hands-off registration during the upgrade, see the Oracle Solaris Auto Registration section of the Oracle Solaris Release Notes for instructions on how to do that. Technorati Tags: Oracle Solaris Patching Live Upgrade

    Read the article

  • Jump handling and gravity

    - by sprawl
    I'm new to game development and am looking for some help on improving my jump handling for a simple side-scrolling game I've made. I would like the jump to last longer if the key is held down for the full length of the jump, or, if the key is only tapped, to be shorter. Currently, how I'm handling the jumping is the following: Player.prototype.jump = function () { // Player pressed jump key if (this.isJumping === true) { // Set sprite to jump state this.settings.slice = 250; if (this.isFalling === true) { // Player let go of jump key, increase rate of fall this.settings.y -= this.velocity; this.velocity -= this.settings.gravity * 2; } else { // Player is holding down jump key this.settings.y -= this.velocity; this.velocity -= this.settings.gravity; } } if (this.settings.y >= 240) { // Player is on the ground this.isJumping = false; this.isFalling = false; this.velocity = this.settings.maxVelocity; this.settings.y = 240; } } I'm setting isJumping on keydown and isFalling on keyup. While it works okay for simple use, I'm looking for a better way to handle jumping and gravity. It's a bit buggy on keyup if the gravity is increased (which is why I had to put the last y setting in the last if condition), so I'd like to know a better way to do it. Where are some resources I could look at that would help me better understand how to handle jumping and gravity? What's a better approach to handling this? Like I said, I'm new to game development so I could be doing it completely wrong. Any help would be appreciated.

    Read the article

  • Self-referencing anonymous closures: is JavaScript incomplete?

    - by Tom Auger
    Does the fact that anonymous self-referencing function closures are so prevalent in JavaScript suggest that JavaScript is an incomplete specification? We see so much of this: (function () { /* do cool stuff */ })(); and I suppose everything is a matter of taste, but does this not look like a kludge, when all you want is a private namespace? Couldn't JavaScript implement packages and proper classes? Compare to ActionScript 3, also based on ECMAScript, where you get package com.tomauger { import bar; class Foo { public function Foo(){ // etc... } public function show(){ // show stuff } public function hide(){ // hide stuff } // etc... } } Contrast to the convolutions we perform in JavaScript (this, from the jQuery plugin authoring documentation): (function( $ ){ var methods = { init : function( options ) { // THIS }, show : function( ) { // IS }, hide : function( ) { // GOOD }, update : function( content ) { // !!! } }; $.fn.tooltip = function( method ) { // Method calling logic if ( methods[method] ) { return methods[ method ].apply( this, Array.prototype.slice.call( arguments, 1 )); } else if ( typeof method === 'object' || ! method ) { return methods.init.apply( this, arguments ); } else { $.error( 'Method ' + method + ' does not exist on jQuery.tooltip' ); } }; })( jQuery ); I appreciate that this question could easily degenerate into a rant about preferences and programming styles, but I'm actually very curious to hear how you seasoned programmers feel about this: does it feel natural, like learning the idiosyncrasies of a new language, or kludgy, like a workaround for some basic programming language features that are just not implemented?

    Read the article

  • How to cover the widest range of computers when publishing?

    - by DevilWithin
    When you plan a game, or even when you have already made one and it’s time to publish, you wonder how much of your audience is covered by the game’s technology demands. I’m directing this essentially at casual games, as I constantly see people with old laptops and no way to replace them. Laptops with integrated cards whose OpenGL version doesn’t even support textures larger than 1024x1024. These people may be avid gamers as well, and a reasonable share of the audience worth giving the chance to play casual games, since they cannot play any blockbusters. A very “noticeable” example I’ve seen is Angry Birds. Its gameplay is merely casual (I think nobody disagrees here) and still it uses such high-resolution textures that at least OpenGL 2.0 or thereabouts is needed, which shuts out a lot of people. So, the actual question is: what is a good tradeoff for this issue? Would it be better to just sacrifice the texture resolution for everyone, but support more hardware? Would it be better to keep the high quality and just slice the textures into smaller ones, sacrificing performance a little? What else? Any ideas about this topic are welcome for discussion.
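
    As a rough illustration of the texture-slicing option mentioned above, a minimal Python sketch (assuming Pillow is installed; the file name "atlas.png" and the 1024-pixel tile size are only illustrative assumptions, not details from the question) could look like this:

        # Split a large texture atlas into tiles no bigger than 1024x1024 so that
        # older integrated cards with a low maximum texture size can still load them.
        from PIL import Image

        TILE = 1024  # assumed maximum texture size supported by the weakest target hardware

        def slice_texture(path, tile=TILE):
            image = Image.open(path)
            width, height = image.size
            tiles = []
            for top in range(0, height, tile):
                for left in range(0, width, tile):
                    box = (left, top, min(left + tile, width), min(top + tile, height))
                    tiles.append(image.crop(box))
            return tiles

        # Example usage: save each tile so the game can stitch them back together at runtime.
        for i, piece in enumerate(slice_texture("atlas.png")):  # "atlas.png" is a hypothetical input file
            piece.save("atlas_%03d.png" % i)

    The game then draws the tiles side by side, trading a few extra draw calls for compatibility with older hardware.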

    Read the article

  • Neo4j increasing latency as SKIP increases on Cypher query + REST API

    - by voldomazta
    My setup: Java(TM) SE Runtime Environment (build 1.7.0_45-b18) Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode) Neo4j 2.0.0-M06 Enterprise First I made sure I warmed up the cache by executing the following: START n=node(*) RETURN COUNT(n); START r=relationship(*) RETURN count(r); The size of the graph is 63,677 nodes and 7,169,995 relationships. Now I have the following query: START u1=node:node_auto_index('uid:39') MATCH (u1:user)-[w:WANTS]->(c:card)<-[h:HAS]-(u2:user) WHERE u2.uid <> 39 WITH u2.uid AS uid, (CASE WHEN w.qty < h.qty THEN w.qty ELSE h.qty END) AS have RETURN uid, SUM(have) AS total ORDER BY total DESC SKIP 0 LIMIT 25 This UID has about 40k+ results that I want to paginate. The initial skip took around 773ms. I tried page 2 (SKIP 25) and the latency was around the same; even up to page 500 it only rose to about 900ms, so I didn't worry about it. Then I tried some fast-forward paging and jumped by thousands: 1000, then 2000, then 3000. I was hoping the ORDER BY arrangement would already be cached by Neo4j, so that SKIP would just jump to that position in the result without iterating through each row again. But with each thousand I skipped, the latency increased by a lot. It's not just cache warming, because for one I already warmed up the cache, and two, I tried each skip a couple of times and it yielded the same results: SKIP 0: 773ms SKIP 1000: 1369ms SKIP 2000: 2491ms SKIP 3000: 3899ms SKIP 4000: 5686ms SKIP 5000: 7424ms Now who the hell would want to view 5000 pages of results? 40k even?! :) Good point! I will probably put a cap on the maximum results a user can view, but I was just curious about this phenomenon. Will somebody please explain why Neo4j seems to be re-iterating through data it appears to already know?
Here is my profiling for the 0 skip: ==> ColumnFilter(symKeys=["uid", " INTERNAL_AGGREGATE65c4d6a2-1930-4f32-8fd9-5e4399ce6f14"], returnItemNames=["uid", "total"], _rows=25, _db_hits=0) ==> Slice(skip="Literal(0)", _rows=25, _db_hits=0) ==> Top(orderBy=["SortItem(Cached( INTERNAL_AGGREGATE65c4d6a2-1930-4f32-8fd9-5e4399ce6f14 of type Any),false)"], limit="Add(Literal(0),Literal(25))", _rows=25, _db_hits=0) ==> EagerAggregation(keys=["uid"], aggregates=["( INTERNAL_AGGREGATE65c4d6a2-1930-4f32-8fd9-5e4399ce6f14,Sum(have))"], _rows=41659, _db_hits=0) ==> ColumnFilter(symKeys=["have", "u1", "uid", "c", "h", "w", "u2"], returnItemNames=["uid", "have"], _rows=146826, _db_hits=0) ==> Extract(symKeys=["u1", "c", "h", "w", "u2"], exprKeys=["uid", "have"], _rows=146826, _db_hits=587304) ==> Filter(pred="((NOT(Product(u2,uid(0),true) == Literal(39)) AND hasLabel(u1:user(0))) AND hasLabel(u2:user(0)))", _rows=146826, _db_hits=146826) ==> TraversalMatcher(trail="(u1)-[w:WANTS WHERE (hasLabel(NodeIdentifier():card(1)) AND hasLabel(NodeIdentifier():card(1))) AND true]->(c)<-[h:HAS WHERE (NOT(Product(NodeIdentifier(),uid(0),true) == Literal(39)) AND hasLabel(NodeIdentifier():user(0))) AND true]-(u2)", _rows=146826, _db_hits=293696) And for the 5000 skip: ==> ColumnFilter(symKeys=["uid", " INTERNAL_AGGREGATE99329ea5-03cd-4d53-a6bc-3ad554b47872"], returnItemNames=["uid", "total"], _rows=25, _db_hits=0) ==> Slice(skip="Literal(5000)", _rows=25, _db_hits=0) ==> Top(orderBy=["SortItem(Cached( INTERNAL_AGGREGATE99329ea5-03cd-4d53-a6bc-3ad554b47872 of type Any),false)"], limit="Add(Literal(5000),Literal(25))", _rows=5025, _db_hits=0) ==> EagerAggregation(keys=["uid"], aggregates=["( INTERNAL_AGGREGATE99329ea5-03cd-4d53-a6bc-3ad554b47872,Sum(have))"], _rows=41659, _db_hits=0) ==> ColumnFilter(symKeys=["have", "u1", "uid", "c", "h", "w", "u2"], returnItemNames=["uid", "have"], _rows=146826, _db_hits=0) ==> Extract(symKeys=["u1", "c", "h", "w", "u2"], exprKeys=["uid", "have"], _rows=146826, _db_hits=587304) ==> Filter(pred="((NOT(Product(u2,uid(0),true) == Literal(39)) AND hasLabel(u1:user(0))) AND hasLabel(u2:user(0)))", _rows=146826, _db_hits=146826) ==> TraversalMatcher(trail="(u1)-[w:WANTS WHERE (hasLabel(NodeIdentifier():card(1)) AND hasLabel(NodeIdentifier():card(1))) AND true]->(c)<-[h:HAS WHERE (NOT(Product(NodeIdentifier(),uid(0),true) == Literal(39)) AND hasLabel(NodeIdentifier():user(0))) AND true]-(u2)", _rows=146826, _db_hits=293696) The only difference is the LIMIT clause on the Top function. I hope we can make this work as intended, I really don't want to delve into doing an embedded Neo4j + my own Jetty REST API for the web app.

    Read the article

  • Data Structure for Small Number of Agents in a Relatively Big 2D World

    - by Seçkin Savasçi
    I'm working on a project where we will implement a kind of world simulation on a square 2D world. Agents live in this world and make decisions like moving or replicating themselves based on their neighbor cells (world = grid) and some extra parameters (which are not based on the state of the world). I'm looking for a data structure to implement such a project. My concerns are: I will implement this 3 times: sequential, using OpenMP, and using MPI, so it would be good if I can use the same structure for all of them. The first thing that comes up is keeping a 2D array for the world and storing agent references in it, then simulating the world for each time slice by checking every cell in each iteration and doing further processing if an agent is found in the cell. The downside is: what if I have a 1000x1000 world and only 5 agents in it? It would be overkill for both the sequential and parallel versions to check each cell and look for possible agents in it. I could use a quadtree and store agents in it, but then how can I get the information about neighbor cells? Please let me know if I should elaborate more.
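
    For comparison, here is a minimal Python sketch of a sparse alternative: keep only the occupied cells in a dictionary keyed by (x, y), so a mostly empty 1000x1000 grid costs nothing to scan while neighbor lookups stay cheap. The class and method names are hypothetical illustrations, not taken from the question.

        # Store only occupied cells of the square 2D world in a dict, so iterating over
        # 5 agents in a 1000x1000 world touches 5 entries rather than 1,000,000 cells.
        class SparseWorld:
            def __init__(self, width, height):
                self.width, self.height = width, height
                self.agents = {}  # (x, y) -> agent

            def place(self, agent, x, y):
                self.agents[(x, y)] = agent

            def neighbors(self, x, y):
                # Check the 8 surrounding cells directly; empty cells are simply absent.
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        if dx == 0 and dy == 0:
                            continue
                        cell = (x + dx, y + dy)
                        if 0 <= cell[0] < self.width and 0 <= cell[1] < self.height and cell in self.agents:
                            yield self.agents[cell]

            def step(self):
                # One time slice: visit only occupied cells.
                for (x, y), agent in list(self.agents.items()):
                    nearby = list(self.neighbors(x, y))
                    # move/replicate decision logic based on `nearby` would go here

        # Example usage
        world = SparseWorld(1000, 1000)
        world.place("agent-1", 10, 10)
        world.place("agent-2", 11, 10)
        world.step()

    The same idea carries over to the OpenMP/MPI versions by partitioning the key space across workers, though neighbor queries at partition boundaries would need extra handling.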

    Read the article

  • One monitor getting spilled over into other monitor: how to do a 100% reset of gnome graphics configuration

    - by Paul Nathan
    I had to kill a VMWare process and afterwards, my monitor's configuration is buggy. I have 2 monitors in a side-by-side configuration. My right-hand monitor is the secondary monitor. Upon its right-hand side there are about 50 pixels showing from the left side of the lefthand monitor (ie, as if it was wrapped around). Further, my mouse clicks are registering as about 50 pixels sideways from where they should be. It's as if those 50 pixels between monitors got gobbled. What have I done? I've reset the screen configuration in multiple ways, using xrandr, multiple monitors app, etc. This persists in different side-by-side configurations, and also persists with another user. It does not occur with XFCE. Resetting the Window manager with the Compiz reset WM app does not fix this. I've concluded the burn-to-the-ground approach is likely the best, and would like to do a 100% reset of my graphics settings. It's an Intel integrated chipset. Removing ~/.config/monitors.xml did not work. Also, interestingly, the mouse can mouse-over the 50 errant pixels on the rhs of the right-hand monitor. I hypothesize that it's a compositing problem occurring at the layer where the background, selection, and clicks are caught. Also, inverting the right-hand monitor removes the issue, but renders the screen unusable. Even more datapoints: This happens in KDE as well Sometimes logging into Gnome and running xrandr --output DVI1 --auto resets it, but the issue immediately reappears when I press alt-tab. With Compiz Application Switch turned on, the workspace is 'pushed back' a bit, and the slice on the RHS follows it as well. I'm wondering if it's a flaw in the compiz workspace compositing configuration. I suspect the error was in the compositing configuration. I installed 11.10.

    Read the article

  • Need a good quality bitmap rotation algorithm for Android

    - by Lumis
    I am creating a kaleidoscopic effect on an android tablet. I am using the code below to rotate a slice of an image, but as you can see in the image when rotating a bitmap 60 degrees it distorts it quite a lot (red rectangles) – it is smudging the image! I have set dither and anti-alias flags but it does not help much. I think it is just not a very sophisticated bitmap rotation algorithm. canvas.save(); canvas.rotate(angle, screenW/2, screenH/2); canvas.drawBitmap(picSlice, screenW/2, screenH/2, pOutput); canvas.restore(); So I wonder if you can help me find a better way to rotate a bitmap. It does not have to be fast, because I intend to use a high quality rotation only when I want to save the screen to the SD card - I would redraw the screen in memory before saving. Do you know any comprehensible or replicable algorithm for bitmap rotation that I could programme or use as a library? Or any other suggestion? EDIT: The answers below made me think if Android OS had bilinear or bicubic interpolation option and after some search I found that it does have its own version of it called FilterBitmap. After applying it to my paint pOutput.setFilterBitmap(true); I get much better result

    Read the article

  • Static / Shared Helper Functions vs Built-In Methods

    - by Nathan
    This is a simple question, but a design consideration that I often run across in my day-to-day development work. Let's say that you have a class that represents some kind of collection. Public Class ModifiedCustomerOrders Public Property Orders as List(Of ModifiedOrders) End Class Within this class you do all kinds of important work, such as combining many different information sources and, eventually, building the Modified Customer Orders. Now, you have different processes that consume this class, each of which needs a slightly different slice of the ModifiedCustomerOrders items. To enable this, you want to add filtering functionality. How do you go about this? Do you: Add filtering calls to the ModifiedCustomerOrders class so that you can say: MyOrdersClass.RemoveCanceledOrders() Create a Static / Shared "tooling" class that allows you to call: OrdersFilters.RemoveCanceledOrders(MyOrders) Create an extension method to accomplish the same feat as #2 but with less typing: MyOrders.RemoveCanceledOrders() Create a "Service" method that handles the getting of Orders as appropriate to the calling function, while using one of the previous approaches "under the hood": OrdersService.GetOrdersForProcessA() Something else? I tend to prefer the tooling / extension method approaches, as they make testing a little bit simpler. Although I dependency-inject all my sourcing data into the ModifiedCustomerOrders, having it as part of the class makes it a little bit more complicated to test. Typically, I choose extension methods where I am doing parameterless transformations / filters; as they get more complex, I move them into a static class instead. Thoughts on this approach? How would you approach it?

    Read the article

  • nil object in view when building objects on two different associations

    - by Shako
    Hello all. I'm relatively new to Ruby on Rails so please don't mind my newbie level! I have following models: class Paintingdescription < ActiveRecord::Base belongs_to :paintings belongs_to :languages end class Paintingtitle < ActiveRecord::Base belongs_to :paintings belongs_to :languages end class Painting < ActiveRecord::Base has_many :paintingtitles, :dependent => :destroy has_many :paintingdescriptions, :dependent => :destroy has_many :languages, :through => :paintingdescriptions has_many :languages, :through => :paintingtitles end class Language < ActiveRecord::Base has_many :paintingtitles, :dependent => :nullify has_many :paintingdescriptions, :dependent => :nullify has_many :paintings, :through => :paintingtitles has_many :paintings, :through => :paintingdescriptions end In my painting new/edit view, I would like to show the painting details, together with its title and description in each of the languages, so I can store the translation of those field. In order to build the languagetitle and languagedescription records for my painting and each of the languages, I wrote following code in the new method of my Paintings_controller.rb: @temp_languages = @languages @languages.size.times{@painting.paintingtitles.build} @painting.paintingtitles.each do |paintingtitle| paintingtitle.language_id = @temp_languages[0].id @temp_languages.slice!(0) end @temp_languages = @languages @languages.size.times{@painting.paintingdescriptions.build} @painting.paintingdescriptions.each do |paintingdescription| paintingdescription.language_id = @temp_languages[0].id @temp_languages.slice!(0) end In form partial which I call in the new/edit view, I have <% form_for @painting, :html => { :multipart => true} do |f| %> ... <% languages.each do |language| %> <p> <%= label language, language.name %> <% paintingtitle = @painting.paintingtitles[counter] %> <% new_or_existing = paintingtitle.new_record? ? 'new' : 'new' %> <% prefix = "painting[#{new_or_existing}_title_attributes][]" %> <% fields_for prefix, paintingtitle do |paintingtitle_form| %> <%= paintingtitle_form.hidden_field :language_id%> <%= f.label :title %><br /> <%= paintingtitle_form.text_field :title%> <% end %> <% paintingdescription = @painting.paintingdescriptions[counter] %> <% new_or_existing = paintingdescription.new_record? ? 'new' : 'new' %> <% prefix = "painting[#{new_or_existing}_title_attributes][]" %> <% fields_for prefix, paintingdescription do |paintingdescription_form| %> <%= paintingdescription_form.hidden_field :language_id%> <%= f.label :description %><br /> <%= paintingdescription_form.text_field :description %> <% end %> </p> <% counter += 1 %> <% end %> ... <% end %> But, when running the code, ruby encounters a nil object when evaluating paintingdescription.new_record?: You have a nil object when you didn't expect it! You might have expected an instance of ActiveRecord::Base. The error occurred while evaluating nil.new_record? However, if I change the order in which I a) build the paintingtitles and painting descriptions in the paintings_controller new method and b) show the paintingtitles and painting descriptions in the form partial then I get the nil on the paintingtitles.new_record? call. I always get the nil for the objects I build in second place. The ones I build first aren't nil in my view. Is it possible that I cannot build objects for 2 different associations at the same time? Or am I missing something else? Thanks in advance!

    Read the article

  • Overlapping text on top menu in Unity 12.04

    - by mercury
    So I'm new to Linux but am basically blown away by Ubuntu 12.04 and could definitely see this becoming my main desktop over time. One small annoyance for me is a tendency for the global menu on the top bar to partially overwrite the text description of the active window in the top panel. For example, I focus on the "Ubuntu Software Centre" window, which writes out that label in the leftmost corner of the top menu bar. If I then move the cursor up to the top menu bar to access the "file menu", this partially overwrites the window name, leaving just "Ubuntu" visible. This is a little slice of ugliness I don't want to see every day! Much easier on the eye for me would be to display the active window name at the centre of the top panel, using some of that free space, and then have the global menu stay where it is, just to the right of the app launcher. I've found a solution to disable the global menu, but I would prefer to keep it and instead move (or disable) the active window name in the top panel. Any way to do this?

    Read the article

  • Has there ever been a great print version of Why’s Poignant Guide to Ruby?

    - by Paul D. Waite
    Although it’s probably meant to be experienced on the web, I’d love to read a great print version of Why’s Poignant Guide to Ruby. It’s liberally licensed, so I can run off a copy for myself. But I think a work like that deserves more. Full colour illustrations. Main text on the left-hand page, sidebars on the right. (Stick in a few cartoons, or a slice of onion, when there aren’t any sidebars.) A built-in MP3 player and headphones for the soundtrack, but made completely out of telephone wires. Whilst that might be wishing for too much, have you (ever) seen any decent print versions of the book available?

    Read the article

  • Force chart labels to remain inside frame

    - by adolf garlic - SAVE BBC6MUSIC
    RS2008 - pie chart I have 'outside' labels with lines pointing to the segment (although strangely this only appears to work in pdf output) However (see pic below) the label is appearing outside the scope of the chart area How can I force it to remain inside? (MinimumRelativePieSize is set to 70) (pic below missing due to not being able to find an image host that isn't blocked by corp firewall) Picture a pie chart of 25 slices, with radial lines that project through the sides. The line from each slice then becomes horizontal, before disappearing outside. (above actually fits tune of "Lucy in the Sky with Diamonds")

    Read the article

  • REST API Help in Rails

    - by dannymcc
    Hi Everyone, I am trying to get some information posted using our accountancy package (FreeAgentCentral) using their API via a GEM. http://github.com/aaronrussell/freeagent_api/ I have the following code to get it working (supposedly): Kase Controller def create @kase = Kase.new(params[:kase]) @company = Company.find(params[:kase][:company_id]) @kase = @company.kases.create!(params[:kase]) respond_to do |format| if @kase.save UserMailer.deliver_makeakase("[email protected]", "Highrise", @kase) @kase.create_freeagent_project(current_user) #flash[:notice] = 'Case was successfully created.' flash[:notice] = fading_flash_message("Case was successfully created & sent to Highrise.", 5) format.html { redirect_to(@kase) } format.xml { render :xml => @kase, :status => :created, :location => @kase } else format.html { render :action => "new" } format.xml { render :xml => @kase.errors, :status => :unprocessable_entity } end end end To save you looking through, the important part is: @kase.create_freeagent_project(current_user) Kase Model # FreeAgent API Project Create # Required attribues # :contact_id # :name # :payment_term_in_days # :billing_basis # must be 1, 7, 7.5, or 8 # :budget_units # must be Hours, Days, or Monetary # :status # must be Active or Completed def create_freeagent_project(current_user) p = Freeagent::Project.create( :contact_id => 0, :name => "#{jobno} - #{highrisesubject}", :payment_terms_in_days => 5, :billing_basis => 1, :budget_units => 'Hours', :status => 'Active' ) user = Freeagent::User.find_by_email(current_user.email) Freeagent::Timeslip.create( :project_id => p.id, :user_id => user.id, :hours => 1, :new_task => 'Setup', :dated_on => Time.now ) end lib/freeagent_api.rb require 'rubygems' gem 'activeresource', '< 3.0.0.beta1' require 'active_resource' module Freeagent class << self def authenticate(options) Base.authenticate(options) end end class Error < StandardError; end class Base < ActiveResource::Base def self.authenticate(options) self.site = "https://#{options[:domain]}" self.user = options[:username] self.password = options[:password] end end # Company class Company def self.invoice_timeline InvoiceTimeline.find :all, :from => '/company/invoice_timeline.xml' end def self.tax_timeline TaxTimeline.find :all, :from => '/company/tax_timeline.xml' end end class InvoiceTimeline < Base self.prefix = '/company/' end class TaxTimeline < Base self.prefix = '/company/' end # Contacts class Contact < Base end # Projects class Project < Base def invoices Invoice.find :all, :from => "/projects/#{id}/invoices.xml" end def timeslips Timeslip.find :all, :from => "/projects/#{id}/timeslips.xml" end end # Tasks - Complete class Task < Base self.prefix = '/projects/:project_id/' end # Invoices - Complete class Invoice < Base def mark_as_draft connection.put("/invoices/#{id}/mark_as_draft.xml", encode, self.class.headers).tap do |response| load_attributes_from_response(response) end end def mark_as_sent connection.put("/invoices/#{id}/mark_as_sent.xml", encode, self.class.headers).tap do |response| load_attributes_from_response(response) end end def mark_as_cancelled connection.put("/invoices/#{id}/mark_as_cancelled.xml", encode, self.class.headers).tap do |response| load_attributes_from_response(response) end end end # Invoice items - Complete class InvoiceItem < Base self.prefix = '/invoices/:invoice_id/' end # Timeslips class Timeslip < Base def self.find(*arguments) scope = arguments.slice!(0) options = arguments.slice!(0) || {} if options[:params] && options[:params][:from] 
&& options[:params][:to] options[:params][:view] = options[:params][:from]+'_'+options[:params][:to] options[:params].delete(:from) options[:params].delete(:to) end case scope when :all then find_every(options) when :first then find_every(options).first when :last then find_every(options).last when :one then find_one(options) else find_single(scope, options) end end end # Users class User < Base self.prefix = '/company/' def self.find_by_email(email) users = User.find :all users.each do |u| u.email == email ? (return u) : next end raise Error, "No user matches that email!" end end end config/initializers/freeagent.rb Freeagent.authenticate({ :domain => 'XXXXX.freeagentcentral.com', :username => '[email protected]', :password => 'XXXXXX' }) The above render the following error when trying to create a new Case and send the details to FreeAgent: ActiveResource::ResourceNotFound in KasesController#create Failed with 404 Not Found and ActiveResource::ResourceNotFound (Failed with 404 Not Found): app/models/kase.rb:56:in `create_freeagent_project' app/controllers/kases_controller.rb:96:in `create' app/controllers/kases_controller.rb:93:in `create' Rendered rescues/_trace (176.5ms) Rendered rescues/_request_and_response (1.1ms) Rendering rescues/layout (internal_server_error) If anyone can shed any light on this problem it would be greatly appreciated! Thanks, Danny

    Read the article

  • What strategy do you use for package naming in Java projects and why?

    - by Tim Visher
    I thought about this awhile ago and it recently resurfaced as my shop is doing its first real Java web app. As an intro, I see two main package naming strategies. (To be clear, I'm not referring to the whole 'domain.company.project' part of this, I'm talking about the package convention beneath that.) Anyway, the package naming conventions that I see are as follows: Functional: Naming your packages according to their function architecturally rather than their identity according to the business domain. Another term for this might be naming according to 'layer'. So, you'd have a *.ui package and a *.domain package and a *.orm package. Your packages are horizontal slices rather than vertical. This is much more common than logical naming. In fact, I don't believe I've ever seen or heard of a project that does this. This of course makes me leery (sort of like thinking that you've come up with a solution to an NP problem) as I'm not terribly smart and I assume everyone must have great reasons for doing it the way they do. On the other hand, I'm not opposed to people just missing the elephant in the room and I've never heard a an actual argument for doing package naming this way. It just seems to be the de facto standard. Logical: Naming your packages according to their business domain identity and putting every class that has to do with that vertical slice of functionality into that package. I have never seen or heard of this, as I mentioned before, but it makes a ton of sense to me. I tend to approach systems vertically rather than horizontally. I want to go in and develop the Order Processing system, not the data access layer. Obviously, there's a good chance that I'll touch the data access layer in the development of that system, but the point is that I don't think of it that way. What this means, of course, is that when I receive a change order or want to implement some new feature, it'd be nice to not have to go fishing around in a bunch of packages in order to find all the related classes. Instead, I just look in the X package because what I'm doing has to do with X. From a development standpoint, I see it as a major win to have your packages document your business domain rather than your architecture. I feel like the domain is almost always the part of the system that's harder to grok where as the system's architecture, especially at this point, is almost becoming mundane in its implementation. The fact that I can come to a system with this type of naming convention and instantly from the naming of the packages know that it deals with orders, customers, enterprises, products, etc. seems pretty darn handy. It seems like this would allow you to take much better advantage of Java's access modifiers. This allows you to much more cleanly define interfaces into subsystems rather than into layers of the system. So if you have an orders subsystem that you want to be transparently persistent, you could in theory just never let anything else know that it's persistent by not having to create public interfaces to its persistence classes in the dao layer and instead packaging the dao class in with only the classes it deals with. Obviously, if you wanted to expose this functionality, you could provide an interface for it or make it public. It just seems like you lose a lot of this by having a vertical slice of your system's features split across multiple packages. I suppose one disadvantage that I can see is that it does make ripping out layers a little bit more difficult. 
Instead of just deleting or renaming a package and then dropping a new one in place with an alternate technology, you have to go in and change all of the classes in all of the packages. However, I don't see this as a big deal. It may be from a lack of experience, but I have to imagine that the number of times you swap out technologies pales in comparison to the number of times you go in and edit vertical feature slices within your system. So I guess the question, then, goes out to you: how do you name your packages and why? Please understand that I don't necessarily think that I've stumbled onto the golden goose or something here. I'm pretty new to all this, with mostly academic experience. However, I can't spot the holes in my reasoning, so I'm hoping you all can so that I can move on. Thanks in advance!

    Read the article

  • Python: Slicing a list into n nearly-equal-length partitions

    - by Drew
    I'm looking for a fast, clean, Pythonic way to divide a list into exactly n nearly-equal partitions. partition([1,2,3,4,5],5)->[[1],[2],[3],[4],[5]] partition([1,2,3,4,5],2)->[[1,2],[3,4,5]] (or [[1,2,3],[4,5]]) partition([1,2,3,4,5],3)->[[1,2],[3,4],[5]] (there are other ways to slice this one too) There are several answers here http://stackoverflow.com/questions/1335392/iteration-over-list-slices that come very close to what I want, except they are focused on the size of the list, and I care about the number of lists (some of them also pad with None). These are trivially converted, obviously, but I'm looking for a best practice. Similarly, people have pointed out great solutions here http://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks-in-python for a very similar problem, but I'm more interested in the number of partitions than the specific size, as long as it's within 1. Again, this is trivially convertible, but I'm looking for a best practice.
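
    Purely as a point of comparison (not taken from the linked answers), one minimal sketch that spreads the remainder over the first few partitions, so the lengths differ by at most one:

        def partition(lst, n):
            """Split lst into n contiguous partitions whose lengths differ by at most one."""
            q, r = divmod(len(lst), n)
            parts, start = [], 0
            for i in range(n):
                size = q + (1 if i < r else 0)  # the first r partitions each get one extra item
                parts.append(lst[start:start + size])
                start += size
            return parts

        print(partition([1, 2, 3, 4, 5], 5))  # [[1], [2], [3], [4], [5]]
        print(partition([1, 2, 3, 4, 5], 2))  # [[1, 2, 3], [4, 5]]
        print(partition([1, 2, 3, 4, 5], 3))  # [[1, 2], [3, 4], [5]]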

    Read the article
