Search Results

Search found 4221 results on 169 pages for 'bounding volume hierarchy'.

Page 46/169

  • Tagging CALayers in iPhone

    - by Peyman
    Hi, I am looking for a general way to search for a unique CALayer in a layer hierarchy without having to remember where the layer sits in that hierarchy (and having to walk it with the sublayers and superlayer accessors). I know this is possible with UIViews (which makes flipping views easy), but is it possible for CALayer? Thank you in advance for your help. Peyman

    Read the article

  • What is the String 'volumeName' argument of MediaStore.Audio.Playlists.Members.getContentUri referring to?

    - by Brett
    I want to query the members of a given playlist. I have the correct playlist ID and want to use a managedQuery() to look at the playlist members in question. What I have is this: private String [] columns = { MediaStore.Audio.Playlists.Members.PLAYLIST_ID, MediaStore.Audio.Playlists.Members.TITLE, }; Uri membersUri = MediaStore.Audio.Playlists.Members.getContentUri(volume, playlistId); Cursor tCursor = managedQuery(membersUri, columns, null, null, null); I don't know what the volume argument needs to be. I've tried MediaStore.Audio.Playlists.EXTERNAL_CONTENT_URI.toString() for the "volume" argument. That gives me back a content URI that looks valid: content://media/external/audio/playlists/2/members However, my cursor comes back null. I am probably way off base -- I know what I want to do is very simple.
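    For reference, a minimal sketch of how this call is normally made: getContentUri() expects the bare volume name ("external" for media on shared/SD storage, "internal" for internal media), not a full content URI string, which would likely explain the null cursor. A hypothetical helper, assuming it lives inside an Activity (managedQuery() is an Activity method, as in the question):

    import android.database.Cursor;
    import android.net.Uri;
    import android.provider.MediaStore;

    // Hypothetical method; the column names and managedQuery() usage follow the question.
    private Cursor queryPlaylistMembers(long playlistId) {
        String[] columns = {
            MediaStore.Audio.Playlists.Members.PLAYLIST_ID,
            MediaStore.Audio.Playlists.Members.TITLE,
        };
        // "external" is the literal volume name, not EXTERNAL_CONTENT_URI.toString().
        Uri membersUri = MediaStore.Audio.Playlists.Members.getContentUri("external", playlistId);
        return managedQuery(membersUri, columns, null, null, null);
    }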

    Read the article

  • What's the best way to read/write array contents from/to binary files in C#?

    - by Eric
    I would like to read and write the contents of large (> 2GB) raw volume files (e.g. MRI scans). These files are just a sequence of e.g. 32 x 32 x 32 floats, so they map well to 1D arrays. I would like to be able to read the contents of the binary volume files into 1D arrays of e.g. float or ushort (depending on the data type of the binary files) and similarly export the arrays back out to the raw volume files. What's the best way to do this with C#? Read/write them one element at a time with BinaryReader/BinaryWriter? Read them piece-wise into byte arrays with FileStream.Read and then do a System.Buffer.BlockCopy between arrays (keeping in mind that I want 2GB arrays)? Write my own Reader/Writer?
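    Not an answer to the C# question itself, but for comparison, here is roughly what the "read in blocks, bulk-reinterpret" option looks like in Java; FileStream.Read plus Buffer.BlockCopy (or a memory-mapped file) would play the same roles in C#. A sketch only, assuming a raw little-endian float volume whose element count fits in a single array (both Java and .NET arrays are indexed with 32-bit ints):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class RawVolumeReader {
        // Reads the whole file chunk by chunk and bulk-copies whole floats into one 1D array.
        static float[] readFloats(Path file, int chunkBytes) throws IOException {
            try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
                float[] out = new float[(int) (ch.size() / Float.BYTES)]; // caps at ~2^31 elements
                ByteBuffer buf = ByteBuffer.allocateDirect(chunkBytes).order(ByteOrder.LITTLE_ENDIAN);
                int offset = 0;
                while (ch.read(buf) != -1) {
                    buf.flip();
                    int n = buf.remaining() / Float.BYTES;
                    buf.asFloatBuffer().get(out, offset, n); // bulk copy, no per-element loop
                    offset += n;
                    buf.position(n * Float.BYTES);           // keep any partial float for the next read
                    buf.compact();
                }
                return out;
            }
        }
    }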

    Read the article

  • What is your custom exception hierarchy?

    - by bonefisher
    My question is: how would you create an exception hierarchy in your application? Designing the architecture of an application, from my perspective, we could have three types of exceptions: the built-in ones (e.g. InvalidOperationException), custom internal system faults (a DB transaction failed on commit: DbTransactionFailedException), and custom business exceptions (BusinessRuleViolationException). Class hierarchy:
    Exception
      MyAppInternalException
        DbTransactionFailedException
        MyServerTimeoutException
        ...
      MyAppBusinessRuleViolationException
        UsernameAlreadyExistsException
        ...
    where only MyAppInternalException & MyAppBusinessRuleViolationException would be caught.
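    A minimal sketch of what such a hierarchy could look like in code, shown in Java purely for illustration (the question's examples are .NET types); the class names are taken from the question, and each public class would go in its own file:

    // Two broad roots so callers can catch "internal fault" vs. "business rule" separately.
    public abstract class MyAppInternalException extends RuntimeException {
        protected MyAppInternalException(String message, Throwable cause) { super(message, cause); }
    }

    public class DbTransactionFailedException extends MyAppInternalException {
        public DbTransactionFailedException(String message, Throwable cause) { super(message, cause); }
    }

    public abstract class MyAppBusinessRuleViolationException extends RuntimeException {
        protected MyAppBusinessRuleViolationException(String message) { super(message); }
    }

    public class UsernameAlreadyExistsException extends MyAppBusinessRuleViolationException {
        public UsernameAlreadyExistsException(String username) { super("Username already exists: " + username); }
    }

    Typical usage would then catch only the two roots, e.g. catch (MyAppBusinessRuleViolationException e) { ... } at the service boundary.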

    Read the article

  • Should I put my flex project within my rails project?

    - by ChrisInCambo
    I have a project with a RESTful Rails back-end and a Flex front-end; it's the first time I've used this combo, and I am debating whether to put the Flex source somewhere inside the Rails folder hierarchy or to make it a separate project. If I keep it inside, which folder would be most suitable, /lib? I will also be doing one-click deployment with Vlad, which can also compile the Flex app and dump it in the public folder. Or does anyone have any good reasons why the Flex project shouldn't reside within the Rails folder hierarchy? Cheers

    Read the article

  • Converting convex hull to binary mask

    - by Jonas
    I want to generate a binary mask that has ones for all pixels inside and zeros for all pixels outside a volume. The volume is defined by the convex hull around a set of 3D coordinates (<100; some of the coordinates are inside the volume). I can get the convex hull using CONVHULLN, but how do I convert that into a binary mask? In case there is no good way to go via the convex hull, do you have any other idea how I could create the binary mask?

    Read the article

  • What are the Options for Storing Hierarchical Data in a Relational Database?

    - by orangepips
    Good overviews:
    - One more Nested Intervals vs. Adjacency List comparison: the best comparison of Adjacency List, Materialized Path, Nested Set and Nested Interval I've found.
    - Models for hierarchical data: slides with good explanations of tradeoffs and example usage.
    - Representing hierarchies in MySQL: very good overview of Nested Set in particular.
    - Hierarchical data in RDBMSs: the most comprehensive and well-organized set of links I've seen, but not much in the way of explanation.
    Options I am aware of, and their general features:
    Adjacency List
    - Columns: ID, ParentID
    - Easy to implement. Cheap node moves, inserts, and deletes.
    - Expensive to find level (can store it as a computed column), ancestry & descendants (a Bridge Hierarchy combined with a level column can solve this), and path (a Lineage Column can solve this).
    - Use Common Table Expressions, in those databases that support them, to traverse.
    Nested Set (a.k.a. Modified Preorder Tree Traversal)
    - First described by Joe Celko; covered in depth in his book Trees and Hierarchies in SQL for Smarties.
    - Columns: Left, Right
    - Cheap level, ancestry, descendants.
    - Compared to Adjacency List, moves, inserts, and deletes are more expensive.
    - Requires a specific sort order (e.g. created), so sorting all descendants in a different order requires additional work.
    Nested Intervals
    - A combination of Nested Sets and Materialized Path where the left/right columns are floating-point decimals instead of integers and encode the path information.
    Bridge Table (a.k.a. Closure Table; there are some good ideas about how to use triggers for maintaining this approach)
    - Columns: ancestor, descendant
    - Stands apart from the table it describes.
    - Can include some nodes in more than one hierarchy.
    - Cheap ancestry and descendants (albeit not in what order).
    - For complete knowledge of a hierarchy it needs to be combined with another option.
    Flat Table
    - A modification of the Adjacency List that adds a Level and a Rank (e.g. ordering) column to each record.
    - Expensive move and delete.
    - Cheap ancestry and descendants.
    - Good use: threaded discussion - forums / blog comments.
    Lineage Column (a.k.a. Materialized Path, Path Enumeration)
    - Column: lineage (e.g. /parent/child/grandchild/etc...)
    - Limits how deep the hierarchy can be.
    - Descendants are cheap (e.g. LEFT(lineage, #) = '/enumerated/path').
    - Ancestry is tricky (database-specific queries).
    Database-specific notes:
    - MySQL: use session variables for the Adjacency List.
    - Oracle: use CONNECT BY to traverse Adjacency Lists.
    - PostgreSQL: the ltree datatype for Materialized Path.
    - SQL Server: general summary; 2008 offers the HierarchyId data type, which appears to help with the Lineage Column approach and expands the depth that can be represented.

    Read the article

  • MDX: How to exclude ancestors from being returned in this query?

    - by wgpubs
    I have this MDX query: Exists([Group].[Group Hierarchy].allmembers, {[Group].[Group Full Name].&[121 - Group A], [Group].[Group Full Name].&[700000 - Group C]}) ... which works fine EXCEPT that it returns all of the ancestors of the specified groups as well. What I want is to return JUST the groups from the hierarchy with the specified Group Names (this is a type 2 dimension so there may be many at different levels). Any ideas?

    Read the article

  • Invoking Call Hierarchy for a method

    - by Steven
    Hi, I have an object of type IMethod which represents a method. I want to get the Call Hierarchy of this method. Which methods should I call to get the call hierarchy of a method? Is there a method by which I can do it? I know that I can get it with Ctrl+Alt+H, but I want the code for invoking it programmatically. Thanks

    Read the article

  • How do you force a Linux process (Java Webstart app) to stop locking a filesystem (CD-ROM) WITHOUT killing it?

    - by Blake
    In Linux (CentOS 5.4), how do you force a process to stop locking a file system without killing the process? I am trying to get my Java Webstart application, running locally, to eject a CD. I do not have this problem if I am just browsing through the files using a JFileChooser, but once I read the contents of a file, I can no longer eject the CD... even after removing ALL references to any files. Hitting the eject button gives the error (title: "Cannot Eject Volume"): "An application is preventing the volume 'volume name' from being ejected". Thus, my goal is to tell the process to stop targeting the CD-ROM in order to free it up. Thank you for any help or direction!
    Attempted fix: running the commands
    sudo umount -l /media/Volume_Name (-l, lazy unmount, forces the unmount)
    sudo eject
    Problem: when a new CD is inserted, it is no longer mounted automatically, probably because the process is still "targeting" it.

    Read the article

  • GetVolumeNameForVolumeMountPoint returns false

    - by new
    Hi. To get the volume GUID I tried code like the below:

    int len = wcslen( pDetData->DevicePath );
    pDetData->DevicePath[len] = '\\';
    pDetData->DevicePath[len+1] = 0;

    #define BUFFER_SIZE MAX_PATH
    WCHAR volume[BUFFER_SIZE];
    BOOL bFlag;
    bFlag = GetVolumeNameForVolumeMountPoint( pDetData->DevicePath, volume, BUFFER_SIZE );
    int loginErrCode = GetLastError();
    printf("loginErrCode: %d\n", loginErrCode);
    printf("BFLAG: %d\n", bFlag);

    GetLastError() prints 1, which means ERROR_INVALID_FUNCTION, and bFlag always comes back zero, i.e. false. What is the problem in my code? Please help me get the above call to work right.

    Read the article

  • Selecting a portion of a JSON array and applying variables in javascript or jquery

    - by user1644609
    I am retrieving a JSON file that returns its results like what you see below. The JSON has 365 days' worth of data. I would like to create "views" of this JSON using JavaScript, one which pulls the last 10 days, then 1 month, 6 months, etc. After the getJSON call I am doing this to take the string, turn it into an object, and then graph it. So I would like each "view" to be an object for the specified timeframe (using the one JSON). The obj_10days, obj_1month, etc. variables would then be plotted.
    var $graph = data;
    var obj = $.parseJSON($graph);
    JSON: [ { "Low": 8.63, "Volume": 14211900, "Date": "2012-10-26", "High": 8.79, "Close": 8.65, "Adj Close": 8.65, "Open": 8.7 }, { "Low": 8.65, "Volume": 12167500, "Date": "2012-10-25", "High": 8.81, "Close": 8.73, "Adj Close": 8.73, "Open": 8.76 }, { "Low": 8.68, "Volume": 20239700, "Date": "2012-10-24", "High": 8.92, "Close": 8.7, "Adj Close": 8.7, "Open": 8.85 }, Any help is appreciated, thank you!

    Read the article

  • How to search for a file or directory in Linux Ubuntu machine

    - by Jury A
    I created an EC2 instance (Ubuntu, 64-bit) and attached a volume from a publicly available snapshot to the instance. I successfully mounted the volume. I am supposed to be able to run a script from this attached volume using the following steps, as explained in the tutorial: Log in to your virtual machine. mkdir /space mount /dev/sdf1 /space cd /space ./setup-script The problem is that when I try ./setup-script I get the following message: -bash: ./setup-script: No such file or directory What is the problem? How can I search for the ./setup-script on the whole machine? I'm not very familiar with Linux. Please help. For more details about the issue, look at my previous post: Error when mounting drive

    Read the article

  • Collision Error

    - by Manji
    I am having trouble with collision detection part of the game. I am using touch events to fire the gun as you will see in the video. Note, the android icon is a temporary graphic for the bullets When ever the user touches (represented by clicks in the video)the bullet appears and kills random sprites. As you can see it never touches the sprites it kills or kill the sprites it does touch. My Question is How do I fix it, so that the sprite dies when the bullet hits it? Collision Code snippet: //Handles Collision private void CheckCollisions(){ synchronized(mSurfaceHolder){ for (int i = sprites.size() - 1; i >= 0; i--){ Sprite sprite = sprites.get(i); if(sprite.isCollision(bullet)){ sprites.remove(sprite); mScore++; if(sprites.size() == 0){ mLevel = mLevel +1; currentLevel++; initLevel(); } break; } } } } Sprite Class Code Snippet: //bounding box left<right and top>bottom int left ; int right ; int top ; int bottom ; public boolean isCollision(Beam other) { // TODO Auto-generated method stub if(this.left>other.right || other.left<other.right)return false; if(this.bottom>other.top || other.bottom<other.top)return false; return true; } EDIT 1: Sprite Class: public class Sprite { // direction = 0 up, 1 left, 2 down, 3 right, // animation = 3 back, 1 left, 0 front, 2 right int[] DIRECTION_TO_ANIMATION_MAP = { 3, 1, 0, 2 }; private static final int BMP_ROWS = 4; private static final int BMP_COLUMNS = 3; private static final int MAX_SPEED = 5; private HitmanView gameView; private Bitmap bmp; private int x; private int y; private int xSpeed; private int ySpeed; private int currentFrame = 0; private int width; private int height; //bounding box left<right and top>bottom int left ; int right ; int top ; int bottom ; public Sprite(HitmanView gameView, Bitmap bmp) { this.width = bmp.getWidth() / BMP_COLUMNS; this.height = bmp.getHeight() / BMP_ROWS; this.gameView = gameView; this.bmp = bmp; Random rnd = new Random(); x = rnd.nextInt(gameView.getWidth() - width); y = rnd.nextInt(gameView.getHeight() - height); xSpeed = rnd.nextInt(MAX_SPEED * 2) - MAX_SPEED; ySpeed = rnd.nextInt(MAX_SPEED * 2) - MAX_SPEED; } private void update() { if (x >= gameView.getWidth() - width - xSpeed || x + xSpeed <= 0) { xSpeed = -xSpeed; } x = x + xSpeed; if (y >= gameView.getHeight() - height - ySpeed || y + ySpeed <= 0) { ySpeed = -ySpeed; } y = y + ySpeed; currentFrame = ++currentFrame % BMP_COLUMNS; } public void onDraw(Canvas canvas) { update(); int srcX = currentFrame * width; int srcY = getAnimationRow() * height; Rect src = new Rect(srcX, srcY, srcX + width, srcY + height); Rect dst = new Rect(x, y, x + width, y + height); canvas.drawBitmap(bmp, src, dst, null); } private int getAnimationRow() { double dirDouble = (Math.atan2(xSpeed, ySpeed) / (Math.PI / 2) + 2); int direction = (int) Math.round(dirDouble) % BMP_ROWS; return DIRECTION_TO_ANIMATION_MAP[direction]; } public boolean isCollision(float x2, float y2){ return x2 > x && x2 < x + width && y2 > y && y2 < y + height; } public boolean isCollision(Beam other) { // TODO Auto-generated method stub if(this.left>other.right || other.left<other.right)return false; if(this.bottom>other.top || other.bottom<other.top)return false; return true; } } Bullet Class: public class Bullet { int mX; int mY; private Bitmap mBitmap; //bounding box left<right and top>bottom int left ; int right ; int top ; int bottom ; public Bullet (Bitmap mBitmap){ this.mBitmap = mBitmap; } public void draw(Canvas canvas, int mX, int mY) { this.mX = mX; this.mY = mY; canvas.drawBitmap(mBitmap, 
mX, mY, null); } }
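    For reference, two things stand out in the posted code: the overlap test compares other.left against other.right (and other.bottom against other.top) instead of against this, and neither Sprite nor Bullet ever assigns the left/right/top/bottom fields, so every box stays at zero. Below is a minimal corrected axis-aligned test, as a sketch only, assuming the boxes are refreshed from x/y/width/height whenever the object moves and keeping the posted convention (left < right, bottom < top); the posted signature takes a Beam while the bullet class shown is Bullet, so this assumes they expose the same box fields.

    // Keep the box in sync with the position, e.g. at the end of Sprite.update() and
    // when the bullet is drawn:  left = x; right = x + width; bottom = y; top = y + height;
    // (Bullet would also need width/height, e.g. taken from its bitmap.)
    public boolean isCollision(Beam other) {
        // Separating-axis check: no overlap if one box lies entirely to one side of the other.
        if (this.right < other.left || this.left > other.right) return false;
        if (this.top < other.bottom || this.bottom > other.top) return false;
        return true;
    }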

    Read the article

  • Hudson on debian lenny

    - by Laurent
    Hello, I installed Hudson deamon on one server (running on debian lenny testing) some time ago. All was working until I perform an upgrade. At this time Hudson isn't accessible at port 8080 (which is the default port used). I have looked for iptables problems, however port 8080 is open in INPUT and OUTPUT. Configuration file in /etc/default/hudson seems okay, I haven't touch it. And if I do a ps aux | grep hudson, hudson deamon is running. Update 1: What is really strange for me is that in /var/log/hudson/hudson.log I get no error : [Winstone 2010/02/10 17:10:04] - Control thread shutdown successfully [Winstone 2010/02/10 17:10:04] - Winstone shutdown successfully Running from: /usr/share/hudson/hudson.war [Winstone 2010/02/10 17:10:43] - Beginning extraction from war file hudson home directory: /var/lib/hudson [Winstone 2010/02/10 17:10:44] - HTTP Listener started: port=8080 [Winstone 2010/02/10 17:10:44] - AJP13 Listener started: port=8009 [Winstone 2010/02/10 17:10:44] - Winstone Servlet Engine v0.9.10 running: controlPort=disabled 10 févr. 2010 17:10:44 hudson.model.Hudson$4 onAttained INFO: Started initialization 10 févr. 2010 17:10:44 hudson.model.Hudson$4 onAttained INFO: Listed all plugins 10 févr. 2010 17:10:44 hudson.model.Hudson$4 onAttained INFO: Prepared all plugins 10 févr. 2010 17:10:44 hudson.model.Hudson$4 onAttained INFO: Started all plugins 10 févr. 2010 17:10:46 hudson.model.Hudson$4 onAttained INFO: Loaded all jobs 10 févr. 2010 17:10:46 hudson.model.Hudson$4 onAttained INFO: Completed initialization 10 févr. 2010 17:10:47 org.springframework.context.support.AbstractApplicationContext prepareRefresh INFO: Refreshing org.springframework.web.context.support.StaticWebApplicationContext@caa559d: display name [Root WebApplicationContext]; startup date [Wed Feb 10 17:10:47 CET 2010]; root of context hierarchy 10 févr. 2010 17:10:47 org.springframework.context.support.AbstractApplicationContext obtainFreshBeanFactory INFO: Bean factory for application context [org.springframework.web.context.support.StaticWebApplicationContext@caa559d]: org.springframework.beans.factory.support.DefaultListableBeanFactory@40d2f5f1 10 févr. 2010 17:10:47 org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@40d2f5f1: defining beans [daoAuthenticationProvider,authenticationManager,userDetailsService]; root of factory hierarchy 10 févr. 2010 17:10:47 org.springframework.context.support.AbstractApplicationContext prepareRefresh INFO: Refreshing org.springframework.web.context.support.StaticWebApplicationContext@4d88a387: display name [Root WebApplicationContext]; startup date [Wed Feb 10 17:10:47 CET 2010]; root of context hierarchy 10 févr. 2010 17:10:47 org.springframework.context.support.AbstractApplicationContext obtainFreshBeanFactory INFO: Bean factory for application context [org.springframework.web.context.support.StaticWebApplicationContext@4d88a387]: org.springframework.beans.factory.support.DefaultListableBeanFactory@6153e0c0 10 févr. 2010 17:10:47 org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@6153e0c0: defining beans [filter,legacy]; root of factory hierarchy 10 févr. 
2010 17:10:47 hudson.TcpSlaveAgentListener <init> INFO: JNLP slave agent listener started on TCP port 59750 Update 2: What I get with lsof -i -n -P | grep hudson: java 28985 hudson 97u IPv6 2002707 0t0 TCP *:8080 (LISTEN) java 28985 hudson 99u IPv6 2002708 0t0 TCP *:8009 (LISTEN) java 28985 hudson 147u IPv6 2002711 0t0 TCP *:59750 (LISTEN) java 28985 hudson 150u IPv6 2002712 0t0 UDP *:33848 I don't know what else I can verify. Does anyone have an idea that could help me resolve this problem?

    Read the article

  • Choice and setup of version control

    - by Peter M
    I am about to set up a new laptop and, in the process, transition to a new version control system as part of a general cleanup. Currently I use a centralized version control system (yes, it is VSS, and yes, I know all the pros and cons of that system, but as a single-user system it works well for me). I have very few requirements for a new system and I am free to choose among any of the current mainstream players, but cost constraints will push me towards OSS. Some of my requirements are:
    - Runs on a single machine (i.e. the laptop in question) under Windows.
    - I am not sharing things with other developers or workers - this is more for my own historical benefit.
    - I want to version source code, documentation and binary files.
    - I have a large hierarchy of projects that are unrelated (see below).
    - I have files within the hierarchy that don't need to be controlled (but could be).
    - Some projects use Visual Studio, so some integration there could be nice.
    - There could be some sharing of files between jobs.
    - I generally only need a small amount of branching in code files.
    The directory hierarchy that I have at the moment is somewhat like:
    Root
    |--Customer #1
    |  |--Job #1
    |  |  |--Data files received from Customer for Job (not controlled)
    |  |  |--Documentation files (controlled)
    |  |  |--Project information files (not controlled - but could be)
    |  |  |--Software Project Files (controlled)
    |  |  |--Scratch dir for job (not controlled)
    |  |--Job #2
    |     (same structure as above)
    |--Customer #2
    |  ..
    |--Customer #n
       ..
    Currently I have about 22 customers with differing numbers of projects underneath them. At the moment I have a single VSS repository based at the root of the directory structure. If I stay with a centralized system (i.e. SVN), I believe I should keep the same approach and continue with a single repository based at the root dir. Is this a valid approach? However, if I move to a distributed tool then I am unsure of how I should handle the situation. My initial guess is that I should not have a repository based on the root of my entire directory structure - but that is a guess, so I really don't know how valid it is. Should I pitch a distributed approach at the Root, Customer, Job or sub-Job directory level? Also, what I am not clear on with distributed tools (and perhaps with SVN as well) is whether I can branch parts of a repository. For example, I can see branching source code in software projects as being useful, but branching my documentation as not being useful. So if I pitch a repository at the Job level, can I just branch the Software Project Files? Or would all files in that Job be branched? Every time I look at distributed tools I get a nagging feeling that they are not suited to my style of setup. I am uncomfortable with the idea of having to manually set up something like 50 to 80 separate repositories (if I pitch at the Job level, or 20+ if at the Customer level) within my directory hierarchy. This feeling also extends to having all those repositories scattered around as well - however, I do have a backup strategy that I trust, so this latter feeling is pretty well unfounded. So what advice can you all give me? Thanks in advance!

    Read the article

  • Oracle Retail Point-of-Service with Mobile Point-of-Service, Release 13.4.1

    - by Oracle Retail Documentation Team
    Oracle Retail Mobile Point-of-Service was previously released as a standalone product. Oracle Retail Mobile Point-of-Service is now a supported extension of Oracle Retail Point-of-Service, Release 13.4.1. Oracle Retail Mobile Point-of-Service provides support for using a mobile device to perform tasks such as scanning items, applying price adjustments, tendering, and looking up item information.
    Integration with Oracle Retail Store Inventory Management (SIM): If Oracle Retail Mobile Point-of-Service is implemented with Oracle Retail Store Inventory Management (SIM), the following Oracle Retail Store Inventory Management functionality is supported: inventory lookup at the current store, inventory lookup at buddy stores, and validation of serial numbers.
    Technical Overview: The Oracle Retail Mobile Point-of-Service server application runs in a domain on Oracle WebLogic. The server supports the mobile devices in the store. On each mobile device, the Mobile POS application is downloaded and then installed.
    Highlighted End User Documentation Updates and List of Documents:
    - Oracle Retail Point-of-Service with Mobile Point-of-Service Release Notes: A high-level overview is included about the release's functional, technical, and documentation enhancements. In addition, a section has been written that addresses Product Support considerations.
    - Oracle Retail Mobile Point-of-Service Java API Reference: Java API documentation for Oracle Retail Mobile Point-of-Service is included as part of the Oracle Retail Mobile Point-of-Service Release 13.4.1 documentation set.
    - Oracle Retail Point-of-Service with Mobile Point-of-Service Installation Guide - Volume 1, Oracle Stack: A new chapter is included with information on installing the Mobile Point-of-Service server and setting up the Mobile POS application. The installer screens for installing the server are included in a new appendix.
    - Oracle Retail Point-of-Service with Mobile Point-of-Service User Guide: A new chapter describes the functionality available on a mobile device and how to use Oracle Retail Mobile Point-of-Service on a mobile device.
    - Oracle Retail POS Suite with Mobile Point-of-Service Configuration Guide: The Configuration Guide is updated to indicate which parameters are used for Oracle Retail Mobile Point-of-Service.
    - Oracle Retail POS Suite with Mobile Point-of-Service Implementation Guide - Volume 5, Mobile Point-of-Service: This new Implementation Guide volume contains information for extending and customizing both the Mobile POS application for the mobile device and the Oracle Retail Mobile Point-of-Service server.
    - Oracle Retail POS Suite with Mobile Point-of-Service Licensing Information: The Licensing Information document is updated with the list of third-party open-source software used by Oracle Retail Mobile Point-of-Service.
    - Oracle Retail POS Suite with Mobile Point-of-Service Security Guide: The Security Guide is updated with information on security for mobile devices.
    - Oracle Retail Enhancements Summary (My Oracle Support Doc ID 1088183.1): This enterprise-level document captures the major changes for all the products that are part of releases 13.2, 13.3, and 13.4. The functional, integration, and technical enhancements in the Release Notes for each product are listed in this document.

    Read the article

  • mysql show databases not showing databases that are in /opt/bitnami/mysql/data directory

    - by hgolov
    Thank you for taking the time to look at my question. I have an EBS-backed EC2 Ubuntu server which is running but unreachable. ** There are, very stupidly, no recent backups. ** I made a snapshot of the block, created a volume, spun up a new instance, and attached the new volume. I see all the data from my site in the /opt/bitnami/mysql/data directory, but when I go into the mysql console, it shows only information_schema and test when I type show databases; How can I 'point' mysql to the correct folder? Thank you!

    Read the article

  • When Less is More

    - by aditya.agarkar
    How do you reconcile the fact that, while the overall warehouse volume is down, you still need more workers in the warehouse to ship all the orders? A WMS customer recently pointed out this seemingly perplexing fact at a customer conference. So what is going on? Didn't we tell you before that for a warehouse the customer is really the "king"? In this case customers are merely responding to low overall demand and uncertainty. They do not want to hold inventory, and one of the ways to do that is by decreasing the order size and ordering more frequently. Overall impact to the warehouse? Two words: "More work!!" This is not all. Smaller order sizes also mean challenges from a transportation perspective, including a rise in costlier parcel or LTL shipments instead of cheaper TL shipments. Here is a hypothetical scenario where a customer reduces the order size by 10% and increases the order frequency by 10%. As you can see in the following table, the overall volume declines by 1% but the warehouse has to ship roughly 10% more lines.
    Order Frequency (Line Count) | Order Size (Units) | Total Volume | Change (%)
    100                          | 100                | 10,000       | -
    110                          | 90                 | 9,900        | -1%
    Graphically the picture is the same: even though the volume is down, there is going to be more work in the warehouse in terms of the number of lines shipped. The operators need to pick more discrete orders, pack them into more shipping containers and ship more deliveries. What do you do differently if you are facing this situation? In this case, here are some obvious steps to take:
    Uno: Change your pick methods. If you are used to doing order picks, that needs to go out the door. You need to evaluate batch picking and grouping techniques. Go for cluster picking, go for zone picking, pick and pass... anything that improves your picker productivity. More than anything, cluster picking works like a charm and, above all, it's simple and very effective.
    Dos: Are you minimizing "touch" points in your pick process? Consider doing one-step pick, pack and confirm, i.e. pick and pack stuff directly into shipping cartons. Done correctly, the container will not require any more "touch" points all the way to the trailer loading. Use cartonization!
    Tres: Are the items being picked from an optimized pick face? Are they slotted correctly? This needs to be looked into. Consider automated "pull" or "push" replenishment into your pick face and also make sure that high-demand items are occupying the golden zones.
    Cuatro: Are you tracking labor productivity? If not, there needs to be a concerted push for having labor standards in place.
    Hope you found these ideas useful.

    Read the article

  • Android app to remote control Samsung Smart TVs

    - by Gopinath
    Smart TV Remote is an unofficial Android app that lets you control Samsung Smart TVs connected over a local WiFi network. This app comes in very handy when you want to control a TV that is not in the line of sight of your remote control, or when you just want to use your mobile phone or tablet to control the TV. Setting up a TV is very easy using the auto-scan feature. Once the TV is set up, you are all set to start using the app as a remote control. A traditional remote control uses infrared technology and needs to be in the line of sight of the TV receiver to work. This app uses WiFi, which gives it the flexibility of controlling the TV as long as the mobile and the TV are connected to the WiFi network. It just works, even if the TV is behind a wall. The app provides very easy-to-use options to switch between channels, and separate remotes with media controls, Smart Hub features and a numeric keypad if you want to navigate to a channel by its number. The app also provides a home screen widget with volume controls and channel navigation options. I use this app to control a Samsung E-Series Smart TV at home and it works very well. I'm impressed by the ease with which it lets you set up a TV, the support for multiple TVs, being able to control the TV even when I'm not in its line of sight, and using the volume buttons of the smartphone to control the volume of the TV. What's annoying and missing with the app: As advertised, the app works very well for controlling Samsung TVs (B-, C-, D-, E-, and F-Series), except that it is very painful to move the mouse pointer while browsing the web on the TV. When you try to move the mouse pointer using the app, it moves painfully slowly. I gave up using the app to control the mouse pointer after trying a couple of times. I installed this app thinking that it might help me browse the web on Smart TVs, especially with keyboard support to type web URLs. The app does not support entering text, either while browsing the web or while searching through Smart TV apps like YouTube, the App Store, etc. The developers never advertised keyboard support, so no complaints about this. But it would be very helpful if the developers allowed this app to be used as a keyboard and rescued me from the pain of typing text with the TV remote control. Overall this is a very nice app and worth trying out – Download Smart TV Remote from Google Play

    Read the article

  • Use Drive Mirroring for Instant Backup in Windows 7

    - by Trevor Bekolay
    Even with the best backup solution, a hard drive crash means you’ll lose a few hours of work. By enabling drive mirroring in Windows 7, you’ll always have an up-to-date copy of your data. Windows 7’s mirroring – which is only available in Professional, Enterprise, and Ultimate editions – is a software implementation of RAID 1, which means that two or more disks are holding the exact same data. The files are constantly kept in sync, so that if one of the disks fails, you won’t lose any data. Note that mirroring is not technically a backup solution, because if you accidentally delete a file, it’s gone from both hard disks (though you may be able to recover the file). As an additional caveat, having mirrored disks requires changing them to “dynamic disks,” which can only be read within modern versions of Windows (you may have problems working with a dynamic disk in other operating systems or in older versions of Windows). See this Wikipedia page for more information. You will need at least one empty disk to set up disk mirroring. We’ll show you how to mirror an existing disk (of equal or lesser size) without losing any data on the mirrored drive, and how to set up two empty disks as mirrored copies from the get-go. Mirroring an Existing Drive Click on the start button and type partitions in the search box. Click on the Create and format hard disk partitions entry that shows up. Alternatively, if you’ve disabled the search box, press Win+R to open the Run window and type in: diskmgmt.msc The Disk Management window will appear. We’ve got a small disk, labeled OldData, that we want to mirror in a second disk of the same size. Note: The disk that you will use to mirror the existing disk must be unallocated. If it is not, then right-click on it and select Delete Volume… to mark it as unallocated. This will destroy any data on that drive. Right-click on the existing disk that you want to mirror. Select Add Mirror…. Select the disk that you want to use to mirror the existing disk’s data and press Add Mirror. You will be warned that this process will change the existing disk from basic to dynamic. Note that this process will not delete any data on the disk! The new disk will be marked as a mirror, and it will starting copying data from the existing drive to the new one. Eventually the drives will be synced up (it can take a while), and any data added to the E: drive will exist on both physical hard drives. Setting Up Two New Drives as Mirrored If you have two new equal-sized drives, you can format them to be mirrored copies of each other from the get-go. Open the Disk Management window as described above. Make sure that the drives are unallocated. If they’re not, and you don’t need the data on either of them, right-click and select Delete volume…. Right-click on one of the unallocated drives and select New Mirrored Volume…. A wizard will pop up. Click Next. Click on the drives you want to hold the mirrored data and click Add. Note that you can add any number of drives. Click Next. Assign it a drive letter that makes sense, and then click Next. You’re limited to using the NTFS file system for mirrored drives, so enter a volume label, enable compression if you want, and then click Next. Click Finish to start formatting the drives. You will be warned that the new drives will be converted to dynamic disks. And that’s it! You now have two mirrored drives. Any files added to E: will reside on both physical disks, in case something happens to one of them. 
Conclusion: While the switch from basic to dynamic disks can be a problem for people who dual-boot into another operating system, setting up drive mirroring is an easy way to make sure that your data can be recovered in case of a hard drive crash. Of course, even with drive mirroring, we advocate regular backups to external drives or online backup services.

    Read the article

  • SQL SERVER – NTFS File System Performance for SQL Server

    - by pinaldave
    Note: Before practicing any of the suggestions in this article, consult your IT infrastructure admin; applying a suggestion without proper testing can damage your system. Question: "Pinal, we have 80 GB of data including all the database files, and our data is on an NTFS file system. We have proper backups set up. Any suggestions for improving our NTFS file system performance? Our SQL Server box is running only SQL Server and nothing else. Please advise." When I receive questions like the one I have just listed above, it often sends me into deep thought. Honestly, I know a lot, but there are plenty of things I believe can be built with the community's knowledge base. Today I need you to help me complete this list. I will start the list and you help me complete it. NTFS file system performance best practices for SQL Server:
    - Disable indexing on disk volumes
    - Disable generation of 8.3 names (command: FSUTIL BEHAVIOR SET DISABLE8DOT3 1)
    - Disable last file access time tracking (command: FSUTIL BEHAVIOR SET DISABLELASTACCESS 1)
    - Keep some space empty (let us say 15% for reference) on the drive if possible (only on the Filestream data storage volume)
    - Defragment the volume
    - Add your suggestions here…
    The one that often gets a pretty big debate is the NTFS allocation unit size. I have seen that on the disk volume which stores filestream data, increasing the allocation unit size to 64K from 4K reduces fragmentation. Again, I suggest you attempt this only after proper testing on your server. Every system is different and the files stored are different. This is where I would like to ask you to share your experience related to NTFS allocation size. If you do not agree with any of the above suggestions, leave a comment with a reference and I will modify it. Please note that the above list was prepared assuming SQL Server is the only application running on the computer system. The next question: is all this still relevant for SSDs? I personally have no experience with SSDs with large databases, so I will refrain from commenting. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Keyboard problems on Acer Laptop

    - by artfulrobot
    The key map on my Acer Aspire 5750 keyboard has gone awry since I had the keyboard replaced (because of a physical fault with one of the keys). The new keyboard looks identical, but now Ubuntu seems a little confused about which (function) buttons are which. Previously Fn+left/right were, as marked on the keys, brightness up and down, and Fn+up/down were volume. It all worked as advertised! But now Page Dn (which is marked as Page Dn, or Back a track with Fn) turns the volume up, and the other Fn+arrow keys don't work at all. Is there a way I can see what's happening, or get it to re-map my keys? Thanks.

    Read the article

  • Renderbuffer to GLSL shader?

    - by Dan
    I have a piece of software that performs volume rendering through a raycasting approach. The raycasting shader writes the raycast volume depth, through gl_FragDepth, into a framebuffer object that I bind before calling the shader. The problem I have is that I would like to use this depth in another shader that I call later on. I figured out that the only way to do that is to bind the framebuffer once the raycasting has finished, read the depth map back with something like glReadPixels(0, 0, m_winSize.x, m_winSize.y, GL_DEPTH_COMPONENT, GL_FLOAT, pixels); write it to a 2D texture as usual with glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, m_winSize.x, m_winSize.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, pixels); and then pass this 2D texture, which contains a simple depth map, to the other shader. However, I am not entirely sure that what I do is the proper way to do this. Is there any way to pass the framebuffer that I fill up in my raycasting shader to the other shader?
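    One common way to avoid the glReadPixels round trip is to make the FBO's depth attachment a texture in the first place, so the raycasting pass writes gl_FragDepth straight into something the second shader can sample. A rough sketch of that idea, written against the LWJGL Java bindings purely for illustration (the question's snippets are plain C-style GL and the calls map one to one); the texture unit, uniform location and color attachment handling here are assumptions left to the caller:

    import java.nio.ByteBuffer;
    import org.lwjgl.opengl.GL11;
    import org.lwjgl.opengl.GL13;
    import org.lwjgl.opengl.GL14;
    import org.lwjgl.opengl.GL20;
    import org.lwjgl.opengl.GL30;

    public class DepthToTextureSketch {
        static int fbo, depthTex;

        // Make the FBO's depth attachment a texture so the raycasting pass fills it directly.
        static void init(int width, int height) {
            depthTex = GL11.glGenTextures();
            GL11.glBindTexture(GL11.GL_TEXTURE_2D, depthTex);
            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
            GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL14.GL_DEPTH_COMPONENT24,
                    width, height, 0, GL11.GL_DEPTH_COMPONENT, GL11.GL_FLOAT, (ByteBuffer) null);

            fbo = GL30.glGenFramebuffers();
            GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, fbo);
            GL30.glFramebufferTexture2D(GL30.GL_FRAMEBUFFER, GL30.GL_DEPTH_ATTACHMENT,
                    GL11.GL_TEXTURE_2D, depthTex, 0);
            // ...attach the color target and check glCheckFramebufferStatus as usual...
            GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, 0);
        }

        // After the raycasting pass has rendered into fbo, sample its depth in the next shader.
        static void bindDepthForSecondPass(int program, int depthSamplerLocation) {
            GL20.glUseProgram(program);
            GL13.glActiveTexture(GL13.GL_TEXTURE0);
            GL11.glBindTexture(GL11.GL_TEXTURE_2D, depthTex);
            GL20.glUniform1i(depthSamplerLocation, 0); // sampler2D; .r holds depth in [0,1]
        }
    }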

    Read the article

  • Getting a TV Capture Card working

    - by Benny Hallett
    I'm new to Linux, and am trying to get my Capture Card working on 11.04. The only command that I know to run to find out any information is lspci, which tells me that I have 02:00.0 Multimedia video controller: Conexant Systems, Inc. CX23885 PCI Video and Audio Decoder (rev 04) I've looked at using Me TV, but haven't worked out how to configure it for my card, or what I need to do to get it running. I'm not fussed on what software I use to run the Capture Card, but I've currently got only Me TV installed. Edit: When I run tvtime, I get the following errors: videoinput: Cannot open capture device /dev/video0: No such file or directory mixer: find error: Success mixer: Can't open mixer default, mixer volume and mute unavailable. mixer: Can't open device default/Line, mixer volume and mute unavailable. Segmentation fault

    Read the article
