Search Results

Search found 1999 results on 80 pages for 'temporary'.

  • Cloud Computing Architecture Patterns: Don’t Focus on the Client

    - by BuckWoody
    Normally I try to put topics in the positive, in other words "Do this," not "Don't do that." Sometimes it's clearer to focus on what *not* to do. Popular development processes often start with screen mockups or user input descriptions. In a scale-out pattern like Cloud Computing on Windows Azure, that's the wrong place to start.
    Start with the Data
    Instead, I recommend that you start with the data that a process requires. That data might be temporary or persisted, but starting with the data and its requirements helps to define not only the storage engine you need but also drives everything from security to the integrity of the application. For instance, assume the requirements show that the user must enter their phone number, and that this datum is used in a contact management system further down the application chain. For that datum, you can determine what data type you need (U.S. only or International?), the security requirements, whether it needs ACID compliance, how it will be searched, indexed, and so on. From one small data point you can extrapolate out your options for storing and processing the data. Here's the interesting part, which begins to break the patterns that we've used for decades: all of the data doesn't have the same requirements. The phone number might be best suited for a list, or an element, or a string, with either BASE or ACID requirements, based on how it is used. That means we don't have to dump everything into XML, an RDBMS, a NoSQL engine, or a flat file exclusively. In fact, one record might use all of those depending on the use-case requirements.
    Next Is Data Management
    With the data defined, we can move on to how to store the data. Again, the requirements now dictate whether we need a full relational calculus or set-based operations, or we can choose another method based on the requirements for the data. And, breaking another pattern, it's OK to store data more than once, in more than one location. We do this all the time for reporting systems and Business Intelligence systems, so this is a pattern we need to think about even for OLTP data.
    Move to Data Transport
    How does the data get around? We can use a connection-based method, sending the data along a transport to the storage engine, but in some cases we may want to use a cache, a queue, the Service Bus, or Complex Event Processing.
    Finally, Data Processing
    Most RDBMS engines, NoSQL, and certainly Big Data engines not only store data, but can process and manipulate it as well. It's doubtful that you'll calculate that phone number, right? Well, if you're the phone company, you most certainly will. And so we see that even once we've chosen the data type, storage and engine, the same element can have different computing requirements based on how it is used.
    Sure, We Need A Front-End At Some Point
    Not all data is entered by human hands; in fact, most data isn't. We don't really need a Graphical User Interface (GUI), we need some way for a GUI to get data into and out of the systems listed earlier. But when we do need to allow users to enter or examine data, that should be left to the GUI that best fits the device the user has. Ever tried to use an application designed for a web browser on a phone? Or one designed for a tablet on a phone? It's usually quite painful. The siren song of "We'll just write one interface for all devices" is strong, and has beguiled many an unsuspecting architect. But they just don't work out. Instead, focus on the data, its transport and processing.
Create API calls or a message system that allows for resilient transport to the device or interface, and let it do what it does best. References Microsoft Architecture Journal:   http://msdn.microsoft.com/en-us/architecture/bb410935.aspx Patterns and Practices:   http://msdn.microsoft.com/en-us/library/ff921345.aspx Windows Azure iOS, Android, Windows 8 Mobile Devices SDK: http://www.windowsazure.com/en-us/develop/mobile/tutorials/get-started-ios/ Windows Azure Facebook SDK: http://ntotten.com/2013/03/14/using-windows-azure-mobile-services-with-the-facebook-sdk-for-windows-phone/
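    To make this concrete, here is a minimal C# sketch of the idea: describe each datum and its requirements first, and let the front end talk only to a transport abstraction. The type and interface names are illustrative assumptions, not a prescribed Azure API.

    ```csharp
    using System;
    using System.Collections.Concurrent;

    // Hypothetical description of one datum and its requirements,
    // worked out before any UI is designed.
    public sealed class DatumRequirements
    {
        public string Name { get; set; }            // e.g. "PhoneNumber"
        public Type DataType { get; set; }          // string, long, custom type...
        public bool RequiresAcid { get; set; }      // ACID vs. BASE
        public bool IsSearchable { get; set; }      // drives indexing decisions
        public string PreferredStore { get; set; }  // "RDBMS", "NoSQL", "Blob", ...
    }

    // Resilient transport abstraction: the GUI (or any producer) only
    // talks to this, never directly to a storage engine.
    public interface IMessageTransport
    {
        void Send(string messageType, string payload);
    }

    // In-memory stand-in; a real system might use a queue or the Service Bus.
    public sealed class InMemoryTransport : IMessageTransport
    {
        private readonly ConcurrentQueue<string> _queue = new ConcurrentQueue<string>();

        public void Send(string messageType, string payload)
        {
            // Queue the message for whatever processing engine fits this datum.
            _queue.Enqueue(messageType + ":" + payload);
        }
    }

    public static class Demo
    {
        public static void Main()
        {
            var phoneNumber = new DatumRequirements
            {
                Name = "PhoneNumber",
                DataType = typeof(string),
                RequiresAcid = false,        // BASE is fine for this use case
                IsSearchable = true,
                PreferredStore = "NoSQL"
            };

            IMessageTransport transport = new InMemoryTransport();
            transport.Send("ContactUpdated", "{ \"phone\": \"+1-555-0100\" }");
            Console.WriteLine(phoneNumber.Name + " routed to " + phoneNumber.PreferredStore);
        }
    }
    ```

    In a real system the transport behind that interface might be a cache, a queue, or the Service Bus, chosen per datum in exactly the way described above.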

    Read the article

  • OS Analytics with Oracle Enterprise Manager (by Eran Steiner)

    - by Zeynep Koch
    Oracle Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the Operating System is being utilized. You can research the historical usage as well as real time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes. The recording of our call to discuss this blog is available here: https://oracleconferencing.webex.com/oracleconferencing/ldr.php?AT=pb&SP=MC&rID=71517797&rKey=4ec9d4a3508564b3
    Download the presentation here. See also: the blog about Alert Monitoring and Problem Notification, and the blog about Using Operational Profiles to Install Packages and other content.
    Here is a quick summary of what you can do with OS Analytics in Ops Center:
    View historical charts and real time values of CPU, memory, network and disk utilization
    Find the top CPU and Memory processes in real time or at a certain historical day
    Determine proper monitoring thresholds based on historical data
    Drill down into a process's details
    Where to start
    To start with OS Analytics, choose the OS asset in the tree and click the Analytics tab. You can see the CPU utilization, Memory utilization and Network utilization, along with the current real time top 5 processes in each category (click the image to see a larger version). In the above screen, you can click each of the top 5 processes to see a more detailed view of that process. Here is an example of one of the processes: one of the cool things is that you can see the process tree for this process along with some port bindings and open file descriptors. Next, click the "Processes" tab to see real time information on all the processes on the machine. An interesting column is the "Target" column. If you configured Ops Center to work with Enterprise Manager Cloud Control, then the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table. If you are only using Ops Center, this column will remain empty. The "Threshold" tab is particularly helpful: you can view historical trends of different monitored values and, based on the graph, determine what the monitoring values should be. You can ask Ops Center to suggest monitoring levels based on the historical values or you can set your own. The different colors in the graph represent the currently set levels: Red for critical, Yellow for warning and Blue for Information, allowing you to quickly see how they're positioned against real data. It's important to note that when looking at longer periods, Ops Center smooths out the data and uses averages. So when looking at values such as CPU Usage, try shorter time frames which are more detailed, such as one hour or one day.
    Applying new monitoring values
    When first applying new values to monitored attributes, a popup will come up asking if it's OK to take you out of the current Monitoring Policy. This is OK if you want to either have custom monitoring for a specific machine, or if you want to use this current machine as a "Gold image" and extract a Monitoring Policy from it. You can later apply the new Monitoring Policy to other machines and also set it as a default Monitoring Profile. Once you're done applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click "Extract a Monitoring Policy" in the Actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies".
    Visiting the past
    Under the "History" tab you can "go back in time". This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?), but you were not around to take a look at it in real time. Here's a view into yesterday's data on one of the machines: you can see an interesting CPU spike happening at around 3:30 am along with some memory use. In the bottom table you can see the top 5 CPU and Memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command. The "time machine" doesn't stop here: you can also view historical data to determine which of the zones was the busiest at a given time.
    Under the hood
    The data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/
    An "os.zip" file exists for the main OS. Inside you will find many small text files, named after the Epoch time stamp at which they were taken.
    If you have any zones, there will be a file called "guests.zip" containing the same small files for all the zones, as well as a folder with the name of the zone along with "os.zip" in it.
    If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "os.zip" for that controller.
    The actual script collecting the data can be viewed for debugging purposes as well. On Linux, the location is: /opt/sun/xvmoc/private/os_analytics/collect
    If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it: # touch /tmp/.collect.stderr
    The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped.
    If you would like to review the properties for the Analytics, you can view those per agent in /opt/sun/n1gc/lib/XVM.properties. Find the section "Analytics configurable properties for OS and VSC" to view the Analytics-specific values.
    I hope you find this helpful! Please post questions in the comments below. Eran Steiner
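    As a rough example, if you copy an os.zip off an agent for offline inspection, a small C# sketch like the one below can list the snapshots; it assumes the file names are Epoch seconds, which is worth verifying against your own data.

    ```csharp
    using System;
    using System.IO.Compression;

    class ListAnalyticsSnapshots
    {
        static void Main(string[] args)
        {
            // Path to a copy of /var/opt/sun/xvm/analytics/historical/os.zip
            string zipPath = args.Length > 0 ? args[0] : "os.zip";

            using (ZipArchive archive = ZipFile.OpenRead(zipPath))
            {
                foreach (ZipArchiveEntry entry in archive.Entries)
                {
                    // Assumption: entry names are Epoch time stamps in seconds.
                    long epochSeconds;
                    if (long.TryParse(entry.Name, out epochSeconds))
                    {
                        DateTimeOffset taken = DateTimeOffset.FromUnixTimeSeconds(epochSeconds);
                        Console.WriteLine("{0} -> {1:u} ({2} bytes)", entry.Name, taken, entry.Length);
                    }
                    else
                    {
                        Console.WriteLine("{0} ({1} bytes)", entry.FullName, entry.Length);
                    }
                }
            }
        }
    }
    ```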

    Read the article

  • Rip and Convert DVD’s to an ISO Image

    - by Mysticgeek
    If you own a lot of DVDs, you might want to convert them to an ISO image for backup and easily playing them on your media center. Today we take a look at ripping your discs using DVDFab, then using ImgBurn to create an ISO image of the ripped DVD files.
    Rip DVD with DVDFab6
    DVDFab will remove copy protection and rip the DVD files for free. Other components in the suite require you to purchase a license after the 30 day trial, but you'll still be able to rip DVDs after the trial. Install DVDFab by accepting the defaults (link below)…a system restart is required to complete the install process. The first time you run it, a welcome screen is displayed. If you don't want to see it again check the box Do not show again, then Start DVDFab. Pop the DVD in your drive and click Next. Now select your region and check Do not show again, then OK. It will then open the DVD and begin to scan it. Under DVD to DVD you can select either Full Disc or Main Movie depending on what you want to rip. If you want to burn the DVD to a disc after it's created, select the Full Disc option. Now click the Start button to begin the ripping process. After the ripping process has completed, you'll get a message telling you it's waiting for you to put in a blank DVD. Since we aren't burning the disc, just cancel the message. Click Finish and close out of DVDFab, or just minimize it if you're going to keep using it to rip another DVD. By default the temporary directory is in My Documents \ DVDFab \ Temp…however you can change it in settings. If you go to the Temp directory you'll see the DVD files listed there…
    Convert Files to ISO with ImgBurn
    Now that we have the files ripped from the DVD, we need to convert them to an ISO image using ImgBurn (link below). Open it up and from the main menu click on Create image file from files/folders. Click on the folder icon to browse to the location of the ripped DVD files. Browse to the DVDFab temp directory and the VIDEO_TS folder for the source and click Ok. Then choose a destination directory, give the ISO a name, and click Save. In this case we ripped the Unbreakable DVD, so we named it that. So now in ImgBurn you have the source being the ripped DVD files, and the destination for the ISO…then click the Build button. If you don't create a volume label, ImgBurn is kind enough to create one for you. If everything looks correct, click Ok. Now wait while ImgBurn goes through the process of converting the ripped DVD files to an ISO image. The process has successfully completed. The ISO image of the DVD will be in the output directory you selected earlier. Now you can burn the ISO image to a blank DVD or store it on an external hard drive for safekeeping. When you're done, you'll probably want to go into the temp DVDFab folder and delete the VOB and other files in the Video_TS folder as they will take up a lot of space on your hard drive.
    Conclusion
    Although this method requires two programs to make an ISO out of a DVD, it's extremely quick. When converting DVDs of various lengths, it took less than 30 minutes to get the final ISO. Now you'll have your DVD movies backed up in case something were to happen to the discs and they are no longer playable. If you use Windows Media Center to watch your movies, check out our article on how to automatically mount and view ISO files in Windows 7 Media Center. With DVDFab, you get a 30 day fully functional trial for all of its features. You'll still be able to rip DVDs even after the 30 day trial has ended.
    The more we've been using DVDFab, the more impressed we are with its capabilities, so after the 30 day trial you should consider purchasing a license. We will have a full review of it to share with you soon. Download DVDFab | Download ImgBurn

    Read the article

  • WebCenter Content shared folders for clustering

    - by Kyle Hatlestad
    When configuring a WebCenter Content (WCC) cluster, one of the things which sets it apart from some other WebLogic Server applications is its requirement for a shared file system.  This is actually not any different than 10g and previous versions of UCM when it ran directly on a JVM.  And while it is simple enough to say it needs a shared file system, there are some crucial details in how those directories are configured. And if they aren't followed, you may see some unwanted behavior. This blog post will go into the details on how exactly the file systems should be split and what options are required. Beyond documents being stored on the file system and/or database and metadata being stored in the database along with other structured data, there is other information being read and written on the file system.  Information such as user profile preferences, workflow item state information, metadata profiles, and other details are stored in files.  In addition, for certain processes within WCC, each of the nodes needs to know what the other nodes are doing so they don’t step on each other.  WCC keeps track of this through the use of lock files on the file system.  Because of this, each node of the WCC must have access to the same file system just as they have access to the same database. WCC uses its own locking mechanism using files, so it also needs to have access to those files without file attribute caching and without locking being done by the client (node).  If one of the nodes accesses a certain status file and it happens to be cached, that node might attempt to run a process which another node is already working on.  Or if a particular file is locked by one of the node clients, this could interfere with access by another node.  Unfortunately, when disabling file attribute caching on the file share, this can impact performance.  So it is important to only disable caching and locking on the particular folders which require it.  When configuring WebCenter Content after deploying the domain, it asks for 3 different directories: Content Server Instance Folder, Native File Repository Location, and Weblayout Folder.  And starting in PS5, it now asks for the User Profile Folder. Even if you plan on storing the content in the database, you still need to establish Native File (Vault) and Weblayout directories.  These will be used for handling temporary files, cached files, and files used to deliver the UI. For these directories, the only folder which needs to have the file attribute caching and locking disabled is the ‘Content Server Instance Folder’.  So when establishing this share through NFS or a clustered file system, be sure to specify those options. For instance, if creating the share through NFS, use the ‘noac’ and ‘nolock’ options for the mount options. For the other directories, caching and locking should be enabled to provide best performance to those locations.
    These directory path configurations are contained within the <domain dir>\ucm\cs\bin\intradoc.cfg file:
    #Server System Properties
    IDC_Id=UCM_server1
    #Server Directory Variables
    IdcHomeDir=/u01/fmw/Oracle_ECM1/ucm/idc/
    FmwDomainConfigDir=/u01/fmw/user_projects/domains/base_domain/config/fmwconfig/
    AppServerJavaHome=/u01/jdk/jdk1.6.0_22/jre/
    AppServerJavaUse64Bit=true
    IntradocDir=/mnt/share_no_cache/base_domain/ucm/cs/
    VaultDir=/mnt/share_with_cache/ucm/cs/vault/
    WeblayoutDir=/mnt/share_with_cache/ucm/cs/weblayout/
    #Server Classpath variables
    #Additional Variables
    #NOTE: UserProfilesDir is only available in PS5 – 11.1.1.6.0
    UserProfilesDir=/mnt/share_with_cache/ucm/cs/data/users/profiles/
    In addition to these folder configurations, it’s also recommended to move node-specific folders to local disk to avoid unnecessary traffic to the shared directory.  So on each node, go to <domain dir>\ucm\cs\bin\intradoc.cfg and add these additional configuration entries:
    VaultTempDir=<domain dir>/ucm/<cs>/vault/~temp/
    TraceDirectory=<domain dir>/servers/<UCM_serverN>/logs/
    EventDirectory=<domain dir>/servers/<UCM_serverN>/logs/event/
    And of course, don’t forget the cluster-specific configuration values to add as well.  These can be added through Admin Server -> General Configuration -> Additional Configuration Variables or directly in the <IntradocDir>/config/config.cfg file:
    ArchiverDoLocks=true
    DisableSharedCacheChecking=true
    ServiceAllowRetry=true    (use only with Oracle RAC Database)
    PublishLockTimeout=300000  (time can vary depending on publishing time and number of nodes)
    For additional information and details on clustering configuration, I highly recommend reviewing document [1209496.1] on the support site.  In addition, there is a great step-by-step guide on setting up a WebCenter Content cluster [1359930.1].

    Read the article

  • 2D metaball liquid effect - how to feed output of one rendering pass as input to another shader

    - by Guye Incognito
    I'm attempting to make a shader for unity3d web project. I want to implement something like in the great answer by DMGregory in this question. in order to achieve a final look something like this.. Its metaballs with specular and shading. The steps to make this shader are. 1. Convert the feathered blobs into a heightmap. 2. Generate a normalmap from the heightmap 3. Feed the normal map and height map into a standard unity shader, for instance transparent parallax specular. I pretty much have all the pieces I need assembled but I am new to shaders and need help putting them together I can generate a heightmap from the blobs using some fragment shader code I wrote (I'm just using the red channel here cus i dont know if you can access the brightness) half4 frag (v2f i) : COLOR{ half4 texcol,finalColor; texcol = tex2D (_MainTex, i.uv); finalColor=_MyColor; if(texcol.r<_botmcut) { finalColor.r= 0; } else if((texcol.r>_topcut)) { finalColor.r= 0; } else { float r = _topcut-_botmcut; float xpos = _topcut - texcol.r; finalColor.r= (_botmcut + sqrt((xpos*xpos)-(r*r)))/_constant; } return finalColor; } turns these blobs.. into this heightmap Also I've found some CG code that generates a normal map from a height map. The bit of code that makes the normal map from finite differences is here void surf (Input IN, inout SurfaceOutput o) { o.Albedo = fixed3(0.5); float3 normal = UnpackNormal(tex2D(_BumpMap, IN.uv_MainTex)); float me = tex2D(_HeightMap,IN.uv_MainTex).x; float n = tex2D(_HeightMap,float2(IN.uv_MainTex.x,IN.uv_MainTex.y+1.0/_HeightmapDimY)).x; float s = tex2D(_HeightMap,float2(IN.uv_MainTex.x,IN.uv_MainTex.y-1.0/_HeightmapDimY)).x; float e = tex2D(_HeightMap,float2(IN.uv_MainTex.x-1.0/_HeightmapDimX,IN.uv_MainTex.y)).x; float w = tex2D(_HeightMap,float2(IN.uv_MainTex.x+1.0/_HeightmapDimX,IN.uv_MainTex.y)).x; float3 norm = normal; float3 temp = norm; //a temporary vector that is not parallel to norm if(norm.x==1) temp.y+=0.5; else temp.x+=0.5; //form a basis with norm being one of the axes: float3 perp1 = normalize(cross(norm,temp)); float3 perp2 = normalize(cross(norm,perp1)); //use the basis to move the normal in its own space by the offset float3 normalOffset = -_HeightmapStrength * ( ( (n-me) - (s-me) ) * perp1 + ( ( e - me ) - ( w - me ) ) * perp2 ); norm += normalOffset; norm = normalize(norm); o.Normal = norm; } Also here is the built-in transparent parallax specular shader for unity. 
Shader "Transparent/Parallax Specular" { Properties { _Color ("Main Color", Color) = (1,1,1,1) _SpecColor ("Specular Color", Color) = (0.5, 0.5, 0.5, 0) _Shininess ("Shininess", Range (0.01, 1)) = 0.078125 _Parallax ("Height", Range (0.005, 0.08)) = 0.02 _MainTex ("Base (RGB) TransGloss (A)", 2D) = "white" {} _BumpMap ("Normalmap", 2D) = "bump" {} _ParallaxMap ("Heightmap (A)", 2D) = "black" {} } SubShader { Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"} LOD 600 CGPROGRAM #pragma surface surf BlinnPhong alpha #pragma exclude_renderers flash sampler2D _MainTex; sampler2D _BumpMap; sampler2D _ParallaxMap; fixed4 _Color; half _Shininess; float _Parallax; struct Input { float2 uv_MainTex; float2 uv_BumpMap; float3 viewDir; }; void surf (Input IN, inout SurfaceOutput o) { half h = tex2D (_ParallaxMap, IN.uv_BumpMap).w; float2 offset = ParallaxOffset (h, _Parallax, IN.viewDir); IN.uv_MainTex += offset; IN.uv_BumpMap += offset; fixed4 tex = tex2D(_MainTex, IN.uv_MainTex); o.Albedo = tex.rgb * _Color.rgb; o.Gloss = tex.a; o.Alpha = tex.a * _Color.a; o.Specular = _Shininess; o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap)); } ENDCG } FallBack "Transparent/Bumped Specular" }

    Read the article

  • BizTalk: Sample: Context routing and Throttling with orchestration

    - by Leonid Ganeline
    The sample demonstrates using an orchestration for throttling and using context routing. Usually throttling is implemented at the host level (in BizTalk 2010 we can also use host instance level throttling). Here, throttling is demonstrated with an orchestration convoy that slows down the message flow from some customers. The sample implements a sort of quality-of-service agreement layer for different kinds of customers. The sample also demonstrates context routing between orchestrations. It has several advantages over content routing. For example, we don’t have to create a property schema and promote properties on the schemas, and we don’t have to change the message content to change routing.
    Use case: The BizTalk application has a main processing orchestration that processes all input messages. The application usually works as an OLTP application. Input messages come in random order without peaks, a typical scenario for on-line users. But sometimes big batch payloads come in. These batches overload the processing orchestrations. All processes activated by on-line users after the payload come to the same queue and are processed only after the payload. The result is that on-line users can see a significant delay in processing. It can be minutes or hours, depending on the batch size.
    Requirements: On-line users' processing should work without delays. Big batches cannot disturb on-line users. There should be a higher priority for the on-line users and a lower priority for the batches.
    Design: The decision is to divide the message flow into two branches, one for on-line users and a second for batches. The batch branch provides messages to the processing line with low priority, and the on-line users' branch with high priority. All messages are provided by a high-speed receive port. The BTS.ReceivePortName context property is used for routing. The Router orchestration separates messages sent from on-line users from the batch messages. But the Router does not use the BizTalk-provided value of this property; the Router sets up this value by itself. The Router uses the content of the messages to decide if a message is from on-line users or from batches. The message context property BTS.ReceivePortName is changed accordingly; its value works as a recipient address, as the “To” address for the next recipient orchestrations. Those next orchestrations are the BatchBottleneck and the MainProcess orchestrations. Messages with context equal to “ToBatch” are picked up by the BatchBottleneck orchestration. It is a uniform convoy orchestration and it throttles the message flow, delaying the message delivery to the MainProcess orchestration. The BatchBottleneck orchestration changes the message context to “ToProcess” and sends messages one after another with a small delay in between. The delay can be configured in the BizTalk config file as:
    <appSettings>
        <add key="GLD_Tests_TwoWayRouting_BatchBottleneck_DelayMillisec" value="100"/>
    </appSettings>
    Of course, messages with context equal to “ToProcess” are picked up by the MainProcess orchestration.
    NOTES: Filters with string values: in Orchestrations (the first Receive shape in the orchestration) use string values WITH quotes; in Send Ports use string values WITHOUT quotes. Filters on the Send Ports are dynamic; we can change them at run-time. Filters on the Orchestrations are static; we can change them only at design-time.
    To check the existence of a promoted property inside an orchestration, use the Expression shape with a construction like this:       if (BTS.ReceivePortName exists myMessage) { …; } It is not possible in the Message Assignment shape because using the “if” statement inside a Message Assignment is prohibited. Several predefined context properties can behave in a specific way. Take MessageTracking.OriginatingMessage or XMLNORM.DocumentSpecName: they require that some internal rules be applied to the format or usage of these properties. MessageTracking.* parameters require that tracking be in use, and you can get unexpected run-time errors in some cases. My recommendation is to use a very limited set of the predefined context properties. To “attach” a new promoted property to the message, we have to use correlation. The correlation type should include this property. [Here is a good explanation by Saravana] The sample code is here [sorry, temporary troubles with CodePlex].
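    For the configurable delay shown above, a helper called from an orchestration could read the appSettings value in the usual .NET way; this is a minimal sketch, assuming the key lives in the BizTalk host's BTSNTSvc.exe.config as described.

    ```csharp
    using System.Configuration;   // add a reference to System.Configuration

    // Hypothetical helper an orchestration Expression shape could call
    // to pick up the throttling delay configured in the post.
    public static class ThrottleSettings
    {
        public static int GetBatchDelayMilliseconds()
        {
            string raw = ConfigurationManager.AppSettings[
                "GLD_Tests_TwoWayRouting_BatchBottleneck_DelayMillisec"];

            // Fall back to a conservative default if the key is missing.
            int delay;
            return int.TryParse(raw, out delay) ? delay : 100;
        }
    }
    ```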

    Read the article

  • Collision Error

    - by Manji
    I am having trouble with collision detection part of the game. I am using touch events to fire the gun as you will see in the video. Note, the android icon is a temporary graphic for the bullets When ever the user touches (represented by clicks in the video)the bullet appears and kills random sprites. As you can see it never touches the sprites it kills or kill the sprites it does touch. My Question is How do I fix it, so that the sprite dies when the bullet hits it? Collision Code snippet: //Handles Collision private void CheckCollisions(){ synchronized(mSurfaceHolder){ for (int i = sprites.size() - 1; i >= 0; i--){ Sprite sprite = sprites.get(i); if(sprite.isCollision(bullet)){ sprites.remove(sprite); mScore++; if(sprites.size() == 0){ mLevel = mLevel +1; currentLevel++; initLevel(); } break; } } } } Sprite Class Code Snippet: //bounding box left<right and top>bottom int left ; int right ; int top ; int bottom ; public boolean isCollision(Beam other) { // TODO Auto-generated method stub if(this.left>other.right || other.left<other.right)return false; if(this.bottom>other.top || other.bottom<other.top)return false; return true; } EDIT 1: Sprite Class: public class Sprite { // direction = 0 up, 1 left, 2 down, 3 right, // animation = 3 back, 1 left, 0 front, 2 right int[] DIRECTION_TO_ANIMATION_MAP = { 3, 1, 0, 2 }; private static final int BMP_ROWS = 4; private static final int BMP_COLUMNS = 3; private static final int MAX_SPEED = 5; private HitmanView gameView; private Bitmap bmp; private int x; private int y; private int xSpeed; private int ySpeed; private int currentFrame = 0; private int width; private int height; //bounding box left<right and top>bottom int left ; int right ; int top ; int bottom ; public Sprite(HitmanView gameView, Bitmap bmp) { this.width = bmp.getWidth() / BMP_COLUMNS; this.height = bmp.getHeight() / BMP_ROWS; this.gameView = gameView; this.bmp = bmp; Random rnd = new Random(); x = rnd.nextInt(gameView.getWidth() - width); y = rnd.nextInt(gameView.getHeight() - height); xSpeed = rnd.nextInt(MAX_SPEED * 2) - MAX_SPEED; ySpeed = rnd.nextInt(MAX_SPEED * 2) - MAX_SPEED; } private void update() { if (x >= gameView.getWidth() - width - xSpeed || x + xSpeed <= 0) { xSpeed = -xSpeed; } x = x + xSpeed; if (y >= gameView.getHeight() - height - ySpeed || y + ySpeed <= 0) { ySpeed = -ySpeed; } y = y + ySpeed; currentFrame = ++currentFrame % BMP_COLUMNS; } public void onDraw(Canvas canvas) { update(); int srcX = currentFrame * width; int srcY = getAnimationRow() * height; Rect src = new Rect(srcX, srcY, srcX + width, srcY + height); Rect dst = new Rect(x, y, x + width, y + height); canvas.drawBitmap(bmp, src, dst, null); } private int getAnimationRow() { double dirDouble = (Math.atan2(xSpeed, ySpeed) / (Math.PI / 2) + 2); int direction = (int) Math.round(dirDouble) % BMP_ROWS; return DIRECTION_TO_ANIMATION_MAP[direction]; } public boolean isCollision(float x2, float y2){ return x2 > x && x2 < x + width && y2 > y && y2 < y + height; } public boolean isCollision(Beam other) { // TODO Auto-generated method stub if(this.left>other.right || other.left<other.right)return false; if(this.bottom>other.top || other.bottom<other.top)return false; return true; } } Bullet Class: public class Bullet { int mX; int mY; private Bitmap mBitmap; //bounding box left<right and top>bottom int left ; int right ; int top ; int bottom ; public Bullet (Bitmap mBitmap){ this.mBitmap = mBitmap; } public void draw(Canvas canvas, int mX, int mY) { this.mX = mX; this.mY = mY; canvas.drawBitmap(mBitmap, 
mX, mY, null); } }

    Read the article

  • What Counts for A DBA - Logic

    - by drsql
    "There are 10 kinds of people in the world. Those who will always wonder why there are only two items in my list and those who will figured it out the first time they saw this very old joke."  Those readers who will give up immediately and get frustrated with me for not explaining it to them are not likely going to be great technical professionals of any sort, much less a programmer or administrator who will be constantly dealing with the common failures that make up a DBA's day.  Many of these people will stare at this like a dog staring at a traffic signal and still have no more idea of how to decipher the riddle. Without explanation they will give up, call the joke "stupid" and, feeling quite superior, walk away indignantly to their job likely flipping patties of meat-by-product. As a data professional or any programmer who has strayed  to this very data-oriented blog, you would, if you are worth your weight in air, either have recognized immediately what was going on, or felt a bit ignorant.  Your friends are chuckling over the joke, but why is it funny? Unfortunately you left your smartphone at home on the dresser because you were up late last night programming and were running late to work (again), so you will either have to fake a laugh or figure it out.  Digging through the joke, you figure out that the word "two" is the most important part, since initially the joke mentioned 10. Hmm, why did they spell out two, but not ten? Maybe 10 could be interpreted a different way?  As a DBA, this sort of logic comes into play every day, and sometimes it doesn't involve nerdy riddles or Star Wars folklore.  When you turn on your computer and get the dreaded blue screen of death, you don't immediately cry to the help desk and sit on your thumbs and whine about not being able to work. Do that and your co-workers will question your nerd-hood; I know I certainly would. You figure out the problem, and when you have it narrowed down, you call the help desk and tell them what the problem is, usually having to explain that yes, you did in fact try to reboot before calling.  Of course, sometimes humility does come in to play when you reach the end of your abilities, but the ‘end of abilities’ is not something any of us recognize readily. It is handy to have the ability to use logic to solve uncommon problems: It becomes especially useful when you are trying to solve a data-related problem such as a query performance issue, and the way that you approach things will tell your coworkers a great deal about your abilities.  The novice is likely to immediately take the approach of  trying to add more indexes or blaming the hardware. As you become more and more experienced, it becomes increasingly obvious that performance issues are a very complex topic. A query may be slow for a myriad of reasons, from concurrency issues, a poor query plan because of a parameter value (like parameter sniffing,) poor coding standards, or just because it is a complex query that is going to be slow sometimes. Some queries that you will deal with may have twenty joins and hundreds of search criteria, and it can take a lot of thought to determine what is going on.  You can usually figure out the problem to almost any query by using basic knowledge of how joins and queries work, together with the help of such things as the query plan, profiler or monitoring tools.  It is not unlikely that it can take a full day’s work to understand some queries, breaking them down into smaller queries to find a very tiny problem. 
    Not every time will you actually find the problem, and it is part of the process to occasionally admit that the problem is random, and everything works fine now.  Sometimes, it is necessary to realize that a problem is outside of your current knowledge, and admit temporary defeat: You can, at least, narrow down the source of the problem by looking logically at all of the possible solutions. By doing this, you can satisfy your curiosity and learn more about what the actual problem was. For example, in the joke, had you never been exposed to the concept of binary numbers, there is no way you could have known that binary 10 = decimal 2, but you could have logically come to the conclusion that 10 must not mean ten in the context of the joke, and at that point you are that much closer to getting the joke and at least won't feel so ignorant.

    Read the article

  • Microsoft Sql Server 2008 R2 System Databases

    For a majority of software developers, little time is spent understanding the inner workings of the database management systems (DBMS) they use to store data for their applications. I personally place myself in this grouping. In my case, I have used various versions of Microsoft’s SQL Server (2000, 2005, and 2008 R2) and just recently learned how valuable they really are when I was preparing to deliver a lecture on "SQL Server 2008 R2, System Databases".
    Microsoft Sql Server 2008 R2 System Databases
    So what are system databases in MS SQL Server, and why should I know them? Microsoft uses system databases to support the SQL Server DBMS, much like a developer uses config files or database tables to support an application. These system databases individually provide specific functionality that allows MS SQL Server to function.
    Name          Database File              Log File
    Master        master.mdf                 mastlog.ldf
    Resource      mssqlsystemresource.mdf    mssqlsystemresource.ldf
    Model         model.mdf                  modellog.ldf
    MSDB          msdbdata.mdf               msdblog.ldf
    Distribution  distmdl.mdf                distmdl.ldf
    TempDB        tempdb.mdf                 templog.ldf
    Master Database
    If you have used MS SQL Server then you should recognize the Master database, especially if you have used SQL Server Management Studio (SSMS) to connect to a user-created database. MS SQL Server requires the Master database in order for the DBMS to start, due to the information that it stores. Examples of data stored in the Master database: user logins, linked servers, configuration information, and information on user databases.
    Resource Database
    Honestly, until recently I never knew this database even existed until I started to research SQL Server system databases. The reason for this is due largely to the fact that the Resource database is hidden from users. In fact, the database files are stored within the Binn folder instead of the standard MS SQL Server database folder path. This database contains all system objects that can be accessed by all other databases. In short, this database contains all the system views and stored procedures that appear in all other user databases regarding system information. One of the many benefits of storing system views and stored procedures in a single hidden database is that it simplifies upgrading a SQL Server database; not to mention that maintenance is decreased since only one code base has to be maintained for all of the system views and procedures.
    Model Database
    The Model database, as the name implies, is the model for all new databases created by users. This allows for predefining default database objects for all new databases within a MS SQL Server instance. For example, if every database created by a user needs to have an “Audit” table when it is created, then defining the “Audit” table in the model guarantees that the table will be located in every new database created after the model is altered.
    MSDB Database
    The MSDB database is used by SQL Server Agent, SQL Server Database Mail, and SQL Server Service Broker, along with SQL Server. The SQL Server Agent uses this database to store job configurations and SQL job schedules along with SQL Alerts and Operators. In addition, this database also stores all SQL job parameters along with each job’s execution history. Finally, this database is also used to store database backup and maintenance plans as well as details pertaining to SQL Log shipping if it is being used.
    Distribution Database
    The Distribution database is only used during replication and stores metadata and history information pertaining to the act of replicating data. Furthermore, when transactional replication is used this database also stores information regarding each transaction. It is important to note that replication is not turned on by default in MS SQL Server and that the Distribution database is hidden from SSMS.
    Tempdb Database
    The Tempdb, as the name implies, is used to store temporary data and data objects. Examples of this include temp tables and temp stored procedures. It is important to note that all data and data objects in this database are cleared when SQL Server restarts. This database is also used by SQL Server when it is performing some internal operations. Typically, SQL Server uses this database for the purpose of large sort and index operations. Finally, this database is used to store row versions if row versioning or snapshot isolation transactions are being used by SQL Server.
    Additionally, I would love to hear from others about their experiences using system databases, tables, and objects in real-world environments.
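    To see these databases and their files on a live instance, a quick ADO.NET sketch such as the one below can query the catalog views; note that the Resource database will not appear in sys.databases, precisely because it is hidden. The connection string is a placeholder for your own instance.

    ```csharp
    using System;
    using System.Data.SqlClient;

    class ListSystemDatabases
    {
        static void Main()
        {
            // Adjust the connection string for your own instance.
            const string connectionString = @"Server=.\SQLEXPRESS;Integrated Security=true";

            // database_id 1-4 are master, tempdb, model, and msdb.
            const string query = @"
                SELECT d.name AS database_name, mf.name AS file_name, mf.physical_name
                FROM sys.databases d
                JOIN sys.master_files mf ON mf.database_id = d.database_id
                WHERE d.database_id <= 4
                ORDER BY d.database_id, mf.file_id";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(query, connection))
            {
                connection.Open();
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("{0,-10} {1,-25} {2}",
                            reader.GetString(0), reader.GetString(1), reader.GetString(2));
                    }
                }
            }
        }
    }
    ```

    Running it against a default instance should list master, tempdb, model, and msdb with the file names shown in the table above.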

    Read the article

  • Exiting a reboot loop

    - by user12617035
    If you're in a situation where the system is panic'ing during boot, you can use # boot net -s to regain control of your system. In my case, I'd added some diagnostic code to a (PCI) driver (that is used to boot the root filesystem). There was a bug in the driver, and each time during boot, the bug occurred, and so caused the system to panic: ... 000000000180b950 genunix:vfs_mountroot+60 (800, 200, 0, 185d400, 1883000, 18aec00) %l0-3: 0000000000001770 0000000000000640 0000000001814000 00000000000008fc %l4-7: 0000000001833c00 00000000018b1000 0000000000000600 0000000000000200 000000000180ba10 genunix:main+98 (18141a0, 1013800, 18362c0, 18ab800, 180e000, 1814000) %l0-3: 0000000070002000 0000000000000001 000000000180c000 000000000180e000 %l4-7: 0000000000000001 0000000001074800 0000000000000060 0000000000000000 skipping system dump - no dump device configured rebooting... If you're logged in via the console, you can send a BREAK sequence in order to gain control of the firmware's (OBP's) prompt. Enter Ctrl-Shift-[ in order to get the TELNET prompt. Once telnet has control, enter this: telnet> send brk You'll be presented with OBP's prompt: ok You then enter the following in order to boot into single-user mode via the network: ok boot net -s Note that booting from the network under Solaris will implicitly cause the system to be INSTALLED with whatever software had last been configured to be installed. However, we are using boot net -s as a "handle" with which to get at the Solaris prompt. Once at that prompt, we can perform actions as root that will let us back out our buggy driver (ok... MY buggy driver :-)) ...and replace it with the original, non-buggy driver. Entering the boot command caused the following output, as well as left us at the Solaris prompt (in single-user-mode): Sun Blade 1500, No Keyboard Copyright 1998-2004 Sun Microsystems, Inc. All rights reserved. OpenBoot 4.16.4, 1024 MB memory installed, Serial #53463393. Ethernet address 0:3:ba:2f:c9:61, Host ID: 832fc961. Rebooting with command: boot net -s Boot device: /pci@1f,700000/network@2 File and args: -s 1000 Mbps FDX Link up Timeout waiting for ARP/RARP packet Timeout waiting for ARP/RARP packet 4000 1000 Mbps FDX Link up Requesting Internet address for 0:3:ba:2f:c9:61 SunOS Release 5.10 Version Generic_118833-17 64-bit Copyright 1983-2005 Sun Microsystems, Inc. All rights reserved. Use is subject to license terms. Booting to milestone "milestone/single-user:default". Configuring devices. Using RPC Bootparams for network configuration information. Attempting to configure interface bge0... Configured interface bge0 Requesting System Maintenance Mode SINGLE USER MODE # Our goal is to now move to the directory containing the buggy driver and replace it with the original driver (that we had saved away before ever loading our buggy driver! :-) However, since we booted from the network, the root filesystem ("/") is NOT mounted on one of our local disks. It is mounted on an NFS filesystem exported by our install server. To verify this, enter the following command: # mount | head -1 / on my-server:/export/install/media/s10u2/solarisdvd.s10s_u2dvd/latest/Solaris_10/Tools/Boot remote/read/write/setuid/devices/dev=4ac0001 on Wed Dec 31 16:00:00 1969 As a result, we have to create a temporary mount point and then mount the local disk onto that mount point: # mkdir /tmp/mnt # mount /dev/dsk/c0t0d0s0 /tmp/mnt Note that your system will not necessarily have had its root filesystem on "c0t0d0s0". 
This is something that you should also have recorded before you ever loaded your.. er... "my" buggy driver! :-) One can find the local disk mounted under the root filesystem by entering: # df -k / Filesystem kbytes used avail capacity Mounted on /dev/dsk/c0t0d0s0 76703839 4035535 71901266 6% / To continue with our example, we can now move to the directory of buggy-driver in order to replace it with the original driver. Note that /tmp/mnt is prefixed to the path of where we'd "normally" find the driver: # cd /tmp/mnt/platform/sun4u/kernel/drv/sparcv9 # ls -l pci\* -rw-r--r-- 1 root root 288504 Dec 6 15:38 pcisch -rw-r--r-- 1 root root 288504 Dec 6 15:38 pcisch.aar -rwxr-xr-x 1 root sys 211616 Jun 8 2006 pcisch.orig # cp -p pcisch.orig pcisch We can now synchronize any in-memory filesystem data structures with those on disk... and then reboot. The system will then boot correctly... as expected: # sync;sync # reboot syncing file systems... done Sun Blade 1500, No Keyboard Copyright 1998-2004 Sun Microsystems, Inc. All rights reserved. OpenBoot 4.16.4, 1024 MB memory installed, Serial #xxxxxxxx. Ethernet address 0:3:ba:2f:c9:61, Host ID: yyyyyyyy. Rebooting with command: boot Boot device: /pci@1e,600000/ide@d/disk@0,0:a File and args: SunOS Release 5.10 Version Generic_118833-17 64-bit Copyright 1983-2005 Sun Microsystems, Inc. All rights reserved. Use is subject to license terms. Hostname: my-host NIS domain name is my-campus.Central.Sun.COM my-host console login: ...so that's how it's done! Of course, the easier way is to never write a buggy-driver... but.. then.. we all "have an eraser on the end of each of our pencils"... don't we ? :-) "...thank you... and good night..."

    Read the article

  • Why is Git telling me "Your branch is ahead of 'origin/master' by 11 commits." and how do I get it t

    - by spilth
    I'm a Git newbie. I recently moved a Rails project from Subversion to Git. I followed the tutorial here: http://www.simplisticcomplexity.com/2008/03/05/cleanly-migrate-your-subversion-repository-to-a-git-repository/ I am also using unfuddle.com to store my code. I make changes on my Mac laptop on the train to/from work and then push them to unfuddle when I have a network connection using the following command: git push unfuddle master I use Capistrano for deployments and pull code from the unfuddle repository using the master branch. Lately I've noticed the following message when I run "git status" on my laptop: # On branch master # Your branch is ahead of 'origin/master' by 11 commits. # nothing to commit (working directory clean) And I'm confused as to why. I thought my laptop was the origin... but don't know if either the fact that I originally pulled from Subversion or push to Unfuddle is what's causing the message to show up. How can I: Find out where Git thinks 'origin/master' is? If it's somewhere else, how do I turn my laptop into the 'origin/master'? Get this message to go away. It makes me think Git is unhappy about something. My mac is running Git version 1.6.0.1. When I run git remote show origin as suggested by dbr, I get the following: ~/Projects/GeekFor/geekfor 10:47 AM $ git remote show origin fatal: '/Users/brian/Projects/GeekFor/gf/.git': unable to chdir or not a git archive fatal: The remote end hung up unexpectedly When I run git remote -v as suggested by Aristotle Pagaltzis, I get the following: ~/Projects/GeekFor/geekfor 10:33 AM $ git remote -v origin /Users/brian/Projects/GeekFor/gf/.git unfuddle [email protected]:spilth/geekfor.git Now, interestingly, I'm working on my project in the geekfor directory but it says my origin is my local machine in the gf directory. I believe gf was the temporary directory I used when converting my project from Subversion to Git and probably where I pushed to unfuddle from. Then I believe I checked out a fresh copy from unfuddle to the geekfor directory. So it looks like I should follow dbr's advice and do: git remote rm origin git remote add origin [email protected]:spilth/geekfor.git

    Read the article

  • Can I read an Outlook (2003/2007) PST file in C#?

    - by Andy May
    Is it possible to read a .PST file using C#? I would like to do this as a standalone application, not as an Outlook addin (if that is possible). If have seen other SO questions similar to this mention MailNavigator but I am looking to do this programmatically in C#. I have looked at the Microsoft.Office.Interop.Outlook namespace but that appears to be just for Outlook addins. LibPST appears to be able to read PST files, but this is in C (sorry Joel, I didn't learn C before graduating). Any help would be greatly appreciated, thanks! EDIT: Thank you all for the responses! I accepted Matthew Ruston's response as the answer because it ultimately led me to the code I was looking for. Here is a simple example of what I got to work (You will need to add a reference to Microsoft.Office.Interop.Outlook): using System; using System.Collections.Generic; using Microsoft.Office.Interop.Outlook; namespace PSTReader { class Program { static void Main () { try { IEnumerable<MailItem> mailItems = readPst(@"C:\temp\PST\Test.pst", "Test PST"); foreach (MailItem mailItem in mailItems) { Console.WriteLine(mailItem.SenderName + " - " + mailItem.Subject); } } catch (System.Exception ex) { Console.WriteLine(ex.Message); } Console.ReadLine(); } private static IEnumerable<MailItem> readPst(string pstFilePath, string pstName) { List<MailItem> mailItems = new List<MailItem>(); Application app = new Application(); NameSpace outlookNs = app.GetNamespace("MAPI"); // Add PST file (Outlook Data File) to Default Profile outlookNs.AddStore(pstFilePath); MAPIFolder rootFolder = outlookNs.Stores[pstName].GetRootFolder(); // Traverse through all folders in the PST file // TODO: This is not recursive, refactor Folders subFolders = rootFolder.Folders; foreach (Folder folder in subFolders) { Items items = folder.Items; foreach (object item in items) { if (item is MailItem) { MailItem mailItem = item as MailItem; mailItems.Add(mailItem); } } } // Remove PST file from Default Profile outlookNs.RemoveStore(rootFolder); return mailItems; } } } Note: This code assumes that Outlook is installed and already configured for the current user. It uses the Default Profile (you can edit the default profile by going to Mail in the Control Panel). One major improvement on this code would be to create a temporary profile to use instead of the Default, then destroy it once completed.

    Read the article

  • SSRS Report from Oracle DB - Use stored procedure

    - by Emtucifor
    I am developing a report in Sql Server Reporting Services 2005, connecting to an Oracle 11g database. As you post replies perhaps it will help to know that I'm skilled in MSSQL Server and inexperienced in Oracle. I have multiple nested subreports and need to use summary data in outer reports and the same data but in detail in the inner reports. In order to spare the DB server from multiple executions, I thought to populate some temp tables at the beginning and then query just them the multiple times in the report and the subreports. In SSRS, Datasets are evidently executed in the order they appear in the RDL file. And you can have a dataset that doesn't return a rowset. So I created a stored procedure to populate my four temp tables and made this the first Dataset in my report. This SP works when I run it from SQLDeveloper and I can query the data from the temp tables. However, this didn't appear to work out because SSRS was apparently not reusing the same session, so even though the global temporary tables were created with ON COMMIT PRESERVE ROWS my Datasets were empty. I switched to using "real" tables and am now passing in an additional parameter, a GUID in string form, uniquely generated on each new execution, that is part of the primary key of each table, so I can get back just the rows for this execution. Running this from Sql Developer works fine, example: DECLARE ActivityCode varchar2(15) := '1208-0916 '; ExecutionID varchar2(32) := SYS_GUID(); BEGIN CIPProjectBudget (ActivityCode, ExecutionID); END; Never mind that in this example I don't know the GUID, this simply proves it works because rows are inserted to my four tables. But in the SSRS report, I'm still getting no rows in my Datasets and SQL Developer confirms no rows are being inserted. So I'm thinking along the lines of: Oracle uses implicit transactions and my changes aren't getting committed? Even though I can prove that the non-rowset returning SP is executing (because if I leave out the parameter mapping it complains at report rendering time about not having enough parameters) perhaps it's not really executing. Somehow. Wrong execution order isn't the problem or rows would appear in the tables, and they aren't. I'm interested in any ideas about how to accomplish this (especially the part about not running the main queries multiple times). I'll redesign my whole report. I'll stop using a stored procedure. Suggest anything you like! I just need help getting this working and I am stuck. If you want more details, in my SSRS report I have a List object (it's a container that repeats once for each row in a Dataset) that has some header values and then contains a subreport. Eventually, there will be four total reports: one main report, with three nested subreports. Each subreport will be in a List on the parent report.
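    (For reference, a rough ODP.NET sketch for exercising the procedure outside of SSRS: the procedure name and parameters are taken from above, while the connection string and the cip_budget_rows table are placeholder assumptions. The point it illustrates is that the insert has to be committed before a different session, such as an SSRS dataset query, can read the rows back by ExecutionID.)

    ```csharp
    using System;
    using Oracle.ManagedDataAccess.Client;  // ODP.NET managed driver

    class CipBudgetSmokeTest
    {
        static void Main()
        {
            // Placeholder connection string; adjust for your environment.
            const string connectionString =
                "User Id=report_user;Password=secret;Data Source=ORCL";
            string executionId = Guid.NewGuid().ToString("N");

            using (var connection = new OracleConnection(connectionString))
            {
                connection.Open();

                using (OracleTransaction tx = connection.BeginTransaction())
                {
                    using (var call = new OracleCommand("CIPProjectBudget", connection))
                    {
                        call.BindByName = true;
                        call.CommandType = System.Data.CommandType.StoredProcedure;
                        call.Parameters.Add("ActivityCode", OracleDbType.Varchar2).Value = "1208-0916";
                        call.Parameters.Add("ExecutionID", OracleDbType.Varchar2).Value = executionId;
                        call.ExecuteNonQuery();
                    }

                    // Commit explicitly so another session can see the rows
                    // keyed by this execution id.
                    tx.Commit();
                }

                // "cip_budget_rows" is a hypothetical table name standing in for
                // one of the four tables the procedure populates.
                using (var check = new OracleCommand(
                    "SELECT COUNT(*) FROM cip_budget_rows WHERE execution_id = :id", connection))
                {
                    check.Parameters.Add("id", OracleDbType.Varchar2).Value = executionId;
                    Console.WriteLine("Rows visible after commit: " + check.ExecuteScalar());
                }
            }
        }
    }
    ```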

    Read the article

  • Very slow compile times on Visual Studio

    - by johnc
    We are getting very slow compile times, which can take upwards of 20 minutes on dual-core 2GHz, 2GB RAM machines. A lot of this is due to the size of our solution, which has grown to 70+ projects, as well as VSS, which is a bottleneck in itself when you have a lot of files. (Swapping out VSS is not an option unfortunately, so I don't want this to descend into a VSS bash.) We are looking at combining projects (not nice, as we like the separation of concerns, but it is a good opportunity to refactor away some dead wood). We are also looking at having multiple solutions to achieve greater separation of concerns and quicker compile times for each element of the application. This, I can see, will become a DLL hell as we try to keep things in sync. I am interested to know how other teams have dealt with this scaling issue. What do you do when your code base reaches a critical mass where you are wasting half the day watching the status bar deliver compile messages?

    UPDATE: Apologies, I neglected to mention this is a C# solution. Thanks for all the C++ suggestions, but it's been a few years since I've had to worry about headers. At a distance I say I miss C++, but I'm not sure I want to go back.

    EDIT: Nice suggestions that have helped so far (not saying there aren't other nice suggestions below, just what has helped):
    - New 3GHz laptop - the power of lost utilization works wonders when whinging to management
    - Disable anti-virus during compile
    - 'Disconnecting' from VSS (actually the network) during compile - I may get us to remove VS-VSS integration altogether and stick to using the VSS UI

    Still not rip-snorting through a compile, but every bit helps. Orion did mention in a comment that generics may have a part to play also. From my tests there does appear to be a minimal performance hit, but not high enough to be sure - compile times can be inconsistent due to disc activity. Due to time limitations, my tests didn't include as many generics, or as much code, as would appear in the live system, so that may accumulate. I wouldn't avoid using generics where they are supposed to be used, just for compile-time performance.

    WORKAROUND: We are testing the practice of building new areas of the application in new solutions, importing in the latest DLLs as required, then integrating them into the larger solution when we are happy with them. We may also do the same to existing code by creating temporary solutions that just encapsulate the areas we need to work on, and throwing them away after reintegrating the code. We need to weigh up the time it will take to reintegrate this code against the time we gain by avoiding Rip Van Winkle-like waits and instead getting rapid recompiling during development.

    Read the article

  • WIX installer with Custom Actions: "built by a runtime newer than the currently loaded runtime and cannot be loaded."

    - by Rimer
    I have a WIX installer that executes Custom Actions over the course of install. When I run the WIX installer, and it encounters its first Custom Action, the installer fails out, and I receive an error in the MSI log as follows: Action start 12:03:53: LoadBCAConfigDefaults. SFXCA: Extracting custom action to temporary directory: C:\DOCUME~1\ELOY06~1\LOCALS~1\Temp\MSI10C.tmp-\ SFXCA: Binding to CLR version v2.0.50727 Calling custom action WIXCustomActions!WIXCustomActions.CustomActions.LoadBCAConfigDefaults Error: could not load custom action class WIXCustomActions.CustomActions from assembly: WIXCustomActions System.BadImageFormatException: Could not load file or assembly 'WIXCustomActions' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded. File name: 'WIXCustomActions' at System.Reflection.Assembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) at System.Reflection.Assembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) at System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) at System.Reflection.Assembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) at System.AppDomain.Load(String assemblyString) at Microsoft.Deployment.WindowsInstaller.CustomActionProxy.GetCustomActionMethod(Session session, String assemblyName, String className, String methodName) ... the specific problem from above is "System.BadImageFormatException: Could not load file or assembly 'WIXCustomActions' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded." The wordage of that error seems to indicate something like an incorrectly referenced .NET framework or something (I'm targeting 3.5 in both my custom actions and its dependencies), but I can't figure out where to make a change to address this problem. Any ideas? .... 
    Not sure if this will help, but this is the CustomActions package batch file I run to create the .dll package containing the custom action functions:

        call "C:\Program Files\Microsoft Visual Studio 10.0\VC\vcvarsall.bat"
        @echo on
        cd "C:\development\trunk\PortalsDev\csharp\production\Installers\WIX\customactions\PAServicesWIXCustomActions"
        csc /target:library /r:"C:\program files\windows installer xml v3.6\sdk\microsoft.deployment.windowsinstaller.dll" /r:"C:\development\trunk\PortalsDev\csharp\production\Installers\WIX\customactions\PAServicesWIXCustomActions\bin\Debug\eLoyalty.PortalLib.dll" /out:"C:\development\trunk\PortalsDev\csharp\production\Installers\WIX\customactions\PAServicesWIXCustomActions\bin\Debug\WIXCustomActions.dll" CustomActions.cs
        cd "C:\Program Files\Windows Installer XML v3.6\SDK"
        makesfxca "C:\development\trunk\PortalsDev\csharp\production\Installers\WIX\customactions\PAServicesWIXCustomActions\bin\Debug\BatchCustomerAnalysisWIXCustomActionsPackage.dll" "c:\program files\windows installer xml v3.6\sdk\x86\sfxca.dll" "C:\development\trunk\PortalsDev\csharp\production\Installers\WIX\customactions\PAServicesWIXCustomActions\bin\Debug\WIXCustomActions.dll" customaction.config Microsoft.Deployment.WindowsInstaller.dll

    Read the article

  • .Net Intermittent System.Web.Services.Protocols.SoapHeaderException

    - by ScottE
    We have a .net 3.5 web app that consumes third party web services. The proxy was created by adding a web reference to their wsdl. This proxy is not compiled. Our error logging is picking up frequent but intermittent exceptions: An exception of type 'System.Web.Services.Protocols.SoapHeaderException' occurred and was caught If I follow the url to the page that generated the exception, I can't recreate it. Edit: Here is most of the exception - where it bubbled up from Message : Internal Error Type : System.Web.Services.Protocols.SoapHeaderException, System.Web.Services, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a Source : System.Web.Services Help link : Actor : Code : http://schemas.xmlsoap.org/soap/envelope/:Client Detail : Lang : Node : Role : SubCode : Data : System.Collections.ListDictionaryInternal TargetSite : System.Object[] ReadResponse(System.Web.Services.Protocols.SoapClientMessage, System.Net.WebResponse, System.IO.Stream, Boolean) Stack Trace : at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) at Vendor.getSearch(getSearchRequest getSearchRequest) in c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\be43c34e\b09edc7e\App_WebReferences.pww-cf-q.0.cs:line 73 Edit 2: Inner exceptions: I sometimes get the following inner exceptions logged: Message : Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. Type : System.IO.IOException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Source : System Help link : Data : System.Collections.ListDictionaryInternal TargetSite : Int32 Read(Byte[], Int32, Int32) Stack Trace : at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count) at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult) at System.Net.TlsStream.CallProcessAuthentication(Object state) at System.Threading.ExecutionContext.runTryCode(Object userData) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Net.TlsStream.ProcessAuthentication(LazyAsyncResult result) at System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size) at System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size) at System.Net.ConnectStream.WriteHeaders(Boolean async) And/Or: Message : An existing connection was forcibly closed by the remote host Type : System.Net.Sockets.SocketException, System, Version=2.0.0.0, Culture=neutral, 
PublicKeyToken=b77a5c561934e089 Source : System Help link : ErrorCode : 10054 SocketErrorCode : ConnectionReset NativeErrorCode : 10054 Data : System.Collections.ListDictionaryInternal TargetSite : Int32 Receive(Byte[], Int32, Int32, System.Net.Sockets.SocketFlags) Stack Trace : at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags) at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) Update We're still working on it. Originally there was a route issue, which was resolved. We're still getting the inner exception with socket errors. We had MS support involved today, and they looked at some traces and network captures. The web service host does round-robin DNS, and they may be responding on a different IP address for the syn syn/ack from one ip, and the next from a different ip. This is not good. This is likely quite specific to our situation, but perhaps it applies to others as well. Microsoft Network Monitor and an application trace got us the information we needed.

    Read the article

  • Access Violation Using memcpy or Assignment to an Array in a Struct

    - by Synetech inc.
    Hi, I wrote a program last night that worked just fine, but when I refactored it today to make it more extensible, I ended up with a problem. The original version had a hard-coded array of bytes. After some processing, some bytes were written into the array and then some more processing was done. To avoid hard-coding the pattern, I put the array in a structure so that I could add some related data and create an array of them. However, now I cannot write to the array in the structure. Here's a pseudo-code example:

        main() {
            char pattern[]="\x32\x33\x12\x13\xba\xbb";
            PrintData(pattern);
            pattern[2]='\x65';
            PrintData(pattern);
        }

    That one works, but this one does not:

        struct ENTRY {
            char* pattern;
            int somenum;
        };

        main() {
            ENTRY Entries[] = {
                {"\x32\x33\x12\x13\xba\xbb\x9a\xbc", 44},
                {"\x12\x34\x56\x78", 555}
            };
            PrintData(Entries[0].pattern);
            Entries[0].pattern[2]='\x65';   //0xC0000005 exception!!! :(
            PrintData(Entries[0].pattern);
        }

    The second version causes an access violation exception on the assignment. I'm sure it's because the second version allocates memory differently, but I'm starting to get a headache trying to figure out what's what or how to fix this. (I'm currently working around it by dynamically allocating a buffer of the same size as the pattern array, copying the pattern to the new buffer, making the changes to the buffer, using the buffer in place of the pattern array, and then trying to remember to free the temporary buffer.) (Specifically, the original version cast the pattern array, plus an offset, to a DWORD* and assigned a DWORD constant to it to overwrite the four target bytes. The new version cannot do that since the length of the source is unknown - it may not be four bytes - so it uses memcpy instead. I've checked and re-checked and have made sure that the pointers to memcpy are correct, but I still get an access violation. I use memcpy instead of str(n)cpy because I am using plain chars (as an array of bytes), not Unicode chars, and ignoring the null-terminator. Using an assignment as above causes the same problem.) Any ideas? Thanks a lot.

    Read the article

  • How do I diagnose "Microsoft .NET ClickOnce Launch Utility has stopped working"?

    - by Xaero
    Hello StackOverflow! We deploy our application using ClickOnce, installed from a file path. For 24 versions it has been working perfectly - now, on version 25 I get the following error once the application has installed and it launches: If I test a previous deployment on the same machine, it works. Where can I even begin to look to find the cause of this error? I already checked the windows event logs - nothing. EDIT: I noticed that while the dialog is displayed, a temporary xml file 'WER561D.tmp.WERInternalMetadata.xml' is generated in my temp folder. Here is the contents (it might contain clues helpful to those more knowledgeable in this area than I): <?xml version="1.0" encoding="UTF-16"?> <WERReportMetadata> <OSVersionInformation> <WindowsNTVersion>6.1</WindowsNTVersion> <Build>7600 </Build> <Product>(0x4): Windows 7 Enterprise</Product> <Edition>Enterprise</Edition> <BuildString>7600.16385.x86fre.win7_rtm.090713-1255</BuildString> <Revision>1</Revision> <Flavor>Multiprocessor Free</Flavor> <Architecture>X86</Architecture> <LCID>1033</LCID> </OSVersionInformation> <ProblemSignatures> <EventType>CLR20r3</EventType> <Parameter0>applaunch.exe</Parameter0> <Parameter1>2.0.50727.4927</Parameter1> <Parameter2>4a275abe</Parameter2> <Parameter3>mscorlib</Parameter3> <Parameter4>2.0.0.0</Parameter4> <Parameter5>4a275af7</Parameter5> <Parameter6>4f3</Parameter6> <Parameter7>0</Parameter7> <Parameter8>System.Security.Security</Parameter8> </ProblemSignatures> <DynamicSignatures> <Parameter1>6.1.7600.2.0.0.256.4</Parameter1> <Parameter2>1033</Parameter2> </DynamicSignatures> <SystemInformation> -- removed for privacy reasons -- </SystemInformation> </WERReportMetadata> Another key point is that I am publishing via Visual Studio, there is no manual manifest editing going on.

    Read the article

  • Issues with MIPS interrupt for tv remote simulator

    - by pred2040
    Hello I am writing a program for class to simulate a tv remote in a MIPS/SPIM enviroment. The functions of the program itself are unimportant as they worked fine before the interrupt so I left them all out. The gaol is basically to get a input from the keyboard by means of interupt, store it in $s7 and process it. The interrupt is causing my program to repeatedly spam the errors: Exception occurred at PC=0x00400068 Bad address in data/stack read: 0x00000004 Exception occurred at PC=0x00400358 Bad address in data/stack read: 0x00000000 program starts here .data msg_tvworking: .asciiz "tv is working\n" msg_sec: .asciiz "sec -- " msg_on: .asciiz "Power On" msg_off: .asciiz "Power Off" msg_channel: .asciiz " Channel " msg_volume: .asciiz " Volume " msg_sleep: .asciiz " Sleep Timer: " msg_dash: .asciiz "-\n" msg_newline: .asciiz "\n" msg_comma: .asciiz ", " array1: .space 400 # 400 bytes of storage for 100 channels array2: .space 400 # copy of above for sorting var1: .word 0 # 1 if 0-9 is pressed, 0 if not var2: .word 0 # stores number of channel (ex. 2-) var3: .word 0 # channel timer var4: .word 0 # 1 if s pressed once, 2 if twice, 0 if not var5: .word 0 # sleep wait timer var6: .word 0 # program timer var9: .float 0.01 # for channel timings .kdata var7: .word 10 var8: .word 11 .text .globl main main: li $s0, 300 li $s1, 0 # channel li $s2, 50 # volume li $s3, 1 # power - 1:on 0:off li $s4, 0 # sleep timer - 0:off li $s5, 0 # temporary li $s6, 0 # length of sleep period li $s7, 10000 # current key press li $t2, 0 # temp value not needed across calls li $t4, 0 interrupt data here mfc0 $a0, $12 ori $a0, 0xff11 mtc0 $a0, $12 lui $t0, 0xFFFF ori $a0, $0, 2 sw $a0, 0($t0) mainloop: # 1. get external input, and process it # input from interupt is taken from $a2 and placed in $s7 #for processing beq $a2, $0, next lw $s7, 4($a2) li $a2, 0 # call the process_input function here # jal process_input next: # 2. check sleep timer mainloopnext1: # 3. delay for 10ms jal delay_10ms jal check_timers jal channel_time # 4. 
print status lw $s5, var6 addi $s5, $s5, 1 sw $s5, var6 addi $s0, $s0, -1 bne $s0, $0, mainloopnext4 li $s0, 300 jal status_print mainloopnext4: j mainloop li $v0,10 # exit syscall -------------------------------------------------- status_print: seconds_stat: power_stat: on_stat: off_stat: channel_stat: volume_stat: sleep_stat: j $ra -------------------------------------------------- delay_10ms: li $t0, 6000 delay_10ms_loop: addi $t0, $t0, -1 bne $t0, $0, delay_10ms_loop jr $ra -------------------------------------------------- check_timers: channel_press: sleep_press: go_back_press: channel_check: channel_ignore: sleep_check: sleep_ignore: j $ra ------------------------------------------------ process_input: beq $s7, 112, power beq $s7, 117, channel_up beq $s7, 100, channel_down beq $s7, 108, volume_up beq $s7, 107, volume_down beq $s7, 115, sleep_init beq $s7, 118, history bgt $s7, 47, end_range jr $ra end_range: power: on: off: channel_up: over: channel_down: under: channel_message: channel_time: volume_up: volume_down: volume_message: sleep_init: sleep_incr: sleep: sleep_reset: history: digit_pad_init: digit_pad: jr $ra -------------------------------------------- interupt data here, followed closely from class .ktext 0x80000180 .set noat move $k1, $at .set at sw $v0, var7 sw $a0, var8 mfc0 $k0, $13 srl $a0, $k0, 2 andi $a0, $a0, 0x1f bne $a0, $zero, no_io lui $v0, 0xFFFF lw $a2, 4($v0) # keyboard data placed in $a2 no_io: mtc0 $0, $13 mfc0 $k0, $12 andi $k0, 0xfffd ori $k0, 0x11 mtc0 $k0, $12 lw $v0, var7 lw $a0, var8 .set noat move $at, $k1 .set at eret Thanks in advance.

    Read the article

  • Python - calculate multinomial probability density functions on large dataset?

    - by Seafoid
    Hi, I originally intended to use MATLAB to tackle this problem, but the inbuilt functions have limitations that do not suit my goal. The same limitation occurs in NumPy. I have two tab-delimited files. The first is a file showing amino acid residue, frequency and count for an in-house database of protein structures, i.e.

        A 0.25 1
        S 0.25 1
        T 0.25 1
        P 0.25 1

    The second file consists of quadruplets of amino acids and the number of times they occur, i.e.

        ASTP 1

    Note, there are 8,000 such quadruplets. Based on the background frequency of occurrence of each amino acid and the count of quadruplets, I aim to calculate the multinomial probability density function for each quadruplet and subsequently use it as the expected value in a maximum likelihood calculation. The multinomial distribution is as follows:

        f(x|n, p) = n!/(x1!*x2!*...*xk!)*((p1^x1)*(p2^x2)*...*(pk^xk))

    where x is the number of each of k outcomes in n trials with fixed probabilities p. n is 4 in all cases in my calculation. I have created three functions to calculate this distribution.

        # functions for multinomial distribution
        def expected_quadruplets(x, y):
            expected = x*y
            return expected

        # calculates the probabilities of occurrence raised to the number of occurrences
        def prod_prob(p1, a, p2, b, p3, c, p4, d):
            prob_prod = (pow(p1, a))*(pow(p2, b))*(pow(p3, c))*(pow(p4, d))
            return prob_prod

        # factorial() and multinomial_coefficient() work in tandem to calculate C, the multinomial coefficient
        def factorial(n):
            if n <= 1:
                return 1
            return n*factorial(n-1)

        def multinomial_coefficient(a, b, c, d):
            n = 24.0
            multi_coeff = (n/(factorial(a) * factorial(b) * factorial(c) * factorial(d)))
            return multi_coeff

    The problem is how best to structure the data in order to tackle the calculation most efficiently, in a manner that I can read (you guys write some cryptic code :-)) and that will not create an overflow or runtime error. To date, my data is represented as nested lists.

        amino_acids = [['A', '0.25', '1'], ['S', '0.25', '1'], ['T', '0.25', '1'], ['P', '0.25', '1']]
        quadruplets = [['ASTP', '1']]

    I initially intended calling these functions within a nested for loop, but this resulted in runtime errors or overflow errors. I know that I can reset the recursion limit, but I would rather do this more elegantly. I had the following:

        for i in quadruplets:
            quad = i[0].split(' ')
            for j in amino_acids:
                for k in quadruplets:
                    for v in k:
                        if j[0] == v:
                            multinomial_coefficient(int(j[2]), int(j[2]), int(j[2]), int(j[2]))

    I haven't really gotten to how to incorporate the other functions yet. I think that my current nested list arrangement is suboptimal. I wish to compare each letter within the string 'ASTP' with the first component of each sublist in amino_acids. Where a match exists, I wish to pass the appropriate numeric values to the functions using indices. Is there a better way? Can I append the appropriate numbers for each amino acid and quadruplet to a temporary data structure within a loop, pass this to the functions and clear it for the next iteration? Thanks, S :-)
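    For the nested-list layout above, one way to avoid both the recursion limit and the quadruple loop is to index the background frequencies in a dictionary and compute each multinomial probability with an iterative factorial. The following is a minimal sketch of that arrangement, not a definitive implementation; it assumes the two input files have already been parsed into the amino_acids and quadruplets lists shown above. As a check, for ASTP with n = 4 and p = 0.25 per residue it gives 4!/(1!*1!*1!*1!) * 0.25^4 = 24 * 0.00390625 = 0.09375.

        import math

        # Assumes the two tab-delimited files have already been parsed into the
        # nested-list form shown in the question.
        amino_acids = [['A', '0.25', '1'], ['S', '0.25', '1'],
                       ['T', '0.25', '1'], ['P', '0.25', '1']]
        quadruplets = [['ASTP', '1']]

        # Index the background frequencies by residue letter for O(1) lookup.
        freq = {row[0]: float(row[1]) for row in amino_acids}

        def multinomial_pmf(counts, probs):
            # math.factorial is iterative, so no recursion limit applies, and
            # the coefficient stays an exact integer.
            n = sum(counts)
            coeff = math.factorial(n)
            for c in counts:
                coeff //= math.factorial(c)
            prob = 1.0
            for c, p in zip(counts, probs):
                prob *= p ** c
            return coeff * prob

        for quad, count in quadruplets:
            residues = sorted(set(quad))                  # distinct outcomes k
            counts = [quad.count(r) for r in residues]    # x1..xk
            probs = [freq[r] for r in residues]           # p1..pk
            print(quad, count, multinomial_pmf(counts, probs))

    For larger counts, the same calculation can be done in log space with math.lgamma to avoid overflow.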

    Read the article

  • getting rid of filesort on WordPress MySQL query

    - by Hans
    An instance of WordPress that I manage goes down about once a day due to this monster MySQL query taking far too long: SELECT SQL_CALC_FOUND_ROWS distinct wp_posts.* FROM wp_posts LEFT JOIN wp_term_relationships ON (wp_posts.ID = wp_term_relationships.object_id) LEFT JOIN wp_term_taxonomy ON wp_term_taxonomy.term_taxonomy_id = wp_term_relationships.term_taxonomy_id LEFT JOIN wp_ec3_schedule ec3_sch ON ec3_sch.post_id=id WHERE 1=1 AND wp_posts.ID NOT IN ( SELECT tr.object_id FROM wp_term_relationships AS tr INNER JOIN wp_term_taxonomy AS tt ON tr.term_taxonomy_id = tt.term_taxonomy_id WHERE tt.taxonomy = 'category' AND tt.term_id IN ('1050') ) AND wp_posts.post_type = 'post' AND (wp_posts.post_status = 'publish') AND NOT EXISTS (SELECT * FROM wp_term_relationships JOIN wp_term_taxonomy ON wp_term_taxonomy.term_taxonomy_id = wp_term_relationships.term_taxonomy_id WHERE wp_term_relationships.object_id = wp_posts.ID AND wp_term_taxonomy.taxonomy = 'category' AND wp_term_taxonomy.term_id IN (533,3567) ) AND ec3_sch.post_id IS NULL GROUP BY wp_posts.ID ORDER BY wp_posts.post_date DESC LIMIT 0, 10; What do I have to do to get rid of the very slow filesort? I would think that the multicolumn type_status_date index would be fast enough. The EXPLAIN EXTENDED output is below. +----+--------------------+-----------------------+--------+-----------------------------------+------------------+---------+---------------------------------------------------------------------------------+------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+--------------------+-----------------------+--------+-----------------------------------+------------------+---------+---------------------------------------------------------------------------------+------+----------------------------------------------+ | 1 | PRIMARY | wp_posts | ref | type_status_date | type_status_date | 124 | const,const | 7034 | Using where; Using temporary; Using filesort | | 1 | PRIMARY | wp_term_relationships | ref | PRIMARY | PRIMARY | 8 | bwog_wordpress_w.wp_posts.ID | 373 | Using index | | 1 | PRIMARY | wp_term_taxonomy | eq_ref | PRIMARY | PRIMARY | 8 | bwog_wordpress_w.wp_term_relationships.term_taxonomy_id | 1 | Using index | | 1 | PRIMARY | ec3_sch | ref | post_id_index | post_id_index | 9 | bwog_wordpress_w.wp_posts.ID | 1 | Using where; Using index | | 3 | DEPENDENT SUBQUERY | wp_term_taxonomy | range | PRIMARY,term_id_taxonomy,taxonomy | term_id_taxonomy | 106 | NULL | 2 | Using where | | 3 | DEPENDENT SUBQUERY | wp_term_relationships | eq_ref | PRIMARY,term_taxonomy_id | PRIMARY | 16 | bwog_wordpress_w.wp_posts.ID,bwog_wordpress_w.wp_term_taxonomy.term_taxonomy_id | 1 | Using index | | 2 | DEPENDENT SUBQUERY | tt | const | PRIMARY,term_id_taxonomy,taxonomy | term_id_taxonomy | 106 | const,const | 1 | | | 2 | DEPENDENT SUBQUERY | tr | eq_ref | PRIMARY,term_taxonomy_id | PRIMARY | 16 | func,const | 1 | Using index | +----+--------------------+-----------------------+--------+-----------------------------------+------------------+---------+---------------------------------------------------------------------------------+------+----------------------------------------------+ 8 rows in set, 2 warnings (0.05 sec) And CREATE TABLE: CREATE TABLE `wp_posts` ( `ID` bigint(20) unsigned NOT NULL auto_increment, `post_author` bigint(20) unsigned NOT NULL default '0', `post_date` datetime NOT NULL default '0000-00-00 00:00:00', `post_date_gmt` datetime 
NOT NULL default '0000-00-00 00:00:00', `post_content` longtext NOT NULL, `post_title` text NOT NULL, `post_excerpt` text NOT NULL, `post_status` varchar(20) NOT NULL default 'publish', `comment_status` varchar(20) NOT NULL default 'open', `ping_status` varchar(20) NOT NULL default 'open', `post_password` varchar(20) NOT NULL default '', `post_name` varchar(200) NOT NULL default '', `to_ping` text NOT NULL, `pinged` text NOT NULL, `post_modified` datetime NOT NULL default '0000-00-00 00:00:00', `post_modified_gmt` datetime NOT NULL default '0000-00-00 00:00:00', `post_content_filtered` text NOT NULL, `post_parent` bigint(20) unsigned NOT NULL default '0', `guid` varchar(255) NOT NULL default '', `menu_order` int(11) NOT NULL default '0', `post_type` varchar(20) NOT NULL default 'post', `post_mime_type` varchar(100) NOT NULL default '', `comment_count` bigint(20) NOT NULL default '0', `robotsmeta` varchar(64) default NULL, PRIMARY KEY (`ID`), KEY `post_name` (`post_name`), KEY `type_status_date` (`post_type`,`post_status`,`post_date`,`ID`), KEY `post_parent` (`post_parent`), KEY `post_date` (`post_date`), FULLTEXT KEY `post_related` (`post_title`,`post_content`) )

    Read the article

  • How to optimize my PageRank calculation?

    - by asmaier
    In the book Programming Collective Intelligence I found the following function to compute the PageRank: def calculatepagerank(self,iterations=20): # clear out the current PageRank tables self.con.execute("drop table if exists pagerank") self.con.execute("create table pagerank(urlid primary key,score)") self.con.execute("create index prankidx on pagerank(urlid)") # initialize every url with a PageRank of 1.0 self.con.execute("insert into pagerank select rowid,1.0 from urllist") self.dbcommit() for i in range(iterations): print "Iteration %d" % i for (urlid,) in self.con.execute("select rowid from urllist"): pr=0.15 # Loop through all the pages that link to this one for (linker,) in self.con.execute("select distinct fromid from link where toid=%d" % urlid): # Get the PageRank of the linker linkingpr=self.con.execute("select score from pagerank where urlid=%d" % linker).fetchone()[0] # Get the total number of links from the linker linkingcount=self.con.execute("select count(*) from link where fromid=%d" % linker).fetchone()[0] pr+=0.85*(linkingpr/linkingcount) self.con.execute("update pagerank set score=%f where urlid=%d" % (pr,urlid)) self.dbcommit() However, this function is very slow, because of all the SQL queries in every iteration >>> import cProfile >>> cProfile.run("crawler.calculatepagerank()") 2262510 function calls in 136.006 CPU seconds Ordered by: standard name ncalls tottime percall cumtime percall filename:lineno(function) 1 0.000 0.000 136.006 136.006 <string>:1(<module>) 1 20.826 20.826 136.006 136.006 searchengine.py:179(calculatepagerank) 21 0.000 0.000 0.528 0.025 searchengine.py:27(dbcommit) 21 0.528 0.025 0.528 0.025 {method 'commit' of 'sqlite3.Connecti 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler 1339864 112.602 0.000 112.602 0.000 {method 'execute' of 'sqlite3.Connec 922600 2.050 0.000 2.050 0.000 {method 'fetchone' of 'sqlite3.Cursor' 1 0.000 0.000 0.000 0.000 {range} So I optimized the function and came up with this: def calculatepagerank2(self,iterations=20): # clear out the current PageRank tables self.con.execute("drop table if exists pagerank") self.con.execute("create table pagerank(urlid primary key,score)") self.con.execute("create index prankidx on pagerank(urlid)") # initialize every url with a PageRank of 1.0 self.con.execute("insert into pagerank select rowid,1.0 from urllist") self.dbcommit() inlinks={} numoutlinks={} pagerank={} for (urlid,) in self.con.execute("select rowid from urllist"): inlinks[urlid]=[] numoutlinks[urlid]=0 # Initialize pagerank vector with 1.0 pagerank[urlid]=1.0 # Loop through all the pages that link to this one for (inlink,) in self.con.execute("select distinct fromid from link where toid=%d" % urlid): inlinks[urlid].append(inlink) # get number of outgoing links from a page numoutlinks[urlid]=self.con.execute("select count(*) from link where fromid=%d" % urlid).fetchone()[0] for i in range(iterations): print "Iteration %d" % i for urlid in pagerank: pr=0.15 for link in inlinks[urlid]: linkpr=pagerank[link] linkcount=numoutlinks[link] pr+=0.85*(linkpr/linkcount) pagerank[urlid]=pr for urlid in pagerank: self.con.execute("update pagerank set score=%f where urlid=%d" % (pagerank[urlid],urlid)) self.dbcommit() This function is 20 times faster (but uses a lot more memory for all the temporary dictionaries) because it avoids the unnecessary SQL queries in every iteration: >>> cProfile.run("crawler.calculatepagerank2()") 64802 function calls in 6.950 CPU seconds Ordered by: standard name ncalls tottime percall 
cumtime percall filename:lineno(function) 1 0.004 0.004 6.950 6.950 <string>:1(<module>) 1 1.004 1.004 6.946 6.946 searchengine.py:207(calculatepagerank2 2 0.000 0.000 0.104 0.052 searchengine.py:27(dbcommit) 23065 0.012 0.000 0.012 0.000 {meth 'append' of 'list' objects} 2 0.104 0.052 0.104 0.052 {meth 'commit' of 'sqlite3.Connection 1 0.000 0.000 0.000 0.000 {meth 'disable' of '_lsprof.Profiler' 31298 5.809 0.000 5.809 0.000 {meth 'execute' of 'sqlite3.Connectio 10431 0.018 0.000 0.018 0.000 {method 'fetchone' of 'sqlite3.Cursor' 1 0.000 0.000 0.000 0.000 {range} But is it possible to further reduce the number of SQL queries to speed up the function even more?
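    Two further reductions are possible while keeping the same structure: the per-URL "select distinct fromid from link where toid=..." and "select count(*) from link where fromid=..." lookups can be replaced by a single pass over "select fromid, toid from link" that fills inlinks and numoutlinks in Python, and the final per-URL UPDATE loop can be collapsed into one executemany call with a parameterized statement. Below is a minimal sketch of the batched update only, assuming self.con is the sqlite3 connection visible in the profile output; it is one option, not the only way to write it.

        def write_pagerank(self, pagerank):
            # One executemany round trip instead of one UPDATE per URL; the
            # "?" placeholders also avoid the string formatting used above.
            self.con.executemany(
                "update pagerank set score=? where urlid=?",
                [(score, urlid) for urlid, score in pagerank.items()])
            self.dbcommit()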

    Read the article

  • Using native MySQL driver in Erlang

    - by Mickey Shine
    I am using native MySQL driver (http://code.google.com/p/erlang-mysql-driver/) with mochiweb. When I tried that MySQL driver in shell mode, all woked fine. But when I write some code with Mochiweb, it reported me the following error: =CRASH REPORT==== 4-Jul-2009::04:44:29 === crasher: initial call: mochiweb_socket_server:acceptor_loop/1 pid: <0.61.0> registered_name: [] exception error: no function clause matching mysql:fetch(p1,<<"SELECT * FROM cdb_forums LIMIT 10">>) in function perly_web:loop/2 in call from mochiweb_http:headers/5 ancestors: [perly_web,perly_sup,<0.58.0>] messages: [] links: [<0.60.0>,#Port<0.965>] dictionary: [{mochiweb_request_body,undefined}, {mochiweb_request_qs,[]}, {mochiweb_request_post,[]}, {mochiweb_request_path,"/online"}, {mochiweb_request_cookie, [{"04c_sid","hG9Oyv"}, {"04c_visitedfid","2"}, {"kQx_cookietime","2592000"}, {"kQx_loginuser","admin"}, {"kQx_activationauth", "98b3mdX86fKT9dI4WyKuL61Tqxk%2BW1r6ACpHp9y8itH2xQ"}, {"smile","1D1"}]}] trap_exit: false status: running heap_size: 1597 stack_size: 24 reductions: 5188 neighbours: The code I write in Mochiweb is start(Options) -> {DocRoot, Options1} = get_option(docroot, Options), Loop = fun (Req) -> ?MODULE:loop(Req, DocRoot) end, % we’ll set our maximum to 1 million connections. (default: 2048) mochiweb_http:start([{max, 1000000}, {name, ?MODULE}, {loop, Loop} | Options1]), mysql:start_link(p1, "10.0.0.123", "root", "root", "test"). stop() -> mochiweb_http:stop(?MODULE). loop(Req, DocRoot) -> "/" ++ Path = Req:get(path), case Req:get(method) of Method when Method =:= 'GET'; Method =:= 'HEAD' -> case Path of "online" -> Result1 = mysql:fetch(p1, <<"SELECT * FROM cdb_forums LIMIT 10">>), Body1 = io:format("Result1: ~p~n", [Result1]), Req:ok({"text/plain", Body1}); The connection looks good but when I added Result1 = mysql:fetch(p1, <<"SELECT * FROM cdb_forums LIMIT 10">>), it crashed. Can someone help me? Thanks in advance~ //================================================== updated: I noticed the follwoing information. If that is correct? =PROGRESS REPORT==== 4-Jul-2009::05:49:32 === supervisor: {local,kernel_safe_sup} started: [{pid,<0.65.0>}, {name,inet_gethost_native_sup}, {mfa,{inet_gethost_native,start_link,[]}}, {restart_type,temporary}, {shutdown,1000}, {child_type,worker}] mysql_conn: greeting version "5.1.33-log" (protocol 10) salt "ne0_m'vA" caps 63487 serverchar <<8,2,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0>> salt2 "!|o;vabJ*4bt" mysql_auth send packet 1: <<5,162,0,0,64,66,15,0,8,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,114,111,111,116,0,20,52,235,78, 173,36,251,201,242,172,139,113,231,253,181,245,3, 91,198,111,135>> Link: {ok,<0.62.0>} =SUPERVISOR REPORT==== 4-Jul-2009::05:49:32 === Supervisor: {local,perly_sup} Context: start_error Reason: ok Offender: [{pid,undefined}, {name,perly_web}, {mfa, {perly_web,start, [[{ip,"0.0.0.0"}, {port,8000}, {docroot, "/work/mochiweb-read-only/scripts/perly/priv/www"}]]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}]

    Read the article

  • Keeping files or database records? Java and Python

    - by danpalmer
    My website will use a Neural Network to predict things based on user data. The user can select the data to be used in training the network and then use their trained network to predict things. I am using a framework to create, train and query the networks. This uses Java. The framework has persistence for saving a network to an XML file. What is the best way to store these files? I can see several potential ideas, but I need help on choosing which is best:

    1. Save each network to a separate XML file with a name that is stored in the database. Load this each time.
    2. Save all the networks to the same XML file, with each network having a different name that is stored in the database.
    3. Somehow pass what would normally be written to an XML file to the Django site for writing to the database. This would need to be returned to the Java code when a prediction needs to be made.

    I am able to do 1 or 2, but I think their performance will be quite limited, and I am on shared hosting at the moment, so I don't know how pleased they would be with thousands of files. Also, after adding a few thousand records to one XML file, I was noticing a massive performance hit on saving to it. If I were able to implement version 3 somehow, I think it would be best. There would be no issues with separate processes accessing the database, and I think performance would be better. Not to mention having no files lying around. However, the part of the neural network framework I am using (Encog) that saves to a file needs access to a Java file object, not a string that could be saved to a database. Unless there is some Java magic I can do here (I know very little Java), the only way I can see of doing this would be with a temporary file, but I don't know if this is the correct way to do it. I would appreciate any ideas on the best way to implement any of the above 3 ideas or any alternatives. Thanks!
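    If option 3 is implemented by bridging through a temporary file, a minimal sketch of the Python/Django side might look like the following. Here network_xml stands for the persisted XML text pulled from the database, and predictor.jar is a hypothetical command-line wrapper around the Java/Encog code; both names are assumptions, not part of Encog itself.

        import os
        import subprocess
        import tempfile

        def predict_with_network(network_xml, input_path):
            # The Java side (Encog persistence) wants a file path, so write the
            # stored XML out to a temporary file first.
            with tempfile.NamedTemporaryFile(suffix=".xml", delete=False) as f:
                f.write(network_xml.encode("utf-8"))
                temp_path = f.name
            try:
                # Hypothetical CLI wrapper; replace with however the Java code
                # is actually invoked.
                return subprocess.check_output(
                    ["java", "-jar", "predictor.jar", temp_path, input_path])
            finally:
                os.remove(temp_path)

    This keeps the database as the single source of truth while still giving the Java framework the file object it expects.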

    Read the article

< Previous Page | 67 68 69 70 71 72 73 74 75 76 77 78  | Next Page >