Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.


  • different data with same title and keywords

    - by Junaid Saeed
    Here is my scenario: I have a website where I redirect users based on the device they are visiting from. Say a user arrives from an iPad: I take them directly to the page of iPad wallpapers, they select their iPad version, and I take them to a gallery where they can select and download any wallpaper, each already at the required resolution (I have my reasons for doing it this way). Now, different resolution versions of the same image appear in 5 different sections of my website, each with its own view page, but there is only one record in the database table for the image; based on my consistent naming convention for the images, I pick the required file. This means that when the 5 different pages are generated in the 5 categorized sections of the website, they share a single DB record, so the keywords, the titles, and every single detail of the 5 pages are the same apart from the resolution of the image and the section-specific details. The pages also have different paths, like:

        wallpapers.com/ipad-1/cars/Ferrari-dino.html
        wallpapers.com/ipad-2/cars/Ferrari-dino.html
        wallpapers.com/ipad-3/cars/Ferrari-dino.html
        wallpapers.com/ipad-4/cars/Ferrari-dino.html
        wallpapers.com/ipad-5/cars/Ferrari-dino.html

    That is my scenario. How do search engines see it, and how do they rank it? Is it a good, normal, or bad SEO practice? If bad, how dangerous is it for my site's SEO? I need your comments on my scenario.


  • Model won't render in my XNA game

    - by Daniel Lopez
    I am trying to create a simple 3D game, but things aren't working out as they should. For instance, the model will not display. I created a class that does the rendering, so I think that is where the problem lies. P.S. I am using models from the MSDN website, so I know the models are compatible with XNA. Code:

        class ModelRenderer
        {
            private float aspectratio;
            private Model model;
            private Vector3 camerapos;
            private Vector3 modelpos;
            private Matrix rotationy;
            float radiansy = 0;

            public ModelRenderer(Model m, float AspectRatio, Vector3 initial_pos, Vector3 initialcamerapos)
            {
                model = m;
                if (model.Meshes.Count == 0)
                {
                    throw new Exception("Invalid model because it contains zero meshes!");
                }
                modelpos = initial_pos;
                camerapos = initialcamerapos;
                aspectratio = AspectRatio;
                return;
            }

            public Vector3 CameraPosition
            {
                set { camerapos = value; }
                get { return camerapos; }
            }

            public Vector3 ModelPosition
            {
                set { modelpos = value; }
                get { return modelpos; }
            }

            public void RotateY(float radians)
            {
                radiansy += radians;
                rotationy = Matrix.CreateRotationY(radiansy);
            }

            public float AspectRatio
            {
                set { aspectratio = value; }
                get { return aspectratio; }
            }

            public void Draw()
            {
                Matrix world = Matrix.CreateTranslation(modelpos) * rotationy;
                Matrix view = Matrix.CreateLookAt(this.CameraPosition, this.ModelPosition, Vector3.Up);
                Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f), this.AspectRatio, 1.0f, 10000f);
                model.Draw(world, view, projection);
            }
        }

    If you need more code, just leave a comment.
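    One detail worth checking in the code above: rotationy is a Matrix field that is only assigned inside RotateY(), so unless RotateY() is called before Draw(), the world matrix is multiplied by a default (all-zero) Matrix and nothing will be visible. A minimal sketch of a possible fix, assuming XNA 4.0 (this is a guess at the cause, not a confirmed diagnosis):

        // Initialize the rotation to the identity matrix so Draw() produces a
        // valid world transform even before RotateY() is ever called.
        private Matrix rotationy = Matrix.Identity;

        public void Draw()
        {
            // Rotating first and translating second makes the model spin around
            // its own axis instead of orbiting the origin (assumed intent).
            Matrix world = rotationy * Matrix.CreateTranslation(modelpos);
            Matrix view = Matrix.CreateLookAt(this.CameraPosition, this.ModelPosition, Vector3.Up);
            Matrix projection = Matrix.CreatePerspectiveFieldOfView(
                MathHelper.ToRadians(45.0f), this.AspectRatio, 1.0f, 10000f);
            model.Draw(world, view, projection);
        }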


  • Oracle Java Cloud Service - Platform as a Service for Your Java Applications

    - by GeneEun
    Oracle Java Cloud Service is an enterprise-grade Platform as a Service for developing, testing, and deploying business applications. For Java developers, Java Cloud Service provides the power, flexibility, and performance of a true Java EE container in the cloud. Java Cloud Service delivers one of the key advantages of the Java platform: the ability to "write once, run anywhere". Because of the standards-based approach, there's no need to worry that applications you build and deploy are forever locked into the Oracle Cloud. In fact, you can use Java Cloud Service just as you would an on-premise Java EE environment and deploy your Java applications on a Java Cloud Service instance as-is. Provisioning of Java Cloud Service instances is self-service and takes only minutes, making access to Java environments both quick and easy. Java Cloud Service instances are also automatically associated with Oracle Database Cloud Service instances, so there's no complex setup involved in getting a complete application environment up and running.

    If you're attending Oracle OpenWorld in San Francisco this week, I'm sure you've seen that there are many sessions covering Oracle Cloud services, including Java Cloud Service. Each session will provide a wealth of information, so I highly recommend you consult your conference schedule and try to check them out. In the meantime, here's a short video about Java Cloud Service. Enjoy!


  • Development environment to manage multiple Oracle databases

    - by jkohlhepp
    I am in an enterprise environment where we have applications that need to run against multiple Oracle databases. Developers may need to manage multiple vintages of these databases to support different test data or to diagnose bugs against different versions of the code. Right now, we have a limited set of test environments set up on "real" Oracle servers within the data center. We juggle these among development and QA groups, and a lot of conflicts and inefficiencies arise because of it. I am taking a look at Oracle Express Edition, which would allow me to spin up a local Oracle database. This is similar to the workflow I most often see with SQL Server: devs work on their local machine until they are ready to integrate, and then they push their DB changes to integration/QA environments. However, from what I read, it seems that Oracle XE only supports one database instance at a time. So if I have an application that utilizes two different databases, I can't have both of them running on my local machine. Is that correct? Do the Oracle Standard or Personal editions get around this limitation? If I had one of those installed locally, how difficult would it be to get multiple databases working on the same development machine? How do dev shops handle developing against Oracle when they need several different Oracle instances for their applications?


  • Task Manager: VM Size smaller than Mem usage?

    - by shoosh
    The Windows XP Task Manager can show two different columns regarding the memory usage of processes. One is called Mem Usage and the other is VM Size (not on by default; you need to activate it). From what I've gathered, VM Size is the size of the entire memory space occupied by the process, and Mem Usage is the amount of memory currently committed and used. This assumption is verified by most processes, where VM Size is only slightly larger than Mem Usage; for instance, my Outlook currently has 79,724 K in VM Size and 56,600 K in Mem Usage. But it fails for other processes such as Firefox, which currently has 171,900 K for Mem Usage and only 156,440 K in VM Size. How can a process use more memory than the amount of virtual memory allocated to it? So maybe my interpretation of these columns is wrong. What do they actually mean?


  • Did the developers of Java consciously abandon RAII?

    - by JoelFan
    As a long-time C# programmer, I have recently come to learn more about the advantages of Resource Acquisition Is Initialization (RAII). In particular, I have discovered that the C# idiom:

        using (var dbConn = new DbConnection(connStr))
        {
            // do stuff with dbConn
        }

    has the C++ equivalent:

        {
            DbConnection dbConn(connStr);
            // do stuff with dbConn
        }

    meaning that remembering to enclose the use of resources like DbConnection in a using block is unnecessary in C++! This seems to be a major advantage of C++. It is even more convincing when you consider a class that has an instance member of type DbConnection, for example:

        class Foo
        {
            DbConnection dbConn;
            // ...
        }

    In C# I would need to have Foo implement IDisposable, as such:

        class Foo : IDisposable
        {
            DbConnection dbConn;

            public void Dispose()
            {
                dbConn.Dispose();
            }
        }

    and what's worse, every user of Foo would need to remember to enclose Foo in a using block, like:

        using (var foo = new Foo())
        {
            // do stuff with "foo"
        }

    Now, looking at C# and its Java roots, I am wondering: did the developers of Java fully appreciate what they were giving up when they abandoned the stack in favor of the heap, thus abandoning RAII? (Similarly, did Stroustrup fully appreciate the significance of RAII?)
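    For reference, the C# using statement is defined as shorthand for a try/finally block, which is what makes the comparison with C++ destructors apt. This expansion is standard C# semantics, shown here with the question's hypothetical DbConnection class:

        // What "using (var dbConn = new DbConnection(connStr)) { ... }"
        // approximately expands to:
        DbConnection dbConn = new DbConnection(connStr);
        try
        {
            // do stuff with dbConn
        }
        finally
        {
            if (dbConn != null)
                ((IDisposable)dbConn).Dispose();
        }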


  • How do graphics programmers deal with rendering vertices that don't change the image?

    - by canisrufus
    So, the title is a little awkward. I'll give some background, and then ask my question.

    Background: I work as a web GIS application developer, but in my spare time I've been playing with map rendering and improving data interchange formats. I work only in 2D space. One interesting issue I've encountered is that when you're rendering a polygon at a small scale (zoomed way out), many of the vertices are redundant. An extreme case would be a polygon with 500,000 vertices that only takes up a single pixel. If you're sending this data to the browser, it would make sense to omit ~499,999 of those vertices. One way we achieve that is by rendering an image on the server and sending it as a PNG: voila, it's a point. Sometimes, though, we want data sent to the browser where it can be rendered with SVG (or canvas, or WebGL) so that it can be interactive.

    The problem: It turns out that, using modern geographic data sets, it's very easy to overload SVG's rendering abilities. In an effort to cope with those limitations, I'm trying to figure out how to visually losslessly reduce a data set for a given scale and map extent (and, if necessary, for a known map pixel width and height). I got a great reduction in data size just using the Douglas-Peucker algorithm, and I believe I was able to get it to keep the polygons true to within one pixel. Unfortunately, Douglas-Peucker doesn't preserve topology, so it changed how borders between polygons got rendered. I couldn't readily find other algorithms to try out and adapt to the purpose, but I don't have much CS/algorithm background and might not recognize them if I saw them.
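    For readers unfamiliar with it, here is a baseline sketch of the standard Douglas-Peucker reduction in C# (the Point type and the epsilon tolerance are illustrative; as noted above, this variant does not preserve shared borders between polygons):

        using System;
        using System.Collections.Generic;

        struct Point
        {
            public double X, Y;
            public Point(double x, double y) { X = x; Y = y; }
        }

        static class Simplifier
        {
            // Perpendicular distance from point p to the line through a and b.
            static double Distance(Point p, Point a, Point b)
            {
                double dx = b.X - a.X, dy = b.Y - a.Y;
                double len = Math.Sqrt(dx * dx + dy * dy);
                if (len == 0)
                    return Math.Sqrt((p.X - a.X) * (p.X - a.X) + (p.Y - a.Y) * (p.Y - a.Y));
                return Math.Abs(dy * p.X - dx * p.Y + b.X * a.Y - b.Y * a.X) / len;
            }

            // Keep the farthest point if it deviates more than epsilon and
            // recurse on both halves; otherwise collapse the run to its endpoints.
            public static List<Point> DouglasPeucker(List<Point> pts, double epsilon)
            {
                if (pts.Count < 3)
                    return new List<Point>(pts);

                int index = 0;
                double maxDist = 0;
                for (int i = 1; i < pts.Count - 1; i++)
                {
                    double d = Distance(pts[i], pts[0], pts[pts.Count - 1]);
                    if (d > maxDist) { maxDist = d; index = i; }
                }

                if (maxDist <= epsilon)
                    return new List<Point> { pts[0], pts[pts.Count - 1] };

                List<Point> left = DouglasPeucker(pts.GetRange(0, index + 1), epsilon);
                List<Point> right = DouglasPeucker(pts.GetRange(index, pts.Count - index), epsilon);
                left.RemoveAt(left.Count - 1); // the split point appears in both halves
                left.AddRange(right);
                return left;
            }
        }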


  • Entering data into AD LDS

    - by Robert Koritnik
    I need some help configuring AD LDS (Active Directory Lightweight Directory Services). I'm not an administrator, have never configured domains, and I don't have a clue how to add new users to existing domains. The thing is, I need to develop an app that must be connected to AD. I've chosen AD LDS because I can install it on Windows 7 and it acts as an Active Directory even though there's no domain controller present in the network. What I've done so far:

    - I've installed AD LDS
    - I've added a new instance with the application directory partition name DN=Air,DC=Watanabe,DC=pri
    - I can connect to it using ADSI Edit and see all kinds of strange objects

    But now I don't know what to do. When it opens I can see the window below, but what's next? Can anybody give me some guidelines on how I can add domain users, so I can use them in the app that requires AD?
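    If the goal is to create users programmatically rather than through ADSI Edit, a System.DirectoryServices sketch along these lines might help. The host, port, attribute values, and user name are assumptions, and the partition is written as CN=Air,DC=Watanabe,DC=pri on the assumption that "DN=" in the question was a typo for "CN="; also note that setting a password over plain LDAP is typically only allowed on an SSL connection:

        using System;
        using System.DirectoryServices;

        class CreateLdsUser
        {
            static void Main()
            {
                // Bind to the AD LDS application partition from the question.
                var partition = new DirectoryEntry("LDAP://localhost:389/CN=Air,DC=Watanabe,DC=pri");

                // Create a new user object under the partition root.
                DirectoryEntry user = partition.Children.Add("CN=jdoe", "user");
                user.Properties["displayName"].Value = "John Doe";
                user.CommitChanges();

                // AD LDS passwords are set through the IADsUser interface.
                user.Invoke("SetPassword", new object[] { "P@ssw0rd!" });
                user.CommitChanges();
                Console.WriteLine("Created {0}", user.Path);
            }
        }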


  • Does IP helper forward subnet broadcasts?

    - by Eamon
    Hi, I have a device on a VLAN that uses UDP subnet broadcasts to advertise its presence to similar devices. This works fine on a single VLAN, but now I need to allow it to communicate with similar devices on a second VLAN. I thought of using the IP helper command in the router, but I am wondering if that only forwards global broadcasts (255.255.255.255)? My device sends out a subnet broadcast (e.g. 192.168.6.255) Will IP helper change the destination address to the target subnet (e.g. 192.168.7.255)? Eamon
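    For context on the traffic involved: a subnet-directed broadcast is addressed to one subnet's broadcast address rather than to the limited broadcast address 255.255.255.255. A rough C# illustration of what such an advertisement looks like on the wire (the port and payload are invented):

        using System.Net;
        using System.Net.Sockets;
        using System.Text;

        class BroadcastDemo
        {
            static void Main()
            {
                using (var udp = new UdpClient())
                {
                    udp.EnableBroadcast = true;
                    byte[] payload = Encoding.ASCII.GetBytes("DISCOVER");
                    // Directed at 192.168.6.255: every host on 192.168.6.0/24 receives it.
                    udp.Send(payload, payload.Length,
                             new IPEndPoint(IPAddress.Parse("192.168.6.255"), 30303));
                }
            }
        }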


  • SQL SERVER – A Cool Trick – Restoring the Default SQL Server Management Studio – SSMS

    - by pinaldave
    “I do not know where my windows went!” “I just closed my Object Explorer and now I cannot find it.” “How do I get my original windows layout back in SQL Server Management Studio?” “How do I get the window which was on the left side back again?” For the last 2-3 years, I have received more than 5 emails every single day about SSMS and its layout. It is very common for beginners to get confused when they attempt to change SQL Server Management Studio’s window layout. They often change the layout and are not able to get the original layout back. Many people never change the layout at all, which leads to an uncomfortable feeling when they go to another person’s computer where the windows are placed differently. Today’s blog post is dedicated to all the beginners in SQL Server. It is extremely simple to reset the SSMS layout to the default layout, which involves 2 major things: 1) Object Explorer on the left side, and 2) the query windows on the right side (80% of the screen estate). Personally, I am so used to this that I do not enjoy working in an environment where it has been changed. Well, the solution to reset the SSMS layout is very simple; one can do it in a split second. To restore the default configuration, on the Window menu, click Reset Window Layout. Have you ever used this feature? Do you feel uncomfortable when the SSMS layout is not in its default state? How do you address this situation? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology


  • What paths are guaranteed to exist on Windows Server 2008 R2?

    - by jpmc26
    What paths are guaranteed to exist on a Windows Server 2008 R2 instance? A client is requiring that some instructions specify exact paths in all cases. (The person executing said instructions is not supposed to have to decide on any path themselves, even when the path makes absolutely no difference.) So I need to know what paths I can rely on to be there. It's fine with me if they involve environment variables, but they need to be variables guaranteed to hold an existing path (that is, with no chance of them expanding to a path that doesn't exist). Or are there no guaranteed paths?
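    One practical anchor (a fact about Windows generally, not specific to this client): the %SystemRoot% variable (alias %windir%) is set on every Windows installation and always points at an existing directory, although the drive letter is not guaranteed to be C:. A small C# sketch for resolving such variables:

        using System;

        class PathCheck
        {
            static void Main()
            {
                // Expands to something like C:\Windows\System32 on a default install.
                Console.WriteLine(Environment.ExpandEnvironmentVariables(@"%SystemRoot%\System32"));

                // The same directory, resolved through the .NET special-folder API.
                Console.WriteLine(Environment.GetFolderPath(Environment.SpecialFolder.System));
            }
        }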


  • What is meant by the terms CPU, Core, Die and Package?

    - by lovesh
    Now, this might sound like too many previous questions, but I am really confused about these terms. I was trying to understand how "dual core" is different from "Core 2 Duo", and I came across some answers. For example, this answer states:

        Core 2 Duo has two cores inside a single physical package and dual core is 2 cpu in a package
        2 cpu's in a die = 2 cpu's made together
        2 cpu's in package = 2 cpu's on small board or linked in some way

    Now, is a core different from a CPU? What I understand is that the thing that does all the heavy computation, decision making, math, and other stuff (aka "processing") is called a CPU. So what is a core? And what is a processor when somebody says he has got a Core 2 Duo? And in this context, what is a package and what is a die? I still don't understand the difference between Core 2 Duo and dual core. And can somebody explain hyper-threading (simultaneous multithreading) too, if they are feeling super generous?


  • Specify a custom dictionary for FxCop and Visual Studio source analysis

    - by Marko Apfel
    Renaming the default custom dictionary from CustomDictionary.xml to another name – for instance, FxCop.CustomDictionary.xml – requires some additional changes in the applications involved.

    Visual Studio Team System code analysis

    For Visual Studio Team System code analysis, this file should be added as a link to all projects, with its Build Action set to CodeAnalysisDictionary.

    Build target

    In a build target, the command line tool FxCopCmd should be called with the /dictionary parameter:

        <Target Name="FxCop">
          <Exec Command="&quot;$(ProjectDir)..\..\build\FxCop\FxCopCmd.exe&quot; /file:&quot;$(TargetPath)&quot; /project:&quot;$(ProjectDir)..\EsriDE.SfgPraxair.FxCop&quot; /directory:&quot;$(ProjectDir)..\..\lib\Esri.ArcGIS&quot; /directory:&quot;$(ProjectDir)..\..\lib\Microsoft&quot; /dictionary:&quot;$(ProjectDir)..\FxCop.CustomDictionary.xml&quot; /out:&quot;$(OutDir)..\$(ProjectName).FxCopReport.xml&quot; /console /forceoutput /ignoregeneratedcode" />
          <Message Text="FxCop finished." />
        </Target>

    FxCop GUI (standalone application)

    In the FxCop GUI there is no option to specify a custom file name, but you can add a hint in the FxCop project file. Open this file and look for the line:

        <CustomDictionaries SearchFxCopDir="True" SearchUserProfile="True" SearchProjectDir="True" />

    Then change it to:

        <CustomDictionaries SearchFxCopDir="True" SearchUserProfile="True" SearchProjectDir="True">
          <CustomDictionary Path="FxCop.CustomDictionary.xml"/>
        </CustomDictionaries>

    Ready :-)


  • ADFS 2.0 and WebEx

    - by DavisTasar
    We have a brand new deployment going on, where our University has purchased WebEx MeetingPlace. We have the Cisco CallManager component working, but the integration of Single Sign-On with ADFS 2.0 has been nothing short of torture. The biggest problem I'm working with is that we use split-brain DNS, and our internal domain name is different from our external domain name. I've been trying to determine what credentials are getting passed back and forth, chasing certificate errors from using the self-signed certificate, and so on. Does anyone have any experience with this, or something similar? Do you have any tips, things to watch out for, etc.? I've not worked with a federated authentication system before, and this scenario is very black-box-esque. Sorry, I'm also partially ranting because I'm frustrated.


  • Screwed up terminal after modifying bashrc

    - by omgzor
    I ended up screwing up my terminal while setting up sbt for the Coursera Scala course. I can't summon gedit (or anything else) anymore. I get the following error:

        Command 'gedit' is available in '/usr/bin/gedit'
        The command could not be located because '/usr/bin' is not included in the PATH environment variable.

    Also, each new instance of Terminal writes these messages before any command is entered:

        -bash: :/home/antonio/jdk7/jdk1.7.0_07/bin: No such file or directory
        -bash: export: `/home/antonio/Desktop/Scala/install/sbt/bin:/home/antonio/jdk7/jdk1.7.0_07/bin': not a valid identifier

    I recently did a manual installation of JDK 7, which apparently works:

        java -version
        java version "1.7.0_07"
        Java(TM) SE Runtime Environment (build 1.7.0_07-b10)
        Java HotSpot(TM) 64-Bit Server VM (build 23.3-b01, mixed mode)

    While setting up sbt, I made the mistake of editing .bashrc by writing gedit ~/.bashrc in my terminal instead of writing gedit .bashrc, and I wrote the following lines at the end of the .bashrc file that opened:

        export PATH=/PATH/TO/YOUR/jdk1.7.0-VERSION/bin:$PATH
        export PATH=/home/antonio/jdk7/jdk1.7.0_07/bin:$PATH

    What is wrong here? How can I access my .bashrc file and modify it again?


  • Unity: seeing all instances of the same application's open windows across all virtual desktops

    - by Nasser M. Abbasi
    I noticed this strange issue with Unity. I am using 12.04. The desktop has 4 virtual desktops, which I can switch between using the workspace switcher, which is very nice. But I noticed the following: when I have 2 instances of the same app (say 2 different Firefox windows, or 2 different terminal windows) in 2 different virtual desktops, and I click on the icon for that application on the launcher panel (the left strip with icons on it), the application comes into focus. When I click again right away (on the same icon on the launcher), all open instances of this application come into ONE view (maybe one was on desktop 1 and the other was on desktop 3, for example), and I can then click on the one instance window that I want to use. This is all very nice actually. But this does NOT work for all applications! I just tried it, and it worked for Firefox, for gedit, and for the GNOME terminal. I have one Firefox window open in virtual desktop 1 and another window open in virtual desktop 2. I clicked once on the Firefox icon, then again, and both windows came into the main desktop and I was able to select which one to use. When I tried the same thing with the Dolphin file manager, which I also had 2 windows (instances) of open in 2 different virtual desktops, this behavior did not happen. I clicked again, and nothing happened. Only one remained in focus, so I had to go look for the second Dolphin window the hard way. It looks like some apps are supported by this feature and some are not. How does one make it so that all applications are supported? This is a very handy feature. Is it a configuration item somewhere? Thanks


  • MSSQL Replace Database in Live Web App

    - by casoninabox
    I have a web app that is currently live. I had to make major modifications to the database, and now I need to replace the current one. My dev SQL instance is not the live one. I usually just make a backup of the new DB, blow the old one away, and restore my updated one. But now I have data I need to preserve. Most of the current tables have changed in that extra columns have been added; all existing columns are still there and unchanged. I have access to Management Studio. What is the right way to do this?


  • DB2 on SPARC T3 Tuning Tips

    - by cherry.shu(at)oracle.com
    With the new self-tuning feature in DB2 V9.x, a lot of database parameters are set to automatic in DB2 V9.7 by default, so that DB2 can adjust their values as needed. Most should work fine without manual tweaks, but for transaction workloads on SPARC T3 systems, two parameters need to be adjusted manually to achieve optimal performance.

    DATABASE_MEMORY: When this parameter is set to AUTOMATIC and SELF_TUNING_MEM is set to ON, DB2 will use a small page size (64 KB) for all memory allocation, and will expand and shrink the memory as needed. In order to take advantage of the large page sizes (up to 256 MB) supported by the SPARC T3, we need to manually set the size of DATABASE_MEMORY so that DB2 can use the 256 MB page size for its buffer pools, which are implemented as ISM segments. I know this sounds strange, as it seems that you turn one switch and it ends up controlling another function. pmap(1M) output can verify the page sizes used by the DB2 db2sysc process.

    NUM_IOCLEANERS: This parameter defines the number of page cleaners. Its default value is AUTOMATIC, which is calculated based on the number of available CPUs and the number of logical partitions. On a SPARC T3 system, where there are over a hundred virtual CPUs and a single DB2 partition, DB2 would set it to #CPUs - 1. This leads to too many page cleaners competing to flush to disk and causes aio mutex lock contention, so we need to decrease its value. A good practice is to set it to the number of physical devices used by the database table space containers.


  • Architecture advice for converting biz app from old school to new school?

    - by Aaron Anodide
    I've got a WinForms business application that evolved over the past few years. It's forms-over-data with a number of custom UI experiences tailored to the business, so I don't think it's a candidate to port to something like SharePoint or rewrite in LightSwitch (at least not without significant investment). When I started it in 2009 I was new to this type of development (I came from more low-level programming, and my RDBMS knowledge was only slightly greater than what I got from school). Thus, when I was confronted with a business model that operates on a strict monthly accounting cycle, I made the unfortunate decision to create a separate database for each accounting period. Also, when I started I knew DataSets, then I learned LINQ to SQL, then I learned Entity Framework. The screens are a mix and match of those. Now, after a few years developing this thing by myself, I've finally got a small team. Ultimately, I want a web front end (for remote access to more straightforward screens with grids of data) and a thick client (for the highly customized interfaces). My question is: can you offer me some broad-strokes architecture advice that will help me formulate a battle plan to convert over to a single database and lay the foundations for my future goals at the same time? Here's a screenshot showing how an older screen uses DataSets and a newer screen uses EF (I'm thinking this might make it more real for someone reading the question; I'm willing to add any amount of detail if someone is willing to help).


  • 2D graphics - why use spritesheets?

    - by Columbo
    I have seen many examples of how to render sprites from a spritesheet, but I haven't grasped why it is the most common way of dealing with sprites in 2D games. In the few demo applications I've made, I have started out with 2D sprite rendering by treating each animation frame for any given sprite type as its own texture, and this collection of textures is stored in a dictionary. This seems to work for me and suits my workflow pretty well, as I tend to make my animations as gif/mng files and then extract the frames to individual PNGs. Is there a noticeable performance advantage to rendering from a single sheet rather than from individual textures? With modern hardware that is capable of drawing millions of polygons to the screen a hundred times a second, does it even matter for my 2D games, which just deal with a few dozen 50x100px rectangles? The implementation details of loading a texture into graphics memory and displaying it in XNA seem pretty abstracted. All I know is that textures are bound to the graphics device when they are loaded, and then, during the game loop, the textures get rendered in batches. So it's not clear to me whether my choice affects performance. I suspect that there are some very good reasons most 2D game developers seem to be using them; I just don't understand why.
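    To make the batching point concrete, here is a minimal XNA-style sketch of drawing one animation frame out of a single sheet by selecting a source rectangle, so every sprite drawn between Begin and End shares one texture. It assumes the standard Game template's Content and spriteBatch fields; the 64x64 frame layout and asset name are made up:

        // Load the sheet once; all animation frames live in this single texture.
        Texture2D sheet = Content.Load<Texture2D>("heroSheet");

        // Pick frame 3 of a horizontal strip of 64x64 cells.
        int frame = 3;
        Rectangle source = new Rectangle(frame * 64, 0, 64, 64);

        spriteBatch.Begin();
        spriteBatch.Draw(sheet, new Vector2(100, 100), source, Color.White);
        spriteBatch.End();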


  • How to make use of movie events imported by imovie 09 from a DVR

    - by overboming
    I have imported about 100 clips of video from my DVR using iMovie 09, and they are all saved under the Movie Events folder, which is fine. The problem is that none of the movie event files are in a standard format; they are in an 'Apple Intermediate Format' that only QuickTime and iMovie recognize and play. My problem is that I simply want to give my 100 clips of video to someone using a PC, and this intermediate format produced by iMovie is not playable or convertible by anything I have found. Now the only option for me seems to be creating a project in iMovie, dragging all the clips into the project, and then exporting these 100 clips into a single standard file, but iMovie doesn't even let me do that conveniently: I can only click on a clip, select all, drag it into the project, and repeat 100 more times. Is there an alternative way to do this (or a way to use QuickTime Player to convert the video formats one by one)? Thanks


  • Timeout Considerations for Solicit Response

    - by Michael Stephenson
    Background

    One of the clients I work with had been experiencing some issues for a while surrounding web service timeouts. It's been a little challenging to work through the problems due to limitations in the diagnostic information available from one of the applications, but I learned some interesting things while troubleshooting the problem which don't seem to have been discussed much in the community, so I thought I'd share my findings.

    In the scenario we have BizTalk trying to make calls to a .NET web service which was exposed as a WSE 2 endpoint. In the process, BizTalk will try to make a large number of concurrent web service calls to the application, and the backend application has more than enough infrastructure and capability to handle the load. We have configured the <connectionManagement> section of the BizTalk configuration file to support up to 100 concurrent connections from each of our 2 BizTalk send servers to the web servers of the application.

    The problem we were facing was that the BizTalk side was reporting a significant number of timeouts when calling the web service. One of the biggest issues was the challenge of correlating a message from BizTalk to the IIS log in the .NET application and the custom logs in the application, especially when there was a fairly large number of servers hosting the web services. However, the key moment came when we were able to identify a specific call which had taken 40 seconds to execute on the server (yes, a long time, I know, but that's a different story!). Anyway, we were able to identify that this had timed out on the BizTalk side. Based on the normal 2 minute timeout, we knew something unexpected was going on.

    From here I decided to do some experimentation, and I wanted to start outside of BizTalk because my hunch was this was not a BizTalk behaviour but something which was being highlighted by BizTalk because of our large load.

    Server-side - Sample Web Service

    To begin with, I created a sample web service. Nothing special, just a vanilla ASMX web service hosted in IIS 6 on Windows 2003 Standard Edition. The web service is just a hello-world-style web service, as shown in the below picture. The only key feature is that the server-side web method has a 30 second sleep in it and will trace out some information before and after the thread is set to sleep.

    In the configuration for this web service there again is nothing special; it's pretty much the most plain and simple web service you could build.

    Client-Side

    To begin looking at what was happening with our example, I created a number of different ways to consume the web service.

    SoapHttpClientProtocol Example

    I created a small application which would use a normal generated proxy to call the web service. It would iterate around a loop and make calls using the begin/end methods so I could do this asynchronously. I would do a loop of 20 calls with the <connectionManagement> configuration section supporting only 5 concurrent connections to the server:

        <system.net>
          <connectionManagement>
            <remove address="*"/>
            <add address="*" maxconnection="12" />
            <add address="http://<ServerName>" maxconnection="5" />
          </connectionManagement>
        </system.net>

    The below picture shows an example of the service calling code. Key points are:

    - I have configured a timeout of 40 seconds for the proxy
    - I am using the asynchronous methods on the proxy to call the web service

    The Test

    I would run the client and execute 21 calls to the web service.
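    (The original calling code was shown only as a screenshot. Below is a rough C# sketch of what such code might look like, reconstructed from the key points above; the HelloService proxy class and HelloWorld method names are assumptions.)

        using System;

        class Client
        {
            static void Main()
            {
                for (int i = 0; i < 21; i++)
                {
                    // Hypothetical proxy generated from the service WSDL,
                    // deriving from SoapHttpClientProtocol.
                    var proxy = new HelloService();
                    proxy.Timeout = 40000; // 40-second timeout, in milliseconds
                    int call = i;
                    proxy.BeginHelloWorld(ar =>
                    {
                        Console.WriteLine("Call {0} returned: {1}", call, proxy.EndHelloWorld(ar));
                    }, null);
                }
                Console.ReadLine(); // keep the process alive while the async calls complete
            }
        }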
    The Results

    Below is the client side trace showing what's happening on the client, and below that is the web service side trace showing what's happening on the server. Some observations on the results are:

    - All of the calls were successful from the client's perspective.
    - You could see the next call starting on the server as soon as the previous one had completed.
    - Calls took significantly longer than 40 seconds from the start of our call to the return. In fact, call 20 took 2 minutes and 30 seconds from the perspective of my code to execute, even though I had set the timeout to 40 seconds.

    WSE 2 Sample

    In the second example I used the exact same code to call the web service again, with a single exception: I modified the web service proxy to derive from WebServicesClientProtocol, which is part of WSE 2 (using SP3). The below picture shows the basic code. The key points are:

    - I have configured a timeout of 40 seconds for the proxy
    - I am using the asynchronous methods on the proxy to call the web service

    The Test

    This test would execute 21 calls from the client to the web service.

    The Results

    The below trace is from the client side, and the one after it is from the server side. Some observations on the trace results for this scenario are:

    - With call 4, if you look at the server side trace, it did not start executing on the server until a number of seconds after the other 4 initial calls which were accepted by the server. I re-ran the test and this happened a couple of times and not on most others, so at this point I'm just putting this down to something unexpected happening on the development machine, and we will leave this observation out of scope of this article.
    - You can see that the client side trace statement executed almost immediately in all cases.
    - All calls after the initial few would time out.
    - On the client side, the calls that did time out took longer than the 40 seconds we set as the timeout to do so.
    - You can see that as calls were completing on the server, the next calls were starting to come through.
    - The calls that timed out on the client did actually connect to the server, and their server side execution completed successfully.

    Elaboration on the findings

    Based on the above observations I have drawn the below sequence diagram to illustrate conceptually what is happening. Everything except the final web service object is on the client side of the call. In the diagram below I've put two notes on the Web Service Proxy to show the two different places where the different base classes seem to start their timeout counters.

    From the earlier samples we can work out that the timeout counter for the WSE web service proxy starts before the one for the SoapHttpClientProtocol proxy, and the WSE one includes the time to get a connection from the pool, whereas the Soap proxy timeout just covers the method execution. One interesting observation is that if we re-run the above sample and increase the number of calls from 21 to 100,000, then for the WSE sample we will see a similar pattern where everything after the first few calls will time out on the client as soon as it makes a connection to the server, whereas the Soap proxy will happily plug away and process all of the calls without a single timeout. I actually set the sample running overnight, and this did happen.

    At this point you are probably thinking the same thoughts I was at the time about the differences in behaviour: which is right, and why are they different?
    I'm not sure there is a definitive answer to this in the documentation, or at least not one that I could find! I think you just have to consider that they are different and that they could have different effects depending on your messaging solution. In lots of situations this is just not an issue, as your concurrent requests don't get to the point where you end up throttling the web service calls on the client side; however, this is definitely more common with an integration broker such as BizTalk, where you often have high throughput requirements.

    Some of the considerations you should make

    Based on this behaviour you should be aware of the following:

    - In a .NET application, if you are making lots of concurrent web service calls in an asynchronous manner, your users may think they are experiencing poor performance while you think your web service is working well. The problem could be that the client has a default of 2 connections to remote servers, so you should bear this in mind.
    - When you are developing a BizTalk solution or a .NET solution with the WSE 2 stack, you may experience timeouts under load, and throttling the number of connections using the max connections element in the configuration file will not help you.
    - For an application using WSE 2 or SoapHttpClientProtocol, an expired timeout will not throw an error until after a connection to the server has been made, so you should consider this in your transaction and durability patterns.

    Our Work Around

    In the short term, for our specific scenario, we know that we can handle this by just increasing our timeout value. There is only a specific small window when we get lots of concurrent traffic that causes this scenario, so we should be able to increase the timeout to take into consideration the additional client side wait, and on the odd occasion where we do get a timeout, the BizTalk send port retry will handle it. What was causing our original problem was that for that short window we were getting a lot of retries, which significantly increased the load on our send servers and highlighted the issue.

    Longer Term Solution

    As a longer term solution, this really gives us more ammunition to argue for a migration to WCF. The application we are calling has some factors which limit the protocols we can use, but with WCF we would have more control over the various timeout options, because in WCF you can configure specific parts of the timeout.

    Summary

    I've had this blog post on my to-do list for ages, but hopefully it will be useful to some people, just to understand this behaviour and possibly to help you with some performance issues you may have. I do not believe there is too much in the way of documentation in this area, particularly around WSE 2 and ASMX, so again, another bit of ammunition for migrating to WCF. I'll try to do a follow-up post with the sample for WCF to show how this changes things.


  • Ubuntu 12.04 boot hangs with a black screen before grub menu after upgrade (gma500_gfx driver)

    - by Eric van der Vlist
    I am using Ubuntu on a fit-pc2 (see its specifications), and after upgrading from 10.04 to 12.04 I get a black screen at boot time (before the grub menu is displayed) and the computer hangs with no disk activity. I have managed to boot Ubuntu 12.04 from a live USB key, but had to add the following boot options to do so:

        console=tty1 (or console=text) acpi=off noapic nomodeset

    Using boot-repair, I have tried to add these options to /etc/default/grub (see this pastie log for instance), but I haven't been able to fix the black screen issue. I have tried many other things, such as the workarounds mentioned on the web for PSB-GFX drivers, without any success, and also uncommenting GRUB_TERMINAL=console, with the result of getting a "No video mode activated" error. During these tests, I managed to break /boot/grub/grub.cfg and was then dropped to the grub command line. This gave me the chance to check that I can boot without problems if I type:

        grub> set root=(hd0,1)
        grub> linux /vmlinuz root=/dev/sda1 ro acpi=off noapic nomodeset console=tty1
        grub> initrd /initrd.img
        grub> boot

    How can I tell grub to use these options?


  • How to build MVC Views that work with polymorphic domain model design?

    - by Johann de Swardt
    This is more of a "how would you do it" type of question. The application I'm working on is an ASP.NET MVC 4 app using Razor syntax. I've got a nice domain model which has a few polymorphic classes that are awesome to work with in code, but I have a few questions regarding the MVC front end. Views are easy to build for normal classes, but when it comes to the polymorphic ones, I'm stuck on deciding how to implement them. One (ugly) option is to build a page which handles the base type (e.g. IContract) and has a bunch of if statements to check whether we passed in an IServiceContract or ISupplyContract instance. Not pretty and very nasty to maintain. Another option is to build a view for each of the IContract child classes, breaking DRY principles completely. I don't like doing this, for obvious reasons. Another option (also not great) is to split the view into chunks with partials, build partial views for each of the child types that are loaded into the main view for the base type, and then decide whether to show or hide each partial in a single if statement within the partial. Also messy. I've also been thinking about building a master page with sections for the fields that only occur in subclasses, and building views for each subclass referencing the master page. This looks like the least problematic solution? It will allow for fairly simple maintenance and it doesn't involve code duplication. What are your thoughts? Am I missing something obvious that will make our lives easier? Suggestions?
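    One pattern worth sketching (a suggestion, not taken from the question): keep a single action and dispatch to a per-subtype view based on the model's runtime type, so shared markup lives in the layout and only the subtype-specific fields get their own view. The repository and the view naming convention below are assumptions:

        // Controller action that picks a view from the model's runtime type;
        // e.g. a ServiceContract resolves to the view "Contract_ServiceContract".
        public ActionResult Details(int id)
        {
            IContract contract = contractRepository.GetById(id); // hypothetical repository
            return View("Contract_" + contract.GetType().Name, contract);
        }

    MVC's templated helpers also resolve display templates on the model's runtime type, so a DisplayTemplates folder with one template per concrete contract type can achieve a similar effect without the explicit view name.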


  • SQL Server Performance & Latching

    - by Colin
    I have a SQL Server 2000 instance which runs several concurrent SELECT statements against a group of 4 or 5 tables. Often the performance of the server during these queries becomes extremely diminished. The queries can take up to 10x as long as other runs of the same query, and it gets to the point where simple operations like getting the table list in object explorer or running sp_who can take several minutes. I've done my best to identify the cause of these issues, and the only performance metric I've found to be off base is Average Latch Wait Time. I've read that a wait time of over 1 second is bad, and mine ranges anywhere from 20 to 75 seconds under heavy use. So my question is: what could be the issue? Shouldn't SQL Server be able to handle multiple SELECTs on a single table without losing so much performance? Can anyone suggest somewhere to go from here to investigate this problem? Thanks for the help.

