Search Results

Search found 15137 results on 606 pages for 'mean filter'.


  • Cannot install wine on Ubuntu natty 64bit [broken dependency]

    - by MHK
    I've just installed Ubuntu natty 64bit. Now I'm trying to install wine and no matter how I do it (Software center/synaptic/terminal), it fails. Here's what I tried on terminal: sudo apt-get update sudo apt-get install wine It shows: Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: wine : Depends: wine1.3 but it is not going to be installed Depends: ia32-libs (>= 1.6) but it is not going to be installed Depends: lib32asound2 (> 1.0.14) but it is not going to be installed Depends: libc6-i386 (>= 2.6-1) but it is not going to be installed Depends: lib32nss-mdns (>= 0.10-3) but it is not going to be installed E: Broken packages Any body faced this? Is this a bug or something is broken on my end? Any hints on how to solve? Edit: I've tried with aptitude, it gives more clear message: sudo aptitude install wine Output: The following NEW packages will be installed: gnome-exe-thumbnailer{a} ia32-libs{a} icoutils{a} imagemagick{a} lib32asound2{ab} lib32bz2-1.0{a} lib32gcc1{ab} lib32ncurses5{a} lib32nss-mdns{a} lib32stdc++6{ab} lib32v4l-0{ab} lib32z1{a} libc6-i386{ab} libcdt4{a} libgraph4{a} libgvc5{a} libilmbase6{a} liblqr-1-0{a} libmagickcore3{a} libmagickcore3-extra{a} libmagickwand3{a} libnetpbm10{a} libopenexr6{a} libpathplan4{a} netpbm{a} ttf-droid{a} ttf-symbol-replacement-wine1.3{a} ttf-umefont{a} winbind{a} wine wine1.3{a} wine1.3-gecko{a} winetricks{a} 0 packages upgraded, 33 newly installed, 0 to remove and 0 not upgraded. Need to get 135 MB of archives. After unpacking 421 MB will be used. The following packages have unmet dependencies: libc6-i386: Depends: libc6 (= 2.12.1-0ubuntu16) but 2.13-0ubuntu13 is installed. lib32gcc1: Depends: gcc-4.5-base (= 4.5.2-2ubuntu3) but 4.5.2-8ubuntu4 is installed. lib32asound2: Depends: libasound2 (= 1.0.23-2.1ubuntu2) but 1.0.24.1-0ubuntu5 is installed. lib32stdc++6: Depends: gcc-4.5-base (= 4.5.2-2ubuntu3) but 4.5.2-8ubuntu4 is installed. lib32v4l-0: Depends: libv4l-0 (= 0.8.1-2) but 0.8.3-1 is installed. The following actions will resolve these dependencies: Keep the following packages at their current version: 1) ia32-libs [Not Installed] 2) lib32asound2 [Not Installed] 3) lib32bz2-1.0 [Not Installed] 4) lib32gcc1 [Not Installed] 5) lib32ncurses5 [Not Installed] 6) lib32nss-mdns [Not Installed] 7) lib32stdc++6 [Not Installed] 8) lib32v4l-0 [Not Installed] 9) lib32z1 [Not Installed] 10) libc6-i386 [Not Installed] 11) wine [Not Installed] 12) wine1.3 [Not Installed] Leave the following dependencies unresolved: 13) wine1.3-gecko recommends wine1.3 14) winetricks recommends wine1.2 | wine1.3 | cxoffice5 | cxgames5 It seems the wine package hasn't been updated in the repo. What should I do now?

    Read the article

  • Problem Solving vs. Solution Finding

    - by ryanabr
    By and large, most developers fall into one of two camps. I will try to explain what I mean by way of example. A manager gives the developer a task that is communicated like this: “Figure out why control A is not loading on this form”. Now, right there it could be argued that the manager should probably have given better direction and said something more like: “Control A is not loading on the Form, fix it”. They might sound like the same thing to most people, but the first statement will have the developer problem solving the reason why it is failing. The second statement should have the developer looking for the solution to make it work, not focusing on why it is broken. In the end, they might be the same thing, but I usually see the first approach take way longer than the second approach.
    The Problem Solver: The problem solver’s approach to fixing something that is broken is likely to take the error or behavior that is being observed and start to research it using a tool like Google, or any other search engine. 7 out of 10 times this will yield results for the most common of issues. The challenge is in the other 30% of issues that will take the problem solver down the rabbit hole and cause them not to surface for days on end while every avenue is explored for the cause of the problem. In the end, they will probably find the cause of the issue and resolve it, but the cost can be days, or weeks, of work.
    The Solution Finder: The solution finder’s approach to a problem will begin the same way the Problem Solver’s approach will. The difference comes in the more difficult cases. Rather than stick to the pure “This has to work so I am going to work with it until it does” approach, the Solution Finder will look for other ways to get the requirements satisfied that may or may not use the original approach. For example, there are two areas of an application with externally equivalent features, meaning that from a user’s perspective the behavior is the same. Say that, for whatever reason, area A is now not working, but area B is working. The Problem Solver will dig in to see why area A is broken, whereas the Solution Finder will investigate what the difference is between the two areas and solve the problem by potentially working around it. The other notable difference between the two types of developers described is what point they reach before they re-emerge from their task. The Problem Solver will likely emerge with a triumphant “I have found the problem”, whereas the Solution Finder will emerge with the more useful “I have the solution”.
    Conclusion: At the end of the day, users are what drive features in software development. Without users there is no need for software. In today’s world of software development, with so many tools to use and generally tight schedules, I believe that a workaround to a problem that takes 8 hours is a more fruitful approach than the purer solution to the problem that takes 40 hours.

    Read the article

  • Box2D Difference Between WorldCenter and Position

    - by Free Lancer
    So this problem has been bothering me for a couple of days now. First off, what is the difference between, say, Body.getWorldCenter() and Body.getPosition()? I heard that WorldCenter might have to do with the center of gravity or something. Second, when I create a Box2D Body for a sprite, the Body is always at the lower left corner. I check it by printing a Rectangle of 1 pixel around the box.getWorldCenter(). From what I understand the Body should be in the center of the Sprite and its bounding box should wrap around the Sprite, correct? Here's an image of what I mean (the Sprite is red, the Body blue). Here's some code:
    Body creator:
        public static Body createBoxBody( final World pPhysicsWorld, final BodyType pBodyType, final FixtureDef pFixtureDef, Sprite pSprite ) {
            float pRotation = 0;
            float pCenterX = pSprite.getX() + pSprite.getWidth() / 2;
            float pCenterY = pSprite.getY() + pSprite.getHeight() / 2;
            float pWidth = pSprite.getWidth();
            float pHeight = pSprite.getHeight();
            final BodyDef boxBodyDef = new BodyDef();
            boxBodyDef.type = pBodyType;
            //boxBodyDef.position.x = pCenterX / Constants.PIXEL_METER_RATIO;
            //boxBodyDef.position.y = pCenterY / Constants.PIXEL_METER_RATIO;
            boxBodyDef.position.x = pSprite.getX() / Constants.PIXEL_METER_RATIO;
            boxBodyDef.position.y = pSprite.getY() / Constants.PIXEL_METER_RATIO;
            Vector2 v = new Vector2( boxBodyDef.position.x * Constants.PIXEL_METER_RATIO, boxBodyDef.position.y * Constants.PIXEL_METER_RATIO );
            Gdx.app.log("@Physics", "createBoxBody():: Box Position: " + v);
            // Temporary Box shape of the Body
            final PolygonShape boxPoly = new PolygonShape();
            final float halfWidth = pWidth * 0.5f / Constants.PIXEL_METER_RATIO;
            final float halfHeight = pHeight * 0.5f / Constants.PIXEL_METER_RATIO;
            boxPoly.setAsBox( halfWidth, halfHeight );
            // set the anchor point to be the center of the sprite
            pFixtureDef.shape = boxPoly;
            final Body boxBody = pPhysicsWorld.createBody(boxBodyDef);
            Gdx.app.log("@Physics", "createBoxBody():: Box Center: " + boxBody.getPosition().mul(Constants.PIXEL_METER_RATIO));
            boxBody.createFixture(pFixtureDef);
            boxBody.setTransform( boxBody.getWorldCenter(), MathUtils.degreesToRadians * pRotation );
            boxPoly.dispose();
            return boxBody;
        }
    Making the Sprite:
        public Car( Texture texture, float pX, float pY, World world ) {
            super( "Car" );
            mSprite = new Sprite( texture );
            mSprite.setSize( mSprite.getWidth() / 6, mSprite.getHeight() / 6 );
            mSprite.setPosition( pX, pY );
            mSprite.setOrigin( mSprite.getWidth()/2, mSprite.getHeight()/2);
            FixtureDef carFixtureDef = new FixtureDef();
            // Set the Fixture's properties, like friction, using the car's shape
            carFixtureDef.restitution = 1f;
            carFixtureDef.friction = 1f;
            carFixtureDef.density = 1f;
            // needed to rotate body using applyTorque
            mBody = Physics.createBoxBody( world, BodyDef.BodyType.DynamicBody, carFixtureDef, mSprite );
        }
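    As a side note, purely for illustration (this is my own sketch, not code from the post): in Box2D, getPosition() returns the body's local origin while getWorldCenter() returns its center of mass; with setAsBox centered on the origin the two coincide, so the box appears at the sprite's lower-left corner because the body is positioned at pSprite.getX()/getY() (the corner) rather than at the sprite's center. The helper below shows the corner-to-center conversion the commented-out pCenterX/pCenterY lines were aiming at; PIXEL_METER_RATIO is an assumed stand-in for Constants.PIXEL_METER_RATIO.
        import com.badlogic.gdx.graphics.g2d.Sprite;
        import com.badlogic.gdx.math.Vector2;

        public final class BodyPlacement {
            private static final float PIXEL_METER_RATIO = 32f; // assumed value, for the sketch only

            // Converts a sprite whose (x, y) is its bottom-left corner into the position
            // (in meters) to give the BodyDef, so the body's origin - and a box shape
            // centered on that origin - sits at the sprite's center.
            public static Vector2 bodyPositionFor(Sprite sprite) {
                float centerX = sprite.getX() + sprite.getWidth() / 2f;
                float centerY = sprite.getY() + sprite.getHeight() / 2f;
                return new Vector2(centerX / PIXEL_METER_RATIO, centerY / PIXEL_METER_RATIO);
            }
        }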

    Read the article

  • High CPU load for 1:30 minutes when mounting ext4-raid partition

    - by sirion
    I have a raid 5 (software) with 5x2TB drives. I encrypted the raid with cryptsetup and put an ext4-partition on top. In the beginning opening and mounting the raid took less than 10 seconds, now (for a few weeks) mounting alone takes 1:30 minutes and the cpu stays around 93% the whole time: The output of "time sudo mount /dev/mapper/8000 /media/8000" is: real 1m31.952s user 0m0.008s sys 1m25.229s At the same time only one line is added to /var/log/syslog: kernel: [ 2240.921381] EXT4-fs (dm-1): mounted filesystem with ordered data mode. Opts: (null) My Ubuntu-version is "12.04.1 LTS" and no updates are pending. I checked the partition with fsck, but it says that all is ok. The "cryptsetup luksOpen" command only takes a few seconds. I also tried changing the raid-bitmap (as it was suggested in some forum) but it did not change the behaviour. sudo mdadm --grow /dev/md0 -b internal and sudo mdadm --grow /dev/md0 -b none I had the idea that it might be the hardware being slow, but a read test with "sudo hdparm -t /dev/md0" spit out values between 62 and 159 MB/sec: Timing buffered disk reads: 382 MB in 3.00 seconds = 127.14 MB/sec Timing buffered disk reads: 482 MB in 3.02 seconds = 159.62 MB/sec Timing buffered disk reads: 190 MB in 3.03 seconds = 62.65 MB/sec Timing buffered disk reads: 474 MB in 3.02 seconds = 157.12 MB/sec Although I think it is strange that the read rate jumps by more than 100% - could that mean something? The speed test when reading from the mapped (decrypted) device shows similar behavior, although it is of course much slower. "sudo hdparm -t /dev/mapper/8000": Timing buffered disk reads: 56 MB in 3.02 seconds = 18.54 MB/sec Timing buffered disk reads: 122 MB in 3.09 seconds = 39.43 MB/sec Timing buffered disk reads: 134 MB in 3.02 seconds = 44.35 MB/sec The output of a verbose mount "mount -vvv /dev/mapper/8000 /media/8000" does not help much: mount: fstab path: "/etc/fstab" mount: mtab path: "/etc/mtab" mount: lock path: "/etc/mtab~" mount: temp path: "/etc/mtab.tmp" mount: UID: 0 mount: eUID: 0 mount: spec: "/dev/mapper/8000" mount: node: "/media/8000" mount: types: "(null)" mount: opts: "(null)" mount: you didn't specify a filesystem type for /dev/mapper/8000 I will try type ext4 mount: mount(2) syscall: source: "/dev/mapper/8000", target: "/media/8000", filesystemtype: "ext4", mountflags: -1058209792, data: (null) Any idea where I could find additional information on why mounting takes so long, or what additional tests I could run?

    Read the article

  • PASS Summit for SQL Starters

    - by Davide Mauri
    I’ve received a bunch of emails from PASS Summit “First Timers” that are also somehow new to SQL Server (by “somehow” I mean people with less than 6 months' experience but with some basic knowledge of the SQL Server engine) or are catching up from SQL Server 2000. The common question regards the sessions one should not miss in order to:
        have a broad view of the entire SQL Server platform
        have some insight into some specific areas of SQL Server
    Given that I’m on (semi-)vacation and that I have more free time (not true, I have to prepare slides & demos for several conferences, PASS Summit - Building the Agile Data Warehouse with SQL Server 2012 - and PASS 24H - Agile Data Warehousing with SQL Server 2012 - among them… but let’s pretend it to be true), I’ve decided to make a post to answer this common question. Of course this is my personal point of view, and given that the number and quality of the sessions that will be delivered at PASS Summit is so high that it is very difficult to make a choice, feel free to jump into the discussion and leave your feedback or – even better – answer with another post. I’m sure it will be very helpful to all the SQL Server beginners out there.
    I’ve imposed on myself a maximum of 6 sessions for each track. Why 6? Because it’s the maximum number of sessions you can follow in one day, and given that all the sessions will be on the Summit DVD, they are the answer to the following question: “If I have one day to spend in training, which sessions should I watch?”. Of course a Summit is not like a course, so a lot of very basic concepts of well-established technologies won’t be found here. Analysis Services, Integration Services and MDX are not part of the Summit this time (at least for the basic part of them).
    Enough with that, let’s start with the session list ideal for a good overview of the whole SQL Server platform:
        Geospatial Data Types in SQL Server 2012
        Inside Unstructured Data: SQL Server 2012 FileTable and Semantic Search
        XQuery and XML in SQL Server: Common Problems and Best Practice Solutions
        Microsoft's Big Play for Big Data
        Dashboards: When to Choose Which MSBI Tool
        Microsoft BI End-User Tools 360°
    For what concerns Database Development, I recommend the following sessions:
        Understanding Transaction Isolation Levels
        What to Look for in Execution Plans
        Improve Query Performance by Fixing Bad Parameter Sniffing
        A Window into Your Data: Using SQL Window Functions
        Practical Uses and Optimization of New T-SQL Features in SQL Server 2012
        Taking MERGE Beyond the Basics
    For Business Intelligence Information Delivery:
        Analyzing SSAS Data with Excel
        Building Compelling Power View Reports
        Managed Self-Service BI
        PowerPivot 101
        SharePoint for Business Intelligence
        The Best Microsoft BI Tools You've Never Heard Of
    And for Business Intelligence Architecture & Development:
        BI Power Hour
        Building a Tabular Model Database
        Enterprise Information Management: Bringing Together SSIS, DQS, and MDS
        SSIS Design Patterns
        Storing Columnstore Indexes
        Hadoop and Its Ecosystem Components in Action
    Besides the listed sessions, First Timers should also take a look at the page PASS set up for them: http://www.sqlpass.org/summit/2012/Connect/FirstTimers.aspx
    See you at PASS Summit!

    Read the article

  • How to get lookahead symbol when constructing LR(1) NFA for parser?

    - by greenoldman
    I am reading an explanation (the awesome "Parsing Techniques" by D. Grune and C.J.H. Jacobs; p. 292 in the 2nd edition) about how to construct an LR(1) parser, and I am at the stage of building the initial NFA. What I don't understand is how to get/compute a lookahead symbol. Here is the example from the book, the grammar:
        S -> E
        E -> E - T
        E -> T
        T -> ( E )
        T -> n
    n is a terminal. The "weird" transitions for me are the sequence:
        1) S -> . E eof
        2) E -> . E - T eof
        3) E -> . E - T -
        4) E -> E . - T -
        5) E -> E - . T -
    (Note: in the above table, the state numbers are in front and the lookahead symbol is at the end.)
    What puzzles me is that the transition from (4) to (5) means reading the - token, right? So how is it that - is still a lookahead symbol and, even more important, why is it that eof is no longer a lookahead symbol? After all, in an input such as n - n eof there is only one - symbol. My naive thinking tells me (5) should be written as:
        5) E -> E - . T - eof
    And another thing -- n is a terminal. Why is it not used at all as a lookahead symbol? I mean -- we expect to see - or (, that is ok, but does the lack of n mean we are sure it won't appear in the input?
    Update: after more reading I am only more confused ;-) I.e. what is really a lookahead? Because I see a state such as (p. 292, 2nd column, 2nd row):
        E -> E . - T eof
    The lookahead says eof but the incoming input says -. Isn't that a contradiction? And it is not only in this book.

    Read the article

  • Profiling Startup Of VS2012 – dotTrace Profiler

    - by Alois Kraus
    Jetbrains which is famous for the Resharper tool has also a profiler in its portfolio. I downloaded dotTrace 5.2 Professional (569€+VAT) to check how far I can profile the startup of VS2012. The most interesting startup option is “.NET Process”. With that you can profile the next started .NET process which is very useful if you want to profile an application which is not started by you.     I did select Tracing as and Wall time to get similar options across all profilers. For some reason the attach option did not work with .NET 4.5 on my home machine. But I am sure that it did work with .NET 4.0 some time ago. Since we are profiling devenv.exe we can also select “Standalone Application” and start it from the profiler. The startup time of VS does increase about a factor 3 but that is ok. You get mainly three windows to work with. The first one shows the threads where you can drill down thread wise where most time is spent. I The next window is the call tree which does merge all threads together in a similar view. The last and most useful view in my opinion is the Plain List window which is nearly the same as the Method Grid in Ants Profiler. But this time we do get when I enable the Show system functions checkbox not a 150 but 19407 methods to choose from! I really tried with Ants Profiler to find something about out how VS does work but look how much we were missing! When I double click on a method I do get in the lower pane the called methods and their respective timings. This is something really useful and I can nicely drill down to the most important stuff. The measured time seems to be Wall Clock time which is a good thing to see where my time is really spent. You can also use Sampling as profiling method but this does give you much less information. Except for getting a first idea where to look first this profiling mode is not very useful to understand how you system does interact.   The options have a good list of presets to hide by default many method and gray them out to concentrate on your code. It does not filter anything out if you enable Show system functions. By default methods from these assemblies are hidden or if the checkbox is checked grayed out. All in all JetBrains has made a nice profiler which does show great detail and it has nice drill down capabilities. The only thing is that I do not trust its measured timings. I did fall several times into the trap with this one to optimize at places which were already fast but the profiler did show high times in these methods. After measuring with Tracing I was certain that the measured times were greatly exaggerated. Especially when IO is involved it seems to have a hard time to subtract its own overhead. What I did miss most was the possibility to profile not only the next started process but to be able to select a process by name and perhaps a count to profile the next n processes of this name. Next: YourKit

    Read the article

  • Applying DDD principles in a RESTish web service

    - by Andy
    I am developing an RESTish web service. I think I got the idea of the difference between aggregation and composition. Aggregation does not enforce lifecycle/scope on the objects it references. Composition does enforce lifecycle/scope on the objects it contain/own. If I delete a composite object then all the objects it contain/own are deleted as well, while the deleting an aggregate root does not delete referenced objects. 1) If it is true that deleting aggregate roots does not necessary delete referenced objects, what sense does it make to not have a repository for the references objects? Or are aggregate roots as a term referring to what is known as composite object? 2) When you create an web service you will have multiple endpoints, in my case I have one entity Book and another named Comment. It does not make sense to leave the comments in my application if the book is deleted. Therefore, book is a composite object. I guess I should not have a repository for comments since that would break the enforcement of lifecycle and rules that the book class may have. However I have URL such as (examples only): GET /books/1/comments POST /books/1/comments Now, if I do not have a repository for comments, does that mean I have to load the book object and then return the referenced comments? Am I allowed to return a list of Comment entities from the BookRepository, does that make sense? The repository for Book may eventually become rather big with all sorts of methods. Am I allowed to write JPQL (JPA queries) that targets comments and not books inside the repository? What about pagination and filtering of comments. When adding a new comment triggered by the POST endpoint, do you need to load the book, add the comment to the book, and then update the whole book object? What I am currently doing is having a own CommentRepository, even though the comments are deleted with the book. I could need some direction on how to do it correct. Since you are exposing not only root objects in RESTish services I wonder how to handle this at the backend. I am using Hibernate and Spring.

    Read the article

  • Get More Value From Your Oracle Premier Support Investment

    - by Get Proactive Customer Adoption Team
    Untitled Document The Return on Investment in Support Training I’m a typical software user. I’ve been using spreadsheets almost daily for the past 10 years or so. I know how to enter simple formulas, format cells, import files, and I can sort and filter. Sometimes I even use a pivot table. I never attended training. I learnt everything I know on the fly. Sometimes it was intuitive and easy, other times I had to spend minutes and even hours searching for a solution. Yet when I see what some other people can do with their spreadsheets, I know I’m utilizing maybe 15% of the functionality. Pity, one day I really have to sign up for training. Why haven’t I done it yet? Ah, you know, I’m a busy person, I have work to do. And if I need to use a feature that I am unfamiliar with, I’ll spend time on it only when I really need it. Now wait. When I recall how much time I spent trying to figure how things work compared to time I spent doing the productive work, I realize it was not insignificant. I’m unable to sum up all the time I spent ‘learning’ on the fly, but I’m sure it’s been days or even weeks. And after all this time, I’ve mastered 15% of its features. If only I had attended training years ago. That investment would have paid back 10 times! Working with My Oracle Support is no different. Our customers typically use simple search, create service requests, and download patches. They think they know how to use My Oracle Support. And they’re right. They know something but often they’re utilizing only a fragment of My Oracle Support’s potential. For the investment that has been made, using only a small subset of the capabilities offered in My Oracle Support leaves value on the table. There is much more available in My Oracle Support. Dozens of diagnostic tools and proactive health checks will keep verifying your Oracle environments against best practices that Oracle gathers every day thanks to our comprehensive knowledge management process. Automated patch recommendations will help prevent known issues, and upgrade planning and more is included in My Oracle Support. Why are you not utilizing all of these best practices, capabilities and tools? Is it because you don’t have time to invest 2-3 hours of your time to learn about the features? Simply because you think you can learn on the fly like I thought I could? Does learning on the fly how to properly use the Service Request escalation process when you already have critical issue sound like a good idea? My advice is: Invest your time now to learn how My Oracle Support can help you prevent issues on your systems. Learn how to find answers faster and resolve problems more efficiently. Understand how to properly complete a service request. Invest in Support training, offered at no additional cost to Oracle Premier Support customers. It will pay back quicker than you think. It will bring you more value than you think. Discover your advantage with Oracle Premier Support's Proactive Portfolio.

    Read the article

  • Documenting sp_ssiscatalog

    - by jamiet
    What is the best way to document an API? Moreover, what is the best way to document a T-SQL API? Before I try to answer those questions I should explain what I mean by “a T-SQL API”. I think of an API as being a collection of well-defined, known, code modules that provide some notion of a service to whomever uses it; in T-SQL terms I tend to think of a collection of stored procedures and functions as a form of API. Its a loose definition, I admit, and in SQL Server circles we don’t tend to think of stored procedures collectively as an API but if you think about it that’s exactly what they are. The question of how to document a T-SQL API came to my mind as I worked on sp_ssiscatalog. How could I make it easy for people to learn about the capabilities of sp_ssiscatalog without forcing them to dig through the code and find out for themselves? My opening gambit was to write documentation pages on the wiki at http://ssisreportingpack.codeplex.com. That’s kinda useful but it does suffer the disadvantage that someone using sp_ssiscatalog needs to go visit a webpage to read it – I want the documentation to be available wherever the user is using sp_ssiscatalog. Moreover, maintaining the wiki is a real PITA. Intellisense works up to a point, I guess: but that only shows whatever SQL Server knows about the various parameters, which isn’t all that much! I wanted a better way for my API users to learn about its capabilities and so I hit upon the idea of simply using PRINT statements within the code itself to inform the user what options are available; hence I added such PRINT statements in the latest check-in. Now when you execute (for example): EXEC sp_ssiscatalog @operation_type='execs' you can hit F6 a few times to view the messages pane and you shall see something like this: Notice that I’m returning information about all the parameters that can be used to affect the results that just got returned. I really do think this will be very useful to anyone using sp_ssiscatalog; I myself am always forgetting what the parameters are and I wrote the damn thing so I can’t really expect anyone else to remember them. I have not yet made available a release that has these changes in it but when I do I’ll blog about it right here. At the time of writing the latest available release of sp_ssiscatalog is DB v1.0.1.0 but if you want to the latest and greatest simply download it straight from source. Feedback is welcome as always. @Jamiet

    Read the article

  • Pixels - A cry for some insight

    - by CarrotFile
    I'm pretty new to web developing and I'd love some clarification. Although I have read more than one book on the topic, I cannot seem to wrap my head around the pixel concept. I encounter problems with this issue when trying to use CSS and pixel units for design that fits different screen sizes.
    To my understanding a pixel is the most basic unit used by a monitor in order to compose an image on the screen. So if my resolution is 800 by 600, everything on my screen is rendered using those 800*600 basic building blocks. If I were to enlarge my screen resolution, three things would occur:
        A. The basic image building block (the pixel) would shrink in size
        B. The pixels would move closer together
        C. Well, more pixels would now be available
    All these combined lead to a sharper (depending on the viewing distance) image that can show more detail. Well, so far so good.
    Here is where I start getting lost: to my knowledge a pixel is not a physical, real object. Monitors are not embedded with a few thousand pixels. I am drawn to this conclusion because anyone can change his screen's resolution, making a pixel on his screen bigger or smaller, and adding or subtracting the amount of total pixels on screen. Adding to that, I have heard that different monitors have different pixel densities, for example Apple's Retina monitors.
    Taking all of the above as my knowledge base, these are my questions:
        If a pixel has no real-world constant size, what does comparing different pixel densities matter? Each screen company can define its own pixel concept and declare the higher density.
        What does a bigger pixel density mean? Say we take two screens with the same physical dimensions but with a different pixel density: am I to assert that the main difference would be the larger-density screen being able to display a higher max resolution? Or am I to assert that, given the same resolution on both monitors, the higher-density one would display a sharper, smaller image?
        If a pixel is not a fixed size within one monitor, is it a fixed size between the same resolution on two different monitors? For example, would two different monitors, set to the same resolution, be comprised of same-size, same-quantity pixels?
    I'd love some help (:

    Read the article

  • Nails vs Screws (C# List vs Dictionary)

    - by MarkPearl
    General
    This may sound like a typical noob statement, but I'm finding out in a very real way that just because you have a solution to a problem, it doesn't necessarily mean it is the best solution. This was reiterated to me when a friend of mine suggested I look at using Dictionaries instead of Lists for a particular problem – he was right, and I had always just assumed that because lists solved my problem I did not need to look elsewhere. So my new manifesto to counter this ageless problem is as follows…
        Look for a solution that will logically work
        Once you have a solution, look for possible alternatives
        Decide why your current solution is the best approach compared to the alternatives
        If it is… use it till something better comes along; if it isn't… change
    What's the difference between Lists & Dictionaries?
    Both lists and dictionaries are used to store collections of data. Assume we had the following declarations…
        var dic = new Dictionary<string, long>();
        var lst = new List<long>();
        long data;
    With a list, you simply add the item to the list and it will add the item to the end of the list.
        lst.Add(data);
    With a dictionary, you need to specify some sort of key along with the data you want to add so that it can be uniquely identified.
        dic.Add(uniquekey, data);
    Because with a dictionary you now have a unique identifier, in the background it provides all sorts of optimized algorithms to find your associated data. What this means is that if you want to access your data, it is a lot faster than a List.
    So when is it appropriate to use either class? For me, if I can guarantee that each item in my collection will have a unique identifier, then I will use Dictionaries instead of Lists, as there is a considerable performance benefit when accessing each data item. If I cannot make this sort of guarantee, then by default I will use a list.
    I know this is all really basic, and I hope I haven't missed some fundamental principle… If anyone would like to add their 2 cents, please feel free to do so…
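    For illustration only, here is the same trade-off sketched in Java (my own example, not from the original post), with ArrayList standing in for List<T> and HashMap for Dictionary<TKey, TValue>: the list has to be scanned to find an item, while the map jumps straight to it via the key.
        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class ListVsMapDemo {
            public static void main(String[] args) {
                // List: items are appended and later found by scanning.
                List<Long> list = new ArrayList<>();
                list.add(42L);
                boolean inList = list.contains(42L); // O(n) linear scan

                // Map: each item is stored under a unique key and found by hashing.
                Map<String, Long> map = new HashMap<>();
                map.put("uniquekey", 42L);
                Long fromMap = map.get("uniquekey"); // O(1) on average

                System.out.println(inList + " / " + fromMap);
            }
        }
    The same caveat as in the post applies: the map only pays off when every item really does have a unique key.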

    Read the article

  • POST and PUT requests – is it just the convention?

    - by bckpwrld
    I've read quite a few articles on the difference between POST and PUT and in when the two should be used. But there are still few things confusing me ( hopefully questions will make some sense ): 1) We should use PUT to create resources when we want clients to specify the URI of the newly created resources and we should use POST to create resources when we let service generate the URI of the newly created resources. a) Is it just by convention that POST create request doesn't contain an URI of the newly created resource or POST create request actually can't contain the URI of the newly created resource? b) PUT has idempotent semantics and thus can be safely used for absolute updates ( ie we send entire state of the resource to the server ), but not also for relative updates ( ie we send just changes to the resource state ), since that would violate its semantics. But I assume it's still possible for PUT to send relative updates to the server, it's just that in that case the PUT update won't be idempotent? 2) I've read somewhere that we should "use POST to append a resource to a collection identified by a service-generated URI". a) What exactly does that mean? That if URIs for the resources were generated by a server ( thus the resources were created via POST ), then ALL subsequent resources should also be created via POST? Thus, in such situation no resource should be created via PUT? b) If my assumption under a) is correct, could you elaborate why we shouldn't create some resources via POST and some via PUT ( assuming server already contains a collection of resources created via POST )? REPLY: 1) Please correct me if I'm wrong, but from your post and from the link you've posted, it seems: a) The Request-URI in POST is interpreted by server as the URI of the service. Thus, it could just as easily be interpreted as an URI of a newly created resource, if server code was written to recognize Request-URI as such b) Similarly, PUT is able to send relative updates, it's just that service code is usually written such that it will complain if PUT updates are relative. 2) Usually, create has fallen into the POST camp, because of the idea of "appending to a collection." It's become the way to append a resource to a list of resources. I don't quite understand the reasoning behind the idea of "appending to a collection" and why this idea prefers POST for create. Namely, if we create 10 resources via PUT, then server will contain a collection of 10 resources and if we then create another resource, then server will append this resource to that collection ( which will now contain 11 resources )?! Uh, this is kinda confusing thank you

    Read the article

  • Point line collision reaction

    - by user4523
    I am trying to program point / line segment collision detection and reaction. I am doing this for fun and to learn. The point moves (it has a velocity, and can be controlled by the user), whilst the lines are straight and stationary. The lines are not axis-aligned. Everything is in 2D.
    It is quite straightforward to work out if a collision has occurred. For each frame, the point moves from A to B. AB is a line, and if it crosses the line segment, a collision has occurred (or will occur) and I am able to work out the point of intersection (poi). The problem I am having is with the reaction. Ideally I would like the point to be prevented from moving across the line. In one frame, I can move the point back to the poi (or only allow it to move as far as the poi), and alter the velocity.
    The problem I am having with this approach (I think) is that, next frame, the user may try to cross the line again. Although the point is on the poi, the point may not be exactly on the line. Since it is not axis-aligned, I think there is always some subtle rounding issue (a float representation of a point on a line might be rounded to a point that is slightly on one side or the other). Because of this, next frame the path might not intersect the line (because it can start on the other side and move away from it) and the point is effectively allowed to cross the line. Firstly, does the analysis sound correct?
    Having accepted (maybe) that I cannot always exactly position the point on the line, I tried to move the point away from the line slightly (either along the normal to the line, or along the path vector). I then get a problem at edges. Attempting to fix one collision by moving the point away from the line (even slightly) can cause it to cross another line (one shape I am dealing with is a star, with sharp corners). This can mean that the solution to one collision inadvertently creates another collision, which is ignored. Again, does this sound correct?
    Anyway, whatever I try, I am having difficulty with edges, and the point is occasionally able to penetrate the polygons and cross lines, which is undesirable. Whilst I can find a lot of information about collision detection on the web (and on this site), I can find precious little information on collision reaction. Does anyone know of any good point / line collision reaction tutorials? Or is my approach too flawed / over-complicated?
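    As a rough illustration of the two pieces described above (purely a sketch of my own, not a full answer – in particular it does not solve the sharp-corner case), here is a minimal Java version of the segment test for the path A->B against a wall P->Q, plus the "nudge back to the original side" idea, with EPS as an assumed tolerance:
        public final class PointLineCollision {
            static final float EPS = 1e-3f;

            // 2D cross product (z component): which side of the line (x1,y1)->(x2,y2) the point (x,y) is on.
            static float cross(float x1, float y1, float x2, float y2, float x, float y) {
                return (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1);
            }

            // True if the path A->B properly crosses the wall P->Q (collinear touches ignored).
            static boolean crosses(float ax, float ay, float bx, float by,
                                   float px, float py, float qx, float qy) {
                float d1 = cross(px, py, qx, qy, ax, ay);
                float d2 = cross(px, py, qx, qy, bx, by);
                float d3 = cross(ax, ay, bx, by, px, py);
                float d4 = cross(ax, ay, bx, by, qx, qy);
                return d1 * d2 < 0 && d3 * d4 < 0;
            }

            // Assumes crosses() was true. Places the point at the intersection, then pushes it EPS along
            // the wall's normal toward the side A started on, so the next frame begins strictly on that side.
            static float[] resolve(float ax, float ay, float bx, float by,
                                   float px, float py, float qx, float qy) {
                float rx = bx - ax, ry = by - ay;          // path direction
                float sx = qx - px, sy = qy - py;          // wall direction
                float t = ((px - ax) * sy - (py - ay) * sx) / (rx * sy - ry * sx);
                float ix = ax + t * rx, iy = ay + t * ry;  // point of intersection
                float len = (float) Math.sqrt(sx * sx + sy * sy);
                float nx = -sy / len, ny = sx / len;       // unit normal of the wall
                float side = Math.signum(cross(px, py, qx, qy, ax, ay));
                return new float[] { ix + side * EPS * nx, iy + side * EPS * ny };
            }
        }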

    Read the article

  • Should library classes be wrapped before using them in unit testing?

    - by Songo
    I'm doing unit testing and in one of my classes I need to send a mail from one of the methods, so using constructor injection I inject an instance of Zend_Mail class which is in Zend framework. Example: class Logger{ private $mailer; function __construct(Zend_Mail $mail){ $this->mail=$mail; } function toBeTestedFunction(){ //Some code $this->mail->setTo('some value'); $this->mail->setSubject('some value'); $this->mail->setBody('some value'); $this->mail->send(); //Some } } However, Unit testing demands that I test one component at a time, so I need to mock the Zend_Mail class. In addition I'm violating the Dependency Inversion principle as my Logger class now depends on concretion not abstraction. Does that mean that I can never use a library class directly and must always wrap it in a class of my own? Example: interface Mailer{ public function setTo($to); public function setSubject($subject); public function setBody($body); public function send(); } class MyMailer implements Mailer{ private $mailer; function __construct(){ $this->mail=new Zend_Mail; //The class isn't injected this time } function setTo($to){ $this->mailer->setTo($to); } //implement the rest of the interface functions similarly } And now my Logger class can be happy :D class Logger{ private $mailer; function __construct(Mailer $mail){ $this->mail=$mail; } //rest of the code unchanged } Questions: Although I solved the mocking problem by introducing an interface, I have created a totally new class Mailer that now needs to be unit tested although it only wraps Zend_Mail which is already unit tested by the Zend team. Is there a better approach to all this? Zend_Mail's send() function could actually have a Zend_Transport object when called (i.e. public function send($transport = null)). Does this make the idea of a wrapper class more appealing? The code is in PHP, but answers doesn't have to be. This is more of a design issue than a language specific feature

    Read the article

  • BoundingBox Intersection Problems

    - by Deukalion
    When I try to render two cubes of the same size (and the same proportions in X, Y and Z), one beside the other, my problem is: why does Box1.BoundingBox.Contains(Box2.BoundingBox) == ContainmentType.Intersects when the boxes clearly don't overlap? I'm trying to place objects with BoundingBoxes as "intersection" checking, but this simple example clearly shows that this doesn't work. Why is that?
    I also try checking the height of the next object to be placed by checking intersection, adding each box's height (Max.Y - Min.Y) to a Height value, so when I add a new Box it has a height value. This works, but sometimes, due to strange behavior, it adds extra values when there isn't anything there. This is an example of what I mean:
        BoundingBox box1 = GetBoundaries(new Vector3(0, 0, 0), new Vector3(128, 64, 128));
        BoundingBox box2 = GetBoundaries(new Vector3(128, 0, 0), new Vector3(128, 64, 128));
        if (box1.Contains(box2) == ContainmentType.Intersects)
        {
            // This will be executed
            System.Windows.Forms.MessageBox.Show("Intersects = True");
        }
        if (box1.Contains(box2) == ContainmentType.Disjoint)
        {
            System.Windows.Forms.MessageBox.Show("Disjoint = True");
        }
        if (box1.Contains(box2) == ContainmentType.Contains)
        {
            System.Windows.Forms.MessageBox.Show("Contains = True");
        }
    Test method:
        public BoundingBox GetBoundaries(Vector3 position, Vector3 size)
        {
            Vector3[] vertices = new Vector3[8];
            vertices[0] = position + new Vector3(-0.5f, 0.5f, -0.5f) * size;
            vertices[1] = position + new Vector3(-0.5f, 0.5f, 0.5f) * size;
            vertices[2] = position + new Vector3(0.5f, 0.5f, -0.5f) * size;
            vertices[3] = position + new Vector3(0.5f, 0.5f, 0.5f) * size;
            vertices[4] = position + new Vector3(-0.5f, -0.5f, -0.5f) * size;
            vertices[5] = position + new Vector3(-0.5f, -0.5f, 0.5f) * size;
            vertices[6] = position + new Vector3(0.5f, -0.5f, -0.5f) * size;
            vertices[7] = position + new Vector3(0.5f, -0.5f, 0.5f) * size;
            return BoundingBox.CreateFromPoints(vertices);
        }
    Box 1 should start at x -64, Box 2 should start at x 64, which means they never overlap. If I add Box 2 at 129 instead, it creates a small gap between the cubes which is not pretty. So, the question is how I can place two cubes beside each other and make them understand that they do not overlap or actually intersect? Because this way I can never automatically check for intersections or place cubes beside each other.
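    Worth noting: with the sizes above, box 1 spans x = -64..64 and box 2 spans x = 64..192, so the two boxes share the x = 64 face, and a containment test that uses inclusive comparisons will report that as Intersects. Purely as an illustration (this is not the XNA implementation), a strict axis-aligned overlap test that treats shared faces as "not overlapping" looks like this in Java:
        // Minimal AABB sketch; touching faces do not count as an overlap.
        public final class Aabb {
            final float minX, minY, minZ, maxX, maxY, maxZ;

            Aabb(float minX, float minY, float minZ, float maxX, float maxY, float maxZ) {
                this.minX = minX; this.minY = minY; this.minZ = minZ;
                this.maxX = maxX; this.maxY = maxY; this.maxZ = maxZ;
            }

            // Strict inequalities: boxes that merely share a face are not overlapping.
            boolean overlaps(Aabb o) {
                return minX < o.maxX && o.minX < maxX
                    && minY < o.maxY && o.minY < maxY
                    && minZ < o.maxZ && o.minZ < maxZ;
            }

            public static void main(String[] args) {
                Aabb box1 = new Aabb(-64, -32, -64, 64, 32, 64);
                Aabb box2 = new Aabb( 64, -32, -64, 192, 32, 64);
                System.out.println(box1.overlaps(box2)); // false: they only share the x = 64 face
            }
        }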

    Read the article

  • How can I create a flexible system for tiling a 2D RPG map?

    - by CptSupermrkt
    Using libgdx here. I've just finished learning some of the basics of creating a 2D environment and using an OrthographicCamera to view it. The tutorials I went through, however, hardcoded their tiled map in, and none made mention of how to do it any other way. By tiled map, I mean like Final Fantasy 1, where the world map is a grid of squares, each with a different texture. So for example, I've got a 6 tile x 6 tile map, using the following code:
        Array<Tile> tiles = new Array<Tile>();
        tiles.add(new Tile(new Vector2(0,5), TileType.FOREST));
        tiles.add(new Tile(new Vector2(1,5), TileType.FOREST));
        tiles.add(new Tile(new Vector2(2,5), TileType.FOREST));
        tiles.add(new Tile(new Vector2(3,5), TileType.GRASS));
        tiles.add(new Tile(new Vector2(4,5), TileType.STONE));
        tiles.add(new Tile(new Vector2(5,5), TileType.STONE));
        //... x5 more times.
    Given the random nature of the environment, for loops don't really help, as I have to start and finish a loop before I am able to do enough to make it worth setting up the loop. I can see how a loop might be helpful for, say, tiling an ocean or something, but not in the above case. The above code DOES get me my final desired output; however, if I were to decide I wanted to move a piece or swap two pieces out, oh boy, what a nightmare, even with just a 6x6 test piece, much less a 1000x1000 world map. There must be a better way of doing this.
    Someone on some post somewhere (can't find it now, of course) said to check out MapEditor. Looks legit. The question is, if that is the answer, how can I make something in MapEditor and have the output map plug in to a variable in my code? I need the tiles as objects in my code because, for example, I determine whether or not a tile can be passed through or collided with based on my TileType enum variable. Are there alternative / language-"native" (i.e. not using an outside tool) methods of doing this?
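    One "native" direction, sketched here purely for illustration (Tile and TileType are the question's own classes; the helper name and the row-flipping convention are my assumptions): keep the layout as a plain 2D TileType array and build the Array<Tile> from it in a loop, so moving or swapping a tile means editing one cell rather than one add() call per tile. An editor export or a text file can later be parsed into the same 2D array.
        import com.badlogic.gdx.math.Vector2;
        import com.badlogic.gdx.utils.Array;

        public class MapLoader {
            // Builds the tile list from a row-major layout, where layout[0] is the TOP row of the map.
            public static Array<Tile> buildTiles(TileType[][] layout) {
                Array<Tile> tiles = new Array<Tile>();
                int height = layout.length;
                for (int row = 0; row < height; row++) {
                    for (int col = 0; col < layout[row].length; col++) {
                        int worldY = height - 1 - row; // flip so row 0 ends up at the top of the map
                        tiles.add(new Tile(new Vector2(col, worldY), layout[row][col]));
                    }
                }
                return tiles;
            }
        }
    A 6x6 TileType[][] literal then reproduces the hardcoded example above, and a 1000x1000 map is just a bigger array (or a file parsed into one).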

    Read the article

  • Website File and Folder Structure

    - by Drummss
    I am having a problem learning how proper website structure should be. And by that I mean how to code the pages and how folder structure should be. Currently I am navigating around my website using GET variables in PHP when you want go to another page, so I am always loading the index.php file. And I would load the page I wanted like so: $page = "error"; if(isset($_GET["page"]) AND file_exists("pages/".$_GET["page"].".php")) { $page = $_GET["page"]; } elseif(!isset($_GET["page"])) { $page = "home"; } And this: <div id="page"> <?php include("pages/".$page.".php"); ?> </div> The reason I am doing this is because it makes making new pages a lot easier as I don't have to link style sheets and javascript in every file like so: <head> <title> Website Name </title> <link href='http://fonts.googleapis.com/css?family=Lato:300,400' rel='stylesheet' type='text/css'> <link rel="stylesheet" href="css/style.css" type="text/css"> <link rel="shortcut icon" href="favicon.png"/> <script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js" type="text/javascript"></script> <script src="//ajax.googleapis.com/ajax/libs/jqueryui/1.10.4/jquery-ui.min.js" type="text/javascript"></script> </head> There is a lot of problems doing it this way as URLs don't look normal ("?page=apanel/app") if I am trying to access a page from inside a folder inside the pages folder to prevent clutter. Obviously this is not the only way to go about it and I can't find the proper way to do it because I'm sure websites don't link style sheets in every file as that would be very bad if you changed a file name or needed to add another file you would have to change it in every file. If anyone could tell me how it is actually done or point me towards a tutorial that would be great. Thanks for your time :)

    Read the article

  • Is multiple domain names and links from same IP causing poor search engine rankings?

    - by John
    I have an ecommerce website which is not doing so well in Google. I am trying to improve this of course, and am looking at some possibilities for why it isn't doing well. The website has four domain names, all of which have been indexed by Google. A few months ago I applied 301 redirects to any requests for two of the domain names so now it is down to two domain names (one is a .net, the other is a .com.au, the others were .net.au and .com). I prefer to use my main domain name (the .com.au), but one of the names has been around for a long time and has more inbound links. According to a PageRank tool, both are PR2. It is a Classic ASP site and up until recently had a lot of querystring parameters. In the last week or so I added URL rewriting so there is now no parameters for most pages. I don't do 301 redirects from the old URLs but instead I add the META canonical tag indicating the preferred new URL. At the same time I redesigned the site and improved title tags, META descriptions, and H tags but it hasn't been long enough yet for Google to index many of these yet. I also looked at what pages Google has indexed and strangely it has some strange pages in the index, there are a lot of pages which are actual keyword searches (more a bunch of random letters than an actual word). What I mean is that it is as if they had typed in something to search for in my search box - there are no links to pages like this and the only way of getting this is to type something in to the search box). So I added a META robots tag with noindex,nofollow anytime that I render pages like this. Years ago I set up a fake price comparison site which lists all my products and links back to my site. It has a different keyword rich domain name but is on the same server and same IP address. It's a completely different layout but does have the same product categories and product descriptions (although I have stripped formatting out of them so they are not identical except in text). I also have a few blog sites which again are on the same server/IP and all have advertising for the website. My questions are: What should I do with the multiple domains, just use one, or continue with two or more? Should I add 301 redirects, not just the META canonical tag? Any idea about Google indexing my search results page, and did I do the right thing with the META robots tag? Is the fake price comparison site likely to be causing problems? Are all the links to the site from other domain names but the same IP address likely to be causing problems? Thanks for any help. Sorry for so many questions in one.

    Read the article

  • Platformer Collision Error [closed]

    - by Connor
    I am currently working on a relatively simple platform game that has an odd bug.You start the game by falling onto the ground (you spawn a few blocks above the ground), but when you land your feet get stuck INSIDE the world and you can't move until you jump. Here's what I mean: The player's feet are a few pixels below the ground level. However, this problem only occurs in 3 places throughout the map and only in those 3 select places. I'm assuming that the problem lies within my collision detection code but I'm not entirely sure, as I don't get an error when it happens. public boolean isCollidingWithBlock(Point pt1, Point pt2) { //Checks x for(int x = (int) (this.x / Tile.tileSize); x < (int) (this.x / Tile.tileSize + 4); x++) { //Checks y for(int y = (int) (this.y / Tile.tileSize); y < (int) (this.y / Tile.tileSize + 4); y++) { if(x >= 0 && y >= 0 && x < Component.dungeon.block.length && y < Component.dungeon.block[0].length) { //If the block is not air if(Component.dungeon.block[x][y].id != Tile.air) { //If the player is in contact with point one or two on the block if(Component.dungeon.block[x][y].contains(pt1) || Component.dungeon.block[x][y].contains(pt2)) { //Checks for specific blocks if(Component.dungeon.block[x][y].id == Tile.portalBlock) { Component.isLevelDone = true; } if(Component.dungeon.block[x][y].id == Tile.spike) { Health.health -= 1; Component.isJumping = true; if(Health.health == 0) { Component.isDead = true; } } return true; } } } } } return false; } What I'm asking is how I would fix the problem. I've looked over my code for quite a while and I'm not sure what's wrong with it. Also, if there's a more efficient way to do my collision checking then please let me know! I hope that is enough information, if it's not just tell me what you need and I'll be sure to add it. Thank you! [EDIT] Jump code: if(!isJumping && !isCollidingWithBlock(new Point((int) x + 2, (int) (y + height)), new Point((int) (x + width + 2), (int) (y + height)))) { y += fallSpeed; //sY is the screen's Y. The game is a side-scroller Component.sY += fallSpeed; } else { if(Component.isJumping) { isJumping = true; } } if(isJumping) { if(!isCollidingWithBlock(new Point((int) x + 2, (int) y), new Point((int) (x + width + 2), (int) y))) { if(jumpCount >= jumpHeight) { isJumping = false; jumpCount = 0; } else { y -= jumpSpeed; Component.sY -= jumpSpeed; jumpCount += 1; } } else { isJumping = false; jumpCount = 0; } }
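    For what it's worth, one common way to stop a player sinking a few pixels into the floor is to snap the feet to the top of the tile that was hit instead of leaving y wherever the last full fallSpeed step put it. The sketch below is only an illustration of that idea (it is not a drop-in patch for the code above, and the method and variable names are mine):
        public final class LandingSnap {
            // y grows downward; feetY = player y + player height.
            // Returns the player y that puts the feet exactly on the boundary of the tile they sank into.
            static float snapToTileTop(float feetY, float playerHeight, int tileSize) {
                float tileTop = (float) Math.floor(feetY / tileSize) * tileSize;
                return tileTop - playerHeight;
            }

            public static void main(String[] args) {
                // Feet have sunk 3 px into a 32-px tile whose top edge is at y = 320.
                System.out.println(snapToTileTop(323f, 64f, 32)); // 256.0 -> feet back at y = 320
            }
        }
    Calling something like this in the "collided below" branch of the fall code keeps the feet on the surface regardless of how large fallSpeed is.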

    Read the article

  • Frame rate on one of two machines running same code seems to be capped at 60 for no reason

    - by dennmat
    ISSUE I recently moved a project from my laptop to my desktop(machine info below). On my laptop the exact same code displays the fps(and ms/f) correctly. On my desktop it does not. What I mean by this is on the laptop it will display 300 fps(for example) where on my desktop it will show only up to 60. If I add 100 objects to the game on the laptop I'll see my frame rate drop accordingly; the same test on the desktop results in no change and the frames stay at 60. It takes a lot(~300) entities before I'll see a frame drop on the desktop, then it will descend. It seems as though its "theoretical" frames would be 400 or 500 but will never actually get to that and only do 60 until there's too much to handle at 60. This 60 frame cap is coming from no where. I'm not doing any frame limiting myself. It seems like something external is limiting my loop iterations on the desktop, but for the last couple days I've been scratching my head trying to figure out how to debug this. SETUPS Desktop: Visual Studio Express 2012 Windows 7 Ultimate 64-bit Laptop: Visual Studio Express 2010 Windows 7 Ultimate 64-bit The libraries(allegro, box2d) are the same versions on both setups. CODE Main Loop: while(!abort) { frameTime = al_get_time(); if (frameTime - lastTime >= 1.0) { lastFps = fps/(frameTime - lastTime); lastTime = frameTime; avgMspf = cumMspf/fps; cumMspf = 0.0; fps = 0; } /** DRAWING/UPDATE CODE **/ fps++; cumMspf += al_get_time() - frameTime; } Note: There is no blocking code in the loop at any point. Where I'm at My understanding of al_get_time() is that it can return different resolutions depending on the system. However the resolution is never worse than seconds, and the double is represented as [seconds].[finer-resolution] and seeing as I'm only checking for a whole second al_get_time() shouldn't be responsible. My project settings and compiler options are the same. And I promise its the same code on both machines. My googling really didn't help me much, and although technically it's not that big of a deal. I'd really like to figure this out or perhaps have it explained, whichever comes first. Even just an idea of how to go about figuring out possible causes, because I'm out of ideas. Any help at all is greatly appreciated.

    Read the article

  • Developing a Support Plan for Cloud Applications

    - by BuckWoody
    Last week I blogged about developing a High-Availability plan. The specifics of a given plan aren't as simple as "Step 1, then Step 2" because in a hybrid environment (which most of us have) the situation changes the requirements. There are those that look for simple "template" solutions, but unless you settle on a single vendor and a single way of doing things, that's not really viable. The same holds true for support. As I've mentioned before, I'm not fond of the term "cloud", and would rather use the term "Distributed Computing". That being said, more people understand the former, so I'll just use that for now. What I mean by Distributed Computing is leveraging another system or setup to perform all or some of a computing function. If this definition holds true, then you're essentially creating a partnership with a vendor to run some of your IT - whether that be IaaS, PaaS or SaaS, or more often, a mix.
    In your on-premises systems, you're the first and sometimes only line of support. That changes when you bring in a Cloud vendor. For Windows Azure, we have plans for support that you can pay for if you like: http://www.windowsazure.com/en-us/support/plans/ You're not off the hook entirely, however. You still need to create a plan to support your users in their applications, especially for the parts you control. The last thing they want to hear is "That's vendor X's problem - you'll have to call them." I find that this is often the last thing the architects think about in a solution. It's fine to put off the support question prior to deployment, but I would hold off on calling it "production" until you have that plan in place.
    There are lots of examples, like this one: http://www.va-interactive.com/inbusiness/editorial/sales/ibt/customer.html some of which are technology-specific. Once again, this is an "it depends" kind of approach. While it would be nice if there was just something in a box we could buy, it just doesn't work that way in a hybrid system. You have to know your options and apply them appropriately.

    Read the article

  • Why do some user agents have spam urls in them (and why are they always Opera/Presto User-Agents)?

    - by Erx_VB.NExT.Coder
    If you go to (say) the last 100 entries (visits) to the botsvsbrowsers.com website (exact link, feel free to take a look: http://www.botsvsbrowsers.com/recent/listings/index.html ), you'd notice that almost every User Agent that has the keywords "Opera" and "Presto" inside it will almost certainly have a web link (URL / web address) inside it, and it won't just be a normal web address, but an HTML anchor tag/link to that address. Why is this so? I could not even find a single discussion about it on the internet, nowhere; I tried varying my search terms many times. If the user agent contains the words "Opera" and "Presto" it doesn't mean it will have this web link, but it means there is about an 80% chance that it will. A typical anchor tag/link inside a user agent will look like this:
        Mozilla/4.0 <a href="http://osis-uk.co.uk/disabled-equipment">disability equipment</a> (Windows NT 5.1; U; en) Presto/2.10.229 Version/11.60
    If you check it out at the website, http://www.botsvsbrowsers.com/recent/listings/index.html , you will notice that the back and forward arrows are there in unescaped format. This isn't just true for botsvsbrowsers, but several other user agent listing sites. I'm really confused and feel like I'm in a room full of 10,000 people and am the only one seeing this ghost :). If I'm doing statistical analysis, should I include or exclude this type of user agent from my listing (i.e. are these just normal users who've set their user agents to attempt to drive some traffic to their sites as they browse the web), or is there something else going on?
    The fact that it is so consistent in terms of its format leads me to believe that it is an automated process (the setting or alteration of the user agent), so I cannot decide or understand the process by which this change is made (I know how to change a user agent), but I am unsure which program or facility is doing this, especially since it is exclusive to Opera (Presto) user agents that are beyond, I think, an 8 or 9 point something browser version. I've run some statistical tests, parsing entries from all over the place, writing custom programs, to get a better understanding of this. Keep in mind that I see normal URLs in user agents infrequently; they are just text such as +http://www.someSite.com appended to a user agent normally, especially if it's a crawler or bot providing its service URL. This is normal and isn't done with an embedded link (A HREF=) etc, so I'm not talking about "those".

    Read the article

  • SSMS Tools Pack 2.7 is released. New website, improved licensing and features.

    - by Mladen Prajdic
    New website Nice, isn't it? Cleaner, simpler, better looking and more modern. If you have any suggestions for further improvements I'd be glad to hear them. Simpler licensing With SSMS tools Pack 2.7 the licensing is finally where it should be. It is now based on the activate/deactivate model. This way you can move a license from machine to machine with simple deactivation on one and reactivation on another machine. Much better, no? Because of very good feedback I have added an option for 6 machines and lowered the 4 machines option to 3 machines. This should make it much simpler for you to choose the right option for yourself. Improved features Version 2.5.3 was already extremely stable and 2.7 continues with that tradition. Because of that I could fully focus on features and why 3.0 will rock even more that 2.7! ;) In version 2.7 I have addressed quite a few improvements you were requesting for a while now. SQL History This is probably the biggest time saver out there, therefore it's only fair it gets a few important updates. If you have an existing .sql file opened, the Window Content History now saves your code to that existing file and also makes a backup in the SQL History log default location. Search is still done through the SQL History log but the Tab Sessions Restore opens your existing .sql file. This way you don't have to remember to save your existing files by yourself anymore. A bug when you couldn't search properly if you copied the log files to a new location was fixed. Unfortunately this removed the option to filter a search with the time component. The smallest search interval is now one day. The SSMS Tools Pack now remembers the visibility of the Current Window History window when you exit SSMS. SQL Snippets You can now set the position of the cursor in your snippets by placing {C} somewhere in your snippet. It's a small improvement but can be a huge time saver since you don't have to move through the snippet to the desired location anymore. Run script on multiple databases Database choices can now be saved with a name and then loaded again next time. You can also choose to run the script in a new window for each chosen database. Search through grid results You can now go previous/next search result with the Prev/Next control inside the search window. This is extremely useful if you have a large resultset. IT saves you the scrolling. CRUD generator Four new variables have been added: |CurrentDate| writes current date in format yyyy-MM-dd to your script |CurrentTime| writes current time in 24h format HH:mm:ss to your script |CurrentWinUser| writes current Windows logged on user to your script |CurrentSqlUser| writes current SQL logged on login to your script This was actually quite a requested feature so if you have any other ideas for extra variables, do let me know. That's about it. I hope you're going to enjoy this version as much as the previous ones. Have fun!

    Read the article
