Search Results

Search found 8367 results on 335 pages for 'temporal difference'.

Page 193/335 | < Previous Page | 189 190 191 192 193 194 195 196 197 198 199 200  | Next Page >

  • Sync KeePassX with KeePass2

    - by bioShark
    Simply put: In Ubuntu I am using KeePassX and in Windows KeePass2. I am not able to export/import passwords from one to the other. I would prefer to use the same database, but I don't really know how. If there is no way to sync the two, can you recommend another password vault that can sync passwords between two OSes using a shared DB? Thanks. I am using Ubuntu 12.04 and Win 7. Edit: I have noticed that KeePass2 is available in the Software Center, so I have installed it, and I can successfully open my Win7 database. Now I will migrate my KeePassX passwords. I am now seeing a huge difference in the looks. While KeePassX doesn't exactly have an Ubuntu-like look & feel, it's 100 times more elegant than the interface KeePass2 comes with. Well, maybe that was my initial reason for installing KeePassX on my Ubuntu machine; I can't remember. @fossfreedom, please add your comment as a response, so that I can accept it. Thanks for the suggestion.

    Read the article

  • Canonicals with differing content

    - by Jimbo Jonny
    Interesting conundrum here with canonicals. Let's say I have a site with a "verified" system where other websites can become so-and-so "verified". The URL they send people to to confirm verification is something like "blah.com/verify/company1" and "blah.com/verify/company2". But logically "blah.com/verify" itself is not verifying anyone in particular, so it redirects to the signup form to get verified, at "blah.com/verify/register". As far as the actual companies registered, I figure it doesn't make sense to index every individual URL when the only tiny difference is which company name it's saying yay or nay to being verified, so canonicals could come in handy on those pages to condense the indexing. Yet making "blah.com/verify" the canonical "hub" doesn't work well because it's a signup form, not a verification page, so technically it has quite different content from the various verification pages themselves. At the same time, it's a bit unfair to choose one company to point all the canonical benefits to by using it as the "hub", yet a bit wasteful to have Google index every individual verification page and spread out all that link juice. Basically, I'm just looking for advice: what's best for this from a search engine standpoint?

    Read the article

  • Squeezing as much SEO out of a URL as possible

    - by John Isaacks
    I am working on an ecommerce site. I told our SEO consultant that I plan to make the URL scheme /products/<id>/<name>. This is similar to Stack Overflow's URLs, which are /questions/<id>/<title>. He asked me if I could change the URL scheme to /p/<id>/<name> instead. I know why he wants this change: the word "products" isn't needed to find the correct product, and it doesn't offer any SEO, so shortening it to just p would make the relevant keywords in the <name> weigh more. His main priority is maximizing SEO, but the part I don't think he is considering is how this affects the semantics of the site. Also, having the word "products" looks like it has meaning and a reason for being there; just having a p looks chaotic and ugly to me. I also don't think it makes that much of a difference, does it? Stack Overflow doesn't use /q/<id>/<title> and they do just fine, though I realize there are many factors at play here, not just the URL. So I want some outside opinions on which way is better and why.

    Read the article

  • Click and Drag from Clickpad stops working after a while 12.04

    - by Jason O'Neil
    I've got a Samsung notebook (NP-QX412-S01AU) with a touchpad / clickpad. I'm running 12.04 Precise. When I first log into my computer, the touchpad behaves exactly as expected and desired. The longer I stay logged in, the more it degrades. I'll try to describe it. There are 3 ways of "dragging" on this clickpad: (1) physically click and hold with one finger, and drag around while still holding it down, all with one finger; (2) physically click and hold with one finger, then with another finger drag around to move the cursor; (3) double tap (not a physical click) and on the second tap, hold and drag. I most naturally use option 1, but here's how it works: when I first turn the machine on, options 1, 2 and 3 all work. After a while, only options 2 and 3 work. Later still, only option 3 works. Restarting X causes all 3 to work again. I've compared the output of "synclient" in each of the states, and there was no difference. Anybody know what to look at? Or at the very least, a command I can run to "restart" the mouse driver without restarting X?

    Read the article

  • Split a 2D scene in layers or have a z coordinate

    - by Bane
    I am in the process of writing a 2D game engine, and a dilemma emerged. Let me explain the situation... I have a Scene class, to which various objects can be added (Drawable, ParticleEmitter, Light2D, etc), and as this is a 2D scene, things will obviously be drawn over each other. My first thought was that I could have basic add and remove methods, but I soon realized that then there would be no way for the programmer to control the order in which things were drawn. So I came up with two options, each with its pros and cons. A) would be to split the scene into layers. By that I mean instead of having the scene be a container of objects, have it be a container of layers, which are in turn the containers of objects. B) would require some kind of z-coordinate, and then have the scene sorted so objects with lower z get drawn first. Option A is pretty solid, but the problem is with the lights. To which layer do I add a light? Does it work cross-layer? On all bottom layers? And I still need the Z coordinate to calculate the shadow! Option B would require me to change all my code from having Vector2D positions to some kind of class that inherits from Vector2D and adds a z coordinate to it (I don't want it to be a Vector3D because I still need all the same methods the 2D kind has, just with .z tacked on). Am I missing something? Is there an alternative to these methods? I'm working in Javascript, if that makes a difference.
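
    As a rough illustration of option B (the question is in Javascript, but the idea is language-agnostic), the scene can keep a flat list and simply sort it by z before each draw pass. The sketch below is in Java purely for illustration, and every name in it (SceneObject, z, draw) is hypothetical, not part of the asker's engine:

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        // Hypothetical sketch of option B: a flat scene sorted by a z value at draw time.
        abstract class SceneObject {
            double x, y;   // the existing 2D position
            double z;      // draw order only; lower z is drawn first (i.e. behind)
            abstract void draw();
        }

        class Scene {
            private final List<SceneObject> objects = new ArrayList<>();

            void add(SceneObject o)    { objects.add(o); }
            void remove(SceneObject o) { objects.remove(o); }

            void draw() {
                // Sort back-to-front, then paint in order.
                objects.sort(Comparator.comparingDouble(o -> o.z));
                for (SceneObject o : objects) {
                    o.draw();
                }
            }
        }

    Keeping z as a separate field on the scene object, rather than subclassing Vector2D, sidesteps the concern about mixing the 2D math type with draw ordering.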

    Read the article

  • Orthographic unit translation mismatch on grid (e.g. 64 pixels translates incorrectly)

    - by Justin Van Horne
    I am looking for some insight into a small problem with unit translations on a grid.

    Setup: a 512x448 window, a 64x64 grid, and

        gl_Position = projection * world * position;

    where projection is defined by ortho(-w/2.0f, w/2.0f, -h/2.0f, h/2.0f) (a textbook orthographic projection function), world is defined by a fixed camera position at (0, 0), and position is the sprite's position.

    Problem: In the screenshot below (1:1 scaling) the grid spacing is 64x64 and I am drawing the unit at (64, 64); however, the unit draws roughly ~10px away from the expected position. I've tried uniform window dimensions to prevent any distortion of the pixel size, but now I am a bit lost as to the proper way of providing a 1:1 pixel-to-world-unit projection. Anyhow, here are some quick images to aid with the problem. I decided to superimpose a bunch of the sprites at what the engine believes are 64-unit offsets. When this seemed out of place, I tried the base case of 1 unit, which seemed to line up as expected. The yellow shows a 1px difference in the movement.

    Vertices: It would appear that the vertices going into the vertex shader are correct. For example, in reference to the first image, the data looks like this in the VBO (left: original position, right: after the 64-unit translation):

                 x     y           x     y
        ------------------------------------
        tl |   0.0  24.0        64.0  24.0
        bl |   0.0   0.0   ->   64.0   0.0
        tr |  16.0   0.0        80.0   0.0
        br |  16.0  24.0        80.0  24.0

    With that said, all I am left to believe is that I am munging up my actual projection. So, I am looking for any insight into maintaining the 1:1 pixel-to-world-unit projection.
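
    For context on why a 64-unit offset can land several pixels off, the pixel position implied by this kind of orthographic setup can be written out explicitly; if the w and h fed into ortho differ from the actual viewport size (a common cause of this symptom), the scale factor below is no longer exactly 1:

        % Mapping of a world x-coordinate to pixels under ortho(-w/2, w/2, -h/2, h/2)
        % with the camera at the origin and a viewport W pixels wide.
        x_{\text{ndc}} = \frac{2x}{w},
        \qquad
        x_{\text{px}} = \left(\frac{x_{\text{ndc}} + 1}{2}\right) W
                      = x\,\frac{W}{w} + \frac{W}{2}.
        % One world unit therefore covers W / w pixels; the mapping is 1:1 only when
        % the w passed to ortho equals the viewport width W exactly (and likewise for h).

    For instance, with W = 512 and w = 448 (i.e. width and height accidentally swapped), a 64-unit offset would land at about 64 * 512/448 = 73 px, which is in the ballpark of the ~10px discrepancy described.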

    Read the article

  • Self Service Reporting With PowerPivot

    - by blakmk
    There are so many cool new features in SQL Server 2008 R2 that it was difficult for me to pick a topic for T-SQL Tuesday. But the one that I am now a secret fan of, I once resented for its creation. Let me explain: for years I have encountered reporting systems cobbled together in tools like Access and Excel, built by "database hobbyists" who had no formal training in database design or best practices. They would take their monstrosities as far as they could go before ultimately they stopped working or the person who wrote them left the company. At that point it would become the resident DBA's problem to support it as a live application. So when I first heard of PowerPivot, a sense of deja vu overtook me and I felt like the guy in the Austin Powers movie, knowing the inevitable is coming but somehow unsure how to get out of the way. But when I eventually saw it in action, I quickly realised that it is a very powerful tool. It has a much smaller "time to market" than traditional BI architectures. Combined with the new features of Excel, some pretty impressive dashboards can be produced. Of course PowerPivot is not a magic bullet, and along with potential scalability issues there are the usual issues such as master data management and data quality that cannot be overcome easily with PowerPivot. As a tool, though, it has potential. Traditional BI is expensive, both in terms of time and the amount of resources it takes to deliver the system. The time lag between an analyst or a commercial accountant requesting reports and the report being delivered can make a huge commercial difference. I have observed companies where empowered end users become extremely productive when allowed to plough into various disparate datasets. It may not be the correct way or the most sustainable, but it's cheap and quick. In these times when budgets are being slashed and we are forced to deliver more with less, why not empower the end user with a tool that is designed for exactly this task.... @blakmk

    Read the article

  • Automatic TRIM vs. manual TRIM

    - by Eike Cochu
    I am currently trying to find out how to trim with my new ThinkPad and was wondering about the difference between manual and online trimming. Here is my setup: a ThinkPad T430s with a Samsung 830 SSD, 128GB, and Xubuntu 12.10. Here are some outputs to check whether TRIM will work on my system (taken from http://wiki.ubuntuusers.de/SSD/TRIM):

        root@eike-tp:~# sudo hdparm -I /dev/sda | grep -i TRIM
           * Data Set Management TRIM supported (limit 8 blocks)

    First, I tried online trimming (How to enable TRIM?). My fstab with discard inserted:

        UUID=d6c49c17-a4f1-466c-9f7e-896c20db3bba / ext4 discard,noatime,errors=remount-ro 0 1
        # swap was on /dev/sda5 during installation
        UUID=a0322f5f-c6c1-4896-863f-668f0638d8cf none swap sw 0 0
        tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0

    I tried to test whether it works (but I don't get any zeroes when I try it with /dev/sda), and found out that this test method is only possible with SSD type 2, while I seem to have type 3. So I don't know if it works or not. The Ubuntu wiki (first link) recommends manual trimming, so I set up a daily cronjob instead of discard:

        #!/bin/sh
        LOG=/var/log/batched_discard.log
        echo "*** $(date -R) ***" >> $LOG
        fstrim -v / >> $LOG

    The wiki article suggests weekly or daily. Now to my questions: How often does the automatic (online) trim run? How often is recommended? Online vs. manual trimming? Thank you for your help.

    Read the article

  • Statistical Software Quality Control References

    - by Xodarap
    I'm looking for references about hypothesis testing in software management. For example, we might wonder whether "crunch time" leads to an increase in defect rate - this is a surprisingly difficult thing to do. There are many questions on how to measure quality - this isn't what I'm asking. And there are books like Kan which discuss various quality metrics and their utilities. I'm not asking that either. I want to know how one applies these metrics to make decisions. E.g. suppose we decide to go with critical errors / KLOC. One of the problems we'll have to deal with is that this is not a normally distributed data set (almost all patches have zero critical errors). And further, it's not clear that we really want to examine the difference in means. So what should our alternative hypothesis be? (Note: based on previous questions, my guess is that I'll get a lot of answers telling me that this is a bad idea. That's fine, but I'd request that they be based on published data, instead of your own experience.)
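
    As one concrete illustration of the kind of test that avoids both the normality assumption and the focus on means, a Mann-Whitney U test compares two samples of defects-per-KLOC as distributions. The sketch below uses Apache Commons Math 3 (assumed to be on the classpath) and entirely made-up data; it is not from the question:

        import org.apache.commons.math3.stat.inference.MannWhitneyUTest;

        public class CrunchDefectRates {
            public static void main(String[] args) {
                // Hypothetical critical-errors-per-KLOC figures for patches written
                // during crunch time vs. normal time (many zeros, clearly non-normal).
                double[] crunch = {0, 0, 0, 1.2, 0, 0.4, 0, 2.1, 0, 0.9};
                double[] normal = {0, 0, 0, 0, 0.3, 0, 0, 0.5, 0, 0};

                // Null hypothesis: both samples come from the same distribution.
                // The alternative is a shift in distribution, not a difference in means.
                MannWhitneyUTest test = new MannWhitneyUTest();
                double pValue = test.mannWhitneyUTest(crunch, normal);
                System.out.printf("Mann-Whitney p-value: %.4f%n", pValue);
            }
        }

    Whether a rank-based shift is the right alternative hypothesis for defect data is exactly the judgment call the question is asking about; the sketch only shows the mechanics.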

    Read the article

  • Ubuntu 12.04 VPN default route issue when using the UI

    - by Pieter Pabst
    When setting up a VPN connection using the UI (System Settings → Network), the VPN connection is used as the default route after connecting to it. How can I avoid this? That is, without assigning manual addresses; I want the connection to use DHCP. It should use the eth* interfaces for my default route, like they are used before making the connection. The VPN route should only be used when connecting to addresses in the ppp0 adapter's range. If possible using the UI only; not that I'm not used to editing config files, but the point of using Ubuntu for me was the "it just works" concept (which is true 99% of the time, keep up the good work). I tried setting the "Method" combobox in the "Configure..." dialogue to "Automatic (VPN) addresses only". That doesn't make a difference. (Another thing I noticed: the UI shows my connection as a "Wireless connection" in the list control on the left. Don't know why... it has the right icon, but that's probably something for the bug tracker.)

    Read the article

  • SEO + international sites? country.domain.com or domain.country?

    - by Pure.Krome
    Hi folks, is it better to have separate country-specific domains (which cost more money) or subdomains which define the country, for better SEO? e.g. stackoverflow.com, stackoverflow.com.au, stackoverflow.co.uk vs stackoverflow.com, au.stackoverflow.com, uk.stackoverflow.com. Assumption: in the search engine webmaster tools, each subdomain is associated with a country, e.g. au.stackoverflow.com is associated with the country Australia. Cheers! Update: I understand that both methods do work, especially when I use the assumption listed above. The question is: which method is better? Is there only a small SEO difference between them, or is the first method way way way better than the second at getting SEO results? Update #2: A number of folks have suggested that the following is a good/better approach: stackoverflow.com/, stackoverflow.com/au, stackoverflow.com/uk. By adding a country-specific ISO code as the first folder of the URL, that folder can be recognised as the country. But a number of SEO mates have suggested that this is a waste of valuable folder-level space. Er.. how can I explain? OK, it's been suggested by some SEO experts that if the number of levels or folders in the URL exceeds 5 then the page drops dramatically in importance. Basically, you don't want to make it deep. As such, adding the country as the first level can be considered a waste, especially when it can be handled by the domain OR subdomain - hence the question :) So, any more thoughts on this? (Maybe SO is the wrong place to ask this question?)

    Read the article

  • Sprites rendering blurry with velocity

    - by ashes999
    After adding velocity to my game, I feel like my textures are twitching. I thought it was just my eyes, until I finally captured it in a screenshot: The one on the left is what renders in my game; the one on the right is the original sprite, pasted over. (This is a screenshot from Photoshop, zoomed in 6x.) Notice the edges are aliasing -- it looks almost like sub-pixel rendering. In fact, if I had not forced my sprites (which have position and velocity as ints) to draw using integer values, I would swear that MonoGame is drawing with floating point values. But it isn't. What could be the cause of these things appearing blurry? It doesn't happen without velocity applied. To be precise, my SpriteComponent class has a Vector2 Position field. When I call Draw, I essentially use new Vector2((int)Math.Round(this.Position.X), (int)Math.Round(this.Position.Y)) for the position. I had a bug before where even stationary objects would jitter -- that was due to me using the straight Position vector and not rounding the values to ints. If I use Floor/Ceiling instead of round, the sprite sinks/hovers (one pixel difference either way) but still draws blurry.

    Read the article

  • Are Intel compilers really better than Microsoft ones?

    - by Rocket Surgeon
    Years ago I was surprised when I discovered that Intel sells Visual Studio-compatible compilers. I tried one in particular for C/C++, along with the fantastic diagnostic tools. But the code was simply not computationally intensive enough for me to notice the difference. The only impression was: did Intel really do it for me just now? Wow, amazing tools with nanosecond resolution, unbelievable. But the trial ended and the team never seriously considered a purchase. From your experience, if license cost does not matter, which vendor is the winner? This is not a broad or vague question or an attempt to spark a holy war; it is a question about two very visible tools. Nobody likes it when tools have any mysteries or surprises. And choices between best and best are always a pain. I also understand the "grass is greener" argument. I want to hear all the "what if" stories. What if Intel just optimizes locally for the chip stepping of the month, and not every hardware target will actually work as well as code compiled by Microsoft? What if AMD hardware is the target and everything slows down for no reason? Or, on the other hand, what if Intel's hardware has so many overlooked opportunities that Microsoft's compiler writers are too slow to adopt them and never implement them in the compiler? What if both are exactly the same, actually a single codebase just wrapped into two different boxes and licensed to both vendors by some 3rd party shop? And so on. But surely someone knows some answers.

    Read the article

  • How To Scale Canvas In Android

    - by Daniel Braithwaite
    I am writing an Android game using Canvas as the way to draw everything. The problem is that when I run it on different Android phones the canvas doesn't change size; I tried using canvas.scale() but that didn't make any difference. The code I use for drawing is:

        public void draw( Canvas c, int score ) {
            Obstical2[] obstmp = Queue.toArray(this.o);
            Coin[] cointmp = QueueC.toArray(this.c);
            for( int i = 0; i < obstmp.length; i++ ) {
                obstmp[i].draw(c);
            }
            for( int i = 0; i < cointmp.length; i++ ) {
                cointmp[i].draw(c);
            }
            c.drawText(String.format("%d", score ), 20, 50, textPaint);
            if( isWon && isStarted )
                c.drawText("YOU WON", 20, 400, resPaint);
            else if( isLost && isStarted )
                c.drawText("YOU LOST", 20, 400, resPaint);
        }

    The function above calls the draw functions for the entities on the screen. These functions are as follows. Draw function for Coin:

        public void draw( Canvas c ) {
            Log.i("D", "COIN");
            coin.draw(c);
        }

    Draw function for Obstical:

        public void draw( Canvas c ) {
            obstical.draw(c);
        }

    How could I make the canvas re-size so it would look the same on any screen? Cheers, Daniel
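
    One common approach is to lay everything out against a fixed reference resolution and scale the Canvas by the ratio of the real surface to that reference before drawing anything. A minimal sketch follows, assuming a 480x800 design resolution; the class and constant names are made up, not taken from the question:

        import android.graphics.Canvas;

        // Sketch only: design against a fixed reference resolution and scale the
        // Canvas to the real surface before drawing. Constants are assumptions.
        class ScaledRenderer {
            private static final float DESIGN_WIDTH = 480f;
            private static final float DESIGN_HEIGHT = 800f;

            void draw(Canvas c, int score) {
                c.save();
                // Scale factors from the reference resolution to the actual canvas size.
                float scaleX = c.getWidth() / DESIGN_WIDTH;   // e.g. 1.5 on a 720px-wide screen
                float scaleY = c.getHeight() / DESIGN_HEIGHT;
                c.scale(scaleX, scaleY);

                // ... the existing obstacle/coin/score drawing, unchanged,
                //     still using coordinates laid out for 480x800 ...

                c.restore();
            }
        }

    Scaling x and y independently stretches sprites on screens with a different aspect ratio; using Math.min(scaleX, scaleY) for both axes keeps proportions at the cost of letterboxing.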

    Read the article

  • Minimizing data sent over a webservice call on an expensive connection

    - by aceinthehole
    I am working on a system that has many remote laptops all connected to the internet through cellular data connections. The application will synchronize periodically with a central database. The problem is that, due to factors outside our control, the cost of moving data across the cellular networks is spectacularly high. Currently we are sending a compressed XML file across the wire, where it is processed and various things are done with it (mainly stuffing it into a database). My first thought was to convert that XML doc to JSON just prior to transmission and convert it back to XML just after receipt on the other end, and get some extra compression for free without changing much. Another thought was to test various other compression algorithms to determine the smallest one possible. Although, I am not entirely sure how much difference JSON vs. XML would make once it is compressed. I thought that there must be resources available that address this problem from an information theory perspective. Does anyone know of any such resources, or have suggestions on what direction to go in? This is developed on the MS .NET stack on Windows, for reference.
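
    One quick way to answer the "how much difference does JSON vs. XML make once compressed" question is simply to gzip both representations of a real payload and compare sizes. A rough sketch follows; the sample strings are invented, and it is written in Java only to illustrate the measurement (the production system here is .NET, where the equivalent stream classes exist):

        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.util.zip.GZIPOutputStream;

        public class PayloadSizeCheck {
            // Returns the gzip-compressed size of a string, in bytes.
            static int gzippedSize(String payload) throws IOException {
                ByteArrayOutputStream buffer = new ByteArrayOutputStream();
                try (GZIPOutputStream gzip = new GZIPOutputStream(buffer)) {
                    gzip.write(payload.getBytes(StandardCharsets.UTF_8));
                }
                return buffer.size();
            }

            public static void main(String[] args) throws IOException {
                // Invented stand-ins for one synchronization record.
                String xml  = "<record><id>42</id><name>pump station 7</name><status>ok</status></record>";
                String json = "{\"id\":42,\"name\":\"pump station 7\",\"status\":\"ok\"}";

                System.out.println("raw  xml=" + xml.length() + " json=" + json.length());
                System.out.println("gzip xml=" + gzippedSize(xml) + " json=" + gzippedSize(json));
            }
        }

    On realistic payloads the gap usually narrows considerably after compression, because gzip removes much of the redundancy that makes XML verbose in the first place.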

    Read the article

  • JSR Updates - Multiple JSRs migrate to latest JCP version

    - by Heather VanCura
    As part of the JCP.Next reform effort, many JSRs have migrated to the latest version of the JCP program in the last month. These JSRs' Spec Leads and Expert Groups are contributing to the strides the JCP has been making to bring greater community transparency, participation and agility to JSR development through the JCP program. Any other JSR Spec Leads interested in migrating to the latest JCP version (now JCP 2.9, as of 13 November, incorporating the merged Executive Committee (EC)) should see the Spec Lead Guide for instructions on migrating. For JCP 2.8 JSRs, you are effectively already operating under JCP 2.9, since there are no longer two ECs. This is the difference for JCP 2.8 JSRs migrating to JCP 2.9 -- a merged EC. To make the migration official, just inform your Expert Group on a public channel and email your request to admin at jcp.org. The migrated JSRs are:

        JSR 310, Date and Time API, led by Stephen Colebourne, Michael Nascimento and Oracle (Roger Riggs)
        JSR 349, Bean Validation 1.1, led by RedHat (Emmanuel Bernard)
        JSR 350, Java State Management, led by Oracle (Mitch Upton)
        JSR 339, JAX-RS 2.0: The Java API for RESTful Web Services, led by Oracle (Santiago Pericas-Geertsen and Marek Potociar)
        JSR 347, Data Grids for the Java Platform, led by RedHat (Manik Surtani)

    Read the article

  • Paying it Forward

    - by Liam McLennan
    You’re a talented guy (or girl). You’ve done alright. Years of hard work and stick-to-it-iveness have paid off and left you with plenty, and an opportunity to make a positive difference to someone else. And then there are people with less than they need. Sometimes all they need to help themselves is a start. Opportunity International provide microfinancing to help people grow their small businesses so that they can afford food, shelter, water and education. MonsoonerOrLater: This June, Chris, Angus (my brother) and I are travelling to India, entering a rickshaw race and raising money for charity (Opportunity International and Round Table India). The Deccan Odyssey is a nine-day rally, racing up the coast of India from Goa to Mumbai in a three-wheeled motorbike. If you would like to support us, please make a tax-deductible donation via our secure site at GoFundraise. For more information take a look at the MonsoonerOrLater site. If you live in Brisbane come along to An Evening With MonsoonerOrLater. The entry fee includes a three-course Indian meal, live music, a henna tattooist, a chilli eating competition, a best Bollywood dance-off, Dhalsim vs Dhalsim Street Fighter, the Delhi Belly Bet, auctions and prizes. All profits go to our charities: Round Table India and Opportunity International.

    Read the article

  • Meet up with the JCP at JavaOne Latin America

    - by Heather VanCura
    The JCP made it to JavaOne Brazil! We had a quickie presentation earlier today on JCP.Next that was well attended. Come see us at the OTN mini-theatre tomorrow from 12:00-12:15 pm for a quickie on participation. Then make your way to the Mazanino Sala 12 at 12:30 pm for CON-22250. "The Java Community Process: How You Can Make a Positive Difference" will be presented by Heather VanCura, JCP, and Fabio Velloso, SouJava, on Thursday, 6 December, at 12:30 pm. Find out more about how to participate in the JCP program, the JCP.Next effort and how to get involved with Adopt-a-JSR through your JUG (or on your own)! Here is the description (translated from Portuguese): The JCP plays a fundamental role in the evolution of Java. The session will emphasize the value of transparency and participation through the JCP, Java User Groups and the Adopt-a-JSR program. We will also explore some of the upcoming changes to the process through the JCP.Next initiative, and explain how you can get involved. Bring your questions, your suggestions and your concerns. We want to hear from you, and we want to encourage and facilitate your active participation in advancing the Java platform.

    Read the article

  • What is meant by, "A user shouldn't decide whether it is an Admin or not. The Privileges or Security system should."

    - by GlenPeterson
    The example used in the question "pass bare minimum data to a function" touches on the best way to determine whether the user is an administrator or not. One common answer was: user.isAdmin(). This prompted a comment which was repeated several times and up-voted many times: "A user shouldn't decide whether it is an Admin or not. The Privileges or Security system should. Something being tightly coupled to a class doesn't mean it is a good idea to make it part of that class." I replied, "The user isn't deciding anything. The User object/table stores data about each user. Actual users don't get to change everything about themselves." But this was not productive. Clearly there is an underlying difference of perspective which is making communication difficult. Can someone explain to me why user.isAdmin() is bad, and paint a brief sketch of what it looks like done "right"? Really, I fail to see the advantage of separating security from the system that it protects. Any security text will say that security needs to be designed into a system from the beginning and considered at every stage of development, deployment, maintenance, and even end-of-life. It is not something that can be bolted on the side. But 17 up-votes so far on this comment say that I'm missing something important.
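
    For readers wondering what the commenters might mean in practice, here is a minimal sketch of one arrangement that position is often taken to imply: the User object carries identity only, and a separate authorization service owns the is-this-allowed decision. All type and method names below are hypothetical illustrations, not necessarily the design the commenters had in mind:

        import java.util.Map;
        import java.util.Set;

        enum Permission { DELETE_POST, MANAGE_USERS }

        final class User {
            final String id;          // identity and profile data only
            User(String id) { this.id = id; }
        }

        interface AuthorizationService {
            boolean isAllowed(User user, Permission permission);
        }

        // One possible policy: permissions granted per user id.
        final class RoleBasedAuthorizationService implements AuthorizationService {
            private final Map<String, Set<Permission>> grantsByUserId;

            RoleBasedAuthorizationService(Map<String, Set<Permission>> grantsByUserId) {
                this.grantsByUserId = grantsByUserId;
            }

            @Override
            public boolean isAllowed(User user, Permission permission) {
                return grantsByUserId.getOrDefault(user.id, Set.of()).contains(permission);
            }
        }

        // Call sites ask the security system, not the user object:
        //   if (auth.isAllowed(currentUser, Permission.MANAGE_USERS)) { ... }

    The practical argument for this split is that the policy (roles, groups, time-limited grants, external identity providers) can change without touching the User class or its callers; whether that flexibility is worth the indirection is exactly what the question is debating.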

    Read the article

  • Attempting to install a Netgear N300 Wireless USB Adapter on Ubuntu without an existing internet connection

    - by Liz
    Hello Linux/Ubuntu world out there. I don't currently have internet on the desktop I am trying to install the USB wireless adapter on. That lack of a connection seems to be the problem, and it is exactly what the adapter would fix if I could get it working. I can NOT access the internet via anything but wireless. I am presently on my laptop searching for answers while trying to install this little device, so any advice will have to take that into account. So far I have tried: WINE, which does not want to work; Windows Wireless Drivers, which does not want to work either; and Software Sources, Other Software, which will not acknowledge the CD-ROM as a repository, giving errors like "E: Unable to stat the mount point /cdrom/ - stat (2: No such file or directory)". However, I can open the CD icon on my computer and access and browse the files. The computer can read the CD; I can read the CD. I've also tried just plugging it in and seeing if the computer will automatically recognize the hardware and go from there. That does not work either. I have tested the USB port just to verify that it works. It does. My laptop recognizes the hardware, and would easily install the software if I prompted it to. The difference is that my laptop is Vista, and I HATE Vista. Any tips, tricks?

    Read the article

  • Laptop Charger Not Recognised Properly on Samsung NP900X3F

    - by user193732
    Firstly, thanks for your time. Secondly, I'm having an issue with my power charger on my Samsung Series 9 NP900X3F. When I boot into Ubuntu with the charger plugged in, it recognises it as charging. When I unplug the charger after this, it still says it is charging. If I suspend in Ubuntu and then plug/unplug during this suspended state it recognises the change, but not during normal running. If I knew a little more I'm sure I could grab logs and find out what the difference between wake-on-suspend and normal running is, but alas I need help! I am also having issues with my keyboard backlight via the Fn keys, but I care about that far less. Thank you very much. Linux mikey-900X3F 3.12.0-031200rc1-generic #201309161735 SMP Mon Sep 16 21:38:21 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux (I upgraded my kernel version to remove heinous horizontal artefacts I was getting.) Happy to list more info about my system; I'm a bit of a noob. I did try searching, however I can't find any questions at all about my system or related models with the same issue.

    Read the article

  • Order in which passphrase is asked for encrypted volumes

    - by Lars Kotthoff
    I have installed 12.10 on a machine with two disks. The root partition is on one disk, the swap partition on the other. Both disks are encrypted and I have added the corresponding entries to /etc/crypttab. During boot, it asks for the passphrase for the disk with the root filesystem. Then it continues booting and gets to the login screen before I get a chance to enter the passphrase for the other disk. After logging in, I verified that it was actually waiting for me to enter the passphrase for that second partition (askpass process is running). But at that point, I have no way of entering the passphrase anymore. The manpage for crypttab suggests that the order in which the volumes are specified matters, so I changed it to have the swap disk first. I updated the initramfs and grub afterwards, but it didn't make any difference. How can I specify the order in which the encrypted partitions are unlocked? I'm looking for a solution that either asks for the swap passphrase first or tells the system to wait until all encrypted partitions are unlocked before displaying the login screen.

    Read the article

  • Unable to print login-required images in IE

    - by Tim Fountain
    I have some images in a section of a site that require the user to be logged in in order to view. These images are served by a PHP script, which checks the user's login state and, if valid, serves the binary data with the appropriate headers. This all works fine. The issue comes when a user tries to print one of these images. In Internet Explorer, when they go to print preview they get the broken-image box with a red cross in the corner instead of the actual file. This is also what gets printed. All other browsers can print the images without issue. I have some images elsewhere on the site that are also served via PHP but don't require a login. These print fine. The PHP-powered HTML pages on the site that require a login also print fine in IE. It's just login-required images. The user hitting print preview does not seem to result in an additional HTTP request to the server for the file. However, I do see an additional HTTP request a few seconds later that comes from the same IP (it may or may not be related); this request includes no Host header, no REQUEST_URI and no user agent. The 'please login' page sends an appropriate 403 header. I've also added a far-in-future Expires header to the image response itself to ensure that browsers can serve/print the files from their own cache, but this hasn't made any difference. Why can't IE print the images, and what else can I do to investigate or fix the problem?

    Read the article

  • Letting search engines know that different links to identical pages stress different parts of the page

    - by balpha
    When you follow a permalink to a chat message in the Stack Exchange chat, you get a view of the transcript page for the day that contains the particular message. This message is highlighted in yellow, and the page is scrolled to its position. Sometimes – admittedly rarely, but it happens – a web search will result in such a transcript link. Here's a (constructed, obviously) example: a Google search for strange behavior of the \bibliography command site:chat.stackexchange.com gives me a link to this chat message. This message is obviously unrelated to my query, but the transcript page does indeed contain my search terms – just in a totally different spot. Both of the above links lead to the same content, and Google knows this, since both pages have <link rel="canonical" href="/transcript/41/2012/4/9/0-24" /> in their <head>. The only difference between the two links is which message has the highlight CSS class. Is there a way to let Google know that while all three links have the same content, they put an emphasis on a different part of the content? Note that the permalinks on the transcript page already have a #12345 hash to "point" to the relevant chat message, but Google appears to drop it.

    Read the article

  • Benchmarking ORM associations

    - by barerd
    I am trying to benchmark two cases of self-referential many-to-many associations as described in the DataMapper associations documentation. Both cases consist of an Item class, where an item may require many other items. In both cases, I required the Ruby benchmark library and the source file, created two items and benchmarked the require/unrequire functions as below:

        Benchmark.bmbm do |x|
          x.report("require:") { item_1.require_item item_2, 10 }
          x.report("unrequire:") { item_1.unrequire_item item_2 }
        end

    To be clear, both functions are DataMapper add/modify calls like:

        componentMaps.create :component_id => item.id, :quantity => quantity
        componentMaps.all(:component_id => item.id).destroy!

    and

        links_to_components.create :component_id => item.id, :quantity => quantity
        links_to_components.all(:component_id => item.id).destroy!

    The results are variable, in the range of 0.018001 to 0.022001 for the require function in both cases, and 0.006 to 0.01 for the unrequire function in both cases. This made me suspicious about the correctness of my test method.

    Edit: I went ahead and compared a "get by primary key" case to a "find first matching record" case by:

        (1..10000).each do |i|
          Item.create :name => "item_#{i}"
        end

        Benchmark.bmbm do |x|
          x.report("Get") { item = Item.get 9712 }
          x.report("First") { item = Item.first :name => "item_9712" }
        end

    where the results were very different, like 0 sec compared to 0.0312, as expected. This suggests that the benchmarking works. I wonder whether I benchmarked the two types of associations correctly, and whether a difference between 0.018 and 0.022 sec is significant.

    Read the article
