Search Results

Search found 18329 results on 734 pages for 'interpret order'.


  • Why would a Facebook application "work" on a profile, but not a page?

    - by edt
    I made a Facebook application that works fine on profiles, but I can't figure out how to get it to show on a Facebook page. For example, after I visit the application canvas URL, allow the application, then edit the application settings and "add" it to box and tab view... I cannot click the "plus" symbol to the left of the tabs in order to add a tab for the application; it does not appear in the list of available applications. Meanwhile, the application is working/showing up on profiles with no issues. I DID check the "Installable to Pages" checkbox in the application settings (Authentication tab). What could cause this? Here is the application canvas URL: http://apps.facebook.com/russian_girls/

    Read the article

  • What parameters to mdadm will re-create an md device whose payload starts at position 0x22000 on the backing storage?

    - by Adam Ryczkowski
    I am trying to recover from an mdadm RAID disaster, which happened while moving from Ubuntu Server 10.04 to 12.04. I know the correct order of devices from the dmesg log, but even given this information I still cannot access the data. The superblocks look messy; the mdadm --examine output for each disk is in this question on Ask Ubuntu. By inspecting the raw contents of the backing storage, I found the beginning of my data (the LUKS container, in my case) at position 0x22000 relative to the beginning of the first partition in the RAID. Question: what combination of options issued to "mdadm --create" will re-create the array so that its payload starts at the given offset? What bitmap size? PS. The relevant information from syslog, from when the system was healthy, is pasted here.
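
    One possible starting point is sketched below. This is untested and makes loud assumptions: a v1.2 superblock, an mdadm new enough to support --data-offset (roughly 3.2.4 and later), and placeholder level, device count, and device names that must be replaced with the real ones from dmesg.

        # 0x22000 bytes = 139264 bytes = 136 KiB, hence --data-offset=136K
        # --assume-clean stops an initial resync from touching the data
        mdadm --create /dev/md0 --metadata=1.2 --data-offset=136K \
              --level=5 --raid-devices=3 --assume-clean \
              /dev/sda1 /dev/sdb1 /dev/sdc1

    Leave the bitmap off at creation time, and verify read-only first (e.g. cryptsetup luksOpen on the assembled device) before writing anything.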

    Read the article

  • How to dissuade a customer who just learned a technology and wants to use it everywhere?

    - by MainMa
    My customer recently discovered URL rewriting, without completely understanding what it is, how it works, or its pros and cons. Now he asks for lots of strange changes to the requirements of current projects, and changes to old projects, in order to implement what he believes is URL rewriting. On one hand, I'm annoyed at being asked to do things that don't make any sense, instead of doing real work. On the other hand, I can't tell my customer that he doesn't understand the subject at all, despite his interest in it. I think many people have been in a situation where their manager or customer just learned a new buzzword or technology and loved it so much that they wanted to use it in every project, everywhere, even rewriting the whole codebase just to use this new thing. Also, I've recently read something related on Programmers.SE, where people shared their experiences from when there was a huge buzz around XML and some managers would ask to introduce XML into every project just to show everyone that they had used it. So, those of you who have been in a similar situation: how did you manage it?

    Read the article

  • Windows 7 Not Booting After Moving Partition

    - by Guillermo Phillips
    I have a Sony Vaio laptop. After using GParted to move the primary Windows partition, the laptop no longer boots, saying 'Operating system not found'. I don't have a recovery disc, and the only other machine I have access to is a Mac Mini. I have tried creating a bootable USB stick using the recovery ISO from Microsoft; I can see all the files on the USB stick from my Mac. I followed the instructions here: http://borgstrom.ca/2010/10/14/os-x-bootable-usb.html I have set the laptop's BIOS boot order to 'External' first, but the laptop refuses to boot from the USB stick. I have previously been able to boot from a Linux installation on the same stick. Any help or ideas would be appreciated.
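
    For reference, the usual OS X procedure is sketched below (untested here; it assumes the stick shows up as disk2, so check diskutil list first). One caveat that may explain the failure: Windows 7 ISOs are not hybrid images, so a raw dd copy of the ISO is often not BIOS-bootable, unlike most Linux ISOs.

        diskutil list                      # identify the USB stick (assumed to be disk2 below)
        diskutil unmountDisk /dev/disk2
        sudo dd if=recovery.iso of=/dev/rdisk2 bs=1m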

    Read the article

  • Museum of Modern Art Starts Video Game Collection; Acquires Myst, Pac-Man, and More

    - by Jason Fitzpatrick
    The Museum of Modern Art is weighing in on the video-games-as-art debate by starting a collection of iconic video games and putting them up for public display. Read on to see which games are included in the initial batch and the MoMA's reasons for starting a video game collection. Although the collection is slated to grow to over 40 titles, the seed batch is 14 titles, including Pac-Man, Tetris, Sim City 2000, Myst, Portal, and Dwarf Fortress. In the announcement, the museum explains the motivation for building a video game collection:

        Are video games art? They sure are, but they are also design, and a design approach is what we chose for this new foray into this universe. The games are selected as outstanding examples of interaction design—a field that MoMA has already explored and collected extensively, and one of the most important and oft-discussed expressions of contemporary design creativity. Our criteria, therefore, emphasize not only the visual quality and aesthetic experience of each game, but also the many other aspects—from the elegance of the code to the design of the player’s behavior—that pertain to interaction design. In order to develop an even stronger curatorial stance, over the past year and a half we have sought the advice of scholars, digital conservation and legal experts, historians, and critics, all of whom helped us refine not only the criteria and the wish list, but also the issues of acquisition, display, and conservation of digital artifacts that are made even more complex by the games’ interactive nature. This acquisition allows the Museum to study, preserve, and exhibit video games as part of its Architecture and Design collection.

    The above quote is only a small snippet of a much lengthier look at the benefits of examining and preserving video games; hit up the link below to check out the full post, including future titles the MoMA would like to include in its archive. Video Games: 14 in the Collection, for Starters [Inside/Out]

    Read the article

  • Maximizing TCP connections on HAProxy load balancer

    - by imaginative
    I am currently using HAProxy to load-balance TCP connections from clients to my Erlang app server. The connections are persistent, which means I'm limited to roughly 64K clients per optimized server (I'm currently running HAProxy on an m1.large EC2 instance). My app server is designed to scale horizontally based on the number of TCP connections. What's worrying me is that I'll need as many HAProxy servers as app servers, since it's a 1:1 connection. Is there currently a way to "proxy" the TCP connection to the app server so that once HAProxy hands the client off to my Erlang server, it can free up the connection, ready to serve another client? Are there any papers or existing solutions I can read, so that I only have to worry about the 64K limit on my app servers, and not on the load-balancing servers themselves?
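
    One detail worth noting: the ~64K ceiling is per (source IP, destination IP:port) tuple rather than per machine, so giving HAProxy several outbound source addresses multiplies the usable ports toward one backend. A hedged haproxy.cfg sketch (untested; names and addresses are placeholders), though with persistent connections HAProxy must still hold one socket per client and cannot "free" them:

        frontend erlang_in
            bind :9000
            mode tcp
            maxconn 200000
            default_backend erlang_nodes

        backend erlang_nodes
            mode tcp
            balance leastconn
            # the same app server reached via two source IPs => ~128K outbound ports
            server app1a 10.0.0.10:9000 source 10.0.1.1
            server app1b 10.0.0.10:9000 source 10.0.1.2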

    Read the article

  • Speeding up ROW_NUMBER in SQL Server

    - by BlueRaja
    We have a number of machines which record data into a database at sporadic intervals. For each record, I'd like to obtain the time period between this recording and the previous recording. I can do this using ROW_NUMBER as follows:

        WITH TempTable AS (
            SELECT *, ROW_NUMBER() OVER (PARTITION BY Machine_ID ORDER BY Date_Time) AS Ordering
            FROM dbo.DataTable
        )
        SELECT [Current].*, Previous.Date_Time AS PreviousDateTime
        FROM TempTable AS [Current]
        INNER JOIN TempTable AS Previous
            ON [Current].Machine_ID = Previous.Machine_ID
            AND Previous.Ordering = [Current].Ordering + 1

    The problem is, it goes really slowly (several minutes on a table with about 10k entries). I tried creating separate indexes on Machine_ID and Date_Time, and a single combined index, but nothing helps. Is there any way to rewrite this query to go faster?
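
    Two hedged suggestions, assuming the schema above. First, a composite index on (Machine_ID, Date_Time) matches the window's PARTITION BY/ORDER BY, so the sort can be avoided; separate single-column indexes do not help here. Second, on SQL Server 2012 or later, LAG() removes the self-join entirely (it does not exist on 2008 and earlier):

        -- composite index aligned with the window specification
        CREATE INDEX IX_DataTable_Machine_Date ON dbo.DataTable (Machine_ID, Date_Time);

        -- SQL Server 2012+ only
        SELECT *,
               LAG(Date_Time) OVER (PARTITION BY Machine_ID ORDER BY Date_Time) AS PreviousDateTime
        FROM dbo.DataTable;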

    Read the article

  • Setting Redmine path

    - by David Parunakian
    Hello! How can I set the path at which Redmine is served, e.g. example.com:3000/redmine instead of example.com:3000? There is an appropriately named option in the settings panel, but it serves a different purpose: according to the documentation and my own observations, it is only used to write URLs in emails sent to users. The wider picture: I need this in order to properly serve Redmine on port 80 through Apache's mod_proxy (i.e. at example.com/redmine). The problem is that when running it on /, the home page fails to load the necessary resources (such as JavaScript and CSS files). Any workarounds?
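
    A hedged sketch of one approach (it assumes a Rails-2-era Redmine; check the RedmineInstall documentation for your exact version): set the prefix in Redmine itself, then keep the same prefix on both sides of the proxy so relative asset URLs resolve correctly.

        # config/environment.rb in the Redmine directory
        Redmine::Utils::relative_url_root = "/redmine"

        # Apache vhost: the /redmine prefix is preserved on both sides
        ProxyPass        /redmine http://localhost:3000/redmine
        ProxyPassReverse /redmine http://localhost:3000/redmine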

    Read the article

  • Real-life examples of agile game development process outputs

    - by Ken
    I'm trying to learn about applying agile methodologies to game development, but it seems impossible to find real-life examples. There is plenty of material discussing how agile is applied to a game 'in principle', but that is NOT what I am looking for. I have the Keith book. What I AM looking for are real EXAMPLES of things like: initial user stories; final user stories (complete, covering the entire game's requirements); acceptance criteria; task lists; sprint backlogs (before and after each sprint). The agile books seem to have some limited examples, many of which seem contrived. In this era of open source software, there must be a publicly available, documented example of the process applied to a real game. I am asking specifically about games because they are so different from normal applications. Regular applications are built to let users complete specific tasks in order to get stuff done (book a room, print a report, etc.); people play games for much less tangible reasons, so I think the process is significantly different. [It doesn't have to be Scrum; it could be any process. It just needs to be a real-life example of a game and be reasonably complete.]

    Read the article

  • HP Officejet 6000 E609n unexpectedly goes offline

    - by Sajee
    My local library has a number of Windows Vista SP1 PCs connected to two HP Officejet 6000 E609n wireless printers. Each PC can print to either of the two printers, and one of the two is the default on each PC. This configuration has worked well over the last year without any trouble. Recently, the library staff have been reporting that sometimes when patrons try to print, they can't. Closer inspection shows that the default wireless printer is offline. In order to get the printer online again, it has to be restarted. In the Control Panel Printers applet, under the Printer menu, the "Use Printer Offline" option is grayed out, and there's no way to bring the printer back online without restarting it. Does anyone know what's going on here?

    Read the article

  • Disable PXE programmatically in Parallels?

    - by Stefan Lasiewski
    I'm running Parallels 4.0 on Mac OS X 10.5.8. I'm trying to create a bunch of virtual machines from the command line, using the prlctl tool, like so:

        $ prlctl create test1 -o linux -d centos
        $ prlctl set test1 --device-del cdrom0
        $ prlctl start test1

    Now, each time I start a new VM, the VM spends time waiting for a PXE boot. I'd like to turn this off. Can I disable PXE requests using Parallels or a Parallels command-line tool? Or can I set the boot order of a VM from the command line?
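
    One hedged thing to try (I am not certain Parallels 4.0's prlctl accepts this flag; later Parallels Server and Virtuozzo builds do, so verify with prlctl set --help first):

        # restrict the boot order to the hard disk so no network boot is attempted
        $ prlctl set test1 --device-bootorder "hd0"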

    Read the article

  • How can I update Preview.app from the command line without losing focus on OS X?

    - by snies
    Hello, I want to update Preview.app in the background from the command line, without losing focus from my current window. I know that I can use the following to open/update the view of a file, but then I lose focus to Preview.app: open -a Preview foo.pdf. I guess there might be some clever AppleScript commands to do this, but so far I haven't found the right one. Alternatively, I would be interested in transferring focus back to my current app directly after the update. I need this in order to update Preview.app's view of a PDF through a vi autocmd, after I update the PDF according to changes in the TeX file I am editing. Here is an example of what I want to achieve, but using Ubuntu and evince.
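
    A minimal sketch that may already be enough: open's -g flag opens the file without bringing the application to the foreground. Failing that, AppleScript can hand focus back afterwards (the editor name below is an assumption; substitute whatever app hosts your vi session):

        open -g -a Preview foo.pdf
        # fallback: explicitly re-activate the editor afterwards
        osascript -e 'tell application "MacVim" to activate'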

    Read the article

  • OpenSSH SFTP: chrooted user with access to other chrooted users' files

    - by HannesFostie
    Decided to re-phrase the question entirely in order not to have to make a new one. I currently have an SFTP server set up using OpenSSH's SFTP functionality. All my users are chrooted, and everything works. What I need most right now is for one user, who is not root (because this user can't have any real SSH powers!), to have access to all the other users' chrooted dirs. This user's job is to fetch all uploaded documents every once in a while. The directory structure as of now is:

        /home
        |_ /home/user1
        |_ /home/user2
        |_ /home/user3

    with ChrootDirectory set to /home/%u. The user "adminuser" should have access to user1's, user2's and user3's directories without having access to anything above /home, or at the very least to nothing but /home. Bonus points for whoever can tell me how to let users write inside /home/%u without having to make a new directory inside that dir which they own themselves, given that /home/%u must be owned by root, not the user (an OpenSSH chroot prerequisite).
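
    A hedged sshd_config sketch of one common arrangement (untested; "sftpusers" is a placeholder group, and both Match blocks assume the internal-sftp subsystem):

        Subsystem sftp internal-sftp

        # regular users stay locked inside their own directory
        Match Group sftpusers
            ChrootDirectory /home/%u
            ForceCommand internal-sftp

        # the fetch account is chrooted one level up, so it can see everyone
        Match User adminuser
            ChrootDirectory /home
            ForceCommand internal-sftp

    This still requires /home and each /home/%u to be root-owned and not group-writable, which is why the usual answer to the bonus question is exactly the per-user, user-owned upload subdirectory the question hopes to avoid.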

    Read the article

  • Can the file size returned by stat be compromised?

    - by codeholic
    I want to make sure that nobody has changed a file. In order to accomplish that, I want to check not only the MD5 sum of the file but also its size, since as far as I understand this additional simple check can make falsification considerably harder. May I trust the size that stat returns? I don't mean changes made to stat itself; I don't go that deep. But, for instance, could one fake the file size that stat returns by hacking the directory file, or by similar means that do not require superuser privileges? It's Linux.
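
    For reference, recording both baselines looks like this (GNU coreutils; the file name is a placeholder):

        stat -c '%s' important.file > baseline.size   # size in bytes
        md5sum important.file       > baseline.md5    # content digest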

    Read the article

  • Weather Logging Software on Windows Home Server

    - by Cruiser
    I'm looking for some weather-logging software that I can run as a Windows Home Server add-in, or as a service on my Home Server, so I don't need to stay logged into the server to log weather data. I have an Oregon Scientific WMR918 weather station and an HP MediaSmart EX485 Windows Home Server. The two are currently connected through a serial Bluetooth adapter, but that shouldn't matter, as the computer sees it as a basic serial device. I'm currently using Cumulus to log data and upload to Weather Underground, but it is a regular Windows application, so I need to remain logged into my Home Server via RDP in order to run the software (I disconnect but don't log off, so the session remains open). Ideally I would like something that runs as a service or WHS add-in, so that it runs all the time without my logging in, can log data from my WMR918, and can upload to Weather Underground. Thanks!

    Read the article

  • Upcoming: Oracle Advanced Benefits Advisor Webcasts Announced

    - by user793553
    Oracle Support is pleased to announce a new webcast covering the Open Enrollment functionality in Oracle Advanced Benefits. The webcast is repeated on three different dates, in order to make attendance easier whatever timezone you operate in. These one-hour sessions are recommended for technical and functional users who will be having an Open Enrollment cycle in the next 12 months. The session will review the best proactive practices recommended by Oracle Support, regardless of when your Open Enrollment takes place, covering planning, patching, data corruption and critical checklists. Topics will include:

        - Planning ahead for Open Enrollment testing
        - Required patches
        - Test performance
        - Avoiding major patching/updates
        - Data corruption issues

    A short live demonstration (only if applicable) and a question-and-answer period will be included. Below is the schedule for the webcasts; the same can be found in the My Oracle Support document "Advisor Webcast Current Schedule" (Doc ID 740966.1). Please follow the links to register for your chosen session.

        Webcast Topic and Description                          | Registration Details | Date and Time
        Best Benefits Practices for Open Enrollment Session 3  | Doc ID 1489318.1     | October 17, 2012 at 16:00 US EST
        Best Benefits Practices for Open Enrollment Session 4  | Doc ID 1489319.1     | October 31, 2012 at 16:00 US EST
        Product Enhancements in R12.1.3 RUP 5 Session 2        | Doc ID 1489320.1     | November 07, 2012 at 16:00 US EST

    Read the article

  • How to keep your third party libraries up to date?

    - by Joonas Pulakka
    Let's say that I have a project that depends on 10 libraries, and within my project's trunk I'm free to use any versions of those libraries. So I start with the most recent versions. Then each of those libraries gets an update once a month (on average). Now, keeping my trunk completely up to date would require updating a library reference every three days. This is obviously too much. Even though version 1.2.3 is usually a drop-in replacement for version 1.2.2, you never know without testing. Unit tests aren't enough; if it's a DB / file engine, you have to ensure that it works properly with files that were created with older versions, and maybe vice versa. If it has something to do with GUI, you have to visually inspect everything. And so on. How do you handle this? Some possible approaches:

        - If it ain't broke, don't fix it. Stay with your current version of the library as long as you don't notice anything wrong with it when used in your application, no matter how often the library vendor publishes updates. Small incremental changes are just waste.
        - Update frequently in order to keep each change small. Since you'll have to update some day in any case, it's better to update often, so that you notice any problems early when they're easy to fix, instead of jumping over several versions and letting potential problems accumulate.
        - Something in between. Is there a sweet spot?

    Read the article

  • .htaccess URL rewriting problem

    - by letsworktogether
    I'm kind of stuck at this part and was hoping to get some assistance. I'm building a highscores page in PHP; that's going great, it works. However, I dislike the idea of "index.php?skill=name" and therefore wanted a bit of SEO in this. I have successfully replaced the URL with a more friendly version: "highscores/skill/name". And this is where the problem starts. I have added pagination to the highscores, and the page is read from the GET page variable ($_GET['page']). I dislike the idea of "highscores/skill/name&page=2" and was hoping you could help me make the URLs work like the following: for page 1, accessing the file without declaring a page number, DOMAIN.TLD/highscores/skill/name; for page 2 and beyond, where the page variable is needed, DOMAIN.TLD/highscores/skill/name/2. As you can tell, the "2" will select page 2 and load the correct data for it. However, I'm having much trouble configuring my .htaccess file this way. My latest attempt:

        # Skills page
        RewriteRule ^highscores\/skill\/(.*?)(\/(.*?)*)$ highscores/skills.php?skill=$1&page=$2 [L]

    Unfortunately it does not work: it makes the page look horrible (the CSS doesn't load) and it doesn't go to the page specified in the URL. I hope you understand my issue. Thank you!
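
    A hedged sketch of rules that should behave as described (assuming the .htaccess file sits in the document root; the two-rule split and the [QSA] flag are my additions):

        RewriteEngine On
        # /highscores/skill/name   -> page 1
        RewriteRule ^highscores/skill/([^/]+)/?$ highscores/skills.php?skill=$1 [L,QSA]
        # /highscores/skill/name/2 -> page 2
        RewriteRule ^highscores/skill/([^/]+)/([0-9]+)/?$ highscores/skills.php?skill=$1&page=$2 [L,QSA]

    The broken CSS is usually a separate issue: with the extra path segments, relative stylesheet URLs resolve against /highscores/skill/, so absolute paths (href="/css/style.css") or a <base href="/"> tag fix the styling.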

    Read the article

  • ODI 11g – Faster Files

    - by David Allan
    Deep in the trenches of ODI development I raised my head above the parapet to read a few odds and ends, and then thought: why don't they know this? Such as this article here – in the past, customers (see forum) were told to use a staging route, which has a big overhead for large files. This KM is an example of the great extensibility capabilities of ODI. It's quite simple, just a new KM that:

        - improves the out-of-the-box experience: just build the mapping and the appropriate KM is used;
        - improves out-of-the-box performance for file-to-file data movement.

    This improvement to the out-of-the-box handling of file-to-file data integration cases (from the 11.1.1.5.2 companion CD and on) dramatically speeds up file integration handling. In the past I had seen some consultants write Perl versions of the file-to-file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here. The KM uses pure Java to perform the integration, using java.io classes to read and write the file in a pipe – it uses Java threading in order to super-charge the file processing, and can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing on the way to super-charging big data files using Hadoop – the KM works with the lightweight agent and regular filesystems. So in my design below, transforming a bunch of files, the IKM File to File (Java) knowledge module was assigned by default. I pointed the KM at my JDK (since the KM generates and compiles Java), and I also increased the thread count to 2, to take advantage of my 2 processors. For my illustration I transformed (and could also have filtered, if desired) and moved about 1.3 GB with 2 threads in 140 seconds (with a single thread it took 220 seconds) – by no means was this on any super computer, by the way. The great thing here is that it worked well out of the box, from design to execution, without any funky configuration. Plus, and it's a big plus, it was much faster than before. So if you are doing any file-to-file transformations, check it out!
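
    To give a feel for the technique (a toy sketch only, NOT the shipped KM code): one thread streams the source file into a java.io pipe while another drains the pipe into the target, which is the same building block the KM combines with a configurable number of worker threads:

        // Toy illustration of a threaded file-to-file pipe; not the actual KM.
        import java.io.*;

        public class PipedCopy {
            public static void main(final String[] args) throws Exception {
                final PipedOutputStream pipeOut = new PipedOutputStream();
                PipedInputStream pipeIn = new PipedInputStream(pipeOut, 1 << 20); // 1 MiB pipe

                // Reader thread: streams the source file into the pipe.
                Thread reader = new Thread(new Runnable() {
                    public void run() {
                        try {
                            InputStream src = new FileInputStream(args[0]);
                            byte[] buf = new byte[64 * 1024];
                            int n;
                            while ((n = src.read(buf)) != -1) {
                                pipeOut.write(buf, 0, n);
                            }
                            src.close();
                            pipeOut.close();
                        } catch (IOException e) {
                            throw new RuntimeException(e);
                        }
                    }
                });
                reader.start();

                // Main thread: drains the pipe into the target file.
                OutputStream dst = new FileOutputStream(args[1]);
                byte[] buf = new byte[64 * 1024];
                int n;
                while ((n = pipeIn.read(buf)) != -1) {
                    dst.write(buf, 0, n);
                }
                dst.close();
                reader.join();
            }
        }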

    Read the article

  • How to use a custom Windows 7 system drive letter?

    - by Ivan
    The PC in question has many hard drive partitions dedicated to different purposes: C: is the Windows XP system drive, and F: (which is actually the next primary partition, placed right after C: physically) is intended to host a newly installed Windows 7 instance (meant for a "dual boot" configuration). Needless to say, the intention was for all the partitions to have exactly the same letters under both OSes; needless to say, Windows 7 detected all of them in a completely different order, which would not be a problem (as non-system drive letters can be changed easily after installation) if it hadn't named its system drive C: (meant to be F:), which I have no idea how to change. Is there a way to set the letter you want? I don't mind reinstalling Windows 7 from scratch if it has to be set at installation time, or even configured in some text files on the installation DVD. I have tried this way, but it renders the Windows 7 system desktop unbootable (it gets stuck on "Preparing your desktop..." after "Welcome").
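
    For an already-installed system, the commonly cited approach is an offline edit of the MountedDevices registry key, described in Microsoft KB223188 for restoring a system drive letter. A hedged sketch (back up first; the hive path below assumes the Windows 7 partition is visible as C: inside the recovery environment):

        rem From the install DVD's recovery command prompt:
        reg load HKLM\Offline C:\Windows\System32\config\SYSTEM
        rem In regedit, under HKLM\Offline\MountedDevices, swap the binary values
        rem of \DosDevices\C: and \DosDevices\F:, then unload the hive:
        reg unload HKLM\Offline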

    Read the article

  • Breaking The Promise of Web Service Interoperability

    The promise of web service interoperability is achievable if certain technical and non-technical issues are dealt with properly. As the world gets smaller and smaller thanks to our growing global economy, the need for security is increasing. Security is vital in transferring data from one server to another. As new security standards and protocols are created, the environments of web service hosts and clients must be in sync so that they can communicate using the same standards and protocols. For example, if a new protocol X can only be implemented on computers built after 2010, then all computers built prior to 2010 will not be able to connect to any web service host that requires this protocol in its security policy. If the host and client of a web service cannot communicate using a set of common standards and protocols, then the web service is not available to those clients, thus breaking the promise of interoperability. Another limiting factor on web services is governmental policies and regulations. I experienced this first-hand last year when I had to work on a project that dealt with personally identifiable information (PII) regarding US and Canadian citizens. Currently the Canadian government mandates that any data pertaining to Canadian citizens must be stored in Canada only. The issue we had was the fact that we are a US-based company that sometimes works with Canadian PII as part of a service that we provide. Because we are a US-based company dealing with Canadian data, we had to place a file server inside the borders of Canada in order to continue working for our Canadian customers.

    Read the article

  • MonoGame: not all letters being drawn with DrawString

    - by Lex Webb
    I'm currently making a dynamic user interface for my game and am setting up text on my buttons. I'm having an odd issue where, when I use a specific piece of code to determine the text position, not all of the text passed to DrawString is rendered. Even weirder: if I insert another DrawString after this, drawing more text at a different place, different parts of the text will be drawn. The code for drawing my button with the text attached is:

        public override void Draw(SpriteBatch sb, GameTime gt)
        {
            sb.Draw(currentImage, GetRelativeRectangle(), Color.White);
            sb.DrawString(font, text,
                new Vector2(
                    this.GetRelativeDrawOffset().X + this.Width / 2 - font.MeasureString(text).X / 2,
                    this.GetRelativeDrawOffset().Y + this.Height / 2 - font.MeasureString(text).Y / 2),
                textColor);
        }

    The methods in the creation of the Vector2 simply get the draw position of the button; I'm then doing some calculation to center the text. This produces the result shown in the screenshot (in the original post) when the text is set to 'Test'. And different parts are drawn when I enter this piece of code below the first DrawString:

        sb.DrawString(font, "test", new Vector2(500, 50), Color.Pink);

    I should mention that the grey square is being drawn in the same SpriteBatch, before the button and the text. Any ideas as to what could be causing this? I have a feeling it may be due to draw order, but I have no idea how to control that.
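
    One hedged thing to test, since the symptoms track SpriteBatch ordering: if Begin() is being called with SpriteSortMode.Immediate or SpriteSortMode.Texture, sprite and glyph draws can interleave unpredictably, whereas SpriteSortMode.Deferred submits everything strictly in call order:

        // hedged sketch: force strict call-order drawing for the UI pass
        sb.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
        // ... sb.Draw / sb.DrawString calls for the UI ...
        sb.End();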

    Read the article

  • Apache ProxyPass ignore static files

    - by virtualeyes
    Having an issue with an Apache front server connecting to a Jetty application server. I thought that ProxyPass ! in a Location block was supposed to NOT pass processing on to the application server, but for some reason that is not happening in my case; Jetty shows a 404 on the missing statics (js, css, etc.). Here's my Apache (v2.4, BTW) virtual host block:

        DocumentRoot /path/to/foo
        ServerName foo.com
        ServerAdmin [email protected]
        RewriteEngine On

        <Directory /path/to/foo>
            AllowOverride None
            Require all granted
        </Directory>

        ProxyRequests Off
        ProxyVia Off
        ProxyPreserveHost On

        <Proxy *>
            AddDefaultCharset off
            Order deny,allow
            Allow from all
        </Proxy>

        # don't pass through requests for statics (image, js, css, etc.)
        <Location /static/>
            ProxyPass !
        </Location>

        <Location />
            ProxyPass http://localhost:8081/
            ProxyPassReverse http://localhost:8081/
            SetEnv proxy-sendchunks 1
        </Location>
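
    A hedged guess at the cause: <Location> sections merge in the order they appear in the configuration file, with later sections overriding earlier ones, so the catch-all <Location /> overrides the /static/ exclusion above it. Declaring the exclusion after the catch-all should restore it:

        <Location />
            ProxyPass http://localhost:8081/
            ProxyPassReverse http://localhost:8081/
            SetEnv proxy-sendchunks 1
        </Location>

        # declared last, so it wins for /static/ URLs
        <Location /static/>
            ProxyPass !
        </Location>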

    Read the article

  • What are the things to look out for when adding a domain user to your IIS?

    - by Jack
    I have a Windows Server 2008 R2 machine called Jack11 that has joined a domain called Watson.org; it has IIS 7 installed. From my understanding, we need to add the following to the web.config file:

        <system.web>
            <identity impersonate="true" />
        </system.web>

    We also need to ensure that the server Jack11 can ping the domain Watson.org. What other settings do we need in order for a user of the domain Watson.org (e.g. the user WATSON\User1) to access the application in the server's IIS? I ask because currently there is the following problem:

        Exception Details: System.Data.SqlClient.SqlException: Login failed for user 'WATSON\User1'.

    The error message is displayed when the user User1 tries to access one of the web applications in Jack11's IIS; that web application also retrieves records from a database in SQL Server 2008 Enterprise, installed on the same server Jack11.
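
    With impersonation on, that SqlException usually just means the impersonated Windows account has no login on the SQL Server side. A hedged T-SQL sketch (the database name is a placeholder; in practice you would grant access to a domain group rather than to individual users):

        CREATE LOGIN [WATSON\User1] FROM WINDOWS;
        USE MyAppDb;   -- hypothetical database name
        CREATE USER [WATSON\User1] FOR LOGIN [WATSON\User1];
        -- then grant whatever roles the application needs, e.g.:
        EXEC sp_addrolemember 'db_datareader', 'WATSON\User1';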

    Read the article

  • Drupal + Lighttpd: enabling clean urls (rewriting)

    - by Patrick
    I'm running Ubuntu in emulation on my Mac, and I use it as a server. I've installed lighttpd + Drupal, and the following configuration section requires a domain name in order for clean URLs to work. Since this is a local server I don't have a domain name, and I was wondering how to make it work given that the IP of the local machine keeps changing. Thanks.

        $HTTP["host"] =~ "(^|\.)mywebsite\.com" {
            server.document-root = "/var/www/sites/mywebsite"
            server.errorlog = "/var/log/lighttpd/mywebsite/error.log"
            server.name = "mywebsite.com"
            accesslog.filename = "/var/log/lighttpd/mywebsite/access.log"
            include_shell "./drupal-lua-conf.sh mywebsite.com"
            url.access-deny += ( "~", ".inc", ".engine", ".install", ".info",
                                 ".module", ".sh", "sql", ".theme", ".tpl.php",
                                 ".xtmpl", "Entries", "Repository", "Root" )
            # "Fix" for Drupal SA-2006-006, requires lighttpd 1.4.13 or above
            # Only serve .php files of the drupal base directory
            $HTTP["url"] =~ "^/.*/.*\.php$" {
                fastcgi.server = ()
                url.access-deny = ("")
            }
            magnet.attract-physical-path-to = ("/etc/lighttpd/drupal-lua-scripts/p-.lua")
        }
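
    One hedged workaround (untested; it assumes this is the only site served by this lighttpd instance): since the conditional only decides which requests receive this configuration, match any Host header instead of a fixed domain, and the block then applies no matter what IP or name the machine currently has:

        $HTTP["host"] =~ ".*" {
            server.document-root = "/var/www/sites/mywebsite"
            # ... rest of the block unchanged ...
        }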

    Read the article
