Search Results

Search found 22067 results on 883 pages for 'point clouds'.

Page 382 of 883

  • SSH tunnel RDP through gateway server outside the network?

    - by Mike
    I need to access a PC via RDP that is behind a firewall. There's no way to connect to it directly that I know of. What I'd like to do is SSH from that remote PC to my home Ubuntu server, then connect to the remote PC using my home PC with the Ubuntu server as a gateway. I've tried SSH from the remote PC to the Ubuntu server, tunneling remote port 3389 to 127.0.0.1:3389, then SSH from the home PC to the Ubuntu server, tunneling local port 13389 to remote port 3389. At that point I try to RDP into: 127.0.0.1:13389, 127.0.0.2:13389, :3389 - no dice. I suppose I could simply set up an SSH server on my home PC, SSH from the remote PC into the home PC and establish the tunnel that way, but I'd rather not go through the hassle of installing and configuring an SSH server on my home PC. I know LogMeIn would work here, but I don't want to go that route for various reasons. Any ideas? Thanks!
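
    A minimal sketch of the two-hop tunnel described above (usernames and the server hostname are placeholders):

        # On the remote PC behind the firewall: reverse-forward its RDP port
        # to the Ubuntu server's loopback interface.
        ssh -R 3389:localhost:3389 user@home-ubuntu-server

        # On the home PC: forward a local port through the Ubuntu server
        # to that reverse tunnel.
        ssh -L 13389:localhost:3389 user@home-ubuntu-server

        # Then point the RDP client at 127.0.0.1:13389 on the home PC.

    If this still fails, one thing worth checking is that nothing on the server is already listening on port 3389, since the -R bind fails with only a warning when the port is taken.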

    Read the article

  • Decoupling software components via naming convention

    - by csteinmueller
    I'm currently evaluating alternatives to refactor a driver management. In my multitier architecture I have:

        DAL.Device          // baseclass, my entity
        BL.IDriver          // handles the data processing between application and device
        BL.IDriverCreator   // creates an IDriver from a Device
        BL.IDriverFactory   // handles the driver creation requests

    Every specialization of Device has a corresponding IDriver implementation and a corresponding IDriverCreator implementation. At the moment the mapping is fixed via a type check within the business layer / DriverFactory. That means every new driver needs a) changing code within the DriverFactory and b) referencing the new IDriver implementation / assembly. From a customer's point of view, that means every new driver, used or not, needs a complex revalidation of their hardware environment, because it's a critical process. My first inspiration was to use a Caliburn.Micro-like naming convention (see Caliburn.Micro: Xaml Made Easy): BL.RestDriver, BL.RestDriverCreator, DAL.RestDevice. After receiving the RestDevice within the IDriverFactory, I can load all driver DLLs via reflection and do name splitting/comparing (extracting the xx from xxDriverCreator and xxDevice). Another idea would be a custom attribute (which also leads to comparing strings). My question: is that a good approach across layer boundaries? If not, what would be a good approach?
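
    A rough sketch of the convention-based lookup (written in Java for illustration; the question's stack is .NET, where scanning the loaded assemblies plays the role of Class.forName, and the "bl" package name is an assumption):

        public final class ConventionDriverFactory {
            // Derive "RestDriverCreator" from a device whose class is named
            // "RestDevice" and instantiate it reflectively, so the factory
            // needs no per-driver code or compile-time reference.
            public static Object createDriverCreator(Object device) throws Exception {
                String name = device.getClass().getSimpleName(); // e.g. "RestDevice"
                if (!name.endsWith("Device")) {
                    throw new IllegalArgumentException("Not a Device: " + name);
                }
                String prefix = name.substring(0, name.length() - "Device".length());
                Class<?> creatorClass = Class.forName("bl." + prefix + "DriverCreator");
                return creatorClass.getDeclaredConstructor().newInstance();
            }
        }

    The string comparison only runs once per device type if the factory caches the resolved creator class in a map.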

    Read the article

  • Unable to install SQL Server 2008 Express SP1

    - by dahacker89
    I am facing difficulties installing MS SQL Server 2008 Express Service Pack 1. I already have MS SQL Server 2008 Express installed, and all I want to do is install SP1; however, the installer gives me an error telling me to select one or more features, even though all features are selected. Also, just for information: when I open the SQL Server Configuration Manager to manage my SQL Server services, another error message is displayed. If anyone has faced this and has a solution, please let me know. My aim is to install Management Studio, but for that I must have at least SP1 installed, and I'm stuck at that point. Thanks.

    Read the article

  • Making an interactive 2D map

    - by Chad
    So recently I have been working on a Legend of Zelda: A Link to the Past clone, and I am wondering how I could handle certain map interactions (like cutting grass, lifting rocks, etc.). The way I am currently doing the tilemap is with 2 PNGs. The first is the "tilemap", where each pixel represents a 16x16 tile and the (red, green) values are the (x, y) coords for the tile in the second PNG (the "tileset"). I am then using the blue channel to store collision data. Each tile is split into four 8x8 subtiles, each represented by a 2-bit value (0 = empty, 1 = jump-down point, 2 = unused right now, 3 = blocking). Four of these 2-bit values make up the full blue channel (1 byte). So collisions work great, and I am moving on to putting interactive units on the level, but I am not sure what a good way is to do it. I have experimented with spawning an entity for each grass and rock, but there are just WAY too many; FPS just dies even if I confine it to the current "zone" the user is in (for those who remember LTTP, it had zones you moved between). It does make a difference that this is a browser-based JavaScript game. tl;dr: What is a good way to have an interactive map without using full-blown entities for each interactive item?
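
    Since each tile's blue channel packs four 2-bit collision values, reading one back is two bit operations; a minimal sketch (in Java for illustration, and the quadrant ordering is an assumption):

        public final class TileCollision {
            public static final int EMPTY = 0, JUMP_DOWN = 1, UNUSED = 2, BLOCKING = 3;

            // quadrant 0..3, e.g. 0 = top-left, 1 = top-right,
            // 2 = bottom-left, 3 = bottom-right
            public static int unpack(int blueChannel, int quadrant) {
                return (blueChannel >> (quadrant * 2)) & 0b11;
            }

            // Pack four 2-bit values back into one byte for the blue channel.
            public static int pack(int q0, int q1, int q2, int q3) {
                return (q0 & 3) | ((q1 & 3) << 2) | ((q2 & 3) << 4) | ((q3 & 3) << 6);
            }
        }

    The same trick extends to the interactive items: a spare 2-bit slot (like the unused value 2 above) can mark a tile as grass or rock, keeping interactions in the tile grid instead of per-item entities.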

    Read the article

  • Booting sequence. Ubuntu 12.04 installation and cohabitation with former OSes

    - by Stephane Rolland
    I am on the brink of installing Ubuntu 12.04 Precise Pangolin on the first primary partition of my hard drive. (A day in history for me, since I had always kept a MS Windows at this first place.) But I have some fears: this is my last computer available (in the past I used to have 2 or even 3 machines, so I could always un/plug HDs for recovery and rescue operations), and the current booting sequence is not straightforward. So as to explain the boot sequence, let me briefly sum up the history of this laptop computer. It was a dedicated Windows Vista computer, with a first and only primary partition. Then I added Windows 7 (on the 2nd primary partition), letting the Windows Vista boot loader manage the boot sequence. Then I added Ubuntu 10.04 Lucid Lynx on the 1st sub-partition of the extended partition, asking GRUB to be the boot loader. But when I ask GRUB to launch Windows, it launches the Vista boot loader, which manages the choice between Vista and 7. So in theory GRUB is on the Master Boot Record, though I understand where the Vista boot loader remains. Now, I will no longer use the Ubuntu 10.04 (on the extended partition) and also the Windows Vista (on the first primary partition). I will install Ubuntu 12.04 on the first primary partition, asking it to install a new boot loader. I want to keep the Windows 7 that is already on the second primary partition, and I want it to be loaded by the Ubuntu boot loader (I don't know which is included in this version)... and I am afraid the last point will not work.
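
    For what it's worth, the boot loader in 12.04 is GRUB 2, and its os-prober hook normally detects an existing Windows 7 partition during installation and adds a menu entry for it. If the entry is missing after the install, regenerating the menu is one command (a sketch, assuming a standard install):

        sudo update-grub   # rescans partitions via os-prober and rewrites grub.cfg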

    Read the article

  • Windows 2008 File Share

    - by user36540
    Hi, I have 3 Windows 2008 Standard servers in my system, with no domain controller. Two of the servers are running an NLB cluster, and the third server is a file server that the web servers connect to. I want to store my source code on the file server and point the IIS config to the network file share. The web sites also need access to a file share on the file server. I was able to share the network drive and access it while logged in to either of the web servers, but my web apps are unable to access the file share - I assume due to permissions. Does anybody know the correct way to do this? Thanks, Chris
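
    One common workgroup approach (an assumption about the intended setup, not the only answer) is mirrored local accounts: create the same username and password on the web servers and the file server, run the IIS application pool as that account, and grant it rights on the share. Hypothetical names throughout:

        rem On each of the three servers, create an identical service account
        net user svcweb Str0ngP@ss1 /add

        rem On the file server, grant that account modify rights on the shared folder
        icacls D:\Shares\WebContent /grant svcweb:(OI)(CI)M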

    Read the article

  • Gimpshop 2.8. Available for Win & Mac. No Linux version?

    - by Jorge M. Treviño
    Finally got around to upgrading 12.04 to 12.10. One of the nice things about the new version is that GIMP 2.8 is in the repositories. Installed it, and it's a far cry from the 2.2, 2.4 and 2.6 versions, which were (at least from my untrained point of view) next to unusable. 2.8 is much more intuitive, for Photoshop users at least, and I'm trying to really learn it. Browsing around, I found that there's a new version of Gimpshop, something that used to be a sorely amateurish attempt at a PS interface over an old GIMP version, and sure to mess up your system. Seeing "2.8" prominently displayed on the page, I decided to try the Windows version. Oddly, there's a Mac version too, but no Linux one; the link directs one to a non-existent file on one of the cloud storage sites. After the Win version was installed, I fired it up and, surprise!, it's exactly the same, as far as I can tell without diving into menus and dialogs, as the plain vanilla Ubuntu version I have installed. Can anybody shed light on what goes on here? Is this a scheme to get inadvertent users to install some "optional extras" that come with the installer? Very curious about it (thank God I'm not a cat).

    Read the article

  • help with Outlook Exchange server and curl

    - by stib
    I work on a Mac in a building full of PCs, and the IT department here doesn't have IMAP access turned on on the Exchange servers. So I miss a lot of meetings, because I don't get reminders, because I access my mail via Outlook Web Access. I had written a script to scrape my Outlook Web Access calendar and turn it into iCal format, so I could get my reminders via Thunderbird or iCal.app. It basically downloaded the calendar page via curl, parsed the HTML and reformatted all the appointments as iCal. It wasn't elegant, but it worked. Then they changed to Outlook 2007, and it doesn't work any more. I have a sketchy knowledge of curl, and almost zero knowledge of how Outlook works. Can anyone point me towards a reference for getting calendar info out of an Exchange server without using Outlook? If I can configure curl to get the HTML I will be happy, but if there's a more elegant way, such as getting the calendar info as XML, I'll be delirious.
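
    One hedged pointer: Exchange 2007 ships a SOAP API, Exchange Web Services (EWS), alongside OWA, which returns XML and is a steadier target than scraping HTML. A sketch of probing it with curl (hostname and credentials are placeholders, and EWS must be enabled on the server):

        # NTLM-authenticated request against the standard EWS endpoint;
        # a real query would POST a SOAP body such as a FindItem request
        # against the calendar folder.
        curl --ntlm -u 'DOMAIN\user' https://mail.example.com/EWS/Exchange.asmx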

    Read the article

  • Portable scripting language for a multi-server admin?

    - by Aaron
    Please Note: Portable as in portableapps.com, not the traditional definition. Originally posted on stackoverflow.com, asking here at another user's suggestion. I'm a DBA and sysadmin, mostly for Windows machines running SQL Server. I'm looking for a programming/scripting language for Windows that doesn't require Admin access or an installer, needing no install process other than expanding it into a folder. My intent is to have a language for automation on which I can standardize. Up to this point, I've been using a combination of batch files and Unix shell, using sh.exe from UnxUtils, but it's far from a perfect solution. I've evaluated a handful of options; all of them have at least one serious shortcoming or another. I have a strong preference for something open source or dual license, but I'm more interested in finding the right tool than anything else. Not interested in anything that relies on Cygwin or Java, but at this point I'd be fine with something that needs .NET.
    Requirements:
      - Manageable footprint (1-100 files, under 30 MB installed)
      - Runs on Windows XP and Server (2003+)
      - No installer (exe, msi)
      - Works with external pipes, processes, and files
      - Support for MS SQL Server or ODBC connections
    Bonus points:
      - Open source
      - FFI for calling functions in native DLLs
      - GUI support (native or gtk, wx, fltk, etc.)
      - Linux, AIX, and/or OS X support
      - Dynamic, object-oriented and/or functional, interpreted or bytecode-compiled; interactive development
      - Able to package or compile scripts into executables
    So far I've tried:
      - Ruby: 148 MB on disk, 23000 files
      - Portable Python: 54 MB on disk, 2800 files
      - Strawberry Perl: 123 MB on disk, 3600 files
      - REBOL: Great, except closed source and no MSSQL or ODBC in the free version
      - Squeak Smalltalk: Great, except poor support for scripting
    ---- cut: points of clarification ----
    Why all the limitations? I realize some of my criteria seem arbitrarily confining. It's primarily a product of my environment. I work as a SQL Server DBA and backup Unix admin at a division of a large company. In addition to nearly a hundred boxes running some version or another of SQL Server on Windows, I also support the SQL Server Express Edition installs on over a thousand machines in the field. Because of our security policies, I don't have login rights on every machine. Often enough, an issue comes up and I'm given local Admin for some period of time. Often enough, it's some box I've never touched and I don't have my own environment set up yet. I may have temporary admin rights on the box, but I'm not the admin for the machine - I'm just the DBA. I've no interest in stepping on the toes of the Windows admins, nor do I want to take over any of their duties. If I bring up "installing" something, suddenly it becomes a matter of interest for Production Control and the Windows admins; if I'm copying up a script, no one minds. The distinction may not mean much to the readers, but if someone gets the wrong idea I've suddenly got a long wait and significant overhead before I can get the tool installed and the problem solved. That's why I want something that can be copied and run in the manner of a portable app.
    What about the small footprint? My company has three divisions, each in a different geographical location, and one of them is a new acquisition. We have different production control/security policies in each division. I support our MSSQL databases in all three divisions. The field machines are spread around the US, sometimes connecting to the VPN over very slow links. Installing Ruby using psexec has taken a long time over these connections. In these instances, the bigger time waster seems to be archives with thousands and thousands of files rather than their sheer size. You could say I'm spoiled by Unix, where the admins usually have at least some modern scripting language installed; I'd use PowerShell, but I don't know it well and, more importantly, it isn't everywhere I need to work. It's a regular occurrence that I need to write, deploy and execute some script on short notice on some machine on which I've never logged in. Since having Ruby or something similar installed on every machine I'll ever need to touch is effectively impossible because of the approvals, time, and Windows admin labor needed, it makes more sense to find a solution that allows me to work on my own terms.

    Read the article

  • Website restyle, SEO migration plan?

    - by Goboozo
    I am currently in a project for one of my biggest clients. We have built a website that will -replace- the old website. When it comes to actual content, it is largely the same. However, the presentation of the content has changed drastically - from our point of view it is much more user-friendly (the main reason to update the site). Now, since the site's presentation has changed, we have some major changes in:
      - HTML & CSS: to change the presentation of the content
      - URLs: to make them better understandable (301 redirects have been taken care of and are in place)
      - Breadcrumbs: to enhance the navigation (we have made the breadcrumbs match exactly with the URLs)
      - Pagination: this was added to enable content browsing
      - Title tags: added descriptive title tags to the major links and buttons
    Basically all user content, including meta tags, has remained the same. Now, since this company is rather successful and 90% of its clients come from Google's organic results, I am obliged to take all necessary precautions. People tell me I need a migration plan to prevent the site being hurt in Google, but I have never worked using such a plan... So, based on the above: would you consider a migration plan necessary, and what precautions/actions would you recommend to prevent us being put down in our SERP positions? Many thanks in advance for your answers.
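
    For illustration, the per-URL 301 mapping already said to be in place would look something like this in Apache (hypothetical paths):

        # mod_alias: permanently redirect an old path to its new friendly URL
        Redirect permanent /old-products.html http://www.example.com/products/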

    Read the article

  • Where could Distributed Version Control Systems currently be in Gartner's hype cycle?

    - by dukeofgaming
    Edit: Given the recent downvoting (+8/-6 at this point), it was made clear to me that Gartner's hype cycle is a biased metric from a programmer's perspective. This is something that is part of a paper I'm going to present to management, and management types are part of Gartner's audience. Given DVCS exposure & enthusiasm (that "could" be deemed as hype, or at least attacked as such), think about the following question when reading this one: "how could I use Gartner's hype cycle to convince management that DVCSs are ready (or ready enough) for us, and that it is not just hype?" Just asking if DVCSs are hype wouldn't be constructive; Gartner's hype cycle is a more objective instrument than just asking that (even if this instrument is regarded as biased). If you know any other instrument, please, by all means, mention it. Edit #2: I agree that Gartner's hype cycle is not for every technology, but I consider it may have generated enough buzz to be considered hype by some, so it maybe deserves to be at least evaluated/pondered as such by using this instrument, in order to prove/disprove it to whatever degree. I'm an advocate of DVCS, BTW. I'm doing research for a whitepaper I'm writing in favor of DVCS adoption at my company, and I stumbled upon the concept of social proof. I want to prove that the social proof of DVCS adoption is not necessarily cargo cult, and doing further research I stumbled upon Gartner's hype cycle, which describes technology maturity in 5 phases. My question is: what could be an indicator of the current location of Distributed Version Control Systems (I mean git, mercurial, bazaar, etc. in general) at a particular phase in the hype cycle? In other (less convoluted) words: would you say that current expectations of DVCSs are a) starting, b) inflated, c) decreasing (disillusionment), d) increasing (enlightenment) or e) stabilizing (mature), and (more importantly) why? I know it is a hard question and there is subjectivity involved, but I'll grant the answer (and the traditional cookie) to the clearest argument/evidence for a particular phase.

    Read the article

  • How to manage a developer who has poor communication skills

    - by djcredo
    I manage a small team of developers on an application which is at the mid-point of its lifecycle, within a big firm. This unfortunately means there is commonly a 30/70 split of programming tasks to "other technical work". This work includes:
      - Working with DBA / Unix / Network / load-balancer teams on various tasks
      - Placing & managing orders for hardware or infrastructure in different regions
      - Running tests that have not yet been migrated to CI
      - Analysis
      - Support / investigation
    It's fair to say that the developers would all prefer to be coding rather than doing these more mundane tasks, so I try to hand out the fun programming jobs evenly amongst the team. Most of the team was hired because, though they may not have the elite programming skills to write their own compiler / game engine / high-frequency trading system etc., they are good communicators who "can get stuff done", work with other teams, and somewhat navigate the complex bureaucracy here. They are good developers, but they are also good all-round technical staff. However, one member of the team probably has above-average coding skills, but below-average communication skills. Traditionally, the previous development manager tended to give him the programming tasks and not the more mundane tasks listed above. However, I don't feel that this is fair to the rest of the team, who have shown an aptitude for developing the well-rounded skillset that is commonly required in a big-business IT department. What should I do in this situation? If I continue to give him more programming work, I know that it will be done faster (and conversely, I would expect him to complete the other work slower). But it goes against my principles, and promotes the idea that you can carve out a "comfortable niche" for yourself simply by being bad at the tasks you don't like.

    Read the article

  • Stream Music To Ventrilo From ESXi VM

    - by omghai2u
    I would like to stream music to my Ventrilo server from a Windows XP virtual machine running on an ESXi host. I have followed the instructions outlined here to stream music from something like VLC to the Ventrilo server on another machine, and it works fine. I have also added the lines:

        sound.present = "TRUE"
        sound.virtualDev = "es1371"
        sound.fileName = "-1"
        sound.autodetect = "TRUE"

    to my .vmx file, as suggested here (http://communities.vmware.com/thread/191878), to get a sound card in my VM. The problem I am having is that my VM does not seem to be outputting any sound, so there's nothing to stream through Ventrilo. The Device Manager in the VM shows that this new sound card has drivers and doesn't appear to have any problems. Can someone point me in the right direction to get my desired outcome? Thanks! P.S. Sorry for the long 2nd link; apparently I can only post 1 hyperlink with this low reputation.

    Read the article

  • If You Include the Groovy Editor...

    - by Geertjan
    ...in a NetBeans RCP application, what additional JARs will you need to include for the Groovy Editor to work? Leaving aside the debate on the current state & quality of the NetBeans Groovy Editor, and assuming you need the Groovy support that the NetBeans Groovy Editor provides, you would check the Groovy Editor checkbox in the Project Properties dialog of your application. As you can see there, however, the Groovy Editor depends on other modules, some of which, in turn, depend on yet other modules, and so on. So, I clicked the "Resolve" button and then created a ZIP distribution, to see which additional JARs had been included. Until that point, I had only been using the "platform" cluster, which means that absolutely everything found in the ZIP's "ide" cluster and "java" cluster has been included only so that the Groovy Editor could be included, i.e., all thanks to clicking the "Resolve" button. What that means for the "java" cluster is not so bad, and kind of a side effect of Groovy being Java, i.e., a lot of Java functionality is needed. The "ide" cluster is another story. So, in answer to the original question: if all you want in your NetBeans Platform application, in terms of editor functionality, is the Groovy Editor, then you have a pretty high price to pay. At the very least, I would have assumed that the project support JARs and the debugger support JARs would not be so tightly coupled with the Groovy Editor. That would be a cool thing to separate out from the editor support.

    Read the article

  • Java game object pool management

    - by Kenneth Bray
    Currently I am using arrays to handle all of my game objects in the game I am making, and I know how terrible this is for performance. My question is: what is the best way to handle game objects without hurting performance? Here is how I am creating an array and then looping through it to update the objects in it:

        public static ArrayList<VboCube> game_objects = new ArrayList<VboCube>();

        /* add objects to the game */

        while (!Display.isCloseRequested() && !Keyboard.isKeyDown(Keyboard.KEY_ESCAPE)) {
            for (int i = 0; i < game_objects.size(); i++) {
                // draw the object
                game_objects.get(i).Draw();
                game_objects.get(i).Update();
                //world.updatePhysics();
            }
        }

    I am not looking for someone to write me code for asset or object management, just point me in a better direction to get better performance. I appreciate the help you guys have provided me in the past, and I don't think I would be as far along with my project without the support on Stack Exchange!

    Read the article

  • Weighted round robins via TTL - possible?

    - by Joe Hopfgartner
    I currently use DNS round robin for load balancing, which works great. The records look like this (I have a TTL of 120 seconds):

        ;; ANSWER SECTION:
        orion.2x.to.    116    IN    A    80.237.201.41
        orion.2x.to.    116    IN    A    87.230.54.12
        orion.2x.to.    116    IN    A    87.230.100.10
        orion.2x.to.    116    IN    A    87.230.51.65

    I learned that not every ISP / device treats such a response the same way. For example, some DNS servers rotate the addresses randomly or always cycle through them. Some just propagate the first entry, while others try to determine which is best (regionally near) by looking at the IP address. However, if the userbase is big enough (spread over multiple ISPs, etc.), it balances pretty well: the discrepancy between the highest and lowest loaded server hardly ever exceeds 15%. However, now I have the problem that I am introducing more servers into the system, and they don't all have the same capacity. I currently only have 1 Gbps servers, but I want to work with 100 Mbit and 10 Gbps servers too. So what I want is to introduce a 10 Gbps server with a weight of 100, a 1 Gbps server with a weight of 10, and a 100 Mbit server with a weight of 1. I used to add servers twice to bring more traffic to them (which worked nicely - the bandwidth almost doubled), but adding a 10 Gbps server 100 times to DNS is a bit ridiculous. So I thought about using the TTL. If I give server A a 240-second TTL and server B only 120 seconds (which is about the minimum to use for round robin, as a lot of DNS servers set the TTL to 120 if a lower one is specified, or so I have heard), I think something like this should occur in an ideal scenario:

        First 120 seconds:
        50% of requests get server A -> keep it for 240 seconds
        50% of requests get server B -> keep it for 120 seconds

        Second 120 seconds:
        50% of requests still have server A cached -> keep it for another 120 seconds
        25% of requests get server A -> keep it for 240 seconds
        25% of requests get server B -> keep it for 120 seconds

        Third 120 seconds:
        25% will get server A (from the 50% of server A that now expired) -> cache 240 sec
        25% will get server B (from the 50% of server A that now expired) -> cache 120 sec
        25% will have server A cached for another 120 seconds
        12.5% will get server B (from the 25% of server B that now expired) -> cache 120 sec
        12.5% will get server A (from the 25% of server B that now expired) -> cache 240 sec

        Fourth 120 seconds:
        25% will have server A cached -> cache for another 120 secs
        12.5% will get server A (from the 25% of B that now expired) -> cache 240 secs
        12.5% will get server B (from the 25% of B that now expired) -> cache 120 secs
        12.5% will get server A (from the 25% of A that now expired) -> cache 240 secs
        12.5% will get server B (from the 25% of A that now expired) -> cache 120 secs
        6.25% will get server A (from the 12.5% of B that now expired) -> cache 240 secs
        6.25% will get server B (from the 12.5% of B that now expired) -> cache 120 secs
        12.5% will have server A cached -> cache another 120 secs

    I think I lost track somewhere at this point, but you get the idea... As you can see, this gets pretty complicated to predict, and it will for sure not work out like this in practice, but it should definitely have an effect on the distribution! I know that weighted round robin exists and is just controlled by the authoritative server: it cycles through DNS records when responding, and returns records with a set probability that corresponds to the weighting. My DNS server does not support this, and my requirements are not that precise. If the weighting isn't perfect, that's okay, but it should go in the right direction. I think using the TTL field could be a more elegant and easier solution - and it doesn't require a DNS server that controls this dynamically, which saves resources - which is in my opinion the whole point of DNS load balancing vs. hardware load balancers. My question now is: are there any best practices / methods / rules of thumb to weight round robin distribution using the TTL attribute of DNS records? Edit: The system is a forward proxy server system. The amount of bandwidth (not requests) exceeds what one single server with Ethernet can handle, so I need a balancing solution that distributes the bandwidth to several servers. Are there any alternative methods to using DNS? Of course I can use a load balancer with fibre channel etc., but the costs are ridiculous, and it also only increases the width of the bottleneck without eliminating it. The only thing I can think of is anycast (is it anycast or multicast?) IP addresses, but I don't have the means to set up such a system.
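
    A zone fragment for the proposed scheme would look like the sketch below, with one important hedge: RFC 2181 (section 5.2) requires all records of an RRset to carry the same TTL, and resolvers or secondaries may normalize differing TTLs, so per-record TTL weighting is not behavior that can be relied upon:

        ; hypothetical per-record TTLs, using the names from above
        orion.2x.to.    240    IN    A    80.237.201.41   ; big server, cached longer
        orion.2x.to.    120    IN    A    87.230.54.12    ; small server, cached shorter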

    Read the article

  • Mount TMPFS instead of ro /dev

    - by schiggn
    I am working on an ARM-based embedded system with a custom Debian Linux based on kernel 2.6.31. In the final system, the root file system is stored as squashfs on flash. Now, the folder /dev is created by udev, but since no hotplugging functionality is needed and booting time is critical, I wanted to delete udev and "hard code" the /dev folder (read here, page 5). Because I still need to change parameters of the devices (with ioctl / sysfs), that alone does not work for me in this case. So I thought of mounting a tmpfs on /dev and changing the parameters there. Is this possible, and what is the best way to do it? My approach would be:
      - delete /dev from the RFS
      - create a tar containing the basic devices
      - mount a tmpfs on /dev
      - untar the tar file into /dev
      - change parameters
    Could this work? Do you see any problems? I found out that you can mount on top of an already mounted mount point; is it somehow possible to carry the existing data over while mounting the new file system? If so, that would be very convenient! Thanks.
    Update: I just tried that out, but I'm stuck at a certain point. I packed all my devices into devices.tar, put it into /usr of my squashfs, and added the following lines to mountkernfs.sh, which is executed right after INIT:

        # mount /dev on tmpfs
        echo -n "Mounting /dev on tmpfs..."
        mount -o size=5M,mode=0755 -t tmpfs tmpfs /dev
        mknod -m 600 /dev/console c 5 1
        mknod -m 600 /dev/null c 1 3
        echo "done."
        echo -n "Populating /dev..."
        tar -xf /usr/devices.tar -C /dev
        echo "done."

    This works fine on the version running over NFS; if I place printfs in the code, I can see it executing, and if I comment out the extracting part, it complains about missing devices.
    Booting OK:

        mmc0: new high speed SDHC card at address 0007
        mmcblk0: mmc0:0007 SD04G 3.67 GiB
        mmcblk0: p1
        IP-Config: Unable to set interface netmask (-22).
        Looking up port of RPC 100003/2 on 192.168.1.234
        Looking up port of RPC 100005/1 on 192.168.1.234
        VFS: Mounted root (nfs filesystem) on device 0:14.
        Freeing init memory: 136K
        INIT: version 2.86 booting
        Mounting /dev on tmpfs...done.
        Populating /dev...done.
        Initializing /var...done.
        Setting the system clock.
        System Clock set to: Thu Sep 13 11:26:23 UTC 2012.
        INIT: Entering runlevel: 2
        UBI: attaching mtd8 to ubi0

    Commenting out the extraction of the tar:

        mmc0: new high speed SDHC card at address 0007
        mmcblk0: mmc0:0007 SD04G 3.67 GiB
        mmcblk0: p1
        IP-Config: Unable to set interface netmask (-22).
        Looking up port of RPC 100003/2 on 192.168.1.234
        Looking up port of RPC 100005/1 on 192.168.1.234
        VFS: Mounted root (nfs filesystem) on device 0:14.
        Freeing init memory: 136K
        INIT: version 2.86 booting
        Mounting /dev on tmpfs...done.
        Populating /dev...done.
        Initializing /var...done.
        Setting the system clock.
        Cannot access the Hardware Clock via any known method.
        Use the --debug option to see the details of our search for an access method.
        Unable to set System Clock to: Thu Sep 13 12:24:00 UTC 2012 ... (warning).
        INIT: Entering runlevel: 2
        libubi: error!: cannot open "/dev/ubi_ctrl"

    So far so good. But if I pack the whole story into a squashfs and boot from there, it acts strangely. It tells me while booting that it is unable to open an initial console, and it throws errors when mounting the UBIFS devices, but finally provides a login anyway. On top of that, my echos are not executed. If I then log in, /dev is mounted as tmpfs as desired, and all the devices reside inside. When I redo the "mount" command to mount the UBIFS partitions, it executes without problems and they are usable.
    From squashfs:

        VFS: Mounted root (squashfs filesystem) readonly on device 31:15.
        Freeing init memory: 136K
        Warning: unable to open an initial console.
        mmc0: new high speed SDHC card at address 0007
        mmcblk0: mmc0:0007 SD04G 3.67 GiB
        mmcblk0: p1
        UBIFS error (pid 484): ubifs_get_sb: cannot open "ubi1_0", error -19

    Additionally, part of the rest of the boot scripts is still executed, but not all of them. Does anyone have a clue why? Another question: is 5 MB enough or too much for /dev?

    Read the article

  • Steam on 64-bit 14.04: need some help, missing a few 32-bit libs

    - by YellowShark
    Steam says I'm missing the following libs, and I'm hoping someone can help me get things in better shape:

        xyz@abc:~$ STEAM_RUNTIME=0 steam
        Running Steam on ubuntu 14.04 64-bit
        STEAM_RUNTIME is disabled by the user
        Error: You are missing the following 32-bit libraries, and Steam may not run:
        libpangoft2-1.0.so.0
        libpango-1.0.so.0
        libgtk-x11-2.0.so.0
        libgdk_pixbuf-2.0.so.0
        Installing breakpad exception handler for appid(steam)/version(1401381906_client)
        Installing breakpad exception handler for appid(steam)/version(1401381906_client)
        [2014-06-11 20:45:39] Startup - updater built May 29 2014 09:19:23
        [2014-06-11 20:45:39] Verifying installation...
        [2014-06-11 20:45:39] Verification complete
        [2014-06-11 20:45:42] Shutdown

    I tried installing the following i386 packages: libpango-1.0-0:i386, libpangoft2-1.0-0:i386, and libgdk-pixbuf2.0-0:i386, and symlinking the .so files (from usr/lib/i386whatever../) into the ~/.local/share/Steam/ubuntu12_32/ folder, but I wasn't able to find the right match for the gtk-x11 lib, and ultimately wound up with a different, but still non-working, situation. So I've backtracked to this point and have removed those i386 packages for now. It's worth noting that Steam runs if I don't use STEAM_RUNTIME=0. Also, Steam seemed to "recognize" the i386 versions of the libpango & libpangoft2 libs after I symlinked them into place during the course of my troubleshooting; when I would rerun STEAM_RUNTIME=0 steam, it wouldn't list those two items as missing anymore. Instead, though, I had a bunch of gtk-related issues: something about overlay-scrollbar not being available, as well as warnings that it can't find the murrine engine... a whole bunch of stuff that sounded like I'd gone too far down the wrong path. Anyhow, any help sorting this out would be appreciated, and thanks!
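
    For reference, a hedged sketch of covering all four missing libraries with multiarch packages (package names assume stock 14.04; libgtk2.0-0:i386 is the likely provider of libgtk-x11-2.0.so.0, which is worth verifying with dpkg -S or apt-file before relying on it):

        sudo apt-get install libpango-1.0-0:i386 libpangoft2-1.0-0:i386 \
            libgdk-pixbuf2.0-0:i386 libgtk2.0-0:i386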

    Read the article

  • Stopping local drive mappings from transferring to an RDP session

    - by Chad
    We have a SQL server that locally has about 6 physical drives mapped. Let's say G: is a mount point to the SAN: if I connect from my local machine while I have a personal folder mapped locally as G:\userdata, that mapping transfers to the Remote Desktop session on the server, overwriting the 'NAME' (label) of the share. Here is the kicker: the G: drive on the server still has the right contents, but shows the wrong label, coming from the share on my PC. Does anyone know how to prevent this from happening? The tick box for local resources is unchecked in my Microsoft RDP client.

    Read the article

  • Second DocumentRoot for certain URLs

    - by scrr
    Hello, I have the following setup in my Apache config:

        <VirtualHost 1.2.3.4:80>
            ServerName example.com:80
            ServerAlias www.example.com
            DocumentRoot /var/www/page
            <Location "/blog">
                DocumentRoot /var/www/blog
            </Location>
            RailsBaseURI /
            RailsEnv development
        </VirtualHost>

    However, Apache tells me I am not allowed to have a second DocumentRoot. How can I make "www.example.com/blog" point to "/var/www/blog"? I'm sure this is basic, but I just can't find the proper documentation online.
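
    A sketch of the usual approach: DocumentRoot cannot appear inside <Location>, but mod_alias can map the /blog URL path onto a second directory within the same vhost (placement inside the <VirtualHost> block is assumed, as is Apache 2.2-era access-control syntax):

        Alias /blog /var/www/blog
        <Directory /var/www/blog>
            Order allow,deny
            Allow from all
        </Directory>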

    Read the article

  • How to stop Windows 7 from automatically connecting to an unsecured wifi network

    - by Remi Despres-Smyth
    One of my neighbors has an unsecure wifi network called WLAN. At one point in the past, I accidentally connected to it, and disconnected immediately when I noticed. Now, when I open my laptop at home, it sometimes connects to the WLAN network first, before trying my (secured) home wifi network. The information I've found regarding this issue seems to suggest this network should have a profile on the "Manage wireless networks" screen - but it does not. How do I tell Windows 7 to never connect to networks with SSIDs called WLAN? Or to never connect to unsecured networks without confirming with me first?
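
    When a network doesn't show up under "Manage wireless networks", its profile can still exist per-adapter; a hedged sketch of inspecting and removing it from an elevated command prompt:

        netsh wlan show profiles
        netsh wlan delete profile name="WLAN"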

    Read the article

  • Is it possible to do a full Android backup without first rooting the phone?

    - by Howiecamp
    I'm running stock 2.1 on my Moto Droid and am interested in rooting. My (admittedly weak at this point) understanding is that, in order to perform a backup[*], you need to root first. But in order to root, you've got to replace the 2.1 image with a rooted 2.0.1, or a stock 2.0.1 and then a rooted 2.1. So there's no CYA protection, given that you've got to take the risk of replacing the image in order to get root and then do a backup. [*] Ideally, I'd like to back up the stock 2.1 image AND my apps. Am I understanding this correctly, or is there a way to do a backup without first replacing the image?

    Read the article

  • MS Word - Close Word when you close the last open document **using keyboard**

    - by Chad
    In MS Word, by default, you can use Alt+F4 to close Word and Ctrl+W (or Ctrl+F4) to close the current document. Is it possible to make Word close when you close the last open document? For instance, in Chrome, if you keep hitting Ctrl+W you'll eventually close the last tab, which will also close Chrome. I'd like the same functionality with Word (and the other Office products), where I can just keep closing documents until I close the last one, at which point the application closes. Unfortunately, Ctrl+W doesn't close Word, even when there are no documents open.

    Read the article

  • Windows Firewall: How to allow traffic on port 8080?

    - by Chadworthington
    I am trying to configure Team Foundation Server so that 1) it is accessible from within my home network and 2) the web site is accessible via the Internet. I have a problem with point 1: when I access http://192.168.1.106:8080/tfs/web/ locally from 192.168.1.106, it works. When I access the same web site from another PC in my home network, the above URL works only if I turn off the firewall on 192.168.1.106. Can someone please tell me specifically how to allow traffic on port 8080 without turning off Windows Firewall? It seems that the exceptions I can specify are intended for listing programs on the box that need to communicate out. Is IIS the program for which I need to make the exception, and how do I specify that traffic on port 8080 should be allowed for web site traffic? I hope to have success with point 2 later, but I figure point 1 should be done first. I expect issues.
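
    A hedged sketch of opening the port from an elevated command prompt (the rule name is arbitrary; the GUI equivalent is an inbound port rule rather than a program exception):

        netsh advfirewall firewall add rule name="TFS web 8080" dir=in action=allow protocol=TCP localport=8080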

    Read the article

  • Oracle Delivers Special Recognition for Specialized Partners

    - by michaela.seika(at)oracle.com
    Since announcing Oracle PartnerNetwork Specialized (OPN Specialized) in October 2009, Oracle has been focused on building a program that first enables solution providers to become highly skilled Oracle partners who deliver value to customers, and that then recognizes and rewards their achievements in a meaningful way. Today the company unveiled new benefits reserved for partners who have achieved one or more of the over 50 specializations currently available. The benefits demonstrate Oracle's commitment to showcase these valued partners to three key audiences: customers, other partners, and Oracle employees. With today's launch of www.oracle.com/specialized, Oracle has taken what IDC believes is a first-of-its-kind approach to putting top partners front and center with customers and prospects. While most vendors offer a business partner finder tool on their website, none has gone as far as Oracle with the creation of this new site dedicated to the promotion of Specialized Partners. The taglines - "Recognized by Oracle, Preferred by Customers" and "Specialized. Recognized. Preferred." - get right to the point: these are the solution providers with which customers should choose to engage. The contents of the page offer multiple proof points to justify the marketing phrases. One of the benefits Oracle offers its Specialized Partners is video creation and placement. While Oracle works with partners to create informal or "guerrilla" videos, which often are placed on YouTube to generate awareness and buzz, the company also produces professional videos for its partners. The greatest value the partner receives from this benefit isn't the non-trivial production costs that Oracle covers, but rather the prominent exposure Oracle gives the finished product. Partner videos are featured on www.oracle.com/specialized, used as part of the OPN Specialized Partners monthly webcasts, and placed on a customer-facing website, the Oracle Media Network, which includes several partner sites such as PartnerCast. A solution provider gains a great deal of credibility when they can send a prospect to an Oracle website where they are featured. Read the full article here.

    Read the article
