Search Results

Search found 7583 results on 304 pages for 'roger guess'.

  • "You are missing the following 32-bit libraries, and Steam may not run: libc.so.6" The common fixes don't work,

    - by M_Steam_User
    So I know this is a problem that has been asked around a lot, but I've tried a bunch of solutions with no success. I'm running Ubuntu 12.04 (64-bit), which I just installed yesterday. This is my first time working with Linux. The error is:

        You are missing the following 32-bit libraries, and Steam may not run: libc.so.6

    Things I've tried: first, I had downloaded Steam from the Steam website. I uninstalled it and tried again from the Ubuntu Software Centre:

        sudo apt-get update
        sudo apt-get install ia32-libs
        sudo apt-get upgrade

    This installed a bunch of the 32-bit libraries, but did not fix the issue. This seems to be the major fix for most people. The direct approach of sudo apt-get install libc.so.6 returns this:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package libc.so.6
        E: Couldn't find any package by regex 'libc.so.6'

    I guess libc.so.6 isn't a package, just a single file or something? I also tried:

        gksudo gedit /etc/ld.so.conf.d/steam.conf

    and added these two lines (though the second one was already in the file):

        /usr/lib32
        /usr/lib/i386-linux-gnu/mesa

    Then executed:

        sudo ldconfig

    But nothing seemed to happen; Steam still doesn't work. So I feel like it is more likely that I have the library and Steam isn't looking in the right place. One thing I've seen is that people usually reference /usr/local/lib/ for library locations. However, I can't find where to cd into /usr/; it isn't in my home folder. If /usr/ is the home folder, there is only a /.local folder, which only has /share, no lib anywhere. Sorry for my Linux ignorance. I appreciate any help; I honestly have no idea how to confirm I have the library and point Steam to it, or whether that is even the right thing to do.

    Edit: Tried this, not entirely sure what it means:

        ~$ ls -l /lib32/libc*
        -rwxr-xr-x 1 root root 1721832 Sep 30 11:06 /lib32/libc-2.15.so
        -rw-r--r-- 1 root root  185928 Sep 30 11:06 /lib32/libcidn-2.15.so
        lrwxrwxrwx 1 root root      15 Sep 30 11:06 /lib32/libcidn.so.1 -> libcidn-2.15.so
        -rw-r--r-- 1 root root   34316 Sep 30 11:06 /lib32/libcrypt-2.15.so
        lrwxrwxrwx 1 root root      16 Sep 30 11:06 /lib32/libcrypt.so.1 -> libcrypt-2.15.so
        lrwxrwxrwx 1 root root      12 Sep 30 11:06 /lib32/libc.so.6 -> libc-2.15.so

  • Boot error after clean Ubuntu 13.04 install: [Reboot and select proper boot device]

    - by IcarusNM
    I am having the same problem as this guy, where a fresh Ubuntu install completes beautifully but will not boot. I get the ASUS (?) "Reboot and select proper boot device" error, first with Xubuntu 13.10, then Xubuntu 13.04, and after finally giving up there I am back to regular Ubuntu 13.04. ASUS Z77 motherboard, Intel chipset. Standard internal 500 GB SATA HD. 64-bit. All hardware is new, less than 3 months old. It was running Ubuntu 12.04 LTS great until I tried this upgrade.

    I have re-installed from scratch every which way: with LVM, without LVM; with the default partitions, with my own partitions; with ext3 or ext4; alongside, replace, upgrade. No difference. On the last two tries, I booted afterward from the same USB stick, downloaded and ran boot-repair, and now I guess I am off to the boot-repair support email with my URLs from that. It did all kinds of cool stuff but ultimately made no difference. I never got anything like this with Ubuntu 12.04. I've now probably re-installed Ubuntu 13.04 ten times, in slightly different ways each time. I finally found how to skip the language packs, so at least that sped things up! :)

    This starts from the ubuntu-13.04-desktop-amd64.iso and UNetbootin, as suggested in the official instructions for USB thumb drive creation from OS X. That part all works fine (booting the USB on the PC and trying Ubuntu and/or installing from there to the PC HD). I have no CD drive on this PC, but I suppose I could get one. I would rather find some Linux install that works from USB like I've always done.

    After running boot-repair twice, in the ASUS BIOS I now see three different UEFI boot options in the priority list, and they are all labeled exactly the same:

        ubuntu (P6: WDC WD5000AAKX-00U6AA0)

    Then there's a non-UEFI option:

        P6: WDC WD5000AAKX-00U6AA0 (476940MB)

    And a fifth option appeared after the first boot-repair:

        Windows Boot Manager (P6: WDC WD5000AAKX-00U6AA0)

    I have tried all 5 of these, and I get exactly the same error. I have never had Windows installed on this HD; ASUS is calling it Windows Boot Manager, but I presume that's a mistaken label for whatever boot-repair did. I can boot from USB and run GParted, and it looks great; the partitions all look normal. I found another case of this online with no solution posted, and I can't find much else about it. Does it need a Master Boot Record wipe/redo? I'm not sure how.

  • Operation times out trying to SSH from outside the LAN, i.e. from the Internet to the LAN no connection is established

    - by Pelle L
    I run Ubuntu 12.04 and have no success connecting with SSH from the Internet. The router is a TL-MR3420, which is set up to forward requests to one of the NICs on the Ubuntu machine (which has 3 NICs in total). I can SSH from a client on the local network/LAN, so the forward mechanism in the router seems to work; if I stop the SSH service on the Ubuntu machine and instead start one on the Windows machine, it works like a charm. I do not use the standard port 22, but that shouldn't be an issue as far as I understand, since it works on the same port on the Windows machine. Since my public IP isn't static I use a dynDNS service, but as said earlier, the same setup works from the Windows machine.

    The router is located at 192.168.0.1. The Ubuntu NICs have the following IPs:

        eth2  192.168.0.100
        eth1  192.168.0.101
        eth0  192.168.0.102

    and I have forwarded the "outside" requests to 192.168.0.100.

    As for firewall settings on the Ubuntu machine, I have disabled ufw, and the command ufw status gives "Status: inactive". I don't know if this is relevant information, but the command iptables --list gives:

        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    I have tried to capture traffic with the help of Wireshark (a tool I'm not too used to using), and it seems as if a few (3?) "requests" actually reach the NIC but... nothing happens. The syslog does not show any entries during these attempts. Perhaps it could be a routing issue, but I have reached my level of competence and am stuck. All help and support to get this sorted out is much appreciated. I'm new to Linux, so please do not assume I have a configuration that is correct; but as I wrote earlier, if the client that initiates SSH is on the LAN, it all works.

    PS: I have also tried to get VPN (PPP) working from the Internet, with no success; once again, VPN works on the Windows machine. So my best guess is that this is related to how the Ubuntu machine handles (IP) traffic, and not the TL-MR3420 router or other network issues.

  • Should Item Grouping/Filter be in the ViewModel or View layer?

    - by ronag
    I'm in a situation where I have a list of items that need to be displayed depending on their properties. What I'm unsure of is where the best place is to put the filtering/grouping logic of the viewmodel state. Currently I have it in my view using converters, but I'm unsure whether I should have the logic in the viewmodel instead. E.g.:

    ViewModel layer:

        class ItemViewModel
        {
            DateTime LastAccessed { get; set; }
            bool IsActive { get; set; }
        }

        class ContainerViewModel
        {
            ObservableCollection<Item> Items { get; set; }
        }

    View layer:

        <TextView Text="Active Items"/>
        <List ItemsSource="{Binding Items, Converter=GroupActiveItemsByDay}"/>
        <TextView Text="Inactive Items"/>
        <List ItemsSource="{Binding Items, Converter=GroupInActiveItemsByDay}"/>

    Or should I build it like this?

    ViewModel layer:

        class ContainerViewModel
        {
            ObservableCollection<IGrouping<string, Item>> ActiveItemsByGroup { get; set; }
            ObservableCollection<IGrouping<string, Item>> InActiveItemsByGroup { get; set; }
        }

    View layer:

        <TextView Text="Active Items"/>
        <List ItemsSource="{Binding ActiveItemsByGroup}"/>
        <TextView Text="Inactive Items"/>
        <List ItemsSource="{Binding InActiveItemsByGroup}"/>

    Or maybe something in between?

    ViewModel layer:

        class ContainerViewModel
        {
            ObservableCollection<Item> ActiveItems { get; set; }
            ObservableCollection<Item> InActiveItems { get; set; }
        }

    View layer:

        <TextView Text="Active Items"/>
        <List ItemsSource="{Binding ActiveItems, Converter=GroupByDate}"/>
        <TextView Text="Inactive Items"/>
        <List ItemsSource="{Binding InActiveItems, Converter=GroupByDate}"/>

    I guess my question is: what is good practice in terms of what logic to put into the ViewModel and what logic to put into the binding in the View, as they seem to overlap a bit?
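
    Language aside, the logic I'd be moving into the viewmodel looks something like this, sketched in Python just to keep it short (in C# it would be LINQ's GroupBy feeding ObservableCollections; the item fields here are made up):

        from datetime import datetime
        from itertools import groupby

        items = [
            {"title": "a", "last_accessed": datetime(2024, 1, 1, 9),  "active": True},
            {"title": "b", "last_accessed": datetime(2024, 1, 1, 15), "active": True},
            {"title": "c", "last_accessed": datetime(2024, 1, 2, 8),  "active": False},
        ]

        def group_by_day(subset):
            # groupby needs its input sorted by the grouping key
            keyed = sorted(subset, key=lambda i: i["last_accessed"].date())
            return {day: list(group)
                    for day, group in groupby(keyed, key=lambda i: i["last_accessed"].date())}

        # The viewmodel exposes these two collections; the view just binds to them.
        active_items_by_day = group_by_day(i for i in items if i["active"])
        inactive_items_by_day = group_by_day(i for i in items if not i["active"])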

  • High-traffic chat - how to check if there is a new message and show it to all users

    - by user2633999
    I already asked a question about this, but obviously it was not received very well; apparently it was too long, when it's actually more information so you could have given me a better answer. OK, I will be much clearer now.

    What is the best possible logic for developing a scalable chat, in terms of stability, storing/reading messages, updating the chat with new messages for all users, etc.? I have most of this developed; the logic I think I'm missing is: check if there is a new message and show it to all users. I have this implemented, but it crashes the site under its traffic of 300k-400k people, so that's my main question.

    The chat is PHP based and uses Pusher (www.pusher.com) for instant messaging, but Pusher lacks what I need because it's more like a websocket. I'm using plain files to keep the messages (I want to avoid a database as much as possible); it's a no-extension type of file, I'm sure you know. I'm getting the crash with:

        $fp = fopen(..., "w"); // pretend ... is the path and filename
        fwrite($fp, $msg);     // hardcode the message
        fclose($fp);

    where $msg is the message itself. I'm having one file per message, and I show the last 150 messages = 150 file accesses and reads; yeah, it's too much, I guess. I have better logic now which I'm pursuing: one file with the last 50-100 messages at all times. Surely that should be much better.

    How does it crash? That's the trickiest part, because everything seems ordinary; believe me, it is difficult to determine what exactly crashes the site. But within about 5 minutes, when I try to open the site, it's gone; then I put the old content without the chat back up, and the site is online again.

    I'm having jQuery post every 1 second to check if there is a new message. I'm using a timestamp in a special file where I keep the time the last message was sent, and if ((time() - time in file) <= 2), I reload the last 150 messages, including the last one. Too much input/output, write/read, or however you say it, is what I think crashes the site.
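
    To make the "one rolling file" idea concrete, here is a rough sketch in Python (illustrative only; the site itself is PHP, and the file name and message cap are made up). The point is the exclusive lock around the rewrite and the cheap mtime check before paying for a full read:

        import fcntl
        import json
        import os
        import time

        CHAT_FILE = "chat_log"   # hypothetical path
        MAX_MESSAGES = 100       # keep only the most recent messages

        def append_message(text):
            # Open read/write, creating the file if needed; an exclusive lock
            # keeps concurrent writers from interleaving partial writes.
            fd = os.open(CHAT_FILE, os.O_RDWR | os.O_CREAT)
            with os.fdopen(fd, "r+") as f:
                fcntl.flock(f, fcntl.LOCK_EX)
                try:
                    messages = json.load(f)
                except ValueError:           # empty or brand-new file
                    messages = []
                messages.append({"ts": time.time(), "msg": text})
                messages = messages[-MAX_MESSAGES:]
                f.seek(0)
                f.truncate()
                json.dump(messages, f)

        def poll(last_seen_ts):
            # Cheap "anything new?" check: compare the file's mtime with the
            # client's last poll before doing a full read.
            if not os.path.exists(CHAT_FILE):
                return []
            if os.path.getmtime(CHAT_FILE) <= last_seen_ts:
                return []
            with open(CHAT_FILE) as f:
                fcntl.flock(f, fcntl.LOCK_SH)
                messages = json.load(f)
            return [m for m in messages if m["ts"] > last_seen_ts]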

  • Help with DB structure, VoD site

    - by Chud37
    I have a video-on-demand style site that hosts series of videos under different modules. However, with the way I have designed the database, it is proving to be very slow. I have asked this question before and someone suggested indexing, but I cannot seem to get my head around it. I would like someone to help with the structure of the database here, to see if it can be improved.

    The core table is Videos:

        ID       bigint(20)    (primary key, auto-increment)
        pID      text
        airdate  text
        title    text
        subject  mediumtext
        url      mediumtext
        mID      int(11)
        vID      int(11)
        sID      int(11)

    pID is a unique 5-digit string, a shorthand identifier for each video. airdate is the timestamp (stored in text format; right there, maybe I should change that to an auto-updating TIMESTAMP). title is self-explanatory, subject is self-explanatory, url is the hard link on the site to the video, mID joins to another table for the module title, vID joins to another table for the language of the video (English, Russian, etc.), and sID is the summary for the module, a paragraph stored in an external database.

    The slowest part of the website is the logging part of it. I store the data in another table called Hits:

        id      mediumint(10)  (primary key, auto-increment)
        progID  text
        ts      int(10)

    Again (this was all made a while ago), my timestamp ts is an INT instead of ON UPDATE CURRENT_TIMESTAMP, which I guess it should be. However, this table is now 47,492 rows long, and the script that I wrote to process it is very, very slow; so slow, in fact, that it times out. A row is added to this table each time a user clicks Play on the website, so the progID is the same as the pID, and it logs the PHP time() timestamp in ts. Basically, I load the entire Hits table into an array and count the hits in each day using the ts column. I am guessing (I'm quite slow at all this, but I had no idea this would happen when I built the thing) that this is possibly the worst way to go about it.

    So my questions are as follows:

    1. Is there a better way of structuring the Videos table? If so, what do you suggest?
    2. Is there a better way of structuring Hits? If so, please help/tell me!
    3. Or are my tables fine and the PHP coding crappy?
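
    Just to illustrate what I mean by letting the database do the counting instead of my script, here is a sketch (Python/SQLite purely so it's runnable; the real site is PHP/MySQL, and the pID value and sample rows are made up). An index plus GROUP BY replaces loading all 47k rows into an array:

        import sqlite3

        conn = sqlite3.connect(":memory:")   # stand-in for the real database
        conn.executescript("""
            CREATE TABLE Hits (id INTEGER PRIMARY KEY, progID TEXT, ts INTEGER);
            INSERT INTO Hits (progID, ts) VALUES
                ('ab12c', 1364770800), ('ab12c', 1364774400), ('ab12c', 1364860800);
            CREATE INDEX idx_hits_prog_ts ON Hits (progID, ts);
        """)

        # Hits per day for one video, computed entirely inside the database:
        for day, hits in conn.execute("""
                SELECT date(ts, 'unixepoch') AS day, COUNT(*) AS hits
                FROM Hits
                WHERE progID = ?
                GROUP BY day
                ORDER BY day
                """, ("ab12c",)):
            print(day, hits)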

  • Tiling Problem Solutions for Various Size "Dominoes"

    - by user67081
    I've got an interesting tiling problem. I have a large image (size 128k, so 131072 squares) with dimensions 256x512. I want to fill this image with certain grain types (a 1x1 tile, a 1x2 strip, a 2x1 strip, and a 2x2 square) and have no overlap, no holes, and no extension past the image boundary. Given some probability for each of these grain types, a list of the number required to be placed is generated for each. Obviously an iterative/brute-force method doesn't work well here if we just randomly place the pieces; instead, a certain algorithm is required:

    1. All 2x2 square grains are randomly placed until exhaustion.
    2. 1x2 and 2x1 grains are randomly placed, alternating, until exhaustion.
    3. The remaining 1x1 tiles are placed to fill in all holes.

    It turns out this algorithm works pretty well for some cases and has no problem filling the entire image. However, as you might guess, increasing the probability (and thus number) of 1x2 and 2x1 grains eventually causes the placement to stall (since there are too many holes created by the strips, and not all of them can be placed). My approach to this has been as follows:

    1. Create a mini-image of size 8x8 or 16x16.
    2. Fill this image randomly, following the algorithm specified above, so that the desired probability of the entire image is realized in the mini-image.
    3. Create N of these mini-images and then randomly, successively place them in the large image.

    Unfortunately there are some downfalls to this simplification:

    1. Given the small size of the mini-images, nailing an exact probability for the entire image is not possible. For example, if I want p(2x1) = p(1x2) = 0.4, the mini-image may only give 0.41 as the closest probability.
    2. The mini-images create a pseudo-boundary where no overlaps occur, which isn't really descriptive of the model this is being used for.
    3. There is only a fixed number of mini-images, so I'm not sure how random this really is.

    I'm really just looking to brainstorm about possible solutions to this. My main concern is to nail down closer probabilities. Now, one might suggest I just increase the mini-image size. Well, I have, and it turns out that in certain cases (p(1x2) = p(2x1) = 0.5) the 16x16 mini-image isn't even iteratively solvable. So it's pretty obvious how difficult it is to randomly solve this for anything greater than 8x8 sizes. I'd love to hear some ideas. Thanks.
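
    Here is a minimal sketch of the placement algorithm above, in Python (illustrative only; the grid size and grain counts are small made-up values, not the 256x512 case):

        import random

        H, W = 16, 16
        grid = [[0] * W for _ in range(H)]   # 0 = empty, otherwise grain id

        def fits(r, c, h, w):
            return (r + h <= H and c + w <= W and
                    all(grid[i][j] == 0 for i in range(r, r + h)
                                        for j in range(c, c + w)))

        def place(h, w, grain_id, tries=1000):
            # Random placement with retry; returns False when it stalls.
            for _ in range(tries):
                r, c = random.randrange(H), random.randrange(W)
                if fits(r, c, h, w):
                    for i in range(r, r + h):
                        for j in range(c, c + w):
                            grid[i][j] = grain_id
                    return True
            return False

        # Step 1: 2x2 squares; step 2: alternate 1x2 / 2x1 strips.
        for _ in range(20):
            place(2, 2, 4)
        for _ in range(30):
            place(1, 2, 2)
            place(2, 1, 3)

        # Step 3: everything still empty becomes a 1x1 tile.
        for r in range(H):
            for c in range(W):
                if grid[r][c] == 0:
                    grid[r][c] = 1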

  • Application workflow

    - by manseuk
    I am in the planning process for a new application. The application will be written in PHP (using the Symfony 2 framework), but I'm not sure how relevant that is. The application will be browser based, although there will eventually be API access for other systems to interact with the data stored within the application; again, probably not relevant at this point.

    The application manages SIM cards for lots of different providers. Each SIM card belongs to a single provider, but a single customer might have many SIM cards across many providers. The application allows the user to perform actions against a SIM card, for example activate it, bar it, check on its status, etc. Some of the providers provide an API for doing this: a single access point with multiple methods, e.g. activateSIM, getStatus, barrSIM, etc. The method names differ for each provider, and some providers offer methods for extra functions that others don't. Some providers don't have APIs but do offer these methods by sending emails with attachments; the attachments are normally a CSV file that contains the SIM reference and the action required. The email is processed by the provider and replied to once the action has been completed.

    To give you an example: the front end of my application will present a customer with a list of SIM cards they own and give them access to the actions that are provided by the provider of each specific SIM card. Some methods may require extra data, which will either be stored in the backend or collected from the user in the frontend. Once the user has selected their action and added any required data, I will handle the process in the backend and provide either instant feedback (in the case of the providers with APIs) or start the process off by sending an email and waiting for its reply before processing it and updating the backend, so that the next time the user checks the SIM card its status is correct (i.e. updated by a backend process).

    My reason for creating this question is that I'm stuck! I'm confused about how to approach the actual workflow logic. I was thinking about creating a Provider interface with the most common methods (getStatus, activateSIM and barrSIM) and then implementing that interface for each provider, so class Provider1 implements Provider; then use a factory to create the required class depending on the user-selected SIM card, and invoke the selected method. This would work fine if all providers offered the same methods, but they don't; there is a subset which is common, but some providers offer extra methods. How can I implement that flexibly? And how can I deal with the processes where the workflow is different, i.e. some methods require an API call and return a value, while some require an email to be sent, where the next stage of the process doesn't start until the email reply is received?

    Please help! (I hope this is a readable question and that this is the correct place to be asking.)

    Update: I guess what I'm trying to avoid is a big if or switch/case statement; some design pattern that gives me a flexible approach to implementing this kind of fluid workflow... anyone?
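
    To show the direction I'm considering, here is a rough sketch in Python (the project is PHP/Symfony, but the pattern translates; every class and method name here is invented): a common base interface, capability discovery for the provider-specific extras, and a result object that distinguishes "done now" from "pending an email reply":

        from abc import ABC, abstractmethod

        class ActionResult:
            def __init__(self, complete, status=None):
                self.complete = complete   # False => waiting on an email reply
                self.status = status

        class Provider(ABC):
            @abstractmethod
            def activate_sim(self, sim): ...
            @abstractmethod
            def get_status(self, sim): ...

            def supports(self, action):
                # Extras beyond the common interface are discovered,
                # not hard-coded in a switch statement.
                return callable(getattr(self, action, None))

        class ApiProvider(Provider):
            def activate_sim(self, sim):
                # would call the provider's HTTP API and answer immediately
                return ActionResult(complete=True, status="ACTIVE")
            def get_status(self, sim):
                return ActionResult(complete=True, status="ACTIVE")
            def swap_sim(self, sim, new_sim):   # provider-specific extra
                return ActionResult(complete=True)

        class EmailProvider(Provider):
            def activate_sim(self, sim):
                # would send the CSV attachment and record a pending job;
                # a separate mailbox poller completes it when the reply arrives
                return ActionResult(complete=False, status="PENDING")
            def get_status(self, sim):
                return ActionResult(complete=False, status="PENDING")

        def perform(provider, action, *args):
            if not provider.supports(action):
                raise NotImplementedError(action)
            return getattr(provider, action)(*args)

        api = ApiProvider()
        print(perform(api, "activate_sim", "89440001").status)   # made-up SIM ref
        print(api.supports("swap_sim"), EmailProvider().supports("swap_sim"))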

  • Protobuf design patterns

    - by Monster Truck
    I am evaluating Google Protocol Buffers for a Java-based service (but am expecting language-agnostic patterns). I have two questions.

    The first is a broad, general question: what patterns are we seeing people use? Said patterns being related to class organization (e.g., messages per .proto file, packaging, and distribution) and message definition (e.g., repeated fields vs. repeated encapsulated fields*), etc. There is very little information of this sort on the Google Protobuf help pages and public blogs, while there is a ton of information for established protocols such as XML.

    I also have specific questions about the following two different patterns:

    1. Represent messages in .proto files, package them as a separate jar, and ship it to target consumers of the service, which is basically the default approach, I guess.

    2. Do the same, but also include hand-crafted wrappers (not sub-classes!) around each message that implement a contract supporting at least these two methods (T is the wrapper class, V is the message class; using generics but simplified syntax for brevity):

        public V toProtobufMessage() {
            V.Builder builder = V.newBuilder();
            for (Item item : getItemList()) {
                builder.addItem(item);
            }
            return builder.setAmountPayable(getAmountPayable())
                          .setShippingAddress(getShippingAddress())
                          .build();
        }

        public static T fromProtobufMessage(V message_) {
            return new T(message_.getShippingAddress(),
                         message_.getItemList(),
                         message_.getAmountPayable());
        }

    One advantage I see with (2) is that I can hide away the complexities introduced by V.newBuilder().addField().build() and add some meaningful methods such as isOpenForTrade() or isAddressInFreeDeliveryZone() etc. in my wrappers. A second advantage is that my clients deal with immutable objects (something I can enforce in the wrapper class). One disadvantage I see with (2) is that I duplicate code and have to keep my wrapper classes in sync with the .proto files.

    Does anyone have better techniques or further critiques on either of the two approaches?

    * By encapsulating a repeated field I mean messages such as this one:

        message ItemList {
            repeated Item item = 1;
        }

        message CustomerInvoice {
            required ShippingAddress address = 1;
            required ItemList items = 2;
            required double amountPayable = 3;
        }

    instead of messages such as this one:

        message CustomerInvoice {
            required ShippingAddress address = 1;
            repeated Item item = 2;
            required double amountPayable = 3;
        }

    I like the latter but am happy to hear arguments against it.
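
    To show the wrapper idea in (2) outside Java, here is a rough sketch in Python (the message class and its field names are stand-ins for generated protobuf code, not a real API): the wrapper is immutable and is the only place that knows about the builder-style construction, so .proto changes are absorbed in one spot:

        from dataclasses import dataclass
        from typing import Tuple

        @dataclass(frozen=True)   # frozen => clients get immutability
        class InvoiceWrapper:
            shipping_address: str
            items: Tuple[str, ...]
            amount_payable: float

            @classmethod
            def from_message(cls, msg):
                # msg would be the generated protobuf object in real code
                return cls(msg.shipping_address, tuple(msg.item), msg.amount_payable)

            def to_message(self, msg_cls):
                msg = msg_cls()
                msg.shipping_address = self.shipping_address
                msg.item.extend(self.items)
                msg.amount_payable = self.amount_payable
                return msg

            # ...the meaningful domain methods live here, e.g.:
            def is_open_for_trade(self):
                return self.amount_payable > 0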

  • How do I temporarily save the result of a query, to use in another?

    - by Truth
    I have this problem I think you may be able to help me with. (P.S. I'm not sure what to call this, so if anyone finds a more appropriate title, please do edit.)

    Background: I'm making an application for searching bus transit lines. A bus line is a 3-digit number, is unique, and will never change. The requirement is to be able to search for lines from stop A to stop B. The user interface is already successful in hinting the user to only use valid stop names. The requirement is to display whether a route has a direct line, and if not, to display a 2-line and even a 3-line combination.

    Example: I need to get from point A to point D. The program should show:

    1. A direct line A-D, if there is one.
    2. If not, alternative 2-line combos, such as A-C, C-D.
    3. If there aren't any 2-line combos, 3-line combos: A-B, B-C, C-D.

    Of course, the app should display bus line numbers, as well as where to switch buses.

    What I have: my database is structured as follows (simplified; the actual database includes locations, times, and whatnot):

        +-----------+
        | bus_stops |
        +----+------+
        | id | name |
        +----+------+

        +--------------------------------+
        |    lines_stops_relationship    |
        +----------+---------+-----------+
        | bus_line | stop_id | order     |
        +----------+---------+-----------+

    lines_stops_relationship describes a many-to-many relationship between the bus lines and the stops. order signifies the order in which stops appear in a single line. Not all lines go back and forth, and order has meaning (point A with order 2 comes after point B with order 1).

    The problem: we find out if a route has a direct line easily enough; just search for a single line which passes through both points in the correct order. How can I find out if there's a 2- or 3-line combo? I was thinking of searching for a line which matches the source stop and one which matches the destination stop, and seeing if I can get a common stop between them where the user can switch buses. How do I remember that stop? A 3-line combo is even trickier: I find a line for the source and a line for the destination, and then what? Search for a line which has 2 stops, I guess, but again, how do I remember the stops?

    tl;dr: How do I remember results from a query so I can use them again? I'm hoping to achieve this in a single query for each case: a query for 1-line routes, a query for 2-line combos, and a query for 3-line combos.

    Note: I don't mind if someone suggests a completely different approach from what I have; I'm open to any solutions. I will award any assistance with a cookie and an upvote. Thanks in advance!
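
    To show the shape I'm hoping for, here is the 2-line case sketched as a single self-join (Python/SQLite purely for a runnable illustration; the table follows my schema, the sample rows are made up, and "order" is quoted because it's a reserved word). The transfer stop is "remembered" by being a column of the join result, instead of a temporary I carry between queries:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE lines_stops_relationship (
                bus_line INTEGER, stop_id INTEGER, "order" INTEGER);
            -- line 101: stop 1 -> stop 3; line 202: stop 3 -> stop 4
            INSERT INTO lines_stops_relationship VALUES
                (101, 1, 1), (101, 3, 2), (202, 3, 1), (202, 4, 2);
        """)

        SRC, DST = 1, 4
        two_line = conn.execute("""
            SELECT a.bus_line AS line1, b.bus_line AS line2,
                   a.stop_id  AS transfer_stop
            FROM lines_stops_relationship src
            JOIN lines_stops_relationship a
                 ON a.bus_line = src.bus_line AND a."order" > src."order"
            JOIN lines_stops_relationship b
                 ON b.stop_id = a.stop_id AND b.bus_line <> a.bus_line
            JOIN lines_stops_relationship dst
                 ON dst.bus_line = b.bus_line AND dst."order" > b."order"
            WHERE src.stop_id = ? AND dst.stop_id = ?
        """, (SRC, DST)).fetchall()

        for line1, line2, stop in two_line:
            print(f"take line {line1}, change at stop {stop} to line {line2}")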

  • Zelda-style top-down RPG. How to store tile and collision data?

    - by Delerat
    I'm looking to build a Zelda: LTTP style top-down RPG. I've read a lot on the subject and am currently going back and forth between a few solutions. I'm using C#, MonoGame, and Tiled.

    For my tile maps, these are the choices I can see in front of me:

    1. Store each tile as its own object in an array, each one having 3-4 layers (texture/animation), depth, flags, and maybe collision (depending on how I do it). I've read warnings about memory issues going this route, and my biggest map will probably be 160x120 tiles. My average map, however, will be about 40x30. The number of tiles might be cut in half if I decide to double my tile size, which is currently 16x16. This is the most appealing approach for me, as I feel like I would know how to save maps, make changes, and separate the map into chunks for collision checks.

    2. Store the static parts of my tile map in multiple arrays acting as the different layers, and then just use entities for anything that isn't static. All of the other tile data, such as collisions, depth, etc., would be stored in their own layers as well, I guess. This way just seems messy to me, though.

    Regardless of which one I choose, I'm also unsure how to plan all of that other tile data. I could write a bunch of code that knows which integer represents which tile and its data, but if I changed a tileset in Tiled and exported it again, all of those integers could potentially change and I'd have to adjust a whole bunch of code.

    My other issue is about how to do collision. I want to at least support angled collision that slides you around the corners of objects like LTTP does, if not more oddball shapes as well. So do I:

    1. Store collision as a flag for binary collision? Could I get this to support angles? Would it be fine to store collision as an integer and have each number represent a certain angle of collision?
    2. Store a list of rectangles or other shapes and do collision that way?

    Sorry for the large two-part (three-part?) question. I felt like these needed to be asked together, as I believe each choice influences the others.
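
    For what it's worth, here is option 1 sketched out (in Python for brevity rather than C#/MonoGame; the field names are invented). As a sanity check on the memory warnings: 160x120 is only 19,200 tiles, so even a few dozen bytes per tile stays well under a megabyte. The collision member shows one way to encode angles as a small enumeration instead of a plain solid/empty flag:

        from dataclasses import dataclass, field
        from enum import IntEnum

        class Collision(IntEnum):
            EMPTY = 0
            SOLID = 1
            SLOPE_NE = 2   # 45-degree corner cut toward the north-east
            SLOPE_NW = 3
            SLOPE_SE = 4
            SLOPE_SW = 5

        @dataclass
        class Tile:
            layers: list = field(default_factory=lambda: [0, 0, 0])  # tileset ids, one per layer
            depth: int = 0
            flags: int = 0
            collision: Collision = Collision.EMPTY

        W, H = 40, 30   # the "average map" size from the post
        tilemap = [[Tile() for _ in range(W)] for _ in range(H)]

        # Mark a solid wall tile and an angled corner next to it:
        tilemap[5][7].collision = Collision.SOLID
        tilemap[5][8].collision = Collision.SLOPE_NE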

  • What is the standard term for my role?

    - by sigil
    I'm doing work that involves writing code and managing developers in a "special projects" division of a large company. I'd like to define my role better and figure out if there's an industry-standard term for what I do, so that it will be easier for me to research best practices and work on a career path.

    What I do all day:

    - A macro that connects an Excel sheet to an Access database is acting funny; I get called in to figure out what's happening and debug it.
    - Someone needs data extracted from a bunch of files on SharePoint. I figure out a client-side solution, because I'm not authorized to do anything server-side, and getting IT to do anything would take several months and need a business case.
    - A manager wants a new data-entry tool for their team. I interview the manager and team members to work out the functional requirements, then design, develop, and test the application.
    - Someone needs a VBA script to crunch some data for their presentation that's due in two hours. I drop everything I'm doing to hack out a quick script and run the analysis, without much in the way of testing.
    - A developer has been hired to build a database for one of the teams, since I'm working on too many different things and don't have time to take the project on in the timeframe required. I direct his work and push him to meet certain deadlines, interview stakeholders to get more info that will help him figure out how to build the necessary forms, and modify the functional requirements of the database to fit the timeframe.
    - Someone wants to load a set of data into a GIS system and set up an ongoing refresh and reporting of this data set. I facilitate the conversation between the GIS developers and the owners of the data set, and design a demo application as a proof of concept.

    It's kind of an "all-purpose programming and IT management" position, but it's not officially IT, because the company has an actual IT department with a rigorously defined system for submitting requests, developing code, and managing projects. What I do, I guess, is more of a handyman job, where stuff falls to me because I'm the geekiest one in the room. Is there a standard term in the software world for what I do?

  • Where can I find comprehensive documentation on the various aspects of Linux?

    - by Fsando
    Whenever a problem pops up in my use of Linux (full-time user for 6 years), I start by googling. The simplest (or most common) issues will usually be cleared up right away. If it's not in one of those two categories, the advice I find tends to be wrong, misguided, or obsolete, and if you ask a question here on "ask" you risk being "duplicated" to some superficially similar question.

    Some issues have haunted me for years, until I suddenly hit on the actual developer's documentation (or equivalent), which in a few lines explains how to solve my issue the correct and consistent way. Whenever that happens, I'm always kicking myself: why didn't I just go here to begin with? And the obvious answer is: "I had no idea this was what I was looking for." And for the issues where this hasn't yet happened, I'm banging my head: "This ought to be either straightforward, or someone should tell me it's not doable."

    So my question is: are there projects out there trying to collect or list this documentation in a searchable/browsable way? I know there are many very good "if you want this, do that" tutorials on Ubuntu, but I'm looking for actual documentation that either is, or could be, collected in one place (at least conceptually), so that a search for information could start in one place.

    I'm fully aware this is a broad question, but you could approach it as: does GNOME have a comprehensive documentation project, and where do I find it? Does Ubuntu have a comprehensive documentation project, and where do I find it?

    For example:

    - How exactly does the MIME-type association work in Ubuntu and in Xubuntu?
    - How exactly are menus created (in Ubuntu: quicklists; in Xubuntu/GNOME: the main menu)?
    - How exactly does the rendering process work for Compiz/X? (I'm having this issue where windows randomly stop updating until somehow forced to resume (I guess). So, for instance, where do I look for logs that may indicate the problem? How may I change RandR or other settings that may influence this issue?)

    So my point is to organize exact documentation, or preferably to find projects that do this already. Thanks! If answers to this question get me started, I'm hoping to collect such a list.

  • How to import an .ops file into Outlook 2007

    - by r0ca
    Hi all,

    I installed ORK.exe on my Windows 7 machine to create a copy of my profile, so I will have it as a template to push to new users. So basically, I ran the Profile Wizard to create an .ops file. I saved it in My Documents for the moment. Right now I'm trying to import that file into my Outlook, but I just can't figure it out! My goal is to have a profile template to push to new users when they log in for the first time. Thanks a bunch in advance!

    EDIT: I just figured it out. But I have a problem: when I created my profile, Outlook 2007 on Exchange was not in cached mode, so I created my profile with the Profile Wizard with cached mode disabled. I saved my outlook.ops in My Documents. Then I enabled cached mode, so when I import my .ops file, it should be restored with cached mode disabled... but when I do that, cached mode is still checked (which I don't want). Am I doing something wrong? I guess so, but I just can't figure it out.

  • Motherboard HDDPWR1 connector

    - by Eric Leschinski
    I need help identifying the name of a connector. I have a Gateway DX4870-UB318 computer. I opened the case and wanted to attach another hard drive, but to my surprise, one existing SATA hard drive was connected to the motherboard with this connector, at a spot on the motherboard where the power is supplied. What is the name of this adapter, and where can I get another one?

    Clues:

    - This computer was bought new in October 2013 from Best Buy; box number DX4870-UB318.
    - The Gateway folks won't divulge the type of motherboard it has, nor give specs on it.
    - On the wire itself is an identification code: H.35090NJ01-000.
    - Next to the connector, the motherboard says HDDPWR1; a second one says HDDPWR2.
    - This cable has two SATA power connectors and one mystery connector.
    - The power supply has no Molex power cables and no SATA power connectors!

    This is the most bizarre hard drive power system I've seen. I guess the motherboard folks are trying to remove the burden on desktop power supplies to provide adapters (Molex, SATA, other) to CD drives and hard drives. Can someone put a name on that flat white 6-pin HDD power connector?

    My solution: I can buy a "SATA Power Y Splitter Cable" to provide more spots to power SATA devices.

  • procdump on w3wp.exe: Only part of a ReadProcessMemory or WriteProcessMemory request was completed

    - by JakeS
    I'm having a problem with an IIS application that occasionally spikes up in CPU usage, and I am trying to use procdump to get a memory dump for examination. I'm running:

        procdump.exe -64 -mA 9999

    where 9999 is the PID of the process. But every time I do it, I get an error:

        Only part of a ReadProcessMemory or WriteProcessMemory request was completed.

    Doing this also recycles the app pool, relieving the CPU spike, so I can't keep trying until I get it right. Does anyone know what is going wrong?

    EDIT WITH MORE INFO: So far I've failed to generate a debug dump no matter what tool I try; all of them seem to generate the same sort of error. This is 2008 R2 Datacenter running IIS7 with a 64-bit ASP.NET web site. My best guess is that something is getting blocked, causing some requests to remain open in IIS and gradually use up resources. If I monitor the worker process using the IIS Manager and view all requests, throughout the day I'll start to see some requests that "stick" and run forever. Some of these are for static files, some are for .aspx pages, and I cannot see any common reason for them. Every once in a while the app pool starts taking up 100% CPU, and the only remedy is to kill it.

  • WmiPrvSE memory leak on Windows 2008 *R2*

    - by MichaelGG
    I've seen references on Windows 2008 to WmiPrvSE leaks, but nothing about Windows 2008 R2. We're running R2 on top of Hyper-V (2008). We are also running NSClient++ for monitoring from Opsview.

    Over time, WmiPrvSE.exe starts to use a lot of memory, causing memory alert issues (less than 10% free). The VM has 2 GB; WmiPrvSE consumes up to 500-600 MB before I kill it. Killing the process doesn't seem to have any negative effect: it starts up again, and I haven't noticed any problems. But after a day or two, it's back in the same situation.

    Any ideas on what to do? Resource Monitor doesn't show any disk or network I/O by WmiPrvSE.exe, just slowly climbing private memory...

    Edited to add: We aren't running clustering or Windows System Resource Manager. The only regular WMI user I can guess at is NSClient++, but we don't seem to have this problem on other servers.

  • Adaptec 6405 RAID controller turned on a red LED

    - by nn4l
    I have a server with an Adaptec 6405 RAID controller and 4 disks in a RAID 5 configuration. Staff in the data center called me because they noticed a red LED was turned on in one of the drive bays.

    I then checked the status using 'arcconf getconfig 1', and I got the status message 'Logical devices/Failed/Degraded: 2/0/1'. The status of the logical devices was listed as 'Rebuilding'. However, I did not get any suspicious status for the affected physical device: the S.M.A.R.T. setting was 'no', the S.M.A.R.T. warnings were '0', and 'arcconf getsmartstatus 1' also returned no problems with any of the disk drives.

    The 'arcconf getlogs 1 events tabular' command gives lots of output (sorry, I can't paste the log file here as I only have remote console access; I could post a screenshot though). Here are some sample entries:

        eventtype    FSA_EM_EXPANDED_EVENT
        grouptype    FSA_EXE_SCSI_GROUP
        subtype      FSA_EXE_SCSI_SENSE_DATA
        subtypecode  12
        cdb          28 00 17 c4 74 00 00 02 00 00 00 00
        data         70 00 06 00 00 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00 00 0

    The 'arcconf getlogs 1 device tabular' command reports mediumErrors 1 for two of the disks. Today, I checked the status of the controller again. Everything is back to normal: the controller status is now 'Logical devices/Failed/Degraded: 2/0/0', and the logical devices are all back to 'Optimal'. I was not able to check the LED status; my guess is that the red LED is off again.

    Now I have a lot of questions:

    - What is a possible cause for the medium error, and why is it not reported by the SMART log too?
    - Should I replace the disk drives? They were purchased just a month ago.
    - The rebuilding process took one or two days; is that normal? The disks are 2 TB each, and the storage system is mostly idling.
    - The timestamps of the logs seem to show the moment of the log retrieval, not the moment of the incident.

    Please advise; all help is very much appreciated.

  • Juniper EX3300 routing issue

    - by Richard Whitman
    The routing on my Juniper EX3300 does not seem to be working. My ISP's gateway is at xx.xx.xx.xx, and I have the following in the configuration:

        routing-options {
            static {
                route 0.0.0.0/0 {
                    next-hop xx.xx.xx.xx;
                    retain;
                }
            }
        }

    I can ping my ISP's gateway from the switch. However, I can NOT ping any other IP. When I do a traceroute (to Google.com's IP), this is what I get:

        traceroute to 74.125.224.69 (74.125.224.69), 30 hops max, 40 byte packets
        traceroute: sendto: No route to host
         1 traceroute: wrote 74.125.224.69 40 chars, ret=-1
        *traceroute: sendto: No route to host

    Do I need to enable any protocols? I guess this goes without saying, but I am kind of new to Junos.

    Update: This is the output from show interfaces terse | match inet:

        bme0.32768   up  up  inet  128.0.0.1/2
        jsrv.1       up  up  inet  128.0.0.127/2
        vlan.0       up  up  inet  10.0.1.1/24
        vlan.1       up  up  inet  xx.xx.xx.110/30

    and this is the output from show route forwarding-table:

        Routing table: default.inet
        Internet:
        Destination      Type  RtRef Next hop           Type  Index NhRef Netif
        default          perm      0                    rjct     36     1
        0.0.0.0/32       perm      0                    dscd     34     1
        10.0.1.0/24      intf      0                    rslv   1321     1 vlan.0
        10.0.1.0/32      dest      0 10.0.1.0           recv   1319     1 vlan.0
        10.0.1.1/32      intf      0 10.0.1.1           locl   1320     2
        10.0.1.1/32      dest      0 10.0.1.1           locl   1320     2
        10.0.1.3/32      dest      1 0:25:90:63:26:53   ucst   1331     2 vlan.0
        10.0.1.255/32    dest      0 10.0.1.255         bcst   1318     1 vlan.0

  • FTP Error: 550 Cant change directory to /: Permission denied

    - by Alessandro Merletti de Palo
    I installed PureFTPd and ISPConfig 3 on my server. Starting from the point that I'll probably uninstall ISPConfig 3 and do things directly on the server, for now I am so stubborn that I really want to see where the problem is.

    I created an FTP user through ISPConfig, named amdpftp, related to a server user named web7. It logs in with username and password, but if I try to ls, it tells me:

        FTP Error: 550 Cant change directory to /: Permission denied

    I thought many things, like:

    1. It is a problem of permissions. I went to /var/www/clients/client0/web7; it was immutable and owned by root. chattr -i and chown web7:client0 changed the permissions, but with no effect. I restored it to root:root and made it immutable again.

    2. I made some mistakes in the PureFTPd installation. Wrong: it works pretty fine, and the pureftpd.log doesn't seem to say anything bad.

    3. The pureftpd.log file only covers PureFTPd; I should also check that mysqld is working, as the user, password, and working directory are stored in a MySQL database by ISPConfig. I enabled logging in my.cnf, but there wasn't anything wrong in the ISPConfig database operations either.

    Then I mkdir'd testftp in /var/www, chown'd it to web7:client0, and edited the amdpftp user's root directory from /var/www/clients/client0/web7 to /var/www/testftp. Guess what? It worked.

    So, now I know:

    1. PureFTPd works pretty fine.
    2. The MySQL ISPConfig database does as well.
    3. The username and password of the virtual user created by ISPConfig in PureFTPd work.
    4. The correlation between that username and password and the user web7 and the group client0 does work.

    What kind of magic has been cast upon the ISPConfig directories [/var/www/clients/*] that blocks FTP users from operating?

  • How do I configure freeSSHd on Windows Server 2008 so I can log in using ssh?

    - by Daryl Spitzer
    I've installed freeSSHd on a Windows Server 2008 box (following the instructions in How to install an SSH Server in Windows Server 2008), including:

    - created a user named "dspitzer" with NTLM authorization
    - opened an exception for port 22 in the Windows Firewall

    But when I try to connect (from a Mac OS X 10.5.8 command line), I get permission denied after entering the password:

        $ ssh 12.34.56.78
        dspitzer@12.34.56.78's password:
        Permission denied, please try again.
        dspitzer@12.34.56.78's password:
        Permission denied, please try again.
        dspitzer@12.34.56.78's password:
        Received disconnect from 12.34.56.78: 2: Too many attempts.

    I've also tried:

        $ ssh dspitzer@12.34.56.78
        dspitzer@12.34.56.78's password:
        Permission denied, please try again.
        dspitzer@12.34.56.78's password:
        Permission denied, please try again.
        dspitzer@12.34.56.78's password:
        Received disconnect from 12.34.56.78: 2: Too many attempts.

    I've also tried changing the authorization to "Password stored as SHA1 hash" and entering a simple password, but I get the same problem. And I've tried a different user name ("Administrator") with no luck.

    I've confirmed that I am connecting to the server I'm configuring: if I stop freeSSHd and try to connect, I get:

        $ ssh 12.34.56.78
        ssh: connect to host 12.34.56.78 port 22: Operation timed out

    I get the exact same results from a Linux command line. Any advice or troubleshooting tips?

    Update: I tried disabling the firewall (in response to geeklin's comment), and it made no difference.

    Update #2: I no longer have this machine (I've changed employers), so I have no way of verifying the answers. I guess all I can do is make this question "community wiki".

  • Postfix unable to create lock file, permission denied

    - by John Bowlinger
    I thought I had my Postfix configuration all set up on my Amazon Ubuntu server, but I guess not. I'm trying to set up an admin email account for 3 virtually hosted Apache websites. Here's my Postfix main.cf file:

        myhostname = ip-XX-XXX-XX-XXX.us-west-2.compute.internal
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = ip-XX-XXX-XX-XXX.us-west-2.compute.internal, localhost.us-west-2.compute.internal, , localhost
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        virtual_mailbox_domains = example1.com, example2.com, example3.com
        virtual_mailbox_base = /var/mail/vhosts
        virtual_mailbox_maps = hash:/etc/postfix/vmailbox
        virtual_minimum_uid = 100
        virtual_uid_maps = static:115
        virtual_gid_maps = static:115
        virtual_alias_maps = hash:/etc/postfix/virtual

    Here's my vmailbox file:

        admin@example1.com  example1.com/admin
        admin@example2.com  example2.com/admin
        admin@example3.com  example3.com/admin
        @example1.com       example1.com/catchall
        @example2.com       example2.com/catchall
        @example3.com       example3.com/catchall

    And finally my virtual file:

        postmaster@example1.com  postmaster
        postmaster@example2.com  postmaster
        postmaster@example3.com  postmaster

    When I try to send an email through netcat to one of my domains, I get:

        unable to create lock file /var/mail/vhosts/example1.com/admin.lock: Permission denied

    This is despite the fact that I set the example1.com directory's group to postfix, and my virtual_uid_maps and virtual_gid_maps are both set to the Postfix group ID of 115.

  • Moving from ColdFusion 8 to ColdFusion 10 - Migration Fails

    - by XenoFoxx
    After having made several attempts to migrate from a ColdFusion 8 Standard server to a ColdFusion 10 Standard server, it feels like I am "almost" there. I'm using the 64-bit installer from Adobe's website, on a Windows Server 2008 (64-bit) server with IIS 7.0. The installation itself goes smoothly, and the services start and are running. But at the end of the installation it says "ColdFusion Installed, but with errors" and generates a log file. The log file reads:

        Migration Error: : Check that "C:\ColdFusion8" is a valid directory and is an installation of either ColdFusion MX 6 or ColdFusionMX 7

    and further down says:

        Status: WARNING
        Additional Notes: WARNING - Could not migrate settings from previous version of ColdFusion
        Custom Action: com.macromedia.ia.action.MigrateColdFusionAction
        Status: ERROR
        Additional Notes: ERROR - class com.macromedia.ia.action.MigrateColdFusionAction NonfatalInstallException null

    The applicationHost.config file has new XML referencing the ColdFusion 10 directory, but IIS is still using ColdFusion 8. I'm also going to guess that the settings in the CF Administrator have not been migrated, based on the message in the log above. I've followed the instructions on Adobe's site, including ensuring that ASP.NET, CGI, ISAPI Extensions, and ISAPI Filters are all enabled. I've also enabled IIS 6 Metabase Compatibility, even though I don't think it's needed.

    Has anyone else had similar issues with ColdFusion 10 and IIS 7? Currently I have uninstalled CF 10 and reverted back to ColdFusion 8.

  • pfSense: dhcpd: send_packet: No buffer space available

    - by Tillebeck
    pfSense 2.0.1-RELEASE (i386). I get a lot of entries in the log saying:

        dhcpd: send_packet: No buffer space available

    There are approximately 40 active users, and they complain about prolonged times for getting an IP and, periodically, about problems accessing the internet, prolonged response times, etc. In the log this entry appears repeatedly (periodically):

        Jun 10 18:27:30 dhcpd: send_packet: No buffer space available
        Jun 10 18:40:53 dhcpd: send_packet: No buffer space available
        Jun 10 19:01:15 dhcpd: send_packet: No buffer space available
        Jun 10 19:10:47 dhcpd: send_packet: No buffer space available
        Jun 10 19:31:10 dhcpd: send_packet: No buffer space available
        Jun 10 19:53:51 dhcpd: send_packet: No buffer space available
        Jun 10 20:23:32 dhcpd: send_packet: No buffer space available
        Jun 10 21:01:42 dhcpd: send_packet: No buffer space available
        Jun 10 21:04:52 dhcpd: send_packet: No buffer space available
        Jun 10 21:23:29 dhcpd: send_packet: No buffer space available
        Jun 10 21:46:05 dhcpd: send_packet: No buffer space available
        Jun 10 22:02:17 dhcpd: send_packet: No buffer space available
        Jun 10 22:02:45 dhcpd: send_packet: No buffer space available
        Jun 10 22:06:21 dhcpd: send_packet: No buffer space available
        Jun 10 22:08:45 dhcpd: send_packet: No buffer space available
        Jun 10 22:09:19 dhcpd: send_packet: No buffer space available
        Jun 10 22:22:23 dhcpd: send_packet: No buffer space available

    I guess these are the same periods during which the users complain about reduced access to the internet. I have seen other threads about this error, but none related to pfSense. Any ideas of what to do?
