Search Results

Search found 1705 results on 69 pages for 'dr johnson'.

Page 59/69

  • Inherit one instance variable from the global scope

    - by Julian
    I'm using Curses to create a command-line GUI with Ruby. Everything's going well, but I've hit a slight snag. I don't think Curses knowledge (esoteric, to be fair) is required to answer this question, just Ruby concepts such as objects and inheritance. I'll explain the problem now, but if I'm banging on, just look at the example below.

    Basically, every Window instance needs to have .close called on it in order to close it. Some Window instances have other Windows associated with them, and when closing a Window instance I want to be able to close all of the associated Window instances at the same time. Because associated Windows are generated in a logical fashion (I append the name with a number: instance_variable_set(self + integer, Window.new(10,10,10,10))), it's easy to target generated windows: methods can anticipate what the associated windows will be called, so I can recreate the instance variable name from scratch and almost query it with instance_variable_get(self + integer). I have a delete method that handles this.

    If the delete method is just a normal, global method (called like this: delete_window(@win543)), then everything works perfectly. However, if the delete method is an instance method, which it needs to be in order to use the self keyword, it doesn't work, for a very clear reason: it can 'query' the correct instance variable perfectly well (instance_variable_get(self + integer)), but because it's an instance method, the global instance variables aren't scoped to it! Now, one way around this would obviously be to make it a global method like delete_window(@win543), but I have attributes associated with my window instances, and it all works very elegantly as it is. This is very simplified, but it translates the problem exactly:

        class Dog
          def speak
            woof
          end
        end

        def woof
          if @dog_generic == nil
            puts "@dog_generic isn't scoped when .woof is called from an instance method!\n"
          else
            puts "@dog_generic is scoped when .woof is called from the global scope. See:\n" + @dog_generic
          end
        end

        @dog_generic = "Woof!"
        lassie = Dog.new
        lassie.speak #=> @dog_generic isn't scoped when .woof is called from an instance method!
        woof         #=> @dog_generic is scoped when .woof is called from the global scope. See:\nWoof!

    TL/DR: I need lassie.speak to return this string: "@dog_generic is scoped when .woof is called from the global scope. See:\nWoof!". @dog_generic must remain an instance variable, and the use of globals or constants is not acceptable. Could woof inherit from the global scope? Maybe some sort of keyword:

        def woof < global
          # This 'code' is just to conceptualise what I want to do, don't take offence!
        end

    Is there some way the .woof method could 'pull in' @dog_generic from the global scope? Will @dog_generic have to be passed in as a parameter?
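    A minimal sketch of the workaround the last line hints at, passing the value in explicitly (the constructor argument is hypothetical, not from the original post):

        # Instance methods can't see instance variables that belong to the
        # top-level "main" object, so hand the value over at construction time.
        class Dog
          def initialize(generic)
            @dog_generic = generic  # now scoped to this Dog instance
          end

          def speak
            puts "@dog_generic is scoped when .woof is called from the global scope. See:\n" + @dog_generic
          end
        end

        @dog_generic = "Woof!"
        lassie = Dog.new(@dog_generic)
        lassie.speak #=> prints the TL/DR string above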

    Read the article

  • Can't change color of sprites in Unity

    - by Aceleeon
    I would like to create a script that targets a 2D sprite "enemy" and changes its color to red (slightly opaque red, if possible) when you hit Tab. I have this code from a 3D tutorial and hoped the approach would carry over, but it doesn't: the script cycles through the objects tagged "Enemy" but never changes the color of the sprite. I'm very new to coding, and any help would be FANTASTIC! TL;DR: can't get 3D color targeting to work for 2D. Check out the C# code below:

        using UnityEngine;
        using System.Collections;
        using System.Collections.Generic;

        public class Targetting : MonoBehaviour
        {
            public List<Transform> targets;
            public Transform selectedTarget;
            private Transform myTransform;

            // Use this for initialization
            void Start()
            {
                targets = new List<Transform>();
                selectedTarget = null;
                myTransform = transform;
                AddAllEnemies();
            }

            public void AddAllEnemies()
            {
                GameObject[] go = GameObject.FindGameObjectsWithTag("Enemy");
                foreach (GameObject enemy in go)
                    AddTarget(enemy.transform);
            }

            public void AddTarget(Transform enemy)
            {
                targets.Add(enemy);
            }

            private void SortTargetsByDistance()
            {
                targets.Sort(delegate(Transform t1, Transform t2)
                {
                    return Vector3.Distance(t1.position, myTransform.position)
                        .CompareTo(Vector3.Distance(t2.position, myTransform.position));
                });
            }

            private void TargetEnemy()
            {
                if (selectedTarget == null)
                {
                    SortTargetsByDistance();
                    selectedTarget = targets[0];
                }
                else
                {
                    int index = targets.IndexOf(selectedTarget);
                    if (index < targets.Count - 1)
                    {
                        index++;
                    }
                    else
                    {
                        index = 0;
                    }
                    selectedTarget = targets[index];
                }
            }

            private void SelectTarget()
            {
                selectedTarget.GetComponent<SpriteRenderer>().color = Color.red;
            }

            private void DeselectTarget()
            {
                selectedTarget.GetComponent<SpriteRenderer>().color = Color.blue;
                selectedTarget = null;
            }

            // Update is called once per frame
            void Update()
            {
                if (Input.GetKeyDown(KeyCode.Tab))
                {
                    TargetEnemy();
                }
            }
        }
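    As posted, Update() never calls SelectTarget(), so the tint is never applied; Tab only advances selectedTarget. A minimal sketch of the likely fix (assuming each enemy carries a SpriteRenderer whose default color is white; a half-alpha red inside SelectTarget() would give the slightly opaque red asked for):

        // Restore the old target's color, advance, then tint the new target.
        void Update()
        {
            if (Input.GetKeyDown(KeyCode.Tab))
            {
                // Keep the reference (don't call DeselectTarget(), which nulls it)
                // so TargetEnemy() cycles instead of restarting at the closest enemy.
                if (selectedTarget != null)
                    selectedTarget.GetComponent<SpriteRenderer>().color = Color.white;

                TargetEnemy();
                SelectTarget();  // the missing call: this is what applies the color
            }
        }

        // For "slightly opaque" red, SelectTarget() could use an alpha channel:
        // selectedTarget.GetComponent<SpriteRenderer>().color = new Color(1f, 0f, 0f, 0.5f);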

    Read the article

  • State of the (Commerce) Union: What the healthcare.gov hiccups teach us about the commerce customer experience

    - by Katrina Gosek
    Guest Post by Brenna Johnson, Oracle Commerce Product

    A lot has been said about the healthcare.gov debacle in the last week. Regardless of your feelings about the Affordable Care Act, there's a hidden issue in this story that most of the American people don't understand: delivering a great commerce customer experience (CX) is hard. It shouldn't be, but it is. The reality of the government's issues getting the healthcare site up and running smoothly is something we in the online commerce community know too well. If there's one thing the botched launch of the site has taught us, it's that regardless of the size of your budget or the power of an executive with a high-profile project, some of the biggest initiatives with the most attention (and the most at stake) don't go as planned. It may even give you a moment of solace: we have the same issues! But why?

    Organizations engage too many separate vendors with different technologies, each running sections or pieces of a site, to get live. When things go wrong, it takes time to identify the problem, and who or what is at the center of it. Unfortunately, this is a brittle way of setting up a site, making it susceptible to breaks, bugs, and scaling issues. But it's the reality of running a site with legacy technology constraints in today's demanding, customer-centric market. This approach also means there are a lot of cooks in lots of different kitchens. You've got development and IT, the business and the marketing team, an external systems integrator to bring it all together, a digital agency or consultant, QA, product experts, 3rd-party suppliers, and the list goes on. To complicate things, different business units are held responsible for different pieces of the site and for managing different technologies. And again, due to legacy organizational structure and processes, this is all accepted as the normal State of the Union.

    Digital commerce has been commonplace for 15 years. Yet getting a site live, maintained, and performing requires orchestrating a cast of thousands (or at least dozens), big dollars, and some finger-crossing. But it shouldn't. The great thing about the advent of mobile commerce and the continued maturity of online commerce is that it has forced organizations to think from the outside, in. Consumers, whether they're shopping for shoes or a new healthcare plan, don't care about what technology issues or processes you have behind the scenes. They just want it to work. They want their experience to be easy, fast, and tailored to them and their needs, whatever those are. This doesn't sound like a tall order to the American consumer, especially since they interact with sites that do work smoothly. But the reality is that it takes scores of people, teams, check-ins, late nights, testing, and some good luck to get sites to run, and even more so at Black Friday (or October 1st) traffic levels. The last thing on a customer's mind is making excuses for why they can't buy a product. Just get it to work.

    So what is the government doing? My guess is working day and night to get the site performing, and having to throw big money at the problem. In the meantime they're sending frustrated online users to the call center, or even to a location where a trained "navigator" can help them complete their selection in person. Sounds a lot like multichannel commerce (where broken communication between siloed touchpoints will only frustrate the consumer more).
    One thing we've learned is that consumers spend their time and money with brands they know and trust. When sites are easy to use and adapt to their needs, they tend to spend more, come back, and even become long-time loyalists. Achieving this may require moving internal mountains, but there's too much at stake to ignore the sea change in how organizations are thinking about their customers. If the thought of re-thinking your internal teams, technologies, and processes sounds like a headache, think about the pain associated with losing valuable customers, and dollars. Whether you're in B2B or B2C, it's guaranteed that your competitors are making CX a priority, and those who made it a priority early have already begun to outpace their competition.

    So as you're planning for 2014, look to the news this week. Make sure the customer experience is a focus at your organization. Expectations are at record highs. Map your customer's journey, and think from the outside, in. How easy is it for your customers to do business with you? If they interact with many touchpoints across your organization, are the call center, website, mobile environment, and brick-and-mortar location in sync? Do you have the technology in place to achieve this? It's time to give the people what they want!

    Read the article

  • Is Ubuntu recognizing and/or using my NVIDIA graphics card?

    - by user212860
    This is my first post here, and I'm pretty new to Ubuntu/Linux. I currently have no OS other than Ubuntu 13.10 (I had Win7 until I got a new terabyte hard drive). My current PC build, if any of this helps:

        CPU: Intel i5 quad-core
        Graphics: NVIDIA GeForce GTX 650
        RAM: 8 GB
        HDD: 1 TB SATA 3
        Motherboard: MSi Z77 A-G41
        OS: Ubuntu 13.10

    So I recently installed Ubuntu 13.10 and put Steam on it, and I'm seeing that my games run a lot slower than they did when I had Win7. I figured it was a graphics problem, so I checked System Settings > Details > Overview. It says in "Graphics" that I have "Gallium 0.4 on NVE7" (I don't really know what that is). Does this mean that Ubuntu is not using my graphics card?

    In System Settings > Software & Updates > Additional Drivers, it clearly shows this:

        NVIDIA Corporation: GK107 [GeForce GTX 650]
        - This device is using an alternative driver

    (and then it shows a list of drivers that I can switch back and forth between). So this is a bit confusing: Software & Updates clearly shows that my NVIDIA card is installed and that I have a driver selected for it, but System Settings shows some Gallium 0.4 thing. I did a bit of research and ended up typing the command lspci | grep VGA in the Terminal. It showed this in response:

        VGA compatible controller: NVIDIA Corporation GK107 [GeForce GTX 650] (rev a1)

    The Terminal seems to recognize my graphics card. What it looks like to me is that I don't have the proper driver and might be using my CPU's integrated graphics. When I switch which driver I'm using in that list, System Settings still does not see my card, and some of the drivers in the list give me some sort of OpenGL error when I try to run a game. It might just be that my games run slow because the developers have not optimized them that well for Ubuntu. However, that still doesn't take away from the fact that System Settings is not showing my NVIDIA card.

    TL;DR version: How do I know if my video card is being recognized/used? If my video card is not being used, what is the best way to fix that? Please make your answers easy to understand. I do not mind wordy responses, as long as I can follow what you're saying. Any help would be greatly appreciated! Thanks, Jabber5

    Read the article

  • How to temporarily save the result of the query, to use in another?

    - by Truth
    I have this problem I think you may be able to help me with. (P.S. I'm not sure what to call this, so if anyone finds a more appropriate title, please do edit.)

    Background: I'm making an application for searching bus transit lines. A bus line is a 3-digit number, unique, and will never change. The requirement is to be able to search for lines from stop A to stop B. The user interface is already successful in hinting the user to only use valid stop names. The requirement is to display whether a route has a direct line, and if not, to display a 2-line and even a 3-line combination. Example: I need to get from point A to point D. The program should show:

    1. If there's a direct line A-D.
    2. If not, display alternative 2-line combos, such as A-C, C-D.
    3. If there aren't any 2-line combos, search for 3-line combos: A-B, B-C, C-D.

    Of course, the app should display bus line numbers, as well as when to switch buses.

    What I have: My database is structured as follows (simplified; the actual database includes locations, times, and whatnot):

        +-----------+
        | bus_stops |
        +----+------+
        | id | name |
        +----+------+

        +-------------------------------+
        |   lines_stops_relationship    |
        +-------------+---------+-------+
        |  bus_line   | stop_id | order |
        +-------------+---------+-------+

    where lines_stops_relationship describes a many-to-many relationship between the bus lines and the stops. order signifies the order in which stops appear in a single line. Not all lines go back and forth, and order has meaning (point A with order 2 comes after point B with order 1).

    The problem: We find out if a single line can serve the route easily enough: just search for a line that passes through both points in the correct order. How can I find whether there's a 2- or 3-line combo? I was thinking of searching for a line which matches the source stop, and one for the destination stop, and seeing if I can get a common stop between them where the user can switch buses. How do I remember that stop? A 3-line combo is even trickier: I find a line for the source and a line for the destination, and then what? Search for a line which has 2 stops, I guess, but again, how do I remember the stops?

    tl;dr: How do I remember results from a query so I can use them again? I'm hoping to achieve this in a single query for each case (a query for 1-line routes, a query for 2, and a query for 3-line combos). Note: I don't mind if someone suggests a completely different approach than what I have; I'm open to any solutions. Will award any assistance with a cookie and an upvote. Thanks in advance!
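    A minimal sketch of the self-join approach the question is circling, for the 2-line case (assuming MySQL syntax; :src and :dst are placeholders for the two stop ids). The transfer stop is "remembered" simply by being a joined column in the same result row, so no temporary storage is needed:

        -- Two-line combos from :src to :dst via one shared transfer stop.
        SELECT s1.bus_line AS first_line,
               d2.bus_line AS second_line,
               t1.stop_id  AS transfer_stop
        FROM lines_stops_relationship s1              -- source stop on line 1
        JOIN lines_stops_relationship t1              -- a later stop on line 1
          ON t1.bus_line = s1.bus_line AND t1.`order` > s1.`order`
        JOIN lines_stops_relationship t2              -- same stop on another line
          ON t2.stop_id = t1.stop_id AND t2.bus_line <> s1.bus_line
        JOIN lines_stops_relationship d2              -- destination later on line 2
          ON d2.bus_line = t2.bus_line AND d2.`order` > t2.`order`
        WHERE s1.stop_id = :src
          AND d2.stop_id = :dst;

    The 3-line case extends the same pattern with one more pair of joins, yielding a second transfer-stop column. The backticks around order are needed because it is a reserved word.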

    Read the article

  • Deserializing JSON data to C# using JSON.NET

    - by Derek Utah
    I'm relatively new to working with C# and JSON data and am seeking guidance. I'm using C# 3.0, with .NET 3.5 SP1, and JSON.NET 3.5r6. I have a defined C# class that I need to populate from a JSON structure. However, not every JSON structure for an entry retrieved from the web service contains all of the attributes defined within the C# class. I've been doing what seems to be the wrong, hard way: just picking out each value one by one from the JObject and transforming the string into the desired class property:

        JsonSerializer serializer = new JsonSerializer();
        var o = (JObject)serializer.Deserialize(myjsondata);
        MyAccount.EmployeeID = (string)o["employeeid"][0];

    What is the best way to deserialize a JSON structure into the C# class, handling possible missing data from the JSON source? My class is defined as:

        public class MyAccount
        {
            [JsonProperty(PropertyName = "username")]
            public string UserID { get; set; }

            [JsonProperty(PropertyName = "givenname")]
            public string GivenName { get; set; }

            [JsonProperty(PropertyName = "sn")]
            public string Surname { get; set; }

            [JsonProperty(PropertyName = "passwordexpired")]
            public DateTime PasswordExpire { get; set; }

            [JsonProperty(PropertyName = "primaryaffiliation")]
            public string PrimaryAffiliation { get; set; }

            [JsonProperty(PropertyName = "affiliation")]
            public string[] Affiliation { get; set; }

            [JsonProperty(PropertyName = "affiliationstatus")]
            public string AffiliationStatus { get; set; }

            [JsonProperty(PropertyName = "affiliationmodifytimestamp")]
            public DateTime AffiliationLastModified { get; set; }

            [JsonProperty(PropertyName = "employeeid")]
            public string EmployeeID { get; set; }

            [JsonProperty(PropertyName = "accountstatus")]
            public string AccountStatus { get; set; }

            [JsonProperty(PropertyName = "accountstatusexpiration")]
            public DateTime AccountStatusExpiration { get; set; }

            [JsonProperty(PropertyName = "accountstatusexpmaxdate")]
            public DateTime AccountStatusExpirationMaxDate { get; set; }

            [JsonProperty(PropertyName = "accountstatusmodifytimestamp")]
            public DateTime AccountStatusModified { get; set; }

            [JsonProperty(PropertyName = "accountstatusexpnotice")]
            public string AccountStatusExpNotice { get; set; }

            [JsonProperty(PropertyName = "accountstatusmodifiedby")]
            public Dictionary<DateTime, string> AccountStatusModifiedBy { get; set; }

            [JsonProperty(PropertyName = "entrycreatedate")]
            public DateTime EntryCreatedate { get; set; }

            [JsonProperty(PropertyName = "entrydeactivationdate")]
            public DateTime EntryDeactivationDate { get; set; }
        }

    And a sample of the JSON to parse is:

        {
            "givenname": [ "Robert" ],
            "passwordexpired": "20091031041550Z",
            "accountstatus": [ "active" ],
            "accountstatusexpiration": [ "20100612000000Z" ],
            "accountstatusexpmaxdate": [ "20110410000000Z" ],
            "accountstatusmodifiedby": {
                "20100214173242Z": "tdecker",
                "20100304003242Z": "jsmith",
                "20100324103242Z": "jsmith",
                "20100325000005Z": "rjones",
                "20100326210634Z": "jsmith",
                "20100326211130Z": "jsmith"
            },
            "accountstatusmodifytimestamp": [ "20100312001213Z" ],
            "affiliation": [ "Employee", "Contractor", "Staff" ],
            "affiliationmodifytimestamp": [ "20100312001213Z" ],
            "affiliationstatus": [ "detached" ],
            "entrycreatedate": [ "20000922072747Z" ],
            "username": [ "rjohnson" ],
            "primaryaffiliation": [ "Staff" ],
            "employeeid": [ "999777666" ],
            "sn": [ "Johnson" ]
        }
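    A minimal sketch of the idiomatic JSON.NET approach (hedged: JsonConvert ships with 3.5r6, and properties absent from the JSON are simply left at their default values, which covers the missing-data case):

        using Newtonsoft.Json;

        // One call maps the whole document onto the class via the
        // [JsonProperty] names; no per-field JObject picking needed.
        MyAccount account = JsonConvert.DeserializeObject<MyAccount>(myjsondata);

    One caveat from the sample above: most scalars arrive wrapped in one-element arrays ("employeeid": [ "999777666" ]), so the matching properties would need to be declared as arrays (e.g. string[]) or be given a small custom JsonConverter that unwraps single-element arrays; otherwise the call will throw on those fields.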

    Read the article

  • Smart Array P400 - Accelerator Replacement Battery Failure

    - by inflammable
    TL;DR: Is the immediate failure of a replacement battery, for a failed battery, on a battery-backed accelerator for a Smart Array P400 controller a common occurrence? Or are we likely to have a storage controller with an impending and critical fault?

    We have a slightly confusing situation with a Smart Array P400 storage controller with the 512 MB battery-backed accelerator add-on in an HP DL380 server. The storage controller is (afaik) running the latest firmware and driver:

        Model: Smart Array P400
        Controller Status: OK
        Firmware Version: 7.24
        Serial Number: *snip*
        Rebuild Priority: Medium
        Expand Priority: Medium
        Number Of Ports: 2

    The storage diagnostic (both on the boot-up screen for the controller and within the 'Management Homepage' and the 'HP Array Diagnostic Utility') recently started showing a fault for the accelerator's battery:

        Accelerator Status: Temporarily Disabled
        Error Code: Cache Disabled Low Batteries
        Serial Number: *snip*
        Total Memory: 524288 KB
        Read Cache: 25%
        Write Cache: 75%
        Battery Status: Failed
        Read Errors: 0
        Write Errors: 0

    We replaced the battery with a new unit (a visual inspection of the P400 card showing nothing unusual) and saw the same fault, but expected it to disappear over the course of a few hours/days as the battery charged. This didn't happen, and the fault status remains the same as above. Given the battery is a genuine part from HP, I wouldn't have expected a replacement battery to fail straight away, or to be dead on arrival (is that naivety on my part?).

    Is the immediate failure of a replacement battery, for a failed battery, on a battery-backed accelerator a common occurrence? Or are we likely to have a storage controller with an impending and critical fault? Is there any diagnostic that could tell me more about the failed battery without cracking the server open again? Many thanks!

    Read the article

  • vCenter appliance won't use mail relay server

    - by Safado
    tl;dr: sendmail is configured to use a relay server but still insists on using 127.0.0.1 as the relay, which results in mail not being sent.

    We have the open-source vCenter appliance (v5.0) managing our ESXi cluster. When connected to it via the vSphere Client, you can configure the SMTP relay server to use by going to Administration > vCenter Server Settings > Mail and setting the SMTP Server value. I looked through the documentation and also confirmed on the phone with support that all you have to do to configure mail is put the relay IP or FQDN in that box and hit OK.

    Well, I had done that and mail still wasn't sending. So I SSH into the server (which is SuSE) and look at /var/log/mail, and it looks like sendmail is trying to relay the email through 127.0.0.1, which is rejecting it. Looking through the config files, I see there's /etc/sendmail.cf and /etc/mail/submit.cf. You can configure items in /etc/sysconfig/sendmail and run SuSEconfig --module sendmail to generate those two .cf files from what's in /etc/sysconfig/sendmail. Playing around, I see that when you set the SMTP Server value in the vCenter GUI, all it does is change the "DS" line in /etc/mail/submit.cf to DS[myrelayserver.com]. From what I've read, the DS line is really the only thing you need to change in order to use a relay server.

    I got on the phone with VMware support and spent 2 hours modifying ANY setting that had anything to do with relays, and we couldn't get it to NOT use 127.0.0.1 as the relay. Just to note: any time we made any sort of configuration change, we restarted the sendmail service. Does anyone know what's going on? Have any ideas on how I can fix this?
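    One hedged thing worth checking (standard sendmail behaviour, not vCenter-specific advice): submit.cf only governs the mail submission program, which by design hands mail to the local daemon at [127.0.0.1]; that daemon then relays according to /etc/sendmail.cf. If the GUI only rewrites submit.cf, the daemon's own smart host may still be unset, which would match the log's 127.0.0.1 relay attempts:

        # /etc/sendmail.cf -- define the same smart host the GUI wrote into submit.cf
        DS[myrelayserver.com]

        # then re-read both configs
        /etc/init.d/sendmail restart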

    Read the article

  • SAN Replication for Fault tolerance using EVA4400

    - by Sergei
    Hi everyone, I hope someone can point me in the correct direction. It looks like I don't have enough knowledge of the subject, and timeframes are too tight for me to explore different scenarios in depth.

    We have two datacenters a few miles away from each other, connected by a 100 Mbps link. Each datacenter will have 5 BL490 blades with ESX Standard hosting about 50 VMs. Each site has an HP EVA4400 SAN with SAN replication set up. vCenter is going to be in the first datacenter, and both datacenters are networked. The SAN replication is block-level, so it seems I cannot replicate just the changes: all writes would have to be replicated. This should not be a problem, as the link can sustain about 1.8 TB a day and data can be buffered.

    I am having trouble, however, envisioning how recovery would work in this case. We don't need instant recovery; I would say a 4-hour recovery time is accepted, so a fancy automatic SRM-like DR scenario would not be easily accepted for financial reasons, though any comments are welcome. The current idea is the following: replicate LUNs from the primary site to the secondary. When disaster strikes, IT personnel switch on the ESX hosts on the remote side, connect the replicated LUNs to them, then register the VMs and change IP addresses.

    I understand that this seems like a horribly manual process, and I'm almost sure I have missed some obvious pitfalls here. Could someone let me know what direction I should go? Any articles regarding the subject? This is a brand-new setup, and we would rather build a basic recovery process and scale it later; I just need the right direction to allow for such scalability. Thank you very much in advance!

    Read the article

  • Can VMWare Server 2.0 be useful in Production for easing backups?

    - by Keith Sirmons
    Howdy, let's run this idea by the group here. I am thinking about using VMware Server in production to host: a 2008 Domain Controller with DHCP and DNS; a 2008 member server with WSUS, some virus software, and other "management" utilities; and a second 2008 member server with SQL, IIS, and file shares, for a medium business of 50-100 desktops.

    The reason I am leaning toward Server vs ESXi is backups. Using ESXi, if I want to back up the VMs, I would need a second server in the office with enough storage available to hold a copy of the vmdks. I am wondering if putting this virtual environment on top of a basic 2008 Server install will allow for easier backups to tape and/or offsite storage using JungleDisk. Can a snapshot be triggered easily via a scheduled job? I know this doesn't necessarily handle file-level restores, but I want to make sure that in a DR situation we can restore production servers quickly.

    Does this concept hold water? Would a very minimal install of the 2008 host remove too many resources from the actual production machines? This would be a new Dell 410 server with 12 GB RAM and (6) 600 GB 15K drives in a RAID 6, with dual Intel Xeon 2.26 GHz procs.

    Read the article

  • What's up with stat on MacOSX/Darwin? Or filesystems without names...

    - by Charles Stewart
    In response to a question I asked on SO, "Give the mount point of a path", one respondent suggested using stat to get the device name associated with the volume of a given path. This works nicely on Linux but gives crazy results on MacOSX 10.4. For my system, df and mount give:

        cas cas$ df
        Filesystem              512-blocks      Used     Avail Capacity  Mounted on
        /dev/disk0s3              58342896  49924456   7906440    86%    /
        devfs                          194       194         0   100%    /dev
        fdesc                            2         2         0   100%    /dev
        <volfs>                       1024      1024         0   100%    /.vol
        automount -nsl [166]             0         0         0   100%    /Network
        automount -fstab [170]           0         0         0   100%    /automount/Servers
        automount -static [170]          0         0         0   100%    /automount/static
        /dev/disk2s1             163577856  23225520 140352336    14%    /Volumes/Snapshot
        /dev/disk2s2             409404102   5745938 383187960     1%    /Volumes/Sparse

        cas cas$ mount
        /dev/disk0s3 on / (local, journaled)
        devfs on /dev (local)
        fdesc on /dev (union)
        <volfs> on /.vol
        automount -nsl [166] on /Network (automounted)
        automount -fstab [170] on /automount/Servers (automounted)
        automount -static [170] on /automount/static (automounted)
        /dev/disk2s1 on /Volumes/Snapshot (local, nodev, nosuid, journaled)
        /dev/disk2s2 on /Volumes/Sparse (asynchronous, local, nodev, nosuid)

    Trying to get the devices from the mount points, though:

        cas cas$ df | grep -e/ | awk '{print $NF}' | while read line; do echo $line $(stat -f"%Sdr" $line); done
        / disk0s3r
        /dev ???r
        /dev ???r
        /.vol ???r
        /Network ???r
        /automount/Servers ???r
        /automount/static ???r
        /Volumes/Snapshot disk2s1r
        /Volumes/Sparse disk2s2r

    Here, I'm feeding each of the mount points scraped from df to stat, outputting the results of the "%Sdr" format string, which is supposed to be the device name. Cf. the stat(1) man page:

        The special output specifier S may be used to indicate that the output,
        if applicable, should be in string format.  May be used in combination
        with:
        ...
        dr      Display actual device name.

    What's going on? Is it a bug in stat, or some Darwin VFS weirdness?

    Postscript: Per Andrew McGregor, try passing "%Sd" to stat for more weirdness. It lists some apparently arbitrary subset of files from CWD...
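    A hedged reading of those "???" results, with a small C sketch around devname(3), the BSD call behind stat(1)'s %Sd specifier: synthetic filesystems (devfs, fdesc, volfs, the automounts) have no backing block device, so their st_dev maps to no device node and the lookup comes back empty; these really are the title's "filesystems without names".

        /* Sketch: print what devname(3) reports for the filesystem holding a path. */
        #include <stdio.h>
        #include <stdlib.h>     /* devname() lives here on BSD/Darwin */
        #include <sys/stat.h>

        int main(int argc, char **argv) {
            struct stat st;
            if (argc < 2 || stat(argv[1], &st) != 0)
                return 1;
            /* st_dev identifies the containing filesystem; for virtual
               filesystems it corresponds to no real device node, so
               devname() returns NULL and stat(1) falls back to "???". */
            char *name = devname(st.st_dev, S_IFBLK);
            printf("%s -> %s\n", argv[1], name ? name : "???");
            return 0;
        }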

    Read the article

  • Cachefilesd (cachefiles) everything seems to be set up, still not working

    - by Evgenius
    I'm trying to set up cachefilesd to work with my network folder shared using NFS. Seemingly everything is set up, and cachefilesd starts normally, but caching isn't functioning. Here is the output of the commands, in the order I ran them:

    1. sudo mount

        ...
        cache-1:/mnt/datashared on /mnt/nfsshare type nfs (rw,sync,ac,acregmin=3,acregmax=60,acdirmin=30,acdirmax=300,lookupcache=pos,vers=3,fsc)
        ...

    2. lsmod | grep cachefiles

        cachefiles             40555  1
        fscache                57430  4 nfs,cifs,cachefiles,nfsv4

    3. [edited - deleted]

    4. uname -r

        3.8.0-34-generic

    5. grep CONFIG_NFS_FSCACHE /boot/config-3.8.0-34-generic

        CONFIG_NFS_FSCACHE=y

    6. lsb_release -a

        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 13.04
        Release:        13.04
        Codename:       raring

    7. sudo service cachefilesd restart

        * Restarting FilesCache daemon cachefilesd    [ OK ]

    8. dmesg

        [6211206.141781] FS-Cache: Withdrawing cache "mycache"
        [6211210.135236] FS-Cache: Cache "mycache" added (type cachefiles)
        [6211210.135242] CacheFiles: File cache on sdb1 registered
        [6214644.348929] CacheFiles: File cache on sdb1 unregistering
        [6214644.348935] FS-Cache: Withdrawing cache "mycache"
        [6214654.575909] FS-Cache: Cache "mycache" added (type cachefiles)
        [6214654.575915] CacheFiles: File cache on sdb1 registered

    9. ps aux | grep cachefilesd

        root     65399  0.0  0.0   4460   540 ?      Ss   23:14   0:00 /sbin/cachefilesd
        1000     65464  0.0  0.0   8160   916 pts/0  S+   23:16   0:00 grep --color=auto cachefilesd

    10. Finally, the biggest problem:

        cat /proc/fs/nfsfs/volumes
        NV SERVER   PORT DEV  FSID             FSC
        v3 64476645  801 0:24 233e020f0da07a93 no

    tl;dr: I think I configured everything properly, but the fsc mount option + cachefilesd don't seem to work.

    Read the article

  • How do I Setup Multiple Sites in HostGator Shared Hosting?

    - by cillosis
    I recently decided to consolidate all of my random projects into a single hosting account, as it was starting to get very expensive to run each on an individual hosting plan. I purchased the HostGator Baby plan, which allows hosting of multiple domains. You have to set it up with a root domain name, which is fine (I used my portfolio domain name). As far as file structure goes, I wanted a folder for each site in /public_html, so the structure looks like this:

        public_html/
            myportfolio.com/
                ... my files ...
            anothersite.com/
                ... my files ...
            thirdsite.com/
                ... my files ...

    I set up add-on domains and pointed them to their respective folders, which works fine. My problem is that the root domain, e.g. myportfolio.com, expects its files to be at the root of /public_html rather than within the folder I created. I set up a redirect to point requests for myportfolio.com to myportfolio.com/myportfolio.com/, which works initially, except that (at least in my WordPress installation) it still references its root folder as public_html.

    TL;DR: What is the best way to go about setting up multiple-site hosting in a shared hosting environment (i.e. I can't set up vhosts)? Does anybody know of any tutorials or videos that walk through this more clearly? Thanks.
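    A minimal sketch of the usual shared-hosting workaround: an .htaccess rewrite in /public_html that maps the primary domain into its subfolder internally, avoiding the visible-URL change a redirect causes (assumes mod_rewrite is enabled, which HostGator's shared plans generally allow; the domain and folder names are the example's own):

        RewriteEngine On
        # Only rewrite requests for the primary domain...
        RewriteCond %{HTTP_HOST} ^(www\.)?myportfolio\.com$ [NC]
        # ...that aren't already pointing into its folder.
        RewriteCond %{REQUEST_URI} !^/myportfolio\.com/
        RewriteRule ^(.*)$ /myportfolio.com/$1 [L]

    With an internal rewrite instead of a redirect, WordPress's site URL can stay http://myportfolio.com, and it should stop reporting public_html as its root folder.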

    Read the article

  • Outlook users connected to exchange can email from other email accounts

    - by Sherriffwoody
    We have found an issue on our systems whereby an Outlook user (both 2007 and 2010) connected to our Exchange server (2007) can send emails as other users, using the following steps:

    1. Within Outlook, click <New Email>.
    2. Select the <From> button, which shows the list of accounts Outlook contains, but also the option <Other Email Address>.
    3. This brings up a small dialog box with another button which, when selected, allows the user to select an email address from their contacts or the Active Directory.

    In most cases the user can select any email address within the Active Directory and send an email as if it were coming from that address. It seems not everyone has this ability, and I'm guessing it is something to do with settings in Exchange or AD (version 6), or is there a group policy that can be implemented to stop users being able to do this? We have no idea what allows this, and I have failed to find anything using Dr Google. No one has set up delegates within Outlook, but it does seem to be something similar. Does anyone know how to lock this down? Thanks in advance

    Read the article

  • Intel Z77 vs H77 for intensive compiling, gaming [closed]

    - by Bilal Akhtar
    I'm in the market for a desktop motherboard (preferably ATX) that works well with the Intel i7-3770 Ivy Bridge processor at 3.4 GHz (LGA1155 socket). That processor is very fast, and it should handle all my tasks. My question is about the chipset I should choose to accompany it. I plan to use my rig for compiling and developing Debian packages and other OS components, web development, occasional Android apps, chroots, VMs, FlightGear, other gaming (but nothing serious), and heavy multitasking, all on Ubuntu. I do NOT plan to overclock, and I never will, so that's not a concern for me. That said, I'm down to three chipset choices:

        Intel H77
        Intel Z68
        Intel Z77

    I'm planning to go for the H77, since I don't need any of the new features in the Z77: I don't plan to use a second GPU, and I will never overclock my CPU/GPU. My question is, will H77-based motherboards handle all my tasks well? Intel advertises that chipset for "everyday computing", but other sites say its base functionality is the same as the Z77's. Intel advertises the Z77 for "serious multitaskers, hardcore gamers and overclocking enthusiasts". The problem with all the Z77 motherboards I've seen is that they're way too expensive, and their main feature seems to be overclocking, which won't be useful to me.

    Will I lose any raw CPU/GPU performance or HDD r/w speed with the H77 compared to a Z77? Will heat be an issue? From what I've seen, Z77 motherboards have larger heat sinks than H77 ones. Will that be an issue if I go with an H77 motherboard with no heat sinks for the chipset? The CPU will have a fan in either case, of course.

    tl;dr: When it comes to CPU/GPU performance and HDD r/w, is the Intel H77 chipset slower than the Z77? I don't care about overclocking or multiple GPUs, and for the processor I'm set on the Ivy Bridge i7-3770.

    Read the article

  • System fans connected to a Gigabyte Z77-D3H motherboard do not increase in speed

    - by Andrew
    The motherboard (Gigabyte Z77-D3H) controls my 3-pin CPU fan just fine. My system fans are a 3-pin fan (plugged into SYS_FAN1) and a 4-pin fan (plugged into SYS_FAN3). All 3 of the system fan headers are 4-pin, but the user manual states that SYS_FAN1 is really a 3-pin header (it can control the speed of a 3-pin fan) and the 4th pin is just a reserve. All my fans have a max speed of 2000 RPM.

    Normally, all the fans run around 1000 RPM when I'm not doing anything intensive, which proves the motherboard can set their speed. However, when I run Folding@home and my temperatures increase (around 70C), only the CPU fan increases, to around 2000 RPM; the system fans stay around 1000 RPM. Through the BIOS I am able to disable system fan control, and the system fans then run at max RPM (meaning the motherboard was doing something). I've updated the BIOS to the latest version and tried out SpeedFan, but neither helped my situation. What I'd like is for the system fans to ramp up their RPMs as needed. Any thoughts?

    Tl;dr: My system (case) fans, but not my CPU fan, are always stuck around 1000 RPM out of 2000, no matter the temperature.

    Read the article

  • Postfix SASL Authentication using PAM_Python

    - by Christian Joudrey
    Cross-post from: http://stackoverflow.com/questions/4337995/postfix-sasl-authentication-using-pam-python

    Hey guys, I just set up a Postfix server in Ubuntu and I want to add SASL authentication using PAM_Python. I've compiled pam_python.so and made sure that it is in /lib/security. I've also created the /etc/pam.d/smtp file and added:

        auth required pam_python.so test.py

    The test.py file has been placed in /lib/security and contains:

        #
        # Duplicates pam_permit.c
        #
        DEFAULT_USER = "nobody"

        def pam_sm_authenticate(pamh, flags, argv):
            try:
                user = pamh.get_user(None)
            except pamh.exception, e:
                return e.pam_result
            if user == None:
                pamh.user = DEFAULT_USER
            return pamh.PAM_SUCCESS

        def pam_sm_setcred(pamh, flags, argv):
            return pamh.PAM_SUCCESS

        def pam_sm_acct_mgmt(pamh, flags, argv):
            return pamh.PAM_SUCCESS

        def pam_sm_open_session(pamh, flags, argv):
            return pamh.PAM_SUCCESS

        def pam_sm_close_session(pamh, flags, argv):
            return pamh.PAM_SUCCESS

        def pam_sm_chauthtok(pamh, flags, argv):
            return pamh.PAM_SUCCESS

    When I test the authentication using

        auth plain amltbXkAamltbXkAcmVhbC1zZWNyZXQ=

    I get the following response:

        535 5.7.8 Error: authentication failed: no mechanism available

    In the Postfix logs I have this:

        Dec 2 00:37:19 duo postfix/smtpd[16487]: warning: SASL authentication problem: unknown password verifier
        Dec 2 00:37:19 duo postfix/smtpd[16487]: warning: SASL authentication failure: Password verification failed
        Dec 2 00:37:19 duo postfix/smtpd[16487]: warning: localhost.localdomain[127.0.0.1]: SASL plain authentication failed: no mechanism available

    Any ideas? tl;dr: Anyone have step-by-step instructions on how to set up PAM_Python with Postfix? Christian
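    A hedged sketch of the Cyrus SASL glue that "no mechanism available / unknown password verifier" usually points at (file locations are Ubuntu's defaults; adjust to taste). PAM, and therefore pam_python and /etc/pam.d/smtp, is only consulted once Cyrus hands password checking to saslauthd:

        # /etc/postfix/sasl/smtpd.conf (or /usr/lib/sasl2/smtpd.conf)
        pwcheck_method: saslauthd
        mech_list: plain login

        # /etc/default/saslauthd -- make saslauthd authenticate via PAM
        START=yes
        MECHANISMS="pam"

        # /etc/postfix/main.cf
        smtpd_sasl_auth_enable = yes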

    Read the article

  • find the next due date after today within a group in an Excel PivotTable

    - by Dennis George
    I have a table set up in one sheet with "transactions". Each row contains the name of a vendor, the amount owed or paid (depending on transaction type), and the due date/transaction date. Here is some simplified sample data:

        Vendor     Date    Invoice  Payment
        Vendor A   6/30    $200
        Vendor A   6/30             ($200)
        Vendor B   7/5     $500
        Vendor B   7/5              ($500)
        Vendor C   10/28   $50
        Vendor A   10/30   $100
        Vendor C   11/15   $50

    I have already built a PivotTable from that table to group these transactions by vendor and sum the remainder owed. What I'm trying to figure out is how to get, for each vendor, the next due date (the min date of the group, excluding dates < Today()), or, if there is no next due date, the max date for that group. Here is what my PivotTable looks like, plus the date column I'd like to add (assuming Today() = 10/23):

        Vendor     Date    Owed
        Vendor B   7/5     -
        Vendor C   10/28   $100
        Vendor A   10/30   $100

    I know calling it the next due date might not be so accurate if I end up with the date of a payment in that column, but I'm OK with that.

    tl;dr: I want to find the next earliest date within each group, or else the last date. How do I do this?
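    A hedged sketch of one way to do this with a helper formula per vendor, outside the PivotTable itself (an array formula, confirmed with Ctrl+Shift+Enter; the ranges A2:A100 for vendors, B2:B100 for dates, and E2 for the vendor name are assumptions for illustration):

        =IF(COUNTIFS($A$2:$A$100,E2,$B$2:$B$100,">="&TODAY())>0,
            MIN(IF(($A$2:$A$100=E2)*($B$2:$B$100>=TODAY()),$B$2:$B$100)),
            MAX(IF($A$2:$A$100=E2,$B$2:$B$100)))

    If the vendor has any date on or after today, this returns the earliest such date; otherwise it falls back to the vendor's latest date, matching the Today() = 10/23 example above.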

    Read the article

  • HDD cannot be booted from

    - by K.Wong
    I have an ASUS A52F, no mods, with Win7 preinstalled. There is one hard drive with two partitions: one for the OS (C:) and one for data (D:). After laptop trauma (i.e. I dropped it), it still worked fine. However, when chkdsk was run on the C: drive, chkdsk crashed, and Windows is now unbootable. The Windows Startup Manager utility was run, but error code 0xc00000e9 was given. The Windows help db shows that this is a BIOS problem; however, in BIOS setup the drive can be accessed and folders can be shown. I also burned an Ubuntu 12.04 distro and booted off it, but my internal HDD is missing and cannot be accessed. When the installation program is run, the HDD shows up as a location to install to, yet the drive is shown as blank.

    tl;dr: HDD damaged, chkdsk crashed, Windows 7 unbootable. The given error code is, according to the Microsoft db, a BIOS problem, yet in the BIOS the drive is accessible. Booted a different OS (Ubuntu 12.04) off a distro disc, yet the HDD is inaccessible. Quick help appreciated.

    Read the article

  • How does the LeftHand SAN perform in a Production environment?

    - by Keith Sirmons
    Howdy, I previously asked this ServerFault question: "Does anyone have experience with LeftHand's VSA SAN?" The general consensus looks like it does not perform well enough for a production SQL server, even at a light load. So the new question is: how does LeftHand's SAN perform on the HP or Dell dedicated hardware boxes?

    We are looking at the Starter SAN with 2 HP nodes in 2-way replication, and 2 ESX servers hosting a total of 2 Active Directory servers, 1 MS SQL server, 1 file server, and 1 general-purpose server for things like virus scanning (all Microsoft Server 2005 or 2008). The reason I am looking at LeftHand is the complete software package. I plan to have a DR site, and I like how the SAN can perform async replication to the offsite location without having to go back to the vendor for more licenses. I also like the redundancy built into the Network RAID architecture.

    I have looked at other SANs and found different faults with them. For example, Dell's EqualLogic: I found that although the individual box is very redundant in hardware, the data, once spanned across multiple boxes, is not redundant; if a node goes down, you have lost the only copy of the data sitting on that hardware. (One thing is certain, all hardware fails... When? is the only question.) I have used a XioTech SAN as well. Well worth the money, BTW, but I think it is overkill for the size of office I am targeting, and the cost of getting the hardware redundancy in the XioTech makes it a little out of reach for the budget I am working in. Thank you, Keith

    Read the article

  • Downmix ALL SYSTEM audio to mono - Windows 7

    - by Mike K.
    I'm deaf in one ear and want to use my headphones when playing a game and talking with my friends on Skype/TS/Mumble/etc., while also sometimes listening to music. I need ALL my system audio to be downmixed to mono so that my ONE hearing ear gets ALL audio channels instead of split stereo audio.

    No, none of the other similar questions on Super User have a solution. My headphone properties do not have a 'Mono' option, I don't have a 'Headphone Virtualization' option, and my Realtek HD audio driver software doesn't have these options either (the driver was updated 11/14/2012). Don't even talk about setting the balance of one side of the headphones to 0; you're not paying attention if you suggest that. JACK and Virtual Audio Cable didn't work. It's possible I configured them wrong, but I followed the steps I found in related questions and still got split stereo out.

    TL;DR: I need a viable, working software solution (I say software because I have a USB headset) for forcing ALL system audio to mono, so that I can hear literally everything through the one earpiece. Thanks!

    Read the article

  • vagrant and puppet security for ssl certificates

    - by Sirex
    I'm pretty new to Vagrant. Would someone who knows more about it (and Puppet) be able to explain how Vagrant deals with the SSL certs needed when making Vagrant testing machines that process the same node definition as the real production machines?

    I run Puppet in master/client mode, and I wish to spin up a Vagrant version of my Puppet production nodes, primarily to test new Puppet code against. If my production machine is, say, sql.domain.com, I spin up a Vagrant machine of, say, sql.vagrant.domain.com. In the Vagrantfile I then use the puppet_server provisioner and give a puppet.puppet_node entry of "sql.domain.com" so it gets the same Puppet node definition. On the Puppet server I use a regex of something like /*.sql.domain.com/ on that node entry, so that both the Vagrant machine and the real one get that node entry. Finally, I enable auto-signing for *.vagrant.domain.com in Puppet's autosign.conf, so the Vagrant machine gets signed. So far, so good...

    However: if one machine on my network gets rooted, say, unimportant.domain.com, what's to stop the attacker changing the hostname on that machine to sql.vagrant.domain.com, deleting the old Puppet SSL cert off of it, and re-running Puppet with a given node name of sql.domain.com? The new SSL cert would be autosigned by Puppet, match the node-name regex, and this hacked node would get all the juicy information intended for the SQL machine!

    One solution I can think of is to avoid autosigning and instead put the known Puppet SSL cert for the real production machine into the Vagrant shared directory, then have a Vagrant SSH job move it into place. The downside is that I end up with the SSL certs for each production machine sitting in one git repo (my Vagrant repo), and thereby on each developer's machine, which may or may not be an issue, but it doesn't sound like the right way of doing this.

    tl;dr: How do other people deal with Vagrant & Puppet SSL certificates for development or testing clones of production machines?
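    For reference, a minimal sketch of the Vagrantfile arrangement described above (Vagrant 1.1+ syntax; the puppet_server provisioner and its puppet_server/puppet_node options are the standard ones, while the master's FQDN is an assumption):

        Vagrant.configure("2") do |config|
          config.vm.hostname = "sql.vagrant.domain.com"

          config.vm.provision :puppet_server do |puppet|
            puppet.puppet_server = "puppet.domain.com"  # assumed master FQDN
            puppet.puppet_node   = "sql.domain.com"     # same node definition as production
          end
        end

    The security question then reduces to: anything that can present a CSR matching the autosign pattern gets this node definition, which is exactly the attack sketched above.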

    Read the article

  • How can I debug user mode driver failures in Windows 8

    - by Tom
    I have a 32 GB SD card. Whenever I insert this card into my newly upgraded Windows 8 laptop, the OS stops responding normally: Metro apps won't work, the system may or may not log in, and desktop apps may or may not be able to do things. When I remove the card and restart, all is fine. As soon as I put the card back in, the system starts misbehaving again. I've run Windows Update, so I have the latest drivers from Microsoft. This does not occur with the 8 GB cards I have; unfortunately I only have one 32 GB card, so I can't test with others.

    From examining the system event log, I've determined this is happening due to a user-mode driver failure. How can I best debug this issue from here? How can I figure out which driver this is related to? Will there be a Dr. Watson crash dump somewhere?

    Details:

        - System
          - Provider
            [ Name]  Microsoft-Windows-DriverFrameworks-UserMode
            [ Guid]  {2E35AAEB-857F-4BEB-A418-2E6C0E54D988}
          EventID    10110
          Version    1
          Level      1
          Task       64
          Opcode     0
          Keywords   0x2000000000000000
          - TimeCreated
            [ SystemTime]  2012-10-29T00:51:57.532718300Z
          EventRecordID  40417
          Correlation
          - Execution
            [ ProcessID]  1056
            [ ThreadID]   3796
          Channel    System
          Computer   thebrain
          - Security
            [ UserID]  S-1-5-18
        - UserData
          - UMDFHostProblem
            [ lifetime]  {811E3DC4-FBC6-420B-ABCC-AD7505A36F3B}
            - Problem
              [ code]        3
              [ detectedBy]  2
            ExitCode   3
            - Operation
              [ code]  259
            Message    72448
            Status     4294967295

    Edit 1: I tried using DebugView from Sysinternals (you can get it here: http://technet.microsoft.com/en-us/sysinternals/bb896647.aspx), but the information it gave was not especially helpful. Then I tried connecting WinDbg to WUDFHost.exe (the process that seems to host user-mode drivers) to see if it could catch the error (get it here: http://msdn.microsoft.com/en-US/windows/hardware/hh852363; instructions: http://msdn.microsoft.com/en-US/library/windows/hardware/ff554716(v=vs.85).aspx). That didn't help much either: it didn't catch any exceptions as I'd hoped (which would at least have pointed me to the cause of the crash).

    Read the article

  • Why is wget so much faster than Firefox at some downloads?

    - by Earlz
    Recently, I needed to do an update of Xilinx WebPack; mind you, this is one hefty piece of software. It weighs in at 6 gigs, which definitely isn't "quick" on any internet connection I've ever had available to me. So when I went to download it (using Firefox, of course), I was very... unsettled by the fact that the download was only going at 110 kByte/s. My internet connection is capable of about 2200 kByte/s down, so what gives?!

    My workaround in the past for this issue has been to take the link to my Linode Linux server and download it there with wget, where the download will zip along at 14 MByte/s, and then either copy it to my website directory and download it that way over HTTP, or use sftp. Both ways work about as well and will sufficiently max out my connection. However, I recently figured out the missing variable: I tried doing the download locally with wget and was able to max out my connection!

    TL;DR: Now, my question. Why is wget so much faster than Firefox at downloading this file? I hardly ever see such a difference in download speeds, except with this one file.

    Read the article
