Search Results

Search found 1732 results on 70 pages for 'insight'.

Page 46/70 | < Previous Page | 42 43 44 45 46 47 48 49 50 51 52 53  | Next Page >

  • OPATH syntax to force dynamic distribution group field as numerical comparison? (Exchange 2010)

    - by Matt
    I'm upgrading a (working) query-based group (Exchange 2003) to a new and 'improved' dynamic distribution group (2010). For better or worse, our company decided to store everyone's employee ID in the pager field, so it's easy to manipulate via ADUC. That employee number has significance, as all employees are in a certain range, and all contractors are in a very different range. Basically, the new OPATH syntax appears to be using a string compare on my pager field, even though it's a number. Let's say my employee ID is 3004; well, it's "less than" 4 from a string-compare point of view.

      Set-DynamicDistributionGroup -Identity "my-funky-new-group" -RecipientFilter "(pager -lt 4) -and (pager -like '*') -and (RecipientType -eq 'UserMailbox')"

    Shows up in EMC with this:

      ((((((Pager -lt '4') -and (Pager -ne $null))) -and (RecipientType -eq 'UserMailbox'))) -and (-not(Name -like 'SystemMailbox{*')) -and (-not(Name -like 'CAS_{*')) -and (-not(RecipientTypeDetailsValue -eq 'MailboxPlan')) -and (-not(RecipientTypeDetailsValue -eq 'DiscoveryMailbox')) -and (-not(RecipientTypeDetailsValue -eq 'ArbitrationMailbox')))

    This group should have a max of 3 members, right? Nope - I get a ton because of the string compare. I show up, and I'm in the 3000 range. Question: does anyone know a clever way to force this to be an integer check? The read-only LDAP filter on this group looks good, but of course it can't be edited. The LDAP representation (look ma, no quotes on the 4!) - also interesting that it compensates for the missing less-than operator by adding the (pager=4) negation:

      (&(pager<=4)(!(pager=4))(pager=*)(objectClass=user)(objectCategory=person)(mailNickname=*)(msExchHomeServerName=*)(!(name=SystemMailbox{*))(!(name=CAS_{*))(!(msExchRecipientTypeDetails=16777216))(!(msExchRecipientTypeDetails=536870912))(!(msExchRecipientTypeDetails=8388608)))

    If there is no solution, I suppose my recourse is either finding an unused field that actually will be treated as an integer, or most likely building this list with PowerShell every morning with my own automation - lame. I know of a few ways to fix this outside of the OPATH filter (designate "full-time" in another field, etc.), but would rather have Exchange do the lifting since this is the environment at the moment. Any insight would be great - thanks! Matt
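
    If the OPATH route turns out to be a dead end, the scheduled PowerShell rebuild mentioned above might look roughly like this. This is an untested sketch: the 4000 cutoff and the static group name are placeholders, and it assumes the pager attribute holds only digits.

      # Do the integer comparison in PowerShell and push the result into a
      # conventional (static) distribution group on a schedule.
      # "my-funky-new-group-static" and the 4000 cutoff are placeholders.
      $employees = Get-User -RecipientTypeDetails UserMailbox -ResultSize Unlimited |
          Where-Object { $_.Pager -match '^\d+$' -and [int]$_.Pager -lt 4000 }

      # Replace the group's membership in one shot.
      Update-DistributionGroupMember -Identity "my-funky-new-group-static" `
          -Members ($employees | ForEach-Object { $_.Identity }) -Confirm:$false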

    Read the article

  • What is causing sudden freezing while running a real-time program?

    - by Trevor Boyd Smith
    So I run a highly intensive (CPU/GPU) real-time program. During normal execution suddenly everything freezes for 1-4 seconds. I opened "Process Explorer" in the background to help gain insight and maybe identify something. Here is what the CPU/GPU graphs look like when I align them in time: Notice the 4 distinct drops in both the CPU/GPU. You can see that it goes from some sort of positive CPU/GPU usage to almost zero. These drops in the graph align with when the real-time program suddenly freezes. How do I find what is causing these sudden drops? NOTE: When you put your mouse over the graph it tells you the time, accurate to the second, for where your cursor is. Maybe this mouse-over feature could be helpful in some way (e.g. what if you had a log of all processes every 100ms). EDIT: The real-time program is a video game and so I can't watch some sort of instrumentation while the video game is running. I need a solution that lets you look back in time somehow to see what was happening when the slowdown occurred. EDIT: RE - Recording Data vs using real-time monitor: The Windows Performance Recorder is for some reason not recording what I expect it to record, so I switched to using "perfmon" and then opening its "Resource Monitor". RE - Setting it up so I can view the real-time monitor: In the video game I set it to spectate and then put the video game in "windowed" mode so that I can view the real-time display that Resource Monitor has. Now that I can get semi-real-time data (only once per second... how do you get more than once per second?) I started looking at the various real-time data readouts. Getting to the cause: I noticed a strong correlation between high disk IO and low CPU usage (which is also seen by having in-game freezing). How do you use Resource Monitor to find out who is doing all this offending disk IO?
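
    One way to get the per-process history the question asks for is a performance-counter log rather than a live view. A rough sketch using the built-in typeperf tool; the counter paths, one-second interval and output file are just illustrative:

      :: Log per-process disk I/O once per second so the trace can be examined after a freeze.
      :: Stop logging with Ctrl+C, then sort the CSV by the moment the game froze.
      typeperf "\Process(*)\IO Read Bytes/sec" "\Process(*)\IO Write Bytes/sec" -si 1 -o C:\perflogs\diskio.csv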

    Read the article

  • Can't authorize a server for Amazon RDS

    - by Parris
    We are attempting to slowly migrate a website over to AWS among other things. We decided the first thing to move was the database. We have a dedicated server with a different hosting provider. We only have one IP. I am having trouble authorizing the IP so that the old server can connect to RDS. It simply hangs for a while when using the mysql CLI, then responds: ERROR 2003 (HY000): Can't connect to MySQL server on 'db.address.us-east-1.rds.amazonaws.com' (110) It did work on my laptop though. I am not quite sure what is wrong. I have a feeling I don't quite understand CIDR/IP. I simply took the IP address and tacked on /32 at the end. Then I gleaned some information that it also has to do with the subnet mask? ifconfig reports: 255.255.255.0 I found a calculator and the IP changed a bit and had /24 at the end. That still didn't work. One other note... perhaps I don't know enough about the differences between OSes. The hosting provider is using CentOS, while our development machines are all Ubuntu. Any insight would be extremely helpful! THANKS :)
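
    The address RDS has to be authorized for is the one the connection actually arrives from, which is not necessarily the address ifconfig shows on the old server (NAT or multiple interfaces can change it). A quick way to check before adding that address as a /32 to the DB security group; the checkip endpoint is just one convenient service, and the credentials below are placeholders:

      # See the public address the outside world (and RDS) sees for this server;
      # authorize <that address>/32 in the RDS DB security group, not the ifconfig value.
      curl -s https://checkip.amazonaws.com

      # Then re-test connectivity from the old server:
      mysql -h db.address.us-east-1.rds.amazonaws.com -u dbuser -p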

    Read the article

  • Codec Problems with trying to edit videos with VirtualDub

    - by Roy Rico
    So, I'm a little frustrated. According to this post and various other internet sources, VirtualDub is supposed to allow users to quickly split and join video files. I am using Windows 7 64-bit and the latest version of VirtualDub (64-bit). I have tried to edit various movie files, and each attempt at editing various files I have done has not worked for me. AVI file A.avi won't load, saying that it can't locate the decompressor for the "FMP4" format. I have tried this solution and this one, and neither of them work. I have tried setting the VFW decompressor for the 'Other MPEG4' setting to XVID or LIBAVCODEC. There is no change in VirtualDub. AVI file B.avi will load in VirtualDub, but any attempt to split it gives me an error that I don't have XVID codecs installed. I've attempted to install the proper codecs (Shark's Windows 7 Codecs, CCCP) with no change. AVI file C.avi will load, and it will split, but won't split using the "Direct Stream Copy" mode, claiming the compression algorithm is incompatible. I tried the "Fast Recompress" option and it created a 27GB file out of what was supposed to be about a 300-400MB file. Can someone please give me some insight into what I'm messing up?
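
    If VirtualDub keeps fighting the codecs, a different tool can do the lossless cut and join. A rough ffmpeg sketch (a separate download, not part of VirtualDub); file names and timestamps are placeholders, and copying the streams instead of re-encoding keeps the output size in line with the input:

      # Cut a 10-minute segment starting at 5:00 without re-encoding (analogous to Direct Stream Copy):
      ffmpeg -ss 00:05:00 -t 00:10:00 -i input.avi -c copy part1.avi

      # Join segments listed in parts.txt (one line per file, of the form: file 'part1.avi'), again without re-encoding:
      ffmpeg -f concat -i parts.txt -c copy joined.avi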

    Read the article

  • How ZFS handles online replacement in a RAID-Z (theoretical)

    - by Kevin
    This is a somewhat theoretical question about ZFS and RAID-Z. I'll use a three-disk single-parity array as an example for clarity, but the problem can be extended to any number of disks and any parity. Suppose we have disks A, B, and C in the pool, and that it is clean. Suppose now that we physically add disk D with the intention of replacing disk C, and that disk C is still functioning correctly and is only being replaced out of preventive maintenance. Some admins might just yank C and install D, which is a little more organized as devices need not change IDs - however this does leave the array degraded temporarily and so for this example suppose we install D without offlining or removing C. Solaris docs indicate that we can replace a disk without first offlining it, using a command such as:

      zpool replace pool C D

    This should cause a resilvering onto D. Let us say that resilvering proceeds "downwards" along a "cursor." (I don't know the actual terminology used in the internal implementation.) Suppose now that midway through the resilvering, disk A fails. In theory, this should be recoverable, as above the cursor B and D contain sufficient parity and below the cursor B and C contain sufficient parity. However, whether or not this is actually recoverable depends upon internal design decisions in ZFS which I am not aware of (and which the manual doesn't state in certain terms). If ZFS continues to send writes to C below the cursor, then we are fine. If, however, ZFS internally treats C as though it were gone, resilvering D only from parity between A and B and only writing A and B below the cursor, then we're toast. Some experimenting could answer this question but I was hoping maybe someone on here already knows which way ZFS handles this situation. Thank you in advance for any insight!
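
    The experiment alluded to above can be run safely against file-backed vdevs instead of real disks. A rough sketch; file sizes and paths are arbitrary, and timing the simulated failure of A while the resilver is still running is the fiddly part:

      # Build a throwaway 3-disk RAID-Z pool from files (Solaris: mkfile; elsewhere, truncate -s works).
      for f in A B C D; do mkfile 256m /var/tmp/$f; done
      zpool create testpool raidz /var/tmp/A /var/tmp/B /var/tmp/C

      # Put some data on it, start the in-place replacement, then knock out A mid-resilver.
      dd if=/dev/urandom of=/testpool/data bs=1024k count=100
      zpool replace testpool /var/tmp/C /var/tmp/D
      zpool offline testpool /var/tmp/A     # simulate the failure of disk A (may refuse if ZFS thinks replicas are insufficient)
      zpool status -v testpool              # watch whether the resilver completes
      zpool scrub testpool                  # verify checksums afterwards

      zpool destroy testpool                # clean up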

    Read the article

  • Catastrophic Failure opening ODBC via Citrix

    - by Joshdan
    We recently had our Citrix server crash unexpectedly. When it came back up, there was a new issue -- every ODBC connection fails with "Catastrophic Failure" (0x8000FFFF). The issue is limited to Citrix / ICA connections; logging in as the same user via RDP works as usual. The following code is my minimal test case (for wscript):

      '// test_odbc.vbs
      strConn = "Driver={Microsoft Text Driver (*.txt; *.csv)};Dbq=c:\files\;"
      Set rs = CreateObject("ADODB.recordset")
      strSQL = "SELECT * FROM myFile.csv"
      wscript.echo "Press OK to Test"
      '// This line breaks over Citrix, but not over Terminal Services
      '// ----------------------
      rs.open strSQL, strConn, 3, 3
      '// ----------------------
      wscript.echo rs("a")

    Any insight would be greatly appreciated. Windows Server 2003 SP1, Citrix MetaFrame Presentation Server 4.0. Clients include at least versions 10.2-11 running on 2000-Vista, OS X. The ODBC error happens whether a DSN is used or not, on at least Access, MS-SQL, and CSV. Connections both through the SSL Gateway and directly. There have been a few users actually able to log in without trouble, but I can't pin down anything special about them.

    Read the article

  • How to display escaped characters in tmux status bar

    - by walrus
    I am running tmux from a tty on an embedded Linux device (NOT a terminal emulator). Because the screen is rather small, I want to add some "icons" to the tmux status bar. To achieve this, I have simply created a font with the appropriate glyphs for things like battery or wifi. I can load the font, and display the characters with calls that use an escape to the line-drawing characters like so:

      echo -e "\xe\234\xf"

    \xe escapes me into line-drawing character mode, \234 is my created character, and \xf returns me to normal character mode so my terminal doesn't start getting goofy. This works perfectly if I enter the command at the terminal, whether tmux is started or not. The issue arises if I then try to use it in my ~/.tmux.conf file for the status bar. I currently have a line like this:

      set -g status-right "#(echo -e "\xe\234\xf") #(/script/to/output/powerlevel)

    This simply outputs

      \xe\234\xf powerlevel

    It goes the same if I try printf over echo. This is the output I would expect to get on the terminal if I made the call without passing -e to echo, or without enclosing the statement with quotes. I then decided to wrap the calls to echo or printf in a shell script. Again, the script works when called from the terminal, but not in tmux's status bar. Now I get the unprintable character "?" instead of my icon, like this:

      ? powerlevel

    This is what I would expect if I did not use the line-drawing escapes previously mentioned above, or if I tried to copy and paste the character as text using tmux. In addition, the calling of these character scripts screws up the rest of my status-right, as the clock has about 6 digits for minutes when it is called (though it correctly only updates two of them). How can I make tmux respect the escape characters? Any help or insight is greatly appreciated.
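
    For what it's worth, the literal "\xe\234\xf" output is consistent with tmux running #() through /bin/sh, whose echo may not understand -e or hex escapes. One way to take that variable out of the picture is a tiny helper script using POSIX printf with octal escapes; the path is a placeholder, the glyph code is the one from the question, and whether tmux then passes the control characters through to the status line unsanitized is a separate question:

      #!/bin/sh
      # ~/bin/power-icon.sh - emit SO (0x0e), the custom glyph (octal 234), then SI (0x0f).
      # POSIX printf expands octal escapes itself, so this behaves the same under dash, ash and bash.
      printf '\016\234\017'

      # and in ~/.tmux.conf:
      #   set -g status-right "#(~/bin/power-icon.sh) #(/script/to/output/powerlevel)"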

    Read the article

  • MySQL getting stuck, eating up disk i/o

    - by bonez05
    Hi all, Using MySQL 5.0.51 on Solaris. At intermittent times it looks like MySQL is getting 'stuck'. The disk usage on the server spikes to 98% busy from reads. I used dtrace (specifically the DTrace toolkit - iosnoop) to track down which processes were doing all the reads. MySQL was reading tablename.TDM hundreds of times per second. There was no more than average load on the webserver that could account for this. There were no cronjobs running, and no other utilities like mysqldump or anything. It is a master / slave replication setup. As a jury-rigged fix, I altered the MySQL table from 'tablename' to 'tablename2' and then back to 'tablename'. This fixed the problem temporarily, and "unsticks" MySQL. The disk usage goes back down and dtrace is no longer showing hundreds of reads to 'tablename.TDM' per second. A couple of ideas I had are:
    1. MySQL version bug
    2. Infinite loop somewhere in my application (which I'm not sure how likely this is)
    3. ??
    Has anybody seen this before or have any insight? Thanks
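
    When it next gets stuck, a couple of statements that work on 5.0 can show what MySQL itself thinks it is doing before resorting to the ALTER TABLE trick. Purely a diagnostic sketch:

      -- What each connection is executing, and for how long:
      SHOW FULL PROCESSLIST;

      -- Which tables are open; look for rows with In_use > 0:
      SHOW OPEN TABLES;

      -- Table-cache pressure that can trigger storms of re-opens and re-reads:
      SHOW GLOBAL STATUS LIKE 'Opened_tables';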

    Read the article

  • Windows desktop virtualization instead of replacing workstations

    - by Chris Marisic
    I'm head of the IT department at the small business I work for; however, I am primarily a software architect and all of my system administration experience and knowledge is ancillary to software development. At some point this year or next we will be looking at upgrading our workstation environment to a uniform Windows 7 / Office 2010 environment as opposed to the hodge podge collection of various OEM licensed editions of software that are on each different machine. It occurred to me that it is probably possible to forgo upgrading each workstation and instead have it be a dumb terminal to access a virtualization server and have their entire virtual workstation hosted on the server. Now I know basically anything is possible, but is this a feasible solution for a small business (25-50 workstations)? Assuming that this is feasible, what rough guidelines exist for calculating the required server resources needed for this? How exactly do solutions handle a user accessing their VM - do they log on normally to their physical workstation and then use remote desktop to access their VM, or is it usually done with a client piece of software to negotiate this? What types of software are available for administering and monitoring these VMs, and can this functionality be achieved out of the box with Microsoft Server 2008? I'm mostly interested in these questions relating to Server 2008 with Hyper-V, but feel free to offer insight with VMware's product lineup, especially if there are any compelling reasons to choose them over Hyper-V in a Microsoft shop. Edit: Just to add some more information: the implementation goal would be to upgrade our platform from a Win2k3 / XP environment to a full Windows 2008 / Win7 platform without having to perform all of the associated work on each of our differently configured workstations. Also, could anyone offer any realistic guidelines for how much hardware is needed to support 25-50 workstations virtually? The majority of the workstations do nothing except Office, Outlook and web. The only high-demand workstations are the development workstations, which would keep everything local.

    Read the article

  • linux container bridge filters ARP reply

    - by Dani Camps
    I am using kernel 3.0, and I have configured a Linux container that is bridged to a tap interface in my host computer. This is the bridge configuration:

      :~$ brctl show bridge-1
      bridge name     bridge id               STP enabled     interfaces
      bridge-1        8000.9249c78a510b       no              ns3-mesh-tap-1
                                                              vethjUErij

    My problem is that this bridge is dropping ARP replies that come from the ns3-mesh-tap-1 interface. If instead I statically populate the ARP tables and ping directly, everything works, so it has to be something related to ARP. I have read about similar problems in related posts, and I have tried the solutions explained therein but nothing seems to work. Specifically:

      ~$ grep net.bridge /etc/sysctl.conf
      net.bridge.bridge-nf-call-arptables = 0
      net.bridge.bridge-nf-call-iptables = 0
      net.bridge.bridge-nf-call-ip6tables = 0
      net.bridge.bridge-nf-filter-vlan-tagged = 0
      net.bridge.bridge-nf-filter-pppoe-tagged = 0

    arptables and ebtables are not installed. The iptables FORWARD chain is set to accept everything:

      Chain FORWARD (policy ACCEPT)
      target     prot opt source               destination

    The bridged interfaces are set to PROMISC:

      ~$ ifconfig
      ns3-mesh-tap-1 Link encap:Ethernet  HWaddr 1a:c7:24:ef:36:1a
                ...
                UP BROADCAST PROMISC MULTICAST  MTU:1500  Metric:1
      vethjUErij Link encap:Ethernet  HWaddr aa:b0:d1:3b:9a:0a
                ....
                UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1

    The MACs learned by the bridge are correct (checked with brctl showmacs). Any insight on what I am doing wrong would be greatly appreciated. Best Regards, Daniel
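
    To see on which leg the ARP reply actually disappears, it can help to watch all sides of the bridge at once. A rough sketch, with interface names taken from the brctl output above:

      # On the host: does the reply arrive from the ns-3 tap at all?
      tcpdump -n -e -i ns3-mesh-tap-1 arp &

      # Does it make it onto the bridge, and out of the veth towards the container?
      tcpdump -n -e -i bridge-1 arp &
      tcpdump -n -e -i vethjUErij arp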

    Read the article

  • MAMP Pro virtual hosts on Mountain Lion not being recognized

    - by user135242
    I'm running MAMP PRO 2.1.1 on OSX 10.8.1 and not having any luck getting Apache to recognize any hosts that have been set up in MAMP besides localhost. This only happens when trying to use port 80. When I switch to port 8888 everything runs fine. Mountain Lion's Apache has been disabled and I get no errors or warnings when starting MAMP and the servers. Any doc root I set for "localhost" runs without problem. Any other server name I define, however, results in "cannot connect" when viewing in Chrome (not to be confused with "cannot find" - the browser is in fact following /etc/hosts to 127.0.0.1, but MAMP's Apache is simply not responding) I was wondering if anyone else has run into this issue or knows how to solve it. I'm working on some WordPress development and it keeps wanting to redirect to the base URL (with no port reference) even during setup. I'm sure I could fix things from the WP side but I'd rather figure out what the root issue is with MAMP. Thanks in advance for any insight you can provide.
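
    Since everything works on 8888 but not on 80, it is worth confirming what, if anything, is actually listening on port 80 when MAMP's Apache claims to be running. A quick check from Terminal; the plist path is the stock Mountain Lion one:

      # Who owns port 80 right now? Expect to see MAMP's httpd here, not the system one.
      sudo lsof -nP -iTCP:80 -sTCP:LISTEN

      # Make sure OS X's bundled Apache really is out of the way.
      sudo launchctl unload -w /System/Library/LaunchDaemons/org.apache.httpd.plist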

    Read the article

  • iTunes Title Bar is Equal to the Location of the Library file?

    - by Urda
    I have never been able to get a clear answer on why the latest edition of iTunes does this. I have my entire iTunes library located in C:\itunes\ and the library data files inside C:\itunes\!library_info for backup purposes. However, when version 9 of iTunes came out it went from having iTunes as the title to !library_info. Is there any way to get around this without moving my data files away? Annoying "feature" if that is what it is. Again, Apple support and forums were of no help to me. Anyone have insight on this? System Info: Windows Vista x86 Ultimate, latest updates. iTunes Version 9.0.3.15 Screenshot: http://farm5.static.flickr.com/4072/4414557544_d0b25eb64c_o.jpg Flickr Page: http://www.flickr.com/photos/urda/4414557544/ UPDATE: I put a bounty on this, and would be open to hacking the registry or doing a custom config. Please help me fix my title bar!!! Update2: I would not like to move my library files out of itunes\!library_info, to avoid them intermingling with my music library.

    Read the article

  • What's throttling the database?

    - by Troels Arvin
    Hardware: Intel x86_64 with 192GB of RAM. OS: CentOS 5.4 x86_64. DBMS: DB2 v. 9.7.1 64 bit. During certain special workloads (e.g. parallel REORGs/RUNSTATs), I've seen the server transporting 450MB/s with 25000 IO/s (yes, there is probably some storage system caching happening here) while all CPU cores were happily working in an even mix of usermode/wait. And disk benchmark tools can also bring some very satisfying bandwidth and IO/s numbers to the table. On the other hand, we also have another scenario: a single rather complex query with at least one large table scan. db2's "list applications" reports that the query is Executing (not locked). IO: at most 10MB/s, 500 IO/s; CPU: two cores in 99.9% wait state, all other cores 100% idle. The tables which the query reads from have been altered to have LOCKSIZE=TABLE, so I would think that lock list work is zero. What's going on in such a situation? What tools/snapshots/... can I use to gain better insight in such a case?
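
    A few of the standard DB2 9.7 instruments that can show where those two waiting cores are spending their time; the application handle and database name below are placeholders:

      # What the engine dispatchable units are doing right now (I/O waits show up here):
      db2pd -edus

      # Per-application snapshot: rows read, bufferpool activity, sort overflows, etc.
      db2 get snapshot for application agentid 123

      # Dynamic SQL snapshot to find the statement doing the large table scan:
      db2 get snapshot for dynamic sql on MYDB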

    Read the article

  • Server 2003 R2 - IIS6 - granting access to website via IP with subnet range

    - by John
    We are trying to allow a client to connect to our website. By default we are denying all access except for those with the specified IPs we have configured; everything before this has just been single IP addresses. However, now we must implement a range of IPs, and rather than input thousands of records we want to use the "group of computers" option in the Grant Access page. We have it configured to work off of the IP 72.21.192.0 with a subnet mask of 255.255.224.0. They are unable to connect. Looking over our IIS logs, they are receiving a 302 error, which is the same behavior anyone who is unauthorized to view the page in question should get. The IP address coming in is 72.21.217.2, so it should be well within the range of acceptable IP addresses. I'm at a loss, as everything I look up tells me to do what we are doing. So any insight would be appreciated, especially because I'm a software guy, not hardware. Thanks!
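
    For reference, the arithmetic confirms the client's address really is inside the configured range, so the range definition itself should not be what is rejecting them:

      72.21.192.0 with mask 255.255.224.0 is 72.21.192.0/19
      /19 covers 72.21.192.0 - 72.21.223.255 (third octet 192-223)
      72.21.217.2 has third octet 217, so it falls inside the range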

    Read the article

  • Remove directory from URL IIS 7.5

    - by xalx
    I've tried to find a solution to this and found some guides out there but none seem to work. I have the following URL - http://www.mysite.com/aboutus.html However there are some other sites which link to my old hosted site and point to http://www.mysite.com/nw/aboutus.html. My issue here is trying to remove the 'nw' directory from the URLs. I have set up the following URL Rewrite in IIS but it does not seem to do anything:

      <?xml version="1.0" encoding="UTF-8"?>
      <configuration>
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="Redirect all to root folder" enabled="true" stopProcessing="true">
                <match url="^nw$|^/nw/(.*)$" />
                <conditions>
                </conditions>
                <action type="Redirect" url="nw/{R:1}" />
              </rule>
              <rule name="RewriteToFile">
                <match url="^(?!nw/)(.*)" />
                <conditions>
                  <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                  <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                </conditions>
                <action type="Rewrite" url="/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>
      </configuration>

    Any insight would be appreciated.
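
    For comparison, a redirect rule in the same style that strips the prefix rather than re-adding it: in IIS URL Rewrite the match pattern is relative to the site root (no leading slash), and the action would point at {R:1} alone. This is a sketch of the intent, not a tested drop-in:

      <rule name="Strip nw prefix" enabled="true" stopProcessing="true">
        <match url="^nw/(.*)$" />
        <action type="Redirect" url="{R:1}" redirectType="Permanent" />
      </rule>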

    Read the article

  • Using Computer name in URL causes issues when connecting to Web Services

    - by AWinters
    The set of applications I work on all access the same 8 or so web services that we have. These services and applications all reside on the same box and all use the computer name when trying to connect to the web service. For example: if I have a web service called MapDataService and I have an application that accesses it, it would access it by the URL: http://COMPUTERNAME/MapDataService/MapDataService.asmx. This works in most of the applications that access the web service. However, we have several applications that, when using the computer name in the URL, will not get data returned from the service (actually a 503 is returned). In order to get it to work, the IP address of the system needs to be used in place of the COMPUTERNAME. This strikes me as very odd considering, as I mentioned before, all applications and services are on the same box and most other applications use the COMPUTERNAME with no issues. Can someone give me some insight as to what could be causing this? We have no access to IIS logs and what logs we did get (this is on a customer site) are not very useful.

    Read the article

  • IIS 7 and ASP.NET State Service Configuration

    - by Shawn
    We have 2 web servers load balanced and we wanted to get away from sticky sessions for obvious reasons. Our attempted approach is to use the ASP.NET State service on one of the boxes to store the session state for both. I realize that it's best to have a server dedicated to storing sessions but we don't have the resources for that. I've followed these instructions to no avail. The session still isn't being shared between the two servers. I'm not receiving any errors. I have the same machine key for both servers, and I've set the application ID to a unique value that matches between the two servers. Any suggestions on how I can troubleshoot this issue? Update: I turned on the session state service on my local machine and pointed both servers to the ip address on my local machine and it worked as expected. The session was shared between both servers. This leads me to believe that the problem might be that I'm not using a standalone server as my state service. Perhaps the problem is because I am using the ip address 127.0.0.1 on one server and then using a different ip address on the other server. Unfortunately when I try to use the network ip address as opposed to localhost the connection doesn't seem to work from the host server. Any insight on whether my suspicions are correct would be appreciated.
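
    One frequently documented gotcha that matches the symptom in the update - the state service working against 127.0.0.1 but not against the machine's network address - is that aspnet_state only accepts remote connections when AllowRemoteConnection is enabled on the box that hosts it. A sketch of the usual change, assuming this is in fact the cause; the server address in the config line is a placeholder:

      :: On the box hosting the state service: allow remote connections, then restart it.
      reg add "HKLM\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters" /v AllowRemoteConnection /t REG_DWORD /d 1 /f
      net stop aspnet_state && net start aspnet_state

      :: Then point both servers' web.config at the same address and port, e.g.:
      ::   <sessionState mode="StateServer" stateConnectionString="tcpip=10.0.0.5:42424" timeout="20" />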

    Read the article

  • How can I setup nginx to serve virtualhosts with rails(unicorn/passenger) and php-fpm

    - by NewAlexandria
    I would like to serve multiple sites on one instance. I install nginx, php-fpm, and a Rails app. I use sites like this to guide me. I configure php-fpm to listen to a local socket:

      listen = /var/run/php-fpm/php-fpm.sock

    I configure nginx with multiple hosts:

      include /etc/nginx/conf.d/*.conf

    I have several site PHP conf files like /etc/nginx/conf.d/site1.conf:

      server {
          listen 80;
          server_name site1.com www.site1.com;
          root /var/www/site1;
          location / {
              index index.html index.php;
          }
          location ~ \.php$ {
              fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
              fastcgi_index index.php;
              include fastcgi_params;
              fastcgi_param PATH_INFO $fastcgi_script_name;
              fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
          }
      }

    and Rails site conf files like:

      upstream rails {
          server 127.0.0.1:3000;
      }
      server {
          listen 80;
          server_name site2.com www.site2.com;
          root /var/www/site2;
          location / {
              proxy_pass http://rails;
              proxy_set_header X-Forwarded-For $remote_addr;
              proxy_set_header Host $host;
              proxy_set_header X-Url-Scheme $scheme;
          }
      }

    I have a unicorn Rails server running via rails s -p 3000. Yet, no sites come up for either site1.com or site2.com. I can get to the Rails site at www.site2.com:3000 What is wrong? I've spent 2 days (nearly 30hr) trying many different blogs, SO / SF questions, etc. Please share your insight or answer. edit 1: No log entries are created when I try to visit either site. It's like the requests never come in.
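
    Given that neither site logs anything at all, the first things worth ruling out are a configuration that never loaded and requests that never reach this nginx. A few checks, following the conf.d layout described above:

      # Does the merged configuration parse, and is the running nginx actually using it?
      nginx -t
      nginx -s reload

      # Bypass DNS entirely: send a request to this box with the right Host header.
      curl -v -H "Host: site1.com" http://127.0.0.1/
      curl -v -H "Host: site2.com" http://127.0.0.1/

      # Confirm nginx (and not something else) owns port 80.
      netstat -tlnp | grep :80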

    Read the article

  • Light Blue Monitor Screen

    - by SixfootJames
    I have seen this before with an older monitor: over time, the monitor colours change to a light blue haze. This has started happening with an older monitor of mine now (a GigaByte monitor) and, although none of the pins are bent and it's a brand new machine, there is no reason, other than aging, that it should show the light blue screen. Perhaps it is just time for a new monitor, but if there is a way of saving it still, I would appreciate the insight. Perhaps there is something I have not tried; perhaps it has something to do with the new machine instead of the monitor? I had the monitor plugged into two other machines over the weekend and didn't have this problem, so I am not quite sure what to make of it. Many thanks! EDIT: I must also add that when I plugged the monitor into the older machines, I had the VGA converter attached to the end of the newer DVI output - which, when plugged into the newer PC, I don't need of course.

    Read the article

  • How can I setup BluePill to Monitor a Rails App Running via Passenger (mod_rails)

    - by Jim Jeffers
    I recently launched a site running Phusion Passenger. Unfortunately, the site went down due to a frozen thread. I was able to save the server by doing kill -9 on the specific PID. Still, I thought Passenger was able to manage this automatically. I have a server with 1GB of memory running one Rails app, with Passenger allotted up to 7 instances. However, when I came to discover the site went down, I found that Passenger had spawned 6 instances, with one of them using up over 800MB of memory, causing the server to swap. As a result I am hoping to set up something like Bluepill on the server, but I'm slightly confused as to how you go about doing it - mainly because Bluepill expects to start/stop the processes it's monitoring. However, in our case, Passenger already restarts processes for us, so we only need to monitor the PIDs of Passenger's instances and kill them once they've gotten too large. Has anyone here set up Bluepill to monitor a Rails app running under Phusion's Passenger? Any insight would be useful.
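
    Short of a full Bluepill setup (which is awkward here precisely because Passenger, not Bluepill, owns the worker processes), a cruder watchdog run from cron can at least stop a single runaway worker from swapping the box again. A rough sketch - the 500 MB threshold and the ps pattern used to spot Passenger's Rails workers are assumptions to adjust for the environment:

      #!/bin/sh
      # Kill any Passenger Rails worker whose resident set exceeds ~500 MB (rss is in KB);
      # Passenger respawns workers on demand, so the site stays up.
      ps -eo pid,rss,args | awk '/Rails: / && $2 > 512000 { print $1 }' | while read pid; do
          kill -9 "$pid"
      done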

    Read the article

  • Are there any viable DNS or LDAP alternatives for distributed key/value storage and retrieval?

    - by makerofthings7
    I'm working on a software app that needs distributed decentralized name resolution, and isn't bound to TCP/IP. Or more precisely, I need to store a "key" and look up its value, and the key may be a string, a number, or any other realistic data type. Examples: With a phone number, look up a name. (or with an area code, redirect to the server that handles that exchange) With an IP address, get a DNS name, or a Whois contact (string value) With a string, get an IP (like a DNS TXT or SRV record). I'm thinking out of the box here and looking for any software that allows for this. (more info below) Are there any secure, scalable DNS alternatives that have gained notoriety? I could ask on StackOverflow, but think the infrastructure groups would have better insight on this. Edit More info: I'm looking at "Namecoin", the DNS version of Bitcoin, and since that project is faltering, I'm looking at alternative ways to store name-value pairs, with an optional qualifier. I think a name-value pair of global interest is useful, but on a limited scale. Namecoin tried to be too much, and ended up becoming nothing. I'm trying to solve that problem by researching alternatives and applying distributed technologies where applicable. Bitcoin/Namecoin offers a Distributed Hash Table, which has some positive aspects, but is not useful for DNS, except for root servers.

    Read the article

  • Proper Outlook Free/Busy status when working from home

    - by rwmnau
    Our office (pretty large - about 200 people) has recently started part-time telecommuting. It's only one day/week now, but it's already raised some questions about availability, so I wanted to see how the users here, some of whom I'm sure telecommute to a corporate job, set their out-of-office status. Outlook has four statuses, and here's what I (and most others?) take them to mean:
    Free: I'm available for meetings
    Busy: I'm in a meeting or otherwise occupied, and unavailable
    Tentative: Shy away from scheduling over, but I'm available if needed
    Out of office: I'm on vacation and unavailable.
    However, I don't travel for work - do people tend to use this status to mean they're remote, but available for a phone call/bridge? As we begin to telecommute, I'll be available by phone for meetings, but not in person - any meeting can have a conference bridge, but some meetings just need to be in person. I'd like to send the right message about my status - people can schedule meetings with me on my telecommute days, but they should expect me to be on a conference bridge when they do. What status do people use? Does "Out of Office" correctly reflect that you're working from home, even though I perceive this to mean that somebody is on vacation? Maybe I'm the only one confused here, but as a company that's never before done telecommuting of any kind, I'm in the dark about standard practices. Thanks for the insight! Though this isn't a technical question directly, I'm hoping it's still applicable to the group and constructive - if it's not, please close it and accept my apology.

    Read the article

  • QR Codes Printing, but Not Printing Correctly - Any Ideas?

    - by SDS
    I am mail merging some QR codes via file paths stored in Excel into a label template in MS Word 2013. I have the whole process with the Ctrl+F9 working properly, but I am stumped on this: On the 30-label sheet I am trying to print I have 6 labels that repeat information, stored in duplicated rows in Excel for this print job. All of the labels have 2 images on them; one is a logo and the other one is a unique QR code for that person. For the first set of 6 labels that print out, everything works perfectly. However, from the 2nd time the information is printed onward, all of the merged fields and the logo look correct, but the QR codes are printing strangely. Basically it's the QR code as I want it, but with a copy of itself covering the top left 25% of the QR code. Print preview doesn't show this happening; only once it's printed does it come out like this. I've been trying everything I can think of to fix this and don't know what to do. So far I've recreated the document several times, and tried duplicating the images in the source folder and giving the links in the Excel document new file paths, in case the mail merge feeding from the same .jpg was an issue. Any help or insight is greatly appreciated because this is a test run for a larger batch run that I need to get done soon :( Thank you!

    Read the article

  • SSH connection times out unless I tunnel in from a different server-

    - by rm-vanda
    OK, so this just started last week - whenever we try to connect to our server via SSH (we use SFTP as well), the connection times out. However, when you SSH to any other server and then SSH into the machine from there, it works flawlessly. Now, the mind-blowing thing is that sometimes the SSH connection will succeed. Moments ago, I tried it from another machine, and then my own, and it worked - only to time out the next go around. Last week, simply restarting the SSH daemon worked, but this week, no such luck. I even went in and changed /etc/hosts.allow to "ALL : ALL", and /etc/hosts.deny is blank. The firewall config hasn't changed - but I even disabled the firewall to see if that would work. It did, for a moment - before cutting off again. (ufw is set to "ALLOW" not "LIMIT".) When I try SSH'ing in from my phone, it works fine - so it seems the problem is with our ISP/router/gateway. However, I see no log in the router/gateway that says it's blocking our connections - and that wouldn't explain why we can SSH into any other server - except for this one - from our network. I truly appreciate any insight that anyone may have on this matter.
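
    Intermittent timeouts that depend on which network the client sits on are often easier to pin down at the packet level than from sshd's own logs. A couple of data-gathering steps; the hostname and interface name below are placeholders:

      # From an affected client: where exactly does the handshake stall?
      ssh -vvv user@server.example.com

      # On the server, watch whether the SYN (and anything after it) arrives at all:
      tcpdump -n -i eth0 'tcp port 22'

      # If the banner arrives but the session dies during key exchange, one hypothesis is path MTU;
      # this probes whether full 1500-byte packets make it through unfragmented:
      ping -M do -s 1472 server.example.com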

    Read the article

  • Windows 7 image in VMware will allow network connections out but not HTTP

    - by Ormis
    I am currently trying to create a set of images to deploy on my network, but I've run into a snag. When I create my own Windows 7 image I can successfully use NAT for connecting to the network, but whenever I try to access a webpage I get nothing. To be more specific: all firewalls/iptables are disabled on my host machine, my virtual machine, and my network. I can do lookups and all addresses respond correctly (I'm even using Google's DNS). On the host OS I have full connectivity. On the virtual machine I can ping any device I want and all addresses resolve correctly. Within a browser I cannot reach any page via hostname or IP. I feel almost like port 80 is being blocked, but I can't find any reason this would be the case. If anyone has had this occur before, I would love some insight into the problem. I initially asked this on Stack Overflow and my eyes are now opened up to Super User. Thank you for any help you can provide.

    Read the article
