Search Results

Search found 18475 results on 739 pages for 'non exist'.

  • torrent downloads not showing on Squid log

    - by noobroot
    Hello, I have only been working as a sysadmin for a few months, so I still have lots to learn. Here is the situation: we have an OpenBSD 4.5 box acting as firewall, DNS, cache, etc. The box has two network cards, one connected directly to the internet and the other to our switch. I used to work with sarg for log analysis but then changed to the much faster free-sa. I use a daily free-sa report to check bandwidth usage and report our top 5 bandwidth consumers (be #1 three days in a week and you will be buying the pizzas :D; we are a small company of ~20, so we are all very familiar with each other). This was working really well until recently. One of us needed to download some material via torrent (~3 GB), and since the pizza rule only applies to non-work-related downloads, he told me (and I verified) that his download was indeed work related, so I was going to dismiss that 3 GB from his quota. To my surprise, the log didn't show that 3 GB; his IP's consumption was only around 290 MB. More recently, since the FIFA World Cup started, we know that some of the employees are watching the matches' streaming. We know it and we don't care, since, as already stated, we are a small company without restrictive policies: we can all chat, watch YouTube, and download anything we want, BUT we are only allowed 300 MB a day, otherwise you end up on the top-5 pizza board. That streaming consumption is also not showing in the free-sa reports. So my question is: why is this data being excluded from the reports? I suspect the free-sa reports only list certain types of traffic, but I also wonder whether it is the Squid logs themselves that are not, erm, logging these connections. Any help, guide, advice or clarification is appreciated.
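
    Note that Squid's access.log only ever contains traffic that actually passed through the proxy. BitTorrent and most video streaming go straight out through the firewall rather than through Squid, so free-sa (which parses the Squid log) can never see them. If per-IP accounting has to include that traffic, it has to be measured at the pf firewall instead. A minimal sketch, assuming em0/em1 interface names and that logging every outbound connection is acceptable:

      # /etc/pf.conf (sketch): interface names are assumptions, adjust to your box
      ext_if = "em0"    # NIC facing the internet
      int_if = "em1"    # NIC facing the office switch

      # log everything leaving for the internet, so torrents and streams that
      # bypass Squid still show up in the pflog device
      pass out log on $ext_if inet from any to any keep state

      # inspect or summarize the logged traffic, e.g.:
      #   tcpdump -n -e -ttt -i pflog0
      # or feed pflog0 into a per-IP accounting tool (pfstat, bandwidthd, etc.)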

    Read the article

  • Industrial strength cloud file storage

    - by ArthurG
    I'm looking for an industrial-strength cloud file storage system. It will be used by multiple people in a startup. Our requirements:

    - Transparent file system access: files and folders in the file system must be able to transparently access (read and write) files in the cloud; files must be synchronized whenever network access is available and buffered otherwise. The system must be usable by non-technical people.
    - Access control: we need to control who can access which files, at least on a very coarse basis. E.g., the developers can access the system design documents, only the corporate folks can access recruiting documents, and only management can access certain corporate documents. The cloud service should therefore have a notion of an account (our startup) with multiple users and distinct credentials and rights for each user.
    - Clients: it must be accessible from Macs and PCs; I would hope that it supports Linux (e.g., Ubuntu) too.
    - Security: it must provide robust security.
    - Backup: the cloud service must reliably back up the files.
    - Versioning: change/version history is a big plus, but not required.
    - Not free: we're willing to pay for the service.

    So far, we've reviewed the following, albeit not completely thoroughly:

    - Dropbox: has everything except access control, which is provided via shared folders; that's not adequate, if I understand it correctly, because there's no authentication of the sharing user. Security is also a concern, as discussed here http://www.economist.com/blogs/babbage/2011/05/internet_security and here http://blog.dropbox.com/?p=821.
    - Windows Live Mesh: has everything except clients, supporting only Windows 7 and OS X.
    - SpiderOak: has everything except transparent file system access, which is only available for one user.
    - Amazon Cloud: doesn't offer transparent file system access.
    - Rackspace Cloud Drive: has everything except access control and versioning.

    I'll gladly include any clarifications or additional systems the community provides. Arthur

    Read the article

  • Nginx configuration leads to endless redirect loop

    - by brianthecoder
    So I've looked at every sample configuration I could find, and yet every time I try to view a page that requires SSL, I end up in a redirect loop. I'm running nginx/0.8.53 and Passenger 3.0.2. Here's the SSL config:

      server {
        listen 443 default ssl;
        server_name <redacted>.com www.<redacted>.com;
        root /home/app/<redacted>/public;
        passenger_enabled on;
        rails_env production;
        ssl_certificate /home/app/ssl/<redacted>.com.pem;
        ssl_certificate_key /home/app/ssl/<redacted>.key;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X_FORWARDED_PROTO https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Url-Scheme $scheme;
        proxy_redirect off;
        proxy_max_temp_file_size 0;
        location /blog {
          rewrite ^/blog(/.*)?$ http://blog.<redacted>.com/$1 permanent;
        }
        location ~* \.(js|css|jpg|jpeg|gif|png)$ {
          if (-f $request_filename) {
            expires max;
            break;
          }
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
          root html;
        }
      }

    Here's the non-SSL config:

      server {
        listen 80;
        server_name <redacted>.com www.<redacted>.com;
        root /home/app/<redacted>/public;
        passenger_enabled on;
        rails_env production;
        location /blog {
          rewrite ^/blog(/.*)?$ http://blog.<redacted>.com/$1 permanent;
        }
        location ~* \.(js|css|jpg|jpeg|gif|png)$ {
          if (-f $request_filename) {
            expires max;
            break;
          }
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
          root html;
        }
      }

    Let me know if there's any additional info I can give to help diagnose the issue.
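
    One thing worth checking (a guess, not a verified fix for this exact setup): with passenger_enabled on, nginx is not acting as a reverse proxy, so the proxy_set_header lines above are effectively ignored, and a header spelled X_FORWARDED_PROTO with underscores is not what Rack/Rails looks for anyway. If the loop comes from the Rails app itself forcing SSL (config.force_ssl or similar), it never learns the request already arrived over HTTPS and keeps redirecting. With Passenger 3 the request environment can be set like this, as a sketch only:

      # inside the "listen 443" server block (sketch; directive is Passenger 3 syntax)
      passenger_set_cgi_param HTTP_X_FORWARDED_PROTO https;
      # the plain HTTP server block can then simply redirect everything:
      #   return 301 https://$host$request_uri;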

    Read the article

  • logrotate: neither rotate nor compress empty files

    - by Andrew Tobey
    I have just set up an (r)syslog server to receive the logs of various clients, which works fine. Only logrotate is still not behaving as intended. I want logrotate to create a new logfile for each day, but only to keep and store (i.e. compress) non-empty files. My logrotate config currently looks like this:

      # sample configuration for logrotate being a remote server for multiple clients
      /var/log/syslog {
          rotate 3
          daily
          missingok
          notifempty
          delaycompress
          compress
          dateext
          nomail
          postrotate
              reload rsyslog >/dev/null 2>&1 || true
          endscript
      }

      # local i.e. the system's very own logs: keep logs for a whole month
      /var/log/kern.log /var/log/kernel-info /var/log/auth.log /var/log/auth-info
      /var/log/cron.log /var/log/cron-info /var/log/daemon.log /var/log/daemon-info
      /var/log/mail.log /var/log/rsyslog /var/log/rsyslog-info {
          rotate 31
          daily
          missingok
          notifempty
          delaycompress
          compress
          dateext
          nomail
          sharedscripts
          postrotate
              reload rsyslog >/dev/null 2>&1 || true
          endscript
      }

      # received i.e. logs from the clients
      /var/log/path-to-logs/*/* {
          rotate 31
          daily
          missingok
          notifempty
          delaycompress
          compress
          dateext
          nomail
      }

    What I end up with are some sort of "summarized" files such as filename-datestampDay-Day and corresponding .gz files. What I also have are empty files, which are eventually zipped. So is the notifempty directive in fact responsible for these DayX-DayY files, for days on which really nothing happened? And what would be an efficient way to drop both the empty log files and their .gz files, so that I eventually only keep logs/compressed files that truly contain data?
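
    For the cleanup part, notifempty only prevents rotation of a file that is empty at rotation time; it does not remove empty files that were already rotated, and delaycompress means yesterday's (possibly empty) rotated file is only compressed on the next run. One pragmatic approach, sketched below under the assumption that a daily cron job is acceptable, is to sweep the client log tree for empty rotated files and for .gz archives so small they can only contain an empty log:

      # cleanup sketch, run daily from cron; the size threshold is an assumption
      # (a gzip of an empty file is roughly 20-30 bytes)
      find /var/log/path-to-logs -type f -name '*.gz' -size -50c -delete
      # drop empty rotated (not yet compressed) files older than a day, so the
      # currently active log files are left alone
      find /var/log/path-to-logs -type f ! -name '*.gz' -size 0 -mtime +1 -delete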

    Read the article

  • Log and debug/decrypt a Windows application's HTTPS traffic

    - by cweiske
    I've got a proprietary Windows-only application that uses HTTPS to speak with a (also proprietary, undocumented) web service. To ultimately be able to use the web service's functionality on my Linux machines, I want to reverse-engineer the web service API by analyzing the requests sent by the application. Now the question: how can I decrypt and log the HTTPS traffic? I know of several solutions which don't apply in my case:

    - Fiddler is a man-in-the-middle HTTPS proxy which I cannot use, since the application doesn't support proxies. Also, I do not (yet) know if it works with self-signed server certificates, which I doubt.
    - Wireshark is able to decrypt SSL streams if you have the server's private certificate, which I don't have.
    - Any browser extension is out, since the application is not a browser.

    If I remember correctly, there have been some trojans that capture online banking information by hooking into or replacing the Windows crypto API. Since the machine is mine, low-level changes are possible. Maybe there is a non-trojan (white-hat) network logging application out there which does the same? There is a Black Hat presentation with some details available to read. They refer to Microsoft Research's Detours for easy API hooking.
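
    Along the lines of that presentation, the API-hooking route would mean injecting a DLL that intercepts SChannel's EncryptMessage/DecryptMessage and writes out the buffers before encryption and after decryption. The following is only a rough sketch (untested, assumes the application uses SChannel via secur32.dll, and the log path is arbitrary) of what such a hook DLL built with Detours could look like:

      // sketch of a Detours-based hook DLL (C); assumes the target app uses
      // SChannel (secur32.dll) and that this DLL gets injected into that process
      #define SECURITY_WIN32
      #include <windows.h>
      #include <sspi.h>
      #include <stdio.h>
      #include "detours.h"

      static SECURITY_STATUS (SEC_ENTRY *Real_EncryptMessage)(PCtxtHandle, ULONG,
              PSecBufferDesc, ULONG) = EncryptMessage;

      static SECURITY_STATUS SEC_ENTRY My_EncryptMessage(PCtxtHandle ctx, ULONG qop,
              PSecBufferDesc msg, ULONG seq)
      {
          ULONG i;
          // dump the plaintext buffers before SChannel encrypts them
          for (i = 0; i < msg->cBuffers; i++) {
              if (msg->pBuffers[i].BufferType == SECBUFFER_DATA) {
                  FILE *f = fopen("C:\\https-plaintext.log", "ab");   // arbitrary path
                  if (f) {
                      fwrite(msg->pBuffers[i].pvBuffer, 1, msg->pBuffers[i].cbBuffer, f);
                      fclose(f);
                  }
              }
          }
          return Real_EncryptMessage(ctx, qop, msg, seq);
      }

      BOOL WINAPI DllMain(HINSTANCE inst, DWORD reason, LPVOID reserved)
      {
          if (reason == DLL_PROCESS_ATTACH) {
              DetourTransactionBegin();
              DetourUpdateThread(GetCurrentThread());
              DetourAttach((PVOID *)&Real_EncryptMessage, (PVOID)My_EncryptMessage);
              DetourTransactionCommit();
          } else if (reason == DLL_PROCESS_DETACH) {
              DetourTransactionBegin();
              DetourUpdateThread(GetCurrentThread());
              DetourDetach((PVOID *)&Real_EncryptMessage, (PVOID)My_EncryptMessage);
              DetourTransactionCommit();
          }
          return TRUE;
      }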

    Read the article

  • vSphere Promiscuous mode only receiving packets one way from network switch

    - by steve.lippert
    We have two network switches: a PoE switch (SwitchA) to power our phones and users' computers, and a non-PoE switch (SwitchB) for the rest of the network. Each switch is set up to do port mirroring to support our VoIP recording system. SwitchA mirrors specific ports if we need to record a user. SwitchB mirrors one port to monitor our work-at-home users (internet comes in from a managed router, to the switch, and back out to our firewall). These two port-mirroring feeds go into one VMware vSphere 4.1 server, which has four physical NICs in total. The other two NICs feed into an unmanaged switch for connecting to the rest of the network. Once inside the vSphere server, all of these ports go into a vSwitch, and then one of the guests (Windows 2008 R2) sniffs them and does its thing. Everything is working fine and dandy from SwitchB. But from SwitchA we only receive one side of the VoIP packets (going out to the phone, nothing coming in from the phone). Troubleshooting steps I have taken so far: I hooked up my laptop to the monitor port on SwitchB and I see both sides of the packets; I swapped which network interface is plugged into the monitor port on SwitchA. Because everything feeds into one vSwitch/vNetwork and both sides of the conversation arrive just fine from SwitchB, I believe everything is configured correctly on the vSphere server/guest. What could be causing one-way packets to arrive on my guest machine from only one interface, but not the other? Could a bad cable be causing the problems from SwitchB?

    Read the article

  • Is it possible to use rsync over sftp (without an ssh shell)?

    - by Tom Feiner
    Rsync over ssh works great every time. However, trying to rsync to a host which allows only sftp logins, but not ssh logins, produces the following error:

      rsync -av /source ssh user@remotehost:/target/
      protocol version mismatch -- is your shell clean?
      (see the rsync man page for an explanation)
      rsync error: protocol incompatibility (code 2) at compat.c(171) [sender=3.0.6]

    Here's the relevant section from the rsync man page:

      This message is usually caused by your startup scripts or remote shell
      facility producing unwanted garbage on the stream that rsync is using for
      its transport. The way to diagnose this problem is to run your remote
      shell like this:
          ssh remotehost /bin/true > out.dat
      then look at out.dat. If everything is working correctly then out.dat
      should be a zero length file. If you are getting the above error from
      rsync then you will probably find that out.dat contains some text or data.
      Look at the contents and try to work out what is producing it. The most
      common cause is incorrectly configured shell startup scripts (such as
      .cshrc or .profile) that contain output statements for non-interactive
      logins.

    Trying this on my system produced the following in out.dat:

      ssh-dummy-shell: Command not allowed.

    As I thought, the host is not allowing ssh logins. The following link shows that it is possible to accomplish this task using FUSE with sshfs; however, that is extremely slow and not fit for production use. Is there any chance of getting rsync over sftp to work?
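
    For reference, the FUSE route mentioned above looks roughly like this; it works against an sftp-only account because nothing has to execute on the remote side, but rsync then loses its delta-transfer advantage, which is why it is so slow (mount point and paths are placeholders):

      mkdir -p /mnt/remotehost
      sshfs user@remotehost:/target /mnt/remotehost -o reconnect
      rsync -av /source/ /mnt/remotehost/
      fusermount -u /mnt/remotehost   # unmount when done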

    Read the article

  • Fetch new mails (also from subfolders) from another IMAP server as new mail in Postfix

    - by Tobi
    Hi everyone. I have installed Postfix on a server, with aliases and domains coming from a MySQL database. It is configured to forward some addresses to other mail accounts, and it also delivers some mail into local mailboxes that are then accessed over a Dovecot IMAP server. For this example, let there be two users: [email protected], a user whose mail just gets forwarded to, let's say, [email protected]; and [email protected], a user who accesses their mail from the local IMAP server. Now, I want to fetch some mail from another mail server and handle it as if it had been sent to a user of my mail server. Let's say these correlations exist: [email protected] has two external accounts, [email protected] and [email protected]; [email protected] also has one external account, [email protected]. The problem is that the new mail on that other mail server is not always in the inbox; it might be in subdirectories such as mailinglists/all or mailinglists/it, but also in mailinglists/some-other-department, which is not interesting and should not be delivered. I already found a program called fetchmail, but I cannot find out how to fetch subdirectories or decide which subdirectories are fetched.
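
    fetchmail can be pointed at specific IMAP folders with the folder keyword, so only the interesting subfolders get pulled and handed to Postfix for normal alias processing. A sketch of a .fetchmailrc along those lines; host names, credentials, folder names and the local address are all placeholders, and the exact list syntax for folder is worth double-checking against the fetchmail man page:

      # ~/.fetchmailrc (sketch) -- every name below is a placeholder
      poll imap.otherserver.example proto IMAP
        user "remote-info" pass "secret"
        is "user@local.example" here        # delivered via SMTP to localhost,
        smtphost localhost                  # so Postfix aliases/forwarding apply
        folder "INBOX", "mailinglists/all", "mailinglists/it"
        ssl
        keep                                # leave the originals on the remote server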

    Read the article

  • Running a batch file through a service

    - by wallz
    I'm trying to schedule a batch file to run through a third-party application; however, the output file doesn't get created in the target directory. If I run the .BAT file from the command line, it works and the file gets created. Using the Windows Task Scheduler also succeeds. Basically, the third-party software schedules the .BAT file, and it shows success within its own user interface. The difference between running it from the command prompt and from the software is that the software uses its Windows service to launch the batch. The third-party software shows success because it was able to call the .BAT file, but it has no control over the other EXEs being called within the script. I am able to run a simple .BAT file in the third-party software, for example a copy command. The .BAT I'm having problems with calls a compiled EXE which launches Excel to create a file in a location. The batch file calls something.exe, which then calls Excel.exe:

      C:\something.exe -o D:\filename.xlsm C:\filename.xlsm refresh_pivot

    Do you think it's a permissions issue? I used Process Monitor to check for any Access Denied errors, but everything seems to be working according to the trace. It worked on a non-64-bit OS; I'm currently using Win2008 64-bit.
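
    One frequent culprit when Excel automation works interactively but fails under a service on 64-bit Windows is that the profile the service runs under has no Desktop folder, which Office expects to exist. A commonly cited (and officially unsupported) workaround, sketched here on the assumption that this is what is biting you, is to create those folders for the system profile:

      :: run once in an elevated command prompt (sketch; unsupported workaround)
      mkdir C:\Windows\SysWOW64\config\systemprofile\Desktop
      mkdir C:\Windows\System32\config\systemprofile\Desktop
      :: if the service runs under a dedicated account instead of LocalSystem, log
      :: in as that account once and start Excel so its profile gets initialized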

    Read the article

  • Computer often does not boot

    - by tam
    I've run into an issue with my computer: it no longer reaches POST, but simply powers on for a fraction of a second and powers off. This doesn't happen every time, though; sometimes it boots just normally and works as it should, with no sign of insufficient power or anything like that. But as soon as I turn it off, I cannot turn it back on, and then at some random point it just powers up again and resumes normal operation. If I disconnect the 8-pin ATX connector from the motherboard, it powers up, with fans and disks spinning normally, until I power it off again. So the problem only happens when the ATX connector is attached, which seems odd; I always used to see this kind of error when the ATX connector was not connected, but here it's the exact opposite. It also does not emit any sound on the buzzer, except the normal beep when it powers up normally. I have already tried: removing the graphics card; removing one and/or all RAM sticks; disconnecting everything non-essential, even hard drives; clearing CMOS. I have not yet tried removing all components and booting everything outside of the case, because I did not have the time to disassemble and bleed the water loop. However, I can confirm that nothing is stuck underneath the motherboard, nor are any of the brass standoffs touching the board where they should not. Specs: Gigabyte GA-970A-UD3, AMD FX6300, ATI HD7850. I think this should be enough for this issue.

    Read the article

  • Windows 7 can't find Ubuntu computer by hostname

    - by endolith
    I got a new Windows 7 machine and was using VNC, SSH, etc. to connect to my Ubuntu machine, and previously it worked fine connecting to the Ubuntu computer's hostname. Now it doesn't work if I use the machine's hostname, but it does if I use the local IP or the DynDNS name. I can also access it from my Android phone over SSH using the local hostname. If I try to connect with SSH to the hostname, it says "Host does not exist". VNC says "Failed to get server address". NX says "no address associated with name", and I don't see it in Windows' "Network" folder. I've rebooted everything. I've turned off Windows Firewall. It was working fine a few days ago, but now it's not. How do I figure out what's blocking it? Aha: it probably has something to do with Samba. I reset the Samba configuration the other day, and apparently this can affect it. http://ubuntu-virginia.ubuntuforums.org/showthread.php?t=1558925 I tried commenting out "encrypt passwords = No" as described there, but it still doesn't work.
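
    Since Windows resolves LAN hostnames of Linux machines via NetBIOS, and the NetBIOS name of the Ubuntu box is announced by Samba's nmbd, a reset smb.conf is a plausible cause. A sketch of the relevant settings, with the hostname and workgroup as placeholders to adjust:

      # /etc/samba/smb.conf (sketch) -- "ubuntu-box" and WORKGROUP are placeholders
      [global]
          netbios name = ubuntu-box
          workgroup = WORKGROUP              ; must match the Windows side
          name resolve order = bcast host lmhosts wins

      # then restart the NetBIOS name daemon, e.g. on Ubuntu of that era:
      #   sudo service nmbd restart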

    Read the article

  • nginx: Rewrite URL with trailing slash

    - by Bryan
    I have a specialized set of rewrite rules to accommodate a multi-site CMS setup. I am trying to have nginx force a trailing slash on the request URL. I would like it to redirect requests for domain.com/some-random-article to domain.com/some-random-article/. I know there are semantic considerations with this, but I would like to do it for SEO purposes. Here is my current server config:

      server {
        listen 80;
        server_name domain.com mirror.domain.com;
        root /rails_apps/master/public;
        passenger_enabled on;

        # Redirect from www to non-www
        if ($host = 'domain.com' ) {
          rewrite ^/(.*)$ http://www.domain.com/$1 permanent;
        }

        location /assets/ {
          expires 1y;
          rewrite ^/assets/(.*)$ /assets/$http_host/$1 break;
        }

        # / -> index.html
        if (-f $document_root/cache/$host$uri/index.html) {
          rewrite (.*) /cache/$host$1/index.html break;
        }

        # /about -> /about.html
        if (-f $document_root/cache/$host$uri.html) {
          rewrite (.*) /cache/$host$1.html break;
        }

        # other files
        if (-f $document_root/cache/$host$uri) {
          rewrite (.*) /cache/$host$1 break;
        }
      }

    How would I modify this to add the trailing slash? I would assume there has to be a check for the slash so that you don't end up with domain.com/some-random-article//
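
    One common way to do this, sketched here rather than tested against this exact config, is a single rewrite near the top of the server block: it only matches URIs that do not already end in a slash and that contain no dot, so static files under /assets/ are untouched and a double slash can't occur (the last character matched must not be a slash):

      # sketch: place before the cache-checking rules
      rewrite ^([^.]*[^/])$ $1/ permanent;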

    Read the article

  • Openfire: Granular alerts

    - by R.S.
    Our organization has had an Openfire server up and running for about a year now. So far we have used it for messaging in the IT department and for alerts to all users. We hit a snag this week when one system went down and several notifications were sent out to inform users of progress. Some of the users were radiologists who do not use the particular system in question, and they found the messages more of an annoyance than informative. Since then I have been tasked with finding a more granular system for alerts. I am confident that Openfire can handle this, and I have just about settled on a way of getting it to work. My idea is to create a half dozen or so users, for example: Staff, Doctor, Assistant and Supervisor. Using Spark as our messenger has worked great so far, so I would like to stick with that if possible. With that in mind, under the advanced login features the resource name can be changed to something unique, and non-unique users can log in under the same account; however, when a message is sent to one of these users, the delivery is inconsistent. Currently I have 4 users under the Assistant account and it seems only one of them receives the messages. Is this scenario even possible? I am avoiding working with the groups in Openfire because that function is atrocious. I could possibly integrate the system with our Active Directory, but I don't think that will get us to a workable solution any quicker or more efficiently.

    Read the article

  • How to create a video file from a DVD

    - by pkaeding
    I want to take the DVD video slideshow that I got from my wedding photographer and put the video on YouTube (I have permission; I made sure to get the non-exclusive rights to use anything she created while working for me, in any way I wanted, when we worked out the terms beforehand). Can anyone suggest the best way to get a video file that can be uploaded to YouTube from this DVD? The source DVD is not encrypted, so I don't need to worry about that. I am using a Mac, so Mac-friendly suggestions are preferred. Thanks! EDIT: So, I tried HandBrake, and it looks very promising. However, when I select the title in HandBrake, it says it is about 12 minutes long. The resulting file ends up being only about 4 minutes long, and the music is messed up. It seems to be going normally through the photo transitions, but removing the time that the slideshow stays on each photo. I believe the DVD was created using iDVD. Does it do anything weird to save space, such as varying the framerate? Are there special settings in HandBrake I need to use?
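
    If HandBrake keeps misreading the still-image title (iDVD slideshows use long still frames, which some rippers compress away), an alternative worth trying is to concatenate the title's VOB files and transcode them directly with ffmpeg. A sketch, assuming the slideshow is title 1 and the disc mounts as /Volumes/WEDDING; paths and quality settings are placeholders:

      cat /Volumes/WEDDING/VIDEO_TS/VTS_01_*.VOB > /tmp/slideshow.vob
      ffmpeg -i /tmp/slideshow.vob -c:v libx264 -preset slow -crf 20 \
             -c:a aac -b:a 192k slideshow.mp4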

    Read the article

  • Setting the PATH for Git (not for me)

    - by Iain
    Hi, I'm running OS X 10.6.5 with Git 1.7.1. I have Git installed in a non-standard location (though that really should be the standard on a Mac ;-) in /Library/Frameworks/Git.framework. My own PATH is set fine and git works fine, until... I set up a pre-commit hook with a Ruby script:

      $ git commit -m "added some Yard documentation"
      .git/hooks/pre-commit: line 1: #!/usr/bin/env: No such file or directory

    The pre-commit.sample runs OK, so it appears that Git can't find /usr/bin/env, or much else, as I've tried shebanging it directly to ruby etc. Just /bin/sh is OK. So, where does Git get its PATH? Because it's not using mine, or this wouldn't be happening. And more to the point, how do I get it to see /usr/bin/env? I've tested the Ruby script already; it works. Just to add:

      $ cat /etc/paths
      /usr/bin
      /bin
      /usr/sbin
      /sbin
      /usr/local/bin
      $ cat /etc/paths.d/git
      /Library/Frameworks/Git.framework/Programs

    The first few lines of the Ruby script (which runs via ./pre-commit or ruby pre-commit):

      #!/usr/bin/env ruby -wKU
      class String
        def expand_path
          File.expand_path self
        end
        def parent_dir
          File.dirname self.expand_path
        end
      end
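
    An error of the form ".git/hooks/pre-commit: line 1: #!/usr/bin/env: No such file or directory" usually means the shell could not execute the interpreter line itself (a stray BOM, a CR line ending or similar in the shebang will do it) rather than that Ruby is missing from the PATH. A workaround that sidesteps both problems is to make the hook a plain /bin/sh wrapper that sets the PATH it wants and then execs the Ruby script; this is only a sketch, and the file name pre-commit.rb and the paths are assumptions:

      #!/bin/sh
      # .git/hooks/pre-commit (sketch) -- wrapper that controls PATH explicitly
      PATH=/usr/bin:/bin:/usr/local/bin:/Library/Frameworks/Git.framework/Programs
      export PATH
      exec ruby "$(git rev-parse --git-dir)/hooks/pre-commit.rb" "$@"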

    Read the article

  • OS X won't boot up unless I hold down option key

    - by Gazzer
    I have a strange issue on an early 2008 Mac Pro running OS X 10.6:

    - if I restart the computer, it restarts normally;
    - if I shut down and boot, it stops at the grey screen just before the boot process;
    - if I shut down and boot but hold down the option key, I can select the boot disk and all is good.

    I've just cloned the disk, and the same thing happens. The disk is a SAMSUNG HD154UI. The disk is partitioned (the second partition holds a clone of the Snow Leopard install disc). One weird thing on the original disk was that one of the partitions said 'EFI Boot' in a non-aliased font, rather than the name of the disk, when the disks are listed after holding down option. Solution: it seems that there was a problem with the disk. Part of the difficulty in finding the solution was that you need to remove the disk from the computer completely. For example, a good disk in Bay 3 wouldn't boot up if the bad disk was in Bay 2. So for ages I thought the problem was hardware related in Bay 3. So if you think you have a dodgy disk, remove it totally if you are testing the hardware with a 'clean' disk.

    Read the article

  • Can Octopussy accept messages in formats other than syslog?

    - by Lee Lowder
    I am currently exploring different options for a centralized log server. We use both Linux (Ubuntu 10.04 / 12.04, LTS for both) and Windows, though for this specific issue only Linux is relevant. I like the interface that Octopussy has and its feature list, but I am hesitant for a few reasons. One of my biggest concerns is that it seems to be syslog-only. The end goal is to have a centralized place for our devs and admins to search through the logs generated by Apache, Tomcat and 70+ web apps spread out over a cluster, for both our prod and test environments. While I did see that Octopussy has support for plugins, I haven't been able to find any sort of plugin repo or in-depth guides as to what can be done with them. Does anyone know if plugins can be used to allow Octopussy to accept non-syslog messages? Specifically log4j-type log messages that may include multi-line stack traces and such. Also, is there a user community for this software, such as a mailing list or forum? I've been unable to locate any so far. Thank you.
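
    One route that avoids the plugin question entirely, sketched below, is to have the Java applications emit syslog themselves via log4j's SyslogAppender, so a syslog-only collector can ingest them unchanged; note that multi-line stack traces will still arrive as separate syslog records unless the collector reassembles them. The host name and facility are placeholders, and property casing should be checked against the log4j 1.x docs:

      # log4j.properties (sketch) -- loghost.example.com and LOCAL0 are placeholders
      log4j.rootLogger=INFO, SYSLOG
      log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
      log4j.appender.SYSLOG.SyslogHost=loghost.example.com
      log4j.appender.SYSLOG.Facility=LOCAL0
      log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
      log4j.appender.SYSLOG.layout.ConversionPattern=%d{ISO8601} %-5p [%c] %m%n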

    Read the article

  • Steps to deploy a custom routing protocol

    - by user134589
    I'm a Ph.D. student and I'm researching a service-centric networking architecture with resource allocation on a large scale. What I'm looking to do is extend an existing routing protocol like OSPF with extra fields and some new message types that I need for communication between nodes. I want to manipulate the cost of a network link, and I want paths to be calculated as in OSPF v2/v3, but using the cost that my algorithms have calculated.

    What I have: the source code of OSPF from Quagga. I am assuming I can edit this code however I want, including packet structures and creating new types. Yes, I am aware it won't be easy, but this is a 6-year research project and I am eager to develop something new, to move forward.

    What I need: I would like to know how I can deploy the edited OSPF source files I have (written in C) on any type of server. I have a large testbed environment available with hundreds of virtual nodes and pretty much any OS out there. So if I want to test my extended protocol, how do I make all the nodes in a network use it to communicate? I do not understand what parts of the kernel I need to edit here. I have been searching for days now and I am unable to find how to deploy a non-existing routing protocol without the use of an application-level framework. If somebody could push me in the right direction that'd be awesome. Note: I need this to be a routing protocol and not an application, since I want it to work on top of the network layer for performance reasons. Thanks!
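
    For what it's worth, Quagga's ospfd is an ordinary userspace daemon, so no kernel changes are needed: the modified ospfd computes routes and hands them to zebra, which installs them into the kernel forwarding table over netlink. Deploying the extended protocol on each testbed node therefore comes down to building your tree and starting zebra plus your ospfd everywhere, roughly like this (prefix, config locations and verification commands are assumptions):

      # build and install the modified Quagga into its own prefix
      ./configure --prefix=/usr/local/quagga-custom
      make && sudo make install

      # start the daemons on every node (each needs a minimal config file)
      sudo /usr/local/quagga-custom/sbin/zebra -d -f /etc/quagga/zebra.conf
      sudo /usr/local/quagga-custom/sbin/ospfd -d -f /etc/quagga/ospfd.conf

      # verify adjacencies and that the computed routes reached the kernel
      vtysh -c 'show ip ospf neighbor'
      ip route show proto zebra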

    Read the article

  • GIT Website Deployment

    - by Brian
    I am attempting to set up Git to deploy my project to different locations based on the branch (I think this is what I want to do, anyway). My current setup is this: a local dev machine running NetBeans to make changes, and a remote server hosting the Git repositories (the same server running Apache); two subsites exist, test.FQDN.com and live.FQDN.com. What I would like to do is have one Git project (MyProject) and create a new feature branch. Any commits to the new feature branch would be pushed to test.FQDN.com. Once the features have been tested and merged into the master branch, the result would be pushed to live.FQDN.com. I have looked at Git's post-receive hooks and was able to use the "git checkout -f" command to deploy on the test.FQDN.com site; however, that only checks out the master branch and not the new feature branch. I do not have any funding to use a third party to make this work and would prefer to stay within Git, but I have full root access to the web server if there is a package to install which would help control this. Any suggestions would be great!
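
    A single post-receive hook on the bare repository can do this without any extra packages, by looking at which ref was pushed and checking it out into the matching site's directory. A sketch, where the web-root paths and the branch name "feature" are assumptions to adjust:

      #!/bin/sh
      # hooks/post-receive (sketch) -- make it executable with chmod +x
      while read oldrev newrev ref; do
          case "$ref" in
              refs/heads/master)
                  GIT_WORK_TREE=/var/www/live.FQDN.com git checkout -f master ;;
              refs/heads/feature)
                  GIT_WORK_TREE=/var/www/test.FQDN.com git checkout -f feature ;;
          esac
      done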

    Read the article

  • Site name questions [closed]

    - by avenas8808
    I'm creating a website on German cars. There are no issues with it, since it's basically a PHP site, and let's say for the sake of argument there are no server issues. However, I plan to call it "autohaus" (well, for the sake of this scenario anyway; it's not its actual name). The site is not for commercial gain; it's a fansite. Legally, can I use a name for a website if it's a commonly used word, which "autohaus" appears to be? There are other sites called "Autohaus", but could someone use a similar name for non-commercial, informational/educational use? Basically, what's the copyright law for such a situation? Currently my site is only on localhost, but what if I did want to release it to the web? Obviously the legal issues have to be taken into account before the site goes live, but there's no Data Protection Act etc. involvement, since it's not selling anything and no one is buying anything, and the pictures in it are being used educationally (but that's for another question entirely). [NOTE: This question is about trademarks/names etc., not domains]

    Read the article

  • How to reinstall bootloader after migration to SSD

    - by hijarian
    I must say, it was difficult to name this question. Basically, I need to properly reinstall the bootloader on my system, because I already have working system disks for my OSes. The long story is this: I had a large, slow HDD with a Windows 7 & Debian Wheezy dual-boot on it, perfectly bootable. Then I ordered an SSD drive and prepared my system partitions to fit onto the much smaller SSD. I wanted the following scheme:

    - 128 GB Windows
    - 24 GB / on Debian
    - 86 GB /home on Debian

    (The strange size for /home is because there's no such thing as a true 256 GB disk drive.) So, I prepared such partitions on my initial HDD, installed the new SSD, then loaded a GParted live USB (I can't remember now what it was really named) and simply copy-pasted the partitions from the HDD to the SSD. So now I have the following partitions across the physical disks:

    SSD:
    - 128 GB copy of the original Windows partition
    - 24 GB copy of, presumably, the Debian /
    - 86 GB copy of, presumably, the Debian /home

    HDD:
    - 128 GB Windows
    - 24 GB / on Debian
    - 86 GB /home on Debian
    - ... several other partitions with non-system data ...

    The behavior of the system right after the Ctrl+C, Ctrl+V in GParted was as follows: no GRUB, and the system boots right into the Windows on the HDD. In the BIOS, the settings are to boot from the SSD first. I managed to create a Debian testing installation USB, loaded it in rescue mode, found that it identified my SSD as /dev/sda, and installed GRUB to /dev/sda. Now my system loads GRUB, which lists both Windows and Debian, but from the HDD. So I am now back in the initial position. Please, how should I set up GRUB so it loads the OSes correctly from the SSD? Should I fire up my Debian, fiddle with the GRUB config and reinstall it again to the same place (on the SSD)?
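
    Since GRUB was apparently reinstalled while the HDD's Debian was the running root, one way to make the SSD self-contained is to install GRUB from inside the SSD's own Debian copy (chrooted from a live/rescue system), so that both the MBR boot code and the generated grub.cfg refer to the SSD partitions. A sketch, assuming the SSD is /dev/sda and its Debian root copy is /dev/sda2; adjust the device names to what the rescue system actually reports:

      sudo mount /dev/sda2 /mnt
      for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
      sudo chroot /mnt grub-install /dev/sda   # boot code into the SSD's MBR
      sudo chroot /mnt update-grub             # grub.cfg built from the SSD's view
      # afterwards check /mnt/etc/fstab; cloned partitions normally keep their
      # UUIDs (compare with blkid), so usually nothing else needs changing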

    Read the article

  • Pushing Windows Store apps silently

    - by seagull
    As part of a requirement I have been given, I am looking for the ability to push apps from the Windows Store (.appx) to remote users. The framework surrounding the data transmission is sorted; what I need to know is whether there is a script that can be sent out, along with a URI or the package proper, to facilitate installation of the Windows Store app on the user's side. I am well aware that numerous guides exist from Microsoft on pushing out line-of-business (LOB, a.k.a. enterprise) applications, that is, apps that have been developed in-house and are not for consumption through the Windows Store. This is inappropriate for my requirement, however; the customer wants their clients to receive apps that currently appear on the Windows Store, and for them to be installed in a silent manner. I've seen this guide: http://blogs.technet.com/b/keithmayer/archive/2013/02/25/step-by-step-deploying-windows-8-apps-with-system-center-2012-service-pack-1.aspx, which details doing exactly this, but it is only applicable to machines administered by a system running Windows Server 2012 R2 with the System Center 2012 bundle installed. The systems I target are considerably more decentralised than this, making this guide inappropriate. I have a hunch that Microsoft has deliberately designed the Windows Store to be this way, but I figured I ought to ask around before I resign myself to the requirement. Much obliged.
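
    For completeness, scripted installation is only straightforward for packages you actually have as an .appx file and are entitled to sideload (signed, with sideloading enabled on the clients); it does not pull apps directly from the Store, which is very likely the deliberate restriction hinted at above. A sketch of what the client-side step would look like in that sideloading case (the share and package names are placeholders):

      # PowerShell sketch -- only valid for sideloadable .appx packages,
      # not for packages that exist solely in the Windows Store catalog
      Add-AppxPackage -Path "\\fileserver\deploy\ContosoApp.appx"

      # check what is already installed for the current user
      Get-AppxPackage -Name *Contoso*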

    Read the article

  • nginx giving a 404 when using set in an if-block

    - by ba
    I've just started using nginx and I'm now trying to make it play nicely with the WordPress plugin WP Super Cache, which adds static files of my blog posts. To serve the static file I need to make sure that certain cookies aren't set, that it's not a POST request, and that the cached/static file exists. I found this guide and it seems like a good fit. But I've noticed that as soon as I try to set something inside an if, my site starts giving 404s on a URL that isn't rewritten. The location block of the configuration:

      location /blog {
        index index.php;
        set $supercache_file '';
        set $supercache_ok 1;
        if ($request_method = POST) {
          set $supercache_ok 0;
        }
        if ($http_cookie ~* "(comment_author_|wordpress|wp-postpass_)") {
          set $supercache_ok '0';
        }
        if ($supercache_ok = '1') {
          set $supercache_file '$document_root/blog/wp-content/cache/supercache/$http_host/$1/index.html.gz';
        }
        if (-f $supercache_file) {
          rewrite ^(.*)$ $supercache_file break;
        }
        try_files $uri $uri/ @wordpress;
      }

    The above doesn't work, and if I remove all the ifs above and add

      if ($http_host = 'mydomain.tld') {
        set $supercache_ok = 1;
      }

    then I get the exact same message in the error log, namely:

      2010/05/12 19:53:39 [error] 15977#0: *84 "/home/ba/www/domain.tld/blog/2010/05/blogpost/index.php" is not found (2: No such file or directory), client: <ip>, server: domain.tld, request: "GET /blog/2010/05/blogpost/ HTTP/1.1", host: "domain.tld", referrer: "http://domain.tld/blog/"

    Remove the if and everything works as it should. I'm stymied, no idea at all where I should start searching. =/

      ba@cell: ~> nginx -v
      nginx version: nginx/0.7.65
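
    Two observations that may help, offered as guesses rather than a verified fix: $1 in that set line has nothing to capture (the location is a prefix match, not a regex), and chaining set/if/rewrite like this is exactly the pattern the nginx "if is evil" page warns will produce surprising request handling. A commonly used alternative for WP Super Cache is to let try_files look for the cached page directly and fall back to WordPress otherwise; a rough sketch, with the cache path assumed to be the plugin's default, the cookie/POST exclusions still to be layered on top, and gzip_static needed if the pre-gzipped copies should be served:

      location /blog {
          index index.php;
          try_files /blog/wp-content/cache/supercache/$http_host$uri/index.html
                    $uri $uri/ @wordpress;
      }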

    Read the article

  • How to connect via SSH to a Linux Mint system that is connected via OpenVPN

    - by Hilyin
    Is there a way to keep the SSH port from being sent through the VPN, so that when my computer is connected to a VPN it can still be reached via SSH on its non-VPN IP? I am using Linux Mint 13. Thank you for your help! These are the instructions I followed to set up the VPN:

    1. Open Terminal.
    2. Type: sudo apt-get install network-manager-openvpn and press Y to continue.
    3. Type: sudo restart network-manager
    4. Download the BTGuard certificate (CA) by typing: sudo wget -O /etc/openvpn/btguard.ca.crt http://btguard.com/btguard.ca.crt
    5. Click on the Network Manager icon, expand VPN Connections, and choose Configure VPN.
    6. A Network Connections window will appear with the VPN tab open. Click Add.
    7. A "Choose A VPN Connection Type" window will open. Select OpenVPN in the drop-down menu and click Create.
    8. In the Editing VPN connection window, enter the following: Connection name: BTGuard VPN; Gateway: vpn.btguard.com (optionally, manually select your server location by using ca.vpn.btguard.com for Canada or eu.vpn.btguard.com for Germany); Type: select Password; User name: username; Password: password; CA Certificate: browse and select the file /etc/openvpn/btguard.ca.crt.
    9. Click Advanced... near the bottom of the window. Under the General tab, check the box next to "Use a TCP connection".
    10. Click OK, then click Apply. Setup complete!

    How to connect: click on the Network Manager icon in the panel bar, click on VPN Connections, and select BTGuard VPN. The Network Manager icon will begin spinning. You may be prompted to enter a password; if so, this is your system account keychain password, NOT your BTGuard password. Once connected, the Network Manager icon will have a lock next to it, indicating you are browsing securely with BTGuard.
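
    The usual reason SSH stops working once the tunnel is up is that the VPN replaces the default route, so replies to inbound SSH connections leave via the tunnel instead of the LAN. A small policy-routing rule that sends everything sourced from the machine's real LAN address back out the physical interface avoids that; a sketch, where the addresses and interface name are assumptions to replace with your own:

      # 192.168.1.50 = this machine's LAN address, 192.168.1.1 = LAN gateway,
      # eth0 = physical NIC -- all placeholders
      sudo ip rule add from 192.168.1.50 table 128
      sudo ip route add table 128 default via 192.168.1.1 dev eth0
      # now inbound SSH to 192.168.1.50 is answered over eth0 even while the
      # VPN holds the default route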

    Read the article

  • Can't access shared folder of Win8/Win7 machine - Error code: 0x80004005. Unspecified Error

    - by ruslan
    It's ironic that I, a software engineer with 12 years of experience, continue to have this problem from one version of Windows to another without being able to achieve a consistent result (sometimes it works). Here it goes again. I have a machine with Windows 8 Consumer Preview. It doesn't really matter that it's Win8; I had the same issue with Win7 before. On the given machine I created a local admin user with the same name and password I have on a second PC (the machine I'm typing this from now). I have two questions for you:

    1. Why am I not able to access the C$ share of the Win8 machine from another Win7 machine? I get an error that C$ doesn't exist, even though it does.
    2. Why am I not able to access a share named "test" on the Win8 machine, for which permissions are set to Full for Everyone? When I attempt to access it from the Win7 machine, I'm asked to enter a username and password. After entering administrator credentials I get the error "Windows cannot access \\192.168.1.123\test. Error code: 0x80004005. Unspecified Error".

    Windows Firewall is disabled on the Win8 machine for both private and public networks. The guest account is disabled. The built-in admin account is enabled. The machine is pingable from other machines.
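
    Regarding the C$ part specifically: on Windows Vista and later, remote access to administrative shares with a local (non-domain, non-built-in) administrator account is blocked by UAC's remote restrictions, independent of the firewall. The commonly cited override is the LocalAccountTokenFilterPolicy value; a sketch of applying it on the Win8 machine (this addresses C$ and remote admin-token access, not necessarily the separate "test" share, whose NTFS permissions also have to allow the connecting account):

      :: run in an elevated command prompt on the Win8 box, then reboot
      reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" ^
          /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f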

    Read the article
