Search Results

Search found 12569 results on 503 pages for 'root plist'.


  • Reasons why ports below 1024 cannot be opened

    - by Sitoplex
    I'm root on a machine, but I don't know how it was configured. I'm trying to make sshd listen on a port other than 22, but it doesn't work. I edited /etc/ssh/sshd_config and added a second Port line in addition to Port 22, but it only works when that second port is a number above 1024. Why is that, and how can I find the reason? Details: I'm restarting the daemon with /etc/init.d/sshd restart as root. "netstat -apn" does not show the port being held open by any other service (in any case I tried several different ports, and only those above 1024 work). "telnet localhost port" also shows that the service only responds on ports above 1024. All iptables tables are empty. Thanks!
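    A short checklist that often narrows this down, written as a sketch for a Red Hat-style system with SELinux (the port number 2222 is only an example, not taken from the question):

        grep -n '^Port' /etc/ssh/sshd_config           # confirm both Port directives are present
        getenforce                                      # "Enforcing" means SELinux policy applies to sshd
        semanage port -l | grep ssh_port_t              # ports sshd is allowed to bind under that policy
        semanage port -a -t ssh_port_t -p tcp 2222      # permit the extra port (example value)
        /etc/init.d/sshd restart && netstat -lntp | grep sshd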

    Read the article

  • SSH running slow on cygwin

    - by Robb
    I have a Windows XP box running Cygwin and the SSH service. I'd like to use PuTTY to connect to it from other computers on the local network. PuTTY works fine and I actually get a relatively speedy login prompt. But any time I do an 'ls' on the root directory ('/') it typically doesn't complete, as if the command has hung. Other PuTTY sessions suffer as well, no matter what I'm doing (even just an 'ls' on my home directory might take a while or not finish). It is as if a deadlock occurred somewhere in the ssh/Cygwin stack. The root directory does contain the 'cygdrive' folder, which exposes the drives of the host computer. Could this be causing the slowdown?
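    One way to test that theory from inside a slow session (a generic sketch, nothing here is specific to this machine):

        mount                        # how Windows drives are mapped into the Cygwin tree
        time ls /                    # hangs if enumerating / touches an unreachable mapped share
        time ls -d /cygdrive/*/      # try each drive letter separately to isolate a slow one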

    Read the article

  • cron+pam heavily spamming my logs

    - by Lo'oris
    Twice every minute I get this in auth.log:
    May 12 15:21:01 ruptai CRON[25303]: pam_unix(cron:session): session opened for user root by (uid=0)
    May 12 15:21:01 ruptai CRON[25303]: pam_unix(cron:session): session closed for user root
    It never stops: twice a minute, every minute of every day. I have no idea what it is; I would just like to stop it from pointlessly logging this stuff. It has been going on for ages, so I can't recall when it started. The OS is Debian stable. By the way, I've found similar questions on Google but no answers.
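    One approach often suggested for Debian, sketched here as an assumption rather than a verified fix: tell PAM to skip session logging for the cron service only, by adding a pam_succeed_if line just above the pam_unix session line in /etc/pam.d/common-session-noninteractive (back the file up first):

        # skip the next module (pam_unix) when the requesting service is cron
        session [success=1 default=ignore] pam_succeed_if.so service in cron quiet use_uid
        session required pam_unix.so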

    Read the article

  • How can I set clean URLs (enable rewrite) if I don't have a domain?

    - by Patrick
    In order to enable clean URLs in Drupal, I added the lines below to the lighttpd configuration file. However, I'm now working on a local server and I don't have a domain available, so I need to reach the site at http://local.ip/Sites/mywebsite. I've tried replacing ["host"] with ["socket"] and replacing the domain with the IP and subfolder (see the address above), but without success. How can I set up the configuration file to get clean URLs even though I don't have a domain? Thanks.
    $HTTP["host"] =~ "(^|\.)mywebsite\.com" {
        server.document-root = "/var/www/sites/mywebsite"
        server.errorlog = "/var/log/lighttpd/mywebsite/error.log"
        server.name = "mywebsite.com"
        accesslog.filename = "/var/log/lighttpd/mywebsite/access.log"
        include_shell "./drupal-lua-conf.sh mywebsite.com"
        url.access-deny += ( "~", ".inc", ".engine", ".install", ".info", ".module", ".sh", "sql", ".theme", ".tpl.php", ".xtmpl", "Entries", "Repository", "Root" )
        # "Fix" for Drupal SA-2006-006, requires lighttpd 1.4.13 or above
        # Only serve .php files of the drupal base directory
        $HTTP["url"] =~ "^/.*/.*\.php$" {
            fastcgi.server = ()
            url.access-deny = ("")
        }
        magnet.attract-physical-path-to = ("/etc/lighttpd/drupal-lua-scripts/p-.lua")
    }
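    For what it's worth, lighttpd's $HTTP["host"] conditional matches whatever Host header the client sends, which is the bare IP address when you browse by IP. A minimal, untested sketch along those lines (192.168.1.10 is a placeholder for the server's real address, and the /Sites/mywebsite subfolder still has to be handled, for example by keeping the document root at the parent directory):

        $HTTP["host"] == "192.168.1.10" {
            server.document-root = "/var/www/sites"
            # ... same Drupal access-deny and magnet directives as above ...
        }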

    Read the article

  • Soap client call has slow performance

    - by Alon_A
    The OS is CentOS 6.2 with PHP 5.3.15. We have a Facebook application that uses PHP SOAP web services. We sometimes experience slow performance when connecting to these services, but we can't understand exactly what is causing the problem. We've tried to analyse the behaviour using the profiling tool KCachegrind. Here is a call graph from the index.php page, which took 21 seconds to load. You can clearly see that the call to the SOAP client is the bottleneck. I've also noticed that exactly before the page finishes loading, this file is created in our server's /tmp folder: wsdl-apache-d1032d85dfd16c0d91a6b70facc70e43. These are the permissions of /tmp: drwxrwxrwt 6 root root 40960 Aug 30 10:39 tmp. I know it's not the most specific question, but if anyone has had similar performance issues with the SOAP client, we would love some ideas about what can cause this kind of problem, what we can do to investigate more accurately, or how to overcome it. Thanks.
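    The wsdl-apache-* file in /tmp looks like the PHP SOAP extension's WSDL cache entry, so one thing worth checking is whether the WSDL is being re-downloaded on every request. A hedged sketch (the ini values shown are illustrative, not a prescription):

        php -i | grep -i 'soap.wsdl_cache'     # see the current caching settings
        # in php.ini, caching the WSDL avoids a remote fetch per SoapClient construction:
        #   soap.wsdl_cache_enabled = 1
        #   soap.wsdl_cache_dir     = /tmp
        #   soap.wsdl_cache_ttl     = 86400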

    Read the article

  • Ubuntu 10.04 Apache Configuration for Websites

    - by completenoob
    Looking at a basic Ubuntu 10.04 server setup, Apache points to /var/www as the place it looks for files to serve up. The default Apache user is www-data. I'm just trying to set up a plain old WordPress blog. Should I just dump the files into /var/www/ as root or as the web server user? Using the web server user seems inconvenient since I won't log in as that user, but I guess I can chown the files in /var/www to it. Not that I would log in as root either, but what is the recommended user to own the /var/www files? Thanks for the help.
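    One common arrangement, offered only as a hedged sketch (youruser is a placeholder; Ubuntu's Apache runs as www-data): keep your own account as the owner and give the web server group-write access where WordPress needs it.

        sudo chown -R youruser:www-data /var/www
        sudo find /var/www -type d -exec chmod 2775 {} \;
        sudo find /var/www -type f -exec chmod 664 {} \;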

    Read the article

  • How to autostart service in ANY linux?

    - by user329115
    I need to install the simplest possible service (a binary) and have it reliably run as the current user at boot (or login, whatever) on as many platforms as possible (of the aging point-of-sale type). The app monitors archives generated by another app in the user session. Startup alternatives considered: init.d; @reboot in crontab; a .desktop file in ~/.config/autostart; a myriad of other solutions including .profile and .bashrc. All of the above break down at some point. The problems stem from not wanting to run as root (I want the generated files to be user-accessible) and from not having a way to reliably get the current user name under sudo on all platforms. Ideally, not even sudo can be assumed to be available. Hey, I just want to run something at boot, and I have "root" power to do so. Windows gets the job done easily enough. This isn't rocket science, is it?
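    One fallback that stays close to "dumb": a root-owned SysV init script that immediately drops to the target user. This is only a sketch; the user name "posuser" and the binary path are assumptions, and the script still needs linking into the runlevels with update-rc.d or chkconfig.

        #!/bin/sh
        # /etc/init.d/pos-monitor
        case "$1" in
          start)
            su - posuser -c '/opt/pos/monitor >> /home/posuser/monitor.log 2>&1 &'
            ;;
          stop)
            pkill -u posuser -f /opt/pos/monitor
            ;;
        esac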

    Read the article

  • Wait for linux machine to be rebooted

    - by Theo
    I have a small script to install an update on my remote machine. I would like to reboot the machine remotely and, once it has come back up, continue with some more commands. What I currently do is:
    ssh root@myMachine << COMMANDS_ISSUED
    ###... Tasks
    init 6
    COMMANDS_ISSUED
    sleep 180s
    ssh root@myMachine << POST_REBOOT_COMMANDS
    ###.... More stuff
    POST_REBOOT_COMMANDS
    Is there a more elegant way to do it? Something like pinging the machine every 5 seconds, up to a maximum of 4 minutes? I work with a few Linux machines that have different boot-up times, and if my script continued immediately after the reboot, it would save me quite some time. (Note: I don't want to parallelize execution over all machines, as I want to see for each machine whether everything worked fine.)
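    A hedged sketch of the polling idea: after issuing the reboot, keep retrying a trivial ssh command until it succeeds or the 4-minute budget runs out, then carry on with the post-reboot commands.

        ssh root@myMachine 'init 6' || true
        sleep 15                                   # give the machine time to actually go down
        for i in $(seq 1 48); do                   # 48 x 5 s = 4 minutes maximum
            if ssh -o ConnectTimeout=5 -o BatchMode=yes root@myMachine true 2>/dev/null; then
                break
            fi
            sleep 5
        done
        ssh root@myMachine 'uname -r'              # post-reboot commands go here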

    Read the article

  • How to move a Windows instance to a different EC2 data centre?

    - by Darren Cook
    I have a Windows EC2 instance running in one region and need to move it to another region (Tokyo and Singapore in this case). Is that even possible? What potential problems do I need to watch out for? (I found http://stackoverflow.com/questions/2181849/ec2-instance-cloning, which describes how to do it, but it appears to assume Linux instances and the same data centre. Is it possible to move my keys across to another region?) I tried something similar with a Windows instance a few months ago, just trying to clone it in the same data centre, but I couldn't get it working quickly, so I had to give up and create a fresh instance at the time. This time I've got a bit of breathing space and want to research how to do it properly! Root device type: ebs. Block devices: sda1 and xvdf (both are EBS, "attached", and have Delete on termination set to "no"; sda1 is the root device). The AMI is described as "Unavailable" (followed by an AMI number).
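    For reference, the AWS CLI can image an instance and copy the resulting AMI to another region; a sketch with placeholder IDs, names and regions (key pairs don't travel between regions, so the public key would need importing in the target region separately):

        aws ec2 create-image --region ap-southeast-1 --instance-id i-0123456789abcdef0 --name "win-server-backup"
        aws ec2 copy-image   --region ap-northeast-1 --source-region ap-southeast-1 \
            --source-image-id ami-0123456789abcdef0 --name "win-server-tokyo"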

    Read the article

  • Debian - no sound output

    - by Gogeta70
    I've been trying for the last few days to get sound output on my Linux desktop. The onboard audio is Intel HD Audio (ICH9), but I couldn't get ALSA to even detect it, so I disabled it in the BIOS and installed a PCI card, a Dynex DX-SC51. Searching around, I found that it needs the ALSA driver for the ice1724 chip, so I installed everything for that. Now the system detects my sound card, but I can't get any audio to play out of it. Here's some information:
    root@debian:~# lspci | grep audio
    02:01.0 Multimedia audio controller: VIA Technologies Inc. VT1720/24 [Envy24PT/HT] PCI Multi-Channel Audio Controller (rev 01)
    root@debian:~# aplay -l
    **** List of PLAYBACK Hardware Devices ****
    card 0: ICE1724 [ICEnsemble ICE1724], device 0: ICE1724 [ICE1724]
      Subdevices: 0/1
      Subdevice #0: subdevice #0
    card 0: ICE1724 [ICEnsemble ICE1724], device 1: ICE1724 IEC958 [ICE1724 IEC958]
      Subdevices: 1/1
      Subdevice #0: subdevice #0
    I've been trying various solutions found on Google for a few days now and I'm getting nowhere. Hopefully someone here can point me in the right direction. Thanks.
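    Two observations worth checking, hedged as guesses: "Subdevices: 0/1" in the aplay output usually means something already has the playback device open, and new cards often come up muted. A few standard ALSA checks (the sample wav path is whatever your install actually ships):

        fuser -v /dev/snd/*                          # who, if anyone, is holding the device
        alsamixer -c 0                               # unmute (MM = muted) and raise Master/PCM
        speaker-test -c 2 -D plughw:0,0 -t wav       # send a test tone straight to the card
        aplay /usr/share/sounds/alsa/Front_Center.wav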

    Read the article

  • swapon --all --verbose : 'read swap header failed: Invalid argument'

    - by user66088
    I recently ran through EnableHibernateWithEncryptedSwap and ran the command swapon --all --verbose, which returned: 'read swap header failed: Invalid argument'. How do I fix this? Here's some more pertinent output from sudo fdisk -l:
    Disk /dev/sda: 80.0 GB, 80026361856 bytes
    255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00006d20
    Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *     2048      499711      248832   83  Linux
    /dev/sda2       501758   156301311    77899777    5  Extended
    /dev/sda5       501760   156301311    77899776   8e  Linux LVM
    Disk /dev/mapper/ubuntu--t10194-root: 75.5 GB, 75539415040 bytes
    255 heads, 63 sectors/track, 9183 cylinders, total 147537920 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    Disk /dev/mapper/ubuntu--t10194-root doesn't contain a valid partition table
    Disk /dev/mapper/ubuntu--t10194-swap_1: 4227 MB, 4227858432 bytes
    255 heads, 63 sectors/track, 514 cylinders, total 8257536 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x08040000
    Disk /dev/mapper/ubuntu--t10194-swap_1 doesn't contain a valid partition table
    Disk /dev/mapper/cryptswap1: 4225 MB, 4225761280 bytes
    255 heads, 63 sectors/track, 513 cylinders, total 8253440 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xd2236983
    Disk /dev/mapper/cryptswap1 doesn't contain a valid partition table
    Thanks for any and all help!
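    A hedged way to narrow this down, using the device names from the output above: check which entry swapon is actually activating and whether the mapped device carries a swap signature.

        grep -i swap /etc/fstab                     # which device swapon tries to activate
        cat /etc/crypttab                           # how cryptswap1 is set up
        sudo blkid /dev/mapper/cryptswap1 /dev/mapper/ubuntu--t10194-swap_1
        # if the device shows no swap signature, re-initialising it is the usual suggestion:
        sudo mkswap /dev/mapper/cryptswap1 && sudo swapon --all --verbose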

    Read the article

  • polkit: disable all users except those in group wheel?

    - by John Nash
    Is it possible to do the following using one polkit .pkla file: disable polkit for all users except those in the wheel group, while users in the wheel group have to provide the root password when using polkit?
    /etc/polkit-1/localauthority/50-local.d/wheel-only.pkla:
    [disable all users except the wheel group]
    Identity=unix-group:wheel
    Action=*
    ResultAny=???
    ResultInactive=???
    ResultActive=???
    The following file works, but you need to list every entry from /etc/group explicitly:
    [disable all users except those in the wheel group: root and myuser]
    Identity=unix-user:daemon;unix-user:bin;unix-user:sys;unix-user:adm;unix-user:tty;unix-user:disk;unix-user:lp;unix-user:mail;unix-user:news;unix-user:uucp;unix-user:man;unix-user:proxy;unix-user:kmem;unix-user:dialout;unix-user:fax;unix-user:voice;unix-user:cdrom;unix-user:floppy;unix-user:tape;unix-user:sudo;unix-user:audio;unix-user:dip;unix-user:www-data;unix-user:backup;unix-user:operator;unix-user:list;unix-user:irc;unix-user:src;unix-user:gnats;unix-user:shadow;unix-user:utmp;unix-user:video;unix-user:sasl;unix-user:plugdev;unix-user:staff;unix-user:games;unix-user:users;unix-user:nogroup;unix-user:libuuid;unix-user:crontab;unix-user:messagebus;unix-user:Debian-exim;unix-user:mlocate;unix-user:avahi;unix-user:netdev;unix-user:bluetooth;unix-user:lpadmin;unix-user:ssl-cert;unix-user:fuse;unix-user:utempter;unix-user:Debian-gdm;unix-user:scanner;unix-user:saned;unix-user:i2c;unix-user:haldaemon;unix-user:powerdev
    Action=*
    ResultAny=no
    ResultInactive=no
    ResultActive=no
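    An untested sketch of one way this is often attempted: as I understand the Local Authority ordering, later entries override earlier ones for the same action, so a single file can deny everyone and then re-allow the wheel group with an admin-password prompt. Treat the ordering assumption as exactly that, an assumption to verify against pklocalauthority(8).

        [deny everyone by default]
        Identity=unix-user:*
        Action=*
        ResultAny=no
        ResultInactive=no
        ResultActive=no

        [wheel group must authenticate as admin]
        Identity=unix-group:wheel
        Action=*
        ResultAny=auth_admin
        ResultInactive=auth_admin
        ResultActive=auth_admin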

    Read the article

  • sudo: /usr/lib/sudo/sudoers.so must be owned by uid 0

    - by 7UR7L3
    Whenever I try to do anything at all that requires my password, it returns this:
    u7ur7l3@ubuntu:~$ sudo
    sudo: /usr/lib/sudo/sudoers.so must be owned by uid 0
    sudo: fatal error, unable to load plugins
    So I can't install anything from the Software Center / package manager or run any commands in the terminal that require my password. I can log in, but that's pretty much it. I accidentally changed the permissions of some files, then changed some more while trying to fix it. Now I'm completely lost as to what to do. This is what happened when I tried to get sudo working again using pkexec:
    u7ur7l3@ubuntu:~$ pkexec chown root /usr/lib/sudo/sudoers.so
    Error getting authority: Error initializing authority: Error calling StartServiceByName for org.freedesktop.PolicyKit1: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ExecFailed: Failed to execute program /usr/lib/dbus-1.0/dbus-daemon-launch-helper: Success
    u7ur7l3@ubuntu:~$ sudo ls
    sudo: /usr/lib/sudo/sudoers.so must be owned by uid 0
    sudo: fatal error, unable to load plugins
    To change permissions I was using Root Actions as a Dolphin service/plugin, so the shell history doesn't show me the permission changes. I've just realized that sound doesn't work at all anymore either; when I go into Phonon, my default settings and playback devices aren't even there. Also, I don't have the option to shut down; I can only log out or leave.
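    With both sudo and pkexec broken, the usual route is a root shell from recovery mode (or a live CD with the installed system mounted and chrooted). A hedged sketch of the ownership and mode repairs implied by the two error messages; exact modes can differ between releases, so compare against another machine if possible:

        mount -o remount,rw /
        chown root:root /usr/lib/sudo/sudoers.so && chmod 644 /usr/lib/sudo/sudoers.so
        chown root:root /usr/bin/sudo            && chmod 4755 /usr/bin/sudo
        chown root:messagebus /usr/lib/dbus-1.0/dbus-daemon-launch-helper
        chmod 4754 /usr/lib/dbus-1.0/dbus-daemon-launch-helper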

    Read the article

  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
    Overview ·        With HPC 2008 R2 SP1 You can add Azure worker roles as compute nodes in a local Windows HPC Server cluster. ·        The subscription for Windows Azure like any other Azure Service - charged for the time that the role instances are available, as well as for the compute and storage services that are used on the nodes. ·        Win-Win ? - Azure charges the computer hour cost (according to vm size) amortized over a month – so you save on purchasing compute node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud. ·        Blob storage is used to hold input & output files of each job. I can see how Parametric Sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC Job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure Queues. ·        this is not the end of the story for Azure Grid Computing. If MS requires you to purchase a local HPC license (and administrate it), what's to stop a 3rd party from doing this and encapsulating exposing HPC WCF Broker Service to you for managing compute nodes? If MS doesn’t  provide head node as a service, someone else will! Process ·        requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure + an availability policy for the worker nodes. ·        After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs. ·        A Windows Azure worker role instance runs a HPC compatible Azure guest operating system which runs on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released - All role instances defined by your service will run on the guest operating system version that you specify. see Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549). ·        use the hpcpack command to upload file packages and install files to run on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). Requirements ·        assuming you have an azure subscription account and the HPC head node installed and configured. ·        Install HPC Pack 2008 R2 SP 1 -  see Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812). ·        Configure the head node to connect to the Internet - connectivity is provided by the connection of the head node to the enterprise network. You may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported). ·        Configure the firewall - allow outbound TCP traffic on the following ports: 80,       443, 5901, 5902, 7998, 7999 ·        Note: HPC Server  uses Admin Mode (Elevated Privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes. ·        Obtain a Windows Azure subscription certificate - the Windows Azure subscription must be configured with a public subscription (API) certificate -a valid X.509 certificate with a key size of at least 2048 bits. 
Generate a self-sign certificate & upload a .cer file to the Windows Azure Portal Account page > Manage my API Certificates link. see Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526). ·        import the certificate with an associated private key on the HPC cluster head node - into the trusted root store of the local computer account. Obtain Windows Azure Connection Information for HPC Server ·        required for each worker node template ·        copy from azure portal - Get from: navigation pane > Hosted Services > Storage Accounts & CDN ·        Subscription ID - a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In Properties pane. ·        Subscription certificate thumbprint - a 40-char hex string (you need to remove spaces). In Management Certificates > Properties pane. ·        Service name - the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane. ·        Blob Storage account name - the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane. Import the Azure Subscription Certificate on the HPC Head Node ·        enable the services for Windows HPC Server  to authenticate properly with the Windows Azure subscription. ·        use the Certificates MMC snap-in to import the certificate to the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (.pfx or .p12 file) with a private key that is protected by a password. ·        see Certificates (http://go.microsoft.com/fwlink/?LinkId=163918). ·        To open the certificates snapin: Run > mmc. File > Add/Remove Snap-in > certificates > Computer account > Local Computer ·        To import the certificate via wizard - Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import ·        After the certificate is imported, it appears in the details pane in the Certificates snap-in. You can open the certificate to check its status. Configure a Proxy Client on the HPC Head Node ·        the following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker. ·        Create a Windows Azure Worker Node Template ·        Edit HPC node templates in HPC Node Template Editor. ·        Specify: 1) Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster + 2)worker node availability policy – rules for deploying / removing worker role instances in Windows Azure o   HPC Cluster Manager > Configuration > Navigation Pane > Node Templates > Actions pane > New à Create Node Template Wizard or Edit à Node Template Editor o   Choose Node Template Type page - Windows Azure worker node template o   Specify Template Name page – template name & description o   Provide Connection Information page – Azure Subscription ID (text) & Subscription certificate (browse) o   Provide Service Information page - Azure service name + blob storage account name (optionally click Retrieve Connection Information to get list of available from azure – possible LRT). 
o   Configure Azure Availability Policy page - how Windows Azure worker nodes start / stop (online / offline the worker role instance -  add / remove) – manual / automatic o   for automatic - In the Configure Windows Azure Worker Availability Policy dialog -select days and hours for worker nodes to start / stop. ·        To validate the Windows Azure connection information, on the template's Connection Information tab > Validate connection information. ·        You can upload a file package to the storage account that is specified in the template - eg upload application or service files that will run on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). Add Azure Worker Nodes to the HPC Cluster ·        Use the Add Node Wizard – specify: 1) the worker node template, 2) The number of worker nodes   (within the quota of role instances in the azure subscription), and 3)           The VM size of the worker nodes : ExtraSmall, Small, Medium, Large, or ExtraLarge.  ·        to add worker nodes of different sizes, must run the Add Node Wizard separately for each size. ·        All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes. ·        To add Windows Azure worker nodes o   In HPC Cluster Manager: Node Management > Actions pane > Add Node à Add Node Wizard o   Select Deployment Method page - Add Azure Worker nodes o   Specify New Nodes page - select a worker node template, specify the number and size of the worker nodes ·        After you add worker nodes to the cluster, they are in the Not-Deployed state, and they have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online. ·        Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN – this is non-configurable. Deploying Windows Azure Worker Nodes ·        To deploy the role instances in Windows Azure - start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure Worker Node Template – Azure Availability Policy -  to be automatic or manual. ·        The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates). ·        ·          Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure. ·        To start worker nodes manually and bring them online o   In HPC Node Management > Navigation Pane > Nodes > List / Heat Map view - select one or more worker nodes. o   Actions pane > Start – in the Start Azure Worker Nodes dialog, select a node template. o   the state of the worker nodes changes from Not Deployed to track the provisioning progress – worker node Details Pane > Provisioning Log tab. 
o   If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes. o   After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online. ·        Troubleshooting o   check node template. o   use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999 o   check node status - Deployment status information appears in the service account information in the Windows Azure Portal - HPC queries this -  see  node status information for any failed nodes in HPC Node Management. ·        When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). ·        to remove a set of role instances in Windows Azure - stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed. ·        Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added. However, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.

    Read the article

  • Windows Phone 7 developer resources

    - by Daniel Moth
    Developers of Windows Mobile 6.x (and indeed Windows CE) applications still use the rich .NET Compact Framework 3.5 with Visual Studio 2008 for development. That is still a great platform, and the Mobile Development Handbook is still a useful resource (if I may say so myself :-). The release of Windows Phone 7 changes the programming paradigm: the programming model has NETCF in its guts, but the developer uses the Silverlight or XNA APIs (and they can call from one into the other). I thought I'd gather here (for your reference and mine) the top 10 resources for getting started.
    1. Windows Phone Developer Home - get the official word and latest announcements.
    2. Windows Phone Developer Tools RTW - download the free developer tools (on my machine the installation took 30 minutes, over my existing vanilla Visual Studio 2010 install).
    3. Windows Phone 7 Jump Start video training - watch the 12 sessions by Wigley/Miles.
    4. Windows Phone 7 Developer Training Kit - work through the labs.
    5. Windows Phone RSS tag - Channel 9 has tons more WP7 videos, stay tuned.
    6. Windows Phone 7 in 7 Minutes - watch 20 seven-minute videos.
    7. Programming Windows Phone 7 - read 11 free chapters from Petzold's eBook.
    8. The Windows Phone Developer Blog - subscribe to the official blog.
    9. Getting Started with Windows Phone Development - explore all links from the MSDN Library root page.
    10. Silverlight for Windows Phone - another root MSDN Library page.
    If after all that you get your hands dirty and still can't find the answer, ask questions at the WP7 development MSDN Forum. On a personal note, I was pleased to see that the Parallel Stacks debugger window works fine with the WP7 project ;-) Comments about this post are welcome at the original blog.

    Read the article

  • ASP 3.0 Folder/File Permissions Settings (ASP Classic)

    - by ASP Pee-Wee
    Dear Stack Exchange, I have built a form input page in HTML whose action posts to an ASP handler/processor .asp file. The handler/processor .asp file contains only <% Insert VBScript Here %> and no HTML output whatsoever. The .asp file was never intended to be "web viewable" the way an .asp home page or an HTML file would be. It's supposed to be for my eyes only, not the public's; however, it does need to take info posted by the public and do something with it on its end. I have used VBScript/ASP 3.0 to build the form handler/processor file and would like to know how to keep someone from viewing the actual VBScript in it. I am aware of obfuscation, but I would like to know how to keep prying eyes from even being able to look at the obfuscated code in the handler/processor file. I realize that the server executes the .asp file first, before outputting anything to the browser, so my main concern is that someone could "download" the form handler/processor .asp file and then view its contents on their machine. Assuming the form handler .asp file is where it is, behind the root, and is on a Windows server (no .htaccess approach), how could one protect it so that it could never be viewed or simply pulled down via anonymous FTP or something like that? Is there something like "script only" permissions that the system administrator could set up for a particular folder? Remember, with shared hosting I can't go above the root. If so, would the form still be able to post? How would any of you go about protecting the ASP file, in addition to obfuscation? Any help would be greatly appreciated. Thanks, ASP Pee-Wee

    Read the article

  • Sitemaps - do I need to submit each sitemap in sitemap_index.xml to Google Webmaster tools?

    - by iSumitG
    I have a WordPress blog on my CentOS server. There is no sitemap.xml in the root directory, but there is a sitemap_index.xml file there which contains the following:
    <?xml-stylesheet type="text/xsl" href="http://mywebsite.com/wp-content/plugins/wordpress-seo/css/xml-sitemap-xsl.php"?>
    <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <sitemap>
        <loc>http://mywebsite.com/post-sitemap.xml</loc>
        <lastmod>2012-12-18T19:47:47+00:00</lastmod>
      </sitemap>
      <sitemap>
        <loc>http://mywebsite.com/page-sitemap.xml</loc>
        <lastmod>2012-12-18T17:32:49+00:00</lastmod>
      </sitemap>
    </sitemapindex>
    My question: which sitemap should I submit to Google Webmaster Tools? The options are: (1) only sitemap_index.xml; (2) only post-sitemap.xml and page-sitemap.xml; (3) all three (sitemap_index.xml, post-sitemap.xml and page-sitemap.xml). If there is another option, please let me know.

    Read the article

  • Automated “ubuntu-12.04.1-server-amd64” OS installation on physical machine

    - by user285336
    We are using a physical server and are in the process of setting up an automated "ubuntu-12.04.1-server-amd64" OS installation on it. There are two HDDs for the OS installation, with a RAID1 relationship between them configured through the BIOS. The kickstart configuration file looks like this:
    #Generated by Kickstart Configurator
    #platform=AMD64 or Intel EM64T
    #System language
    lang en_US
    #Language modules to install
    langsupport en_US
    #System keyboard
    keyboard us
    #System mouse
    mouse
    #System timezone
    timezone Asia/Dili
    #Root password
    rootpw --iscrypted $1$Yl1QJyta$KzIT.kq3i9E5XaiQKcUJn/
    #Initial user
    user ankit --fullname "Ankit" --iscrypted --password $1$c6Yflpea$pi1QQ59/jgywmGwBv25z3/
    #Reboot after installation
    reboot
    #Use text mode install
    text
    #Install OS instead of upgrade
    install
    #Use Web installation
    url --url my_repo_location
    #System bootloader configuration
    bootloader --location=mbr
    #Clear the Master Boot Record
    zerombr yes
    #Partition clearing information
    clearpart --all --initlabel
    #Disk partitioning information
    part /boot --fstype ext4 --size 100 --ondisk sda
    part / --fstype ext4 --size 10000 --ondisk sda
    part /var --fstype ext4 --size 10000 --ondisk sda
    part swap --size 1024 --ondisk sdb
    #System authorization information
    auth --useshadow --enablemd5
    #Network information
    network --bootproto=dhcp --device=eth0
    #Firewall configuration
    firewall --enabled --trust=eth0 --http --ftp --ssh --telnet --smtp
    #X Window System configuration information
    xconfig --depth=8 --resolution=640x480 --defaultdesktop=GNOME
    But I am getting the error "No root file system is defined". Please advise: do we need to modify the kickstart configuration file? Any help in this regard will be much appreciated. The automated Ubuntu OS installation succeeds in a virtual machine (VM) with the above ks.cfg (kickstart configuration file) but fails on the physical machine. If possible, please suggest a corrected ks.cfg to resolve the problem. Thanks & Regards, Rajesh Prasad
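    A hedged guess at the cause: with the mirror built in the BIOS, the installer usually sees a single logical disk, so "part swap ... --ondisk sdb" points at a drive that does not exist and the whole partitioning section can be rejected, which then surfaces as "No root file system is defined". A sketch of the partitioning block with everything kept on the one visible disk (verify the actual device name from the installer console, for example on tty2, before relying on it):

        part /boot --fstype ext4 --size 100 --ondisk sda
        part /     --fstype ext4 --size 10000 --ondisk sda
        part /var  --fstype ext4 --size 10000 --ondisk sda
        part swap  --size 1024 --ondisk sda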

    Read the article

  • Wordpress Multisite- How to restrict a non-wordpress page only to one domain

    - by yibe
    I am running three blogs from a WordPress multisite install. Currently, I am developing a separate, non-WordPress application, and I want to link this application with blog1. I put the application in a subfolder of my root directory (the WordPress multisite is installed in the root) and linked the application in the menu of blog1. So far so good. But it is possible to access the application from all three blogs: I want the application to be accessible only from blog1-domain/my-app/, yet when I tried blog2-domain/my-app, it just opened normally. In general, the application is accessible from all three domains followed by /my-app, and I don't think it is proper to leave it that way. What can I do to restrict my app so it is accessible only from blog1-domain/my-app? Or am I not using a good blog structure? I am open to all kinds of suggestions. Thank you for your help.
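    Since the three blogs share the same document root, one hedged sketch (Apache and mod_rewrite assumed; the domain name is a placeholder) is an .htaccess file inside /my-app that refuses any request arriving under a host other than blog1's:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^(www\.)?blog1-domain\.com$ [NC]
        RewriteRule ^ - [F,L]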

    Read the article

  • “psmouse.ko not found” when installing the ALPS touchpad driver on a Lenovo Ideapad Flex 14

    - by user279806
    I was following these instructions on Ubuntu 14.04: https://github.com/he1per/psmouse-dkms-alpsv7 (and the question "psmouse serio1: alps: Unknown ALPS touchpad" about a Lenovo Ideapad Flex 15). After many tries I received the following error message:
    root@alisson-Lenovo-Ideapad-Flex14:/tmp/psmouse-dkms-alpsv7# ./install.sh
    ------ Building with dkms -------
    Error! DKMS tree already contains: psmouse-dkms-alpsv7-1.0
    You cannot add the same module/version combo more than once.
    Module psmouse-dkms-alpsv7/1.0 already built for kernel 3.13.0-24-generic/4
    ------ Installing with dkms -------
    Module psmouse-dkms-alpsv7/1.0 already installed on kernel 3.13.0-24-generic/x86_64
    ???? Error: dkms install failed:\n '/usr/lib/modules/3.13.0-24-generic//updates/psmouse.ko' not found.
    Then, searching for "psmouse.ko":
    alisson@alisson-Lenovo-Ideapad-Flex14:~$ locate psmouse.ko
    /lib/modules/3.13.0-24-generic/kernel/drivers/input/mouse/psmouse.ko
    /lib/modules/3.13.0-24-generic/updates/dkms/psmouse.ko
    /var/lib/dkms/psmouse-dkms-alpsv7/1.0/3.13.0-24-generic/x86_64/module/psmouse.ko
    /var/lib/dkms/psmouse-dkms-alpsv7/1.0/build/src/.psmouse.ko.cmd
    /var/lib/dkms/psmouse-dkms-alpsv7/1.0/build/src/psmouse.ko
    What should I do?
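    One commonly suggested way past the "DKMS tree already contains" error, sketched from the module name and version shown in the output (run as root, from the project directory):

        dkms remove psmouse-dkms-alpsv7/1.0 --all      # clear the half-installed copy
        rm -rf /var/lib/dkms/psmouse-dkms-alpsv7       # only if stale state is left behind
        ./install.sh                                   # re-run the project's installer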

    Read the article

  • Why do we need URIs for XML namespaces?

    - by Patryk
    I am trying to figure out why we need URIs for XML namespaces, and I cannot find a purpose for them. Can anyone enlighten me a little by showing their use with a concrete example?
    EDIT: OK, so for instance I have this from w3schools:
    <root xmlns:h="http://www.w3.org/TR/html4/" xmlns:f="http://www.w3schools.com/furniture">
      <h:table>
        <h:tr>
          <h:td>Apples</h:td>
          <h:td>Bananas</h:td>
        </h:tr>
      </h:table>
      <f:table>
        <f:name>African Coffee Table</f:name>
        <f:width>80</f:width>
        <f:length>120</f:length>
      </f:table>
    </root>
    So what should http://www.w3schools.com/furniture hold?
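    For context, the URI does not have to hold anything; it is only an opaque, hopefully unique name that lets tools tell the two "table" vocabularies apart. A small sketch using xmllint (part of libxml2; the file name is a placeholder) that selects the furniture element purely by its namespace:

        xmllint --xpath \
          '//*[namespace-uri()="http://www.w3schools.com/furniture" and local-name()="name"]/text()' \
          example.xml        # prints: African Coffee Table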

    Read the article

  • IPV6 auto configuration not working

    - by Allan Ruin
    In Windows 7 my computer automatically gets an IPv6 global address and can use the IPv6 network, but in Ubuntu Natty I can't work out how to make stateless autoconfiguration work. My network is a university campus network, so I don't need tunnels. I figure that if something can be accomplished silently and successfully in Windows, it shouldn't be impossible in Linux. I tried manually editing /etc/network/interfaces with a static IPv6 address, and I can use IPv6 that way, but I just want autoconfiguration. I found this post, http://superuser.com/questions/33196/how-to-disable-autoconfiguration-on-ipv6-in-linux, and tried:
    sudo sysctl -w net.ipv6.conf.all.autoconf=1
    sudo sysctl -w net.ipv6.conf.all.accept_ra=1
    but without any luck. I got this in dmesg:
    root@natty-150:~# dmesg | grep IPv6
    [   26.239607] eth0: no IPv6 routers present
    [  657.365194] eth0: no IPv6 routers present
    [  719.101383] eth0: no IPv6 routers present
    [32864.604234] eth0: no IPv6 routers present
    [33267.619767] eth0: no IPv6 routers present
    [33341.507307] eth0: no IPv6 routers present
    I am not sure whether it matters, but I then set up a static IPv6 address (with a gateway) and restarted networking; ping6 ipv6.google.com worked, so the IPv6 network itself is fine. This time another entry was added to dmesg:
    [33971.214920] eth0: no IPv6 routers present
    So I guess the complaint about no IPv6 routers doesn't matter? Here is the IPv6 forwarding setting (I guess forwarding is used for radvd and the like?):
    root@natty-150:/# cat /proc/sys/net/ipv6/conf/eth0/forwarding
    0
    After ajmitch mentioned the forwarding setting, I added this to the sysctl.conf file:
    net.ipv6.conf.all.autoconf = 1
    net.ipv6.conf.all.accept_ra = 1
    net.ipv6.conf.default.forwarding = 1
    net.ipv6.conf.lo.forwarding = 1
    net.ipv6.conf.eth0.forwarding = 1
    and then ran sysctl -p and /etc/init.d/networking restart, but it still doesn't work.
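    One hedged observation about that last step: with forwarding enabled the kernel behaves as a router and, by default, ignores router advertisements, which defeats stateless autoconfiguration. A sketch of the combination normally used on a client (accept_ra=2 is the variant that accepts RAs even while forwarding stays on):

        sysctl -w net.ipv6.conf.eth0.forwarding=0
        sysctl -w net.ipv6.conf.eth0.accept_ra=1
        sysctl -w net.ipv6.conf.eth0.autoconf=1
        ip -6 addr show dev eth0     # a global (2000::/3) address should appear once an RA arrives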

    Read the article

  • Correct drivers for AMD Radeon™ HD 7650M

    - by Hailwood
    I have a Sony Vaio SVE15118FG running Ubuntu 12.10, which comes with an AMD Radeon™ HD 7650M graphics chip. This was all working fine until some updates came through (not sure which they were); after installing them, my laptop no longer boots properly. Specifically: upon attempting to boot, the screen starts flickering with violent striped horizontal lines, and this can only be stopped with the power button. Attempting to do anything in recovery mode, apart from dropping to a root shell, runs fsck, which ends up hanging (the system still responds but there is no HDD activity); this can only be remedied with Ctrl+Alt+Delete to restart the system. Running fsck manually from the root shell completes successfully. I have attempted numerous things, including removing all proprietary drivers, trying the two installed proprietary drivers, and installing the latest beta driver from AMD. These all yield various results, such as the same behaviour as above, a gnome-fallback session, or a gnome-shell session with flickers and graphical artifacts. So I am wondering, what on earth do I have to do to get this to work?! I don't really game, especially in Ubuntu, so I just want a working system; I'm not fussed about 3D acceleration or whatever...

    Read the article

  • Publish Static Content to WebLogic

    - by James Taylor
    Most people know WebLogic has a built-in web server. Typically this is not an issue, as you deploy Java applications and WebLogic publishes them to the web. But what if you just want to display a simple static HTML page? In WebLogic you can develop a simple web application to serve static HTML content. In this example I used WLS 10.3.3 and I want to display two files: an HTML file and an XSD for reference.
    1. Create a directory of your choice; this is what I will call the document root: mkdir /u01/oracle/doc_root
    2. Copy the static files to this directory.
    3. In the document root created in step 1, create the directory WEB-INF: mkdir WEB-INF
    4. In the WEB-INF directory, create a file called web.xml with the following content:
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/j2ee/dtds/web-app_2_3.dtd">
    <web-app>
    </web-app>
    5. Log in to the WebLogic console to deploy the application: click Deployments, then Lock & Edit, then Install, and set the path to the directory created in step 1.
    6. Leave the default "Install this deployment as an application" and click Next.
    7. Select a managed server to deploy to and click Next.
    8. Accept the defaults and click Finish.
    9. The deployment completes successfully; now click Activate Changes. You should see the application started under Deployments.
    You can now access your static content via the following URL: http://localhost:7001/doc_root/helloworld.html
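    As an optional alternative to the console clicks, the same deployment can be scripted with the weblogic.Deployer tool; this is a sketch only, with the admin URL, credentials and target server name as assumptions to adapt:

        . $WL_HOME/server/bin/setWLSEnv.sh
        java weblogic.Deployer -adminurl t3://localhost:7001 -username weblogic -password welcome1 \
             -deploy -name doc_root -targets ManagedServer1 /u01/oracle/doc_root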

    Read the article

  • Google Analytics setting cookies on static content despite being on entirely separate domain

    - by Donald Jenkins
    I recently decided to comply with the YSlow recommendation that static content be hosted on a cookieless domain. As I already use the root of my domain (donaldjenkins.com) to host my website, on which Google Analytics sets a few cookies, that meant moving the CNAME URL for the CDN serving the static files from cdn.donaldjenkins.com to an entirely separate, dedicated domain. I purchased cdn.dj (yes, it's a real Djibouti domain name), hosted the files in the root (which contains nothing else other than a robots.txt file) and set a CNAME of e.cdn.dj for the CDN. This setup works, but I was rather surprised to find that YSlow still flags the static files for not being cookie-free (screenshot in the original post). The cdn.dj domain was new and has never been used for anything other than hosting these static files. Running HttpFox on the site shows the _utma and _utmz Google Analytics cookies being set on the static files listed above, despite their being hosted on an entirely separate, dedicated domain. Here's my Google Analytics code:
    //Google Analytics tracking code
    var _gaq=[['_setAccount','UA-5245947-5'],['_trackPageview']];
    (function(d,t){var g=d.createElement(t),s=d.getElementsByTagName(t)[0];
    g.src=('https:'==location.protocol?'//ssl':'//www')+'.google-analytics.com/ga.js';
    s.parentNode.insertBefore(g,s)}(document,'script'));
    // [END] Google Analytics tracking code
    I'm not obsessing about this issue (I know it isn't really affecting server performance), but I'd like to understand what is causing it and why it won't go away...

    Read the article
