Search Results

Search found 72319 results on 2893 pages for 'file explorer'.


  • Check connection and reconnect wifi

    - by Ruud
    I'm building a wireless photo frame. The one thing I haven't been able to figure out is how to bring my wifi connection back up using a recommended method. Right now I have edited /etc/network/interfaces so that wlan0 is started at boot:

        auto wlan0
        iface wlan0 inet dhcp
            wireless-essid ourssid

    This works fine for booting, but I have found that if I do not check the connection for a long time (it could be a week) it might be down, so I need to reconnect. What I do now to verify that the connection is working is download a file from the server that can't be cached (http://server.ext/ping.php?randomize=123456). If I fail to retrieve the file, I assume that the connection is no longer working and I run a shell script like:

        #!/bin/bash
        ifconfig wlan0 up
        iwconfig wlan0 essid "ourssid"
        dhclient wlan0

    And the connection comes back. But I can't find anything that tells me whether this is in any way a good method. Can it be improved upon, or is this already right?
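
    A minimal sketch of how the connectivity check and the reconnect could be combined into a single script suitable for a cron job. The URL, interface name and ESSID are taken from the question; the wget timeout and retry values are assumptions:

        #!/bin/bash
        # Fetch an uncacheable URL; if that fails, bring the interface up again
        # and renew the DHCP lease, exactly as in the manual recovery script.
        URL="http://server.ext/ping.php?randomize=$RANDOM"
        if ! wget -q --timeout=15 --tries=2 -O /dev/null "$URL"; then
            ifconfig wlan0 up
            iwconfig wlan0 essid "ourssid"
            dhclient wlan0
        fi

    Run from root's crontab every few minutes, this removes the need to notice a dead connection by hand.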

    Read the article

  • ArchBeat Link-o-Rama for 2012-08-31

    - by Bob Rhubart
    SOA Suite 11g Asynchronous Testing with soapUI | Greg Mally
    Greg Mally walks you through testing asynchronous web services with the free edition of soapUI.

    The Role of Oracle VM Server for SPARC in a Virtualization Strategy | Matthias Pfutzner
    Matthias Pfutzner's overview of hardware and software virtualization basics, and the role that Oracle VM Server for SPARC plays in a virtualization strategy.

    Cloud Computing: Oracle RDS on AWS - Connecting with DB tools | Tom Laszewski
    Cloud expert and author Tom Laszewski shares brief comments about the tools he used to connect two Oracle RDS instances in AWS.

    Keystore Wallet File – cwallet.sso – Zum Teufel! | Christian Screen
    "One of the items that trips up a FMW implementation, if only for mere minutes, is the cwallet.sso file," says Oracle ACE Christian Screen. In this short post he offers information to help you avoid landing on your face.

    Thought for the Day
    "With good program architecture debugging is a breeze, because bugs will be where they should be." — David May
    Source: SoftwareQuotes.com

    Read the article

  • Once mounted using TrueCrypt, cannot unmount

    - by zeiger
    I have an external HDD and use TrueCrypt for keeping encrypted file containers. After mounting, whenever I try to dismount a file container (using TrueCrypt 7.0a on Ubuntu 11.04), nothing happens and I get the following message:

        device-mapper: remove ioctl failed: Device or resource busy
        Command failed

    Furthermore, if I close TrueCrypt and then try to start it again, it says that TrueCrypt is already running, but I cannot access it from the Unity sidebar (because it is not there). Also, if I power down my external HDD, the TrueCrypt volume still shows as one of the mounted volumes, but I cannot do anything with it. Is there any possible workaround? I remember this NOT happening in earlier versions of Ubuntu, so I am guessing it has something to do with 11.04. Thanks
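
    A hedged sketch of how to find out what is keeping the volume busy before dismounting from the command line; the mount point is an assumption (TrueCrypt volumes are normally mounted under /media/truecrypt1, /media/truecrypt2 and so on):

        #!/bin/bash
        # Show any process still holding files open on the mounted volume,
        # then attempt a normal dismount via the TrueCrypt CLI.
        MOUNTPOINT=/media/truecrypt1    # adjust to the slot actually in use
        sudo lsof +D "$MOUNTPOINT"      # lists the offending processes, if any
        sudo fuser -vm "$MOUNTPOINT"    # alternative view of the same information
        truecrypt -d                    # dismount all mounted TrueCrypt volumes

    Closing whatever lsof/fuser reports (a file manager window, a shell cd'd into the volume, an indexer) usually lets the dismount succeed.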

    Read the article

  • Executable execution path: does it depend on where the executable is called from?

    - by Valkea
    As I'm still a new Linux user, I keep discovering behaviours and I'm unable to tell whether they are "normal" or not. I searched the Internet, but as I can't really find an answer I guess it's time to ask here.

    A few weeks ago I installed a small game called "Machinarium" and played it... but a few days later, when I wanted to continue my game, I was unable to make it start correctly, and as I didn't have the time to investigate, I gave up. But yesterday, while working on a program of mine, I saw exactly the same behaviour. So I searched a bit and discovered that when using Nautilus with the "List view", I was able to run the program (i.e. the program finds its sound, image and other resources) when I was literally "inside" the executable's folder, but not when I was in a parent folder that I had expanded down to the executable's folder in order to run it. (The original post illustrates this with two screenshots: double-clicking the executable from the expanded parent view does not work, while double-clicking it from inside its own folder does.) It is the same "place", but the Nautilus view is slightly different because the current folder is not the same, and that seems to make a difference to the program.

    Furthermore, when I create a menu item for the executable via System Settings/Main Menu, it behaves as if the executable can't find its resources (that's why I was unable to play Machinarium the second time: I had created a menu shortcut after my first game). So I made my program generate a text file at its root when running, and launched it from different "parent" folders to see where the text file was generated. Each time, the file was created in the top folder of the current Nautilus view. I had expected to see it appear in the same folder as the executable.

    Can anyone explain why it works like this (I guess it's normal)? And how am I supposed to handle this when creating programs: should I detect the executable path in my C++ code, or should I organize the resource files differently than on Windows?
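
    One common way around this, without touching the C++ code, is a small launcher script that changes to the executable's own directory before starting it, so relative resource paths resolve no matter where the launch happens from. This is only a sketch, not part of the original question, and the binary name is a placeholder:

        #!/bin/bash
        # Resolve the directory this script lives in, cd there, then run the game,
        # so relative paths like "images/" and "sounds/" are found.
        cd "$(dirname "$(readlink -f "$0")")"
        exec ./machinarium "$@"

    A Nautilus double-click or a Main Menu entry can then point at the wrapper instead of the binary, which sidesteps the working-directory difference described above.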

    Read the article

  • How can I tweak this split-tar-gz-archive script?

    - by Sai
    I came across this shell script that can be used to split an entire directory across multiple compressed files. I am interested in making a minor tweak to it. Let's say I have n GB of files in a directory. I do not want the contents of one particular folder to be split across multiple tar files; that folder should stay inside a single file. In other words, the n MB folder should fit within a single compressed file and the remaining (n GB - n MB) should be split across multiple files. I am not sure whether that is possible with this script, so I was looking forward to some suggestions. Though the script is well documented, it is quite complex for me to understand.
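
    A rough sketch of the idea, independent of the linked script: archive the folder that must stay whole on its own, then archive everything else and pipe it through split. The paths, folder name and chunk size below are assumptions:

        #!/bin/bash
        # Keep one sub-folder in a single archive, split the rest into 100 MB chunks.
        SRC=/path/to/directory      # directory being backed up
        KEEP=photos                 # sub-folder that must not be split across archives
        tar -czf "$KEEP.tar.gz" -C "$SRC" "$KEEP"
        tar -cz -C "$SRC" --exclude="./$KEEP" . | split -b 100m - rest.tar.gz.part-
        # Restore the split part later with: cat rest.tar.gz.part-* | tar -xz -C /restore/path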

    Read the article

  • Extending Blend for Visual Studio 2013

    - by Chris Skardon
    Originally posted on: http://geekswithblogs.net/cskardon/archive/2013/11/01/extending-blend-for-visual-studio-2013.aspx

    So, I got a comment yesterday on my post about Extending Blend 4 and Blend for Visual Studio 2012 asking if I knew how to get it working for Blend for Visual Studio 2013. My initial thought was: just change the location you get the Blend dlls from, from Visual Studio 11.0 to 12.0, and you're all set. So I went to do that, only to discover that the dlls I normally reference, well – they don't exist. So… I'd made the presumption that the actual process of using MEF etc. is still the same. I was wrong. The route to discovery required DotPeek and opening a few of Blend's dlls.

    Browsing through the Blend install directory (./Microsoft Visual Studio 12.0/Blend/) I notice the .addin files. So I decide to peek into the SketchFlow dll, then promptly remember SketchFlow is quite a big thing and hunting through there is not ideal; luckily there is another dll using an .addin file, 'Microsoft.Expression.Importers.Host', so we'll go for that instead. We can see it's still using the 'IPackage' formula, but where is that sucker? Well, we just press F12 on the 'IPackage' bit and DotPeek takes us there, with a very handy comment at the top:

        // Type: Microsoft.Expression.Framework.IPackage
        // Assembly: Microsoft.Expression.Framework, Version=12.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a
        // MVID: E092EA54-4941-463C-BD74-283FD36478E2
        // Assembly location: C:\Program Files (x86)\Microsoft Visual Studio 12.0\Blend\Microsoft.Expression.Framework.dll

    Now we know where the IPackage interface is defined, so let's just try writing a control. Last time I did a separate dll for the control; this time I'm not, but it still works if you want to do it that way. Let's build a control!

    STEP 1
    Create a new WPF application. Naming doesn't matter any more! I have gone with 'Hello2013' (see what I did there?)

    STEP 2
    Delete: App.Config, App.xaml, MainWindow.xaml. We won't be needing them.

    STEP 3
    Change your application to be a Class Library instead. (You might also want to delete the 'vshost' stuff in your output directory now, as it only exists for hosting the WPF app and just causes clutter.)

    STEP 4
    Add a reference to 'Microsoft.Expression.Framework.dll' (which you can find in 'C:\Program Files\Microsoft Visual Studio 12.0\Blend' – that's Program Files (x86) if you're on an x64 machine!).

    STEP 5
    Add a User Control. I'm going with 'Hello2013Control', and following on from last time, it's just a TextBlock in a Grid:

        <UserControl x:Class="Hello2013.Hello2013Control"
                     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
                     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
                     mc:Ignorable="d"
                     d:DesignHeight="300" d:DesignWidth="300">
            <Grid>
                <TextBlock>Hello Blend for VS 2013</TextBlock>
            </Grid>
        </UserControl>

    STEP 6
    Add a class to load the package – I've called it – yes, you guessed – Hello2013Package, which will look like this:

        namespace Hello2013
        {
            using Microsoft.Expression.Framework;
            using Microsoft.Expression.Framework.UserInterface;

            public class Hello2013Package : IPackage
            {
                private Hello2013Control _hello2013Control;
                private IWindowService _windowService;

                public void Load(IServices services)
                {
                    _windowService = services.GetService<IWindowService>();
                    Initialize();
                }

                private void Initialize()
                {
                    _hello2013Control = new Hello2013Control();
                    if (_windowService.PaletteRegistry["HelloPanel"] == null)
                        _windowService.RegisterPalette("HelloPanel", _hello2013Control, "Hello Window");
                }

                public void Unload() { }
            }
        }

    You might note that compared to the 2012 version we're no longer [Exporting(typeof(IPackage))]. The file you create in STEP 7 covers this for us.

    STEP 7
    Add a new file called '<PROJECT_OUTPUT_NAME>.addin' – in reality you can call it anything and it'll still be read in just fine; it's just nicer if it all matches up, so I have 'Hello2013.addin'. Content-wise, we need to have:

        <?xml version="1.0" encoding="utf-8"?>
        <AddIn AssemblyFile="Hello2013.dll" />

    obviously replacing 'Hello2013.dll' with whatever your dll is called.

    STEP 8
    Set the 'addin' file to be copied to the output directory.

    STEP 9
    Build!

    STEP 10
    Go to your output directory (./bin/debug), copy the 3 files (Hello2013.dll, Hello2013.pdb, Hello2013.addin) and paste them into the 'Addins' folder in your Blend directory (C:\Program Files\Microsoft Visual Studio 12.0\Blend\Addins).

    STEP 11
    Start Blend for Visual Studio 2013.

    STEP 12
    Go to the 'Window' menu and select 'Hello Window'.

    STEP 13
    Marvel at your new control! Feel free to email me / comment with any problems!

    Read the article

  • How can I switch between fglrx and ati drivers?

    - by Santa Maradona
    Hi there, how can I switch between the fglrx and ati (open source) drivers without uninstalling fglrx? The thing is, I'm using fglrx on a daily basis because it's better for movies – the open source driver displays an annoying horizontal line while playing movies. fglrx also does this, but it's less visible; that's the only disadvantage of the open source driver. However, when I want to plug in an external monitor with a different resolution than my laptop's LCD, I have to use the open source driver, because fglrx can't display two different resolutions at the same time. I can uninstall fglrx, but it's rather inconvenient to do that all the time. I've noticed that when I'm using fglrx there is an xorg.conf file, and when I uninstall it the file is missing. So my question is: how do I switch between them? It would be perfect if I could do this without restarting my computer. I'm using Ubuntu 10.10 on an MSI U250 laptop/netbook.
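
    Since the question already observes that the only on-disk difference is the presence of /etc/X11/xorg.conf, one low-tech approach is to swap that file and restart X. This is only a sketch built on that observation; the saved file name is made up, the fglrx packages stay installed, and restarting the display manager (gdm on Ubuntu 10.10) ends the current desktop session:

        #!/bin/bash
        # Switch between a saved fglrx xorg.conf and the open-source setup
        # (no xorg.conf), then restart gdm so X starts with the chosen driver.
        case "$1" in
            fglrx)  sudo cp /etc/X11/xorg.conf.fglrx /etc/X11/xorg.conf ;;
            radeon) sudo rm -f /etc/X11/xorg.conf ;;
            *)      echo "usage: $0 fglrx|radeon" >&2; exit 1 ;;
        esac
        sudo service gdm restart

    The xorg.conf.fglrx copy would have to be saved once, while fglrx is the active driver.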

    Read the article

  • MTP Won't Work With Newer Ubuntu

    - by spacesword
    I have a Philips GoGear Vibe 4 GB, set to MTP mode, as it has always been. This works fine with older versions of Ubuntu and with Windows, but not with newer versions of Ubuntu. The version history looks like this:

        12.04     - works
        12.10     - works
        13.04     - doesn't work
        14.04     - doesn't work
        Windows 7 - works

    When you plug the MP3 player in on Ubuntu, the file manager opens the root of the device, which contains the folder "internal storage". When you click on "internal storage" to open it, the file manager just hangs, and if you try to unmount the MP3 player, that process hangs too, until you just unplug it. In Windows, when you click on internal storage, it opens and shows all the files it contains. And in the earlier versions of Ubuntu it just worked. Any ideas? Thanks.
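
    One workaround that is sometimes suggested, rather than a fix for the hanging gvfs mount, is to bypass the file manager and mount the player with a FUSE MTP filesystem. This is a sketch assuming the jmtpfs package is available in the Ubuntu archive:

        #!/bin/bash
        # Mount the GoGear manually over MTP, copy files, then unmount.
        sudo apt-get install jmtpfs
        mkdir -p ~/gogear
        jmtpfs ~/gogear           # the device's "internal storage" appears under ~/gogear
        # ... copy music with cp or rsync ...
        fusermount -u ~/gogear    # unmount when done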

    Read the article

  • Using Windows Previous Versions to access ZFS Snapshots (July 14, 2009)

    - by user12612012
    The Previous Versions tab on the Windows desktop provides a straightforward, intuitive way for users to view or recover files from ZFS snapshots. ZFS snapshots are read-only, point-in-time instances of a ZFS dataset, based on the same copy-on-write transactional model used throughout ZFS. ZFS snapshots can be used to recover deleted files or previous versions of files, and they are space efficient because unchanged data is shared between the file system and its snapshots. Snapshots are available locally via the .zfs/snapshot directory and remotely via Previous Versions on the Windows desktop.

    Shadow Copies for Shared Folders was introduced with Windows Server 2003 but subsequently renamed to Previous Versions with the release of Windows Vista and Windows Server 2008. Windows shadow copies, or snapshots, are based on the Volume Snapshot Service (VSS) and, as the [Shared Folders part of the] name implies, are accessible to clients via SMB shares, which is good news when using the Solaris CIFS Service. And the nice thing is that no additional configuration is required - it "just works".

    On Windows clients, snapshots are accessible via the Previous Versions tab in Windows Explorer using the Shadow Copy client, which is available by default on Windows XP SP2 and later. For Windows 2000 and pre-SP2 Windows XP, the client software is available for download from Microsoft: Shadow Copies for Shared Folders Client.

    Assuming that we already have a shared ZFS dataset, we can create ZFS snapshots and view them from a Windows client.

        zfs snapshot tank/home/administrator@snap101
        zfs snapshot tank/home/administrator@snap102

    To view the snapshots on Windows, map the dataset on the client, then right click on a folder or file and select Previous Versions. Note that Windows will only display previous versions of objects that differ from the originals, so you may have to modify files after creating a snapshot in order to see previous versions of those files.

    The screenshot above shows various snapshots in the Previous Versions window, created at different times. On the left panel, the .zfs folder is visible, illustrating that this is a ZFS share. The .zfs setting can be toggled as desired; it makes no difference when using previous versions. To make the .zfs folder visible:

        zfs set snapdir=visible tank/home/administrator

    To hide the .zfs folder:

        zfs set snapdir=hidden tank/home/administrator

    The following screenshot shows the Previous Versions panel when a file has been selected. In this case the user is prompted to view, copy or restore the file from one of the available snapshots.

    As can be seen from the screenshots above, the Previous Versions window doesn't display snapshot names: snapshots are listed by snapshot creation time, sorted from most recent to oldest. There's nothing we can do about this; it's the way that the interface works. Perhaps one point of note, to avoid confusion, is that the ZFS snapshot creation time is not the same as the root directory creation timestamp. In ZFS, all object attributes in the original dataset are preserved when a snapshot is taken, including the creation time of the root directory. Thus the root directory creation timestamp is the time that the directory was created in the original dataset.
        # ls -d% all /home/administrator
                 timestamp: atime         Mar 19 15:40:23 2009
                 timestamp: ctime         Mar 19 15:40:58 2009
                 timestamp: mtime         Mar 19 15:40:58 2009
                 timestamp: crtime        Mar 19 15:18:34 2009

        # ls -d% all /home/administrator/.zfs/snapshot/snap101
                 timestamp: atime         Mar 19 15:40:23 2009
                 timestamp: ctime         Mar 19 15:40:58 2009
                 timestamp: mtime         Mar 19 15:40:58 2009
                 timestamp: crtime        Mar 19 15:18:34 2009

    The snapshot creation time can be obtained using the zfs command as shown below.

        # zfs get all tank/home/administrator@snap101
        NAME                             PROPERTY  VALUE
        tank/home/administrator@snap101  type      snapshot
        tank/home/administrator@snap101  creation  Mon Mar 23 18:21 2009

    In this example, the dataset was created on March 19th and the snapshot was created on March 23rd.

    In conclusion, Shadow Copies for Shared Folders provides a straightforward way for users to view or recover files from ZFS snapshots. The Windows desktop provides an easy to use, intuitive GUI, and no configuration is required to use or access previous versions of files or folders.

    REFERENCES FOR MORE INFORMATION
    ZFS
    ZFS Learning Center
    Introduction to Shadow Copies of Shared Folders
    Shadow Copies for Shared Folders Client
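
    Because the Previous Versions list shows only creation times, it can be handy to print each snapshot's name next to its creation time on the Solaris side. A small sketch (not from the original article) using the same dataset name as the examples above:

        # List snapshots of the shared dataset with their creation times,
        # oldest first, to match them up with Previous Versions entries.
        zfs list -t snapshot -o name,creation -s creation -r tank/home/administrator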

    Read the article

  • Localization in php, best practice or approach?

    - by sree
    I am localizing my PHP application and have a dilemma about choosing the best method to accomplish it.

    Method 1: Currently I am storing the words to be localized in an array in a PHP file:

        <?php
        $values = array(
            'welcome' => 'bienvenida'
        );
        ?>

    I use a function to look up and return each word as required.

    Method 2: Should I instead use a file that stores the same strings as individual variables?

        <?php
        $welcome = 'bienvenida';
        ?>

    My question is: which is the better method, in terms of speed and of the effort needed to develop it, and why?

    Edit: I would specifically like to know which of the two methods responds faster, and why that is. Also, any improvement on the above code would be appreciated!

    Read the article

  • First Foray – About timeout

    - by SQLMonger
    It has been quite a while since I signed up for this blog site and high time that something was posted. I have a list of topics that I will be working through and posting. Some, I am sure, will have been posted by others, but I will be sticking to the technical problems and challenges that I've recently faced, and the solutions that worked for me. My motto when learning something new has always been "My kingdom for an example!", and I plan on delivering useful examples here so others can learn from my efforts, failures and successes.

    A bit of background about me… My name is Clayton Groom. I am a founding partner of a consulting firm in St. Louis, Missouri, Covenant Technology Partners, LLC, and focus on SQL Server Data Warehouse design, Analysis Services and Enterprise Reporting solutions. I have been working with SQL Server since the early nineties, when it still only ran on OS/2. I love solving puzzles and technical challenges.

    Enough about me… On to a real problem: SSIS connection timeouts versus command timeouts.

    Last week, I was working on automating the processing for a large Analysis Services cube. I had reworked an SSIS package and script task originally posted by Vidas Matelis that automates the process of adding new and dropping old partitions to/from an Analysis Services cube. I had the package working great, tested, and ready for deployment. It basically performs a query against the source system to determine if there is new data in the warehouse that will require a new partition to be added to the cube, and it checks the cube to see if there are any partitions present that are no longer needed in a rolling 60 month window.

    My client uses Tivoli for running all their production jobs, not SQL Agent, so I had to build a command line file for Tivoli to use to run the package. Everything was going great. I had tested the command file from my development workstation using an XML configuration file to pass server-specific parameters into the package when executed using the DTExec utility. With all the pieces ready, I updated the dtsconfig file to point to the UAT environment and started working with the Tivoli developer to test the job. On the first run, the job failed, and from what I could see in the SSIS log, it had failed because of a timeout. Other errors in the log made me think that perhaps the connection string had not been passed into the package correctly.

    We bumped the Connection Manager timeout values from 20 seconds to 120 seconds and tried again. The job still failed. After changing the command line to use the /SET option instead of the /CONFIGFILE option, we tested again, and again failure. After a number more failed attempts, and after getting the Teradata DBA involved to monitor and see if we were connecting and then failing or just failing to connect, we determined that the job was indeed connecting to the server and then disconnecting itself after 30 seconds. This seemed odd, as we had the timeout values for the connection manager set to 180 seconds by then.

    At this point one of the DBAs found a post on the Teradata forum that had the clues to the puzzle: there is a separate "CommandTimeout" custom property on the data source object that may need to be adjusted for longer running queries. I opened up the SSIS package, opened the data flow task that generated the partition list table and right-clicked on the data source. From the context menu, I selected "Show Advanced Editor" and found the property. Sure enough, it was set to 30 seconds.
    The CommandTimeout property can also be edited in the SSIS Properties sheet.

    In order to determine how long the timeout needed to be, I ran the query from the task in the development environment and received a response in a matter of seconds. I then tried the same query against the production database and waited several minutes for a response. This did not seem to be a reasonable response time for the query involved, and indeed it wasn't. The Teradata DBAs adjusted the query governor settings for the service account I was testing with, and we were able to get the response back down under a minute. Still, I set the CommandTimeout property to a much higher value in case the job was ever started during a time of high demand on the production server.

    With this change in place, the job finally completed successfully. The lesson learned for me was two-fold:

    Always compare query execution times between development and production environments, and don't assume that production will always be faster. With higher user demands, query governors, and a whole lot more data, the execution time of even what might seem to be a simple query can vary greatly.

    SSIS connection timeout settings do not affect command timeouts. Connection timeouts control how long the package will wait for a response from the server before assuming the server is not available or is not responding. Command timeouts control how long a task will wait for results to start being returned before deciding that the server is not responding.

    Both lessons seem pretty straightforward, and I felt pretty sheepish once I finally figured out what the issue was. To be fair though, in the 5+ years that I have been working with SSIS, I can only recall one other time when I had to set the CommandTimeout property, and that memory only resurfaced while I was penning this post.

    Read the article

  • How can I view an R32G32B32 texture?

    - by bobobobo
    I have a texture with R32G32B32 floats. I create this texture in-program on D3D11, using DXGI_FORMAT_R32G32B32_FLOAT. Now I need to see the texture data for debug purposes, but it will not save to anything but dds, showing the error in debug output, "Can't find matching WIC format, please save this file to a DDS". So, I write it to DDS but I can't open it now! The DirectX texture tool says "An error occurred trying to open that file". I know the texture is working because I can read it in the GPU and the colors seem correct. How can I view an R32G32B32 texture in an image viewer?

    Read the article

  • Good practice about Javascript referencing

    - by AngeloBad
    I am working on script optimization for a web application. I have an ASP.NET web app that references jQuery in the master page, and every child page can reference other libraries or JavaScript extensions. I would like to optimize the application with YUI for .NET. The question is: should I put all the library references in the master page and compress all the JavaScript code into a single file, or should I create a file for every page that contains only the code used by that page? Is there any guidance to follow? Thanks!

    Read the article

  • Always disable the 8.3 name creation on Windows before installing WebCenter Content or WebLogic Server

    - by Kevin Smith
    You should always disable the 8.3 name creation feature when installing WebCenter Content on a Windows platform. The installs will normally work without it disabled, but you will find the weird 8.3 file and directory names in all the config files. Disabling it can also improve performance.

    On Windows XP and Windows Server 2003 and above you can do it with this command:

        fsutil.exe behavior set disable8dot3 1

    To make sure it is disabled you can run this command to check:

        fsutil.exe behavior query disable8dot3

    If the 8.3 file name creation is disabled you will see the following output from the command:

        The registry state of NtfsDisable8dot3NameCreation is 1 (Disable 8dot3 name creation on all volumes).

    Here is a Microsoft note on how to do this on Windows 2000 and Windows NT: How to Disable the 8.3 Name Creation on NTFS Partitions

    Read the article

  • JDeveloper and Upgrading Your JDK on Ubuntu

    - by Duncan Mills
    One little gotcha: if, as I did recently, you upgrade your JDK on Ubuntu, then you may have to make sure you reflect that change in a couple of places for JDeveloper to stay happy. Assuming that you've installed from the jar version of the JDeveloper installer, the JDK that you specified at install time will be recorded in the .jdev_jdk file in your home directory. However, be aware that this is not the only reference to the absolute path of the JDK. When you run the embedded WebLogic for the first time, the .jdeveloper/system11.1.1.3.37.56.60/DefaultDomain/bin/startWebLogic.sh script will be created, and associated with that, the setDomainEnv.sh script in the same directory. So, if you do want to change the JDK location, be sure to change this file as well (or, of course, do everything with symbolic links).
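
    Following the post's own suggestion of doing everything with symbolic links, a sketch of the idea: point JDeveloper at a stable path once, and only retarget the link on each JDK upgrade. The link location and JDK directory names are assumptions, and it is worth checking the existing contents of .jdev_jdk before overwriting it:

        #!/bin/bash
        # Create a stable "current" JDK path and reference that from .jdev_jdk
        # (and from setDomainEnv.sh); after an upgrade, only the link changes.
        sudo ln -sfn /usr/lib/jvm/jdk1.6.0_45 /usr/lib/jvm/jdev-current
        echo "/usr/lib/jvm/jdev-current" > ~/.jdev_jdk
        # After the next JDK upgrade, just repoint the link:
        # sudo ln -sfn /usr/lib/jvm/jdk1.6.0_51 /usr/lib/jvm/jdev-current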

    Read the article

  • Global UTF-encoding, the right way

    - by mowgli
    I'm curious as to the right way to get UTF-8 encoding on all web files. All my files (including CSS and JS) are created and saved in UTF-8 encoding.

    In PHP, I set the charset at the top of the main page (this page includes all the others) with:

        header('Content-type: text/html; charset=utf-8');

    In the same page I have this HTML meta tag:

        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    Then I stumbled upon an external CSS file that has this on its first line:

        @charset "UTF-8";

    And now I wonder: should I set the charset INSIDE all my CSS/JS files too, like that? And/or should I serve each file with charset=utf-8 in the meta tag?

    Read the article

  • Change permission to mount disk at rdesktop

    - by Tal
    I have Ubuntu 10.04 and have installed rdesktop 1.7. I have run these commands:

        sudo umount /media/Tal
        sudo mount -t ntfs-3g -o uid=1000,gid=1000,umask=0000 /dev/sdb1 /media/Tal
        rdesktop -0 -r sound:local -f -u administrator -r clipboard:PRIMARYCLIPBOARD -r disk:tal=/media/Tal myip

    Tal is an external hard drive connected over USB, formatted with an NTFS file system. When I connect to Windows 7 I see the hard drive in Computer, and I can access files and create new files and folders. But when I try to copy a new file into a folder it shows me an error message:

        You need permission to perform this action
        You require permission from the computer's administrator to make changes to this folder
        Tal on my computername Disk from Remote Desktop Connection

    I tried chmod and chown too, but I read on a Linux forum that with NTFS that is of no use. Can someone help me with my problem?

    Read the article

  • Internet works but update manager does not

    - by Raja
    I am using Ubuntu 12.04 and access the Internet with a 236 kbps USB modem. My issue is that accessing web pages through the browser works fine, but if I run sudo apt-get update in a terminal it does not respond and just hangs there. I am unable to work out what my issue is; please help me solve it.

    One more thing: I am also unable to change my DNS.

        raja@badfox:~$ cat /etc/resolv.conf
        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        #     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
        nameserver 220.226.100.40
        nameserver 220.226.6.104
        nameserver 127.0.0.1

    I want to replace them with DNS IPs from the servers listed here: http://www.cyberciti.biz/faq/free-dns-server/
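
    On 12.04, /etc/resolv.conf is generated by resolvconf, which is why hand edits get overwritten. A sketch of one supported way to add your own nameservers; the head file path is the standard resolvconf location, and the IPs shown are Google's public resolvers as an example:

        #!/bin/bash
        # Add nameservers through resolvconf instead of editing /etc/resolv.conf directly.
        printf 'nameserver 8.8.8.8\nnameserver 8.8.4.4\n' | \
            sudo tee -a /etc/resolvconf/resolv.conf.d/head
        sudo resolvconf -u    # regenerate /etc/resolv.conf with the new entries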

    Read the article

  • How to install Visual Python?

    - by user214152
    I'm a Linux newbie, but I'm determined to get my favorite applications onto my laptop running Ubuntu 12.04 (I just installed Cinnamon). I'm trying to install Visual Python, which requires Python 2.7. I followed the instructions on the VPython site, but the Wine application isn't extracting anything from the Python .msi file. From the first line,

        wine start /i python-2.7.5.amd64.msi /qn TARGETDIR=~/Python27 ALLUSERS=1

    it says:

        fixme:storage:create_storagefile Storage share mode not implemented.

    I created that Python27 directory, so I know it exists, and it's empty. I know Ubuntu already has Python 2.7, so I tried just running the VPython .exe file, but it says "This program can only be installed on versions of Windows designed for the following processor architectures: x64." My Toshiba Satellite has a 64-bit processor. Could anybody help?
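
    Rather than running the Windows installer under Wine, VPython is packaged for Ubuntu and the packaged version uses the system Python 2.7 directly. A sketch, assuming the python-visual package in the 12.04 archive is recent enough for your needs:

        #!/bin/bash
        # Install VPython from the Ubuntu repositories and check that it imports.
        sudo apt-get update
        sudo apt-get install python-visual
        python -c "import visual"   # no output means the module loaded successfully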

    Read the article

  • Firefox profile issue

    - by vooda gopal
    Is there any limit on the number of Firefox profiles that can be created? My situation is that I am currently doing Selenium WebDriver automation on Linux against a device. There are 50 devices of the same kind, and the framework picks a device depending on availability. I need to bypass unsigned SSL pages, and I am using Firefox 14. I have implemented the following, but it is not consistent: every time a device is chosen, its certificate is added to the cert file in the Firefox profile, yet I am getting sec_error_bad_signature very frequently. So I started recreating the cert file (deleting it and creating it by opening Firefox) for every run. Now this poses a problem if multiple devices are run at the same time, hence I want to create a separate Firefox profile for each.

    Read the article

  • Failing report subscriptions

    - by DavidWimbush
    We had an interesting problem while I was on holiday. (Why doesn't this stuff ever happen when I'm there?) The sysadmin upgraded our Exchange server to Exchange 2010 and everyone's subscriptions stopped. My Subscriptions showed an error message saying that the email address of one of the recipients is invalid.

    When you create a subscription, Reporting puts your Windows user name into the To field, and most users have no permission to edit it. By default, Reporting leaves it up to Exchange to resolve that into an email address. This only works if Exchange is set up to translate aliases or 'short names' into email addresses. It turns out this leaves Exchange open to being used as a relay, so it is disabled out of the box. You now have three options:

    1. Open up Exchange. That would be bad.
    2. Give all Reporting users the ability to edit the To field in a subscription. a) They shouldn't have to; it should just work. b) They don't really have any business subscribing anyone but themselves.
    3. Fix the report server to add the domain. This looks like the right choice and it works for us. See below for details.

    Pre-requisites:

    - A single email domain name.
    - A clear relationship between the Windows user name and the email address, e.g. if the user name is joebloggs, then joebloggs@domainname needs to be the email address or an alias of it.

    Warning: saving changes to the rsreportserver.config file will restart the Report Server service, which effectively takes Reporting down for around 30 seconds. Time your action accordingly.

    Edit the file rsreportserver.config (most probably in the folder ..\Program Files[ (x86)]\Microsoft SQL Server\MSRS10_50[.instancename]\Reporting Services\ReportServer). There's a setting called DefaultHostName which is empty by default. Enter your email domain name without the leading '@' and save the file. This domain name will be appended to any destination addresses that don't have a domain name of their own.

    Read the article

  • Problems creating a debdiff

    - by Chris Wilson
    I'm following this guide to create a debdiff for a package I'm patching. Everything goes fine until step 8, when I attempt to create the debdiff after committing the changes. The package in question is Zim, pulled from Launchpad using bzr branch lp:zim, and according to the guide I should execute the following command to create the debdiff:

        debdiff zim_0.49.dsc zim_0.49ubuntu1.dsc > zim_0.49ubuntu1.debdiff

    However, when I actually try to execute this command, I get the following error:

        debdiff: fatal error at line 314: Can't read file: zim_0.49.dsc

    Upon inspection of the directory in which the files created by debuild -S (step 6) are deposited, I find

        zim_0.49ubuntu1_source.changes
        zim_0.49ubuntu1.dsc
        zim_0.49ubuntu1.tar.gz
        zim_0.49ubuntu1_source.build

    but no sign of zim_0.49.dsc. I could probably create one by debuilding the package as soon as I check out the code, before starting work, but that would add an extraneous entry to the changelog. Is there a step missing from the guide that creates zim_0.49.dsc, or is the file itself missing from the source?
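
    The original .dsc is not produced by your own debuild run; it has to come from the unmodified source package. A sketch of one way to fetch it so debdiff has both files to compare; the file names follow the question, but the actual names depend on the version currently in the archive, so the matching .dsc may need to be fetched with dget instead:

        #!/bin/bash
        # Download the pristine source package (provides the original .dsc),
        # then diff it against the newly built one.
        apt-get source zim          # or: dget <URL of the matching zim .dsc on Launchpad>
        debdiff zim_0.49.dsc zim_0.49ubuntu1.dsc > zim_0.49ubuntu1.debdiff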

    Read the article

  • sysctl.conf ignores net settings

    - by Steffen Unland
    I have a little problem with sysctl on an Ubuntu 10.04 LTS system. When I set the sysctl values with "sysctl -w" everything works fine, but when I try to use the sysctl.conf file, the net settings are ignored. For example, my sysctl.conf:

        # /etc/sysctl.conf - Configuration file for setting system variables
        kernel.domainname=findme.sysctl

        # Corefiles information
        fs.suid_dumpable=2
        kernel.core_pattern=/cores/core-%e-%s-%u-%g-%p-%t

        # Functions previously found in netbase
        net.ipv4.netfilter.ip_conntrack_tcp_timeout_fin_wait=1
        net.ipv4.netfilter.ip_conntrack_tcp_timeout_close_wait=1

    When I grep for the values, I can see that the sysctl settings for net.ipv4.netfilter are not being set:

        [host:~ ] $ sysctl -a | grep domainname
        kernel.domainname = findme.sysctl
        [host:~ ] $ sysctl -a | grep "core_pattern"
        kernel.core_pattern = /cores/core-%e-%s-%u-%g-%p-%t
        [host:~ ] $ sysctl -a | grep "timeout_fin_wait"
        net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
        net.ipv4.netfilter.ip_conntrack_tcp_timeout_fin_wait = 120
        [host:~ ] $ sysctl -a | grep "timeout_close_wait"
        net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
        net.ipv4.netfilter.ip_conntrack_tcp_timeout_close_wait = 60

    Can somebody help me solve this problem? If you need more information I can post it. Cheers, Steffen
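
    A common cause of exactly this symptom is ordering: the net.ipv4.netfilter.* keys only exist once the conntrack module has been loaded, and /etc/sysctl.conf is applied early in boot, before anything has pulled that module in. A sketch of how to check and work around it, with the module name an assumption for this setup:

        #!/bin/bash
        # Load the conntrack module, then re-apply the settings from sysctl.conf.
        sudo modprobe nf_conntrack_ipv4
        sudo sysctl -p /etc/sysctl.conf
        # To make this survive a reboot, add "nf_conntrack_ipv4" to /etc/modules
        # so the module is already present when the sysctl settings are applied.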

    Read the article

  • RewriteRule not working at server level?

    - by Alexis Wilke
    I wanted to forbid some robots from doing certain things to my websites and decided to add a RewriteRule for that purpose. The rule works when put inside one of my <VirtualHost *:80> blocks and looks like this:

        RewriteEngine On
        RewriteCond %{HTTP_USER_AGENT} libwww-perl
        RewriteCond %{REQUEST_METHOD} POST
        RewriteRule . - [F,L]

    However, I wanted to apply it to all my websites instead of just one of them. So, with the newest layout of the Apache2 settings, I decided to put that code in the security.conf file. This file lives under /etc/apache2/conf-available/... (and yes, I have a symlink to it from the /etc/apache2/conf-enabled/... directory). However, if the definition is only in the conf-available/security.conf file, it somehow gets ignored. From the documentation, these Rewrite* commands all work at server level! Any idea what I would be missing?
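
    One thing worth checking (an assumption about the cause, not something confirmed in the question): mod_rewrite directives defined at server level are not inherited into virtual host contexts unless each vhost opts in with RewriteOptions Inherit, and RewriteEngine must also be on in the vhost. A sketch of enabling that for one vhost and reloading Apache; the vhost file name is just an example:

        #!/bin/bash
        # Let a virtual host inherit the server-level rewrite configuration, then reload.
        printf 'RewriteEngine On\nRewriteOptions Inherit\n' | \
            sudo tee -a /etc/apache2/sites-available/000-default.conf
        sudo apachectl configtest && sudo service apache2 reload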

    Read the article

  • Using Transaction Logging to Recover Post-Archived Essbase data

    - by Keith Rosenthal
    Data recovery is typically performed by restoring data from an archive. Data added or removed since the last archive took place can also be recovered by enabling transaction logging in Essbase. Transaction logging works by writing transactions to a log store. The information in the log store can then be recovered by replaying the log store entries in sequence since the last archive took place. The following information is recorded within a transaction log entry:

    - Sequence ID
    - Username
    - Start Time
    - End Time
    - Request Type

    A request type can be one of the following categories:

    - Calculations, including the default calculation as well as both server and client side calculations
    - Data loads, including data imports as well as data loaded using a load rule
    - Data clears as well as outline resets
    - Locking and sending data from SmartView and the Spreadsheet Add-In. Changes from Planning web forms are also tracked since a lock and send operation occurs during this process.

    You can use the Display Transactions command in the EAS console or the query database MAXL command to view the transaction log entries.

    Enabling Transaction Logging

    Transaction logging can be enabled at the Essbase server, application or database level by adding the TRANSACTIONLOGLOCATION essbase.cfg setting. The following is the TRANSACTIONLOGLOCATION syntax:

        TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE

    Note that you can have multiple TRANSACTIONLOGLOCATION entries in the essbase.cfg file. For example:

        TRANSACTIONLOGLOCATION Hyperion/trlog NATIVE ENABLE
        TRANSACTIONLOGLOCATION Sample Hyperion/trlog NATIVE DISABLE

    The first statement will enable transaction logging for all Essbase applications, and the second statement will disable transaction logging for the Sample application. As a result, transaction logging will be enabled for all applications except the Sample application.

    A location on a physical disk other than the disk where ARBORPATH or the disk files reside is recommended to optimize overall Essbase performance.

    Configuring Transaction Log Replay

    Although transaction log entries are stored based on the LOGLOCATION parameter of the TRANSACTIONLOGLOCATION essbase.cfg setting, copies of data load and rules files are stored in the ARBORPATH/app/appname/dbname/Replay directory to optimize the performance of replaying logged transactions. The default is to archive client data loads, but this configuration setting can be used to archive server data loads (including SQL server data loads) or both client and server data loads.

    To change the type of data to be archived, add the TRANSACTIONLOGDATALOADARCHIVE configuration setting to the essbase.cfg file. Note that you can have multiple TRANSACTIONLOGDATALOADARCHIVE entries in the essbase.cfg file to adjust settings for individual applications and databases.

    Replaying the Transaction Log and Transaction Log Security Considerations

    To replay the transactions, use either the Replay Transactions command in the EAS console or the alter database MAXL command using the replay transactions grammar. Transactions can be replayed either after a specified log time or using a range of transaction sequence IDs.

    The default when replaying transactions is to use the security settings of the user who originally performed the transaction. However, if that user no longer exists or that user's username was changed, the replay operation will fail.
    Instead of using the default security setting, add the REPLAYSECURITYOPTION essbase.cfg setting to use the security settings of the administrator who performs the replay operation. REPLAYSECURITYOPTION 2 will explicitly use the security settings of the administrator performing the replay operation. REPLAYSECURITYOPTION 3 will use the administrator security settings if the original user's security settings cannot be used.

    Removing Transaction Logs and Archived Replay Data Load and Rules Files

    Transaction logs and archived replay data load and rules files are not automatically removed and are only removed manually. Since these files can consume a considerable amount of space, the files should be removed on a periodic basis. The transaction logs should be removed one database at a time instead of all databases simultaneously. The data load and rules files associated with the replayed transactions should be removed in chronological order from earliest to latest. In addition, do not remove any data load and rules files with a timestamp later than the timestamp of the most recent archive file.

    Partitioned Database Considerations

    For partitioned databases, partition commands such as synchronization commands cannot be replayed. When recovering data, the partition changes must be replayed manually and logged transactions must be replayed in the correct chronological order. If the partitioned database includes any @XREF commands in the calc script, the logged transactions must be selectively replayed in the correct chronological order between the source and target databases.

    References

    For additional information, please see the Oracle EPM System Backup and Recovery Guide. For EPM 11.1.2.2, the link is http://docs.oracle.com/cd/E17236_01/epm.1112/epm_backup_recovery_1112200.pdf

    Read the article
