Search Results

Search found 17202 results on 689 pages for 'folder permissions'.


  • How to install a NuGet package manually :)

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/13/how-to-install-nuget-package-manually.aspx When you install a NuGet package in your project, Visual Studio caches it for future use. If you are offline for a while, you can add this cache directory as a NuGet source. To see the folder where NuGet caches packages, go to Tools > Options > Package Manager > Source > Cached packages > Browse. You can add this cached folder as a source. To use a NuGet package without Visual Studio, you can try this trick: open C:\Users\{yourusername}\AppData\Local\NuGet\Cache in Explorer, then rename a package file as described here, for example bootstrap.3.0.2.nupkg to bootstrap.3.0.2.rar. Open the .rar and look at the content folder; it contains the assembly/library that NuGet installs into your project.
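    Since a .nupkg is just a zip archive, renaming it to .rar is only one way to get at the contents. A rough sketch of the same trick from a Unix-style shell (for example Git Bash, where Windows variables such as LOCALAPPDATA are visible), assuming unzip is installed and using bootstrap.3.0.2 purely as an example package name:

    ```sh
    # Extract a cached NuGet package without Visual Studio.
    cd "$LOCALAPPDATA/NuGet/Cache"                      # same folder as C:\Users\{you}\AppData\Local\NuGet\Cache
    unzip -o bootstrap.3.0.2.nupkg -d bootstrap.3.0.2   # a .nupkg is an ordinary zip archive
    ls bootstrap.3.0.2                                  # the lib/content folders hold what NuGet would copy into a project
    ```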

    Read the article

  • Does the Instant Preview in Google Webmaster Tools take robots.txt into account?

    - by rockyraw
    Is that the way to go if I want to visually see what Googlebot sees? I'm trying to check a folder which I have just blocked in my robots.txt. If I fetch the folder as Googlebot, it fetches fine, so that tells me nothing about whether the block is working. I know there's a tool to check for blocking, but it depends on the robots.txt you feed it. So I tried the Instant Preview instead, and I don't get a preview for what the bot sees ("pre-render"), which I think means the robots.txt block is working; however, I don't see that the bot tried to access my updated robots.txt beforehand, so I'm not sure how it knows that this folder is blocked. (It does preview another new folder that is not blocked.)

    Read the article

  • How to get files that have been added/modified in a batch file

    - by Chris L
    I have the following batch file, which concatenates all of the files in a folder that have a .sql extension:
    set func=%~dp0%Stored Procedures\*.sql
    for %%i in (%func%) do type "%%i" >>InstallScript.sql
    We use SVN as our repository, and we're using branching. Currently the script concatenates all of the .sql files, even the ones that haven't changed. I'd like to change it so it only concatenates files that have been modified and/or created after the branch was created. We can do that by looking at the datetime on the .svn folder in each folder (there are Stored Procedures, Views and Functions subfolders), but I don't know how to do that with batch files. Ideally something like this (pseudocode):
    set func=%~dp0%Stored Procedures\*.sql
    set branchDateTime=GetDateTime(%~dp0%.svn) <- gets the datetime when the .svn folder was created
    for %%i in (%func%) {
        if (%%i.LastModifiedOrCreated > branchDateTime) do type "%%i" >> InstallScript.sql
    }
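    Plain batch has no easy file-time comparison, but a rough sketch of the same idea is possible from a Unix-style shell (for example Git Bash on Windows, which other posts on this page already use), leaning on find's -newer test against the .svn folder's timestamp as the "branch created" marker, which is the same assumption the question makes. The folder names are taken from the question and may need adjusting:

    ```sh
    #!/bin/sh
    # Concatenate only the .sql files changed after each subfolder's .svn folder timestamp.
    out=InstallScript.sql
    : > "$out"    # start with an empty install script
    for dir in "Stored Procedures" Views Functions; do
        # -newer keeps files whose modification time is later than that of "$dir/.svn"
        find "$dir" -maxdepth 1 -name '*.sql' -newer "$dir/.svn" -exec cat {} + >> "$out"
    done
    ```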

    Read the article

  • Cron won't execute if I am not logged in

    - by JonaMX
    I have a cron job that makes a backup of MySQL. If I execute the script from the shell it works fine, and it also runs fine if I happen to be logged in when the cron job is supposed to execute, but if I'm not logged in it just won't run. I don't know what could be happening; any suggestions? Crontab:
    00 04 * * * /home/administrador/scripts/respaldo.sh
    respaldo.sh:
    #!/bin/sh
    mysql -uroot -p[PASS] ccs < /home/administrador/scripts/limpia.sql
    mysqldump -uroot -p[PASS] --routines ccs > /home/administrador/backups/backup_$(date +%Y%m%d).sql
    mysqlcheck -uroot -p[PASS] --auto-repair --optimize ccs
    cd /home/administrador/backups/
    tar -zcf backup_$(date +%Y%m%d).tgz backup_$(date +%Y%m%d).sql
    rm backup_$(date +%Y%m%d).sql
    find -name '*.tgz' -type f -mtime +90 -exec rm -f {} \;
    respaldo.sh has execute permission. SOLUTION: The problem was that the /home/administrador directory is an encrypted folder, so when the user is logged in the folder is decrypted and everything works, but when the user is logged off the folder is encrypted and cron can't access that path. I've changed the cron job and the backup to use an unencrypted folder and to run as root, and now everything is working well. Thanks to all for your help!
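    A minimal sketch of the fix described in the SOLUTION above; the paths /opt/scripts and /var/backups/mysql are made up for illustration, the point being only that they sit outside the encrypted home directory and that the job runs from root's crontab:

    ```sh
    # Move the script and its output out of the encrypted home, then schedule it as root.
    sudo mkdir -p /opt/scripts /var/backups/mysql
    sudo cp /home/administrador/scripts/respaldo.sh /opt/scripts/
    # Edit /opt/scripts/respaldo.sh so it writes into /var/backups/mysql
    # instead of /home/administrador/backups, then add it to root's crontab:
    sudo crontab -e
    #   00 04 * * * /opt/scripts/respaldo.sh
    ```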

    Read the article

  • Dark NetBeans

    - by Geertjan
    Let's make NetBeans IDE look like this. Not saying it's a nice color or anything, just that it's possible to do so: I changed the coloring in the Java editor by going to Tools | Options, then chose "Fonts & Colors", then selected the "Norway Today" profile and changed the background setting to Dark Gray. Next, I put this themes.xml file into the "config" folder of the NetBeans IDE user directory, which you can identify as such by going to Help | About in the IDE. Go to the exact location defined by "User directory" in Help | About, and then go to the "config" folder within that folder: The "config" folder of the user directory is the readable/writable root of the NetBeans IDE virtual filesystem. If a themes.xml file is found there, it is used, as described here. Then, in netbeans.conf file, which is not in the NetBeans user directory but in the NetBeans installation directory, within its "etc" folder, I added the following to "netbeans_default_options": -J-Dnetbeans.useTheme=true --laf Metal The first of these enables usage of the themes.xml file, i.e., it notifies NetBeans IDE at startup to load the themes.xml file and to apply the content to the relevant UI components, while the second is needed because most/all of the themes only work if you're using the Metal Look and Feel. Note: I must add that in most cases, whatever it is you're trying to achieve via a themes.xml file can probably be achieved in a different, and better, way. The themes.xml mechanism has been there forever, but is not actively supported or tested, though it may work for the specific thing you're trying to do anyway. For example, if you're trying to change the background color of a TopComponent, use the paintComponent method of the TopComponent instead of using a themes.xml file.

    Read the article

  • How can I prevent JungleDisk/Mac OS X (10.6) from creating a local volume for a removed external drive?

    - by Rew
    OK, here is the situation: I use JungleDisk to sync an online folder onto an external drive connected to my Mac. If I right-click in Finder, click Go to Folder... and type /Volumes/, I see the drive linked there. Once I remove the external drive, an actual folder is created there with the name of the external drive, and JungleDisk continues to copy files into this folder rather than stopping. Is this a feature of Mac OS X? Can I turn it off? After I re-connect my external drive, the link to the drive is appended with a 1 (so if I called the drive SpareDrive it becomes SpareDrive 1, as the newly created folder is called SpareDrive). I realise my explanation isn't very clear, but if anyone understands this and knows how to prevent it happening, please let me know. PS: I have a low reputation as I don't use this site often; I tend to use Stack Overflow, but I will check back here for answers.

    Read the article

  • How are Linux files and applications organized?

    - by doup
    Hi there, I'm a newbie Linux (Ubuntu) user and I'd like to know if someone can give some advice on where to install stuff, which folders not to touch, what each folder means, and so on. My first concern is: should everything go into my home folder? I've installed Komodo Edit (an IDE) "manually" and it has gone into my home folder; I really don't like the idea of having an application there. (In Windows I used to have my workfiles/pictures/downloads... partition and then the OS partition with all the apps.) So, is there any place where I could install this software? Any advice for keeping my home folder ordered? Maybe I should create an apps folder in my home dir? Thanks in advance. :) PS: most of the time I use apt to install stuff, but I don't always find the software I want there...

    Read the article

  • Reading a file from an alternate location

    - by Highstaker
    I have a certain file (data.abc) located in, say, my home folder. I make a copy of it in another location (for example, /mnt/ramtemp/). Whenever the file in my home folder is accessed by any process, I want it to be read not from the home folder but from /mnt/ramtemp/. As you might have guessed from the path of the latter, that is where I mount the ramfs. So, basically, I want a process to access not the file on my HDD (which is slower) but its copy on ramfs (which is way faster). At the same time, I want the file data.abc to remain in my home folder under that name; I don't want to rename or delete it. Is there any way I could get the system to redirect processes to read the file from the alternative location whenever they try to read it from my home folder?
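    One possible approach (not from the post itself, just a sketch): a bind mount lays the ramfs copy over the original path, so any process opening the file in the home folder actually reads the copy in /mnt/ramtemp, while the on-disk file keeps its name and location:

    ```sh
    # Copy the file into ramfs, then bind-mount the fast copy over the original path.
    cp ~/data.abc /mnt/ramtemp/data.abc
    sudo mount --bind /mnt/ramtemp/data.abc ~/data.abc   # reads of ~/data.abc now hit the ramfs copy
    # When finished (or before editing the real file), remove the overlay:
    sudo umount ~/data.abc
    ```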

    Read the article

  • Emails disappear when moving from Inbox to anywhere else

    - by Jason T.
    Whenever a client of mine has a message open, they click Move to Folder and the list of recent folders comes up. They select a folder, but the message does not appear in there. What I have found is that if I right-click the message and move it that way, the message moves successfully. Likewise, if the message is open and I use the Move to Folder shortcut but select Other Folder and choose a folder there, it moves correctly. This leads me to believe that the folders in the recent folders list point somewhere else. So my question is: how can I find out where those folders point to?

    Read the article

  • Access Samba shares before login

    - by everlearnin
    I installed Ubuntu 12.10 on an old Compaq C300. I want to use it to share my movies over the LAN so that the rest of my family can watch without hijacking my PC. I installed Samba and shared the Public folder under the home folder of my user account, but I can only access this folder from my Windows 7 PC while I am logged in. If I log out, or restart without logging in, I cannot access the shared folder. I am used to the Windows service that starts at boot and makes shared files available over the network before any user has logged in. How can I accomplish this in Ubuntu?
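    One workaround, sketched here only as an assumption (it mirrors the encrypted-home issue in the cron question above): share a folder that lives outside the home directory, so smbd can serve it from boot with nobody logged in. The /srv/movies path and the share name are made up for illustration:

    ```sh
    # Keep the media somewhere that does not depend on a user session.
    sudo mkdir -p /srv/movies
    sudo cp -r ~/Videos/* /srv/movies/          # or move the files; the source path is just an example

    # Append a guest-readable share to smb.conf and restart Samba.
    printf '%s\n' '[movies]' '   path = /srv/movies' '   read only = yes' '   guest ok = yes' \
        | sudo tee -a /etc/samba/smb.conf
    sudo restart smbd    # Upstart job name on Ubuntu 12.10; newer releases use: sudo systemctl restart smbd
    ```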

    Read the article

  • Can't find my DVD drive in Ubuntu 12.04

    - by user72953
    I am completely new to Linux; due to some problems with my Windows install I decided to give Linux a try. I am quite liking it except for some minor issues. The biggest of them is that I cannot find my DVD drive in Ubuntu. After I insert a disc, the drive's green light flickers and then nothing happens. I don't get anything in my Unity launcher or on the desktop. There is no entry inside my /dev folder that indicates a DVD drive. There is a cdrom folder in my root folder, but nothing is in it. If you know what's wrong, I'd appreciate the help. I'm a complete newbie, so don't expect me to know anything about Ubuntu, although I have spent lots of hours in the last 2 days googling my issue, so I have basic information...
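    A few generic diagnostic commands that might narrow this down (standard checks, nothing specific to this machine):

    ```sh
    # Does the kernel see an optical drive at all?
    cat /proc/sys/dev/cdrom/info            # lists detected drives, if the cdrom driver registered one
    ls -l /dev/sr0 /dev/cdrom 2>/dev/null   # typical device node and its symlink
    dmesg | grep -i -E 'cdrom|sr0|dvd'      # kernel messages about the drive and inserted discs
    # If /dev/sr0 exists, try mounting a data disc by hand:
    sudo mkdir -p /media/cdrom && sudo mount /dev/sr0 /media/cdrom
    ```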

    Read the article

  • Our company has 100,000s+ photos, how to store and browse/find these efficiently?

    - by tobefound
    We currently store our photos in a structure like this:
    folder\1\10000 - 19999.JPG|ORF|TIF (10 000 files)
    folder\2\20000 - 29999.JPG|ORF|TIF (10 000 files)
    etc...
    They are stored on 4 different 2TB D-Link NASes attached and shared on our office network (\\nas1, \\nas2, and so on...). Problems: 1) When a client (Windows only, Vista and 7) wants to browse, say, the \\nas1\folder\1\ folder, performance is quite poor: the list takes a long time to generate in the Explorer window, even with icons turned off. 2) Initial access to the NAS itself is sometimes slow. SAN disks are too expensive for us, even with iSCSI interface/switch technology. I've read a lot of tech pages saying that storing 100,000+ files in one single folder shouldn't be a problem, but we don't dare go there now that we experience problems at the 10K level. All input greatly appreciated, /T

    Read the article

  • Why was the cryptographic key not provided on first run of Ubuntu 12.04?

    - by user64720
    I installed Ubuntu 12.04 a few days ago and, strangely, I missed the part where you choose to encrypt the home folder. However, I have already run the commands from this question (How to check if your home folder and swap partition are encrypted using terminal?) to check whether my home folder and swap partition are encrypted, and they are. So why is it that Ubuntu did not give me the cryptographic key the same way it did when I installed Ubuntu 11.04?

    Read the article

  • How to Format a Hard Drive

    - by JOLGOM
    After installing Ubuntu 11.10 and making an additional partition for my documents, I find that the space reserved for those documents is gone. It does not appear anywhere, and after several tries all I got was the /home folder with the lost+found folder in it, saying "The content of this folder can not be accessed. You do not have the sufficient permissions to view the content". If anyone knows what to do, please answer.
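    Some generic checks (a sketch, not specific to this install) that may show whether the documents partition still exists and where it is mounted; lost+found itself is normally owned by root, which explains the permissions message:

    ```sh
    sudo fdisk -l                  # list the partitions on every disk
    df -h                          # show what is currently mounted and its free space
    cat /etc/fstab                 # check whether the documents partition is set to mount at boot
    sudo ls -la /home/lost+found   # looking inside lost+found needs root
    ```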

    Read the article

  • Windows Sharing requires password

    - by Linux Intel
    I have 3 machines on my local network: Machine A, Machine B and Machine C. OS on all machines: Windows 7 64-bit. Sharing permissions on all machines: Everyone (Read/Write), no domain. Shared folder name: project. Machine A is sharing the folder over the network without a password. Machine B is sharing the folder over the network without a password. Machine C is sharing the folder over the network without a password. Machine A can access B and C normally without a password being required. Machine B can access A and C normally without a password being required. Machine C can access Machine B normally without a password. My problem is that Machine C requires a password when it accesses Machine A, even though the shared folder on Machine A is not password protected and Machine B can access Machine A without a password! How can I solve this?

    Read the article

  • Subdomain redirects to subdirectory [duplicate]

    - by redraw
    This question already has an answer here: How do I get the root index page to redirect to a subdirectory without affecting SEO? (1 answer) Suppose I have a folder called "support" inside the root folder "/public_html". I've added a subdomain in my server's panel so that going to "support.mydomain.com" redirects to "mydomain.com/support". The problem is that the redirection is reflected in the browser's address bar, and I want the subdomain to work like a base domain, i.e. "support.mydomain.com/folder-inside-support". Is this something to do with the .htaccess file?

    Read the article

  • Use alias source with relative path

    - by notme
    I want to add an alias file to my project folder so I can quickly open and edit files in it with a simple shell command. To make it more portable, I would like to use only relative paths. I want something like this:
    ### .profile
    source /my/project/folder/aliases.bash
    and
    ### aliases.bash
    alias editprojectfiles="edit [/my/project/folder/]afile.txt"
    The problem for me now is how to retrieve [/my/project/folder/] automatically. I tried to use the $PWD variable, but the result is that the alias points to the folder of the .profile file and not that of aliases.bash. Is there a way to do this?
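    A common way to handle this (a sketch, not from the post): bash records the path of the file currently being sourced in BASH_SOURCE, so aliases.bash can work out its own folder at source time and build the alias from that. The "edit" command stands in for whatever editor the alias is meant to launch:

    ```sh
    ### aliases.bash -- resolves its own directory instead of hard-coding /my/project/folder
    project_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
    # The folder is fixed at the moment the file is sourced, so the alias keeps
    # pointing at the project even if the current directory changes later.
    alias editprojectfiles="edit \"$project_dir/afile.txt\""
    ```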

    Read the article

  • IIS Strategies for Accessing Secured Network Resources

    - by ErikE
    Problem: A user connects to a service on a machine, such as an IIS web site or a SQL Server database. The site or the database need to gain access to network resources such as file shares (the most common) or a database on a different server. Permission is denied. This is because the user the service is running under doesn't have network permissions in the first place, or if it does, it doesn't have rights to access the remote resource. I keep running into this problem over and over again and am tired of not having a really solid way of handling it. Here are some workarounds I'm aware of: Run IIS as a custom-created domain user who is granted high permissions If permissions are granted one file share at a time, then every time I want to read from a new share, I would have to ask a network admin to add it for me. Eventually, with many web sites reading from many shares, it is going to get really complicated. If permissions are just opened up wide for the user to access any file shares in our domain, then this seems like an unnecessary security surface area to present. This also applies to all the sites running on IIS, rather than just the selected site or virtual directory that needs the access, a further surface area problem. Still use the IUSR account but give it network permissions and set up the same user name on the remote resource (not a domain user, a local user) This also has its problems. For example, there's a file share I am using that I have full rights to for sharing, but I can't log in to the machine. So I have to find the right admin and ask him to do it for me. Any time something has to change, it's another request to an admin. Allow IIS users to connect as anonymous, but set the account used for anonymous access to a high-privilege one This is even worse than giving the IIS IUSR full privileges, because it means my web site can't use any kind of security in the first place. Connect using Kerberos, then delegate This sounds good in principle but has all sorts of problems. First of all, if you're using virtual web sites where the domain name you connect to the site with is not the base machine name (as we do frequently), then you have to set up a Service Principal Name on the webserver using Microsoft's SetSPN utility. It's complicated and apparently prone to errors. Also, you have to ask your network/domain admin to change security policy for both the web server and the domain account so they are "trusted for delegation." If you don't get everything perfectly right, suddenly your intended Kerberos authentication is NTLM instead, and you can only impersonate rather than delegate, and thus no reaching out over the network as the user. Also, this method can be problematic because sometimes you need the web site or database to have permissions that the connecting user doesn't have. Create a service or COM+ application that fetches the resource for the web site Services and COM+ packages are run with their own set of credentials. Running as a high-privilege user is okay since they can do their own security and deny requests that are not legitimate, putting control in the hands of the application developer instead of the network admin. Problems: I am using a COM+ package that does exactly this on Windows Server 2000 to deliver highly sensitive images to a secured web application. I tried moving the web site to Windows Server 2003 and was suddenly denied permission to instantiate the COM+ object, very likely registry permissions. 
I trolled around quite a bit and did not solve the problem, partly because I was reluctant to give the IUSR account full registry permissions. That seems like the same bad practice as just running IIS as a high-privilege user. Note: This is actually really simple. In a programming language of your choice, you create a class with a function that returns an instance of the object you want (an ADODB.Connection, for example), and build a dll, which you register as a COM+ object. In your web server-side code, you create an instance of the class and use the function, and since it is running under a different security context, calls to network resources work. Map drive letters to shares This could theoretically work, but in my mind it's not really a good long-term strategy. Even though mappings can be created with specific credentials, and this can be done by others than a network admin, this also is going to mean that there are either way too many shared drives (small granularity) or too much permission is granted to entire file servers (large granularity). Also, I haven't figured out how to map a drive so that the IUSR gets the drives. Mapping a drive is for the current user, I don't know the IUSR account password to log in as it and create the mappings. Move the resources local to the web server/database There are times when I've done this, especially with Access databases. Does the database have to live out on the file share? Sometimes, it was just easiest to move the database to the web server or to the SQL database server (so the linked server to it would work). But I don't think this is a great all-around solution, either. And it won't work when the resource is a service rather than a file. Move the service to the final web server/database I suppose I could run a web server on my SQL Server database, so the web site can connect to it using impersonation and make me happy. But do we really want random extra web servers on our database servers just so this is possible? No. Virtual directories in IIS I know that virtual directories can help make remote resources look as though they are local, and this supports using custom credentials for each virtual directory. I haven't been able to come up with, yet, how this would solve the problem for system calls. Users could reach file shares directly, but this won't help, say, classic ASP code access resources. I could use a URL instead of a file path to read remote data files in a web page, but this isn't going to help me make a connection to an Access database, a SQL server database, or any other resource that uses a connection library rather than being able to just read all the bytes and work with them. I wish there was some kind of "service tunnel" that I could create. Think about how a VPN makes remote resources look like they are local. With a richer aliasing mechanism, perhaps code-based, why couldn't even database connections occur under a defined security context? Why not a special Windows component that lets you specify, per user, what resources are available and what alternate credentials are used for the connection? File shares, databases, web sites, you name it. I guess I'm almost talking about a specialized local proxy server. Anyway, so there's my list. I may update it if I think of more. Does anyone have any ideas for me? My current problem today is, yet again, I need a web site to connect to an Access database on a file share. Here we go again...

    Read the article

  • How does Visual Studio's source control integration work with Perforce?

    - by Weeble
    We're using Perforce and Visual Studio. Whenever we create a branch, some projects will not be bound to source control unless we use "Open from Source Control", but other projects work regardless. From my investigations, I know some of the things involved: In our .csproj files, there are these settings: SccProjectName, SccLocalPath, SccAuxPath, SccProvider. Sometimes they are all set to "SAK", sometimes not. It seems things are more likely to work if they say "SAK". In our .sln file, there are settings for many of the projects: SccLocalPath#, SccProjectFilePathRelativizedFromConnection#, SccProjectUniqueName# (the # is a number that identifies each project). SccLocalPath is a path relative to the solution file. Often it is ".", sometimes it is the folder that the project is in, and sometimes it is ".." or "..\..", and it seems to be bad for it to point to a folder above the solution folder. The relativized one is a path from that folder to the project file. It will be missing entirely if SccLocalPath points to the project's folder. If SccLocalPath has ".." in it, this path might include folder names that are not the same between branches, which I think causes problems. So, to finally get to the specifics I'd like to know: What happens when you do "Change Source Control" and bind projects? How does Visual Studio decide what to put in the project and solution files? What happens when you do "Open from Source Control"? What's this "connection" folder that SccLocalPath and SccProjectFilePathRelativizedFromConnection refer to, and how do Visual Studio and Perforce pick it? Is there some recommended way to make the source control bindings continue to work even when you create a new branch of the solution?

    Read the article

  • "type" Command Not Working As Expected on Git Bash

    - by trysis
    The type command, in Linux, returns the location, on the filesystem, of the given file, if it is in the current folder or the $PATH. This functionality is also available through Windows with the Git Bash command line program. The command also returns a file's location given the file without its extension (.exe, .vbs, etc.) However, I have run into what seems like a strange corner case where the file exists on the $PATH but doesn't get returned using the command. I am thinking of buying a new computer soon, so I looked up the method of transferring the license key from one computer to another, in preparation for actually doing this. The method I found mentioned the files slmgr.vbs and slui.exe, both of which reside in the C:/Windows\System32 folder, which is in my $PATH, as usual for a Windows computer. However, these two files aren't showing up when I use the type command. Also, neither gets executed when I call the files as commands without their extensions in Git Bash, and only slmgr.vbs gets executed when I call them with the extensions. Finally, slmgr.vbs is shown when listing the folder's contents in Git Bash, as well, but slui.exe isn't. I thought this might have to do with permissions, and, indeed, both files have very restrictive permissions, as you can see in the pictures below, but they both have the same permissions, which wouldn't explain why one gets executed and the other doesn't when called directly, nor why one file is listed on command line but the other isn't. C:\Windows\System32 folder, proving the files exist: File permissions for the Users and Administrators groups for the two files (they are identical): And the folder: type command and its output in Git Bash for the 2 files, plus listing the files in the folder (using grep to filter as the folder is huge), as well as listing part of the $PATH (keep in mind, for all these, that Git Bash changes the paths as they are displayed): Sean@MYPC ~ $ type -a slmgr sh.exe": type: slmgr: not found Sean@MYPC ~ $ type -a slmgr.vbs sh.exe": type: slmgr.vbs: not found Sean@MYPC ~ $ type -a slui sh.exe": type: slui: not found Sean@MYPC ~ $ type -a slui.exe sh.exe": type: slui.exe: not found Sean@MYPC ~ $ slmgr sh.exe": slmgr: command not found Sean@MYPC ~ $ slmgr.vbs /c/WINDOWS/system32/slmgr.vbs: line 2: syntax error near unexpected token `(' /c/WINDOWS/system32/slmgr.vbs: line 2: `' Copyright (c) Microsoft Corporation. A ll rights reserved.' Sean@MYPC ~ $ slui sh.exe": slui: command not found Sean@MYPC ~ $ slui.exe sh.exe": slui.exe: command not found Sean@MYPC ~ $ ls /c/Windows/System32/slui.exe /c/Windows/System32/slmgr.vbs ls: /c/Windows/System32/slui.exe: No such file or directory /c/Windows/System32/slmgr.vbs Sean@MYPC ~ $ echo $PATH /c/Users/Sean/bin:.:/usr/local/bin:/mingw/bin:/bin:/cmd:/c/Python33/:/c/Program Files (x86)/Intel/iCLS Client/:/c/Program Files/Intel/iCLS Client/:/c/WINDOWS/sy stem32:/c/WINDOWS:/c/WINDOWS/System32/Wbem:/c/WINDOWS/System32/WindowsPowerShell /v1.0/:/c/Program Files/Intel/Intel(R) Management Engine Components/DAL:/c/Progr am Files/Intel/Intel(R) Management Engine Components/IPT:/c/Program Files (x86)/ Intel/Intel(R) Management Engine Components/DAL:/c/Program Files (x86)/Intel/Int el(R) Management Engine Components/IPT:/c/Program Files/Intel/WiFi/bin/:/c/Progr am Files/Common Files/Intel/WirelessCommon/:/c/strawberry/c/bin:/c/strawberry/pe rl/site/bin:/c/strawberry/perl/bin:/c/Program Files (x86)/Microsoft ASP.NET/ASP. 
NET Web Pages/v1.0/:/c/Program Files/Microsoft SQL Server/110/Tools/Binn/:/c/Pro gram Files (x86)/Microsoft SQL Server/90/Tools/binn/:/c/Program Files (x86)/Open AFS/Common:/c/HashiCorp/Vagrant/bin:/c/Program Files (x86)/Windows Kits/8.1/Wind ows Performance Toolkit/:/c/Program Files/nodejs/:/c/Program Files (x86)/Git/cmd :/c/Program Files (x86)/Git/bin:/c/Program Files/Microsoft/Web Platform Installe r/:/c/Ruby200-x64/bin:/c/Users/Sean/AppData/Local/Box/Box Edit/:/c/Program Files (x86)/SSH Communications Security/SSH Secure Shell:/c/Users/Sean/Documents/Lisp :/c/Program Files/GCL-2.6.1/lib/gcl-2.6.1/unixport:/c/Chocolatey/bin:/c/Users/Se an/AppData/Roaming/npm:/c/wamp/bin/mysql/mysql5.6.12/bin:/c/Program Files/Oracle /VirtualBox:/c/Program Files/Java/jdk1.7.0_51/bin:/c/Program Files/Node-Growl:/c /chocolatey/bin:/c/Program Files/eclipse:/c/MongoDB/bin:/c/Program Files/7-Zip:/ c/Program Files (x86)/Google/Chrome/Application:/c/Program Files (x86)/LibreOffi ce 4/program:/c/Program Files (x86)/OpenOffice 4/program What's happening? Why aren't these files listed with the type command? Is this issue because of weird Windows permissions, or something even weirder? If permissions, why do they seem to have the same permissions, yet both are not handled in the same way?

    Read the article

  • Deploying .NET COM dll, getting error (0x80070002)

    - by Brett
    I have a .NET COM assembly I am attempting to deploy to a web server (IIS 6, Windows Server 2003). We have successfully deployed this assembly to our test environment, but the production environment is not working. The assembly is called from a classic ASP page. Every time that page tries to initialize the assembly with "Set LTMRender = CreateObject("LTMRender.Render")", I get the error "Error Type:, (0x80070002)". This error seems to indicate a permission-denied or file-not-found type of problem. I created a test app to see if the assembly works outside of the web page. The .exe initializes the assembly and then makes a call designed to fail, which in turn causes the assembly to produce a log file. It works if I run the .exe in the same folder as the assembly, but fails if I run it elsewhere. For some reason, the assembly is not accessible from outside its folder. I can't figure out why this won't work. Things I have confirmed: The deployment folder has adequate permissions. We have confirmed that the folder the assembly is installed in has the correct permissions for all the necessary user accounts. The assembly is signed with a strong name and was registered with regasm.exe C:_WebSites\LTMRender\LTMRender.dll /codebase /tlb:C:_WebSites\LTMRender\LTMRender.tlb. Regasm reported success. The assembly has the attribute and relevant GUIDs set correctly. Any tips? EDIT: We ran Filemon against my testapp.exe and it seems to have indicated what the problem is. When testapp.exe runs in the D:_websites\DocWebV2\ or D:_websites\DocWebV2\LTMRender\ folder, it succeeds, and Filemon shows D:_websites\DocWebV2\LTMRender\pinPDF.dll SUCCESS. If I run my testapp.exe in D:_websites\DocWebV2\Client (where my ASP pages run), it shows D:_websites\DocWebV2\pinPDF.dll NAME NOT FOUND and then D:_websites\DocWebV2\pinPDF\pinPDF.dll FILE NOT FOUND. I'm not sure why it is not looking in the correct folder if it's run under this particular folder only.

    Read the article

  • Internal bug tracking tickets - Redmine, Trac, or JIRA

    - by Tai Squared
    I've been looking at setting up Redmine, Trac, or JIRA to track issues. I want my development team to be able to create internal tickets that are never seen by clients, while clients can create/edit tickets that are seen by the internal team. From the Trac documentation, you can set permissions to create or view tickets, but it doesn't seem to allow viewing only certain tickets. It may be possible with Trac Fine Grained Permissions, but it doesn't appear so. The Redmine documentation mentions "Define your own roles and set their permissions in a click", but it doesn't appear to offer that level of granularity. From the JIRA documentation: "At the moment JIRA is only able to support security at a project level or issue level. Currently there is no field level security available." According to this question, Redmine doesn't support internal tickets, so you would have to use multiple projects. I don't want a situation where I would have to create multiple projects - one internal, one external - and have the external tickets brought into the internal repository. It seems as if this would lead to unnecessary overhead, and inevitably the projects wouldn't stay in sync. Is there any way, with any of these products (possibly through a plug-in if not in the core product itself), to specify these permissions, or to simplify having two projects with different users and permissions that must still share information?

    Read the article

  • jQuery multiple themes on one page

    - by lloydphillips
    This is driving me NUTS! I've followed the post here, which just doesn't seem to be working: http://www.filamentgroup.com/lab/using_multiple_jquery_ui_themes_on_a_single_page/ I have a base theme; for example's sake it's the Smoothness theme from the jQuery UI gallery. Then I have a 'red' theme which basically colours the buttons red. Here is the theme I created. So I go to download my theme, choose Advanced settings, set the scope to 'red' and my theme folder name to 'red', and download. First of all, I'm not entirely sure which folder I'm supposed to copy over to my project: is it the 'development-bundle\themes' folder (which contains my red folder) or the '\css\red' folder? I've tried both. The post above seems to suggest that if I copy my themes folder and link to my theme in the CSS, it'll work when I add a class of 'red' to a wrapper div or element. So I've linked the themes like so in my file: <link type="text/css" href="themes/base/jquery.ui.all.css" rel="stylesheet" /> <link type="text/css" href="themes/red/jquery.ui.all.css" rel="stylesheet" /> The base theme loads and works all hunky-dory, but the red theme doesn't. I've got a button styled like so: <input type="submit" id="btn" value="A submit button" class="red" /> I've also tried: <div class="red"> <input type="submit" id="btn" value="A submit button" /> </div> Neither works. When I remove the 'themes/base/jquery.ui.all.css' CSS file link, the buttons aren't styled at all. Crazy! I'm pulling my hair out. Where am I going wrong? Surely they should just make it easy enough to download JUST the theme folder and reference the ui.all file.

    Read the article

  • Help with 2-part question on ASP.NET MVC and Custom Security Design

    - by JustAProgrammer
    I'm using ASP.NET MVC and I am trying to separate a lot of my logic. Eventually, this application will be pretty big. It's basically a SaaS app that I need to allow for different kinds of clients to access. I have a two part question; the first deals with my general design and the second deals with how to utilize in ASP.NET MVC Primarily, there will initially be an ASP.NET MVC "client" front-end and there will be a set of web-services for third parties to interact with (perhaps mobile, etc). I realize I could have the ASP.NET MVC app interact just through the Web Service but I think that is unnecessary overhead. So, I am creating an API that will essentially be a DLL that the Web App and the Web Services will utilize. The API consists of the main set of business logic and Data Transfer Objects, etc. (So, this includes methods like CreateCustomer, EditProduct, etc for example) Also, my permissions requirements are a little complicated. I can't really use a straight Roles system as I need to have some fine-grained permissions (but all permissions are positive rights). So, I don't think I can really use the ASP.NET Roles/Membership system or if I can it seems like I'd be doing more work than rolling my own. I've used Membership before and for this one I think I'd rather roll my own. Both the Web App and Web Services will need to keep security as a concern. So, my design is kind of like this: Each method in the API will need to verify the security of the caller In the Web App, each "page" ("action" in MVC speak) will also check the user's permissions (So, don't present the user with the "Add Customer" button if the user does not have that right but also whenever the API receives AddCustomer(), check the security too) I think the Web Service really needs the checking in the DLL because it may not always be used in some kind of pre-authenticated context (like using Session/Cookies in a Web App); also having the security checks in the API means I don't really HAVE TO check it in other places if I'm on a mobile (say iPhone) and don't want to do all kinds of checking on the client However, in the Web App I think there will be some duplication of work since the Web App checks the user's security before presenting the user with options, which is ok, but I was thinking of a way to avoid this duplication by allowing the Web App to tell the API not check the security; while the Web Service would always want security to be verified Is this a good method? If not, what's better? If so, what's a good way of implementing this. I was thinking of doing this: In the API, I would have two functions for each action: // Here, "Credential" objects are just something I made up public void AddCustomer(string customerName, Credential credential , bool checkSecurity) { if(checkSecurity) { if(Has_Rights_To_Add_Customer(credential)) // made up for clarity { AddCustomer(customerName); } else // throw an exception or somehow present an error } else AddCustomer(customerName); } public void AddCustomer(string customerName) { // actual logic to add the customer into the DB or whatever // Would it be good for this method to verify that the caller is the Web App // through some method? } So, is this a good design or should I do something differently? My next question is that clearly it doesn't seem like I can really use [Authorize ...] for determining if a user has the permissions to do something. 
In fact, one action might depend on a variety of permissions and the View might hide or show certain options depending on the permission. What's the best way to do this? Should I have some kind of PermissionSet object that the user carries around throughout the Web App in Session or whatever and the MVC Action method would check if that user can use that Action and then the View will have some ViewData or whatever where it checks the various permissions to do Hide/Show?

    Read the article
