Search Results

Search found 23480 results on 940 pages for 'directory structure'.


  • Single Sign On with adLDAP and apache (xampp 1.7.3)

    - by cvack
    I've successfully managed to connect to my Active Directory if I type in a username and password. To do this I'm using a PHP script called adLDAP. But I want my users to auto sign in if they are signed in on a computer connected to the Active Directory. If I understand things right, I need to use something called Single Sign On (SSO). I've tried searching for a tutorial on how to set this up on an Apache server running on Windows 7, but with no luck. Could someone guide me in the right direction please? :)
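
    A common pattern (not from the original post) is to let Apache negotiate the Windows credentials and have PHP read the result from REMOTE_USER, with adLDAP used only for directory lookups afterwards. The sketch below assumes the mod_auth_sspi module; the directive names, paths and the adLDAP call are assumptions to verify against your versions.

      # httpd.conf excerpt (assumed mod_auth_sspi setup; paths are hypothetical)
      <Directory "C:/xampp/htdocs/intranet">
          AuthName "Intranet"
          AuthType SSPI
          SSPIAuth On
          SSPIAuthoritative On
          Require valid-user
      </Directory>

      <?php
      // Apache has already authenticated the user; strip the DOMAIN\ prefix.
      $username = preg_replace('/^.*\\\\/', '', $_SERVER['REMOTE_USER']);
      // adLDAP is then only needed for lookups, e.g. group membership
      // (method name depends on your adLDAP version -- treat as illustrative).
      $adldap = new adLDAP();
      $groups = $adldap->user_groups($username);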

    Read the article

  • Mac OS X Server Open Directory does not push Software Update settings to clients

    - by joxl
    I have an Xserve G5 running Mac OS X Server 10.5.8 configured as an Open Directory master. I have also enabled and configured the Software Update service on the machine. The SUS is configured to serve Tiger, Leopard and Snow Leopard clients (see http://discussions.apple.com/message.jspa?messageID=10297359#10297359). The clients bound to the OD are a variety of Macs running OS X 10.4, 10.5 or 10.6. In Workgroup Manager, I have created 3 machine groups, one for each client OS. Each group is configured with a custom SUS URL, and the managed client computers are members accordingly (see http://discussions.apple.com/thread.jspa?messageID=10493154#10493154).

    My problem is that the server pushes the SUS settings to some of the client machines, but not all. When I first configured all this stuff on the server (a few weeks ago) I was closely monitoring a few of the client machines to confirm that they received the custom settings. I noticed that some of the clients (10.4/5/6 alike) seemed to get the settings immediately, others didn't show the new settings until after a reboot. As I said, results are mixed across OS's, but some clients will not "sync" at all.

    My immediate thought was to unbind/rebind the problematic machines. I did this on several client computers with no success. For example, today I was working on one of the Tiger clients. I noticed it was not pointed at my local SUS, so I checked the OD binding; it was fine. Just to be sure I unbound the machine. Next, I checked WM and confirmed the computer record was gone. I noticed the machine group still had a residual (broken?) member from the unbound client; I manually removed this. Finally, I re-bound the client to OD and re-added the machine to its correct group in WM. Unfortunately, the client still pings Apple's SUS for updates. Just to play it safe I rebooted the client, but to no avail, it will not see my local SUS.

    To confirm that there is nothing wrong with the server, or the client's connection to it, I forcefully pointed the machine at my SUS:

      sudo defaults write /Library/Preferences/com.apple.SoftwareUpdate CatalogURL "$LOCAL_SUS_URL"

    and the machine successfully updated off my local server. Great, successful updates, but problem not solved. I've done exhaustive reading on discussions.apple.com (not saying I read everything, I'm just saying I have read a lot) without a good answer. The discouraging thing is that a lot of OD problems I've read about only result in the sysadmin completely reinstalling the server, or OD, or some other similarly heavy-handed operation. At this point, I am not willing to go that route. I still have hope that I can find the reason for this flaky behavior. If anyone can point me in a helpful direction it would be much appreciated.

    EDIT: Indeed, some files are being pushed to the client:

      # from client machine:
      $ sudo find /Library -type f -name com.apple.SoftwareUpdate.plist
      /Library/Managed Preferences/com.apple.SoftwareUpdate.plist
      /Library/Managed Preferences/username/com.apple.SoftwareUpdate.plist
      /Library/Preferences/com.apple.SoftwareUpdate.plist

    A few weeks ago, prior to my (previously mentioned) modifications, the SUS was still running "stock", which meant it could not serve SL (10.6) machines. At that time, the Software Update settings were set up in WM under User Groups. This didn't make any sense because some users work on multiple machines with different OS's. Before creating Machine Groups in WM, I deleted all the SU settings from the User Group Preferences.

    This just makes the whole thing more confusing, because when I see a file here:

      /Library/Managed Preferences/username/com.apple.SoftwareUpdate.plist

    I assume it's still remaining from the "old" settings, because I wouldn't think a Machine Setting belongs there. Despite all the com.apple.SoftwareUpdate.plist files hanging around under Managed Preferences, why does the client machine still call home to Apple and not my SUS?

      # on client machine:
      $ date
      Tue Jan 25 17:01:46 EST 2011
      $ softwareupdate --list
      Software Update Tool
      Copyright 2002-2005 Apple

      No new software available.

    switch terminals...

      # on server:
      $ tail -n1 /var/log/swupd/swupd_access_log
      10.x.x.x - - [25/Jan/2011:15:54:29 -0500] XXXX POST "/cgi-bin/SoftwareUpdateServerStats" 200 13 ...

    Notice the date of the client softwareupdate run and the latest access in the SUS server log; the server never heard a peep from that client.
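
    One quick check worth adding (my suggestion, not part of the original post) is to compare the managed preference the client actually received with the effective Software Update preference, on the same client that is ignoring the local SUS:

      # paths as reported above; run on a problem client
      defaults read "/Library/Managed Preferences/com.apple.SoftwareUpdate" CatalogURL
      defaults read /Library/Preferences/com.apple.SoftwareUpdate CatalogURL
      sudo softwareupdate --list    # should contact whichever URL wins

    If the managed plist shows the local URL but the effective preference does not, the problem is in how the client composes its managed preferences rather than on the server side.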

    Read the article

  • file layout and setuptools configuration for the python bit of a multi-language library

    - by dan mackinlay
    So we're writing a full-text search framework for MongoDB. MongoDB is pretty much javascript-native, so we wrote the javascript library first, and it works. Now I'm trying to write a python framework for it, which will be partially in python, but partially use those same stored javascript functions - the javascript functions are an intrinsic part of the library. On the other hand, the javascript framework does not depend on python. Since they are pretty intertwined, it seems like it's worthwhile keeping them in the same repository. I'm trying to work out a way of structuring the whole project to give the javascript and python frameworks equal status (maybe a ruby driver or whatever in the future?), but still allow the python library to install nicely. Currently it looks like this (simplified a little):

      javascript/jstest/test1.js
      javascript/mongo-fulltext/search.js
      javascript/mongo-fulltext/util.js
      python/docs/indext.rst
      python/tests/search_test.py
      python/tests/__init__.py
      python/mongofulltextsearch/__init__.py
      python/mongofulltextsearch/mongo_search.py
      python/mongofulltextsearch/util.py
      python/setup.py

    I've skipped out a few files for simplicity, but you get the general idea; it's a pretty much standard python project... except that it depends critically on a whole bunch of javascript which is stored in a sibling directory tree. What's the preferred setup for dealing with this kind of thing when it comes to setuptools? I can work out how to use package_data etc to install data files that live inside my python project as per the setuptools docs. The problem is if I want to use setuptools to install stuff, including the javascript files from outside the python code tree, and then also access them in a consistent way when I'm developing the python code and when it is easy_installed to someone's site. Is that supported behaviour for setuptools? Should I be using paver or distutils2 or Distribute or something? (basic distutils is not an option; the whole reason I'm doing this is to enable requirements tracking) How should I be reading the contents of those files into python scripts?
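
    Not a full answer, just a sketch of the package_data route, under the assumption that a build step copies or symlinks the javascript into the python package before the sdist is built; names below are illustrative, not the project's real metadata. pkg_resources then reads the files the same way in development and after easy_install.

      # python/setup.py -- sketch only
      from setuptools import setup, find_packages

      setup(
          name='mongofulltextsearch',
          version='0.1',
          packages=find_packages(),
          # assumes the .js files are copied (or symlinked) under the package,
          # e.g. mongofulltextsearch/js/search.js, before building the sdist
          package_data={'mongofulltextsearch': ['js/*.js']},
      )

      # at runtime, read the bundled javascript in a way that also works from a zipped egg
      import pkg_resources
      search_js = pkg_resources.resource_string('mongofulltextsearch', 'js/search.js')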

    Read the article

  • "jpeglib.h: No such file or directory" ghostscript port in OPENBSD

    - by holms
    Hello, I have a problem compiling ghostscript from ports on OpenBSD 4.7. I have jpeg-7 installed, and I have the latest port tree for OpenBSD 4.7.

      ===> Building for ghostscript-8.63p11
      mkdir -p /usr/ports/pobj/ghostscript-8.63p11/ghostscript-8.63/obj
      gmake LDFLAGS='-L/usr/local/lib -shared' GS_XE=./obj/../obj/libgs.so.11.0 STDIO_IMPLEMENTATION=c DISPLAY_DEV=./obj/../obj/display.dev BINDIR=./obj/../obj GLGENDIR=./obj/../obj GLOBJDIR=./obj/../obj PSGENDIR=./obj/../obj PSOBJDIR=./obj/../obj CFLAGS='-O2 -fno-reorder-blocks -fno-reorder-functions -fomit-frame-pointer -march=i386 -fPIC -Wall -Wstrict-prototypes -Wmissing-declarations -Wmissing-prototypes -fno-builtin -fno-common -DGS_DEVS_SHARED -DGS_DEVS_SHARED_DIR=\"/usr/local/lib/ghostscript/8.63\"' prefix=/usr/local ./obj/../obj/gsc
      gmake[1]: Entering directory `/usr/ports/pobj/ghostscript-8.63p11/ghostscript-8.63'
      cc -I./obj/../obj -I./src -DHAVE_MKSTEMP -O2 -fno-reorder-blocks -fno-reorder-functions -fomit-frame-pointer -march=i386 -fPIC -Wall -Wstrict-prototypes -Wmissing-declarations -Wmissing-prototypes -fno-builtin -fno-common -DGS_DEVS_SHARED -DGS_DEVS_SHARED_DIR=\"/usr/local/lib/ghostscript/8.63\" -DGX_COLOR_INDEX_TYPE='unsigned long long' -o ./obj/../obj/sdctc.o -c ./src/sdctc.c
      In file included from src/sdctc.c:17:
      obj/jpeglib_.h:1:21: jpeglib.h: No such file or directory
      In file included from src/sdctc.c:19:
      src/sdct.h:58: error: field `err' has incomplete type
      src/sdct.h:70: error: field `err' has incomplete type
      src/sdct.h:72: error: field `cinfo' has incomplete type
      src/sdct.h:73: error: field `destination' has incomplete type
      src/sdct.h:84: error: field `err' has incomplete type
      src/sdct.h:87: error: field `dinfo' has incomplete type
      src/sdct.h:88: error: field `source' has incomplete type
      gmake[1]: *** [obj/../obj/sdctc.o] Error 1
      gmake[1]: Leaving directory `/usr/ports/pobj/ghostscript-8.63p11/ghostscript-8.63'
      gmake: *** [so] Error 2
      *** Error code 2
      Stop in /usr/ports/print/ghostscript/gnu (line 2225 of /usr/ports/infrastructure/mk/bsd.port.mk).

    I tried adding one more parameter to CFLAGS in the Makefile with the value "-I/usr/local", but no luck =( People on IRC [freenode server, #openbsd channel] refuse to give any help for ports at all, even more so because this is the 4.7 unstable version. I have my reasons to use this version and these ports, believe me =)

      CFLAGS+= -DSYS_TYPES_HAS_STDINT_TYPES \
               -I${LOCALBASE}/include \
               -I${LOCALBASE}/include/ijs \
               -I${LOCALBASE}/include/libpng \
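
    Two things worth checking (observations on the log above, not part of the original post): the jpeg-7 package should have installed the header under /usr/local/include, and the cc line in the log shows no -I/usr/local/include flag even though the port's CFLAGS+= block lists -I${LOCALBASE}/include, because the gmake invocation passes its own CFLAGS=... string. A quick way to confirm both (the package name may need adjusting to whatever pkg_info reports):

      # is the header actually installed?
      ls /usr/local/include/jpeglib.h
      pkg_info -L jpeg-7 | grep jpeglib.h

      # does the include flag survive into the compile line? (it is absent from the cc line above)
      grep -n CFLAGS /usr/ports/print/ghostscript/gnu/Makefile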

    Read the article

  • AD Stopping a Script and Writing a Value to a User's AD Account PPT Presentation

    - by Steven Maxon
    ' This will launch the PPT in a GPO
      Dim ppt
      Set ppt = CreateObject("PowerPoint.Application")
      ppt.Visible = True
      ppt.Presentations.Open "C:\Scripts\Test.pptx"

    ' This is the batch file at the end of the PPT that records the date, time, computer name and username
      echo "Logon Date:%date%,Logon Time:%time%,Computer Name:%computername%,User Name:%username%" >> \\servertest\g$\Tracking\LOGON.TXT

    This is what I need but can't find: I need the script to check a value in the Active Directory user's account in the "Web page:" attribute that would shut off the script if the user has already completed reading the presentation. It could be as simple as writing XXXX. I need the value XXXX written to the Active Directory user's "Web page:" attribute when they finish reading the presentation, after they click on the bat file, so the script will not run again when they log in.
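
    The "Web page:" field on a user's General tab normally maps to the wWWHomePage attribute; the sketch below assumes that mapping and also assumes users are allowed to write that attribute on their own account, so treat it as a starting point rather than a finished script.

      ' Sketch only -- wWWHomePage is assumed to be the "Web page:" attribute
      Set objSysInfo = CreateObject("ADSystemInfo")
      Set objUser = GetObject("LDAP://" & objSysInfo.UserName)

      On Error Resume Next                 ' the attribute may be empty the first time
      completed = objUser.Get("wWWHomePage")
      On Error GoTo 0

      If completed = "XXXX" Then
          WScript.Quit                     ' already completed, do not launch the PPT
      End If

      ' ... launch the presentation / batch file here ...

      objUser.Put "wWWHomePage", "XXXX"    ' mark as completed
      objUser.SetInfo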

    Read the article

  • Get directory path by fd

    - by tylerl
    I've run into the need to be able to refer to a directory by path given its file descriptor in Linux. The path doesn't have to be canonical, it just has to be functional so that I can pass it to other functions. So, taking the same parameters as passed to a function like fstatat(), I need to be able to call a function like getxattr() which doesn't have an f-XYZ-at() variant. So far I've come up with these solutions, though none are particularly elegant.

    The simplest solution is to avoid the problem by calling openat() and then using a function like fgetxattr(). This works, but not in every situation, so another method is needed to fill the gaps. The next solution involves looking up the information in proc:

      if (!access("/proc/self/fd", X_OK)) {
          sprintf(path, "/proc/self/fd/%i/", fd);
      }

    This, of course, totally breaks on systems without proc, including some chroot environments. The last option, a more portable but potentially race-condition-prone solution, looks like this:

      DIR* save = opendir(".");
      fchdir(fd);
      getcwd(path, PATH_MAX);
      fchdir(dirfd(save));
      closedir(save);

    The obvious problem here is that in a multithreaded app, changing the working directory around could have side effects. However, the fact that it works is compelling: if I can get the path of a directory by calling fchdir() followed by getcwd(), why shouldn't I be able to just get the information directly: fgetcwd() or something. Clearly the kernel is tracking the necessary information. So how do I get to it?
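
    A small variation on the /proc idea (a sketch, not from the post): read the /proc/self/fd/N symlink instead of just building the string, which also confirms the descriptor is still open; the fchdir()/getcwd() trick remains the fallback when /proc is unavailable.

      #include <stdio.h>
      #include <unistd.h>

      /* Resolve a directory fd to a usable path. Returns 0 on success, -1 if
       * /proc is unavailable (caller should fall back to another method). */
      static int fd_to_path(int fd, char *buf, size_t bufsiz)
      {
          char link[64];
          ssize_t n;

          snprintf(link, sizeof(link), "/proc/self/fd/%d", fd);
          n = readlink(link, buf, bufsiz - 1);
          if (n < 0)
              return -1;          /* no /proc (e.g. some chroots) */
          buf[n] = '\0';          /* readlink() does not NUL-terminate */
          return 0;
      }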

    Read the article

  • Extend LDAP Membership to append a prefix/sufix to the username

    - by Romias
    Our web applications are using the LDAP Membership Provider to authenticate and register users in Active Directory. In order to allow users to provide usernames that exist in other applications, we need to add a prefix to the username, and it should be as transparent and painless as possible. What I need is a way to extend the LDAP Membership Provider to be able to add (concatenate) a prefix to the username just before Membership authenticates or registers it. For example, if the user input is "JohnS" in application 1, I want to authenticate "App1_JohnS". How could I extend the membership provider to accomplish this? Any idea what is triggered just before authenticate and register (create user)?

    Update: Each web app has an "OU" in AD where it creates users and authenticates from. But as it is just ONE Active Directory controller, the usernames must be unique. We need to solve this issue using Membership providers and not by adding more ADs.
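
    One way to do it (a sketch that assumes the site uses ActiveDirectoryMembershipProvider; the class name and prefix are made up) is to derive a provider that prepends the prefix before delegating, then register the derived type in web.config's <membership><providers> section in place of the original, keeping the existing connection settings.

      using System.Web.Security;

      // Sketch only -- wraps the built-in provider and prepends an application prefix.
      public class PrefixedAdMembershipProvider : ActiveDirectoryMembershipProvider
      {
          private const string Prefix = "App1_";

          public override bool ValidateUser(string username, string password)
          {
              return base.ValidateUser(Prefix + username, password);
          }

          public override MembershipUser CreateUser(string username, string password,
              string email, string passwordQuestion, string passwordAnswer,
              bool isApproved, object providerUserKey, out MembershipCreateStatus status)
          {
              return base.CreateUser(Prefix + username, password, email, passwordQuestion,
                  passwordAnswer, isApproved, providerUserKey, out status);
          }
      }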

    Read the article

  • Show image with link based on first directory with Javascript

    - by Nestorete
    Hi, I need some help please. I want to show an image on my page depending on the first directory of the URL. Example: any of these URLs should show image1.jpg:

      www.mysite.com/audio/amplifiers/400wats.html
      www.mysite.com/audio/
      www.mysite.com/audio/amplifiers/

    Any of these others should show image2.jpg:

      www.mysite.com/video/spots/40wats.html
      www.mysite.com/video/amplifiers/400wats.html
      www.mysite.com/video/lighting/laser.html
      www.mysite.com/video/laser/

    At the moment I can show the image only if the URL is just the first directory, but not for the deeper directories or documents. This is the script that I'm using right now:

      <script type="text/javascript">
      switch (location.pathname) {
        case "/audio/":
          document.write("From Web<BR>")
          break
        case "/video/":
          document.write('<A HREF="slides.htm" target="_blank"><IMG SRC="/adman/banners/joinvip.gif" WIDTH=728 HEIGHT=90 BORDER=0></A>')
          break
        default:
          document.write('<A HREF="http://www.apple.com" target="_blank"><IMG SRC="http://www.amd.com/us-en/assets/content_type/DownloadableAssets/NEW_PIB_728x90.gif" WIDTH=728 HEIGHT=90 BORDER=0></A>')
          break
      }
      </script>

    Thank you
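
    The switch above only matches when location.pathname is exactly "/audio/" or "/video/". Keying off the first path segment handles the deeper URLs too; a sketch (image1.jpg/image2.jpg come from the description above, default.jpg is a placeholder and the banner markup is simplified):

      <script type="text/javascript">
      // "/audio/amplifiers/400wats.html" -> ["", "audio", "amplifiers", "400wats.html"]
      var firstDir = location.pathname.split('/')[1];

      switch (firstDir) {
        case "audio":
          document.write('<IMG SRC="image1.jpg">');
          break;
        case "video":
          document.write('<IMG SRC="image2.jpg">');
          break;
        default:
          document.write('<IMG SRC="default.jpg">');
          break;
      }
      </script>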

    Read the article

  • How to set up a wcf-structure over internet, and not on the localhost

    - by djerry
    Hey guys, I want to convert the WCF structure I have from localhost to a service which runs over the internet. My server starts when I replace localhost with my IP address, but then my clients cannot connect to the server anymore. This is my server setup:

      static void Main(string[] args)
      {
          NetTcpBinding binding = new NetTcpBinding(SecurityMode.Message);
          Uri address = new Uri("net.tcp://192.168.10.26");
          //_svc = new ServiceHost(typeof(MonitoringSystemService), address);
          _monSysService = new MonitoringSystemService();
          _svc = new ServiceHost(_monSysService, address);
          publishMetaData(_svc, "http://192.168.10.26");
          _svc.AddServiceEndpoint(typeof(IMonitoringSystemService), binding, "Monitoring Server");
          _svc.Open();
      }

    My app.config for the client looks like this:

      <configuration>
        <system.diagnostics>
          <sources>
            <source name="System.ServiceModel" switchValue="Information, ActivityTracing" propagateActivity="true">
              <listeners>
                <add name="traceListener" type="System.Diagnostics.XmlWriterTraceListener" initializeData="c:\log\Traces.svclog" />
              </listeners>
            </source>
          </sources>
        </system.diagnostics>
        <system.serviceModel>
          <bindings>
            <netTcpBinding>
              <binding name="NetTcpBinding_IMonitoringSystemService" closeTimeout="00:00:10" openTimeout="00:00:10"
                       receiveTimeout="00:10:00" sendTimeout="00:00:10" transactionFlow="false" transferMode="Buffered"
                       transactionProtocol="OleTransactions" hostNameComparisonMode="StrongWildcard" listenBacklog="10"
                       maxBufferPoolSize="2147483647" maxBufferSize="2147483647" maxConnections="500"
                       maxReceivedMessageSize="2147483647">
                <readerQuotas maxDepth="32" maxStringContentLength="100000" maxArrayLength="100000"
                              maxBytesPerRead="100000" maxNameTableCharCount="100000" />
                <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
                <security mode="Message">
                  <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign">
                    <extendedProtectionPolicy policyEnforcement="Never" />
                  </transport>
                  <message clientCredentialType="Windows" />
                </security>
              </binding>
            </netTcpBinding>
          </bindings>
          <client>
            <endpoint address="net.tcp://192.168.10.26/Monitoring%20Server" binding="netTcpBinding"
                      bindingConfiguration="NetTcpBinding_IMonitoringSystemService" contract="IMonitoringSystemService">
              <!--name="NetTcpBinding_IMonitoringSystemService"-->
              <identity>
                <userPrincipalName value="DJERRYY\djerry" />
              </identity>
            </endpoint>
          </client>
        </system.serviceModel>
      </configuration>
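
    For reference, a sketch (not the poster's code) of what typically has to change to reach the service from outside the LAN: the base address needs a hostname or public IP plus an explicit port that the router/firewall forwards, and the client endpoint must use that same address. The host name and port below are assumptions, and Message security with Windows credentials generally assumes both ends can reach the same domain, so that usually needs rethinking for internet clients.

      // server side -- host name and port are assumptions
      NetTcpBinding binding = new NetTcpBinding(SecurityMode.Message);
      Uri address = new Uri("net.tcp://myserver.example.com:8523/MonitoringService");
      _svc = new ServiceHost(_monSysService, address);
      _svc.AddServiceEndpoint(typeof(IMonitoringSystemService), binding, "");
      _svc.Open();

      <!-- client side: the address must match the server's public address exactly -->
      <endpoint address="net.tcp://myserver.example.com:8523/MonitoringService"
                binding="netTcpBinding"
                bindingConfiguration="NetTcpBinding_IMonitoringSystemService"
                contract="IMonitoringSystemService" />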

    Read the article

  • TCL: How to read, extract and count occurrences in .txt files (current directory)

    - by Passion
    Hi folks, I am a beginner to scripting and am vigorously learning TCL for the development of an embedded system. I have to search for files with only the .txt extension in the current directory, count the number of occurrences of each different "Interface # nnnn" string in the .txt files, where nnnn is a hexadecimal number of four or at most 32 digits, and output a table of interface number against occurrence count. I am facing implementation issues while writing the script, i.e. I am unable to implement a data structure like a linked list or two-dimensional array. I am rewriting the script using a multi-dimensional array (passing values into the arrays in and out of a procedure) in TCL to scan through every .txt file and search for the string/regular expression 'Interface # ' to count and display the number of occurrences. If someone could help me to complete this part it would be much appreciated.

    Search for only .txt extension files and obtain the size of each file. Here is my piece of code for searching .txt files in the present directory:

      set files [glob *.txt]
      if { [llength $files] > 0 } {
          puts "Files:"
          foreach f [lsort $files] {
              puts " [file size $f] - $f"
          }
      } else {
          puts "(no files)"
      }

    I reckon these are the logical steps needed to complete it:

      i)   Once the .txt files are found, open all of them in read-only mode
      ii)  Create an array or list using a procedure (proc), with the interface number set to NULL and the interface count set to zero
      iii) Scan through each .txt file and search for the string or regular expression "Interface #"
      iv)  When a match is found in a .txt file, check the interface number and increment the count for the corresponding entry, else add a new element to the interface number list
      v)   If there are no files, return to the first directory

    My output should look like this:

      Interface   Frequency
      123f        3
      1232        4
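
    A sketch of the counting part (my own take, not the poster's code): a plain Tcl array keyed by the interface number does the job without a linked list or two-dimensional array.

      # Count "Interface # <hex>" occurrences across all .txt files in the current directory
      array set counts {}
      foreach f [glob -nocomplain *.txt] {
          set fh [open $f r]
          set data [read $fh]
          close $fh
          foreach {match id} [regexp -all -inline {Interface # ([0-9a-fA-F]+)} $data] {
              if {[info exists counts($id)]} {
                  incr counts($id)
              } else {
                  set counts($id) 1
              }
          }
      }
      puts "Interface   Frequency"
      foreach id [lsort [array names counts]] {
          puts "$id   $counts($id)"
      }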

    Read the article

  • Solution for cleaning an image cache directory on the SD card

    - by synic
    I've got an app that is heavily based on remote images. They are usually displayed alongside some data in a ListView. A lot of these images are new, and a lot of the old ones will never be seen again. I'm currently storing all of these images on the SD card in a custom cache directory (ala evancharlton's magnatune app). I noticed that after about 10 days, the directory totals ~30MB. This is quite a bit more than I expected, and it leads me to believe that I need to come up with a good solution for cleaning out old files... and I just can't think of a great one. Maybe you can help. These are the ideas that I've had:

      1. Delete old files. When the app starts, start a background thread, and delete all files older than X days. This seems to pose a problem, though, in that, if the user actively uses the app, this could make the device sluggish if there are hundreds of files to delete.

      2. After creating the files on the SD card, call new File("/path/to/file").deleteOnExit(); This will cause all files to be deleted when the VM exits (I don't even know if this method works on Android). This is acceptable, because, even though the files need to be cached for the session, they don't need to be cached for the next session. It seems like this will also slow the device down if there are a lot of files to be deleted when the VM exits.

      3. Delete old files, up to a max number of files. Same as #1, but only delete N number of files at a time. I don't really like this idea, and if the user was very active, it may never be able to catch up and keep the cache directory clean.

    That's about all I've got. Any suggestions would be appreciated.
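
    A variation on idea #1 that bounds the work differently (a sketch, not from the post): instead of deleting by age, trim the cache directory to a maximum total size, oldest files first, from a background thread. The 10 MB limit is an arbitrary assumption.

      import java.io.File;
      import java.util.Arrays;
      import java.util.Comparator;

      // Trims the cache directory to MAX_CACHE_BYTES, deleting oldest files first.
      private static final long MAX_CACHE_BYTES = 10 * 1024 * 1024;

      static void trimCache(File cacheDir) {
          File[] files = cacheDir.listFiles();
          if (files == null) return;

          // sort oldest first by last-modified time
          Arrays.sort(files, new Comparator<File>() {
              public int compare(File a, File b) {
                  return Long.valueOf(a.lastModified()).compareTo(Long.valueOf(b.lastModified()));
              }
          });

          long total = 0;
          for (File f : files) total += f.length();

          for (File f : files) {
              if (total <= MAX_CACHE_BYTES) break;
              total -= f.length();
              f.delete();
          }
      }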

    Read the article

  • Module based web project directory layout with git and symlinks

    - by karlthorwald
    I am planning my directory structure for a linux/apache/php web project like this. Only www.example.com/webroot/ will be exposed in apache:

      www.example.com/
        webroot/
          index.php
          module1/
          module2/
        modules/
          module1/
            module1.class.php
            module1.js
          module2/
            module2.class.php
            module2.css
        lib/
          lib1/
            lib1.class.php

    The modules/ and lib/ directories will only be in the php path. To make the css and js files visible in the webroot directory I am planning to use symlinks:

      webroot/
        index.php
        module1/
          module1.js (symlinked)
        module2/
          module2.css (symlinked)

    I tried following these principles: layout by modules and libraries, not by file type and not by "public" or "non-public" (index.php is an exception) - this is for easier development; and symlinking files that need to be public for the modules and libs to a public location, but still mirroring the layout, so the module structure is also visible in the resulting html code in the links, which might help development.

    How will git handle the symlinking of the single files correctly, is there something to consider? When it comes to images I will need to link directories; how do I handle that with git?

      modules/
        module3/
          module3.class.php
          img/
            img1.jpg
            img2.jpg
            img3.jpg

    They should be linked here:

      webroot/
        module3/
          img/ (symlinked ?)

    So this is a git and symlink question. But I would be interested to hear comments about the php layout, maybe you want to use the comment function for this.
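
    For what it's worth (a sketch under the layout above, not from the post): git stores a symlink as a small blob containing the target path (mode 120000), so relative links to single files and to directories both survive clone, checkout and branch switches, as long as the checkout filesystem supports symlinks.

      # run from the repository root; targets are relative to the directory containing the link
      ln -s ../../modules/module1/module1.js webroot/module1/module1.js
      ln -s ../../modules/module3/img webroot/module3/img     # directory symlink
      git add webroot/module1/module1.js webroot/module3/img
      git commit -m "Add public symlinks for module assets"

    Note that git versions the link itself, not the target's contents, so for the img/ directory link only the link is tracked while the images stay versioned under modules/.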

    Read the article

  • I need help with creating a data structure in PHP

    - by alex
    What I need to do is have a data structure that shows jobs organised into 14 day periods, but only when an id is the same. I've implemented all sorts of stuff, but it has failed miserably. Ideally, maybe a SQL expert could handle all of this in the query. Here is some of my code. You can assume all the library stuff works as expected.

      $query = 'SELECT date, rig_id, comments FROM dor ORDER BY date DESC';
      $dors = Db::query(Database::SELECT, $query)->execute()->as_array();

    This will return all jobs, but I need to have them organised by 14 day period with the same rig_id value.

      $hitches = array();

      foreach($dors as $dor) {
          $rigId = $dor['rig_id'];
          $date = strtotime($dor['date']);

          if (empty($hitches)) {
              $hitches[] = array(
                  'rigId' => $rigId,
                  'startDate' => $date,
                  'dors' => array($dor)
              );
          } else {
              $found = false;

              foreach($hitches as $key => $hitch) {
                  $hitchStartDate = $hitch['startDate'];
                  $dateDifference = abs($hitchStartDate - $date);
                  $isSameHitchTimeFrame = $dateDifference < (Date::DAY * 14);

                  if ($rigId == $hitch['rigId'] AND $isSameHitchTimeFrame) {
                      $found = true;
                      $hitches[$key]['dors'][] = $dor;
                  }
              }

              if ($found === false) {
                  $hitches[] = array(
                      'rigId' => $rigId,
                      'startDate' => $date,
                      'dors' => array($dor)
                  );
              }
          }
      }

    This seems to work OK splitting up by rig_id, but not by date. I also think I'm doing it wrong because I need to check the earliest date. Is it possible at all to do any of this in the database query?

    To recap, here is my problem: I have a list of jobs which all have a rig_id (many jobs can have the same) and a date. I need the data to be organised into hitches. That is, the rig_id must be the same per hitch, and each hitch must span a 14 day period, after which the next 14 days with the same rig_id become a new hitch. Can someone please point me on the right track? Cheers
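
    Since the question asks whether the query can do it: one way (a sketch, MySQL date functions assumed, not tested against this schema) is to bucket each row into a 14-day hitch counted from the earliest job for its rig, then group the rows in PHP by (rig_id, hitch_no).

      -- hitch_no = how many whole 14-day periods have passed since the rig's first job
      SELECT d.rig_id,
             FLOOR(DATEDIFF(d.date, m.first_date) / 14) AS hitch_no,
             d.date,
             d.comments
      FROM dor d
      JOIN (SELECT rig_id, MIN(date) AS first_date FROM dor GROUP BY rig_id) m
        ON m.rig_id = d.rig_id
      ORDER BY d.rig_id, hitch_no, d.date;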

    Read the article

  • Setting directory security to allow user and deny all

    - by Rita
    I have a winforms app in which I need to access a secured directory. I'm using impersonation and create a WindowsIdentity to access the folder. My problem is writing unit tests to test the directory security; I'd like to write code that creates a directory secured to only ONE user, which isn't the current user running the UT (or else the test would be worthless). I know how to add permissions for a certain user, but how can I deny the rest, including admins, in case the user running the UT is an admin? (And would that be a wise thing to do?)

      DirectoryInfo directoryInfo = new DirectoryInfo(path);
      DirectorySecurity directorySecurity = directoryInfo.GetAccessControl();
      directorySecurity.AddAccessRule(new FileSystemAccessRule("Domain\SecuredUser", FileSystemRights.FullControl,
          InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
          PropagationFlags.InheritOnly, AccessControlType.Allow));
      directorySecurity.RemoveAccessRule(new FileSystemAccessRule("??", FileSystemRights.FullControl,
          InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
          PropagationFlags.InheritOnly, AccessControlType.Deny));
      directoryInfo.SetAccessControl(directorySecurity);

    This isn't working. I don't know who I am supposed to deny: Domain\Admins, Domain\Administrators, me... No one is being denied, and when I check the folder's security, the SecuredUser has access to the folder, but the permissions are not checked, even though I specified FullControl. Basically I want to code this:

      <authorization>
        <allow users ="Domain\User" />
        <deny users="*" />
      </authorization>

    I was thinking about running the UT while impersonating a weak user with no permissions, but this would result in: Impersonate - Run UT - Impersonate - Access folder, and I'm not sure if this is the right design. Help would be greatly appreciated, thank you.
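
    One direction to try (a sketch, not from the post): rather than adding Deny rules, break inheritance and keep only the Allow entry for the test user. Also note that PropagationFlags.InheritOnly in the snippet above makes the rule apply only to children and not to the folder itself, which may be why the permissions appear unchecked; and an administrator can always take ownership regardless, so "deny admins" can never be absolute.

      DirectoryInfo directoryInfo = new DirectoryInfo(path);
      DirectorySecurity security = directoryInfo.GetAccessControl();

      // stop inheriting ACEs from the parent and discard the inherited ones
      security.SetAccessRuleProtection(true, false);

      security.AddAccessRule(new FileSystemAccessRule(
          @"Domain\SecuredUser",
          FileSystemRights.FullControl,
          InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
          PropagationFlags.None,          // applies to the folder itself as well as children
          AccessControlType.Allow));

      directoryInfo.SetAccessControl(security);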

    Read the article

  • IIS Restrict Access to Directory for table of users

    - by Dave
    I am trying to restrict access to files in a directory and its subdirectories based on user rights. My user rights are stored in an MS SQL database in a custom format; however, it is easy to query the list of users with rights to this directory. I need to know how to apply this in a web config on the server, so that it authenticates against a query of a database table to determine if the username is authenticated and allowed to view the file. Of course, if they are not, they should be blocked / given a 404. I am using IIS and ASP.Net MVC3 with form-based security that was custom made for us (as opposed to the built-in roles and responsibilities), and it works great. There are over 10k users tied to this non-Active Directory authentication, so I am not planning to change my authentication type; please don't go there. It is not my decision on the choice of platform, or I would have gone with a LAMP server and been done with this.

    Edit 11-13-2012 @ 8:57a: In the web config, can you put the result of an SQL query?
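
    For what it's worth (a sketch, not from the post): web.config authorization rules can't evaluate a SQL query directly, but routing the protected files through a controller action lets the check hit the database and return a 404 on failure. The repository call and folder path below are hypothetical placeholders for the existing custom rights query.

      [Authorize]
      public class ProtectedFilesController : Controller
      {
          public ActionResult Download(string fileName)
          {
              // validate fileName against path traversal before using it in a real app
              string userName = User.Identity.Name;
              if (!FileAccessRepository.UserCanAccess(userName, fileName))   // your own SQL-backed check
                  return HttpNotFound();                                     // behaves like a 404

              string path = Server.MapPath("~/App_Data/ProtectedFiles/" + fileName);
              return File(path, "application/octet-stream", fileName);
          }
      }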

    Read the article

  • Google Maps and Json structure

    - by mark
    I found a great script to plot markers on Google Maps. It uses a JSON file to load them. The problem is I don't know what the structure looks like in this case. Can you help?

      function loadMarkers() {
          var bounds = map.getBounds();
          var zoomLevel = map.getZoom();

          $.post("/gmaps/markers/index.php",
              {zoom: zoomLevel,
               swLat: bounds.getSouthWest().lat(), swLon: bounds.getSouthWest().lng(),
               neLat: bounds.getNorthEast().lat(), neLon: bounds.getNorthEast().lng()},
              function(data) {
                  processMarkers(data, _smallMarkerSize);
              },
              "json"
          );
      }

      function processMarkers(webcams, markerSize) {
          var marker = null;
          var markersInView = new Array();
          var idsInView = new Array();

          // Loop through the new webcams
          for (var i = 0; i < webcams.length; i++) {
              var idx = markers.indexOf(webcams[i].id);

              if (idx == -1) {
                  var info_html = "<table class='infowindow'>";
                  info_html += "<tr><td class='img'>";
                  info_html += "<img src='" + webcams[i].smallimg + "' /><td>";
                  info_html += "<td><p><b>" + webcams[i].loc + "</b>";
                  info_html += "<br /><a href='/webcam/" + webcams[i].url + "' target='_blank'>Show webcam</a></p></td></tr>";
                  info_html += "</table>";

                  marker = new WebcamMarker(new GLatLng(webcams[i].latitude, webcams[i].longitude),
                      {image: "" + webcams[i].smallimg + "", height: markerSize, width: markerSize});
                  marker.myhtml = info_html;
                  map.addOverlay(marker);
                  markersInView[webcams[i].id] = marker;
              } else {
                  markersInView[webcams[i].id] = markers[webcams[i].id];
              }

              idsInView.push(webcams[i].id);
          }

          // Now remove the markers outside of the viewport
          for (var i = 0; i < webcamids.length; i++) {
              var idx = markersInView.indexOf(webcamids[i]);

              if (idx == -1) {
                  marker = markers[webcamids[i]];
                  map.removeOverlay(marker);
              }
          }

          markers = markersInView;
          webcamids = idsInView;
      }
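
    Judging from the fields the script reads (id, smallimg, loc, url, latitude, longitude), the endpoint just needs to return a JSON array of objects with those keys; the values below are made up for illustration.

      [
        {
          "id": 17,
          "loc": "Amsterdam, Dam Square",
          "url": "amsterdam-dam-square",
          "smallimg": "http://www.example.com/webcams/17_small.jpg",
          "latitude": 52.3731,
          "longitude": 4.8922
        },
        {
          "id": 18,
          "loc": "Rotterdam, Erasmus Bridge",
          "url": "rotterdam-erasmus-bridge",
          "smallimg": "http://www.example.com/webcams/18_small.jpg",
          "latitude": 51.9090,
          "longitude": 4.4868
        }
      ]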

    Read the article

  • maven assemblies. Putting each dependency with transitive dependencies in own directory?

    - by jr
    I have a maven project which consists of a few modules. It is to be deployed on a client machine, will involve installing Tomcat, and will use NSIS for the installer. There is a separate application which monitors Tomcat and can restart it, perform updates, etc. So, I have the modules set up as follows:

      project
      +-- client        (all code, handlers, for the war)
      +-- client-common (shared code, shared between monitor and client)
      +-- client-web    (the war; basically just the war packaging with applicationcontext, web.xml, etc)
      +-- monitor       (the monitor application jar; uses a wrapper to run)

    So, I need to create an installer. I was planning on creating another module which would be the installer. This is where I would have the Tomcat directory, and I'd like maven to "assemble" everything and then run NSIS so I can create the final installer. However, I need to have the monitor jar file in a directory and then have all of the monitor's dependencies in a lib/ directory. The final directory structure should be:

      project-installer-directory/monitor/monitor-version.jar
      project-installer-directory/monitor/lib/monitor-dep-1.jar
      project-installer-directory/monitor/lib/monitor-dep-2.jar
      project-installer-directory/monitor/lib/monitor-dep-3.jar
      project-installer-directory/webapps/client-web.war

    where the client-web\WEB-INF\lib directory will have all of client-web's dependencies after it is exploded. That works, I have the .war file. What I am having problems with is getting the monitor module's dependencies independent of the dependencies of the client-web module. I tried to just create the installer module and make monitor and client-web its dependencies, but when I use dependencies-copy it gives me everything, which is not what I want. I'm leaning towards creating a new module called monitor-assembly or something to give me a zip file which contains the directory format I need, but that is yet another module. Can someone please help me with the correct way to accomplish this? Thanks!
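
    One possible shape for this (a sketch only, assuming the installer module declares monitor and client-web, type war, as dependencies; the groupId and ids are made up, and the transitive-filtering behaviour is worth verifying against the maven-assembly-plugin documentation) is a single assembly descriptor with separate dependency sets per output directory:

      <!-- src/main/assembly/installer.xml in the installer module -->
      <assembly>
        <id>installer</id>
        <formats><format>dir</format></formats>
        <includeBaseDirectory>false</includeBaseDirectory>
        <dependencySets>
          <!-- the monitor jar itself -->
          <dependencySet>
            <includes><include>com.example:monitor</include></includes>
            <useTransitiveDependencies>false</useTransitiveDependencies>
            <outputDirectory>monitor</outputDirectory>
          </dependencySet>
          <!-- everything pulled in via monitor, minus the monitor jar -->
          <dependencySet>
            <includes><include>com.example:monitor</include></includes>
            <useTransitiveFiltering>true</useTransitiveFiltering>
            <excludes><exclude>com.example:monitor</exclude></excludes>
            <outputDirectory>monitor/lib</outputDirectory>
          </dependencySet>
          <!-- the war, on its own, goes to webapps/ -->
          <dependencySet>
            <includes><include>com.example:client-web</include></includes>
            <useTransitiveDependencies>false</useTransitiveDependencies>
            <outputDirectory>webapps</outputDirectory>
          </dependencySet>
        </dependencySets>
      </assembly>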

    Read the article

  • one site or many?

    - by Alex
    I have about 10-12 websites (the main site is classic ASP, the others are ASP.NET 2). Each site has its own virtual directory. They are related to each other; mainly, the main site calls the other sites to perform some services. Each site has from 2 to 5 pages. Does it make sense to unite them and create one bigger site with one virtual directory and one project in VS, or should I leave them as separate sites? What are the pros and cons?

    Read the article

  • Component based web project directory layout with git and symlinks

    - by karlthorwald
    I am planning my directory structure for a linux/apache/php web project like this. Only www.example.com/webroot/ will be exposed in apache:

      www.example.com/
        webroot/
          index.php
          comp1/
          comp2/
        component/
          comp1/
            comp1.class.php
            comp1.js
          comp2/
            comp2.class.php
            comp2.css
        lib/
          lib1/
            lib1.class.php

    The component/ and lib/ directories will only be in the php path. To make the css and js files visible in the webroot directory I am planning to use symlinks:

      webroot/
        index.php
        comp1/
          comp1.js (symlinked)
        comp2/
          comp2.css (symlinked)

    I tried following these principles: layout by components and libraries, not by file type and not by "public" or "non-public" (index.php is an exception) - this is for easier development; symlinking files that need to be public for the components and libs to a public location, but still mirroring the layout, so the component and library structure is also visible in the resulting html code in the links, which might help development; and git usage should be safe and always work - it would be ok to follow some procedure to add a symlink to git, but after that, checking them out or changing branches should be handled safely and cleanly.

    How will git handle the symlinking of the single files correctly, is there something to consider? When it comes to images I will need to link directories; how do I handle that with git?

      component/
        comp3/
          comp3.class.php
          img/
            img1.jpg
            img2.jpg
            img3.jpg

    They should be linked here:

      webroot/
        comp3/
          img/ (symlinked ?)

    If using symlinks for that has disadvantages, maybe I could move the images to the webroot/ tree directly, which would break the first principle for the sake of the third (git practicability). So this is a git and symlink question. But I would be interested to hear comments about the php layout, maybe you want to use the comment function for this.

    Read the article

  • How to Properly Reference a JavaScript File in an ASP.NET Project?

    - by DaveDev
    Hi guys, I have some pages that reference javascript files. The application exists locally in a virtual directory, i.e. http://localhost/MyVirtualDirectory/MyPage.aspx, so locally I reference the files as follows:

      <script src="/MyVirtualDirectory/Scripts/MyScript.js" type="text/javascript"></script>

    The production setup is different though. The application exists as its own web site in production, so I don't need to include the reference to the virtual directory. The problem with this is that I need to modify every file that contains a javascript reference so it looks like the following:

      <script src="../Scripts/MyScript.js" type="text/javascript"></script>

    I've tried referencing the files this way in my local setup but it doesn't work. Am I going about this completely wrong? Can somebody tell me what I need to do? Thanks
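
    One common approach (a sketch, not from the post) is to use an application-relative "~" path and let ASP.NET expand it, so the same markup works under a virtual directory locally and at the site root in production:

      <%-- works in .aspx pages and master pages --%>
      <script src='<%= ResolveUrl("~/Scripts/MyScript.js") %>' type="text/javascript"></script>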

    Read the article

  • Django sub-applications & module structure

    - by Rob Golding
    I am developing a Django application, which is a large system that requires multiple sub-applications to keep things neat. Therefore, I have a top level directory that is a Django app (as it has an empty models.py file), and multiple subdirectories, which are also applications in themselves. The reason I have laid my application out in this way is because the sub-applications are separated, but they would never be used on their own, outside the parent application. It therefore makes no sense to distribute them separately. When installing my application, the settings file has to include something like this:

      INSTALLED_APPS = (
          ...
          'myapp',
          'myapp.subapp1',
          'myapp.subapp2',
          ...
      )

    ...which is obviously suboptimal. This also has the slightly nasty result of requiring that all the sub-applications are referred to by their "inner" name (i.e. subapp1, subapp2 etc.). For example, if I want to reset the database tables for subapp1, I have to type:

      python manage.py reset subapp1

    This is annoying, especially because I have a sub-app called core, which is likely to conflict with another application's name when my application is installed in a user's project. Am I doing this completely wrongly, or is there a way to force these "inner" apps to be referred to by their full name?
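
    On the Django versions of this era there was no clean way to rename a nested app's label; for reference (a sketch that assumes a newer Django, 1.7 or later, where AppConfig exists), a nested app can keep a distinct label and avoid the "core" clash like this:

      # myapp/core/apps.py
      from django.apps import AppConfig

      class CoreConfig(AppConfig):
          name = 'myapp.core'        # full dotted path
          label = 'myapp_core'       # unique label used by manage.py commands and migrations

      # settings.py
      INSTALLED_APPS = [
          'myapp.core.apps.CoreConfig',
      ]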

    Read the article

  • Model in sub-directory via app_label?

    - by prometheus
    In order to place my models in sub-folders I tried to use the app_label Meta field as described here. My directory structure looks like this:

      project/
        apps/
          foo/
            models/
              __init__.py
              bar_model.py

    In bar_model.py I define my model like this:

      from django.db import models

      class SomeModel(models.Model):
          field = models.TextField()

          class Meta:
              app_label = "foo"

    I can successfully import the model like so:

      from apps.foo.models.bar_model import SomeModel

    However, running:

      ./manage.py syncdb

    does not create the table for the model. In verbose mode I do see, however, that the app "foo" is properly recognized (it's in INSTALLED_APPS in settings.py). Moving the model to models.py under foo does work. Is there some specific convention, not documented with app_label or with the whole mechanism, that prevents this model structure from being recognized by syncdb?
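
    The usual convention for this layout (my note, not something stated in the post) is that syncdb only inspects the app's models package itself, so the model has to be re-exported from the package's __init__.py:

      # apps/foo/models/__init__.py
      from apps.foo.models.bar_model import SomeModel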

    Read the article
