Search Results

Search found 436 results on 18 pages for 'subfolder'.

Page 16/18 | < Previous Page | 12 13 14 15 16 17 18  | Next Page >

  • Inbox not updating in Exchange 2010, all users affected

    - by TuxMeister
    I'm battling this darn issue this morning. We have the following setup:

    - Big Hyper-V machine hosting the servers as VMs
    - VM for CAS: WEB.XXX.local
    - VM for Mailbox: EXC.XXX.local
    - Servers are running Server 2008 R2 with Exchange 2010 SP1
    - Clients are all running Windows 7 Pro x64 with Outlook 2010 x64

    The problem we're having is that nobody can see any emails received today (16th of October), although they are able to send externally. When I reply to an email received externally, I don't get an NDR, yet the user cannot see my reply. This is what I have found and tried thus far:

    - If we create a subfolder in Outlook 2010 and move any email from the inbox into that folder, the change is immediately reflected in OWA
    - We've been sending test emails to other users internally and to external email addresses; the Sent Items folder contains all those tests, synced properly to OWA as well
    - Tried creating a new profile; new emails are still missing
    - Tried disabling Cached Mode; still no luck
    - Also disabled "Download shared folders"; still no luck
    - Set up a brand new Exchange mailbox and configured it on a VM that never had Outlook on it; still the same issue
    - Tried restarting the Exchange services on both the CAS and Mailbox servers; no luck
    - Tried rebooting both the CAS and Mailbox servers; still no luck
    - Performed a Mailbox Discovery on my admin account; emails from today are found in the discovery results, so the stuff is there, it's just not updating the user inboxes

    Any idea what this hellish thing can be? I've done everything I can think of and also everything I could find out there. Let me know if you need any more details, and thanks for reading!
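    For reference, a few Exchange Management Shell checks that might narrow down where delivery stalls (a hedged sketch, not from the original post; the cmdlets are standard Exchange 2010 ones, the server name is taken from the setup above, and Get-Queue assumes a transport role is installed on that box):

        Test-MapiConnectivity -Server EXC.XXX.local
        Test-Mailflow -TargetMailboxServer EXC.XXX.local
        Get-Queue -Server EXC.XXX.local | Sort-Object MessageCount -Descending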

    Read the article

  • How can I use my own ASP page as a 404 error page on a local Windows 7 Pro IIS 7.5 installation

    - by Edelcom
    Running Windows 7 Pro with a local IIS 7.5 installation (for web development purposes). I edited the hosts file to be able to use http://www.sitestepper.dev. The site is built in a subfolder of inetpub/wwwroot (staplijst), so pages can be viewed using http://www.sitestepper.dev/staplijst/whatever-page.htm.

    I have a page, http://www.sitestepper.dev/p.asp, that I would like to have called when an unknown page is requested. This technique works fine on the deployed version of this web site, which runs on a Windows Server 2003 machine with the IIS version that ships with Windows Server 2003 (IIS 6, not 7).

    In the Edit Custom Error dialog I tried:

    - "Execute a URL on this site" with the value /staplijst/p.asp
    - "Respond with a 302 redirect" with the value http://www.sitestepper.dev/staplijst/p.asp

    I tried this in the properties of the Default Web Site, and again at the staplijst level; I even tried values without the /staplijst prefix. I restarted the default web site after each change, and even stopped and started the web service. But nothing seems to work: I keep getting 'Server Error In Application "DEFAULT WEB SITE"', HTTP Error 404.0.

    What am I missing here? Probably something obvious, but I don't see it. Thanks in advance.
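    If the GUI keeps fighting you, the same setting can be written directly into the site's web.config (a sketch of the IIS 7.x httpErrors syntax; the path assumes the /staplijst prefix from the question):

        <configuration>
          <system.webServer>
            <httpErrors errorMode="Custom">
              <remove statusCode="404" subStatusCode="-1" />
              <error statusCode="404" path="/staplijst/p.asp" responseMode="ExecuteURL" />
            </httpErrors>
          </system.webServer>
        </configuration>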

    Read the article

  • pcfg_openfile: unable to check htaccess file, ensure it is readable

    - by rxt
    After moving a website folder on my local development machine to another drive, then moving it back, I got a 403 error. Most of this problem probably had to do with rights that got messed up. After deleting the code and restoring it from SVN, the rights seemed all right, but the error stayed. The setup is a bit complex, as follows:

    - Ubuntu 10.04 as development machine, trying to mimic the server as much as possible
    - We use Eclipse + SVN, and I create all projects in a local folder under my user account
    - In /var/www-vhosts I create a folder for each vhost, like this one: test.localhost
    - test.localhost/index.php includes the index file of the project
    - test.localhost/.htaccess is a symbolic link to the htaccess file in a project subfolder

    I get the following error in the Apache error log:

        [Thu Jul 08 15:55:56 2010] [crit] [client 127.0.0.1] (13)Permission denied: /var/www-vhosts/test.localhost/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable

    The problem seems to be the .htaccess file, or the link to it:

    - When I empty the htaccess, nothing changes
    - When I remove the link, the index include produces some output (in the Apache error log)
    - When I remove the link and replace it with the actual file, I get another error:

        [Thu Jul 08 16:47:54 2010] [error] [client 127.0.0.1] Symbolic link not allowed or link target not accessible: /var/www-vhosts/test.localhost/test

    I'm lost here and don't know what to do next. Do you have any ideas what I can try? This setup has worked before, but I don't know what is different now.
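    Two things worth checking, based on the errors above (hedged suggestions; paths are from the question and the home-directory one is hypothetical):

        # Apache needs execute (traverse) permission on every directory leading
        # to the symlink target, e.g. your home directory if the project lives there
        namei -m /var/www-vhosts/test.localhost/.htaccess
        sudo chmod o+x /home/youruser    # hypothetical, only if the target sits under your home

    The vhost also needs to permit symlinks, e.g. Options +FollowSymLinks; if it uses +SymLinksIfOwnerMatch instead, that would also explain the second error whenever the link and its target have different owners.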

    Read the article

  • Moving a lot of small files between servers using rsync

    - by Adirael
    Hello guys, I'm moving a lot of files (about 2 million) between two servers in different locations using rsync over ssh. It seems to work fine, but I just realised I'm losing some files in the process. I have server 1, with the original data, and server 2, with the copy. Server 1 runs CentOS 5 and server 2 runs Ubuntu 10. I'm doing the transfer from server 2's command line like this:

        rsync -e ssh -avzn usr@server1:/remote/path /local/path

    The first file movement I did using tar, but I didn't think of piping it through ssh, and it failed because the disk on server 1 was almost full. I transferred it anyway (it was about 200GB) and got about 80% of the files. Then I piped another tar with the rest of the files (they're in folders; I have 100 folders with about 30 subfolders each, with files inside) and now I have everything on server 2.

    I wanted to be sure, so my two options were getting the md5sum of all the files and checking them, or running an rsync on server 2 against server 1; the latter is what I did. It picked up some missing stuff, and now it says there's nothing more to do (DRY RUN). But I have at least two files that are still missing inside a subfolder. I ran that same rsync on that folder, but it's still a dry run. Am I doing something wrong? Thanks, and sorry for the wall of text.
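    One observation on the command above: the -n in -avzn is --dry-run, so that invocation only reports what would be transferred and never copies anything, which would explain the stubborn missing files. A hedged re-run, using the same paths as the question, that actually transfers and compares full checksums rather than just size and modification time:

        # drop -n to really copy; -c forces checksum comparison (slower but thorough)
        rsync -e ssh -avc usr@server1:/remote/path /local/path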

    Read the article

  • How to add a writable folder to the PHP document root on Linux

    - by Ron Whites
    We are building an example bash script for using our PHP TestCoverage Tool on Linux. The development environment is Ubuntu 12.04.1, but we intend the Linux example to work across as many Linux versions as possible without modification. The example script requires a variable to be set to the PHP document root path, and by default uses a small PHP example source to show the user how our GUI and text report show the covered and uncovered PHP code areas. The script is also intended to be easily alterable by users to automate the TestCoverage display of their own PHP code.

    The problem we are having with Ubuntu 12.04 (any Linux?) is that the PHP Apache2 document root is defined in /etc/apache2/sites-available/default as /var/www, and /var/www defaults to "drwxr-xr-x", i.e. writable only by its owner, root. So in order to add our own folder as /var/www/SDTestCoverage we must make /var/www writable. It seems our script (at least on Ubuntu) will need to do the following (sketched in the snippet below):

    1. Acquire and save the current /var/www permissions
    2. sudo chmod 777 /var/www (to make it writable)
    3. mkdir -p /var/www/SDTestCoverage (create our folder under the document root)
    4. sudo chmod 777 /var/www/SDTestCoverage (make our subfolder writable)
    5. Finally, restore the original /var/www permissions

    Thanks, and our questions are:

    1. Is this the standard way (using Ubuntu) to add a writable folder under the PHP document root?
    2. Is this the most general-purpose way to add a writable folder under the PHP document root on other versions of Linux?
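    As a sketch, the five steps above could look like this in bash (stat -c is GNU coreutils syntax, so BSD-flavoured systems would need a different invocation):

        #!/bin/bash
        old_perms=$(stat -c %a /var/www)         # 1. save the current mode, e.g. 755
        sudo chmod 777 /var/www                  # 2. make the document root writable
        mkdir -p /var/www/SDTestCoverage         # 3. create our folder
        sudo chmod 777 /var/www/SDTestCoverage   # 4. make the subfolder writable
        sudo chmod "$old_perms" /var/www         # 5. restore the original permissions

    A gentler alternative, also hedged, is to leave /var/www untouched and create only the subfolder with elevated rights: sudo mkdir -p /var/www/SDTestCoverage && sudo chmod 777 /var/www/SDTestCoverage.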

    Read the article

  • Nginx Config - I can't access WordPress admin area

    - by WebDevDude
    I am a complete noob when it comes to Nginx, but I'm trying to make the switch over for my WordPress site. Everything works, even the permalinks, but I can't access my WordPress admin directory (I get a 403 error). I have my WordPress install in a subfolder, so that complicates things a bit for me. Here is my Nginx config file:

        server {
            server_name mydomain.com;
            access_log /srv/www/mydomain.com/logs/access.log;
            error_log /srv/www/mydomain.com/logs/error.log;
            root /srv/www/mydomain.com/public_html;

            location / {
                index index.php;
                # This is cool because no php is touched for static content.
                # include the "?$args" part so non-default permalinks don't break when using query string
                try_files $uri $uri/ /index.php?$args;
            }

            location /myWordpressDir {
                try_files $uri $uri/ /myWordpressDir/index.php?$args;
            }

            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /srv/www/mydomain.com/public_html$fastcgi_script_name;
                fastcgi_split_path_info ^(/myWordpressDir)(/.*)$;
            }

            location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
                access_log off;
                log_not_found off;
                expires max;
            }
        }
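    One hedged guess at the 403: index index.php is declared only inside location /, so a request for a directory such as /myWordpressDir/wp-admin/ may have no index file to fall back on in its own location block. Making the directive server-wide is a cheap thing to try:

        server {
            ...
            index index.php;   # applies to every location, including /myWordpressDir
        }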

    Read the article

  • Why is my browser using so much memory?

    - by Steve
    Hi. I've recently had problems with Firefox running very slowly when I have many tabs open, say 20 tabs. My whole system would slow down. I decided to give Google Chrome a try, and it started out fine. But lately I am finding that it, too, slows down my whole system.

    Looking at Task Manager, chrome.exe is using about 250MB of memory across about 6 different entries. However, when I shut Chrome down, memory usage is reduced by about 600MB. How can this be? (A screenshot showed the drop in memory usage after ending Chrome.)

    When my system locks up with Chrome having many tabs open, it takes 10 seconds to load the Start Menu, 10 seconds to expand All Programs and each folder and subfolder, and 30 seconds for the program under my mouse to be highlighted. It also takes 10 seconds to switch to Notepad.

    Why does Chrome appear to use so much more memory than Task Manager indicates? Why is my pagefile being used when I have around 1.1GB of memory? Can I set Chrome to run in RAM and not in the pagefile? How can 20 tabs use 600MB? That's 30MB per tab. Thanks for your help.

    Read the article

  • Different behaviour when running an exe from the command prompt than from Windows Explorer

    - by valoukh
    Forgive me if I'm not using the correct terminology here, but I'll try to explain the issue I'm having as best I can! We use a program that allows you to view video footage from several cameras; when it is opened, it automatically loads several video files within its interface. These video files are stored in a subfolder, e.g. "videos\video1.asf", etc. I didn't create the program, so I can't say what method it uses to open the files.

    The files are stored on a network server and are being accessed via a share/UNC path. When the file is run from Windows Explorer (by navigating to the network share and double-clicking the exe), it works perfectly. When the file is run via the (elevated) command prompt (e.g. by typing \\server\path\to\file.exe), it opens, but the corresponding video files do not load.

    I am trying to create a script that launches the program via the command prompt, so the first step is finding out why the two actions above have different consequences. Any advice on how running an executable from the command prompt could produce a different result would be greatly appreciated. Thanks.
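    One difference worth ruling out (a hedged guess; the UNC path is the placeholder from the question): Explorer makes the exe's own folder the working directory, while the command prompt keeps whatever directory you are currently in, so a relative path like "videos\video1.asf" resolves to a different place. cmd's pushd both changes directory and maps a UNC path to a temporary drive letter:

        rem sketch: give the program the working directory it expects
        pushd \\server\path\to
        file.exe
        popd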

    Read the article

  • Convert Public Folder to Shared Mailbox

    - by Lilienthal
    Due to a change in company policy, all existing Public Folders (PFs) have to be phased out in favour of shared mailboxes. Unfortunately there don't seem to be any procedures or guidelines for this migration, and I can't find much online either. I've already migrated one of our public folders as a sort of test case. Because we still use Exchange 2003, we can't create real shared mailboxes as we would in 2007 or 2010 (with New-Mailbox -Shared ... in the Exchange Shell). Instead, I simply created a new account in AD and assigned it a mailbox. I then set the PF's permissions to read-only to keep it in a consistent state, copied the entire folder to a local PST in Outlook 2010, and from there copied it into the new mailbox. Permissions and Folder Visible were set for all users, and the migration was successful. While this works, the whole procedure feels very hackish to me and not at all efficient; I'd welcome some input on automating or at least streamlining the process.

    Additionally, we are unsure what to do with our mail-enabled Public Folders. Several of these are nested under other PFs, some of which are also mail-enabled. Preserving folder structure is a key requirement, and this seems impossible at first glance. I've considered creating dummy accounts for all the email addresses from our mail-enabled PFs and then setting up automated rules to forward messages to a subfolder of the new shared mailboxes, but I am not familiar enough with Exchange to know if this is even possible. Further points of concern are the calendars and contact lists in our public folders. I suppose I'll be forced to create a new mailbox for every one of these as well, then set up share permissions for their calendar and contact items, but I would be happy to be proven wrong.
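    For reference, the 2007/2010 route mentioned above looks roughly like this (a sketch; the mailbox and group names are hypothetical):

        New-Mailbox -Shared -Name "Sales Archive" -UserPrincipalName salesarchive@xxx.local
        Add-MailboxPermission "Sales Archive" -User "Sales Team" -AccessRights FullAccess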

    Read the article

  • Windows XP to remote server 2008 R2 shares - awful response times

    - by nick3216
    I have a network infrastructure of Windows XP clients (a mix of 32-bit and 64-bit XP) that are accessing a network share on a Windows 2008 R2 server. Whenever users type the address of a folder into the address bar of Windows Explorer, it is as snappy at determining the contents of the current folder and presenting them as if they were working on a local drive. But if they open one of the subfolders, users get the animated red torch and a 'Searching for items...' dialog, typically for 45 seconds. Similarly, when using the open-folder dialog to try to select a subfolder on this share, it takes on average 45 seconds for the dialog to expand each node and show its subfolders.

    Also, while the Explorer instance accessing the network share is running slowly, users notice that the performance of all other Explorer windows suffers. So while Explorer is searching for files on the network share, they can't switch to another task and navigate around their local drive using Explorer, because it is now as slow as a dead dog at accessing anything. Are there any settings we can change that will improve the performance of accessing network shares?

    Read the article

  • Tool to convert blogger.com content to dasBlog

    - by Daniel Moth
    Due to blogger.com dropping FTP support, I've had to move my blog. If you are in a similar situation, this post will help you by showing you the necessary steps to take.

    Goals: no loss of blog posts or comments, AND all existing permalinks continue to work (redirect to the correct place).

    Steps:

    1. Download the XML files corresponding to your blogger.com content and store them in a folder.
    2. Install and configure dasBlog on your local machine.
    3. Configure your web.config file (it will need updating once you run step 4).
    4. Use the tool I describe further down to generate the content and place it in the right place.
    5. Test your site locally.
    6. Once you are happy, repeat step 2 on your hosting provider of choice. Remember to copy up your dasBlog theme folder if you created one.
    7. Copy up the local web.config file and the XML dasBlog content files generated by the tool of step 4.
    8. Test your site on the server. Once you are happy, go live (following instructions from your hoster). In my case, I gave the nameservers from my new hoster to my existing domain registrar and they made the switch.

    Tool (code)

    At step 4 above I referred to a tool. That is an overstatement; it is simply one 450-line C# code file that you can download here: BloggerToDasBlog.cs. I used it from a .NET 2.0 console app (run under the Visual Studio debugger, i.e. F5) like this: Program.cs. The console app referenced the dasBlog 2.3 ASP.NET Blogging Engine, i.e. the newtelligence.DasBlog.Runtime.dll assembly. Let me describe what the code does.

    Input:

    - A path to a folder where the XML files from the old blogger.com blog reside. It can deal with both types of XML file.
    - A full file path to a file where it creates XML redirect input (as required by the rewriteMap mentioned here).
    - The blog URL.
    - The author's email.
    - The blog author name.
    - A path to an empty folder where the new XML dasBlog content files will get created.
    - The subfolder name used after the domain name in the URL.
    - The 3 regex patterns to use. You can use the same ones as mine, but you will need to tweak the monthly_archive rule.

    Again, to see what values I passed for all of the above, see my Program.cs file.

    Output:

    - It creates dasBlog XML files in the folder specified, by parsing the old blogger.com XML files that reside in the folder specified. After that is generated, copy it to the "Content" folder under your dasBlog installation.
    - It creates an XML file with a single ignorable root element and a bunch of inner XML elements. You can copy-paste these into the web.config file as discussed in this post.

    Other notes:

    - For each blog post, it detects outgoing links to itself (i.e. to the same blog) and rewrites those to point to the new URLs, so internal links do not rely on the web.config redirects.
    - It deals with duplicate post titles; it does not deal with triplicates and higher.
    - It removes all references to blogger.com (e.g. references to [email protected], the injected hidden footer for statistics that each blog post has, and others; see the code).
    - It creates a lot of diagnostic output (in the Output window), and indeed the documentation for the code is in the Debug.WriteLine statements ;)

    This is not code I will maintain or support; it was a throwaway one-use project that I am sharing here as a starting point for anyone finding themselves in the same boat that I was. Enjoy "as is".

    Comments about this post welcome at the original blog.
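    For readers who have not used a rewriteMap before: the redirect file the tool generates is meant to feed something like the following in web.config (a hedged illustration of the IIS URL Rewrite shape; the key/value pair is hypothetical):

        <rewrite>
          <rewriteMaps>
            <rewriteMap name="BloggerRedirects">
              <add key="/2009/05/some-post.html" value="/blog/2009/05/some-post.aspx" />
            </rewriteMap>
          </rewriteMaps>
        </rewrite>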

    Read the article

  • Build Your Own CE6 Kernel

    - by Kate Moss' Big Fan
    The Shared Source Program in Windows CE provides many modules in the %_WINCEROOT%\Private\ tree, and the kernel is one of them! Although it is not the full source of the kernel, it is good enough for tracing it, and even for tweaking it. Tracing the kernel to see how it works is lots of fun, but it is fascinating to modify it and verify the change you made. So first things first: where is the source of the kernel? It's in %_WINCEROOT%\private\winceos\COREOS\nk\

    The next question will be "How do I build it?" Some of you may say just "build -c" there and it should be good. If you are the owner of the kernel and have full source, that is definitely the right answer, but neither applies to our case. So what should I do? Let's dig deeper into the coreos\nk folder; there are a couple of subfolders: CELOG, KDSTUB, KERNEL and so on. KERNEL\ is the main component of kernel.dll; in other words, most modifications to the kernel are going to happen here. And the good thing is, you can "build -c" in %_WINCEROOT%\private\winceos\COREOS\nk\kernel\ with no error at all. But before doing that, remember to back up everything you are going to modify, including the source and binaries; remember, this is not something that belongs to you, and if you don't restore it later, it could end up confusing subsequent QFE updates! Here are the steps:

    1. Back up the source code; I suggest the whole %_WINCEROOT%\private\winceos\COREOS\nk\
    2. Back up the binaries in common\oak\lib\; again, if you are not sure which files, backing up the whole %_WINCEROOT%\common\oak\lib\ is the safest way.
    3. Make whatever modification you want in %_WINCEROOT%\private\winceos\COREOS\nk\kernel\
    4. build -c in %_WINCEROOT%\private\winceos\COREOS\nk\kernel

    If everything went well so far, you should get new nkmain.lib, nkmain.pdb, nkprmain.lib and nkprmain.pdb files in %_WINCEROOT%\public\common\oak\lib\%_TGTCPU%\%WINCEDEBUG%\

    Basically, you just rebuilt your kernel; the rest is to "blddemo clean -q" to have your new kernel SYSGEN'd and included in your OS image. Or just "set WINCEREL=1", then "sysgen -p common nk nkprof" and "makeimg", if you can't wait another minute for "blddemo clean -q".

    That sounds good, but some of you may not like the idea of altering any code in the private folder, not to mention how annoying it is to back up and restore files every time. A better idea? Yes, Microsoft provides a tool, SYSGEN_CAPTURE (see http://msdn.microsoft.com/en-us/library/ee504678.aspx for details and usage), that creates Sources files for public drivers you want to modify and build in your platform directory. In fact, not only public drivers: virtually anything in the %_WINCEROOT%\public\<project name>\cesysgen\makefile can be captured, and of course that includes the kernel. So here is a second way to build your own kernel, using the SYSGEN_CAPTURE tool. Again the steps:

    1. Create a folder in your BSP for building the kernel, say %_TARGETPLATROOT%\SRC\Kernel.
    2. Use "SYSGEN_CAPTURE -p common nk" and you will get a SOURCES.KERN; you can also "SYSGEN_CAPTURE -p common nkprof" to generate a profiler-enabled kernel.
    3. Rename SOURCES.KERN to SOURCES and copy one of the sample makefiles into your kernel directory, for example the one in PRIVATE\WINCEOS\COREOS\NK\KERNEL\NKNORMAL.
    4. Copy the source files you want to modify from private\winceos\coreos\nk\kernel\ into your kernel directory.
    5. Modify the SOURCES= macro to list the source files you added in step 4. For example, if you copied vm.c, it becomes SOURCES=vm.c
    6. Refer to private\winceos\COREOS\nk\kernel\sources.inc and add the macro defines and proper include paths to your SOURCES file.
    7. "set WINCEREL=1", "build -c" in your kernel directory and "makeimg", voila!

    Here are the macros you need to add for x86:

        CDEFINES=$(CDEFINES) -DIN_KERNEL -DWINCEMACRO -DKERN_CORE

        # Machine independent defines
        CDEFINES=$(CDEFINES) -DDBGSUPPORT

        _COREOSROOT=$(_WINCEROOT)\private\winceos\coreos
        INCLUDES=$(_COREOSROOT)\inc;$(_COREOSROOT)\nk\inc

        !IFDEF DP_SETTINGS
        CDEFINES=$(CDEFINES) -DDP_SETTINGS=$(DP_SETTINGS)
        !ENDIF

        ASM_SAFESEH=1
        CDEFINES=$(CDEFINES) -Gs100000 -DENCODE_GS_COOKIE

    Read the article

  • Folders in SQL Server Data Tools

    - by jamiet
    Recently I have begun a new project in which I am using SQL Server Data Tools (SSDT) and SQL Server Integration Services (SSIS) 2012. Although I used SSDT and SSIS fairly extensively while SQL Server 2012 was in the beta phase, I usually find that you don't learn about the capabilities and quirks of new products until you use them on a real project, hence I am hoping to have a lot of experiences to share on my blog over the coming few weeks. In this first such blog post I want to talk about file and folder organisation in SSDT.

    The predecessor to SSDT is Visual Studio Database Projects. When one created a new Visual Studio Database Project, a folder structure was provided with "Schema Objects" and "Scripts" in the root and a series of subfolders for each schema. Apparently a few customers were not too happy with the tool arbitrarily creating lots of folders in Solution Explorer, and hence SSDT has gone in completely the opposite direction: now no folders are created, new objects get created in the root, and it is at your discretion where they get moved to.

    After using SSDT for a few weeks I can safely say that I preferred the older way, because I never used Solution Explorer to navigate my schema objects anyway, so it didn't bother me how many folders it created. Having said that, the thought of a single long list of files in Solution Explorer without any folders makes me shudder, so on this project I have been manually creating folders in which to organise files, and I have tried to mimic the old way as much as possible by creating two folders in the root: one for all schema objects and another for pre/post-deployment scripts.

    This works fine until different developers start to build their own different subfolder structures; if you are OCD-inclined like me this is going to grate on you eventually, and hence you are going to want to move stuff around so that you have consistent folder structures for each schema and (if you have multiple databases) each project. Moreover, new files get created with a filename of the object name + ".sql", and often people like to have an extra identifier in the filename to indicate the object type.

    The overall point is this: files and folders in your solution are going to change. Some version control systems (VCSs) don't take kindly to files being moved around or renamed, because they recognise the renamed or moved file simply as a new file, and when they do that you lose the revision history which, to my mind, is one of the key benefits of using a VCS in the first place. On this project we have been using Team Foundation Server (TFS), and while it pains me to say it (as I am no great fan of TFS's version control system) it has proved invaluable when dealing with the SSDT problems that I outlined above, because it is integrated right into the Visual Studio IDE. Thus the advice from this blog post is:

    If you are using SSDT, consider using a Visual-Studio-integrated VCS that can easily handle file renames and file moves.

    I suspect that fans of other VCSs will counter by saying that their VCS weapon of choice can handle renames and file moves quite satisfactorily, and if that's the case... great... let me know about them in the comments. This blog post is not an attempt to make people use one particular VCS, only to make people aware of this issue that might arise when using SSDT.

    More to come in the coming few weeks!

    @jamiet

    Read the article

  • Some PowerShell goodness

    - by KyleBurns
    Ever work somewhere where processes dump files into folders to maintain an archive? Me too, and Windows Explorer hates it. Very often I find myself needing to organize these files into subfolders so that I can get at files without locking up Windows Explorer, and my answer used to be to write a program in something like C# to do the job. These programs typically enumerate the files in a folder and move each file to a subdirectory named based on a datestamp. The last such program I wrote had to use lower-level Win32 API calls to perform the enumeration, because it appears the standard .NET calls use the same method of enumerating the directories that Windows Explorer chokes on when dealing with a large number of entries in a particular directory, so a simple task was accomplished with a lot of code.

    Of course, this little utility was just something I used to make my life easier and "not a production app", so it was in my local source folder and not source control when my hard drive died. So... as I was getting ready to re-create it, I thought it might be a good idea to play with PowerShell a bit, something I had been wanting to do but had not yet met a requirement to make me do it. The resulting script was amazingly succinct; even with the flexibility of parameterization and line breaks added for readability, it was only about 25 lines long. Here's the code, with discussion following:

        param(
            [Parameter(
                Mandatory = $false,
                Position = 0,
                HelpMessage = "Root of the folders or share to archive.  Be sure to end with appropriate path separator"
            )]
            [String] $folderRoot="\\fileServer\pathToFolderWithLotsOfFiles\",

            [Parameter(
                Mandatory = $false,
                Position = 1
            )]
            [int] $days = 1
        )

        dir $folderRoot|?{(!($_.PsIsContainer)) -and ((get-date) - $_.lastwritetime).totaldays -gt $days }|%{
            [string]$year=$([string]$_.lastwritetime.year)
            [string]$month=$_.lastwritetime.month
            [string]$day=$_.lastwritetime.day
            $dir=$folderRoot+$year+"\"+$month+"\"+$day
            if(!(test-path $dir)){
                new-item -type container $dir
            }
            Write-output $_
            move-item $_.fullname $dir
        }

    The script starts by declaring two parameters. The first parameter holds the path to the folder that I am going to be sorting into subdirectories. The path separator is intended to be included in this argument because I didn't want to mess with determining whether the path was local or UNC and picking the right separator in code, but this could be easily improved upon using Path.Combine, since PowerShell has access to the full framework libraries. The second parameter holds a minimum age in days for files to be moved out of the root folder.

    The script then pipes the dir command through a filter to include only files (by excluding containers) and, of those, only entries that meet the age requirement based on the last-modified datestamp. For each of those, the datestamp is used to construct a folder name in the format YYYY\MM\DD (if you're in an environment where even a day's worth of files needs further dividing, you could make this more granular) and the folder is created if it does not yet exist. Finally, the file is moved into the directory.

    One of the things that was really cool about using PowerShell for this task is that the new-item command is smart enough to create the entire subdirectory structure with a single call. In previous code that I have written to do this kind of thing, I would have to test the entire tree leading down to the subfolder I wanted, leading to a lot of branching code that detracted from being able to quickly look at the code and understand the job it performs.

    Overall, I have to say I'm really pleased with what has been done making PowerShell powerful and useful.
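    A hypothetical invocation, assuming the script above is saved as Archive-OldFiles.ps1 (the file name is mine, not the author's; both parameters are positional):

        # move everything older than 7 days into yyyy\mm\dd subfolders
        .\Archive-OldFiles.ps1 "\\fileServer\pathToFolderWithLotsOfFiles\" 7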

    Read the article

  • Sharing a session between vBulletin forum and status.net microblogging platform

    - by jaz
    Hello, I need to integrate vBulletin 4.0.3 Publishing Suite with the status.net microblogging platform. The first thing I need to do is make the two share one session, so a user logged in to the vBulletin forums will also be logged in to status.net and vice versa.

    I have installed the different vBulletin components under different subdomains:

    - forums.sample.com - vBulletin forums
    - blogs.sample.com - vBulletin blogs
    - sample.com - vBulletin content management

    All of these point to the same place (.../public_html/index.php), which includes the respective PHP file (content.php for sample.com, blog.php for blogs.sample.com, forum.php for forums.sample.com) depending on $_SERVER['HTTP_HOST']. I have configured vBulletin to use a single cookie domain (.sample.com) for all three of these domains, so visiting different domains doesn't break the session.

    I also have status.sample.com, which is the subdomain where status.net is installed. The subdomain configuration is different: the document root is actually a subfolder (.../public_html/status/) in sample.com.

    Now, can you please give me some pointers on how to make all these subdomains share a single session? I'm not sure if it helps, but as I understand it, status.net does no custom session handling by default, though it is possible to turn it on so it will start storing session data in a database table called "session". Any tips will be appreciated. Thank you.
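    If both applications can be pushed onto PHP's native session handling, aligning these settings for every vhost is one starting point (hedged; vBulletin normally keeps its own database-backed sessions, so bridging code will likely still be needed, and the cookie name below is hypothetical):

        ; php.ini or per-vhost configuration, identical on all subdomains
        session.name          = SHAREDSESSID        ; hypothetical shared cookie name
        session.cookie_domain = .sample.com         ; cookie visible to every subdomain
        session.save_path     = "/var/lib/php/shared_sessions"   ; one store for all vhosts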

    Read the article

  • Image not showing in UIImageView in Interface Builder / iPhone

    - by dbonneville
    I have a UIView with a UIImageView dragged onto the view. All of a sudden, for all my xibs, the image no longer shows up in Interface Builder; there is a blue X. However, when the project builds, the image is there. At one point, I deleted and regenerated all my images and moved some into a subfolder in Xcode. Normally, when you go to select an image for a UIImageView, IB allows you to pick from any image in the project, but I can't see any of the images I put in that folder in the dropdown anymore. All I see in the dropdown in the Inspector is the one image I want, but that is also the one that is not showing up. And like I said, if I build for the device or simulator, it all works. There is some cache or something screwed up somewhere. I cleared the caches and rebuilt; everything builds with no errors or warnings. But... I can't see any other images, and IB still thinks it's missing the image that is clearly selected in the dropdown. So how do I get Xcode and IB back on track, so they see the assets the XIBs should properly be seeing?

    Read the article

  • Ajax Minifier in Visual Studio: include all JavaScript files

    - by Michael
    I am using the Ajax Minifier (http://www.ajaxprojects.com/ajax/tutorialdetails.php?itemid=766) and have embedded it in the csproj file for use in Visual Studio 2008 (not the free version). I have two folders, Content and Scripts, directly under the root of the project. The Content folder has subfolders, and I would like to include all of these as well (if I have to manually add each subfolder, that is fine too). Currently, my csproj file looks like this (included within the Project tags as instructed). There are no build errors; the files simply do not get minified. (I've enabled Project - View All Files.)

        <Import Project="$(MSBuildExtensionsPath)\Microsoft\MicrosoftAjax\ajaxmin.tasks" />
        <Target Name="AfterBuild">
          <ItemGroup>
            <JS Include="Scripts\*.js" Exclude="Scripts\*.min.js;" />
            <JS Include="Content\**\*.js" Exclude="Content\**\*.min.js;" />
          </ItemGroup>
          <AjaxMin SourceFiles="@(JS)" SourceExtensionPattern="\.js$" TargetExtension=".min.js" />
        </Target>

    How would I edit the csproj file in order to include these folders?

    Read the article

  • TFS 2010 - TF14040 The Folder may not be checked out.

    - by Patricker
    I have a .NET 4 website in VS2010, stored in a TFS 2010 team project. I need to add a reference to System.Data.Linq.dll to the website: I am referencing a LINQ DataContext that is defined in another project, and I get build errors saying that I need the reference to System.Data.Linq. I go up to the "Add Reference" menu option and add it like I would any normal reference, and it even shows up in the web.config and in the property pages for the website... BUT if I build, I still get the same error.

    So I found a place in my code where I was referencing the LINQ Count function; the editor told me it was invalid because I was missing a reference and offered to add the reference automatically. I told it to add the reference automatically, and it is at this point that I get the error mentioned in the subject:

        TF14040: The folder $/Folder/Subfolder may not be checked out. No items were checked out

    I've done some research online but haven't been able to find much. I saw on a blog that making the folder not read-only fixed the issue for the author, but it didn't seem to work for me, unless I misunderstood something. I tried loading the project from source control onto a fresh computer where that project had never been loaded before, and I can reproduce the issue the same way. Help would be greatly appreciated.

    Read the article

  • Heroku and Github integration (how to structure the project)

    - by Noah
    I'm creating a webservice; I want to store the source on GitHub and run the app on Heroku. I haven't seen my exact scenario addressed anywhere on the 'net so far, so I'll ask it here. I want to have the following directory structure:

        /project
          .git
          README        <-- project readme file
          TODO.otl      <-- project outline
          ...           <-- other project-related stuff
          /my_rails_app
            app
            config
            ...
            README      <-- rails' readme file

    In the above, project corresponds to http://github.com/myuser/project, and my_rails_app is the code that should be pushed to Heroku. Do I need a separate branch for the rails app, or is there a simpler way that I'm missing? I guess my project-related non-rails files could live in my_rails_app, but the rails README already lives there, and it seems inconsistent to overwrite it. However, if I leave it, my GitHub page for the rails app will show the rails readme, which makes no sense.

    Thanks,
    Noah

    P.S. I tried just setting it up as described above and running git push heroku from the main project folder. Of course, heroku doesn't know I want to deploy the subfolder:

        -----> Heroku receiving push
         !     Heroku push rejected, no Rails or Rack app detected.
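    One possible approach, hedged since it depends on having the subtree command available (it ships with modern git and previously lived in contrib): keep the full project on GitHub and push only the subfolder to Heroku:

        # run from /project: pushes the my_rails_app subtree to heroku's master branch
        git subtree push --prefix my_rails_app heroku master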

    Read the article

  • Apache Tomcat under Windows: Changing webapps default directory

    - by Maerch
    I am deploying my Java application with Ant. Unfortunately, my test deployment on the local machine doesn't work because of Vista: the Program Files directories are protected, and I don't want to start Ant or Eclipse as an admin. So I had the idea to move my webapps folder to a workspace subfolder, so I can use relative paths in Ant as well. The solution seems to be to modify the Host element in server.xml. On Linux it isn't such a big deal:

        <Host name="localhost" appBase="/path/to/webapps"
              unpackWARs="true" autoDeploy="true"
              xmlValidation="false" xmlNamespaceAware="false">

    But on Windows I can't get it right. I tried every possible combination I could imagine, like:

        C://Users//maerch//Workspaces//Tomcat6.0_webapps
        C:/Users/maerch/Workspaces/Tomcat6.0_webapps
        C:\Users\maerch\Workspaces\Tomcat6.0_webapps
        C:\\Users\\maerch\\Workspaces\\Tomcat6.0_webapps
        C://Users//maerch//Workspaces//Tomcat6.0_webapps\\
        C:/Users/maerch/Workspaces/Tomcat6.0_webapps/
        C:\Users\maerch\Workspaces\Tomcat6.0_webapps\
        C:\\Users\\maerch\\Workspaces\\Tomcat6.0_webapps\\

    The path itself is correct, but it doesn't work. There are also no error messages in the log files, and the browser shows no 404 or anything else, just a white page without a title and so on. Can anyone help?
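    For what it's worth, Tomcat on Windows normally accepts plain forward slashes in server.xml, so the expected form would be the sketch below (the path is taken from the question; if this still gives a blank page, it may be worth checking that the edited server.xml is the one the running Tomcat instance actually loads):

        <Host name="localhost" appBase="C:/Users/maerch/Workspaces/Tomcat6.0_webapps"
              unpackWARs="true" autoDeploy="true"
              xmlValidation="false" xmlNamespaceAware="false">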

    Read the article

  • ASP.NET MVC View Engine Resolution Sequence

    - by intangible02
    I created a simple ASP.NET MVC 1.0 application. I have a ProductController which has one action, Index. In the views, I created a corresponding Index.aspx under the Product subfolder. Then I referenced the Spark DLL and created Index.spark under the same Product view folder. Application_Start looks like this:

        protected void Application_Start()
        {
            RegisterRoutes(RouteTable.Routes);
            ViewEngines.Engines.Clear();
            ViewEngines.Engines.Add(new Spark.Web.Mvc.SparkViewFactory());
            ViewEngines.Engines.Add(new WebFormViewEngine());
        }

    My expectation is that since the Spark engine is registered before the default WebFormViewEngine, the Spark engine should be used when browsing the Index action of the Product controller, and WebFormViewEngine should be used for all other URLs. However, testing shows that the Index action of the Product controller also uses the WebFormViewEngine. If I comment out the registration of the WebFormViewEngine (the last line in the code), I can see that the Index action is rendered by the Spark engine and the rest of the URLs generate an error (since the default engine is gone), which proves that all my Spark code is correct. Now my question is: how is the view engine resolved? Why does the registration sequence not take effect?

    Read the article

  • How do I reference an unsigned assembly from a VSTO Word Doc project?

    - by Gishu
    I created a new project in VS2008, project type Visual C# > Office > 2007 > Word 2007 Document. I added some code and got Word to jump through a few custom hoops; all fine. Now I need to reference another assembly (CopyLocal set to false) which is not signed. So I add the project reference, and now the project will not build, complaining:

        error MSB3188: Assembly 'X.dll' must be strong signed in order to be marked as a prerequisite.

    The error code page is concise (I'm now accustomed to this). I've been googling and reading posts ever since, with no luck. How do I get around this? Or is the hidden commandment that all references (for VSTO?) must be strong-named/signed? I cannot sign X.dll and be done with it, because it is a binary I don't control, and it depends on another bunch of unsigned DLLs; I can't set off a chain sign reaction.

    Update: I solved the build issue by turning CopyLocal=true. But this meant dumping the referenced X.dll and all its dependencies into the bin\debug folder... ughh! I tried creating a subfolder called bin\debug\refExecs and referencing X.dll with CopyLocal=false from there. The error message was back.

    Read the article

  • GoDaddy Subdomain Hosting Issue/Question with Disk Access (C#/ASP.NET 3.5)

    - by Vogel
    This isn't a very complicated scenario really, but as I start to type out the problem I'm realizing how convoluted it can become textually. Let me try to be very clear.

    First, the setup: I have a C#/ASP.NET web application that is publicly facing on my main domain (www); let's call it www.mysite.com. Nothing fancy, just a front end that connects to SQL to display records. Then I have a second C#/ASP.NET web application, secured using forms authentication, running on a subdomain; let's call it admin.mysite.com. This is a very lightweight CMS system to administer the public site.

    Now, the problem: both of these sites run fine for basic tasks. However, my problem arises when I try to gain access to the file system for uploading. GoDaddy requires subdomains to run as virtual directories under the main application in IIS (so the subdomains actually resolve/redirect to www.mysite.com/admin when you type in admin.mysite.com), but because of this I am unable to write to my website root from the subfolder.

    Let me explain a little more. The CMS system (running as a virtual directory) gives the admin the ability to upload photos for display on the main site, the target folder of which is www.mysite.com/images. When attempting disk access from the root app, I am able to write to the virtual directory, but I cannot do the opposite, that is, write to the root from the virtual directory: I get security violations. If I can only upload to the /admin/ virtual directory, the entire point is moot, because it's a secured folder that the public can't see!

    The only solution I can think of is to upload the files to the /admin/ virtual directory, then call a URL in the root that moves files from /admin/ back to the root, but that is entirely ghetto. I hope this post makes sense. Anyone else experience anything like this? The bottom line is that it seems virtual directories ONLY have access to themselves, and not their parent directories, no matter what credentials are used. Thanks!

    Read the article

  • How to properly force a Blackberry Java application to install using Loader.exe

    - by Kevin White
    I want to include the Application Loader process in a software installation, to ensure that users get our software installed on their BlackBerry by the time our installer finishes. I know this is possible, because Aerize Card Loader (http://aerize.com/blackberry/software/loader/) does this: when you install their software, if your BlackBerry is connected, the Application Loader comes up and forces the .COD file to install to the device.

    I can't make it work. Looking at RIM's own documentation, I need to:

    1. Place the ALX and COD files into a subfolder here: C:\Program Files\Common Files\Research In Motion\Shared\Applications\
    2. Add a path to the ALX file in HKCU\Software\Research In Motion\Blackberry\Loader\Packages
    3. Index the application, by executing this at the command line: loader.exe /index
    4. Start the force load, by doing this: loader.exe /defaultUSB /forceload

    When I execute that last command, the Application Loader comes up and says that all applications are up to date and nothing needs to be done. If I execute loader.exe by double-clicking on it (or typing the command with no parameters), I get the regular Application Loader wizard. It shows my program listed, but unchecked. If I check it and click Next, it will install to the BlackBerry. (This is the part I want to avoid, and that Aerize Card Loader's install process avoids.) What am I missing? It appears that the Aerize installer is doing something different, but I haven't been able to ascertain what.
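    For step 2, the registry entry presumably looks something like the sketch below (heavily hedged: the value name and data format are assumptions, not from RIM's documentation):

        Windows Registry Editor Version 5.00

        ; hypothetical package registration; adjust the name and path to your ALX
        [HKEY_CURRENT_USER\Software\Research In Motion\Blackberry\Loader\Packages]
        "MyApp"="C:\\Program Files\\Common Files\\Research In Motion\\Shared\\Applications\\MyApp\\MyApp.alx"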

    Read the article

  • Steps to Investigate Cause of Web.Config Duplicate Section

    - by pauly
    Symptoms, for a .NET 2.0 integrated app pool in IIS:

    - Double-clicking to view any web.config section results in the error dialog: "There was an error while performing this operation.... Filename... web.config... Error: There is a duplicate..."
    - Browsing to the URL displays an HTTP 500.19 internal server error: "There is a duplicate... 'system.web.extensions/scripting/scriptResourceHandler' section defined...."
    - Running the app from VS 2008, an "Unable to start debugging on the web server..." dialog is displayed.

    Things tried:

    - Looked at other application directories on the same IIS server; no problem viewing web.config contents or serving up those apps.
    - Removed and re-added the application in IIS.
    - Checked out a new version of the source code.
    - Reverted to prior versions of the web.config file.
    - Looked for web.config files that might have duplicate sections in:
      - the Inetpub root
      - C:\Windows\Microsoft.NET\Framework\v2.0.50727\CONFIG\machine.config
      - the "Views" subfolder of the ASP.NET MVC app
    - Checked out the source code to another dev machine and set up an IIS 7 app folder there; no problem with web.config.

    Question: if the reason for this error is another web.config file, where else should I look? Are there other reasons for these symptoms?
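    One way to sweep every configuration file that could contribute to the merged result (a hedged sketch; adjust the site's physical path as needed):

        rem /s = recurse, /i = ignore case, /m = print matching file names only
        findstr /s /i /m "system.web.extensions" "C:\inetpub\*.config"
        findstr /s /i /m "system.web.extensions" "C:\Windows\Microsoft.NET\Framework\v2.0.50727\CONFIG\*.config"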

    Read the article

< Previous Page | 12 13 14 15 16 17 18  | Next Page >