Search Results

Search found 19157 results on 767 pages for 'shared folder'.

Page 177/767 | < Previous Page | 173 174 175 176 177 178 179 180 181 182 183 184  | Next Page >

  • Change the Views location

    - by Vinni
    I am developing a website in MVC 2.0 and I want to change the Views folder location. I want to keep the Views folder inside other folders, but when I try to do so I get the following error: "The view 'Index' or its master was not found. The following locations were searched: ~/Views/Search/Index.aspx ~/Views/Search/Index.ascx ~/Views/Shared/Index.aspx ~/Views/Shared/Index.ascx. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code." My Views folder will be at ~/XYZ/ABC/Views instead of ~/Views. Will I run into any problems if I change the default Views folder location? Do I need to change anything in the HTML helper classes? I don't know much about MVC, as this is my first project, and I don't want to take risks. Please help me out.
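
    A typical way to handle this in ASP.NET MVC 2 is to register a view engine whose location formats point at the new folder. The sketch below is illustrative only (the ~/XYZ/ABC/Views path comes from the question; everything else assumes the stock WebFormViewEngine); HTML helpers keep working unchanged because views are always resolved through the registered engines.
        // Global.asax.cs -- sketch, assuming ASP.NET MVC 2's standard view engine hooks
        using System.Web.Mvc;
        using System.Web.Routing;

        public class MvcApplication : System.Web.HttpApplication
        {
            protected void Application_Start()
            {
                ViewEngines.Engines.Clear();
                ViewEngines.Engines.Add(new WebFormViewEngine
                {
                    // {1} = controller name, {0} = view name
                    ViewLocationFormats = new[]
                    {
                        "~/XYZ/ABC/Views/{1}/{0}.aspx", "~/XYZ/ABC/Views/{1}/{0}.ascx",
                        "~/XYZ/ABC/Views/Shared/{0}.aspx", "~/XYZ/ABC/Views/Shared/{0}.ascx"
                    },
                    MasterLocationFormats = new[]
                    {
                        "~/XYZ/ABC/Views/{1}/{0}.master", "~/XYZ/ABC/Views/Shared/{0}.master"
                    },
                    PartialViewLocationFormats = new[]
                    {
                        "~/XYZ/ABC/Views/{1}/{0}.ascx", "~/XYZ/ABC/Views/Shared/{0}.ascx"
                    }
                });
                RegisterRoutes(RouteTable.Routes);   // existing route registration
            }

            // RegisterRoutes(...) stays as generated by the MVC 2 project template
            static void RegisterRoutes(RouteCollection routes) { /* ... */ }
        }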

    Read the article

  • Which c# project files should I version control?

    - by DTown
    I have a project I'd like to manage manually via Perforce version control, as I only have the Express edition. What I'm looking for is which files should be excluded from version control, since locking many of the files can cause problems for Visual Studio when compiling and debugging. So far I have included the .cs files (except the Properties folder), .resx files, and .csproj files, and excluded the bin folder, the obj folder, the Properties folder, and the .user file. Let me know if there is something I have excluded that should be included, or if there is a better way to do this.

    Read the article

  • How to set up Sphinx4 with NetBeans

    - by Pradeep
    I have successfully configured Sphinx4 with Eclipse. These are the steps I used: copy my Java and config files to the src folder; put all the necessary jar files in lib; add the lib folder to the root of the project; build those jar files (the JSAPI files too); change the configuration file and give the proper paths; test the Java file. But in NetBeans I really don't understand how to do the corresponding steps. The jar files should be added under "Libraries", right? Then after adding them, how do I build against them? NetBeans doesn't show a src folder, so should all the Java files and configuration files go into the Source Packages folder? Can someone help me with this, please?
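
    The IDE layout matters less than the classpath and the config path. Below is a minimal sketch of the classic Sphinx4 startup (the file name helloworld.config.xml and class name are placeholders); in NetBeans it is enough that the Sphinx jars sit under the project's Libraries node and that this config file is reachable, whether your sources live in src or in Source Packages.
        // Minimal Sphinx4 startup sketch -- config file name and class name are hypothetical.
        import edu.cmu.sphinx.frontend.util.Microphone;
        import edu.cmu.sphinx.recognizer.Recognizer;
        import edu.cmu.sphinx.result.Result;
        import edu.cmu.sphinx.util.props.ConfigurationManager;

        public class HelloSphinx {
            public static void main(String[] args) {
                // Loaded from the classpath, so it works the same in Eclipse and NetBeans.
                ConfigurationManager cm =
                        new ConfigurationManager(HelloSphinx.class.getResource("helloworld.config.xml"));
                Recognizer recognizer = (Recognizer) cm.lookup("recognizer");
                recognizer.allocate();
                Microphone microphone = (Microphone) cm.lookup("microphone");
                microphone.startRecording();
                Result result = recognizer.recognize();
                System.out.println(result != null ? result.getBestFinalResultNoFiller() : "no result");
            }
        }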

    Read the article

  • SSLsample in NSS

    - by karikari
    Since the latest version of NSS does not provide the SSLSample program, I copied the SSLSample folder from an older version of NSS (3.9, 3.12) into the /security/nss/cmd folder inside nss-3.12.4. When I run make nss_build_all in 3.12.4, the other programs generate their own binaries, but my SSLSample folder does not. I would like to know why.
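
    One likely explanation (an assumption, since I have not checked the 3.12.4 tree): the NSS/coreconf build only descends into subdirectories listed in the parent directory's manifest.mn, so a folder that is merely copied in is never visited. A sketch of the kind of change that would be needed:
        # mozilla/security/nss/cmd/manifest.mn -- sketch only; keep the existing variables
        # and directory list, just make sure the copied folder is added to DIRS:
        DIRS = lib \
               SSLsample
        # (the copied SSLsample folder must also bring its own Makefile and manifest.mn
        #  from the old release so the recursive build knows how to descend into it)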

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.
    The Long Road To Stubs
    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:
        4631488 lib/Makefile is too patient: .WAITs should be reduced
    This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object. We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:
        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        iob;
    A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script is used after both objects have been built to compare the real and stub objects, using data from elfdump, and validate that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one. At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stub objects in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:
        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax
    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:
        6916796 OSnet mapfiles should use version 2 link-editor syntax
    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:
        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
            -ztext -zdefs -Bdirect ...
        real        0.019708910
        user        0.010101680
        sys         0.008528431
    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects however was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization. There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
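
    To make the "same command line plus one option" point concrete, here is a purely illustrative sketch (the library name, mapfile name, and object list are hypothetical, not taken from the ON makefiles): the stub link reuses the real link line, adds -z stub, and ld then reads only the mapfile, ignoring any input objects or dependencies named on the command line.
        # real object, linked from compiled objects against its real dependencies
        ld -G -ztext -zdefs -Bdirect -M mapfile-vers -h libfoo.so.1 \
            -o libfoo.so.1 pics/*.o -lc
        # stub object: the identical line plus -z stub; pics/*.o and -lc are ignored
        ld -G -ztext -zdefs -Bdirect -M mapfile-vers -h libfoo.so.1 \
            -o stubs/libfoo.so.1 pics/*.o -lc -z stub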
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefile, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timing between the stub and non-stub cases was just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with
        6993877 ld should produce stub objects
        PSARC/2010/397 ELF Stub Objects
    followed by the work to convert the ON consolidation in snv_161 (February 2011) with
        7009826 OSnet should use stub objects
        4631488 lib/Makefile is too patient: .WAITs should be reduced
    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.
    Conclusions and Looking Forward
    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • How do you remove a site from Sharepoint Designer?

    - by xpda
    I would like to use SharePoint Designer 2007 as an HTML editor. I have a web site with a lot of files in a folder on my hard drive. I do not want SharePoint Designer to make a web site out of this; I just want to use it to edit the HTML files locally. If I ever make a mistake and click on a tool under Sites, such as Summary or Reports, SharePoint Designer decides that my folder is now a web site. From that point on, SharePoint Designer is painfully slow whenever I open a file contained in the folder it decided is my web site, instead of being instantaneous like it was before. I can resolve this situation by renaming the folder containing my web site -- everything gets fast again. I can also fix it by uninstalling and reinstalling SharePoint Designer. Neither of these is a good solution. Is there a place in SharePoint Designer, or in application data or the registry, where I can kill off the SharePoint Designer web site that's associated with a folder on my hard drive?
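
    One thing worth checking (an educated guess, not a documented fix): SharePoint Designer, like FrontPage before it, marks a disk-based folder as a "web site" by dropping hidden _vti_* metadata folders into it. Deleting that metadata, with the folder closed in Designer, should turn it back into a plain folder. A sketch for a Command Prompt:
        REM Back up the folder first; this deletes Designer/FrontPage metadata only.
        cd /d "C:\path\to\your\site\folder"
        for /d /r %d in (_vti_*) do rd /s /q "%d"
        REM (inside a .cmd file, write %%d instead of %d)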

    Read the article

  • Who organizes your Matlab code?

    - by KE
    After reading How to organize MATLAB code?, I had a follow-up question. If you work in a group of Matlab programmers, who enforces the organization of the shared Matlab code and project matfiles? For example, do you have a dedicated Matlab IT person, does the most senior programmer issue guidelines that everyone must follow, or does everyone agree to follow a system? In my small group, each person has their own 'system'. Matlab code and project matfiles are either piled onto a shared drive or tucked away on people's own computers, which makes it hard to recreate work done by another person, or even to locate their code. There were lots of good suggestions in that question on how to get organized, but it seems like someone has to make the trains run on time. Who does it in your group?

    Read the article

  • VI/VIM file handling

    - by Abhimanyu
    Nowadays I'm working with the Vi editor, with the positive attitude that you can do most things using it, unlike other editors. I came across one problem: let's assume I have opened a folder with vi <folder name>, so Vi opens the folder and lists the files in it. I select a file and read its content, then I want to go back to the previous view with the filenames listed, so it is easy to choose another file. But I don't know how to achieve this. I'm hoping there is some way to do it.
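
    Assuming the directory listing comes from Vim's built-in netrw plugin (which is what vi <folder name> normally shows), these commands get you back to it; a quick sketch:
        " Return to the most recent netrw directory listing (:Rex is the short form):
        :Rexplore
        " Or browse the directory containing the current file:
        :Explore
        " Or simply edit the directory itself:
        :e .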

    Read the article

  • SubWebFolder and multiple bin folders with Website model?

    - by OutOFTouch
    Hi, I am looking for some advice on the best approach to sub-web folders and multiple bin folders in the Website Project model, for adding new pages at a later stage without recompiling the core files of a website and without building a full-fledged plug-in framework API. I am aware that I can drop the compiled DLLs into the main bin folder and just copy the new page files to a subfolder, but I am looking for a more organized file/folder approach. Here is how it was done with WAP: "Moving the Code-Behind Assemblies/DLLs to a different folder than /BIN with ASP.NET 1.1" and "Multiple /bin folders in ASP.NET". I should also mention that I see I can still do it the old way with the Website Project model by making the adjustment to the config section mentioned here, but I was wondering if that has any side effects: "AssemblyBinding in Web Config and XMLNS".
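
    For reference, the config adjustment referred to above is the assembly probing path. A minimal sketch of the web.config fragment (the extra folder name bin\modules is hypothetical, and for ASP.NET the probing paths must live under the application root):
        <configuration>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <!-- look for assemblies in bin plus an additional private folder -->
              <probing privatePath="bin;bin\modules" />
            </assemblyBinding>
          </runtime>
        </configuration>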

    Read the article

  • django urlconf or .htaccess trouble

    - by Zayatzz
    Hello, I am running my Django project from a subfolder of a website. Let's say the address my project is meant to open from is http://example.com/myproject/. The myproject folder is the root folder of my user account. In that folder I have an fcgi script that starts my project. The .htaccess file in the folder contains this:
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ mysite.fcgi/$1 [QSA,L]
    The trouble is that in some cases, instead of redirecting the user to a page like http://example.com/myproject/social/someurl/, it redirects to http://example.com/social/someurl/, which does not work. What I want to know is how to fix this problem. Is this a Django problem that I should fix in the urlconf by adding myproject to all URLs, or should I do this with .htaccess? I found a similar question, which, sadly, remains unanswered: http://stackoverflow.com/questions/2321154/how-to-write-htaccess-if-django-project-is-in-subfolder-and-subdomain Alan.
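
    This usually is not a urlconf problem: under FastCGI the SCRIPT_NAME Django sees often comes out empty, so every redirect and reversed URL loses the /myproject prefix. A hedged sketch of the usual fix (FORCE_SCRIPT_NAME is a real Django setting; the value assumes the project really is served under /myproject/):
        # settings.py
        # Prepend this to every URL Django generates (redirects, {% url %}, reverse()).
        FORCE_SCRIPT_NAME = '/myproject'
    After changing it, restart the fcgi process so the new setting is picked up.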

    Read the article

  • Git-svn branch hoses dcommit when using an odd branch structure

    - by Chuck Vose
    I had a boss, past tense, who decided to put svn branches in the same folder as trunk. Normally this wouldn't affect me that much, but since I'm using git-svn it's causing grief. After I did a fetch, it created a folder for each branch in my root folder, so I have three folders: drupal, trunk, and client. The drupal folder is git's master branch; client and trunk are the svn branches. Merging and committing work great, in fact everything git related is working superbly. However, dcommit is totally hosed: it's trying to commit a folder called client and one called trunk. I can't even imagine what havoc this would cause for svn later on. So my question is, what have I done wrong in my .git/config, and is there anything I can do to fix this, or am I going to have to suffer and go back to using svn? Please don't make me go back. I don't think I can take it anymore. Bastard boss knows how to leave a legacy.
        [svn-remote "svn"]
            url = https://svn.mydomain.com/svn/project_name
            fetch = trunk:refs/remotes/trunk
            branches = *:refs/remotes/*
            tags = tags/*:refs/remotes/tags/*
    Normally the branches line would look like this (when using --stdlayout):
        branches = branches/*:refs/remotes/branches/*
    ls output is this:
        $ ls
        client/  docs/  drupal/  sql/  trunk/
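
    One thing that stands out: branches = *:refs/remotes/* tells git-svn that every top-level folder, including trunk, is a branch. A sketch of a narrower mapping, using the brace syntax that git-svn branch globs support, so only the real branch folder is treated as a branch (back up .git/config first, and expect to clean out the old remote refs and re-fetch after changing it):
        [svn-remote "svn"]
            url = https://svn.mydomain.com/svn/project_name
            fetch = trunk:refs/remotes/trunk
            # only 'client' lives at the top level as a branch in this odd layout
            branches = {client}:refs/remotes/branches/*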

    Read the article

  • HtAccess Rewrite Needed

    - by pws5068
    My host will not allow me to change the default folder of my primary domain. I have managed to rewrite http://www.mysite.com to the real folder public_html/mysite.com/www/ with the following code:
        RewriteEngine On
        RewriteRule ^$ /mysite.com/www/ [R=301,L]
    This does successfully load my domain from the subfolder, but the URL becomes http://mysite.com/mysite.com/www/. How can I keep loading requests such as http://mysite.com/index.html from the correct folder shown above, without showing that folder in the client-side URL?
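
    The [R=301] flag is what exposes the folder: it sends an external redirect, so the browser sees the new path. A sketch of an internal rewrite instead (same .htaccess in the document root; the RewriteCond keeps already-rewritten requests from looping):
        RewriteEngine On
        RewriteCond %{REQUEST_URI} !^/mysite\.com/www/
        RewriteRule ^(.*)$ /mysite.com/www/$1 [L]
    With no R flag the rewrite happens inside Apache, so http://mysite.com/index.html stays in the address bar while the file is served from mysite.com/www/.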

    Read the article

  • Why does MAMP set MySQL file permissions to 600?

    - by SergioP
    I usually put MAMP's MySQL /db/db-site-name folder under SVN. When MAMP starts, it gives drw------- (600) permissions to all the files and directories in that folder. I have a problem because one of those folders is the .svn one, which has to be drwxr-xr-x (755); otherwise I can't access the SVN working copy with my client. Can anyone help me set up MAMP properly?
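
    I don't know of a MAMP setting that excludes the .svn metadata from its permission reset, so a common workaround is to restore the modes after MAMP starts (the data-directory path below is a guess; adjust it to your MAMP layout), or simply to keep the working copy outside the MySQL data directory and copy or symlink the data in:
        #!/bin/sh
        # restore sane permissions on the Subversion metadata after MAMP resets them
        DB_DIR="/Applications/MAMP/db/mysql/db-site-name"   # hypothetical path
        find "$DB_DIR" -type d -name .svn -exec chmod 755 {} +
        find "$DB_DIR" -path '*/.svn/*' -type f -exec chmod 644 {} +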

    Read the article

  • Find localized Windows strings

    - by DougN
    I need to find some strings that the current version of Windows is using. For example, when I create a new folder, it is initially named "New Folder" on English Vista. I need to programmatically find what that folder would be named in any language and version of Windows that I might be running on. Anyone have any ideas how to do that?

    Read the article

  • Very unusual and weird problem with gVim 7.2

    - by Fabian
    After installing gVim and running gvim from the Run window, if I type :cd followed by a tab, I get \AppData, \Application Data, etc., which basically means I'm in my $HOME directory (C:\Users\Fabian). The weird thing is that I do not have an \Application Data folder there. But if I run gvim.exe from its installation folder and type :cd followed by tab, I get \autoload, \colors, etc., which means I'm in the installation folder. And if I pin gvim.exe to the taskbar, then upon launch, typing :cd then tab gives me \Dictionaries, and hitting tab again gives a beep; I think in that last scenario I'm in some Adobe folder. Does anybody know how to fix this weird issue? I'd like to pin it to the taskbar and, upon launch, start in the $HOME directory (C:\Users\Fabian).
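
    Two common ways to pin the start directory down, sketched under the assumption that $HOME is already C:\Users\Fabian: set the pinned shortcut's "Start in" field to that folder, or have your _vimrc change directory whenever gVim is launched without file arguments:
        " in ~/_vimrc -- start in $HOME when gVim is launched with no files on the command line
        if argc() == 0
          cd $HOME
        endif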

    Read the article

  • Want to improve my simple site/db backup script with auto removal of old backups.

    - by Robert Robb
    I have the following simple script for backing up my website files and db. The script is run each day via a cron job.
        #!/bin/sh
        NOW=$(date +"%Y-%m-%d")
        mysqldump --opt -h localhost -u username -p'password' dbname > /path/to/folder/backup/db-backup-$NOW.sql
        gzip -f /path/to/folder/backup/db-backup-$NOW.sql
        tar czf /path/to/folder/backup/web-backup-$NOW.tgz /path/to/folder/web/content/
    It works great, but I don't want loads of old backups clogging my system. How can I modify the script to remove any backups older than a week when the script is run?
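
    A minimal way to do it, assuming a find that supports -mtime and -delete (GNU and BSD find both do): append two lines to the script that prune anything matching the backup name patterns and older than seven days.
        # remove backups older than 7 days
        find /path/to/folder/backup -name 'db-backup-*.sql.gz' -mtime +7 -delete
        find /path/to/folder/backup -name 'web-backup-*.tgz'   -mtime +7 -delete
    If your find lacks -delete, print the matches and pass them to rm instead.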

    Read the article

  • Getting started with Document Sets in SharePoint 2010

    - by ybbest
    Folders are widely used in traditional file-based systems, and in the SharePoint world you can create folders in a document library as well. However, there is a new, improved feature in SharePoint called the Document Set, and you can attach metadata to a document set. To get started with Document Sets, you can perform the following steps.
    1. Go to Site Settings >> Site collection features >> Activate the Document Sets feature.
    2. After the Document Sets feature is activated, you will get a new content type called Document Set.
    3. Next, we can create a custom content type called Loan Application Document Set that inherits from the Document Set content type.
    4. Then I create a new column called Application Number.
    5. Add this field to the Loan Application content type.
    6. Create a new content type called Loan Contract Form that inherits from the Document content type.
    7. Add the Application Number to the Loan Contract Form content type.
    8. Create a new content type called Loan Application Form that inherits from the Document content type and add Application Number to it (the same step as above).
    9. Go to the Loan Application Document Set content type and open the Document Set Settings.
    10. You can define which content types you would like this document set to contain, and you can also define the default document for each content type. When you create a new document set, those default documents get created automatically in the document set. You can also define the shared fields that are shared across content types; in my case I define the Application Number and description as my shared fields. Finally, you can define the fields that you'd like to show on the document set welcome page.
    11. Now create a new document library, attach those content types to the document library, and create a new loan application document set.
    12. You will see the default documents created in the document set. If you update the Application Number on the document set, the field will get updated in the documents inside the document set as well.
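
    If you prefer to script step 1 rather than click through Site Settings, here is a PowerShell sketch (run from the SharePoint 2010 Management Shell; the site URL is hypothetical and the feature's internal name is assumed to be "DocumentSet"):
        # activate the site collection feature that provides the Document Set content type
        Enable-SPFeature -Identity "DocumentSet" -Url "http://intranet/sites/loans"

        # confirm it is active
        Get-SPFeature -Site "http://intranet/sites/loans" |
            Where-Object { $_.DisplayName -eq "DocumentSet" }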

    Read the article

  • Make Apache show a different index.html, also in the browser's address bar

    - by Marco Demaio
    I have the following folder tree on my shared hosting server:
        www.somesite.com
        |
        |_ public_html (document folder)
           |_ .htaccess (Apache file)
           |_ inde.html (page shown by the server now when someone looks for www.somesite.com)
           |
           |_ site_editor (folder)
           |  |_ login.html (site editor control panel)
           |  |_ file1.php
           |  |_ file2.php
           |  |_ ...
           |
           |_ website (folder)
              |_ index.html (website HOME PAGE)
              |_ page1.html
              |_ page2.html
              |_ etc.
    Now when someone looks for www.somesite.com, the web server looks for index.html in the public_html folder. I would like the web server to show website/index.html when someone looks for www.somesite.com, and I would like the browser bar to show only www.somesite.com/index.html and not www.somesite.com/website/index.html. I would also like the web server to show site_editor/login.html when someone looks for www.somesite.com/site_editor/. Is it possible to accomplish both tasks by setting up .htaccess files in some way? Thanks!
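
    Both can be done with an internal rewrite in public_html/.htaccess; internal rewrites do not change the address bar. A sketch, assuming mod_rewrite is available on the host:
        RewriteEngine On
        # leave the site editor alone so /site_editor/ keeps working as-is
        RewriteRule ^site_editor(/.*)?$ - [L]
        # internally serve everything else from the website/ folder
        RewriteCond %{REQUEST_URI} !^/website/
        RewriteRule ^(.*)$ website/$1 [L]
    With that in place, www.somesite.com and www.somesite.com/index.html are both served from website/index.html while the URL the visitor sees stays unchanged.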

    Read the article

  • actionscript 3.0 load images dynamically

    - by johnny
    I am making a photo slideshow in Flash and would like to be able to load images dynamically from a folder, so that whenever I have a new photo I can just stick it in the folder and have my SWF file read from that folder and update the slideshow. Is this doable in ActionScript 3.0? If so, any pointers would be helpful. Thanks!
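
    It is doable, with one caveat: ActionScript cannot list a folder's contents by itself, so the usual trick is to keep a small manifest (an XML file, or a server-side script that prints the folder listing) next to the photos and load from that. A sketch as a frame script, where photos/images.xml is a hypothetical manifest you maintain or generate:
        import flash.display.Loader;
        import flash.events.Event;
        import flash.net.URLLoader;
        import flash.net.URLRequest;

        var listLoader:URLLoader = new URLLoader();
        listLoader.addEventListener(Event.COMPLETE, onListLoaded);
        listLoader.load(new URLRequest("photos/images.xml"));

        function onListLoaded(e:Event):void {
            var manifest:XML = new XML(listLoader.data);
            // expects entries like <image file="photo1.jpg"/>
            for each (var node:XML in manifest.image) {
                var imgLoader:Loader = new Loader();
                imgLoader.load(new URLRequest("photos/" + node.@file));
                addChild(imgLoader); // position/queue these for the slideshow as needed
            }
        }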

    Read the article

  • Chromium/Chrome tabs crashes on Ubuntu/Edubuntu 12.04.3

    - by Joakim Waern
    For some reason the tabs of Chromium and Chrome have started to crash ("He's dead, Jim") when running on Ubuntu or Edubuntu 12.04.3. It seems to happen only when I log in. I tried the browsers on Ubuntu 13.10 and everything worked fine. Any suggestions? PS: The browser itself doesn't crash, just the tabs, and I can open new ones (but they also crash after a while). Here's the output of the free command in the terminal after a tab crash:
                     total       used       free     shared    buffers     cached
        Mem:       2041116    1722740     318376          0       2868     267256
        -/+ buffers/cache:    1452616     588500
        Swap:      2085884        104    2085780
    Compared to the output before the crash:
                     total       used       free     shared    buffers     cached
        Mem:       2041116    1967108      74008          0      14632     253036
        -/+ buffers/cache:    1699440     341676
        Swap:      2085884         40    2085844

    Read the article

  • Not sure about ACL permissions

    - by Darko Miletic
    I'm writing up something about ACL usage on CentOS, but since I don't have a box ready yet I would like to ask something. Let us assume we have a folder /var/www/test. If I do this in terms of permissions:
        /bin/chown -R root:root /var/www/test/
        /bin/chmod -R u=rwx,go= /var/www/test/
        /usr/bin/setfacl -R -m u:apache:rwx /var/www/test/
    will the user apache be able to change the owner of the test folder, or of any particular file within that folder? If the answer is yes, shall I then use a group instead of a user?
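
    Short answer, as a sketch you can verify on a test box: no. An rwx ACL entry never grants chown (changing ownership is reserved for root), though it does let apache create, modify, and delete entries inside the tree. If you also want files that apache creates later to carry the ACL, add default ACL entries:
        # grant apache rwx now, and make it the default for anything created under the tree later
        /usr/bin/setfacl -R -m u:apache:rwx,d:u:apache:rwx /var/www/test/
        # inspect the effective permissions
        /usr/bin/getfacl /var/www/test/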

    Read the article

  • Missing AVFoundation.framework

    - by Alex
    Hi, AVFoundation.framework is not where the documentation says it should be. I have iPhone SDK 2.2 installed (I never had previous SDK versions installed) and I can't find that folder under /System/Library/Frameworks. I did find it under the /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS2.2.sdk/System/Library/Frameworks/ folder, but if I add it from that location, the compiler can't find the header files. I tried copying the entire AVFoundation.framework folder to /System/Library/Framework, but it still can't find the header files. How can I use the AVFoundation classes? Thanks, Alex
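
    Frameworks for the iPhone SDK are meant to be linked from inside the SDK, not from /System/Library/Frameworks (that path holds the Mac's own frameworks). The usual route is to add AVFoundation through the target's framework-adding dialog so Xcode resolves both the binary and the headers from the active SDK, after which the normal import works. A quick sketch (the file name beep.wav is a placeholder) using AVAudioPlayer, which is available starting with iPhone OS 2.2:
        #import <AVFoundation/AVFoundation.h>

        // somewhere in a view controller or similar
        NSString *path = [[NSBundle mainBundle] pathForResource:@"beep" ofType:@"wav"];
        NSError *error = nil;
        AVAudioPlayer *player = [[AVAudioPlayer alloc]
                                     initWithContentsOfURL:[NSURL fileURLWithPath:path]
                                                     error:&error];
        [player play];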

    Read the article

  • PeopleSoft at Alliance 2012 Executive Forum

    - by John Webb
    Guest Posting From Rebekah Jackson
    This week I joined over 4,800 Higher Ed and Public Sector customers and partners in Nashville at our annual Alliance conference. I got lost easily in the hallways of the sprawling Gaylord Opryland Hotel. I carried the resort map with me, and I would still stand for several minutes at a very confusing junction, studying the map and the signage on the walls. Hallways led off in many directions, some with elevators going down here and stairs going up there. When I took a wrong turn I would instantly feel stuck, lose my bearings, and occasionally even have to send out a call for help. It strikes me that the theme for the Executive Forum this year outlines a less tangible but equally disorienting set of challenges that our higher education customers' CIOs are facing: Making Decisions at the Intersection of Business Value, Strategic Investment, and Enterprise Technology. The forces acting upon higher education institutions today are not neat, straightforward decision points, where one can glance to the right, glance to the left, and then quickly choose the best course of action. The operational, technological, and strategic factors that must be considered are complex, interrelated, messy…and the stakes are high. Michael Horn, co-author of "Disrupting Class: How Disruptive Innovation Will Change the Way the World Learns", set the tone for the day. He introduced the model of disruptive innovation, which grew out of the research he and his colleagues have done on 'Why Successful Organizations Fail'. Highly simplified, the pattern he shared is that things start out decentralized, take a leap to extreme centralization, and then experience progressive decentralization. Using computers as an example, we started with a slide rule, then developed the computer which centralized in the form of mainframes, and gradually decentralized to mini-computers, desktop computers, laptops, and now mobile devices. According to Michael, you have more computing power in your cell phone than existed on the planet 60 years ago, or was on the first rocket that went to the moon. Applying this pattern to Higher Education means the introduction of expensive and prestigious private universities, followed by the advent of state schools, then by community colleges, and now online education. Michael shared statistics that indicate 50% of students will be taking at least one online course by 2014…and by some measures, that's already the case today. The implication is that technology moves from being the backbone of the campus, the IT department's domain, and pushes into the academic core of the institution. Innovative programs are underway at many schools like Bellevue and BYU Idaho, joined by startups and disruptive new players like the Khan Academy. This presents both threat and opportunity for higher education institutions, and means that IT decisions cannot afford to be disconnected from the institution's strategic plan. Subsequent sessions explored this theme. Theo Bosnak, from Attain, discussed the model they use for assessing the complete picture of an institution's financial health. Compounding the issue are the dramatic trends occurring in technology and the vendors that provide it. Ovum analyst Nicole Engelbert shared her insights next and suggested that incremental changes are no longer an option; instead, fundamental changes are affecting the landscape of enterprise technology in higher ed.
    Nicole closed with her recommendation that institutions focus on the trends in higher education with an eye towards the strategic requirements and business value first. Technology then is the enabler. The last presentation of the day was from Tom Fisher, Sr. Vice President of Cloud Services at Oracle. Tom runs the delivery arm of the Cloud Services group, and shared his thoughts candidly about his experiences with cloud deployments as well as key issues around managing costs and security in cloud deployments. Okay, we've covered a lot of ground at this point, from financial planning, business strategy, and cloud computing, with the possibility that half of the institutions in the US might not be around in their current form 10 years from now. Did I forget to mention that was raised in the morning session? Seems a little hard to believe, and yet Michael Horn made a compelling point. Apparently 100 years ago, 8 of the top 10 education institutions in the world were German. Today, the leading German school is ranked somewhere in the 40's or 50's. What will the landscape be 100 years from now? Will there be an institution from China, India, or Brazil in the top 10? As Nicole suggested, maybe US parents will be sending their children to schools overseas much sooner, faced with the ever-increasing costs of a US based education. Will corporations begin to view skill-based certification from an online provider as a viable alternative to a 4-year degree from an accredited institution, fundamentally altering the education industry as we know it?

    Read the article

  • cakephp, i18n .po files, How to use them correctly

    - by ion
    I have finally managed to set up a multilingual CakePHP site. Although it's not finished, it is the first time I can change the DEFAULT_LANGUAGE in the bootstrap and see the language change. My problem right now is that I cannot understand very well how to use the .po files correctly. According to the tutorials I've used, I need to create a folder /app/locale and inside that folder create a folder for each language in the following format: /locale/eng/LC_MESSAGES. I have done that, and I have also managed to extract a default.pot file using cake i18n extract; it appears that all occurrences of the __() function have been found successfully. In my application I'm using 2 languages: eng and gre. I can see why you would need a separate folder for each language. However, in my case nothing happens when I edit the .po files inside each folder... well, almost nothing. If I edit /app/locale/gre/LC_MESSAGES/default.po I see no language changes. If I edit /app/locale/eng/LC_MESSAGES/default.po then the language changes to the new value (in the translation field) and it does not switch to the other language. What am I doing wrong? I hope I have made myself as clear as possible.
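
    A small sketch that helps when debugging this kind of setup (CakePHP 1.x-era API assumed): set the language explicitly at runtime and echo a translated string, so you can confirm which locale folder is actually being read before worrying about the .po contents. Note that Cake matches the folder name against the locale code it resolves for the language, so it is worth checking that 'gre' is the code Cake actually uses for Greek.
        // e.g. in app_controller.php beforeFilter(), or temporarily in a view
        Configure::write('Config.language', 'gre');      // switch to the Greek catalogue
        echo __('Welcome to my site', true);             // should print the msgstr from
                                                         // app/locale/gre/LC_MESSAGES/default.po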

    Read the article

  • Employee Engagement Q&A with John Brunswick

    - by Kellsey Ruppel
    As we are focusing this week on Employee Engagement, I recently sat down with industry expert and thought leader John Brunswick on the topic. Here is the Q&A dialogue we shared.
    Q: How do you effectively engage employees to drive business value?
    A: Motivation, both extrinsic and intrinsic, combined with the relevancy of various channels to support it. Beyond chaining business strategies like compensation models within an organization, engagement ultimately is most successful when driven by employees' motivations. Business value derived from engagement through technical capabilities can be objectively measured through metrics like the rate and accuracy of problem solving for a given business function or the frequency of innovation created. Providing employees performing "knowledge work" with capabilities that allow them to perform work with a higher degree of accuracy in the same or, ideally, less time adds value for that individual and, in turn, drives their level of engagement to drive business value.
    Q: Organizations with high levels of employee engagement outperform the total stock market index by 22%. Can you comment on why you think this might be?
    A: Alignment through shared purpose. Zappos is an excellent example of a culture that arguably has higher than average levels of employee engagement, and it permeates every aspect of their organization – embodied externally through their customer experience. I recently made my first purchase with them and it was obvious through their web experience, visual design, communication style, customer service and attention to detail down to green packaging, that they have an amazingly strong shared purpose. The Zappos.com 'About page' outlines their "Family Core Values", the first three being "Deliver WOW Through Service, Embrace and Drive Change & Create Fun and A Little Weirdness" – all reflected externally in my interaction with them. Strong shared purpose enables a higher product and service experience, equating to a dedicated customer base, repeat purchases and expanded market share.
    Q: Have you seen any trends in the market regarding employee engagement?
    A: Some companies now see offering a form of social engagement similar to Facebook and LinkedIn as standard communication infrastructure like email or instant messaging. Originally offered as standalone tools, these capabilities now show their value when offered in an integrated fashion in the context of business entities. An emerging area of focus is around employee activities related to their organization on external social platforms, implicitly creating external communities with employees acting on behalf of the brand and interacting with each other (e.g. Twitter). Companies have reached a formal understanding that this now established communication medium requires strategies allowing employees to engage. I have personally met colleagues from Oracle, like Oracle User Experience Director Ultan O'Broin (@ultan), via Twitter before meeting first through internal channels.
    Q: Employee engagement is important, but what about engaging customers and partners?
    A: Over the last few years we have witnessed an interesting evolution from the novelty of self-service to expectations of "intelligent" self-service. From a consumer standpoint, engagement can end up being a key differentiator, especially in mature markets. Customers that perform some level of interaction with a brand develop greater affinity for the brand and have a greater probability of acting as an advocate.
    As organizations move toward a model of deeper engagement, they must ensure that their business is positioned to support deeper relationships, offering potentially greater transparency. From a partner standpoint, greater engagement can lead to new types of business opportunities, much in the way that Amazon.com offers a unified shopping experience that can potentially span various vendors. This same model can be extended to blending services and product delivery models, based on a closeness not easily possible before the increased capability of engagement mechanisms.
    Q: What types of solutions are available to successfully deliver employee engagement?
    A: Solutions enabling higher levels of engagement do so on the basis of relevancy. This relevancy is generally supported by aspects of content management, social collaboration, business intelligence, portal and process management technologies. These technologies can help deliver an experience tailored to a given role or process within an organization that applies equally to work that is structured or unstructured, appearing in the form of functionality as simple as an online employee directory search, knowledge communities supported by social collaboration, as well as more feature-rich business intelligence dashboards and portals.
    Looking to learn more about how to effectively engage your employees? Check out this webcast, or read more from John Brunswick.

    Read the article
