Search Results

Search found 58379 results on 2336 pages for 'create directory'.

  • Changing LDAP schema renders Confluence AD integration inoperable

    - by Maxim V. Pavlov
    I have had our instance of Atlassian Confluence configured to be integrated with our Active Directory. In AD, all the users were being created under the default Users folder in Active Directory Users and Computers. We decided to introduce cleaner separation and created an Organizational Units structure in AD. Under the root we created a Managed OU, created a Users OU beneath it, and moved all user accounts into that Users OU. I thought that to let the Confluence AD integration engine "know" where to look for user accounts now, I only needed to adjust the BaseDN and prepend ou=Managed to it, so that it knows it is looking for cn=Users but under ou=Managed. That didn't work. How should I adjust the LDAP root in a client application so that it can look for users in an OU rather than in the default folder?
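
    A sketch of the likely fix, with dc=example,dc=com standing in for the real domain: the default Users folder is a container addressed as cn=Users, while accounts moved into organizational units are addressed with ou=, so the search base changes completely rather than just gaining a prefix:

        Old base DN:  cn=Users,dc=example,dc=com
        New base DN:  ou=Users,ou=Managed,dc=example,dc=com

    Any user or group search filters configured in Confluence's directory settings may need the same adjustment.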

    Read the article

  • User http does not have write permission on a directory?

    - by dwieeb
    I have a bit of an odd set up, I think. I have groups for each domain my server hosts, and I add the user http to each domain group along with the users that should have access to the groups' domains. In my php script running from a directory 'public_html', I try creating a file: <?php $output = ""; print exec('touch test 2>&1', $output); But I get touch: cannot touch `test': Permission denied and the file is not created. But here, clearly stated, the group has all permissions on the directory: drwxrwxr-x 5 dwieeb example.com 1024 Feb 4 05:19 public_html And here are the permissions on the php file in public_html that is trying to use the exec function: -rw-rw-r-- 1 dwieeb example.com 59 Feb 4 05:19 test.php How is this possible if http is part of the example.com group (as seen from a cat on /etc/group) and the directory has full permissions for the group? ... example.com:x:1000:dwieeb,http I'm stumped. EDIT (since apparently I'm not cool enough to answer my own questions yet): Ah, I found the problem. Yes, I restarted Nginx, but the php-fpm daemon must be restarted as well when http is added to the group for my domain. On Arch Linux: rc.d restart php-fpm
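
    A note on why the restart is needed (a hedged sketch; the pgrep lookup is illustrative): a process picks up its supplementary groups when it starts, so changes to /etc/group are invisible to daemons that are already running. Comparing the two makes the mismatch obvious:

        # groups the user will have at the next login or daemon start
        getent group example.com
        # groups the oldest running php-fpm worker actually holds
        grep '^Groups:' /proc/$(pgrep -o php-fpm)/status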

    Read the article

  • How to add admin users in 389 LDAP, Fedora Directory Server

    - by chandank
    I want to create a couple of admin users who have access to create/delete users in a particular group/Organizational Unit. For example, the user uid=testadmin, ou=people, dc=my,dc=net should have access to create new users and delete users under ou=People,dc=my,dc=net. I tried the ACI below but it did not work: (target = "ldap:///ou=People,dc=my,dc=net")(targetattr = "*") (version 3.0;acl "testadmin Permissions";allow (proxy)(userdn = "ldap:///uid=testadmin,ou=people,dc=my,dc=net");) I am able to add administrative users from the Directory Server console, but this user data is not stored in ldif files and is only stored in the binary database at /var/lib/dirsrv/slap-ldap/db/. The only problem is that these users have full power and I am not sure how to restrict their access.
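
    The ACI above grants only the proxy right, which lets testadmin act on behalf of other users but not add or remove entries; creating and deleting entries needs the add and delete rights (plus write to modify attributes). A sketch of the usual shape, reusing the DNs from the question and to be verified against the access log after applying it with ldapmodify:

        aci: (target = "ldap:///ou=People,dc=my,dc=net")(targetattr = "*")
         (version 3.0; acl "testadmin manages People";
         allow (add, delete, write)
         userdn = "ldap:///uid=testadmin,ou=people,dc=my,dc=net";)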

    Read the article

  • xcopy Not Suppressing File/Directory Query

    - by Daniel Bingham
    Hey folks, I'm attempting to use xcopy to copy over a file from one machine to another on our network as part of a Java program. I'm calling xcopy like this: xcopy "C:\Program Files\path\to\my\file" "\\othermachine\c$\Documents and Settings\<myUserName>\Desktop\Test\path\in\directory\structure\to\file" /e /y /i Because I'm calling it from within Java, I need all the prompts to be suppressed. For the most part, /i and /y have done exactly that. However, for this one file /i fails and I get the file or directory prompt. The result is that it hangs the entire program. I've also tried calling it with /s /t /q appended to the existing options, to no avail. Why isn't /i working to suppress the File or Directory prompt? Is there an order I need to call the options in? Is there something else I need to do? EDIT: I should mention, the file is a text file - single line of text. It does not have an extension. It looks like this: FILE-NAME
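
    As documented, /i tells xcopy to assume the destination is a directory, and it only kicks in when the source is a directory, contains wildcards, or names more than one file; for a single source file whose destination does not exist yet, the prompt still appears. A sketch of the two usual workarounds (paths shortened to placeholders; the answer letter is locale-dependent):

        rem answer the "file or directory?" prompt on stdin (F = file)
        echo F | xcopy "C:\src\myfile" "\\othermachine\c$\dest\myfile" /y
        rem or name only the destination directory (trailing backslash) and let xcopy keep the file name
        xcopy "C:\src\myfile" "\\othermachine\c$\dest\" /y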

    Read the article

  • Linux: accessing thousands of files in a hash of directories

    - by 130490868091234
    I would like to know what is the most efficient way of concurrently accessing thousands of files of a similar size in a modern Linux cluster of computers. I am carrying out an indexing operation on each of these files, so the 4 index files, about 5-10x smaller than the data file, are produced next to the file to index. Right now I am using a hierarchy of directories from ./00/00/00 to ./99/99/99 and I place 1 file at the end of each directory, like ./00/00/00/file000000.ext to ./99/99/99/file999999.ext. It seems to work better than having thousands of files in the same directory, but I would like to know if there is a better way of laying out the files to improve access.
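
    One variant worth considering (a sketch, not benchmarked): derive the fan-out from a hash of the file name instead of the sequential number, which keeps the tree balanced even if the numbering changes; two hex-pair levels already give 65,536 leaf directories:

        # hypothetical helper; the file name and hash width are placeholders
        name=file000123.ext
        h=$(printf '%s' "$name" | md5sum | cut -c1-4)
        dest="${h:0:2}/${h:2:2}"
        mkdir -p "$dest"
        mv "$name" "$dest"/    # the index files would be moved alongside it in the same way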

    Read the article

  • How do I remove 1,000,000 WebsiteCache directories?

    - by harper
    I found that more than 1,000,000 subdirectories have been created in a WebsiteCache directory. I want to remove all these directories. My first approach was to use the command line tool: cd WebsiteCache rmdir /Q /S . This will remove all subdirectories except WebsiteCache itself, since it is the current working directory. I noticed after two hours that the directories starting with A-H had been removed. Why does rmdir remove the directories in alphabetical order? It must take additional effort to do this in order. What is a fast way to delete such an amount of directories?
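
    On the ordering: rmdir is not sorting anything itself; it deletes entries in the order the file system enumerates them, and NTFS keeps its directory index sorted by name, so the walk comes back alphabetical. A sketch of the usual faster approaches, run from the parent of WebsiteCache (the empty-mirror trick is often reported to be quicker on huge trees):

        rd /s /q WebsiteCache
        md WebsiteCache
        rem or mirror an empty directory over it, then remove the leftovers
        md C:\empty
        robocopy C:\empty WebsiteCache /MIR
        rd /s /q C:\empty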

    Read the article

  • Windows script to create directories of 3,000 files

    - by uhpl1
    We have some email archiving that is dumping all the emails into a directory. Because of some performance reasons with the server, I want to setup an automated task that will run a script once a day and if there is more than 3,000 (or whatever number) of files in the main directory, create a new directory with the date and move all the main directory files into it. I'm sure someone has already written something similar, so if anyone could point me at it that would be great. Batch file or Powershell would both be fine.
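
    A minimal PowerShell sketch of the idea (the archive path and the 3,000 threshold are placeholders; -File needs PowerShell 3.0 or later):

        $source = 'D:\MailArchive'
        $files  = Get-ChildItem -Path $source -File
        if ($files.Count -gt 3000) {
            $dest = Join-Path $source (Get-Date -Format 'yyyy-MM-dd')
            New-Item -ItemType Directory -Path $dest -Force | Out-Null
            $files | Move-Item -Destination $dest
        }

    Scheduled daily via Task Scheduler, this keeps the main directory under the chosen limit.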

    Read the article

  • Specify default group and permissions for new files in a certain directory

    - by mislav
    I have a certain directory in which there is a project shared by multiple users. These users use SSH to gain access to this directory and modify/create files. This project should only be writeable by a certain group of users: let's call it "mygroup". During an SSH session, all files/directories created by the current user should by default be owned by group "mygroup" and have group-writeable permissions. I can solve the permissions problem with umask: $ cd project $ umask 002 $ touch test.txt File "test.txt" is now group-writeable, but still belongs to my default group ("mislav", same as my username) and not to "mygroup". I can chgrp recursively to set the desired group, but I wanted to know if there is a way to set a group implicitly, the way umask changes default permissions during a session. This specific directory is a shared git repo with a working copy, and I want git checkout and git reset operations to set the correct mask and group for new files created in the working copy. The OS is Ubuntu Linux. Update: a colleague suggests I should look into getfacl/setfacl of POSIX ACL, but the solution below combined with umask 002 in the current session is good enough for me and is much more simple.
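
    A sketch of the usual answer to this (the setgid bit): a directory with g+s makes new files and subdirectories created inside it inherit its group, which together with umask 002 gives the behaviour described above (group and path names as in the question):

        chgrp -R mygroup project
        chmod -R g+w project
        find project -type d -exec chmod g+s {} +

    For the git side, running git config core.sharedRepository group inside the repository keeps the files git itself creates under .git group-writeable as well.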

    Read the article

  • Is there any trick to join and use Windows 8/8.1 with Samba 4 (4.1.6)?

    - by tenshimsm
    It seems that Samba doesn't like Windows 8 at all. I've followed various tutorials and I can't get Windows 8 to work properly with an Ubuntu Server as domain controller. This week I downloaded Ubuntu 14.04 LTS and set up a quick domain configuration. As usual all other Windows versions (XP and 7) work, but the newest M$ nightmare doesn't. This time it doesn't even join the domain; it keeps saying my username or password is wrong. My /etc/samba/smb.conf:

        # Global parameters
        [global]
            workgroup = DOMAIN
            realm = DOMAIN.LAN
            netbios name = DOM
            server role = active directory domain controller
            dns forwarder = 8.8.8.8
            idmap_ldb:use rfc2307 = yes

        [netlogon]
            path = /var/lib/samba/sysvol/domain.lan/scripts
            read only = No

        [sysvol]
            path = /var/lib/samba/sysvol
            read only = No

        [test]
            directory mode = 0750
            path = /SHARES/test
            read only = no

    Does anyone have a tutorial that really works? I've tried many, each with a different configuration that works only for the people who made it. And is there a way to import my old AD users, computers and IDs so that I won't need to rejoin all the computers?
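
    Some hedged checks that usually pinpoint this kind of join failure (run with the Windows 8 client's DNS pointed at the Samba DC; domain.lan as in the config above):

        host -t SRV _ldap._tcp.domain.lan    # the SRV record must resolve to the DC
        kinit administrator@DOMAIN.LAN       # must obtain a Kerberos ticket without errors
        samba-tool domain level show         # confirm the domain and forest function levels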

    Read the article

  • Central storage for Windows user account home directories: hardware/software needed?

    - by mtkoan
    We have ~120+ users in our network, and are endeavoring to centralize logon authentication and home directory storage server-side. Most of the users are on Windows 2000/XP machines, and a few run Mac OS X. Ideally the solution will be open source: can this all be managed from a Linux server running LDAP and Samba? Or would a hacked NAS box with FreeNAS or similar suffice? Or is Micro$oft's Active Directory really the preference here? Is it viable to store PST files on this server for users to read from and write to? They are very large, ~1.5 GB each. We have no mail server (or money) capable of Exchange or IMAP, only an old POP3. What kind of hardware horsepower and network architecture should we have for this kind of thing?
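
    A minimal smb.conf sketch of the Linux/Samba route (share and workgroup names are placeholders), serving each authenticated user their own home directory:

        [global]
            workgroup = OFFICE
            security = user
            passdb backend = tdbsam

        [homes]
            comment = Home Directories
            browseable = no
            read only = no
            valid users = %S

    One caveat worth noting: Microsoft does not support PST files opened over a network share, and they are notoriously prone to corruption over SMB, so plan on keeping those local or moving that mail elsewhere.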

    Read the article

  • Is it possible to extend the AD schema in a Win2003 DC Server (NOT R2) to support DFSR?

    - by JohannesH
    We're in the process of installing a brand new Windows Server 2008 web cluster and we would like to synchronize some files between the servers. The problem is that the DC in the domain is an old Windows Server 2003 Standard (NOT R2), which apparently lacks some extensions to the AD schema. Is it possible to upgrade the schema without upgrading the DC servers to R2? When I try to create a Replication Group on the 2008 Server I get the following message: --------------------------- Error --------------------------- srv.XXXXXX.XX: The Active Directory Domain Services schema on domain controller activedc07.srv.XXXXXX.XX cannot be read. This error might be caused by a schema that has not been extended, or was extended improperly. See Help and Support Center for information about extending the Active Directory Domain Services schema. Schema version 30 is not supported. --------------------------- OK ---------------------------
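
    A hedged pointer rather than a verified fix: schema version 30 is the Windows Server 2003 RTM level, and DFSR needs the R2 schema or later; the schema can normally be extended without upgrading the DC's operating system by running adprep from newer installation media (2003 R2 or 2008) against the schema master:

        rem run on (or against) the schema master, from the newer media's adprep folder
        adprep /forestprep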

    Read the article

  • Create nice animation on your ASP.NET Menu control using jQuery

    - by hajan
    In this blog post, I will show how you can apply some nice animation effects on your ASP.NET Menu control. ASP.NET Menu control offers many possibilities, but together with jQuery, you can make very rich, interactive menu accompanied with animations and effects. Lets start with an example: - Create new ASP.NET Web Application and give it a name - Open your Default.aspx page (or any other .aspx page where you will create the menu) - Our page ASPX code is: <form id="form1" runat="server"> <div id="menu">     <asp:Menu ID="Menu1" runat="server" Orientation="Horizontal" RenderingMode="List">                     <Items>             <asp:MenuItem NavigateUrl="~/Default.aspx" ImageUrl="~/Images/Home.png" Text="Home" Value="Home"  />             <asp:MenuItem NavigateUrl="~/About.aspx" ImageUrl="~/Images/Friends.png" Text="About Us" Value="AboutUs" />             <asp:MenuItem NavigateUrl="~/Products.aspx" ImageUrl="~/Images/Box.png" Text="Products" Value="Products" />             <asp:MenuItem NavigateUrl="~/Contact.aspx" ImageUrl="~/Images/Chat.png" Text="Contact Us" Value="ContactUs" />         </Items>     </asp:Menu> </div> </form> As you can see, we have ASP.NET Menu with Horizontal orientation and RenderMode=”List”. It has four Menu Items where for each I have specified NavigateUrl, ImageUrl, Text and Value properties. All images are in Images folder in the root directory of this web application. The images I’m using for this demo are from Free Web Icons. - Next, lets create CSS for the LI and A tags (place this code inside head tag) <style type="text/css">     li     {         border:1px solid black;         padding:20px 20px 20px 20px;         width:110px;         background-color:Gray;         color:White;         cursor:pointer;     }     a { color:White; font-family:Tahoma; } </style> This is nothing very important and you can change the style as you want. - Now, lets reference the jQuery core library directly from Microsoft CDN. <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.min.js"></script> - And we get to the most interesting part, applying the animations with jQuery Before we move on writing jQuery code, lets see what is the HTML code that our ASP.NET Menu control generates in the client browser.   <ul class="level1">     <li><a class="level1" href="Default.aspx"><img src="Images/Home.png" alt="" title="" class="icon" />Home</a></li>     <li><a class="level1" href="About.aspx"><img src="Images/Friends.png" alt="" title="" class="icon" />About Us</a></li>     <li><a class="level1" href="Products.aspx"><img src="Images/Box.png" alt="" title="" class="icon" />Products</a></li>     <li><a class="level1" href="Contact.aspx"><img src="Images/Chat.png" alt="" title="" class="icon" />Contact Us</a></li> </ul>   So, it generates unordered list which has class level1 and for each item creates li element with an anchor with image + menu text inside it. If we want to access the list element only from our menu (not other list element sin the page), we need to use the following jQuery selector: “ul.level1 li”, which will find all li elements which have parent element ul with class level1. 
Hence, the jQuery code is:   <script type="text/javascript">     $(function () {         $("ul.level1 li").hover(function () {             $(this).stop().animate({ opacity: 0.7, width: "170px" }, "slow");         }, function () {             $(this).stop().animate({ opacity: 1, width: "110px" }, "slow");         });     }); </script>   I’m using hover, so that the animation will occur once we go over the menu item. The two different functions are one for the over, the other for the out effect. The following line $(this).stop().animate({ opacity: 0.7, width: "170px" }, "slow");     does the real job. So, this will first stop any previous animations (if any) that are in progress and will animate the menu item by giving to it opacity of 0.7 and changing the width to 170px (the default width is 110px as in the defined CSS style for li tag). This happens on mouse over. The second function on mouse out reverts the opacity and width properties to the default ones. The last parameter “slow” is the speed of the animation. The end result is:   The complete ASPX code: <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server">     <title>ASP.NET Menu + jQuery</title>     <style type="text/css">         li         {             border:1px solid black;             padding:20px 20px 20px 20px;             width:110px;             background-color:Gray;             color:White;             cursor:pointer;         }         a { color:White; font-family:Tahoma; }     </style>     <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.min.js"></script>     <script type="text/javascript">         $(function () {             $("ul.level1 li").hover(function () {                 $(this).stop().animate({ opacity: 0.7, width: "170px" }, "slow");             }, function () {                 $(this).stop().animate({ opacity: 1, width: "110px" }, "slow");             });         });     </script> </head> <body>     <form id="form1" runat="server">     <div id="menu">         <asp:Menu ID="Menu1" runat="server" Orientation="Horizontal" RenderingMode="List">                         <Items>                 <asp:MenuItem NavigateUrl="~/Default.aspx" ImageUrl="~/Images/Home.png" Text="Home" Value="Home"  />                 <asp:MenuItem NavigateUrl="~/About.aspx" ImageUrl="~/Images/Friends.png" Text="About Us" Value="AboutUs" />                 <asp:MenuItem NavigateUrl="~/Products.aspx" ImageUrl="~/Images/Box.png" Text="Products" Value="Products" />                 <asp:MenuItem NavigateUrl="~/Contact.aspx" ImageUrl="~/Images/Chat.png" Text="Contact Us" Value="ContactUs" />             </Items>         </asp:Menu>     </div>     </form> </body> </html> Hope this was useful. Regards, Hajan

    Read the article

  • How to create a link to Nintex Start Workflow Page in the document set home page

    - by ybbest
    In this blog post, I’d like to show you how to create a link to start Nintex Workflow Page in the document set home page. 1. Firstly, you need to upload the latest version of jQuery to the style library of your team site. 2. Then, upload a text file to the style library for writing your own html and JavaScript 3. In the document set home page, insert a new content editor web part and link the text file you just upload. 4. Update the text file with the following content, you can download this file here. <script type="text/javascript" src="/Style%20Library/jquery-1.9.0.min.js"></script> <script type="text/javascript" src="/_layouts/sp.js"></script> <script type="text/javascript"> $(document).ready(function() { listItemId=getParameterByName("ID"); setTheWorkflowLink("YBBESTDocumentLibrary"); }); function buildWorkflowLink(webRelativeUrl,listId,itemId) { var workflowLink =webRelativeUrl+"_layouts/NintexWorkflow/StartWorkflow.aspx?list="+listId+"&ID="+itemId+"&WorkflowName=Start Approval"; return workflowLink; } function getParameterByName(name) { name = name.replace(/[\[]/, "\\\[").replace(/[\]]/, "\\\]"); var regexS = "[\\?&]" + name + "=([^&#]*)"; var regex = new RegExp(regexS); var results = regex.exec(window.location.search); if(results == null){ return ""; } else{ return decodeURIComponent(results[1].replace(/\+/g, " ")); } } function setTheWorkflowLink(listName) { var SPContext = new SP.ClientContext.get_current(); web = SPContext.get_web(); list = web.get_lists().getByTitle(listName); SPContext.load(web,"ServerRelativeUrl"); SPContext.load(list, 'Title', 'Id'); SPContext.executeQueryAsync(setTheWorkflowLink_Success, setTheWorkflowLink_Fail); } function setTheWorkflowLink_Success(sender, args) { var listId = list.get_id(); var listTitle = list.get_title(); var webRelativeUrl = web.get_serverRelativeUrl(); var startWorkflowLink=buildWorkflowLink(webRelativeUrl,listId,listItemId) $("a#submitLink").attr('href',startWorkflowLink); } function setTheWorkflowLink_Fail(sender, args) { alert("There is a problem setting up the submit exam approval link"); } </script> <a href="" target="_blank" id="submitLink"><span style="font-size:14pt">Start the approval process.</span></a> 5. Save your changes and go to the document set Item, you will see the link is on the home page now. Notes: 1. You can create a link to start the workflow using the following build dynamic string configuration: {Common:WebUrl}/_layouts/NintexWorkflow/StartWorkflow.aspx?list={Common:ListID}&ID={ItemProperty:ID}&WorkflowName=workflowname. With this link you will still need to click the start button, this is standard SharePoint behaviour and cannot be altered. References: http://connect.nintex.com/forums/27143/ShowThread.aspx How to use html and JavaScript in Content Editor web part in SharePoint2010

    Read the article

  • How to Create and Manage Contact Groups in Outlook 2010

    - by Mysticgeek
    If you find you’re sending emails to the same people all the time during the day, it’s tedious entering in their addresses individually. Today we take a look at creating Contact Groups to make the process a lot easier.

    Create Contact Groups Open Outlook and click on New Items \ More Items \ Contact Group. This opens the Contact Group window. Give your group a name, click on Add Members, and select the people you want to add from your Outlook Contacts, Address Book, or Create new ones. If you select from your address book you can scroll through and add the contacts you want. If you have a large number of contacts you might want to search for them or use Advanced Find. If you want to add a new email contact to your group, you’ll just need to enter in their display name and email address then click OK. If you want the new member added to your Contacts list then make sure Add to Contacts is checked. After you have the contacts you want in the group, click Save & Close. Now when you compose a message you should be able to type in the name of the Contact Group you created… If you want to make sure you have everyone included in the group, click on the plus icon to expand the contacts. You will get a dialog box telling you the members of the group will be shown and you cannot collapse it again. Check the box not to see the message again then click OK. Then the members of the group will appear in the To field. Of course you can enter a Contact Group into the CC or Bcc fields as well.

    Add or Remove Members to a Contact Group After expanding the group you might notice some contacts aren’t included, or there is an old contact you don’t want to be in the group anymore. Click on the To button… Right-click on the Contact Group and select Properties. Now you can go ahead and Add Members… Or highlight a member and remove them…when finished click Save & Close. If you need to send emails to several of the same people, creating Contact Groups is a great way to save time by not entering them individually. If you work for a large company, creating Contact Groups by department is a must!

    Read the article

  • Removing Barriers to Create Effective Data Models

    After years of creating and maintaining data models, I have started to notice common barriers that decrease the accuracy and usefulness of models. In my opinion, the main causes of these barriers are the lack of knowledge and communication from within a company. The lack of knowledge in regards to data models or data modeling can take many forms. Company Culture Knowledge Whether documented or undocumented, existing business rules of a company can affect how data is modeled. For example, if a company only allows 1 assigned person per customer to be able to manipulate a customer’s record then then a data model that includes an associated table that joins customers and employee’s would be unneeded because that would allow for the possibility of multiple employees to handle a customer because of the potential for a many to many relationship between Customers and Employees. Technical Knowledge Depending on the data modeler’s proficiency in modeling data they can inadvertently cause issues and/or complications with a design without even noticing. It is important that companies share data modeling responsibilities so that the models are developed from multiple perspectives of a system, company and the original problem.  In addition, the tools that a company selects to create data models can also affect the accuracy of the model if designer are not familiar with the tools or the tools are too complex to use for the designer. Existing System Knowledge In order for a data modeler to model data for an existing system so that new changes can be applied to a system then they need to at least know the basic concepts of a system so that they can work within it. This will promote reusability of data and prevent the chance of duplicating data. Project Knowledge This should be pretty obvious, but it is very hard to create an accurate data model without knowing what data needs to be modeled. I have always found it strange that I have been asked to start modeling data prior to a client formalizing any requirements. Usually when this happens I have to make several iterations to a model, and the client still does not know exactly what they want.  In addition additional issues can arise when certain stakeholders of a project are not consulted prior to the design or after the project is over because it can cause miss understandings and confusion by the end user as well as possibly not solving the original problem for which a project is intended to solve. One common thread between each type of knowledge is that they can all be avoided through the use of good communication. For example, if a modeler is new to a company then they should ask older employees about any business specific rules that may be documented or undocumented that must be applied to projects in general. Furthermore, if a modeler is not really familiar with a specific data modeling software then they need to speak up and ask for help form other employees or their manager. This will not only help the modeler in the project, but also help them in future projects that they do for the company. Additionally, if a project is not clearly defined prior to a data modeler being assigned the modeling project then it is their responsibility to communicate with the other stakeholders to clarify any part of a project that is unclear so that the data model that is created is accurately aligned with a project.

    Read the article

  • Process Power to the People that Create Engagement

    - by Michael Snow
    Organizations often speak about their engagement problems as if the problem is the people they are trying to engage - employees, partners, customers and citizens. The reality of most engagement problems is that the processes put in place to engage are impersonal, inflexible, unintuitive, and often completely ignorant of the population they are trying to serve. Life, Liberty and the Pursuit of Delight? How appropriate during this short week of the US Independence Day Holiday that we're focusing on People, Process and Engagement. As we celebrate this holiday in the US and the historic independence we gained (sorry Brits!) - it's interesting to think back to 1776 to the creation of that pivotal document, the Declaration of Independence. What tremendous pressure to create an engaging document and founding experience they must have felt. "On June 11, 1776, in anticipation of the impending vote for independence from Great Britain, the Continental Congress appointed five men — Thomas Jefferson, John Adams, Benjamin Franklin, Roger Sherman, and Robert Livingston — to write a declaration that would make clear to people everywhere why this break from Great Britain was both necessary and inevitable. The committee then appointed Jefferson to draft a statement. Jefferson produced a "fair copy" of his draft declaration, which became the basic text of his "original Rough draught." The text was first submitted to Adams, then Franklin, and finally to the other two members of the committee. Before the committee submitted the declaration to Congress on June 28, they made forty-seven emendations to the document. During the ensuing congressional debates of July 1-4, 1776, Congress adopted thirty-nine further revisions to the committee draft. (http://www.constitution.org) If anything was an attempt for engaging the hearts and minds of the 13 Colonies at the time, this document certainly succeeded in its mission. ...Their tools at the time were pen and ink and parchment. Although the final document would later be typeset with lead type for a printing press to distribute to the colonies, all of the original drafts were hand written. And today's enterprise complains about using "Review and Track Changes" at times. Can you imagine the manual revision control process? or lack thereof? Collaborative process? Time delays? Would implementing a better process have helped our founding fathers collaborate better? Declaration of Independence rough draft below. One of many during the creation process. Great comparison across multiple versions of the document here. (from http://www.ushistory.org/): While you may not be creating a new independent nation, getting your employees to engage is crucial to your success as a company in today's world. Oracle WebCenter provides the tools that power engagement. Employees that have better tools for communication, collaboration and getting their job done are more engaged employees. Better engaged employees create more engaged customers and partners.

    Read the article

  • How to create a new WCF/MVC/jQuery application from scratch

    - by pjohnson
    As a corporate developer by trade, I don't get much opportunity to create from-the-ground-up web sites; usually it's tweaks, fixes, and new functionality to existing sites. And with hobby sites, I often don't find the challenges I run into with enterprise systems; usually it's starting from Visual Studio's boilerplate project and adding whatever functionality I want to play around with, rarely deploying outside my own machine. So my experience creating a new enterprise-level site was a bit dated, and the technologies to do so have come a long way, and are much more ready to go out of the box. My intention with this post isn't so much to provide any groundbreaking insights, but to just tie together a lot of information in one place to make it easy to create a new site from scratch. Architecture One site I created earlier this year had an MVC 3 front end and a WCF 4-driven service layer. Using Visual Studio 2010, these project types are easy enough to add to a new solution. I created a third Class Library project to store common functionality the front end and services layers both needed to access, for example, the DataContract classes that the front end uses to call services in the service layer. By keeping DataContract classes in a separate project, I avoided the need for the front end to have an assembly/project reference directly to the services code, a bit cleaner and more flexible of an SOA implementation. Consuming the service Even by this point, VS has given you a lot. You have a working web site and a working service, neither of which do much but are great starting points. To wire up the front end and the services, I needed to create proxy classes and WCF client configuration information. I decided to use the SvcUtil.exe utility provided as part of the Windows SDK, which you should have installed if you installed VS. VS also provides an Add Service Reference command since the .NET 1.x ASMX days, which I've never really liked; it creates several .cs/.disco/etc. files, some of which contained hardcoded URL's, adding duplicate files (*1.cs, *2.cs, etc.) without doing a good job of cleaning up after itself. I've found SvcUtil much cleaner, as it outputs one C# file (containing several proxy classes) and a config file with settings, and it's easier to use to regenerate the proxy classes when the service changes, and to then maintain all your configuration in one place (your Web.config, instead of the Service Reference files). I provided it a reference to a copy of my common assembly so it doesn't try to recreate the data contract classes, had it use the type List<T> for collections, and modified the output files' names and .NET namespace, ending up with a command like: svcutil.exe /l:cs /o:MyService.cs /config:MyService.config /r:MySite.Common.dll /ct:System.Collections.Generic.List`1 /n:*,MySite.Web.ServiceProxies http://localhost:59999/MyService.svc I took the generated MyService.cs file and drop it in the web project, under a ServiceProxies folder, matching the namespace and keeping it separate from classes I coded manually. Integrating the config file took a little more work, but only needed to be done once as these settings didn't often change. A great thing Microsoft improved with WCF 4 is configuration; namely, you can use all the default settings and not have to specify them explicitly in your config file. Unfortunately, SvcUtil doesn't generate its config file this way. 
If you just copy & paste MyService.config's contents into your front end's Web.config, you'll copy a lot of settings you don't need, plus this will get unwieldy if you add more services in the future, each with its own custom binding. Really, as the only mandatory settings are the endpoint's ABC's (address, binding, and contract) you can get away with just this: <system.serviceModel>  <client>    <endpoint address="http://localhost:59999/MyService.svc" binding="wsHttpBinding" contract="MySite.Web.ServiceProxies.IMyService" />  </client></system.serviceModel> By default, the services project uses basicHttpBinding. As you can see, I switched it to wsHttpBinding, a more modern standard. Using something like netTcpBinding would probably be faster and more efficient since the client & service are both written in .NET, but it requires additional server setup and open ports, whereas switching to wsHttpBinding is much simpler. From an MVC controller action method, I instantiated the client, and invoked the method for my operation. As with any object that implements IDisposable, I wrapped it in C#'s using() statement, a tidy construct that ensures Dispose gets called no matter what, even if an exception occurs. Unfortunately there are problems with that, as WCF's ClientBase<TChannel> class doesn't implement Dispose according to Microsoft's own usage guidelines. I took an approach similar to Technology Toolbox's fix, except using partial classes instead of a wrapper class to extend the SvcUtil-generated proxy, making the fix more seamless from the controller's perspective, and theoretically, less code I have to change if and when Microsoft fixes this behavior. User interface The MVC 3 project template includes jQuery and some other common JavaScript libraries by default. I updated the ones I used to the latest versions using NuGet, available in VS via the Tools > Library Package Manager > Manage NuGet Packages for Solution... > Updates. I also used this dialog to remove packages I wasn't using. Given that it's smart enough to know the difference between the .js and .min.js files, I was hoping it would be smart enough to know which to include during build and publish operations, but this doesn't seem to be the case. I ended up using Cassette to perform the minification and bundling of my JavaScript and CSS files; ASP.NET 4.5 includes this functionality out of the box. The web client to web server link via jQuery was easy enough. In my JavaScript function, unobtrusively wired up to a button's click event, I called $.ajax, corresponding to an action method that returns a JsonResult, accomplished by passing my model class to the Controller.Json() method, which jQuery helpfully translates from JSON to a JavaScript object.$.ajax calls weren't perfectly straightforward. I tried using the simpler $.post method instead, but ran into trouble without specifying the contentType parameter, which $.post doesn't have. The url parameter is simple enough, though for flexibility in how the site is deployed, I used MVC's Url.Action method to get the URL, then sent this to JavaScript in a JavaScript string variable. If the request needed input data, I used the JSON.stringify function to convert a JavaScript object with the parameters into a JSON string, which MVC then parses into strongly-typed C# parameters. I also specified "json" for dataType, and "application/json; charset=utf-8" for contentType. For success and error, I provided my success and error handling functions, though success is a bit hairier. 
"Success" in this context indicates whether the HTTP request succeeds, not whether what you wanted the AJAX call to do on the web server was successful. For example, if you make an AJAX call to retrieve a piece of data, the success handler will be invoked for any 200 OK response, and the error handler will be invoked for failed requests, e.g. a 404 Not Found (if the server rejected the URL you provided in the url parameter) or 500 Internal Server Error (e.g. if your C# code threw an exception that wasn't caught). If an exception was caught and handled, or if the data requested wasn't found, this would likely go through the success handler, which would need to do further examination to verify it did in fact get back the data for which it asked. I discuss this more in the next section. Logging and exception handling At this point, I had a working application. If I ran into any errors or unexpected behavior, debugging was easy enough, but of course that's not an option on public web servers. Microsoft Enterprise Library 5.0 filled this gap nicely, with its Logging and Exception Handling functionality. First I installed Enterprise Library; NuGet as outlined above is probably the best way to do so. I needed a total of three assembly references--Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging, and Microsoft.Practices.EnterpriseLibrary.Logging. VS links with the handy Enterprise Library 5.0 Configuration Console, accessible by right-clicking your Web.config and choosing Edit Enterprise Library V5 Configuration. In this console, under Logging Settings, I set up a Rolling Flat File Trace Listener to write to log files but not let them get too large, using a Text Formatter with a simpler template than that provided by default. Logging to a different (or additional) destination is easy enough, but a flat file suited my needs. At this point, I verified it wrote as expected by calling the Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write method from my C# code. With those settings verified, I went on to wire up Exception Handling with Logging. Back in the EntLib Configuration Console, under Exception Handling, I used a LoggingExceptionHandler, setting its Logging Category to the category I already had configured in the Logging Settings. Then, from code (e.g. a controller's OnException method, or any action method's catch block), I called the Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.HandleException method, providing the exception and the exception policy name I had configured in the Exception Handling Settings. Before I got this configured correctly, when I tried it out, nothing was logged. In working with .NET, I'm used to seeing an exception if something doesn't work or isn't set up correctly, but instead working with these EntLib modules reminds me more of JavaScript (before the "use strict" v5 days)--it just does nothing and leaves you to figure out why, I presume due in part to the listener pattern Microsoft followed with the Enterprise Library. First, I verified logging worked on its own. Then, verifying/correcting where each piece wires up to the next resolved my problem. 
Your C# code calls into the Exception Handling module, referencing the policy you pass the HandleException method; that policy's configuration contains a LoggingExceptionHandler that references a logCategory; that logCategory should be added in the loggingConfiguration's categorySources section; that category references a listener; that listener should be added in the loggingConfiguration's listeners section, which specifies the name of the log file. One final note on error handling, as the proper way to handle WCF and MVC errors is a whole other very lengthy discussion. For AJAX calls to MVC action methods, depending on your configuration, an exception thrown here will result in ASP.NET'S Yellow Screen Of Death being sent back as a response, which is at best unnecessarily and uselessly verbose, and at worst a security risk as the internals of your application are exposed to potential hackers. I mitigated this by overriding my controller's OnException method, passing the exception off to the Exception Handling module as above. I created an ErrorModel class with as few properties as possible (e.g. an Error string), sending as little information to the client as possible, to both maximize bandwidth and mitigate risk. I then return an ErrorModel in JSON format for AJAX requests: if (filterContext.HttpContext.Request.IsAjaxRequest()){    filterContext.Result = Json(new ErrorModel(...));    filterContext.ExceptionHandled = true;} My $.ajax calls from the browser get a valid 200 OK response and go into the success handler. Before assuming everything is OK, I check if it's an ErrorModel or a model containing what I requested. If it's an ErrorModel, or null, I pass it to my error handler. If the client needs to handle different errors differently, ErrorModel can contain a flag, error code, string, etc. to differentiate, but again, sending as little information back as possible is ideal. Summary As any experienced ASP.NET developer knows, this is a far cry from where ASP.NET started when I began working with it 11 years ago. WCF services are far more powerful than ASMX ones, MVC is in many ways cleaner and certainly more unit test-friendly than Web Forms (if you don't consider the code/markup commingling you're doing again), the Enterprise Library makes error handling and logging almost entirely configuration-driven, AJAX makes a responsive UI more feasible, and jQuery makes JavaScript coding much less painful. It doesn't take much work to get a functional, maintainable, flexible application, though having it actually do something useful is a whole other matter.

    Read the article

  • Getting the base URL of a web site's root (absolute/relative URL)

    - by uzay95
    I want to completely understand how to use relative and absolute URL addresses in static and dynamic files.

        ~  : refers to the application root (ASP.NET)
        .. : in a relative URL indicates the parent directory
        .  : refers to the current directory
        /  : always replaces the entire pathname of the base URL
        // : always replaces everything from the hostname onwards

    These examples are easy when you are working without a virtual directory, but I am working in a virtual directory.

        Relative URI           Absolute URI
        about.html             http://WebReference.com/html/about.html
        tutorial1/             http://WebReference.com/html/tutorial1/
        tutorial1/2.html       http://WebReference.com/html/tutorial1/2.html
        /                      http://WebReference.com/
        //www.internet.com/    http://www.internet.com/
        /experts/              http://WebReference.com/experts/
        ../                    http://WebReference.com/
        ../experts/            http://WebReference.com/experts/
        ../../../              http://WebReference.com/
        ./                     http://WebReference.com/html/
        ./about.html           http://WebReference.com/html/about.html

    I want to simulate a site like the one below, like my project, which runs in a virtual directory. These are my aspx and ascx folders:

        http://hostAddress:port/virtualDirectory/MainSite/ASPX/default.aspx
        http://hostAddress:port/virtualDirectory/MainSite/ASCX/UserCtrl/login.ascx
        http://hostAddress:port/virtualDirectory/AdminSite/ASPX/ASCX/default.aspx

    These are my JS files (which will be used with both the aspx and ascx files):

        http://hostAddress:port/virtualDirectory/MainSite/JavascriptFolder/jsFile.js
        http://hostAddress:port/virtualDirectory/AdminSite/JavascriptFolder/jsFile.js

    This is my static web page address (I want to show some pictures and run some js functions inside it):

        http://hostAddress:port/virtualDirectory/HTMLFiles/page.html

    This is my image folder:

        http://hostAddress:port/virtualDirectory/Images/PNG/arrow.png
        http://hostAddress:port/virtualDirectory/Images/GIF/arrow.png

    If I want to write an image file's link in my ASPX file I should write aspxImgCtrl.ImageUrl = Server.MapPath("~")+"/Images/GIF/arrow.png"; But if I want to write the path hard-coded or from a JavaScript file, what kind of URL should it be?
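
    A sketch of one common approach (control IDs and the appRoot variable name are placeholders): server controls accept the "~/..." form directly (e.g. aspxImgCtrl.ImageUrl = "~/Images/GIF/arrow.png";), and for static script the page can emit the resolved application root once so shared .js files can build URLs from it:

        <script type="text/javascript">
            // in the .aspx page: renders "/virtualDirectory/" when deployed under that virtual directory
            var appRoot = '<%= ResolveUrl("~/") %>';
        </script>
        <script type="text/javascript">
            // in a shared .js file
            var arrowPng = appRoot + 'Images/GIF/arrow.png';
            document.getElementById('arrowImg').src = arrowPng;
        </script>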

    Read the article

  • CruiseControl.NET : Sending email using SVN username to ActiveDirectory mapping

    - by matthewpe
    Is it possible to configure CruiseControl.NET to send an email to the user(s) that did modifications to a broken build by mapping their SVN username to the corresponding Active-Directory alias (hence retrieving the correct, updated email address). Our SVN server is set-up to allow users of a certain Active-Directory group to read and commit changes: I don't want to have to maintain the CruiseControl.NET config every time a user gets added to our Programmers group in Active-Directory. Thanks a lot!

    Read the article

  • How do I get a pathname in ASP.NET?

    - by Wayne Werner
    Hi, I'm working on an internal web app and I need to get a directory path from the user. I (obviously) can use the asp:FileUpload to get a file but I can't find anything to just get a directory path. Is there any (preferably simple) way to have a directory-chooser dialog in asp.net? I haven't been able to find any solution on Google or SO yet... Thanks
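
    For what it's worth, browsers only expose file pickers to web pages, not folder pickers, so the usual workaround is a plain TextBox, or a list populated server-side when the candidate directories live on the server. A minimal sketch assuming a DropDownList named ddlFolders and a placeholder root path:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                // list the directories under a known server-side root for the user to pick from
                string root = Server.MapPath("~/App_Data/shares");
                ddlFolders.DataSource = System.IO.Directory.GetDirectories(root);
                ddlFolders.DataBind();
            }
        }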

    Read the article

  • Good reasons to pass paths as strings instead of using DirectoryInfo/FileInfo

    - by neodymium
    In my new code I am not using strings to pass directory paths or file names. Instead I am using DirectoryInfo and FileInfo as they seem to encapsulate a lot of information. I have seen a lot of code that uses strings to pass directory information then they "split" and "mid" and "instr" in long incomprehensible statements until they get the part of the directory they are looking for. Is there any good reason to pass paths as strings?
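
    A small illustration of the difference (paths are placeholders): the same information pulled apart by hand versus read off the IO types, which also give length, timestamps and existence checks for free:

        string path = @"C:\data\projects\report.txt";

        // string gymnastics
        string dirFromString = path.Substring(0, path.LastIndexOf('\\'));

        // the same thing via the IO types
        var file = new System.IO.FileInfo(path);
        string dir  = file.DirectoryName;   // "C:\data\projects"
        string name = file.Name;            // "report.txt"

    One legitimate reason to keep plain strings at a boundary is serialization: a path travels through config files, query strings and web services as a string anyway, and System.IO.Path.GetDirectoryName and friends cover most of the parsing without constructing the Info objects.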

    Read the article

  • rename folder into sub-folder with PHP

    - by Workoholic
    Hi. I'm trying to move a folder by renaming it. Both the test1 and test2 folders already exist. rename( "test1", "test2/xxx1/xxx2" ); The error I get is: rename(...): No such file or directory I assume this is because the directory "xxx1" does not exist. How can I move the test1 directory anyway?
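
    A sketch of the usual fix (mode and paths are illustrative): create the missing parent directories first, then do the rename/move:

        <?php
        $target = "test2/xxx1/xxx2";
        if (!is_dir(dirname($target))) {
            mkdir(dirname($target), 0777, true);   // third argument = create recursively
        }
        rename("test1", $target);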

    Read the article

  • Create a trailing, ghosting effect of a sprite

    - by Neeko
    I want to create a trailing, ghosting-like effect for a sprite that's moving fast. Something very similar to this image of Sonic (apologies for the bad quality, it's the only example I could find of the effect I'm looking to achieve). However, I don't want to do this at the sprite sheet level, to avoid having to essentially double (or possibly quadruple) the amount of sprites in my atlas. It's also very labor intensive. So is there any other way to achieve this effect? Possibly by some shader voodoo magic? I am using Unity and 2D Toolkit, if that helps.
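
    One non-shader route (a sketch using Unity's built-in SpriteRenderer; with 2D Toolkit the same idea would go through tk2dSprite instead): periodically clone the visible sprite and fade the clone out, so no extra frames are needed in the atlas:

        using UnityEngine;
        using System.Collections;

        public class SpriteTrail : MonoBehaviour
        {
            public float spawnInterval = 0.05f;   // assumed tuning values
            public float fadeTime = 0.3f;
            SpriteRenderer source;
            float timer;

            void Awake() { source = GetComponent<SpriteRenderer>(); }

            void Update()
            {
                timer += Time.deltaTime;
                if (timer < spawnInterval) return;
                timer = 0f;

                // clone only the visual state of the sprite at this moment
                var ghost = new GameObject("ghost");
                ghost.transform.position = transform.position;
                ghost.transform.rotation = transform.rotation;
                ghost.transform.localScale = transform.localScale;
                var sr = ghost.AddComponent<SpriteRenderer>();
                sr.sprite = source.sprite;
                StartCoroutine(FadeAndDestroy(sr));
            }

            IEnumerator FadeAndDestroy(SpriteRenderer sr)
            {
                for (float t = 0f; t < fadeTime; t += Time.deltaTime)
                {
                    if (sr == null) yield break;
                    sr.color = new Color(1f, 1f, 1f, Mathf.Lerp(0.5f, 0f, t / fadeTime));
                    yield return null;
                }
                if (sr != null) Destroy(sr.gameObject);
            }
        }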

    Read the article
