Search Results

Search found 41903 results on 1677 pages for 'anonymous type'.


  • Samba does not reload user group members

    - by xato
    I am running a simple samba server setup where users connect to a share which contains folders for specific user groups. The folders are chmod 2770, so only users which are in the correct group can read/write in them. The problem is that if I change group memberships (i.e. remove user from group / add user to group; changes are in sync between clients and server!) samba does not automatically reload the group memberships for the user, so they can still write to groups that they are no longer a member of etc. I either have to reconnect to the share or to restart samba to apply the changes. Is there any way to prevent group caching and/or enable group membership reload in samba? My smb.conf: https://gist.github.com/anonymous/ca7c10a3b3e2168d7a03

    Read the article

  • Access Denied on LAN IIS Access via Integrated Authentication

    - by Pharao2k
    I have an IIS 7.5 (Win2k8R2) webserver which publishes a UNC share (on a fileserver) with restricted access. The AppPool identity is a domain user account with read access to the mentioned UNC path. Authentication modes are set to Anonymous and Integrated Authentication. When I access the path via localhost from the webserver itself, it works, but if I try the hostname or IP from either the webserver or a client, I get three authentication prompts (it does not accept my credentials) and a 401.3 Unauthorized error message (even though it states that I am logged in with my normal credentials, which definitely have access rights to the UNC path and its files). The security zone is set to Local Intranet. Sysinternals Process Monitor lists CreateFile operations on the UNC path (and other existing files in it) with Access Denied while impersonating the correct credentials. I don't understand why it is not working; it seems to use the correct credentials at every step of the way but fails with these operations.

    Read the article

  • Should I impersonate PHP via FastCGI?

    - by AKeller
    I am installing the latest version of PHP onto IIS 7.5 via FastCGI, and all of the instructions say that FastCGI should impersonate the calling client by setting fastcgi.impersonate = 1. If my website will have this configuration: a dedicated application pool, an application pool identity of ApplicationPoolIdentity, and anonymous authentication only (as IUSR), why do I want to impersonate? I come from an ASP.NET background, where the IUSR gets read-only permissions and the application pool identity gets any write permissions. Giving write access to the IUSR usually opens the door for WebDAV vulnerabilities, so I hesitate to let PHP run as the IUSR. I can't find many people asking this question (1 | 2), so I think I must be missing something. Can someone clarify this for me?

    Read the article

  • How can I anonymize my browser useragent, yet still be counted as a FF/Ubuntu user?

    - by Rory
    I read about EFF's Panopticlick project to see how unique your web browser's headers are. I would like to anonymize that a bit. My current User Agent is Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.7) Gecko/20100106 Ubuntu/9.10 (karmic) Firefox/3.5.7 I would like to make that more anonymous; however, I still want to be counted as a Firefox and Ubuntu user. How can I change my User Agent in Firefox? What should I change it to so that it's less unique, but will still be counted as a Firefox user and an Ubuntu user by web analytics software? I know that there is no guarantee that I will be counted as a Firefox/Ubuntu user; just something that 'works most of the time' would be sufficient.
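
    A possible starting point, offered as an illustration rather than something from the original question: Firefox lets you override the reported string via the general.useragent.override preference in about:config, so one option is a more generic Firefox/Ubuntu string with the locale and minor-version details dropped. The exact value below is only an example of that idea.

      general.useragent.override = Mozilla/5.0 (X11; U; Linux i686; rv:1.9.1) Gecko/20100106 Ubuntu/9.10 Firefox/3.5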

    Read the article

  • Getting around broken DNS

    - by Haedrian
    I have someone who's trying to connect to the internet from a connection where the DNS server seems to be down. They can connect if I give them an IP address, but the machine can't resolve normal names. I tried getting them to change the DNS server used from Internet Options (setting it to use Google's); however, that didn't work, as apparently the ISP captures all DNS requests. I also tried with a proxy, but that didn't work either. Is there another solution to the problem? This isn't a case of censorship or anything, so there's no need to remain anonymous or whatever; I just need a way of 'forcing' the use of another DNS server or routing the traffic through something else.
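
    Two hedged checks/workarounds, not taken from the question itself: querying a specific resolver directly shows whether port 53 is really being intercepted, and pinning the handful of names they need in the hosts file works as a stopgap. The host name and address below are placeholders.

      nslookup example.com 8.8.8.8    # send the query straight to Google's resolver; if the ISP intercepts port 53, this is still answered by their broken server

      # /etc/hosts (or C:\Windows\System32\drivers\etc\hosts) stopgap entry
      203.0.113.10    www.example.com    # placeholder: the real IP of a site they need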

    Read the article

  • What Kind of Spam is This? Testing Blog Comment Limits

    - by Yar
    I received this comment on one of my blogs today (on blogger.com): "Easily I agree but I about the post should acquire more info then it has." It's the third in a series. Before there was: "I will not acquiesce in on it. I over precise post. Expressly the title attracted me to be familiar with the sound story." and before that: "Your blog keeps getting better and better! Your older articles are not as good as newer ones you have a lot more creativity and originality now keep it up!" It is obviously computer-generated (well, not this last one). The comments are from Anonymous, so they're not trying to legitimize a user account on Blogger. Is this a spam attack? What might its goal be? Or are they just testing my blog to see whether I reject comments or not? Does this kind of "attack" have a name?

    Read the article

  • Accessing localhost on IIS7 from another computer on the network

    - by Adam
    I recently upgraded computers to Windows 7 Professional and am running IIS7. When I'm on my computer I can easily access localhost through my web browser, but when I try from another computer on my network (replacing localhost with my computer name) it doesn't work. I also tried using "computername.domain.com" and still no luck. I can access other computers running Windows XP and IIS 5, but I'm having no luck accessing my own machine from another computer. I checked and my IIS7 has anonymous access enabled. Am I missing some other setting, or is this an IIS7 thing? Thanks in advance!
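
    One common culprit, offered only as an assumption since the question doesn't mention it: the Windows 7 firewall blocks inbound port 80 by default, so localhost works while remote requests never reach IIS. A quick test is to open the port from an elevated command prompt; the rule name is arbitrary.

      netsh advfirewall firewall add rule name="IIS HTTP 80" dir=in action=allow protocol=TCP localport=80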

    Read the article

  • Account not getting completely deleted within Linux

    - by lbanz
    I've got a NAS box running some flavour of Linux (2.6.31.8.nv+v2) on an ARM processor. It has a Samba share called 'all' that gives full read/write access to everyone. However, one Windows machine cannot access it without prompting for authentication, and I found out from the logs that the Windows account matches a local account on the NAS box. What I then went and did was delete the local account on the NAS. I can see that the account no longer exists in /home, /etc/passwd or /etc/shadow. However, the Samba logs show that it thinks the account is still there, as they say the account is disabled. I've tried rebooting both the NAS and the Windows box. Is there somewhere else that it stores account information? I logged on with a different account on that Windows machine and I can access the share fine; in that case the smb logs show that it can't find the user and then allow anonymous access.
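
    A hedged pointer on the "somewhere else" question: Samba normally keeps its own account database (an smbpasswd file or passdb.tdb) separate from /etc/passwd, and deleting the Unix user does not remove the Samba entry. Assuming the NAS firmware ships the standard tools, something along these lines would let you check and remove the stale entry (the username is a placeholder):

      pdbedit -L                # list the accounts Samba still knows about
      pdbedit -x -u theuser     # remove the stale entry (or: smbpasswd -x theuser)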

    Read the article

  • Postfix configuration w.r.t. port 25

    - by Monkey Boson
    After a considerable amount of research, I have configured my Postfix server to use Dovecot to accept SMTPS connections over port 465, and everything works swimmingly. Unfortunately, I forgot that unless I listen on port 25, I'm not going to receive any e-mail from the net. I'm hoping somebody knows off the top of their head how to open up port 25 on Postfix for anonymous users but disallow relaying and any other bad things on that port, while leaving port 465 the way it is. As to my current configuration, I changed the master.cf file:

      smtps     inet  n       -       n       -       -       smtpd

    and the main.cf file:

      # Use our SSL certificates
      smtpd_tls_cert_file = .....cer
      smtpd_tls_key_file = .....key
      smtpd_tls_security_level = may
      # Use Dovecot for SASL authentication
      smtpd_sasl_auth_enable = yes
      smtpd_sasl_security_options = noanonymous
      smtpd_sasl_type = dovecot
      smtpd_sasl_path = private/auth
      broken_sasl_auth_clients = yes
      smtpd_recipient_restrictions = permit_sasl_authenticated, reject_unauth_destination

    Any help is appreciated!
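
    A hedged sketch of the usual arrangement, not taken from the question itself: keep the stock port-25 smtpd entry in master.cf for inbound mail, and hang the TLS-wrapper/SASL options off the smtps entry with -o overrides, so port 25 stays anonymous while the existing smtpd_recipient_restrictions (permit_sasl_authenticated, reject_unauth_destination) already block open relaying:

      # master.cf
      smtp      inet  n       -       n       -       -       smtpd
      smtps     inet  n       -       n       -       -       smtpd
        -o smtpd_tls_wrappermode=yes
        -o smtpd_sasl_auth_enable=yes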

    Read the article

  • dav_svn write access

    - by canavar
    Good day! I am configuring dav_svn and Apache with LDAP auth. What I want to do: allow anonymous READ access to the repo, and allow write access to authenticated users. Here comes my config:

      # Uncomment this to enable the repository
      DAV svn
      SVNPath /home/svn/ldap-test-repo
      AuthType Basic
      AuthName "LDAP-REPO Repository"
      AuthBasicProvider ldap
      AuthzLDAPAuthoritative on
      AuthLDAPBindDN "cn=svn,ou=applications,dc=company,dc=net"
      AuthLDAPBindPassword "pass"
      AuthLDAPURL ldap://ldap.company.net:389/ou=Users,dc=company,dc=net?uid?sub?(objectClass=person)
      <Limit GET PROPFIND OPTIONS REPORT>
        Allow from all
      </Limit>
      <LimitExcept GET PROPFIND OPTIONS REPORT>
        Require ldap-group cn=group,ou=services,dc=company,dc=net
      </LimitExcept>

    But when I test it, this config doesn't work... I can do a checkout without auth and also commit without auth... What am I doing wrong? Thanks!

    Read the article

  • Interested in scp recipe for sftp [closed]

    - by GJZ
    You wrote in a reply: "The problem is that sftp runs as the user's id -- first, the sftp client ssh's into the target host as the given user, then runs sftp-server. Since sftp-server is running as a regular user, it has no way to "give away" a file (change owner of a file). However, if you are able to use scp, and assign a key pair to each user, you can get around this. This involves adding a user's key to root's ~/.ssh/authorized_keys file, with a "command=" parameter to force it to run a script that sanitizes and alters the arguments of the server-side scp program. I've used this technique before to set up an anonymous scp dropbox that allowed anyone to submit a file, and ensure that no one could retrieve submitted files and also prevent overwrites. If you are open to this technique, let me know and I'll update this post with a quick recipe." We are interested in this quick scp recipe for our community file-sharing service. Best Regards, Gert Jan Zeilstra
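
    For context, a hedged illustration of the kind of authorized_keys entry the quoted reply is describing; the wrapper path and the key are placeholders, and the sanitizing wrapper script itself is the part the actual recipe would have to supply:

      command="/usr/local/bin/scp-dropbox-wrapper",no-port-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAA... dropbox-user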

    Read the article

  • Taking user out of MACHINENAME\Users group does not disallow them from authenticating with IIS site

    - by jayrdub
    I have a site that has anonymous access disabled and uses only IIS basic authentication. The site's home directory only has the MACHINENAME\Users group with permissions. I have one user that I don't want to be able to log in to this site, so I thought all I would need to do is take that user out of the Users group, but doing so still allows him to authenticate. I know it is the Users group that is allowing authentication, because if I remove that group's permissions on the directory, he is not allowed to log in. Is there something special about the Users group that means you are effectively always a part of it? Is the only solution to revoke the Users group's permissions on the site's home directory and grant access to a new group that contains only the allowed users?

    Read the article

  • Private Git repo using Smart HTTP with LDAP authentification

    - by ALOToverflow
    I've been crawling the interwebz and getting my hands dirty for the last few days, but I can't seem to make it all work together. I managed to get an HTTP repo working with Ubuntu 10.04 over Smart HTTP (pull and push over HTTP) for a single repo. This means that I do the initial setup over SSH to the server (git init --bare), and after that the clients can pull and push to it (git clone http://servername/allgitrepos/repo.git). Unfortunately, it's impossible to add a new repo without SSHing to the server and adding it manually; i.e. git push http://servername/allgitrepos/repo2.git (allgitrepos is available for everyone to read, write and execute) fails with an error about git update-server-info (which seems to be a general error message). So far the repository is anonymous, so I would like to authenticate using LDAP and also use the LDAP creds to make the git commit. So, how can I push new repos to the server, and how can I use the LDAP creds to make the git commit? Thanks
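
    Not from the question, but for reference: the usual Smart HTTP setup routes requests through git-http-backend behind Apache authentication, which is where LDAP credentials would plug in. A rough sketch follows; the paths, URLs and LDAP details are assumptions, and note that the committer name/email on each commit still comes from the client's git config rather than from the HTTP login:

      SetEnv GIT_PROJECT_ROOT /home/git/allgitrepos
      SetEnv GIT_HTTP_EXPORT_ALL
      ScriptAlias /git/ /usr/lib/git-core/git-http-backend/
      <Location /git>
          AuthType Basic
          AuthName "Git repositories"
          AuthBasicProvider ldap
          AuthLDAPURL ldap://ldap.example.com/ou=Users,dc=example,dc=com?uid
          Require valid-user
      </Location>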

    Read the article

  • Can only connect to IIS site through localhost

    - by Rembrandt Q. Einstein
    I'm building a web service for my company's iPhone application, and everything's been working smoothly by running tests through localhost on the development machine. I'm now in the phase where I need to test connections from other computers within the network, and any connection other than localhost gives me a 404. My internal IP, 127.0.0.1, and computer name all get a 404 when connecting from any computer, whether the one the site's hosted on or any other on the network. Telnet can get through to port 80, and I've temporarily disabled all firewalls on this machine (I do not have control over the external firewall, but I'm only testing connections within the network). Does anyone have a clue why this is happening? I was able to connect to the web service from other computers when it was hosted on a Mac via Apache, but because I'm now using a SQL Server connection I'm restricted to using IIS for Windows Authentication. Googling only provided answers related to firewalls, and mine is disabled. Note: I cannot use Anonymous Authentication, but even when I tested with it enabled, it did not affect the issue.

    Read the article

  • Sharepoint 2010 can't find domain users when granting permissions

    - by quani
    I'm trying to grant permissions to other people to view a SharePoint site, but when granting permissions it uses "Check Names" and claims that any user or group that is part of a domain does not exist. It does this whether I try granting permissions on the team site or in Central Administration, BUT if I try to add someone to Farm Administrators in Central Administration then all of a sudden it can find all domain users. Why is it finding domain users in that one context but not the others? It is supposed to be using NTLM authentication and has Windows configured as the authentication provider (and IIS is configured to use NTLM). What's even more strange is that I enabled Anonymous Access for the team site, which I thought would allow anyone to view it, but others say they can't access it.

    Read the article

  • Is there a more elegant way to apply conditions in nginx?

    - by Ryan Detzel
    Is there a better way to do this? I can't find a way to nest conditions or apply boolean operators to them in nginx. Basically, if a cookie is set (a non-anonymous user) we want to hit the server. If the cookie is not set and the file exists, we want to serve the file; otherwise, hit the server.

      set $test "D";
      if ($http_cookie ~* "session" ) {
          set $test "${test}C";
      }
      if (-f $request_filename/index.html$is_args$args) {
          set $test "${test}F";
      }
      if ($test = DF){
          rewrite (.*)/ $1/index.html$is_args$args? break;
      }
      if ($test = DCF){
          proxy_pass http://django;
          break;
      }
      if ($test = DC){
          proxy_pass http://django;
          break;
      }
      if ($test = D){
          proxy_pass http://django;
          break;
      }
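
    One possible tidier shape, offered as an untested sketch rather than a drop-in replacement (it assumes the same "django" upstream and index.html layout as above): let try_files handle the "file exists, else backend" half and keep only the cookie check as an if:

      location / {
          # users with a session cookie always go to the app servers
          if ($http_cookie ~* "session") {
              proxy_pass http://django;
              break;
          }
          # anonymous users: serve the prebuilt page if present, else fall back
          try_files $uri/index.html @django;
      }
      location @django {
          proxy_pass http://django;
      }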

    Read the article

  • Is it necessary for a Windows Server 2008 R2 to join a domain so that its IIS can communicate correctly?

    - by Jack
    I have a Windows Server 2008 R2 machine that is not joined to any domain. I have developed a web application that displays the domain name and the username on the server itself. However, when I publish my web application to IIS, it always fails and displays different types of error messages (because I keep changing settings such as enabling ASP.NET Impersonation, disabling Anonymous Authentication, setting the Application Pool to Classic, and so on). So, I was wondering whether it is necessary for the server to join a domain so that I can reduce the unnecessary error messages and zero in on the correct direction?
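
    As a side note, and only an assumption about how such a page might be written: the identity the page reports depends entirely on the authentication/impersonation settings being toggled, so printing both the request identity and the identity the worker process actually runs under can make the trial and error less blind. A minimal C# sketch ("lblWho" is a hypothetical Label control):

      // ASP.NET Web Forms code-behind sketch
      string requestUser = User.Identity.Name;                                           // who IIS authenticated (empty for anonymous)
      string processUser = System.Security.Principal.WindowsIdentity.GetCurrent().Name;  // app-pool identity, or the impersonated user
      lblWho.Text = requestUser + " / " + processUser;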

    Read the article

  • IIS asks for login/pass when accessed using hostname but not when ‘localhost’ is used. Why?

    - by sb
    Hi All, I have set up IIS on my XP machine and set up a default homepage (the one that comes with the IIS install); it is a help page, I think. When I access the page with http://localhost it works fine (IE, Chrome or FF), but when I access it using http://hostname it prompts for a login/password and only works when I enter my domain ID and password on the intranet. I have ensured that "anonymous access" is enabled in the properties window of the default site and the "websites" node. I searched Stack Overflow for similar queries; some indicate I need to change the IE/FF settings to allow "integrated security" etc., and some suggest looking at the log file. I don't want to change the IE settings, and there is nothing unusual in the IIS server's log file. Can anybody help me figure out why this is happening? thank you sb

    Read the article

  • FTP Folder Permissions / IIS8

    - by raam030
    I am having trouble copying information from one folder on an FTP site to another folder, accessing the FTP site through Windows Explorer. I have set Full Control over the parent folder, and I double-checked that I have full control over the two folders I am trying to copy information from and to. It actually lets you right-click and copy; then, when you go to another folder and right-click to paste, the paste option is grayed out. I was able to do it before, and no one has changed the IIS permissions, so I believe it's a Windows issue. Is it possible that, even though the permissions are set to give full control over that directory, something else is interfering? I did double-check the IIS permissions. I am not on a domain, I am using anonymous access, and I made sure the access control is set to read/write.

    Read the article

  • Grafting LINQ onto C# 2 library

    - by P Daddy
    I'm writing a data access layer. It will have C# 2 and C# 3 clients, so I'm compiling against the 2.0 framework. Although encouraging the use of stored procedures, I'm still trying to provide a fairly complete ability to perform ad-hoc queries. I have this working fairly well, already. For the convenience of C# 3 clients, I'm trying to provide as much compatibility with LINQ query syntax as I can. Jon Skeet noticed that LINQ query expressions are duck typed, so I don't have to have an IQueryable and IQueryProvider (or IEnumerable<T>) to use them. I just have to provide methods with the correct signatures. So I got Select, Where, OrderBy, OrderByDescending, ThenBy, and ThenByDescending working. Where I need help are with Join and GroupJoin. I've got them working, but only for one join. A brief compilable example of what I have is this: // .NET 2.0 doesn't define the Func<...> delegates, so let's define some workalikes delegate TResult FakeFunc<T, TResult>(T arg); delegate TResult FakeFunc<T1, T2, TResult>(T1 arg1, T2 arg2); abstract class Projection{ public static Condition operator==(Projection a, Projection b){ return new EqualsCondition(a, b); } public static Condition operator!=(Projection a, Projection b){ throw new NotImplementedException(); } } class ColumnProjection : Projection{ readonly Table table; readonly string columnName; public ColumnProjection(Table table, string columnName){ this.table = table; this.columnName = columnName; } } abstract class Condition{} class EqualsCondition : Condition{ readonly Projection a; readonly Projection b; public EqualsCondition(Projection a, Projection b){ this.a = a; this.b = b; } } class TableView{ readonly Table table; readonly Projection[] projections; public TableView(Table table, Projection[] projections){ this.table = table; this.projections = projections; } } class Table{ public Projection this[string columnName]{ get{return new ColumnProjection(this, columnName);} } public TableView Select(params Projection[] projections){ return new TableView(this, projections); } public TableView Select(FakeFunc<Table, Projection[]> projections){ return new TableView(this, projections(this)); } public Table Join(Table other, Condition condition){ return new JoinedTable(this, other, condition); } public TableView Join(Table inner, FakeFunc<Table, Projection> outerKeySelector, FakeFunc<Table, Projection> innerKeySelector, FakeFunc<Table, Table, Projection[]> resultSelector){ Table join = new JoinedTable(this, inner, new EqualsCondition(outerKeySelector(this), innerKeySelector(inner))); return join.Select(resultSelector(this, inner)); } } class JoinedTable : Table{ readonly Table left; readonly Table right; readonly Condition condition; public JoinedTable(Table left, Table right, Condition condition){ this.left = left; this.right = right; this.condition = condition; } } This allows me to use a fairly decent syntax in C# 2: Table table1 = new Table(); Table table2 = new Table(); TableView result = table1 .Join(table2, table1["ID"] == table2["ID"]) .Select(table1["ID"], table2["Description"]); But an even nicer syntax in C# 3: TableView result = from t1 in table1 join t2 in table2 on t1["ID"] equals t2["ID"] select new[]{t1["ID"], t2["Description"]}; This works well and gives me identical results to the first case. The problem is if I want to join in a third table. 
TableView result = from t1 in table1 join t2 in table2 on t1["ID"] equals t2["ID"] join t3 in table3 on t1["ID"] equals t3["ID"] select new[]{t1["ID"], t2["Description"], t3["Foo"]}; Now I get an error (Cannot implicitly convert type 'AnonymousType#1' to 'Projection[]'), presumably because the second join is trying to join the third table to an anonymous type containing the first two tables. This anonymous type, of course, doesn't have a Join method. Any hints on how I can do this?
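
    For what it's worth, a sketch of why the anonymous type appears, based on the standard C# query-expression translation rules rather than anything specific to the classes above: when a join clause is not immediately followed by the select, the compiler pairs the range variables up in a transparent identifier, so the two-join query is rewritten roughly as:

      // approximate compiler translation of the two-join query (illustrative; it won't compile against the classes above)
      var result = table1
          .Join(table2, t1 => t1["ID"], t2 => t2["ID"],
                (t1, t2) => new { t1, t2 })                    // anonymous type: the transparent identifier
          .Join(table3, x => x.t1["ID"], t3 => t3["ID"],
                (x, t3) => new[] { x.t1["ID"], x.t2["Description"], t3["Foo"] });

    So the second Join is asked to accept, and project from, that anonymous pairing rather than a Table, which is why the single-join signatures above stop matching.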

    Read the article

  • Get backreference values and modify these values

    - by roasted
    Could you please explain why im not able to get values of backreferences from a matched regex result and apply it some modification before effective replacement? The expected result is replacing for example string ".coord('X','Y')" by "X * Y". But if X to some value, divide this value by 2 and then use this new value in replacement. Here the code im currently testing: See /*>>1<<*/ & /*>>2<<*/ & /*>>3<<*/, this is where im stuck! I would like to be able to apply modification on backrefrences before replacement depending of backreferences values. Difference between /*>>2<<*/ & /*>>3<<*/ is just the self call anonymous function param The method /*>>2<<*/ is the expected working solution as i can understand it. But strangely, the replacement is not working correctly, replacing by alias $1 * $2 and not by value...? You can test the jsfiddle //string to test ".coord('125','255')" //array of regex pattern and replacement //just one for the example //for this example, pattern matching alphanumerics is not necessary (only decimal in coord) but keep it as it var regexes = [ //FORMAT is array of [PATTERN,REPLACEMENT] /*.coord("X","Y")*/ [/\.coord\(['"]([\w]+)['"],['"]?([\w:\.\\]+)['"]?\)/g, '$1 * $2'] ]; function testReg(inputText, $output) { //using regex for (var i = 0; i < regexes.length; i++) { /*==>**1**/ //this one works as usual but dont let me get backreferences values $output.val(inputText.replace(regexes[i][0], regexes[i][2])); /*==>**2**/ //this one should works as i understand it $output.val(inputText.replace(regexes[i][0], function(match, $1, $2, $3, $4) { $1 = checkReplace(match, $1, $2, $3, $4); //here want using $1 modified value in replacement return regexes[i][3]; })); /*==>**3**/ //this one is just a test by self call anonymous function $output.val(inputText.replace(regexes[i][0], function(match, $1, $2, $3, $4) { $1 = checkReplace(match, $1, $2, $3, $4); //here want using $1 modified value in replacement return regexes[i][4]; }())); inputText = $output.val(); } } function checkReplace(match, $1, $2, $3, $4) { console.log(match + ':::' + $1 + ':::' + $2 + ':::' + $3 + ':::' + $4); //HERE i should be able if lets say $1 > 200 divide it by 2 //then returning $1 value if($1 > 200) $1 = parseInt($1 / 2); return $1; }? Sure I'm missing something, but cannot get it! Thanks for your help, regards. EDIT WORKING METHOD: Finally get it, as mentionned by Eric: The key thing is that the function returns the literal text to substitute, not a string which is parsed for backreferences.?? 
JSFIDDLE So complete working code: (please note as pattern replacement will change for each matched pattern and optimisation of speed code is not an issue here, i will keep it like that) $('#btn').click(function() { testReg($('#input').val(), $('#output')); }); //array of regex pattern and replacement //just one for the example var regexes = [ //FORMAT is array of [PATTERN,REPLACEMENT] /*.coord("X","Y")*/ [/\.coord\(['"]([\w]+)['"],['"]?([\w:\.\\]+)['"]?\)/g, '$1 * $2'] ]; function testReg(inputText, $output) { //using regex for (var i = 0; i < regexes.length; i++) { $output.val(inputText.replace(regexes[i][0], function(match, $1, $2, $3, $4) { var checkedValues = checkReplace(match, $1, $2, $3, $4); $1 = checkedValues[0]; $2 = checkedValues[1]; regexes[i][1] = regexes[i][1].replace('$1', $1).replace('$2', $2); return regexes[i][1]; })); inputText = $output.val(); } } function checkReplace(match, $1, $2, $3, $4) { console.log(match + ':::' + $1 + ':::' + $2 + ':::' + $3 + ':::' + $4); if ($1 > 200) $1 = parseInt($1 / 2); if ($2 > 200) $2 = parseInt($2 / 2); return [$1,$2]; }

    Read the article

  • Using jQuery and SPServices to Display List Items

    - by Bil Simser
    I had an interesting challenge recently that I turned to Marc Anderson’s wonderful SPServices project for. If you haven’t already seen or used SPServices, please do. It’s a jQuery library that does primarily two things. First, it wraps up all of the SharePoint web services in a nice little AJAX wrapper for use in JavaScript. Second, it enhances the form editing of items in SharePoint so you’re not hacking up your List Form pages. My challenge was simple but interesting. The user wanted to display a SharePoint item page (DispForm.aspx, which already had some customization on it to display related items via this blog post from Codeless Solutions for SharePoint) but launch from an external application using the value of one of the fields in the SharePoint list. For simplicity let’s say my list is a list of customers and the related list is a list of orders for that customer. It would look something like this (click on the item to see the full image): Your first thought might be, that’s easy! Display the customer information using a DataView Web Part and filter the item using a query string to match the customer number. However there are a few problems with this idea: You’ll need to build a custom page and then attach that related orders view to it. This is a bit of a problem because the solution from Codeless Solutions relies on the Title field on the page to be displayed. On a custom page you would have to recreate all of the elements found on the DispForm.aspx page so the related view would work. The DataView Web Part doesn’t look *exactly* like what the out of the box display form page does. Not a huge problem and can be overcome with some CSS style overrides but still, more work. A DVWP showing a single record doesn’t have the same toolbar that you would using the DispForm.aspx. Not a show-stopper and you can rebuild the toolbar but it’s going to potentially require code and then there’s the security trimming, etc. that you have to get right. DVWPs are not automatically updated if you add a column to the list like DispForm.aspx is. Work, work, work. For these reasons I thought it would be easier to take the already existing (modified) DispForm.aspx page and just add some jQuery magic to the page to find the item. Why do we need to find it? DispForm.aspx relies on a querystring parameter called “ID” which then displays whatever that item ID number is in the list. Trouble is, when you’re coming in from an external app via a link, you don’t know what that internal ID is (and frankly shouldn’t). I don’t like exposing internal SharePoint IDs to the outside world for the same reason I don’t do it with database IDs. They’re internal and while it’s find to use on the site itself you don’t want external links using it. It’s volatile and can change (delete one item then re-add it back with the same data and watch any ID references break). The next thought might be to call a SharePoint web service with a CAML query to get the item ID number using some criteria (in this case, the customer number). That’s great if you have that ability but again we had an existing application we were just adding a link to. The last thing I wanted to do was to crack open the code on that sucker and start calling web services (primarily because it’s Java, but really I’m a lazy geek). However if you’re doing this and have access to call a web service that would be an option. 
Back to this problem, how do I a) find a SharePoint List Item based on some field value other than ID and b) make it low impact so I can just construct a URL to it? That’s where jQuery and SPServices came to the rescue. After spending a few hours of emails back and forth with Marc and a couple of phone calls (and updating jQuery to the latest version, duh!) it was a simple answer. First we need a reference to a) jQuery b) SPServices and c) our script. I just dropped a Content Editor Web Part, the Swiss Army Knives of Web Parts, onto the DispForm.aspx page and added these lines: <script type="text/javascript" src="http://intranet/JavaScript/jquery-1.4.2.min.js"></script> <script type="text/javascript" src="http://intranet/JavaScript/jquery.SPServices-0.5.3.min.js"></script> <script type="text/javascript" src="http://intranet/JavaScript/RedirectToID.js"> </script> Update it to point to where you keep your scripts located. I prefer to keep them all in Document Libraries as I can make changes to them without having to remote into the server (and on a multiple web front end, that’s just a PITA), it provides me with version control of sorts, and it’s quick to add new plugins and scripts. Now we can look at our RedirectToID.js script. This invokes the SPServices Library to call the GetListItems method of the Lists web service and then rewrites the URL to DispForm.aspx to use the correct SharePoint ID (the internal one). $(document).ready(function(){ var queryStringValues = $().SPServices.SPGetQueryString(); var id = queryStringValues["ID"]; if(id == "0") { var customer = queryStringValues["CustomerNumber"]; var query = "<Query><Where><Eq><FieldRef Name='CustomerNumber'/><Value Type='Text'>" + customer + "</Value></Eq></Where></Query>"; var url = window.location; $().SPServices({ operation: "GetListItems", listName: "Customers", async: false, CAMLQuery: query, completefunc: function (xData, Status) { $(xData.responseXML).find("[nodeName=z:row]").each(function(){ id = $(this).attr("ows_ID"); url = $().SPServices.SPGetCurrentSite() + "/Lists/Customers/DispForm.aspx?ID=" + id; window.location = url; }); } }); } }); What’s happening here? Line 3: We call SPServices.SPGetQueryString to get an array of query string values (a handy function in the library as I had 15 lines of code to do this which is now gone). Line 4: Extract the ID value from the query string Line 6: If we pass in “0” it means we’re looking up a field value. This allows DispForm.aspx to work like normal with SharePoint lists but lookup our values when invoked. Why ID at all? DispForm.aspx doesn’t work unless you pass in something and “0” is a *magic* number that will invoke the page but not lookup a value in the database. Line 8-15: Extract the CustomerNumber query string value, build a CAML query to find it then call the GetListitems method using SPServices Line 16: Process the results in our completefunc to iterate over all the rows (there should only be one) and extract the real ID of the item Line 17-20: Build a new URL based on the site (using a call to SPGetCurrentSite) and append our real ID to redirect to the DispForm.aspx page As you can see, it dynamically creates a CAML query for the call to the web service using the passed in value. You could even make this generic to take in different query strings, one for the field name to search for and the other for the value to find. That way it could be used for any field you want. 
For example you could bring up the correct item on the DispForm.aspx page based on customer name with something like this: http://myserver/Lists/Customers/DispForm.aspx?ID=0&FilterId=CustomerName&FilterValue=Sony Use your imagination. Some people would opt for building a custom page with a DVWP but if you want to leverage all the functionality of DispForm.aspx this might come in handy if you don’t want to rely on internal SharePoint IDs.

    Read the article

  • Adding the New HTML Editor Extender to a Web Forms Application using NuGet

    - by Stephen Walther
    The July 2011 release of the Ajax Control Toolkit includes a new, lightweight, HTML5 compatible HTML Editor extender. In this blog entry, I explain how you can take advantage of NuGet to quickly add the new HTML Editor control extender to a new or existing ASP.NET Web Forms application. Installing the Latest Version of the Ajax Control Toolkit with NuGet NuGet is a package manager. It enables you to quickly install new software directly from within Visual Studio 2010. You can use NuGet to install additional software when building any type of .NET application including ASP.NET Web Forms and ASP.NET MVC applications. If you have not already installed NuGet then you can install NuGet by navigating to the following address and clicking the giant install button: http://nuget.org/ After you install NuGet, you can add the Ajax Control Toolkit to a new or existing ASP.NET Web Forms application by selecting the Visual Studio menu option Tools, Library Package Manager, Package Manager Console: Selecting this menu option opens the Package Manager Console. You can enter the command Install-Package AjaxControlToolkit in the console to install the Ajax Control Toolkit: After you install the Ajax Control Toolkit with NuGet, your application will include an assembly reference to the AjaxControlToolkit.dll and SanitizerProviders.dll assemblies: Furthermore, your Web.config file will be updated to contain a new tag prefix for the Ajax Control Toolkit controls: <configuration> <system.web> <compilation debug="true" targetFramework="4.0" /> <pages> <controls> <add tagPrefix="ajaxToolkit" assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" /> </controls> </pages> </system.web> </configuration> The configuration file installed by NuGet adds the prefix ajaxToolkit for all of the Ajax Control Toolkit controls. You can type ajaxToolkit: in source view to get auto-complete in Source view. You can, of course, change this prefix to anything you want. Using the HTML Editor Extender After you install the Ajax Control Toolkit, you can use the HTML Editor Extender with the standard ASP.NET TextBox control to enable users to enter rich formatting such as bold, underline, italic, different fonts, and different background and foreground colors. For example, the following page can be used for entering comments. The page contains a standard ASP.NET TextBox, Button, and Label control. When you click the button, any text entered into the TextBox is displayed in the Label control. It is a pretty boring page: Let’s make this page fancier by extending the standard ASP.NET TextBox with the HTML Editor extender control: Notice that the ASP.NET TextBox now has a toolbar which includes buttons for performing various kinds of formatting. For example, you can change the size and font used for the text. You also can change the foreground and background color – and make many other formatting changes. You can customize the toolbar buttons which the HTML Editor extender displays. 
To learn how to customize the toolbar, see the HTML Editor Extender sample page here: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/HTMLEditorExtender/HTMLEditorExtender.aspx Here’s the source code for the ASP.NET page: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="WebApplication1.Default" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Add Comments</title> </head> <body> <form id="form1" runat="server"> <div> <ajaxToolkit:ToolkitScriptManager ID="TSM1" runat="server" /> <asp:TextBox ID="txtComments" TextMode="MultiLine" Columns="50" Rows="8" Runat="server" /> <ajaxToolkit:HtmlEditorExtender ID="hee" TargetControlID="txtComments" Runat="server" /> <br /><br /> <asp:Button ID="btnSubmit" Text="Add Comment" Runat="server" onclick="btnSubmit_Click" /> <hr /> <asp:Label ID="lblComment" Runat="server" /> </div> </form> </body> </html> Notice that the page above contains 5 controls. The page contains a standard ASP.NET TextBox, Button, and Label control. However, the page also contains an Ajax Control Toolkit ToolkitScriptManager control and HtmlEditorExtender control. The HTML Editor extender control extends the standard ASP.NET TextBox control. The HTML Editor TargetID attribute points at the TextBox control. Here’s the code-behind for the page above:   using System; namespace WebApplication1 { public partial class Default : System.Web.UI.Page { protected void btnSubmit_Click(object sender, EventArgs e) { lblComment.Text = txtComments.Text; } } }   Preventing XSS/JavaScript Injection Attacks If you use an HTML Editor -- any HTML Editor -- in a public facing web page then you are opening your website up to Cross-Site Scripting (XSS) attacks. An evil hacker could submit HTML using the HTML Editor which contains JavaScript that steals private information such as other user’s passwords. Imagine, for example, that you create a web page which enables your customers to post comments about your website. Furthermore, imagine that you decide to redisplay the comments so every user can see them. In that case, a malicious user could submit JavaScript which displays a dialog asking for a user name and password. When an unsuspecting customer enters their secret password, the script could transfer the password to the hacker’s website. So how do you accept HTML content without opening your website up to JavaScript injection attacks? The Ajax Control Toolkit HTML Editor supports the Anti-XSS library. You can use the Anti-XSS library to sanitize any HTML content. The Anti-XSS library, for example, strips away all JavaScript automatically. You can download the Anti-XSS library from NuGet. Open the Package Manager Console and execute the command Install-Package AntiXSS: Adding the Anti-XSS library to your application adds two assemblies to your application named AntiXssLibrary.dll and HtmlSanitizationLibrary.dll. 
After you install the Anti-XSS library, you can configure the HTML Editor extender to use the Anti-XSS library your application’s web.config file: <?xml version="1.0" encoding="utf-8"?> <configuration> <configSections> <sectionGroup name="system.web"> <section name="sanitizer" requirePermission="false" type="AjaxControlToolkit.Sanitizer.ProviderSanitizerSection, AjaxControlToolkit"/> </sectionGroup> </configSections> <system.web> <sanitizer defaultProvider="AntiXssSanitizerProvider"> <providers> <add name="AntiXssSanitizerProvider" type="AjaxControlToolkit.Sanitizer.AntiXssSanitizerProvider"></add> </providers> </sanitizer> <compilation debug="true" targetFramework="4.0" /> <pages> <controls> <add tagPrefix="ajaxToolkit" assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" /> </controls> </pages> </system.web> </configuration> Summary In this blog entry, I described how you can quickly get started using the new HTML Editor extender – included with the July 2011 release of the Ajax Control Toolkit – by installing the Ajax Control Toolkit with NuGet. If you want to learn more about the HTML Editor then please take a look at the Ajax Control Toolkit sample site: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/HTMLEditorExtender/HTMLEditorExtender.aspx

    Read the article

  • Installing Lubuntu 14.04.1 forcepae fails

    - by Rantanplan
    I tried to install Lubuntu 14.04.1 from a CD. First, I chose Try Lubuntu without installing which gave: ERROR: PAE is disabled on this Pentium M (PAE can potentially be enabled with kernel parameter "forcepae" ... Following the description on https://help.ubuntu.com/community/PAE, I used forcepae and tried Try Lubuntu without installing again. That worked fine. dmesg | grep -i pae showed: [ 0.000000] Kernel command line: file=/cdrom/preseed/lubuntu.seed boot=casper initrd=/casper/initrd.lz quiet splash -- forcepae [ 0.008118] PAE forced! On the live-CD session, I tried installing Lubuntu double clicking on the install button on the desktop. Here, the CD starts running but then stops running and nothing happens. Next, I rebooted and tried installing Lubuntu directly from the boot menu screen using forcepae again. After a while, I receive the following error message: The installer encountered an unrecoverable error. A desktop session will now be run so that you may investigate the problem or try installing again. Hitting Enter brings me to the desktop. For what errors should I search? And how? Finally, I rebooted once more and tried Check disc for defects with forcepae option; no errors have been found. Now, I am wondering how to find the error or whether it would be better to follow advice c in https://help.ubuntu.com/community/PAE: "Move the hard disk to a computer on which the processor has PAE capability and PAE flag (that is, almost everything else than a Banias). Install the system as usual but don't add restricted drivers. After the install move the disk back." Thanks for some hints! Perhaps some of the following can help: On Lubuntu 12.04: cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 13 model name : Intel(R) Pentium(R) M processor 1.50GHz stepping : 6 microcode : 0x17 cpu MHz : 600.000 cache size : 2048 KB fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 2 wp : yes flags : fpu vme de pse tsc msr mce cx8 mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 ss tm pbe up bts est tm2 bogomips : 1284.76 clflush size : 64 cache_alignment : 64 address sizes : 32 bits physical, 32 bits virtual power management: uname -a Linux humboldt 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:45:51 UTC 2014 i686 i686 i386 GNU/Linux lsb_release -a No LSB modules are available. 
Distributor ID: Ubuntu Description: Ubuntu 12.04.5 LTS Release: 12.04 Codename: precise cpuid eax in eax ebx ecx edx 00000000 00000002 756e6547 6c65746e 49656e69 00000001 000006d6 00000816 00000180 afe9f9bf 00000002 02b3b001 000000f0 00000000 2c04307d 80000000 80000004 00000000 00000000 00000000 80000001 00000000 00000000 00000000 00000000 80000002 20202020 20202020 65746e49 2952286c 80000003 6e655020 6d756974 20295228 7270204d 80000004 7365636f 20726f73 30352e31 007a4847 Vendor ID: "GenuineIntel"; CPUID level 2 Intel-specific functions: Version 000006d6: Type 0 - Original OEM Family 6 - Pentium Pro Model 13 - Stepping 6 Reserved 0 Brand index: 22 [not in table] Extended brand string: " Intel(R) Pentium(R) M processor 1.50GHz" CLFLUSH instruction cache line size: 8 Feature flags afe9f9bf: FPU Floating Point Unit VME Virtual 8086 Mode Enhancements DE Debugging Extensions PSE Page Size Extensions TSC Time Stamp Counter MSR Model Specific Registers MCE Machine Check Exception CX8 COMPXCHG8B Instruction SEP Fast System Call MTRR Memory Type Range Registers PGE PTE Global Flag MCA Machine Check Architecture CMOV Conditional Move and Compare Instructions FGPAT Page Attribute Table CLFSH CFLUSH instruction DS Debug store ACPI Thermal Monitor and Clock Ctrl MMX MMX instruction set FXSR Fast FP/MMX Streaming SIMD Extensions save/restore SSE Streaming SIMD Extensions instruction set SSE2 SSE2 extensions SS Self Snoop TM Thermal monitor 31 reserved TLB and cache info: b0: unknown TLB/cache descriptor b3: unknown TLB/cache descriptor 02: Instruction TLB: 4MB pages, 4-way set assoc, 2 entries f0: unknown TLB/cache descriptor 7d: unknown TLB/cache descriptor 30: unknown TLB/cache descriptor 04: Data TLB: 4MB pages, 4-way set assoc, 8 entries 2c: unknown TLB/cache descriptor On Lubuntu 14.04.1 live-CD with forcepae: cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 13 model name : Intel(R) Pentium(R) M processor 1.50GHz stepping : 6 microcode : 0x17 cpu MHz : 600.000 cache size : 2048 KB physical id : 0 siblings : 1 core id : 0 cpu cores : 1 apicid : 0 initial apicid : 0 fdiv_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 2 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 ss tm pbe bts est tm2 bogomips : 1284.68 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 32 bits virtual power management: uname -a Linux lubuntu 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:12 UTC 2014 i686 i686 i686 GNU/Linux lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 14.04.1 LTS Release: 14.04 Codename: trusty cpuid CPU 0: vendor_id = "GenuineIntel" version information (1/eax): processor type = primary processor (0) family = Intel Pentium Pro/II/III/Celeron/Core/Core 2/Atom, AMD Athlon/Duron, Cyrix M2, VIA C3 (6) model = 0xd (13) stepping id = 0x6 (6) extended family = 0x0 (0) extended model = 0x0 (0) (simple synth) = Intel Pentium M (Dothan B1) / Celeron M (Dothan B1), 90nm miscellaneous (1/ebx): process local APIC physical ID = 0x0 (0) cpu count = 0x0 (0) CLFLUSH line size = 0x8 (8) brand index = 0x16 (22) brand id = 0x16 (22): Intel Pentium M, .13um feature information (1/edx): x87 FPU on chip = true virtual-8086 mode enhancement = true debugging extensions = true page size extensions = true time stamp counter = true RDMSR and WRMSR support = true physical address extensions = false machine check exception = true CMPXCHG8B inst. 
= true APIC on chip = false SYSENTER and SYSEXIT = true memory type range registers = true PTE global bit = true machine check architecture = true conditional move/compare instruction = true page attribute table = true page size extension = false processor serial number = false CLFLUSH instruction = true debug store = true thermal monitor and clock ctrl = true MMX Technology = true FXSAVE/FXRSTOR = true SSE extensions = true SSE2 extensions = true self snoop = true hyper-threading / multi-core supported = false therm. monitor = true IA64 = false pending break event = true feature information (1/ecx): PNI/SSE3: Prescott New Instructions = false PCLMULDQ instruction = false 64-bit debug store = false MONITOR/MWAIT = false CPL-qualified debug store = false VMX: virtual machine extensions = false SMX: safer mode extensions = false Enhanced Intel SpeedStep Technology = true thermal monitor 2 = true SSSE3 extensions = false context ID: adaptive or shared L1 data = false FMA instruction = false CMPXCHG16B instruction = false xTPR disable = false perfmon and debug = false process context identifiers = false direct cache access = false SSE4.1 extensions = false SSE4.2 extensions = false extended xAPIC support = false MOVBE instruction = false POPCNT instruction = false time stamp counter deadline = false AES instruction = false XSAVE/XSTOR states = false OS-enabled XSAVE/XSTOR = false AVX: advanced vector extensions = false F16C half-precision convert instruction = false RDRAND instruction = false hypervisor guest status = false cache and TLB information (2): 0xb0: instruction TLB: 4K, 4-way, 128 entries 0xb3: data TLB: 4K, 4-way, 128 entries 0x02: instruction TLB: 4M pages, 4-way, 2 entries 0xf0: 64 byte prefetching 0x7d: L2 cache: 2M, 8-way, sectored, 64 byte lines 0x30: L1 cache: 32K, 8-way, 64 byte lines 0x04: data TLB: 4M pages, 4-way, 8 entries 0x2c: L1 data cache: 32K, 8-way, 64 byte lines extended feature flags (0x80000001/edx): SYSCALL and SYSRET instructions = false execution disable = false 1-GB large page support = false RDTSCP = false 64-bit extensions technology available = false Intel feature flags (0x80000001/ecx): LAHF/SAHF supported in 64-bit mode = false LZCNT advanced bit manipulation = false 3DNow! PREFETCH/PREFETCHW instructions = false brand = " Intel(R) Pentium(R) M processor 1.50GHz" (multi-processing synth): none (multi-processing method): Intel leaf 1 (synth) = Intel Pentium M (Dothan B1), 90nm

    Read the article

  • Using jQuery and OData to Insert a Database Record

    - by Stephen Walther
    In my previous blog entry, I explored two ways of inserting a database record using jQuery. We added a new Movie to the Movie database table by using a generic handler and by using a WCF service. In this blog entry, I want to take a brief look at how you can insert a database record using OData. Introduction to OData The Open Data Protocol (OData) was developed by Microsoft to be an open standard for communicating data across the Internet. Because the protocol is compatible with standards such as REST and JSON, the protocol is particularly well suited for Ajax. OData has undergone several name changes. It was previously referred to as Astoria and ADO.NET Data Services. OData is used by Sharepoint Server 2010, Azure Storage Services, Excel 2010, SQL Server 2008, and project code name “Dallas.” Because OData is being adopted as the public interface of so many important Microsoft technologies, it is a good protocol to learn. You can learn more about OData by visiting the following websites: http://www.odata.org http://msdn.microsoft.com/en-us/data/bb931106.aspx When using the .NET framework, you can easily expose database data through the OData protocol by creating a WCF Data Service. In this blog entry, I will create a WCF Data Service that exposes the Movie database table. Create the Database and Data Model The MoviesDB database is a simple database that contains the following Movies table: You need to create a data model to represent the MoviesDB database. In this blog entry, I use the ADO.NET Entity Framework to create my data model. However, WCF Data Services and OData are not tied to any particular OR/M framework such as the ADO.NET Entity Framework. For details on creating the Entity Framework data model for the MoviesDB database, see the previous blog entry. Create a WCF Data Service You create a new WCF Service by selecting the menu option Project, Add New Item and selecting the WCF Data Service item template (see Figure 1). Name the new WCF Data Service MovieService.svc. Figure 1 – Adding a WCF Data Service Listing 1 contains the default code that you get when you create a new WCF Data Service. There are two things that you need to modify. Listing 1 – New WCF Data Service File using System; using System.Collections.Generic; using System.Data.Services; using System.Data.Services.Common; using System.Linq; using System.ServiceModel.Web; using System.Web; namespace WebApplication1 { public class MovieService : DataService< /* TODO: put your data source class name here */ > { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc. // Examples: // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead); // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } } First, you need to replace the comment /* TODO: put your data source class name here */ with a class that represents the data that you want to expose from the service. In our case, we need to replace the comment with a reference to the MoviesDBEntities class generated by the Entity Framework. Next, you need to configure the security for the WCF Data Service. By default, you cannot query or modify the movie data. We need to update the Entity Set Access Rule to enable us to insert a new database record. 
The updated MovieService.svc is contained in Listing 2: Listing 2 – MovieService.svc using System.Data.Services; using System.Data.Services.Common; namespace WebApplication1 { public class MovieService : DataService<MoviesDBEntities> { public static void InitializeService(DataServiceConfiguration config) { config.SetEntitySetAccessRule("Movies", EntitySetRights.AllWrite); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } } That’s all we have to do. We can now insert a new Movie into the Movies database table by posting a new Movie to the following URL: /MovieService.svc/Movies The request must be a POST request. The Movie must be represented as JSON. Using jQuery with OData The HTML page in Listing 3 illustrates how you can use jQuery to insert a new Movie into the Movies database table using the OData protocol. Listing 3 – Default.htm <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>jQuery OData Insert</title> <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> <script src="Scripts/json2.js" type="text/javascript"></script> </head> <body> <form> <label>Title:</label> <input id="title" /> <br /> <label>Director:</label> <input id="director" /> </form> <button id="btnAdd">Add Movie</button> <script type="text/javascript"> $("#btnAdd").click(function () { // Convert the form into an object var data = { Title: $("#title").val(), Director: $("#director").val() }; // JSONify the data var data = JSON.stringify(data); // Post it $.ajax({ type: "POST", contentType: "application/json; charset=utf-8", url: "MovieService.svc/Movies", data: data, dataType: "json", success: insertCallback }); }); function insertCallback(result) { // unwrap result var newMovie = result["d"]; // Show primary key alert("Movie added with primary key " + newMovie.Id); } </script> </body> </html> jQuery does not include a JSON serializer. Therefore, we need to include the JSON2 library to serialize the new Movie that we wish to create. The Movie is serialized by calling the JSON.stringify() method: var data = JSON.stringify(data); You can download the JSON2 library from the following website: http://www.json.org/js.html The jQuery ajax() method is called to insert the new Movie. Notice that both the contentType and dataType are set to use JSON. The jQuery ajax() method is used to perform a POST operation against the URL MovieService.svc/Movies. Because the POST payload contains a JSON representation of a new Movie, a new Movie is added to the database table of Movies. When the POST completes successfully, the insertCallback() method is called. The new Movie is passed to this method. The method simply displays the primary key of the new Movie: Summary The OData protocol (and its enabling technology named WCF Data Services) works very nicely with Ajax. By creating a WCF Data Service, you can quickly expose your database data to an Ajax application by taking advantage of open standards such as REST, JSON, and OData. In the next blog entry, I want to take a closer look at how the OData protocol supports different methods of querying data.
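
    To make the "POST JSON to the entity set" point concrete, this is roughly what the ajax() call above puts on the wire (the movie values are only examples); the service replies with the new entity wrapped in the "d" property that insertCallback() unwraps:

      POST /MovieService.svc/Movies HTTP/1.1
      Host: localhost
      Content-Type: application/json; charset=utf-8

      {"Title":"Memento","Director":"Christopher Nolan"}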

    Read the article
