Search Results

Search found 11051 results on 443 pages for 'group concat'.

Page 107/443 | < Previous Page | 103 104 105 106 107 108 109 110 111 112 113 114  | Next Page >

  • How do I classify using SVM Classifier?

    - by Gomathi
    I'm doing a project on liver tumor classification. I initially used the Region Growing method for liver segmentation, and from that I segmented the tumor using FCM. I then obtained the texture features using the Gray Level Co-occurrence Matrix. My output for that was:

        stats =
            autoc: [1.857855266614132e+000 1.857955341199538e+000]
            contr: [5.103143332457753e-002 5.030548650257343e-002]
            corrm: [9.512661919561399e-001 9.519459060378332e-001]
            corrp: [9.512661919561385e-001 9.519459060378338e-001]
            cprom: [7.885631654779597e+001 7.905268525471267e+001]

    Now how should I give this as an input to the SVM program?

        function [itr] = multisvm( T,C,tst )
        %MULTISVM(2.0) classifies the class of given training vector according to the
        % given group and gives us result that which class it belongs.
        % We have also to input the testing matrix
        %Inputs: T=Training Matrix, C=Group, tst=Testing matrix
        %Outputs: itr=Resultant class(Group,USE ROW VECTOR MATRIX) to which tst set belongs
        %----------------------------------------------------------------------%
        % IMPORTANT: DON'T USE THIS PROGRAM FOR CLASS LESS THAN 3,             %
        % OTHERWISE USE svmtrain,svmclassify DIRECTLY or                       %
        % add an else condition also for that case in this program.            %
        % Modify required data to use Kernel Functions and Plot also           %
        %----------------------------------------------------------------------%
        % Date:11-08-2011(DD-MM-YYYY)
        % This function for multiclass Support Vector Machine is written by
        % ANAND MISHRA (Machine Vision Lab. CEERI, Pilani, India)
        % and this is free to use. email: [email protected]
        % Updated version 2.0 Date:14-10-2011(DD-MM-YYYY)

        u=unique(C);
        N=length(u);
        c4=[]; c3=[];
        j=1; k=1;
        if(N>2)
            itr=1;
            classes=0;
            cond=max(C)-min(C);
            while((classes~=1)&&(itr<=length(u))&& size(C,2)>1 && cond>0) %This while loop is the multiclass SVM Trick
                c1=(C==u(itr));
                newClass=c1;
                svmStruct = svmtrain(T,newClass);
                classes = svmclassify(svmStruct,tst);
                % This is the loop for Reduction of Training Set
                for i=1:size(newClass,2)
                    if newClass(1,i)==0;
                        c3(k,:)=T(i,:);
                        k=k+1;
                    end
                end
                T=c3; c3=[]; k=1;
                % This is the loop for reduction of group
                for i=1:size(newClass,2)
                    if newClass(1,i)==0;
                        c4(1,j)=C(1,i);
                        j=j+1;
                    end
                end
                C=c4; c4=[]; j=1;
                % Condition for avoiding a group that contains only similar values,
                % and then reducing it further for processing
                cond=max(C)-min(C);
                % This condition can select the particular value of iteration
                % based on classes
                if classes~=1
                    itr=itr+1;
                end
            end
        end
        end

    Kindly guide me.
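    For orientation only, here is a rough sketch (in Python with scikit-learn, not the MATLAB multisvm routine above) of the input layout an SVM classifier expects: one row per sample, one column per texture feature, plus a label per row. The feature values, labels and shapes below are invented for illustration.

        # Hypothetical sketch: feeding GLCM texture features to an SVM classifier.
        # Not the MATLAB code above; values and labels are made up.
        import numpy as np
        from sklearn.svm import SVC

        # Each row is one segmented region; each column is one texture feature
        # (e.g. autocorrelation, contrast, correlation, cluster prominence, ...).
        X_train = np.array([
            [1.8578, 0.0510, 0.9512, 78.85],   # sample 1
            [1.8579, 0.0503, 0.9519, 79.05],   # sample 2
            [1.8401, 0.0621, 0.9433, 75.10],   # sample 3
            [1.8710, 0.0480, 0.9550, 80.20],   # sample 4
        ])
        y_train = np.array([0, 1, 0, 1])        # class label for each row

        clf = SVC(kernel="rbf")                 # multiclass is handled internally (one-vs-one)
        clf.fit(X_train, y_train)

        X_test = np.array([[1.8570, 0.0507, 0.9515, 78.9]])
        print(clf.predict(X_test))              # -> predicted class label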

    Read the article

  • segmented reduction with scattered segments

    - by Christian Rau
    I have to solve a pretty standard problem on the GPU, but I'm quite new to practical GPGPU, so I'm looking for ideas on how to approach it. I have many points in 3-space which are assigned to a very small number of groups (each point belongs to one group), specifically 15 in this case (it doesn't ever change). Now I want to compute the mean and covariance matrix of each group. On the CPU it's roughly:

        for each point p
        {
            mean[p.group] += p.pos;
            covariance[p.group] += p.pos * p.pos;
            ++count[p.group];
        }
        for each group g
        {
            mean[g] /= count[g];
            covariance[g] = covariance[g]/count[g] - mean[g]*mean[g];
        }

    Since the number of groups is extremely small, the last step can be done on the CPU (I need those values on the CPU anyway). The first step is really just a segmented reduction, but with the segments scattered around.

    So the first idea I came up with was to sort the points by their groups first. I thought about a simple bucket sort using atomic_inc to compute bucket sizes and per-point relocation indices (got a better idea for sorting? atomics may not be the best idea). After that they're sorted by group, and I could possibly come up with an adaptation of the segmented scan algorithms presented here.

    But in this special case I have a rather large amount of data per point (9-10 floats, maybe even doubles if the need arises), so the standard algorithms using one shared-memory element per thread and one thread per point might cause problems with per-multiprocessor resources such as shared memory or registers (OK, much more so on compute capability 1.x than 2.x, but still).

    Due to the very small and constant number of groups I thought there might be better approaches. Maybe there are already existing ideas suited to these specific properties of such a standard problem. Or maybe my general approach isn't that bad, and you have ideas for improving the individual steps, like a good sorting algorithm suited to a very small number of keys, or a segmented reduction algorithm that minimizes shared memory/register usage.

    I'm looking for general approaches and don't want to use external libraries. FWIW I'm using OpenCL, but it shouldn't really matter, as the general concepts of GPU computing don't differ much across the major frameworks.
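    A minimal CPU-side reference of the accumulation described above, sketched in Python/NumPy (a direct transcription of the pseudocode, not a GPU implementation; the array shapes are assumed and every group is assumed to contain at least one point):

        # CPU reference for the scattered segmented reduction described above.
        # points: (N, 3) positions; groups: (N,) integer group ids in [0, num_groups).
        import numpy as np

        def group_stats(points, groups, num_groups=15):
            dim = points.shape[1]
            mean = np.zeros((num_groups, dim))
            cov = np.zeros((num_groups, dim, dim))
            count = np.zeros(num_groups)

            # First pass: accumulate sums, outer-product sums and counts per group.
            for p, g in zip(points, groups):
                mean[g] += p
                cov[g] += np.outer(p, p)
                count[g] += 1

            # Second (tiny) pass: finalize mean and covariance per group.
            for g in range(num_groups):
                mean[g] /= count[g]
                cov[g] = cov[g] / count[g] - np.outer(mean[g], mean[g])
            return mean, cov

        # Example usage with random data:
        pts = np.random.rand(1000, 3)
        ids = np.random.randint(0, 15, size=1000)
        m, c = group_stats(pts, ids)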

    Read the article

  • How do I do distributed UML development (à la FOSS)?

    - by James A. Rosen
    I have a UML project (built in IBM's Rational System Architect/Modeler, so stored in their XML format) that has grown quite large. Additionally, it now contains several pieces that other groups would like to re-use. I come from a software development (especially FOSS) background, and am trying to understand how to use that as an analogy here. The problem I am grappling with is similar to the Fragile Base Class problem. Let me start with how it works in an object-oriented (say, Java or Ruby) FOSS ecosystem: Group 1 publishes some "core" package, say "net/smtp version 1.0" Group 2 includes Group 1's net/smtp 1.0 package in the vendor library of their software project At some point, Group 1 creates a new 2.0 branch of net/smtp that breaks backwards compatibility (say, it removes an old class or method, or moves a class from one package to another). They tell users of the 1.0 version that it will be deprecated in one year. Group 2, when they have the time, updates to net/smtp 2.0. When they drop in the new package, their compiler (or test suite, for Ruby) tells them about the incompatibility. They do have to make some manual changes, but all of the changes are in the code, in plain text, a medium with which they are quite familiar. Plus, they can often use their IDE's (or text editor's) "global-search-and-replace" function once they figure out what the fixes are. When we try to apply this model to UML in RSA, we run into some problems. RSA supports some fairly powerful refactorings, but they seem to only work if you have write access to all of the pieces. If I rename a class in one package, RSA can rename the references, but only at the same time. It's very difficult to look at the underlying source (the XML) and figure out what's broken. To fix such a problem in the RSA editor itself means tons of clicking on things -- there is no good equivalent of "global-search-and-replace," at least not after an incomplete refactor. They real sticking point seems to be that RSA assumes that you want to do all your editing using their GUI, but that makes certain operations prohibitively difficult. Does anyone have examples of open-source UML projects that have overcome this problem? What strategies do they use for communicating changes?

    Read the article

  • Problem with Clojure function

    - by Bozhidar Batsov
    Hi everyone, I started working on Project Euler in Clojure yesterday and I have a problem with one of my solutions that I cannot figure out. I have this function:

        (defn find-max-palindrom-in-range [beg end]
          (reduce max
                  (loop [n beg result []]
                    (if (>= n end)
                      result
                      (recur (inc n)
                             (concat result
                                     (filter #(is-palindrom? %)
                                             (map #(* n %) (range beg end)))))))))

    I try to run it like this:

        (find-max-palindrom-in-range 100 1000)

    and I get this exception:

        java.lang.Integer cannot be cast to clojure.lang.IFn
        [Thrown class java.lang.ClassCastException]

    which I presume means that at some place I'm trying to evaluate an Integer as a function. However, I cannot find that place, and what puzzles me more is that everything works if I simply evaluate it like this:

        (reduce max
                (loop [n 100 result []]
                  (if (>= n 1000)
                    result
                    (recur (inc n)
                           (concat result
                                   (filter #(is-palindrom? %)
                                           (map #(* n %) (range 100 1000))))))))

    (I've just stripped down the function definition and replaced the parameters with constants.) Thanks in advance for your help, and sorry if I'm bothering you with an idiotic mistake on my part. Btw I'm using Clojure 1.1 and the newest SLIME from ELPA.
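    For comparison, the computation the function is aiming at (the largest palindromic product of two numbers in [beg, end)) can be sketched in Python. This is only an illustration of the intended algorithm, not a fix for the Clojure code above:

        # Intended algorithm, sketched in Python: the largest palindromic product
        # of two numbers n*m with beg <= n, m < end (Project Euler problem 4 style).
        def is_palindrome(x):
            s = str(x)
            return s == s[::-1]

        def find_max_palindrome_in_range(beg, end):
            best = None
            for n in range(beg, end):
                for m in range(beg, end):
                    p = n * m
                    if is_palindrome(p) and (best is None or p > best):
                        best = p
            return best

        print(find_max_palindrome_in_range(100, 1000))  # -> 906609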

    Read the article

  • Which field is explain telling me to index?

    - by shady
    I don't understand what this EXPLAIN statement is saying. Which field needs an index? The first line is confusing to me because ref is NULL. Here's the query I'm using:

        SELECT pp.property_id AS 'good_prop_id',
               pr.site_number AS 'pr.site_number',
               CONCAT(pr.site_street_name, ' ', pr.site_street_type) AS 'pr.partial_addr',
               pr.county
        FROM realval_newdb.preforeclosures AS pr
        INNER JOIN realval_newdb.properties_preforeclosures AS pp USE INDEX (mee_id)
            ON (pr.mee_id = pp.mee_id)
        INNER JOIN listings_copy AS lc
            ON (pr.site_number = lc.site_number)
            AND (lc.site_street_name = CONCAT(pr.site_street_name, ' ', pr.site_street_type))
        WHERE lc.site_county = pr.county
        LIMIT 1;

    Can anyone help me optimize this query?

    Read the article

  • Can XPath concatenate two nodeset values? (for use in XForms)

    - by iHeartGreek
    Hi! I want to concatenate two nodeset values using XPath in XForms. I know that XPath has a concat(string, string) function, but how would I go about concatenating two nodeset values?

    EDIT: I tried the concat function. I tried this, and variations of it to make it work, but it doesn't:

        <xf:value ref="concat(instance('param_choices')/choice/root, .)"/>

    Below is a simplified code example of what I am trying to achieve.

    XForms model:

        <xf:instance id="param_choices" xmlns="">
          <choices>
            <root label="Param Choices">/param</root>
            <choice label="Name">/@AAA</choice>
            <choice label="Value">/@BBB</choice>
          </choices>
        </xf:instance>

    XForms UI code that I currently have:

        <xf:select ref="instance('criteria_data')/criteria/criterion" appearance="full">
          <xf:label>Param choices:</xf:label>
          <br/>
          <xf:itemset nodeset="instance('param_choices')/choice">
            <xf:label ref="@label"></xf:label>
            <xf:value ref="."></xf:value>
          </xf:itemset>
        </xf:select>

    (If the user selects the "Name" checkbox,) the XML output is:

        <criterion>/@BBB</criterion>

    However! I want to combine the root nodeset value with the current choice nodeset value. Essentially:

        <xf:value ref="(instance('definition_choices')/choice/root) + ."/>

    to achieve the following XML output:

        <criterion>/param/@BBB</criterion>

    Any suggestions on how to do this? (I am fairly new to XPath and XForms.)

    p.s. What I am asking made sense to me when I typed it out, but if you have trouble figuring out what I'm asking, just let me know. Thanks!

    Read the article

  • HTML Text writer

    - by user339160
    In my project I have created a custom control which inherits from Label. My aim is to add two links to it, using the same label to render both. I tried the code below; only the first link is loading, not the second. Please help. My sample code looks like:

        writer.RenderBeginTag(HtmlTextWriterTag.Link);
        writer.AddAttribute(HtmlTextWriterAttribute.Href, string.Concat(Path 1));
        writer.AddAttribute(HtmlTextWriterAttribute.Type, "text/css");
        writer.AddAttribute(HtmlTextWriterAttribute.Rel, "stylesheet");
        writer.RenderEndTag();

        writer.RenderBeginTag(HtmlTextWriterTag.Link);
        writer.AddAttribute(HtmlTextWriterAttribute.Href, string.Concat(Path 2));
        writer.AddAttribute(HtmlTextWriterAttribute.Type, "text/css");
        writer.AddAttribute(HtmlTextWriterAttribute.Rel, "stylesheet");
        writer.RenderEndTag();

    Read the article

  • Time diff calculations where date and time are in separate columns

    - by pedalpete
    I've got a query where I'm trying to get the duration in hours (e.g. 6.5 hours) between two different times. In my database, time and date are held in separate fields so I can efficiently query on just a startDate or endDate, as I never query specifically on time. My query looks like this:

        SELECT COUNT(*), IFNULL(SUM(TIMEDIFF(endTime, startTime)), 0)
        FROM events
        WHERE user = 18

    Sometimes an event will go overnight, so the difference between times needs to take into account the difference between the dates as well. I've been trying:

        SELECT COUNT(*), IFNULL(SUM(TIMEDIFF(CONCAT(endDate, ' ', endTime), CONCAT(startDate, ' ', startTime))), 0)
        FROM events
        WHERE user = 18

    Unfortunately I only get errors when I do this, and I can't seem to combine the two fields into a single timestamp.
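    To make the intent concrete outside SQL, here is a small Python sketch of the same calculation: the separate date and time columns are combined into full timestamps before subtracting, which is what the CONCAT(...) in the query is after. The row values below are invented:

        # Combining separate date and time fields into full timestamps before
        # computing a duration in hours.
        from datetime import date, time, datetime

        start_date, start_time = date(2010, 5, 3), time(22, 30)   # example row values
        end_date, end_time = date(2010, 5, 4), time(5, 0)          # event ends the next day

        start = datetime.combine(start_date, start_time)
        end = datetime.combine(end_date, end_time)

        hours = (end - start).total_seconds() / 3600.0
        print(hours)   # -> 6.5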

    Read the article

  • Java: Multithreading & UDP Socket Programming

    - by Ravi
    I am new to multithreading & socket programming in Java. I would like to know what is the best way to implement 2 threads - one for receiving a socket and one for sending a socket. If what I am trying to do sounds absurd, pls let me know why! The code is largely inspired from Sun's tutorials online.I want to use Multicast sockets so that I can work with a multicast group. class server extends Thread { static protected MulticastSocket socket = null; protected BufferedReader in = null; public InetAddress group; private static class receive implements Runnable { public void run() { try { byte[] buf = new byte[256]; DatagramPacket pkt = new DatagramPacket(buf,buf.length); socket.receive(pkt); String received = new String(pkt.getData(),0,pkt.getLength()); System.out.println("From server@" + received); Thread.sleep(1000); } catch (IOException e) { System.out.println("Error:"+e); } catch (InterruptedException e) { System.out.println("Error:"+e); } } } public server() throws IOException { super("server"); socket = new MulticastSocket(4446); group = InetAddress.getByName("239.231.12.3"); socket.joinGroup(group); } public void run() { while(1>0) { try { byte[] buf = new byte[256]; DatagramPacket pkt = new DatagramPacket(buf,buf.length); //String msg = reader.readLine(); String pid = ManagementFactory.getRuntimeMXBean().getName(); buf = pid.getBytes(); pkt = new DatagramPacket(buf,buf.length,group,4446); socket.send(pkt); Thread t = new Thread(new receive()); t.start(); while(t.isAlive()) { t.join(1000); } sleep(1); } catch (IOException e) { System.out.println("Error:"+e); } catch (InterruptedException e) { System.out.println("Error:"+e); } } //socket.close(); } public static void main(String[] args) throws IOException { new server().start(); //System.out.println("Hello"); } }
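    The question's code is Java, but the send-thread/receive-thread structure it describes can be sketched briefly in Python. The multicast group and port are taken from the code above; everything else is illustrative and is not a translation of the Java class:

        # Rough sketch: one receiver thread and one sender thread sharing a multicast group.
        import socket, struct, threading, time, os

        GROUP, PORT = "239.231.12.3", 4446

        def receiver():
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            sock.bind(("", PORT))
            # Join the multicast group on all interfaces.
            mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
            while True:
                data, addr = sock.recvfrom(256)
                print("From server@", data.decode(), addr)

        def sender():
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack('b', 1))
            while True:
                sock.sendto(str(os.getpid()).encode(), (GROUP, PORT))  # send our pid, like the Java code
                time.sleep(1)

        threading.Thread(target=receiver, daemon=True).start()
        threading.Thread(target=sender, daemon=True).start()
        time.sleep(10)   # let the two threads run for a bit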

    Read the article

  • Creating and Saving an Excel File

    - by Kris
    I have the following code that creates a new Excel file in my C# code behind. When I attempt to save the file I would like the user to select the location of the save. In Method #1, I can save the file my using the workbook SaveCopyAs without prompting the user for a location. This saves one file to the C:\Temp directory. Method #2 will save the file in my Users\Documents folder, then prompt the user to select the location and save a second copy. How can I eliminate the first copy from saving in the Users\Documents folder? Excel.Application oXL; Excel._Workbook oWB; Excel._Worksheet oSheet; Excel.Range oRng; try { //Start Excel and get Application object. oXL = new Excel.Application(); oXL.Visible = false; //Get a new workbook. oWB = (Excel._Workbook)(oXL.Workbooks.Add(Missing.Value)); oSheet = (Excel._Worksheet)oWB.ActiveSheet; // ***** oSheet.Cells[2, 6] = "Ship To:"; oSheet.get_Range("F2", "F2").Font.Bold = true; oSheet.Cells[2, 7] = sShipToName; oSheet.Cells[3, 7] = sAddress; oSheet.Cells[4, 7] = sCityStateZip; oSheet.Cells[5, 7] = sContactName; oSheet.Cells[6, 7] = sContactPhone; oSheet.Cells[9, 1] = "Shipment No:"; oSheet.get_Range("A9", "A9").Font.Bold = true; oSheet.Cells[9, 2] = sJobNumber; oSheet.Cells[9, 6] = "Courier:"; oSheet.get_Range("F9", "F9").Font.Bold = true; oSheet.Cells[9, 7] = sCarrierName; oSheet.Cells[11, 1] = "Requested Delivery Date:"; oSheet.get_Range("A11", "A11").Font.Bold = true; oSheet.Cells[11, 2] = sRequestDeliveryDate; oSheet.Cells[11, 6] = "Courier Acct No:"; oSheet.get_Range("F11", "F11").Font.Bold = true; oSheet.Cells[11, 7] = sCarrierAcctNum; // ***** Method #1 //oWB.SaveCopyAs(@"C:\Temp\" + sJobNumber +".xls"); Method #2 oXL.SaveWorkspace(sJobNumber + ".xls"); } catch (Exception theException) { String errorMessage; errorMessage = "Error: "; errorMessage = String.Concat(errorMessage, theException.Message); errorMessage = String.Concat(errorMessage, " Line: "); errorMessage = String.Concat(errorMessage, theException.Source); }

    Read the article

  • How do I make my MySQL query with joins more concise?

    - by John Hoffman
    I have a huge MySQL query that depends on JOINs.

        SELECT m.id, l.name as location, CONCAT(u.firstName, " ", u.lastName) AS matchee, u.email AS mEmail, u.description AS description, m.time AS meetingTime
        FROM matches AS m
        LEFT JOIN locations AS l ON locationID=l.id
        LEFT JOIN users AS u ON (u.id=m.user1ID)
        WHERE m.user2ID=2

        UNION

        SELECT m.id, l.name as location, CONCAT(u.firstName, " ", u.lastName) AS matchee, u.email AS mEmail, u.description AS description, m.time AS meetingTime
        FROM matches AS m
        LEFT JOIN locations AS l ON locationID=l.id
        LEFT JOIN users AS u ON (u.id=m.user2ID)
        WHERE m.user1ID=2

    The first 3 lines of each sub-statement divided by UNION are identical. How can I abide by the DRY principle, not repeat those three lines, and make this query more concise?
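    One possible way to avoid the repetition (a sketch, assuming each row of matches links exactly two users) is a single SELECT that joins users on whichever side is not the given user; note that, unlike UNION, this does not remove duplicate rows. Shown here inside a Python string with driver-style %s placeholders:

        # Sketch: a single query instead of the UNION, joining users on "the other
        # side" of the match. Table/column names come from the question; that a
        # match always links exactly two users is an assumption.
        user_id = 2
        query = """
            SELECT m.id, l.name AS location,
                   CONCAT(u.firstName, ' ', u.lastName) AS matchee,
                   u.email AS mEmail, u.description AS description,
                   m.time AS meetingTime
            FROM matches AS m
            LEFT JOIN locations AS l ON locationID = l.id
            LEFT JOIN users AS u
                   ON u.id = IF(m.user1ID = %s, m.user2ID, m.user1ID)
            WHERE %s IN (m.user1ID, m.user2ID)
        """
        # cursor.execute(query, [user_id, user_id])   # e.g. with MySQLdb / mysql.connector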

    Read the article

  • Why does LINQ to SQL sometimes add a field like "select 1 as test" to the other valid fields?

    - by Fredou
    I have to concatenate two LINQ to SQL queries, and I have an issue because the two queries don't return the same number of columns. What is weird is that after a .ToList() on the queries, they can be concatenated without a problem. The reason is that, for some LINQ to SQL reason, I have two extra columns named test and test2, which come from two left outer joins that LINQ to SQL automatically creates, something like "select 1 as test, tablefields". Is there any good reason for that? How do I remove this extra "1 as test" field? Here are a few examples of what it looks like: google result for linq 2 sql "select 1 as test"

    Read the article

  • Could not find any resources appropriate for the specified culture or the neutral culture.

    - by Captain Comic
    I created an assembly and later renamed it. Then I started getting runtime errors when calling:

        toolsMenuName = resourceManager.GetString(resourceName);

    The resourceName variable is "enTools" at runtime.

        Could not find any resources appropriate for the specified culture or the neutral culture.
        Make sure "Jfc.TFSAddIn.CommandBar.resources" was correctly embedded or linked into assembly
        "Jfc.TFSAddIn" at compile time, or that all the satellite assemblies required are loadable
        and fully signed.

    The code:

        string resourceName;
        ResourceManager resourceManager = new ResourceManager("Jfc.TFSAddIn.CommandBar", Assembly.GetExecutingAssembly());
        CultureInfo cultureInfo = new CultureInfo(_applicationObject.LocaleID);
        if(cultureInfo.TwoLetterISOLanguageName == "zh")
        {
            System.Globalization.CultureInfo parentCultureInfo = cultureInfo.Parent;
            resourceName = String.Concat(parentCultureInfo.Name, "Tools");
        }
        else
        {
            resourceName = String.Concat(cultureInfo.TwoLetterISOLanguageName, "Tools");
        }
        // EXCEPTION IS HERE
        toolsMenuName = resourceManager.GetString(resourceName);

    I can see the file CommandBar.resx included in the project; I can open it and can see the "enTools" string there.

    Read the article

  • NOT LIKE not working on comparison to a column

    - by rodling
    The data is fairly large and takes a few minutes to run every time, so debugging this problem is taking a lot of time. When I run LIKE concat('%',T.item,'%') on smaller data it seems to identify items properly. However, when I run it on the main DB (the code shown), it still shows many (maybe even all) of the exceptions.

    EDIT: it seems that when I add NOT it stops identifying items.

        select distinct T.comment
        from (select comment, source, item
              from data, non_informative
              where ticker != "O" and source != 7 and source != 6) as T
        where T.comment not like concat('%',T.item,'%')
        order by T.comment;

    comment and source are in data; item is in non_informative. Some items from T.item: 'Stock Analysis -', '#InsideTrades', 'IIROC Trade'. An example comment which should be removed: '#InsideTrades #4 | MACNAB CRAIG (Director,Officer,Chief Executive Officer): Filed Form 4 for $NNN (NATIONAL RETA'. I can't seem to figure out why it shows all the items.
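    One way to narrow this down (a debugging sketch, not a fix) is to pull a sample of (comment, item) pairs into Python and re-check the containment there; that helps spot items containing LIKE wildcards ('%', '_'), stray whitespace, or case differences. The connection details and the driver below are assumptions for illustration:

        # Hedged debugging aid: re-check "comment contains item" outside of SQL.
        import mysql.connector

        conn = mysql.connector.connect(user="user", password="pw", database="mydb")
        cur = conn.cursor()
        cur.execute("""
            select comment, item
            from data, non_informative
            where ticker != 'O' and source != 7 and source != 6
            limit 1000
        """)

        for comment, item in cur.fetchall():
            contains = item.lower() in comment.lower()   # rough stand-in for LIKE '%item%'
            if not contains:
                # These are the rows NOT LIKE should keep; spot-check them by eye.
                print(repr(item), "not found in", repr(comment[:60]))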

    Read the article

  • % confuses Python raw SQL query

    - by Jonathan
    Following this SO question, I'm trying to "truncate" all tables related to a certain Django application using the following raw SQL commands in Python:

        cursor.execute("set foreign_key_checks = 0")
        cursor.execute("select concat('truncate table ',table_schema,'.',table_name,';') as sql_stmt from information_schema.tables where table_schema = 'my_db' and table_type = 'base table' AND table_name LIKE 'some_prefix%'")
        for sql in [sql[0] for sql in cursor.fetchall()]:
            cursor.execute(sql)
        cursor.execute("set foreign_key_checks = 1")

    Alas, I receive the following error:

        C:\dev\my_project>my_script.py
        Traceback (most recent call last):
          File "C:\dev\my_project\my_script.py", line 295, in <module>
            cursor.execute(r"select concat('truncate table ',table_schema,'.',table_name,';') as sql_stmt from information_schema.tables where table_schema = 'my_db' and table_type = 'base table' AND table_name LIKE 'some_prefix%'")
          File "C:\Python26\lib\site-packages\django\db\backends\util.py", line 18, in execute
            sql = self.db.ops.last_executed_query(self.cursor, sql, params)
          File "C:\Python26\lib\site-packages\django\db\backends\__init__.py", line 216, in last_executed_query
            return smart_unicode(sql) % u_params
        TypeError: not enough arguments for format string

    Is the % in the LIKE making trouble? How can I work around it?
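    Two common workarounds are sketched below: escape the literal percent sign as %% so Django's parameter handling leaves it alone, or pass the LIKE pattern as a query parameter and let the driver quote it. These are sketches of the usual fixes, not guaranteed to be the only ones:

        # 1) Escape the literal percent sign as '%%':
        cursor.execute(
            "select concat('truncate table ', table_schema, '.', table_name, ';') as sql_stmt "
            "from information_schema.tables "
            "where table_schema = 'my_db' and table_type = 'base table' "
            "and table_name like 'some_prefix%%'"
        )

        # 2) Or pass the values as query parameters:
        cursor.execute(
            "select concat('truncate table ', table_schema, '.', table_name, ';') as sql_stmt "
            "from information_schema.tables "
            "where table_schema = %s and table_type = 'base table' and table_name like %s",
            ["my_db", "some_prefix%"],
        )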

    Read the article

  • 3d cube using canvas. Need a little improvement

    - by TimeManx
    I made this 3D cube using the following code:

        Matrix mMatrix = canvas.getMatrix();

        canvas.save();
        camera.save();
        camera.rotateY(-angle);
        camera.getMatrix(mMatrix);
        mMatrix.preTranslate(-width, 0);
        mMatrix.postTranslate(width, 0);
        canvas.concat(mMatrix);
        canvas.drawBitmap(bmp1, 0, 0, null);
        camera.restore();
        canvas.restore();

        camera.rotateY(90 - angle);
        camera.getMatrix(mMatrix);
        mMatrix.preTranslate(-width, 0);
        mMatrix.postTranslate(width2, 0);
        canvas.concat(mMatrix);
        canvas.drawBitmap(bmp2, width, 0, null);

    This is what it gives (screenshot), but what I need is different (screenshot). It's because when the Camera rotates the images, some part of the image gets hidden, like this (screenshot). But I think this can be done.

    Read the article

  • MySQL - Incorrect syntax

    - by user1854392
        WHILE x > 1 DO
            SET x = x - 1;
            SET totalTime = SELECT CONCAT(FLOOR(HOUR(TIMEDIFF(end_time,start_time)) / 24), ' days ',
                                          MOD(HOUR(TIMEDIFF(end_time,start_time)), 24), ' hrs ',
                                          MINUTE(TIMEDIFF(end_time,start_time)), ' minutes ') AS total_Time

    I don't see why I am getting a syntax error. It is part of a bigger procedure, but the error points to this as being incorrect. Error message:

        SQL Error (1064): You have an error in your SQL syntax; check the manual that corresponds
        to your MySQL server version for the right syntax to use near
        'SELECT CONCAT(FLOOR(HOUR(TIMEDIFF(end_time,start_time)) / 24,' days',' at line 11

    totalTime is declared as a VARCHAR(50).

    Read the article

  • javascript ul button dropdown breaks after more than one selection

    - by Apprentice Programmer
    So I have a button drop-down list from which I want users to select a choice, and the button will return the user's selection. Pretty straightforward; tested on jsFiddle and it works great. I'm using Ruby on Rails, by the way, so I'm not sure if it might be conflicting with the way Rails handles JavaScript actions. Here's the code:

        <%= form_for(@user, :html => {:class => 'form-horizontal'}) do |f| %>
          <fieldset>
            <p>Do you have experience in Business? If yes, select one of the following:
            <div class="input-group">
              <div class="input-group-btn btn-group">
                <button type="button" class="btn btn-default dropdown-toggle" data-toggle="dropdown">Select one <span class="caret"></span></button>
                <ul class="dropdown-menu">
                  <li ><a href="#">Entrepreneurship</a></li>
                  <li ><a href="#">Investments</a></li>
                  <li ><a href="#">Management</a></li>
                  <li class="divider"></li>
                  <li ><a href="#">All of the Above</a></li>
                </ul>
              </div><!-- /btn-group -->
              <%= f.text_field :years_business , :class => "form-control", :placeholder => "Years of experience" %>
            </div>

    There are two more of these. What happens is that if I select an item from the dropdown list for the first time, everything works great. But the moment I select the same button (or a new button), the page immediately kind of refreshes, and the selected value will not show up after the list drops down and the user selects a value. I viewed the page source and added additional JavaScript src and type attributes, but it still doesn't work. The jQuery code:

        jQuery(function ($) {
          $('.input-group-btn .dropdown-menu > li:not(.divider)').click(function(){
            $(this).closest('ul').prev().text($(this).text())
          })
        });

    Any suggestions as to what is causing the problem? The jsFiddle link is here: http://jsfiddle.net/w4s8u/7/

    Read the article

  • Select Elements in nested Divs using JQuery

    - by PIKP
    I have the following html markup inside a Div named item and I want to select all the elements (inside nested divs) and clear the values. As shown in following given Jquery I have managed to access elements in each Div by using.children().each(). But the the problem is .children().each()goes one level down at a time from the parent div, so I have repeated the same code block with multiple .children() to access the elements inside nested Divs, can anyone suggest me a method to do this without repeating the code for N number of nested divs . html markup <div class="item"> <input type="hidden" value="1234" name="testVal"> <div class="form-group" id="c1"> <div class="controls "> <input type="text" value="Results" name="s1" maxlength="255" id="id2"> </div> </div> <div class="form-group" id="id4"> <input type="text" value="Results" name="s12" maxlength="255" id="id225"> <div class="form-group" id="id41"> <input type="text" value="Results" name="s12" maxlength="255" id="5"> <div class="form-group" id="id42"> <input type="text" value="Results" name="s12" maxlength="255" id="5"> <div class="form-group" id="id43"> <input type="text" value="Results" name="s12" maxlength="255" id="id224"> </div> </div> </div> </div> </div> My Qjuery script var row = $(".item:first").clone(false).get(0); $(row).children().each(function () { updateElementIndex(this, prefix, formCount); if ($(this).attr('type') == 'text') { $(this).val(''); } if ($(this).attr('type') == 'hidden' && ($(this).attr('name') != 'csrfmiddlewaretoken')) { $(this).val(''); } if ($(this).attr('type') == 'file') { $(this).val(''); } if ($(this).attr('type') == 'checkbox') { $(this).attr('checked', false); } $(this).remove('a'); }); // Relabel or rename all the relevant bits $(row).children().children().each(function () { updateElementIndex(this, prefix, formCount) if ($(this).attr('type') == 'text') { $(this).val(''); } if ($(this).attr('type') == 'hidden' && ($(this).attr('name') != 'csrfmiddlewaretoken')) { $(this).val(''); } if ($(this).attr('type') == 'file') { $(this).val(''); } if ($(this).attr('type') == 'checkbox') { $(this).attr('checked', false); } $(this).remove('a'); });

    Read the article

  • Squid + Dans Guardian (simple configuration)

    - by The Digital Ninja
    I just built a new proxy server and compiled the latest versions of squid and dansguardian. We use basic authentication to select what users are allowed outside of our network. It seems squid is working just fine and accepts my username and password and lets me out. But if i connect to dans guardian, it prompts for username and password and then displays a message saying my username is not allowed to access the internet. Its pulling my username for the error message so i know it knows who i am. The part i get confused on is i thought that part was handled all by squid, and squid is working flawlessly. Can someone please double check my config files and tell me if i'm missing something or there is some new option i must set to get this to work. dansguardian.conf # Web Access Denied Reporting (does not affect logging) # # -1 = log, but do not block - Stealth mode # 0 = just say 'Access Denied' # 1 = report why but not what denied phrase # 2 = report fully # 3 = use HTML template file (accessdeniedaddress ignored) - recommended # reportinglevel = 3 # Language dir where languages are stored for internationalisation. # The HTML template within this dir is only used when reportinglevel # is set to 3. When used, DansGuardian will display the HTML file instead of # using the perl cgi script. This option is faster, cleaner # and easier to customise the access denied page. # The language file is used no matter what setting however. # languagedir = '/etc/dansguardian/languages' # language to use from languagedir. language = 'ukenglish' # Logging Settings # # 0 = none 1 = just denied 2 = all text based 3 = all requests loglevel = 3 # Log Exception Hits # Log if an exception (user, ip, URL, phrase) is matched and so # the page gets let through. Can be useful for diagnosing # why a site gets through the filter. on | off logexceptionhits = on # Log File Format # 1 = DansGuardian format 2 = CSV-style format # 3 = Squid Log File Format 4 = Tab delimited logfileformat = 1 # Log file location # # Defines the log directory and filename. #loglocation = '/var/log/dansguardian/access.log' # Network Settings # # the IP that DansGuardian listens on. If left blank DansGuardian will # listen on all IPs. That would include all NICs, loopback, modem, etc. # Normally you would have your firewall protecting this, but if you want # you can limit it to only 1 IP. Yes only one. filterip = # the port that DansGuardian listens to. filterport = 8080 # the ip of the proxy (default is the loopback - i.e. this server) proxyip = 127.0.0.1 # the port DansGuardian connects to proxy on proxyport = 3128 # accessdeniedaddress is the address of your web server to which the cgi # dansguardian reporting script was copied # Do NOT change from the default if you are not using the cgi. # accessdeniedaddress = 'http://YOURSERVER.YOURDOMAIN/cgi-bin/dansguardian.pl' # Non standard delimiter (only used with accessdeniedaddress) # Default is enabled but to go back to the original standard mode dissable it. nonstandarddelimiter = on # Banned image replacement # Images that are banned due to domain/url/etc reasons including those # in the adverts blacklists can be replaced by an image. This will, # for example, hide images from advert sites and remove broken image # icons from banned domains. # 0 = off # 1 = on (default) usecustombannedimage = 1 custombannedimagefile = '/etc/dansguardian/transparent1x1.gif' # Filter groups options # filtergroups sets the number of filter groups. 
A filter group is a set of content # filtering options you can apply to a group of users. The value must be 1 or more. # DansGuardian will automatically look for dansguardianfN.conf where N is the filter # group. To assign users to groups use the filtergroupslist option. All users default # to filter group 1. You must have some sort of authentication to be able to map users # to a group. The more filter groups the more copies of the lists will be in RAM so # use as few as possible. filtergroups = 1 filtergroupslist = '/etc/dansguardian/filtergroupslist' # Authentication files location bannediplist = '/etc/dansguardian/bannediplist' exceptioniplist = '/etc/dansguardian/exceptioniplist' banneduserlist = '/etc/dansguardian/banneduserlist' exceptionuserlist = '/etc/dansguardian/exceptionuserlist' # Show weighted phrases found # If enabled then the phrases found that made up the total which excedes # the naughtyness limit will be logged and, if the reporting level is # high enough, reported. on | off showweightedfound = on # Weighted phrase mode # There are 3 possible modes of operation: # 0 = off = do not use the weighted phrase feature. # 1 = on, normal = normal weighted phrase operation. # 2 = on, singular = each weighted phrase found only counts once on a page. # weightedphrasemode = 2 # Positive result caching for text URLs # Caches good pages so they don't need to be scanned again # 0 = off (recommended for ISPs with users with disimilar browsing) # 1000 = recommended for most users # 5000 = suggested max upper limit urlcachenumber = # # Age before they are stale and should be ignored in seconds # 0 = never # 900 = recommended = 15 mins urlcacheage = # Smart and Raw phrase content filtering options # Smart is where the multiple spaces and HTML are removed before phrase filtering # Raw is where the raw HTML including meta tags are phrase filtered # CPU usage can be effectively halved by using setting 0 or 1 # 0 = raw only # 1 = smart only # 2 = both (default) phrasefiltermode = 2 # Lower casing options # When a document is scanned the uppercase letters are converted to lower case # in order to compare them with the phrases. However this can break Big5 and # other 16-bit texts. If needed preserve the case. As of version 2.7.0 accented # characters are supported. # 0 = force lower case (default) # 1 = do not change case preservecase = 0 # Hex decoding options # When a document is scanned it can optionally convert %XX to chars. # If you find documents are getting past the phrase filtering due to encoding # then enable. However this can break Big5 and other 16-bit texts. # 0 = disabled (default) # 1 = enabled hexdecodecontent = 0 # Force Quick Search rather than DFA search algorithm # The current DFA implementation is not totally 16-bit character compatible # but is used by default as it handles large phrase lists much faster. # If you wish to use a large number of 16-bit character phrases then # enable this option. # 0 = off (default) # 1 = on (Big5 compatible) forcequicksearch = 0 # Reverse lookups for banned site and URLs. # If set to on, DansGuardian will look up the forward DNS for an IP URL # address and search for both in the banned site and URL lists. This would # prevent a user from simply entering the IP for a banned address. # It will reduce searching speed somewhat so unless you have a local caching # DNS server, leave it off and use the Blanket IP Block option in the # bannedsitelist file instead. reverseaddresslookups = off # Reverse lookups for banned and exception IP lists. 
# If set to on, DansGuardian will look up the forward DNS for the IP # of the connecting computer. This means you can put in hostnames in # the exceptioniplist and bannediplist. # It will reduce searching speed somewhat so unless you have a local DNS server, # leave it off. reverseclientiplookups = off # Build bannedsitelist and bannedurllist cache files. # This will compare the date stamp of the list file with the date stamp of # the cache file and will recreate as needed. # If a bsl or bul .processed file exists, then that will be used instead. # It will increase process start speed by 300%. On slow computers this will # be significant. Fast computers do not need this option. on | off createlistcachefiles = on # POST protection (web upload and forms) # does not block forms without any file upload, i.e. this is just for # blocking or limiting uploads # measured in kibibytes after MIME encoding and header bumph # use 0 for a complete block # use higher (e.g. 512 = 512Kbytes) for limiting # use -1 for no blocking #maxuploadsize = 512 #maxuploadsize = 0 maxuploadsize = -1 # Max content filter page size # Sometimes web servers label binary files as text which can be very # large which causes a huge drain on memory and cpu resources. # To counter this, you can limit the size of the document to be # filtered and get it to just pass it straight through. # This setting also applies to content regular expression modification. # The size is in Kibibytes - eg 2048 = 2Mb # use 0 for no limit maxcontentfiltersize = # Username identification methods (used in logging) # You can have as many methods as you want and not just one. The first one # will be used then if no username is found, the next will be used. # * proxyauth is for when basic proxy authentication is used (no good for # transparent proxying). # * ntlm is for when the proxy supports the MS NTLM authentication # protocol. (Only works with IE5.5 sp1 and later). **NOT IMPLEMENTED** # * ident is for when the others don't work. It will contact the computer # that the connection came from and try to connect to an identd server # and query it for the user owner of the connection. usernameidmethodproxyauth = on usernameidmethodntlm = off # **NOT IMPLEMENTED** usernameidmethodident = off # Preemptive banning - this means that if you have proxy auth enabled and a user accesses # a site banned by URL for example they will be denied straight away without a request # for their user and pass. This has the effect of requiring the user to visit a clean # site first before it knows who they are and thus maybe an admin user. # This is how DansGuardian has always worked but in some situations it is less than # ideal. So you can optionally disable it. Default is on. # As a side effect disabling this makes AD image replacement work better as the mime # type is know. preemptivebanning = on # Misc settings # if on it adds an X-Forwarded-For: <clientip> to the HTTP request # header. This may help solve some problem sites that need to know the # source ip. on | off forwardedfor = on # if on it uses the X-Forwarded-For: <clientip> to determine the client # IP. This is for when you have squid between the clients and DansGuardian. # Warning - headers are easily spoofed. on | off usexforwardedfor = off # if on it logs some debug info regarding fork()ing and accept()ing which # can usually be ignored. These are logged by syslog. 
It is safe to leave # it on or off logconnectionhandlingerrors = on # Fork pool options # sets the maximum number of processes to sporn to handle the incomming # connections. Max value usually 250 depending on OS. # On large sites you might want to try 180. maxchildren = 180 # sets the minimum number of processes to sporn to handle the incomming connections. # On large sites you might want to try 32. minchildren = 32 # sets the minimum number of processes to be kept ready to handle connections. # On large sites you might want to try 8. minsparechildren = 8 # sets the minimum number of processes to sporn when it runs out # On large sites you might want to try 10. preforkchildren = 10 # sets the maximum number of processes to have doing nothing. # When this many are spare it will cull some of them. # On large sites you might want to try 64. maxsparechildren = 64 # sets the maximum age of a child process before it croaks it. # This is the number of connections they handle before exiting. # On large sites you might want to try 10000. maxagechildren = 5000 # Process options # (Change these only if you really know what you are doing). # These options allow you to run multiple instances of DansGuardian on a single machine. # Remember to edit the log file path above also if that is your intention. # IPC filename # # Defines IPC server directory and filename used to communicate with the log process. ipcfilename = '/tmp/.dguardianipc' # URL list IPC filename # # Defines URL list IPC server directory and filename used to communicate with the URL # cache process. urlipcfilename = '/tmp/.dguardianurlipc' # PID filename # # Defines process id directory and filename. #pidfilename = '/var/run/dansguardian.pid' # Disable daemoning # If enabled the process will not fork into the background. # It is not usually advantageous to do this. # on|off ( defaults to off ) nodaemon = off # Disable logging process # on|off ( defaults to off ) nologger = off # Daemon runas user and group # This is the user that DansGuardian runs as. Normally the user/group nobody. # Uncomment to use. Defaults to the user set at compile time. # daemonuser = 'nobody' # daemongroup = 'nobody' # Soft restart # When on this disables the forced killing off all processes in the process group. # This is not to be confused with the -g run time option - they are not related. # on|off ( defaults to off ) softrestart = off maxcontentramcachescansize = 2000 maxcontentfilecachescansize = 20000 downloadmanager = '/etc/dansguardian/downloadmanagers/default.conf' authplugin = '/etc/dansguardian/authplugins/proxy-basic.conf' Squid.conf http_port 3128 hierarchy_stoplist cgi-bin ? acl QUERY urlpath_regex cgi-bin \? cache deny QUERY acl apache rep_header Server ^Apache #broken_vary_encoding allow apache access_log /squid/var/logs/access.log squid hosts_file /etc/hosts auth_param basic program /squid/libexec/ncsa_auth /squid/etc/userbasic.auth auth_param basic children 5 auth_param basic realm proxy auth_param basic credentialsttl 2 hours auth_param basic casesensitive off refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern . 
0 20% 4320 acl NoAuthNec src <HIDDEN FOR SECURITY> acl BrkRm src <HIDDEN FOR SECURITY> acl Dials src <HIDDEN FOR SECURITY> acl Comps src <HIDDEN FOR SECURITY> acl whsws dstdom_regex -i .opensuse.org .novell.com .suse.com mirror.mcs.an1.gov mirrors.kernerl.org www.suse.de suse.mirrors.tds.net mirrros.usc.edu ftp.ale.org suse.cs.utah.edu mirrors.usc.edu mirror.usc.an1.gov linux.nssl.noaa.gov noaa.gov .kernel.org ftp.ale.org ftp.gwdg.de .medibuntu.org mirrors.xmission.com .canonical.com .ubuntu. acl opensites dstdom_regex -i .mbsbooks.com .bowker.com .usps.com .usps.gov .ups.com .fedex.com go.microsoft.com .microsoft.com .apple.com toolbar.msn.com .contacts.msn.com update.services.openoffice.org fms2.pointroll.speedera.net services.wmdrm.windowsmedia.com windowsupdate.com .adobe.com .symantec.com .vitalbook.com vxn1.datawire.net vxn.datawire.net download.lavasoft.de .download.lavasoft.com .lavasoft.com updates.ls-servers.com .canadapost. .myyellow.com minirick symantecliveupdate.com wm.overdrive.com www.overdrive.com productactivation.one.microsoft.com www.update.microsoft.com testdrive.whoson.com www.columbia.k12.mo.us banners.wunderground.com .kofax.com .gotomeeting.com tools.google.com .dl.google.com .cache.googlevideo.com .gpdl.google.com .clients.google.com cache.pack.google.com kh.google.com maps.google.com auth.keyhole.com .contacts.msn.com .hrblock.com .taxcut.com .merchantadvantage.com .jtv.com .malwarebytes.org www.google-analytics.com dcs.support.xerox.com .dhl.com .webtrendslive.com javadl-esd.sun.com javadl-alt.sun.com .excelsior.edu .dhlglobalmail.com .nessus.org .foxitsoftware.com foxit.vo.llnwd.net installshield.com .mindjet.com .mediascouter.com media.us.elsevierhealth.com .xplana.com .govtrack.us sa.tulsacc.edu .omniture.com fpdownload.macromedia.com webservices.amazon.com acl password proxy_auth REQUIRED acl all src all acl manager proto cache_object acl localhost src 127.0.0.1/255.255.255.255 acl to_localhost dst 127.0.0.0/8 acl SSL_ports port 443 563 631 2001 2005 8731 9001 9080 10000 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port # https, snews 443 563 acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port # unregistered ports 1936-65535 acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 10000 acl Safe_ports port 631 acl Safe_ports port 901 # SWAT acl purge method PURGE acl CONNECT method CONNECT acl UTubeUsers proxy_auth "/squid/etc/utubeusers.list" acl RestrictUTube dstdom_regex -i youtube.com acl RestrictFacebook dstdom_regex -i facebook.com acl FacebookUsers proxy_auth "/squid/etc/facebookusers.list" acl BuemerKEC src 10.10.128.0/24 acl MBSsortnet src 10.10.128.0/26 acl MSNExplorer browser -i MSN acl Printers src <HIDDEN FOR SECURITY> acl SpecialFolks src <HIDDEN FOR SECURITY> # streaming download acl fails rep_mime_type ^.*mms.* acl fails rep_mime_type ^.*ms-hdr.* acl fails rep_mime_type ^.*x-fcs.* acl fails rep_mime_type ^.*x-ms-asf.* acl fails2 urlpath_regex dvrplayer mediastream mms:// acl fails2 urlpath_regex \.asf$ \.afx$ \.flv$ \.swf$ acl deny_rep_mime_flashvideo rep_mime_type -i video/flv acl deny_rep_mime_shockwave rep_mime_type -i ^application/x-shockwave-flash$ acl x-type req_mime_type -i ^application/octet-stream$ acl x-type req_mime_type -i application/octet-stream acl x-type req_mime_type -i ^application/x-mplayer2$ acl x-type req_mime_type -i application/x-mplayer2 acl x-type req_mime_type -i ^application/x-oleobject$ acl x-type req_mime_type -i 
application/x-oleobject acl x-type req_mime_type -i application/x-pncmd acl x-type req_mime_type -i ^video/x-ms-asf$ acl x-type2 rep_mime_type -i ^application/octet-stream$ acl x-type2 rep_mime_type -i application/octet-stream acl x-type2 rep_mime_type -i ^application/x-mplayer2$ acl x-type2 rep_mime_type -i application/x-mplayer2 acl x-type2 rep_mime_type -i ^application/x-oleobject$ acl x-type2 rep_mime_type -i application/x-oleobject acl x-type2 rep_mime_type -i application/x-pncmd acl x-type2 rep_mime_type -i ^video/x-ms-asf$ acl RestrictHulu dstdom_regex -i hulu.com acl broken dstdomain cms.montgomerycollege.edu events.columbiamochamber.com members.columbiamochamber.com public.genexusserver.com acl RestrictVimeo dstdom_regex -i vimeo.com acl http_port port 80 #http_reply_access deny deny_rep_mime_flashvideo #http_reply_access deny deny_rep_mime_shockwave #streaming files #http_access deny fails #http_reply_access deny fails #http_access deny fails2 #http_reply_access deny fails2 #http_access deny x-type #http_reply_access deny x-type #http_access deny x-type2 #http_reply_access deny x-type2 follow_x_forwarded_for allow localhost acl_uses_indirect_client on log_uses_indirect_client on http_access allow manager localhost http_access deny manager http_access allow purge localhost http_access deny purge http_access allow SpecialFolks http_access deny CONNECT !SSL_ports http_access allow whsws http_access allow opensites http_access deny BuemerKEC !MBSsortnet http_access deny BrkRm RestrictUTube RestrictFacebook RestrictVimeo http_access allow RestrictUTube UTubeUsers http_access deny RestrictUTube http_access allow RestrictFacebook FacebookUsers http_access deny RestrictFacebook http_access deny RestrictHulu http_access allow NoAuthNec http_access allow BrkRm http_access allow FacebookUsers RestrictVimeo http_access deny RestrictVimeo http_access allow Comps http_access allow Dials http_access allow Printers http_access allow password http_access deny !Safe_ports http_access deny SSL_ports !CONNECT http_access allow http_port http_access deny all http_reply_access allow all icp_access allow all access_log /squid/var/logs/access.log squid visible_hostname proxy.site.com forwarded_for off coredump_dir /squid/cache/ #header_access Accept-Encoding deny broken #acl snmppublic snmp_community mysecretcommunity #snmp_port 3401 #snmp_access allow snmppublic all cache_mem 3 GB #acl snmppublic snmp_community mbssquid #snmp_port 3401 #snmp_access allow snmppublic all

    Read the article

  • Mpd as pppoe server with authorisation by freeradius2

    - by Korjavin Ivan
    I install freeradius2, add to raddb/users: test Cleartext-Password := "test1" Service-Type = Framed-User, Framed-Protocol = PPP, Framed-IP-Address = 10.36.0.2, Framed-IP-Netmask = 255.255.255.0, start radiusd, and check auth: radtest test test1 127.0.0.1 1002 testing123 Sending Access-Request of id 199 to 127.0.0.1 port 1812 User-Name = "test" User-Password = "test1" NAS-IP-Address = 127.0.0.1 NAS-Port = 1002 Message-Authenticator = 0x00000000000000000000000000000000 rad_recv: Access-Accept packet from host 127.0.0.1 port 1812, id=199, length=44 Service-Type = Framed-User Framed-Protocol = PPP Framed-IP-Address = 10.36.0.2 Framed-IP-Netmask = 255.255.255.0 Works fine. Next step. Add to mpd.conf: radius: set auth disable internal set auth max-logins 1 CI set auth enable radius-auth set radius timeout 90 set radius retries 2 set radius server 127.0.0.1 testing123 1812 1813 set radius me 127.0.0.1 create link template L pppoe set link action bundle B set link max-children 1000 set link no multilink set link no shortseq set link no pap chap-md5 chap-msv1 chap-msv2 set link enable chap set pppoe acname Internet load radius create link template em1 L set pppoe iface em1 set link enable incoming And trying to connect, auth failed, here is mpd log: mpd: [em1-2] LCP: auth: peer wants nothing, I want CHAP mpd: [em1-2] CHAP: sending CHALLENGE #1 len: 21 mpd: [em1-2] LCP: LayerUp mpd: [em1-2] CHAP: rec'd RESPONSE #1 len: 58 mpd: [em1-2] Name: "test" mpd: [em1-2] AUTH: Trying RADIUS mpd: [em1-2] RADIUS: Authenticating user 'test' mpd: [em1-2] RADIUS: Rec'd RAD_ACCESS_REJECT for user 'test' mpd: [em1-2] AUTH: RADIUS returned: failed mpd: [em1-2] AUTH: ran out of backends mpd: [em1-2] CHAP: Auth return status: failed mpd: [em1-2] CHAP: Reply message: ^AE=691 R=1 mpd: [em1-2] CHAP: sending FAILURE #1 len: 14 mpd: [em1-2] LCP: authorization failed Then i start freeradius as radiusd -fX, and get this log: rad_recv: Access-Request packet from host 127.0.0.1 port 46400, id=223, length=282 NAS-Identifier = "rubin.svyaz-nt.ru" NAS-IP-Address = 127.0.0.1 Message-Authenticator = 0x14d36639bed8074ec2988118125367ea Acct-Session-Id = "815965-em1-2" NAS-Port = 2 NAS-Port-Type = Ethernet Service-Type = Framed-User Framed-Protocol = PPP Calling-Station-Id = "00e05290b3e3 / 00:e0:52:90:b3:e3 / em1" NAS-Port-Id = "em1" Vendor-12341-Attr-12 = 0x656d312d32 Tunnel-Medium-Type:0 = IEEE-802 Tunnel-Client-Endpoint:0 = "00:e0:52:90:b3:e3" User-Name = "test" MS-CHAP-Challenge = 0xbb1e68d5bbc30f228725a133877de83e MS-CHAP2-Response = 0x010088746ae65b68e435e9d045ad6f9569b60000000000000000b56991b4f20704cb6c68e5982eec5e98a7f4b470c109c1b9 # Executing section authorize from file /usr/local/etc/raddb/sites-enabled/default +- entering group authorize {...} ++[preprocess] returns ok ++[chap] returns noop [mschap] Found MS-CHAP attributes. Setting 'Auth-Type = mschap' ++[mschap] returns ok [eap] No EAP-Message, not doing EAP ++[eap] returns noop [files] users: Matched entry DEFAULT at line 172 ++[files] returns ok Found Auth-Type = MSCHAP # Executing group from file /usr/local/etc/raddb/sites-enabled/default +- entering group MS-CHAP {...} [mschap] No Cleartext-Password configured. Cannot create LM-Password. [mschap] No Cleartext-Password configured. Cannot create NT-Password. [mschap] Creating challenge hash with username: test [mschap] Client is using MS-CHAPv2 for test, we need NT-Password [mschap] FAILED: No NT/LM-Password. Cannot perform authentication. 
[mschap] FAILED: MS-CHAP2-Response is incorrect ++[mschap] returns reject Failed to authenticate the user. Login incorrect: [test] (from client localhost port 2 cli 00e05290b3e3 / 00:e0:52:90:b3:e3 / em1) Using Post-Auth-Type REJECT # Executing group from file /usr/local/etc/raddb/sites-enabled/default +- entering group REJECT {...} [attr_filter.access_reject] expand: %{User-Name} -> test attr_filter: Matched entry DEFAULT at line 11 ++[attr_filter.access_reject] returns updated Delaying reject of request 2 for 1 seconds Going to the next request Waking up in 0.9 seconds. Sending delayed reject for request 2 Sending Access-Reject of id 223 to 127.0.0.1 port 46400 MS-CHAP-Error = "\001E=691 R=1" Why i have error "[mschap] No Cleartext-Password configured. Cannot create LM-Password." ? I define cleartext-password in users. I check raddb/sites-enabled/default authorize { chap mschap eap { ok = return } files } looks ok for me. Whats wrong with mpd/chap/radius ?

    Read the article

  • Coda 2 and SCP uploading files with the wrong permission

    - by Tom Black
    Currently I have a basic Ubuntu server running a website. The website is for a few students learning HTML/PHP and each student has their own account with a symbolic link to the shared website folder. Since the students are working on the website together, each user needs to be able to modify all the files (index.html for example). So I created a Webdev group containing all of the students with the default umask of 0002 set in their .bashrc (This allows newly created files to be 774). The shared folder is owned by the group Webdev with a chmod g+s so that new files/folders also belong to the group Webdev. The problem is that the students are using an IDE (Coda 2) and when they create a new file or folder using the IDE the file has the permissions of 644 on the server (not group writable). However when I make a new file through connecting with Cyberduck (SFTP client) the file permissions are 664 (as they should be). So I don't understand why Coda would be any different. However, after some trial and error I believe that Coda is first creating the file on local disk and then uploading that file to the server. On a mac by default a newly created file is 644. When the client uploads a file that's already 644 it stays 644 on the server side (umask is kind of useless in this situation). I've also tried creating ACL permissions for that folder but an uploaded file from my mac via SCP doesn't get the default ACL permissions. In Coda there is an option to change file permissions on a transfer. However this option seems to apply a chmod to all files being uploaded or saved. When one of students is modifying a file created by someone else when they try to upload the file or save it Coda tries to also do a chmod but fails because that user isn't the owner of the file. My current solution is using bindfs... I mount the shared web folder and bindfs sets permissions and group ownership of newly created files. However, bindfs seems to be a bit slow and I'm sure there is a better solution. Even if the students ditched Coda 2 and used Mac vim with scp the newly created files on the server would behave the same (644) which is default on the mac. Other options... 1) Either I teach the students to use (ssh/chmod) with their IDE to change their own file permissions when uploading. 2) I make all the students' Macs have the default umask of 0002 which would upload files with the right permissions. 3) Write a corn script to fix the file permissions every 5 to 15 minutes... (This option I think is the worst if students are working together at the same time). Is there any way that I could make all files that are uploaded via SCP have the default file permissions of 664 even though the uploaded file has a lower permission? (After hours of searching I don't think this is possible) I guess a corn script is my best option for novice users. How do web developers work together on larger sites? similar to this: http://serverfault.com/questions/283492/how-to-specify-file-permission-when-putting-a-file-using-openssh-sftp-command Also similar: http://serverfault.com/questions/395418/managing-linux-directory-permissions-sftp

    Read the article

  • Tuxedo 11gR1 Client Server Affinity

    - by todd.little
    One of the major new features in Oracle Tuxedo 11gR1 is the ability to define an affinity between clients and servers. In previous releases of Tuxedo, the only way to ensure that multiple requests from a client went to the same server was to establish a conversation with tpconnect() and then use tpsend() and tprecv(). Although this works it has some drawbacks. First for single-threaded servers, the server is tied up for the entire duration of the conversation and cannot service other clients, an obvious scalability issue. I believe the more significant drawback is that the application programmer has to switch from the simple request/response model provided by tpcall() to the half duplex tpsend() and tprecv() calls used with conversations. Switching between the two typically requires a fair amount of redesign and recoding. The Client Server Affinity feature in Tuxedo 11gR1 allows by way of configuration an application to define affinities that can exist between clients and servers. This is done in the *SERVICES section of the UBBCONFIG file. Using new parameters for services defined in the *SERVICES section, customers can determine when an affinity session is created or deleted, the scope of the affinity, and whether requests can be routed outside the affinity scope. The AFFINITYSCOPE parameter can be MACHINE, GROUP, or SERVER, meaning that while the affinity session is in place, all requests from the client will be routed to the same MACHINE, GROUP, or SERVER. The creation and deletion of affinity is defined by the SESSIONROLE parameter and a service can be defined as either BEGIN, END, or NONE, where BEGIN starts an affinity session, END deletes the affinity session, and NONE does not impact the affinity session. Finally customers can define how strictly they want the affinity scope adhered to using the AFFINITYSTRICT parameter. If set to MANDATORY, all requests made during an affinity session will be routed to a server in the affinity scope. Thus if the affinity scope is SERVER, all subsequent tpcall() requests will be sent to the same server the affinity scope was established with. If the server doesn't offer that service, even though other servers do offer the service, the call will fail with TPNOENT. Setting AFFINITYSTRICT to PRECEDENT tells Tuxedo to try and route the request to a server in the affinity scope, but if that's not possible, then Tuxedo can try to route the request to servers out of scope. All of this begs the question, why? Why have this feature? There many uses for this capability, but the most common is when there is state that is maintained in a server, group of servers, or in a machine and subsequent requests from a client must be routed to where that state is maintained. This might be something as simple as a database cursor maintained by a server on behalf of a client. Alternatively it might be that the server has a connection to an external system and subsequent requests need to go back to the server that has that connection. A more sophisticated case is where a group of servers maintains some sort of cache in shared memory and subsequent requests need to be routed to where the cache is maintained. Although this last case might be able to be handled by data dependent routing, using client server affinity allows the cache to be partitioned dynamically instead of statically.

    Read the article

  • Building a Mafia…TechFest Style

    - by David Hoerster
    It’s been a few months since I last blogged (not that I blog much to begin with), but things have been busy. We all have a lot going on in our lives, but I’ve had one item that has taken up a surprising amount of time – Pittsburgh TechFest 2012. After the event, I went through some minutes of the first meetings for TechFest, and I started to think about how it all came together. What inspired me the most about TechFest was how people from various technical communities were able to come together to build and promote a common event. As a result, I wanted to blog about this to show that people from different communities can work together to build something that benefits all communities. (Hopefully I've got all my facts straight.)

    TechFest started as an idea Eric Kepes and I had when we were planning our next Pittsburgh Code Camp, probably in the summer of 2011. Our Spring 2011 Code Camp was a little different because we had a great infusion of folks from the Pittsburgh Agile group (especially with a few speakers from LeanDog). The line-up was great, but we felt our audience wasn’t as broad as it should have been. We thought it would be great to somehow attract other user groups around town and have a big, polyglot conference.

    We started contacting leaders from Pittsburgh’s various user groups. Eric and I split up the ones that we knew about, and we just started making contacts. Most of the people we contacted had never heard of us, nor we of them. But we all had one thing in common – we ran user groups whose primary goal is educating our members to make them better at what they do.

    Amazingly, and I say this because I wasn’t sure what to expect, we started getting interest from the various leaders. One leader, Greg Akins, is, in my opinion, Pittsburgh’s poster boy for the polyglot programmer. He’s helped us in the past with .NET Code Camps, is a Java developer (and a leader in Pittsburgh’s Java User Group), works with Ruby, and I’m sure a handful of other languages. He made some e-introductions to other user group leaders, and the whole thing just started to snowball. Once we realized we had enough interest from the user group leaders, we decided not to have a Fall Code Camp and instead focus on this new entity.

    Flash-forward to October of 2011. With the help of Jeremy Jarrell (Pittsburgh Agile leader), I set up a meeting with the leaders of many of Pittsburgh’s technical user groups. We had representatives from 12 technical user groups (Python, JavaScript, Clojure, Ruby, PittAgile, jQuery, PHP, Perl, SQL, .NET, Java and PowerShell) – 14 people in all. We likened it to a scene from a Godfather movie where the heads of all the families come together to make some deal. As a result, the name “TechFest Mafia” was born and kind of stuck.

    Over the next 7 months or so, we had our starts and stops. There were moments when I thought this event would not happen, either because we wouldn’t have the right mix of topics (was I off there!), or enough people register (OK, I was wrong there, too!), or find an appropriate venue (hmm…wrong there, too), or find enough sponsors to help support the event (wow…not doing so well). Overall, everything fell into place with a lot of hard work from Eric, Jen, Greg, Jeremy, Sean, Nicholas, Gina and probably a few others that I’m forgetting. We had a bit of luck, too.

    But in the end, the passion we had to put together an event that was really about making ourselves better at what we do paid off. I’ve never been more excited about a project coming together than I have been with Pittsburgh TechFest 2012. From the moment the first person arrived at the event to the final minutes of my closing remarks (where I almost lost my voice – I ended up being diagnosed with bronchitis the next day!), it was an awesome event. I’m glad to have been part of bringing something like this to Pittsburgh…and I’m looking forward to Pittsburgh TechFest 2013. See you there!

    Read the article

  • Top Tier, A-Game Talent - How to Land em'

    - by GeekAgilistMercenary
    Recently the question came up from a close friend of mine: "Will my PhD help me attain a higher income in the Northwest?" I had to tell him that it might get him a little more, but it won't get him into the top income brackets for the occupation. A few days later, someone else asked the same thing. Then again, I see a job posting that requires a Bachelor's degree and some other nonsense. The job posting even states they want "A-Game" talent. I am almost shocked at how much of this industry fails to realize how unimportant a degree is to landing real top tier, A-game talent. (And yes, I get a little riled up about this matter.)

    You Can't Make Good Software Developers. No college out there is going to train someone to be in the top 10%, and absolutely not in the top 5%, of skill levels. Colleges can NOT do this. It is up to the individual, and the individual alone. If top tier talent seems to come from a college, one should check that premise and look at the motivations the individuals had for going to that school. There is most likely a reason that top tier talent appears to be made there. The college, however, can only guide or assist; I repeat that "top tier talent is a very individualistic endeavor". Some might say, well, a group is needed, support is needed, this and that are needed. True, an individual needs a support system, and a college can provide that, but it generally ends there. The support group helps, provides a sounding board, and points the A-game, top tier geek toward good ideas. But again, the endeavor is driven by the individual's own desire.

    top tier talent is a very individualistic endeavor - Me

    Hiring Top Tier, A-Game Talent

    There are a few things to keep in mind when trying to hire this level of player.

    The first is to not require a degree of any sort. Sure, it looks good, but it won't tell you anything other than that the individual was able to go through the regimented steps of college.

    List the skills and ideas that you would like to find in an individual. Think of two people meeting for the first time: what do you want to know about the other person? Team fit is absolutely fundamental for top tier talent. Remember that support group I mentioned above; top tier talent works best with a solid group of players.

    Keep your technology up to date and moving forward, and don't bore your top talent if you manage to get it. If the company slows down, they will leave. The more valuable they find out they are, the lower tolerance they'll have for this. For managers, directors, and leaders in an organization, this is THE challenge.

    Provide opportunities not just for advancement, but also ways for them to advance their knowledge, such as training, a book budget, or other means. Even if some software they want to use isn't used on the project, get it for them (within reason, of course; a couple hundred dollars, or even a few thousand, for a good software license to MSDN, Telerik, or another suite is ideal).

    Don't push them to, and don't let them, overwork themselves into burnout. As a leader in an organization, this is easy to do if you actually manage to hire top talent, because top talent just provides results and more results. But they are human; they will break. Don't be the cause of that, or you'll lose your talent.

    For now, that is it from me on this topic; back to the revenue, code, projects, and pushing forward. For the original entry, check out my personal blog with other juicy tech tidbits, rants, raves, and the like. Agilist Mercenary

    Read the article
