Search Results

Search found 1579 results on 64 pages for 'bob walsh'.

Page 47/64 | < Previous Page | 43 44 45 46 47 48 49 50 51 52 53 54  | Next Page >

  • SQL Syntax to count unique users completing a task

    - by Belliez
    I have the following code which shows me which users have completed tickets, listing each user and the date they closed a ticket, i.e.:

        Paul Matt Matt Bob Matt Paul Matt Matt

    At the moment I manually count each user myself to see their totals for the day. EDIT: Changed output to columns instead of rows. What I have been trying to do is get SQL Server to do this for me, i.e. the final result should look like:

        Paul | 2
        Matt | 5
        Bob  | 1

    The code I am currently using is below, and I would be grateful if someone can help me change this so I can get it outputting something similar to the above:

        DECLARE @StartDate DateTime;
        DECLARE @EndDate DateTime;

        -- Date format: YYYY-MM-DD
        SET @StartDate = '2013-11-06 00:00:00'
        SET @EndDate = GETDATE() -- Today

        SELECT (select Username from Membership where UserId = Ticket.CompletedBy) as TicketStatusChangedBy
        FROM Ticket
        INNER JOIN TicketStatus ON Ticket.TicketStatusID = TicketStatus.TicketStatusID
        INNER JOIN Membership ON Ticket.CheckedInBy = Membership.UserId
        WHERE TicketStatus.TicketStatusName = 'Completed'
          and Ticket.ClosedDate >= @StartDate --(GETDATE() - 1)
          and Ticket.ClosedDate <= @EndDate --(GETDATE()-0)
        ORDER BY Ticket.CompletedBy ASC, Ticket.ClosedDate ASC

    Thank you for your help and time.
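
    For what it's worth, a hedged sketch of one way to get those totals: group on the completing user and count, reusing the joins above (table and column names are taken from the question; untested against the real schema):

        DECLARE @StartDate DateTime;
        DECLARE @EndDate DateTime;
        SET @StartDate = '2013-11-06 00:00:00'
        SET @EndDate = GETDATE()

        SELECT M.Username AS TicketStatusChangedBy,
               COUNT(*) AS CompletedCount
        FROM Ticket T
        INNER JOIN TicketStatus TS ON T.TicketStatusID = TS.TicketStatusID
        INNER JOIN Membership M ON T.CompletedBy = M.UserId   -- join on CompletedBy, not CheckedInBy
        WHERE TS.TicketStatusName = 'Completed'
          AND T.ClosedDate >= @StartDate
          AND T.ClosedDate <= @EndDate
        GROUP BY M.Username
        ORDER BY CompletedCount DESC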


  • Error with masterpage

    - by Raphyboy
    Hi, I'm creating a brand new masterpage with VS2010 Beta 2 and I get this warning (which causes errors in the content pages):

        Validation (XHTML 1.0 Transitional): Content is not supported outside 'script' or 'asp:content' regions.

    The masterpage's code:

        <%@ Master Language="C#" AutoEventWireup="true" CodeBehind="Bob.master.cs" Inherits="TShirtFactory.Web.All.Core.lib.masterpage.Bob" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
            <asp:ContentPlaceHolder ID="head" runat="server">
            </asp:ContentPlaceHolder>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <asp:ContentPlaceHolder ID="ContentPlaceHolder1" runat="server">
                </asp:ContentPlaceHolder>
            </div>
            </form>
        </body>
        </html>

    As you can see, it's the default generated masterpage code. I get the warning when I hover over the tag at the top. Does anybody have an idea of what's going on? Thank you


  • How do I Handle Ties When Ranking Results in MySQL?

    - by Laxmidi
    Hi, how does one handle ties when ranking results in a MySQL query? I've simplified the table names and columns in this example, but it should illustrate my problem:

        SET @rank=0;
        SELECT student_names.students,
               @rank := @rank + 1 AS rank,
               scores.grades
        FROM student_names
        LEFT JOIN scores ON student_names.students = scores.students
        ORDER BY scores.grades DESC

    So imagine the above query produces:

        Students   Rank   Grades
        Al         1      90
        Amy        2      90
        George     3      78
        Bob        4      73
        Mary       5      NULL
        William    6      NULL

    Even though Al and Amy have the same grade, one is ranked higher than the other. Amy got ripped off. How can I make it so that Amy and Al have the same ranking, so that they both have a rank of 1? Also, William and Mary didn't take the test. They bagged class and were smoking in the boys' room. They should be tied for last place. The correct ranking should be:

        Students   Rank   Grades
        Al         1      90
        Amy        1      90
        George     3      78
        Bob        4      73
        Mary       5      NULL
        William    5      NULL

    If anyone has any advice, please let me know. Thank you! -Laxmidi
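
    For what it's worth, a hedged sketch of the user-variable approach: track the previous grade and a running row position, and only move the rank forward when the grade changes. The NULL-safe <=> operator keeps Mary and William tied; evaluation order of user variables inside a SELECT is officially undefined, so treat this as a sketch, with an outer SELECT hiding the helper columns:

        SET @rank = 0, @pos = 0, @prev = NULL;
        SELECT students, rank, grades
        FROM (
            SELECT student_names.students,
                   @rank := IF(scores.grades <=> @prev, @rank, @pos + 1) AS rank,
                   @pos  := @pos + 1 AS pos,
                   @prev := scores.grades AS grades
            FROM student_names
            LEFT JOIN scores ON student_names.students = scores.students
            ORDER BY scores.grades DESC
        ) AS ranked;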


  • script to find "deny" ACE in ACLs, and remove it

    - by Tom
    On my 100TB cluster, I need to find dirs and files that have a "deny" ACE within their ACL, then remove that ACE on each instance. I'm using the following:

        # find . -print0 | xargs -0 ls -led | grep deny -B4

    and get this output (partial, for example only):

        -r--rw---- 1 chris GroupOne 4096 Mar 6 18:12 ./directoryA/fileX.txt
         OWNER: user:chris
         GROUP: group:GroupOne
         0: user:chris allow file_gen_read,std_write_dac,file_write_attr
         1: user:chris deny file_write,append,file_write_ext_attr,execute
        --
        -r--rwxrwx 1 chris GroupOne 14728221 Mar 6 18:12 ./directoryA/subdirA/fileZ.txt
         OWNER: user:chris
         GROUP: group:GroupOne
         0: user:chris allow file_gen_read,std_write_dac,file_write_attr
         1: user:chris deny file_write,append,file_write_ext_attr,execute
        --
         OWNER: user:bob
         GROUP: group:GroupTwo
         0: user:bob allow dir_gen_read,dir_gen_write,dir_gen_execute,std_write_dac,delete_child,object_inherit,container_inherit
         1: group:GroupTwo allow std_read_dac,std_write_dac,std_synchronize,dir_read_attr,dir_write_attr,object_inherit,container_inherit
         2: group:GroupTwo deny list,add_file,add_subdir,dir_read_ext_attr,dir_write_ext_attr,traverse,delete_child,object_inherit,container_inherit
        --

    As you can see, depending on where the "deny" ACE is, I can see or not see the path. I could increase the -B value (I've seen up to 8 ACEs on a file), but then I would get more output to distill. What I need to do next is extract $ACENUMBER and $PATHTOFILE so that I can execute this command:

        chmod -a# $ACENUMBER $PATHTOFILE

    An additional issue is that the find command above gives a relative path, whereas I need the full path. I guess that would need to be edited somehow. Any guidance on how to accomplish this?
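
    A hedged sketch of one way to wire it together, assuming OneFS-style `ls -led` output exactly as shown above (ACE lines like `1: user:chris deny ...`); test on a small subtree first:

        #!/bin/sh
        # Starting find from an absolute path makes every reported path absolute
        find "$(pwd)" | while IFS= read -r f; do
            # List this file's deny ACE numbers, highest first, because
            # removing an ACE renumbers the ones that follow it
            ls -led "$f" 2>/dev/null |
            awk '$3 == "deny" { sub(":", "", $1); print $1 }' |
            sort -rn |
            while IFS= read -r acenum; do
                echo chmod -a# "$acenum" "$f"    # drop the echo once the output looks right
            done
        done

    Filenames containing newlines would still break the read loop, and at this scale it may be worth running the find in per-directory batches rather than one giant stream.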


  • How do you jump to a particular row in a DataGridView by typing (a la Windows Explorer details view)

    - by russcollier
    I have a .NET Winforms app in C# with a DataGridView that's read-only and populated with some number of rows. I'd like to have functionality similar to Windows Explorer's details view (and many other applications). I'd like the DataGridView to behave such that when it has focus and you start typing, the current row selection jumps to the row where the (string) value of cell 0 (i.e. the first column in the row) starts with the characters you typed. For example, if I have a DataGridView with 1 column and the following rows:

        Bob
        Jane
        Jason
        John
        Leroy
        Sam

    If the DataGridView has focus and I hit the 'b' key on my keyboard, the selected row is now "Bob". If I quickly type in the keys 'ja', the selected row is Jane. If I quickly type in the letters 'jas', the selected row is Jason. If I hit the 'z' key, nothing is selected (since nothing starts with Z). Likewise, if Jane is currently selected and I keep typing the letter 'j', the selection will cycle through to Jason, then John, then back to Jane, each time I hit the 'j' key. I've been doing some Googling (and "stackoverflowing" :-)) for a while and cannot find any examples of this type of functionality. I have a rough idea in my head to do this via some sort of short-lived timer thread, collecting the keystrokes on KeyPress events for the DataGridView, and selecting rows based on those collected keystrokes matching a Cells[0].Value.StartsWith() type of condition. But it seems like there has to be an easier way that I'm just not seeing. Any ideas would be much appreciated. Thanks!
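
    For what it's worth, a hedged sketch of the buffer-and-timeout idea (no separate timer thread needed; a timestamp works). It matches on the first column and moves the current cell; the cycling-on-repeated-letters behaviour is left out for brevity:

        private string searchBuffer = "";
        private DateTime lastKeyPress = DateTime.MinValue;

        private void dataGridView1_KeyPress(object sender, KeyPressEventArgs e)
        {
            // start a fresh search if the user pauses for more than a second
            if ((DateTime.Now - lastKeyPress).TotalMilliseconds > 1000)
                searchBuffer = "";
            lastKeyPress = DateTime.Now;
            searchBuffer += char.ToLowerInvariant(e.KeyChar);

            foreach (DataGridViewRow row in dataGridView1.Rows)
            {
                string value = row.Cells[0].Value as string;
                if (value != null && value.ToLowerInvariant().StartsWith(searchBuffer))
                {
                    dataGridView1.CurrentCell = row.Cells[0];   // moves the selection with FullRowSelect
                    break;
                }
            }
        }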


  • SQL Server, how to join a table in a "rotated" format (returning columns instead of rows)?

    - by Joshua Carmody
    Sorry for the lame title, my descriptive skills are poor today. In a nutshell, I have a query similar to the following:

        SELECT P.LAST_NAME, P.FIRST_NAME, D.DEMO_GROUP
        FROM PERSON P
        JOIN PERSON_DEMOGRAPHIC PD ON PD.PERSON_ID = P.PERSON_ID
        JOIN DEMOGRAPHIC D ON D.DEMOGRAPHIC_ID = PD.DEMOGRAPHIC_ID

    This returns output like this:

        LAST_NAME    FIRST_NAME   DEMO_GROUP
        -------------------------------------
        Johnson      Bob          Male
        Smith        Jane         Female
        Smith        Jane         Teacher
        Beeblebrox   Zaphod       Male
        Beeblebrox   Zaphod       Alien
        Beeblebrox   Zaphod       Politician

    I would prefer the output be similar to the following:

        LAST_NAME    FIRST_NAME   Male   Female   Teacher   Alien   Politician
        -----------------------------------------------------------------------
        Johnson      Bob          1      0        0         0       0
        Smith        Jane         0      1        1         0       0
        Beeblebrox   Zaphod       1      0        0         1       1

    The number of rows in the DEMOGRAPHIC table varies, so I can't say with certainty how many columns I need. The query needs to be flexible. Yes, it would be trivial to do this in code. But this query is one piece of a complicated set of stored procedures, views, and reporting services, many of which are outside my sphere of influence. I need to produce this output inside the database to avoid breaking the system. Any ideas? This is MS SQL Server 2005, by the way. Thanks.
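
    For what it's worth, a hedged sketch of the usual SQL Server 2005 approach: build the column list from DEMOGRAPHIC at runtime and run the PIVOT through dynamic SQL. An extra copy of DEMO_GROUP is aggregated because PIVOT can't count its own spreading column; table and column names are from the question, and this is untested:

        DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

        SELECT @cols = COALESCE(@cols + ', ', '') + QUOTENAME(DEMO_GROUP)
        FROM (SELECT DISTINCT DEMO_GROUP FROM DEMOGRAPHIC) AS d;

        SET @sql = N'SELECT LAST_NAME, FIRST_NAME, ' + @cols + N'
        FROM (SELECT P.LAST_NAME, P.FIRST_NAME, D.DEMO_GROUP, D.DEMO_GROUP AS DG
              FROM PERSON P
              JOIN PERSON_DEMOGRAPHIC PD ON PD.PERSON_ID = P.PERSON_ID
              JOIN DEMOGRAPHIC D ON D.DEMOGRAPHIC_ID = PD.DEMOGRAPHIC_ID) AS src
        PIVOT (COUNT(DG) FOR DEMO_GROUP IN (' + @cols + N')) AS pvt;';

        EXEC sp_executesql @sql;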


  • How to build a GridView like DataBound Templated Custom ASP.NET Server Control

    - by Deyan
    I am trying to develop a very simple templated custom server control that resembles GridView. Basically, I want the control to be added to the .aspx page like this:

        <cc:SimpleGrid ID="SimpleGrid1" runat="server">
            <TemplatedColumn>ID: <%# Eval("ID") %></TemplatedColumn>
            <TemplatedColumn>Name: <%# Eval("Name") %></TemplatedColumn>
            <TemplatedColumn>Age: <%# Eval("Age") %></TemplatedColumn>
        </cc:SimpleGrid>

    and when providing the following data source:

        DataTable table = new DataTable();
        table.Columns.Add("ID", typeof(int));
        table.Columns.Add("Name", typeof(string));
        table.Columns.Add("Age", typeof(int));
        table.Rows.Add(1, "Bob", 35);
        table.Rows.Add(2, "Sam", 44);
        table.Rows.Add(3, "Ann", 26);
        SimpleGrid1.DataSource = table;
        SimpleGrid1.DataBind();

    the result should be an HTML table like this one:

        -------------------------------
        | ID: 1 | Name: Bob | Age: 35 |
        -------------------------------
        | ID: 2 | Name: Sam | Age: 44 |
        -------------------------------
        | ID: 3 | Name: Ann | Age: 26 |
        -------------------------------

    The main problem is that I cannot define the TemplatedColumn. When I tried to do it like this ...

        private ITemplate _TemplatedColumn;

        [PersistenceMode(PersistenceMode.InnerProperty)]
        [TemplateContainer(typeof(TemplatedColumnItem))]
        [TemplateInstance(TemplateInstance.Multiple)]
        public ITemplate TemplatedColumn
        {
            get { return _TemplatedColumn; }
            set { _TemplatedColumn = value; }
        }

    ... and subsequently instantiated the template in CreateChildControls, I got the following result, which is not what I want:

        -----------
        | Age: 35 |
        -----------
        | Age: 44 |
        -----------
        | Age: 26 |
        -----------

    I know that what I want to achieve is pointless and that I could use DataGrid to achieve it, but I am giving this very simple example because if I know how to do this, I will be able to develop the control that I need. Thank you.


  • What algorithms are suitable for this simple machine learning problem?

    - by user213060
    I have what I think is a simple machine learning question. Here is the basic problem: I am repeatedly given a new object and a list of descriptions about the object. For example: new_object: 'bob', new_object_descriptions: ['tall','old','funny']. I then have to use some kind of machine learning to find previously handled objects that have similar descriptions, for example, past_similar_objects: ['frank','steve','joe']. Next, I have an algorithm that can directly measure whether these objects are indeed similar to bob, for example, correct_objects: ['steve','joe']. The classifier is then given this feedback training of successful matches. Then this loop repeats with a new object. Here's the pseudo-code:

        Classifier = new_classifier()

        while True:
            new_object, new_object_descriptions = get_new_object_and_descriptions()
            past_similar_objects = Classifier.classify(new_object, new_object_descriptions)
            correct_objects = calc_successful_matches(new_object, past_similar_objects)
            Classifier.train_successful_matches(new_object, correct_objects)

    But there are some stipulations that may limit which classifier can be used:

    - There will be millions of objects put into this classifier, so classification and training need to scale well to millions of object types and still be fast. I believe this disqualifies something like a spam classifier that is optimal for just two types: spam or not spam. (Update: I could probably narrow this to thousands of objects instead of millions, if that is a problem.)
    - Again, I prefer speed when millions of objects are being classified, over accuracy.

    What are decent, fast machine learning algorithms for this purpose?
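
    A hedged sketch of one scalable baseline: treat each description as a feature and keep an inverted index from feature to objects, so candidate lookup only touches objects sharing at least one description (plain Python; a real deployment would reach for LSH or a nearest-neighbour library, and the 0.1 reward is an arbitrary illustrative constant):

        from collections import defaultdict

        class DescriptionIndex:
            def __init__(self):
                self.by_feature = defaultdict(set)   # description -> objects having it
                self.weights = defaultdict(float)    # (description, object) -> learned weight

            def classify(self, descriptions, top_k=10):
                scores = defaultdict(float)
                for d in descriptions:
                    for obj in self.by_feature[d]:
                        scores[obj] += 1.0 + self.weights[(d, obj)]
                return sorted(scores, key=scores.get, reverse=True)[:top_k]

            def train(self, new_object, descriptions, correct_objects):
                for d in descriptions:
                    self.by_feature[d].add(new_object)
                    for obj in correct_objects:
                        self.weights[(d, obj)] += 0.1   # reward confirmed matches

        index = DescriptionIndex()
        index.train('bob', ['tall', 'old', 'funny'], [])
        print(index.classify(['tall', 'funny']))   # -> ['bob']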


  • Spring security with GAE

    - by xybrek
    I'm trying to implement Spring Security for my GAE application; however, I'm getting this error:

        No bean named 'springSecurityFilterChain' is defined

    I added this configuration to my application's web.xml:

        <filter>
            <filter-name>springSecurityFilterChain</filter-name>
            <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
        </filter>
        <filter-mapping>
            <filter-name>springSecurityFilterChain</filter-name>
            <url-pattern>/*</url-pattern>
        </filter-mapping>

    And in the servlet-context:

        <!-- Configure security -->
        <security:http auto-config="true">
            <security:intercept-url pattern="/**" access="ROLE_USER" />
        </security:http>

        <security:authentication-manager alias="authenticationManager">
            <security:authentication-provider>
                <security:user-service>
                    <security:user name="jimi" password="jimi" authorities="ROLE_USER, ROLE_ADMIN" />
                    <security:user name="bob" password="bob" authorities="ROLE_USER" />
                </security:user-service>
            </security:authentication-provider>
        </security:authentication-manager>

    What could be causing the error?
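
    For what it's worth, a hedged guess at the usual cause: DelegatingFilterProxy looks up the springSecurityFilterChain bean in the root application context, while <security:http> only creates that bean in whichever context loads it, so keeping the security configuration in the servlet-context leaves the root context without it. A sketch of the web.xml wiring (the file name is illustrative):

        <context-param>
            <param-name>contextConfigLocation</param-name>
            <param-value>/WEB-INF/security-context.xml</param-value>
        </context-param>
        <listener>
            <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
        </listener>

    Moving the <security:...> elements into that root-context file (and out of the servlet-context) should let the filter find the bean.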


  • Comparable and Comparator contract with regards to null

    - by polygenelubricants
    The Comparable contract specifies that e.compareTo(null) must throw NullPointerException. From the API:

        Note that null is not an instance of any class, and e.compareTo(null) should throw a NullPointerException even though e.equals(null) returns false.

    On the other hand, the Comparator API mentions nothing about what needs to happen when comparing null. Consider the following attempt at a generic method that takes a Comparable and returns a Comparator for it that puts null as the minimum element:

        static <T extends Comparable<? super T>> Comparator<T> nullComparableComparator() {
            return new Comparator<T>() {
                @Override
                public int compare(T el1, T el2) {
                    return el1 == null ? -1 :
                           el2 == null ? +1 :
                           el1.compareTo(el2);
                }
            };
        }

    This allows us to do the following:

        List<Integer> numbers = new ArrayList<Integer>(
            Arrays.asList(3, 2, 1, null, null, 0)
        );
        Comparator<Integer> numbersComp = nullComparableComparator();
        Collections.sort(numbers, numbersComp);
        System.out.println(numbers); // "[null, null, 0, 1, 2, 3]"

        List<String> names = new ArrayList<String>(
            Arrays.asList("Bob", null, "Alice", "Carol")
        );
        Comparator<String> namesComp = nullComparableComparator();
        Collections.sort(names, namesComp);
        System.out.println(names); // "[null, Alice, Bob, Carol]"

    So the questions are:

    - Is this an acceptable use of a Comparator, or is it violating an unwritten rule regarding comparing null and throwing NullPointerException?
    - Is it ever a good idea to even have to sort a List containing null elements, or is that a sure sign of a design error?
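
    For what it's worth, a hedged sketch of the same idea as a reusable wrapper around any Comparator. Note that it returns 0 when both arguments are null; the version above yields -1 in that case, which breaks the symmetry the Comparator contract requires. Newer Java versions ship this as Comparator.nullsFirst:

        static <T> Comparator<T> nullsFirst(final Comparator<? super T> inner) {
            return new Comparator<T>() {
                @Override
                public int compare(T a, T b) {
                    if (a == null) return (b == null) ? 0 : -1;  // both null -> equal
                    if (b == null) return +1;
                    return inner.compare(a, b);                  // delegate the non-null case
                }
            };
        }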


  • Java remove HTML from String without regular expressions

    - by behrk2
    Hello, I am trying to remove all HTML elements from a String. Unfortunately, I cannot use regular expressions because I am developing on the BlackBerry platform and regular expressions are not yet supported. Is there any other way that I can remove HTML from a string? I read somewhere that you can use a DOM parser, but I couldn't find much on it.

    Text with HTML:

        <![CDATA[As a massive asteroid hurtles toward Earth, NASA head honcho Dan Truman (<a href="http://www.netflix.com/RoleDisplay/Billy_Bob_Thornton/20000303">Billy Bob Thornton</a>) hatches a plan to split the deadly rock in two before it annihilates the entire planet, calling on Harry Stamper (<a href="http://www.netflix.com/RoleDisplay/Bruce_Willis/99786">Bruce Willis</a>) -- the world's finest oil driller -- to head up the mission. With time rapidly running out, Stamper assembles a crack team and blasts off into space to attempt the treacherous task. <a href="http://www.netflix.com/RoleDisplay/Ben_Affleck/20000016">Ben Affleck</a> and <a href="http://www.netflix.com/RoleDisplay/Liv_Tyler/162745">Liv Tyler</a> co-star.]]>

    Text without HTML:

        As a massive asteroid hurtles toward Earth, NASA head honcho Dan Truman (Billy Bob Thornton) hatches a plan to split the deadly rock in two before it annihilates the entire planet, calling on Harry Stamper (Bruce Willis) -- the world's finest oil driller -- to head up the mission. With time rapidly running out, Stamper assembles a crack team and blasts off into space to attempt the treacherous task. Ben Affleck and Liv Tyler co-star.

    Thanks!
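
    A hedged sketch of a regex-free tag stripper: walk the string once and copy only the characters outside angle brackets. It is naive by design, ignoring attribute values containing '>', comments, and entities, but it handles markup like the sample above:

        public static String stripTags(String html) {
            StringBuffer out = new StringBuffer();  // StringBuffer for older BlackBerry JDK compatibility
            boolean inTag = false;
            for (int i = 0; i < html.length(); i++) {
                char c = html.charAt(i);
                if (c == '<') {
                    inTag = true;          // entering a tag: stop copying
                } else if (c == '>') {
                    inTag = false;         // leaving a tag: resume copying
                } else if (!inTag) {
                    out.append(c);
                }
            }
            return out.toString();
        }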


  • Database access through collections

    - by Mike
    Hi all, I have a 3-tiered application where I need to get database results and populate the UI. I have a MessagesCollection class that deals with messages. I load my user from the database. On the instantiation of a user (i.e. new User()), a MessageCollection Messages = new MessageCollection(this) is performed. MessageCollection accepts a user as a parameter.

        User user = user.LoadUser("bob");

    I want to get the messages for Bob:

        user.Messages.GetUnreadMessages();

    GetUnreadMessages calls my business data provider, which in turn calls the data access layer. The business data provider returns a List of messages. My question is, I am not sure what the best practice is here. If I have a collection of messages in an array inside the MessagesCollection class, I could implement ICollection to provide GetEnumerator() and the ability to traverse the messages. But what happens if the messages change and the user has old messages loaded? What about big message collections? What if my user had 10,000 unread messages? I don't think accessing the database and returning 10,000 Message objects would be efficient.
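
    For what it's worth, a hedged sketch of one common compromise: keep the collection thin and page through the data layer instead of materialising everything, so 10,000 unread messages never exist in memory at once (MessageDataProvider and its methods are illustrative names, not an actual API):

        public class MessageCollection
        {
            private readonly User owner;

            public MessageCollection(User owner) { this.owner = owner; }

            // Fetch one page at a time; the DAL translates this to OFFSET/FETCH or TOP
            public IList<Message> GetUnreadMessages(int pageIndex, int pageSize)
            {
                return MessageDataProvider.GetUnread(owner.Id, pageIndex * pageSize, pageSize);
            }

            // A COUNT(*) in the DAL answers "how many?" without loading rows
            public int GetUnreadCount()
            {
                return MessageDataProvider.CountUnread(owner.Id);
            }
        }

    Paging also sidesteps the staleness question: each call reflects the database as of that moment, instead of a cached array loaded with the user.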


  • MySQL - Help me alter this search query to get desired results

    - by sandeepan-nath
    Following is a dump of the tables and data needed to understand the system. The system consists of tutors and classes. The data in the table All_Tag_Relations stores tag relations for each tutor registered and each class created by a tutor. The tag relations are used for searching classes.

        CREATE TABLE IF NOT EXISTS `Tags` (
          `id_tag` int(10) unsigned NOT NULL auto_increment,
          `tag` varchar(255) default NULL,
          PRIMARY KEY (`id_tag`),
          UNIQUE KEY `tag` (`tag`),
          KEY `id_tag` (`id_tag`),
          KEY `tag_2` (`tag`),
          KEY `tag_3` (`tag`),
          KEY `tag_4` (`tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `Tags` (`id_tag`, `tag`) VALUES
        (1, 'Sandeepan'),
        (2, 'Nath'),
        (3, 'first'),
        (4, 'class'),
        (5, 'new'),
        (6, 'Bob'),
        (7, 'Cratchit');

        CREATE TABLE IF NOT EXISTS `All_Tag_Relations` (
          `id_tag` int(10) unsigned NOT NULL default '0',
          `id_tutor` int(10) default NULL,
          `id_wc` int(10) unsigned default NULL,
          KEY `All_Tag_Relations_FKIndex1` (`id_tag`),
          KEY `id_wc` (`id_wc`),
          KEY `id_tag` (`id_tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `All_Tag_Relations` (`id_tag`, `id_tutor`, `id_wc`) VALUES
        (1, 1, NULL),
        (2, 1, NULL),
        (3, 1, 1),
        (4, 1, 1),
        (6, 2, NULL),
        (7, 2, NULL),
        (5, 2, 2),
        (4, 2, 2);

    Following is my query. It searches for "first class" (tag id for first = 3 and for class = 4, in the Tags table) and returns all those classes such that both the terms first and class are present in the class name:

        SELECT wtagrels.id_wc,
               SUM(DISTINCT(wtagrels.id_tag = 3)) AS key_1_total_matches,
               SUM(DISTINCT(wtagrels.id_tag = 4)) AS key_2_total_matches
        FROM all_tag_relations AS wtagrels
        WHERE (wtagrels.id_tag = 3 OR wtagrels.id_tag = 4)
        GROUP BY wtagrels.id_wc
        HAVING key_1_total_matches = 1
           AND key_2_total_matches = 1
        LIMIT 0, 20

    And it returns the class with id_wc = 1. But I want the search to show all those classes such that all the search terms are present in the class name or its tutor name, so that searching "Sandeepan class" (wtagrels.id_tag = 1,4) or "Sandeepan Nath" also returns the class with id_wc = 1, while searching "Bob First" should not return any classes. Please modify the above query or suggest a new one, if possible using MyISAM fulltext search, but somehow help me get the result.
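
    A hedged sketch of one way to widen the search to tutor names: build a derived table that pairs each class with both its own tags and its tutor's tags (tutor tag rows are the ones with id_wc NULL), then demand that every search-term tag appears. Tag ids 1 and 4 stand for "Sandeepan class" per the dump above; untested beyond tracing it against that sample data:

        SELECT combined.id_wc
        FROM (
            -- tags attached directly to a class
            SELECT id_wc, id_tag
            FROM All_Tag_Relations
            WHERE id_wc IS NOT NULL
            UNION
            -- tags of the tutor, attributed to each of that tutor's classes
            SELECT c.id_wc, t.id_tag
            FROM All_Tag_Relations c
            JOIN All_Tag_Relations t ON t.id_tutor = c.id_tutor AND t.id_wc IS NULL
            WHERE c.id_wc IS NOT NULL
        ) AS combined
        WHERE combined.id_tag IN (1, 4)
        GROUP BY combined.id_wc
        HAVING COUNT(DISTINCT combined.id_tag) = 2;

    Against the dump, this returns id_wc = 1 for tags (1,4) and (1,2), and nothing for "Bob First" (6,3), as required.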


  • Excel VBA Macro for Pivot Table with Dynamic Data Range

    - by John Ziebro
    CODE IS WORKING! THANKS FOR THE HELP! I am attempting to create a dynamic pivot table that will work on data that varies in the number of rows. Currently I have 28,300 rows, but this may change daily. Example of the data format:

        Case Number   Branch   Driver
        1342          NYC      Bob
        4532          PHL      Jim
        7391          CIN      John
        8251          SAN      John
        7211          SAN      Mary
        9121          CLE      John
        7424          CIN      John

    Example of the finished table:

        Driver   NYC   PHL   CIN   SAN   CLE
        Bob      1     0     0     0     0
        Jim      0     1     0     0     0
        John     0     0     2     1     1
        Mary     0     0     0     1     0

    Code as follows:

        Sub CreateSummaryReportUsingPivot()
            ' Use a Pivot Table to create a static summary report
            ' with model going down the rows and regions across
            Dim WSD As Worksheet
            Dim PTCache As PivotCache
            Dim PT As PivotTable
            Dim PRange As Range
            Dim FinalRow As Long
            Dim FinalCol As Long

            Set WSD = Worksheets("PivotTable")

            ' Name active worksheet as "PivotTable"
            ActiveSheet.Name = "PivotTable"

            ' Delete any prior pivot tables
            For Each PT In WSD.PivotTables
                PT.TableRange2.Clear
            Next PT

            ' Define input area and set up a Pivot Cache
            FinalRow = WSD.Cells(Application.Rows.Count, 1).End(xlUp).Row
            FinalCol = WSD.Cells(1, Application.Columns.Count).End(xlToLeft).Column
            Set PRange = WSD.Cells(1, 1).Resize(FinalRow, FinalCol)
            Set PTCache = ActiveWorkbook.PivotCaches.Add(SourceType:=xlDatabase, SourceData:=PRange)

            ' Create the Pivot Table from the Pivot Cache
            Set PT = PTCache.CreatePivotTable(TableDestination:=WSD.Cells(2, FinalCol + 2), TableName:="PivotTable1")

            ' Turn off updating while building the table
            PT.ManualUpdate = True

            ' Set up the row fields
            PT.AddFields RowFields:="Driver", ColumnFields:="Branch"

            ' Set up the data fields
            With PT.PivotFields("Case Number")
                .Orientation = xlDataField
                .Function = xlCount
                .Position = 1
            End With

            With PT
                .ColumnGrand = False
                .RowGrand = False
                .NullString = "0"
            End With

            ' Calc the pivot table
            PT.ManualUpdate = False
            PT.ManualUpdate = True
        End Sub


  • Horizontally align rows in multiple tables using web user control

    - by goku_da_master
    I need to align rows in different tables that are laid out horizontally. I'd prefer to put the HTML code in a single web user control so I can create as many instances of that control as I want and lay them out horizontally. The problem is, the text in the rows needs to wrap, so some rows may expand vertically and some may not (see the example below). When that happens, the rows in the other tables aren't aligned horizontally. I know I can accomplish all this by using a single table, but that would mean I'd have to duplicate the name, address and phone HTML code instead of dynamically creating new instances of my user control (in reality there are many more fields than this, but I'm keeping it simple). Is there any way to do this, whether with divs, tables or something else?

    Here's the problem: Mary Jane's address field expands to two lines, causing her phone field to not align horizontally with John's and Bob's:

        Name: John Doe          Name: Mary Jane             Name: Bob Smith
        Address: 123 broadway   Address: Some really long   Address: Short address
        Phone: 123-456          address that takes up
                                multiple lines
                                Phone: 111-2222             Phone: 456-789

    I'm not restricted in any way in how to do this (other than using ASP.NET), but I'd prefer to use a single web control that I instantiate X times at design time (in this example, 3 times). I'm using VS2008 and .NET 3.5.
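
    A hedged client-side sketch of one workaround: keep the separate user controls and equalise row heights with a little jQuery after the page loads. It assumes each control renders a table inside a shared container and that row N in one table corresponds to row N in the others; the container id is illustrative:

        $(function () {
            var max = [];
            // first pass: record the tallest copy of each row index
            $('#container table').each(function () {
                $('tr', this).each(function (i) {
                    max[i] = Math.max(max[i] || 0, $(this).height());
                });
            });
            // second pass: stretch every row to that height
            $('#container table').each(function () {
                $('tr', this).each(function (i) {
                    $(this).height(max[i]);
                });
            });
        });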


  • Select * from 'many to many' SQL relationship

    - by Rampant Creative Group
    I'm still learning SQL and my brain is having a hard time with this one. Say I have 3 tables:

        teams
        players
        teams_players   (my link table)

    All I want to do is run a query to get each team and the players on them. I tried this:

        SELECT *
        FROM teams
        INNER JOIN teams_players ON teams.id = teams_players.team_id
        INNER JOIN players ON teams_players.player_id = players.id

    But it returned a separate row for each player on each team. Is JOIN the right way to do it, or should I be doing something else?

    EDIT: OK, so from what I'm hearing, this isn't necessarily a bad way to do it. I'll just have to group the data by team while I'm doing my loop. I have not yet tried the modified SQL statements provided, but I will today and get back to you. To answer the question about structure: I guess I wasn't thinking about the returned row structure, which is part of what led to my confusion. In this particular case, each team is limited to 4 players (or fewer), so I guess the structure that would be helpful to me is something like the following:

        teams.id  teams.name  players.id  players.name  players.id  players.name  players.id  players.name  players.id  players.name
        1         Team ABC    1           Jim           2           Bob           3           Ned           4           Roy
        2         Team XYZ    2           Bob           3           Ned           5           Ralph         6           Tom
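
    If one row per team is the goal and the database is MySQL, a hedged sketch using GROUP_CONCAT (other engines have equivalents such as STRING_AGG; splitting players back into separate id/name column pairs is awkward in plain SQL, so a single roster column is usually the pragmatic choice):

        SELECT teams.id,
               teams.name,
               GROUP_CONCAT(players.name ORDER BY players.name SEPARATOR ', ') AS roster
        FROM teams
        INNER JOIN teams_players ON teams.id = teams_players.team_id
        INNER JOIN players ON teams_players.player_id = players.id
        GROUP BY teams.id, teams.name;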


  • Django/jQuery - read file and pass to browser as file download prompt

    - by danspants
    I've previously asked a question about passing files to the browser so a user receives a download prompt. However, those files were really just strings created at the end of a function, and it was simple to pass them to an iframe's src attribute for the desired effect. Now I have a more ambitious requirement: I need to pass pre-existing files of any format to the browser. I have attempted this using the following code:

        def return_file(request):
            try:
                bob = open(urllib.unquote(request.POST["file"]), "rb")
                response = HttpResponse(content=bob, mimetype="application/x-unknown")
                response["Content-Disposition"] = "attachment; filename=nothing.xls"
                return HttpResponse(response)
            except:
                return HttpResponse(sys.exc_info())

    With my original setup, the following jQuery was sufficient to give the desired download prompt:

        jQuery('#download').attr("src", "/return_file/");

    However, this won't work anymore, as I need to pass POST values to the function. My attempt to rectify that is below, but instead of a download prompt I get the file displayed as text:

        jQuery.get("/return_file/", {"file": "c:/filename.xls"}, function(data) {
            jQuery(thisButton).children("iframe").attr("src", data);
        });

    Any ideas as to where I'm going wrong? Thanks!
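
    A hedged sketch of two likely fixes: return the HttpResponse itself (wrapping it in a second HttpResponse serialises the first one as text), and point the iframe at a URL carrying the file name in the query string, so the browser rather than XHR receives the attachment (jQuery.get hands the bytes to JavaScript, which can never raise a download prompt):

        # views.py (sketch; reads GET since the iframe issues a plain GET)
        def return_file(request):
            path = urllib.unquote(request.GET["file"])
            response = HttpResponse(open(path, "rb"), mimetype="application/octet-stream")
            response["Content-Disposition"] = "attachment; filename=nothing.xls"
            return response

    and on the client:

        jQuery('#download').attr('src', '/return_file/?file=' + encodeURIComponent('c:/filename.xls'));

    Note that serving an arbitrary path taken from the request is a directory-traversal hole; a real view should whitelist the allowed files.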


  • R: manipulating data.frames containing strings and booleans.

    - by Mike Dewar
    Hello. I have a data.frame in R; it's called p. Each element in the data.frame is either TRUE or FALSE. My variable p has, say, m rows and n columns. For every row there is strictly one TRUE element. It also has column names, which are strings. What I would like to do is the following:

    1. For every TRUE I see in a row of p, replace it with the name of the corresponding column.
    2. Collapse the data.frame, which now contains FALSEs and column names, to a single vector with m elements.

    I would like to do this in an R-thonic manner, so as to continue my enlightenment in R and contribute to a world without for-loops. I can do step 1 using the following for loop:

        for (i in seq(length(colnames(p)))) {
            p[p[,i]==TRUE,i] = colnames(p)[i]
        }

    but there's no beauty here, and I have totally subscribed to this for-loops-in-R-are-probably-wrong mentality. Maybe "wrong" is too strong, but they're certainly not great. I don't really know how to do step 2. I kind of hoped that the sum of a string and FALSE would return the string, but it doesn't. I kind of hoped I could use an OR operator of some kind, but can't quite figure that out (Python responds to False or 'bob' with 'bob'). Hence, yet again, I appeal to you beautiful Rstats people for help!
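
    For what it's worth, a hedged sketch of a loop-free version, relying on the stated guarantee of exactly one TRUE per row:

        # which() gives the position of the single TRUE in each row;
        # indexing colnames() by those positions does steps 1 and 2 at once
        result <- colnames(p)[apply(p, 1, which)]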


  • Spawning a thread in python

    - by morpheous
    I have a series of 'tasks' that I would like to run in separate threads. The tasks are to be performed by separate modules, each containing the business logic for processing their tasks. Given a tuple of tasks, I would like to be able to spawn a new thread for each module as follows:

        from foobar import alice, bob, charles

        data = getWorkData()
        # these are enums (which I just found Python doesn't support natively) :(
        tasks = (alice, bob, charles)
        for task in tasks:
            # Ok, just found out Python doesn't have a switch - @#$%!
            # yet another thing I'll need help with then ...
            switch case alice:
                # spawn thread here - how ?
                alice.spawnWorker(data)

    No prizes for guessing I am still thinking in C++. How can I write this in a Pythonic way using Pythonic 'enums' and 'switch'es, and be able to run a module in a new thread? Obviously, the modules will all have a class that is derived from an ABC (abstract base class) called Plugin. The spawnWorker() method will be declared on the Plugin interface and defined in the classes implemented in the various modules. Maybe there is a better (i.e. Pythonic) way of doing all this? I'd be interested in knowing.
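
    For what it's worth, a hedged sketch of the Pythonic shape: no enums or switch needed, because modules are first-class objects and the tuple plus a loop already is the dispatch table (getWorkData() and spawnWorker come from the question's pseudo-code):

        import threading
        from foobar import alice, bob, charles

        data = getWorkData()
        tasks = (alice, bob, charles)

        # one thread per module; each runs that module's spawnWorker(data)
        threads = [
            threading.Thread(target=task.spawnWorker, args=(data,))
            for task in tasks
        ]
        for t in threads:
            t.start()
        for t in threads:
            t.join()   # wait for all workers to finish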


  • Efficient database access when dealing with multiple abstracted repositories

    - by Nathan Ridley
    I want to know how most people are dealing with the repository pattern when it involves hitting the same database multiple times (sometimes transactionally) and trying to do so efficiently while maintaining database agnosticism and using multiple repositories together.

    Let's say we have repositories for three different entities: Widget, Thing and Whatsit. Each repository is abstracted via a base interface as per normal decoupling design processes. The base interfaces would then be IWidgetRepository, IThingRepository and IWhatsitRepository. Now we have our business layer or equivalent (whatever you want to call it). In this layer we have classes that access the various repositories. Often the methods in these classes need to do batch/combined operations where multiple repositories are involved. Sometimes one method may make use of another method internally, while that method can still be called independently. What about, in this scenario, when the operation needs to be transactional? Example:

        class Bob
        {
            private IWidgetRepository _widgetRepo;
            private IThingRepository _thingRepo;
            private IWhatsitRepository _whatsitRepo;

            public Bob(IWidgetRepository widgetRepo, IThingRepository thingRepo, IWhatsitRepository whatsitRepo)
            {
                _widgetRepo = widgetRepo;
                _thingRepo = thingRepo;
                _whatsitRepo = whatsitRepo;
            }

            public void DoStuff()
            {
                _widgetRepo.StoreSomeStuff();
                _thingRepo.ReadSomeStuff();
                _whatsitRepo.SaveSomething();
            }

            public void DoOtherThing()
            {
                _widgetRepo.UpdateSomething();
                DoStuff();
            }
        }

    How do I keep my access to that database efficient and not have a constant stream of open-close-open-close on connections and inadvertent invocation of MSDTC and whatnot? If my database is something like SQLite, standard mechanisms like creating nested transactions are going to inherently fail, yet the business layer should not have to concern itself with such things. How do you handle such issues? Does ADO.NET provide simple mechanisms to handle this, or do most people end up wrapping their own custom bits of code around ADO.NET to solve these types of problems?
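
    For what it's worth, a hedged sketch of the usual answer: a unit-of-work object that owns one connection and one transaction, which the repositories share instead of opening their own (ADO.NET, illustrative names; this avoids nested transactions entirely, which keeps SQLite happy, and a single connection keeps MSDTC out of the picture):

        public interface IUnitOfWork : IDisposable
        {
            IDbConnection Connection { get; }
            IDbTransaction Transaction { get; }
            void Commit();
        }

        public class AdoUnitOfWork : IUnitOfWork
        {
            public IDbConnection Connection { get; private set; }
            public IDbTransaction Transaction { get; private set; }

            public AdoUnitOfWork(IDbConnection connection)
            {
                Connection = connection;
                Connection.Open();                           // opened once per unit of work
                Transaction = Connection.BeginTransaction(); // one transaction spans all repositories
            }

            public void Commit() { Transaction.Commit(); }

            public void Dispose()
            {
                Transaction.Dispose();   // rolls back if Commit was never called
                Connection.Dispose();
            }
        }

    Repositories then take the IUnitOfWork as a constructor argument and enlist their commands in its transaction, so Bob.DoStuff() becomes one open-work-commit-close cycle regardless of how many repositories it touches.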


  • Perl regex which grabs ALL double letter occurrences in a line

    - by phileas fogg
    Hi all, still plugging away at teaching myself Perl. I'm trying to write some code that will count the lines of a file that contain double letters, and then place parentheses around those double letters. Now what I've come up with will find the first occurrence of double letters, but not any other ones. For instance, if the line is:

        Amp, James Watt, Bob Transformer, etc. These pioneers conducted many

    my code will render this:

        19 Amp, James Wa(tt), Bob Transformer, etc. These pioneers conducted many

    The "19" is the count (of lines containing double letters), and it gets the "tt" of "Watt" but misses the "ee" in "pioneers". Below is my code:

        $file = '/path/to/file/electricity.txt';
        open(FH, $file) || die "Cannot open the file\n";
        my $counter = 0;
        while (<FH>) {
            chomp();
            if (/(\w)\1/) {
                $counter += 1;
                s/$&/\($&\)/g;
                print "\n\n$counter $_\n\n";
            } else {
                print "$_\n";
            }
        }
        close(FH);

    What am I overlooking? TIA!
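
    For what it's worth, a hedged sketch of the usual fix: s/$&/.../g only ever substitutes the literal text of the first match (here "tt"), whereas letting s///g re-apply the captured-pair pattern wraps every doubled letter on the line and returns how many it wrapped:

        while (<FH>) {
            chomp;
            my $hits = s/(\w)\1/($1$1)/g;   # wraps ALL doubled letters; returns the substitution count
            $counter++ if $hits;            # count lines, not individual substitutions
            print "$_\n";
        }
        print "\n$counter lines contained double letters\n";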


  • WP7 - listbox Binding

    - by Jeff V
    I have an ObservableCollection that I want to bind to my ListBox...

        lbRosterList.ItemsSource = App.ViewModel.rosterItemsCollection;

    However, each item in that collection contains another collection:

        [DataMember]
        public ObservableCollection<PersonDetail> ContactInfo
        {
            get { return _ContactInfo; }
            set
            {
                if (value != _ContactInfo)
                {
                    _ContactInfo = value;
                    NotifyPropertyChanged("ContactInfo");
                }
            }
        }

    PersonDetail contains 2 properties: name and e-mail. I would like the ListBox to show those values for each item in rosterItemsCollection:

        RosterId = 0; RosterName = "test";  ContactInfo.name = "Art";   ContactInfo.email = "[email protected]";
        RosterId = 0; RosterName = "test";  ContactInfo.name = "bob";   ContactInfo.email = "[email protected]";
        RosterId = 1; RosterName = "test1"; ContactInfo.name = "chris"; ContactInfo.email = "[email protected]";
        RosterId = 1; RosterName = "test1"; ContactInfo.name = "Sam";   ContactInfo.email = "[email protected]";

    I would like the ListBoxes to display the ContactInfo information. I hope this makes sense... Here is the XAML I've tried, with little success:

        <listbox x:Name="lbRosterList" ItemsSource="rosterItemCollection">
            <textblock x:name="itemText" text="{Binding Path=name}"/>

    What am I doing incorrectly?
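
    A hedged sketch of the usual shape: give the outer ListBox an ItemTemplate, and bind an inner ItemsControl to the nested ContactInfo collection so each PersonDetail gets its own row. Property names follow the question; Silverlight bindings require them to be public properties, and the lowercase name/email must match exactly:

        <ListBox x:Name="lbRosterList">
            <ListBox.ItemTemplate>
                <DataTemplate>
                    <StackPanel>
                        <TextBlock Text="{Binding RosterName}" />
                        <ItemsControl ItemsSource="{Binding ContactInfo}">
                            <ItemsControl.ItemTemplate>
                                <DataTemplate>
                                    <StackPanel Orientation="Horizontal">
                                        <TextBlock Text="{Binding name}" />
                                        <TextBlock Text="{Binding email}" Margin="12,0,0,0" />
                                    </StackPanel>
                                </DataTemplate>
                            </ItemsControl.ItemTemplate>
                        </ItemsControl>
                    </StackPanel>
                </DataTemplate>
            </ListBox.ItemTemplate>
        </ListBox>

    with ItemsSource still set in code-behind as above (lbRosterList.ItemsSource = App.ViewModel.rosterItemsCollection;).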


  • Generate regular expression to match strings from the list A, but not from list B

    - by Vlad
    I have two lists of strings, ListA and ListB. I need to generate a regular expression that will match all strings in ListA and will not match any string in ListB. The strings could contain any combination of characters, numbers and punctuation. If a string appears in ListA, it is guaranteed that it will not be in ListB. If a string is not in either of these two lists, I don't care what the result of the matching is. The lists typically contain thousands of strings, and the strings are fairly similar to each other.

    I know the trivial answer to this question, which is just to generate a regular expression of the form (Str1)|(Str2)|(Str3), where StrN is a string from ListA. But I am looking for a more efficient way to do this. The ideal solution would be some sort of tool that will take two lists and generate a Java regular expression for this.

    Update 1: By "efficient", I mean to generate an expression that is shorter than the trivial solution. The ideal algorithm would generate the shortest possible expression. Here are some examples.

        ListA = { C10, C15, C195 }
        ListB = { Bob, Billy }

    The ideal expression would be /^C1.+$/

    Another example, note the third element of ListB:

        ListA = { C10, C15, C195 }
        ListB = { Bob, Billy, C25 }

    The ideal expression is /^C[^2]{1}.+$/

    The last example:

        ListA = { A, D, E, F, H }
        ListB = { B, C, G, I }

    The ideal expression is the same as the trivial solution, which is /^(A|D|E|F|H)$/

    Also, I am not looking for the ideal solution; anything better than trivial would help. I was thinking along the lines of generating the list of trivial solutions, and then trying to merge the common substrings while watching that we don't wander into ListB territory.

    Update 2: I am not particularly worried about the time it takes to generate the regex; anything under 10 minutes on a modern machine is acceptable.
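
    A hedged sketch of the prefix-merging idea from the question: build a trie of ListA and emit a pattern that matches exactly ListA, which by construction can never match the disjoint ListB. It won't discover character-class tricks like [^2], but it collapses shared prefixes (Python for brevity; the output is a plain pattern usable from Java):

        import re

        def build_regex(words):
            # build a trie over the words; None marks end-of-word
            trie = {}
            for w in words:
                node = trie
                for ch in w:
                    node = node.setdefault(ch, {})
                node[None] = {}

            def emit(node):
                chars = sorted(ch for ch in node if ch is not None)
                if not chars:
                    return ''                       # only the end marker: nothing to emit
                branches = [re.escape(ch) + emit(node[ch]) for ch in chars]
                pattern = branches[0] if len(branches) == 1 else '(?:' + '|'.join(branches) + ')'
                if None in node:                    # some word also ends at this node
                    pattern = '(?:' + pattern + ')?'
                return pattern

            return '^' + emit(trie) + '$'

        print(build_regex(['C10', 'C15', 'C195']))   # ^C1(?:0|5|95)$

    On the first example this yields ^C1(?:0|5|95)$, shorter than the trivial alternation; on the last example it degenerates to the trivial solution, as the question predicts.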


  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team.

    Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights of both events.

    CLSF - 17th October

    XFS update (led by Jeff Liu)

    XFS keeps making rapid progress with a lot of changes, especially focused on infrastructure/performance improvements as well as new feature development. This is reflected in a sample statistic comparing XFS, Ext4+JBD2 and Btrfs via:

        # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs|fs/ext4+fs/jbd2|fs/btrfs

        XFS:       141 files changed, 27598 insertions(+), 19113 deletions(-)
        Ext4+JBD2:  39 files changed, 10487 insertions(+),  5454 deletions(-)
        Btrfs:      70 files changed, 19875 insertions(+),  8130 deletions(-)

    What made up those changes in XFS?

    - Self-describing metadata (CRC32c). This is a new feature and it contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock.
    - Transaction log space reservation improvements. With this change, we can calculate the log space reservation at mount time rather than at runtime, reducing CPU overhead.
    - User namespace support, so both XFS and USERNS can be enabled in the kernel configuration beginning with Linux 3.10. Thanks to Dwight Engen's efforts on this.
    - Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock, i.e. with CRC enabled.
    - CONFIG_XFS_WARN, a new lightweight runtime debugger which can be deployed in production environments.
    - Readahead of log objects during recovery; this change speeds up log replay significantly.
    - Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes that have post-EOF space due to speculative preallocation, and to support improved quota management that frees up a significant amount of unwritten space when at or near EDQUOT. It supports background scanning, which occurs on a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2).

    Bitter arguments ensued in this session, especially over the comparison between Ext4 and Btrfs in different areas; I had to spend a whole morning of the first day answering those questions. We basically agreed that XFS is the best choice in Linux nowadays because:

    - Stability: XFS has a good stability record over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors in XFS than in other filesystems over the past year or more. I owe this to the XFS upstream code reviewers, who always perform serious code review as well as testing.
    - Good performance for large and small files; "XFS does not work very well for small files" has been an old story for years.
    - Best choice (maybe) for distributed PB-scale filesystems; e.g. Ceph recommends deploying the OSD daemon on XFS because Ext4 has a limited xattr size.
    - Best choice for large storage (>16TB); Ext4 does not support a single file larger than around 15.95TB.
    - Scalability: any objection to XFS being best on this point? :)
    - XFS deals with transaction concurrency better than Ext4. Why? The maximum size of the log in XFS is 2038MB, compared to 128MB in Ext4.
    - Misc: Ext4 is widely used and has been proven fast and stable in various loads and scenarios; XFS just needs more customers; and Btrfs is still on the road to maturity.

    Ceph Introduction (led by Li Wang)

    This is a hot topic. Li gave us a nice introduction to the design as well as their current work. Actually, the Ceph client has been included in the Linux kernel since 2.6.34 and supported by OpenStack since Folsom, but it seems it has not yet been widely deployed in production. Their major work focuses on inline data support to separate the metadata and data storage and reduce file access time: a file access needs two communications, fetching the metadata from the MDS and then getting the data from the OSD, and small file access is limited by network latency. The solution is, for small files, to store the data with the metadata, so that when accessing a small file the metadata server can push both metadata and data to the client at the same time. In this way they can avoid the overhead of calculating the data offset and save the communication to the OSD. For this feature they have only run some small-scale testing but really saw noticeable improvements. Test environment: Intel 2 CPU 12 Core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSD, 1 MDS, 1 MON. The sequential read performance for 1K-size files improved by about 50%.

    I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable and can be deployed in production for large-scale PB-level storage, but they could not give a positive answer; it looks like Ceph is not even spread across Dreamhost (subject to confirmation). According to Li, they have only deployed Ceph on small-scale storage (32 nodes), although they would like to try 6000 nodes in the future.

    Improve Linux swap for Flash storage (led by Shaohua Li)

    Because of its high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is using SSD as swap, but Linux swap is designed for slow hard disk storage, so there are a lot of challenges to using SSD efficiently for swap.

    SWAPOUT:

    - swap_map scan: swap_map is the in-memory data structure that tracks swap disk usage, but it is a slow linear scan. It becomes a bottleneck when finding many adjacent pages on SSD. Shaohua Li changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSD.
    - IO pattern: in most cases the swap IO is in an interleaved pattern because of multiple reclaimers, or because a free cluster is shared by all reclaimers. Even though the block layer can merge interleaved IO to some extent, we cannot count on it completely. Hence a per-cpu cluster was added on top of the previous change; it helps each reclaimer do sequential IO, which the block layer can merge more easily.
    - TLB flush: if we're reclaiming one active page, we should first move the page from the active LRU list to the inactive LRU list, and then reclaim the page from the inactive LRU to swap it out. During the process we need to clear the PTE twice: first the 'A' (ACCESS) bit, then the 'P' (PRESENT) bit. Processors need to send lots of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still arguments here and only part of the work has been pushed to mainline.

    SWAPIN: a page fault does iodepth=1 sync IO, but it's a little wasteful to issue only a page-sized IO. The obvious solution is swap readahead, but the current in-kernel swap readahead is arbitrary (always 8 pages) and doesn't perform well for either random or sequential access workloads. Shaohua introduced a new flag for madvise (MADV_WILLNEED) to do swap prefetch, so the change happens in the userspace API and leaves the in-kernel readahead unchanged (though I think some improvement could also be done there).

    SWAP discard: as we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to async discard, which allows discard and write to run at the same time. Meanwhile, the unit of discard was also optimized to the cluster.

    Misc: for many concurrent swapouts and swapins, lock contention on things like anon_vma or swap_lock is high, so he changed the swap_lock to a per-swap lock. But some lock contention remains on very fast SSDs because of the swapcache address_space lock.

    Zproject (led by Bob Liu)

    Bob gave us a very nice introduction to the current memory compression status. There are now three projects (zswap/zram/zcache) which all aim at smoothing swap IO storms and improving performance, but they each have their own pros and cons.

    ZSWAP: implemented on the frontswap API, it uses a dynamic allocator named zbud to allocate free pages. Zbud means pairs of zpages are "buddied", and it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can shrink under memory pressure. If the compressed memory pool reaches its limit, shrink or reclaim happens: the page frame is decompressed into two newly allocated pages which are then written to the real swap device, but this can fail when allocating the two pages.

    ZRAM: acts as a compressed ramdisk used as a swap device, and uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since it needs more pages to uncompress in order to free just one page. ZRAM is preferred by embedded systems which may not have any real swap device. Both ZRAM and ZSWAP are currently in the driver/staging tree, and in the mm community there have been discussions about merging ZRAM into ZSWAP or vice versa, but no agreement yet.

    ZCACHE: handles file page compression, but it was removed from staging recently.

    From industry (led by Tang Jie, LSI)

    An LSI engineer introduced several new products to us. The first is RAID 5/6 cards that use full-stripe writes to improve performance. The second is the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes; it's called DuraWrite and the typical WA is 0.5. What's more, with its Dynamic Logical Capacity function module enabled, the controller can do data compression that is transparent to the upper layer. LSI testing shows that with this virtual capacity enabled, a 1x TB drive can support up to 2x TB of capacity, but the application must monitor free flash space to maintain optimal performance and to guard against free flash space exhaustion. He said the most useful application is for databases.

    Another thing worth mentioning is an NV-DRAM memory in NMR/Raptor which is directly exposed to the host system. Applications can directly access the NV-DRAM via a memory address, using the standard system call mmap(). He said it is very useful for database logging now. These kinds of NVM products have begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will have an effect on the current OS layers, especially on file systems; e.g. journaling may need to be redesigned to fully utilize these nonvolatile memories.

    OCFS2 (led by Canquan Shen)

    Without a doubt, Huawei is the biggest contributor to OCFS2 over the past two years. They have posted 46 upstream patches, and 39 patches have been merged. Their current project is based on 32/64-node clusters, but they have also tried 128 nodes at the experimental stage. The major work they are doing is supporting ATS (atomic test and set), which can work with DLM at the same time. It looks like this idea was inspired by the VMware VMFS locking, i.e. http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html

    CLK - 18th October 2013

    Improving Linux Development with Better Tools (Andi Kleen)

    This talk focused on how to find and solve bugs as Linux's complexity grows. Generally, we can do this with the following kinds of tools:

    - Static code checkers, e.g. sparse, smatch, coccinelle, clang checker, checkpatch, gcc -W/LTO, stanse. These can check a lot of things, simple mistakes and complex problems alike, but the challenges are: some are very slow, there are false positives, and a concentrated effort may be needed to get false positives down. In particular, no static checker he has found can follow indirect calls ("OO in C", common in the kernel):

        struct foo_ops {
            int (*do_foo)(struct foo *obj);
        };
        foo->do_foo(foo);

    - Dynamic runtime checkers, e.g. thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers.
    - Fuzzers/test suites. E.g. Trinity is a great tool that finds many bugs, but it needs a manual model for each syscall. Modern fuzzers use automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf
    - Debuggers/tracers to understand code, e.g. ftrace; they can dump on events/oops/custom triggers, but in many cases there is still too much overhead to run them at all times during debugging.
    - Tools to read/understand source, e.g. grep/cscope work great for many cases, but they do not understand indirect pointers (the OO-in-C model used in the kernel). Give us all "do_foo" instances:

        struct foo_ops {
            int (*do_foo)(struct foo *obj);
        } = {
            .do_foo = my_foo,
        };
        foo->do_foo(foo);

    It would be great to have a cscope-like tool that understands this based on types/initializers.

    XFS: The High Performance Enterprise File System (Jeff Liu) [slides]

    I gave a talk introducing the disk layout and unique features, as well as the recent changes. The slides include some charts reflecting the performance of XFS, Btrfs and Ext4 for small files. About a dozen attendees raised their hands when I asked who had experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only 3 people raised their hands, but they were Chris Mason, Ric Wheeler, and another attendee. The audience questions mainly focused on stability and comparisons with other file systems.

    Linux Containers (Feng Gao)

    The speaker introduced the purpose of the various kinds of namespaces, including mount/UTS/IPC/Network/Pid/User, as well as the system API/ABI. For the userspace tools, he mainly focused on Libvirt LXC rather than LXC itself. Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver; it can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two other possible new namespaces for the future: the first is audit, though it is not yet clear whether it should be assigned to the user namespace or not; the other is syslog, but the question is, do we really need it?

    In-memory Compression (Bob Liu)

    The same as at CLSF: a nice introduction that I have already covered above.

    Misc

    There were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to record all those things.

    -- Jeff Liu


  • Outlook: Displaying email sender's job title in message list

    - by RexE
    Is there a way to display the sender's job title in the Outlook email list pane? I would like to see something like:

        From      | Title     | Subject      | Received
        Joe Smith | President | Re: Proposal | 5:34
        Bob Chen  | Engineer  | Fw: Request  | 5:30

    I am using Outlook 2010. All my mail comes through an Exchange 2010 server.

