Search Results

Search found 11587 results on 464 pages for 'pseudo random numbers'.


  • Why is this CHOICE element not getting assigned in my SharePoint Field definition schema?

    - by ccornet
    I defined a new field of the type "Choice" for my web application. It will serve basically as a pseudo-lookup, as its contents are defined by the value of a Text field in a list. It is initialized with a dummy choice to begin with (I'm under the impression a choice field needs at least one choice when defined), which is replaced with a real choice later on. But for some reason, this dummy choice is never actually added to the choices! Below is the XML schema for the field in question.

        <Field ID="{ALICEH-ASFA-KEGU-IDLISTED}" Name="ddlSystems" Group="Lookup Columns"
               DisplayName="ddlSystems" Type="Choice" Sealed="FALSE" ReadOnly="FALSE"
               Hidden="FALSE" FillInChoice="TRUE" DisplaceOnUpgrade="TRUE">
          <CHOICES>
            <CHOICE>BLANULL</CHOICE>
          </CHOICES>
          <Default>BLANULL</Default>
        </Field>

    Initially, I used a default choice of " " (a single space), but I changed it to BLANULL so that I can parse an actual word instead of a veritably empty string. Now, even after having uninstalled and reinstalled the feature with this field, I have a choice field that has " " (still a single space) as the only choice. Even more perplexing, BLANULL is actually listed as the default value in both the UI and the object model! What is causing this problem, and how can I circumvent it so that I don't have to manually set this dummy value each time?

    Read the article

  • search & replace on 3000 row, 25 column spreadsheet

    - by Deca
    I'm attempting to clean up data in this (old) spreadsheet and need to remove things like single and double quotes, HTML tags and so on. Trouble is, it's a 3000 row file with 25 columns and every spreadsheet app I've tried (NeoOffice, MS Excel, Apple Numbers) chokes on it. Hard. Any ideas on how else I can clean this thing up for import to MySQL? Clearly I could go through each record manually, row by row, but would like to avoid that if at all possible. Likewise, I could write a PHP script to handle it on import, but don't want to put the server into a death spiral either.
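    If exporting the sheet to CSV first is an option, a small standalone script can do the stripping without any GUI spreadsheet app loading the file at all. Below is a rough sketch of that approach in Python (the same idea works in PHP run from the command line rather than on import); the file names data.csv and clean.csv and the exact characters to strip are placeholders to adapt to the real data.

        import csv
        import re

        tag_re = re.compile(r'<[^>]+>')   # matches HTML tags to strip

        # Hypothetical file names: a CSV export of the sheet in, a cleaned copy out.
        with open('data.csv', newline='', encoding='utf-8') as src, \
             open('clean.csv', 'w', newline='', encoding='utf-8') as dst:
            reader = csv.reader(src)
            writer = csv.writer(dst)
            for row in reader:
                cleaned = []
                for cell in row:
                    cell = tag_re.sub('', cell)                      # drop HTML tags
                    cell = cell.replace('"', '').replace("'", '')    # drop quotes
                    cleaned.append(cell.strip())
                writer.writerow(cleaned)

    A 3000x25 file is tiny for a script like this, so it should run in well under a second, and the cleaned CSV can then be brought into MySQL with LOAD DATA INFILE or a regular import tool.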

    Read the article

  • WPF ListView.CurrentChanged too fast for binding

    - by matt
    My case:

        - MVVM
        - ListView + Details (custom UserControl)
        - List bound to MV.Items (IsSynchronizedWithCurrent=true)
        - Details bound to MV.Items.Current
        - MV.Items.Count == 100
        - about 0.2 sec to read details (lazy mode)

    When I hold the down arrow on the list, very strange things happen: the list item order changes, the current item changes in random order, CPU usage climbs drastically, and eventually everything hangs. I've read some posts saying that one should start a timer or run the handler in the background, but I am not able to do that, since WPF does all the binding for me. Is there some way to instruct the binding in my DetailsControl to wait a while before accepting CurrentItem? Or should I just give up on the clean solution and write custom code in my MV to handle that?

    Read the article

  • SQL Server Index cost

    - by yellowstar
    I have read that one of the tradeoffs for adding table indexes in SQL Server is the increased cost of insert/update/delete queries to benefit the performance of select queries. I can conceptually understand what happens in the case of an insert because SQL Server has to write entries into each index matching the new rows, but update and delete are a little more murky to me because I can't quite wrap my head around what the database engine has to do. Let's take DELETE as an example and assume I have the following schema (pardon the pseudo-SQL):

        TABLE Foo
            col1 int,
            col2 int,
            col3 int,
            col4 int
            PRIMARY KEY (col1, col2)
            INDEX IX_1 (col3) INCLUDE (col4)

    Now, if I issue the statement

        DELETE FROM Foo WHERE col1 = 12 AND col2 > 34

    I understand what the engine must do to update the table (or clustered index if you prefer). The index is set up to make it easy to find the range of rows to be removed and do so. However, at this point it also needs to update IX_1, and the query that I gave it gives no obvious efficient way for the database engine to find the rows to update. Is it forced to do a full index scan at this point? Does the engine read the rows from the clustered index first and generate a smarter internal delete against the index? It might help me to wrap my head around this if I understood better what is going on under the hood, but I guess my real question is this. I have a database that is spending a significant amount of time in delete and I'm trying to figure out what I can do about it. When I display the execution plan for the deletion, it just shows an entry for "Clustered Index Delete" on table Foo which lists in the details section the other indices that need to be updated, but I don't get any indication of the relative cost of these other indices. Are they all equal in this case? Is there some way that I can estimate the impact of removing one or more of these indices without having to actually try it?

    Read the article

  • Visual Artifacts in Visual Studio 2010

    - by Simon Chadwick
    I'm using VS 2010 on Windows Server 2003, running on a Dell Inspiron 9400 laptop. VS 2010 runs fine, except for persistent and random screen re-drawing issues. Samples of these are here. These artifacts occur as the mouse moves over items that highlight on a mouse-over event, while scrolling, and when switching tabs. VS 2008 has none of these issues, so I assume that it is related to VS 2010's use of WPF. Could it be that my video card or driver is not up to the task of rendering WPF? Some other WPF applications (not Silverlight) also have some of these screen repainting problems. I have tried a variety of settings in System Properties--Advanced--Performance Options--Visual Effects, and in the related "Advanced" tab, Processor Scheduling is adjusted for best performance of programs. Many thanks for any suggestions!

    Read the article

  • How to have localized style when writing cell with xlwt

    - by lfagundes
    I'm writing an Excel spreadsheet with Python's xlwt and I need numbers to be formatted using "." as the thousands separator, as it is in Brazilian Portuguese. I have tried:

        style.num_format_str = r'#,##0'

    and it sets the thousands separator as ','. If I try setting num_format_str to '#.##0', I get numbers formatted as 1234.000 instead of 1.234. If I open the document in OpenOffice and format the cells, I can set the language of the cell to "Portuguese (Brazil)" and then OpenOffice will show the format code as "#.##0", but I can't find a way to set the cell's language to Brazilian Portuguese from xlwt. Any ideas?
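    One workaround, if you can't rely on the locale of whatever application opens the file, is to pre-format the value in Python and write it as text instead of as a number (with the usual caveat that the cell then stops being numeric for formulas). A minimal sketch, assuming the pt_BR locale is installed on the machine running the script (on Windows the locale name may differ, e.g. 'Portuguese_Brazil'):

        import locale
        import xlwt

        # Assumes a Brazilian Portuguese locale is available on this system.
        locale.setlocale(locale.LC_ALL, 'pt_BR.UTF-8')

        book = xlwt.Workbook()
        sheet = book.add_sheet('Sheet1')

        value = 1234567
        # grouping=True inserts the locale's thousands separator ("." for pt_BR).
        sheet.write(0, 0, locale.format_string('%d', value, grouping=True))
        book.save('formatted.xls')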

    Read the article

  • Easiest way to pass a javascript array or its values to a servlet and convert to a HashMap

    - by Ankur
    I have a JavaScript array. I want to pass its data to a servlet using the ajax() method of jQuery. What is the easiest way to do this? The index values, i.e. the i in array[i], are not in order; they are numbers that have meaning themselves, hence I cannot simply loop through and create a GET queryString, or so I believe. Maybe I should be converting the JavaScript array to a JSON object and sending that to the server? I am stumped on this one.

    Read the article

  • Unique identifiers for users

    - by Christopher McCann
    If I have a table of a hundred users, normally I would just set up an auto-increment userID column as the primary key. But if suddenly we have a million users or 5 million users, then that becomes really difficult, because I would want to start becoming more distributed, in which case an auto-increment primary key would be useless as each node would be creating the same primary keys. Is the solution to this to use natural primary keys? I am having a real hard time thinking of a natural primary key for this bunch of users. The problem is they are all young people, so they do not have national insurance numbers or any other unique identifier I can think of. I could create a multi-column primary key, but there is still a chance, however minuscule, of duplicates occurring. Does anyone know of a solution? Thanks
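    One common way around the distributed-generation problem is to drop both auto-increment and natural keys and use UUIDs: each node generates identifiers independently, and the collision probability of a random 128-bit value is negligible in practice. A quick illustration in Python of what each node would do (the same facility exists in most languages, and in MySQL itself as UUID()):

        import uuid

        # Each application node can call this on its own; no coordination needed.
        user_id = uuid.uuid4()
        print(user_id)   # e.g. 1b4e28ba-2fa1-4d3b-... (a random 128-bit identifier)

    The usual trade-off is larger keys (stored as CHAR(36) or BINARY(16)) in exchange for never having to coordinate ID generation between nodes.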

    Read the article

  • Reading numeric Excel data as text using xlrd in Python

    - by Brian
    Hi guys, I am trying to read in an Excel file using xlrd, and I am wondering if there is a way to ignore the cell formatting used in the Excel file, and just import all data as text? Here is the code I am using so far:

        import xlrd

        xls_file = 'xltest.xls'
        xls_workbook = xlrd.open_workbook(xls_file)
        xls_sheet = xls_workbook.sheet_by_index(0)
        raw_data = [[''] * xls_sheet.ncols for _ in range(xls_sheet.nrows)]
        raw_str = ''
        feild_delim = ','
        text_delim = '"'
        for rnum in range(xls_sheet.nrows):
            for cnum in range(xls_sheet.ncols):
                raw_data[rnum][cnum] = str(xls_sheet.cell(rnum, cnum).value)
        for rnum in range(len(raw_data)):
            for cnum in range(len(raw_data[rnum])):
                if cnum == len(raw_data[rnum]) - 1:
                    feild_delim = '\n'
                else:
                    feild_delim = ','
                raw_str += text_delim + raw_data[rnum][cnum] + text_delim + feild_delim
        final_csv = open('FINAL.csv', 'w')
        final_csv.write(raw_str)
        final_csv.close()

    This code is functional, but there are certain fields, such as a zip code, that are imported as numbers, so they have the decimal zero suffix. For example, if there is a zip code of '79854' in the Excel file, it will be imported as '79854.0'. I have tried finding a solution in the xlrd spec, but was unsuccessful.
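    One way to keep zip codes and similar whole numbers intact is to look at each cell's type before converting it: xlrd reports every numeric cell as a Python float, so whole numbers can be cast through int first. A small helper along these lines could replace the plain str() call in the loop above:

        import xlrd

        def cell_to_text(cell):
            # Numeric cells come back as floats; render whole numbers without ".0".
            if cell.ctype == xlrd.XL_CELL_NUMBER and cell.value == int(cell.value):
                return str(int(cell.value))
            return str(cell.value)

    With that in place, a cell holding 79854 comes out as '79854' rather than '79854.0', while genuine fractional values are left alone. (Leading zeros that Excel has already dropped from the stored number can't be recovered this way, only the zeros added in the float-to-string step.)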

    Read the article

  • SQL Server 2008 Pivot

    - by Mitch
    I need to show some information in a graph; the data is held in a SQL Server 2008 table. The graph is expecting 2 columns, one for QuestionNumber and the other for Score. The table containing the data has column names that correspond to the question numbers, i.e. A1, A2, A3, A4, B1, B2, B3, B4, C1, C2. Each question is given a score of 1 to 5. I need to show a graph where the X axis shows A1, A2, A3 etc. and the Y axis shows the score. I'm thinking I somehow need to rotate the data to achieve this, but I'm not sure how. Maybe a different technique can achieve this rather than a rotate, so I'm open to any ideas.

    Read the article

  • Embedding a YouTube Video Based on a PHP Variable

    - by jeffpm
    I have a website that I am creating for a class project. The website shows information about bands (which is stored in a mysql database). I am trying to spruce up the site a bit, and I would like to embed a YouTube video based on the band's name. Say, for instance, I have "Led Zeppelin" stored in my database. How can I take a PHP variable containing "Led Zeppelin", search YouTube using that variable, find a random video, and then embed that in the website? From the research I've done, it seems you can only embed YouTube videos if you know the exact address of the video. Thanks!

    Read the article

  • Rails named_scope across multiple tables

    - by wakiki
    I'm trying to tidy up my code by using named_scopes in Rails 2.3.x, but I'm struggling with the has_many :through associations. I'm wondering if I'm putting the scopes in the wrong place... Here's some pseudo code below. The problem is that the :accepted named scope is replicated twice... I could of course call :accepted something different, but these are the statuses on the table and it seems wrong to call them something different. Can anyone shed light on whether I'm doing the following correctly or not? I know Rails 3 is out but it's still in beta, and it's a big project I'm doing so I can't use it in production yet.

        class Person < ActiveRecord::Base
          has_many :connections
          has_many :contacts, :through => :connections

          named_scope :accepted, :conditions => ["connections.status = ?", Connection::ACCEPTED]
          # the :accepted named_scope is duplicated
          named_scope :accepted, :conditions => ["memberships.status = ?", Membership::ACCEPTED]
        end

        class Group < ActiveRecord::Base
          has_many :memberships
          has_many :members, :through => :memberships
        end

        class Connection < ActiveRecord::Base
          belongs_to :person
          belongs_to :contact, :class_name => "Person", :foreign_key => "contact_id"
        end

        class Membership < ActiveRecord::Base
          belongs_to :person
          belongs_to :group
        end

    I'm trying to run something like person.contacts.accepted and group.members.accepted, which are two different things. Shouldn't the named_scopes be in the Membership and Connection classes? One solution is to just call the two different named scopes something different in the Person class, or even to create separate associations (i.e. has_many :accepted_members and has_many :accepted_contacts), but it seems hackish, and in reality I have many more than just accepted (i.e. banned members, ignored connections, pending, requested etc.)

    Read the article

  • How to code FizzBuzz in F#

    - by Russell
    I am currently learning F# and have tried (an extremely) simple example of FizzBuzz. This is my initial attempt:

        for x in 1..100 do
            if x % 3 = 0 && x % 5 = 0 then printfn "FizzBuzz"
            elif x % 3 = 0 then printfn "Fizz"
            elif x % 5 = 0 then printfn "Buzz"
            else printfn "%d" x

    What solutions could be more elegant/simple/better (explaining why) using F# to solve this problem? Note: The FizzBuzz problem is going through the numbers 1 to 100; every multiple of 3 prints Fizz, every multiple of 5 prints Buzz, and every multiple of both 3 AND 5 prints FizzBuzz. Otherwise, the number itself is displayed. Thanks :)

    Read the article

  • Validate a string in a table in SQL Server

    - by Ashish Gupta
    I need to check if a column value (string) in a SQL Server table starts with a small letter and can only contain '_', '-', numbers and alphabets. I know I can use a SQL Server CLR function for that. However, I am trying to implement that validation using a scalar UDF and have made very little progress here... I can use 'NOT LIKE', but I am not sure how to validate the string irrespective of the order of characters, or in other words, how to write a pattern in SQL for this. Am I better off using a SQL CLR function? Any help will be appreciated. Thanks in advance
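    For reference, the rule being described (starts with a lowercase letter, then only letters, digits, '_' and '-') boils down to a single pattern. The Python sketch below is only meant to spell that pattern out; the same shape is what a CLR function's Regex, or a UDF built from LIKE character classes, would need to express:

        import re

        ALLOWED = re.compile(r'[a-z][A-Za-z0-9_-]*')

        def is_valid(value):
            # True only when the whole string matches the allowed shape.
            return ALLOWED.fullmatch(value) is not None

        print(is_valid('abc_12-x'))   # True
        print(is_valid('Abc'))        # False: must start with a small letter
        print(is_valid('ab c'))       # False: space is not an allowed character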

    Read the article

  • Do I need to use decimal places when using floats? Is the "f" suffix necessary?

    - by Paulo Fierro
    I've seen several examples in books and around the web where they sometimes use decimal places when declaring float values even if they are whole numbers, and sometimes using an "f" suffix. Is this necessary? For example: [UIColor colorWithRed:0.8 green:0.914 blue:0.9 alpha:1.00]; How is this different from: [UIColor colorWithRed:0.8f green:0.914f blue:0.9f alpha:1.00f]; Does the trailing "f" mean anything special? Getting rid of the trailing zeros for the alpha value works too, so it becomes: [UIColor colorWithRed:0.8 green:0.914 blue:0.9 alpha:1]; So are the decimal zeros just there to remind myself and others that the value is a float? Just one of those things that has puzzled me so any clarification is welcome :)

    Read the article

  • Setting a VCProject property to default

    - by Ofek Shilon
    I'm trying some VS2005 IDE macros to modify a large amount of projects (~80) within a solution. Some of the properties I wish to set do expose a programmatic interface to 'default', but many others do not. Is there a generic way to set such properties to their default? (eventually meaning erasing them from the .vcproj file) Simplified example, setting some random properties:

        Sub SetSomeProps()
            Dim prj As VCProject
            Dim cfg As VCConfiguration
            Dim toolCompiler As VCCLCompilerTool
            Dim toolLinker As VCLinkerTool
            Dim EnvPrj As EnvDTE.Project

            For Each EnvPrj In DTE.Solution.Projects
                prj = EnvPrj.Object
                cfg = prj.Configurations.Item(1)

                toolLinker = cfg.Tools("VCLinkerTool")
                If toolLinker IsNot Nothing Then
                    ' Some tool props that expose a *default* interface
                    toolLinker.EnableCOMDATFolding = optFoldingType.optFoldingDefault
                    toolLinker.OptimizeReferences = optRefType.optReferencesDefault
                    toolLinker.OptimizeForWindows98 = optWin98Type.optWin98Default
                End If

                toolCompiler = cfg.Tools("VCCLCompilerTool")
                If toolCompiler IsNot Nothing Then
                    ' How to set it to default? (*erase* the property from the .vcproj)
                    toolCompiler.CallingConvention = callingConventionOption.callConventionCDecl
                    toolCompiler.WholeProgramOptimization = False
                    toolCompiler.Detect64BitPortabilityProblems = False
                End If
            Next
        End Sub

    Any advice would be appreciated.

    Read the article

  • Truncate an HTML-formatted text

    - by marharépa
    Hi there! I've got a variable which is formatted with random HTML code. I call it {$text} and I truncate it. The value is, for example:

        <div>Lorem <i>ipsum <b>dolor <span>sit </span>amet</b>, con</i> elit.</div>

    If I truncate the text to its first ~30 letters, I'll get this:

        <div>Lorem <i>ipsum <b>dolor <span>sit

    The problem is, I can't close the elements. So I need a script which checks the <*> elements in the code (where * could be anything), and if they don't have a closing tag, closes them. Please help me with this. Thanks.
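    Whatever language the template runs in, the usual fix is the same: walk the truncated fragment, keep a stack of tags that were opened but not closed, and append the missing closing tags in reverse order. A rough sketch of that idea in Python (the PHP/Smarty version would follow the same shape; the VOID_TAGS set is just an illustrative, non-exhaustive list of tags that never need closing):

        from html.parser import HTMLParser

        VOID_TAGS = {'br', 'img', 'hr', 'input', 'meta', 'link'}

        class OpenTagTracker(HTMLParser):
            def __init__(self):
                super().__init__()
                self.stack = []

            def handle_starttag(self, tag, attrs):
                if tag not in VOID_TAGS:
                    self.stack.append(tag)

            def handle_endtag(self, tag):
                if tag in self.stack:
                    # Remove the most recently opened instance of this tag.
                    idx = len(self.stack) - 1 - self.stack[::-1].index(tag)
                    del self.stack[idx]

        def close_open_tags(fragment):
            tracker = OpenTagTracker()
            tracker.feed(fragment)
            return fragment + ''.join('</%s>' % t for t in reversed(tracker.stack))

        print(close_open_tags('<div>Lorem <i>ipsum <b>dolor <span>sit'))
        # -> <div>Lorem <i>ipsum <b>dolor <span>sit</span></b></i></div>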

    Read the article

  • Cache consistency & spawning a thread

    - by Dave Keck
    Background

    I've been reading through various books and articles to learn about processor caches, cache consistency, and memory barriers in the context of concurrent execution. So far though, I have been unable to determine whether a common coding practice of mine is safe in the strictest sense.

    Assumptions

    The following pseudo-code is executed on a two-processor machine:

        int sharedVar = 0;

        myThread()
        {
            print(sharedVar);
        }

        main()
        {
            sharedVar = 1;
            spawnThread(myThread);
            sleep(-1);
        }

    main() executes on processor 1 (P1), while myThread() executes on P2. Initially, sharedVar exists in the caches of both P1 and P2 with the initial value of 0 (due to some "warm-up code" that isn't shown above.)

    Question

    Strictly speaking – preferably without assuming any particular CPU – is myThread() guaranteed to print 1? With my newfound knowledge of processor caches, it seems entirely possible that at the time of the print() statement, P2 may not have received the invalidation request for sharedVar caused by P1's assignment in main(). Therefore, it seems possible that myThread() could print 0.

    References

    These are the related articles and books I've been reading. (It wouldn't allow me to format these as links because I'm a new user - sorry.)

        - Shared Memory Consistency Models: A Tutorial - hpl.hp.com/techreports/Compaq-DEC/WRL-95-7.pdf
        - Memory Barriers: a Hardware View for Software Hackers - rdrop.com/users/paulmck/scalability/paper/whymb.2009.04.05a.pdf
        - Linux Kernel Memory Barriers - kernel.org/doc/Documentation/memory-barriers.txt
        - Computer Architecture: A Quantitative Approach - amazon.com/Computer-Architecture-Quantitative-Approach-4th/dp/0123704901/ref=dp_ob_title_bk

    Read the article

  • Memory footprint of a parsed XML file in Classic ASP?

    - by Pete Duncanson
    Anyone know of a way to find out the amount of memory/size of an XMLDocument once it has parsed an XML file? I've been doing "beer mat" calculations so far, but have been asked to come up with some more legit numbers through monitoring somehow. I need to create about 1500 XML files (via the FreeThreadedXML-DOM object), which range between 3 and 9 KB in size, and store them in Application vars, but our SysAdmin is worried about us gobbling up too much memory. Other than the crude method of booting up a fresh IIS instance, loading everything in, and monitoring before-and-after memory usage in Task Manager, I can't think of a way of doing it with a bit more accuracy.
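    As a sanity check before doing any measuring: a DOM keeps the whole parsed tree in memory, which is typically several times larger than the raw XML text. The Python snippet below is purely a back-of-the-envelope estimate; the 5x expansion factor is an assumption for illustration, not a measured property of MSXML.

        # Illustrative estimate only; the real factor depends on the parser's internals.
        files = 1500
        avg_raw_kb = (3 + 9) / 2.0   # the question quotes 3-9 KB per file
        expansion = 5                # assumed DOM overhead factor
        estimate_mb = files * avg_raw_kb * expansion / 1024.0
        print(round(estimate_mb, 1), 'MB')   # ~44 MB under these assumptions

    A different factor scales the estimate linearly, so the same arithmetic gives a quick bound once a real per-document measurement is available, but an actual before-and-after measurement is still the only definitive answer.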

    Read the article

  • How do I get the name of the newest file via the Terminal?

    - by Alec
    I'm trying to create a macro for Keyboard Maestro for OS X doing the following:

        1. Get the name of the newest file in a directory on my disk, based on date created;
        2. Paste the text "newest file: " plus the name of the newest file.

    One of its options is to "Execute a shell script", so I thought that would do it for 1. After Googling around a bit I came up with this:

        cd /path/to/directory/
        ls -t | head -n1

    This sorts it right and returns the first filename. However, it also seems to include a line break, which I do not want. As for 2: I can output the text "newest file: " with a different action in the app, and paste the filename behind that. But I'm wondering if you can't return "random text" plus the outcome of the ls command directly. So my question is: can I do this using only the ls command? And how do I get just the name of the latest file without any line breaks or returns?

    Read the article

  • Sorting an array containing strings in Objective-C

    - by jakob
    Hello experts! I have an array named 'names' with strings looking like this:

        ["name_23_something", "name_25_something", "name_2_something"];

    Now I would like to sort this array in ascending order so it looks like this:

        ["name_25_something", "name_23_something", "name_2_something"];

    I guess I should start off by extracting the numbers, since I want the sorting to be done by them:

        for (NSString *name in names) {
            NSArray *nameSegments = [name componentsSeparatedByString:@"_"];
            NSLog(@"number: %@", (NSString *)[nameSegments objectAtIndex:1]);
        }

    I'm thinking of creating a dictionary with the keys, but I'm not sure if that is the correct Objective-C way; maybe there are some methods I could use instead? Could you please help me with some tips or example code on how this sorting should be done in a proper way. Thank you
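    Whatever the language, the usual approach is to sort by a key extracted from each string rather than building a dictionary first: pull out the number between the underscores and compare on that. The Python one-liner below is only meant to illustrate the shape of the comparison; in Objective-C the same logic would live inside the block passed to something like sortedArrayUsingComparator:.

        names = ["name_23_something", "name_25_something", "name_2_something"]

        # Sort by the embedded number, largest first, to match the desired output.
        ordered = sorted(names, key=lambda s: int(s.split('_')[1]), reverse=True)
        print(ordered)   # ['name_25_something', 'name_23_something', 'name_2_something']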

    Read the article

  • Python Mersenne Twister implementation

    - by B Rivera
    I have Python 3.1.2 and I'm using Windows XP. Where can I see Python's implementation of the Mersenne Twister? In the Python docs it is stated that the Mersenne Twister was written in C and the Python History and License ( http://docs.python.org/py3k/license.html?highlight=mersenne%20twister ) states that "The _random module includes code based on a download from http://www.math.keio.ac.jp/matumoto/MT2002/emt19937ar.html." random.py imports _random which apparently has the Mersenne Twister implementation in it. I can't seem to locate _random. Any thoughts?
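    For what it's worth, _random has no .py file to find because it is a C extension (on Windows it is typically built into the Python DLL rather than shipped as a separate .pyd); the C source lives in Modules/_randommodule.c inside the CPython source distribution, which is where the Mersenne Twister code from the Matsumoto download ended up. The module is still importable and usable directly, which can help confirm that random.Random builds on it:

        import _random

        rng = _random.Random()
        print(rng.random())   # the same Mersenne Twister core that random.Random builds on
        print(_random)        # shows that the module is a built-in / C extension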

    Read the article

  • Numeric comparison difficulty in R

    - by Matt Parker
    I'm trying to compare two numbers in R as part of an if-statement condition: (a-b) >= 0.5. In this particular instance, a = 0.58 and b = 0.08... and yet (a-b) >= 0.5 is false. I'm aware of the dangers of using == for exact number comparisons, and this seems related: (a - b) == 0.5 is false, while all.equal((a - b), 0.5) is true. The only solution I can think of is to have two conditions: (a-b) > 0.5 | all.equal((a-b), 0.5). This works, but is that really the only solution? Should I just swear off the = family of comparison operators forever?

    Read the article

  • loops and conditionals inside triggers

    - by Ying
    I have this piece of logic I would like to implement as a trigger, but I have no idea how to do it! I want to create a trigger that, when a row is deleted, checks to see if the value of one of its columns exists in another table, and if it does, also performs a delete on another table based on another column. So say we had a table Foo that has columns Bar, Baz. This is what I'd be doing if I did not use a trigger:

        function deleteFromFooTable(FooId) {
            SELECT (Bar, Baz) FROM FooTable WHERE id = FooId
            if not-empty(SELECT * FROM BazTable WHERE id = BazId)
                DELETE FROM BarTable WHERE id = BarId
            DELETE FROM FooTable WHERE id = FooId
        }

    I jumped through some hoops in that pseudo code, but I hope you all get where I'm going. It seems what I would need is a way to do conditionals and to loop (in case of multiple row deletes?) in the trigger statement. So far, I haven't been able to find anything. Is this not possible, or is this bad practice? Thanks!

    Read the article

  • Essential skills of a Data Scientist

    - by harshsinghal
    I would like to know more about the relevant skills in the arsenal of a Data Scientist, and with new technologies coming in every day, how one picks and chooses the essentials. A few ideas germane to this discussion:

        - Knowing SQL and the use of a DB such as MySQL, PostgreSQL was great till the advent of NoSql and non-relational databases. MongoDB, CouchDB etc. are becoming popular to work with web-scale data.
        - Knowing a stats tool like R is enough for analysis, but to create applications one may need to add Java, Python, and such others to the list.
        - Data now comes in the form of text, urls, multi-media to name a few, and there are different paradigms associated with their manipulation.
        - What about cluster computing, parallel computing, the cloud, Amazon EC2, Hadoop?
        - OLS Regression now has Artificial Neural Networks, Random Forests and other relatively exotic machine learning/data mining algos. for company.

    Thoughts?

    Read the article
