Search Results

Search found 2134 results on 86 pages for 'numeric limits'.

Page 71/86 | < Previous Page | 67 68 69 70 71 72 73 74 75 76 77 78  | Next Page >

  • How to get the second sub element of every element in a list in R

    - by PaulHurleyuk
    I know I've come across this problem before, but I'm having a bit of a mental block at the moment, and as I can't find it on SO, I'll post it here so I can find it next time. I have a dataframe that contains a field representing an ID label. This label has two parts, an alpha prefix and a numeric suffix. I want to split it apart and create two new fields with these values in them. structure(list(lab = c("N00", "N01", "N02", "B00", "B01", "B02", "Z21", "BA01", "NA03")), .Names = "lab", row.names = c(NA, -9L ), class = "data.frame") df$pre<-strsplit(df$lab, "[0-9]+") df$suf<-strsplit(df$lab, "[A-Z]+") Which gives lab pre suf 1 N00 N , 00 2 N01 N , 01 3 N02 N , 02 4 B00 B , 00 5 B01 B , 01 6 B02 B , 02 7 Z21 Z , 21 8 BA01 BA , 01 9 NA03 NA , 03 So, the first strsplit works fine, but the second gives a list, each element of which has two parts, an empty string and the result I want, and stuffs them both into the dataframe column. How can I select the second sub-element from each element of the list? (Or is there a better way to do this?) Thanks Paul.
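
    One way to grab the second sub-element of every element in that list is sapply() with the "[" extractor; a minimal sketch reusing the column names from the question (the sub() variant below avoids the list handling entirely):

        df$pre <- sapply(strsplit(df$lab, "[0-9]+"), "[", 1)
        df$suf <- sapply(strsplit(df$lab, "[A-Z]+"), "[", 2)

        # or skip strsplit and just strip the unwanted part from each label
        df$pre <- sub("[0-9]+$", "", df$lab)
        df$suf <- sub("^[A-Z]+", "", df$lab)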

    Read the article

  • SQL Server 2005 - Enabling both Named Pipes & TCP/IP protocols?

    - by Clinemi
    We have a SQL Server 2005 database, and currently all our users are connecting to the database via the TCP/IP protocol. The SQL Server Configuration Manager allows you to "enable" both Named Pipes and TCP/IP connections at the same time. Is this a good idea? My question is not whether we should use named pipes instead of TCP/IP, but are there problems associated with enabling both? One of our client's IT guys says that enabling database communication with both protocols will limit the bandwidth that either protocol can use - to like 50% of the total. I would think that the bandwidth that TCP/IP could use would be directly tied (inversely) to the amount of traffic that Named Pipes (or any of the other types of traffic) were occupying on the network at that moment. However, this IT person is indicating that the fact that we have enabled two protocols on the server artificially limits the bandwidth that TCP/IP can use. Is this correct? I did Google searches but could not come up with an answer to this question. Any help would be appreciated.

    Read the article

  • Problems with string parameter insertion into prepared statement

    - by c0d3x
    Hi, I have a database running on an MS SQL Server. My application communicates with it via JDBC and ODBC. Now I try to use prepared statements. When I insert a numeric (Long) parameter everything works fine. When I insert a string parameter it does not work. There is no error message, but an empty result set. WHERE column LIKE ('%' + ? + '%') --inserted "test" -> empty result set WHERE column LIKE ? --inserted "%test%" -> empty result set WHERE column = ? --inserted "test" -> works But I need the LIKE functionality. When I insert the same string directly into the query string (not as a prepared statement parameter) it runs fine. WHERE column LIKE '%test%' It looks a little bit like double quoting to me, but I never used quotes inside a string. I use preparedStatement.setString(int index, String x) for insertion. What is causing this problem? How can I fix it? Thanks in advance.
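
    For reference, a minimal sketch of the usual way to parameterize a LIKE, with the wildcards inside the bound value rather than concatenated in the SQL (table, column and variable names are illustrative). The question reports even this form returning nothing, so treat it as a baseline to diff against the real code; tracing the exact statement the driver sends (for example with SQL Profiler) is usually the quickest way to see what actually arrives at the server.

        // java.sql.Connection "connection" and String "searchTerm" assumed in scope
        String sql = "SELECT * FROM my_table WHERE my_column LIKE ?";
        PreparedStatement ps = connection.prepareStatement(sql);
        ps.setString(1, "%" + searchTerm + "%");   // wildcards travel inside the parameter value
        ResultSet rs = ps.executeQuery();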

    Read the article

  • Moving to an arbitrary position in a file in Python

    - by B Rivera
    Let's say that I routinely have to work with files with an unknown, but large, number of lines. Each line contains a set of integers (space, comma, semicolon, or some non-numeric character is the delimiter) in the closed interval [0, R], where R can be arbitrarily large. The number of integers on each line can be variable. Often times I get the same number of integers on each line, but occasionally I have lines with unequal sets of numbers. Suppose I want to go to Nth line in the file and retrieve the Kth number on that line (and assume that the inputs N and K are valid --- that is, I am not worried about bad inputs). How do I go about doing this efficiently in Python 3.1.2 for Windows? I do not want to traverse the file line by line. I tried using mmap, but while poking around here on SO, I learned that that's probably not the best solution on a 32-bit build because of the 4GB limit. And in truth, I couldn't really figure out how to simply move N lines away from my current position. If I can at least just "jump" to the Nth line then I can use .split() and grab the Kth integer that way. The nuance here is that I don't just need to grab one line from the file. I will need to grab several lines: they are not necessarily all near each other, the order in which I get them matters, and the order is not always based on some deterministic function. Any ideas? I hope this is enough information. Thanks!
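
    One common compromise, sketched below under the stated assumptions (Python 3, the file is scanned once but looked up many times, names are illustrative): pay a single linear pass up front to record each line's byte offset with tell(), after which every later access is a direct seek() to line N, with no repeated traversal and no mmap.

        import re

        def index_lines(path):
            """One pass over the file: byte offset of the start of every line."""
            offsets = []
            with open(path, "rb") as f:            # binary mode keeps tell()/seek() exact
                while True:
                    pos = f.tell()
                    if not f.readline():
                        break
                    offsets.append(pos)
            return offsets

        def get_number(path, offsets, n, k):
            """Return the Kth integer on the Nth line (both 1-based)."""
            with open(path, "rb") as f:
                f.seek(offsets[n - 1])
                line = f.readline()
            return int(re.findall(br"[0-9]+", line)[k - 1].decode("ascii"))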

    Read the article

  • Android Cursor strange behaviour

    - by sandis
    After many hours of bug searching in a big app, I have finally tracked down the bug. This logging captures the problem: Log.d(TAG,"buildList, DBresult.getInt(1): "+DBresult.getInt(1)); Log.d(TAG,"buildList, DBresult.getString(1): "+DBresult.getString(1)); Log.d(TAG,"buildList, DBresult.getInt(4): "+DBresult.getInt(4)); Log.d(TAG,"buildList, DBresult.getString(4): "+DBresult.getString(4)); The resulting output: 05-06 11:11:20.123: DEBUG/TodoList(18943): buildList, DBresult.getInt(1): 0 05-06 11:11:20.123: DEBUG/TodoList(18943): buildList, DBresult.getString(1): false 05-06 11:11:20.123: DEBUG/TodoList(18943): buildList, DBresult.getInt(4): 0 05-06 11:11:20.123: DEBUG/TodoList(18943): buildList, DBresult.getString(4): true There are no background threads running. As you can see, the problem is that '0' is interpreted as false on one occasion, and as true on another. Since I am completely lost on how this can happen, I don't know how to proceed. What could possibly result in such a behaviour? Both the columns are of the type "boolean", i.e. a numeric in SQLite. Unfortunately the string returned is the correct value, while the integer is always 0. If I export the database to my computer and check it with SQLite Administrator I can see that the values are correctly set, it is only the getInt()-function that always returns 0. I know for a fact that this works in other apps I have coded, and I don't know why this has stopped working. I have tried compiling the code under 2.0, 2.0.1 and 2.1, and it always appears. I can make my app runnable again by getting boolean values like this: myBool= (DBresult.getString(0).equals("true")); but that is both ugly and not optimized. Any suggestions on what is causing this behaviour are welcome. Cheers,
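
    That log (getString() returning "false"/"true" while getInt() returns 0 for the same columns) is exactly what SQLite produces when the column actually holds the text 'true'/'false' rather than 0/1: SQLite's type affinity happily stores strings in a column declared boolean, and converting non-numeric text to an integer yields 0. That diagnosis is an inference from the log, not a certainty, but the usual safeguard is to write booleans as 0/1 and convert only at the edges; a hedged sketch with illustrative table and column names:

        // writing: store the boolean as INTEGER 0/1, never as the text "true"/"false"
        // db (SQLiteDatabase), cursor and isDone are assumed to be in scope
        ContentValues values = new ContentValues();
        values.put("done", isDone ? 1 : 0);
        db.insert("todo", null, values);

        // reading: anything non-zero counts as true
        boolean done = cursor.getInt(cursor.getColumnIndexOrThrow("done")) != 0;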

    Read the article

  • Working with datetime type in Quickbooks My Time files

    - by jakemcgraw
    I'm attempting to process Quickbooks My Time imt files using PHP. The imt file is a plaintext XML file. I've been able to use the PHP SimpleXML library with no issues but one: The numeric representation of datetime in the My Time XML files is something I've never seen before: <object type="TIMEPERIOD" id="z128"> <attribute name="notes" type="string"></attribute> <attribute name="start" type="date">308073428.00000000000000000000</attribute> <attribute name="running" type="bool">0</attribute> <attribute name="duration" type="double">3600</attribute> <attribute name="datesubmitted" type="date">310526237.59616601467132568359</attribute> <relationship name="activity" type="1/1" destination="ACTIVITY" idrefs="z130"></relationship> </object> You can see that attribute[@name='start'] has a value of: 308073428.00000000000000000000 This is not Excel's method of storage (308,073,428 is too many days since 1900-01-00), and it isn't Unix Epoch either. So, my question is, has anyone ever seen this type of datetime representation?
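
    My Time is a Mac application and that XML resembles a Core Data XML store, so one plausible reading (an assumption, not something confirmed by any file-format documentation) is that the dates are seconds since Apple's reference date of 2001-01-01 00:00:00 UTC. Under that assumption the PHP conversion is a single offset, since 978307200 is that instant as a Unix timestamp:

        <?php
        // Assumption: value = seconds since 2001-01-01 00:00:00 UTC (Apple reference date)
        $myTimeDate    = 308073428.00000000000000000000;
        $unixTimestamp = (int) round($myTimeDate) + 978307200;   // 978307200 = 2001-01-01T00:00:00Z
        echo gmdate('Y-m-d H:i:s', $unixTimestamp);              // a date in late 2010 if the assumption holds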

    Read the article

  • How do I make this nested for loop, testing sums of cubes, more efficient?

    - by Brian J. Fink
    I'm trying to iterate through all the combinations of pairs of positive long integers in Java and test the sum of their cubes to discover if it's a Fibonacci number. I'm currently doing this by using the value of the outer loop variable as the inner loop's upper limit, with the effect being that the outer loop runs a little slower each time. Initially it appeared to run very quickly--I was up to 10 digits within minutes. But now after 2 full days of continuous execution, I'm only somewhere in the middle range of 15 digits. At this rate it may end up taking a whole year just to finish running this program. The code for the program is below: import java.lang.*; import java.math.*; public class FindFib { public static void main(String args[]) { long uLimit=9223372036854775807L; //long maximum value BigDecimal PHI=new BigDecimal(1D+Math.sqrt(5D)/2D); //Golden Ratio for(long a=1;a<=uLimit;a++) //Outer Loop, 1 to maximum for(long b=1;b<=a;b++) //Inner Loop, 1 to current outer { //Cube the numbers and add BigDecimal c=BigDecimal.valueOf(a).pow(3).add(BigDecimal.valueOf(b).pow(3)); System.out.print(c+" "); //Output result //Upper and lower limits of interval for Mobius test: [c*PHI-1/c,c*PHI+1/c] BigDecimal d=c.multiply(PHI).subtract(BigDecimal.ONE.divide(c,BigDecimal.ROUND_HALF_UP)), e=c.multiply(PHI).add(BigDecimal.ONE.divide(c,BigDecimal.ROUND_HALF_UP)); //Mobius test: if integer in interval (floor values unequal) Fibonacci number! if (d.toBigInteger().compareTo(e.toBigInteger())!=0) System.out.println(); //Line feed else System.out.print("\r"); //Carriage return instead } //Display final message System.out.println("\rDone. "); } } Now the use of BigDecimal and BigInteger was deliberate; I need them to get the necessary precision. Is there anything other than my variable types that I could change to gain better efficiency?
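
    Two observations, offered as a sketch rather than a fix for the program above. First, new BigDecimal(1D+Math.sqrt(5D)/2D) computes 1 + sqrt(5)/2 (about 2.118), not the golden ratio (1+sqrt(5))/2, because of operator precedence. Second, there is a much cheaper membership test that needs no interval at all: n is a Fibonacci number exactly when 5n^2+4 or 5n^2-4 is a perfect square, and that can be checked in exact integer arithmetic (BigInteger.sqrt below requires Java 9 or later):

        import java.math.BigInteger;

        static boolean isPerfectSquare(BigInteger x) {
            BigInteger r = x.sqrt();                     // floor of the square root, Java 9+
            return r.multiply(r).equals(x);
        }

        static boolean isFibonacci(BigInteger n) {
            BigInteger fiveNSquared = n.multiply(n).multiply(BigInteger.valueOf(5));
            return isPerfectSquare(fiveNSquared.add(BigInteger.valueOf(4)))
                || isPerfectSquare(fiveNSquared.subtract(BigInteger.valueOf(4)));
        }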

    Read the article

  • handle large Parcelable ArrayList in Android

    - by Gal Ben-Haim
    I'm developing an Android app that is a client to a JSON webservice API. I have classes of resource objects (some are nested) and I pass results from an IntentService that accesses the webservice using the Parcelable interface for all the resource classes. The webservice returns arrays of results that can be potentially large (because of the nesting; for example, a post object also contains a comments array, and each comment also contains a user object). Currently I'm either inserting the results into a SQLite database or displaying them in a ListView. (My relevant methods are accepting ArrayList<resourceClass> as arguments.) (Some data need to be persistently stored and some should not.) Since I don't know what size of lists I can handle this way without reaching the memory limits, is this good practice? Is it a better idea to save the parsed JSON to a local file immediately and pass the file path to the ResultReceiver, then either insert to database from that file or display the data? Is there a better way to handle this? Btw, I'm parsing the JSON as a stream with Gson's Reader so there shouldn't be memory issues at that stage.

    Read the article

  • How to improve multi-threaded access to Cache (custom implementation)

    - by Andy
    I have a custom Cache implementation, which allows caching TCacheable<TKey> descendants using the LRU (Least Recently Used) cache replacement algorithm. Every time an element is accessed, it is bubbled up to the top of the LRU queue using the following synchronized function: // a single instance is created to handle all TCacheable<T> elements public class Cache() { private object syncQueue = new object(); private void topQueue(TCacheable<T> el) { lock (syncQueue) if (newest != el) { if (el.elder != null) el.elder.newer = el.newer; if (el.newer != null) el.newer.elder = el.elder; if (oldest == el) oldest = el.newer; if (oldest == null) oldest = el; if (newest != null) newest.newer = el; el.newer = null; el.elder = newest; newest = el; } } } The bottleneck in this function is the lock() operator, which limits cache access to just one thread at a time. Question: Is it possible to get rid of lock(syncQueue) in this function while still preserving the queue integrity?
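
    A strict LRU queue cannot be re-linked from several threads without some synchronization, so the usual escape is to relax the requirement rather than the lock: make recency approximate. A hedged sketch of that trade-off (not a drop-in replacement for the class above): each entry keeps a last-access stamp that is updated lock-free, and only eviction, the rare path, takes a lock and scans for the smallest stamp.

        using System.Diagnostics;
        using System.Threading;

        // Hedged sketch: approximate LRU. Touch() takes no shared lock, so concurrent
        // reads no longer serialize on a single queue lock; eviction (elsewhere, under
        // a lock) simply evicts the entry with the smallest LastAccess.
        public class CacheEntry<TValue>
        {
            public TValue Value;
            public long LastAccess;

            public void Touch()
            {
                Interlocked.Exchange(ref LastAccess, Stopwatch.GetTimestamp());
            }
        }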

    Read the article

  • Aligning decimal points in HTML

    - by ijw
    I have a table containing decimal numbers in one column. I'm looking to align them in a manner similar to a word processor's "decimal tab" feature, so that all the points sit on a vertical line. I have two possible solutions at the moment but I'm hoping for something better... Solution 1: Split the numbers within the HTML, e.g. <td><div>1234</div><div class='dp'>.5</div></td> with .dp { width: 3em; } (Yes, this solution doesn't quite work as-is. The concept is, however, valid.) Solution 2: I found mention of <COL ALIGN="CHAR" CHAR="."> This is part of HTML4 according to the reference page, but it doesn't work in FF3.5, Safari 4 or IE7, which are the browsers I have to hand. It also has the problem that you can't pull out the numeric formatting to CSS (although, since it's affecting a whole column, I suppose that's not too surprising). Thus, anyone have a better idea?
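
    A sketch fleshing out Solution 1 with both halves styled, so the decimal points line up on a vertical line (class names and widths are illustrative; the widths need to fit the longest values in the column):

        <td class="num"><span class="int">1234</span><span class="frac">.5</span></td>

        <style>
          td.num       { white-space: nowrap; font-family: monospace; }
          td.num .int  { display: inline-block; width: 6em; text-align: right; }
          td.num .frac { display: inline-block; width: 3em; text-align: left; }
        </style>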

    Read the article

  • apache solr : sum of data resulted from group by

    - by Terance Dias
    Hi, We have a requirement where we need to group our records by a particular field and take the sum of a corresponding numeric field e.g. select userid, sum(click_count) from user_action group by userid; We are trying to do this using Apache Solr and found that there were 2 ways of doing this: Using the field collapsing feature (http://blog.jteam.nl/2009/10/20/result-grouping-field-collapsing-with-solr/) but found 2 problems with this: 1.1. This is not part of a release and is available as a patch, so we are not sure if we can use this in production. 1.2. We do not get the sum back but individual counts, and we need to sum them at the client side. Using the Stats Component along with faceted search (http://wiki.apache.org/solr/StatsComponent). This meets our requirement but it is not fast enough for very large data sets. I just wanted to know if anybody knows of any other way to achieve this. Appreciate any help. Thanks, Terance.

    Read the article

  • Understanding how software testing works and what to test.

    - by RHaguiuda
    Intro: I've seen lots of topics here on SO about software testing and other terms I don't understand. Problem: As a beginner developer I, unfortunately, have no idea how software testing works, not even how to test a simple function. This is a shame, but that's the truth. I also hope this question can help other beginner developers too. Question: Can you help me to understand this subject a little bit more? Maybe some questions to start would help: When I develop a function, how should I test it? For example: when working with a sum function, should I test every input value possible or just some limits? How about testing functions with strings as parameters? In a big program, do I have to test every single piece of code in it? When you guys program, do you test all the code you write? How does automated testing work, and how can I try it? How do tools for automated testing work, and what do they do? I've heard about unit testing. Can I have a brief explanation on this? What is a testing framework? If possible, please post some code with examples to clarify the ideas. Any help on this topic is very welcome! Thanks.
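
    For the "how should I test a sum function" part, a minimal illustration of what a unit test looks like, using JUnit purely as an example (the Calculator class is hypothetical; every language has an equivalent framework). The point is picking a few representative inputs plus the boundary cases, not every possible value:

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class CalculatorSumTest {

            // Calculator.sum is the (hypothetical) function under test

            @Test
            public void addsTwoOrdinaryNumbers() {
                assertEquals(5, Calculator.sum(2, 3));
            }

            @Test
            public void coversTheInterestingBoundaries() {
                assertEquals(0, Calculator.sum(0, 0));                                  // identity
                assertEquals(-3, Calculator.sum(-1, -2));                               // negatives
                assertEquals(Integer.MAX_VALUE, Calculator.sum(Integer.MAX_VALUE, 0));  // upper limit
            }
        }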

    Read the article

  • Issue reading in a cell from Excel with Apache POI

    - by Nick
    I am trying to use Apache POI to read in old (pre-2007 and XLS) Excel files. My program goes to the end of the rows and iterates back up until it finds something that's not either null or empty. Then it iterates back up a few times and grabs those cells. This program works just fine reading in XLSX and XLS files made in Office 2010. I get the following error message: Exception in thread "main" java.lang.NumberFormatException: empty String at sun.misc.FloatingDecimal.readJavaFormatString(Unknown Source) at java.lang.Double.parseDouble(Unknown Source) at the line: num = Double.parseDouble(str); from the code: str = cell.toString(); if (str != "" || str != null) { System.out.println("Cell is a string"); num = Double.parseDouble(str); } else { System.out.println("Cell is numeric."); num = cell.getNumericCellValue(); } where the cell is the last cell in the document that's not empty or null. When I try to print the first cell that's not empty or null, it prints nothing, so I think I'm not accessing it correctly.
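
    One detail worth noting in that snippet: in Java, str != "" compares references, and with || the condition is true for every cell, so an empty cell falls straight into Double.parseDouble("") and produces exactly this NumberFormatException. A hedged sketch that branches on the cell type instead (the CELL_TYPE_* constants are from the POI 3.x API; newer versions use a CellType enum, and the cell variable is assumed in scope):

        double num = 0;
        switch (cell.getCellType()) {
            case Cell.CELL_TYPE_NUMERIC:                     // numeric (and date) cells
                num = cell.getNumericCellValue();
                break;
            case Cell.CELL_TYPE_STRING:
                String s = cell.getStringCellValue().trim();
                if (!s.isEmpty()) {
                    num = Double.parseDouble(s);
                }
                break;
            default:                                         // blank, formula, error cells
                break;                                       // leave num at its default
        }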

    Read the article

  • Technique for selectively formatting data in a PowerShell pipeline and output as HTML

    - by halr9000
    Say that you want to do some fancy formatting of some tabular output from PowerShell, and the destination is to be HTML (either for a webserver, or to be sent in an email). Let's say for example that you want certain numeric values to have a different background color. Whatever. I can think of two solid programmatic ways to accomplish this: output XML and transform with XSLT, or output HTML and decorate with CSS. XSLT is probably the harder of the two (I say that because I don't know it), but from what little I recall, it has the benefit of being able to embed the selection criteria (XPath?) for the aforementioned fancy formatting. CSS on the other hand needs a helping hand. If you wanted a certain cell to be treated specially, then you would need to distinguish it from its siblings with a class, id, or something along those lines. PowerShell doesn't really have a way to do that natively, so that would mean parsing the HTML as it leaves convertto-html and adding, for example, an "emphasis" class: <td class="emphasis">32MB</td> I don't like the idea of the required text parsing, especially given that I would rather be able to somehow emphasize what needs emphasizing in PowerShell before it hits HTML. Is XSLT the best way? Have suggestions for how to mark up the HTML after it leaves convertto-html or ideas of a different way?
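
    A hedged sketch of a third route: build the table rows yourself so the emphasis decision is made in PowerShell while the data is still objects, and no HTML ever needs re-parsing (the property, the 50MB threshold and the file name are illustrative):

        $style = '<style>.emphasis { background-color: #fdd; }</style>'

        $rows = Get-Process | Select-Object -First 10 | ForEach-Object {
            # choose the class here, on the object, rather than patching the HTML afterwards
            $cls = if ($_.WorkingSet -gt 50MB) { ' class="emphasis"' } else { '' }
            "<tr><td>$($_.Name)</td><td$cls>$([math]::Round($_.WorkingSet / 1MB))MB</td></tr>"
        }

        "<html><head>$style</head><body><table>$($rows -join '')</table></body></html>" |
            Out-File report.html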

    Read the article

  • Problems installing PIL after OSX 10.9

    - by user2632417
    I installed Mac OSX 10.9 the day it came out. Afterwards I decided I needed to install PIL. I'd installed it before, but it appeared the update had broken that. When I try to use pip to install PIL, it fails when building _imaging. It appears the root cause is this. /usr/include/sys/cdefs.h:655:2: error: Unsupported architecture There's also a similar error here: /usr/include/machine/limits.h:8:2: error: architecture not supported and here: /usr/include/machine/_types.h:34:2: error: architecture not supported Then there's a whole list of missing types. /usr/include/sys/_types.h:94:9: error: unknown type name '__int64_t' typedef __int64_t __darwin_blkcnt_t; /* total blocks */ ^ /usr/include/sys/_types.h:95:9: error: unknown type name '__int32_t' typedef __int32_t __darwin_blksize_t; /* preferred block size */ ^ /usr/include/sys/_types.h:96:9: error: unknown type name '__int32_t' typedef __int32_t __darwin_dev_t; /* dev_t */ ^ /usr/include/sys/_types.h:99:9: error: unknown type name '__uint32_t' typedef __uint32_t __darwin_gid_t; /* [???] process and group IDs */ ^ /usr/include/sys/_types.h:100:9: error: unknown type name '__uint32_t' typedef __uint32_t __darwin_id_t; /* [XSI] pid_t, uid_t, or gid_t*/ ^ /usr/include/sys/_types.h:101:9: error: unknown type name '__uint64_t' typedef __uint64_t __darwin_ino64_t; /* [???] Used for 64 bit inodes */ Needless to say I don't know where to go from here. I've got a couple of guesses, but I don't even know how to check: a wrong include, probably as a result of a badly configured environment variable; a problem with Xcode's installation / missing command line tools; or messed-up header files. If anyone has any suggestions, either to check one of those possibilities or for one of their own, I'm all ears.
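
    "architecture not supported" coming out of the system headers usually points at the build toolchain rather than at PIL itself. Two things that commonly helped right after the 10.9 update, offered as suggestions to try rather than a confirmed fix for this exact machine:

        xcode-select --install     # reinstall the OS X command line tools and SDK headers
        pip install Pillow         # the maintained fork of PIL; its build tends to cope better with new SDKs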

    Read the article

  • Which fieldtype is best for storing PRICE values?

    - by BerggreenDK
    Hi there, I am wondering what's the best "price field" in MSSQL for a shop-like structure? Looking at this overview: http://www.teratrax.com/sql_guide/data_types/sql_server_data_types.html We have datatypes called money, smallmoney, then we have decimal/numeric and lastly float and real. Name, memory/disk-usage and value ranges: Money: 8 bytes (values: -922,337,203,685,477.5808 to +922,337,203,685,477.5807) Smallmoney: 4 bytes (values: -214,748.3648 to +214,748.3647) Decimal: 9 [default, min. 5] bytes (values: -10^38 +1 to 10^38 -1 ) Float: 8 bytes (values: -1.79E+308 to 1.79E+308 ) Real: 4 bytes (values: -3.40E+38 to 3.40E+38 ) My question is: is it really wise to store price values in those types? What about e.g. INT? Int: 4 bytes (values: -2,147,483,648 to 2,147,483,647) Let's say a shop uses dollars; they have cents, but I don't see prices being $49.2142342, so the use of a lot of decimals showing cents seems a waste of SQL bandwidth. Secondly, most shops wouldn't show any prices near 200.000.000 (not in normal webshops at least... unless someone is trying to sell me a famous tower in Paris). So why not go for an int? An int is fast, it's only 4 bytes, and you can easily make decimals by saving values in cents instead of dollars and then dividing when you present the values. The other approach would be to use smallmoney, which is 4 bytes too, but this will require the math part of the CPU to do the calc, whereas int is pure integer arithmetic... on the downside you will need to divide every single outcome. Are there any "currency"-related problems with regional settings when using smallmoney/money fields? What will these map to in C#/.NET? Any pros/cons? Go for integer prices or smallmoney or some other? What does your experience tell you?
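
    For what it's worth, a common middle ground between the two camps is an exact fixed-scale DECIMAL: cents stay exact (no float rounding), there is no divide-by-100 bookkeeping as with integer cents, and it surfaces as decimal in C#/.NET. A sketch with illustrative precision/scale (DECIMAL(19,4) mirrors the scale money uses):

        CREATE TABLE Product
        (
            ProductId INT IDENTITY(1,1) PRIMARY KEY,
            Name      NVARCHAR(200)  NOT NULL,
            Price     DECIMAL(19, 4) NOT NULL   -- exact, culture-neutral, range far beyond any webshop price
        );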

    Read the article

  • Why doesn't java.lang.Number implement Comparable?

    - by Julien Chastang
    Does anyone know why java.lang.Number does not implement Comparable? This means that you cannot sort Numbers with Collections.sort, which seems to me a little strange. Post discussion update: Thanks for all the helpful responses. I ended up doing some more research about this topic. The simplest explanation for why java.lang.Number does not implement Comparable is rooted in mutability concerns. For a bit of review, java.lang.Number is the abstract super-type of AtomicInteger, AtomicLong, BigDecimal, BigInteger, Byte, Double, Float, Integer, Long and Short. On that list, AtomicInteger and AtomicLong do not implement Comparable. Digging around, I discovered that it is not a good practice to implement Comparable on mutable types because the objects can change during or after comparison, rendering the result of the comparison useless. Both AtomicLong and AtomicInteger are mutable. The API designers had the forethought to not have Number implement Comparable because it would have constrained implementation of future subtypes. Indeed, AtomicLong and AtomicInteger were added in Java 1.5 long after java.lang.Number was initially implemented. Apart from mutability, there are probably other considerations here too. A compareTo implementation in Number would have to promote all numeric values to BigDecimal because it is capable of accommodating all the Number sub-types. The implication of that promotion in terms of mathematics and performance is a bit unclear to me, but my intuition finds that solution kludgy.
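
    The "promote everything to BigDecimal" idea mentioned above can be expressed as an external Comparator instead of changing Number itself; a small sketch (NaN and infinite doubles/floats are left out, since they have no BigDecimal form):

        import java.math.BigDecimal;
        import java.util.Collections;
        import java.util.Comparator;
        import java.util.List;

        public class NumberComparison {
            // toString() gives a textual form BigDecimal can parse for the standard subtypes
            public static final Comparator<Number> BY_VALUE = new Comparator<Number>() {
                public int compare(Number a, Number b) {
                    return new BigDecimal(a.toString()).compareTo(new BigDecimal(b.toString()));
                }
            };

            public static void sortByValue(List<? extends Number> numbers) {
                Collections.sort(numbers, BY_VALUE);
            }
        }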

    Read the article

  • How do I implement configurations and settings?

    - by Malvolio
    I'm writing a system that is deployed in several places and each site needs its own configurations and settings. A "configuration" is a named value that is necessary to a particular site (e.g., the database URL, S3 bucket name); every configuration is necessary, there is not usually a default, and it's typically string-valued. A setting is a named value but it just tweaks the behavior of the system; it's often numeric or Boolean, and there's usually some default. So far, I've been using property files or things like them, but it's a terrible solution. Several times, a developer has added a requirement for a configuration but not added the value to the file for the live configuration, so the new release passed all the tests, then failed when released to live. Better, of course, for every file to be compiled — so if there's a missing configuration, or one of the wrong type, it won't get past the compiler — and inject the site-specific class into the build for each site. As a bonus, a Scala file can easily model more complex values, especially lists, but also maps and tuples. The downside is, the files are sometimes maintained by people who aren't developers, so they have to be pretty self-explanatory, which was the advantage of property files. (Someone explain XML configurations to me: all the complexity of a compilable file but the run-time risk of a property file.) What I'm looking for is an easy pattern for defining a group of required names and allowable values. Any suggestions?
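
    A hedged Scala sketch of the "compiled configuration" idea described above: required configurations are abstract members, so forgetting one is a compile error rather than a live failure, while settings carry defaults. All names and values here are illustrative.

        trait SiteConfig {
          def databaseUrl: String              // configuration: required, no default
          def s3BucketName: String
          def retryLimit: Int = 3              // setting: tweakable, has a default
          def verboseLogging: Boolean = false
        }

        object LiveSiteConfig extends SiteConfig {
          val databaseUrl  = "jdbc:postgresql://db.example.com/app"   // illustrative values
          val s3BucketName = "live-assets"
          override val retryLimit = 5          // site-specific tweak
        }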

    Read the article

  • Threading calls to web service in a web service - (.net 2.0)

    - by Ryan Ternier
    Got a question regarding best practices for doing parallel web service calls, in a web service. Our portal will get a message, split that message into 2 messages, and then do 2 calls to our broker. These need to be on separate threads to lower the timeout. One solution is to do something similar to (pseudo code): XmlNode DNode = GetaGetDemoNodeSomehow(); XmlNode ENode = GetAGetElNodeSomehow(); XmlNode elResponse; XmlNode demResponse; Thread dThread = new Thread(delegate { //Web Service Call GetDemographics d = new GetDemographics(); demResponse = d.HIALRequest(DNode); }); Thread eThread = new Thread(delegate { //Web Service Call GetEligibility ge = new GetEligibility(); elResponse = ge.HIALRequest(ENode); }); dThread.Start(); eThread.Start(); dThread.Join(); eThread.Join(); //combine the resulting XML and return it. //Maybe throw a bit of logging in to make architecture happy Another option we thought of is to create a worker class, and pass it the service information and have it execute. This would allow us to have a bit more control over what is going on, but could add additional overhead. Another option brought up would be 2 asynchronous calls and manage the returns through a loop. When the calls are completed (success or error) the loop picks it up and ends. The portal service will be called about 50,000 times a day. I don't want to gold plate this sucker. I'm looking for something light weight. The services that are being called on the broker do have time out limits set, and are already heavily logged and audited, so I'm not worried on that part. This is .NET 2.0 , and as much as I would love to upgrade I can't right now. So please leave all the goodies of 2.0 out please.
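
    One lighter-weight option, sketched here on the assumption that the broker proxies were generated the usual way for .NET 2.0 asmx clients (in which case they also expose Begin*/End* pairs mirroring HIALRequest from the pseudo code above; if they do not, this does not apply): the two calls overlap on I/O completion threads instead of two hand-rolled Thread objects per request.

        // Hedged sketch: Begin/End async pattern on the generated proxies.
        GetDemographics d  = new GetDemographics();
        GetEligibility  ge = new GetEligibility();

        IAsyncResult demPending = d.BeginHIALRequest(DNode, null, null);
        IAsyncResult elPending  = ge.BeginHIALRequest(ENode, null, null);

        XmlNode demResponse = d.EndHIALRequest(demPending);   // blocks until that call finishes
        XmlNode elResponse  = ge.EndHIALRequest(elPending);
        // combine the resulting XML and return it, as before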

    Read the article

  • Piping EOF problems with stdio and C++/Python

    - by yeus
    I've got some problems with EOF and stdio. I have no idea what I am doing wrong. When I see an EOF in my program I clear stdin, and the next round I try to read in a new line. The problem is: for some reason the getline function immediately (from the second run always, the first works just as intended) returns an EOF instead of waiting for new input from the Python process... Any idea? Alright, here is the code: #include <string> #include <iostream> #include <iomanip> #include <limits> using namespace std; int main(int argc, char **argv) { for (;;) { string buf; if (getline(cin,buf)) { if (buf=="q") break; /*****///do some stuff with input //my actual filter program cout<<buf; /*****/ } else { if ((cin.rdstate() & istream::eofbit)!=0)cout<<"eofbit"<<endl; if ((cin.rdstate() & istream::failbit)!=0)cout<<"failbit"<<endl; if ((cin.rdstate() & istream::badbit)!=0)cout<<"badbit"<<endl; if ((cin.rdstate() & istream::goodbit)!=0)cout<<"goodbit"<<endl; cin.clear(); cin.ignore(numeric_limits<streamsize>::max()); //break;//I am not using break, because I //want more input when the parent //process puts data into stdin; } } return 0; } And in Python: from subprocess import Popen, PIPE import os from time import sleep proc=Popen(os.getcwd()+"/Pipingtest",stdout=PIPE,stdin=PIPE,stderr=PIPE); while(1): sleep(0.5) print proc.communicate("1 1 1") print "running"
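
    One certain contributor on the Python side: Popen.communicate() writes its input, closes the child's stdin and then waits for the process to exit, so it can only be used once per process; after that first call the C++ program sees EOF on cin forever, which matches the eofbit being printed. A hedged sketch that keeps the pipe open instead (Python 2 syntax to match the snippet; it assumes the child terminates each reply with a newline and flushes, e.g. cout << buf << endl):

        from subprocess import Popen, PIPE
        import os

        proc = Popen(os.getcwd() + "/Pipingtest", stdin=PIPE, stdout=PIPE)

        for msg in ["1 1 1", "2 2 2", "q"]:
            proc.stdin.write(msg + "\n")          # stdin stays open between messages
            proc.stdin.flush()
            if msg == "q":
                break
            print proc.stdout.readline().rstrip() # needs a newline-terminated, flushed reply

        proc.stdin.close()
        proc.wait()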

    Read the article

  • How can I limit asp.net control actions based on user role?

    - by Duke
    I have several pages or views in my application which are essentially the same for both authenticated users and anonymous users. I'd like to limit the insert/update/delete actions in formviews and gridviews to authenticated users only, and allow read access for both authed and anon users. I'm using the asp.net configuration system for handling authentication and roles. This system limits access based on path so I've been creating duplicate pages for authed and anon paths. The solution that comes to mind immediately is to check roles in the appropriate event handlers, limiting what possible actions are displayed (insert/update/delete buttons) and also limiting what actions are performed (for users that may know how to perform an action in the absence of a button.) However, this solution doesn't eliminate duplication - I'd be duplicating security code on a series of pages rather than duplicating pages and limiting access based on path; the latter would be significantly less complicated. I could always build some controls that offered role-based configuration, but I don't think I have time for that kind of commitment right now. Is there a relatively easy way to do this (do such controls exist?) or should I just stick to path-based access and duplicate pages? Does it even make sense to use two methods of authorization? There are still some pages which are strictly for either role so I'll be making use of path-based authorization anyway. Finally, would using something other than path-based authorization be contrary to typical asp.net design practices, at least in the context of using the asp.net configuration system?
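
    A hedged sketch of the single-page approach the question describes (control IDs and the role name are illustrative): make the role check once, shape the UI from it, and repeat the same check inside the action events so a hand-crafted postback cannot bypass the hidden buttons.

        protected void Page_Load(object sender, EventArgs e)
        {
            bool canEdit = User.IsInRole("Editor");
            FormView1.DefaultMode = canEdit ? FormViewMode.Edit : FormViewMode.ReadOnly;
            GridView1.AutoGenerateDeleteButton = canEdit;
        }

        protected void GridView1_RowDeleting(object sender, GridViewDeleteEventArgs e)
        {
            if (!User.IsInRole("Editor"))
                e.Cancel = true;   // enforce the rule, not just hide the button
        }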

    Read the article

  • Getting batch_action functionality in Symfony 1.0

    - by Alex Ciminian
    I'm currently working on a web app written in Symfony. I'm supposed to add an "export to CSV" feature in the backend/administration part of the app for some modules. In the list view, there should be an "Export" button which should provide the user with a CSV file of the elements that are displayed (considering filtering criteria). I've created a method in the actions class of the module that takes a comma-separated list of ids and generates the CSV, but I'm not really sure how to add the link to it in the view. The problem is that the view doesn't exist anywhere; it's generated on the fly from the data in the generator.yml configuration file. I've posted the relevant part of the file below. I'm new to Symfony, so any help would be appreciated :). Thanks, Alex Update list: display: [id, =name, indemn, _status, _participants, _approved_, created_at] title: Lista actiuni object_actions: _edit: ~ _delete: ~ actions: _create: ~ export_csv: name: Export to CSV action: CSVExport params: id=csvActionSubmit filters: [name, county_id, _status_filter, activity_id] fields: id: name: Nr. crt. ... Thanks to your advice, I've managed to add a button that is linked to my action. The problem is that I also need to send some parameters to the action, because I may not want all the elements - filters may have been used. Unfortunately, the project is using Symfony 1.0, which doesn't support batch_actions. Currently, I'm working around this with JavaScript (I parse the DOM to get the numeric ids (from the display table) and then build the link for the button). I really think there could be a better way for this.

    Read the article

  • How do I send floats in window messages.

    - by yngvedh
    Hi, What is the best way to send a float in a Windows message using C++ casting operators? The reason I ask is that the approach which first occurred to me did not work. For the record I'm using the standard Win32 function to send messages: PostWindowMessage(UINT nMsg, WPARAM wParam, LPARAM lParam) What does not work: Using static_cast<WPARAM>() does not work since WPARAM is typedef'ed to UINT_PTR and will do a numeric conversion from float to int, effectively truncating the value of the float. Using reinterpret_cast<WPARAM>() does not work since it is meant for use with pointers and fails with a compilation error. I can think of two workarounds at the moment: Using reinterpret_cast in conjunction with the address-of operator: float f = 42.0f; ::PostWindowMessage(WM_SOME_MESSAGE, *reinterpret_cast<WPARAM*>(&f), 0); Using a union: union { WPARAM wParam, float f }; f = 42.0f; ::PostWindowMessage(WM_SOME_MESSAGE, wParam, 0); Which of these is preferred? Is there any other, more elegant way of accomplishing this?
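
    A minimal sketch of a third option: copy the bit pattern with memcpy. It avoids both the value conversion of static_cast and the aliasing trouble of the pointer reinterpret_cast, and a 4-byte float always fits in a WPARAM (4 or 8 bytes).

        #include <windows.h>
        #include <cstring>   // std::memcpy

        WPARAM PackFloat(float f)
        {
            WPARAM w = 0;
            std::memcpy(&w, &f, sizeof f);   // copy the float's bits into the low bytes
            return w;
        }

        float UnpackFloat(WPARAM w)
        {
            float f;
            std::memcpy(&f, &w, sizeof f);
            return f;
        }

    The posting side then passes PackFloat(42.0f) as the wParam argument and the receiving handler calls UnpackFloat(wParam); the same trick works for LPARAM.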

    Read the article

  • Big-O for calculating all routes from GPS data

    - by HH
    A non-critical GPS module uses lists because it needs to be modifiable: new routes added, new distances calculated, continuous comparisons. Or so I thought, but my team member wrote something I find very hard to get into. His pseudo code: int k =0; a[][] <- create mapModuleNearbyDotList -array //CPU O(n) for(j = 1 to n) // O(nlog(m)) for(i =1 to n) for(k = 1 to n) if(dot is nearby) adj[i][j]=min(adj[i][j], adj[i][k] + adj[k][j]); His ideas: transformation of lists to tables. His worst-case time complexity is O(n^3), where n is the number of elements in his so-called table. Exception to the last point with a finite structure: O(mlog(n)), where n is the number of vertices and m is the number of neighbour vertices. Questions about his ideas: why waste resources transforming constantly-modified lists to a table? Fast? The only point where I to some extent agree, but I cannot understand the same upper limit n for each for-loop -- perhaps he supposed it circular. Why does the code take O(mlog(n)) time as a finite structure? The term finite may be wrong; explicit?
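
    For what it is worth, the relaxation line in that pseudo code, adj[i][j] = min(adj[i][j], adj[i][k] + adj[k][j]), is the Floyd-Warshall all-pairs shortest path update, which is where the O(n^3) comes from. A hedged sketch of the textbook form (assuming adj is the n x n distance matrix his table idea produces); note that the intermediate vertex k must be the outermost loop for the result to be correct:

        for (int k = 0; k < n; k++)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (adj[i][k] + adj[k][j] < adj[i][j])
                        adj[i][j] = adj[i][k] + adj[k][j];
        // Theta(n^3) time and Theta(n^2) space for n vertices, however many edges there are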

    Read the article

  • SQL Databases and table design/organization

    - by John McMullen
    (NOOB disclaimer) I'm working on a system (a type of map) that is accessed mostly via 3 fields: ID (auto incremented), X coordinate, and Y coordinate. As it is right now, I have all data on the map stored in 1 table. Whenever the map display is loaded it simply queries the database for contents in x and y, and the DB gives the data (other fields in the same entry). If an item on the map is doing something, it has a flag saying it's doing something, and then has an ID of the action in another table holding that type of 'actions'. Essentially, all map data is stored in 1 table. All actions of a certain type are stored in their own table. I'm a noob, and I'm wondering what the most effective/efficient structure is for such a design (a map that has items, and each item has stats/actions). I'm using PHP atm, using standard SQL queries to get my data. Should I split up the tables so that there are only x number of entries in a table (coord range limits)? Should it just keep growing and growing? There are a lot of queries to the table... so I'm just trying to see what is best :/
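
    Before splitting the table by coordinate ranges, it is usually worth making sure the x/y lookups are index-backed; a single composite index carries a viewport-style query a long way. A sketch with illustrative table and column names:

        CREATE INDEX idx_map_items_xy ON map_items (x, y);

        -- a typical viewport query that can use the index
        SELECT id, x, y, item_type, action_id
        FROM map_items
        WHERE x BETWEEN 100 AND 140
          AND y BETWEEN 220 AND 260;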

    Read the article
