Search Results

Search found 18249 results on 730 pages for 'real world haskell'.


  • How to solve generic algebra using solver/library programmatically? Matlab, Mathematica, Wolfram etc?

    - by DevDevDev
    I'm trying to build an algebra trainer for students. I want to construct a representative problem, define constraints and relationships on its parameters, and then generate a batch of LaTeX-formatted problems from that representation.

    As an example, a specific question might be: if y < 0 and (x+3)(y-5) = 0, what is x? Answer: x = -3. I would like to encode this as a LaTeX-formatted template such as "If $y<0$ and $(x+constant_1)(y+constant_2)=0$, what is the value of x?" with answer -constant_1, and plug into my problem solver the constraints:

        constant_1 > 0, constant_1 < 60, constant_1 = INTEGER
        constant_2 < 0, constant_2 > -60, constant_2 = INTEGER

    It would then randomly construct pairs of (constant_1, constant_2) that I can feed into my LaTeX generator. Obviously this is an extremely simple example with no real "solving", but hopefully it gets the point across. Things I'm looking for, ideally in priority order:

    * Can solve algebra problems
    * Defining relationships is relatively straightforward
    * Rich support for LaTeX formatting (not just writing encoded strings)

    Thanks!
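    For the trivial template above, the generation step alone doesn't need a full CAS. A minimal sketch in Python (template filling only; the constraint ranges come from the question, everything else is illustrative and not any particular solver's API):

        import random

        def generate_problem():
            # Constraints from the question: 0 < constant_1 < 60, -60 < constant_2 < 0, integers.
            c1 = random.randint(1, 59)
            c2 = random.randint(-59, -1)
            problem = (r"If $y<0$ and $(x+{c1})(y{c2})=0$, "
                       r"what is the value of $x$?").format(c1=c1, c2=c2)
            return problem, -c1  # LaTeX text and the expected answer

    A real trainer would hand the constraint set to a solver (SymPy, Mathematica, etc.) rather than hard-coding the ranges, so dependent relationships between constants are honoured automatically.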

    Read the article

  • Build a Visual Studio Project without access to referenced dlls

    - by David Reis
    I have a project with a set of binary dependencies (assembly DLLs for which I do not have the source code). At runtime those dependencies must be pre-installed on the machine, and at compile time they must be present in the source tree, e.g. in a lib folder. As I'm also making the source code for this program available, I would like to offer a simple download-and-build experience. Unfortunately I cannot redistribute the DLLs, and that complicates things, since Visual Studio won't link the project without access to the referenced DLLs.

    Is there any way to enable this project to be built and linked in the absence of the real referenced DLLs? Maybe there's a way to tell Visual Studio to link against an auto-generated stub of the DLL, so that it can rebuild without the original? Maybe there's a third-party tool that will do this? Any clues or best practices in this area? I realize the person must have access to the DLLs to run the code, so it makes sense that they could add them to the build process; I'm just trying to save them the pain of collecting all the DLLs and placing them in the lib folder manually.

    Read the article

  • Why is debugging better in an IDE?

    - by Bill Karwin
    I've been a software developer for over twenty years, programming in C, Perl, SQL, Java, PHP, JavaScript, and recently Python. I've never had a problem I could not debug using some careful thought and well-placed debugging print statements. I respect that many people say my techniques are primitive, and that using a real debugger in an IDE is much better. Yet from my observation, IDE users don't appear to debug faster or more successfully than I can, using my stone knives and bear skins. I'm sincerely open to learning the right tools; I've just never been shown a compelling advantage to using visual debuggers. Moreover, I have never read a tutorial or book that showed how to debug effectively using an IDE, beyond the basics of how to set breakpoints and display the contents of variables.

    What am I missing? What makes IDE debugging tools so much more effective than thoughtful use of diagnostic print statements? Can you suggest resources (tutorials, books, screencasts) that show the finer techniques of IDE debugging?

    Sweet answers! Thanks much to everyone for taking the time. Very illuminating. I voted up many, and voted none down. Some notable points:

    * Debuggers can help me do ad hoc inspection or alteration of variables, code, or any other aspect of the runtime environment, whereas manual debugging requires me to stop, edit, and re-execute the application (possibly requiring recompilation).
    * Debuggers can attach to a running process or use a crash dump, whereas with manual debugging, "steps to reproduce" a defect are necessary.
    * Debuggers can display complex data structures, multi-threaded environments, or full runtime stacks easily and in a more readable manner.
    * Debuggers offer many ways to reduce the time and repetitive work needed for almost any debugging task.
    * Visual debuggers and console debuggers are both useful, and have many features in common.
    * A visual debugger integrated into an IDE also gives you convenient access to smart editing and all the other features of the IDE, in a single integrated development environment (hence the name).

    Read the article

  • start-stop-daemon quoted arguments misinterpreted

    - by Martin Westin
    Hi, I have been trying to write an init script using start-stop-daemon. I am stuck on the arguments to the daemon. I want to keep these in a variable at the top of the script, but I can't get the quoting to filter down correctly. I'll use ls here so we don't have to look at binaries and arguments that most people won't know or care about. The end result I am looking for is for start-stop-daemon to run:

        ls -la "/folder with space/"

    with this script:

        DAEMON=/usr/bin/ls
        DAEMON_OPTS='-la "/folder with space/"'
        start-stop-daemon --start --make-pidfile --pidfile $PID --exec $DAEMON -- $DAEMON_OPTS

    Double-escaping the options and trying innumerable variations of quotation do not help: by the time they reach the daemon they are always mangled. Enclosing $DAEMON_OPTS in quotes changes things: then everything is seen as one single quoted argument (never the right number, though). Echoing the command line (start-stop-daemon ...) prints exactly the right thing to screen, but the daemon (the real one, not ls) complains about the wrong number of arguments. How do I specify a variable so that quotes inside it are carried along to the daemon correctly? Thanks, Martin
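    For reference, a sketch of the two usual workarounds (both rely on the script's own shell doing the word-splitting before start-stop-daemon ever sees the options): let eval re-parse the line so the embedded quotes take effect, or, if the script runs under bash rather than plain sh, keep the options in an array.

        # POSIX sh: eval re-parses the line, so the quotes inside DAEMON_OPTS are honoured
        eval start-stop-daemon --start --make-pidfile --pidfile "$PID" \
            --exec "$DAEMON" -- $DAEMON_OPTS

        # bash only: an array keeps each argument intact without re-parsing
        DAEMON_ARGS=(-la "/folder with space/")
        start-stop-daemon --start --make-pidfile --pidfile "$PID" \
            --exec "$DAEMON" -- "${DAEMON_ARGS[@]}"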

    Read the article

  • not able to run c/cpp execs in eclipse cdt

    - by user1658323
    I installed Eclipse and then CDT on an Ubuntu system recently and was trying to create my first runnable C/C++ project. I installed g++ as well, and then created the first executable C++ 'Hello World' project; some files are created... then some issues:

    1) Even though Build Automatically is selected, I have to go to the project and do a Build Project to build it manually, and this I have to do every time I make a change.

    2) After building manually, some new folders are created with Binaries and Debug files, and I can see g++ commands being executed in the console. The project binary is output to both the Debug and Binaries folders. But I am not able to run it through the green Play button or any other way in Eclipse. Even Run Configurations shows no option for a C/C++ project, though I can go to a terminal and run the binary myself with ./. But I want to be able to run and debug this through Eclipse. Please help me fix this problem, as I really love Eclipse and have some C/C++ assignments coming up soon.

    Console output when doing a manual project build:

        Build of configuration Debug for project qwe
        make all
        Building file: ../src/qwe.cpp
        Invoking: GCC C++ Compiler
        g++ -O0 -g3 -Wall -c -fmessage-length=0 -MMD -MP -MF"src/qwe.d" -MT"src/qwe.d" -o "src/qwe.o" "../src/qwe.cpp"
        Finished building: ../src/qwe.cpp
        Building target: qwe
        Invoking: GCC C++ Linker
        g++ -o "qwe" ./src/qwe.o
        Finished building target: qwe
        Build Finished

    Read the article

  • subquery in join with doctrine dql

    - by Martijn de Munnik
    I want to use DQL to create a query which looks like this in SQL:

        select e.*
        from e
        inner join (
            select uuid, max(locale) as locale
            from e
            where locale = 'nl_NL' or locale = 'nl'
            group by uuid
        ) as e_ on e.uuid = e_.uuid and e.locale = e_.locale

    I tried to use QueryBuilder to generate the query and subquery. I think they do the right thing by themselves, but I can't combine them in the join statement. Does anybody know if this is possible with DQL? I can't use native SQL because I want to return real objects, and I don't know for which object this query is run (I only know the base class, which has the uuid and locale properties).

        $subQueryBuilder = $this->_em->createQueryBuilder();
        $subQueryBuilder
            ->addSelect('e.uuid, max(e.locale) as locale')
            ->from($this->_entityName, 'e')
            ->where($subQueryBuilder->expr()->in('e.locale', $localeCriteria))
            ->groupBy('e.uuid');

        $queryBuilder = $this->_em->createQueryBuilder();
        $queryBuilder
            ->addSelect('e')
            ->from($this->_entityName, 'e')
            ->join('('.$subQueryBuilder.') as', 'e_')
            ->where('e.uuid = e_.uuid')
            ->andWhere('e.locale = e_.locale');

    Read the article

  • How do you slow down the output from a DOS / Windows command prompt?

    - by JW
    I have lots of experience writing PHP scripts that run in the context of a web server and almost no experience writing PHP scripts for CLI or GUI output. I have used the command line on Linux but do not have much experience with DOS. Let's say I have a PHP script like this:

        <?php
        echo('Hello world');
        for ($idx = 0; $idx < 100; $idx++) {
            echo 'I am line ' . $idx . PHP_EOL;
        }

    Then I run it at my DOS command prompt:

        php helloworld.php

    This spurts out the output quickly and I have to scroll the DOS command window up to see it. I want to see the output one screenful at a time. How do you do that, from the perspective of a DOS user? Furthermore, although this is not my main question, I would also be interested in knowing how to make the PHP script wait for input from the command prompt.
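    For the first part, cmd.exe has a pager just like Unix shells do: piping the script's output through more shows it one screenful at a time.

        php helloworld.php | more

    For the second part, in CLI mode PHP exposes the STDIN stream, so reading from it makes the script pause. A minimal sketch (the prompt text is only illustrative):

        <?php
        echo 'Press Enter to continue...';
        fgets(STDIN);   // blocks until the user presses Enter (CLI only)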

    Read the article

  • Liskov Substitution and Composition

    - by FlySwat
    Let's say I have a class like this:

        public sealed class Foo
        {
            public void Bar()
            {
                // Do Bar stuff
            }
        }

    And I want to extend it to add something beyond what an extension method could do... My only option is composition:

        public class SuperFoo
        {
            private Foo _internalFoo;

            public SuperFoo()
            {
                _internalFoo = new Foo();
            }

            public void Bar()
            {
                _internalFoo.Bar();
            }

            public void Baz()
            {
                // Do Baz stuff
            }
        }

    While this works, it is a lot of work... and I still run into a problem:

        public void AcceptsAFoo(Foo a)

    I can pass in a Foo here, but not a SuperFoo, because C# has no idea that SuperFoo truly does qualify in the Liskov Substitution sense. This means that my class extended via composition is of very limited use. So the only way to fix it is to hope that the original API designers left an interface lying around:

        public interface IFoo
        {
            void Bar();
        }

        public sealed class Foo : IFoo
        {
            // etc.
        }

    Now I can implement IFoo on SuperFoo (which, since SuperFoo already mirrors Foo's members, is just a matter of changing the signature):

        public class SuperFoo : IFoo

    And in a perfect world, the methods that consume Foo would consume IFoos:

        public void AcceptsAFoo(IFoo a)

    Now C# understands the relationship between SuperFoo and Foo, thanks to the common interface, and all is well. The big problem is that .NET seals lots of classes that would occasionally be nice to extend, and they don't usually implement a common interface, so API methods that take a Foo would not accept a SuperFoo, and you can't add an overload. So, for all the composition fans out there: how do you get around this limitation? The only thing I can think of is to expose the internal Foo publicly, so that you can pass it on occasion, but that seems messy.

    Read the article

  • What if a large number of objects are passed to my SwingWorker.process() method?

    - by Trejkaz
    I just found an interesting situation. Suppose you have some SwingWorker (I've made this one vaguely reminiscent of my own):

        public class AddressTreeBuildingWorker extends SwingWorker<Void, NodePair> {
            private DefaultTreeModel model;

            public AddressTreeBuildingWorker(DefaultTreeModel model) {
                this.model = model;
            }

            @Override
            protected Void doInBackground() {
                // Omitted; performs variable processing to build a tree of address nodes.
            }

            @Override
            protected void process(List<NodePair> chunks) {
                for (NodePair pair : chunks) {
                    // Actually the real thing inserts in order.
                    model.insertNodeInto(pair.parent, pair.child, pair.parent.getChildCount());
                }
            }

            private static class NodePair {
                private final DefaultMutableTreeNode parent;
                private final DefaultMutableTreeNode child;

                private NodePair(DefaultMutableTreeNode parent, DefaultMutableTreeNode child) {
                    this.parent = parent;
                    this.child = child;
                }
            }
        }

    If the work done in the background is significant then things work well - process() is called with relatively small lists of objects and everything is happy. The problem is that if the work done in the background is suddenly insignificant for whatever reason, process() receives a huge list of objects (I have seen 1,000,000, for instance), and by the time you process each object you have spent 20 seconds on the Event Dispatch Thread - exactly what SwingWorker was designed to avoid. In case it isn't clear, both of these cases occur with the same SwingWorker class for me; it depends on the input data and the type of processing the caller wanted. Is there a proper way to handle this? Obviously I can intentionally delay or yield the background processing thread so that a smaller number might arrive each time, but this doesn't feel like the right solution to me.
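    One hedged idea, sketched against the field names above (insertInBatches and BATCH are made-up helpers; imports for ArrayList and SwingUtilities omitted): have process() insert only a bounded slice and hand the remainder back to the event queue via SwingUtilities.invokeLater, so no single event-dispatch pass blocks for long even when a million pairs arrive at once. Note that deferred batches can interleave with later process() calls, so if insertion order matters it needs extra care.

        @Override
        protected void process(List<NodePair> chunks) {
            insertInBatches(new ArrayList<NodePair>(chunks));
        }

        private static final int BATCH = 500;

        // Insert at most BATCH nodes, then reschedule the rest so the EDT can
        // repaint and handle input between batches.
        private void insertInBatches(final List<NodePair> pending) {
            int limit = Math.min(pending.size(), BATCH);
            for (NodePair pair : pending.subList(0, limit)) {
                model.insertNodeInto(pair.parent, pair.child, pair.parent.getChildCount());
            }
            if (limit < pending.size()) {
                final List<NodePair> rest =
                        new ArrayList<NodePair>(pending.subList(limit, pending.size()));
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() { insertInBatches(rest); }
                });
            }
        }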

    Read the article

  • Is hardware accelerated CSS3 in Safari 4 & 5 broken, or my CSS and JS?

    - by Dan Forys
    Hi all, I've created a somewhat silly site that shows you the expected weather forecast for any city in the world. On WebKit-based browsers, when the weather is sunny, a sun with CSS3-animated rotating sunbeams appears. This works fine in Chrome. An example (sunny, at the moment) page is: http://willitraintoday.co.uk/iceland/reykjavik/

    However, when viewed in Safari 4 or 5 on Mac Snow Leopard, when the sun appears the sky background appears over it. Weirder still, as the cloud containing the advert moves across the sky, it squashes the main text. When the cloud reaches the left edge, the text appears wider than normal and starts squashing down again. I've tried:

    - Disabling the CSS3 animation; it works fine in Safari
    - Juggling the z-index of various elements, to no avail

    Is there something up with my JavaScript or CSS, or is hardware-accelerated Safari on Snow Leopard broken in this case? It seems not to happen in Safari 4 on Leopard, but I don't have Leopard any more to test myself. Grateful for any opinions!

    Read the article

  • Rack, FastCGI, Lighttpd configuration

    - by zacsek
    Hi! I want to run a simple application using Rack, FastCGI and Lighttpd, but I cannot get it working. I get the following error:

        /usr/lib/ruby/1.8/rack/handler/fastcgi.rb:23:in `initialize': Address already in use - bind(2) (Errno::EADDRINUSE)
            from /usr/lib/ruby/1.8/rack/handler/fastcgi.rb:23:in `new'
            from /usr/lib/ruby/1.8/rack/handler/fastcgi.rb:23:in `run'
            from /www/test.rb:7

    Here is the application:

        #!/usr/bin/ruby
        app = Proc.new do |env|
          [200, {'Content-Type' => 'text/plain'}, "Hello World!"]
        end

        require 'rack'
        Rack::Handler::FastCGI.run app, :Port => 4000

    ... and the lighttpd.conf:

        server.modules += ( "mod_access", "mod_accesslog", "mod_fastcgi" )
        server.port = 80
        server.document-root = "/www"
        mimetype.assign = (
          ".html" => "text/html",
          ".txt"  => "text/plain",
          ".jpg"  => "image/jpeg",
          ".png"  => "image/png"
        )
        index-file.names = ( "test.rb" )
        fastcgi.debug = 1
        fastcgi.server = ( ".rb" => ((
          "host" => "127.0.0.1",
          "port" => 4000,
          "bin-path" => "/www/test.rb",
          "check-local" => "disable",
          "max-procs" => 1
        )) )

    Can someone help me? What am I doing wrong?

    Read the article

  • What algorithms compute directions from point A to point B on a map?

    - by A. Rex
    How do map providers (such as Google or Yahoo! Maps) suggest directions? I mean, they probably have real-world data in some form, certainly including distances but also perhaps things like driving speeds, presence of sidewalks, train schedules, etc. But suppose the data were in a simpler format, say a very large directed graph with edge weights reflecting distances. I want to be able to quickly compute directions from one arbitrary point to another. Sometimes these points will be close together (within one city) while sometimes they will be far apart (cross-country).

    Graph algorithms like Dijkstra's algorithm will not work because the graph is enormous. Luckily, heuristic algorithms like A* will probably work. However, our data is very structured, and perhaps some kind of tiered approach might work? (For example, store precomputed directions between certain "key" points far apart, as well as some local directions. Then directions for two far-away points will involve local directions to a key point, global directions to another key point, and then local directions again.) What algorithms are actually used in practice?

    PS. This question was motivated by finding quirks in online mapping directions. Contrary to the triangle inequality, sometimes Google Maps thinks that X-Z takes longer and is farther than using an intermediate point as in X-Y-Z. But maybe their walking directions optimize for another parameter, too?

    PPS. Here's another violation of the triangle inequality that suggests (to me) that they use some kind of tiered approach: X-Z versus X-Y-Z. The former seems to use the prominent Boulevard de Sebastopol even though it's slightly out of the way. (Edit: this example doesn't work anymore, but did at the time of the original post. The one above still works as of early November 2009.)
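    For reference, the A* the question mentions is compact enough to sketch; real routing engines layer hierarchy and precomputation on top of something like this. The graph interface below is purely illustrative:

        import heapq
        import itertools

        def a_star(neighbors, start, goal, heuristic):
            """Minimal A* sketch. neighbors(n) yields (next_node, edge_cost) pairs;
            heuristic(n, goal) must never overestimate the remaining cost."""
            counter = itertools.count()  # tie-breaker so the heap never compares nodes
            frontier = [(heuristic(start, goal), next(counter), 0, start)]
            best = {start: 0}
            while frontier:
                _, _, cost, node = heapq.heappop(frontier)
                if node == goal:
                    return cost
                for nxt, step in neighbors(node):
                    new_cost = cost + step
                    if new_cost < best.get(nxt, float("inf")):
                        best[nxt] = new_cost
                        heapq.heappush(frontier, (new_cost + heuristic(nxt, goal),
                                                  next(counter), new_cost, nxt))
            return None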

    Read the article

  • Serialized NHibernate Configuration objects - detect out of date or rebuild on demand?

    - by fostandy
    I've been using serialized NHibernate Configuration objects (also discussed here and here) to speed up my application startup from about 8s to 1s. I also use Fluent NHibernate, so the path is more like: ClassMap class definitions in code -> FluentConfiguration -> XML -> NHibernate Configuration -> Configuration serialized to disk.

    The problem with doing this is that one runs the risk of out-of-date mappings: if I change the mappings but forget to rebuild the serialized configuration, then I end up using the old mappings without realising it. This does not always result in an immediate and obvious error during testing, and several times the misbehaviour has been a real pain to detect and fix.

    Does anybody have any idea how I could detect whether my ClassMaps have changed, so that I could either issue an immediate warning/error or rebuild the serialized configuration on demand? At the moment I am comparing timestamps on my compiled assembly against the serialized configuration. This picks up mapping changes, but unfortunately it generates a massive false-positive rate, as ANY change to the code results in an out-of-date flag. I can't move the ClassMaps to another assembly, as they are tightly integrated into the business logic. This has been niggling me for a while, so I was wondering if anybody had any suggestions?
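    One hedged idea, purely a sketch: instead of comparing the whole assembly's timestamp, fingerprint just the ClassMap types (for example by hashing their constructor IL, which is where Fluent NHibernate mappings are usually declared) and store that fingerprint beside the serialized Configuration; rebuild when it differs. All names below are illustrative, and the approach is deliberately approximate - mappings defined outside the constructors, or driven by conventions, would not be caught.

        using System;
        using System.Linq;
        using System.Reflection;
        using System.Security.Cryptography;
        using System.Text;

        static class MappingFingerprint
        {
            // Hash the IL of every ClassMap<T> constructor in the assembly; mapping
            // changes alter that IL, while unrelated code changes usually do not.
            public static string Compute(Assembly assembly)
            {
                var sb = new StringBuilder();
                var mapTypes = assembly.GetTypes()
                    .Where(t => t.BaseType != null && t.BaseType.IsGenericType
                                && t.BaseType.Name.StartsWith("ClassMap"))
                    .OrderBy(t => t.FullName);

                foreach (var type in mapTypes)
                {
                    sb.Append(type.FullName);
                    var ctor = type.GetConstructor(Type.EmptyTypes);
                    var body = ctor != null ? ctor.GetMethodBody() : null;
                    if (body != null)
                        sb.Append(Convert.ToBase64String(body.GetILAsByteArray()));
                }

                using (var sha = SHA1.Create())
                    return Convert.ToBase64String(
                        sha.ComputeHash(Encoding.UTF8.GetBytes(sb.ToString())));
            }
        }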

    Read the article

  • Given a typical Rails 3 environment, why am I unable to execute any tests?

    - by Tom
    I'm working on writing simple unit tests for a Rails 3 project, but I'm unable to actually execute any tests. Case in point: attempting to run the test auto-generated by Rails fails.

        require 'test_helper'

        class UserTest < ActiveSupport::TestCase
          # Replace this with your real tests.
          test "the truth" do
            assert true
          end
        end

    Results in the following error:

        <internal:lib/rubygems/custom_require>:29:in `require': no such file to load -- test_helper (LoadError)
            from <internal:lib/rubygems/custom_require>:29:in `require'
            from user_test.rb:1:in `<main>'

    Commenting out the require 'test_helper' line and attempting to run the test results in this error:

        user_test.rb:3:in `<main>': uninitialized constant Object::ActiveSupport (NameError)

    The Action Pack gems appear to be properly installed and up to date:

        actionmailer (3.0.3, 2.3.5)
        actionpack (3.0.3, 2.3.5)
        activemodel (3.0.3)
        activerecord (3.0.3, 2.3.5)
        activeresource (3.0.3, 2.3.5)
        activesupport (3.0.3, 2.3.5)

    Ruby is at 1.9.2p0 and Rails is at 3.0.3. The layout of my test directory is as follows:

        /fixtures
        /functional
        /integration
        /performance
        /unit
          /helpers
          user_helper_test.rb
          user_test.rb
        test_helper.rb

    I've never seen this problem before - I've run the typical rake tasks for preparing the test environment. I have nothing out of the ordinary in my application or environment configuration files, nor have I installed any unusual gems that would interfere with the test environment.

    Edit: Xavier Holt's suggestion, explicitly specifying the path to test_helper, worked; however, this revealed an issue with ActiveSupport. Now when I attempt to run the test, I receive the following error message (as also listed above):

        user_test.rb:3:in `<main>': uninitialized constant Object::ActiveSupport (NameError)

    But as you can see above, Action Pack is all installed and up to date.
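    For reference, the explicit-path require mentioned in the edit usually looks like the sketch below (adjust the relative path to wherever test_helper.rb actually lives), and running the file with the test directory on the load path avoids the LoadError in the first place. The second error is expected once the require is commented out, since nothing else loads ActiveSupport or the Rails environment.

        # At the top of test/unit/user_test.rb, instead of require 'test_helper':
        require File.expand_path('../../test_helper', __FILE__)

        # Or leave the require alone and run the test with the load path set up:
        #   ruby -Itest test/unit/user_test.rb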

    Read the article

  • How can you make a PHP application require a key to work?

    - by jasondavis
    About four years ago I used a PHP product called aMember Pro. It is a membership script with plugins for something like 30 different payment processors; it was an easy way to set up an automated membership site where users would pay and get access to a certain area. The script used ionCube (http://www.ioncube.com/sa_encoder.php) to prevent non-paying users from using the script: it required that you register the domain the script would be used on, and you were then given a key to enter into a file that would make the system/script work.

    Now I want to know how to do such a thing myself. I know the ionCube encoder just makes it hard to see the code. In the script I mentioned, there was a small section at the top of one of the included pages that was encrypted, and without that part of the code the script would break. In addition, if the owner of the script did not put your domain in the list and give you a valid key, it would not work; likewise, if you tried to use the script on a different domain it would not work. I realize that somewhere in the encrypted code it must either have sent your key to their server and checked that it was valid for the domain name it was on, or, perhaps more likely, the key would simply be verified against the domain the script was running on.

    Here is where the real question is: how would you make a script require the encrypted portion? If I made a script and had a small encrypted part at the top, it seems a user would easily be able to just remove the encrypted part, figure out what the non-encrypted part is doing, and fix it to work. Any ideas?
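    A sketch of the match-the-domain variant described above (no phone-home), assuming the vendor keeps a private secret and hands each customer a key derived from their domain. Function and variable names are illustrative, and none of this survives a determined user editing the decoded source - that part really does rest on the encoder keeping the check opaque.

        <?php
        // Vendor side: generate a key for a registered domain.
        function make_key($domain, $vendor_secret) {
            return strtoupper(substr(hash_hmac('sha256', strtolower($domain), $vendor_secret), 0, 16));
        }

        // Script side (the part that would live inside the encoded block):
        function key_is_valid($key, $vendor_secret) {
            $domain = strtolower($_SERVER['HTTP_HOST']);
            return make_key($domain, $vendor_secret) === strtoupper(trim($key));
        }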

    Read the article

  • Determine if the current thread has low I/O priority

    - by Magnus Hoff
    I have a background thread that does some I/O-intensive background-type work. To be kind to the other threads and processes running, I set the thread priority to "background mode" using SetThreadPriority, like this:

        SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN);

    However, THREAD_MODE_BACKGROUND_BEGIN is only available on Windows Server 2008 or newer and Windows Vista or newer, but the program needs to work well on Windows Server 2003 and XP as well. So the real code is more like this:

        if (!SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN))
        {
            SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_LOWEST);
        }

    The problem with this is that on Windows XP it will totally disrupt the system by using too much I/O. I have a plan for an ugly and shameful way of mitigating this problem, but it depends on being able to determine whether the current thread has low I/O priority or not. Now, I know I can store which thread priority I ended up setting, but the control flow in the program is not really well suited for this. I would rather be able to test later whether or not the current thread has low I/O priority - whether it is in "background mode". GetThreadPriority does not seem to give me this information. Is there any way to determine if the current thread has low I/O priority?

    Read the article

  • GridView will not update underlying data source

    - by John Christensen
    So I've been pounding on this problem all day. I've got a LinqDataSource that points to my model and a GridView that consumes it. When I attempt to do an update on the GridView, it does not update the underlying data source. I thought it might have to do with the LinqDataSource, so I added a SqlDataSource and the same thing happens. The aspx is as follows (the code-behind page is empty):

        <asp:SqlDataSource ID="SqlDataSource1" runat="server"
            ConnectionString="Data Source=devsql32;Initial Catalog=Steam;Persist Security Info=True;"
            ProviderName="System.Data.SqlClient"
            SelectCommand="SELECT [LangID], [Code], [Name] FROM [Languages]"
            UpdateCommand="UPDATE [Languages] SET [Code]=@Code WHERE [LangID]=@LangId">
        </asp:SqlDataSource>

        <asp:GridView ID="_languageGridView" runat="server" AllowPaging="True"
            AllowSorting="True" AutoGenerateColumns="False" DataKeyNames="LangId"
            DataSourceID="SqlDataSource1">
            <Columns>
                <asp:CommandField ShowDeleteButton="True" ShowEditButton="True" />
                <asp:BoundField DataField="LangId" HeaderText="Id" ReadOnly="True" />
                <asp:BoundField DataField="Code" HeaderText="Code" />
                <asp:BoundField DataField="Name" HeaderText="Name" />
            </Columns>
        </asp:GridView>

        <asp:LinqDataSource ID="_languageDataSource"
            ContextTypeName="GeneseeSurvey.SteamDatabaseDataContext" runat="server"
            TableName="Languages" EnableInsert="True" EnableUpdate="true" EnableDelete="true">
        </asp:LinqDataSource>

    What in the world am I missing here? This problem is driving me insane.

    Read the article

  • Executing untrusted code

    - by MainMa
    Hi, I'm building a C# application which uses plug-ins. The application must guarantee to the user that plug-ins cannot do whatever they want on the user's machine, and will have fewer privileges than the application itself (for example, the application can access its own log files, whereas plug-ins cannot). I considered three alternatives.

    1. Using System.AddIn. I tried this alternative first, because it seemed the most powerful, but I'm really disappointed by the need to modify the same code seven times in seven different projects each time I want to change something. Besides, there are a huge number of problems to solve even for a simple Hello World application.

    2. Using System.Activator.CreateInstance(assemblyName, typeName). This is what I used in the preceding version of the application. I can't use it any more, because it does not provide a way to restrict permissions.

    3. Using System.Activator.CreateInstance(AppDomain domain, [...]). That's what I'm trying to implement now, but it seems that the only way to do that is to go through ObjectHandle, which requires serialization for every class used. However, the plug-ins contain WPF UserControls, which are not serializable.

    So is there a way to create plug-ins containing UserControls or other non-serializable objects and to execute those plug-ins with a custom PermissionSet?
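    For the permissions part, a common pattern (sketched below with illustrative names, against the .NET 4 sandboxing model) is to create the restricted AppDomain explicitly and make the plug-in's entry point derive from MarshalByRefObject, so it is proxied across the boundary rather than serialized. This does not by itself solve the WPF UserControl question, since UI elements still cannot cross the AppDomain boundary as proxies.

        using System;
        using System.Security;
        using System.Security.Permissions;

        // Plug-in entry point: proxied, not serialized, across the AppDomain boundary.
        public abstract class PluginEntryPoint : MarshalByRefObject
        {
            public abstract string Describe();
        }

        public static class PluginSandbox
        {
            public static PluginEntryPoint Load(string pluginDir, string assemblyName, string typeName)
            {
                var setup = new AppDomainSetup { ApplicationBase = pluginDir };

                // Execution only; grant further permissions (UI, isolated storage, ...)
                // as needed, but no FileIOPermission, so the plug-in cannot touch the
                // host's log files.
                var permissions = new PermissionSet(PermissionState.None);
                permissions.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));

                AppDomain sandbox = AppDomain.CreateDomain("plugin-sandbox", null, setup, permissions);

                // assemblyName is the plug-in assembly's display name.
                return (PluginEntryPoint)sandbox.CreateInstanceAndUnwrap(assemblyName, typeName);
            }
        }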

    Read the article

  • Event feed implementation - will it scale?

    - by SlappyTheFish
    Situation: I am currently designing a feed system for a social website whereby each user has a feed of their friends' activities. I have two possible methods of generating the feeds and I would like to ask which is best in terms of ability to scale. Events from all users are collected in one central database table, event_log. Users are paired as friends in the table friends. The RDBMS we are using is MySQL.

    Standard method: When a user requests their feed page, the system generates the feed by inner joining event_log with friends. The result is then cached and set to time out after 5 minutes. Scaling is achieved by varying this timeout.

    Hypothesised method: A task runs in the background and, for each new, unprocessed item in event_log, it creates entries in the database table user_feed, pairing that event with all of the users who are friends with the user who initiated the event. One table row pairs one event with one user.

    The problems with the standard method are well known - what if a lot of people's caches expire at the same time? The solution also does not scale well; the brief is for feeds to update as close to real time as possible. The hypothesised solution in my eyes seems much better: all processing is done offline, so no user waits for a page to generate, and there are no joins, so database tables can be sharded across physical machines. However, if a user has 100,000 friends and creates 20 events in one session, then that results in inserting 2,000,000 rows into the database.

    Question: The question boils down to two points:

    1. Is the worst-case scenario mentioned above problematic, i.e. does table size have an impact on MySQL performance, and are there any issues with this mass inserting of data for each event?
    2. Is there anything else I have missed?
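    For concreteness, the fan-out step in the hypothesised method is a single insert-select per batch of unprocessed events. A sketch in MySQL terms (assuming a processed flag on event_log, a friends table holding (user_id, friend_id) pairs in both directions, and that the two statements run inside one transaction; all column names beyond those in the question are illustrative):

        INSERT INTO user_feed (user_id, event_id)
        SELECT f.user_id, e.id
        FROM event_log e
        JOIN friends f ON f.friend_id = e.user_id
        WHERE e.processed = 0;

        UPDATE event_log SET processed = 1 WHERE processed = 0;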

    Read the article

  • Confused about UIView frame property

    - by slowfungus
    I'm building a prototype iPad app that draws diagrams. I have the following view hierarchy:

        UIView
          UIScrollView
            DiagramView : UIView
          TabBar
          NavigationBar

    And a UIViewController subclass holds all that together. Before drawing the diagram the first time, I calculate the dimensions of the diagram and set the DiagramView's frame to that size, and the content size of the scroll view as well:

        - (void)recalculateBounds
        {
            [renderer diagram:diagram shouldDraw:NO];
            SQXRect diagramRect = SQXMakeRect(0.0, 0.0, [diagram bounds].size.width, [diagram bounds].size.height);
            self.frame = diagramRect;
            [(UIScrollView*)[self superview] setContentSize:diagramRect.size];
        }

    I should disclose that the frame is being set to about 1500 x 3500, which I know is ridiculous; I just want to focus on some other parts of the app before I get into optimizing the render code. This works beautifully, except that the rect being passed to drawRect: is not the size that I set, and my drawing is getting clipped at the bottom. It's close to the size I set, but bigger in width and shorter in height. Also of note is the fact that if I force the frame to be much bigger than what I know the diagram needs, then the drawRect: rect is big enough and no clipping occurs.

    Of course this has me wondering if the frame size needs to take into account some other screen real estate like the toolbars, but my reading of the docs tells me the frame is in superview coordinates, which would be the scroll view, so I reckon I shouldn't need to worry about such things. Any idea what is causing this discrepancy?

    Read the article

  • Javascript JQUERY AJAX: When Are These Implemented

    - by Michael Moreno
    I'm learning JavaScript. I've poked around this excellent site to gather intel, and I keep coming across questions and answers about JavaScript, jQuery, jQuery with AJAX, JavaScript with jQuery, and AJAX alone. My conclusion: these are all individually powerful and useful. My confusion: how does one determine which, or which combination, to use?

    I've concluded that JavaScript is readily available in most browsers. For example, I can extend a simple HTML page with:

        <html>
          <body>
            <script type="text/javascript">
              document.write("Hello World!");
            </script>
          </body>
        </html>

    However, within the scope of Python/Django, many of these questions are jQuery- and AJAX-related. At which point, or under what development circumstances, would I conclude that JavaScript alone isn't going to "cut it" and that I need to bring in jQuery and/or AJAX, or some other permutation?
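    For orientation, a small hedged sketch of how the three relate (the element id and URL below are made up): jQuery is just a JavaScript library, so anything it does you could write by hand, and AJAX is a technique - making an HTTP request from the page without reloading it - that you can use with or without jQuery.

        // Plain JavaScript: update an element directly.
        document.getElementById("greeting").textContent = "Hello World!";

        // jQuery: the same thing, with the library smoothing over browser differences.
        $("#greeting").text("Hello World!");

        // AJAX (here via jQuery): fetch data from the server without a page reload.
        $.get("/api/greeting", function (data) {
            $("#greeting").text(data);
        });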

    Read the article

  • How to build a simulation of a login hardware token in .Net

    - by Michel
    Hi, I have a hardware token for remote login to a Citrix environment. When I click the button on the device, I get an ID, and I can use that to log in to the Citrix farm. I can click the button as much as I like, and every time a new code gets generated, and they all work.

    Now I want to secure my private website in the same way, but not with the hardware token - with a 'token app' on my phone. So I run an app on my phone, generate a key, and use that to (partly) authenticate myself on the server. But here's the point: I don't know how it works! How can I generate 1, 2 or 100 keys at one time which the server can see are all valid, but without the server and the phone app having contact (the hardware token is also an 'offline' solution)? Can you give me a hint how I would do this?

    This is what I have thought of so far: the phone app and the server app both know (hardcoded) the same encryption key. The phone app encrypts the current time. The server app decrypts the string back to a time, and if the difference between that time and the actual server time is less than 10 minutes, it is accepted. That makes it difficult for other users to fake a key, but encryption gives such nasty strings to enter, whereas the hardware token gives me nice short codes like 'H554TU8'. And this is probably not how the real hardware token works, because the server and the phone app must 'know' the same encryption key. Michel
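    A sketch of the usual shape of such schemes - hedged: this is how time-based one-time codes are commonly built, not necessarily how any particular hardware token works. Both sides share a secret, the current time is bucketed into windows, and an HMAC of the window number is mapped onto a short typeable alphabet; the server accepts the code for the current window and a neighbouring one to tolerate clock drift. All names below are illustrative.

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class OneTimeCode
        {
            // Both the phone app and the server run this with the same shared secret.
            public static string ForWindow(byte[] sharedSecret, DateTime utcNow, int windowMinutes)
            {
                long window = (long)((utcNow - new DateTime(1970, 1, 1)).TotalMinutes / windowMinutes);
                using (var hmac = new HMACSHA1(sharedSecret))
                {
                    byte[] hash = hmac.ComputeHash(BitConverter.GetBytes(window));

                    // Map the hash onto a short, typeable alphabet ('H554TU8'-style).
                    const string alphabet = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789";
                    var code = new StringBuilder();
                    for (int i = 0; i < 7; i++)
                        code.Append(alphabet[hash[i] % alphabet.Length]);
                    return code.ToString();
                }
            }
        }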

    Read the article

  • Can't seem to sign AWS CloudFront URL properly

    - by Joe Corkery
    Hi everybody, I found a lot of detailed examples online on how to sign an Amazon CloudFront URL for private content. Unfortunately, whenever I implement these examples my URL doesn't seem to work. The resource path is correct, because I can download the file when it is set for world read, but the URL doesn't work when set just for authorized users. The PHP code I am using is below. If anybody has any insights as to what I might be doing wrong (I'm guessing it is something obvious that I am just not seeing right now), it would be greatly appreciated.

        function urlCloudFront($resource) {
            $AWS_CF_KEY = 'APKA...';
            $priv_key = file_get_contents(path_to_pem_file);
            $pkeyid = openssl_get_privatekey($priv_key);
            $expires = strtotime("+ 3 hours");
            $policy_str = '{"Statement":[{"Resource":"'.$resource.'","Condition":{"DateLessThan":{"AWS:EpochTime":'.$expires.'}}}]}';
            $policy_str = trim( preg_replace( '/\s+/', '', $policy_str ) );
            $res = openssl_sign($policy_str, $signature, $pkeyid, OPENSSL_ALGO_SHA1);
            $signature_base64 = (base64_encode($signature));
            $repl = array('+' => '-', '=' => '_', '/' => '~');
            $signature_base64 = strtr($signature_base64, $repl);
            $url = $resource . '?Expires=' . $expires . '&Signature=' . $signature_base64 . '&Key-Pair-Id=' . $AWS_CF_KEY;
            print '<p><A href="' . $url . '">Download VIDA (CloudFront)</A>';
        }

        urlCloudFront("http://mydistcloud.cloudfront.net/mydir/myfile.tar.gz");

    Thanks.

    Read the article

  • alternative to #include within namespace { } block

    - by Jeff
    Edit: I know that method 1 is essentially invalid and I will probably use method 2, but I'm looking for the best hack, or a better solution, to mitigate rampant, mutable namespace proliferation.

    I have multiple class or method definitions in one namespace that have different dependencies, and would like to use the fewest namespace blocks or explicit scopings possible, while still grouping #include directives with the definitions that require them as closely as possible. I've never seen any indication that a preprocessor could be told to exclude namespace {} scoping from #include contents, but I'm here to ask if something similar to this is possible (see the bottom for an explanation of why I want something dead simple):

        // NOTE: apple.h, etc., contents are *NOT* intended to be in namespace Foo!

        // Would prefer something like this:
        namespace Foo {
        #include "apple.h"
        B *A::blah(B const *x) { /* ... */ }

        #include "banana.h"
        int B::whatever(C const &var) { /* ... */ }

        #include "blueberry.h"
        void B::something() { /* ... */ }
        } // namespace Foo

        // ... over this:
        #include "apple.h"
        #include "banana.h"
        #include "blueberry.h"

        namespace Foo {
        B *A::blah(B const *x) { /* ... */ }
        int B::whatever(C const &var) { /* ... */ }
        void B::something() { /* ... */ }
        } // namespace Foo

        // ... or over this:
        #include "apple.h"
        namespace Foo {
        B *A::blah(B const *x) { /* ... */ }
        } // namespace Foo

        #include "banana.h"
        namespace Foo {
        int B::whatever(C const &var) { /* ... */ }
        } // namespace Foo

        #include "blueberry.h"
        namespace Foo {
        void B::something() { /* ... */ }
        } // namespace Foo

    My real problem is that I have projects where a module may need to be branched, but with coexisting components from both branches in the same program. I have classes like FooA, etc., that I've renamed Foo::A in the hope of being able to branch less painfully as Foo::v1_2::A, where some program may need both a Foo::A and a Foo::v1_2::A. I'd like "Foo" or "Foo::v1_2" to show up only once per file, as a single namespace block, if possible. Moreover, I tend to prefer to locate blocks of #include directives immediately above the first definition in the file that requires them. What's my best choice, or alternatively, what should I be doing instead of hijacking the namespaces?

    Read the article

  • Print an array's elements

    - by 1ace1
    Hello guys, this is my first post and I'm still a beginner in the programming world, so bear with me. I just created an array with 100 initialized values and I want to print out 10 elements on each line, so the output would be something like this:

        0 1 2 3 4 5 6 7 8 9
        10 11 12 13 14 15 16 ... 26

    This is the code I used. I managed to do it for the first 10 elements, but I couldn't figure out how to do it for the rest:

        public static void main(String[] args) {
            int[] numbers = { 0,1,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17};
            int i, count = 0;
            for (i = 0; i < numbers.length; i++) {
                System.out.print(numbers[i] + " ");
                count++;
                if (count == 9)
                    for (i = 9; i < numbers.length; i++)
                        System.out.println(numbers[i] + " ");
            }
        }

    Thanks!
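    For what it's worth, the usual way to get the line breaks is a single loop that ends the line after every tenth element, rather than a nested second loop. A sketch:

        for (int i = 0; i < numbers.length; i++) {
            System.out.print(numbers[i] + " ");
            if ((i + 1) % 10 == 0) {
                System.out.println();   // start a new line after every 10th element
            }
        }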

    Read the article
