Search Results

Search found 32492 results on 1300 pages for 'reporting database'.


  • Creating LINQ to SQL Data Models' Data Contexts with ASP.NET MVC

    - by Maxim Z.
    I'm just getting started with ASP.NET MVC, mostly by reading ScottGu's tutorial. To create my database connections, I followed the steps he outlined: create a LINQ-to-SQL .dbml model, add the database tables through the Server Explorer, and finally create a DataContext class. That last part is where I'm stuck. In this class, I'm trying to create methods that work with the exposed data. Following the example in the tutorial, I created this:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;

        namespace MySite.Models
        {
            public partial class MyDataContext
            {
                public List<Post> GetPosts()
                {
                    return Posts.ToList();
                }

                public Post GetPostById(int id)
                {
                    return Posts.Single(p => p.ID == id);
                }
            }
        }

    As you can see, I'm trying to use my Post table, but the compiler doesn't recognize the "Posts" part of my code. What am I doing wrong? I have a feeling that my problem is related to my not adding the data tables correctly, but I'm not sure. Thanks in advance.
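
    A common cause (a guess, since the generated designer code isn't shown) is that the partial class doesn't match the namespace and class name of the designer-generated DataContext, so the compiler treats it as an unrelated type with no Posts property. A minimal sketch, assuming the .dbml designer produced a context named MyDataContext in MySite.Models:

        using System.Collections.Generic;
        using System.Linq;

        namespace MySite.Models // must match the namespace in the .dbml designer file
        {
            // "partial" merges with the generated class only when the namespace
            // and class name are identical; otherwise Posts is undefined here.
            public partial class MyDataContext
            {
                public List<Post> GetPosts()
                {
                    return this.Posts.ToList(); // Posts is the generated Table<Post> property
                }
            }
        }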

    Read the article

  • How to remove the error "Can't find PInvoke DLL SQLite.Interop.dll"

    - by Shailesh Jaiswal
    I am developing a Windows Mobile application that uses an SQLite database. I connect to the database with the following code:

        SQLiteConnection cn = new SQLiteConnection();
        SQLiteDataReader SQLiteDR;
        cn.ConnectionString = @"Data Source=F:\CompNetDB.db3";
        cn.Open();
        SQLiteCommand cmd = new SQLiteCommand();
        cmd.CommandText = "select * from CustomerInfo";
        cmd.CommandType = CommandType.Text;
        cmd.Connection = cn;
        SQLiteDR = cmd.ExecuteReader();

    When this runs I get the error "Can't find PInvoke DLL SQLite.Interop.dll". I have added a reference to System.Data.SQLite from the \SQLite.NET\bin\CompactFramework folder, which is installed by default when SQLite is installed. The same folder contains a file named SQLite.Interop.066.DLL, but when I try to add a reference to it I get an error saying the DLL cannot be added. Are SQLite.Interop.dll and SQLite.Interop.066.DLL the same thing? How do I resolve the "Can't find PInvoke DLL SQLite.Interop.dll" error? Is there a mistake in my code, or am I missing something in my application?
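
    If the file layout above is right, the likely fix (offered as a guess, not a confirmed answer) is about deployment rather than referencing: SQLite.Interop.066.DLL is a native DLL, so it cannot be added as an assembly reference; it has to be copied to the device next to the managed System.Data.SQLite.dll. One way, sketched as a fragment of the device project file, is to mark it as content that is always deployed:

        <!-- A sketch: deploy the native interop DLL next to the managed wrapper.
             The file name follows the question; your version number may differ. -->
        <ItemGroup>
          <Content Include="SQLite.Interop.066.DLL">
            <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
          </Content>
        </ItemGroup>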

    Read the article

  • Need help tuning an SQL query

    - by Viper
    Hello, I need some help to speed up this SQL statement. The execution time is around 125 ms, and during the runtime of my program this SQL (more precisely, equally structured SQLs for different tables) will be called 300,000 times. The average row count in the tables is around 10,000,000 rows, and new rows (updates/inserts) are added with a timestamp each day. The data interesting for this particular export program lies in the last 1-3 days; maybe that is helpful for choosing an index. The data I need is the currently valid row for a given ID plus the preceding row, to pick up the updates (if they exist). We use an Oracle 11g database and the .NET Framework 3.5. The SQL statement to tune:

        select ID_SOMETHING,     -- Number(12)
               ID_CONTRIBUTOR,   -- Char(4 Byte)
               DATE_VALID_FROM,  -- DATE
               DATE_VALID_TO     -- DATE
          from TBL_SOMETHING XID
         where ID_SOMETHING   = :ID_INSTRUMENT
           and ID_CONTRIBUTOR = :ID_CONTRIBUTOR
           and DATE_VALID_FROM <= :EXPORT_DATE
           and DATE_VALID_TO   >= :EXPORT_DATE
         order by DATE_VALID_FROM asc;

    Here I uploaded the current explain plan for this query. I'm not a database expert, so I don't know which index type would fit this requirement best; I have seen that there are many possible index types. Maybe Oracle optimizer hints would be helpful, too. Does anyone have a good idea for tuning this SQL, or can point me in the right direction?
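
    A first thing to try, offered as a sketch rather than a definitive answer: a composite index that puts the two equality columns first and the date columns after them usually lets Oracle satisfy this query with a narrow index range scan. The index name is made up:

        -- Equality predicates (ID_SOMETHING, ID_CONTRIBUTOR) lead, range
        -- predicates on the validity dates follow; only a handful of index
        -- entries per instrument/contributor then need to be visited.
        CREATE INDEX IX_TBL_SOMETHING_VALIDITY
            ON TBL_SOMETHING (ID_SOMETHING, ID_CONTRIBUTOR, DATE_VALID_FROM, DATE_VALID_TO);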

    Read the article

  • Java connectivity with MySQL error

    - by abson
    I just started with connectivity and tried this example. I have installed the necessary software and also copied the JAR file into the /ext folder, yet the code below fails:

        import java.sql.*;

        public class Jdbc00 {
            public static void main(String args[]) {
                try {
                    Statement stmt;
                    Class.forName("com.mysql.jdbc.Driver");
                    String url = "jdbc:mysql://localhost:3306/mysql";
                    Connection con = DriverManager.getConnection(url, "root", "root");
                    // Display URL and connection information
                    System.out.println("URL: " + url);
                    System.out.println("Connection: " + con);
                    // Get a Statement object
                    stmt = con.createStatement();
                    // Create the new database
                    stmt.executeUpdate("CREATE DATABASE JunkDB");
                    stmt.executeUpdate(
                        "GRANT SELECT,INSERT,UPDATE,DELETE," +
                        "CREATE,DROP " +
                        "ON JunkDB.* TO 'auser'@'localhost' " +
                        "IDENTIFIED BY 'drowssap';");
                    con.close();
                } catch (Exception e) {
                    e.printStackTrace();
                } // end catch
            } // end main
        } // end class Jdbc00

    But it gave the following error:

        D:\Java12\Explore>java Jdbc00
        java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
                at java.net.URLClassLoader$1.run(Unknown Source)
                at java.security.AccessController.doPrivileged(Native Method)
                at java.net.URLClassLoader.findClass(Unknown Source)
                at java.lang.ClassLoader.loadClass(Unknown Source)
                at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
                at java.lang.ClassLoader.loadClass(Unknown Source)
                at java.lang.Class.forName0(Native Method)
                at java.lang.Class.forName(Unknown Source)
                at Jdbc00.main(Jdbc00.java:11)

    Could anyone please guide me in correcting this?
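
    The ClassNotFoundException means the MySQL Connector/J JAR is not on the runtime classpath; dropping it into an /ext folder is easy to get wrong (it must be the lib/ext of the exact JRE that runs the program). A sketch of the usual fix, passing the driver JAR explicitly (the JAR file name is an assumption; match it to your download):

        D:\Java12\Explore>java -cp .;mysql-connector-java-5.1.12-bin.jar Jdbc00

    On Linux or macOS the classpath separator is ':' instead of ';'.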

    Read the article

  • PHP MySQL error

    - by happyCoding25
    Hello, I'm new to PHP, so I decided to follow this tutorial for a simple login screen. I got the code set up, but when I try to log in I get this error:

        Warning: mysql_num_rows(): supplied argument is not a valid MySQL result resource in (a long file path to the script) on line 27

    The code I got from the tutorial is:

        <?php
        ob_start();
        $host="thehost"; // Host name
        $username="myusername"; // Mysql username
        $password="mypass"; // Mysql password
        $db_name="test"; // Database name
        $tbl_name="members"; // Table name

        // Connect to server and select database.
        mysql_connect("$host", "$username", "$password") or die("cannot connect");
        mysql_select_db("$db_name") or die("cannot select DB");

        // Define $myusername and $mypassword
        $myusername=$_POST['myusername'];
        $mypassword=$_POST['mypassword'];

        // To protect against MySQL injection (more detail about MySQL injection)
        $myusername = stripslashes($myusername);
        $mypassword = stripslashes($mypassword);
        $myusername = mysql_real_escape_string($myusername);
        $mypassword = mysql_real_escape_string($mypassword);

        $sql="SELECT * FROM $tbl_name WHERE username='$myusername' and password='$mypassword'";
        $result=mysql_query($sql);

        // mysql_num_rows counts the table rows
        $count=mysql_num_rows($result);

        // If result matched $myusername and $mypassword, table row must be 1 row
        if($count==1){
            // Register $myusername, $mypassword and redirect to file "login_success.php"
            session_register("myusername");
            session_register("mypassword");
            header("location:login_success.php");
        }
        else {
            echo "Wrong Username or Password";
        }
        ob_end_flush();
        ?>

    (Note: all of the MySQL database info is filled in on my version.) Also, the author gives a PHP 5 version and a normal PHP version; I have tried both and gotten the same error. If anyone knows why this is happening, help is really appreciated.
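
    That warning means mysql_query() returned false instead of a result resource, so the query itself failed; the error surfaces on line 27 only because that is where the false value is first used. A small sketch of how to surface the real cause:

        $result = mysql_query($sql);
        if ($result === false) {
            // Typical causes: wrong table/column name, no database selected,
            // or the connection failed earlier.
            die('Query failed: ' . mysql_error());
        }
        $count = mysql_num_rows($result);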

    Read the article

  • In what order does Dojo handle Ajax requests?

    - by mm2887
    Hi, using a form in a dialog box, I am using Dojo in JSP to save the form to my database. After that request is completed via dojo.xhrPost(), I send another request to update a dropdown box that should include the form object that was just saved, but for some reason the request to update the dropdown is executed before the form is saved in the database, even though the form save is called first. Using Firebug I can see that the getLocations() request completes before the sendForm() request. This is the code (the "Add Team" button's handler, then the two functions):

        sendForm("ajaxResult1", "addTeamForm");
        dijit.byId("addTeamDialog").hide();
        getLocations("locationsTeam");

        function sendForm(target, form) {
            var targetNode = dojo.byId(target);
            var xhrArgs = {
                form: form,
                url: "ajax",
                handleAs: "text",
                load: function(data) {
                    targetNode.innerHTML = data;
                },
                error: function(error) {
                    targetNode.innerHTML = "An unexpected error occurred: " + error;
                }
            };
            // Call the asynchronous xhrPost
            var deferred = dojo.xhrPost(xhrArgs);
        }

        function getLocations(id) {
            var targetNode = dojo.byId(id);
            var xhrArgs = {
                url: "ajax",
                handleAs: "text",
                content: {
                    location: "yes"
                },
                load: function(data) {
                    targetNode.innerHTML = data;
                },
                error: function(error) {
                    targetNode.innerHTML = "An unexpected error occurred: " + error;
                }
            };
            // Call the asynchronous xhrGet
            var deferred = dojo.xhrGet(xhrArgs);
        }

    Why is this happening? Is there a way to make the first request complete before the second executes? To narrow down the possibilities I tried setting the cache property in xhrGet to false, but the result is still the same. Please help!
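
    Both xhr calls are asynchronous, so nothing orders them; the browser is free to complete the GET first. A sketch of one fix: have sendForm return the Deferred that dojo.xhrPost already produces (that change to sendForm's last line is assumed), and chain the follow-up work onto it:

        // In sendForm, return the Deferred instead of discarding it:
        //     return dojo.xhrPost(xhrArgs);

        sendForm("ajaxResult1", "addTeamForm").addCallback(function () {
            // Runs only after the POST has completed, so the new team
            // is in the database before the dropdown is refreshed.
            dijit.byId("addTeamDialog").hide();
            getLocations("locationsTeam");
        });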

    Read the article

  • SQL Server insert performance

    - by Jose
    I have an insert query that gets generated like this:

        INSERT INTO InvoiceDetail (LegacyId,InvoiceId,DetailTypeId,Fee,FeeTax,InvestigatorId,SalespersonId,CreateDate,CreatedById,IsChargeBack,Expense,RepoAgentId,PayeeName,ExpensePaymentId,AdjustDetailId)
        VALUES(1,1,2,1500.0000,0.0000,163,1002,'11/30/2001 12:00:00 AM',1116,0,550.0000,850,NULL,@ExpensePay1,NULL);
        DECLARE @InvDetail1 INT;
        SET @InvDetail1 = (SELECT @@IDENTITY);

    This query is generated for only 110K rows, yet it takes 30 minutes for all of these queries to execute. I checked the query plan, and the largest-percentage nodes are a Clustered Index Insert at 57% query cost (which has a long XML that I don't want to post) and a Table Spool at 38% query cost:

        <RelOp AvgRowSize="35" EstimateCPU="5.01038E-05" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="1" LogicalOp="Eager Spool" NodeId="80" Parallel="false" PhysicalOp="Table Spool" EstimatedTotalSubtreeCost="0.0466109">
          <OutputList>
            <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvoiceId" />
            <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvestigatorId" />
            <ColumnReference Column="Expr1054" />
            <ColumnReference Column="Expr1055" />
          </OutputList>
          <Spool PrimaryNodeId="3" />
        </RelOp>

    So my question is: what can I do to improve the speed of this thing? I already run ALTER TABLE TABLENAME NOCHECK CONSTRAINT ALL before the queries and ALTER TABLE TABLENAME CHECK CONSTRAINT ALL after them, and that barely shaved anything off the time. Now, I am running these queries from a .NET application that uses a SqlCommand object to send each query. I then tried to output the SQL commands to a file and execute it using sqlcmd, but I wasn't getting any updates on how it was doing, so I gave up on that. Any ideas or hints or help?
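
    One direction worth sketching (not a drop-in replacement, since the per-row identity capture would have to be redesigned around it): 110K single-row INSERTs pay a network round-trip and a log flush each, whereas SqlBulkCopy streams the same rows in one operation. The DataTable here is assumed to be built from the same generated data:

        using System.Data;
        using System.Data.SqlClient;

        // invoiceDetails is a DataTable whose columns mirror dbo.InvoiceDetail.
        using (var bulk = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.TableLock))
        {
            bulk.DestinationTableName = "dbo.InvoiceDetail";
            bulk.BatchSize = 10000;             // commit in chunks
            bulk.WriteToServer(invoiceDetails); // one streamed operation, not 110K round-trips
        }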

    Read the article

  • JDBC/OSGi and how to dynamically load drivers without explicitly stating dependencies in the bundle?

    - by Chris
    Hi, this is a biggie. I have a well-structured yet monolithic code base with a primitive modular architecture (all modules implement interfaces yet share the same classpath). I realize the folly of this approach and the problems it presents when I deploy on application servers that may have different, conflicting versions of my libraries. I depend on around 30 JARs right now and am midway through BNDing them up. Some of my modules are easy to declare versioned dependencies for, such as my networking components: they statically reference classes within the JRE and other BNDed libraries. But my JDBC-related components instantiate drivers via Class.forName(...) and can use any one of a number of drivers. I am breaking everything up into OSGi bundles by service area: my core classes/interfaces, reporting-related components, database access components (via JDBC), and so on. I want my code to still be usable without OSGi, via a single JAR file with all my dependencies (via JARJAR), and also to be modular via the OSGi metadata and granular bundles with dependency information. How do I configure my bundle and my code so that it can dynamically use any driver on the classpath and/or within the OSGi container environment (Felix/Equinox/etc.)? Is there a run-time method to detect whether I am running in an OSGi container that is compatible across containers (Felix/Equinox/etc.)? Do I need to use a different class-loading mechanism if I am in an OSGi container? Am I required to import OSGi classes into my project to be able to load an at-bundle-time-unknown JDBC driver via my database module? I also have a second method of obtaining a driver (via JNDI, which is really only applicable when running in an app server); do I need to change my JNDI access code for OSGi-aware app servers?
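
    For the container-detection question, one portable sketch (it relies only on the core OSGi API, so it works on Felix, Equinox, and others, and degrades gracefully when the API is absent): FrameworkUtil.getBundle() returns null when the class was not loaded by a bundle class loader, so the same JAR can run both inside and outside a container. For the driver question itself, a DynamicImport-Package: * manifest header is the blunt instrument that lets Class.forName resolve packages unknown at build time.

        import org.osgi.framework.Bundle;
        import org.osgi.framework.FrameworkUtil;

        public final class OsgiEnvironment {
            private OsgiEnvironment() {}

            // True when this class was loaded by an OSGi bundle class loader.
            public static boolean insideOsgiContainer() {
                try {
                    Bundle bundle = FrameworkUtil.getBundle(OsgiEnvironment.class);
                    return bundle != null;
                } catch (NoClassDefFoundError e) {
                    return false; // OSGi API not on the classpath: plain JAR deployment
                }
            }
        }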

    Read the article

  • Partial entity loading and management in Silverlight / WCF RIA

    - by Dan Wray
    I have a Silverlight 4 app which pulls entities down from a database using WCF RIA Services. These data objects are fairly simple, just a few fields, but one of those fields contains binary data of arbitrary size. The application needs access to this data as soon as possible after a user has logged in, to display in a list, enable selection, etc. My problem is that because of the size of this data, the load times are not acceptable and can approach the default timeout of the RIA service. I'd like to somehow partially load the objects into my local data context so that I have the IDs, names, etc. but not the binary data. I could then, at a later point (i.e. when it's actually needed), populate the binary fields of the objects I need to display. Any suggestions on how to accomplish this would be welcome. Another approach, which occurred to me whilst writing this question (how often does that happen?!), is that I could move the binary data into a separate database table joined 1:1 to the original record, which would allow me to make use of RIA's lazy loading on that binary data. Again, comments welcome! Thanks.
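
    A sketch of the second approach, with entity and member names invented for illustration: splitting the payload into its own 1:1 entity means the default query never touches the BLOB column, and a separate, targeted query fetches a single payload on demand:

        using System.ComponentModel.DataAnnotations;

        // Lightweight entity: safe to query in bulk for lists.
        public class Document
        {
            [Key]
            public int Id { get; set; }
            public string Name { get; set; }

            // 1:1 association; RIA loads it only when a query explicitly asks.
            [Association("Document_Payload", "Id", "DocumentId")]
            public DocumentPayload Payload { get; set; }
        }

        // Heavyweight entity holding the binary column, fetched one row at a time.
        public class DocumentPayload
        {
            [Key]
            public int DocumentId { get; set; }
            public byte[] Data { get; set; }
        }

    The DomainService would then expose one query for the list (no BLOB columns) and another, keyed by ID, for a single payload when the user actually selects an item.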

    Read the article

  • JMX - MBean automated registration on application deployment

    - by Gadi
    Hi all, I need some direction with JMX and Java EE. I am aware (after a few weeks of research) that the JMX specification says little about deployment. There are a few vendor-specific implementations of what I am looking for, but none are cross-vendor. I would like to automate the deployment of MBeans and their registration with the server: the server should load and register my MBeans when the application is deployed and remove them when the application is undeployed. I develop with NetBeans 6.7.1, GlassFish 2.1, Java EE 5, and EJB 3. More specifically, I need a way to manage timer service runs. My application needs to run different archiving agents and batch reporting. I was hoping JMX would give me remote access to create and manage the timer services and enable the user to create his own schedule. If the MBean is auto-registered on application deployment, the user can immediately connect and manage the schedule. On the other hand, how can an EJB connect to/access an MBean? Many thanks in advance. Gadi.
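
    One portable pattern, sketched here with invented names: tie MBean registration to the application's own lifecycle instead of a vendor deployment mechanism. In a Java EE 5 WAR, a ServletContextListener (declared as a <listener> in web.xml) fires on deploy and undeploy:

        import java.lang.management.ManagementFactory;
        import javax.management.MBeanServer;
        import javax.management.ObjectName;
        import javax.servlet.ServletContextEvent;
        import javax.servlet.ServletContextListener;

        public class MBeanLifecycleListener implements ServletContextListener {
            private ObjectName name;

            public void contextInitialized(ServletContextEvent event) {
                try {
                    MBeanServer server = ManagementFactory.getPlatformMBeanServer();
                    name = new ObjectName("myapp:type=SchedulerAdmin");
                    // SchedulerAdmin is hypothetical; it implements SchedulerAdminMBean.
                    server.registerMBean(new SchedulerAdmin(), name);
                } catch (Exception e) {
                    throw new RuntimeException("MBean registration failed", e);
                }
            }

            public void contextDestroyed(ServletContextEvent event) {
                try {
                    ManagementFactory.getPlatformMBeanServer().unregisterMBean(name);
                } catch (Exception e) {
                    // log and continue; undeploy should not fail on cleanup
                }
            }
        }

    From an EJB, the same ManagementFactory.getPlatformMBeanServer() handle can be used to look the MBean up and invoke it.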

    Read the article

  • Using WCF to expose underlying process

    - by Steven
    I think I must be a little dull, because I'm having so much difficulty with this. I use WCF for pretty much everything in-house; it's the most appropriate technology, and I have a new Silverlight 3 app that connects to a WCF service and works fine. Where the problem begins: because of the expense of creating the objects within this service, and the high correlation of individual objects being shared between clients, I want to have a console application that gathers/calculates/caches all the data for the service 24/7, so the service connects to the console app (or whatever it becomes) and gets the pre-processed data. Think of it in terms of a stock reporting app (which it is). Person A has a portfolio of x, y, z; person B has a portfolio of x, q, z, r. The service needs to provide updated metrics on how each portfolio is performing. So instead of processing person A and then person B every second, the app independently gathers the stock prices and each person's position information into memory, and the service just queries the in-memory result. Thanks for your help, I really am feeling dumb right now.
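
    A sketch of one way to wire this up (the contract, type names, and address are all invented): the console app self-hosts a WCF endpoint over net.tcp, and the public-facing service becomes a thin client of it:

        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface IPortfolioCache
        {
            [OperationContract]
            PortfolioMetrics GetMetrics(string personId); // PortfolioMetrics: hypothetical DTO
        }

        public class Program
        {
            public static void Main()
            {
                // PortfolioCacheService (hypothetical) implements IPortfolioCache
                // over the in-memory cache it keeps refreshed 24/7.
                var host = new ServiceHost(typeof(PortfolioCacheService));
                host.AddServiceEndpoint(
                    typeof(IPortfolioCache),
                    new NetTcpBinding(), // fast same-machine/same-LAN binding
                    "net.tcp://localhost:9000/portfolioCache");
                host.Open();
                Console.WriteLine("Cache endpoint up; press Enter to stop.");
                Console.ReadLine();
                host.Close();
            }
        }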

    Read the article

  • PHP fails silently when PHP code is within an HTML tag

    - by Michal M
    PROBLEM UPDATED, READ BELOW. For some reason my CodeIgniter app fails silently when loading a view. The view is loaded from the controller with a simple call:

        $this->load->view('templates/default.php');

    Now, there are some functions in the loaded view that are not defined unless a proper helper is loaded as well. Normally, PHP would throw an error, but instead it fails silently here, and I have no idea why. The template gets output up to the line containing the undefined function. It took me a long time to realise where my script was failing. Here's my setup: Windows 7 Ultimate, Apache 2.2.15, PHP 5.3.2 with the following error reporting settings, and CodeIgniter 1.7.2:

        display_errors = On
        display_startup_errors = On
        error_reporting = E_ALL | E_STRICT

    Any ideas why that would be?

    UPDATE: Further debugging showed that PHP fails to report any errors when the PHP code is inline with HTML and within an HTML tag. Now this is bizarre. This returns a fatal error:

        <p><?php echo $bogus(); ?></p>

    This doesn't, and fails silently:

        <p class="<?php echo $bogus(); ?>">paragraph</p>

    Why? :O

    UPDATE 2: Further investigation showed that if an error_log is specified in PHP, the errors are in fact reported in that file, but still not in the browser... Again, why?

    UPDATE 3: Actually my test code should be slightly different. I checked another PHP installation on a completely different machine and it confirmed the PHP bug. Reported here: http://bugs.php.net/bug.php?id=52040
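
    Until the bug is fixed, a practical workaround consistent with UPDATE 2 is to lean on file logging, so the fatals are captured even when browser output is suppressed; the log path is only an example:

        ; php.ini
        log_errors = On
        error_log = "C:/apache/logs/php-error.log"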

    Read the article

  • Is there a definitive list of the differences between the current version of SQL Azure and SQL Server 2008?

    - by Aim Kai
    I am a relative newbie when it comes to SQL Azure! I was wondering if there is a definitive list somewhere of what is and is not supported by SQL Azure compared to SQL Server 2008? I have had a look through Google, but I've noticed some of the blog posts are missing things which I have found through my own testing. For example, quite a lot is summarised in this blog entry, http://www.keepitsimpleandfast.com/2009/12/main-differences-between-sql-azure-and.html:

        Common Language Runtime (CLR)
        Database file placement
        Database mirroring
        Distributed queries
        Distributed transactions
        Filegroup management
        Global temporary tables
        Spatial data and indexes
        SQL Server configuration options
        SQL Server Service Broker
        System tables
        Trace flags

    which is a repeat of the MSDN page http://msdn.microsoft.com/en-us/library/ff394115.aspx. I've noticed from my own testing that the following seem to have issues when migrating from SQL Server 2008 to Azure: XML types (MSDN does mention large custom types; I guess that may include these, even if the data schema is really small?) and multi-part views. I've been using SQL Azure Migration Wizard v3.1.8 to migrate local databases into the cloud. I was wondering if anyone could point me to a list, or give me any information on when these features are likely to be included in SQL Azure.

    Read the article

  • How can I print a web page on a server?

    - by Gavin Schultz
    Suppose I develop a web page using the cool Google Visualization API, and it does everything the user wants: they can change the parameters, look at the graphs, and print the page to get a reasonable-looking report. All good. Now suppose I want to do the same thing server-side. For example, say we need a set of reports generated at a specific time of day, printed to a PDF and emailed to a manager. It's not a user-initiated action, so we don't have a user's browser or their printer. We have a URL that would render the report if we had a browser, and that's it. Is there a good way to do this server-side? Is this just foolish? Has anyone done anything like this before? Do any of the major browsers have APIs that might provide such functionality? Keep in mind, too, that it's not just static HTML; JavaScript will probably run first to shift the DOM around. I know we could implement a whole different reporting engine on the server side to do this, but that would (a) generate reports that look a bit different, and (b) require me to build and maintain two sets of functionality. Instead, I'd be happy if I could just render the page or pages I want in an invisible server-side browser and print them to a PDF (let's mostly ignore that step; I know any number of PDF printer drivers that could do this). I don't really want to do it in an ugly way either, i.e. by starting a browser process and then sending keystrokes directly to the window; that's just bound to fall apart with a slight nudge. The only related question I found had an answer like that. Any advice appreciated!
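
    One tool that fits this description, offered as a pointer rather than a full solution: wkhtmltopdf wraps the WebKit engine, executes the page's JavaScript, and renders a URL straight to PDF from the command line, so it can run from a scheduled job with no visible browser. The URL and delay below are illustrative:

        wkhtmltopdf --javascript-delay 5000 "http://intranet/reports/daily" daily-report.pdf

    The delay gives client-side scripts (like the visualization rendering) time to finish building the DOM before the page is captured.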

    Read the article

  • Out Of Memory error while executing mysqldump

    - by Nishaz Salam
    Hi, I am getting the following error when trying to back up a database using mysqldump from the command prompt:

        C:\Documents and Settings\bob>C:\Adobe\LiveCycle8.2\mysql\bin\mysqldump --quick --add-locks --lock-tables -c --default-character-set=utf8 --skip-opt -pxxxx -u adobe -r C:\Adobe\LiveCycle8.2\configurationManager\working\upgrade\mysql\adobe.sql -B adobe --port=3306 --host=localhost
        mysqldump: Out of memory (Needed 10380928 bytes)
        mysqldump: Got error: 2008: MySQL client ran out of memory when retrieving data from server

    As you can see, I am using --quick and --skip-opt too, so I cannot figure out what is causing the issue. The server log has the following messages:

        100420 15:16:39 InnoDB: Error: cannot allocate 4814100 bytes of memory for
        InnoDB: a BLOB with malloc! Total allocated memory
        InnoDB: by InnoDB 33427880 bytes. Operating system errno: 2
        InnoDB: Check if you should increase the swap file or
        InnoDB: ulimits of your operating system.
        InnoDB: On FreeBSD check you have compiled the OS with
        InnoDB: a big enough maximum process size.
        100420 15:16:40 InnoDB: Warning: could not allocate 3814100 + 1000000 bytes to retrieve
        InnoDB: a big column. Table name adobe/tb_form_data

    Any help in this regard is highly appreciated. P.S.: The backup works fine without any issues when I use MySQL Administrator, but since an external app (the Adobe LiveCycle installer) uses the above command to back up the database during install, I need to get this working. Thanks, Nishaz Salam
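
    Two things worth checking, offered as guesses: first, --skip-opt appears after --quick on that command line, and --skip-opt turns --quick back off (options are processed in order), so the client may be buffering the whole of the big tb_form_data rows after all; second, a very large BLOB row also needs a larger packet ceiling on both sides. A sketch of both adjustments, with illustrative values:

        # server side: my.ini, then restart mysqld
        [mysqld]
        max_allowed_packet = 64M

        # client side: put --quick after --skip-opt and raise the ceiling
        mysqldump --skip-opt --quick --max_allowed_packet=64M --add-locks --lock-tables -c --default-character-set=utf8 -u adobe -pxxxx -B adobe -r adobe.sql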

    Read the article

  • .NET architecture challenge: the change-prone Frankenstein model

    - by SDReyes
    Good morning SO! We've been scratching our heads over this interesting scenario at the office, and we're anxious to hear your ideas and approaches. We have a database whose schema is prone to changes; let's call it Prony. (It is used to store configuration parameters for embedded devices, so if the embedded-devices guy needs a new table, property, or relationship in the model, he should be able to adapt the schema in an easy way, which happens quite often.) Prony needs a web interface for creating and editing its data. We have another database containing data that also needs to be loaded onto the devices after some transformations; let's call this one Oddy (its data is generated by an existing administrative web application). Finally we have Tracy, a server that connects our DBs and our embedded devices. She should auto-adapt herself to our DB schema changes and serialize the data to the devices. Nice puzzle, don't you think? :) Our current candidate, Rady, the fast one: create some views in Prony that perform the data transformation from Oddy, then use Dynamic Data (or some RAD tool) to create/update a simple web interface for Prony (so it can even consult the transformed data coming from Oddy). As for Tracy, she will need to be recompiled to update her DB schema (Entity Framework should work) and use reflection to explore the schema recursively and serialize the data. Cons: we would have to recompile both Tracy and Prony's web interface. What do you think of the candidate(s)? What would you do?

    Read the article

  • Hadoop Map/Reduce - simple use example to do the following...

    - by alexeypro
    I have a MySQL database where I store BLOBs (each containing a JSON object) together with an ID for each JSON object. A JSON object contains a lot of different information, say, "city:Los Angeles" and "state:California". There are about 500k such records for now, but they are growing, and each JSON object is quite big. My goal is to do real-time searches over the MySQL database: say, I want to search for all JSON objects which have "state" set to "California" and "city" set to "San Francisco". I want to utilize Hadoop for the task. My idea is that there will be a "job" which takes chunks of, say, 100 records (rows) from MySQL, verifies them according to the given search criteria, and returns the IDs of those which qualify. Pros/cons? I understand that one might think I should just use plain SQL for this, but the thing is that the JSON object structure is pretty "heavy": put into SQL schemas, there would be at least 3-5 table joins, which (I tried, really) creates quite a headache, and building all the right indexes eats RAM faster than one can think. And even then, every SQL query has to be analyzed to make sure it uses the indexes; otherwise, with a full scan, it literally is a pain. With such a structure, the only way "up" is vertical scaling. But I am not sure that is the best option for me, as I see how the JSON objects (the data structure) will grow, and I see that the number of them will grow too. :-) Help? Can somebody point me to simple examples of how this can be done? Does it make sense at all? Am I missing something important? Thank you.
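
    For the shape of the job itself, a minimal sketch of the map side, assuming the rows are exported as tab-separated "id<TAB>json" lines and using the org.json parser (both assumptions; the class name is invented):

        import java.io.IOException;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.NullWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;
        import org.json.JSONException;
        import org.json.JSONObject;

        public class JsonMatchMapper extends Mapper<LongWritable, Text, NullWritable, Text> {
            @Override
            protected void map(LongWritable offset, Text line, Context ctx)
                    throws IOException, InterruptedException {
                String[] parts = line.toString().split("\t", 2); // "id<TAB>json"
                try {
                    JSONObject obj = new JSONObject(parts[1]);
                    // Emit only the IDs whose JSON matches the search criteria.
                    if ("California".equals(obj.optString("state"))
                            && "San Francisco".equals(obj.optString("city"))) {
                        ctx.write(NullWritable.get(), new Text(parts[0]));
                    }
                } catch (JSONException e) {
                    // skip malformed rows
                }
            }
        }

    One design caveat worth weighing: a batch MapReduce job launched per query sits awkwardly next to a real-time search requirement, since job startup alone can dominate the latency.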

    Read the article

  • Design practice: retrieving multiple users - PHP

    - by pws5068
    Greetings all, I'm looking for a more efficient way to grab multiple users without too much code redundancy. I have a Users class; here's a highly condensed representation:

        class Users {
            function __construct() {
                ...
            }

            private static function getNew($id) {
                // This is all only example code
                $sql = "SELECT * FROM Users WHERE User_ID=?";
                $user = new User();
                $user->setID($id);
                ...
                return $user;
            }

            public static function getNewestUsers($count) {
                $sql = "SELECT * FROM Users ORDER BY Join_Date LIMIT ?";
                $users = array();
                // Prepared statement data here
                while ($stmt->fetch()) {
                    $users[] = Users::getNew($id);
                    ...
                }
                return $users;
            }

            // These have more sanity/security checks; demonstration only.
            function setID($id) {
                $this->id = $id;
            }

            function setName($name) {
                $this->name = $name;
            }
            ...
        }

    So clearly, calling getNew() in a loop is inefficient, because it makes $count calls to the database each time. I would much rather select all users in a single SQL statement. The problem is that I have probably a dozen or so methods for getting arrays of users, so all of them would now need to know the names of the database columns, which repeats a lot of structure that could always change later. What are some other ways I could approach this problem? My goals include efficiency, flexibility, scalability, and good design practice.
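
    One common shape for this, as a sketch (column names are taken from the snippet above; the query() helper is an assumption standing in for your prepared-statement plumbing): keep the column-to-property mapping in a single private hydrator, and let each finder supply only its own WHERE/ORDER clause:

        class Users {
            // The only place that knows the column names.
            private static function fromRow(array $row) {
                $user = new User();
                $user->setID($row['User_ID']);
                $user->setName($row['Name']); // assumed column
                return $user;
            }

            // Shared plumbing: run one query, hydrate every row.
            private static function findAll($sql, array $params) {
                $users = array();
                foreach (self::query($sql, $params) as $row) { // query(): assumed helper
                    $users[] = self::fromRow($row);
                }
                return $users;
            }

            public static function getNewestUsers($count) {
                return self::findAll(
                    "SELECT * FROM Users ORDER BY Join_Date DESC LIMIT ?",
                    array($count)
                );
            }
        }

    Adding a new finder then costs one SQL string, and a schema change touches only fromRow().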

    Read the article

  • Communication between EJB3 Instances (JEE inter-bean communication) possible?

    - by Hank
    I'm designing a part of a Java EE 6 application consisting of EJB3 beans. Part of the requirements are multiple parallel (say a few hundred) long-running (over days) database hunts. Individual hunts have different search parameters (start time, end time, query filter), and parameters may change over time. Currently I'm thinking of the following:

        SearchController (stateless session bean): formulates a set of search parameters and sends it off to a SearchListener via JMS
        SearchListener (message-driven bean): receives the search parameters and instantiates a SearchWorker with them
        SearchWorker (SLSB): hunts repeatedly through the database; when it finds something, the result is sent off via JMS and the search continues; when the given end time has been reached, it ends

    What I'm wondering now: Is there a problem with EJB3 instances running for days (other than that I need to be able to deal with container restarts)? How do I know how many and which EJB instances of SearchWorker are currently running? Is it possible to communicate with them individually (similar to sending a System V signal to a Unix process), e.g. to send new parameters or to end an instance?
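
    A pattern worth sketching as an alternative to a bean instance that loops for days (all names invented): drive each hunt from the EJB TimerService. Timers are persistent by default, so they survive container restarts; each timer carries its own parameters; and cancelling a timer is the "signal" that ends one hunt:

        import java.io.Serializable;
        import javax.annotation.Resource;
        import javax.ejb.Stateless;
        import javax.ejb.Timeout;
        import javax.ejb.Timer;
        import javax.ejb.TimerService;

        @Stateless
        public class SearchWorkerBean {
            @Resource
            private TimerService timerService;

            // One timer per hunt; SearchParams must be Serializable.
            public void startHunt(SearchParams params, long intervalMillis) {
                timerService.createTimer(0, intervalMillis, params);
            }

            @Timeout
            public void huntOnce(Timer timer) {
                SearchParams params = (SearchParams) timer.getInfo();
                // ... one bounded pass over the database, publish hits via JMS ...
                if (System.currentTimeMillis() > params.getEndTimeMillis()) {
                    timer.cancel(); // the hunt ends itself once its window closes
                }
            }
        }

    timerService.getTimers() also answers "how many hunts are running", and cancelling one specific timer ends just that hunt.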

    Read the article

  • How should a one-man development shop document their code?

    - by CKoenig
    Hi, please let me first describe my situation. I work in an IT department for a small-to-medium-sized industrial company, and basically I'm the only real developer (sometimes a second guy joins in for his own projects). I program mostly in C#/.NET, and of course only for internal needs (intranet, reporting, data-driven apps, some mobile apps, ...). My question is: how should I document my work? It's a highly dynamic environment (the features and bug fixes I implement are tested by me during production, and go live, often within a day). If I wrote technical documentation like MSDN's, or even overview diagrams, those would take me more time to keep in sync than the whole programming process. It also feels like a waste of time, because I would be the only one who ever read it. I do understand that if I get sick, leave, or forget things, this documentation would be valuable. PS: well, of course you are right; the question is how much, and how/where. I try using the XML doc comments for the publicly exposed parts, but as I'm a believer in self-documenting code, the comments mostly restate in plain text what you can read from the method head itself :( Maybe using the remarks section is the key, but if you have 30 lines of code with a 15-line XML comment in front, it just looks dirty. (Sorry for posting it here, but our firewall rejects JSON :( )

    Read the article

  • Spaced repetition (SRS) for learning

    - by Fredrik Johansson
    A client has asked me to add a simple spaced repetition (SRS) algorithm to an online learning site. But before throwing myself into it, I'd like to discuss it with the community. Basically, the site asks the user a bunch of questions (by automatically selecting, say, 10 out of 100 total questions from a database), and the user gives either a correct or an incorrect answer. The user's results are then stored in a database, for instance:

        userid  questionid  correctlyanswered  dateanswered
        1       123         0 (no)             2010-01-01 10:00
        1       124         1 (yes)            2010-01-01 11:00
        1       125         1 (yes)            2010-01-01 12:00

    Now, to maximize a user's ability to learn all the answers, I should be able to apply an SRS algorithm so that the next time the user takes the quiz, he receives incorrectly answered questions more often than correctly answered ones. Also, questions that were previously answered incorrectly but have recently been answered correctly often should occur less often. Has anyone implemented something like this before? Any tips or suggestions? These are the best links I've found: http://en.wikipedia.org/wiki/Spaced_repetition http://www.mnemosyne-proj.org/principles.php http://www.supermemo.com/english/ol/sm2.htm
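
    For reference, a minimal sketch of the SM-2 update rule from the last link (the language is arbitrary, since the site's stack isn't stated); quality is an answer grade from 0 to 5, where anything below 3 counts as a failed recall:

        def sm2_update(quality, repetitions, easiness, interval_days):
            """One SM-2 step: returns the new (repetitions, easiness, interval)."""
            easiness = max(1.3, easiness + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
            if quality < 3:
                return 0, easiness, 1          # failed: restart, ask again tomorrow
            repetitions += 1
            if repetitions == 1:
                interval_days = 1
            elif repetitions == 2:
                interval_days = 6
            else:
                interval_days = round(interval_days * easiness)
            return repetitions, easiness, interval_days

    The table above already holds everything needed except per-question repetitions/easiness columns, which would have to be added to the schema.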

    Read the article

  • Obtaining Current Fiscal Year from Hierarchy with MDX

    - by Robert Iver
    I'm building a report in Reporting Services 2005 based on an SSAS 2005 cube. The basic idea of the report is that the users want to see sales for this fiscal year to date vs. last year's sales year to date. It sounds simple, but this being my first "real" report based on SSAS, I'm having a hell of a time. First, how would one calculate the current fiscal year, quarter, or month? I have a fiscal date hierarchy with all that information in it, but I can't figure out how to say: "Based on today's date, find the current fiscal year, quarter, and month." My second, slightly smaller problem is getting last year's sales vs. this year's sales. I have seen MANY examples of how to do this, but they all assume that you select the date manually. Since this is a report that will run pretty much on its own, I need a way to insert the "current" fiscal year, quarter, and month into the PERIODSTODATE or PARALLELPERIOD functions to get what I want. So I'm begging for your help on this one. I have to have this report done by tomorrow morning, and I'm beginning to freak out a bit. Thanks in advance.
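
    A common pattern for the first problem, sketched with invented dimension/hierarchy names (the date key format in particular is an assumption; adjust to the cube): build today's member name as a string, resolve it with StrToMember, and walk up the fiscal hierarchy with Ancestor:

        WITH MEMBER [Measures].[Current Fiscal Year Name] AS
          Ancestor(
            StrToMember("[Date].[Fiscal].[Date].&[" + Format(Now(), "yyyyMMdd") + "]"),
            [Date].[Fiscal].[Fiscal Year]
          ).Name

    The same Ancestor(...) expression (without .Name) can then be fed into PERIODSTODATE for the year-to-date set, and PARALLELPERIOD with a lag of 1 gives the matching member one fiscal year earlier.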

    Read the article

  • If an attacker has original data and encrypted data, can they determine the passphrase?

    - by Brad Cupit
    If an attacker has several distinct items (for example, e-mail addresses) and knows the encrypted value of each item, can the attacker more easily determine the secret passphrase used to encrypt those items? Meaning, can they determine the passphrase without resorting to brute force? This question may sound strange, so let me provide a use case:

        1. User signs up to a site with their e-mail address.
        2. Server sends that e-mail address a confirmation URL (for example: https://my.app.com/confirmEmailAddress/bill%40yahoo.com).
        3. Attacker can guess the confirmation URL and therefore can sign up with someone else's e-mail address, and 'confirm' it without ever having to sign in to that person's e-mail account and see the confirmation URL.

    This is a problem. Instead of sending the e-mail address as plain text in the URL, we'll send it encrypted with a secret passphrase. (I know the attacker could still intercept the e-mail sent by the server, since e-mails are plain text, but bear with me here.) If an attacker then signs up with multiple free e-mail accounts and sees multiple URLs, each with the corresponding encrypted e-mail address, could the attacker more easily determine the passphrase used for encryption? Alternative solution: I could instead send a random number or a one-way hash of their e-mail address (plus random salt). This eliminates storing the secret passphrase, but it means I need to store that random number/hash in the database; the original approach above does not require storage in the database. I'm leaning towards the one-way-hash-stored-in-the-DB, but I still would like to know the answer: does having multiple unencrypted e-mail addresses and their encrypted counterparts make it easier to determine the passphrase used?
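
    For what it's worth, there is a keyed-hash middle ground between the two options (a sketch, not an answer to the cryptanalysis question itself): an HMAC over the e-mail address is deterministic, so the server can recompute it on the confirmation request instead of storing it, yet it exposes no ciphertext that decrypts back to the address:

        import hashlib
        import hmac

        SECRET_KEY = b"server-side secret"  # assumption: loaded from configuration

        def confirmation_token(email):
            """Token the server can recompute when the confirmation URL is hit."""
            return hmac.new(SECRET_KEY, email.encode("utf-8"), hashlib.sha256).hexdigest()

        # e.g. https://my.app.com/confirmEmailAddress/bill%40yahoo.com/<token>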

    Read the article

  • Refining data stored in SQLite - how to join several contacts?

    - by Krab
    Problem background: imagine this problem. You have a water molecule which is in contact with other molecules (if the contact is a hydrogen bond, there can be 4 other molecules around my water), like in the following picture (A, B, C, D are some other atoms and the dots mean contacts):

        A   B
         . .
          O
         / \
        H   H
        .   .
        C   D

    I have the information about all the dots, and I need to eliminate the water in the center and create records describing the contacts A-C, A-D, A-B, B-C, B-D, and C-D. Database structure: currently, I have the following structure in the database. Table atoms:

        "id" integer PRIMARY KEY,
        "amino" char(3) NOT NULL,  -- HOH for water, or another value
        -- other columns identifying the atom

    Table contacts:

        "acceptor_id" integer NOT NULL,  -- the atom near my hydrogen, here C or D
        "donor_id" integer NOT NULL,     -- here A or B
        "directness" char(1) NOT NULL,   -- D for direct, W for water-mediated
        -- other columns about the contact, such as the distance

    Current solution (insufficient): now, I go through all the contacts which have donor.amino = "HOH". In this sample case, this selects the two contacts involving C and D. For each of these selected contacts, I look up the contacts having the same acceptor_id as the donor_id of the currently selected contact. From this information, I create the new contact. At the end, I delete all contacts to or from HOH. This way, I am obviously unable to create the C-D and A-B contacts (the other 4 are OK). If I try a similar approach of finding two contacts having the same donor_id, I end up with duplicate contacts (C-D and D-C). Is there a simple way to retrieve all six contacts without duplicates? I'm dreaming about some one-page-long SQL query which retrieves just these six wanted rows. :-) It is preferable to preserve the information about who is the donor where possible, but that is not strictly necessary. Big thanks to all of you who read this question to this point.
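
    A sketch of one way to get all six rows (it assumes an SQLite version with WITH-clause support; table and column names follow the schema above): first normalize every water contact into a (water_id, partner_id) pair regardless of direction, then self-join on the shared water and keep a single ordering of each pair:

        WITH water_contacts AS (
          -- water is the acceptor: the partner donated to the water
          SELECT c.acceptor_id AS water_id, c.donor_id AS partner_id
            FROM contacts c
            JOIN atoms a ON a.id = c.acceptor_id
           WHERE a.amino = 'HOH'
          UNION
          -- water is the donor: the partner accepted from the water
          SELECT c.donor_id, c.acceptor_id
            FROM contacts c
            JOIN atoms a ON a.id = c.donor_id
           WHERE a.amino = 'HOH'
        )
        SELECT w1.partner_id AS atom_a,
               w2.partner_id AS atom_b
          FROM water_contacts w1
          JOIN water_contacts w2
            ON w1.water_id = w2.water_id
           AND w1.partner_id < w2.partner_id;  -- keeps C-D, drops the duplicate D-C

    Note that donor/acceptor information is necessarily lost for the synthesized A-B and C-D pairs (both atoms sit on the same side of the water), which matches the "preferable but not strictly necessary" remark above.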

    Read the article

  • Computing complex math equations in Python

    - by dassouki
    Are there any libraries or techniques that simplify computing equations like the following two examples?

        F = B * { [ a * b * sumOf(A / B for all i) ] / [ sumOf(c * d * j) ] }

        where:
        F = cost from i to j
        B, a, b, c, d, j are all vectors in the format [[zone_i, zone_j, cost_of_i_to_j], ...]
        This should produce a vector F = [[1, 2, F_1_2], ..., [i, j, F_i_j]]

        T_ij = [ P_i * A_i * F_i_j ] / [ sumOf(A_j * F_i_j) for j = 1 to n ]

        where:
        n is the number of zones
        T = vector [[1, 2, A_1_2, P_1_2], ..., [i, j, A_i_j, P_i_j]]
        F = vector [[1, 2, F_1_2], ..., [i, j, F_i_j]]
        so P_i would be the sum of all P_i_j for all j, and A_j would be the sum of all P_j for all i

    I'm not sure what I'm looking for, but perhaps a parser for these equations, or methods to deal with multiple multiplications and products between vectors? To calculate some of the factors, for example A_j, this is what I use:

        from collections import defaultdict
        A_j_dict = defaultdict(float)
        for A_item in TG:
            A_j_dict[A_item[1]] += A_item[3]

    Although this works fine, I really feel that it is a brute-force/hacky method and unmaintainable if we want to add more variables or parameters. Are there any math equation parsers you'd recommend? Side note: these equations are used to model travel. Currently I use Excel to solve a lot of them, and I find that process daunting. I'd rather move to Python, where the data is pulled directly from our database (Postgres) and the results are output back into the database. All that is figured out; I'm just struggling with evaluating the equations themselves. Thanks :)
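
    One direction, sketched under the assumption that the second formula is the standard gravity-model trip distribution, T_ij = P_i * A_j * F_ij / sumOf(A_j * F_ij for j = 1 to n): pivot the [zone_i, zone_j, value] triples into 2-D NumPy arrays, and the whole equation collapses into a few vectorized operations with no equation parser at all:

        import numpy as np

        def distribute_trips(P, A, F):
            """P: (n,) productions, A: (n,) attractions, F: (n, n) friction factors."""
            denom = (A[np.newaxis, :] * F).sum(axis=1, keepdims=True)  # sum over j, per i
            return P[:, np.newaxis] * (A[np.newaxis, :] * F) / denom   # T[i, j]

        # Example with 3 zones:
        P = np.array([100.0, 200.0, 50.0])
        A = np.array([80.0, 150.0, 120.0])
        F = np.random.rand(3, 3)
        T = distribute_trips(P, A, F)
        assert np.allclose(T.sum(axis=1), P)  # each row distributes exactly P_i trips

    Each new variable is then just another array broadcast into the expression, which keeps the code close to the written equation.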

    Read the article
