Search Results

Search found 4805 results on 193 pages for 'repository'.

  • How do gitignore exclusion rules actually work?

    - by meowsqueak
    I'm trying to solve a gitignore problem on a large directory structure, but to simplify my question I have reduced it to the following. I have the following directory structure of two files (foo, bar) in a brand new git repository (no commits so far):

        a/b/c/foo
        a/b/c/bar

    Obviously, a 'git status -u' shows:

        # Untracked files:
        ...
        #       a/b/c/bar
        #       a/b/c/foo

    What I want to do is create a .gitignore file that ignores everything inside a/b/c but does not ignore the file 'foo'. If I create a .gitignore thus:

        c/

    then a 'git status -u' shows both foo and bar as ignored:

        # Untracked files:
        ...
        #       .gitignore

    which is as I expect. Now if I add an exclusion rule for foo, thus:

        c/
        !foo

    then according to the gitignore manpage I'd expect this to work. But it doesn't - it still ignores foo:

        # Untracked files:
        ...
        #       .gitignore

    This doesn't work either:

        c/
        !a/b/c/foo

    Neither does this:

        c/*
        !foo

    which gives:

        # Untracked files:
        ...
        #       .gitignore
        #       a/b/c/bar
        #       a/b/c/foo

    In that case, although foo is no longer ignored, bar is also not ignored. The order of the rules in .gitignore doesn't seem to matter either. This also doesn't do what I'd expect:

        a/b/c/
        !a/b/c/foo

    That one ignores both foo and bar. One situation that does work is if I create the file a/b/c/.gitignore and put in there:

        *
        !foo

    But the problem with this is that eventually there will be other subdirectories under a/b/c, and I don't want to have to put a separate .gitignore into every single one - I was hoping to create 'project-based' .gitignore files that can sit in the top directory of each project and cover all the 'standard' subdirectory structure. This also seems to be equivalent:

        a/b/c/*
        !a/b/c/foo

    This might be the closest thing to "working" that I can achieve, but the full relative paths and explicit exceptions need to be stated, which is going to be a pain if I have a lot of files named 'foo' at different levels of the subdirectory tree. Anyway, either I don't quite understand how exclusion rules work, or they don't work at all when directories (rather than wildcards) are ignored by a rule ending in a '/'. Can anyone please shed some light on this? Is there a way to make gitignore use something sensible like regular expressions instead of this clumsy shell-based syntax? I observe this with git-1.6.6.1 on Cygwin/bash3 and git-1.7.1 on Ubuntu/bash3.
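
    This matches documented gitignore behaviour: git does not descend into directories that are themselves excluded, so a '!' rule can never re-include a file whose parent directory is ignored. As a reference point, a minimal sketch of the two rule sets the question already found to work - both ignore the directory's contents (so git still descends into a/b/c) rather than the directory itself:

        # top-level .gitignore: ignore the contents of a/b/c, re-include foo
        a/b/c/*
        !a/b/c/foo

        # or equivalently, in a per-directory a/b/c/.gitignore:
        *
        !foo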

  • C#: BackgroundWorker cloning resources?

    - by Dav
    The problem: I've been struggling with this particular problem for two days now and have just run out of ideas.

    A little background: we have a WinForms app that needs to access a database, construct a list of related in-memory objects from that data, and then display them on a DataGridView. The important point is that we first populate an app-wide cache (List), and then create a mirror of the cache local to the form on which the DGV lives (using the List constructor param). Because fetching the data takes a good few seconds to load (the DB sits on a LAN server), we decided to use a BackgroundWorker, and only refresh the DGV once the data is loaded. However, it seems that doing the loading via a BGW results in some memory leak... or an error on my part. When loaded using a blocking method call, the app consumes about 30 MB of RAM; with a BGW this jumps to 80 MB! While it may not seem like much, our clients are not too happy about it.

    Relevant code. Form:

        private void MyForm_Load(object sender, EventArgs e)
        {
            MyRepository.Instance.FinishedEvent += RefreshCache;
        }

        private void RefreshCache(object sender, EventArgs e)
        {
            dgvProducts.DataSource = new List<MyDataObj>(MyRepository.Products);
        }

    Repository:

        private static List<MyDataObj> Products { get; set; }
        public event EventHandler ProductsLoaded;

        public void GetProductsSync()
        {
            List<MyDataObj> p;
            using (MyL2SDb db = new MyL2SDb(MyConfig.ConnectionString))
            {
                p = db.PRODUCTS
                      .Select(prod => new MyDataObj { Id = prod.ID, Description = prod.DESCR })
                      .ToList();
            }
            Products = p;
            // tell the form to refresh the UI
            if (ProductsLoaded != null)
                ProductsLoaded(this, null);
        }

        public void GetProductsAsync()
        {
            using (BackgroundWorker myWorker = new BackgroundWorker())
            {
                myWorker.DoWork += delegate
                {
                    List<MyDataObj> p;
                    using (MyL2SDb db = new MyL2SDb(MyConfig.ConnectionString))
                    {
                        p = db.PRODUCTS
                              .Select(prod => new MyDataObj { Id = prod.ID, Description = prod.DESCR })
                              .ToList();
                    }
                    Products = p;
                };
                // tell the form to refresh the UI when finished
                myWorker.RunWorkerCompleted += GetProductsCompleted;
                myWorker.RunWorkerAsync();
            }
        }

        private void GetProductsCompleted(object sender, RunWorkerCompletedEventArgs e)
        {
            if (ProductsLoaded != null)
                ProductsLoaded(this, null);
        }

    End! GetProductsSync or GetProductsAsync are called on the main thread (not shown above). Could it be that the GarbageCollector just gets lost with two threads? Or is it Task Manager that shows incorrect values? I will be grateful for any responses, suggestions, criticism.

  • A simple Python deployment problem - a whole world of pain

    - by Evgeny
    We have several Python 2.6 applications running on Linux. Some of them are Pylons web applications, others are simply long-running processes that we run from the command line using nohup. We're also using virtualenv, both in development and in production. What is the best way to deploy these applications to a production server?

    In development we simply get the source tree into any directory, set up a virtualenv and run - easy enough. We could do the same in production and perhaps that really is the most practical solution, but it just feels a bit wrong to run svn update in production. We've also tried fab, but it just never works the first time. For every application something else goes wrong. It strikes me that the whole process is just too hard, given that what we're trying to achieve is fundamentally very simple. Here's what we want from a deployment process:

    - We should be able to run one simple command to deploy an updated version of an application. (If the initial deployment involves a bit of extra complexity, that's fine.)
    - When we run this command it should copy certain files, either out of a Subversion repository or out of a local working copy, to a specified "environment" on the server, which probably means a different virtualenv. We have both staging and production versions of the applications on the same server, so they need to somehow be kept separate. If it installs into site-packages, that's fine too, as long as it works.
    - We have some configuration files on the server that should be preserved (i.e. not overwritten or deleted by the deployment process).
    - Some of these applications import modules from other applications, so they need to be able to reference each other as packages somehow. This is the part we've had the most trouble with! I don't care whether it works via relative imports, site-packages or whatever, as long as it works reliably in both development and production.
    - Ideally the deployment process should automatically install external packages that our applications depend on (e.g. psycopg2).

    That's really it! How hard can it be?
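
    For illustration only, a minimal sketch of the kind of one-command deploy being asked for, in Python 2.6 style - the script name, paths and the requirements.txt convention are assumptions, not something from the question:

        #!/usr/bin/env python
        # deploy.py -- hypothetical sketch: export a tree from Subversion
        # into a release directory, build a fresh virtualenv there, and
        # install declared dependencies into it.
        import subprocess
        import sys

        def run(*cmd):
            # Echo each command and fail loudly on the first error.
            print ' '.join(cmd)
            subprocess.check_call(cmd)

        def deploy(repo_url, target_dir):
            run('svn', 'export', '--force', repo_url, target_dir)
            run('virtualenv', target_dir + '/env')
            # Assumes each application ships a requirements.txt listing
            # its external packages (e.g. psycopg2).
            run(target_dir + '/env/bin/pip', 'install', '-r',
                target_dir + '/requirements.txt')

        if __name__ == '__main__':
            deploy(sys.argv[1], sys.argv[2])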

  • Should I use Python or Assembly for a super fast copy program

    - by PyNEwbie
    As a maintenance issue I need to routinely (3-5 times per year) copy a repository that now has over 20 million files and exceeds 1.5 terabytes in total disk space. I am currently using RICHCOPY, but have tried others. RICHCOPY seems the fastest, but I do not believe I am getting close to the limits of the capabilities of my XP machine.

    I am toying around with using what I have read in The Art of Assembly Language to write a program to copy my files. My other thought is to start learning how to multi-thread in Python to do the copies. I am toying around with the idea of doing this in Assembly because it seems interesting, but while my time is not incredibly precious it is precious enough that I am trying to get a sense of whether or not I will see significant enough gains in copy speed. I am assuming that I would, but I only started really learning to program 18 months ago and it is still more or less a hobby. Thus I may be missing some fundamental concept of what happens with interpreted languages. Any observations or experiences would be appreciated. Note, I am not looking for any code. I have already written a basic copy program in Python 2.6 that is no slower than RICHCOPY. I am looking for some observations on which will give me more speed.

    Right now it takes me over 50 hours to make a copy from a disk to a Drobo and then back from the Drobo to a disk. I have a LogicCube for when I am simply duplicating a disk, but sometimes I need to go from a disk to the Drobo or the reverse. I am thinking that, given that I can sector-copy a 3/4-full 2 terabyte drive using the LogicCube in under seven hours, I should be able to get close to that using Assembly, but I don't know enough to know if this is valid. (Yes, sometimes ignorance is bliss.)

    The reason I need to speed it up is that I have had two or three cycles where something has happened during the copy (fifty hours is a long time to expect the world to hold still) that has caused me to have to trash the copy and start over. For example, last week the water main broke under our building and shorted out the power.
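
    For what it's worth, a copy like this is almost certainly I/O-bound rather than CPU-bound, so the language matters much less than how many reads and writes are in flight; Python releases its interpreter lock during file I/O, which is what makes the multi-threading idea plausible at all. A queue-based sketch of that idea (Python 2.6 style, standard library only, all names illustrative):

        import os
        import shutil
        import threading
        import Queue

        def worker(q, src_root, dest_root):
            while True:
                src = q.get()
                if src is None:          # sentinel: no more work
                    return
                dst = os.path.join(dest_root, os.path.relpath(src, src_root))
                try:
                    os.makedirs(os.path.dirname(dst))
                except OSError:
                    pass                 # directory already exists
                shutil.copy2(src, dst)   # copies data and timestamps

        def copy_tree(src_root, dest_root, num_threads=4):
            q = Queue.Queue(maxsize=1000)
            threads = [threading.Thread(target=worker, args=(q, src_root, dest_root))
                       for _ in range(num_threads)]
            for t in threads:
                t.start()
            for dirpath, _, filenames in os.walk(src_root):
                for name in filenames:
                    q.put(os.path.join(dirpath, name))
            for _ in threads:
                q.put(None)              # one sentinel per worker
            for t in threads:
                t.join()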

  • Tracking upstream svn changes with git-svn and github?

    - by Joseph Turian
    How do I track upstream SVN changes using git-svn and github? I used git-svn to convert an SVN repo to git on github:

        $ git svn clone -s http://svn.osqa.net/svnroot/osqa/ osqa
        $ cd osqa
        $ git remote add origin git@github.com:turian/osqa.git
        $ git push origin master

    I then made a few changes in my git repo, committed, and pushed to github. Now, I am on a new machine. I want to take upstream SVN changes, merge them with my github repo, and push them to my github repo. This documentation says: "If you ever lose your local copy, just run the import again with the same settings, and you'll get another working directory with all the necessary SVN metainfo." So I did the following, but none of the commands work as desired. How do I track upstream SVN changes using git-svn and github? What am I doing wrong?

        $ git svn clone -s http://svn.osqa.net/svnroot/osqa/ osqa
        $ cd osqa
        $ git remote add origin git@github.com:turian/osqa.git
        $ git push origin master
        To git@github.com:turian/osqa.git
         ! [rejected]        master -> master (non-fast forward)
        error: failed to push some refs to 'git@github.com:turian/osqa.git'

        $ git pull
        remote: Counting objects: 21, done.
        remote: Compressing objects: 100% (17/17), done.
        remote: Total 17 (delta 7), reused 9 (delta 0)
        Unpacking objects: 100% (17/17), done.
        From git@github.com:turian/osqa
         * [new branch]      master     -> origin/master
        From git@github.com:turian/osqa
         * [new tag]         master     -> master
        You asked me to pull without telling me which branch you
        want to merge with, and 'branch.master.merge' in
        your configuration file does not tell me either. Please
        name which branch you want to merge on the command line and
        try again (e.g. 'git pull <repository> <refspec>').
        See git-pull(1) for details on the refspec.
        ...

        $ /usr//lib/git-core/git-svn rebase
        warning: refname 'master' is ambiguous.
        First, rewinding head to replay your work on top of it...
        Applying: Added forum/management/commands/dumpsettings.py
        error: Ref refs/heads/master is at 6acd747f95aef6d9bce37f86798a32c14e04b82e but expected a7109d94d813b20c230a029ecd67801e6067a452
        fatal: Cannot lock the ref 'refs/heads/master'.
        Could not move back to refs/heads/master
        rebase refs/remotes/trunk: command returned error: 1

  • How to get hibernate3-maven-plugin hbm2ddl to find JDBC driver?

    - by HDave
    I have a Java project I am building with Maven. I am now trying to get the hibernate3-maven-plugin to run the hbm2ddl tool to generate a schema.sql file I can use to create the database schema from my annotated domain classes. This is a JPA application that uses Hibernate as the provider. In my persistence.xml file I call out the MySQL driver:

        <property name="hibernate.dialect" value="org.hibernate.dialect.MySQLDialect"/>
        <property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver"/>

    When I run Maven, I see it processing all my classes, but when it goes to output the schema, I get the following error:

        ERROR org.hibernate.connection.DriverManagerConnectionProvider - JDBC Driver class not found: com.mysql.jdbc.Driver
        java.lang.ClassNotFoundException: com.mysql.jdbc.Driver

    I have the MySQL driver as a dependency of this module. However it seems like the hbm2ddl tool cannot find it. I would have guessed that the Maven plugin would have known to search the local Maven file repository for this driver. What gives? The relevant part of my pom.xml is this:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>hibernate3-maven-plugin</artifactId>
          <executions>
            <execution>
              <phase>process-classes</phase>
              <goals>
                <goal>hbm2ddl</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <components>
              <component>
                <name>hbm2ddl</name>
                <implementation>jpaconfiguration</implementation>
              </component>
            </components>
            <componentProperties>
              <persistenceunit>my-unit</persistenceunit>
            </componentProperties>
          </configuration>
        </plugin>
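
    One detail that may be relevant here: a Maven plugin runs with its own classpath, separate from the project's, so a project-level dependency is not automatically visible to hbm2ddl. A common approach is to declare the JDBC driver as a dependency of the plugin itself - sketched below against the pom above, with the version number being an assumption:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>hibernate3-maven-plugin</artifactId>
          <!-- executions and configuration as above -->
          <dependencies>
            <dependency>
              <groupId>mysql</groupId>
              <artifactId>mysql-connector-java</artifactId>
              <version>5.1.9</version>
            </dependency>
          </dependencies>
        </plugin>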

  • Storing simulation results in a persistent manner for Python?

    - by Az
    Background: I'm running multiple simulations on a set of data. For each session, I'm allocating projects to students. The difference between each session is that I'm randomising the order of the students, such that all the students get a shot at being assigned a project they want. I was writing out some of the allocations in a spreadsheet (i.e. Excel) and it basically looked like this (tiny snapshot; the actual table extends to a few thousand sessions, roughly 100 students):

        |          | Session 1 | Session 2 | Session 3 |
        |----------|-----------|-----------|-----------|
        | Stu1     | Proj_AA   | Proj_AB   | Proj_AB   |
        | Stu2     | Proj_AB   | Proj_AA   | Proj_AC   |
        | Stu3     | Proj_AC   | Proj_AC   | Proj_AA   |

    Now, the code that deals with the allocation currently stores a session in an object. The next time the allocation is run, the object is over-written. Thus what I'd really like to do is to store all the allocation results. This is important since I later need to derive from the data information such as: which project Stu1 got assigned to the most, or perhaps how popular Proj_AC was (how many times it was assigned / number of sessions).

    Question(s): What methods can I possibly use to store such session information persistently? Basically, each session output needs to add itself to the repository after ending and before beginning the next allocation cycle.

    One solution that was suggested by a friend was mapping these results to a relational database using SQLAlchemy. I kind of like the idea since this does give me an opportunity to delve into databases. Now, the database structure I was recommended was:

        | Session  | Student   | Project   |
        |----------|-----------|-----------|
        | 1        | Stu1      | Proj_AA   |
        | 1        | Stu2      | Proj_AB   |
        | 1        | Stu3      | Proj_AC   |
        | 2        | Stu1      | Proj_AB   |
        | 2        | Stu2      | Proj_AA   |
        | 2        | Stu3      | Proj_AC   |
        | 3        | Stu1      | Proj_AB   |
        | 3        | Stu2      | Proj_AC   |
        | 3        | Stu3      | Proj_AA   |

    Here, it was suggested that I make the Session and Student columns a composite key. That way I can access a specific record for a particular student for a particular session, or I can merely get the entire allocation run for a particular session.

    Questions: Is the idea a good one? How does one implement and query a composite key using SQLAlchemy? What happens to the database if a particular student is not assigned a project (which happens if all the projects he wants are taken)? In the code, if a student is not assigned a project, instead of a proj_id he simply gets None for that field/object.

    I apologise for asking multiple questions, but since these are closely related, I thought I'd ask them in the same space.
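
    On the composite-key part of the question, a minimal SQLAlchemy sketch (declarative style; the column meanings are taken from the question, everything else - class names, engine URL - is an assumption). A composite primary key is declared simply by marking both columns primary_key=True, and a nullable project column covers the student-without-a-project case:

        from sqlalchemy import create_engine, Column, Integer, String
        from sqlalchemy.ext.declarative import declarative_base
        from sqlalchemy.orm import sessionmaker

        Base = declarative_base()

        class Allocation(Base):
            __tablename__ = 'allocations'
            # composite primary key: (session_id, student)
            session_id = Column(Integer, primary_key=True)
            student = Column(String, primary_key=True)
            # nullable, so an unassigned student simply stores None/NULL
            project = Column(String, nullable=True)

        engine = create_engine('sqlite:///allocations.db')
        Base.metadata.create_all(engine)
        db = sessionmaker(bind=engine)()

        db.add(Allocation(session_id=1, student='Stu1', project='Proj_AA'))
        db.add(Allocation(session_id=1, student='Stu2', project=None))
        db.commit()

        # one record for one student in one session (lookup by composite key)
        print db.query(Allocation).get((1, 'Stu1')).project
        # the entire allocation run for session 1
        for row in db.query(Allocation).filter_by(session_id=1):
            print row.student, row.project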

  • Any simple approaches for managing customer data change requests for global reference files?

    - by Kelly Duke
    For the first time, I am developing in an environment in which there is a central repository for a number of different industry-standard reference data tables, and many different customers who need to select records from these tables to fill in foreign key information for their customer-specific records.

    Because these industry-standard reference files are utilized by all customers, I want to reserve Create/Update/Delete access to these records for global product administrators. However, I would like to implement a (semi-)automated interface by which specific customers could request record additions, deletions or modifications to any of the industry-standard reference files that are shared among all customers.

    I know I need something like a "data change request" table specifying: user id, user request datetime, request type (insert, modify, delete), a user-entered text explanation of the change request, the user request's current status (pending, declined, completed), admin resolution datetime, admin id, an admin-entered text description of the resolution, etc. What I can't figure out is how to elegantly handle the fact that these data change requests could apply to dozens of different tables with differing column definitions.

    I would like to give the customer users making these data change requests a convenient way to enter their proposed record additions/modifications directly into CRUD screens that look very much like the reference table CRUD screens they don't have write/delete permissions for (with an additional text explanation and perhaps a request priority field). I would also like to give the global admins a tool that allows them to view all the outstanding data change requests for the users they oversee, sorted by date requested or user/date requested. Upon selecting a data change request record off the list, the admin would be directed to another CRUD screen populated with the fields the customer user requested for the new/modified reference table record, along with the customer's text explanation, the request status and the text resolution explanation field. At this point the admin could accept/edit/reject the requested change, and if accepted, the affected reference file would be automatically updated with the appropriate fields, and the data change request record's status, text resolution explanation and resolution datetime would all also be appropriately updated.

    However, I want to keep the actual production reference tables as simple as possible and free from these extraneous and typically null customer change request fields. I'd also like the data change request file to aggregate all data change requests across all the reference tables, yet somehow "point to" the specific reference table and primary key in question for modification and deletion requests, or the specific reference table and associated customer-entered field values in question for record creation requests.

    Does anybody have any ideas of how to design something like this effectively? Is there a cleaner, simpler way I am missing? Thank you so much for reading.

  • While creating an archetype, I get the following error

    - by munna
        D:\Training\workspace\vppsource>mvn archetype:generate -B -DarchetypeGroupId=org.appfuse.archetypes -DarchetypeArtifactId=appfuse-modular-struts-archetype -DarchetypeVersion=2.1.0-M1 -DgroupId=com.vmware -DartifactId=vpp
        [INFO] Scanning for projects...
        [INFO] Searching repository for plugin with prefix: 'archetype'.
        [INFO] ------------------------------------------------------------------------
        [INFO] Building Maven Default Project
        [INFO]    task-segment: [archetype:generate] (aggregator-style)
        [INFO] ------------------------------------------------------------------------
        [INFO] Preparing archetype:generate
        [INFO] No goals needed for project - skipping
        [INFO] [archetype:generate {execution: default-cli}]
        [INFO] Generating project in Batch mode
        [WARNING] Error reading archetype catalog http://repo1.maven.org/maven2
        org.apache.maven.wagon.TransferFailedException: Error transferring file: Connection timed out: connect
                at org.apache.maven.wagon.providers.http.LightweightHttpWagon.fillInputData(LightweightHttpWagon.java:143)
                at org.apache.maven.wagon.StreamWagon.getInputStream(StreamWagon.java:116)
                at org.apache.maven.wagon.StreamWagon.getIfNewer(StreamWagon.java:88)
                at org.apache.maven.wagon.StreamWagon.get(StreamWagon.java:61)
                at org.apache.maven.archetype.source.RemoteCatalogArchetypeDataSource.getArchetypeCatalog(RemoteCatalogArchetypeDataSource.java:97)
                at org.apache.maven.archetype.DefaultArchetypeManager.getRemoteCatalog(DefaultArchetypeManager.java:195)
                at org.apache.maven.archetype.DefaultArchetypeManager.getRemoteCatalog(DefaultArchetypeManager.java:184)
                at org.apache.maven.archetype.ui.DefaultArchetypeSelector.getArchetypesByCatalog(DefaultArchetypeSelector.java:278)
                at org.apache.maven.archetype.ui.DefaultArchetypeSelector.selectArchetype(DefaultArchetypeSelector.java:69)
                at org.apache.maven.archetype.mojos.CreateProjectFromArchetypeMojo.execute(CreateProjectFromArchetypeMojo.java:186)
                at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490)
                at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694)
                at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569)
                at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539)
                at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387)
                at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:284)
                at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180)
                at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328)
                at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
                at org.apache.maven.cli.MavenCli.main(MavenCli.java:362)
                at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60)
                at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
                at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
                at java.lang.reflect.Method.invoke(Method.java:597)
                at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
                at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
                at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
                at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
        Caused by: java.net.ConnectException: Connection timed out: connect
                at java.net.PlainSocketImpl.socketConnect(Native Method)
                at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
                at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
                at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
                at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
                at java.net.Socket.connect(Socket.java:529)
                at java.net.Socket.connect(Socket.java:478)
                at sun.net.NetworkClient.doConnect(NetworkClient.java:163)
                at sun.net.www.http.HttpClient.openServer(HttpClient.java:394)
                at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
                at sun.net.www.http.HttpClient.<init>(HttpClient.java:233)
                at sun.net.www.http.HttpClient.New(HttpClient.java:306)
                at sun.net.www.http.HttpClient.New(HttpClient.java:323)
                at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:860)
                at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:801)
                at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:726)
                at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1049)
                at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:373)
                at org.apache.maven.wagon.providers.http.LightweightHttpWagon.fillInputData(LightweightHttpWagon.java:115)
                ... 28 more
        [WARNING] No archetype found in Remote catalog. Defaulting to internal Catalog
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD FAILURE
        [INFO] ------------------------------------------------------------------------
        [INFO]
        [INFO] ------------------------------------------------------------------------
        [INFO] For more information, run Maven with the -e switch
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 46 seconds
        [INFO] Finished at: Wed Jun 09 16:11:07 IST 2010
        [INFO] Final Memory: 11M/28M
        [INFO] ------------------------------------------------------------------------

  • NHibernate GenericADOException

    - by Ris90
    Hi, I'm trying to make a simple many-to-one association using NHibernate. I have a class Recruit with this mapping:

        <class name="Recruit" table="Recruits">
          <id name="ID">
            <generator class="native"/>
          </id>
          <property name="Lastname" column="lastname"/>
          <property name="Name" column="name"/>
          <property name="MedicalReport" column="medicalReport"/>
          <property name="DateOfBirth" column="dateOfBirth" type="Date"/>
          <many-to-one name="AssignedOnRecruitmentOffice" column="assignedOnRecruitmentOffice" class="RecruitmentOffice"/>
        </class>

    which is connected many-to-one to RecruitmentOffices:

        <class name="RecruitmentOffice" table="RecruitmentOffices">
          <id name="ID" column="ID">
            <generator class="native"/>
          </id>
          <property name="Chief" column="chief"/>
          <property name="Name" column="name"/>
          <property name="Address" column="address"/>
          <set name="Recruits" cascade="save-update" inverse="true" lazy="true">
            <key>
              <column name="AssignedOnRecruitmentOffice"/>
            </key>
            <one-to-many class="Recruit"/>
          </set>
        </class>

    I also created a repository class with an Insert method:

        public void Insert(Recruit recruit)
        {
            using (ITransaction transaction = session.BeginTransaction())
            {
                session.Save(recruit);
                transaction.Commit();
            }
        }

    Then I try to save a new recruit to the database:

        Recruit test = new Recruit();
        RecruitmentOffice office = new RecruitmentOffice();
        office.Name = "test";
        office.Chief = "test";
        test.AssignedOnRecruitmentOffice = office;
        test.Name = "test";
        test.DateOfBirth = DateTime.Now;
        RecruitRepository testing = new RecruitRepository();
        testing.Insert(test);

    and I get this error on session.Save:

        GenericADOException: could not insert: [OSiUBD.Models.DAO.Recruit]
        [SQL: INSERT INTO Recruits (lastname, name, medicalReport, dateOfBirth, assignedOnRecruitmentOffice) VALUES (?, ?, ?, ?, ?); select SCOPE_IDENTITY()]

  • How to work with files and streams in PHP, case: if we open a file in Class A and pass the open stream to Class B

    - by Rachel
    I have two classes: a business logic class (BLO) and a data access class (DAO), and my BLO class depends on my DAO class. Basically I am opening a CSV file to write into in my BLO class's constructor, as I create the BLO object and pass in a file from the command prompt:

        $this->fin = fopen($file,'w+') or die('Cannot open file');

    Now inside the BLO I have one function, notifiy, which depends on the DAO class: it calls getCurrentDBSnapshot from the DAO and passes the open stream so that data gets populated into the stream.

    BLO class constructor:

        public function __construct($file)
        {
            // Open Unica file for parsing.
            $this->fin = fopen($file,'w+') or die('Cannot open file');
            // Initialize the repository DAO.
            $this->dao = new Dao('REPO');
        }

    BLO class method that interacts with the DAO method and calls getCurrentDBSnapshot:

        public function notifiy()
        {
            $data = $this->fin;
            var_dump($data); // resource(9) of type (stream)
            $outputFile = $this->dao->getCurrentDBSnapshot($data);
            // So basically I am passing in $data, which is resource(9) of type (stream)
        }

    DAO function getCurrentDBSnapshot, which gets the current state of a database table:

        public function getCurrentDBSnapshot($data)
        {
            $query = "SELECT * FROM myTable";
            // Basically just preparing the query.
            $stmt = $this->connection->prepare($query);
            // Execute the statement
            $stmt->execute();
            $header = array();
            while ($row = $stmt->fetch(PDO::FETCH_ASSOC))
            {
                if (empty($header))
                {
                    // Get the header values from the table (column names)
                    $header = array_keys($row);
                    // Populate the csv file with header information from myTable
                    fputcsv($data, $header);
                }
                // Export every row to the file
                fputcsv($data, $row);
            }
            var_dump($data); // resource(9) of type (stream)
            return $data;
        }

    So basically I am getting back resource(9) of type (stream) from getCurrentDBSnapshot, and I am storing that stream in $outputFile in the BLO class method notifiy. Now I want to close the file which I opened for writing, so it should be fclose of $outputFile or $data, but in both cases it gives me:

        var_dump(fclose($outputFile)) as bool(false)
        var_dump(fclose($data)) as bool(false)

    and

        var_dump($outputFile) as resource(9) of type (Unknown)
        var_dump($data) as resource(9) of type (Unknown)

    My question is: if I open a file using fopen in class A, call a class B method from a class A method and pass the open stream (in our case $data), and class B performs some work and returns the open stream, then how can I close that open stream in class A's method? Or is it OK to keep that stream open and not use fclose? I would appreciate inputs as I am not very sure how this should be implemented.

  • Installing Enclojure with NetBeans

    - by jdknewb
    Hi all, I am having trouble installing Enclojure and getting it to work. I have followed the guide at http://www.enclojure.org/gettingstarted and successfully installed Enclojure (I think). However, when I try to build the sample application (labrepl) I get a bunch of errors and a failed build. I haven't used Java in a long time and I've never used NetBeans, and the error doesn't seem very helpful given my limited knowledge of this domain. I'm using the latest NetBeans and the Enclojure URL from the guide. Since I am on Windows, I can't use git to clone the repo, so I'm not sure what to do from here. Anyway, here are the error messages:

        WARNING: You are running embedded Maven builds, some build may fail due to incompatibilities with latest Maven release.
        To set Maven instance to use for building, click here.
        Scanning for projects...
        [#process-resources]
        [resources:resources]
        Using default encoding to copy filtered resources.
        [#compile]
        [ERROR]Transitive dependency resolution for scope: compile has failed for your project.
        [ERROR]Error message: Missing:
        [ERROR]----------
        [ERROR]1) org.clojure:clojure-contrib:jar:1.2.0-master-SNAPSHOT
        [ERROR]  Try downloading the file manually from the project website.
        [ERROR]  Then, install it using the command:
        [ERROR]      mvn install:install-file -DgroupId=org.clojure -DartifactId=clojure-contrib -Dversion=1.2.0-master-SNAPSHOT -Dpackaging=jar -Dfile=/path/to/file
        [ERROR]  Alternatively, if you host your own repository you can deploy the file there:
        [ERROR]      mvn deploy:deploy-file -DgroupId=org.clojure -DartifactId=clojure-contrib -Dversion=1.2.0-master-SNAPSHOT -Dpackaging=jar -Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id]
        [ERROR]  Path to dependency:
        [ERROR]  1) labrepl:labrepl:jar:0.0.1
        [ERROR]  2) org.clojure:clojure-contrib:jar:1.2.0-master-SNAPSHOT
        [ERROR]----------
        [ERROR]1 required artifact is missing.
        [ERROR]for artifact:
        [ERROR]  labrepl:labrepl:jar:0.0.1
        [ERROR]from the specified remote repositories:
        [ERROR]  central (http://repo1.maven.org/maven2),
        [ERROR]  clojars (http://clojars.org/repo/),
        [ERROR]  incanter (http://repo.incanter.org),
        [ERROR]  clojure-snapshots (http://build.clojure.org/snapshots),
        [ERROR]  clojure (http://build.clojure.org/releases),
        [ERROR]  clojure-releases (http://build.clojure.org/releases)
        [ERROR]Group-Id: labrepl
        [ERROR]Artifact-Id: labrepl
        [ERROR]Version: 0.0.1
        [ERROR]From file: C:\Users\chloey\Documents\NetBeansProjects\RelevanceLabRepl\pom.xml
        ------------------------------------------------------------------------
        For more information, run with the -e flag
        ------------------------------------------------------------------------
        BUILD FAILED
        ------------------------------------------------------------------------
        Total time: 1 second
        Finished at: Wed Jun 09 21:53:04 CDT 2010
        Final Memory: 72M/172M
        ------------------------------------------------------------------------

    Thanks all.

  • What facets have I missed for creating a 3 person guerilla dev team?

    - by Penguinix
    Sorry for the Windows developers out there, this solution is for Macs only. This set of applications accounts for: usability testing, screen capture (video and still), version control, task lists, bug tracking, a developer IDE, a web server, a blog, shared doc editing on the web, team and individual chat, email, databases and continuous integration. It does assume your team members provide their own machines, and one person has a spare old computer to be the source repository and web server. All for under $200.

    Usability
    - Silverback licenses = 3 x $49.95. "Spontaneous, unobtrusive usability testing software for designers and developers."

    Source Control Server and Clients (multiple options)
    - Subversion = free. Subversion is an open source version control system.
    - Versions (currently in beta) = free. Versions provides a pleasant way to work with Subversion on your Mac.
    - Diffly = free. "Diffly is a tool for exploring Subversion working copies. It shows all files with changes and, clicking on a file, shows a highlighted view of the changes for that file. When you are ready to commit, Diffly makes it easy to select the files you want to check in and assemble a useful commit message."

    Bug/Feature/Defect Tracking (multiple options)
    - Bugzilla = free. Bugzilla is a "Defect Tracking System" or "Bug-Tracking System". Defect tracking systems allow individuals or groups of developers to keep track of outstanding bugs in their product effectively. Most commercial defect-tracking software vendors charge enormous licensing fees.
    - Trac = free. Trac is an enhanced wiki and issue tracking system for software development projects.

    Database Server & Clients
    - MySQL = free
    - CocoaMySQL = free

    Web Server
    - Apache = free

    Development and Build Tools
    - XCode = free
    - CruiseControl = free. CruiseControl is a framework for a continuous build process. It includes, but is not limited to, plugins for email notification, Ant, and various source control tools. A web interface is provided to view the details of the current and previous builds.

    Collaboration Tools
    - Writeboard = free
    - Ta-da List = free
    - Campfire chat for 4 users = free
    - WordPress = free. "WordPress is a state-of-the-art publishing platform with a focus on aesthetics, web standards, and usability. WordPress is both free and priceless at the same time."
    - Gmail = free. "Gmail is a new kind of webmail, built on the idea that email can be more intuitive, efficient, and useful."

    Screen Capture (Video / Still)
    - Jing = free. "The concept of Jing is the always-ready program that instantly captures and shares images and video… from your computer to anywhere."

    Lots of great responses:
    - TeamCity [Yo|||]
    - Skype [Eric DeLabar]
    - FogBugz [chakrit]
    - iChat AV and Screen Sharing (built in to the OS) [amrox]
    - Google Docs [amrox]

  • msysgit git-am can't apply its own git format-patch sequence

    - by Andrian Nord
    I'm using msysgit git on Windows to operate on a central svn repository. I'm using git as I want to have its awesome little local branches for everything, rebasing on each other. I also need to update from the central repo often, so using separate svn/git is not an option.

    The problem is: git svn --help (the man page) says that it is not a good idea to use git merge into the master branch (which is set to track svn's trunk) from local branches, as this will ruin the party and git svn dcommit would not work anymore. I know that it's not exactly true and you may use git merge if you are merging from a branch which was properly rebased on master prior to the merge, but I'm trying to make it safer and actually use git format-patch and git am. We are using code review, so I'm making patches anyway. I also knew about git cherry-pick, but I want to just git am /reviewed/patches/dir/* without actually recalling which commits correspond to these patches (without reading the patches, that is).

    So, what's wrong with git svn and git am? It's simple - git am, for a few very hard points, is doing CRLF to LF conversion on the patches supplied (git-mailsplit is doing this, to be precise), if not rebasing. git format-patch is also producing proper (LF-ended) patches. As my repo is mostly CRLF (and it should remain so), patches are, obviously, failing due to the wrong EOL. Converting diffs to CRLF and somehow hacking git am to prevent it from conversion doesn't work either. It will fail if any file was removed or deleted - git apply will complain about an expected /dev/null (but it got /dev/null^M). And if I apply it with git am --ignore-space-change --ignore-whitespace, it will commit LF endings straight to the index, which is also weird. I don't know if that will be preserved over committing into svn (via git svn dcommit) and checking it out, and I don't want to try it out.

    Of course, it's still possible to try hacking around the patches to convert only the actual diffs, but this is too many hacks for a simple task. So, I wonder, is there really no established way to produce patches and apply them to the same repo on the same system? It just feels weird that msysgit can't apply its own patches.

  • Is it possible to reliably auto-decode user files to Unicode? [C#]

    - by NVRAM
    I have a web application that allows users to upload their content for processing. The processing engine expects UTF-8 (and I'm composing XML from multiple users' files), so I need to ensure that I can properly decode the uploaded files. Since I'd be surprised if any of my users even knew their files were encoded, I have very little hope they'd be able to correctly specify the encoding (decoder) to use. And so, my application is left with the task of detecting before decoding. This seems like such a universal problem that I'm surprised not to find either a framework capability or a general recipe for the solution. Can it be I'm not searching with meaningful search terms?

    I've implemented BOM-aware detection (http://en.wikipedia.org/wiki/Byte_order_mark), but I'm not sure how often files will be uploaded without a BOM to indicate encoding, and this isn't useful for most non-UTF files. My questions boil down to:

    - Is BOM-aware detection sufficient for the vast majority of files?
    - In the case where BOM detection fails, is it possible to try different decoders and determine if they are "valid"? (My attempts indicate the answer is "no.")
    - Under what circumstances will a "valid" file fail with the C# encoder/decoder framework?
    - Is there a repository anywhere that has a multitude of files with various encodings to use for testing?
    - While I'm specifically asking about C#/.NET, I'd like to know the answer for Java, Python and other languages for the next time I have to do this.

    So far I've found:

    - A "valid" UTF-16 file with Ctrl-S characters caused encoding to UTF-8 to throw an exception (illegal character?). (That was an XML encoding exception.)
    - Decoding a valid UTF-16 file with UTF-8 succeeds but gives text with null characters. Huh?

    Currently, I only expect UTF-8, UTF-16 and probably ISO-8859-1 files, but I want the solution to be extensible if possible. My existing set of input files isn't nearly broad enough to uncover all the problems that will occur with live files. Although the files I'm trying to decode are "text", I think they are often created with methods that leave garbage characters in the files. Hence "valid" files may not be "pure". Oh joy. Thanks.
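
    Since the question explicitly asks what the equivalent looks like in Python, here is a minimal BOM-sniffing sketch (standard library only; all names are illustrative). It addresses BOM-aware detection only - as suspected above, a file without a BOM needs statistical heuristics (e.g. the chardet library) instead:

        import codecs

        # (BOM, codec) pairs, longest BOMs first so that UTF-32-LE
        # ('\xff\xfe\x00\x00') is not mistaken for UTF-16-LE ('\xff\xfe').
        _BOMS = [
            (codecs.BOM_UTF32_BE, 'utf-32'),
            (codecs.BOM_UTF32_LE, 'utf-32'),
            (codecs.BOM_UTF8,     'utf-8-sig'),  # strips the BOM on decode
            (codecs.BOM_UTF16_BE, 'utf-16'),     # 'utf-16' consumes the BOM
            (codecs.BOM_UTF16_LE, 'utf-16'),
        ]

        def sniff_encoding(path, fallback='utf-8'):
            """Best-effort guess of a file's codec from its BOM, if any."""
            with open(path, 'rb') as f:
                head = f.read(4)
            for bom, codec in _BOMS:
                if head.startswith(bom):
                    return codec
            return fallback  # no BOM: fall back or apply heuristics

        text = codecs.open('upload.txt', encoding=sniff_encoding('upload.txt')).read()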

  • Accessing different connection strings at runtime in ASP.NET MVC 1

    - by Neil T.
    I'm trying to implement integration testing in my ASP.NET MVC 1.0 solution. The technologies in use are LINQ-to-SQL, NUnit and WatiN. I recently discovered a pattern that will allow me to create a testing version of the database on the fly without modifying the development version of the database. I needed this behavior in order to run my user interface tests in WatiN, which may modify the database. The plan is to modify the connection string in the Web.config file and pass that new connection string to the DataContext constructor. This way, I don't have to add routes or modify my URLs in order to perform the integration testing.

    I've set up the project so that the test setup can modify the connection string to point to the test database when the tests are running. The connection string is stored in Web.config. The problem I'm having is that when I try to run the tests, I get a NullReferenceException when trying to access the HttpContext. From everything that I have read so far, the HttpContext is only available within the context of a controller. Here is the code for the property that is supposed to give me the reference to the Web.config file:

        private System.Configuration.Configuration WebConfig
        {
            get
            {
                ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap();
                // NullReferenceException occurs on this line.
                fileMap.ExeConfigFilename = HttpContext.Current.Server.MapPath("~\\web.config");
                System.Configuration.Configuration config =
                    ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None);
                return config;
            }
        }

    Is there something that I am missing in order to make this work? Is there a better way to accomplish what I'm trying to achieve?

    UPDATE: I decided to abandon the modification of Web.config in lieu of a "request-scoped DataContext" pattern that I found here. From the looks of it, I believe it should give me the results I'm looking for. However, during the TestFixtureSetUp, I try to create a new copy of the database for testing purposes, and it fails silently. When I get to the tests, the repository still uses the production database connection string to load data.

  • What's the best Scala build system?

    - by gatoatigrado
    I've seen questions about IDEs here - "Which is the best IDE for Scala development?" and "What is the current state of tooling for Scala?" - but I've had mixed experiences with IDEs. Right now, I'm using the Eclipse IDE with the automatic workspace refresh option, and KDE 4's Kate as my text editor. Here are some of the problems I'd like to solve:

    1. Use my own editor. IDEs are really geared at everyone using their components. I like Kate better, but the refresh system is very annoying (it doesn't use inotify; rather, maybe a 10s polling interval). The reason I don't use the built-in text editor is that broken auto-complete functionality causes the IDE to hang for maybe 10s.
    2. Rebuild only modified files. The Eclipse build system is broken. It doesn't know when to rebuild classes. I find myself almost half of the time going to project-clean. Worse, it seems even after it has finished building my project, a few minutes later it will pop up with some bizarre error (edit - these errors appear to be things that were previously solved with a project clean, but then come back up...). Finally, setting "Preferences / Continue launch if project contains errors" to "prompt" seems to have no effect for Scala projects (i.e. it always launches even if there are errors).
    3. Build customization. I can use the "nightly" release, but I'll want to modify and use my own Scala builds, not the compiler that's built into the IDE's plugin. It would also be nice to pass [e.g.] -Xprint:jvm to the compiler (to print out lowered code).
    4. Fast compiling. Though Eclipse doesn't always build right, it does seem snappy - even more so than fsc.

    I looked at Ant and Maven, though I haven't employed either yet (I'll also need to spend time solving #3 and #4). I wanted to see if anyone has other suggestions before I spend time getting a suboptimal build system working. Thanks in advance!

    UPDATE - I'm now using Maven, passing a project as a compiler plugin to it. It seems fast enough; I'm not sure what kind of jar caching Maven does. A current repository for Scala 2.8.0 is available [link]. The archetypes are very cool, and cross-platform support seems very good. However, about compile issues, I'm not sure if fsc is actually fixed, or if my project is stable enough (e.g. class names aren't changing) - running it manually doesn't bother me as much. If you'd like to see an example, feel free to browse the pom.xml files I'm using [github].

    UPDATE 2 - From benchmarks I've seen, Daniel Spiewak is right that buildr is faster than Maven (and, if one is doing incremental changes, Maven's 10-second latency gets annoying), so if one can craft a compatible build file, then it's probably worth it...

  • Getting a KeyError in DB backend of django-digest

    - by rtmie
    I have just started to integrate django_digest into my app. As a start I have added the @httpdigest decorator to one of my views. If I try to connect to it, I get a KeyError exception thrown in django_digest/backend/db.py. Depending on which DB I configure, I get a different KeyError in a different location. I am using Django 1.2.1, with MySQL (also tested with sqlite). I am using the default values for all the settings options. As far as I can see I have followed all instructions, but have been struggling all day with this. I am using the repository versions of django-digest and python-digest. Any steer would be greatly appreciated. Tracebacks for sqlite and MySQL below.

    With sqlite:

        Traceback (most recent call last):
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/servers/basehttp.py", line 674, in __call__
            return self.application(environ, start_response)
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/handlers/wsgi.py", line 248, in __call__
            signals.request_finished.send(sender=self.__class__)
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/dispatch/dispatcher.py", line 162, in send
            response = receiver(signal=self, sender=sender, **named)
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django_digest-1.8-py2.5.egg/django_digest/backend/db.py", line 16, in close_connection
            _connection.close()
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/db/backends/sqlite3/base.py", line 186, in close
            if self.settings_dict['NAME'] != ":memory:":
        KeyError: 'NAME'

    With MySQL:

        Traceback (most recent call last):
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/servers/basehttp.py", line 674, in __call__
            return self.application(environ, start_response)
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/handlers/wsgi.py", line 241, in __call__
            response = self.get_response(request)
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/handlers/base.py", line 142, in get_response
            return self.handle_uncaught_exception(request, resolver, exc_info)
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/handlers/base.py", line 166, in handle_uncaught_exception
            return debug.technical_500_response(request, *exc_info)
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/handlers/base.py", line 80, in get_response
            response = middleware_method(request)
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django_digest-1.8-py2.5.egg/django_digest/middleware.py", line 13, in process_request
            if (not self._authenticator.authenticate(request) and
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django_digest-1.8-py2.5.egg/django_digest/__init__.py", line 86, in authenticate
            partial_digest = self._account_storage.get_partial_digest(digest_response.username)
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django_digest-1.8-py2.5.egg/django_digest/backend/db.py", line 97, in get_partial_digest
            cursor = get_connection().cursor()
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/db/backends/__init__.py", line 75, in cursor
            cursor = self._cursor()
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/db/backends/mysql/base.py", line 281, in _cursor
            if settings_dict['USER']:
        KeyError: 'USER'

  • What Amazon S3 .NET Library is most useful and efficient?

    - by Geo
    There are two main open source .NET Amazon S3 libraries:

    - Three Sharp
    - LitS3

    I am currently using LitS3 in our MVC demo project, but there is some criticism about it. Has anyone here used both libraries, so they can give an objective point of view? Below are some sample calls using LitS3.

    On the demo controller:

        private S3Service s3 = new S3Service()
        {
            AccessKeyID = "Thekey",
            SecretAccessKey = "testing"
        };

        public ActionResult Index()
        {
            ViewData["Message"] = "Welcome to ASP.NET MVC!";
            return View("Index", s3.GetAllBuckets());
        }

    On the demo view:

        <% foreach (var item in Model) { %>
            <p>
                <%= Html.Encode(item.Name) %>
            </p>
        <% } %>

    EDIT 1: Since I have to keep moving and there is no clear indication of which library is more effective and kept more up to date, I have implemented a repository pattern with an interface that will allow me to change libraries if I need to in the future. Below is a section of the S3Repository that I have created and that will let me change libraries in case I need to:

        using LitS3;

        namespace S3Helper.Models
        {
            public class S3Repository : IS3Repository
            {
                private S3Service _repository;

                #region IS3Repository Members

                public IQueryable<Bucket> FindAllBuckets()
                {
                    return _repository.GetAllBuckets().AsQueryable();
                }

                public IQueryable<ListEntry> FindAllObjects(string BucketName)
                {
                    return _repository.ListAllObjects(BucketName).AsQueryable();
                }

                #endregion

    If you have any information about this question please let me know in a comment, and I will get back and edit the question.

    EDIT 2: Since this question is not getting attention, I integrated both libraries in my web app to see the differences in design. I know this is probably a waste of time, but I really want a good long-run solution. Below you will see two samples of the same action with the two libraries; maybe this will motivate some of you to let me know your thoughts.

    WITH THREE SHARP LIBRARY:

        public IQueryable<T> FindAllBuckets<T>()
        {
            List<string> list = new List<string>();
            using (BucketListRequest request = new BucketListRequest(null))
            using (BucketListResponse response = service.BucketList(request))
            {
                XmlDocument bucketXml = response.StreamResponseToXmlDocument();
                XmlNodeList buckets = bucketXml.SelectNodes("//*[local-name()='Name']");
                foreach (XmlNode bucket in buckets)
                {
                    list.Add(bucket.InnerXml);
                }
            }
            return list.Cast<T>().AsQueryable();
        }

    WITH LITS3 LIBRARY:

        public IQueryable<T> FindAllBuckets<T>()
        {
            return _repository.GetAllBuckets()
                .Cast<T>()
                .AsQueryable();
        }

  • bundler/capistrano is not installing gems with correct ruby version

    - by Douglas
    I'm trying to deploy my first app on a server with Capistrano, and I'm a bit lost with managing gemsets and ruby versions. These are my (server and workstation) versions:

        Rails 3.2.8
        RVM 1.16.17
        Gem 1.8.24
        Bundler 1.2.1
        pg gem 0.14.1

    My gemsets are:

        Gemsets for ruby-1.9.3-p194 (found in /usr/local/rvm/gems/ruby-1.9.3-p194)
           (default)
           global
        => rail3dev20120606

    I set the default gemset with:

        rvm use 1.9.3-p194@rail3dev20120606 --default --passenger

    When I run a:

        cap bundle:install

    the task ends with success, but when I do a:

        gem list

    there are many missing gems, though they are present in my Gemfile. When I go to check my gems in /var/www/opf/shared/bundle/ruby/ I find a folder called 1.9.1, and in /var/www/opf/shared/bundle/ruby/1.9.1/gems/ I can find all of my needed gems (specified in the Gemfile). I'm sure there is a problem with the ruby version, but how do I solve this? At the moment, if I do any rake command, I get a ruby crash [Bug] Segmentation fault, as it tries to access the db using the postgresql_adapter. I think, as many gems are missing, there must be some gem dependencies not satisfied, and maybe a gem is using an incompatible ruby version 1.9.1 though it expects 1.9.3.

    I think the issue is around managing ruby versions and gems. I'm certainly mixing up gemsets and my Capistrano deployment. I'm missing experience and info. Could anybody advise me how to handle this on the server? What are the best practices? How am I supposed to update my ruby version? With Capistrano's deploy.rb? Manually? With/without rvm? I saw a new version of ruby, 1.9.3-p327, has just been released. Should I use gemsets or not? What about the :rvm_ruby_string in my deploy.rb - is it correctly spelled, or should I remove the p194 part? Should I remove the :rvm_ruby_string? Keep it? Use a .rvmrc file??? I'm really lost and some kind of help would be welcome. This is my config/deploy.rb in any case:

        require 'bundler/capistrano'
        require File.join(File.dirname(__FILE__), 'deploy') + '/capistrano_database'

        set :rvm_type, :system
        set :rvm_ruby_string, 'ruby-1.9.3-p194@rail3dev20120606'
        require 'rvm/capistrano'

        set :application, 'opf'
        set :deploy_to, '/var/www/opf'
        set :rails_env, 'production'
        set :user, 'the_user'
        set :use_sudo, false
        set :group_writable, false

        set :scm, :git
        set :repository, 'git@github.com:user/opf.git'
        set :branch, 'master'

        default_run_options[:pty] = true
        set :deploy_via, :remote_cache

        server '192.168.5.200', :web, :app, :db, :primary => true

        # If you are using Passenger mod_rails uncomment this:
        namespace :deploy do
          task :start do ; end
          task :stop do ; end
          task :restart, :roles => :app, :except => { :no_release => true } do
            run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}"
          end
        end

    Thanks for any help.

  • How to use Crtl in a Delphi unit in a C++Builder project? (or link to C++Builder C runtime library)

    - by Craig Peterson
    I have a Delphi unit that is statically linking a C .obj file using the {$L xxx} directive. The C file is compiled with C++Builder's command line compiler. To satisfy the C file's runtime library dependencies (_assert, memmove, etc.), I'm including the crtl unit Allen Bauer mentioned here.

        unit FooWrapper;

        interface

        implementation

        uses
          Crtl;  // Part of the Delphi RTL

        {$L FooLib.obj}  // Compiled with "bcc32 -q -c foolib.c"

        procedure Foo; cdecl; external;

        end.

    If I compile that unit in a Delphi project (.dproj) everything works correctly. If I compile that unit in a C++Builder project (.cbproj) it fails with the error:

        [ILINK32 Error] Fatal: Unable to open file 'CRTL.OBJ'

    And indeed, there isn't a crtl.obj file in the RAD Studio install folder. There is a .dcu, but no .pas. Trying to add crtdbg to the uses clause (the C header where _assert is defined) gives an error that it can't find crtdbg.dcu. If I remove the uses clause, it instead fails with errors that __assert and _memmove aren't found.

    So, in a Delphi unit in a C++Builder project, how can I export functions from the C runtime library so they're available for linking? I'm already aware of Rudy Velthuis's article. I'd like to avoid manually writing Delphi wrappers if possible, since I don't need them in Delphi, and C++Builder must already include the necessary functions.

    Edit: For anyone who wants to play along at home, the code is available in Abbrevia's Subversion repository at https://tpabbrevia.svn.sourceforge.net/svnroot/tpabbrevia/trunk. I've taken David Heffernan's advice and added an "AbCrtl.pas" unit that mimics crtl.dcu when compiled in C++Builder. That got the PPMd support working, but the Lzma and WavPack libraries both fail with link errors:

        [ILINK32 Error] Error: Unresolved external '_beginthreadex' referenced from ABLZMA.OBJ
        [ILINK32 Error] Error: Unresolved external 'sprintf' referenced from ABWAVPACK.OBJ
        [ILINK32 Error] Error: Unresolved external 'strncmp' referenced from ABWAVPACK.OBJ
        [ILINK32 Error] Error: Unresolved external '_ftol' referenced from ABWAVPACK.OBJ

    AFAICT, all of them are declared correctly, and the _beginthreadex one is actually declared in AbLzma.pas, so it's used by the pure Delphi compile as well. To see it yourself, just download the trunk (or just the "source" and "packages" directories), disable the {$IFDEF BCB} block at the bottom of AbDefine.inc, and try to compile the C++Builder "Abbrevia.cbproj" project.

    Read the article

  • Android CouchDB - libcouch and IPC AIDL Services

    - by dirtySanchez
    I am working on a native CouchDB app with Android. Just this week CouchOne released libcouch, described as "Library files needed to interact with CouchDB on Android": couchone_libcouch@Github. It is a basic app that installs CouchDB if the CouchDB service (which comes with CouchDB if it was installed previously) can't be bound to. To be more precise, as I understand it: libcouch detects CouchDB's presence on the device by trying to bind to an IPC service exposed by CouchDB, and it communicates with CouchDB through that service. See the method attemptLaunch() in CouchAppLauncher for reference:

      public void attemptLaunch() {
          Log.i(TAG, "1.) called attemptLaunch");
          Intent intent = new Intent(ICouchService.class.getName());
          Log.i(TAG, "1.a) setup Intent");
          Boolean canStart = bindService(intent, couchServiceConn, Context.BIND_AUTO_CREATE);
          Log.i(TAG, "1.b bound service. canStart: " + Boolean.toString(canStart));
          if (!canStart) {
              setContentView(R.layout.install_couchdb);
              TextView label = (TextView) findViewById(R.id.install_couchdb_text);
              Button btn = (Button) this.findViewById(R.id.install_couchdb_btn);
              String text = getString(R.string.app_name) + " requires Apache CouchDB to be installed.";
              label.setText(text);
              // Launching the market will fail on emulators
              btn.setOnClickListener(new View.OnClickListener() {
                  public void onClick(View v) {
                      launchMarket();
                      finish();
                  }
              });
          }
      }

    The question(s) I have about this: libcouch is never able to "find" a previously installed CouchDB; it always attempts to install CouchDB from the market. This is because it never actually manages to bind to the CouchDB service. As I understand the purpose of AIDL-generated service interfaces, the service that intends to offer its IPC to other applications should be the one making use of AIDL. In this case, though, the AIDL files live in the application that is trying to bind to the remote service, which is libcouch. Reviewing the commits, the AIDL files have just been moved out of that repository into libcouch. For completeness, here's the link to the Android CouchDB sources: github.com/couchone/libcouch-android. Now, I could be completely wrong in my findings, and it could also be libcouch's manifest that is missing something, but I am really looking forward to getting some answers!
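
    For what it's worth, my mental model of the service side that bindService() is looking for is roughly the following sketch. This is hypothetical code, not libcouch's or CouchDB's actual implementation, and the AIDL method bodies are elided because I only know the interface from the .aidl files in the libcouch repository:

      // The CouchDB app would need to declare this service in its manifest
      // with an intent-filter whose action matches ICouchService.class.getName(),
      // and hand the AIDL stub back from onBind().
      public class CouchService extends Service {

          private final ICouchService.Stub binder = new ICouchService.Stub() {
              // ... implementations of the methods declared in ICouchService.aidl ...
          };

          @Override
          public IBinder onBind(Intent intent) {
              return binder; // this is what couchServiceConn receives in the client
          }
      }

    If the installed CouchDB app doesn't export such a service (or exports it under a different action string), that would explain why canStart is always false.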

    Read the article

  • What is the most elegant way to implement a business rule relating to a child collection in LINQ?

    - by AaronSieb
    I have two tables in my database:

      Wiki
        WikiId
        ...

      WikiUser
        WikiUserId (PK)
        WikiId
        UserId
        IsOwner
        ...

    These tables have a one (Wiki) to many (WikiUser) relationship. How would I implement the following business rule in my LINQ entity classes: "A Wiki must have exactly one owner"?

    I've tried updating the tables as follows:

      Wiki
        WikiId (PK)
        OwnerId (FK to WikiUser)
        ...

      WikiUser
        WikiUserId (PK)
        WikiId
        UserId
        ...

    This enforces the constraint, but if I remove the owner's WikiUser record from the Wiki's WikiUser collection, I receive an ugly SqlException. This seems like it would be difficult to catch and handle in the UI. Is there a way to perform this check before the SqlException is generated? A better way to structure my database? A way to catch and translate the SqlException into something more useful?

    Edit: I would prefer to keep the validation rules within the LINQ entity classes if possible.

    Edit 2: Some more details about my specific situation. In my application, the user should be able to remove users from the Wiki. They should be able to remove any user except the user who is currently flagged as the "owner" of the Wiki (a Wiki must have exactly one owner at all times). In my control logic, I'd like to use something like this:

      wiki.WikiUsers.Remove(wikiUser);
      mRepository.Save();

    and have any broken rules transferred to the UI layer. What I DON'T want to have to do is this:

      if (wikiUser.WikiUserId != wiki.OwnerId) {
          wiki.WikiUsers.Remove(wikiUser);
          mRepository.Save();
      } else {
          // Handle errors.
      }

    I also don't particularly want to move the code to my repository (because there is nothing to indicate not to use the native Remove functions), so I also DON'T want code like this:

      mRepository.RemoveWikiUser(wiki, wikiUser);
      mRepository.Save();

    This WOULD be acceptable:

      try {
          wiki.WikiUsers.Remove(wikiUser);
          mRepository.Save();
      } catch (ValidationException ve) {
          // Display ve.Message
      }

    But this catches too many errors:

      try {
          wiki.WikiUsers.Remove(wikiUser);
          mRepository.Save();
      } catch (SqlException se) {
          // Display se.Message
      }

    I would also PREFER NOT to explicitly call a business rule check (although it may become necessary):

      wiki.WikiUsers.Remove(wikiUser);
      if (wiki.CheckRules()) {
          mRepository.Save();
      } else {
          // Display broken rules
      }
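
    For reference, the closest I've come to the "acceptable" shape is hooking LINQ to SQL's partial validation method on the entity. This is only a sketch: ValidationException is my own class, not a framework type, and I haven't verified that removing a child row actually triggers the parent entity's hook:

      using System;
      using System.Data.Linq;
      using System.Linq;

      public class ValidationException : Exception
      {
          public ValidationException(string message) : base(message) { }
      }

      public partial class Wiki
      {
          // Called by LINQ to SQL during SubmitChanges for this entity;
          // the defining partial declaration lives in the generated code.
          partial void OnValidate(ChangeAction action)
          {
              if (action == ChangeAction.Insert || action == ChangeAction.Update)
              {
                  // Business rule: the owner must still be among the wiki's users.
                  if (!WikiUsers.Any(u => u.WikiUserId == OwnerId))
                      throw new ValidationException("A Wiki must have exactly one owner.");
              }
          }
      }

    With something like this in place, the try/catch(ValidationException) version above would work without the repository knowing anything about the rule.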

    Read the article

  • How to fine tune FluentNHibernate's auto mapper?

    - by Venemo
    Okay, so yesterday I managed to get the latest trunk builds of NHibernate and FluentNHibernate to work with my latest little project. (I'm working on a bug tracking application.) I created a nice data access layer using the Repository pattern.

    I decided that my entities are nothing special, and also that, with the current maturity of ORMs, I don't want to hand-craft the database. So I chose to use FluentNHibernate's auto mapping feature with NHibernate's "hbm2ddl.auto" property set to "create". It really works like a charm. I put the NHibernate configuration in my app domain's config file, set it up, and started playing with it. (For the time being, I created some unit tests only.) It created all the tables in the database and everything I need for it. It even mapped my many-to-many relationships correctly.

    However, there are a few small glitches:

    - All of the columns created in the DB allow null. I understand that it can't predict which properties should allow null and which shouldn't, but at least I'd like to tell it to allow null only for those types for which null makes sense in .NET (e.g. non-nullable value types shouldn't allow null).
    - All of the nvarchar and varbinary columns it created have a default length of 255. I would prefer to have them on max instead.

    Is there a way to tell the auto mapper about the two simple rules above? If the answer is no, will it work correctly if I modify the tables it created? (So, if I set some columns not to allow null, and change the allowed length of some others, will it still work with them correctly?)

    EDIT: I managed to achieve the above by using Fluent NHibernate's convention API. Thanks to everyone who helped! However, there is one more thing: after checking out the convention API, I really would like my IDs to be called "ID", not "Id", but it seems to me that PrimaryKey.Name.Is(x => "ID") is not working at all. If I add it to the conventions collection and rewrite my entities' properties to "ID" instead of "Id", it throws an exception saying that there is no primary key mapped. Any thoughts on this?
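
    For completeness, the convention classes I ended up writing for the first two glitches look roughly like this. Sketched from memory against the trunk builds mentioned above, so the exact interface names may differ in other Fluent NHibernate versions, and the class names are my own:

      using System;
      using FluentNHibernate.Conventions;
      using FluentNHibernate.Conventions.Instances;

      // Non-nullable .NET value types become NOT NULL columns.
      public class ValueTypeNotNullConvention : IPropertyConvention
      {
          public void Apply(IPropertyInstance instance)
          {
              Type type = instance.Property.PropertyType;
              if (type.IsValueType && Nullable.GetUnderlyingType(type) == null)
                  instance.Not.Nullable();
          }
      }

      // Strings get a large explicit length instead of the default 255;
      // whether a given dialect turns this into nvarchar(max) is worth checking.
      public class LongStringConvention : IPropertyConvention
      {
          public void Apply(IPropertyInstance instance)
          {
              if (instance.Property.PropertyType == typeof(string))
                  instance.Length(10000);
          }
      }

    Both are registered on the auto-mapping model with Conventions.Add<ValueTypeNotNullConvention>() and Conventions.Add<LongStringConvention>().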

    Read the article

  • Xcode + Perforce: it frequently shows me the spinning wheel for no reason! What can I do?

    - by GamingHorror
    While working on an Xcode project I keep getting the spinning wheel while switching files, scrolling, searching, typing, debugging, removing breakpoints, switching back from another app, or saving. It also happens before compiling, but usually it just happens from time to time for no apparent reason. This is the second time this has started happening in an Xcode project, and it's driving me nuts. It completely breaks my flow of work to have to wait for the spinning wheel to go away (2-5 seconds). What on earth could I possibly do to...

    - figure out what's causing the problem?
    - resolve the problem?

    More details: When any project is small, everything is super-smooth with Xcode and Perforce. Two of my projects eventually developed this spinning-wheel problem after about 4 weeks of work. It has only happened with those two projects so far. They consist of around 1000-1200 files in source control, most of them assets. The problem occurs even if I manually check out the whole project in Perforce. The problem is gone when I copy the project directory and work in the copy, which is no longer under source control, or if I create a branch in Perforce and work in the branch (under source control).

    One of these projects I shared with a colleague, and he had exactly the same issues on his Mac. We eventually switched to Subversion and the spinning-wheel issue immediately went away. Now that I've received an updated copy of the project and simply put it under Perforce as a new project, the problem has also gone away (so far it has not resurfaced). This leads me to think it may be caused by a larger number of file revisions.

    The server itself (version 2009.1) is on a different (Windows) machine on my LAN, so there's definitely no Internet lag involved. The complete repository is just 1 GB in size, spread over a dozen or so projects.

    I'm sorry if the question seems more like a support inquiry for Perforce. However, I'm using the free version of Perforce, so I'm not entitled to get support from them. I hope no one minds me asking here. I'm really bummed out by this. I don't want to have to create a new branch for the project each time the spinning-wheel problem surfaces.

    Read the article

< Previous Page | 178 179 180 181 182 183 184 185 186 187 188 189  | Next Page >