Search Results

Search found 5153 results on 207 pages for 'unique ptr'.


  • Setting up relations/mappings for a SQLAlchemy many-to-many database

    - by Brent Ramerth
    I'm new to SQLAlchemy and relational databases, and I'm trying to set up a model for an annotated lexicon. I want to support an arbitrary number of key-value annotations for the words which can be added or removed at runtime. Since there will be a lot of repetition in the names of the keys, I don't want to use this solution directly, although the code is similar. My design has word objects and property objects. The words and properties are stored in separate tables with a property_values table that links the two. Here's the code:

        from sqlalchemy import Column, Integer, String, Table, create_engine
        from sqlalchemy import MetaData, ForeignKey
        from sqlalchemy.orm import relation, mapper, sessionmaker
        from sqlalchemy.ext.declarative import declarative_base

        engine = create_engine('sqlite:///test.db', echo=True)
        meta = MetaData(bind=engine)

        property_values = Table('property_values', meta,
            Column('word_id', Integer, ForeignKey('words.id')),
            Column('property_id', Integer, ForeignKey('properties.id')),
            Column('value', String(20))
        )

        words = Table('words', meta,
            Column('id', Integer, primary_key=True),
            Column('name', String(20)),
            Column('freq', Integer)
        )

        properties = Table('properties', meta,
            Column('id', Integer, primary_key=True),
            Column('name', String(20), nullable=False, unique=True)
        )

        meta.create_all()

        class Word(object):
            def __init__(self, name, freq=1):
                self.name = name
                self.freq = freq

        class Property(object):
            def __init__(self, name):
                self.name = name

        mapper(Property, properties)

    Now I'd like to be able to do the following:

        Session = sessionmaker(bind=engine)
        s = Session()
        word = Word('foo', 42)
        word['bar'] = 'yes'  # or word.bar = 'yes' ?
        s.add(word)
        s.commit()

    Ideally this should add 1|foo|42 to the words table, add 1|bar to the properties table, and add 1|1|yes to the property_values table. However, I don't have the right mappings and relations in place to make this happen. I get the sense from reading the documentation at http://www.sqlalchemy.org/docs/05/mappers.html#association-pattern that I want to use an association proxy or something of that sort here, but the syntax is unclear to me. I experimented with this:

        mapper(Word, words, properties={
            'properties': relation(Property, secondary=property_values)
        })

    but this mapper only fills in the foreign key values, and I need to fill in the other value as well. Any assistance would be greatly appreciated.
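    A sketch of the association-object route that documentation section points at, in case it helps make the syntax concrete. It is untested against the setup above, and PropertyValue, the key helper, and the props name are all invented for illustration: the link table gets its own mapped class, the collection is keyed by the property's name, and an association proxy exposes it as a plain dict.

        from sqlalchemy.ext.associationproxy import association_proxy
        from sqlalchemy.orm.collections import attribute_mapped_collection

        class PropertyValue(object):
            """One property_values row: a (word, property, value) triple."""
            def __init__(self, prop, value):
                self.property = prop
                self.value = value
            # key the dict collection by the related Property's name
            key = property(lambda self: self.property.name)

        # property_values has no primary key of its own, so the mapper is told
        # to treat (word_id, property_id) as one
        mapper(PropertyValue, property_values,
               properties={'property': relation(Property)},
               primary_key=[property_values.c.word_id,
                            property_values.c.property_id])

        mapper(Word, words, properties={
            'property_values': relation(
                PropertyValue,
                collection_class=attribute_mapped_collection('key')),
        })

        # word.props behaves like a dict: word.props['bar'] = 'yes'
        Word.props = association_proxy(
            'property_values', 'value',
            creator=lambda key, value: PropertyValue(Property(key), value))

    One caveat: the creator above always makes a new Property row, so reusing a key across words would trip the unique constraint on properties.name; avoiding the repetition the question mentions needs a get-or-create lookup against the session inside the creator.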


  • Sql server query using function and view is slower

    - by Lieven Cardoen
    I have a table with an xml column named Data:

        CREATE TABLE [dbo].[Users](
            [UserId] [int] IDENTITY(1,1) NOT NULL,
            [FirstName] [nvarchar](max) NOT NULL,
            [LastName] [nvarchar](max) NOT NULL,
            [Email] [nvarchar](250) NOT NULL,
            [Password] [nvarchar](max) NULL,
            [UserName] [nvarchar](250) NOT NULL,
            [LanguageId] [int] NOT NULL,
            [Data] [xml] NULL,
            [IsDeleted] [bit] NOT NULL,
            ...

    In the Data column there's this xml:

        <data>
          <RRN>...</RRN>
          <DateOfBirth>...</DateOfBirth>
          <Gender>...</Gender>
        </data>

    Now, executing this query after clearing the cache takes (if I execute it a couple of times after each other) 910, 739, 630, 635, ... ms:

        SELECT UserId FROM Users
        WHERE data.value('(/data/RRN)[1]', 'nvarchar(max)') = @RRN

    A db specialist told me that adding a function and a view and changing the query would make it much faster to search for a user with a given RRN. But instead, these are the timings I get with the changes from the db specialist: 2584, 2342, 2322, 2383, ... This is the added function:

        CREATE FUNCTION dbo.fn_Users_RRN(@data xml)
        RETURNS varchar(100) WITH SCHEMABINDING
        AS
        BEGIN
            RETURN @data.value('(/data/RRN)[1]', 'varchar(max)');
        END;

    The added view:

        CREATE VIEW vwi_Users WITH SCHEMABINDING AS
        SELECT UserId, dbo.fn_Users_RRN(Data) AS RRN
        FROM dbo.Users

    Indexes:

        CREATE UNIQUE CLUSTERED INDEX cx_vwi_Users ON vwi_Users(UserId)
        CREATE NONCLUSTERED INDEX cx_vwi_Users__RRN ON vwi_Users(RRN)

    And then the changed query:

        SELECT UserId FROM Users
        WHERE dbo.fn_Users_RRN(Data) = '59021626919-61861855-S_FA1E11'

    Why is the solution with a function and a view going slower?


  • Is there a tool for managing redundant pages across a website?

    - by dmanexe
    I am in charge of constructing a website with a '2-dimensional' site map, as explained below. I am looking for a tool (preferably a Wordpress plugin, as the site is built in Wordpress already) that would make managing thousands of pages a lot easier. To explain further, let me iterate my situation. I am building a website for a construction company, and they have several key cities and several key services. They want a parent page for each service, another unique page for the child sub-service, and finally a grandchild page for the city they are performing the service in. For example, if they were doing Concrete Construction in Los Angeles, the URL would look like: /concrete/construction/los-angeles. The content on /los-angeles would be the same as on /malibu or /burbank. However, there would be a different set of content for /concrete/design/los-angeles, although the entire page content (sans a few variables with city names) would be the same. Is there a way to manage or automate 'matrixing' this information on the site? I am looking for a tool that would allow me to easily add a 'city' with the same content across all grandchildren, per the child's content requirements. All of the grandchildren pages will have redundant content across them. Should something like this not exist, how difficult would it be to create, as a freelance side project? I need a tool like this, because I am approaching about ~500 cities and 50 services (Concrete Construction, Concrete Design, Concrete Engineering, etc.)


  • Problem with Active Record

    - by kshchepelin
    Hello everyone. Let's assume we have a User model, and a user can plan some activities. There are about 50 types of activities. All activities have common properties, such as start_time, end_time, user_id, etc., but each of them also has some unique properties. Right now each activity lives in its own table in the DB, and that's why we end up with terrible runs of SQL queries like:

        SELECT * FROM `first_activities_table` WHERE (`first_activity`.`id` IN (17,18))
        SELECT * FROM `second_activities_table` WHERE (`second_activity`.`id` = 17)
        .....
        SELECT * FROM `n_activities_table` WHERE (`n_activity`.`id` = 44)

    About 50 queries. That's terrible. There are different ways to solve this. We could take the activity type with the biggest number of properties, create an 'Activities' table, and use an STI model - but that way we must name our columns in an uncomfortable way, and records in that table would often have NULL fields. Or, still an STI model, but with the columns common to all activity types plus a blob column holding the serialized properties - but we have to do some searching on activities, so that can be a problem, and serialization is quite slow. Please help me deal with this. Maybe my problem has a quite different solution that fits my needs. Thanks for help.


  • how to implement enhanced session handling in PHP

    - by praksant
    Hi, I'm working with sessions in PHP, and I have different applications on a single domain. The problem is that cookies are domain-specific, so session ids are sent to every page on the domain (I don't know if there is a way to make cookies work differently), and session variables are therefore visible in every page on this domain. I'm trying to implement a custom session manager to overcome this behavior, but I'm not sure if I'm thinking about it right. I want to avoid PHP's session system completely and make a global object which stores session data and saves it to the database at the end of the script:
    - On first access, generate a unique session_id and create a cookie.
    - At the end of the script, save the session data with the session_id, timestamps for the start of the session and the last access, and data from $_SERVER such as REMOTE_ADDR, REMOTE_PORT and HTTP_USER_AGENT.
    - On every access, check the database for the session_id sent in the cookie from the client, check the IP, port and user agent (for security), and read the data into the session variable (if not expired). If the session_id has expired, delete it from the database.
    That session variable would be implemented as a singleton (I know I get tight coupling with this class, but I don't know a better solution). I'm trying to get the following benefits:
    - Session variables invisible to other scripts on the same server and same domain
    - Custom management of session expiration
    - A way to see open sessions (something like a list of online users)
    I'm not sure if I'm overlooking any disadvantages of this solution. Is there any better way? Thank you!!
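    For what it's worth, the flow described above looks roughly like this when sketched out (Python stands in for PHP here just to keep the listing short; the table and column names are invented, and db is any DB-API style handle):

        import hashlib, os, time

        SESSION_TTL = 3600  # assumed expiry window, in seconds

        def new_session(db, ip, port, user_agent):
            # unguessable id; this becomes the cookie value
            sid = hashlib.sha256(os.urandom(32)).hexdigest()
            now = time.time()
            db.execute("INSERT INTO sessions (sid, ip, port, ua, started, last_seen)"
                       " VALUES (?, ?, ?, ?, ?, ?)",
                       (sid, ip, port, user_agent, now, now))
            return sid

        def load_session(db, sid, ip, port, user_agent):
            row = db.execute("SELECT * FROM sessions WHERE sid = ?",
                             (sid,)).fetchone()          # assumes dict-like rows
            if row is None or time.time() - row['last_seen'] > SESSION_TTL:
                db.execute("DELETE FROM sessions WHERE sid = ?", (sid,))
                return None                              # unknown or expired
            if (row['ip'], row['port'], row['ua']) != (ip, port, user_agent):
                return None                              # fails the security checks
            db.execute("UPDATE sessions SET last_seen = ? WHERE sid = ?",
                       (time.time(), sid))
            return row

    Note that plain PHP can already scope the session cookie per application by giving each app its own cookie name and path (session_name() and session_set_cookie_params()), which may solve the visibility problem without a custom manager.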


  • Fast, easy, and secure method to perform DB actions with GET

    - by rob - not a robber
    Hey All, Sort of a methods/best practices question here that I am sure has been addressed, yet I can't find a solution based on the vague search terms I enter. I know starting off the question with "Fast and easy" will probably draw out a few sighs, so my apologies.

    Here is the deal. I have a logged in area where an ADMIN can do a whole host of POST operations to input data relating to their profile. The way I have data structured is pretty distinct and well segmented in most tables as it relates to the ID of the admin. Now, I have a table where I dump one type of data into and differentiate this data by assigning the ADMIN's unique ID to each record. In other words, all ADMINs have this one type of data writing to this table. I just differentiate by the ADMIN ID with each record.

    I was planning on letting the ADMIN remove these records by clicking on a link with a query string - obviously using GET. Obviously, the query structure is in the link so any logged in admin could then exploit the URL and delete a competitor's records. Is the only way to safely do this through POST, or should I pass through the session info that includes the password and validate it against the ADMIN ID that is requesting the delete? This is obviously much more work for me.

    As they said in the auto repair biz I used to work in... there are 3 ways to do a job: Fast, Good, and Cheap. You can only have two at a time. Fast and cheap will not be good. Good and cheap will not have fast turnaround. Fast and good will NOT be cheap. haha I guess that applies here... can never have Fast, Easy and Secure all at once ;) Thanks in advance...
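    The usual answer here is that neither verb is "secure" by itself: safety comes from authorizing the delete on the server against the session's admin id, never against anything in the URL. A minimal sketch (Python-flavoured with hypothetical names; the original is presumably PHP, where the idea is identical):

        def delete_record(db, session, record_id):
            # admin_id was put in the session at login; the client never supplies it
            admin_id = session['admin_id']
            cur = db.execute("DELETE FROM records WHERE id = ? AND admin_id = ?",
                             (record_id, admin_id))
            # cur.rowcount == 0 simply means the record wasn't theirs
            return cur.rowcount

    With that check in place, the remaining reasons to prefer POST are HTTP semantics (GET is expected to be safe, so link prefetchers and crawlers can trigger GET deletes) and CSRF, which is conventionally mitigated with a POST form plus a per-session token. No re-validation of the password is needed on each delete.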


  • How is a referencing environment generally implemented for closures?

    - by Alexandr Kurilin
    Let's say I have a statically/lexically scoped language with deep binding and I create a closure. The closure will consist of the statements I want executed plus the so called referencing environment, or, to quote this post, the collection of variables which can be used. What does this referencing environment actually look like implementation-wise? I was recently reading about ObjectiveC's implementation of blocks, and the author suggests that behind the scenes you get a copy of all of the variables on the stack and also of all the references to heap objects. The explanation claims that you get a "snapshot" of the referencing environment at the point in time of the closure's creation. Is that more or less what happens, or did I misread that? Is anything done to "freeze" a separate copy of the heap objects, or is it safe to assume that if they get modified between closure creation and the closure executing, the closure will no longer be operating on the original version of the object? If indeed there's copying being made, are there memory usage considerations in situations where one might want to create plenty of closures and store them somewhere? I think that misunderstanding of some of these concepts might lead to tricky issues like the ones Eric Lippert mentions in this blog post. It's interesting because you'd think that it wouldn't make sense to keep a reference to a value type that might be gone by the time the closure is called, but I'm guessing that in C# the compiler will figure out that the variable is needed later and put it into the heap instead. It seems that in most memory-managed languages everything is a reference and thus ObjectiveC is a somewhat unique situation with having to deal with copying what's on the stack.
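    Python makes a handy demonstration of the "references, not snapshots" half of this: its closures capture variables (cells), so a later write through one closure is visible to every other closure sharing the environment - quite unlike the stack-copying behaviour described for Objective-C blocks (absent __block).

        # a sketch: the captured thing is the variable itself, not its value
        def make_counter():
            count = 0
            def bump():
                nonlocal count      # the cell is shared, not snapshotted
                count += 1
                return count
            return bump

        c = make_counter()
        c(); c()
        print(c())                  # 3 -- state accumulates in the captured cell

        # the classic surprise: all three lambdas share one loop variable
        fns = [lambda: i for i in range(3)]
        print([f() for f in fns])   # [2, 2, 2], not [0, 1, 2]

    That shared-cell behaviour is also exactly what bites in the C# examples from the linked blog post: the compiler hoists the captured local onto the heap, and every closure sees its latest value.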


  • How can I get the rank of rows relative to total number of rows based on a field?

    - by Arms
    I have a scores table that has two fields: user_id and score. I'm fetching specific rows that match a list of user_id's. How can I determine a rank for each row relative to the total number of rows, based on score? The rows in the result set are not necessarily sequential (the scores will vary widely from one row to the next). I'm not sure if this matters, but user_id is a unique field.

    Edit (@Greelmo): I'm already ordering the rows. If I fetch 15 rows, I don't want the rank to be 1-15. I need it to be the position of that row compared against the entire table by the score property. So if I have 200 rows, one row's rank may be 3 and another may be 179 (these are arbitrary #'s for example only).

    Edit 2: I'm having some luck with this query, but I actually want to avoid ties:

        SELECT s.score
             , s.created_at
             , u.name
             , u.location
             , u.icon_id
             , u.photo
             , (SELECT COUNT(*) + 1 FROM scores WHERE score > s.score) AS rank
        FROM scores s
        LEFT JOIN users u ON u.uID = s.user_id
        ORDER BY s.score DESC
               , s.created_at DESC
        LIMIT 15

    If two or more rows have the same score, I want the latest one (or earliest - I don't care) to be ranked higher. I tried modifying the subquery with AND id > s.id but that ended up giving me an unexpected result set and different ties.


  • Getting weird issue with TO_NUMBER function in Oracle

    - by Fazal
    I have been getting an intermittent issue when executing the to_number function in the where clause on a varchar2 column, if the number of records exceeds a certain number n. I used n as there is no exact number of records at which it happens: on one DB it happened once n was 1 million, on another once it was 0.1 million. E.g. I have a table with 10 million records, say table Country, which has a field1 varchar2 containing numeric data, and an id. If I do a query such as

        select * from country where to_number(field1) = 23 and id > 1 and id < 100000

    this works. But if I do the query

        select * from country where to_number(field1) = 23 and id > 1 and id < 100001

    it fails saying invalid number. Next I try the query

        select * from country where to_number(field1) = 23 and id > 2 and id < 100001

    and it works again. As I only got "invalid number" it was confusing, but in the log file it said:

        Memory Notification: Library Cache Object loaded into SGA
        Heap size 3823K exceeds notification threshold (2048K)
        KGL object name :with sqlplan as (
            select c006 object_owner, c007 object_type, c008 object_name
            from htmldb_collections
            where COLLECTION_NAME='HTMLDB_QUERY_PLAN'
            and c007 in ('TABLE','INDEX','MATERIALIZED VIEW','INDEX (UNIQUE)')),
        ws_schemas as (
            select schema from wwv_flow_company_schemas
            where security_group_id = :flow_security_group_id),
        t as (
            select s.object_owner table_owner, s.object_name table_name, d.OBJECT_ID
            from sqlplan s, sys.dba_objects d

    It seems related to SGA size, but Google did not give me much help on this. Does anyone have any idea about this issue with TO_NUMBER or Oracle functions for large data?


  • How to "reduce" a hash?

    - by Julien Lebosquain
    Suppose I have any "long" hash, like a 16-byte MD5 or a 20-byte SHA1. I want to reduce this hash to fit in 4 bytes, for GetHashCode() purposes. First, I'm perfectly aware that I'll get more collisions. That's totally fine in my case, but I'd still prefer to get as few collisions as possible. There are several solutions to my problem:
    - I could take the first 4 bytes of the hash.
    - I could take the last 4 bytes of the hash.
    - I could take 4 random bytes of the hash.
    - I could generate a hash of the hash, involving classic prime-number multiplications.
    Are there other solutions I didn't think about? And more importantly, what method will give me the most unique hash code? I'm currently supposing they're almost equivalent. Microsoft chose to make the public key token of an assembly the last 8 bytes of the SHA1 hash of its public key, so I'll probably go for this solution, but I'd like to know why.
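    The "almost equivalent" hunch is right for cryptographic digests: every output bit of MD5/SHA1 is, by design, uniformly distributed and independent of the others, so any 4 bytes - first, last, or XOR-folded together - should collide equally rarely. A folding sketch (hedged: this is one conventional reduction, not the one Microsoft uses):

        import hashlib
        import struct

        def fold_digest(digest, width=4):
            """XOR-fold a digest down to `width` bytes."""
            out = bytearray(width)
            for i, b in enumerate(digest):
                out[i % width] ^= b
            return bytes(out)

        md5 = hashlib.md5(b"example").digest()   # 16 bytes
        first4 = md5[:4]                         # equally good for a real digest
        folded = fold_digest(md5)                # mixes in every input byte
        (code,) = struct.unpack("<i", folded)    # signed 32-bit, GetHashCode-shaped

    Folding only buys something if the source hash might be poorly mixed; for MD5/SHA1 output, taking a fixed slice is just as sound, which is presumably why the last-8-bytes convention for public key tokens is acceptable.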


  • Pecking order of pigeons?

    - by sc_ray
    I was going through problems on graph theory posted by Prof. Ericksson from my alma mater and came across this rather unique question about pigeons and their innate tendency to form pecking orders. The question goes as follows:

    Whenever groups of pigeons gather, they instinctively establish a pecking order. For any pair of pigeons, one pigeon always pecks the other, driving it away from food or potential mates. The same pair of pigeons always chooses the same pecking order, even after years of separation, no matter what other pigeons are around. Surprisingly, the overall pecking order can contain cycles - for example, pigeon A pecks pigeon B, which pecks pigeon C, which pecks pigeon A. Prove that any finite set of pigeons can be arranged in a row from left to right so that every pigeon pecks the pigeon immediately to its left.

    Since this is a question on graph theory, the first thing that crossed my mind was: is this just asking for a topological sort of a graph of relationships (the relationships being the pecking order)? What made this a little more complex was the fact that there can be cyclic relationships between the pigeons. If we have a cyclic dependency as follows:

        A-B-C-A, where A pecks on B, B pecks on C and C goes back and pecks on A

    and we represent it in the way suggested by the problem, we have something as follows:

        C B A

    But the above row ordering does not factor in the pecking order between C and A. I had another idea of solving it by mathematical induction, where the base case is two pigeons arranged according to their pecking order, assuming the pecking-order arrangement is valid for n pigeons and then proving it to be true for n+1 pigeons. I am not sure if I am going down the wrong track here. Some insights into how I should be analyzing this problem would be helpful. Thanks
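    The induction idea is the standard route, and it is constructive: a one-pigeon row is trivially valid, and given a valid row for n pigeons, the (n+1)th pigeon can always be spliced in - either in front of the first pigeon that pecks it, or at the right end if nobody does. (Cycles are harmless because the row only constrains adjacent pairs, which is why "C B A" above is actually fine: nothing about C and A needs to hold.) A sketch of that insertion, assuming pecks(a, b) answers "does a peck b?" for any pair:

        def arrange(pigeons, pecks):
            """Row where every pigeon pecks its immediate left neighbour.

            pecks(a, b) -> True iff a pecks b; exactly one of pecks(a, b) and
            pecks(b, a) holds for each pair (a tournament). Mirrors the
            induction step: each new pigeon always has a legal slot.
            """
            row = []
            for p in pigeons:
                for i, q in enumerate(row):
                    if pecks(q, p):
                        # q pecks p, and everyone left of q failed to peck p,
                        # so p pecks row[i-1] (if any): p fits just left of q
                        row.insert(i, p)
                        break
                else:
                    row.append(p)   # p pecks everyone, including the old rightmost
            return row

        # the cyclic trio from the question: A pecks B, B pecks C, C pecks A
        order = {('A', 'B'), ('B', 'C'), ('C', 'A')}
        print(arrange(list('ABC'), lambda a, b: (a, b) in order))  # ['C', 'B', 'A']

    This is insertion sort run with a non-transitive comparator; the proof goes through precisely because the property being proved only constrains adjacent pairs, so transitivity is never needed.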


  • How to check if two records have a self-referencing relation?

    - by Machine
    Consider the following schema with users and their collegues (friends). Users:

        User:
          columns:
            user_id:
              name: user_id as userId
              type: integer(8)
              unsigned: 1
              primary: true
              autoincrement: true
            first_name:
              name: first_name as firstName
              type: string(45)
              notnull: true
            last_name:
              name: last_name as lastName
              type: string(45)
              notnull: true
            email:
              type: string(45)
              notnull: true
              unique: true
          relations:
            Collegues:
              class: User
              local: invitor
              foreign: invitee
              refClass: CollegueStatus
              equal: true
              onDelete: CASCADE
              onUpdate: CASCADE

    Join table:

        CollegueStatus:
          columns:
            invitor:
              type: integer(8)
              unsigned: 1
              primary: true
            invitee:
              type: integer(8)
              unsigned: 1
              primary: true
            status:
              type: enum(8)
              values: [pending, accepted, denied]
              default: pending
              notnull: true

    Now, let's say I have two records, one for the user making the HTTP request (the logged in user), and one record for a user he wants to send a message to. I want to check if these users are collegues. Questions:
    - Does Doctrine have any pre-built functionality to check if two records with a self-referencing relation are related?
    - If not, how would you write a method to check this?
    - Where would you put said method? (In the User class, the UserTable class, etc.)
    I could probably do something like this:

        public function areCollegues(User $user1, User $user2)
        {
            // Ensure we load collegues if $user1 was fetched with DQL that
            // doesn't load this relation
            $collegues = $user1->get('Collegues');
            $areCollegues = false;
            foreach ($collegues as $collegue) {
                if ($collegue['userId'] === $user2['userId']) {
                    $areCollegues = true;
                    break;
                }
            }
            return $areCollegues;
        }

    But this looks neither efficient nor pretty. I just feel that it should be solved already for self-referencing relations to be nice to use.


  • How to generate lots of redundant ajax elements like checkboxes and pulldowns in Django?

    - by iJames
    Hello folks. I've been getting lots of answers from stackoverflow now that I'm in Django, just by searching. Now I hope my question will also create some value for everybody. In choosing Django, I was hoping there was some mechanism similar to the way you can do partials in ROR. This was going to help me in two ways: one was in generating repeating indexed forms or form elements, and also in rendering only a piece of the page on the round trip. I've done a little bit of that by using taconite with a simple URL click, but now I'm trying to get more advanced. This will focus on the form issue, which boils down to how to iterate over a secondary object. If I have a list of photo instances, each of which has a couple of parameters - let's say a size and a quantity - I want to generate form elements for each photo instance separately. But then I have two lists I want to iterate on at the same time. Context:

        photos: Photo.objects.all()

    and

        forms = {}
        for photo in photos:
            forms[photo.id] = PhotoForm()

    In other words, we've got a list of photo objects and a dict of forms based on the photo.id. Here's an abstraction of the template:

        {% for photo in photos %}
            {% include "photoview.html" %}
            {% comment %}
            So here I want to use the photo.id as an index to get the correct
            form, so that each photo has its own form. I would want to have a
            different action and each form field would be unique. Is that
            possible? How can I iterate on that? Thanks!
            {% endcomment %}
            Quantity: {{ oi.quantity }} {{ form.quantity }}
            Dimensions: {{ oi.size }} {{ form.size }}
        {% endfor %}

    What can I do about this simple case? And how can I make it where every control is automatically updating the server instead of using a form at all? Thanks! James
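    Django has a built-in answer to the "many copies of the same form" part: give each form instance a prefix, which namespaces every rendered field name (e.g. 7-quantity), so one POST can carry all of them without collisions. A sketch (view and template names invented; formsets are the more packaged alternative):

        def edit_photos(request):
            photos = Photo.objects.all()
            if request.method == 'POST':
                forms = [PhotoForm(request.POST, prefix=str(p.id)) for p in photos]
                if all(f.is_valid() for f in forms):
                    ...  # apply each form's cleaned_data to its photo
            else:
                forms = [PhotoForm(prefix=str(p.id)) for p in photos]
            pairs = zip(photos, forms)   # iterate photo and its form together
            return render_to_response('photos.html', {'pairs': pairs})

    and in the template the two lists march in step:

        {% for photo, form in pairs %}
            Quantity: {{ photo.quantity }} {{ form.quantity }}
            Dimensions: {{ photo.size }} {{ form.size }}
        {% endfor %}

    For the "update without a form" follow-up, the usual pattern is one small view per field plus an ajax POST per control, but that's a separate design decision.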


  • Hibernate saveOrUpdate method problem

    - by umesh awasthi
    Hi all, I am trying to work with Hibernate (database side for the first time) and am somewhat stuck choosing the best possible way to use saveOrUpdate or save/update. I have a Destination POJO class, plus other attributes which need to be updated along with the Destination entity. I am getting an XML file to be imported, and I am using the following code to update/save the destination entity along with its attribute classes:

        try {
            getSessionFactory().getCurrentSession().beginTransaction();
            getSessionFactory().getCurrentSession().saveOrUpdate(entity);
            getSessionFactory().getCurrentSession().getTransaction().commit();
        } catch (Exception e) {
            getSessionFactory().getCurrentSession().getTransaction().rollback();
        } finally {
            getSessionFactory().close();
        }

    Everything is working fine as long as I am using the same session instance, but later on, when I use the same XML file to update certain attributes of the destination PO, it gives me the following error:

        SEVERE: Duplicate entry 'MNLI' for key 'DESTINATIONID'
        9 Jan, 2011 4:58:11 PM org.hibernate.event.def.AbstractFlushingEventListener performExecutions
        SEVERE: Could not synchronize database state with session
        org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update
            at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:94)
            at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
            at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:275)
            at org.hibernate.jdbc.AbstractBatcher.prepareStatement(AbstractBatcher.java:114)
            at org.hibernate.jdbc.AbstractBatcher.prepareStatement(AbstractBatcher.java:109)
            at org.hibernate.jdbc.AbstractBatcher.prepareBatchStatement(AbstractBatcher.java:244)
            at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2242)
            at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2678)
            at org.hibernate.action.EntityInsertAction.execute(EntityInsertAction.java:79)

    I am using a UUID as the primary key for the Destination table, and the table also has a destination id which is unique. I can understand that in the second case Hibernate is not able to tell that there is already an entry for the same destination in the DB, and tries to execute an insert statement rather than an update. One possible solution is that I can use the destination id to check whether there is already a destination in place with the given id, and based on the result issue a save or update command. My question is: can this be achieved in any other, better way? Thanks in advance


  • How to map a Dictionary<string, string> spanning several tables

    - by Kim Johansson
    I have four tables:

        CREATE TABLE [Languages] (
            [Id] INTEGER IDENTITY(1,1) NOT NULL,
            [Code] NVARCHAR(10) NOT NULL,
            PRIMARY KEY ([Id]),
            UNIQUE INDEX ([Code])
        );

        CREATE TABLE [Words] (
            [Id] INTEGER IDENTITY(1,1) NOT NULL,
            PRIMARY KEY ([Id])
        );

        CREATE TABLE [WordTranslations] (
            [Id] INTEGER IDENTITY(1,1) NOT NULL,
            [Value] NVARCHAR(100) NOT NULL,
            [Word] INTEGER NOT NULL,
            [Language] INTEGER NOT NULL,
            PRIMARY KEY ([Id]),
            FOREIGN KEY ([Word]) REFERENCES [Words] ([Id]),
            FOREIGN KEY ([Language]) REFERENCES [Languages] ([Id])
        );

        CREATE TABLE [Categories] (
            [Id] INTEGER IDENTITY(1,1) NOT NULL,
            [Word] INTEGER NOT NULL,
            PRIMARY KEY ([Id]),
            FOREIGN KEY ([Word]) REFERENCES [Words] ([Id])
        );

    So you get the name of a Category via the Word - WordTranslation - Language relations, like this:

        SELECT TOP 1 wt.Value
        FROM [Categories] AS c
        LEFT JOIN [WordTranslations] AS wt ON c.Word = wt.Word
        WHERE wt.Language = (
            SELECT TOP 1 l.Id FROM [Languages] WHERE l.[Code] = N'en-US'
        )
        AND c.Id = 1;

    That would return the en-US translation of the Category with Id = 1. My question is how to map this using the following class:

        public class Category
        {
            public virtual int Id { get; set; }
            public virtual IDictionary<string, string> Translations { get; set; }
        }

    Getting the same as the SQL query above would be:

        Category category = session.Get<Category>(1);
        string name = category.Translations["en-US"];

    And "name" would now contain the Category's name in en-US. Category is mapped against the Categories table. How would you do this, and is it even possible?


  • Python: Convert format string to regular expression

    - by miracle2k
    The users of my app can configure the layout of certain files via a format string. For example, the config value the user specifies might be:

        layout = '%(group)s/foo-%(locale)s/file.txt'

    I now need to find all such files that already exist. This seems easy enough using the glob module:

        glob_pattern = layout % {'group': '*', 'locale': '*'}
        glob.glob(glob_pattern)

    However, now comes the hard part: given the list of glob results, I need to get all those filename-parts that matched a given placeholder, for example all the different "locale" values. I thought I would generate a regular expression for the format string that I could then match against the list of glob results (or then possibly skip glob and do all the matching myself). But I can't find a nice way to create the regex with both the proper group captures and escaping of the rest of the input. For example, this might give me a regex that matches the locales:

        regex = layout % {'group': '.*', 'locale': '(.*)'}

    But to be sure the regex is valid, I need to pass it through re.escape(), which then also escapes the regex syntax I have just inserted. Calling re.escape() first ruins the format string. I know there's fnmatch.translate(), which would even give me a regex - but not one that returns the proper groups. Is there a good way to do this, without a hack like replacing the placeholders with a regex-safe unique value etc.? Is there possibly some way (a third-party library perhaps?) that allows dissecting a format string in a more flexible way, for example splitting the string at the placeholder locations?
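    One way out of the escape-vs-capture bind is to never run re.escape() over the placeholders at all: walk the format string, escape only the literal chunks between placeholders, and emit a named group per field. A sketch (the [^/]+ default and the fields override are my own convention, not a library API):

        import re

        def layout_to_regex(layout, fields=None):
            """Turn '%(name)s' placeholders into named groups, escaping the rest."""
            fields = fields or {}
            pattern, last = '', 0
            for m in re.finditer(r'%\((\w+)\)s', layout):
                pattern += re.escape(layout[last:m.start()])   # literal chunk
                name = m.group(1)
                pattern += '(?P<%s>%s)' % (name, fields.get(name, '[^/]+'))
                last = m.end()
            pattern += re.escape(layout[last:])
            return re.compile(pattern + '$')

        rx = layout_to_regex('%(group)s/foo-%(locale)s/file.txt')
        m = rx.match('de/foo-de_DE/file.txt')
        print(m.group('locale'))    # de_DE

    One caveat: a placeholder repeated in the layout would need a backreference rather than a second group, since named groups must be unique within a pattern.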


  • Using ASP.NET SQL Membership Provider, how do I store my own per-user data?

    - by Gary McGill
    I'm using the ASP.NET SQL Membership Provider. So, there's an aspnet_Users table that has details of each of my users. (Actually, the aspnet_Membership table seems to contain most of the actual data). I now want to store some per-user information in my database, so I thought I'd just create a new table with a UserId (GUID) column and an FK relationship to aspnet_Users. However, I then discovered that I can't easily get access to the UserId since it's not exposed via the membership API. (I know I can access it via the ProviderUserKey, but it seems like the API is abstracting away the internal UserID in favor of the UserName, and I don't want to go too far against the grain).

    So, I thought I should instead put a LoweredUserName column in my table, and create an FK relationship to aspnet_Users using that. Bzzzt. Wrong again, because while there is a unique index in aspnet_Users that includes the LoweredUserName, it also includes the ApplicationId - so in order to create my FK relationship, I'd need to have an ApplicationId column in my table too.

    At first I thought: fine, I'm only dealing with a single application, so I'll just add such a column and give it a default value. Then I realised that the ApplicationId is a GUID, so it'd be a pain to do this. Not hard exactly, but until I roll out my DB I can't predict what the GUID is going to be. I feel like I'm missing something, or going about things the wrong way. What am I supposed to do?


  • Web development scheme for staging and production servers using Git Push

    - by ServAce85
    I am using git to manage a dynamic website (PHP + MySQL) and I want to send my files from my localhost to my staging and development servers in the most efficient and hassle-free way. I am currently convinced that the best way for me to approach this problem is to use this git branching model to organize my local git repo. From there, I will use the release branches to push to my staging server for testing. Once I am happy that the release code works on the staging server, I can then merge with my master branch and push that to my production server.

    Pushing to Staging Server: As noted in many introductory git posts, I could run into problems pushing into a non-bare repo, so, as suggested in this response, I plan to push the release branch to a bare repo on the server and have a post-receive hook that clones the bare repo to a non-bare repo that also acts as the web-hosted directory.

    Pushing to Production Server: Here's my newest source of confusion... In the response that I cited above, it made me curious as to why @Paul states that it's a completely different story when pushing to a live, development server. I guess I don't see the problem. Would it be safe and hassle-free to follow the same steps as above, but for the master branch? Where are the potential pitfalls?

    Config Files: With respect to configuration files that are unique to each environment (.htaccess, config.php, etc), it seems simplest to .gitignore each of those files in their respective repos on their respective servers. Can you see anything immediately wrong with this? Better solutions?

    Accessing Data: Finally, as I initially stated, the site uses MySQL databases to store data. How would you suggest I access that data (for testing purposes) from the staging server and localhost?

    I realize that I may have asked way too many questions for a single post, but since they're all related to the best way to set up this development scheme, I thought it was necessary.


  • To what extent should code try to explain fatal exceptions?

    - by Andrzej Doyle
    I suspect that all non-trivial software is likely to experience situations where it hits an external problem it cannot work around and thus needs to fail. This might be due to bad configuration, an external server being down, disk full, etc. In these situations, especially if the software is running in non-interactive mode, I expect that all one can really do is log an error and wait for the admin to read the logs and fix the problem. If someone happens to interact with the software in the meantime, e.g. a request comes in to a server that failed to initialize properly, then perhaps an appropriate hint can be given to check the logs and maybe even the error can be echoed (depending on whether you can tell if they're a technical guy as opposed to a business user). For the moment though let's not think too hard about this part.

    My question is, to what extent should the software be responsible for trying to explain the meaning of the fatal error? In general, how much competence/knowledge are you allowed to presume on administrators of the software, and how much should you include troubleshooting information and potential resolution steps when logging fatal errors? Of course if there's something that's unique to the runtime context this should definitely be logged; but let's assume your software needs to talk to Active Directory via LDAP and gets back an error "[LDAP: error code 49 - 80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 525, vece]". Is it reasonable to assume that the maintainers will be able to Google the error code and work out what it means, or should the software try to parse the error code and log that this is caused by an incorrect user DN in the LDAP config? I don't know if there is a definitive best-practices answer for this, so I'm keen to hear a variety of views.


  • How to have multiple instances of jQuery plugin on single page?

    - by James Skidmore
    I'm writing a simple jQuery plugin, but I'm having trouble being able to use multiple instances on a page. For instance, here is a sample plugin to illustrate my point:

        (function($) {
            $.fn.samplePlugin = function(options) {
                if (typeof foo != 'undefined') {
                    alert('Already defined!');
                } else {
                    var foo = 'bar';
                }
            };
        })(jQuery);

    And then if I do this:

        $(document).ready(function(){
            $('#myDiv').samplePlugin({});  // does nothing
            $('#myDiv2').samplePlugin({}); // alerts "Already defined!"
        });

    This is obviously an over-simplified example to get across the point. So my question is, how do I have two separate instances of the plugin? I'd like to be able to use it across multiple instances on the same page. I'm guessing that part of the problem might be with defining the variables in a global scope. How can I define them unique to that instance of the plugin then? Thank you for your guidance!


  • DotNetOpenAuth / WebSecurity Basic Info Exchange

    - by Jammer
    I've gotten a good number of OAuth logins working on my site now. My implementation is based on the WebSecurity classes, with amendments to the code to suit my needs (I pulled the WebSecurity source into mine). However, I'm now facing a new set of problems. In my application I have opted to make the user email address the login identifier of choice. It's naturally unique and it suits this use case. However, the OAuth "standards" strike again. Some providers will return your email address as "username" (Google); some will return the display name (Facebook). As it stands I see two options given my particular scenario:

    Option 1: Pull even more framework source code into my solution until I can chase down where the OpenIdRelyingParty class is actually interacted with (via the DotNetOpenAuth.AspNet facade) and make additional information requests from the OpenID providers.

    Option 2: When a user first logs in using an OpenID provider, display a kind of "complete registration" form that requests the missing info based on the provider selected.

    Option 2 is the most immediate and probably the quickest to implement, but it also includes some code smells through having to do something different based on the provider selected. Option 1 will take longer but will ultimately make things more future-proof. I will need to perform richer interactions down the line, so this also has an edge in that regard. The more I get into the code, the more it seems that the WebSecurity class itself is actually very limiting, as it hides lots of useful DotNetOpenAuth functionality in the name of making integration easier. Andrew (the author of DNOA) has said that the Attribute Exchange stuff happens in the OpenIdRelyingParty class, but I cannot see from the DotNetOpenAuth.AspNet source code where this class is used, so I'm unsure of what source would need to be pulled into my code in order to enable the functionality I need. Has anyone completed something similar?


  • How would I find all sets of N single-digit, non-repeating numbers that add up to a given sum in PHP

    - by TerranRich
    Let's say I want to find all sets of 5 single-digit, non-repeating numbers that add up to 30... I'd end up with [9,8,7,5,1], [9,8,7,4,2], [9,8,6,4,3], [9,8,6,5,2], [9,7,6,5,3], and [8,7,6,5,4]. Each of those sets contains 5 non-repeating digits that add up to 30, the given sum. Any help would be greatly appreciated. Even just a starting point for me to use would be awesome. I came up with one method, which seems like a long way of going about it: get all unique 5-digit numbers (12345, 12346, 12347, etc.), add up the digits, and see if it equals the given sum (e.g. 30). If it does, add it to the list of possible matching sets. I'm doing this for a personal project, which will help me in solving Kakuro puzzles without actually solving the whole thing at once. Yeah, it may be cheating, but it's... it's not THAT bad... :P
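    For the generate-and-filter approach, the search space is small enough (126 five-digit combinations from nine digits) that brute force over combinations, rather than over all five-digit numbers, is plenty. A sketch in Python rather than PHP (the PHP version is the same idea with recursion or nested loops):

        from itertools import combinations

        def digit_sets(n, total):
            """All sets of n distinct digits 1-9 summing to total."""
            return [c for c in combinations(range(1, 10), n) if sum(c) == total]

        print(digit_sets(5, 30))
        # [(1, 5, 7, 8, 9), (2, 4, 7, 8, 9), (2, 5, 6, 8, 9),
        #  (3, 4, 6, 8, 9), (3, 5, 6, 7, 9), (4, 5, 6, 7, 8)]

    combinations() already guarantees distinct, ascending digits, so no duplicate or repeated-digit filtering is needed - exactly the Kakuro constraint.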


  • Consolidate multiple site files into single location

    - by seengee
    We have a custom PHP/MySQL CMS running on Linux/Apache that's rolled out to multiple sites (20+) on the same server. Each site uses exactly the same CMS files, with a few files for each site being customised. The customised files for each site are:

        /library/mysql_connect.php
        /public_html/css/*
        /public_html/ftparea/*
        /public_html/images/*

    There are also a couple of other random files inside /public_html/includes/ that are unique to each site. Other than this, each site on the server uses the exact same files, with each site sitting within /home/username/. There is obviously a massive amount of replication here, as each time we want to deploy a system update we need to update each user account. Given that the common site files are all stored in SVN, it would make far more sense if we were able to simply commit to SVN and deploy to a single location direct from there. Unfortunately, making a major architecture change at this stage could be problematic. In my mind the ideal scenario would mean creating an account like /home/commonfiles/ and having each site use these common files unless an account-specific file exists; for example, a request is made to /home/user/public_html/index.php, but as this file doesn't exist the request is then redirected to /home/commonfiles/public_html/index.php. I know that generally this approach is possible, similar to how Zend Framework (and probably others) redirect all requests that don't match a specific file to index.php. I'm just not sure how exactly to go about implementing it and whether it's actually advisable. Would really welcome any input/ideas people have got. Thanks.


  • Is there any algorithm that can solve ANY traditional sudoku puzzle, WITHOUT guessing (or similar techniques)?

    - by justin
    Is there any algorithm that solves ANY traditional sudoku puzzle WITHOUT guessing? Here guessing means trying a candidate and seeing how far it goes: if a contradiction is found with the guess, backtracking to the guessing step and trying another candidate, and when all candidates are exhausted without success, backtracking to the previous guessing step (if there is one; otherwise the puzzle proves invalid), etc.

    EDIT1: Thank you for your replies. Traditional sudoku means 81-box sudoku, without any other constraints. Let us say we know the solution is unique: is there any algorithm that can GUARANTEE to solve it without backtracking? Backtracking is a universal tool, and I have nothing against it, but using a universal tool to solve sudoku decreases the value and fun in deciphering (manually, or by computer) sudoku puzzles. How can a human being solve the so-called "hardest sudoku in the world" - does he need to guess? I heard some researchers accidentally found that their algorithm for some data analysis can solve all sudoku. Is that true? Do they have to guess too?
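    For calibration, the deduction-only core that most solvers start from is constraint propagation; the sketch below applies just the two simplest rules (naked and hidden singles). It finishes easy and medium puzzles outright, but on hard ones it stalls with cells still empty - which is exactly the gap the question asks about, and why well-known complete solvers (Knuth-style exact cover, Norvig's propagation-plus-search) add search on top:

        ROWS  = [[r * 9 + c for c in range(9)] for r in range(9)]
        COLS  = [[r * 9 + c for r in range(9)] for c in range(9)]
        BOXES = [[(br + i) * 9 + bc + j for i in range(3) for j in range(3)]
                 for br in (0, 3, 6) for bc in (0, 3, 6)]
        ALL_UNITS = ROWS + COLS + BOXES
        PEERS = [set().union(*(u for u in ALL_UNITS if i in u)) - {i}
                 for i in range(81)]

        def solve_by_singles(grid):
            """grid: list of 81 ints, 0 = empty. True if singles alone finish it."""
            progress = True
            while progress and 0 in grid:
                progress = False
                for i in range(81):              # naked singles
                    if grid[i] == 0:
                        cand = set(range(1, 10)) - {grid[p] for p in PEERS[i]}
                        if len(cand) == 1:
                            grid[i] = cand.pop()
                            progress = True
                for unit in ALL_UNITS:           # hidden singles
                    for d in range(1, 10):
                        spots = [i for i in unit if grid[i] == 0
                                 and d not in {grid[p] for p in PEERS[i]}]
                        if len(spots) == 1:
                            grid[spots[0]] = d
                            progress = True
            return 0 not in grid                 # False = deduction stalled

    Richer rule sets (pairs, fish, chains...) push the stall point further out, though the heaviest techniques, such as forcing chains, start to look like structured guessing - which is where the debate in this question usually lands.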


  • Good data structure for efficient insert/querying on arbitrary properties

    - by Juliet
    I'm working on a project where arrays are the default data structure for everything, and every query is a linear search in the form of:
    - Need a customer with a particular name? customer.Find(x => x.Name == name)
    - Need a customer with a particular unique id? customer.Find(x => x.Id == id)
    - Need a customer of a particular type and age? customer.Find(x => x is PreferredCustomer && x.Age >= age)
    - Need a customer of a particular name and age? customer.Find(x => x.Name == name && x.Age == age)
    In almost all instances, the criteria for lookups are well-defined. For example, we only search for customers by one or more of the properties Id, Type, Name, or Age. We rarely search by anything else. Is there a good data structure to support arbitrary queries of these types with lookup better than O(n)? Any out-of-the-box implementations for .NET?
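    A common shape for this - sketched here in Python for brevity, though the .NET translation is a Dictionary<TKey, HashSet<T>> per hot property - is to keep one hash index per queried property and intersect candidate sets:

        from collections import defaultdict

        class IndexedStore:
            """O(1) equality lookups per indexed field; the rest falls back to a scan."""
            def __init__(self, *indexed_fields):
                self.items = []
                self.indexes = {f: defaultdict(set) for f in indexed_fields}

            def add(self, item):
                self.items.append(item)
                for f, idx in self.indexes.items():
                    idx[getattr(item, f)].add(item)

            def find(self, **criteria):
                candidates, rest = None, {}
                for f, v in criteria.items():
                    if f in self.indexes:
                        bucket = self.indexes[f][v]
                        candidates = bucket if candidates is None else candidates & bucket
                    else:
                        rest[f] = v
                pool = self.items if candidates is None else candidates
                return [x for x in pool
                        if all(getattr(x, f) == v for f, v in rest.items())]

        # store = IndexedStore('Id', 'Name', 'Age'); store.find(Name='Ann', Age=42)

    Equality is the easy case; the Age >= age style of query needs a sorted index (bisect over a sorted list, or a SortedDictionary in .NET) instead of a hash. Removal and update also mean maintaining every index, which is the usual price of beating O(n).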

