Search Results

Search found 24106 results on 965 pages for 'usb key'.

Page 528/965 | < Previous Page | 524 525 526 527 528 529 530 531 532 533 534 535  | Next Page >

  • Connection string problems on shared hosting with SQL Server 2005 Express

    - by dagogo
    hi i have problem connecting to my db on a shared hosting, my host provider says they deployed sql 2005 express on their database and i prepared my connection string as follows to take advantage of sql express. \ the data source nae i used originally was ./SQLExpress but my host provider asked that i change it to local host, although with the former it didnt connect, but still with the change as indicated above the error still comes up on access to my default page. the error is as follows; Server Error in '/' Application. Invalid value for key 'attachdbfilename'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.ArgumentException: Invalid value for key 'attachdbfilename'. Source Error: Line 120: Public Function GetID(ByVal sLgaName As String) As Integer Line 121: Dim q As String = "Select PLID " & "From LGA " & "Where LGAName = " & "'" & sLgaName & "'" Line 122: Dim cn As New SqlConnection(Me.ConnectionString) Line 123: Dim cmd As New SqlCommand(q, cn) Line 124: ive read up a lot on the web and googled ma fingers numb on this, i have a deadline to deliver this project and having successfully built the app it frustrating for this to happen. pls help me.


  • Has anyone ever successfully made index merge work for MySQL?

    - by user198729
    Setup: mysql> create table t(a integer unsigned,b integer unsigned); mysql> insert into t(a,b) values (1,2),(1,3),(2,4); mysql> create index i_t_a on t(a); mysql> create index i_t_b on t(b); mysql> explain select * from t where a=1 or b=4; +----+-------------+-------+------+---------------+------+---------+------+------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+------+---------------+------+---------+------+------+-------------+ | 1 | SIMPLE | t | ALL | i_t_a,i_t_b | NULL | NULL | NULL | 3 | Using where | +----+-------------+-------+------+---------------+------+---------+------+------+-------------+ Is there something I'm missing? Update: mysql> explain select * from t where a=1 or b=4; +----+-------------+-------+------+---------------+------+---------+------+------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+------+---------------+------+---------+------+------+-------------+ | 1 | SIMPLE | t | ALL | i_t_a,i_t_b | NULL | NULL | NULL | 1863 | Using where | +----+-------------+-------+------+---------------+------+---------+------+------+-------------+ Version: mysql> select version(); +----------------------+ | version() | +----------------------+ | 5.1.36-community-log | +----------------------+ Has anyone ever successfully made index merge work for MySQL? I'll be glad to see success stories here :)
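
    For what it's worth, a sketch of the usual workaround, reusing the table and index names from the question: with so few rows (and even with ~1800 rows matched by low-selectivity predicates) the optimizer tends to judge a full scan cheaper than an index_merge union, so one common rewrite is to express the OR as an explicit UNION so each branch can use its own single-column index; refreshing statistics is also worth a try.

      -- each branch can use its own index; note that UNION also collapses fully
      -- duplicate rows, so add the primary key to the select list if that matters
      SELECT * FROM t WHERE a = 1
      UNION
      SELECT * FROM t WHERE b = 4;

      ANALYZE TABLE t;   -- refresh index statistics before re-running EXPLAIN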


  • Cannot find grldr in all devices

    - by blockhead
    I'm running Wubi on an XP machine. I started out with 8.04 and gradually upgraded to 10.04. Recently I was creating a Linux bootable USB drive and put it in my system to see if it would work. After booting the live OS and rebooting my machine, I now get the error Cannot find grldr in all devices when booting Ubuntu. I don't know what grldr is, but I assume it is the GRUB loader. Did booting the live OS screw with my MBR, perhaps? How can I fix this, and if not, is it possible to reinstall Wubi without losing anything of what I have now?


  • Trouble swapping values as keys in generic Java BST class

    - by user1729869
    I was given a generic binary search tree class with the following declaration: public class BST<K extends Comparable<K>, V> I was asked to write a method that reverses the BST such that the values become the keys and keys become values. When I call the following method (defined in the class given) reverseDict.put(originalDict.get(key), key); I get the following two error messages from Netbeans: Exception in thread "main" java.lang.RuntimeException: Uncompilable source code - Erroneous sym type: BST.put And also: no suitable method found for put(V,K) method BST.put(BST<K,V>.Node,K,V) is not applicable (actual and formal argument lists differ in length) method BST.put(K,V) is not applicable (actual argument V cannot be converted to K by method invocation conversion) where V,K are type-variables: V extends Object declared in method <K,V>reverseBST(BST<K,V>) K extends Comparable<K> declared in method <K,V>reverseBST(BST<K,V>) From what the error messages are telling me, since my values do not extend Comparable I am unable to use them as keys. If I am right, how can I get around that without changing the class given (maybe a cast)?


  • Slow query. Wrong database structure?

    - by Tin
    I have a database with a table that contains tasks. Tasks have a lifecycle, and the status of a task's lifecycle can change. These state transitions are stored in a separate table, tasktransitions. I wrote a query to find all open/reopened tasks and recently changed tasks, but even with a rather small number of tasks (<1000) I already see that execution time has become very long (0.5 s). Tasks +-------------+---------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------+---------+------+-----+---------+----------------+ | taskid | int(11) | NO | PRI | NULL | auto_increment | | description | text | NO | | NULL | | +-------------+---------+------+-----+---------+----------------+ Tasktransitions +------------------+-----------+------+-----+-------------------+----------------+ | Field | Type | Null | Key | Default | Extra | +------------------+-----------+------+-----+-------------------+----------------+ | tasktransitionid | int(11) | NO | PRI | NULL | auto_increment | | taskid | int(11) | NO | MUL | NULL | | | status | int(11) | NO | MUL | NULL | | | description | text | NO | | NULL | | | userid | int(11) | NO | | NULL | | | transitiondate | timestamp | NO | | CURRENT_TIMESTAMP | | +------------------+-----------+------+-----+-------------------+----------------+ Query SELECT tasks.taskid,tasks.description,tasklaststatus.status FROM tasks LEFT OUTER JOIN ( SELECT tasktransitions.taskid,tasktransitions.transitiondate,tasktransitions.status FROM tasktransitions INNER JOIN ( SELECT taskid,MAX(transitiondate) AS lasttransitiondate FROM tasktransitions GROUP BY taskid ) AS tasklasttransition ON tasklasttransition.lasttransitiondate=tasktransitions.transitiondate AND tasklasttransition.taskid=tasktransitions.taskid ) AS tasklaststatus ON tasklaststatus.taskid=tasks.taskid WHERE tasklaststatus.status IS NULL OR tasklaststatus.status=0 OR tasklaststatus.transitiondate>'2013-09-01'; I'm wondering if the database structure is the best choice performance-wise. Could adding indexes help? I already tried to add some but I don't see great improvements. +-----------------+------------+----------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ | Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment | +-----------------+------------+----------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ | tasktransitions | 0 | PRIMARY | 1 | tasktransitionid | A | 896 | NULL | NULL | | BTREE | | | | tasktransitions | 1 | taskid_date_ix | 1 | taskid | A | 896 | NULL | NULL | | BTREE | | | | tasktransitions | 1 | taskid_date_ix | 2 | transitiondate | A | 896 | NULL | NULL | | BTREE | | | | tasktransitions | 1 | status_ix | 1 | status | A | 3 | NULL | NULL | | BTREE | | | +-----------------+------------+----------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ Any other suggestions?
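
    A rewrite worth trying, sketched against the schema above with no guarantee it wins: fetching each task's latest status and transition date with correlated subqueries lets MySQL use the existing taskid_date_ix (taskid, transitiondate) index once per task instead of materializing an unindexed derived table for the groupwise maximum.

      SELECT *
      FROM (
          SELECT t.taskid,
                 t.description,
                 (SELECT tt.status
                    FROM tasktransitions tt
                   WHERE tt.taskid = t.taskid
                   ORDER BY tt.transitiondate DESC
                   LIMIT 1) AS laststatus,              -- status of the most recent transition
                 (SELECT MAX(tt.transitiondate)
                    FROM tasktransitions tt
                   WHERE tt.taskid = t.taskid) AS lastdate
          FROM tasks t
      ) AS latest
      WHERE laststatus IS NULL
         OR laststatus = 0
         OR lastdate > '2013-09-01';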


  • Is it possible to have a MySQL table accept a NULL value for a column referencing a different table's primary key?

    - by Dr.Dredel
    I have a table with a column that holds the id of a row in another table. However, when table A is being populated, table B may or may not have a row ready for table A. My question is: is it possible to have MySQL prevent an invalid value from being entered but be OK with a NULL, or does a foreign key necessitate a valid related value? So... what I'm looking for (in pseudo code) is this: Table "person" id | name Table "people" id | group_name | person_id (foreign key id from table person) insert into person (1, 'joe'); insert into people (1, 'foo', 1) //kosher insert into people (1, 'foo', NULL) //also kosher insert into people (1, 'foo', 7) //should fail since there is no id 7 in the person table. The reason I need this is that I'm having a chicken-and-egg issue where it makes perfect sense for the rows in the people table to be created beforehand (in this example, I'm creating the groups and would like them to pre-exist the people who join them). And I realize that THIS example is silly and I would just put the group id in the person table rather than vice versa, but in my real-world problem that is not workable. I'm just curious if I need to allow any and all values in order to make this work, or if there's some way to allow for NULL.
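
    For what it's worth, a minimal sketch of how this behaves in MySQL, reusing the pseudo-schema above: a foreign key column declared nullable accepts NULL without any check against the parent table, while every non-NULL value must exist in the referenced column. Note the constraint is only enforced on InnoDB tables.

      CREATE TABLE person (
          id   INT PRIMARY KEY,
          name VARCHAR(50)
      ) ENGINE=InnoDB;

      CREATE TABLE people (
          id         INT PRIMARY KEY,
          group_name VARCHAR(50),
          person_id  INT NULL,    -- NULL means "no person assigned yet"
          CONSTRAINT fk_people_person FOREIGN KEY (person_id) REFERENCES person (id)
      ) ENGINE=InnoDB;

      INSERT INTO person VALUES (1, 'joe');
      INSERT INTO people VALUES (1, 'foo', 1);     -- kosher
      INSERT INTO people VALUES (2, 'foo', NULL);  -- also kosher: NULL is never checked
      INSERT INTO people VALUES (3, 'foo', 7);     -- fails: no person with id 7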


  • saveAll not saving associated data

    - by junior29101
    I'm having a problem trying to save (update) some associated data. I've read about a million Google returns, but nothing seems to be the solution. I'm at my wit's end and hope some kind soul here can help. I'm using CakePHP 1.3.0-RC4 and my database is InnoDB. Course has many course_tees; CourseTee belongs to Course. My controller function is pretty simple (I've made it as simple as possible): if(!empty($this->data)) $this->Course->saveAll($this->data); I've tried a lot of different variations of that ($this->data['Course'], save($this->data), etc.) without luck. It saves the Course info, but not the CourseTee stuff. I don't get an error message. Since I don't know how many tees any given course will have, I generate the form inputs dynamically in a loop: $form->input('CourseTee.'.$i.'.teeName', array('error' => false, 'label' => false, 'value' => $data['course_tees'][$i]['teeName'])) The course inputs are simpler: $form->input('Course.hcp'.$j, array('error' => false, 'label' => false, 'class' => 'form_small_w', 'value' => $data['Course']['hcp'.$j])) And this is how my data is formatted: Array ( [Course] => Array ( [id] => 1028476 ... ) [CourseTee] => Array ( [0] => Array ( [key] => 636 [courseid] => 1028476 ... ) [1] => Array ( [key] => 637 [courseid] => 1028476 ... ) ... ) )


  • Why aren't there 8 GB RAM modules yet?

    - by user49951
    Why has RAM module development seemingly been stuck at the same size for a while now (a couple of years)? I bought 2x2 GB modules 2 years ago, and now it's all the same size, with prices even higher. I want more memory, because I work a lot on my computer and I just need it. What is going on? Hardware/memory progress was being made constantly until these last couple of years, and I've been a heavy computer user for over 15 years. Why aren't there 4 GB/8 GB modules yet? I would gladly replace my DDR2 motherboard with a DDRX one if it had at least 4 GB DDRX modules for a reasonable price. Now we have a situation with very cheap USB drives reaching 64 GB, and RAM modules with a pathetic 2 GB size. Sounds like some sort of conspiracy.


  • database table design

    - by e.b.white
    I designed the tables below for a system that looks like a package-delivery system. For example, after the user receives the package, the postman should record it in the system: the state (history table) is "delivered", the operator is this postman, and the current state (state table) is of course "delivered". history table: +---------------+--------------------------+ | Field | Desc | +---------------+--------------------------+ | id | PRIMARY KEY | +---------------+--------------------------+ | package_id | package_tacking_id | +---------------+--------------------------+ | state | package_state | +---------------+--------------------------+ | operators | operators | +---------------+--------------------------+ | create_time| create_time | +---------------+--------------------------+ state table: +---------------+--------------------------+ | Field | Desc | +---------------+--------------------------+ | id | PRIMARY KEY | +---------------+--------------------------+ | package_id | package_tacking_id | +---------------+--------------------------+ | state | latest_package_state | +---------------+--------------------------+ The above is just the basic information to record; some other information (like invoice, destination, ...) should be recorded as well. But there are different service types like s1 and s2: for s1 the invoice does not need to be recorded but s2 needs it, and maybe s2 needs some other information recorded (like the telephone of the end user). Also, at delivery way stations there is additional information to record, and for different service types the information is different. My questions are: 1. For the different service types, should I declare different tables (option A) or just one big table that can record all information for all types (option B)? 2. If option A, since the basic information above is a MUST, how can I avoid declaring duplicate fields in the different tables?
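
    One common shape for option A that avoids duplicating the shared fields, sketched with hypothetical names (delivery, delivery_s1, delivery_s2 and their columns are illustrative, not from the question): keep the columns every service type shares in one base table and give each service type its own extension table keyed by the same id, so type-specific fields exist only where they apply.

      -- base table: the information every service type must record, stored once
      CREATE TABLE delivery (
          id           INT PRIMARY KEY,
          package_id   INT NOT NULL,
          service_type CHAR(2) NOT NULL      -- 's1' or 's2'
      );

      -- one extension table per service type, sharing the base table's key
      CREATE TABLE delivery_s1 (
          delivery_id INT PRIMARY KEY,
          s1_extra    VARCHAR(50),           -- whatever s1 alone needs
          FOREIGN KEY (delivery_id) REFERENCES delivery (id)
      );

      CREATE TABLE delivery_s2 (
          delivery_id  INT PRIMARY KEY,
          invoice_no   VARCHAR(30),
          end_user_tel VARCHAR(20),
          FOREIGN KEY (delivery_id) REFERENCES delivery (id)
      );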


  • In Process Explorer, is it possible to change the scaling of the activity graph to be able to analyse it further?

    - by therobyouknow
    Is there any way to zoom in on the Sysinternals Process Explorer graph? Background: I'm trying to work out why my PC freezes/locks up for about a second (the pointer does not move) every so often. This has only been happening for the last 2 days. There is a very narrow spike associated with the freeze, but it's hard to hover over it and analyse what is causing it. My PC spec: ThinkPad X201S 1440x900, i7 2.0GHz, 8 GB RAM, 256 GB Samsung 840 Pro SSD, Windows 8.1 Pro 64-bit, CalDigit USB 3.0 ExpressCard/34, UltraBase X200 with DisplayPort to HDMI.


  • LINQ to SQL - database relationships won't update after submit

    - by Quantic Programming
    I have a database with the tables Users and Uploads. The important columns are: Users -> UserID; Uploads -> UploadID, UserID. The primary key in the relationship is Users -> UserID and the foreign key is Uploads -> UserID. In LINQ to SQL, I do the following operations to add an upload: var upload = new Upload(); upload.UserID = user.UserID; upload.UploadID = XXX; db.Uploads.InsertOnSubmit(upload); db.SubmitChanges(); If I do that, rerun the application (and the db object is re-built, of course) and then do something like this: foreach(var upload in user.Uploads) I get all the uploads with that user's ID (like the one added in the previous example). The problem is that my application, after adding an upload and submitting changes, doesn't update the user.Uploads collection, i.e. I don't get the newly added uploads. The user object is stored in the Session object. At first, I thought that the LINQ to SQL framework doesn't update the reference of the object, and that therefore I should simply "reset" the user object with a new SQL request. I mean this: Session["user"] = db.Users.Where(u => u.UserID == user.UserID).SingleOrDefault(); (where user is the previous user) But it didn't help. Please note: after rerunning the application, user.Uploads does have the new upload! Did anyone experience this type of problem, or is it normal behavior? I am a newbie to this framework. I would gladly take any advice. Thank you!


  • Route gaming data over wireless and everything else through LAN?

    - by Alex
    I have two internet connections available to me. One is via LAN.. not a great ping, but fast downloads. The other is via USB wireless adapter.. good ping, but slow downloads. I want to connect to both of them simultaneously. I want to be able to specify which data or application will use the wireless connection and route everything else through the lan connection. Is this possible, and how would I do it? Windows 7 x64 is my operating system. Here is the data from route print: http://pastebin.com/vsjQRpSM I'm still unsure of how to use this to make all of my data go through the nvidia lan interface, even after reading route /? Also, if I'm able to achieve that, will it override the ForceBindIP?


  • dual-boot does not work

    - by elyashiv
    I have a PC with Linux Mint installed on it. I wanted to install Windows 7 alongside it, for some reasons. What I have done is: created a bootable USB stick with an Ubuntu ISO; restarted the computer, this time with Ubuntu (running from my disk-on-key); created a partition on the main HD using GParted; formatted the partition to NTFS; restarted the computer, this time from the Windows 7 installation CD; installed Windows 7 with normal settings. That all worked, and I'm writing this from Windows 7. The thing is, when I boot my system I don't get to choose which OS to run. I checked the settings in msconfig, and under Boot it has just Windows 7. How can I boot Linux?


  • how to find missing rows in Oracle

    - by user203212
    Hi, I have 2 tables: create table ORDERS ( ORDER_NO NUMBER(38,0) not null, ORDER_DATE DATE not null, SHIP_DATE DATE null, SHIPPING_METHOD VARCHAR2(12) null, TAX_STATUS CHAR(1) null, SUBTOTAL NUMBER null, TAX_AMT NUMBER null, SHIPPING_CHARGE NUMBER null, TOTAL_AMT NUMBER null, CUSTOMER_NO NUMBER(38,0) null, EMPLOYEE_NO NUMBER(38,0) null, BRANCH_NO NUMBER(38,0) null, constraint ORDERS_ORDERNO_PK primary key (ORDER_NO) ); and create table PAYMENTS ( PAYMENT_NO NUMBER(38,0) NOT NULL, CUSTOMER_NO NUMBER(38,0) null, ORDER_NO NUMBER(38,0) null, AMT_PAID NUMBER NULL, PAY_METHOD VARCHAR(10) NULL, DATE_PAID DATE NULL, LATE_DAYS NUMBER NULL, LATE_FEES NUMBER NULL, constraint PAYMENTS_PAYMENTNO_PK primary key (PAYMENT_NO) ); I am trying to find how many late orders each customer has. The column late_days in the PAYMENTS table holds how many days the customer was late making payments for any particular order, so I am making this query: SELECT C.CUSTOMER_NO, C.lname, C.fname, sysdate, COUNT(P.ORDER_NO) as number_LATE_ORDERS FROM CUSTOMER C, orders o, PAYMENTS P WHERE C.CUSTOMER_NO = o.CUSTOMER_NO AND P.order_no = o.order_no AND P.LATE_DAYS>0 group by C.CUSTOMER_NO, C.lname, C.fname That means I am counting the orders that have any late payments with late_days > 0. But this is giving me only the customers who have orders with late_days > 0; the customers who do not have any late orders are not showing up. So if one customer has 5 orders with late payments it shows 5 for that customer, but if a customer has 0 late orders, that customer is not selected by this query. Is there any way to select all the customers, so that if a customer has any late orders it shows the number, and if he does not have any late orders it shows 0?
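
    A sketch of the usual approach, keeping the column names from the query above (the CUSTOMER table is not shown in the question, so its columns are assumed from that query): outer-join customers to their orders and payments, move the late_days filter into the join condition instead of the WHERE clause so customers without matches survive, and count the non-NULL payment rows, which gives 0 for customers with no late payments.

      SELECT c.customer_no, c.lname, c.fname,
             COUNT(p.payment_no) AS number_late_orders   -- or COUNT(DISTINCT p.order_no) if one order can have several late payments
      FROM customer c
      LEFT OUTER JOIN orders o
             ON o.customer_no = c.customer_no
      LEFT OUTER JOIN payments p
             ON p.order_no = o.order_no
            AND p.late_days > 0            -- filtering here keeps customers with no late payments in the result
      GROUP BY c.customer_no, c.lname, c.fname
      ORDER BY c.customer_no;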


  • .NET port of Java's Map, Set, HashMap

    - by Nikos Baxevanis
    I am porting Java code to .NET and I am stuck on the following lines, which behave unexpectedly in .NET. Java: Map<Set<State>, Set<State>> sets = new HashMap<Set<State>, Set<State>>(); Set<State> p = new HashSet<State>(); if (!sets.containsKey(p)) { ... } The equivalent .NET code could possibly be: IDictionary<HashSet<State>, HashSet<State>> sets = new Dictionary<HashSet<State>, HashSet<State>>(); HashSet<State> p = new HashSet<State>(); if (!sets.ContainsKey(p)) { /* (Add to a list). Always get here in .NET (??) */ } However the key comparison fails: the program thinks that "sets" never contains the key "p", which eventually results in an OutOfMemoryException. Perhaps I am missing something; object equality and identity might be different between Java and .NET. I tried implementing IComparable and IEquatable in class State but the results were the same. Edit: What the code does is: if sets does not contain the key "p" (which is a HashSet), it adds "p" at the end of a LinkedList. The State class (Java) is a simple class defined as: public class State implements Comparable<State> { boolean accept; Set<Transition> transitions; int number; int id; static int next_id; public State() { resetTransitions(); id = next_id++; } // ... public int compareTo(State s) { return s.id - id; } public boolean equals(Object obj) { return super.equals(obj); } public int hashCode() { return super.hashCode(); }


  • method for creating PHP templates (i.e. HTML with variables)?

    - by Haroldo
    I'm designing my HTML emails; these are to be a block of HTML containing variables that I can store in a $template variable. My problem comes with the storing-in-the-variable part: putting all my HTML into PHP string concatenation makes it a pain in the bum to work with. For example, the code below is fine for a simple email, but once I start getting nested tables etc. it's going to get really confusing... $template.= 'Welcome ' . $username . '<br /><br /><br />'; $template.= 'Thank-you for creating an account <br /><br />'; $template.= 'Please confirm your account by click the link below! <br /><br />'; $template.= '<a href="' . $sitepath . '?email=' . $email . '&conf_key=' . $key . '" style="color: #03110A;"><font size="5" font-family="Verdana, Geneva, sans-serif" color="#03110A">' . $key . '</font></a>'; $template.='</body></html>'; Is there a way I can still store the HTML in a $var but not have to write it like this?


  • Ubuntu stops auto-mounting flash drive

    - by Brian
    It seems that after being up for a few days, my Ubuntu system refuses to auto-mount hot-plugged USB disks (i.e. flash drives). The output from dmesg shows that the kernel recognizes the device correctly. The only solution I'm aware of at the moment is to reboot (logging out may work as well, but the impact is the same since I have a bunch of stuff open and it takes a few minutes to get everything situated after startup/login). I thought gvfs-fuse-daemon was the thing responsible for managing filesystems in userspace, but killing and restarting that doesn't help. Any other ideas?


  • Simple aggregating query very slow in PostgreSQL, any way to improve?

    - by Ash
    Hi, I have a table which holds files and their types, such as CREATE TABLE files ( id SERIAL PRIMARY KEY, name VARCHAR(255), filetype VARCHAR(255), ... ); and another table holding file properties, such as CREATE TABLE properties ( id SERIAL PRIMARY KEY, file_id INTEGER CONSTRAINT fk_files REFERENCES files(id), size INTEGER, ... // other property fields ); The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties). I want to do aggregating queries, for example finding the average size and standard deviation for all file types. But it's very slow - around 70 seconds for the latter query. I understand it needs a sequential scan, but it still seems too much. Here's the query SELECT f.filetype, avg(size), stddev(size) FROM files as f, properties as pr WHERE f.id = pr.file_id GROUP BY f.filetype; and the explain HashAggregate (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1) -> Hash Join (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1) Hash Cond: (f.id = pr.file_id) -> Seq Scan on files f (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1) -> Hash (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1) -> Seq Scan on properties pr (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1) Total runtime: 74014.520 ms Any ideas why it is so slow / how to make it faster?
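
    A hedged reading of that plan: the sequential scan of files alone takes ~62 s for ~805k rows while the planner expects 1.14M, which usually points to table bloat (dead tuples) or stale statistics rather than the join itself. A sketch of the first things worth trying, using the table names above:

      -- reclaim dead tuples and refresh planner statistics, then re-check timings
      VACUUM ANALYZE files;
      VACUUM ANALYZE properties;

      -- the same query in explicit JOIN form, re-examined with timing
      EXPLAIN ANALYZE
      SELECT f.filetype, avg(pr.size), stddev(pr.size)
      FROM files AS f
      JOIN properties AS pr ON pr.file_id = f.id
      GROUP BY f.filetype;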


  • What power cord does a WD16001032 hard drive use?

    - by llcf
    I have a Western Digital 160GB My Book USB external hard drive (WD16001032), but I can't find its power cord (or, at least, figure out which one it is in my box of cords). It might be that only one power cord would fit, but I'm a bit cautious since I just tried one of the cords with a router and could smell electronics burning when I used an incorrect one. What voltage/amps are needed for this drive? I can't find specs on Western Digital's site. I'm assuming this is due to it being an older drive.


  • Best Practices For Secure APIs?

    - by Ferrett Steinmetz
    Let's say I have a website that has a lot of information on our products. I'd like some of our customers (including us!) to be able to look up our products via various methods, including: 1) Pulling data from AJAX calls that return data in cool, JavaScripty ways; 2) Creating iPhone applications that use that data; 3) Having other web applications use that data for their own ends. Normally, I'd just create an API and be done with it. However, this data is in fact mildly confidential - which is to say that we don't want our competitors to be able to look up all our products every morning and then automatically set their prices to undercut us. And we also want to be able to look at who might be abusing the system, so if someone's making ten million complex calls to our API a day and bogging down our server, we can cut them off. My next logical step would be to create a developers' key to restrict access - which would work fine for web apps, but not so much for any AJAX calls. (As I see it, they'd need to provide the key in the JavaScript, which is in plaintext and easily seen, and hence there's actually no security at all. Particularly if we'd be using our own developers' keys on our site to make these AJAX calls.) So my question: after looking around at OAuth and OpenID for some time, I'm not sure there is a solution that would handle all three of the above. Is there some sort of canonical "best practices" for developers' keys, or can OAuth and OpenID handle AJAX calls easily in some fashion I have yet to grok, or am I missing something entirely?


  • Problem accessing private variables in jQuery like chainable design pattern

    - by novogeek
    Hi folks, I'm trying to create my custom toolbox which imitates jQuery's design pattern. Basically, the idea is somewhat derived from this post: http://stackoverflow.com/questions/2061501/jquery-plugin-design-pattern-common-practice-for-dealing-with-private-function (Check the answer given by "David"). So here is my toolbox function: (function(window){ var mySpace=function(){ return new PrivateSpace(); } var PrivateSpace=function(){ var testCache={}; }; PrivateSpace.prototype={ init:function(){ console.log('init this:', this); return this; }, ajax:function(){ console.log('make ajax calls here'); return this; }, cache:function(key,selector){ console.log('cache selectors here'); testCache[key]=selector; console.log('cached selector: ',testCache); return this; } } window.hmis=window.m$=mySpace(); })(window) Now, if I execute this function like: console.log(m$.cache('firstname','#FirstNameTextbox')); I get an error 'testCache' is not defined. I'm not able to access the variable "testCache" inside my cache function of the prototype. How should I access it? Basically, what I want to do is, I want to cache all my jQuery selectors into an object and use this object in the future.


  • How do I iterate through hierarchical data in a Sql Server 2005 stored proc?

    - by AlexChilcott
    Hi, I have a SQL Server 2005 table "Group" similar to the following: Id (PK, int) Name (varchar(50)) ParentId (int) where ParentId refers to another Id in the Group table. This is used to model hierarchical groups such as: Group 1 (id = 1, parentid = null) +--Group 2 (id = 2, parentid = 1) +--Group 3 (id = 3, parentid = 1) +--Group 4 (id = 4, parentid = 3) Group 5 (id = 5, parentid = null) +--Group 6 (id = 6, parentid = 5) You get the picture. I have another table, let's call it "Data" for the sake of simplification, which looks something like: Id (PK, int) Key (varchar) Value (varchar) GroupId (FK, int) Now, I am trying to write a stored proc which can get me the "Data" for a given group. For example, if I query for group 1, it returns the key-value pairs from Data where groupId = 1. If I query for group 2, it returns the KVPs for groupId = 1 unioned with those that have groupId = 2 (and duplicated keys are replaced). Ideally, the sproc would also fail gracefully if there is a cycle (i.e. if group 1's parent is group 2 and group 2's parent is group 1). Has anyone had experience writing such a sproc, or does anyone know how this might be accomplished? Thanks guys, much appreciated, Alex
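
    One way to do this without explicit iteration, sketched as a starting point rather than a finished sproc body (the variable is illustrative, and Group/Key/Value are bracketed because they collide with keywords): a recursive CTE, which SQL Server 2005 supports, walks from the requested group up through its ancestors, and ROW_NUMBER keeps the closest group's value when a key is duplicated. MAXRECURSION only aborts a runaway cycle with an error rather than failing gracefully; true cycle detection would need to carry the visited path along in the CTE.

      DECLARE @GroupId int;
      SET @GroupId = 2;                        -- illustrative input parameter

      WITH GroupPath (Id, ParentId, Depth) AS
      (
          SELECT g.Id, g.ParentId, 0
          FROM [Group] g
          WHERE g.Id = @GroupId

          UNION ALL

          SELECT g.Id, g.ParentId, gp.Depth + 1
          FROM [Group] g
          INNER JOIN GroupPath gp ON g.Id = gp.ParentId
      )
      SELECT [Key], [Value]
      FROM (
          SELECT d.[Key], d.[Value],
                 ROW_NUMBER() OVER (PARTITION BY d.[Key] ORDER BY gp.Depth) AS rn
          FROM [Data] d
          INNER JOIN GroupPath gp ON gp.Id = d.GroupId
      ) ranked
      WHERE rn = 1
      OPTION (MAXRECURSION 100);               -- caps runaway recursion if the data contains a cycle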


  • no default constructor exists for class

    - by MixedCoder
    #include "Includes.h" enum BlowfishAlgorithm { ECB, CBC, CFB64, OFB64, }; class Blowfish { public: struct bf_key_st { unsigned long P[18]; unsigned long S[1024]; }; Blowfish(BlowfishAlgorithm algorithm); void Dispose(); void SetKey(unsigned char data[]); unsigned char Encrypt(unsigned char buffer[]); unsigned char Decrypt(unsigned char buffer[]); char EncryptIV(); char DecryptIV(); private: BlowfishAlgorithm _algorithm; unsigned char _encryptIv[200]; unsigned char _decryptIv[200]; int _encryptNum; int _decryptNum; }; class GameCryptography { public: Blowfish _blowfish; GameCryptography(unsigned char key[]); void Decrypt(unsigned char packet[]); void Encrypt(unsigned char packet[]); Blowfish Blowfish; void SetKey(unsigned char k[]); void SetIvs(unsigned char i1[],unsigned char i2[]); }; GameCryptography::GameCryptography(unsigned char key[]) { } Error:IntelliSense: no default constructor exists for class "Blowfish" ???!


  • In Linux, is it possible to get a listing of drives' disk space usage that also shows volume labels?

    - by DavidH
    I know about df, of course, but df does not output volume labels. I have 5 USB hard drives plugged into my NAS box, and would love to know which is which. Current df output: Filesystem Size Used Avail Use% Mounted on /dev/sda1 27G 2.2G 24G 9% / none 56M 476K 55M 1% /dev none 60M 0 60M 0% /dev/shm none 60M 332K 59M 1% /var/run none 60M 0 60M 0% /var/lock none 60M 0 60M 0% /lib/init/rw /dev/sde1 150G 102G 48G 68% /media/usb0 /dev/sdb1 299G 196G 103G 66% /media/usb1 /dev/sdc1 233G 183G 51G 79% /media/usb2 /dev/sdd1 233G 209G 25G 90% /media/usb3 /dev/sdf1 150G 101G 49G 68% /media/usb4


  • What laptops can run an external 27" or 30" screen at 2560x1600 native resolution? [closed]

    - by Moin Zaman
    SU folks, what laptops do you know of that can run a 27" or 30" external monitor as a secondary display (extending the desktop onto the second screen, without switching off the laptop's own built-in screen) at the screen's native resolution of 2560x1600? I'm not interested in docks or USB video adapters etc., just the laptop's built-in display ports. List the port and cable used as well if possible. Reference links / posts that confirm it are an added bonus. I'm hoping people who've tried it themselves and / or confirmed it can list specific models of laptops so we can build up a good list.

