Search Results

Search found 9542 results on 382 pages for 'row'.


  • SPARC T4-4 Delivers World Record Performance on Oracle OLAP Perf Version 2 Benchmark

    - by Brian
    Oracle's SPARC T4-4 server delivered world record performance with subsecond response time on the Oracle OLAP Perf Version 2 benchmark, using Oracle Database 11g Release 2 running on Oracle Solaris 11. The SPARC T4-4 server achieved throughput of 430,000 cube-queries/hour with an average response time of 0.85 seconds and a median response time of 0.43 seconds. This was achieved using only 60% of the available CPU resources, leaving plenty of headroom for future growth. The SPARC T4-4 server operated on an Oracle OLAP cube with a 4 billion row fact table of sales data containing 4 dimensions. This represents as many as 90 quintillion aggregate rows (90 followed by 18 zeros).

    Performance Landscape

    Oracle OLAP Perf Version 2 Benchmark, 4 Billion Fact Table Rows

    System       Queries/hour   Users*   Avg Response Time (sec)   Median Response Time (sec)
    SPARC T4-4   430,000        7,300    0.85                      0.43

    * Users - the supported number of users with a given think time of 60 seconds

    Configuration Summary and Results

    Hardware Configuration:
    SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz, 1 TB memory
    Data Storage: 1 x Sun Fire X4275 (using COMSTAR), 2 x Sun Storage F5100 Flash Array (each with 80 FMODs)
    Redo Storage: 1 x Sun Fire X4275 (using COMSTAR with 8 HDD)

    Software Configuration:
    Oracle Solaris 11 11/11
    Oracle Database 11g Release 2 (11.2.0.3) with Oracle OLAP option

    Benchmark Description

    The Oracle OLAP Perf Version 2 benchmark is a workload designed to demonstrate and stress the Oracle OLAP product's core features of fast query, fast update, and rich calculations on a multi-dimensional model to support enhanced Data Warehousing. The bulk of the benchmark entails running a number of concurrent users, each issuing typical multidimensional queries against an Oracle OLAP cube consisting of a number of years of sales data with fully pre-computed aggregations. The cube has four dimensions: time, product, customer, and channel. Each query user issues approximately 150 different queries. One query chain may ask for total sales in a particular region (e.g., South America) for a particular time period (e.g., Q4 of 2010), followed by additional queries which drill down into sales for individual countries (e.g., Chile, Peru, etc.), with further queries drilling down into individual stores, etc. Another query chain may ask for yearly comparisons of total sales for some product category (e.g., major household appliances) and then issue further queries drilling down into particular products (e.g., refrigerators, stoves, etc.), particular regions, particular customers, etc. Results from version 2 of the benchmark are not comparable with version 1. The primary difference is the type of queries along with the query mix.

    Key Points and Best Practices

    Since typical BI users are often likely to issue similar queries with different constants in the where clauses, setting the init.ora parameter "cursor_sharing" to "force" will provide additional query throughput and a larger number of potential users. Except for this setting, together with making full use of available memory, out-of-the-box performance for the OLAP Perf workload should provide results similar to what is reported here. For a given number of query users with zero think time, the main measured metrics are the average query response time, the median query response time, and the query throughput. A derived metric is the maximum number of users the system can support achieving the measured response time, assuming some non-zero think time.
    The calculation of the maximum number of users follows from the well-known response-time law N = (rt + tt) * tp, where rt is the average response time, tt is the think time, and tp is the measured throughput. Setting tt to 60 seconds, rt to 0.85 seconds, and tp to 119.44 queries/sec (430,000 queries/hour) gives N = (0.85 + 60) * 119.44, or approximately 7,268; that is, the T4-4 server will support roughly 7,300 concurrent users with a think time of 60 seconds and an average response time of 0.85 seconds. For more information, see chapter 3 of the book "Quantitative System Performance" cited below.

    See Also
    Quantitative System Performance: Computer System Analysis Using Queueing Network Models - Edward D. Lazowska, John Zahorjan, G. Scott Graham, Kenneth C. Sevcik
    Oracle Database 11g - Oracle OLAP: oracle.com, OTN
    SPARC T4-4 Server: oracle.com, OTN
    Oracle Solaris: oracle.com, OTN
    Oracle Database 11g Release 2: oracle.com, OTN

    Disclosure Statement
    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 11/2/2012.


  • 5 Android Keyboard Replacements to Help You Type Faster

    - by Chris Hoffman
    Android allows developers to replace its keyboard with their own keyboard apps. This has led to experimentation and great new features, like the gesture-typing feature that's made its way into Android's official keyboard after proving itself in third-party keyboards. This sort of customization isn't possible on Apple's iOS or even Microsoft's modern Windows environments. Installing a third-party keyboard is easy: install it from Google Play, launch it like any other app, and it will explain how to enable it.

    Google Keyboard

    Google Keyboard is Android's official keyboard, as seen on Google's Nexus devices. However, there's a good chance your Android smartphone or tablet comes with a keyboard designed by its manufacturer instead. You can install the Google Keyboard from Google Play, even if your device doesn't come with it. This keyboard offers a wide variety of features, including a built-in gesture-typing feature, as popularized by Swype. It also offers prediction, including full next-word prediction based on your previous word, and includes voice recognition that works offline on modern versions of Android. Google's keyboard may not offer the most accurate swiping feature or the best autocorrection, but it's a great keyboard that feels like it belongs in Android.

    SwiftKey

    SwiftKey costs $4, although you can try it free for one month. In spite of its price, many people who rarely buy apps have been sold on SwiftKey. It offers amazing auto-correction and word-prediction features. Just mash away on your touch-screen keyboard, typing as fast as possible, and SwiftKey will notice your mistakes and type what you actually meant to type. SwiftKey also now has built-in support for gesture-typing via SwiftKey Flow, so you get a lot of flexibility. At $4, SwiftKey may seem a bit pricey, but give the month-long trial a try. A great keyboard makes all the typing you do everywhere on your phone better. SwiftKey is an amazing keyboard if you tap-to-type rather than swipe-to-type.

    Swype

    While other keyboards have copied Swype's swipe-to-type feature, none have completely matched its accuracy. Swype has been designing a gesture-typing keyboard for longer than anyone else, and its gesture feature still seems more accurate than its competitors' gesture support. If you use gesture-typing all the time, you'll probably want to use Swype. Swype can now be installed directly from Google Play without the old, tedious process of registering a beta account and sideloading the Swype app. Swype offers a month-long free trial, and the full version is available for $1 afterwards.

    Minuum

    Minuum is a crowdfunded keyboard that is currently still in beta and only supports English. We include it here because it's so interesting: it's a great example of the kind of creativity and experimentation that happens when you allow developers to experiment with their own forms of keyboard. Minuum uses a tiny, minimum keyboard that frees up your screen space, so your touch-screen keyboard doesn't hog your device's screen. Rather than displaying a full keyboard on your screen, Minuum displays a single row of letters. Each letter is small and may be difficult to hit, but that doesn't matter: Minuum's smart autocorrection algorithms interpret what you intended to type rather than typing the exact letters you press. Just swipe to the right to type a space and accept Minuum's suggestion. At $4 for a beta version with no trial, Minuum may seem a bit pricey, but it's a great example of the flexibility Android allows. If there's a problem with this keyboard, it's that it's a bit late: in an age of 5″ smartphones with 1080p screens, full-size keyboards no longer feel as cramped.

    MessagEase

    MessagEase is another example of a new take on text input. Thankfully, this keyboard is available for free. MessagEase presents all letters in a nine-button grid. To type a common letter, you'd tap the button. To type an uncommon letter, you'd tap the button, hold down, and swipe in the appropriate direction. This gives you large buttons that can work well as touch targets, especially when typing with one hand. Like any other unique twist on a traditional keyboard, you'd have to give it a few minutes to get used to where the letters are and the new way it works. After giving it some practice, you may find this is a faster way to type on a touch-screen, especially with one hand, as the targets are so large.

    Google Play is full of replacement keyboards for Android phones and tablets. Keyboards are just another type of app that you can swap in. Leave a comment if you've found another great keyboard that you prefer using.

    Image Credit: Cheon Fong Liew on Flickr


  • The Sound of Two Toilets Flushing: Constructive Criticism for Virgin Atlantic Complaints Department

    - by Geertjan
    I recently had the experience of flying from London to Johannesburg and back with Virgin Atlantic. The good news was that it was the cheapest flight available and that the takeoff and landing were absolutely perfect. Hence I really have no reason to complain. Instead, I'd like to offer some constructive criticism which hopefully Richard Branson will find sometime while googling his name. Or maybe someone from the Virgin Atlantic Complaints Department will find it; whatever, I just want to put this information out there.

    Arrangement of restroom facilities. Maybe next time you design an airplane, consider not putting your toilets at a right angle right next to your rows of seats. Being able to reach, without even needing to stretch your arm, from your seat to close, yet again, a toilet door that someone, someone obviously sitting very far from the toilets, carelessly forgot to close is not an indicator of quality interior design. Have you noticed how all other airplanes have their toilets in a cubicle separated from the rows of seats? On those airplanes, people sitting in the seats near the toilets are not constantly being woken up throughout the night whenever someone enters/exits the toilet, whenever the light in the toilet is suddenly switched on, and whenever one of the toilets flushes. Bonus points for Virgin Atlantic passengers in the seats adjoining the toilets come when multiple toilets are flushed simultaneously and multiple passengers enter/exit them at the same time, a bit like an unasked-for, low-budget musical of suddenly illuminated grumpy people in crumpled clothes. What joy that brings at 3 AM is hard to describe.

    Seats with extra leg room. You know how other airplanes have seats with extra leg room? You know what those seats tend to have? Extra leg room. It's really interesting how Virgin Atlantic's seats with extra leg room actually have no extra leg room at all. It should have been a giveaway, the fact that these special seats are found in the same rows as the standard seats, rather than on the cusp of real glory which is where most airlines put their extra leg room seats, with the only actual difference being that they have a slightly different color. Had you called them "seats with a different color" (i.e., almost not quite green, rather than something vaguely hinting at blue), at least I'd have known what I was getting. Picture the joy at 3 AM, rudely awakened from nightmarish slumber, partly grateful to have been released from a grayish dream of faceless zombies resembling one or two of those in a recent toilet line, by multiple adjoining toilets flushing simultaneously, while you're sitting in a seat with extra leg room that has exactly as much leg room as the seats in neighboring rows. You then have a choice of things to be sincerely annoyed about.

    Food from the '80s. In the '80s, airplane food came in soggy containers, and even breakfast, the most important meal of the day, was a sad heap of vaguely gray colors. The culinary highlight tended to be a squashed tomato, which must have been mashed to a pulp with a brick prior to being regurgitated by a small furry animal, and there was also always a piece of immensely horrid pumpkin, as well as a slice of spongy something you'd never seen before. Sausages and mash at 6 AM on an airplane was always a heavy lump of horribleness. Thankfully, all airlines throughout the world changed from this puke-inducing strategy sometime around 1987. Not Virgin Atlantic, of course. The fatty sausages and mash are still there, bringing you flashbacks to Duran Duran, which is what you were listening to (on your Walkman) the last time you saw it on an airplane. Even the golden oldie "squashed tomato attached by slime to three wet peas" is on the menu. How wonderful to have all this in a cramped seat with a long row of early morning bleariness lined up for the toilets, right at your side, bumping into your elbow, groggily, one by one, one after another, more and more, fumble-open-door-silence-flush-fumble-open-door, and on and on, while you tentatively push your fork through a soggy pile of colorless mush, fighting the urge to throw up on the stinky socks of whatever nightmarish zombie is bumping into your elbow at the time.

    But, then again, the plane landed without a hitch, in fact, extremely smoothly, so I'm certainly not blaming the pilots.


  • Developing Schema Compare for Oracle (Part 5): Query Snapshots

    - by Simon Cooper
    If you've emailed us about a bug you've encountered with the EAP or beta versions of Schema Compare for Oracle, we probably asked you to send us a query snapshot of your databases. Here, I explain what a query snapshot is, and how it helps us fix your bug.

    Problem 1: Debugging users' bug reports

    When we started the Schema Compare project, we knew we were going to get problems with users' databases: configurations we hadn't considered, features that weren't installed, unicode issues, weird dependencies... With SQL Compare, users are generally happy to send us a database backup that we can restore using a single RESTORE DATABASE command on our test servers and immediately reproduce the problem. Oracle, on the other hand, would be a lot more tricky. As Oracle generally has a 1-to-1 mapping between instances and databases, any databases users sent would have to be restored to their own instance. Furthermore, the number of steps required to get a properly working database, and the size of most Oracle databases, made it infeasible to ask every customer who came across a bug during our beta program to send us their databases. We also knew that there would be lots of issues with data security that would make it hard to get backups. So we needed an easier way to debug customers' issues and sort out what strange schema data Oracle was returning.

    Problem 2: Test execution time

    Another issue we knew we would have to solve was the execution time of the tests we would produce for the Schema Compare engine. Our initial prototype showed that querying the data dictionary for schema information was going to be slow (at least 15 seconds per database), and this is generally proportional to the size of the database. If you're running thousands of tests on the same databases, each one registering separate schemas, not only would the tests take hours and hours to run, but the test servers would be hammered senseless.

    The solution

    To solve these, we needed to be able to populate the schema of a database without actually connecting to it. The IDataReader interface is the primary way we read data from an Oracle server. The data dictionary queries we use return their data in terms of simple strings and numbers, which we then process and reconstruct into an object model, and the results of these queries are identical for identical schemas. So, we can record the raw results of the queries once, and then replay these results to construct the same object model as many times as required without needing to actually connect to the original database. This is what query snapshots do. They are binary files containing the raw unprocessed data we get back from the Oracle server for all the queries we run on the data dictionary to get schema information. The core of the query snapshot generation takes the results of the IDataReader we get from running queries on Oracle, and passes the row data to a BinaryWriter that writes it straight to a file (a sketch of this record-and-replay idea follows at the end of this post). The query snapshot can then be replayed to create the same object model; when the results of a specific query are needed by the population code, we can simply read the binary data stored in the file on disk and present it through an IDataReader wrapper. This is far faster than querying the server over the network, and allows us to run tests in a reasonable time. Query snapshots also allow us to easily debug a customer's problem; using a simple snapshot generation program, users can generate a query snapshot that can be sent along with a bug report, which we can immediately replay on our machines to debug the issue, rather than having to obtain database backups and restore databases to test systems. There are also far fewer problems with data security; query snapshots only contain schema information, which is generally less sensitive than table data.

    Query snapshots implementation

    However, actually implementing such a feature did have a couple of 'gotchas' to it. My second blog post detailed the development of the dependencies algorithm we use to ensure we get all the dependencies in the database, and that algorithm uses data from both databases to find all the needed objects: what database you're comparing to affects what objects get populated from both databases. We get information on these additional objects using an appropriate WHERE clause on all the population queries. So, in order to accurately replay the results of querying the live database, the query snapshot needs to be a snapshot of a comparison of two databases, not just of populating a single database. Furthermore, although the code population queries (e.g. querying all_tab_cols to get column information) can simply be passed straight from the IDataReader to the BinaryWriter, we need to hook into and run the live dependencies algorithm while we're creating the snapshot to ensure we get the same WHERE clauses, and the same query results, as if we were populating straight from a live system. We also need to store the results of the dependencies queries themselves, as the resulting dependency graph is stored within the OracleDatabase object that is produced, and is later used to help order actions in synchronization scripts. This is significantly helped by the dependencies algorithm being deterministic: given the same input, it will always return the same output. Therefore, when we're replaying a query snapshot and processing dependency information, we simply have to return the results of the queries in the order we got them from the live database, rather than trying to calculate the contents of all_dependencies on the fly.

    Query snapshots are a significant feature in Schema Compare that really helps us to debug problems with the tool, as well as making our testers happier. Although not really user-visible, they are very useful to the development team, helping us fix bugs in the product much faster than we otherwise would be able to.
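    To make the record-and-replay idea concrete, here is a minimal sketch of the recording half (hypothetical types and file format for illustration; this is not the actual Schema Compare code):

        using System.Data;
        using System.IO;

        static class QuerySnapshotRecorder
        {
            // Streams every row an IDataReader returns into a binary file so the
            // same results can later be replayed through an IDataReader wrapper.
            public static void Record(IDataReader reader, BinaryWriter writer)
            {
                writer.Write(reader.FieldCount);
                while (reader.Read())
                {
                    writer.Write(true);                 // marker: another row follows
                    for (int i = 0; i < reader.FieldCount; i++)
                    {
                        bool isNull = reader.IsDBNull(i);
                        writer.Write(isNull);
                        if (!isNull)
                            writer.Write(reader.GetValue(i).ToString()); // dictionary data is strings/numbers
                    }
                }
                writer.Write(false);                    // marker: end of result set
            }
        }

    Replaying is the mirror image: an IDataReader implementation that reads the same stream back and hands each stored row to the population code as if it had just come from the server.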


  • Think before you animate

    - by David Paquette
    Animations are becoming more and more common in our applications. With technologies like WPF, Silverlight and jQuery, animations are becoming easier for developers to use (and abuse). When used properly, animation can augment the user experience. When used improperly, animation can degrade the user experience. Sometimes, the differences can be very subtle. I have recently made use of animations in a few projects and I very quickly realized how easy it is to abuse animation techniques. Here are a few things I have learned along the way.

    1) Don't animate for the sake of animating

    We've all seen the PowerPoint slides with annoying slide transitions that animate 20 different ways. It's distracting and tacky. The same holds true for your application. While animations are fun and becoming easy to implement, resist the urge to use the technology just because you think the technology is amazing.

    2) Animations should (and do) have meaning

    I recently built a simple Windows Phone 7 (WP7) application, Steeped (download it here). The application has 2 pages. The first page lists a number of tea types. When the user taps on one of the tea types, the application navigates to the second page with information about that tea type and some options for the user to choose from. One of the last things I did before submitting Steeped to the marketplace was add a page transition between the 2 pages. I chose the Slide / Fade Out transition. When the user selects a tea type, the main page slides to the left and fades out. At the same time, the details page slides in from the right and fades in. I tested it and thought it looked great, so I submitted the app. A few days later, I asked a friend to try the app. He selected a tea type, and I was a little surprised by how he used the app. When he wanted to navigate back to the main page, instead of pressing the back button on the phone, he tried to use a swiping gesture. Of course, the swiping gesture did nothing because I had not implemented that feature. After thinking about it for a while, I realized that the page transition I had chosen implied a particular behaviour. As a user, if an action I perform causes an item (in this case the page) to move, then my expectation is that I should be able to move it back. I have since added logic to handle the swipe gesture (a sketch of one approach follows at the end of this post), and I think the app flows much better now. When using animation, it pays to ask yourself: what story does this animation tell my users?

    3) Watch the replay

    Some animations might seem great initially but can get annoying over time. When you use an animation in your application, make sure you try using it over and over again to make sure it doesn't get annoying. When I add an animation, I try to watch it at least 25 times in a row. After watching the animation repeatedly, I can make a more informed decision about whether or not I should keep the animation. Often, I end up shortening the length of the animations.

    4) Don't get in the user's way

    An animation should never slow the user down. When implemented properly, an animation can give a perceived bump in performance. A good example of this is the page transitions in most of the built-in apps on WP7. Obviously, these page animations don't make the phone any faster, but they do provide a more responsive user experience. Why? Because most of the animations begin as soon as the user has performed some action. The destination page might not be fully loaded yet, but the system responded immediately to the user action, giving the impression that the system is more responsive. If the user did not see anything happen until after the destination page was fully loaded, the application would feel clumsy and slow. Also, it is important to make sure the animation does not degrade the performance (or perceived performance) of the application.

    Just a few things to consider when using animations. As is the case with many technologies, we often learn how to misuse it before we learn how to use it effectively.
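    For completeness, here is a minimal sketch of one way to honour that swipe-back expectation on a WP7 Silverlight page (an illustration under assumptions: a page class named DetailsPage and a 100-pixel threshold; this is not the actual Steeped source):

        using System.Windows.Input;
        using Microsoft.Phone.Controls;

        public partial class DetailsPage : PhoneApplicationPage
        {
            public DetailsPage()
            {
                InitializeComponent();
                // Raised when a touch manipulation (e.g. a swipe) on the page ends.
                ManipulationCompleted += OnManipulationCompleted;
            }

            private void OnManipulationCompleted(object sender, ManipulationCompletedEventArgs e)
            {
                // A large rightward translation mirrors the "page slid left" transition,
                // so treat it as "slide the page back", i.e. navigate back.
                if (e.TotalManipulation.Translation.X > 100)
                    NavigationService.GoBack();
            }
        }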


  • Breaking 1NF to model subset constraints. Does this sound sane?

    - by Chris Travers
    My first question here. Apologies if it is in the wrong forum, but this seems pretty conceptual. I am looking at doing something that goes against conventional wisdom and want to get some feedback as to whether this is totally insane or will result in problems, so critique away! I am on PostgreSQL 9.1 but may be moving to 9.2 for this part of this project. To re-iterate: does it seem sane to break 1NF in this way? I am not looking for debugging code so much as for where people see problems that this might lead.

    The Problem

    In double-entry accounting, financial transactions are journal entries with an arbitrary number of lines. Each line has either a left value (debit) or a right value (credit), which can be modelled as a single value with negatives as debits and positives as credits or vice versa. The sum of all debits and credits must equal zero (so if we go with a single amount field, sum(amount) must equal zero for each financial journal entry). SQL-based databases, pretty much required for this sort of work, have no way to express this sort of constraint natively, and so any approach to enforcing it in the database seems rather complex.

    The Write Model

    The journal entries are append-only. There is a possibility we will add a delete model, but it will be subject to a different set of restrictions and so is not applicable here. If and when we allow deletes, we will probably do them using a simple ON DELETE CASCADE designation on the foreign key, and require that deletes go through a dedicated stored procedure which can enforce the other constraints. So inserts and selects have to be accommodated, but updates and deletes do not for this task.

    My Proposed Solution

    My proposed solution is to break first normal form and model constraints on arrays of tuples, with a trigger that breaks the rows out into another table.

    CREATE TABLE journal_line (
        entry_id bigserial primary key,
        account_id int not null references account(id),
        journal_entry_id bigint not null, -- adding references later
        amount numeric not null
    );

    I would then add "table methods" to extract debits and credits for reporting purposes:

    CREATE OR REPLACE FUNCTION debits(journal_line) RETURNS numeric
    LANGUAGE sql IMMUTABLE AS $$
        SELECT CASE WHEN $1.amount < 0 THEN $1.amount * -1 ELSE NULL END;
    $$;

    CREATE OR REPLACE FUNCTION credits(journal_line) RETURNS numeric
    LANGUAGE sql IMMUTABLE AS $$
        SELECT CASE WHEN $1.amount > 0 THEN $1.amount ELSE NULL END;
    $$;

    Then the journal entry table (simplified for this example):

    CREATE TABLE journal_entry (
        entry_id bigserial primary key, -- no natural keys :-(
        journal_id int not null references journal(id),
        date_posted date not null,
        reference text not null,
        description text not null,
        journal_lines journal_line[] not null
    );

    Then a table method and check constraint:

    CREATE OR REPLACE FUNCTION running_total(journal_entry) RETURNS numeric
    LANGUAGE sql IMMUTABLE AS $$
        SELECT sum(amount) FROM unnest($1.journal_lines);
    $$;

    ALTER TABLE journal_entry ADD CHECK (running_total(journal_entry) = 0);

    ALTER TABLE journal_line ADD FOREIGN KEY (journal_entry_id)
        REFERENCES journal_entry(entry_id);

    And finally we'd have a breakout trigger:

    CREATE OR REPLACE FUNCTION je_breakout() RETURNS TRIGGER
    LANGUAGE plpgsql AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            INSERT INTO journal_line (journal_entry_id, account_id, amount)
            SELECT NEW.entry_id, account_id, amount
              FROM unnest(NEW.journal_lines);
            RETURN NEW;
        ELSE
            RAISE EXCEPTION 'Operation Not Allowed';
        END IF;
    END;
    $$;

    And finally:

    CREATE TRIGGER je_breakout_trg
        AFTER INSERT OR UPDATE OR DELETE ON journal_entry
        FOR EACH ROW EXECUTE PROCEDURE je_breakout();

    Of course the example above is simplified. There will be a status table that will track approval status, allowing for separation of duties, etc. However, the goal here is to prevent unbalanced transactions. (A usage sketch follows at the end of this post.) Any feedback? Does this sound entirely insane?

    Standard Solutions?

    In getting to this point, I have to say I have looked at four different current ERP solutions to this problem:

    1. Represent every line item as a debit and a credit against different accounts.
    2. Use of foreign keys against the line item table to enforce an eventual running total of 0.
    3. Use of constraint triggers in PostgreSQL.
    4. Forcing all validation here solely through the app logic.

    My concerns are that #1 is pretty limiting and very hard to audit internally. It's not programmer-transparent, and so it strikes me as being difficult to work with in the future. The second strikes me as being very complex and requiring a series of constraints and foreign keys against self to make work, and therefore it strikes me as complex, hard to sort out at least in my mind, and thus hard to work with. The fourth could be done, as we force all access through stored procedures anyway, and this is the most common solution (have the app total things up and throw an error otherwise). However, I think proof that a constraint is followed is superior to test cases, and so the question becomes whether this in fact generates insert anomalies rather than solving them. If this is a solved problem, it isn't the case that everyone agrees on the solution....
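    To make the intended write path concrete, here is a minimal usage sketch (my own illustration, assuming a journal with id 1 and accounts 100 and 200 already exist; the composite's column order is (entry_id, account_id, journal_entry_id, amount)):

        -- One balanced entry; the trigger breaks the array out into journal_line,
        -- and the CHECK constraint rejects any entry whose lines do not sum to zero.
        INSERT INTO journal_entry (journal_id, date_posted, reference, description, journal_lines)
        VALUES (1, '2012-06-01', 'INV-1001', 'Cash sale',
                ARRAY[ROW(NULL, 100, NULL, -50.00)::journal_line,
                      ROW(NULL, 200, NULL,  50.00)::journal_line]);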


  • Oracle NoSQL Database Exceeds 1 Million Mixed YCSB Ops/Sec

    - by Charles Lamb
    We ran a set of YCSB performance tests on Oracle NoSQL Database using SSD cards and Intel Xeon E5-2690 CPUs, with the goal of achieving 1M mixed ops/sec on a 95% read / 5% update workload. We used the standard YCSB parameters: 13 byte keys and 1KB data size (1,102 bytes after serialization). The maximum database size was 2 billion records, or approximately 2 TB of data. We sized the shards to ensure that this was not an "in-memory" test (i.e. the data portion of the B-Trees did not fit into memory). All updates were durable and used the "simple majority" replica ack policy, effectively 'committing to the network'. All read operations used the Consistency.NONE_REQUIRED parameter, allowing reads to be performed on any replica. In the past we have achieved 100K ops/sec using SSD cards on a single shard cluster (replication factor 3), so for this test we used 10 shards on 15 Storage Nodes, with each SN carrying 2 Rep Nodes and each RN assigned to its own SSD card. After correcting a scaling problem in YCSB, we blew past the 1M ops/sec mark with 8 shards and proceeded to hit 1.2M ops/sec with 10 shards.

    Hardware Configuration

    We used 15 servers, each configured with two 335 GB SSD cards. We did not have homogeneous CPUs across all 15 servers available to us, so 12 of the 15 were Xeon E5-2690, 2.9 GHz, 2 sockets, 32 threads, 193 GB RAM, and the other 3 were Xeon E5-2680, 2.7 GHz, 2 sockets, 32 threads, 193 GB RAM. There might have been some upside in having all 15 machines configured with the faster CPU, but since CPU was not the limiting factor, we don't believe the improvement would be significant. The client machines were Xeon X5670, 2.93 GHz, 2 sockets, 24 threads, 96 GB RAM. Although the clients had 96 GB of RAM, neither the NoSQL Database nor the YCSB clients require anywhere near that amount of memory, and the test could just as easily have been run with much less. Networking was all 10GigE.

    YCSB Scaling Problem

    We made three modifications to the YCSB benchmark. The first was to allow the test to accommodate more than 2 billion records (effectively int's vs. long's). To keep the key size constant, we changed the code to use base 32 for the user ids.

    The second change involved the way we run the YCSB client, in order to make the test itself horizontally scalable. The basic problem has to do with the way the YCSB test creates its Zipfian distribution of keys, which is intended to model "real" loads by generating clusters of key collisions. Unfortunately, the percentage of collisions on the most contentious keys remains the same even as the number of keys in the database increases. As we scale up the load, the number of collisions on those keys increases as well, eventually exceeding the capacity of the single server used for a given key. This is not a workload that is realistic or amenable to horizontal scaling. YCSB does provide alternate key distribution algorithms, so this is not a shortcoming of YCSB in general. We decided that a better model would be for the key collisions to be limited to a given YCSB client process. That way, as additional YCSB client processes (i.e. additional load) are added, they each maintain the same number of collisions they encounter themselves, but do not increase the number of collisions on a single key in the entire store. We added client processes proportionally to the number of records in the database (and therefore the number of shards). (A sketch of this per-client striping follows after the results table below.) This change to the use of YCSB better models a use case where new groups of users are likely to access either just their own entries, or entries within their own subgroups, rather than all users showing the same interest in a single global collection of keys. If an application finds every user having the same likelihood of wanting to modify a single global key, that application has no real hope of getting horizontal scaling.

    Finally, we used read/modify/write (also known as "Compare And Set") style updates during the mixed phase. This uses versioned operations to make sure that no updates are lost. This mode of operation provides better application behavior than the way we have typically run YCSB in the past, and is only practical at scale because we eliminated the shared key collision hotspots. It is also a more realistic testing scenario. To reiterate, all updates used a simple majority replica ack policy, making them durable.

    Scalability Results

    In the table below, the "KVS Size" column is the number of records, with the number of shards and the replication factor in parentheses. Hence, the first row indicates 400m total records in the NoSQL Database (KV Store), 2 shards, and a replication factor of 3. The "Clients" column indicates the number of YCSB client processes. "Threads" is the number of threads per process, with the total number of threads in parentheses. Hence, 90 threads per YCSB process for a total of 360 threads. The client processes were distributed across 10 client machines.

    Shards   KVS Size (records)   Clients   Threads     Overall Throughput (ops/sec)   Read Latency av/95%/99% (ms)   Write Latency av/95%/99% (ms)
    2        400m (2x3)           4         90 (360)    302,152                        0.76/1/3                       3.08/8/35
    4        800m (4x3)           8         90 (720)    558,569                        0.79/1/4                       3.82/16/45
    8        1600m (8x3)          16        90 (1440)   1,028,868                      0.85/2/5                       4.29/21/51
    10       2000m (10x3)         20        90 (1800)   1,244,550                      0.88/2/6                       4.47/23/53
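    Here is a minimal sketch of the per-client collision confinement described above (my own illustration of the idea, not the actual YCSB patch; names are hypothetical):

        using System;

        static class KeyStriping
        {
            // Each client process draws Zipfian-distributed keys only from its own
            // stripe of the keyspace, so its hot keys stay local to that process:
            // adding clients adds load without concentrating collisions globally.
            static long NextKey(int clientId, long recordsPerClient, Func<long, long> sampleZipfian)
            {
                long localKey = sampleZipfian(recordsPerClient); // hot only within this client
                return clientId * recordsPerClient + localKey;
            }
        }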


  • Speakers, Please Check Your Time

    - by AjarnMark
    Woodrow Wilson was once asked how long it would take him to prepare for a 10 minute speech. He replied "Two weeks". He was then asked how long it would take for a 1 hour speech. "One week", he replied. 2 hour speech? "I'm ready right now," he replied. Whether that is a true story or an urban legend, I don't really know, but either way, it is a poignant reminder for all speakers, and particularly apropos this week leading up to the PASS Community Summit. (Cross-posted to the PASS Professional Development Virtual Chapter blog #PASSProfDev.)

    What's the point of that story? Simply this: if you have plenty of time to do your presentation, you don't need to prepare much, because it is easy to throw in more and more material to stretch out to your allotted time. But if you are on a tight time constraint, then it will take significant preparation to distill your talk down to only the essential points.

    I have attended seven of the last eight North American Summit events, and every one of them has been fantastic. The speakers are great, the material is timely and relevant, and the networking opportunities are awesome. And every year, there is one little thing that just bugs me: speakers going over their allotted time. Why does it bother me so? Well, if you look at a typical schedule for a Summit, you'll see that there are six or more sessions going on at the same time, and only 15 minutes to move from one to another. If you're trying to maximize your training dollar by attending something during every session time slot, and you don't want to be the last guy trying to squeeze into the middle of the row, then those 15 minutes can be critical. All the more so if you need to stop and use the bathroom or if you have to hike to the opposite end of the convention center. It is really a bad position to find yourself in, having to choose between learning the last key points of Speaker A who is going over time, and getting over to Speaker B on time so you don't miss her key opening remarks.

    And frankly, I think it is just rude. Yes, the speakers are the attraction; after all, they are bringing the content that the rest of us are paying to learn. But it is also an honor to be given the opportunity to speak at a conference like this, and no one speaker is so important that the conference would be a disaster without him. Speakers know when they submit their abstract, long before the conference, how much time they will have. It has been the same pattern at the Summit for at least the last eight years. Program Sessions are 75 minutes long. Some speakers who have a good track record, and meet other qualifying criteria, are extended an invitation to present a Spotlight Session, which is 90 minutes (a 20% increase). So there really is no excuse. It's not like you were promised a 2-hour segment and then discovered when you got here that it was only 75 minutes. In fact, it's not like PASS advertised 90-minute sessions for everyone and then a select few were cut back to only 75. As a speaker, you know well before you get here which type of session you are doing and how long it is, so as a professional, you should plan accordingly.

    Now you might think that this only happens to rookies, but I'll tell you that some of the worst offenders are big-name veterans who draw huge attendance numbers for their sessions. Some attendees blow this off as, "Hey, it's so-and-so, and I'd stay here for hours and listen to him/her talk." To which I would reply, "Then they should have submitted for a pre- or post-conference day-long seminar instead, but don't try to squeeze your day-long talk into a 90-minute session." Now I don't really believe that these speakers are being malicious or just selfishly trying to extend their time in the spotlight. I think that most of them are merely being undisciplined and did not trim their presentation sufficiently, or allowed themselves to get off-track (often in a generous attempt to help someone in the audience with a question or problem that really should have been noted for further discussion after the session).

    So here is my recommendation... my plea, even. TRIM THE FAT! Now. Before it's too late. Before you even get on the airplane, take a long, hard look at your presentation and eliminate some of the points that you originally thought you had to make, but in reality are not truly crucial to your main topic. Delete a few slides. Test your demos and have them already scripted rather than typing them during your talk. It is better to cut out too much and end up with plenty of time at the end for Questions & Answers. And you can always keep some notes on the stuff that you cut out so that you could fill it back in at the end as bonus material if you really do end up with a whole bunch of time on your hands. But I don't think you will. And if you do, that will look even better to the audience, as it will look like you're giving them something extra that not every audience gets. And they will thank you for that.


  • How to get distinct values from the List<T> with LINQ

    - by Vincent Maverick Durano
    Recently I was working with data from a generic List<T>, and one of my objectives was to get the distinct values found in the List. Consider that we have this simple class that holds the following properties:

        public class Product
        {
            public string Make { get; set; }
            public string Model { get; set; }
        }

    Now in the page code-behind we will create a list of products by doing the following:

        private List<Product> GetProducts()
        {
            List<Product> products = new List<Product>();
            Product p = new Product();
            p.Make = "Samsung"; p.Model = "Galaxy S 1"; products.Add(p);
            p = new Product(); p.Make = "Samsung"; p.Model = "Galaxy S 2"; products.Add(p);
            p = new Product(); p.Make = "Samsung"; p.Model = "Galaxy Note"; products.Add(p);
            p = new Product(); p.Make = "Apple"; p.Model = "iPhone 4"; products.Add(p);
            p = new Product(); p.Make = "Apple"; p.Model = "iPhone 4s"; products.Add(p);
            p = new Product(); p.Make = "HTC"; p.Model = "Sensation"; products.Add(p);
            p = new Product(); p.Make = "HTC"; p.Model = "Desire"; products.Add(p);
            p = new Product(); p.Make = "Nokia"; p.Model = "Some Model"; products.Add(p);
            p = new Product(); p.Make = "Nokia"; p.Model = "Some Model"; products.Add(p);
            p = new Product(); p.Make = "Sony Ericsson"; p.Model = "800i"; products.Add(p);
            p = new Product(); p.Make = "Sony Ericsson"; p.Model = "800i"; products.Add(p);
            return products;
        }

    And then let's bind the products to the GridView:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                Gridview1.DataSource = GetProducts();
                Gridview1.DataBind();
            }
        }

    Running the code will display something like this in the page. Now what I want is to get the distinct row values from the list. So what I did was use the LINQ Distinct operator, and unfortunately it didn't work. To make it work, you must use the overload of the Distinct operator that takes an equality comparer, to get the desired results. So I added this IEqualityComparer<T> class to compare values:

        class ProductComparer : IEqualityComparer<Product>
        {
            public bool Equals(Product x, Product y)
            {
                if (Object.ReferenceEquals(x, y)) return true;
                if (Object.ReferenceEquals(x, null) || Object.ReferenceEquals(y, null)) return false;
                return x.Make == y.Make && x.Model == y.Model;
            }

            public int GetHashCode(Product product)
            {
                if (Object.ReferenceEquals(product, null)) return 0;
                int hashProductName = product.Make == null ? 0 : product.Make.GetHashCode();
                int hashProductCode = product.Model.GetHashCode();
                return hashProductName ^ hashProductCode;
            }
        }

    After that you can then bind the GridView like this:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                Gridview1.DataSource = GetProducts().Distinct(new ProductComparer());
                Gridview1.DataBind();
            }
        }

    Running the page will give you the desired output below. As you notice, it now eliminates the duplicate rows in the GridView.

    Now what if we only want to get the distinct values for a certain field? For example, I want to get the distinct "Make" values such as Samsung, Apple, HTC, Nokia and Sony Ericsson and populate them in a DropDownList control for filtering purposes. I was hoping that the Distinct operator had an overload that could compare values based on a property, like GetProducts().Distinct(o => o.PropertyToCompare). Unfortunately it doesn't provide that overload (see the sketch at the end of this post), so what I did as a workaround was use the GroupBy, Select and First LINQ query operators to achieve what I want. Here's the code to get the distinct values of a certain field:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                DropDownList1.DataSource = GetProducts().GroupBy(o => o.Make).Select(o => o.First());
                DropDownList1.DataTextField = "Make";
                DropDownList1.DataValueField = "Model";
                DropDownList1.DataBind();
            }
        }

    Running the code will display the following output below. That's it! I hope someone finds this post useful!
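    As a final aside, the key-based overload I was hoping for is easy to write yourself. Here is a minimal sketch of such a helper (a hypothetical extension method; it is not part of LINQ in .NET 4):

        using System;
        using System.Collections.Generic;

        public static class EnumerableExtensions
        {
            // Yields only the first element seen for each distinct key.
            public static IEnumerable<TSource> DistinctBy<TSource, TKey>(
                this IEnumerable<TSource> source, Func<TSource, TKey> keySelector)
            {
                var seen = new HashSet<TKey>();
                foreach (var item in source)
                    if (seen.Add(keySelector(item))) // Add returns false for duplicates
                        yield return item;
            }
        }

        // Usage: GetProducts().DistinctBy(p => p.Make)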


  • Collision Detection problems in Voxel Engine (XNA)

    - by Darestium
    I am creating a Minecraft-like terrain engine in XNA and have had some collision problems for quite some time. I have checked and changed my code based on other people's collision code, and I still have the same problem. It always seems to be off by about a block. For instance, if I walk across a bridge which is one block high, I fall through it. Also, if you walk towards a "row" of blocks like this, you are able to stand "inside" the leftmost one, and you collide with nothing on the rightmost side (where there is no block and it is not visible on this image). Here is all my collision code:

        private void Move(GameTime gameTime, Vector3 direction)
        {
            float speed = playermovespeed * (float)gameTime.ElapsedGameTime.TotalSeconds;
            Matrix rotationMatrix = Matrix.CreateRotationY(player.Camera.LeftRightRotation);
            Vector3 rotatedVector = Vector3.Transform(direction, rotationMatrix);
            rotatedVector.Normalize();
            Vector3 testVector = rotatedVector;
            testVector.Normalize();
            Vector3 movePosition = player.position + testVector * speed;
            Vector3 midBodyPoint = movePosition + new Vector3(0, -0.7f, 0);
            Vector3 headPosition = movePosition + new Vector3(0, 0.1f, 0);

            if (!world.GetBlock(movePosition).IsSolid &&
                !world.GetBlock(midBodyPoint).IsSolid &&
                !world.GetBlock(headPosition).IsSolid)
            {
                player.position += rotatedVector * speed;
            }
        }

        ...

        public void UpdatePosition(GameTime gameTime)
        {
            player.velocity.Y += playergravity * (float)gameTime.ElapsedGameTime.TotalSeconds;
            Vector3 footPosition = player.Position + new Vector3(0f, -1.5f, 0f);
            Vector3 headPosition = player.Position + new Vector3(0f, 0.1f, 0f);

            // If the block below the player is solid, the Y velocity should be zero
            if (world.GetBlock(footPosition).IsSolid || world.GetBlock(headPosition).IsSolid)
            {
                player.velocity.Y = 0;
            }

            UpdateJump(gameTime);
            UpdateCounter(gameTime);
            ProcessInput(gameTime);

            player.Position = player.Position + player.velocity * (float)gameTime.ElapsedGameTime.TotalSeconds;
            velocity = Vector3.Zero;
        }

    and the one and only function in the camera class:

        protected void CalculateView()
        {
            Matrix rotationMatrix = Matrix.CreateRotationX(upDownRotation) * Matrix.CreateRotationY(leftRightRotation);
            lookVector = Vector3.Transform(Vector3.Forward, rotationMatrix);
            cameraFinalTarget = Position + lookVector;
            Vector3 cameraRotatedUpVector = Vector3.Transform(Vector3.Up, rotationMatrix);
            viewMatrix = Matrix.CreateLookAt(Position, cameraFinalTarget, cameraRotatedUpVector);
        }

    which is called when the rotation variables are changed:

        public float LeftRightRotation
        {
            get { return leftRightRotation; }
            set { leftRightRotation = value; CalculateView(); }
        }

        public float UpDownRotation
        {
            get { return upDownRotation; }
            set { upDownRotation = value; CalculateView(); }
        }

    World class:

        public Block GetBlock(int x, int y, int z)
        {
            if (InBounds(x, y, z))
            {
                Vector3i regionalPosition = GetRegionalPosition(x, y, z);
                Vector3i region = GetRegionPosition(x, y, z);
                return regions[region.X, region.Y, region.Z].Blocks[regionalPosition.X, regionalPosition.Y, regionalPosition.Z];
            }
            return new Block(BlockType.none);
        }

        public Vector3i GetRegionPosition(int x, int y, int z)
        {
            int regionx = x == 0 ? 0 : x / Variables.REGION_SIZE_X;
            int regiony = y == 0 ? 0 : y / Variables.REGION_SIZE_Y;
            int regionz = z == 0 ? 0 : z / Variables.REGION_SIZE_Z;
            return new Vector3i(regionx, regiony, regionz);
        }

        public Vector3i GetRegionalPosition(int x, int y, int z)
        {
            int X = x % Variables.REGION_SIZE_X;
            int Y = y % Variables.REGION_SIZE_Y;
            int Z = z % Variables.REGION_SIZE_Z;
            return new Vector3i(X, Y, Z);
        }

    Any ideas how to fix this problem?

    EDIT 1: Graphic of the problem:

    EDIT 2: GetBlock, Vector3 version:

        public Block GetBlock(Vector3 position)
        {
            int x = (int)Math.Floor(position.X);
            int y = (int)Math.Floor(position.Y);
            int z = (int)Math.Ceiling(position.Z);
            Block block = GetBlock(x, y, z);
            return block;
        }

    Now, the thing is, I tested the theory that the Z is always "off by one", and by ceiling the value it actually works as intended, although it still could be greatly more accurate (when you go down holes you can see through the sides, and I doubt it will work with negative positions). It also does not feel clean to floor the X and Y values and just ceiling the Z. I am surely still not doing something correctly.
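    For what it's worth, here is a sketch of the convention I would test first (an educated guess, not verified against this project): floor every axis the same way, so a block at integer coordinate n occupies [n, n+1) on each axis, and make sure the renderer draws blocks under the same convention. Math.Floor also stays correct for negative positions, unlike an int cast, which truncates toward zero:

        public Block GetBlock(Vector3 position)
        {
            // Same rule on all three axes; no per-axis Ceiling special case.
            int x = (int)Math.Floor(position.X);
            int y = (int)Math.Floor(position.Y);
            int z = (int)Math.Floor(position.Z);
            return GetBlock(x, y, z);
        }

    If an off-by-one then remains on one axis only, the mismatch is usually in the mesh-building code drawing each block one unit off on that axis, rather than in the collision lookup. Note also that GetRegionPosition and GetRegionalPosition use / and %, which both truncate toward zero, so they will return wrong regions for negative coordinates.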


  • The design of a generic data synchronizer, or, an [object] that does [actions] with the aid of [helpers]

    - by acheong87
    I'd like to create a generic data-source "synchronizer," where data-source "types" may include MySQL databases, Google Spreadsheets documents, CSV files, among others. I've been trying to figure out how to structure this in terms of classes and interfaces, keeping in mind (what I've read about) composition vs. inheritance and is-a vs. has-a, but each route I go down seems to violate some principle. For simplicity, assume that all data-sources have a header-row-plus-data-rows format. For example, assume that the first rows of Google Spreadsheets documents and CSV files will have column headers, a.k.a. "fields" (to parallel database fields). Also, eventually, I would like to implement this in PHP, but avoiding language-specific discussion would probably be more productive. Here's an overview of what I've tried.

    Part 1/4: ISyncable

        class CMySQL implements ISyncable
            GetFields()  // sql query, pdo statement, whatever
            AddFields()
            RemFields()
            ...
            _dbh

        class CGoogleSpreadsheets implements ISyncable
            GetFields()  // zend gdata api
            AddFields()
            RemFields()
            ...
            _spreadsheetKey
            _worksheetId

        class CCsvFile implements ISyncable
            GetFields()  // read from buffer
            AddFields()
            RemFields()
            ...
            _buffer

        interface ISyncable
            GetFields()
            AddFields($field1, $field2, ...)
            RemFields($field1, $field2, ...)
            ...
            CanAddFields()  // maybe the spreadsheet is locked for write, or
            CanRemFields()  // maybe no permission to alter a database table
            ...
            AddRow()
            ModRow()
            RemRow()
            ...
            Open()
            Close()
            ...

    First Question: Does it make sense to use an interface, as above?

    Part 2/4: CSyncer

    Next, the thing that does the syncing.

        class CSyncer
            __construct(ISyncable $A, ISyncable $B)
            Push()  // sync A to B
            Pull()  // sync B to A
            Sync()  // Push() and Pull() only differ in direction; factor.
                    // Sync()'s job is to make sure that the fields on each side
                    // match, to add fields where appropriate and possible, to
                    // account for different column-orderings, etc., and of
                    // course, to add and remove rows as necessary to sync.
            ...
            _A
            _B

    Second Question: Does it make sense to define such a class, or am I treading dangerously close to the "Kingdom of Nouns"?

    Part 3/4: CTranslator? ITranslator?

    Now, here's where I actually get lost, assuming the above is passable. Sometimes, two ISyncables speak different "dialects." For example, believe it or not, Google Spreadsheets (accessed through the Google Data API "list feed") returns column headers lower-cased and stripped of all spaces and symbols! That is, sys_TIMESTAMP is systimestamp, as far as my code can tell. (Yes, I am aware that the "cell feed" does not strip the name so; however, cell-by-cell manipulation is too slow for what I'm doing.) One can imagine other hypothetical examples. Perhaps even the data itself can be in different "dialects." But let's take it as given for now, and not argue this if possible.

    Third Question: How would you implement "translation"? (One possible sketch follows at the end of this post.)

    Note: Taking all this as an exercise, I'm more interested in the "idealized" design, rather than the practical one. (God knows that ship sailed when I began this project.)

    Part 4/4: Further Thought

    Here's my train of thought, to demonstrate I've thunk, albeit unfruitfully. First, I thought, primitively, "I'll just modify CMySQL::GetFields() to lower-case and strip field names so they're compatible with Google Spreadsheets." But of course, then my class should really be called CMySQLForGoogleSpreadsheets, and that can't be right. So, the thing which translates must exist outside of an ISyncable implementor. And surely it can't be right to make each translation a method in CSyncer. If it exists outside of both ISyncable and CSyncer, then what is it? (Is it even an "it"?) Is it an abstract class, i.e. abstract CTranslator? Is it an interface, since a translator only does, not has, i.e. interface ITranslator? Does it even require instantiation? e.g. If it's an ITranslator, then should its translation methods be static? (I learned what "late static binding" meant today.) And, dear God, whatever it is, how should a CSyncer use it? Does it "have" it? Is it, "it"? Who am I? ...am I, "I"?

    I've attempted to break up the question into sub-questions, but essentially my question is singular: How does one implement an object A that conceptually "links" (has) two objects b1 and b2 that share a common interface B, where certain pairs of b1 and b2 require a helper, e.g. a translator, to be handled by A? Something tells me that I've overcomplicated this design, or violated a principle much higher up. Thank you all very much for your time and any advice you can provide.
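    For what it's worth, here is a minimal sketch of one shape the translator could take (my own illustration, written in C# for concreteness since the question tries to stay language-neutral): translation as a decorator that wraps an ISyncable and normalizes its dialect, so CSyncer never has to know about dialects at all:

        using System.Collections.Generic;
        using System.Linq;

        public interface ISyncable
        {
            IList<string> GetFields();
            // AddFields, RemFields, AddRow, ... elided for brevity
        }

        // Decorator: presents any ISyncable's field names in the normalized form
        // the Google list feed uses (lower-cased, spaces and symbols stripped),
        // so field comparisons are like-for-like.
        public sealed class ListFeedNameTranslator : ISyncable
        {
            private readonly ISyncable inner;
            public ListFeedNameTranslator(ISyncable inner) { this.inner = inner; }

            public IList<string> GetFields()
            {
                return inner.GetFields()
                            .Select(f => new string(f.Where(char.IsLetterOrDigit).ToArray())
                                         .ToLowerInvariant())
                            .ToList();
            }
        }

        // Usage: the pairing code wraps the MySQL side before handing it to CSyncer,
        // e.g. new CSyncer(new ListFeedNameTranslator(mySql), spreadsheet);

    The pairing code that constructs the CSyncer decides which wrapper, if any, each side needs; CSyncer itself stays dialect-free.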


  • What Counts for A DBA: Observant

    - by drsql
    When walking up to the building where I work, I can see CCTV cameras placed here and there for monitoring access to the building. We are required to wear authorization badges which could be checked at any time. Do we have enemies?  Of course! No one is 100% safe; even if your life is a fairy tale, there is always a witch with an apple waiting to snack you into a thousand years of slumber (or at least so I recollect from elementary school.) Even Little Bo Peep had to keep a wary lookout.    We nerdy types (or maybe it was just me?) generally learned on the school playground to keep an eye open for unprovoked attack from simpler, but more muscular souls, and take steps to avoid messy confrontations well in advance. After we’d apprehensively negotiated adulthood with varying degrees of success, these skills of watching for danger, and avoiding it,  translated quite well to the technical careers so many of us were destined for. And nowhere else is this talent for watching out for irrational malevolence so appropriate as in a career as a production DBA.   It isn’t always active malevolence that the DBA needs to watch out for, but the even scarier quirks of common humanity.  A large number of the issues that occur in the enterprise happen just randomly or even just one time ever in a spurious manner, like in the case where a person decided to download the entire MSDN library of software, cross join every non-indexed billion row table together, and simultaneously stream the HD feed of 5 different sporting events, making the network access slow while the corporate online sales just started. The decent DBA team, like the going, gets tough under such circumstances. They spring into action, checking all of the sources of active information, observes the issue is no longer happening now, figures that either it wasn’t the database’s fault and that the reboot of the whatever device on the network fixed the problem.  This sort of reactive support is good, and will be the initial reaction of even excellent DBAs, but it is not the end of the story if you really want to know what happened and avoid getting called again when it isn’t even your fault.   When fires start raging within the corporate software forest, the DBA’s instinct is to actively find a way to douse the flames and get back to having no one in the company have any idea who they are.  Even better for them is to find a way of killing a potential problem while the fires are small, long before they can be classified as raging. The observant DBA will have already been monitoring the server environment for months in advance.  Most troubles, such as disk space and security intrusions, can be predicted and dealt with by alerting systems, whereas other trouble can come out of the blue and requires a skill of observing ongoing conditions and noticing inexplicable changes that could signal an emerging problem.  You can’t automate the DBA, because the bankable skill of a DBA is in detecting the early signs of unexpected problems, and working out how to deal with them before anyone else notices them.    To achieve this, the DBA will check the situation as it is currently happening,  and in many cases is likely to have been the person who submitted the problem to the level 1 support person in the first place, just to let the support team know of impending issues (always well received, I tell you what!). Database and host computer settings, configurations, and even critical data might be profiled and captured for later comparisons. 
    He'll use monitoring tools: built-in ones, commercial ones (not to be too crassly commercial or anything, but one such tool is SQL Monitor), and lots of homebrew tools that watch for problems and changes in the server environment. You will know that you have it right when a support call comes in and you can look at your monitoring tools and quickly respond that "response time is well within the normal range, the query that supports the failing interface works perfectly and has actually only been called 67% as often as normal, so I am more than willing to help diagnose the problem, but it isn't the database server's fault and is probably a client or networking slowdown causing the interface to be used less frequently than normal." And that is the best thing for any DBA to observe…
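    A minimal sketch of the kind of homebrew baseline capture this implies, for SQL Server (the dbo.QueryStatsBaseline table and the idea of snapshotting on a schedule are assumptions of mine, not anything prescribed in the post):

        -- Snapshot the plan-cache statistics into a baseline table so that today's
        -- numbers can be compared against last week's or last month's.
        -- dbo.QueryStatsBaseline is a hypothetical table created for this purpose.
        INSERT INTO dbo.QueryStatsBaseline
            (capture_time, query_hash, execution_count, total_worker_time, total_elapsed_time)
        SELECT SYSDATETIME(), qs.query_hash, qs.execution_count, qs.total_worker_time, qs.total_elapsed_time
        FROM sys.dm_exec_query_stats AS qs;

    A comparison against snapshots like these is what lets the observant DBA say "called 67% as often as normal" with a straight face.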

    Read the article

  • My Oracle Support Frequently Asked Questions

    - by user763198
    How do I find certification information? In the Help menu, open the Table of Contents and go to the Certification section, or see My Oracle Support Help - Certifications (Doc ID 870956.5).
    What if my certification question is not answered there? If My Oracle Support Help - Certifications (Doc ID 870956.5) does not cover it, contact Global Customer Support.
    Where do I find Oracle Support contact information? See: http://www.oracle.com/support/contact.html
    Where do I get help with the Oracle Store? See: https://shop.oracle.com/pls/ostore/f?p=dstore:home:0:::::&tz=8:00 and click the "Contact Us" link if you need further assistance.
    How do I export search results when there is no Export button? 1. Select the rows, then choose Copy Selected Row from the context menu, or Copy Selected Rows from the Actions menu. 2. Paste the rows into a spreadsheet, which can then be attached to an email. For regions displayed as tables, right-click the table and choose Printable View to see the full contents. Tables that do provide an Export button export their contents to a CSV file. Note: both Printable View and Export return all rows, not just the rows currently displayed.
    Where do I get help with Oracle training? See Oracle Education: http://education.oracle.com and contact Oracle Education directly (phone numbers for each country and region are listed on the site).
    Where do I find license codes/keys? http://www.oracle.com/us/support/licensecodes/index.html lists license codes by product. If you still cannot find the Oracle license code you need, have your Customer Support Identifier (CSI) ready and contact support.
    How do I download software from the Software Delivery Cloud? See Doc ID 1070969.1. Go to http://edelivery.oracle.com/ (Oracle Software Delivery Cloud), click the "Sign In/Register" button, accept the Terms & Restrictions and click Continue, enter your search criteria and click Go, then select the software and click Continue to download.
    What if I cannot log in to Oracle.com? 1. Go to www.oracle.com. 2. Click "Help" at the top of the page. 3. Open the "Oracle.com login FAQ" and follow the instructions for your situation.
    How do I get support for CRM On Demand? For CRMOD issues: (i) CRMOD customer care: http://www.oracle.com/us/products/applications/crmondemand/support/customer-care-support-337680.html (ii) To contact CRMOD customer care directly: https://ebusiness.siebel.com/odcustomercare/contact/contact_cc.asp
    How do I verify my account for the Oracle Software Delivery Cloud? The Oracle Software Delivery Cloud requires an Oracle account with a verified email address. To verify it: 1. Sign in at www.oracle.com with your Single Sign-On (SSO) account. 2. Click "Account". 3. If the page shows "Please verify account", follow the prompt and confirm your email ID. 4. Once the account is verified, retry the download.
    How do I find an Oracle Database patchset in My Oracle Support? 1. Open the Patches & Updates page. 2. Search by Product or Product Family: Oracle database. 3. Release: e.g. Oracle 10.2.0.4. 4. Platform: e.g. Microsoft x64 (64-bit). 5. Patch type: Patchset/Minipack. 6. Click Go.
    How do I request physical media or FTP delivery? To request CD/DVD or FTP delivery, log in to My Oracle Support ( https://support.oracle.com ) and use the Contact Us page to open a request describing the CD/DVD or FTP delivery you need.

    Read the article

  • Strange WPF ListBox Behavior

    - by uncle-harvey
    I’m trying to bind a List of items to a listbox in WPF. The items are grouped by one value and each group is to be housed in an expander. Everything works fine when I don’t use any custom styles. However, when I use custom styles (which work properly with non-grouped items and as independent controls) the binding doesn’t display any items. Below is the code I’m executing. Any ideas why the items won’t show up in the Expander? Test.xaml: <Window x:Class="Glossy.Test" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Test" Height="300" Width="300"> <Window.Resources> <ResourceDictionary> <ResourceDictionary.MergedDictionaries> <ResourceDictionary Source="..\TestStyles.xaml"/> <ResourceDictionary> <Style x:Key="ContainerStyle" TargetType="{x:Type GroupItem}"> <Setter Property="Template"> <Setter.Value> <ControlTemplate> <Expander Header="{Binding}" IsExpanded="True"> <ItemsPresenter /> </Expander> </ControlTemplate> </Setter.Value> </Setter> </Style> </ResourceDictionary> </ResourceDictionary.MergedDictionaries> </ResourceDictionary> </Window.Resources> <Grid> <ListBox x:Name="TestList"> <ListBox.GroupStyle> <GroupStyle ContainerStyle="{StaticResource ContainerStyle}"/> </ListBox.GroupStyle> </ListBox> </Grid> Test.xaml.cs: public partial class Test : Window { private List<Contact> _ContactItems; public List<Contact> ContactItems { get { return _ContactItems; } set { _ContactItems = value; } } public Test() { InitializeComponent(); ContactItems = new List<Contact>(); ContactItems.Add(new Contact()); ContactItems.Last().CompanyName = "ABC"; ContactItems.Last().Name = "Contact 1"; ContactItems.Add(new Contact()); ContactItems.Last().CompanyName = "ABC"; ContactItems.Last().Name = "Contact 2"; ContactItems.Add(new Contact()); ContactItems.Last().CompanyName = "ABC"; ContactItems.Last().Name = "Contact 3"; ContactItems.Add(new Contact()); ContactItems.Last().CompanyName = "ABC"; ContactItems.Last().Name = "Contact 10"; ContactItems.Add(new Contact()); ContactItems.Last().CompanyName = "ABC"; ContactItems.Last().Name = "Contact 11"; ContactItems.Add(new Contact()); ContactItems.Last().CompanyName = "ABC"; ContactItems.Last().Name = "Contact 12"; ContactItems.Add(new Contact()); ContactItems.Last().CompanyName = "RST"; ContactItems.Last().Name = "Contact 7"; ContactItems.Add(new Contact()); ContactItems.Last().CompanyName = "RST"; ContactItems.Last().Name = "Contact 8"; ContactItems.Add(new Contact()); ContactItems.Last().CompanyName = "RST"; ContactItems.Last().Name = "Contact 9"; ContactItems.Add(new Contact()); ContactItems.Last().CompanyName = "XYZ"; ContactItems.Last().Name = "Contact 4"; ContactItems.Add(new Contact()); ContactItems.Last().CompanyName = "XYZ"; ContactItems.Last().Name = "Contact 5"; ContactItems.Add(new Contact()); ContactItems.Last().CompanyName = "XYZ"; ContactItems.Last().Name = "Contact 6"; ICollectionView view = CollectionViewSource.GetDefaultView(ContactItems); view.GroupDescriptions.Add(new PropertyGroupDescription("CompanyName")); view.SortDescriptions.Add(new SortDescription("Name", ListSortDirection.Ascending)); TestList.ItemsSource = view; } } public class Contact { public string CompanyName { get; set; } public string Name { get; set; } public override string ToString() { return Name; } } TestStyles.xaml: <Style TargetType="{x:Type ListBox}"> <Setter Property="SnapsToDevicePixels" Value="true"/> <Setter Property="OverridesDefaultStyle" Value="true"/> <Setter 
Property="ScrollViewer.HorizontalScrollBarVisibility" Value="Auto"/> <Setter Property="ScrollViewer.VerticalScrollBarVisibility" Value="Auto"/> <Setter Property="ScrollViewer.CanContentScroll" Value="true"/> <Setter Property="MinWidth" Value="120"/> <Setter Property="MinHeight" Value="95"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="ListBox"> <Grid Background="Black"> <Rectangle VerticalAlignment="Stretch" HorizontalAlignment="Stretch" Fill="White"> <Rectangle.OpacityMask> <DrawingBrush> <DrawingBrush.Drawing> <GeometryDrawing Geometry="M65.5,33 L537.5,35 537.5,274.5 C536.5,81 119.5,177 66.5,92" Brush="#11444444"> <GeometryDrawing.Pen> <Pen Brush="Transparent"/> </GeometryDrawing.Pen> </GeometryDrawing> </DrawingBrush.Drawing> </DrawingBrush> </Rectangle.OpacityMask> </Rectangle> <Border Name="Border" Background="Transparent" BorderBrush="Gray" BorderThickness="1" CornerRadius="2"> <ScrollViewer Margin="0" Focusable="false"> <StackPanel Margin="2" IsItemsHost="True" /> </ScrollViewer> </Border> </Grid> <ControlTemplate.Triggers> <Trigger Property="IsEnabled" Value="false"> <Setter TargetName="Border" Property="Background" Value="Gray" /> <Setter TargetName="Border" Property="BorderBrush" Value="DimGray" /> </Trigger> <Trigger Property="IsGrouping" Value="true"> <Setter Property="ScrollViewer.CanContentScroll" Value="false"/> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> </Style> <Style TargetType="{x:Type ListBoxItem}"> <Setter Property="SnapsToDevicePixels" Value="true"/> <Setter Property="OverridesDefaultStyle" Value="true"/> <Setter Property="Foreground" Value="Gray"/> <Setter Property="Background" Value="Transparent"/> <Setter Property="FontFamily" Value="Verdana"/> <Setter Property="HorizontalAlignment" Value="Stretch"/> <Setter Property="FontSize" Value="11"/> <Setter Property="Margin" Value="3,1,3,1"/> <Setter Property="Padding" Value="0"/> <Setter Property="FontWeight" Value="Normal"/> <Setter Property="VerticalAlignment" Value="Center"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="ListBoxItem"> <Border Name="Border" Padding="2" SnapsToDevicePixels="true"> <ContentPresenter /> </Border> <ControlTemplate.Triggers> <Trigger Property="IsSelected" Value="true"> <Setter TargetName="Border" Property="Background" Value="Gray"/> </Trigger> <Trigger Property="IsEnabled" Value="false"> <Setter Property="Foreground" Value="White"/> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> </Style> <ControlTemplate x:Key="ExpanderToggleButton" TargetType="ToggleButton"> <Border Name="Border" CornerRadius="2,0,0,0" Background="Transparent" BorderBrush="LightGray" BorderThickness="0,0,1,0"> <Path Name="Arrow" Fill="Blue" HorizontalAlignment="Center" VerticalAlignment="Center" Data="M 0 0 L 4 4 L 8 0 Z"/> </Border> <ControlTemplate.Triggers> <Trigger Property="ToggleButton.IsMouseOver" Value="True"> <Setter TargetName="Border" Property="Background" Value="Gray" /> </Trigger> <Trigger Property="IsPressed" Value="True"> <Setter TargetName="Border" Property="Background" Value="Black" /> </Trigger> <Trigger Property="IsChecked" Value="True"> <Setter TargetName="Arrow" Property="Data" Value="M 0 4 L 4 0 L 8 4 Z" /> </Trigger> <Trigger Property="IsEnabled" Value="False"> <Setter TargetName="Border" Property="Background" Value="DimGray" /> <Setter TargetName="Border" Property="BorderBrush" Value="DimGray" /> <Setter Property="Foreground" Value="LightGray"/> <Setter 
TargetName="Arrow" Property="Fill" Value="LightBlue" /> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> <Style TargetType="{x:Type Expander}"> <Setter Property="Foreground" Value="White"/> <Setter Property="FontFamily" Value="Verdana"/> <Setter Property="FontSize" Value="11"/> <Setter Property="FontWeight" Value="Normal"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="Expander"> <Grid> <Grid.RowDefinitions> <RowDefinition Height="Auto"/> <RowDefinition Name="ContentRow" Height="0"/> </Grid.RowDefinitions> <Border Name="Border" Grid.Row="0" Background="Black" BorderBrush="DimGray" BorderThickness="1" Cursor="Hand" CornerRadius="2,2,0,0" > <Grid HorizontalAlignment="Left"> <Grid.RowDefinitions> <RowDefinition Height="23"/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="20" /> <ColumnDefinition Width="*" /> </Grid.ColumnDefinitions> <ToggleButton IsChecked="{Binding Path=IsExpanded,Mode=TwoWay,RelativeSource={RelativeSource TemplatedParent}}" Template="{StaticResource ExpanderToggleButton}" Background="Black" /> <Label Grid.Column="1" FontSize="14" FontWeight="Normal" Margin="0" VerticalAlignment="Top" Foreground="White" FontFamily="Verdana"> <ContentPresenter Grid.Column="1" Margin="4,3,0,0" HorizontalAlignment="Left" ContentSource="Header" RecognizesAccessKey="True" /> </Label> </Grid> </Border> <Border Name="Content" Background="Black" BorderBrush="DimGray" BorderThickness="1,0,1,1" Grid.Row="1" CornerRadius="0,0,2,2" > <Grid Background="Black"> <Rectangle VerticalAlignment="Stretch" HorizontalAlignment="Stretch" Fill="White"> <Rectangle.OpacityMask> <DrawingBrush> <DrawingBrush.Drawing> <GeometryDrawing Geometry="M65.5,33 L537.5,35 537.5,274.5 C536.5,81 119.5,177 66.5,92" Brush="#11444444"> <GeometryDrawing.Pen> <Pen Brush="Transparent"/> </GeometryDrawing.Pen> </GeometryDrawing> </DrawingBrush.Drawing> </DrawingBrush> </Rectangle.OpacityMask> </Rectangle> <ContentPresenter Margin="4" /> </Grid> </Border> </Grid> <ControlTemplate.Triggers> <Trigger Property="IsExpanded" Value="True"> <Setter TargetName="ContentRow" Property="Height" Value="{Binding ElementName=Content,Path=DesiredHeight}" /> </Trigger> <Trigger Property="IsEnabled" Value="False"> <Setter TargetName="Border" Property="Background" Value="Gray" /> <Setter TargetName="Border" Property="BorderBrush" Value="DimGray" /> <Setter Property="Foreground" Value="White"/> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> </Style>
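    If the list binds correctly with the default styles, the first suspect is the custom Expander template rather than the grouping itself: it parks the content in a row fixed at Height="0", and the IsExpanded trigger tries to grow it by binding to Path=DesiredHeight, but elements expose DesiredSize (with no change notification), not a bindable DesiredHeight, so the binding fails silently and the group items stay in a zero-height row. A sketch of one way to restructure that part of the template, assuming this diagnosis is right (untested against this exact code):

        <!-- Let the content row size itself... -->
        <RowDefinition Name="ContentRow" Height="Auto" />

        <!-- ...and collapse the content border instead of zeroing the row height. -->
        <ControlTemplate.Triggers>
            <Trigger Property="IsExpanded" Value="False">
                <Setter TargetName="Content" Property="Visibility" Value="Collapsed" />
            </Trigger>
        </ControlTemplate.Triggers>

    The existing IsExpanded="True" height trigger would be removed at the same time; everything else in the template can stay as it is.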

    Read the article

  • Could not load type 'Default.DataMatch' in DataMatch.aspx file

    - by salvationishere
    I am developing a C# VS 2008 / SQL Server 2008 website, but now I am getting the above error when I build it. I included the Default.aspx, Default.aspx.cs, DataMatch.aspx, and DataMatch.aspx.cs files below. What do I need to do to fix this? Default.aspx: <%@ Page Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" Title="Untitled Page" %> ... DataMatch.aspx: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="DataMatch.aspx.cs" Inherits="_Default.DataMatch" %> ... Default.aspx.cs: using System; using System.Collections; using System.Configuration; using System.Data; using System.Linq; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.HtmlControls; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Xml.Linq; using System.Collections.Generic; using System.IO; using System.Drawing; using System.ComponentModel; using System.Data.SqlClient; using ADONET_namespace; using System.Security.Principal; //using System.Windows; public partial class _Default : System.Web.UI.Page //namespace AddFileToSQL { //protected System.Web.UI.HtmlControls.HtmlInputFile uploadFile; //protected System.Web.UI.HtmlControls.HtmlInputButton btnOWrite; //protected System.Web.UI.HtmlControls.HtmlInputButton btnAppend; protected System.Web.UI.WebControls.Label Label1; protected static string inputfile = ""; public static string targettable; public static string selection; // Number of controls added to view state protected int default_NumberOfControls { get { if (ViewState["default_NumberOfControls"] != null) { return (int)ViewState["default_NumberOfControls"]; } else { return 0; } } set { ViewState["default_NumberOfControls"] = value; } } protected void uploadFile_onclick(object sender, EventArgs e) { } protected void Load_GridData() { //GridView1.DataSource = ADONET_methods.DisplaySchemaTables(); //GridView1.DataBind(); } protected void btnOWrite_Click(object sender, EventArgs e) { if (uploadFile.PostedFile.ContentLength > 0) { feedbackLabel.Text = "You do not have sufficient access to overwrite table records."; } else { feedbackLabel.Text = "This file does not contain any data."; } } protected void btnAppend_Click(object sender, EventArgs e) { string fullpath = Page.Request.PhysicalApplicationPath; string path = uploadFile.PostedFile.FileName; if (File.Exists(path)) { // Create a file to write to. 
try { StreamReader sr = new StreamReader(path); string s = ""; while (sr.Peek() > 0) s = sr.ReadLine(); sr.Close(); } catch (IOException exc) { Console.WriteLine(exc.Message + "Cannot open file."); return; } } if (uploadFile.PostedFile.ContentLength > 0) { inputfile = System.IO.File.ReadAllText(path); Session["Message"] = inputfile; Response.Redirect("DataMatch.aspx"); } else { feedbackLabel.Text = "This file does not contain any data."; } } protected void Page_Load(object sender, EventArgs e) { if (Request.IsAuthenticated) { WelcomeBackMessage.Text = "Welcome back, " + User.Identity.Name + "!"; // Reference the CustomPrincipal / CustomIdentity CustomIdentity ident = User.Identity as CustomIdentity; if (ident != null) WelcomeBackMessage.Text += string.Format(" You are the {0} of {1}.", ident.Title, ident.CompanyName); AuthenticatedMessagePanel.Visible = true; AnonymousMessagePanel.Visible = false; if (!Page.IsPostBack) { Load_GridData(); } } else { AuthenticatedMessagePanel.Visible = false; AnonymousMessagePanel.Visible = true; } } protected void GridView1_SelectedIndexChanged(object sender, EventArgs e) { GridViewRow row = GridView1.SelectedRow; targettable = row.Cells[2].Text; } } DataMatch.aspx.cs: using System; using System.Collections; using System.Collections.Generic; using System.Data; using System.Data.SqlClient; using System.Diagnostics; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using ADONET_namespace; //using MatrixApp; //namespace AddFileToSQL //{ public partial class DataMatch : AddFileToSQL._Default { protected System.Web.UI.WebControls.PlaceHolder phTextBoxes; protected System.Web.UI.WebControls.PlaceHolder phDropDownLists; protected System.Web.UI.WebControls.Button btnAnotherRequest; protected System.Web.UI.WebControls.Panel pnlCreateData; protected System.Web.UI.WebControls.Literal lTextData; protected System.Web.UI.WebControls.Panel pnlDisplayData; protected static string inputfile2; static string[] headers = null; static string[] data = null; static string[] data2 = null; static DataTable myInputFile = new DataTable("MyInputFile"); static string[] myUserSelections; static bool restart = false; private DropDownList[] newcol; int @temp = 0; string @tempS = ""; string @tempT = ""; // a Property that manages a counter stored in ViewState protected int NumberOfControls { get { return (int)ViewState["NumControls"]; } set { ViewState["NumControls"] = value; } } private Hashtable ddl_ht { get { return (Hashtable)ViewState["ddl_ht"]; } set { ViewState["ddl_ht"] = value; } } // Page Load private void Page_Load(object sender, System.EventArgs e) { if (!Page.IsPostBack) { ddl_ht = new Hashtable(); this.NumberOfControls = 0; } } // This data comes from input file private void PopulateFileInputTable() { myInputFile.Columns.Clear(); string strInput, newrow; string[] oneRow; DataColumn myDataColumn; DataRow myDataRow; int result, numRows; //Read the input file strInput = Session["Message"].ToString(); data = strInput.Split('\r'); //Headers headers = data[0].Split('|'); //Data for (int i = 0; i < data.Length; i++) { newrow = data[i].TrimStart('\n'); data[i] = newrow; } result = String.Compare(data[data.Length - 1], ""); numRows = data.Length; if (result == 0) { numRows = numRows - 1; } data2 = new string[numRows]; for (int a = 0, b = 0; a < numRows; a++, b++) { data2[b] = data[a]; } // Create columns for (int col = 0; col < headers.Length; col++) { @temp = (col + 1); @tempS = @temp.ToString(); @tempT = "@col"+ @temp.ToString(); myDataColumn = new 
DataColumn(); myDataColumn.DataType = Type.GetType("System.String"); myDataColumn.ColumnName = headers[col]; myInputFile.Columns.Add(myDataColumn); ddl_ht.Add(@tempT, headers[col]); } // Create new DataRow objects and add to DataTable. for (int r = 0; r < numRows - 1; r++) { oneRow = data2[r + 1].Split('|'); myDataRow = myInputFile.NewRow(); for (int c = 0; c < headers.Length; c++) { myDataRow[c] = oneRow[c]; } myInputFile.Rows.Add(myDataRow); } NumberOfControls = headers.Length; myUserSelections = new string[NumberOfControls]; } //Create display panel private void CreateDisplayPanel() { btnSubmit.Style.Add("top", "auto"); btnSubmit.Style.Add("left", "auto"); btnSubmit.Style.Add("position", "absolute"); btnSubmit.Style.Add("top", "200px"); btnSubmit.Style.Add("left", "400px"); newcol = CreateDropDownLists(); for (int counter = 0; counter < NumberOfControls; counter++) { pnlDisplayData.Controls.Add(newcol[counter]); pnlDisplayData.Controls.Add(new LiteralControl("<br><br><br>")); pnlDisplayData.Visible = true; pnlDisplayData.FindControl(newcol[counter].ID); } } //Recreate display panel private void RecreateDisplayPanel() { btnSubmit.Style.Add("top", "auto"); btnSubmit.Style.Add("left", "auto"); btnSubmit.Style.Add("position", "absolute"); btnSubmit.Style.Add("top", "200px"); btnSubmit.Style.Add("left", "400px"); newcol = RecreateDropDownLists(); for (int counter = 0; counter < NumberOfControls; counter++) { pnlDisplayData.Controls.Add(newcol[counter]); pnlDisplayData.Controls.Add(new LiteralControl("<br><br><br>")); pnlDisplayData.Visible = true; pnlDisplayData.FindControl(newcol[counter].ID); } } // Add DropDownList Control to Placeholder private DropDownList[] CreateDropDownLists() { DropDownList[] dropDowns = new DropDownList[NumberOfControls]; for (int counter = 0; counter < NumberOfControls; counter++) { DropDownList ddl = new DropDownList(); SqlDataReader dr2 = ADONET_methods.DisplayTableColumns(targettable); ddl.ID = "DropDownListID" + counter.ToString(); int NumControls = targettable.Length; DataTable dt = new DataTable(); dt.Load(dr2); ddl.DataValueField = "COLUMN_NAME"; ddl.DataTextField = "COLUMN_NAME"; ddl.DataSource = dt; ddl.SelectedIndexChanged += new EventHandler(ddlList_SelectedIndexChanged); ddl.DataBind(); ddl.AutoPostBack = true; ddl.EnableViewState = true; //Preserves View State info on Postbacks dr2.Close(); ddl.Items.Add("IGNORE"); dropDowns[counter] = ddl; } return dropDowns; } protected void ddlList_SelectedIndexChanged(object sender, EventArgs e) { DropDownList ddl = (DropDownList)sender; string ID = ddl.ID; } // Add TextBoxes Control to Placeholder private DropDownList[] RecreateDropDownLists() { DropDownList[] dropDowns = new DropDownList[NumberOfControls]; for (int counter = 0; counter < NumberOfControls; counter++) { DropDownList ddl = new DropDownList(); SqlDataReader dr2 = ADONET_methods.DisplayTableColumns(targettable); ddl.ID = "DropDownListID" + counter.ToString(); int NumControls = targettable.Length; DataTable dt = new DataTable(); dt.Load(dr2); ddl.DataValueField = "COLUMN_NAME"; ddl.DataTextField = "COLUMN_NAME"; ddl.DataSource = dt; ddl.SelectedIndexChanged += new EventHandler(ddlList_SelectedIndexChanged); ddl.DataBind(); ddl.AutoPostBack = true; ddl.EnableViewState = false; //Preserves View State info on Postbacks dr2.Close(); ddl.Items.Add("IGNORE"); dropDowns[counter] = ddl; } return dropDowns; } private void CreateLabels() { for (int counter = 0; counter < NumberOfControls; counter++) { Label lbl = new Label(); lbl.ID = "Label" + 
counter.ToString(); lbl.Text = headers[counter]; lbl.Style["position"] = "absolute"; lbl.Style["top"] = 60 * counter + 10 + "px"; lbl.Style["left"] = 250 + "px"; pnlDisplayData.Controls.Add(lbl); pnlDisplayData.Controls.Add(new LiteralControl("<br><br><br>")); } } // Add TextBoxes Control to Placeholder private void RecreateLabels() { for (int counter = 0; counter < NumberOfControls; counter++) { Label lbl = new Label(); lbl.ID = "Label" + counter.ToString(); lbl.Text = headers[counter]; lbl.Style["position"] = "absolute"; lbl.Style["top"] = 60 * counter + 10 + "px"; lbl.Style["left"] = 250 + "px"; pnlDisplayData.Controls.Add(lbl); pnlDisplayData.Controls.Add(new LiteralControl("<br><br><br>")); } } // Create TextBoxes and DropDownList data here on postback. protected override void CreateChildControls() { // create the child controls if the server control does not contains child controls this.EnsureChildControls(); // Creates a new ControlCollection. this.CreateControlCollection(); // Here we are recreating controls to persist the ViewState on every post back if (Page.IsPostBack) { RecreateDisplayPanel(); RecreateLabels(); } // Create these conrols when asp.net page is created else { PopulateFileInputTable(); CreateDisplayPanel(); CreateLabels(); } // Prevent dropdownlists and labels from being created again. if (restart == false) { this.ChildControlsCreated = true; } else if (restart == true) { this.ChildControlsCreated = false; } } private void AppendRecords() { switch (targettable) { case "ContactType": for (int r = 0; r < myInputFile.Rows.Count; r++) { resultLabel.Text = ADONET_methods.AppendDataCT(myInputFile.Rows[r], ddl_ht); } break; case "Contact": for (int r = 0; r < myInputFile.Rows.Count; r++) { resultLabel.Text = ADONET_methods.AppendDataC(myInputFile.Rows[r], ddl_ht); } break; case "AddressType": for (int r = 0; r < myInputFile.Rows.Count; r++) { resultLabel.Text = ADONET_methods.AppendDataAT(myInputFile.Rows[r], ddl_ht); } break; default: resultLabel.Text = "You do not have access to modify this table. Please select a different target table and try again."; restart = true; break; //throw new ArgumentOutOfRangeException("targettable type", targettable); } } // Read all the data from TextBoxes and DropDownLists protected void btnSubmit_Click(object sender, System.EventArgs e) { //int cnt = FindOccurence("DropDownListID"); AppendRecords(); pnlDisplayData.Visible = false; btnSubmit.Visible = false; resultLabel.Attributes.Add("style", "align:center"); btnSubmit.Style.Add("top", "auto"); btnSubmit.Style.Add("left", "auto"); btnSubmit.Style.Add("position", "absolute"); int bSubmitPosition = NumberOfControls; btnSubmit.Style.Add("top", System.Convert.ToString(bSubmitPosition)+"px"); resultLabel.Visible = true; Instructions.Visible = false; if (restart == true) { CreateChildControls(); } } private int FindOccurence(string substr) { string reqstr = Request.Form.ToString(); return ((reqstr.Length - reqstr.Replace(substr, "").Length) / substr.Length); } #region Web Form Designer generated code override protected void OnInit(EventArgs e) { // // CODEGEN: This call is required by the ASP.NET Web Form Designer. // InitializeComponent(); base.OnInit(e); } /// <summary> /// Required method for Designer support - do not modify /// the contents of this method with the code editor. /// </summary> private void InitializeComponent() { } #endregion } //}
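    One thing stands out before any of the code runs: the @ Page directive and the class do not agree. DataMatch.aspx declares Inherits="_Default.DataMatch" with a CodeBehind attribute, but the class in DataMatch.aspx.cs is named DataMatch (it derives from AddFileToSQL._Default; it is not nested inside a _Default namespace), and a website project that compiles code-behind at request time uses CodeFile, as Default.aspx already does. A sketch of a directive that matches the class as currently written (with the namespace lines still commented out):

        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="DataMatch.aspx.cs" Inherits="DataMatch" %>

    If the //namespace AddFileToSQL lines are uncommented instead, the class becomes AddFileToSQL.DataMatch and the Inherits value must change to match.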

    Read the article

  • WPF - Overlapping Custom Tabs in a TabControl and ZIndex

    - by Rachel
    Problem: I have a custom tab control using Chrome-shaped tabs that binds to a ViewModel. Because of the shape, the edges overlap a bit. I have a function that sets the tabItem's ZIndex on TabControl_SelectionChanged which works fine for selecting tabs and dragging/dropping tabs; however, when I Add or Close a tab via a Relay Command I am getting unusual results. Does anyone have any ideas? Default View: http://i193.photobucket.com/albums/z197/Lady53461/tabs_default.jpg Removing Tabs: http://i193.photobucket.com/albums/z197/Lady53461/tabs_removing.jpg Adding 2 or more Tabs in a row: http://i193.photobucket.com/albums/z197/Lady53461/tabs_adding.jpg Code to set ZIndex: private void PrimaryTabControl_SelectionChanged(object sender, SelectionChangedEventArgs e) { if (e.Source is TabControl) { TabControl tabControl = sender as TabControl; ItemContainerGenerator icg = tabControl.ItemContainerGenerator; if (icg.Status == System.Windows.Controls.Primitives.GeneratorStatus.ContainersGenerated) { foreach (object o in tabControl.Items) { UIElement tabItem = icg.ContainerFromItem(o) as UIElement; Panel.SetZIndex(tabItem, (o == tabControl.SelectedItem ? 100 : 90 - tabControl.Items.IndexOf(o))); } } } } By using breakpoints I can see that it is correctly setting the ZIndex to what I want it to; however, the layout is not displaying the changes. I know some of the changes are in effect because if none of them were working then the tab edges would be reversed (the right tabs would be drawn on top of the left ones). Clicking a tab will correctly set the zindex of all tabs (including the one that should be drawn on top) and dragging/dropping them to rearrange them also renders correctly (which removes and reinserts the tab item). The only difference I can think of is I am using the MVVM design pattern and the buttons that Add/Close tabs are relay commands. Does anyone have any idea why this is happening and how I can fix it?? P.S. I did try setting a ZIndex in my ViewModel and binding to it, however the same thing happens when adding/removing tabs via the relay command. EDIT: Being a new user I couldn't post images and could only post 1 link. Images just show a picture of what the tabs render as after each scenario. Adding more than 1 at a time will not reset the zindex of other recently-added tabs, so they go behind the tab on the right, and closing tabs does not correctly render the ZIndex of the SelectedTab that replaces it and it shows up behind the tab on its right.
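    One plausible explanation: when the relay command adds or removes an item, WPF regenerates the tab containers asynchronously, so code that runs synchronously in SelectionChanged can be setting Panel.ZIndex on containers that are about to be discarded, or can run before the new containers exist at all (the icg.Status check then simply skips the loop). A sketch that defers the work until the generator reports the containers as ready (the method name is mine; it would be called from SelectionChanged and from the add/close commands):

        // Re-applies Panel.ZIndex once the tab containers actually exist.
        // Assumes: using System; using System.Windows; using System.Windows.Controls;
        // using System.Windows.Controls.Primitives;
        private void ReapplyZIndexes(TabControl tabControl)
        {
            ItemContainerGenerator icg = tabControl.ItemContainerGenerator;
            if (icg.Status != GeneratorStatus.ContainersGenerated)
            {
                // Containers are rebuilt asynchronously after an add/remove,
                // so wait for the generator instead of assuming they exist.
                EventHandler onStatusChanged = null;
                onStatusChanged = (s, e) =>
                {
                    if (icg.Status == GeneratorStatus.ContainersGenerated)
                    {
                        icg.StatusChanged -= onStatusChanged;
                        ReapplyZIndexes(tabControl);
                    }
                };
                icg.StatusChanged += onStatusChanged;
                return;
            }
            foreach (object o in tabControl.Items)
            {
                UIElement tabItem = icg.ContainerFromItem(o) as UIElement;
                if (tabItem != null)
                {
                    Panel.SetZIndex(tabItem,
                        o == tabControl.SelectedItem ? 100 : 90 - tabControl.Items.IndexOf(o));
                }
            }
        }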

    Read the article

  • Binding WPF DataGrid to DataTable using TemplateColumns

    - by Chris J
    I have tried everything and got nowhere so I'm hoping someone can give me the aha moment. I simply cannot get the binding to pull the data in the datagrid successfully. I have a DataTable whose cells are of type MyData: public class MyData { string nameData {get;set;} bool showData {get;set;} } MyData has 2 properties (a string and a bool). I have created a test DataTable: DataTable GetDummyData() { DataTable dt = new DataTable("Foo"); dt.Columns.Add(new DataColumn("AnotherColumn", typeof(MyData))); dt.Rows.Add(new MyData("Row1C1", true)); dt.Rows.Add(new MyData("Row2C1", false)); dt.AcceptChanges(); return dt; } I have a WPF DataGrid in which I want to show my DataTable. All I want to do is change how each cell is rendered to show [TextBlock][Button] per cell with values bound to the MyData object, and this is where I'm having a tonne of trouble. My XAML looks like this: <Window.Resources><ResourceDictionary><DataTemplate x:Key="MyDataTemplate" DataType="MyData"> <StackPanel Orientation="Horizontal" > <Button Background="Green" HorizontalAlignment="Right" VerticalAlignment="Center" Margin="5,0,0,0" Content="{Binding Path=nameData}"></Button> <TextBlock Background="Green" HorizontalAlignment="Center" VerticalAlignment="Center" Margin="5,0,0,0" Text="{Binding Path=nameData}"></TextBlock> </StackPanel> </DataTemplate></ResourceDictionary></Window.Resources> <Grid> <dg:DataGrid Grid.Row="1" AutoGenerateColumns="True" x:Name="dataGrid1" SelectionMode="Single" CanUserAddRows="False" CanUserSortColumns="true" CanUserDeleteRows="False" AlternatingRowBackground="AliceBlue" AutoGeneratingColumn="dataGrid1_AutoGeneratingColumn" ItemsSource="{Binding}" /> Now all I do once loaded is attempt to bind the DataTable to the WPF DataGrid: dt = GetDummyData(); dataGrid1.ItemsSource = dt.DefaultView; The TextBlock and Button show up, but they don't bind, which leaves them blank. Could anyone let me know if they have any idea how to fix this? This should be simple; that's what Microsoft leads us to believe. I have set the Column.CellTemplate during the AutoGenerating event and still get no binding. Please help!!!
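    Two things in the posted code would each be enough to produce exactly this "controls render but stay blank" symptom, so the following is a sketch of both fixes rather than a guaranteed answer. First, binding only sees public members, and MyData as written has private properties and no (string, bool) constructor:

        public class MyData
        {
            public MyData(string name, bool show)
            {
                nameData = name;
                showData = show;
            }

            // Must be public for the binding engine to read them.
            public string nameData { get; set; }
            public bool showData { get; set; }
        }

    Second, when ItemsSource is dt.DefaultView, each row's DataContext is the DataRowView, not the MyData instance, so the template has to reach through the column to the object:

        <TextBlock Text="{Binding Path=AnotherColumn.nameData}" />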

    Read the article

  • Silverlight and Unexpected Font Sizes

    - by Eric J.
    Someone please teach me to fish here... I'm just learning Silverlight and have run into a few situations where the font size actually used is drastically different from what I would expect. There's probably something conceptual that I'm missing. Case A: In one instance, I have defined a user control that presents a Label to show text. If one clicks on the label, the label (that is in a stack panel, in the user control) is replaced with a TextBox. When used at the top of a page (as in the example below with lblName) the label text is very small (around 8 points). When clicked on, the text box that replaces the label uses the specified font size. That same user control, used in different parts of the app, uses the same font for Label and TextBox. <Grid x:Name="LayoutRoot" Background="White"> <Grid.RowDefinitions> <RowDefinition Height="33" /> <RowDefinition Height="267*" /> </Grid.RowDefinitions> <StackPanel Height="Auto" HorizontalAlignment="Left" Name="stackPanel" VerticalAlignment="Top" Width="Auto" Grid.Row="1" /> <my:EditLabel Height="33" HorizontalAlignment="Left" x:Name="lblName" VerticalAlignment="Top" Width="Auto" FlexText="{Binding Name, Mode=TwoWay}" FontSize="20" MinHeight="24" /> </Grid> Case B: I'm using the LiquidMenu.Menu control to pop up a menu when a button is pressed. The font looks huge compared to the rest of my page (maybe 36 points?). I tried forcing it very small by explicitly setting it to 8pt, but that had no effect. <Grid x:Name="LayoutRoot" Background="{x:Null}"> <StackPanel x:Name="labelStackPanel" Orientation="Horizontal"> <TextBlock Height="24" HorizontalAlignment="Left" Name="labelText" VerticalAlignment="Top" Width="200" Text="(Value Goes Here)" /> </StackPanel> <liquidMenu:Menu x:Name="popupMenu" Canvas.Left="40" Canvas.Top="40" ItemSelected="MenuList_ItemSelected" Visibility="Collapsed" Height="Auto" FontSize="8"> <liquidMenu:MenuItem ID="delete" Icon="Images/Delete10.png" Text="Delete" Shortcut="Del" /> <liquidMenu:MenuItem ID="exclusive" Icon="" Text="Exclusive" Shortcut="Ctrl+E" /> <liquidMenu:MenuItem ID="properties" Icon="" Text="Properties" Shortcut="Ctrl+P" /> </liquidMenu:Menu> </Grid> Answers to these specific issues are great; a new way to think about this type of issue, so that I understand how to control font size, is even better.

    Read the article

  • EPPlus - .xlsx is locked for editing by 'another user'

    - by AdamTheITMan
    I have searched through every possible answer on SO for a solution, but nothing has worked. I am basically creating an excel file from a database and sending the results to the response stream using EPPlus(OpenXML). The following code gives me an error when trying to open my generated excel sheet "[report].xlsx is locked for editing by 'another user'." It will open fine the first time, but the second time it's locked. Dim columnData As New List(Of Integer) Dim rowHeaders As New List(Of String) Dim letter As String = "B" Dim x As Integer = 0 Dim trendBy = context.Session("TRENDBY").ToString() Dim dateHeaders As New List(Of String) dateHeaders = DirectCast(context.Session("DATEHEADERS"), List(Of String)) Dim DS As New DataSet DS = DirectCast(context.Session("DS"), DataSet) Using excelPackage As New OfficeOpenXml.ExcelPackage Dim excelWorksheet = excelPackage.Workbook.Worksheets.Add("Report") 'Add title to the top With excelWorksheet.Cells("B1") .Value = "Account Totals by " + If(trendBy = "Months", "Month", "Week") .Style.Font.Bold = True End With 'add date headers x = 2 'start with letter B (aka 2) For Each Header As String In dateHeaders With excelWorksheet.Cells(letter + "2") .Value = Header .Style.HorizontalAlignment = OfficeOpenXml.Style.ExcelHorizontalAlignment.Right .AutoFitColumns() End With x = x + 1 letter = Helper.GetColumnIndexToColumnLetter(x) Next 'Adds the descriptive row headings down the left side of excel sheet x = 0 For Each DC As DataColumn In DS.Tables(0).Columns If (x < DS.Tables(0).Columns.Count) Then rowHeaders.Add(DC.ColumnName) End If Next Dim range = excelWorksheet.Cells("A3:A30") range.LoadFromCollection(rowHeaders) 'Add the meat and potatoes of report x = 2 For Each dTable As DataTable In DS.Tables columnData.Clear() For Each DR As DataRow In dTable.Rows For Each item As Object In DR.ItemArray columnData.Add(item) Next Next letter = Helper.GetColumnIndexToColumnLetter(x) excelWorksheet.Cells(letter + "3").LoadFromCollection(columnData) With excelWorksheet.Cells(letter + "3") .Formula = "=SUM(" + letter + "4:" + letter + "6)" .Style.Font.Bold = True .Style.Font.Size = 12 End With With excelWorksheet.Cells(letter + "7") .Formula = "=SUM(" + letter + "8:" + letter + "11)" .Style.Font.Bold = True .Style.Font.Size = 12 End With With excelWorksheet.Cells(letter + "12") .Style.Font.Bold = True .Style.Font.Size = 12 End With With excelWorksheet.Cells(letter + "13") .Formula = "=SUM(" + letter + "14:" + letter + "20)" .Style.Font.Bold = True .Style.Font.Size = 12 End With With excelWorksheet.Cells(letter + "21") .Formula = "=SUM(" + letter + "22:" + letter + "23)" .Style.Font.Bold = True .Style.Font.Size = 12 End With With excelWorksheet.Cells(letter + "24") .Formula = "=SUM(" + letter + "25:" + letter + "26)" .Style.Font.Bold = True .Style.Font.Size = 12 End With With excelWorksheet.Cells(letter + "27") .Formula = "=SUM(" + letter + "28:" + letter + "29)" .Style.Font.Bold = True .Style.Font.Size = 12 End With With excelWorksheet.Cells(letter + "30") .Formula = "=SUM(" + letter + "3," + letter + "7," + letter + "12," + letter + "13," + letter + "21," + letter + "24," + letter + "27)" .Style.Font.Bold = True .Style.Font.Size = 12 End With x = x + 1 Next range.AutoFitColumns() 'send it to response Using stream As New MemoryStream(excelPackage.GetAsByteArray()) context.Response.Clear() context.Response.ContentType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet" context.Response.AddHeader("content-disposition", "attachment; filename=filetest.xlsx") 
context.Response.OutputStream.Write(stream.ToArray(), 0, stream.ToArray().Length) context.Response.Flush() context.Response.Close() End Using End Using
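    Before suspecting EPPlus, the response plumbing is worth a look: Response.Close() tears the connection down abruptly and is known to truncate downloads, and no Content-Length header is sent, so on the second request Excel can end up opening a damaged or cached copy and then blaming its own orphaned lock on "another user". A sketch of the end of the Using block with those two things changed (this assumes the code runs in an IHttpHandler, as the context parameter suggests; it is a guess at the fix, not a verified one):

        ' Buffer the bytes once, advertise the length, and let ASP.NET finish the
        ' request rather than calling Response.Close().
        Dim bytes As Byte() = excelPackage.GetAsByteArray()
        context.Response.Clear()
        context.Response.ContentType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
        context.Response.AddHeader("content-disposition", "attachment; filename=filetest.xlsx")
        context.Response.AddHeader("Content-Length", bytes.Length.ToString())
        context.Response.BinaryWrite(bytes)
        context.ApplicationInstance.CompleteRequest()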

    Read the article

  • SSRS Report from Oracle DB - Use stored procedure

    - by Emtucifor
    I am developing a report in Sql Server Reporting Services 2005, connecting to an Oracle 11g database. As you post replies perhaps it will help to know that I'm skilled in MSSQL Server and inexperienced in Oracle. I have multiple nested subreports and need to use summary data in outer reports and the same data but in detail in the inner reports. In order to spare the DB server from multiple executions, I thought to populate some temp tables at the beginning and then query just them the multiple times in the report and the subreports. In SSRS, Datasets are evidently executed in the order they appear in the RDL file. And you can have a dataset that doesn't return a rowset. So I created a stored procedure to populate my four temp tables and made this the first Dataset in my report. This SP works when I run it from SQLDeveloper and I can query the data from the temp tables. However, this didn't appear to work out because SSRS was apparently not reusing the same session, so even though the global temporary tables were created with ON COMMIT PRESERVE ROWS my Datasets were empty. I switched to using "real" tables and am now passing in an additional parameter, a GUID in string form, uniquely generated on each new execution, that is part of the primary key of each table, so I can get back just the rows for this execution. Running this from Sql Developer works fine, example: DECLARE ActivityCode varchar2(15) := '1208-0916 '; ExecutionID varchar2(32) := SYS_GUID(); BEGIN CIPProjectBudget (ActivityCode, ExecutionID); END; Never mind that in this example I don't know the GUID, this simply proves it works because rows are inserted to my four tables. But in the SSRS report, I'm still getting no rows in my Datasets and SQL Developer confirms no rows are being inserted. So I'm thinking along the lines of: Oracle uses implicit transactions and my changes aren't getting committed? Even though I can prove that the non-rowset returning SP is executing (because if I leave out the parameter mapping it complains at report rendering time about not having enough parameters) perhaps it's not really executing. Somehow. Wrong execution order isn't the problem or rows would appear in the tables, and they aren't. I'm interested in any ideas about how to accomplish this (especially the part about not running the main queries multiple times). I'll redesign my whole report. I'll stop using a stored procedure. Suggest anything you like! I just need help getting this working and I am stuck. If you want more details, in my SSRS report I have a List object (it's a container that repeats once for each row in a Dataset) that has some header values and then contains a subreport. Eventually, there will be four total reports: one main report, with three nested subreports. Each subreport will be in a List on the parent report.
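    On the implicit-transaction hypothesis specifically: Oracle never auto-commits DML, and rows inserted by the procedure's session are invisible to every other session until a COMMIT is issued. Since SSRS opens its own connection(s) per dataset, the SELECT datasets may simply be running in sessions that cannot see the uncommitted inserts, while SQL Developer works because it reads from the same session that did the inserting. If that is the cause, committing inside the procedure is the minimal change (a sketch only; the real procedure body is not shown above):

        CREATE OR REPLACE PROCEDURE CIPProjectBudget (
            ActivityCode IN VARCHAR2,
            ExecutionID  IN VARCHAR2
        ) AS
        BEGIN
            -- ... the existing INSERTs into the four work tables, keyed by ExecutionID ...
            COMMIT;  -- make the rows visible to the report's other connections
        END;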

    Read the article

  • a4j:jsFunction with actionListener inside of h:dataTable

    - by JQueryNeeded
    Hello all, I'm having a problem using a4j:jsFunction with an actionListener inside an h:dataTable. When I want to invoke an action over a particular row with a4j:commandLink it works flawlessly, but when I want to invoke the action with a4j:jsFunction & actionListener it's always invoked over the last element in the dataTable. Let me give you an example: <a4j:form ajaxSubmit="true" reRender="mainForm" id="mainForm"> <a4j:region> <t:saveState value="#{ts.list}" /> </a4j:region> <h:dataTable value="#{ts.list}" var="el" binding="#{ts.bind}"> <h:column>#{el}</h:column> <h:column> <a4j:commandLink actionListener="#{ts.rem}"> <h:outputText value="delete by CMDLink" /> </a4j:commandLink> </h:column> <h:column> <a href="#" onclick="okClicked();">delete by okClicked</a> <a4j:jsFunction name="okClicked" actionListener="#{ts.rem}" /> </h:column> </h:dataTable> </a4j:form> Now, the bean's code: package com.sth; import java.util.ArrayList; import java.util.List; import javax.faces.component.UIData; import javax.faces.event.ActionEvent; public class Ts { private List<String> list = new ArrayList<String>(); private UIData bind; public Ts(){ list.add("element1"); list.add("element2"); list.add("element3"); list.add("element4"); } public List<String> getList() { return list; } public void setList(List<String> list) { this.list = list; } public void rem(ActionEvent ae) { String toRem = (String) bind.getRowData(); System.out.println("Deleting " + toRem); list.remove(toRem); } public UIData getBind() { return bind; } public void setBind(UIData bind) { this.bind = bind; } } When I use a4j:commandLink to remove an element, it works as expected, but when I use a4j:jsFunction to invoke the actionListener it invokes the action against the last element :( Any ideas? Cheers
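    One explanation fits the "always the last element" symptom exactly: the a4j:jsFunction is rendered once per row, and every row emits a JavaScript function with the same name, so the last row's okClicked definition overwrites all the earlier ones. a4j:commandLink has no such problem because each link carries its own row context. A common workaround is to declare the function once, outside the table, and pass the row's value in as a parameter (a sketch; remSelected and itemToRemove are hypothetical additions to the Ts bean):

        <a4j:jsFunction name="okClicked" actionListener="#{ts.remSelected}">
            <a4j:actionparam name="toRemove" assignTo="#{ts.itemToRemove}" />
        </a4j:jsFunction>

    and inside the column:

        <a href="#" onclick="okClicked('#{el}'); return false;">delete by okClicked</a>

    where remSelected removes itemToRemove from the list directly instead of asking the table binding for the current row.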

    Read the article

  • UITableView's NSString memory leak on iphone when encoding with NSUTF8StringEncoding

    - by vince
    My UITableView has a serious memory leak problem, but only when the NSString is NOT encoded with NSASCIIStringEncoding. - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"cell"; UILabel *textLabel1; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; textLabel1 = [[UILabel alloc] initWithFrame:CGRectMake(105, 6, 192, 22)]; textLabel1.tag = 1; textLabel1.textColor = [UIColor whiteColor]; textLabel1.backgroundColor = [UIColor blackColor]; textLabel1.numberOfLines = 1; textLabel1.adjustsFontSizeToFitWidth = NO; [textLabel1 setFont:[UIFont boldSystemFontOfSize:19]]; [cell.contentView addSubview:textLabel1]; [textLabel1 release]; } else { textLabel1 = (UILabel *)[cell.contentView viewWithTag:1]; } NSDictionary *tmpDict = [listOfInfo objectForKey:[NSString stringWithFormat:@"%@",indexPath.row]]; textLabel1.text = [tmpDict objectForKey:@"name"]; return cell; } -(void) readDatabase { NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *documentsDir = [documentPaths objectAtIndex:0]; databasePath = [documentsDir stringByAppendingPathComponent:[NSString stringWithFormat:@"%@",myDB]]; sqlite3 *database; if(sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) { const char *sqlStatement = [[NSString stringWithFormat:@"select id,name from %@ order by orderid",myTable] UTF8String]; sqlite3_stmt *compiledStatement; if(sqlite3_prepare_v2(database, sqlStatement, -1, &compiledStatement, NULL) == SQLITE_OK) { while(sqlite3_step(compiledStatement) == SQLITE_ROW) { NSString *tmpid = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 0)]; NSString *tmpname = [NSString stringWithCString:(const char *)sqlite3_column_text(compiledStatement, 1) encoding:NSUTF8StringEncoding]; [listOfInfo setObject:[[NSMutableDictionary alloc] init] forKey:tmpid]; [[listOfInfo objectForKey:tmpid] setObject:[NSString stringWithFormat:@"%@", tmpname] forKey:@"name"]; } } sqlite3_finalize(compiledStatement); debugNSLog(@"sqlite closing"); } sqlite3_close(database); } When I change the line NSString *tmpname = [NSString stringWithCString:(const char *)sqlite3_column_text(compiledStatement, 1) encoding:NSUTF8StringEncoding]; to NSString *tmpname = [NSString stringWithCString:(const char *)sqlite3_column_text(compiledStatement, 1) encoding:NSASCIIStringEncoding]; the memory leak is gone. I tried NSString stringWithUTF8String and it still leaks. I've also tried: NSData *dtmpname = [NSData dataWithBytes:sqlite3_column_blob(compiledStatement, 1) length:sqlite3_column_bytes(compiledStatement, 1)]; NSString *tmpname = [[[NSString alloc] initWithData:dtmpname encoding:NSUTF8StringEncoding] autorelease]; and the problem remains; the leak occurs when you start scrolling the table view. I've also tried other encodings and it seems that only NSASCIIStringEncoding works (no memory leak). Any idea how to get rid of this problem?
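    Separate from the encoding mystery, there is one leak in readDatabase that shows up on every run: the NSMutableDictionary is created with alloc/init and handed to listOfInfo, which retains it, but the local ownership is never given up. A sketch of the loop body with the extra retain released (manual reference counting, as in the rest of the code):

        // listOfInfo retains the dictionary, so balance the alloc with a release.
        NSMutableDictionary *entry = [[NSMutableDictionary alloc] init];
        [listOfInfo setObject:entry forKey:tmpid];
        [entry release];
        [[listOfInfo objectForKey:tmpid] setObject:[NSString stringWithFormat:@"%@", tmpname] forKey:@"name"];

    If Instruments still reports a leak that appears only with NSUTF8StringEncoding after this change, that points at the framework or at the non-ASCII bytes in the database rather than at this code.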

    Read the article

  • iPhone Objective C - error: pointer value used where a floating point value was expected

    - by Mausimo
    I do not understand why I am getting this error. Here is the related code: Photo.h #import <CoreData/CoreData.h> @class Person; @interface Photo : NSManagedObject { } @property (nonatomic, retain) NSData * imageData; @property (nonatomic, retain) NSNumber * Latitude; @property (nonatomic, retain) NSString * ImageName; @property (nonatomic, retain) NSString * ImagePath; @property (nonatomic, retain) NSNumber * Longitude; @property (nonatomic, retain) Person * PhotoToPerson; @end Photo.m #import "Photo.h" #import "Person.h" @implementation Photo @dynamic imageData; @dynamic Latitude; @dynamic ImageName; @dynamic ImagePath; @dynamic Longitude; @dynamic PhotoToPerson; @end This is a mapViewController.m class I have created. If I run this, the CLLocationDegrees CLLat and CLLong lines: CLLocationDegrees CLLat = (CLLocationDegrees)photo.Latitude; CLLocationDegrees CLLong = (CLLocationDegrees)photo.Longitude; give me the error: pointer value used where a floating point value was expected. for(int i = 0; i < iPerson; i++) { //get the person that corresponds to the row indexPath that is currently being rendered and set the text Person * person = (Person *)[myArrayPerson objectAtIndex:i]; //get the photos associated with the person NSArray * PhotoArray = [person.PersonToPhoto allObjects]; int iPhoto = [PhotoArray count]; for(int j = 0; j < iPhoto; j++) { //get the first photo (all people will have at least 1 photo, else they will not exist). Set the image Photo * photo = (Photo *)[PhotoArray objectAtIndex:j]; if(photo.Latitude != nil && photo.Longitude != nil) { MyAnnotation *ann = [[MyAnnotation alloc] init]; ann.title = photo.ImageName; ann.subtitle = photo.ImageName; CLLocationCoordinate2D cord; CLLocationDegrees CLLat = (CLLocationDegrees)photo.Latitude; CLLocationDegrees CLLong = (CLLocationDegrees)photo.Longitude; cord.latitude = CLLat; cord.longitude = CLLong; ann.coordinate = cord; [mkMapView addAnnotation:ann]; } } }
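    The compiler is right to complain: Latitude and Longitude are NSNumber * properties (object pointers), while CLLocationDegrees is a typedef for double, so the cast asks the compiler to turn a pointer into a floating-point value. The NSNumber has to be unboxed instead:

        // Unbox the NSNumber objects rather than casting the pointers.
        CLLocationDegrees CLLat = [photo.Latitude doubleValue];
        CLLocationDegrees CLLong = [photo.Longitude doubleValue];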

    Read the article

  • ASP.NET MVC 2: Updating a Linq-To-Sql Entity with an EntitySet

    - by Simon
    I have a Linq to Sql Entity which has an EntitySet. In my View I display the Entity with its properties plus an editable list for the child entities. The user can dynamically add and delete those child entities. The DefaultModelBinder works fine so far; it correctly binds the child entities. Now my problem is that I just can't get Linq To Sql to delete the deleted child entities; it will happily add new ones but not delete the deleted ones. I have enabled cascade deleting in the foreign key relationship, and the Linq To Sql designer added the "DeleteOnNull=true" attribute to the foreign key relationships. If I manually delete a child entity like this: myObject.Childs.Remove(child); context.SubmitChanges(); This will delete the child record from the DB. But I can't get it to work for a model-bound object. I tried the following: // this does nothing public ActionResult Update(int id, MyObject obj) // obj now has 4 child entities { var obj2 = _repository.GetObj(id); // obj2 has 6 child entities if(TryUpdateModel(obj2)) //it successfully updates obj2 and its childs { _repository.SubmitChanges(); // nothing happens, records stay in DB } else ..... return RedirectToAction("List"); } and this throws an InvalidOperationException. I have a German OS so I'm not exactly sure what the error message is in English, but it says something along the lines of the entity needing a Version (Timestamp row?) or no update check policies. I have set UpdateCheck="Never" on every column except the primary key column. public ActionResult Update(MyObject obj) { _repository.MyObjectTable.Attach(obj, true); _repository.SubmitChanges(); // never gets here, exception at attach } I've read a lot about similar "problems" with Linq To Sql, but it seems most of those "problems" are actually by design. So am I right in my assumption that this doesn't work like I expect it to work? Do I really have to iterate through the child entities and delete, update and insert them manually? For such a simple object this may work, but I plan to create more complex objects with nested EntitySets and so on. This is just a test to see what works and what does not. So far I'm disappointed with Linq To Sql (maybe I just don't get it). Would the Entity Framework or NHibernate be a better choice for this scenario? Or would I run into the same problem?
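    The short answer to the design question is yes: TryUpdateModel updates scalars and adds new children through the EntitySet, but it has no way of knowing that a child missing from the post means "delete", and LINQ to SQL only deletes rows that are explicitly removed from the attached collection. A sketch of the manual sync step inside the working Update(id, obj) overload (the Id property name is illustrative; with DeleteOnNull="true", removing a child from the EntitySet marks its row for deletion):

        // Children the user posted back.
        var postedIds = new HashSet<int>(obj.Childs.Select(c => c.Id));

        // Children still on the database copy but absent from the post.
        var removed = obj2.Childs.Where(c => !postedIds.Contains(c.Id)).ToList();

        foreach (var child in removed)
            obj2.Childs.Remove(child);   // DeleteOnNull=true turns this into a DELETE

        if (TryUpdateModel(obj2))
            _repository.SubmitChanges();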

    Read the article

  • DataForm commit button is not enabled when data changed.

    - by Grayson Mitchell
    This is a weird problem. I am using a DataForm, and when I edit the data the save button is enabled, but the cancel button is not. After looking around a bit I have found that I have to implement IEditableObject in order to cancel an edit. Great, I did that (and it all works), but now the commit button (Save) is grayed out, lol. Anyone have any ideas why the commit button will not activate any more? Xaml: <df:DataForm x:Name="_dataForm" AutoEdit="False" AutoCommit="False" CommandButtonsVisibility="All"> <df:DataForm.EditTemplate > <DataTemplate> <StackPanel Name="rootPanel" Orientation="Vertical" df:DataField.IsFieldGroup="True"> <!-- No fields here. They will be added at run-time. --> </StackPanel> </DataTemplate> </df:DataForm.EditTemplate> </df:DataForm> Binding: DataContext = this; _dataForm.ItemsSource = _rows; ... TextBox textBox = new TextBox(); Binding binding = new Binding(); binding.Path = new PropertyPath("Data"); binding.Mode = BindingMode.TwoWay; binding.Converter = new RowIndexConverter(); binding.ConverterParameter = col.Value.Label; textBox.SetBinding(TextBox.TextProperty, binding); dataField.Content = textBox; // add DataField to layout container rootPanel.Children.Add(dataField); Data class definition: public class Row : INotifyPropertyChanged , IEditableObject { public void BeginEdit() { foreach (var item in _data) { _cache.Add(item.Key, item.Value); } } public void CancelEdit() { _data.Clear(); foreach (var item in _cache) { _data.Add(item.Key, item.Value); } _cache.Clear(); } public void EndEdit() { _cache.Clear(); } private Dictionary<string, object> _cache = new Dictionary<string, object>(); private Dictionary<string, object> _data = new Dictionary<string, object>(); public object this[string index] { get { return _data[index]; } set { _data[index] = value; OnPropertyChanged("Data"); } } public object Data { get { return this; } set { PropertyValueChange setter = value as PropertyValueChange; _data[setter.PropertyName] = setter.Value; } } public event PropertyChangedEventHandler PropertyChanged; protected void OnPropertyChanged(string property) { if (PropertyChanged != null) { PropertyChanged(this, new PropertyChangedEventArgs(property)); } } }
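    A likely reason the Save button stays disabled: the DataForm decides whether the current item is dirty from INotifyPropertyChanged, and after this change the edits arrive through the Data property setter, which stores the value but never raises PropertyChanged (only the string indexer does). A sketch of the setter with the notification added, assuming the generated TextBoxes really do bind TwoWay to Data as the binding code shows:

        public object Data
        {
            get { return this; }
            set
            {
                PropertyValueChange setter = value as PropertyValueChange;
                _data[setter.PropertyName] = setter.Value;
                OnPropertyChanged("Data"); // lets the DataForm mark the item as changed
            }
        }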

    Read the article

< Previous Page | 351 352 353 354 355 356 357 358 359 360 361 362  | Next Page >