Search Results

Search found 6630 results on 266 pages for 'cname record'.


  • Record keyboard/mouse macros for games

    - by Dan
    I want to record a keyboard/mouse macro for automatically playing repetitive Flash games. The programs I'm familiar with, xnee/gnee/pnee and xmacro, don't work under Ubuntu 10.04. (Xnee 3.02 gives "Xnee failed due to bad data received from RECORD extension", a known issue for which I haven't found a solution, and xmacro just plain doesn't work...) Are there any other methods I could use besides these two programs? Thanks
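    One possible alternative is scripting the input directly rather than recording it. Below is a minimal sketch using xdotool (packaged for Ubuntu); the coordinates and timings are placeholders you would adapt to the game in question:

        #!/bin/bash
        # Sketch: replay a fixed click/keypress sequence with xdotool.
        # Install first: sudo apt-get install xdotool
        while true; do
            xdotool mousemove 640 480       # move to (640,480) - placeholder coords
            xdotool click 1                 # left-click
            sleep 2                         # give the game time to react
            xdotool key space               # press the space bar
            sleep 5
        done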

    Read the article

  • Saving table yields "Record is too large" in Access

    - by C. Ross
    I have an Access database that I gave to a user (shame on my head). They were having trouble with some data being too long, so I suggested changing several text fields to memo fields. I tried this in my copy and it worked perfectly, but when the user tries it they get a "Record is too large" message box on saving the modified table design. Obviously the same record is not too large in my database, so why would it be in theirs?

    Read the article

  • Popup Details for a Table Record

    - by shay.shmeltzer
    This one started as an OTN how-to question that seemed like something that should work automatically - turns out you need a couple of small tweaks to get it working. The idea is to have a table on a page showing multiple records; you can click any row in the table and get a pop-up window that shows more data about that row. At first I thought I'd just need to drag the same view twice to the page - once as a table and then as a form in a pop-up. But the form didn't reflect the new row that got selected in the table - you'd always see the first row you selected. Adding a partial page rendering between the table and the pop-up didn't do the trick either. Then I realized that the content delivery attribute of the pop-up was set to lazy; when I switched it to immediate, everything worked. Here is a little demo showing the whole development process. Note that the content delivery attribute is also something you might want to check if you see your tables being refreshed too often when you scroll through records, for example.
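    For reference, the attribute in question lives on the af:popup component. A hedged sketch of what the fixed page fragment might look like (the ids and dialog contents are assumptions, not the demo's actual markup):

        <!-- popup refetches its content each time it is shown -->
        <af:popup id="detailPopup" contentDelivery="immediate">
          <af:dialog title="Row Details">
            <!-- form bound to the same view object as the table goes here -->
          </af:dialog>
        </af:popup>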

    Read the article

  • World Record Oracle Business Intelligence Benchmark on SPARC T4-4

    - by Brian
    Oracle's SPARC T4-4 server configured with four SPARC T4 3.0 GHz processors delivered the first and best performance of 25,000 concurrent users on the Oracle Business Intelligence Enterprise Edition (BI EE) 11g benchmark, using Oracle Database 11g Release 2 running on Oracle Solaris 10.

    - A SPARC T4-4 server running Oracle Business Intelligence Enterprise Edition 11g achieved 25,000 concurrent users with an average response time of 0.36 seconds with the Oracle BI server cache set to ON.
    - The benchmark data clearly shows that the underlying hardware, the SPARC T4 server, and the Oracle BI EE 11g (11.1.1.6.0 64-bit) platform scale within a single system, supporting 25,000 concurrent users while executing 415 transactions/sec.
    - The benchmark demonstrated the scalability of Oracle Business Intelligence Enterprise Edition 11g 11.1.1.6.0, which was deployed in a vertical scale-out fashion on a single SPARC T4-4 server.
    - Oracle Internet Directory configured on the SPARC T4 server provided authentication for the 25,000 Oracle BI EE users with sub-second response time.
    - A SPARC T4-4 with internal Solid State Drives (SSD) using the ZFS file system showed significant I/O performance improvement over traditional disk for the Web Catalog activity. In addition, ZFS helped get past the UFS limitation of 32,767 sub-directories in a Web Catalog directory.
    - The multi-threaded 64-bit Oracle Business Intelligence Enterprise Edition 11g and the SPARC T4-4 server proved to be a successful combination, providing sub-second response times for end-user transactions and consuming only half of the available CPU resources at 25,000 concurrent users, leaving plenty of headroom for increased load.
    - The Oracle Business Intelligence on SPARC T4-4 server benchmark results demonstrate that comprehensive BI functionality built on a unified infrastructure with a unified business model yields best-in-class scalability, reliability, and performance.
    - Oracle BI EE 11g is a newer version of the Business Intelligence Suite with richer and superior functionality. Results produced with the Oracle BI EE 11g benchmark are not comparable to results with the Oracle BI EE 10g benchmark. Oracle BI EE 11g is a more difficult benchmark to run, exercising more features of Oracle BI.

    Performance Landscape

    Results for the Oracle BI EE 11g version of the benchmark. Results are not comparable to the Oracle BI EE 10g version of the benchmark.

        Oracle BI EE 11g Benchmark
        System                                     Number of Users   Response Time (sec)
        1 x SPARC T4-4 (4 x SPARC T4 3.0 GHz)      25,000            0.36

    Results for the Oracle BI EE 10g version of the benchmark. Results are not comparable to the Oracle BI EE 11g version of the benchmark.

        Oracle BI EE 10g Benchmark
        System                                     Number of Users
        2 x SPARC T5440 (4 x SPARC T2+ 1.6 GHz)    50,000
        1 x SPARC T5440 (4 x SPARC T2+ 1.6 GHz)    28,000

    Configuration Summary

    Hardware Configuration:
        SPARC T4-4 server
            4 x SPARC T4 processors, 3.0 GHz
            128 GB memory
            4 x 300 GB internal SSD

    Storage Configuration:
        Sun ZFS Storage 7120
            16 x 146 GB disks

    Software Configuration:
        Oracle Solaris 10 8/11
        Oracle Solaris Studio 12.1
        Oracle Business Intelligence Enterprise Edition 11g (11.1.1.6.0)
        Oracle WebLogic Server 10.3.5
        Oracle Internet Directory 11.1.1.6.0
        Oracle Database 11g Release 2

    Benchmark Description

    Oracle Business Intelligence Enterprise Edition (Oracle BI EE) delivers a robust set of reporting, ad-hoc query and analysis, OLAP, dashboard, and scorecard functionality with a rich end-user experience that includes visualization, collaboration, and more.

    The Oracle BI EE benchmark test used five different business user roles: Marketing Executive, Sales Representative, Sales Manager, Sales Vice-President, and Service Manager. These roles included a maximum of 5 different pre-built dashboards. Each dashboard page had an average of 5 reports in the form of a mix of charts, tables and pivot tables, returning anywhere from 50 rows to approximately 500 rows of aggregated data. The test scenario also included drill-down into multiple levels from a table or chart within a dashboard.

    The benchmark test scenario uses a typical business user sequence of dashboard navigation, report viewing, and drill down. For example, a Service Manager logs into the system and navigates to his own set of dashboards using the Service Manager role. The BI user selects the Service Effectiveness dashboard, which shows him four distinct reports - Service Request Trend, First Time Fix Rate, Activity Problem Areas, and Cost Per Completed Service Call - spanning 2002 to 2005. The user then proceeds to view the Customer Satisfaction dashboard, which also contains a set of 4 related reports, and drills down on some of the reports to see the detail data. The BI user continues to view more dashboards - Customer Satisfaction and Service Request Overview, for example. After navigating through those dashboards, the user logs out of the application.

    The benchmark test is executed against a full production version of the Oracle Business Intelligence 11g Applications with a fully populated underlying database schema. The business processes in the test scenario closely represent a real-world customer scenario.

    See Also
        SPARC T4-4 Server - oracle.com, OTN
        Oracle Business Intelligence - oracle.com, OTN
        Oracle Database 11g Release 2 Enterprise Edition - oracle.com, OTN
        WebLogic Suite - oracle.com, OTN
        Oracle Solaris - oracle.com, OTN

    Disclosure Statement

    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 30 September 2012.

    Read the article

  • Web Server Scripting Hack to Maintain State and Keep a Domain Cookieless

    - by jasonspalace
    Hello, I am looking for a solution on a LAMP server to keep a site cookieless, such as "example.com", where static content is served from "static.example.com", and with rules in place to rewrite requests for "www.example.com" to "example.com". I am really hoping to avoid setting up a separate cookieless domain for the static content, due to an unanswered SEO concern with regard to CNAMEing to a CDN. Is there a way (or safe hack) that can be implemented where a second domain such as "www.example2.com" is CNAMEd, aliased, or otherwise used with "example.com" to somehow trick a PHP application into maintaining state with a cookie dropped on "www.example2.com", therefore keeping all of "example.com" cookieless? If such a solution is feasible, what implications would exist with regard to SSL and cross-browser compatibility, other than requiring users to accept cookies from 3rd-party domains and possibly needing an additional SSL certificate to keep the cookie secure? Thanks in advance to all.
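    For the www-to-bare-domain rewrite portion specifically, a minimal mod_rewrite sketch (assuming Apache with mod_rewrite enabled; the cookie trick itself would still have to live in the PHP session handling):

        # canonicalize www.example.com to example.com with a 301
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]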

    Read the article

  • Azure cloud app subdomain pointing to actual domain

    - by Amit Aggarwal
    Say we have a domain xyz.com registered with some registrar. We pointed that domain to the name servers of our dedicated server, where the DNS for that domain is hosted. Now, we just want that dedicated server to keep hosting the incoming email, while the domain points to abc.cloudapp.net (an Azure cloud app; they don't provide any static IP, only a public URL). Could someone please help me edit/create the DNS zone file on our dedicated server to make sure things work properly? If possible, paste here the minimum settings we need in the zone file to make sure mail stays on the dedicated server and the app runs on the cloud... Thanks, Amit
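    A hedged sketch of what such a zone file might contain (the IP below is a placeholder for the dedicated server's address; note that the zone apex cannot be a CNAME, which is why only www points at the Azure URL):

        ; placeholder zone fragment for xyz.com
        xyz.com.        IN  A      203.0.113.10          ; dedicated server (placeholder IP)
        xyz.com.        IN  MX 10  mail.xyz.com.         ; mail stays on the dedicated box
        mail.xyz.com.   IN  A      203.0.113.10
        www.xyz.com.    IN  CNAME  abc.cloudapp.net.     ; Azure app, no static IP needed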

    Read the article

  • SPARC T4-2 Produces World Record Oracle Essbase Aggregate Storage Benchmark Result

    - by Brian
    Significance of Results

    Oracle's SPARC T4-2 server configured with a Sun Storage F5100 Flash Array and running Oracle Solaris 10 with Oracle Database 11g has achieved exceptional performance for the Oracle Essbase Aggregate Storage Option benchmark. The benchmark has upwards of 1 billion records, 15 dimensions and millions of members. Oracle Essbase is a multi-dimensional online analytical processing (OLAP) server and is well suited to SPARC T4 servers.

    - The SPARC T4-2 server (2 CPUs) running Oracle Essbase 11.1.2.2.100 outperformed the previously published results on Oracle's SPARC Enterprise M5000 server (4 CPUs) with Oracle Essbase 11.1.1.3 on Oracle Solaris 10 by 80%, 32% and 2x on Data Loading, Default Aggregation and Usage Based Aggregation, respectively.
    - The SPARC T4-2 server with the Sun Storage F5100 Flash Array and Oracle Essbase running on Oracle Solaris 10 achieves sub-second query response times for 20,000 users in a 15-dimension database.
    - The SPARC T4-2 server configured with Oracle Essbase was able to aggregate and store values in the database for a 15-dimension cube in 398 minutes with 16 threads and in 484 minutes with 8 threads.
    - The Sun Storage F5100 Flash Array provides more than a 20% improvement out-of-the-box compared to a mid-size fiber channel disk array for default aggregation and user-based aggregation.
    - The Sun Storage F5100 Flash Array with Oracle Essbase provides the best combination for large Oracle Essbase databases, leveraging Oracle Solaris ZFS and taking advantage of high bandwidth for faster load and aggregation.
    - Oracle Fusion Middleware provides a family of complete, integrated, hot-pluggable and best-of-breed products known for enabling enterprise customers to create and run agile and intelligent business applications. Oracle Essbase's performance demonstrates why so many customers rely on Oracle Fusion Middleware as their foundation for innovation.

    Performance Landscape

        System                                 Data Size             Database Load   Default Aggregation   Usage Based Aggregation
                                               (millions of items)   (minutes)       (minutes)             (minutes)
        SPARC T4-2, 2 x SPARC T4 2.85 GHz      1000                  149             398*                  55
        Sun M5000, 4 x SPARC64 VII 2.53 GHz    1000                  269             526                   115
        Sun M5000, 4 x SPARC64 VII 2.4 GHz     400                   120             448                   18

        * - 398 minutes with CALCPARALLEL set to 16; 484 minutes with CALCPARALLEL set to 8

    Configuration Summary

    Hardware Configuration:
        1 x SPARC T4-2
            2 x 2.85 GHz SPARC T4 processors
            128 GB memory
            2 x 300 GB 10000 RPM SAS internal disks

    Storage Configuration:
        1 x Sun Storage F5100 Flash Array
            40 x 24 GB flash modules
            SAS HBA with 2 SAS channels
            Data storage scheme: striped (RAID 0)
            Oracle Solaris ZFS

    Software Configuration:
        Oracle Solaris 10 8/11
        Installer V 11.1.2.2.100
        Oracle Essbase Client v 11.1.2.2.100
        Oracle Essbase v 11.1.2.2.100
        Oracle Essbase Administration Services 64-bit
        Oracle Database 11g Release 2 (11.2.0.3)
        HP's Mercury Interactive QuickTest Professional 9.5.0

    Benchmark Description

    The objective of the Oracle Essbase Aggregate Storage Option benchmark is to showcase the ability of Oracle Essbase to scale in terms of user population and data volume for large enterprise deployments. Typical administrative and end-user operations for OLAP applications were simulated to produce benchmark results. The benchmark test results include:

    - Database Load: time elapsed to build a database, including outline and data load.
    - Default Aggregation: time elapsed to build the aggregation.
    - Usage Based Aggregation: time elapsed to build the aggregate views proposed as a result of tracked retrieval queries.

    Summary of the data used for this benchmark:

    - 40 flat files, each of size 1.2 GB, 49.4 GB in total
    - 10 million rows per file, 1 billion rows total
    - 28 columns of data per row
    - Database outline has 15 dimensions (five of them are attribute dimensions)
    - Customer dimension has 13.3 million members
    - 3 rule files

    Key Points and Best Practices

    - The Sun Storage F5100 Flash Array was used to accelerate application performance.
    - Setting the data load threads (DLTHREADSPREPARE) to 64 and the load buffer to 6 improved data loading by about 9%.
    - The factors influencing aggregation materialization performance are the aggregate storage cache and the number of threads (CALCPARALLEL) for parallel view materialization. The optimal values for this workload on the SPARC T4-2 server were: Aggregate Storage Cache: 32 GB; CALCPARALLEL: 16.

    See Also
        Oracle Essbase Aggregate Storage Option Benchmark on Oracle's SPARC T4-2 Server - oracle.com
        Oracle Essbase - oracle.com, OTN
        SPARC T4-2 Server - oracle.com, OTN
        Oracle Solaris - oracle.com, OTN
        Oracle Database 11g Release 2 Enterprise Edition - oracle.com, OTN

    Disclosure Statement

    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 28 August 2012.
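    The CALCPARALLEL and DLTHREADSPREPARE settings called out above live in essbase.cfg; a hedged sketch of what that tuning might look like (treat these lines as an assumption based on the parameter names given in the post, not as the published benchmark configuration; as written, they apply server-wide rather than per application):

        CALCPARALLEL 16
        DLTHREADSPREPARE 64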

    Read the article

  • Record management system Java web framework

    - by Kamil Tomšík
    We're currently reconsidering technologies and frameworks to get more agile with "simple" RMS CRUD-based projects - in short, short-lived things like this. Right now we have a custom extension on top of SmartGWT, but over time it has proven not to be flexible enough. I also personally dislike the Java-to-JS compilation process and the whole GWT codebase. Not only is the design ugly, it also makes certain low-level JS things very complicated, if not completely impossible. So what I'm looking for is:
    - as close to the web as possible, like JSF or possibly Tapestry; it is very important to be able to get "low" and weave the framework if necessary - that happens more often than we thought
    - datagrid capable - Ext.js & PrimeFaces look pretty good, and Vaadin does too
    - DB-schema generators (optional, in whatever form)
    If it were only up to me, I'd probably stick to Ext.js + a custom REST-based Java solution, possibly generated from the database schema (not sure about concrete tooling yet). I only have experience with vanilla Ext.js, vanilla GWT and JSF 2.0 / Seam, so it is hard for me to judge or even propose other frameworks. What would be your proposition? What are the problems you've faced? What was your solution, and how hard do you think it was to deal with them in the "big picture"?

    Read the article

  • How to extract records from a text file on a string match using bash

    - by private
    Hi, I have a text file sample.txt:

        =====record1
        title:javabook
        price:$120
        author:john
        path:d:
        =====record2
        title:.netbook
        author:paul
        path:f:
        =====record3
        author:john
        title:phpbook
        subject:php
        path:f:
        price:$150
        =====record4
        title:phpbook
        subject:php
        path:f:
        price:$150

    From this I want to split the data based on author. It should split into 2 files: test1.txt containing

        =====record1
        title:javabook
        price:$120
        author:john
        path:d:
        =====record3
        author:john
        title:phpbook
        subject:php
        path:f:
        price:$150

    and test2.txt containing

        =====record2
        title:.netbook
        author:paul
        path:f:

    Like the above, I want to classify the main sample.txt file into sub-files based on the author field dynamically. Please suggest me a way to do it.
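    One approach that matches this record layout: buffer each record and flush it to a per-author file. A sketch (the output names differ from test1.txt/test2.txt - files are keyed by the author value instead, and records with no author field land in author_unknown.txt):

        awk '
          function flush() {
              if (buf == "") return
              author = "unknown"
              n = split(buf, l, "\n")
              for (i = 1; i <= n; i++)
                  if (l[i] ~ /^author:/) author = substr(l[i], 8)
              print buf >> ("author_" author ".txt")
              buf = ""
          }
          /^=====record/ { flush() }                      # new record: write out the old one
          { buf = (buf == "" ? $0 : buf "\n" $0) }        # accumulate current record
          END { flush() }                                 # write out the last record
        ' sample.txt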

    Read the article

  • How to record sound via a headphone in Audacity

    - by agha rehan abbas
    I have tried recording sound via my headphone but I can't get it done. Not only that, but while using Skype my voice is not audible to the person I'm talking to, though I can hear their voice. I think it is an issue of some simple settings, so can anyone help me get it done? I have connected my headphone to my PC but I can't see it in the list of recordable devices. Have a look at it - is this an issue of incompatible headphones?

    Read the article

  • Advanced Record-Level Business Intelligence with Inner Queries

    - by gt0084e1
    While business intelligence is generally applied at an aggregate level to large data sets, it's often useful to provide a more streamlined insight into individual records, or to be able to sort and rank them. For instance, a salesperson looking at a specific customer could benefit from basic stats on that account. A marketer trying to define an ideal customer could pull the top entries and look for insights or patterns. Inner queries let you do sophisticated analysis without the overhead of traditional BI or OLAP technologies like Analysis Services.

    Example - Order History Constancy

    Let's assume that management has realized that the best thing for our business is to have customers ordering every month. We'll need to identify and rank customers based on how consistently they buy and when their last purchase was, so sales & marketing can respond accordingly. Our current application may not be able to provide this, and adding an OLAP server like SSAS may be overkill for our needs. Luckily, SQL Server provides the ability to do relatively sophisticated analytics via inner queries. Here's the kind of output we'd like to see.

    Creating the Queries

    Before you create a view, you need to create the SQL query that does the calculations. Here we are calculating the total number of orders as well as the number of months since the last order. These fields might be very useful to sort by but may not be available in the app. This approach provides a very streamlined and high-performance method of delivering actionable information without radically changing the application. It also works very well with self-service reporting tools like Izenda.

        SELECT CustomerID, CompanyName,
            (SELECT COUNT(OrderID) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
            DATEDIFF(mm,
                (SELECT MAX(OrderDate) FROM Orders
                 WHERE Orders.CustomerID = Customers.CustomerID),
                GETDATE()) AS MonthsSinceLastOrder
        FROM Customers

    Creating Views

    To turn this or any query into a view, just put CREATE VIEW ViewName AS before it. If you want to change it, use ALTER VIEW ViewName AS.

    Creating Computed Columns

    If you'd prefer not to create a view, inner queries can also be applied by using computed columns. Place your SQL in the (Formula) field of the Computed Column Specification, or check out this article here.

    Advanced Scoring and Ranking

    One of the best uses for this approach is to score leads based on multiple fields. For instance, you may be in a business where customers that don't order every month require more persistent follow-up. You could devise a simple formula that shows the continuity of an account. If they ordered every month since their first order, they would be at 100, indicating that they have been ordering 100% of the time. Here's the query that would calculate that. It uses a few SQL tricks to make this happen: we are extracting the count of unique months and then dividing by the months since the initial order. This query will give you the following information, which can be used to help sales and marketing know where to focus. You could sort by this percentage to know where to start calling or to find patterns describing your best customers.

    - Number of orders
    - First order date
    - Last order date
    - Percentage of months in which an order was placed since the first order

        SELECT CustomerID,
            (SELECT COUNT(OrderID) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
            (SELECT MAX(OrderDate) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS LastOrder,
            (SELECT MIN(OrderDate) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS FirstOrder,
            DATEDIFF(mm,
                (SELECT MIN(OrderDate) FROM Orders
                 WHERE Orders.CustomerID = Customers.CustomerID),
                GETDATE()) AS MonthsSinceFirstOrder,
            100 * (SELECT COUNT(DISTINCT 100 * DATEPART(yy, OrderDate) + DATEPART(mm, OrderDate))
                   FROM Orders
                   WHERE Orders.CustomerID = Customers.CustomerID)
                / DATEDIFF(mm,
                    (SELECT MIN(OrderDate) FROM Orders
                     WHERE Orders.CustomerID = Customers.CustomerID),
                    GETDATE()) AS OrderPercent
        FROM Customers
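    As a hedged illustration of the "Creating Views" step above (the view name is an assumption), wrapping the first query looks like this:

        -- hypothetical view name; the body is the first query from above
        CREATE VIEW CustomerOrderStats AS
        SELECT CustomerID, CompanyName,
            (SELECT COUNT(OrderID) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
            DATEDIFF(mm,
                (SELECT MAX(OrderDate) FROM Orders
                 WHERE Orders.CustomerID = Customers.CustomerID),
                GETDATE()) AS MonthsSinceLastOrder
        FROM Customers
        GO

        -- the view can then be sorted and filtered like a table
        SELECT * FROM CustomerOrderStats ORDER BY MonthsSinceLastOrder DESC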

    Read the article

  • Swiss Re increases data warehouse performance and deploys in record time

    - by KLaker
    Great information on yet another data warehouse deployment on Exadata. A little background on Swiss Re: in 2002, Swiss Re established a data warehouse for its client markets and products to gather reinsurance information across all organizational units into an integrated structure. The data warehouse provided the basis for reporting at the group level with drill-down capability to individual contracts, while facilitating application integration and data exchange by using common data standards. Initially focusing on property and casualty reinsurance information only, it now includes life and health reinsurance, insurance, and nonlife insurance information.

    Key highlights of the benefits that Swiss Re achieved by using Exadata:

    - Reduced the time to feed the data warehouse and generate data marts by 58%
    - Reduced average runtime by 24% for standard reports, comfortably loading two data warehouse refreshes per day with incremental feeds
    - Freed up technical experts by significantly minimizing time spent on tuning activities

    Most importantly, this was one of the fastest project deployments in Swiss Re's history. They went from installation to production in just four months! What is truly surprising is that it took only two weeks between power-on and testing the machine with full data volumes! Business teams at Swiss Re are now able to fully exploit up-to-date analytics across property, casualty, life, health insurance, and reinsurance lines to identify successful products.

    These points are highlighted in the following quotes from Dr. Stephan Gutzwiller, Head of Data Warehouse Services at Swiss Re:

    "We were operating a complete Oracle stack, including servers, storage area network, operating systems, and databases, that was well optimized and delivered very good performance over an extended period of time. When a hardware replacement was scheduled for 2012, Oracle Exadata was a natural choice, and the performance increase was impressive. It enabled us to deliver analytics to our internal customers faster, without hiring more IT staff."

    "The high-quality data that is readily available with Oracle Exadata gives us the insight and agility we need to cater to client needs. We also can continue re-engineering to keep up with the increasing demand without having to grow the organization. This combination creates excellent business value."

    Our full press release is available here: http://www.oracle.com/us/corporate/customers/customersearch/swiss-re-1-exadata-ss-2050409.html. If you want more information about how Exadata can increase the performance of your data warehouse, visit our home page: http://www.oracle.com/us/products/database/exadata-database-machine/overview/index.html

    Read the article

  • Microsoft Delivers Record April Patch

    April marks another historic Patch Tuesday, with 11 security bulletins rolled out today by Microsoft.

    Read the article

  • What pattern will solve this - fetching dependent records from the database

    - by tunmise fasipe
    I have these classes:

        class Match {
            int MatchID;
            int TeamID;     // used to reference Team
            // ... other fields
        }

    Note: a Match actually has 2 teams, which means 2 TeamIDs.

        class Team {
            int TeamID;
            string TeamName;
        }

    In my view I need to display a List<Match> showing the TeamName, so I added another field:

        class Match {
            int MatchID;
            int TeamID;     // used to reference Team
            // ... other fields
            string TeamName;
        }

    I can now do:

        Match m = getMatch(id);
        m.TeamName = getTeamName(m.TeamID);   // get name from database

    But for a List<Match>, getTeamName(TeamID) will go to the database to fetch the TeamName for each TeamID. For a page of 10 matches per page, that could be (10 x 2 teams) = 20 trips to the database. To avoid this, I had the idea of loading everything once, storing it in memory, and only looking up the TeamName in memory. This made me rethink: what if there are 5,000 or more records? What pattern is used to solve this, and how?
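    This is the classic N+1 query problem; the usual fix is a set-based lookup - fetch the needed teams in one query, then join in memory. A minimal sketch (context names like db.Teams and the matches collection are assumptions):

        // one round trip for all teams, then O(1) lookups per match
        var teamNames = db.Teams.ToDictionary(t => t.TeamID, t => t.TeamName);

        foreach (var m in matches)
        {
            m.TeamName = teamNames[m.TeamID];   // no per-row database call
        }

    If the Team table itself is large, fetch only the IDs present on the current page (e.g. filter with Contains on the collected TeamIDs) instead of loading the whole table.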

    Read the article

  • DNS: Forward domain to another host

    - by normmcgarry
    I was hoping an expert on here could quickly answer my question. I don't know much about DNS, so bear with me. I have a domain that is hosted with XO Communications. I want to host that domain at another web host, but I want to keep the mail at XO, so I'd like to keep the DNS managed by XO. What do I need to change in the DNS to switch the website to the other host but leave the mail unchanged? Thank you so much.
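    In outline: repoint the A records for the web names at the new host and leave the MX records alone. A hedged sketch with placeholder IPs (the real values come from the new host and from XO's existing records):

        ; web traffic moves to the new host
        example.com.       IN  A   198.51.100.20    ; new web host (placeholder IP)
        www.example.com.   IN  A   198.51.100.20
        ; mail stays exactly as XO has it today
        example.com.       IN  MX  10 mail.example.com.
        mail.example.com.  IN  A   203.0.113.5      ; existing XO mail server (placeholder IP)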

    Read the article

  • Customers Go On Record About Oracle ERP and HCM Cloud Services

    - by Kathryn Perry
    Listen to these Oracle customers from Red Robin, Herbalife, LendingClub, and Cricket talk about how they're using Oracle ERP and HCM Cloud Services. Collectively, they're driving cost savings, managing global, fast-paced growth, automating processes, implementing quickly in the cloud, and much more. Here's the video link: http://www.youtube.com/user/FusionAppsAtOracle

    Read the article

  • Best video recording & mixing software for Ubuntu

    - by ???? No
    I'm searching for quality software for recording video streams and mixing 3 cameras' streams and photos. I also need it for online streaming on a website. It can be commercial software; it doesn't have to be open source or free. I just don't have a clue if something like this exists. Thanks in advance. P.S. It's for Ubuntu 12.04. P.P.S. Maybe my definition is not correct or full, so I have to add: I need the program for live broadcast and recording on the computer at the same time.

    Read the article

  • [LINQ] Master-Detail Same Record (II)

    - by JTorrecilla
    In my previous post I introduced my problem, but I didn't explain the problem with Entity Framework. When you try the solution indicated, you will get the following error:

        LINQ to Entities does not recognize the method 'System.String Join(System.String, System.Collections.Generic.IEnumerable`1[System.String])' method, and this method cannot be translated into a store expression.

    The query that produces that error was:

        var consulta = (from TCabecera cab in contexto_local.TCabecera
                        let Detalle = (from TDetalle detalle in cab.TDetalle
                                       select detalle.Nombre)
                        let Nombres = string.Join(",", Detalle)
                        select new
                        {
                            cab.Campo1,
                            cab.Campo2,
                            Nombres
                        }).ToList();
        grid.DataSource = consulta;

    Why does this error happen? It happens when the query cannot be translated into T-SQL.

    Solutions? To avoid that error, we need to execute the query in 2 steps:

        var consulta = (from TCabecera cab in contexto_local.TCabecera
                        let Detalle = (from TDetalle detalle in cab.TDetalle
                                       select detalle.Nombre)
                        select new
                        {
                            cab.Campo1,
                            cab.Campo2,
                            Detalle
                        }).ToList();
        var consulta2 = from dato in consulta
                        let Nombres = string.Join(",", dato.Detalle)
                        select new
                        {
                            dato.Campo1,
                            dato.Campo2,
                            Nombres
                        };
        grid.DataSource = consulta2.ToList();

    Curiously, this problem happens with Entity Framework, but it can't be reproduced on LINQ to SQL, where the same query works fine in one single step. Hope it's helpful. Best regards

    Read the article

  • [LINQ] Master-Detail Same Record (I)

    - by JTorrecilla
    PROBLEM

    Firstly, I am working on a project based on LINQ, EF, and C# with VS2010. The following tables show what I have and what I want to display.

    Header:

        C1   C2   C3
        1    P1   01/01/2011
        2    P2   01/02/2011

    Details:

        1    1    D1
        2    1    D2
        3    1    D3
        4    2    D1
        5    2    D4

    Expected results:

        1    P1   01/01/2011   D1, D2, D3
        2    P2   01/02/2011   D1, D4

    IDEAS

    At the beginning I saw 3 possible ways:

    - Doing it inside the DB: it could be achieved with a CURSOR in a stored procedure.
    - Doing it from .NET with loops.
    - Doing it with LINQ (I love it!!)

    FIRST APPROX

    An example with a simple class holding a list. With an Employee class that acts as the Header table:

        public class Employee
        {
            public Employee() { }
            public Int32 ID { get; set; }
            public String FirstName { get; set; }
            public String LastName { get; set; }
            public List<string> Numbers { get; set; }   // acts as the Details table
        }

    We can show all numbers contained by each Employee:

        List<Employee> listado = new List<Employee>();
        // fill listado
        var query = from Employee em in listado
                    let Nums = string.Join(";", em.Numbers)
                    select new
                    {
                        em.FirstName,
                        Nums
                    };

    The "let" operator allows us to host the result of the "Join" over the Numbers list of the Employee class. A little example. ASAP I will post the second part, achieving the same with Entity Framework. Best regards

    Read the article

  • What can be considered too high or too low volume?

    - by josinalvo
    I've asked a question about what audio volume to use when recording: recording audio: What is the best volume setting? There, I learned that:

    - I should avoid too high a volume, to prevent clipping
    - I should avoid too low a volume, to prevent loss of resolution

    The question now is: what is too high a volume? What is too low? I am setting the volume via the GUI for sound configuration. It has an unamplified setting, a 100% setting, and volumes beyond 100%. Beyond 100%, is there still resolution loss? How can I tell if there is clipping going on (given that my recording program is the non-GUI ffmpeg)?
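    For the clipping check specifically, one hedged approach: run the finished recording through ffmpeg's volumedetect filter and look at the reported peak; a max_volume pinned at 0.0 dB is a strong hint that the signal clipped (assuming your ffmpeg build includes the filter):

        # analyze peak and mean levels of an existing recording
        ffmpeg -i recording.wav -af volumedetect -f null /dev/null
        # in the output, look for lines like:
        #   [Parsed_volumedetect_0 @ ...] mean_volume: -23.4 dB
        #   [Parsed_volumedetect_0 @ ...] max_volume: -0.0 dB   <- likely clipping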

    Read the article
