Search Results

Search found 14416 results on 577 pages for 'standard reports'.


  • Why isn't int pow(int base, int exponent) in the standard C++ libraries?

    - by Dan O
    I feel like I must just be unable to find it. Is there any reason that the C++ pow function does not implement the "power" function for anything except floats and doubles? I know the implementation is trivial; I just feel like I'm doing work that should be in a standard library. A robust power function (i.e., one that handles overflow in some consistent, explicit way) is not fun to write.
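
    A minimal sketch of such a function, using exponentiation by squaring with an explicit overflow policy (shown here in C#, the language most items on this page use; the same logic transliterates to C++, where the overflow check must be written by hand):

        using System;

        static class IntegerMath
        {
            // Exponentiation by squaring; the checked block makes overflow
            // explicit by throwing OverflowException instead of silently wrapping.
            public static int IntPow(int value, int exponent)
            {
                if (exponent < 0)
                    throw new ArgumentOutOfRangeException(nameof(exponent));
                int result = 1;
                checked
                {
                    while (exponent > 0)
                    {
                        if ((exponent & 1) == 1)
                            result *= value;
                        exponent >>= 1;
                        if (exponent > 0)
                            value *= value;   // skip the final, unneeded squaring
                    }
                }
                return result;
            }
        }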


  • What is a typical scenario for an end-user reports design?

    - by Sebastian
    Hello! I'm wondering what the typical scenario would be for using an end-user report designer. What I'm thinking of is to have a base report with all the columns that I can offer, along with a basic view of the report (formatting, order of columns, etc.), and then let the user change that format and order, remove or add data (from the available columns), and so on. Is that a common way to address what is called an end-user designer for reports, or am I off track? I know it depends on the user (whether it's someone who can handle SQL, for example), but is it common to have a scenario where the user can build everything from the SQL query to the formatting? Thanks! Sebastian
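
    One way to picture that base-report-plus-customization split is as a small model: the base report fixes which columns exist, and the user's layout records order, visibility, and formatting on top of it. A hypothetical C# sketch (all names invented for illustration):

        using System.Collections.Generic;

        // The base report defines the available columns; a user layout is a
        // reorderable, formattable selection over them.
        public class ReportColumn
        {
            public string Name { get; set; }
            public string Format { get; set; }   // e.g. "dd.MM.yyyy", "#,##0.00"
            public bool Visible { get; set; }
            public int Order { get; set; }
        }

        public class UserReportLayout
        {
            public string BaseReportId { get; set; }          // fixed columns/query
            public List<ReportColumn> Columns { get; set; }   // the user's choices
        }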


  • Calculation with dates and different locales in Crystal Reports for Eclipse?

    - by Bevor
    Hello, I'm using Crystal Reports for Eclipse 2.0.4 and I have a problem. I use a formula in a report to subtract one day from a string which is a date: ToText(CDate({Agreement.EndDate})-1, "dd.MM.yyyy"); This works for the German locale. With an English locale, the calculation is completely wrong because the day and month are interchanged. For example: when {Agreement.EndDate} is 07.05.2010 and I subtract one day from it, I get 06.04.2010 with the German locale but 04.07.2010 with an English locale. How can I solve this so that it works for different locales?
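
    The root cause is that CDate parses the string according to the machine's locale, so "07.05.2010" flips between May 7th and July 5th. The locale-safe fix is to parse with an explicit format instead of relying on locale defaults; here is the idea illustrated in C# (in Crystal syntax the equivalent is to split the string and build the date from explicit day/month/year parts):

        using System;
        using System.Globalization;

        class LocaleSafeDate
        {
            static void Main()
            {
                // Parse with a fixed format so the result never depends on the
                // machine's locale, then subtract one day and format it back.
                string endDate = "07.05.2010";   // {Agreement.EndDate}
                DateTime parsed = DateTime.ParseExact(
                    endDate, "dd.MM.yyyy", CultureInfo.InvariantCulture);
                Console.WriteLine(parsed.AddDays(-1).ToString(
                    "dd.MM.yyyy", CultureInfo.InvariantCulture)); // 06.05.2010
            }
        }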


  • HPCM 11.1.2.2.x - HPCM Standard Costing Generating >99 Calc Scripts

    - by Jane Story
    HPCM Standard Profitability calculation scripts are named according to a documented naming convention. From 11.1.2.2.x, the script name = a script suffix (1 letter) + POV identifier (3 digits) + stage order number (1 digit) + "_" + index (2 digits); please see the documentation for more information (http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_admin/apes01.html). This naming convention results in a name 8 characters in length, i.e. the maximum number of characters permitted for calculation script names in non-Unicode Essbase BSO databases. The index in the name indicates the number of scripts per stage.

    In the vast majority of cases, the number of scripts generated per stage will be significantly less than 100, and there will be no issue. However, in some cases the number of scripts generated can exceed 99. It is unusual for an application to generate more than 99 calculation scripts for one stage; this may indicate that explicit assignments are being used extensively. The design should be assessed to see whether assignment rules can be used instead. Assignment rules reduce the number of calculation script lines needed, which in turn reduces the number of calculation scripts generated.

    When the count reaches 100, the name of the 100th calculation script is one character longer than that of the 99th: it grows from 8 characters to 9 (e.g. A6811_100 rather than A6811_99). A 9-character name is not permitted in non-Unicode applications; it is "too long". When this occurs, an error shows in hpcm.log as "Error processing calculation scripts" and "Unexpected error in business logic". Further down the log, it is possible to see that this is "Caused by: Error copying object" and "Caused by: com.essbase.api.base.EssException: Cannot put olap file object ... object name_[<calc script name> e.g. A6811_100] too long for non-unicode mode application". The error file gives the name of the calculation script causing the issue; in my example this is A6811_100, which you can see is 9 characters long.

    It is not possible to increase the number of characters allowed in a calculation script name. However, it is possible to increase the size of each calculation script. The default for an HPCM application, set in the preferences, is 4 MB. If each calculation script is larger, fewer scripts are generated, so fewer than 100 scripts per stage are needed and every name stays 8 characters long. To increase the size of the generated calculation scripts for an application, find the row in the HPM_APPLICATION_PREFERENCE table for the application where HPM_PREFERENCE_NAME_ID=20. The default value in this row is 4194304 (4 MB); increasing it to e.g. 7340032 raises the limit to 7 MB. Please restart the profitability service after making the change.


  • Log shipping of BizTalk databases on SQL Server 2008 Standard Edition

    - by Manjot
    Hi, I want to set up log shipping for the BizTalk databases from one SQL Server 2008 Standard Edition instance (server A) to another SQL Server 2008 Standard Edition instance (server B). I was told that for BizTalk, log shipping is not like standard log shipping. I was able to find 2 links: http://msdn.microsoft.com/en-us/library/cc296836%28v=BTS.10%29.aspx http://msdn.microsoft.com/en-us/library/cc296741%28v=BTS.10%29.aspx but they are not talking about SQL Server 2008. Can anyone please help with this? Thanks in advance


  • Where can I find a description of the old British Standard structured flow charts?

    - by Steve314
    Some professional organisation defined these in, IIRC, the early 80s as similar to the better-known flow charts, but "structured". Instead of having arbitrary "goto" arrows, they had the equivalent of loops etc. They were standardized, and I vaguely remember studying them briefly at O Level. Of course they were about as useful as the well-known chocolate teapot, but I'd still like to be able to find a reference guide for them if possible, for roughly the same reason I was looking for a reference for standard Basic a while back. Google tells me - well, nothing really. They may as well never have existed. Which is probably nearly (and perhaps completely) true - I certainly never heard of them anywhere else except when I was at school. There's a chance that they may even be my computer science teacher's little joke.


  • What are the standard/practical steps required before moving to implementation of any Project/Task?

    - by jkm
    Hi everyone, I liked Stack Overflow very much and just got registered. As I am a beginner in programming, most of the time I just implement/code my tasks directly, without even thinking of creating any DFDs, flowcharts or other design artifacts for my new classes and methods. In some interviews I was asked what process I follow, and I was confused, as I am not used to following any standards. Could some experts explain what steps, and in what order, are best practice for approaching any task in programming? And how important are they? Thanks in advance, and sorry if this question is trivial or already asked.


  • Creating an ASP.NET report using Visual Studio 2010 - Part 3

    - by rajbk
    We continue building our report in this three part series.

    Creating an ASP.NET report using Visual Studio 2010 - Part 1
    Creating an ASP.NET report using Visual Studio 2010 - Part 2

    Adding the ReportViewer control and filter drop downs. Open the source code for index.aspx and add a ScriptManager control; this control is required for the ReportViewer control. Add a DropDownList for the categories and another for the suppliers, then add the ReportViewer control. The markup after these steps is shown below.

        <div>
            <asp:ScriptManager ID="smScriptManager" runat="server">
            </asp:ScriptManager>
            <div id="searchFilter">
                Filter by: Category :
                <asp:DropDownList ID="ddlCategories" runat="server" />
                and Supplier :
                <asp:DropDownList ID="ddlSuppliers" runat="server" />
            </div>
            <rsweb:ReportViewer ID="rvProducts" runat="server">
            </rsweb:ReportViewer>
        </div>

    The dropdowns will display the categories and suppliers in the database. Changing the selection in a drop down will cause the report to be filtered by the current selections; you will see how to do this in the next steps.

    Attach the RDLC to the ReportViewer control by clicking on the top right of the control, going to "ReportViewer Tasks" and selecting Products.rdlc. Resize the ReportViewer control by dragging at the bottom right corner; I set mine to 800px x 500px. You can also set this value in source view.

    Defining the data sources: we will now define the data source used to populate the report. Go back to the "ReportViewer Tasks" and select "Choose Data Sources". Select "New data source..", select "Object", and name your data source ID "odsProducts". In the next screen, choose "ProductRepository" as your business object, then choose "GetProductsProjected". The method requires a SupplierID and a CategoryID. We will set these so that our data source gets the values from the drop down lists we defined earlier: set each parameter source to be of type "Control" and set the ControlIDs to ddlSuppliers and ddlCategories respectively.

    We are now going to define the data sources for our drop downs. Select the ddlCategories drop down and pick "Choose Data Source". Pick "Object" and give it the ID "odsCategories". In the next screen choose "ProductRepository", then select the GetCategories() method, then select "CategoryName" and "CategoryID". We are done defining the data source for the categories drop down; perform the same steps for the suppliers drop down.

    Select each dropdown and set AppendDataBoundItems to true and AutoPostBack to true. AppendDataBoundItems is needed because we are going to insert an "All" list item with an empty value; go to each drop down and add such a list item, e.g. <asp:ListItem Text="All" Value="" />.

    Finally, double click on each drop down in the designer and add the following code in the code behind. This, along with the AutoPostBack="true" attribute, refreshes the report any time a drop down selection is changed.

        protected void ddlCategories_SelectedIndexChanged(object sender, EventArgs e)
        {
            rvProducts.LocalReport.Refresh();
        }

        protected void ddlSuppliers_SelectedIndexChanged(object sender, EventArgs e)
        {
            rvProducts.LocalReport.Refresh();
        }

    Compile your report and run the page. You should see the report rendered. Note that the toolbar in the ReportViewer control gives you several options, including the ability to export the data to Excel, PDF or Word.
    Conclusion

    Through this three part series, we did the following:

    - Created a data layer for use by our RDLC.
    - Created an RDLC using the report wizard and defined a dataset for the report.
    - Used the report design surface to design our report, including adding a chart.
    - Used the ReportViewer control to attach the RDLC.
    - Connected our ReportViewer to a data source and took parameter values from the drop down lists.
    - Used AutoPostBack to refresh the report when a dropdown selection was changed.

    RDLCs allow you to create interactive reports including drill downs and grouping. For even more advanced reports you can use Microsoft® SQL Server™ Reporting Services with RDLs. With RDLs, the report is rendered on the report server instead of the web server. Another nice thing about RDLs is that you can define a parameter list for the report and it gets rendered automatically for you. RDLCs and RDLs both have their advantages, and it's best to compare them and choose the right one for your requirements.

    Download VS2010 RTM Sample project NorthwindReports.zip

    Alfred Borden: Are you watching closely?


  • SQL SERVER – SSMS: Database Consistency History Report

    - by Pinal Dave
    Doctor and Database

    The last place I like to visit is always a hospital. With the monsoon season starting and intermittent rains, it has become almost routine to get a cycle of fever every other year (seriously, I hate it). So when I visit my doctor, it is always interesting to see the way he quizzes me. The routine questions: "How many days have you had this?", "Is there any pattern?", "Did you get drenched in rain?", "Do you have any other symptoms?" and so on. The idea is that the doctor wants to find an anomaly or a pattern that will point him to a viral or bacterial cause. Most of the time they get it based on experience, and sometimes after a battery of tests. So if there is consistent behavior to your problem, there is always a solution. SQL Server has its own way of checking whether the database files are in a consistent state: the DBCC commands.

    Back to SQL Server

    In real life, the database consistency check is one of the critical operations a DBA generally doesn't give much priority. Many readers of my blog have asked: how do we know if the database is consistent? How do I read the output of DBCC CHECKDB and find out whether everything is right or not? My common answer to all of them is: look at the bottom of the CHECKDB (or CHECKTABLE) output for the line below.

    CHECKDB found 0 allocation errors and 0 consistency errors in database 'DatabaseName'.

    The above is a "good sign" because we are seeing zero allocation errors and zero consistency errors. If you are seeing non-zero errors then there is some problem with the database. Sample output is shown below:

    CHECKDB found 0 allocation errors and 2 consistency errors in database 'DatabaseName'.
    repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (DatabaseName).

    If we see non-zero errors then most of the time (not always) we get repair options, depending on the level of corruption. There is risk involved with the above option (repair_allow_data_loss): we could lose data. Sometimes the option is repair_rebuild, which is a little safer. Though these options are available, it is important to find the root cause of the problem.

    Among the standard reports there is one which shows the history of CHECKDB executions for the selected database. Since this is a database-level report, we need to right-click on the database, click Reports, click Standard Reports, and then choose "Database Consistency History". The information in this report is picked from the default trace. If the default trace is disabled, no CHECKDB has been run, or the information is no longer in the default trace (because it rolled over), we get a report which, as it says very clearly: Currently, no execution history of CHECKDB is available or default trace is not enabled.

    To demonstrate, I caused corruption in one of my databases and did the following steps:

    - Ran CHECKDB so that errors were reported.
    - Fixed the corruption, losing the data, using the repair option.
    - Ran CHECKDB again to check that the corruption was cleared.

    After that I launched the report, and below is what we see. If you are lazy like me and don't want to run the report manually for each database, the query below provides the same report for all databases. This is the query the report runs behind the scenes; all I have done is remove the filter on the database name (highlighted at the end).
    DECLARE @curr_tracefilename VARCHAR(500);
    DECLARE @base_tracefilename VARCHAR(500);
    DECLARE @indx INT;

    SELECT @curr_tracefilename = path FROM sys.traces WHERE is_default = 1;
    SET @curr_tracefilename = REVERSE(@curr_tracefilename);
    SELECT @indx = PATINDEX('%\%', @curr_tracefilename);
    SET @curr_tracefilename = REVERSE(@curr_tracefilename);
    SET @base_tracefilename = LEFT(@curr_tracefilename, LEN(@curr_tracefilename) - @indx) + '\log.trc';

    SELECT SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), 36, PATINDEX('%executed%', TEXTData) - 36) AS command,
           LoginName,
           StartTime,
           CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%found%', TEXTData) + 6,
                   PATINDEX('%errors %', TEXTData) - PATINDEX('%found%', TEXTData) - 6)) AS errors,
           CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%repaired%', TEXTData) + 9,
                   PATINDEX('%errors.%', TEXTData) - PATINDEX('%repaired%', TEXTData) - 9)) AS repaired,
           SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%time:%', TEXTData) + 6,
                   PATINDEX('%hours%', TEXTData) - PATINDEX('%time:%', TEXTData) - 6) + ':' +
           SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%hours%', TEXTData) + 6,
                   PATINDEX('%minutes%', TEXTData) - PATINDEX('%hours%', TEXTData) - 6) + ':' +
           SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%minutes%', TEXTData) + 8,
                   PATINDEX('%seconds.%', TEXTData) - PATINDEX('%minutes%', TEXTData) - 8) AS time
    FROM ::fn_trace_gettable(@base_tracefilename, DEFAULT)
    WHERE EventClass = 22
      AND SUBSTRING(TEXTData, 36, 12) = 'DBCC CHECKDB'
    -- AND DatabaseName = @DatabaseName;

    Don't get worried about the logic above. All it is doing is reading the trace files and parsing entries like the one below, pulling out the underlined pieces of information.

    DBCC CHECKDB (CorruptedDatabase) executed by sa found 2 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds. Internal database snapshot has split point LSN = 00000029:00000030:0001 and first LSN = 00000029:00000020:0001.

    Hopefully from now on you will run CHECKDB and understand its importance. As responsible DBAs I am sure you are already doing it; let me know how often you actually run it in your production environment.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports


  • How Can I Create Reports in a Custom C#.NET Windows Application? - General Question

    - by user311509
    Assume I have a custom Windows application written in C#. This application has only the following functionality: add, edit, delete and view. For example, a user can add a sale, change a sales record, delete a sales record or view the whole sales record. I need to add some reporting functionality, e.g. I want a user to be able to print the sales of a certain customer from 2008 to 2009 to PDF, see all the products a certain customer has purchased from us, and so on. I will only include the basic common report requests that are usually needed in the office. Any other kinds of reports that are requested only occasionally I would handle manually at the back end and send the results to the requester. What I would do is: if a user wants more info on a certain customer, a special window appears for that customer. This window has different controls that allow the user to request more info, such as printing the customer's purchases from ..... to ..... (the user chooses the dates), and the user views the results as PDF or so. Of course, behind the scenes I would write an appropriate SQL query, with parameters, that serves a certain function. Is this how it should be done? I have heard about SQL Reporting; I don't know anything about it yet, but I will check it out. Anyhow, your suggestions won't harm. I'm still a student, so I don't have practical experience yet. I hope my question is clear enough. Thank you.
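
    The parameterized-query part of that plan is sound. A minimal C# sketch of it follows (table and column names are invented for illustration); the resulting DataTable can then be handed to whatever report or PDF renderer is chosen:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        static class SalesReports
        {
            // Fetch one customer's sales between two dates chosen in the
            // dialog. Parameters keep the query safe and reusable.
            public static DataTable GetCustomerSales(
                string connStr, int customerId, DateTime from, DateTime to)
            {
                const string sql =
                    @"SELECT SaleDate, Product, Quantity, Amount
                      FROM Sales
                      WHERE CustomerId = @CustomerId
                        AND SaleDate BETWEEN @From AND @To
                      ORDER BY SaleDate";

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@CustomerId", customerId);
                    cmd.Parameters.AddWithValue("@From", from);
                    cmd.Parameters.AddWithValue("@To", to);

                    var table = new DataTable();
                    new SqlDataAdapter(cmd).Fill(table);   // opens/closes the connection
                    return table;
                }
            }
        }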


  • How do I show all group headers in Access 2007 reports?

    - by Newbie
    This is a question about Reports in Access 2007. I'm unsure whether the solution will involve any programming, but hopefully someone will be able to help me. I have a report which lists all records from a particular table (call it A), and groups them by their associated record in a related table (call it B). I use the 'group headers' to add the information from table-B into the report. The problem occurs when I filter the records from table-A that are shown in the report. If I filter out all table-A records that relate to a particular record (call it X) in table-B, the report no longer shows the record-X group header. As a possible workaround, I have tried to ensure that I have one empty record in table-A for each of the records in table-B. That way I can specify NOT to filter out these empty records. However, the outcome is ugly one-record-high blank spaces at the start of each group in the report. Does anyone know of an alternative solution?


  • Crystal Reports : How to add an external assembly class?

    - by Sunil
    I am using VS2010, Crystal Reports 13 and MVC3. My problem is that I am unable to add an external assembly in Crystal Reports using the "Database Expert" option. I have a class named WeeklyReportModel in an external assembly. In my web project, data is retrieved from the DB as an IEnumerable collection of WeeklyReportModel. I tried Project Data - .NET Objects in Crystal Reports to add the WeeklyReportModel, but the external assembly does not show up under ".NET Objects". Then I tried another option: Create New Connection - ADO.NET - Make New Connection, pointing at this external assembly. It was added under the ADO.NET node, but expanding it displays "...no items found...". Totally frustrated. Please help.

    External assembly class:

        namespace SMS.Domain
        {
            public class WeeklyReportModel
            {
                public int StoreId { get; set; }
                public string StoreName { get; set; }
                public decimal Saturday { get; set; }
                public decimal Sunday { get; set; }
                public decimal Monday { get; set; }
                public decimal Tuesday { get; set; }
                public decimal Wednesday { get; set; }
                public decimal Thursday { get; set; }
                public decimal Friday { get; set; }
                public decimal Average { get; set; }
                public string DateRange { get; set; }
            }
        }

    In the controller action (data retrieved as a collection of WeeklyReportModel):

        namespace SMS.UI.Controllers
        {
            public class ReportController : Controller
            {
                public ActionResult StoreWeeklyReport(string id)
                {
                    DateTime weekStart, weekClose;
                    string[] dateArray = id.Split('_');
                    weekStart = Convert.ToDateTime(dateArray[0]);
                    weekClose = Convert.ToDateTime(dateArray[1]);
                    SMS.Infrastructure.Report.AuditReport weeklyReport =
                        new SMS.Infrastructure.Report.AuditReport();
                    IEnumerable<SMS.Domain.WeeklyReportModel> weeklyRpt =
                        weeklyReport.ReportByStore().WeeklyReport(weekStart, weekClose);
                    Session["WeeklyData"] = weeklyRpt;
                    Response.Redirect("~/Reports/Weekly/StoreWeekly.aspx");
                    return View();
                }
            }
        }

    Thanks in advance.


  • If some standards apply when "it depends" then should I stick with custom approaches?

    - by Travis J
    If I have an unconventional approach which works better than the industry standard, should I stick with it even though in principle it violates the standard? What I am talking about is referential integrity for relational database management systems. The standard way to enforce referential integrity is to CASCADE on delete. In practice, this is just not going to work all the time. In my current case, it does not. The alternatives suggested are to set the reference to NULL or DEFAULT, or to take NO ACTION, usually in the form of a "soft delete". I am all about enforcing referential integrity. Love it. However, sometimes it is just not practical to apply every standard. My approach has been to slightly abandon a small part of one of those practices: the part about not leaving "hanging references" around. Oops. The trade-off is plentiful in this situation, I believe. Instead of having deprecated data in the production database, a splattering of "soft delete" logic all across my controllers (and views sometimes, depending on how far down the chain the soft delete occurred), and the prospect of queries taking longer and longer, I now have a recycle bin and centralized logic. The only trade-off is that I must explicitly manage the possibility of "hanging references", which can be done through generics with one class. Any thoughts?
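
    For what that "one class" might look like, here is a hypothetical C# sketch of a centralized recycle bin: before a row is deleted, a serialized copy is parked (e.g. in a RecycleBin table) so a hanging reference can still be resolved or the row restored. Storage is abstracted behind a delegate, and all names are invented for illustration:

        using System;
        using System.Text.Json;

        public class RecycleBin
        {
            // store(entityType, id, payload) persists the copy, e.g. as an
            // INSERT into a RecycleBin table.
            private readonly Action<string, string, string> _store;

            public RecycleBin(Action<string, string, string> store)
            {
                _store = store;
            }

            // Park a serialized copy of the entity before it is deleted.
            public void Discard<T>(string id, T entity)
            {
                _store(typeof(T).Name, id, JsonSerializer.Serialize(entity));
            }

            // Rebuild an entity from a previously parked payload.
            public T Restore<T>(string payload)
            {
                return JsonSerializer.Deserialize<T>(payload);
            }
        }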


  • Oracle Primavera Partner Programs

    - by mark.kromer
    Here is the slide presentation, with only the slides that can be shared at this time, for our Oracle Primavera partner programs focusing on expanding P6's workflow and reporting capabilities. By leveraging Oracle's BPM and BI Publisher products, you can build exciting new workflows and enhanced reports to expand the capabilities of Primavera applications.


  • Linux standard input issue

    - by George2
    Hello everyone, I am new to Linux, and I am using Red Hat Enterprise Linux version 5. There is a Ruby program which uses standard input as its input (i.e. the Ruby program processes input from standard input). I think standard input should be the keyboard, correct? So I would expect other kinds of input (non-standard input) not to work, i.e. the Ruby program should not be able to read input from them. But I actually tried using a pipe and it works, which confuses me, because I think a pipe should be some other kind of input, other than standard input. Why does it work? That is, putting the text "123" in abc.txt and feeding it in with a pipe achieves the same result as typing "123" on the keyboard for the Ruby program. Here is the sample which works and makes me confused: cat abc.txt | ~/test/rubysrc/foo.rb thanks in advance, George
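
    What resolves the confusion is that a program only ever reads from its standard input stream; it never knows what the shell attached to that stream. The keyboard (terminal) is merely the default; a pipe or a redirected file is attached to the very same descriptor. A small C# illustration of a program that is equally happy with either:

        using System;

        // Reads lines from standard input, whatever the shell attached to it:
        // the terminal by default, or a file/pipe via redirection.
        class Echoer
        {
            static void Main()
            {
                string line;
                while ((line = Console.ReadLine()) != null)
                    Console.WriteLine("got: " + line);
            }
        }

        // Both invocations feed the same stream:
        //   ./echoer              (type "123", Enter, then Ctrl-D to end)
        //   cat abc.txt | ./echoer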


  • Automate delivery of Crystal Reports With a Windows Service

    In this article, Vince demonstrates the creation of a Windows service to automatically run a Crystal Report and send it as an email attachment. After a basic introduction, he examines the creation of the database and the Windows service, with relevant source code and explanations. Towards the end of the article, Vince discusses the steps to follow to install the Windows service.
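
    The pattern the article describes can be sketched in a few lines of C#: a service wakes on a timer, exports the report to PDF with the Crystal runtime, and mails it. This is only an outline; the paths, addresses, and schedule are placeholders, and error handling is omitted:

        using System.Net.Mail;
        using System.ServiceProcess;
        using System.Timers;
        using CrystalDecisions.CrystalReports.Engine;
        using CrystalDecisions.Shared;

        public class ReportMailerService : ServiceBase
        {
            // Fire once a day; the interval is a placeholder schedule.
            private readonly Timer _timer = new Timer(24 * 60 * 60 * 1000.0);

            protected override void OnStart(string[] args)
            {
                _timer.Elapsed += (s, e) => RunAndSend();
                _timer.Start();
            }

            protected override void OnStop()
            {
                _timer.Stop();
            }

            private static void RunAndSend()
            {
                // Export the report to PDF using the Crystal runtime.
                var report = new ReportDocument();
                report.Load(@"C:\Reports\Sales.rpt");          // placeholder path
                report.ExportToDisk(ExportFormatType.PortableDocFormat,
                                    @"C:\Reports\Sales.pdf");
                report.Close();

                // Mail the exported file as an attachment.
                var mail = new MailMessage("reports@example.com",
                                           "manager@example.com",
                                           "Scheduled report", "See attached.");
                mail.Attachments.Add(new Attachment(@"C:\Reports\Sales.pdf"));
                new SmtpClient("smtp.example.com").Send(mail);
            }
        }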


  • Alloy Navigator 6 Automates Reports, Integrates with Exchange

    Alloy Software has released Alloy Navigator 6, an update to its Navigator integrated IT operations management application.


  • A standard style guide or best-practice guide for web application development

    - by gutch
    I run a very small team of developers on a web application: just three people, and not even full time. We're all capable developers, but we write our code in very different ways: we name similar things in different ways, and we use different HTML and CSS to achieve similar outcomes. We can manage this OK because we're small, but can't help feeling it would be better to get some standards in place. Are there any good style guides or best-practice guides for web application development that we can use to keep our code under control? Sure, we could write them ourselves, but the reality is that with lots to do and very few staff, we're not going to bother. We need something off the shelf that we can tinker with rather than start from scratch. What we're not looking for here is basic code formatting rules like "whether to use tabs or spaces" or "where to put line breaks"; we can control this by standardising our IDEs. What we are looking for are rules for code and markup. For example:

    - What HTML markup should be used for headers, tables, sidebars, buttons, etc.
    - When to add new CSS styles, and what to name them
    - When IDs should be allocated to HTML elements, and what to name them
    - How JavaScript functions should be declared and called
    - How to pick an appropriate URL for a given page or AJAX call
    - When to use each HTTP method, i.e. POST vs GET vs PUT etc.
    - How to name server-side methods (Java, in our case)
    - How to throw and handle errors and exceptions in a consistent way

    etc, etc.


  • SQL Server 2008 R2 still requires a trace flag for Lock Pages in Memory

    - by AaronBertrand
    Almost two years ago, I blogged that Lock Pages in Memory was finally available to Standard Edition customers (Enterprise Edition customers had long been deemed smart enough to not abuse this feature). In addition to applying a cumulative update (2005 SP3 CU4 or 2008 SP1 CU2), in order to take advantage of LPIM, you also had to enable trace flag 845. Since the trace flag isn't documented for SQL Server 2008 R2, several of us in the community assumed that it was no longer required (since it was introduced...


  • Non-standard installation (installing Linux from Linux)

    - by Evan Plaice
    So, here's my setup. I have one partition with the newest version installed, a second partition with an older version installed (as a backup, just in case), a swap partition that both share, and a boot partition so the bootloader doesn't need to be set up after each upgrade. Partitions:

    - sda1 ext3 /boot
    - sda2 ext4 / (current version)
    - sda3 ext4 / (old version)
    - sda4 swap /swap
    - sda5 ntfs (contains folders symbolically linked to /home on /)

    So far it has been a very good setup. I can create new boot loaders without screwing it up, and adding my personal files into a new install is as simple as creating some symbolic links (the partition is NTFS in case I need to load Windows on the system again). Here's the issue: I'd like to be able to drop the install into /distro on the current version and install a new version over the old version's /, effectively replacing/upgrading it. The goal is to be able to just swap in new versions as they are released while maintaining redundancy in case I don't like the update. So far I have:

    - downloaded the install.iso
    - created the folder /distro on the current version
    - copied the install.iso into /distro
    - extracted vmlinuz and initrd.lz into /distro

    Then I modified /boot/grub/menu.lst with the following entry:

        title Install Linux
        root (hd0,1)
        kernel /distro/vmlinuz
        initrd /distro/initrd.lz

    vmlinuz loads perfectly, but it says it can't find initrd.lz on boot. I have also tried to uncompress the image with

        unlzma < initrd.lz > initrd.img

    and updated the menu.lst file to match, but that doesn't work either. I'm assuming that vmlinuz (the Linux kernel) loads, fires up the virtual filesystem by creating a ramdisk (initrd), mounts the iso, and launches the installer. Am I missing something here?

    Update: First, I want to say that the accepted answer would have been the best option if I were doing a normal Ubuntu install. Unfortunately, I was installing Linux Mint, which lacks the script needed to make debootstrap work. The problem with the above approach was that I was missing the command that vmlinuz needs to execute to boot into LiveCD mode; I found what was missing by looking in the /boot/grub/grub.cfg file. Although this method will work, it requires that the installation files reside on their own partition. I took the easy route and used unetbootin to drop the LiveCD on a USB drive and booted from that. Like I said before, debootstrap would have been the ideal solution here. Even though I couldn't use it, I wrote down the steps it would have taken:

    Step One: Format sda3 (the partition with the old copy of Linux that's being overwritten). I used gparted to format it as ext4 from within the current Linux install; how this is done varies based on what tools you prefer.

    Step Two: Mount the newly formatted partition (we'll call the mount point "ubuntu" for simplicity):

        sudo mkdir /mnt/ubuntu
        sudo mount -o loop /dev/sda3 /mnt/ubuntu

    Step Three: Get debootstrap:

        sudo apt-get install debootstrap

    Step Four: Mount the install disk (replace ubuntu.iso with the name of your install disk):

        sudo mkdir /media/cdrom
        sudo mount -o loop ~/ubuntu.iso /media/cdrom

    Step Five: Install the OS using debootstrap (replace feisty with the version you're installing and amd64 with your processor's architecture):

        sudo debootstrap --arch amd64 feisty /mnt/ubuntu file:/media/cdrom

    The settings here vary. While I loaded debootstrap from an install iso, you can also have debootstrap download and install automatically from a repository link (while most of these repositories contain Debian versions, I'm still not clear on whether Ubuntu has similar repositories). Here is a list of the Debian package repositories and their mirrors. This is how you'd run debootstrap directly from a repository:

        sudo debootstrap --arch amd64 squeeze /mnt/debian http://ftp.us.debian.org/debian

    Here's the link that I primarily used to figure this out.


