Search Results

Search found 40870 results on 1635 pages for 'database design'.


  • Prioritize compiler functionality/tasks, when designing a new language

    - by Mahdi
    Well, this question may be hard to ask and I expect a couple of down-votes, but I'm really interested in your ideas and recommendations. :) I've already written a very simple compiler with a few limited features. Now I'm working to make it more like a real-world compiler. I definitely need to start over, because I have much more experience and many more ideas in this area than I did a few years ago. So I want to know, starting again from the very first step, which tasks/features of the new compiler I should implement first and which tasks have lower priority than others. For example, I'd say I would first decide on the object-oriented structure of the new language, but you might say: hey, just build a compiler that can define a variable, and when you've finished that, start thinking about OOP design ... I'd also like to hear the pros and cons of your suggestions. Actually I like to work from the bottom up, adding the simplest tasks first and more complex ones later, but I'm totally open to new ideas and really appreciate them. Also please keep in mind that I'm asking about design concepts. I expect answers like: Priority from highest to lowest: variables, because .... functions, because .... loops, because .... ... Not: define a syntax for your new language, and start parsing your source code ...

    Read the article

  • Struggling with the Single Responsibility Principle

    - by AngryBird
    Consider this example: I have a website. It allows users to make posts (which can be anything) and add tags that describe the post. In the code, I have two classes that represent posts and tags. Let's call these classes Post and Tag. Post takes care of creating, deleting, and updating posts; Tag takes care of creating, deleting, and updating tags. One operation is missing: linking tags to posts. I am struggling with which class should own this operation, since it could fit equally well in either. On one hand, the Post class could have a function that takes a Tag as a parameter and stores it in a list of tags. On the other hand, the Tag class could have a function that takes a Post as a parameter and links the Tag to the Post. The above is just an example of my problem; I am actually running into this with multiple classes that are all similar, where the functionality could fit equally well in both. Short of actually putting the functionality in both classes, what conventions or design styles exist to help me solve this problem? I am assuming there has to be something better than just picking one? Or maybe putting it in both classes is the correct answer?
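
    One way to sidestep the either/or choice, sketched below in C#, is to make the link itself a first-class concept so that neither Post nor Tag owns the operation. The PostTagging class and all of its members are invented for illustration; this is a minimal sketch of the idea, not a prescription.

    using System.Collections.Generic;

    public class Post
    {
        public int Id { get; set; }
        public string Body { get; set; }
    }

    public class Tag
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // The association is its own type: neither Post nor Tag needs to know how tagging is stored.
    public class PostTagging
    {
        private readonly Dictionary<int, HashSet<int>> tagIdsByPostId = new Dictionary<int, HashSet<int>>();

        public void Link(Post post, Tag tag)
        {
            if (!tagIdsByPostId.TryGetValue(post.Id, out var tagIds))
            {
                tagIds = new HashSet<int>();
                tagIdsByPostId[post.Id] = tagIds;
            }
            tagIds.Add(tag.Id);
        }

        public bool IsTagged(Post post, Tag tag) =>
            tagIdsByPostId.TryGetValue(post.Id, out var tagIds) && tagIds.Contains(tag.Id);
    }

    Whether a third class like this earns its keep is exactly the judgment call the question raises; it tends to pay off once the link carries data of its own (who tagged, when, in what order).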

    Read the article

  • Implementing a ILogger interface to log data

    - by Jon
    I need to write data to a file in one of my classes. Obviously I will pass an interface into my class to decouple it. I was thinking this interface would be used for testing and also in other projects. This is my interface: //This could be used by filesystem, webservice public interface ILogger { List<string> PreviousLogRecords {get;set;} void Log(string Data); } public interface IFileLogger : ILogger { string FilePath; bool ValidFileName; } public class MyClassUnderTest { public MyClassUnderTest(IFileLogger logger) {....} } [Test] public void TestLogger() { var mock = new Mock<IFileLogger>(); mock.Setup(x => x.Log(Is.Any<string>).AddsDataToList()); //Is this possible?? var myClass = new MyClassUnderTest(mock.Object); myClass.DoSomethingThatWillSplitThisAndLog3Times("1,2,3"); Assert.AreEqual(3,mock.PreviousLogRecords.Count); } I don't believe this will work, as nothing is storing the items, so is this possible using Moq? And what do you think of the design of the interface?
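
    For the Moq part specifically, one common approach is to use Callback to capture every argument passed to Log and then assert on the captured list. The sketch below reuses the IFileLogger, MyClassUnderTest, and DoSomethingThatWillSplitThisAndLog3Times names from the question (assuming they exist as described) and assumes Moq and NUnit are referenced; it illustrates the technique, not the only way to do it.

    using System.Collections.Generic;
    using Moq;
    using NUnit.Framework;

    [TestFixture]
    public class LoggerTests
    {
        [Test]
        public void Log_is_called_once_per_item()
        {
            var logged = new List<string>();
            var mock = new Mock<IFileLogger>();

            // Capture every string passed to Log so the test can inspect it afterwards.
            mock.Setup(x => x.Log(It.IsAny<string>()))
                .Callback<string>(s => logged.Add(s));

            var myClass = new MyClassUnderTest(mock.Object);
            myClass.DoSomethingThatWillSplitThisAndLog3Times("1,2,3");

            Assert.AreEqual(3, logged.Count);
            mock.Verify(x => x.Log(It.IsAny<string>()), Times.Exactly(3));
        }
    }

    With this shape the mock does not need a real PreviousLogRecords backing list at all; the test owns the captured list, which keeps the interface free of test-only members.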

    Read the article

  • Parameterized Django models

    - by mgibsonbr
    In principle, a single Django application can be reused in two or more projects, providing functionality relevant to both. That implies that the same database structure (tables and relations) will be re-created identically in different databases, and most of the time this is not a problem (assuming the projects/databases are unrelated - for instance when someone downloads a complete app to use in their own projects). Sometimes, however, the models must be "tweaked" a little to better fit the problem's needs. This can be accomplished by forking the app, but I wondered if there wouldn't be a better option in cases where the app designer can anticipate the most common customizations. For instance, if I have a model that could relate to another as one-to-one or one-to-many, I could specify the unique property as a parameter that can be set in the project's settings: class This(models.Model): other = models.ForeignKey(Other, unique=settings.OTHER_TO_THIS) Or if a model can relate to many others, I could create an intermediate table for each of them (thus enforcing referential integrity) instead of using generic FKs: for related in settings.MODELS_RELATED_TO_OTHER: model_name = '%s_Other' % related globals()[model_name] = type(model_name, (models.Model,) { me:models.ForeignKey(find_model_class(related)), other:models.ForeignKey(Other), # Some other properties all intersection tables must have }) Etc. Let me stress that I'm not proposing to change the models at runtime or anything like that; once the parameters are defined and syncdb is called for the first time, those parameters are not to be changed again (unless you're doing a schema migration). Is this a good design? Are there better ways to accomplish the same thing, or maybe drawbacks I couldn't anticipate? This technique is meant to be used sparingly (only on apps meant to be reused in wildly different contexts, and only when a specific need for customization can be detected while the app model is being designed).

    Read the article

  • Handling Types for Real and Complex Matrices in a BLAS Wrapper

    - by mga
    I come from a C background and I'm now learning OOP with C++. As an exercise (so please don't just say "this already exists"), I want to implement a wrapper for BLAS that will let the user write matrix algebra in an intuitive way (e.g. similar to MATLAB) e.g.: A = B*C*D.Inverse() + E.Transpose(); My problem is how to go about dealing with real (R) and complex (C) matrices, because of C++'s "curse" of letting you do the same thing in N different ways. I do have a clear idea of what it should look like to the user: s/he should be able to define the two separately, but operations would return a type depending on the types of the operands (R*R = R, C*C = C, R*C = C*R = C). Additionally R can be cast into C and vice versa (just by setting the imaginary parts to 0). I have considered the following options: As a real number is a special case of a complex number, inherit CMatrix from RMatrix. I quickly dismissed this as the two would have to return different types for the same getter function. Inherit RMatrix and CMatrix from Matrix. However, I can't really think of any common code that would go into Matrix (because of the different return types). Templates. Declare Matrix<T> and declare the getter function as T Get(int i, int j), and operator functions as Matrix *(Matrix RHS). Then specialize Matrix<double> and Matrix<complex>, and overload the functions. Then I couldn't really see what I would gain with templates, so why not just define RMatrix and CMatrix separately from each other, and then overload functions as necessary? Although this last option makes sense to me, there's an annoying voice inside my head saying this is not elegant, because the two are clearly related. Perhaps I'm missing an appropriate design pattern? So I guess what I'm looking for is either absolution for doing this, or advice on how to do better.
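
    As a side note on the last option: the promotion rules (R*R = R, R*C = C*R = C, C*C = C) fall out naturally from two separate classes plus an implicit real-to-complex conversion. The fragment below sketches that in C# (the language most of the other excerpts on this page use) with single-element stand-ins instead of real matrix storage; everything here is invented for illustration, and the same shape carries over to C++ with a converting constructor and free operator overloads.

    using System.Numerics;

    // Minimal stand-ins: one element represents the whole matrix, just to show
    // how the promotion rules R*R = R, R*C = C*R = C, C*C = C can be expressed.
    public class RMatrix
    {
        public double Value { get; }
        public RMatrix(double value) { Value = value; }

        // A real matrix can always be viewed as a complex one (imaginary parts set to 0).
        public static implicit operator CMatrix(RMatrix r) => new CMatrix(new Complex(r.Value, 0));

        public static RMatrix operator *(RMatrix a, RMatrix b) => new RMatrix(a.Value * b.Value);
    }

    public class CMatrix
    {
        public Complex Value { get; }
        public CMatrix(Complex value) { Value = value; }

        // Mixed products resolve here through the implicit conversion,
        // so both R*C and C*R come back as CMatrix.
        public static CMatrix operator *(CMatrix a, CMatrix b) => new CMatrix(a.Value * b.Value);
    }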

    Read the article

  • Best practice to collect information from child objects

    - by Markus
    I'm regularly seeing the following pattern: public abstract class BaseItem { BaseItem[] children; // ... public void DoSomethingWithStuff() { StuffCollection collection = new StuffCollection(); foreach(child c : children) c.AddRequiredStuff(collection); // do something with the collection ... } public abstract void AddRequiredStuff(StuffCollection collection); } public class ConcreteItem : BaseItem { // ... public override void AddRequiredStuff(StuffCollection collection) { Stuff stuff; // ... collection.Add(stuff); } } Whereas I would use something like this, for better information hiding: public abstract class BaseItem { BaseItem[] children; // ... public void DoSomethingWithStuff() { StuffCollection collection = new StuffCollection(); foreach(child c : children) collection.AddRange(c.RequiredStuff()); // do something with the collection ... } public abstract StuffCollection RequiredStuff(); } public class ConcreteItem : BaseItem { // ... public override StuffCollection RequiredStuff() { StuffCollection stuffCollection; Stuff stuff; // ... stuffCollection.Add(stuff); return stuffCollection; } } What are the pros and cons of each solution? For me, giving the implementation access to the parent's collection is somehow disconcerting. On the other hand, initializing a new list just to collect the items is useless overhead ... Which is the better design? How would it change if DoSomethingWithStuff weren't part of BaseItem but of a third class? PS: there might be missing semicolons or typos; sorry for that! The above code is not meant to be executed, it is just for illustration.
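
    A third variant worth weighing, sketched below in C#, is to have each child expose its stuff lazily as an IEnumerable, so the child neither touches the parent's collection nor allocates a throwaway one. Stuff is reduced to an empty placeholder type and StuffCollection is replaced by List<Stuff> here purely to keep the sketch self-contained; those substitutions are assumptions, not part of the original design.

    using System.Collections.Generic;
    using System.Linq;

    public class Stuff { }

    public abstract class BaseItem
    {
        protected readonly List<BaseItem> children = new List<BaseItem>();

        public void DoSomethingWithStuff()
        {
            // The parent owns the only materialized collection; children just yield what they need.
            List<Stuff> collection = children.SelectMany(c => c.RequiredStuff()).ToList();
            // ... do something with the collection ...
        }

        public abstract IEnumerable<Stuff> RequiredStuff();
    }

    public class ConcreteItem : BaseItem
    {
        public override IEnumerable<Stuff> RequiredStuff()
        {
            yield return new Stuff();  // each required piece of stuff, one at a time
        }
    }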

    Read the article

  • Classes as a compilation unit

    - by Yannbane
    If "compilation unit" is unclear, please refer to this. However, what I mean by it will be clear from the context. Edit: my language allows for multiple inheritance, unlike Java. I've started designing and developing my own programming language for educational, recreational, and potentially useful purposes. At first, I decided to base it on Java. This implied that all code would be written inside classes and compiled to classes, which are loaded by the VM. However, I excluded features such as interfaces and abstract classes because I found no need for them. They seemed to be enforcing a paradigm, and I'd like my language not to do that. I wanted to keep classes as the compilation unit, though, because it seemed convenient to implement, familiar, and I just liked the idea. Then I noticed that I'm basically left with a glorified module system, where classes can be used either as "namespaces", providing constants and functions via the static directive, or as templates for objects that need to be instantiated (the "actual" purpose of classes in other languages). Now I'm left wondering: what are the benefits of having classes as compilation units? (Also, any general commentary on my design would be much appreciated.)

    Read the article

  • Tips for Making this Code Testable [migrated]

    - by Jesse Bunch
    So I'm writing an abstraction layer that wraps a telephony RESTful service for sending text messages and making phone calls. I should build this in such a way that the low-level provider, in this case Twilio, can be easily swapped without having to re-code the higher level interactions. I'm using a package that is pre-built for Twilio and so I'm thinking that I need to create a wrapper interface to standardize the interaction between the Twilio service package and my application. Let us pretend that I cannot modify this pre-built package. Here is what I have so far (in PHP): <?php namespace Telephony; class Provider_Twilio implements Provider_Interface { public function send_sms(Provider_Request_SMS $request) { if (!$request->is_valid()) throw new Provider_Exception_InvalidRequest(); $sms = \Twilio\Twilio::request('SmsMessage'); $response = $sms->create(array( 'To' => $request->to, 'From' => $request->from, 'Body' => $request->body )); if ($this->_did_request_fail($response)) { throw new Provider_Exception_RequestFailed($response->message); } $response = new Provider_Response_SMS(TRUE); return $response; } private function _did_request_fail($api_response) { return isset($api_response->status); } } So the idea is that I can write another file like this for any other telephony service provided that it implements Provider_Interface making them swappable. Here are my questions: First off, do you think this is a good design? How could it be improved? Second, I'm having a hard time testing this because I need to mock out the Twilio package so that I'm not actually depending on Twilio's API for my tests to pass or fail. Do you see any strategy for mocking this out? Thanks in advance for any advice!

    Read the article

  • Modeling a Generic Relationship in a Database

    - by StevenH
    This is most likely one for all you sexy DBAs out there: How would I efficiently model a relational database where I have a field in an "Event" table that defines a "SportType"? This "SportType" field can hold a link to different sports tables, e.g. "FootballEvent", "RugbyEvent", "CricketEvent" and "F1 Event". Each of these sports tables has different fields specific to that sport. My goal is to be able to generically add sport types in the future as required, yet hold sport-specific event data (fields) as part of my Event entity. Is it possible to use an ORM such as NHibernate / Entity Framework which would reflect such a relationship? I have thrown together a quick C# example to express my intent at a higher level: public class Event<T> where T : new() { public T Fields { get; set; } public Event() { EventType = new T(); } } public class FootballEvent { public Team CompetitorA { get; set; } public Team CompetitorB { get; set; } } public class TennisEvent { public Player CompetitorA { get; set; } public Player CompetitorB { get; set; } } public class F1RacingEvent { public List<Player> Drivers { get; set; } public List<Team> Teams { get; set; } } public class Team { public IEnumerable<Player> Squad { get; set; } } public class Player { public string Name { get; set; } public DateTime DOB { get; set;} }
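
    As an illustration of one mapping style that ORMs commonly support (table-per-type inheritance), the sketch below reshapes the example so that each sport-specific class derives from a common Event base instead of being plugged into a generic Event<T>. The class names mirror the question, the extra fields are assumptions, and this is only one possible shape, not a recommendation for the final schema.

    using System;
    using System.Collections.Generic;

    // Shared event fields live on the base class; each sport-specific table maps to a derived class.
    public abstract class Event
    {
        public Guid Id { get; set; }
        public DateTime StartsAt { get; set; }
    }

    public class FootballEvent : Event
    {
        public Team CompetitorA { get; set; }
        public Team CompetitorB { get; set; }
    }

    public class TennisEvent : Event
    {
        public Player CompetitorA { get; set; }
        public Player CompetitorB { get; set; }
    }

    public class Team { public IEnumerable<Player> Squad { get; set; } }
    public class Player { public string Name { get; set; } public DateTime DOB { get; set; } }

    NHibernate (joined-subclass mappings) and Entity Framework (TPT inheritance) can both map a hierarchy like this to one table per subclass, so adding a new sport means adding a new derived class and table rather than touching the Event table.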

    Read the article

  • Modeling a Generic Relationship (expressed in C#) in a Database

    - by StevenH
    This is most likely one for all you sexy DBAs out there: How would I efficiently model a relational database where I have a field in an "Event" table that defines a "SportType"? This "SportType" field can hold a link to different sports tables, e.g. "FootballEvent", "RugbyEvent", "CricketEvent" and "F1 Event". Each of these sports tables has different fields specific to that sport. My goal is to be able to generically add sport types in the future as required, yet hold sport-specific event data (fields) as part of my Event entity. Is it possible to use an ORM such as NHibernate / Entity Framework / DataObjects.NET which would reflect such a relationship? I have thrown together a quick C# example to express my intent at a higher level: public class Event<T> where T : new() { public T Fields { get; set; } public Event() { EventType = new T(); } } public class FootballEvent { public Team CompetitorA { get; set; } public Team CompetitorB { get; set; } } public class TennisEvent { public Player CompetitorA { get; set; } public Player CompetitorB { get; set; } } public class F1RacingEvent { public List<Player> Drivers { get; set; } public List<Team> Teams { get; set; } } public class Team { public IEnumerable<Player> Squad { get; set; } } public class Player { public string Name { get; set; } public DateTime DOB { get; set;} }

    Read the article

  • Postgres database ignoring created index?!

    - by drasto
    I have a Postgres database and a table called my_table. There are 4 columns in that table (id, column1, column2, column3). The id column is the primary key; there are no other constraints or indexes on the columns. The table has about 200000 rows. I want to print out all rows whose column2 value is equal (case-insensitively) to 'value12'. I use this: SELECT * FROM my_table WHERE column2 = lower('value12') Here is the execution plan for this statement (the result of set enable_seqscan=on; EXPLAIN SELECT * FROM my_table WHERE column2 = lower('value12')): Seq Scan on my_table (cost=0.00..4676.00 rows=10000 width=55) Filter: ((column2)::text = 'value12'::text) I consider this too slow, so I create an index on column2 for better search performance: CREATE INDEX my_index ON my_table (lower(column2)) Now I run the same select: SELECT * FROM my_table WHERE column2 = lower('value12') and I expect it to be much faster because it can use the index. However it is not faster; it is as slow as before. So I check the execution plan and it is the same as before (see above). It still uses a sequential scan and ignores the index! Where is the problem?

    Read the article

  • Database schema to store AND, OR relation, association

    - by user455387
    Many thanks for your help on this. In order for an enterprise to get a call for tender, it must meet certain requirements. For the first example, the enterprise must have a minimal class of 4 and have qualification 2 in sector 5. The minimal class is always a single number. The qualification can be anything (single, or multiple combined with AND/OR logical operators). I have created tables to map each number to its given name. Now I need to store requirements such as these in the database: minimal class 4, sector qualification 5.2; minimal class 2, sector qualifications 3.9 and 3.10; minimal class 3, sector qualifications 6.1 or 6.3; minimal class 1, sector qualifications (3.1 and 3.2) or 5.6. class Domain < ActiveRecord::Base has_many :domain_classes has_many :domain_sectors has_many :sector_qualifications, :through => :domain_sectors end class DomainClass < ActiveRecord::Base belongs_to :domain end class DomainSector < ActiveRecord::Base belongs_to :domain has_many :sector_qualifications end class SectorQualification < ActiveRecord::Base belongs_to :domain_sector end create_table "domains", :force => true do |t| t.string "name" end create_table "domain_classes", :force => true do |t| t.integer "number" t.integer "domain_id" end create_table "domain_sectors", :force => true do |t| t.string "name" t.integer "number" t.integer "domain_id" end create_table "sector_qualifications", :force => true do |t| t.string "name" t.integer "number" t.integer "domain_sector_id" end

    Read the article

  • java class that simulates a simple database table

    - by ericso
    I have a collection of heterogeneous data that I pull from a database table mtable. Then, for every unique value uv in column A, I compute a function of (SELECT * FROM mtable WHERE A=uv). Then I do the same for column B and column C. There are rather a lot of unique values, so I don't want to hit the db repeatedly - I would rather have a class that replicates some of the functionality (most importantly some version of SELECT WHERE). Additionally, I would like to abstract the column names away from the class definition, if that makes any sense - the constructor should take a list of names as a parameter, and also, I suppose, a list of types (right now this is just a String[], which seems hacky). I'm getting the initial data from a RowSet. I've more or less done this by using a hashmap that maps Strings to lists/arrays of Objects, but I keep getting bogged down in comparisons and types, and I'm thinking that my current implementation really isn't as clean and clear as it could be. I'm also pretty new to Java and am not sure whether I'm going down a completely wrong path. Does anyone have any suggestions?
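
    A sketch of the kind of structure that tends to stay manageable is shown below. It is written in C# (the predominant language on this page) but translates almost line-for-line to Java with HashMap/ArrayList: rows are maps from column name to value, and SELECT WHERE becomes a simple filter. All names here are illustrative assumptions.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // A tiny in-memory "table": column names are fixed at construction time,
    // each row maps column name -> value, and Select() plays the role of SELECT ... WHERE.
    public class MemoryTable
    {
        private readonly HashSet<string> columns;
        private readonly List<Dictionary<string, object>> rows = new List<Dictionary<string, object>>();

        public MemoryTable(IEnumerable<string> columnNames)
        {
            columns = new HashSet<string>(columnNames);
        }

        public void Insert(Dictionary<string, object> row)
        {
            // Require every column to be present so later lookups never miss.
            if (row.Count != columns.Count || !row.Keys.All(columns.Contains))
                throw new ArgumentException("Row does not match the table's columns.");
            rows.Add(new Dictionary<string, object>(row));
        }

        // SELECT * FROM table WHERE column = value
        public IEnumerable<Dictionary<string, object>> Select(string column, object value) =>
            rows.Where(r => Equals(r[column], value));

        // All distinct values of a column, for the "for every unique value uv in column A" loop.
        public IEnumerable<object> DistinctValues(string column) =>
            rows.Select(r => r[column]).Distinct();
    }

    Keeping values as object (Object in Java) and pushing all comparisons through Equals is what keeps the type juggling in one place; typed accessors can be layered on top if needed.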

    Read the article

  • Should we denormalize database to improve performance?

    - by Groo
    We have a requirement to store 500 measurements per second, coming from several devices. Each measurement consists of a timestamp, a quantity type, and several vector values. Right now there are 8 vector values per measurement, and we may consider this number to be constant for the needs of our prototype project. We are using NHibernate. Tests are done in SQLite (disk file db, not in-memory), but production will probably be MsSQL. Our Measurement entity class is the one that holds a single measurement, and looks like this: public class Measurement { public virtual Guid Id { get; private set; } public virtual Device Device { get; private set; } public virtual Timestamp Timestamp { get; private set; } public virtual IList<VectorValue> Vectors { get; private set; } } Vector values are stored in a separate table, so that each of them references its parent measurement through a foreign key. We have done a couple of things to ensure that the generated SQL is (reasonably) efficient: we are using Guid.Comb for generating IDs, we are flushing around 500 items in a single transaction, and the ADO.Net batch size is set to 100 (I think SQLite does not support batch updates? But it might be useful later). The problem: right now we can insert 150-200 measurements per second (which is not fast enough, although this is SQLite we are talking about). Looking at the generated SQL, we can see that in a single transaction we insert (as expected): 1 timestamp, 1 measurement, 8 vector values, which means that we are actually doing 10x more single-table inserts: 1500-2000 per second. If we placed everything (all 8 vector values and the timestamp) into the measurement table (adding 9 dedicated columns), it seems that we could increase our insert speed up to 10 times. Switching to SQL Server will improve performance, but we would like to know if there might be a way to avoid unnecessary performance costs related to the way the database is organized right now. [Edit] With in-memory SQLite I get around 350 items/sec (3500 single-table inserts), which I believe is about as good as it gets with NHibernate (taking this post for reference: http://ayende.com/Blog/archive/2009/08/22/nhibernate-perf-tricks.aspx). But I might as well switch to SQL server and stop assuming things, right? I will update my post as soon as I test it.
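
    For comparison, the denormalized shape described above (timestamp plus the eight vector values pulled into dedicated columns on the measurement row itself) might look roughly like the sketch below. The column names, the use of Guid for the device reference, and the double type are all assumptions taken from the question, not a recommendation.

    using System;

    // Denormalized variant: one row per measurement, no child table for vectors.
    // Eight fixed vector columns trade flexibility for one insert instead of ten.
    public class FlatMeasurement
    {
        public virtual Guid Id { get; private set; }
        public virtual Guid DeviceId { get; private set; }
        public virtual DateTime Timestamp { get; private set; }

        public virtual double Vector1 { get; private set; }
        public virtual double Vector2 { get; private set; }
        public virtual double Vector3 { get; private set; }
        public virtual double Vector4 { get; private set; }
        public virtual double Vector5 { get; private set; }
        public virtual double Vector6 { get; private set; }
        public virtual double Vector7 { get; private set; }
        public virtual double Vector8 { get; private set; }
    }

    Whether this is worth the loss of flexibility hinges on how firm the "8 vectors per measurement is constant" assumption really is.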

    Read the article

  • cPickle ImportError: No module named multiarray

    - by Rafal
    Hello, I'm using cPickle to save my database to a file. The code looks like this: def Save_DataBase(): import cPickle from scipy import * from numpy import * a=Results.VersionName #filename='D:/results/'+a[a.find('/')+1:-a.find('/')-2]+Results.AssType[:3]+str(random.randint(0,100))+Results.Distribution+".lft" filename='D:/results/pppp.lft' plik=open(filename,'w') DataOutput=[[[DataBase.Arrays.Nodes,DataBase.Arrays.Links,DataBase.Arrays.Turns,DataBase.Arrays.Connectors,DataBase.Arrays.Zones], [DataBase.Nodes.Data,DataBase.Links.Data,DataBase.Turns.Data,DataBase.OrigConnectors.Data,DataBase.DestConnectors.Data,DataBase.Zones.Data], [DataBase.Nodes.DictionaryPy2Vis,DataBase.Links.DictionaryPy2Vis,DataBase.Turns.DictionaryPy2Vis,DataBase.OrigConnectors.DictionaryPy2Vis,DataBase.DestConnectors.DictionaryPy2Vis,DataBase.Zones.DictionaryPy2Vis], [DataBase.Nodes.DictionaryVis2Py,DataBase.Links.DictionaryVis2Py,DataBase.Turns.DictionaryVis2Py,DataBase.OrigConnectors.DictionaryVis2Py,DataBase.DestConnectors.DictionaryVis2Py,DataBase.Zones.DictionaryVis2Py], [DataBase.Paths.List]],[Results.VersionName,Results.noZones,Results.noNodes,Results.noLinks,Results.noTurns,Results.noTrips, Results.Times.VersionLoad,Results.Times.GetData,Results.Times.GetCoords,Results.Times.CrossTheTime,Results.Times.Plot_Cylinder, Results.AssType,Results.AssParam,Results.tStart,Results.tEnd,Results.Distribution,Results.tVector]] cPickle.dump(DataOutput, plik, protocol=0) plik.close() And it works fine. Most of my database rows are lists of lists, vector-like, or array-like data sets. But now, when I load the data back in, an error occurs: def Load_DataBase(): import cPickle from scipy import * from numpy import * filename='D:/results/pppp.lft' plik= open(filename, 'rb') """ first cPickle load approach """ A= cPickle.load(plik) """ fail """ """ Another approach - data format exact as in Output step above , also fails""" [[[DataBase.Arrays.Nodes,DataBase.Arrays.Links,DataBase.Arrays.Turns,DataBase.Arrays.Connectors,DataBase.Arrays.Zones], [DataBase.Nodes.Data,DataBase.Links.Data,DataBase.Turns.Data,DataBase.OrigConnectors.Data,DataBase.DestConnectors.Data,DataBase.Zones.Data], [DataBase.Nodes.DictionaryPy2Vis,DataBase.Links.DictionaryPy2Vis,DataBase.Turns.DictionaryPy2Vis,DataBase.OrigConnectors.DictionaryPy2Vis,DataBase.DestConnectors.DictionaryPy2Vis,DataBase.Zones.DictionaryPy2Vis], [DataBase.Nodes.DictionaryVis2Py,DataBase.Links.DictionaryVis2Py,DataBase.Turns.DictionaryVis2Py,DataBase.OrigConnectors.DictionaryVis2Py,DataBase.DestConnectors.DictionaryVis2Py,DataBase.Zones.DictionaryVis2Py], [DataBase.Paths.List]],[Results.VersionName,Results.noZones,Results.noNodes,Results.noLinks,Results.noTurns,Results.noTrips, Results.Times.VersionLoad,Results.Times.GetData,Results.Times.GetCoords,Results.Times.CrossTheTime,Results.Times.Plot_Cylinder, Results.AssType,Results.AssParam,Results.tStart,Results.tEnd,Results.Distribution,Results.tVector]]= cPickle.load(plik) Error is (in both cases): A= cPickle.load(plik) ImportError: No module named multiarray Any ideas? PS.

    Read the article

  • What's the most efficient way to reclaim disk space after deleting lots of data from a database on Sybase ASE 15?

    - by Ernie Longmire
    As I understand it, based on some research but zero real-world experience with Sybase ASE, the only way to reclaim disk space once it's been allocated to a database is to export that database, create a new DB with the same schema, and reload all the exported data to the new database. Is this correct, or is there some other method? Then: assuming the above is correct and a full export-recreate-reload is required, what's the most efficient way to do that? Are there tools that will automate all or part of that process? I'm being told we would have to write separate bcp export and import commands for each and every object in the database, which if true sounds easily scriptable by someone who knows Sybase ASE well enough. (I don't.) This seems to me like a really basic housekeeping task, and it feels like I'm missing something obvious.

    Read the article

  • OTN Developer Day: Oracle Database 11g Application Development

    - by stephen.garth
    When and Where: Tuesday June 15, 2010 from 8:00 am - 5:30 pm, Hyatt Regency Reston, Reston VA. This full-day FREE event offers a great learning and networking opportunity. With support from Oracle database application development experts, you'll get valuable hands-on experience developing database-backed apps with the latest Oracle tools and frameworks. Oh yeah, you get to use your own notebook and download some cool and very useful materials. Find out more and register here.

    Read the article

  • Where do we put "asking the world" code when we separate computation from side effects?

    - by Alexey
    According to the Command-Query Separation principle, as well as the Thinking in Data and DDD with Clojure presentations, one should separate side effects (modifying the world) from computations and decisions, so that both parts are easier to understand and test. This leaves an unanswered question: where, relative to that boundary, should we put "asking the world"? On the one hand, requesting data from external systems (like databases, external services' APIs, etc.) is not referentially transparent and thus should not sit together with pure computational and decision-making code. On the other hand, it's problematic, or maybe impossible, to tease such requests apart from the computational part and pass the data in as an argument, because we may not know in advance which data we will need to request.
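
    One arrangement that often gets suggested for this is a thin imperative shell that does all the asking and all the modifying, wrapped around a pure core that only looks at data it was handed. The C# sketch below illustrates that split; every type, delegate, and threshold in it is invented for illustration.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Pure core: no I/O, trivially testable. It only sees data passed in.
    public static class PricingCore
    {
        public static decimal DecideDiscount(IReadOnlyList<decimal> previousOrderTotals) =>
            previousOrderTotals.Sum() > 1000m ? 0.10m : 0m;
    }

    // Imperative shell: all reads from and writes to the world live here.
    public class CheckoutService
    {
        private readonly Func<int, IReadOnlyList<decimal>> loadOrderTotals;  // "asking the world"
        private readonly Action<int, decimal> saveDiscount;                  // "modifying the world"

        public CheckoutService(Func<int, IReadOnlyList<decimal>> loadOrderTotals,
                               Action<int, decimal> saveDiscount)
        {
            this.loadOrderTotals = loadOrderTotals;
            this.saveDiscount = saveDiscount;
        }

        public void ApplyDiscount(int customerId)
        {
            var totals = loadOrderTotals(customerId);          // side effect: query
            var discount = PricingCore.DecideDiscount(totals); // pure decision
            saveDiscount(customerId, discount);                // side effect: command
        }
    }

    When the needed data genuinely cannot be known up front, a common variation is to have the core return a description of what it still needs and let the shell loop: fetch, call the core again, repeat until it produces a final decision.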

    Read the article

  • Business Continuity for EBS Using Oracle 11g Physical Standby DB

    - by Steven Chan
    Our Applications Technology Group database architects have released two new documents covering the use of Oracle Data Guard to create physical standby databases for Oracle E-Business Suite environments:Business Continuity for Oracle E-Business Release 12 Using Oracle 11g Physical Standby Database (Note 1070033.1)Business Continuity for Oracle E-Business Release 11i Using Oracle 11g Physical Standby Database (Note 1068913.1)

    Read the article

  • 7 Web Design Tutorials from PSD to HTML/CSS

    - by Sushaantu
    Some time back, when I was looking for tutorials on creating a website from scratch, i.e. the process of going from designing the PSD to slicing it and turning it into CSS/XHTML, not many quality results appeared. But that was almost a year ago, and a lot of water has flowed down the river Thames since then. In this list I will give you links to some wonderful tutorials that teach you, step by step, how to design a website. These tutorials are ideal for someone who is learning web design and has a grasp of basic CSS and XHTML and a little Photoshop design experience. How to Design and Code Web 2.0 Style Web Design Design a website from PSD to HTML Designing and Coding a Grunge Web Design from Scratch Creating a CSS layout from scratch Build a Sleek Portfolio Site from Scratch Designing and Coding a web design from scratch Design and Code a Dark and Sleek Web Design

    Read the article

  • Is RAC One Node Certified for E-Business Suite?

    - by Steven Chan
    Oracle Real Application Clusters (RAC) is a cluster database with a shared cache architecture that supports the transparent deployment of a single database across a pool of servers.  RAC is certified with both Oracle E-Business Suite Release 11i and 12.  We publish best-practices documentation for specific combinations of EBS + RAC versions.  For example, if you were planning on implementing RAC for EBS 12, you would use this documentation:Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 12 (Note 823587.1)Many of the largest E-Business Suite users in the world run RAC today, including Oracle; see this Oracle R12 case study for details.A number of customers have recently asked whether RAC One Node can be used with the E-Business Suite.  From the RAC website:Oracle RAC One Node is a new option available with Oracle Database 11g Release 2. Oracle RAC One Node is a single instance of an Oracle RAC-enabled database running on one node in a cluster.

    Read the article

  • New Whitepaper: Oracle E-Business Suite on Exadata

    - by Steven Chan
    Our Maximum Availability Architecture (MAA) team has quietly been amassing a formidable set of whitepapers about the Oracle Exadata Database Machine.  They're available here:MAA Best Practices - Exadata Database MachineIf you're one of the lucky ones with access to this hardware platform, you'll be pleased to hear that the MAA team has just published a new whitepaper with best practices for EBS environments:Oracle E-Business Suite on ExadataThis whitepaper covers the following topics:Getting to Exadata -- a high level overview of fresh installation on, and migration to, Exadata Database Machine with pointers to more detailed documentation High Availability and Disaster Recovery -- an overview of our MAA best practices with pointers to our detailed MAA Best Practices documentation Performance and Scalability -- best practices for running Oracle E-Business Suite on Exadata Database Machine based on our internal testing

    Read the article

  • SQL Server 2008 R2 Enterprise database has unexpected 4GB database size limit

    - by Jesse
    I have SQL Server 2008 R2 Enterprise installed on a local Windows 7 x64 workstation. When I create a database on the server, it unexpectedly has a 4GB size limit (Database properties in SQL Server Management Studio say size = 3934.38 MB, space available = 47.13 MB). Unfortunately the database needs more than 4GB, and Enterprise is not supposed to have a practical maximum size. I confirmed the database is on the Enterprise server: SELECT @@VERSION returns: Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64) Apr 2 2010 15:48:46 Copyright (c) Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) The database file is not set to restrict growth in SQL Server Management Studio, and there is plenty of hard drive space. The database was copied from SQL Express (which has a 4GB limit), but the same occurs with a fresh database creation. I've spent a couple of hours trying to figure this out and Google-searching, to no avail. Any ideas?

    Read the article

  • Database Machine, 11gR2, and Tivoli Data Protection work together

    - by Fekete Zoltán
    The question has come up whether, in a Database Machine environment, we can run backups with the Tivoli Data Protection for Oracle software. The answer: YES. IBM Tivoli Data Protection for Oracle V5.5.2 on Linux x86_64 has now been validated with Oracle 11gR2 RAC. The current support information can be found here: clicking on V5.5.2 you will find the following details: Oracle Enterprise Linux 5 with any of the following Oracle releases: * 64-bit Oracle Standard or Enterprise Server 10gR2, 11g, or 11gR2 * 64-bit Real Application Clusters (RAC) 10gR2, 11g, or 11gR2 This information is sufficient and excellent :), since the Database Machine components run on the Oracle Enterprise Linux operating system in a 64-bit architecture, and TDP can use Oracle RMAN (Recovery Manager).

    Read the article

  • Database Machine hands-on experience!

    - by Fekete Zoltán
    Two weeks ago we gained hands-on experience with the data warehouse of a company operating in Hungary, in a real Database Machine / Exadata environment on a Sun Oracle Database Machine Half Rack configuration. The results are truly impressive. The exciting and unique features of Exadata Storage (Smart Scan, Smart Flash Cache, Storage Index, and compression with Exadata Hybrid Columnar Compression), together with an incredible degree of parallelism, give the user a performance advantage that produces wide eyes, cheering, and a long-lasting inner smile. Is this good for the health of you and your database environment? :) Ask your doctor, your pharmacist, the white papers, and the published customer stories. For the technical and commercial details, ask me. :) At tomorrow's talk (Wednesday, March 24, 2010) at the HOUG Conference you can hear about this hands-on experience in person, from an authentic source. I wish everyone similarly nice days!

    Read the article
