Search Results

Search found 1619 results on 65 pages for 'itai alter'.


  • Restoring GRUB2 on Software RAID 0 using LiveCD after Windows 7 wiped it

    - by unknownthreat
    I have installed Ubuntu 10.10 on my system. However, I needed to install Windows 7 back, and as I expected, it altered GRUB. Right now, my partition on my Software RAID 0 looks like this: nvidia_acajefec1 is Ubuntu 10.10 and nvidia_acajefec3 is Windows 7. I've been following some guides and I always get stuck at GRUB being unable to detect the usual RAID content. I've tried running: sudo grub > root (hd0,0) GRUB complains it couldn't find my hard disk. So I tried: find (hd0,0) And it complains that it couldn't find anything. So I tried: find /boot/grub/stage1 It said "file not found". Here's the text from the console: ubuntu@ubuntu:~$ grub Probing devices to guess BIOS drives. This may take a long time. [ Minimal BASH-like line editing is supported. For the first word, TAB lists possible command completions. Anywhere else TAB lists the possible completions of a device/filename. ] grub> root (hd0,0) root (hd0,0) Error 21: Selected disk does not exist grub> find /boot/grub/stage1 find /boot/grub/stage1 Error 15: File not found Fortunately, one person suggested that what I've been trying to do is for GRUB Legacy, not GRUB2. So I went to the suggested website (http://grub.enbug.org/Grub2LiveCdInstallGuide), looked around, and tried: ubuntu@ubuntu:~$ sudo fdisk -l Unable to seek on /dev/sda This is just step 2 of the instructions in http://grub.enbug.org/Grub2LiveCdInstallGuide, and I cannot proceed because it cannot seek /dev/sda. However: ubuntu@ubuntu:~$ sudo dmraid -r /dev/sdb: nvidia, "nvidia_acajefec", stripe, ok, 488397166 sectors, data@ 0 /dev/sda: nvidia, "nvidia_acajefec", stripe, ok, 488397166 sectors, data@ 0 So what now? Do you have any idea how to make fdisk see my RAID array on the live CD (Ubuntu 10.10)? Honestly, I am very lost trying to restore GRUB2 on this software RAID 0 system right now.
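
    For reference, here is a minimal sketch of the usual LiveCD repair route for GRUB2 on a dmraid (fakeraid) set: activate the array, mount the Ubuntu partition, chroot into it, and reinstall GRUB2 against the mapped device rather than /dev/sda. The device-mapper names below are taken from the dmraid output above and may differ on your system.

      sudo dmraid -ay                              # activate the nvidia_acajefec* mappings
      ls /dev/mapper/                              # confirm nvidia_acajefec1 shows up
      sudo mount /dev/mapper/nvidia_acajefec1 /mnt
      sudo mount --bind /dev  /mnt/dev
      sudo mount --bind /proc /mnt/proc
      sudo mount --bind /sys  /mnt/sys
      sudo chroot /mnt
      grub-install /dev/mapper/nvidia_acajefec     # install to the array, not a single disk
      update-grub
      exit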

    Read the article

  • 12.10 upgrade broke brightness keys [closed]

    - by Chris Morgan
    I have been running Ubuntu (64-bit) on my HP 6710b laptop (Core 2 Duo with integrated graphics) for several years, and the backlight brightness keys have always worked. Since I upgraded to Ubuntu 12.10 earlier today, those keys do not work any more. The secondary function keys: Fn+F3: sleep; still works (and considerably faster than ever before!) Fn+F8: battery info; still works Fn+F9: reduce brightness; stopped working in 12.10 Fn+F10: increase brightness; stopped working in 12.10 It may also be worth while mentioning that X does not appear to be receiving the brightness events at all, or at least not sending them out further. (This I detected with a key logger I wrote for a Uni project, which uses X's Record extension; it is informed of the sleep and battery info keystrokes, but doesn't receive the brightness ones at all.) In the mean time, I know that I can use the Brightness & Lock settings screen to alter the brightness. (Wow! I can suddenly make my backlight darker than I could before—I can go right down to turning the backlight off, something I couldn't do before... but this model has a fairly dim screen, so I don't expect to use that much, if ever.) How can I get the brightness keys working again? This question is probably strongly related to I can't control my Brightness in HP Compaq 6710s.

    Read the article

  • Input/Output console window in XNA

    - by Will Bagley
    I am currently making a simple game in XNA but am at a point where testing various aspects gets a bit tricky, especially when you have to wait until you have a score of 1000 to see if your animation is playing correctly, etc. Of course I could just edit the starting variable in the code before I launched, but I have recently been interested in trying to implement a console-style window which can print out values and take input to alter public variables during run-time. I am aware that VS has the Immediate Window, which achieves a similar thing, but I would prefer mine to be an actual part of the game, with the intention that the user may have limited access to it in the future. Some of the key things I have yet to find an answer to after looking around for a while are: how I would support free text entry; how I would access variables during runtime; how I would edit these variables. I have also read about using a property grid from Windows Forms apps (and partially reflection), which looked like it could simplify a lot of things, but I am not sure how I would get that running inside my XNA game window or how I would get it to not look out of place (as its visual aspect seems to be aimed just at development-time viewing). All in all I'm quite open to any suggestions on how to approach this task, as currently I'm not sure where to start. Thanks in advance.
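
    For the "access and edit variables at runtime" part, reflection is one way a console command like "set Score 1000" could be wired up. The sketch below is illustrative only; the type and member names are placeholders, not from the original project.

      using System;
      using System.Reflection;

      public static class ConsoleBinder
      {
          // Set a public field or writable property on 'target' by name,
          // converting the typed-in string to the member's type.
          public static void SetMember(object target, string name, string value)
          {
              Type type = target.GetType();

              FieldInfo field = type.GetField(name, BindingFlags.Public | BindingFlags.Instance);
              if (field != null)
              {
                  field.SetValue(target, Convert.ChangeType(value, field.FieldType));
                  return;
              }

              PropertyInfo prop = type.GetProperty(name, BindingFlags.Public | BindingFlags.Instance);
              if (prop != null && prop.CanWrite)
              {
                  prop.SetValue(target, Convert.ChangeType(value, prop.PropertyType), null);
              }
          }
      }

    Free text entry then comes down to buffering keyboard state changes into a string each frame and splitting the result into a command name and arguments.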

    Read the article

  • Software licensing template that gives room for restricting usage to certain industries/uses of software/source

    - by BSara
    *Why this question is not a duplicate of the questions specified as such: I did not ask if there was a license that restricted specific uses, and I did not ask if I could rewrite every line of any open source project. I asked very specifically: "Does there exist X? If not, can I Y with Z?". As far as I can tell, the two questions that were specified as duplicates do not answer my specific question. Please remove the duplicate status placed on the question. I'm developing some software that I would like to be "semi" open source. I would like to allow anyone to use my software/source unless they are using the software/source for certain purposes. For example, I don't want to allow usage of the software/source if it is being used to create, distribute, view or otherwise support pornography, illegal purposes, etc. I'm no lawyer and couldn't ever hope to write a license myself, nor do I have the time to figure out how best to do this. My question is this: Does there exist a freely available license, or a template for a license, that I can use to license my software under the conditions explained above, just like one can use the Creative Commons licenses? If not, am I allowed to just alter one of the Creative Commons licenses to meet my needs?

    Read the article

  • Do you leverage the benefits of the open-closed principle?

    - by Kaleb Pederson
    The open-closed principle (OCP) states that an object should be open for extension but closed for modification. I believe I understand it and use it in conjunction with SRP to create classes that do only one thing. And I try to create many small methods that make it possible to extract all the behavior controls into methods that may be extended or overridden in some subclass. Thus, I end up with classes that have many extension points, be it through dependency injection and composition, events, delegation, etc. Consider the following simple, extendable class: class PaycheckCalculator { // ... protected decimal GetOvertimeFactor() { return 2.0M; } } Now say, for example, that the OvertimeFactor changes to 1.5. Since the above class was designed to be extended, I can easily subclass and return a different OvertimeFactor. But... despite the class being designed for extension and adhering to OCP, I'll modify the single method in question, rather than subclassing and overriding the method in question and then re-wiring my objects in my IoC container. As a result I've violated part of what OCP attempts to accomplish. It feels like I'm just being lazy because the above is a bit easier. Am I misunderstanding OCP? Should I really be doing something different? Do you leverage the benefits of OCP differently? Update: based on the answers it looks like this contrived example is a poor one for a number of different reasons. The main intent of the example was to demonstrate that the class was designed to be extended by providing methods that, when overridden, would alter the behavior of public methods without the need for changing internal or private code. Still, I definitely misunderstood OCP.
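
    For comparison, the "extend, don't modify" route for the example above would look something like the sketch below (it assumes GetOvertimeFactor is declared virtual so it can be overridden):

      class PaycheckCalculator
      {
          // ...
          protected virtual decimal GetOvertimeFactor() { return 2.0M; }
      }

      class RevisedPaycheckCalculator : PaycheckCalculator
      {
          // The overtime rule changed; the base class stays untouched and the
          // IoC container is re-wired to hand out this subclass instead.
          protected override decimal GetOvertimeFactor() { return 1.5M; }
      }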

    Read the article

  • Does unit testing lead to premature generalization (specifically in the context of C++)?

    - by Martin
    Preliminary notes: I'll not go into the distinction between the different kinds of tests there are; there are already a few questions on these sites regarding that. I'll take what's there, which says: unit testing in the sense of "testing the smallest isolatable unit of an application", from which this question actually derives. The isolation problem: What is the smallest isolatable unit of a program? Well, as I see it, it (highly?) depends on what language you are coding in. Michael Feathers talks about the concept of a seam: [WEwLC, p31] A seam is a place where you can alter behavior in your program without editing in that place. And without going into the details, I understand a seam -- in the context of unit testing -- to be a place in a program where your "test" can interface with your "unit". Examples: Unit tests -- especially in C++ -- require the code under test to add more seams than would be strictly called for for a given problem. For example: adding a virtual interface where a non-virtual implementation would have been sufficient; splitting -- generalizing(?) -- a (smallish) class further "just" to facilitate adding a test; splitting a single-executable project into seemingly "independent" libs, "just" to facilitate compiling them independently for the tests. The question: I'll try a few versions that hopefully ask about the same point: Is the way that unit tests require one to structure an application's code "only" beneficial for the unit tests, or is it actually beneficial to the application's structure? Is the generalization the code needs to exhibit to be unit-testable useful for anything but the unit tests? Does adding unit tests force one to generalize unnecessarily? Is the shape unit tests force on code "always" also a good shape for the code in general as seen from the problem domain? I remember a rule of thumb that said don't generalize until you need to / until there's a second place that uses the code. With unit tests, there's always a second place that uses the code -- namely the unit test. So is this reason enough to generalize?
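
    To make the first example concrete, here is a minimal sketch (my own illustration, not from the question) of the kind of seam that unit testing tends to push into C++ code: an abstract interface and an injection point that exist mainly so a test can substitute a fake.

      #include <ctime>

      struct Clock {                        // seam: virtual interface added for testability
          virtual ~Clock() {}
          virtual long now() const = 0;
      };

      struct SystemClock : Clock {          // the one "real" implementation
          long now() const { return static_cast<long>(std::time(0)); }
      };

      class Scheduler {
      public:
          explicit Scheduler(const Clock& clock) : clock_(clock) {}   // injection point
          bool due(long deadline) const { return clock_.now() >= deadline; }
      private:
          const Clock& clock_;
      };

      // In a test, a FakeClock deriving from Clock can return any time it likes --
      // which is both the point of the seam and the cost the question asks about.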

    Read the article

  • How would I batch rename a lot of files using command-line?

    - by Whisperity
    I have a problem which I am unable to solve: I need to rename a great dump of files using patterns. I tried using this, but I always get an error. I have a folder, inside with a lot of files. Running ls -1 | wc -l, it returns that I have like 160000 files inside. The problem is that I wish to move these files to a Windows system, but most of them have characters like : and ? in them, which makes the files inaccessible on said Windows-based systems. (As a "do not solve but deal with" method, I tried booting up a LiveCD on the Windows system and moving the files using the live OS. Under that Ubuntu, the files were readable and writable on the mounted NTFS partition, but when I booted back into Windows, it showed that the files were there but Windows was unable to access them in any fashion: rename, delete or open.) I tried running rename 's/\:/_' * inside the folder, but I got an "Argument list too long" error. Some searching revealed that it happens because I have so many files, and then I arrived here. The problem is that I don't know how to alter the command to suit my needs, as I always end up with various errors. Trying find -name '*:*' | xargs rename : _ gives xargs: unmatched single quote; by default quotes are special to xargs unless you use the -0 option [\n] syntax error at (eval 1) line 1, near ":" [\n] xargs: rename: exited with status 255; aborting. Adding -0 after xargs turns the error message into xargs: argument line too long. These files are archive files generated by various PHP scripts. The best solution would be having a chance to rename them before they are moved to Windows, but if there is no way to do it, we might have a way to rename the files while they are moved to Windows. I use samba and proftpd to move the files. Unfortunately, graphical software is out of the question, as the server containing the files is what it is: a server, with only a command-line interface.
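
    One way around the "argument list too long" limit is to let find feed the names one at a time instead of expanding * on the shell command line. A rough sketch (assuming underscores are an acceptable replacement; test on a copy first):

      # Replace the characters Windows cannot handle, one file at a time.
      find . -maxdepth 1 -type f -name '*[:?]*' -print0 |
      while IFS= read -r -d '' f; do
          new=$(printf '%s' "$f" | tr ':?' '__')
          mv -- "$f" "$new"
      done

    The Perl rename that Ubuntu ships should also work in the same pipeline (find . -maxdepth 1 -name '*[:?]*' -print0 | xargs -0 rename 's/[:?]/_/g'); the earlier attempt failed mainly because rename : _ is not a valid Perl substitution expression.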

    Read the article

  • Are long methods always bad?

    - by wobbily_col
    So looking around earlier I noticed some comments about long methods being bad practice. I am not sure I always agree that long methods are bad (and would like opinions from others). For example, I have some Django views that do a bit of processing of the objects before sending them to the view, a long method being 350 lines of code. I have my code written so that it deals with the parameters - sorting / filtering the queryset, then bit by bit does some processing on the objects my query has returned. So the processing is mainly conditional aggregation that has complex enough rules that it can't easily be done in the database, so I have some variables declared outside the main loop that get altered during the loop. variable_1 = 0 variable_2 = 0 for object in queryset: if object.condition_a and variable_2 > 0: variable_1 += 1 ... more conditions to alter the variables ... return queryset, context So according to the theory I should factor out all the code into smaller methods, so that the view method is at most one page long. However, having worked on various code bases in the past, I sometimes find this makes the code less readable, when you need to constantly jump from one method to the next to figure out all the parts of it while keeping the outermost method in your head. I find that with a long method that is well formatted, you can see the logic more easily, as it isn't hidden away in inner methods. I could factor out the code into smaller methods, but often there is an inner loop being used for two or three things, so it would result in more complex code, or methods that don't do one thing but two or three (alternatively I could repeat inner loops for each task, but then there will be a performance hit). So is there a case that long methods are not always bad? Is there always a case for writing separate methods when they will only be used in one place?
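
    As an illustration of one middle ground (my own sketch, with placeholder names rather than the real conditions): the aggregation state can be moved into a small helper object so the loop stays single-pass, while each cluster of related conditions gets its own named method.

      class Totals:
          def __init__(self):
              self.variable_1 = 0
              self.variable_2 = 0

          def update(self, obj):
              # one small, named method per cluster of related conditions
              if obj.condition_a and self.variable_2 > 0:
                  self.variable_1 += 1
              # ... further conditions alter the other counters ...

      def process(queryset):
          totals = Totals()
          for obj in queryset:
              totals.update(obj)
          return queryset, totals

    The view stays short, the loop still runs once, and the conditional rules end up grouped under names that can be read (and tested) on their own.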

    Read the article

  • Best way to mask 2D sprites in XNA?

    - by electroflame
    I currently am trying to mask some sprites. Rather than explaining it in words, I've made up some example pictures: The area to mask (in white) Now, the red sprite that needs to be cropped. The final result. Now, I'm aware that in XNA you can do two things to accomplish this: Use the Stencil Buffer. Use a Pixel Shader. I have tried to do a pixel shader, which essentially did this: float4 main(float2 texCoord : TEXCOORD0) : COLOR0 { float4 tex = tex2D(BaseTexture, texCoord); float4 bitMask = tex2D(MaskTexture, texCoord); if (bitMask.a > 0) { return float4(tex.r, tex.g, tex.b, tex.a); } else { return float4(0, 0, 0, 0); } } This seems to crop the images (albeit, not correct once the image starts to move), but my problem is that the images are constantly moving (they aren't static), so this cropping needs to be dynamic. Is there a way I could alter the shader code to take into account it's position? Alternatively, I've read about using the Stencil Buffer, but most of the samples seem to hinge on using a rendertarget, which I really don't want to do. (I'm already using 3 or 4 for the rest of the game, and adding another one on top of it seems overkill) The only tutorial I've found that doesn't use Rendertargets is one from Shawn Hargreaves' blog over here. The issue with that one, though is that it's for XNA 3.1, and doesn't seem to translate well to XNA 4.0. It seems to me that the pixel shader is the way to go, but I'm unsure of how to get the positioning correct. I believe I would have to change my onscreen coordinates (something like 500, 500) to be between 0 and 1 for the shader coordinates. My only problem is trying to work out how to correctly use the transformed coordinates. Thanks in advance for any help!
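
    One way to make the mask position-aware is to pass the sprite's and the mask's screen rectangles into the effect each frame and recompute the mask texture coordinate per pixel. The sketch below is an untested illustration along those lines (the parameter names are mine; it assumes the same BaseTexture/MaskTexture samplers as the shader above, with the new float2 parameters set from C# via Effect.Parameters before each draw):

      float2 SpritePosition;   // top-left of the sprite on screen, in pixels
      float2 SpriteSize;       // sprite width/height in pixels
      float2 MaskPosition;     // top-left of the mask area, in pixels
      float2 MaskSize;         // mask width/height in pixels

      float4 main(float2 texCoord : TEXCOORD0) : COLOR0
      {
          float4 tex = tex2D(BaseTexture, texCoord);

          // where this pixel of the sprite actually lands on screen
          float2 screenPos = SpritePosition + texCoord * SpriteSize;

          // the matching coordinate inside the mask texture
          float2 maskCoord = (screenPos - MaskPosition) / MaskSize;

          // outside the mask rectangle counts as masked out
          if (maskCoord.x < 0 || maskCoord.x > 1 ||
              maskCoord.y < 0 || maskCoord.y > 1)
          {
              return float4(0, 0, 0, 0);
          }

          float4 bitMask = tex2D(MaskTexture, maskCoord);
          return (bitMask.a > 0) ? tex : float4(0, 0, 0, 0);
      }

    This only holds for unrotated, unscaled draws; once sprites rotate, passing a full transform matrix (or falling back to the stencil buffer) becomes the cleaner option.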

    Read the article

  • SQL Server Database Settings

    - by rbishop
    For those using Data Relationship Management on Oracle DB this does not apply, but for those using Microsoft SQL Server it is highly recommended that you run with Snapshot Isolation Mode. The Data Governance module will not function correctly without this mode enabled. All new Data Relationship Management repositories are created with this mode enabled by default. This mode makes SQL Server (2005+) behave more like Oracle DB, where readers simply see older versions of rows while a write is in progress, instead of readers being blocked by locks while a write takes place. Many common sources of deadlocks are eliminated. For example, if one user starts a 5-minute transaction updating half the rows in a table, without snapshot isolation everyone else reading the table will be blocked waiting. With snapshot isolation, they will see the rows as they were before the write transaction started. Conversely, if the readers had started first, the writer won't be stuck waiting for them to finish reading... the writes can begin immediately without affecting the current transactions. To make this change, make sure no one is using the target database (e.g. put it into single-user mode), then run these commands: ALTER DATABASE [DB] SET ALLOW_SNAPSHOT_ISOLATION ON; ALTER DATABASE [DB] SET READ_COMMITTED_SNAPSHOT ON; Please make sure you coordinate with your DBA team to ensure tempdb is appropriately set up to support snapshot isolation mode, as the extra row versions are stored in tempdb until the transactions are committed. Let me take this opportunity to very strongly recommend that you use solid state storage for your databases with appropriate iSCSI, FiberChannel, or SAN bandwidth. The performance gains are significant and there is no excuse for not using 100% solid state storage in 2013. Actually, unless you need to store petabytes of archival data, there is no excuse for using hard drives in any systems, whether laptops, desktops, application servers, or database servers. The productivity benefits alone are tremendous, not to mention power consumption, heat, etc.
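
    As a sketch of the full sequence (with [DB] standing in for your repository database name, and the single-user step from the note above included so the READ_COMMITTED_SNAPSHOT change is not blocked by open connections):

      ALTER DATABASE [DB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
      ALTER DATABASE [DB] SET ALLOW_SNAPSHOT_ISOLATION ON;
      ALTER DATABASE [DB] SET READ_COMMITTED_SNAPSHOT ON;
      ALTER DATABASE [DB] SET MULTI_USER;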

    Read the article

  • 12c - Invisible Columns...

    - by noreply(at)blogger.com (Thomas Kyte)
    Remember when 11g first came out and we had "invisible indexes"?  It seemed like a confusing feature - indexes that would be maintained by modifications (hence slowing them down), but would not be used by queries (hence never speeding them up).  But - after you looked at them a while, you could see how they can be useful.  For example - to add an index in a running production system, an index used by the next version of the code to be introduced later that week - but not tested against the queries in version one of the application in place now.  We all know that when you add an index - one of three things can happen - a given query will go much faster, it won't affect a given query at all, or... it will make some untested query go much much slower than it used to.  So - invisible indexes allowed us to modify the schema in a 'safe' manner - hiding the change until we were ready for it.

    Invisible columns accomplish the same thing - the ability to introduce a change while minimizing any negative side effects of that change.  Normally when you add a column to a table - any program with a SELECT * would start seeing that column, and programs with an INSERT INTO T VALUES (...) would pretty much immediately break (an INSERT without a list of columns in it).  Now we can add a column to a table in an invisible fashion: the column will not show up in a DESCRIBE command in SQL*Plus, it will not be returned with a SELECT *, it will not be considered in an INSERT INTO T VALUES statement.  It can be accessed by any query that asks for it, it can be populated by an INSERT statement that references it, but you won't see it otherwise.

    For example, let's start with a simple two column table:

      ops$tkyte%ORA12CR1> create table t ( x int, y int );
      Table created.

      ops$tkyte%ORA12CR1> insert into t values ( 1, 2 );
      1 row created.

    Now, we will add an invisible column to it:

      ops$tkyte%ORA12CR1> alter table t add ( z int INVISIBLE );
      Table altered.

    Notice that a DESCRIBE will not show us this column:

      ops$tkyte%ORA12CR1> desc t
       Name              Null?    Type
       ----------------- -------- ------------
       X                          NUMBER(38)
       Y                          NUMBER(38)

    and existing inserts are unaffected by it:

      ops$tkyte%ORA12CR1> insert into t values ( 3, 4 );
      1 row created.

    A SELECT * won't see it either:

      ops$tkyte%ORA12CR1> select * from t;

               X          Y
      ---------- ----------
               1          2
               3          4

    But we have full access to it (in well written programs! The ones that use a column list in the insert and select - never relying on "defaults"):

      ops$tkyte%ORA12CR1> insert into t (x,y,z) values ( 5,6,7 );
      1 row created.

      ops$tkyte%ORA12CR1> select x, y, z from t;

               X          Y          Z
      ---------- ---------- ----------
               1          2
               3          4
               5          6          7

    and when we are sure that we are ready to go with this column, we can just modify it:

      ops$tkyte%ORA12CR1> alter table t modify z visible;
      Table altered.

      ops$tkyte%ORA12CR1> select * from t;

               X          Y          Z
      ---------- ---------- ----------
               1          2
               3          4
               5          6          7

    I will say that a better approach to this - one that is available in 11gR2 and above - would be to use editioning views (part of Edition Based Redefinition - EBR).
    I would rather use EBR over this approach, but in an environment where EBR is not being used, or the editioning views are not in place, this will achieve much the same. Read these for information on EBR:
    http://www.oracle.com/technetwork/issue-archive/2010/10-jan/o10asktom-172777.html
    http://www.oracle.com/technetwork/issue-archive/2010/10-mar/o20asktom-098897.html
    http://www.oracle.com/technetwork/issue-archive/2010/10-may/o30asktom-082672.html

    Read the article

  • Should I, and how do I incorporate microdata into my asp.net website with 47 pages?

    - by Jason Weber
    I have an asp.net (vb) website with 47 pages. The problem is that it's in 10 different languages, although 98% just use English. I have 5 master pages. I've read Google Webmaster Tools, but I'm still confounded. I'm reading about how microdata is the way to go. Does this mean I should put itemtype and itemprop span and div tags in my master pages, or should I do all of my 47 pages (.resx resource files) separately? The main key phrase I want throughout search results is "machine vision". For instance, the first couple of sentences on my "about.aspx" page are: <span itemprop="name">USS Vision Inc.</span> (USS) is a privately-owned company with headquarters in <span itemprop="locality">Detroit, Michigan, USA</span>. We design, engineer, produce, and integrate special machine vision error-proofing products and <a href="http://www.ussvision.com/services/" target="_self" itemprop="url">services</a> that create lean factories by improving the quality of manufactured products, and by significantly reducing manufacturing costs through advanced automation. Am I doing this right, or how would I do this if I'm not? Should I use the itemprop="url" or other rich snippets for every link in my website? I mean, do I need to add an itemprop to just about everything, or can I just alter my master pages? Any guidance in this regard to help improve my SEO and SERPS would be greatly appreciated!
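
    One detail worth knowing before deciding where the markup lives: itemprop attributes only mean something inside an element that carries itemscope/itemtype, so the fragment above needs an enclosing scope. A hedged sketch of how that about.aspx fragment might look (schema.org/Organization chosen here purely as an illustration):

      <!-- itemscope/itemtype establish what the itemprops describe -->
      <div itemscope itemtype="http://schema.org/Organization">
        <span itemprop="name">USS Vision Inc.</span> (USS) is a privately-owned company
        with headquarters in
        <span itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
          <span itemprop="addressLocality">Detroit</span>,
          <span itemprop="addressRegion">Michigan</span>, USA</span>.
        We design, engineer, produce, and integrate machine vision error-proofing
        products and
        <a href="http://www.ussvision.com/services/" itemprop="url">services</a>.
      </div>

    Since master pages only wrap the shared layout, a scope like this generally belongs in the content page (or resource) that actually holds the organization text, with the master page reserved for site-wide items such as the logo and navigation.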

    Read the article

  • TFS SQL Deployment Data Script

    - by Greg
    We are using TFS and SQL 2005 (looking to upgrade to SQL 2012 if that makes a difference). We store our database schema in a Visual Studio Database project (VS 2010). When code is released to live we currently use the Visual Studio Database Project to build a script for all our schema changes. The problem we have been getting is having to alter or add to that script to add/fix data for the deployment. For example if we add a new non-nullable column to an existing table we need to populate that column with data during the insert. Other times we may want to create new records in transactional tables (e.g. assign specific users to a new security access). Do Visual Studio Database Projects have a way to store these scripts that only need to be run once and somehow include them in the build? Does it know which scripts need to be run (for example if we are inserting default data we don't want to do that again a second time)? OR Is there a better way to manage these scripts?
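
    For what it's worth, Visual Studio database projects of that generation do provide a post-deployment script hook (Scripts\Post-Deployment\Script.PostDeployment.sql, which can :r-include other files), but it runs on every deployment rather than only once, so data scripts placed there are usually written to be idempotent. A sketch of that guard style (the table and column names below are invented for illustration):

      -- Seed a reference row only if it is not already there.
      IF NOT EXISTS (SELECT 1 FROM dbo.SecurityAccess WHERE AccessName = 'NewFeature')
      BEGIN
          INSERT INTO dbo.SecurityAccess (AccessName, CreatedOn)
          VALUES ('NewFeature', GETDATE());
      END

      -- Backfill a column that was just added with a temporary placeholder default.
      UPDATE dbo.Customer
      SET    Region = 'Unknown'
      WHERE  Region = '';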

    Read the article

  • Artists and music - I need help deciding which CMS to use

    - by infty
    A friend has asked me to build a site with the following options: staff members must be able to add new music and artists to the page; a gallery must be provided - it is also good if each artist has the ability to have his/her own, smaller, gallery; users must be able to vote for artists; users must be able to alter in discussions (forums or comments sections); staff members must be able to blog; staff members must be able to write articles. I did a small project where I actually implemented all of these features, but I want to use an existing content management system for all of these features so that future developers can, hopefully, more easily extend the website, and also so that I don't have to provide too much documentation. I have never developed a website using an external CMS like Drupal or Wordpress, and after seeing hours of tutorial videos of both systems, I still can't make up my mind on whether I should: a) use Drupal 7, b) use Wordpress 3, or c) create my own CMS. I can only imagine that staff members would also want to create content using iPhone or Android based mobile devices, but this is not a required feature. Can someone with experience please tell me about their experiences with bigger projects like this? The site will have a total of approx. 400,000 - 500,000 visitors (not daily visitors, based on numbers from last year in a period of 4 months).

    Read the article

  • How do I break down and plan a personal programming project?

    - by Pureferret
    I've just started a programming job where I'm applying my 'How to code' knowledge to what I'm being taught of 'How to Program' (They are different!). As part of this, I've been taught how to capture requirements from clients before starting a new project. But... How do I do this for a nebulous personal project? I say nebulous, as I often find halfway through programming something, I want to expand what my program will do, or alter the result. Eventually, I'm tangled in code and have to restart. This can be frustrating and off-putting. Conversely, when given a fixed task and fixed requirements, it's much easier to dig in and get it done. At work I might be told "Today/This week you need to add XYZ to program 1" That is easy to do. At home (for fun) I want to make, say, a program that creates arbitrary lists. It's a very generic task. How do I start with that? I don't need it to do anything, but I want it to do something. So how do I plan a personal programming project? Related: What to plan before starting development on a project?

    Read the article

  • Exadata???DiskGroup

    - by Liu Maclean(???)
    Exadata???Asm Diskgroup ???????: 1.??dcli -g /home/oracle/cell_group -l root cellcli -e list griddisk ????active?griddisk [root@dm01db01 ~]# dcli -g /home/oracle/cell_group -l root cellcli -e list griddisk dm01cel01: DATA_DM01_CD_00_dm01cel01 active dm01cel01: DATA_DM01_CD_01_dm01cel01 active dm01cel01: DATA_DM01_CD_02_dm01cel01 active dm01cel01: DATA_DM01_CD_03_dm01cel01 active dm01cel01: DATA_DM01_CD_04_dm01cel01 active dm01cel01: DATA_DM01_CD_05_dm01cel01 active dm01cel01: DATA_DM01_CD_06_dm01cel01 active dm01cel01: DATA_DM01_CD_07_dm01cel01 active dm01cel01: DATA_DM01_CD_08_dm01cel01 active dm01cel01: DATA_DM01_CD_09_dm01cel01 active dm01cel01: DATA_DM01_CD_10_dm01cel01 active dm01cel01: DATA_DM01_CD_11_dm01cel01 active dm01cel01: DBFS_DG_CD_02_dm01cel01 active dm01cel01: DBFS_DG_CD_03_dm01cel01 active dm01cel01: DBFS_DG_CD_04_dm01cel01 active dm01cel01: DBFS_DG_CD_05_dm01cel01 active dm01cel01: DBFS_DG_CD_06_dm01cel01 active dm01cel01: DBFS_DG_CD_07_dm01cel01 active dm01cel01: DBFS_DG_CD_08_dm01cel01 active dm01cel01: DBFS_DG_CD_09_dm01cel01 active dm01cel01: DBFS_DG_CD_10_dm01cel01 active dm01cel01: DBFS_DG_CD_11_dm01cel01 active dm01cel01: RECO_DM01_CD_00_dm01cel01 active dm01cel01: RECO_DM01_CD_01_dm01cel01 active dm01cel01: RECO_DM01_CD_02_dm01cel01 active dm01cel01: RECO_DM01_CD_03_dm01cel01 active dm01cel01: RECO_DM01_CD_04_dm01cel01 active dm01cel01: RECO_DM01_CD_05_dm01cel01 active dm01cel01: RECO_DM01_CD_06_dm01cel01 active dm01cel01: RECO_DM01_CD_07_dm01cel01 active dm01cel01: RECO_DM01_CD_08_dm01cel01 active dm01cel01: RECO_DM01_CD_09_dm01cel01 active dm01cel01: RECO_DM01_CD_10_dm01cel01 active dm01cel01: RECO_DM01_CD_11_dm01cel01 active dm01cel02: DATA_DM01_CD_00_dm01cel02 active dm01cel02: DATA_DM01_CD_01_dm01cel02 active dm01cel02: DATA_DM01_CD_02_dm01cel02 active dm01cel02: DATA_DM01_CD_03_dm01cel02 active dm01cel02: DATA_DM01_CD_04_dm01cel02 active dm01cel02: DATA_DM01_CD_05_dm01cel02 active dm01cel02: DATA_DM01_CD_06_dm01cel02 active dm01cel02: DATA_DM01_CD_07_dm01cel02 active dm01cel02: DATA_DM01_CD_08_dm01cel02 active dm01cel02: DATA_DM01_CD_09_dm01cel02 active dm01cel02: DATA_DM01_CD_10_dm01cel02 active dm01cel02: DATA_DM01_CD_11_dm01cel02 active dm01cel02: DBFS_DG_CD_02_dm01cel02 active dm01cel02: DBFS_DG_CD_03_dm01cel02 active dm01cel02: DBFS_DG_CD_04_dm01cel02 active dm01cel02: DBFS_DG_CD_05_dm01cel02 active dm01cel02: DBFS_DG_CD_06_dm01cel02 active dm01cel02: DBFS_DG_CD_07_dm01cel02 active dm01cel02: DBFS_DG_CD_08_dm01cel02 active dm01cel02: DBFS_DG_CD_09_dm01cel02 active dm01cel02: DBFS_DG_CD_10_dm01cel02 active dm01cel02: DBFS_DG_CD_11_dm01cel02 active dm01cel02: RECO_DM01_CD_00_dm01cel02 active dm01cel02: RECO_DM01_CD_01_dm01cel02 active dm01cel02: RECO_DM01_CD_02_dm01cel02 active dm01cel02: RECO_DM01_CD_03_dm01cel02 active dm01cel02: RECO_DM01_CD_04_dm01cel02 active dm01cel02: RECO_DM01_CD_05_dm01cel02 active dm01cel02: RECO_DM01_CD_06_dm01cel02 active dm01cel02: RECO_DM01_CD_07_dm01cel02 active dm01cel02: RECO_DM01_CD_08_dm01cel02 active dm01cel02: RECO_DM01_CD_09_dm01cel02 active dm01cel02: RECO_DM01_CD_10_dm01cel02 active dm01cel02: RECO_DM01_CD_11_dm01cel02 active dm01cel03: DATA_DM01_CD_00_dm01cel03 active dm01cel03: DATA_DM01_CD_01_dm01cel03 active dm01cel03: DATA_DM01_CD_02_dm01cel03 active dm01cel03: DATA_DM01_CD_03_dm01cel03 active dm01cel03: DATA_DM01_CD_04_dm01cel03 active dm01cel03: DATA_DM01_CD_05_dm01cel03 active dm01cel03: DATA_DM01_CD_06_dm01cel03 active dm01cel03: DATA_DM01_CD_07_dm01cel03 active dm01cel03: DATA_DM01_CD_08_dm01cel03 
active dm01cel03: DATA_DM01_CD_09_dm01cel03 active dm01cel03: DATA_DM01_CD_10_dm01cel03 active dm01cel03: DATA_DM01_CD_11_dm01cel03 active dm01cel03: DBFS_DG_CD_02_dm01cel03 active dm01cel03: DBFS_DG_CD_03_dm01cel03 active dm01cel03: DBFS_DG_CD_04_dm01cel03 active dm01cel03: DBFS_DG_CD_05_dm01cel03 active dm01cel03: DBFS_DG_CD_06_dm01cel03 active dm01cel03: DBFS_DG_CD_07_dm01cel03 active dm01cel03: DBFS_DG_CD_08_dm01cel03 active dm01cel03: DBFS_DG_CD_09_dm01cel03 active dm01cel03: DBFS_DG_CD_10_dm01cel03 active dm01cel03: DBFS_DG_CD_11_dm01cel03 active dm01cel03: RECO_DM01_CD_00_dm01cel03 active dm01cel03: RECO_DM01_CD_01_dm01cel03 active dm01cel03: RECO_DM01_CD_02_dm01cel03 active dm01cel03: RECO_DM01_CD_03_dm01cel03 active dm01cel03: RECO_DM01_CD_04_dm01cel03 active dm01cel03: RECO_DM01_CD_05_dm01cel03 active dm01cel03: RECO_DM01_CD_06_dm01cel03 active dm01cel03: RECO_DM01_CD_07_dm01cel03 active dm01cel03: RECO_DM01_CD_08_dm01cel03 active dm01cel03: RECO_DM01_CD_09_dm01cel03 active dm01cel03: RECO_DM01_CD_10_dm01cel03 active dm01cel03: RECO_DM01_CD_11_dm01cel03 active ??????????griddisk, ?????’cellcli -e drop griddisk’ ?’cellcli -e create griddisk’????griddisk ,??????drop DBFS_DG???griddisk 2.??ASM???create disk group ?????CELL?IP,????????????? [root@dm01db02 ~]# cat /etc/oracle/cell/network-config/cellip.ora cell="192.168.64.131" cell="192.168.64.132" cell="192.168.64.133" SQL> create diskgroup DATA_MAC normal redundancy 2 DISK 3 'o/192.168.64.131/RECO_DM01_CD_*_dm01cel01' 4 ,'o/192.168.64.132/RECO_DM01_CD_*_dm01cel02' 5 ,'o/192.168.64.133/RECO_DM01_CD_*_dm01cel03' 6 attribute 7 'AU_SIZE'='4M', 8 'CELL.SMART_SCAN_CAPABLE'='TRUE', 9 'compatible.rdbms'='11.2.0.2', 10 'compatible.asm'='11.2.0.2' 11 / 3. MOUNT ???DISKGROUP ALTER DISKGROUP DATA_MAC mount ; 4.???crsctl start/stop resource ora.DATA_MAC.dg ?????

    Read the article

  • Translating an object along its heading

    - by Kuros
    I am working on a simulation that requires me to have several objects moving around in 3D space (text output of their current position on the grid and heading is fine, I do not need graphics), and I am having some trouble getting objects to move along their relative headings. I have a basic understanding of vectors and matrices. I am using a vector to represent their position, and I am also using Euler angles. I can translate one of my entities with a matrix along whatever axis, and I can alter their heading. For example, if I have an entity at (order is XYZ) 1, 1, 1, with a heading of 0, I can apply a translation matrix to get it to walk to 1, 1, 2 fine. However, if I change its heading to 270, it still walks to 1, 1, 3, instead of 2, 1, 2 as I desire. I have a feeling that my problem lies in not translating my matrix from world space to object space, but I am not sure how to go about that. How can I do this? Addition: I am using 3D vectors to represent their current position and their heading (using the three Euler angles). For now, all I want to do is have an entity walk in a square, reporting its current position at each step. So, assuming it starts at 10, 10, 10, I want it to walk as follows:
      10, 10, 10 -> 10, 10, 15
      10, 10, 15 ->  5, 10, 15
       5, 10, 15 ->  5, 10, 10
       5, 10, 10 -> 10, 10, 10
    My 1 Z unit translation matrix is as follows:
      [1 0 0 0]
      [0 1 0 0]
      [0 0 1 1]
      [0 0 0 1]
    My rotation matrix is as follows:
      [ 0 0 1 0]
      [ 0 1 0 0]
      [-1 0 0 0]
      [ 0 0 0 1]
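
    The usual fix is to rotate the local "move forward" offset by the entity's heading before adding it to the position (i.e. apply the rotation to the translation, not the other way round). A small illustrative sketch, with the sign convention picked so that it reproduces the square above; your own convention may flip a sign depending on handedness and rotation direction:

      import math

      def step_forward(position, yaw_degrees, distance=1.0):
          """Move along the entity's own heading: rotate the local +Z step into world space."""
          yaw = math.radians(yaw_degrees)
          dx = -math.sin(yaw) * distance
          dz =  math.cos(yaw) * distance
          x, y, z = position
          return (x + dx, y, z + dz)

      # Walking the square from the question (headings 0, 90, 180, 270, five units each):
      pos = (10.0, 10.0, 10.0)
      for heading in (0, 90, 180, 270):
          pos = step_forward(pos, heading, 5.0)
          print(pos)   # roughly (10,10,15), (5,10,15), (5,10,10), (10,10,10)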

    Read the article

  • 2 folders in Sys/Class/Backlight?

    - by zebrapie
    ISSUE: Backlight brightness does not change. More detail: brightness will not change using either 'System Settings - Screen' or the Fn keys (the brightness bar shows and moves, but the screen brightness does not change). Noticed a post in this thread (http://ubuntuforums.org/showthread.php?t=1866283) about having multiple folders in /sys/class/backlight... I HAVE TWO FOLDERS TOO: 'intel_backlight' and 'acpi_video0'. Using the function keys alters the value in acpi_video0's 'brightness' file, but doesn't actually alter the brightness of the screen. If I add 'backlight=vendor' in GRUB, my function keys then edit the value in the 'intel_backlight' brightness file, but again it doesn't actually change the brightness of the screen. Computer: Fujitsu Siemens Pi2515, Intel integrated graphics, no HDD partition. Already tried: -Editing grub to contain: acpi_osi=Linux acpi_backlight=vendor -http://ubuntuguide.net/change-screen-brightness-with-fn-key-in-ubuntu-11-0410-10 -sudo apt-get install acpi -$ sudo setpci -s 00:02.0 F4.B=20 -Brightness does not adjust in fallback mode either. -Reinstalling the OS, using Linux Mint (same problem). -Upgrading and downgrading the BIOS. Many thanks for reading; I understand this problem may need a bit of a Linux pro to sort. If anyone's up for the challenge I'll spend any amount of time being walked through this, posting results. Don't want to give up here!
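
    A quick way to find out which of the two interfaces actually drives the panel is to write to each one by hand and watch the screen (the values below are only examples; check max_brightness first, as the scales differ between the two folders):

      cat /sys/class/backlight/intel_backlight/max_brightness
      echo 500 | sudo tee /sys/class/backlight/intel_backlight/brightness

      cat /sys/class/backlight/acpi_video0/max_brightness
      echo 5 | sudo tee /sys/class/backlight/acpi_video0/brightness

    If one of them visibly changes the backlight while the other only changes the stored number, that tells you which interface the Fn keys need to end up pointing at.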

    Read the article

  • How difficult is it for an artist to make their own artwork cohesive with another artist's style? [on hold]

    - by user36200
    I have a lot of artwork I purchased from a website, but the artist who drew the game assets is unavailable. I need to create additional artwork which fits with this style, but I am not an artist- nor do I have any idea of how artists work. Obviously, the solution is to find a new artist, who I can pay to draw this artwork while keeping it to look at certain way. I am scared of wasting money though. I don't want to contract an artist, only to find out it is extremely difficult for someone to match another person's art style. I don't need it to be identical, I just need it to be cohesive. I also want to know what I'm asking of people, before I ask them. Artists are workers just like me, and deserve to be understood when contracted. As an artist, is it extra difficult or time consuming to alter your artwork to match a certain style? Does it require a lot of talent to make a cohesive piece of art? To be specific, I am talking about structures, such as 2D symbols of towns for a map. They all have this "gritty" penciling effect to them and lots of saturated colors, which is what will be required when I say "cohesive". Along with looking like the architecture belongs in the same world.

    Read the article

  • Artists and music - Need Help Deciding on a CMS

    - by infty
    A friend has asked me to build a site with the following options: staff members must be able to add new music and artists to the page a gallery must be provided - it is also good if each artist has the ability to have his/her own smaller gallery users must be able to vote for artists users must be able to alter in discussions (forums or comments sections) staff members must be able to blog staff members must be able to write articles I did a small project where i actually implemented all of these features, but I want to use an existing content management system for all of these features so that future developers can, hopefully, more easy extend the website. And also, so that I don't have to provide too much documentation. I have never developed a website using an external CMS like Drupal or Wordpress and after seeing hours of tutorial videos of both systems, I still can't make up my mind on whether i should : a) use Drupal 7 b) use Wordpress 3 c) create my own cms I can imagine that staff members would also want to create content using iPhone or android based mobile devices, but this is not a required feature. Can someone, with experience, please tell me about their experiences with larger projects like this? The site will have approximately 400 000 - 500 000 visitors (not daily visitors, based on numbers from last year in a period of 4 months)

    Read the article

  • How to input data into user-defined variables in a MySQL query

    - by user292791
    Simple shell script: echo "Enter 1 for month of March" echo "Enter 2 for month of April" echo "Enter 3 for month of May" read Month case "$Month" in 1) echo "enter establishment name" read a; mysql -u root -p $a < "March.sql";; 2) echo "enter establishment name" read b; mysql -u root -p $b < "April.sql";; 3) echo "enter establishment name" read c; mysql -u root -p $c < "May.sql";; esac done In this I have three other query files: March.sql, April.sql, May.sql. I'm linking these in the shell script. Example of a .sql file: SELECT DISTINCT substr( a.case_no, 3, 2 ), b.case_type, b.type_name, a.case_no into outfile '/tmp/April.csv' FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\r\n' FROM Civil_t AS a, Case_type_t AS b, disposal_proc AS c WHERE substr( a.case_no, 3, 2 ) = b.case_type AND a.date_of_decision BETWEEN '2014-04-01' AND '2014-04-30' AND a.case_no = c.case_no AND a.court_no =1; I have to alter the .sql script every time. Is there any method to read the variables from the shell script and use them in MySQL? For example: echo "enter date"; read a  # input date. Now I have read a date and I want to use it in the March.sql query in the WHERE clause. Is there any method of using this variable in the .sql query?
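
    One common way to do this is to keep a single parameterised query in the shell script itself and interpolate the shell variables into it, instead of one .sql file per month. A sketch along those lines, based on the April.sql shown above (the INTO OUTFILE clause is left out for brevity):

      #!/bin/bash
      echo "enter start date (YYYY-MM-DD)"; read start_date
      echo "enter end date (YYYY-MM-DD)";   read end_date
      echo "enter establishment name";      read dbname

      # The query is a double-quoted string, so $start_date and $end_date
      # are expanded by the shell before mysql ever sees the SQL.
      mysql -u root -p "$dbname" -e "
        SELECT DISTINCT substr(a.case_no, 3, 2), b.case_type, b.type_name, a.case_no
          FROM Civil_t AS a, Case_type_t AS b, disposal_proc AS c
         WHERE substr(a.case_no, 3, 2) = b.case_type
           AND a.date_of_decision BETWEEN '$start_date' AND '$end_date'
           AND a.case_no = c.case_no
           AND a.court_no = 1;"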

    Read the article

  • How to pass dynamic information between a form and a service? [closed]

    - by qminator
    I have a design problem, and hopefully the braintrust that is Stack Exchange can help. I have a generic form which loads a dataset and displays it. It never has direct knowledge of what the dataset contains, but it can pass it to a service for manipulation (via an OnClick event, for example). However, the form might need to alter its behaviour based on the manipulation by the service. Example: the service realises this dataset requires the user to send an email, and needs to send an instruction to the form to open up a mail form. My idea is this: I'm thinking about passing back some type of key/name dictionary filled with the commands which the service requires. They could then be interpreted by the form without it needing to reference anything specific. Example: if the service decides that the dataset needs to refresh, it would send back a key/name pair, and I might even be able to chain commands - refreshing the dataset and sending a mail: Refresh / "Foo", Mail / "[email protected]". The form would reference an action explicitly (Refresh or Mail) but not the instructions themselves. Is this a valid idea or am I wasting time?
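
    Sketched in code, the idea might look like the fragment below (C# chosen only for illustration; all type and member names are invented). Using an ordered list of command/argument pairs rather than a dictionary keeps chained and repeated commands possible:

      using System.Collections.Generic;

      public enum FormCommand { Refresh, Mail }

      public class ServiceResult
      {
          public ServiceResult()
          {
              Commands = new List<KeyValuePair<FormCommand, string>>();
          }

          // ordered, so "Refresh then Mail" arrives in that order
          public List<KeyValuePair<FormCommand, string>> Commands { get; private set; }
      }

      public class GenericForm
      {
          public void Apply(ServiceResult result)
          {
              foreach (var command in result.Commands)
              {
                  switch (command.Key)
                  {
                      case FormCommand.Refresh: ReloadDataset(command.Value); break;  // e.g. "Foo"
                      case FormCommand.Mail:    OpenMailForm(command.Value);  break;  // e.g. "[email protected]"
                  }
              }
          }

          private void ReloadDataset(string name) { /* placeholder */ }
          private void OpenMailForm(string address) { /* placeholder */ }
      }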

    Read the article

  • What is a quick and easy way to make a minimal news blog that pulls rss feeds? I have 2 days [on hold]

    - by user44188
    My boss wants me to make a website that pulls news from various rss feeds from all over the web. I need to pull something together, that looks pro, quick! I started by going to themeforest and looked around forever, but nothing really looked right. I need something mostly built like this already that I can just alter into our site. I can do most cms, photoshop, some code, I used to do it like this freelance years ago, but it's not really my job now. It just sort of came up suddenly, so I wanna pull through. This is a good example of the overall structure I had in mind, but it just isn't clean enough. All of the news feeds will essentially be about the same criteria, but will pertain to different geographic areas. It would be a huge plus if I could segregate the news visually in some clever way based on geography. (Like a map?) I'm definitely open to all suggestions. I have to get this done by friday!

    Read the article

  • Quit job for another but current employer doesn't want to lose me. Would it be a bad idea to stay?

    - by Confused
    So I've handed in my notice at my current job as I've been offered a job at another company. However, my current employer doesn't want to lose me and they want to know what it would take for me to stay. I mostly enjoy working there, so I'd be open to negotiation. The new job was an unexpected opportunity that presented itself. The sort of things I'd be looking for are: better computers for developers; the opportunity to work from home occasionally; improved internet access (e.g. able to download software, no keyword blocking); the chance to work on technologies other than my primary one (we do have projects on other technologies); a pay increase (though this isn't my primary motivation). I found out that some of these were already in progress when I handed in my notice :( Is it ever a good idea to remain at a company after you've resigned? What if they meet all my conditions and alter my contract accordingly? Will I burn my bridges at the new company (I've already told them I'd accept their offer)? Update: Thanks for the answers. Quite a mixed bag, which was interesting. Anyway, just so you know, I've chosen to stay at my current company. So far, it definitely feels like the right decision. I guess I won't know for a few months whether it was, though.

    Read the article

  • How do I develop a database-utilizing application in an agile/test-driven-development way?

    - by user39019
    I want to add databases (traditional client/server RDBMSs like MySQL/PostgreSQL, as opposed to NoSQL or embedded databases) to my toolbox as a developer. I've been using SQLite for simpler projects with only one client, but now I want to do more complicated things (i.e., db-backed web development). I usually like following agile and/or test-driven-development principles. I generally code in Perl or Python. Questions: How do I test my code such that each run of the test suite starts with a 'pristine' state? Do I run a separate instance of the database server for every test? Do I use a temporary database? How do I design my tables/schema so that it is flexible with respect to changing requirements? Do I start with an ORM for my language, or do I stick to manually coding SQL? One thing I don't find appealing is having to change more than one thing (say, the CREATE TABLE statement and the associated CRUD statements) for one change, b/c that's error-prone. On the other hand, I expect ORMs to be somewhat slower and harder to debug than raw SQL. What is the general strategy for migrating data between one version of the program and a newer one? Do I carefully write ALTER TABLE statements between each version, or do I dump the data and import fresh in the new version?
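
    On the "pristine state per test" question, one widely used pattern (sketched here in Python with pytest and psycopg2 purely as an example; the connection details and schema are assumed, not prescribed) is to run each test inside a transaction against a dedicated test database and roll it back afterwards, so nothing a test writes ever persists:

      import pytest
      import psycopg2

      @pytest.fixture
      def db():
          # a dedicated test database whose schema has already been migrated
          conn = psycopg2.connect(dbname="myapp_test", user="test", password="test")
          try:
              yield conn                 # the test runs inside this open transaction
          finally:
              conn.rollback()            # undo everything the test changed
              conn.close()

      def test_insert_user(db):
          cur = db.cursor()
          cur.execute("INSERT INTO users (name) VALUES (%s)", ("alice",))
          cur.execute("SELECT count(*) FROM users WHERE name = %s", ("alice",))
          assert cur.fetchone()[0] == 1
          # the fixture's rollback returns the database to its prior state

    For schema changes across versions, the same idea scales up: the migrations (whether hand-written ALTER TABLE scripts or an ORM's migration tool) are applied to the empty test database first, so the test suite always runs against the schema the next release will ship with.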

    Read the article
