Search Results

Search found 37101 results on 1485 pages for 'array based'.

Page 476/1485 | < Previous Page | 472 473 474 475 476 477 478 479 480 481 482 483  | Next Page >

  • Artists and music - Need Help Deciding on a CMS

    - by infty
    A friend has asked me to build a site with the following requirements:
      - staff members must be able to add new music and artists to the page
      - a gallery must be provided; ideally each artist can also have his/her own smaller gallery
      - users must be able to vote for artists
      - users must be able to participate in discussions (forums or comment sections)
      - staff members must be able to blog
      - staff members must be able to write articles
    I did a small project where I actually implemented all of these features, but I want to use an existing content management system so that future developers can, hopefully, extend the website more easily, and so that I don't have to provide too much documentation. I have never developed a website using an external CMS like Drupal or WordPress, and after seeing hours of tutorial videos on both systems I still can't make up my mind on whether I should: a) use Drupal 7, b) use WordPress 3, or c) create my own CMS. I can imagine that staff members would also want to create content using iPhone or Android-based mobile devices, but this is not a required feature. Can someone with experience please tell me about their experiences with larger projects like this? The site will have approximately 400,000 - 500,000 visitors (not daily visitors; based on numbers from a 4-month period last year).

    Read the article

  • What is the value of checking in failing unit tests?

    - by user20194
    While there are ways of keeping unit tests from being executed, what is the value of checking in failing unit tests? I will use a simple example: case sensitivity. The current code is case sensitive. A valid input into the method is "Cat" and it would return an enum of Animal.Cat. However, the desired functionality of the method should not be case sensitive, so if the method described was passed "cat" it could possibly return something like Animal.Null instead of Animal.Cat, and the unit test would fail. A simple code change would make this particular case work, but a more complex issue may take weeks to fix, while identifying the bug with a unit test is a much smaller task. The application currently being analyzed has 4 years of code that "works". However, recent discussions regarding unit tests have found flaws in the code. Some flaws just need explicit documentation of the implementation (e.g. case sensitive or not); others sit in code paths that never trigger the bug the way they are currently called. But unit tests can be written with valid inputs that exercise specific scenarios and make the bug visible. What is the value of checking in unit tests that exercise the bug until someone can get around to fixing the code? Should such a unit test be flagged with ignore, priority, category etc., to determine whether a build was successful based on the tests executed? Eventually the unit test should exercise the corrected code once someone fixes it. On one hand, checking them in shows that identified bugs have not been fixed. On the other, there could be hundreds of failed unit tests showing up in the logs, and weeding out the ones that are expected to fail from failures caused by a new check-in would be difficult.
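
    One common compromise the question hints at (flagging known-failing tests rather than letting them break the build) can be illustrated with GoogleTest, where a DISABLED_ prefix keeps the test compiled but excluded from normal runs. The parser below is a hypothetical stand-in for the method described, not code from the question:

    ```cpp
    // Hedged illustration: GoogleTest's DISABLED_ prefix documents a known bug
    // without failing the build; --gtest_also_run_disabled_tests re-enables it.
    #include <gtest/gtest.h>
    #include <string>

    enum class Animal { Null, Cat };

    // Hypothetical parser mirroring the question's example; still case sensitive.
    Animal parseAnimal(const std::string& s) {
        return s == "Cat" ? Animal::Cat : Animal::Null;
    }

    TEST(AnimalParsing, ExactCaseWorks) {
        EXPECT_EQ(Animal::Cat, parseAnimal("Cat"));
    }

    // Checked in as documentation of the bug; skipped in normal test runs.
    TEST(AnimalParsing, DISABLED_LowerCaseShouldAlsoWork) {
        EXPECT_EQ(Animal::Cat, parseAnimal("cat"));  // fails until parsing ignores case
    }
    ```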

    Read the article

  • Most efficient Implementation of a Tree in C++

    - by Topo
    I need to write a tree where each element may have any number of child elements, and because of this each branch of the tree may have any length. The tree is only going to receive elements at first, and then it is going to be used exclusively for iterating through its branches in no specific order. The tree will have several million elements and must be fast but also memory efficient. My plan is to make a node class to store the elements and the pointers to its children. When the tree is fully constructed, it would be transformed into an array or something faster and, if possible, loaded into the processor's cache. Construction and the search on the tree are two different problems: can I focus on how to solve each problem in the best way individually? The construction has to be as fast as possible but it can use memory as it pleases. Then comes the transformation into a format that gives us speed when iterating the tree's branches; this should preferably be an array, to avoid going back and forth from RAM to cache for each element of the tree. So the real question is: which structure should I use to implement the tree to maximize insert speed, and how can I then transform it into a structure that gives me the best speed and memory?
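
    A minimal C++ sketch of the two-phase approach described above - cheap pointer-based nodes during construction, then a breadth-first flatten into one contiguous array where each node stores the index of its first child and a child count (all names here are illustrative, and int payloads stand in for the real elements):

    ```cpp
    #include <cstddef>
    #include <vector>

    // Construction phase: easy to append children to.
    struct BuildNode {
        int value;
        std::vector<BuildNode*> children;
    };

    // Iteration phase: nodes packed contiguously; a node's children are stored
    // consecutively, so first-child index plus child count is enough.
    struct FlatNode {
        int value;
        std::size_t firstChild;
        std::size_t childCount;
    };

    // Breadth-first flatten: guarantees each node's children end up contiguous.
    std::vector<FlatNode> flatten(BuildNode* root) {
        std::vector<FlatNode> flat;
        std::vector<BuildNode*> order{root};          // BFS order, doubles as the work queue
        for (std::size_t i = 0; i < order.size(); ++i) {
            BuildNode* n = order[i];
            // n's children will be appended right after the current end of `order`,
            // and `flat` grows in lockstep with `order`, so that is their flat index.
            flat.push_back({n->value, order.size(), n->children.size()});
            for (BuildNode* c : n->children) order.push_back(c);
        }
        return flat;
    }
    ```

    Iterating a branch then becomes a loop over flat[n.firstChild .. n.firstChild + n.childCount), touching memory mostly in order, which is the cache behaviour the question is after.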

    Read the article

  • 7 derived classes with one common base class

    - by user144905
    I have written the following code:

      // main.cpp
      #include<iostream>
      #include<string>
      #include"human.h"
      #include"computer.h"
      #include"referee.h"
      #include"RandomComputer.h"
      #include"Avalanche.h"
      #include"Bureaucrat.h"
      #include"Toolbox.h"
      #include"Crescendo.h"
      #include"PaperDoll.h"
      #include"FistfullODollors.h"
      using namespace std;

      int main()
      {
          Avalanche pla1;
          Avalanche pla2;
          referee f;
          pla1.disp();
          for (int i = 0; i < 5; i++)
          {
              cout << pla2.mov[i];
          }
          return 0;
      }

    In this program all included classes except referee.h and human.h are derived from computer.h. Each derived class has a char array variable which is initialized when a member of that derived class is declared. The problem is that when I declare two different derived class members, let's say Avalanche and Toolbox, printing the char array for one of them using a for loop prints nothing. However, if I declare only one of them in main.cpp then it works properly. The file for computer.h is as such:

      #ifndef COMPUTER_H
      #define COMPUTER_H
      class computer
      {
      public:
          int nump;
          char mov[];
          void disp();
      };
      #endif

    Toolbox.h is like this:

      #ifndef TOOLBOX_H
      #define TOOLBOX_H
      #include"computer.h"
      class Toolbox: public computer
      {
      public:
          Toolbox();
      };
      #endif

    Finally, Avalanche.h is as follows:

      #ifndef AVALANCHE_H
      #define AVALANCHE_H
      #include"computer.h"
      class Avalanche: public computer
      {
      public:
          Avalanche();
      };
      #endif
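
    The likely culprit in the code above is char mov[]; - a sizeless array member is not standard C++ and gives the objects no real storage for the moves, so behaviour falls apart as soon as more than one object writes to it. Here is a hedged sketch of one way to restructure (the move strings are made up; the real ones would come from each derived class):

    ```cpp
    // Sketch only: give the base class a properly sized container instead of char mov[].
    #include <iostream>
    #include <string>
    #include <vector>

    class computer {
    public:
        int nump = 0;
        std::vector<std::string> mov;              // sized by each derived constructor
        void disp() const {
            for (const std::string& m : mov) std::cout << m << ' ';
            std::cout << '\n';
        }
    };

    class Avalanche : public computer {
    public:
        Avalanche() { mov = {"R", "R", "R", "R", "R"}; }   // illustrative moves
    };

    class Toolbox : public computer {
    public:
        Toolbox() { mov = {"P", "S", "P", "S", "P"}; }     // illustrative moves
    };

    int main() {
        Avalanche a;
        Toolbox t;
        a.disp();   // each object now prints its own moves,
        t.disp();   // regardless of how many are declared
    }
    ```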

    Read the article

  • Web Applications Desktop Integrator (WebADI) Feature for Install Base Mass Update in 12.1.3

    - by LuciaC
    Purpose: the integration of WebADI technology with the Install Base Mass Update function is designed to make creation and update of bulk item instances much easier than in the past. What is it? WebADI is an Excel-based desktop application where users can download an Excel template with item instances pre-populated based on search criteria. Users can create and update item instances in the Excel sheet and finally upload the Excel data using an "upload" option available in the Excel menu. On upload, the modified data is bulk-loaded into interface tables, which are processed by an asynchronous concurrent program that users can monitor for the uploaded results. Advantage: this allows users to work in a disconnected environment: session timeouts can be avoided, since once a template is downloaded the user can work offline and, once all updates are done, upload the new input. The data can also be saved for later update and upload. For more details review the following: R12.1.3 Install Base WebADI Mass Update Feature (Doc ID 1535936.1) and How To Use Install Base WebADI Mass Update Feature In Release 12.1.3 (Doc ID 1536498.1).

    Read the article

  • raid advice with SSD and two HDD

    - by Nin
    I have a new machine with one 128GB SSD and two 1TB HDDs. The OS is on the SSD and my initial thought was to put the two HDDs in RAID 1 for user data. After some more thought I came up with two other setups and now I'm in doubt :) Can someone advise what would be the best setup? 1: single SSD and the HDDs in RAID 1 (original thought). 2: Create 2 partitions on each HDD (128GB and 872GB). Put the two 872GB partitions in RAID 1 and create another RAID 1 with the SSD and one 128GB HDD partition. 3: Create 2 partitions on each HDD (750GB/250GB), put the two 750GB partitions in RAID 1 and use the two 250GB partitions as backup, making automatic snapshots of the SSD to (one of) these partitions. I think the 2 main questions are: Is it advisable to create a RAID array with only part of a drive and actively use the other part of that drive, or should you always use the full disk? Is it advisable to create a RAID 1 array with an SSD and an HDD, or will that negate the whole speed advantage of the SSD?

    Read the article

  • Duplication of Windows 7 Backup

    - by Steven Pickles
    I use the built-in backup utility for Windows 7 because it's automated and flexible enough to allow me to schedule a daily shadow copy backup of particular files and folders directly to a separate internal RAID 0 array (2 x 1TB). It's also lightweight and stays out of the way. For off-site backup purposes, each week I copy the contents of the internal backup from the RAID 0 array to an external 1TB drive. I then move this drive to a different building. The copy from the internal backup to the external backup typically works like this: mount and erase the contents of the external drive, highlight the backup "file" on the internal drive and hit CTRL+C, then CTRL+V on the root directory of the external drive. Is there a better way to synchronize? Microsoft's SyncToy application does a pitiful job, and often leaves the folders not truly synchronized... which completely defeats the ability to use the backup's restore feature.

    Read the article

  • Linux Raid: Can mdadm --grow a raid1 while mounted?

    - by Chris
    I have two 500GB drives in a RAID 1 setup that I needed to upgrade for more space. I mdadm --fail'ed each drive in turn, used dd to copy each drive to its respective larger drive (2TB each), removed the smaller drives and replaced them with the larger ones, then reassembled the array and forced a resync. So now I've got a 500GB RAID 1 sitting on 2TB drives, and I wish to grow it. The plan is to use mdadm --manage /dev/md0 --grow to grow it, then boot a rescue CD, assemble the array in that environment, and run resize2fs on it. Can I use mdadm --grow on a mounted and live filesystem? Also, do I need more options to make sure the grow operation stays RAID 1?

    Read the article

  • What is the correct way to implement Auth/ACL in MVC?

    - by WiseStrawberry
    I am looking into making a correctly laid out MVC Auth/ACL system. I think I want the authentication of a user (and the session handling) to be separate from the ACL system. (I don't know why, but this seems a good idea from the things I've read.) What does MVC have to do with this question, you ask? Because I wish for the application to be well integrated with my ACL. An example of a controller (CodeIgniter):

      <?php
      class forums extends MX_Controller {

          public $allowed = array('users', 'admin');
          public $need_login = true;

          function __construct()
          {
              // example of checking if logged in.
              if ($this->auth->logged_in() && $this->auth->is_admin())
              {
                  echo "you're logged in!";
              }
          }

          public function add_topic()
          {
              if ($this->auth->allowed('add_topic'))
              {
                  // some add topic things.
              }
              else
              {
                  echo 'not allowed to add topic';
              }
          }
      }
      ?>

    My thoughts: $this->auth would be autoloaded in the system. I would like to check the $allowed array against the user currently (not) logged in and react accordingly. Is this a good way of doing things? I haven't seen much literature on MVC integration and Auth. I want to make things as easy as possible.

    Read the article

  • powershell indentation

    - by Steve B
    I'm writing a large script that deploys an application. This script is based on several nested function calls. Is there any way to indent the output based on the depth? For example, I have:

      function myFn() {
          Write-Host "Start of myfn"
          myFnNested()
          Write-Host "End of myfn"
      }

      function myFnNested() {
          Write-Host "Start of myFnNested"
          Write-Host "End of myFnNested"
      }

      Write-Host "Start of myscript"
      myFn()
      Write-Host "End of myscript"

    The output of the script will be:

      Start of myscript
      Start of myfn
      Start of myfnNested
      End of myFnNested
      End of myFn
      End of myscript

    What I want to achieve is this output:

      Start of myscript
        Start of myfn
          Start of myfnNested
          End of myFnNested
        End of myFn
      End of myscript

    As I don't want to hard-code the number of spaces (since I don't know the depth level in a complex script), how can I simply reach my goal? Maybe something like this?

      function myFn() {
          Indent()
          Write-Host "Start of myfn"
          myFnNested()
          Write-Host "End of myfn"
          UnIndent()
      }

      function myFnNested() {
          Indent()
          Write-Host "Start of myFnNested"
          Write-Host "End of myFnNested"
          UnIndent()
      }

      Write-Host "Start of myscript"
      myFn()
      Write-Host "End of myscript"
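
    The underlying idea (a shared depth counter that each function bumps on entry and restores on exit, with the prefix width derived from the counter) can be sketched in any language; here it is in C++ purely for illustration, since the question itself is about PowerShell:

    ```cpp
    // Illustration only: depth-tracked logging via an RAII guard.
    #include <iostream>
    #include <string>

    static int g_depth = 0;

    struct IndentGuard {                 // increments depth for the guard's lifetime
        IndentGuard()  { ++g_depth; }
        ~IndentGuard() { --g_depth; }
    };

    void log(const std::string& msg) {
        std::cout << std::string(g_depth * 2, ' ') << msg << '\n';
    }

    void myFnNested() {
        IndentGuard in;
        log("Start of myFnNested");
        log("End of myFnNested");
    }

    void myFn() {
        IndentGuard in;
        log("Start of myFn");
        myFnNested();
        log("End of myFn");
    }

    int main() {
        log("Start of myscript");
        myFn();
        log("End of myscript");
    }
    ```

    In PowerShell the same shape would be a script-scoped counter adjusted by Indent/UnIndent style helpers, exactly as the pseudo-code in the question suggests.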

    Read the article

  • Has anyone used Game Salad before, and how does it compare with cocos2d in terms of 2D game development?

    - by jih
    First a short intro: I am new to the game development space and want to make some 2D games for iOS. I first came across cocos2d and Kobold, but then wanted something more graphical for rapid prototyping. I then found Game Maker, which doesn't support iOS but is fairly easy to learn, and then found Game Salad, which supports iOS as well as other platforms. I know this question has been asked before, but I want to know, in terms of the types of games I want to develop, which learning investment path would be best. The game genres I am interested in are:
      - side scrollers
      - simple games like Diamond Dash or Ninja Fruits, Shanghai, etc.
      - old-fashioned Zelda or Dragon Quest type 2D adventure/RPG games, real time or turn based (Nintendo fan here :-)
      - mystery turn-based games like Carmen Sandiego, Wizardry, Myst, etc.
    So now the question becomes: which game development environment should I invest my time in learning, Game Salad or cocos2d? It would seem Game Salad would be great for quick prototypes, being graphical, but in terms of 2D platform games etc. would there be speed/performance/feature penalties? Are there certain 2D game genres of the 4 above that Game Salad is better at, while cocos2d would be better for certain others? Can anyone with experience of both share some pointers? Thanks. inexperienced jih

    Read the article

  • linux installation cd environment

    - by haw3d
    I recently made a custom Linux system for my specific needs. It is on my HDD, but I want to create a CD to install it. A few days ago I found a live CD and created an install script for it, but after a power failure my HDD is gone, and I can't find that live CD again. My install script is based on restoring a tar.gz backup. My requirements are: based on glibc (not uclibc); recognizes every device. Do you have any suggestions? Excuse me for my bad English.

    Read the article

  • How to Downgrade Razor 3 and fix the issue that CSHTML does not work in VS10/12?

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/04/how-to-downgrade-razor-3-and-fix-the-issue-that.aspx

    A few days ago I migrated a project to MVC 4 and suddenly noticed that the project's cshtml files no longer work in the editor. The problem happens because my project is now based on Razor 3 RC and VS12 doesn't support it yet (remember that the VS team will ship support in VS12 Update 4). My migration updated it to Razor 3 (which MVC 4 is not tied to; MVC 4 uses the older Razor 2). So how to fix the problem? Since VS12 Update 4 is still in development and MVC 3 support exists in both older versions of VS (10 and 12), it is better to migrate our Razor back to the old version so we can use the project in VS10 or VS12. If your project has Razor 3 and it seems that syntax highlighting doesn't work for you, then I suggest you try this NuGet package: https://www.nuget.org/packages/UpgradeMvc3ToMvc4 Note that this will not succeed straight away. What you need to do is delete the packages folder in your project, then open packages.config and remove all the package entries. Now run this command:

      PM> Install-Package UpgradeMvc3ToMvc4

    If this fails, see which package causes the error in the console, simply remove that reference and try again. Now run the project and you will see that it works. After running this you will see that the WebGrease DLL has a version-number issue; simply update it to version 1.5.2 and your project is ready to run on .NET 4. If you do a bin deployment you don't need MVC 4 installed on the server either. Remember that MVC 5 is based on .NET 4.5, which means you can't use it in VS10, and until VS12 Update 4 MVC 5 cshtml pages will behave like plain HTML pages (no syntax highlighting or IntelliSense). Thanks for reading my post.

    Read the article

  • Recovering data from a Silicon Image SiI3114 RAID

    - by Isaac Truett
    I have a set of 3 disks in RAID 5 originally created with a Silicon Image SiI3114 on-board RAID controller. The old motherboard is dead. The new motherboard (which has a different raid controller) won't boot from the array. I have no reason to believe that the drives are damaged or corrupted. I'm 99% sure that the problem is that the new controller isn't compatible or I'm not setting it up properly. Is it possible to recover data from the drives using a different controller? Would a PCI card like this one allow me to read from the array again?

    Read the article

  • Suggestions for hosted file sharing services

    - by Jon
    Before I pose my question, I will give some insight into my scenario: I work for a small business (cost is an important factor); our bandwidth is limited and would not support an in-house FTP server; we need to share files (mostly PDF, InDesign, and Illustrator documents) with our clients, and as we expand, we are finding that our current locally-hosted FTP solution is too slow and is becoming a detriment to our sales team. What we need is a remotely hosted solution to share files with our clients, specifically with the following features: greater than 100GB of secure storage; the ability to distribute unique login credentials to clients, granting access to a personalized directory or folder while limiting access to other files on the server; a relatively simple web-based UI for clients with limited computer knowledge. We have considered a dedicated remote server and web-based services (box.net, yousendit.com, onehub.com, filesanywhere.com), but I am unsure which direction we should take - have I left another solution out? What would you suggest? Thanks in advance.

    Read the article

  • Summing up spreadsheet data when a column contains “#N/A”

    - by Doris
    I am using Google Spreadsheets to work up some historical stock data and I use a Google function (=googlefinance(…)) to import the historical closing prices for a stock, then I work with that data further. But in that list of data generated from the =googlefinance(…) function, one of the amounts comes up as #N/A. I don't know why, but it happens for various symbols that I have tried. When I use a MAX function on the array, which includes the N/A line, the MAX function does not come up with anything but an N/A, so the N/A throws off any further functions. I thought I'd create a second column to the right of the imported data in which I can give it an IF function, something like IF((A1<0), "0", A1), with the expectation that it would return 0 if cell A1 is the N/A, and the cell value if it is not N/A. However, this still returns N/A. I also tried an ISBLANK function but that resulted in the same N/A. Does anyone have any suggestions for a workaround to eliminate the N/A from an array of numbers that I am trying to work with?

    Read the article

  • Is it safe to set MySQL isolation to "Read Uncommitted" (dirty reads) for typical Web usage? Even with replication?

    - by Continuation
    I'm working on a website with typical CRUD web usage pattern: similar to blogs or forums where users create/update contents and other users read the content. Seems like it's OK to set the database's isolation level to "Read Uncommitted" (dirty reads) in this case. My understanding of the general drawback of "Read Uncommitted" is that a reader may read uncommitted data that will later be rollbacked. In a CRUD blog/forum usage pattern, will there ever be any rollback? And even if there is, is there any major problem with reading uncommitted data? Right now I'm not using any replication, but in the future if I want to use replication (row-based, not statement-based) will a "Read Uncommitted" isolation level prevent me from doing so? What do you think? Has anyone tried using "Read Uncommitted" on their RDBMS?

    Read the article

  • Internal speakers do not work

    - by Nikcefo
    I have a new (from scratch, not an upgrade) installation of Ubuntu 12.10 on my notebook, an Asus A3Ac (it is based on Intel Centrino - a Pentium M with a full-duplex Intel HDA codec). In older versions of Debian-based systems Intel HDA audio didn't work correctly: alsamixer displayed wrong outputs and inputs (more than the notebook really has). In clean installations the internal speakers played, but they didn't mute when headphones were plugged in. There was a solution (probably not the best, but working) - edit /etc/modprobe.d/alsa-base.conf as root and add the line "options snd-hda-intel model=z71v position_fix=1". After a restart it worked correctly (alsamixer displayed the correct devices and the internal speakers muted when I plugged in headphones). It also worked in Ubuntu 12.04. In Ubuntu 12.10 I have another problem. By default (without editing alsa-base.conf) alsamixer displays the correct outputs and inputs, but the internal speakers don't work if headphones aren't plugged in. I have to manually disable the "Auto-Mute" option in alsamixer; then the internal speakers work (but of course they don't mute when the headphones are plugged in). Thanks for any idea how to fix it. I'm not sure if it is a bug or if it's caused by the specific hardware. Tomas

    Read the article

  • A* PathFinding Not Consistent

    - by RedShft
    I just started trying to implement a basic A* algorithm in my 2D tile-based game. All of the nodes are tiles on the map, represented by a struct. I believe I understand A* on paper, as I've gone through some pseudo code, but I'm running into problems with the actual implementation. I've double- and triple-checked my node graph, and it is correct, so I believe the issue is with my algorithm. The issue is that with the enemy standing still and the player moving around, the pathfinding function will write "No Path" an astounding number of times and only every so often write "Path Found", which seems inconsistent. This is the node struct for reference:

      struct Node
      {
          bool walkable;          // Whether this node is blocked or open
          vect2 position;         // The tile's position on the map in pixels
          int xIndex, yIndex;     // The index values of the tile in the array
          Node*[4] connections;   // An array of pointers to nodes this current node connects to
          Node* parent;
          int gScore;
          int hScore;
          int fScore;
      }

    Here is the rest: http://pastebin.com/cCHfqKTY This is my first attempt at A*, so any help would be greatly appreciated.
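
    For comparison with the pastebin code, here is a hedged, self-contained C++ sketch of grid-based A* over a 4-connected tile map (the Manhattan heuristic, unit step cost, and grid layout are all assumptions, not taken from the question):

    ```cpp
    #include <algorithm>
    #include <climits>
    #include <cstdlib>
    #include <functional>
    #include <queue>
    #include <utility>
    #include <vector>

    struct Cell { int x, y; };

    // Returns the path from start to goal (inclusive), or an empty vector if none exists.
    std::vector<Cell> aStar(const std::vector<std::vector<bool>>& walkable, Cell start, Cell goal) {
        const int w = static_cast<int>(walkable.size());
        const int h = static_cast<int>(walkable[0].size());
        auto idx  = [&](int x, int y) { return y * w + x; };
        auto heur = [&](int x, int y) { return std::abs(x - goal.x) + std::abs(y - goal.y); };

        std::vector<int> gScore(w * h, INT_MAX), parent(w * h, -1);
        std::vector<bool> closed(w * h, false);

        using Entry = std::pair<int, int>;  // (fScore, node index); greater<> makes it a min-heap
        std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open;

        gScore[idx(start.x, start.y)] = 0;
        open.push({heur(start.x, start.y), idx(start.x, start.y)});

        const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
        while (!open.empty()) {
            const int cur = open.top().second;
            open.pop();
            if (closed[cur]) continue;                 // stale entry, already expanded
            closed[cur] = true;

            const int cx = cur % w, cy = cur / w;
            if (cx == goal.x && cy == goal.y) {        // reconstruct the path via parents
                std::vector<Cell> path;
                for (int n = cur; n != -1; n = parent[n]) path.push_back({n % w, n / w});
                std::reverse(path.begin(), path.end());
                return path;
            }
            for (int d = 0; d < 4; ++d) {
                const int nx = cx + dx[d], ny = cy + dy[d];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                if (!walkable[nx][ny] || closed[idx(nx, ny)]) continue;
                const int tentative = gScore[cur] + 1;  // unit cost per step
                if (tentative < gScore[idx(nx, ny)]) {
                    gScore[idx(nx, ny)] = tentative;
                    parent[idx(nx, ny)] = cur;
                    open.push({tentative + heur(nx, ny), idx(nx, ny)});
                }
            }
        }
        return {};  // no path
    }
    ```

    A common source of intermittent "No Path" results in implementations like the one described is forgetting to reset the per-node gScore/parent/closed state between searches; keeping all search state local to the function, as above, sidesteps that.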

    Read the article

  • Unreadable sectors reported by smartd, is it serious?

    - by stribika
    I have a RAID 5 array of 4 disks. In the last 2 days I began to see these messages in the log:

      Jun 13 23:01:05 localhost smartd[4537]: Device: /dev/sda [SAT], 1 Currently unreadable (pending) sectors
      Jun 13 23:01:05 localhost smartd[4537]: Device: /dev/sdb [SAT], 2 Currently unreadable (pending) sectors

    If I have 2 faulty disks then the array should not show all disks OK:

      md0 : active raid1 sdd1[3] sdb1[1] sdc1[2] sda1[0]
            64128 blocks [4/4] [UUUU]

    Strangely there are no other problems, just the log messages. I am worried because sda is new and I previously had problems with sdb. (It completely died, but the guy who sold it to me fixed it somehow.) Am I in danger of losing data? What should I do now?

    Read the article

  • Proper way to do texture mapping in modern OpenGL?

    - by RubyKing
    I'm trying to do texture mapping using OpenGL 3.3 and GLSL 150. The problem is that the texture shows but has a weird flicker (I can show a video). My texcoords are in a vertex array. I have my fragment color set to the texture values and texel values. I have my vertex shader sending the texture coords to the texture coordinates to be used in the fragment shader. I have my ins and outs set up and I still don't know what I'm missing that could be causing that flicker. Here is my code:

    Fragment shader:

      #version 150

      uniform sampler2D texture;
      in vec2 texture_coord;
      varying vec3 texture_coordinate;

      void main(void)
      {
          gl_FragColor = texture(texture, texture_coord);
      }

    Vertex shader:

      #version 150

      in vec4 position;
      out vec2 texture_coordinate;
      out vec2 texture_coord;
      uniform vec3 translations;

      void main()
      {
          texture_coord = (texture_coordinate);
          gl_Position = vec4(position.xyz + translations.xyz, 1.0);
      }

    Last bit, here is my vertex array with texture coordinates:

      GLfloat vVerts[] = { 0.5f, 0.5f, 0.0f, 0.0f, 1.0f,
                           0.0f, 0.5f, 0.0f, 1.0f, 1.0f,
                           0.0f, 0.0f, 0.0f, 0.0f, 0.0f,
                           0.5f, 0.0f, 0.0f, 1.0f, 0.0f }; // tex x and y

    If you need to see all the code, here is a link to every file. Thank you for your help.
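
    One thing to check, given the shaders above: texture_coordinate is declared as an out in the vertex shader but is never fed by a vertex attribute, so the interpolated texture_coord that the fragment shader samples with is effectively undefined, which can show up as flicker. Below is a hedged C++ sketch of how the interleaved array could be wired up, assuming GLEW, attribute locations 0 (position) and 1 (texcoord), and that each vertex really is three position floats followed by two texture coordinates (the grouping the trailing comment suggests); the vertex shader would then read the texcoord through an in variable and pass it on:

    ```cpp
    // Sketch only: upload the interleaved quad and describe both attributes.
    #include <GL/glew.h>

    void setupQuad(GLuint vao, GLuint vbo, const GLfloat* verts, GLsizeiptr bytes) {
        glBindVertexArray(vao);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, bytes, verts, GL_STATIC_DRAW);

        const GLsizei stride = 5 * sizeof(GLfloat);            // x, y, z, u, v per vertex

        // location 0 -> "in vec4 position" (w defaults to 1.0 when only xyz is supplied)
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);
        glEnableVertexAttribArray(0);

        // location 1 -> a vertex-shader input such as "in vec2 texcoord_in" (name is hypothetical),
        // which the vertex shader would then forward: texture_coord = texcoord_in;
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(GLfloat)));
        glEnableVertexAttribArray(1);
    }
    ```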

    Read the article

  • Recover deleted files on windows 2008 file server

    - by aniga
    We have recently been hit by a weird virus which marked all files and folders as system files/folders and also hid all files and folders apart from some weird ones it created, including: ..exe, porn.exe, secret.exe, password.exe, etc. We have managed to restore the files with the attrib command, unhiding them and unmarking them as system files; however, we have noticed that we are missing some 4 to 5 folders, of which (just my luck) 2 belong to the two most important clients we have. I am not sure if these files were deleted by the worm/virus or by my colleagues, who are not owning up to it, but the files are now gone. Worst of all, we do not have any backup whatsoever (yes, I know, we should not have done that, but it is a lesson learned, and since last night we have created two forms of backup, one to an external device and one in the cloud - though I doubt any of that will help us now). We have 1 Windows 2008 file server and 4 client computers running Windows 7. I would be grateful if anyone can help us with how we can recover from this disaster, which could potentially put us out of business.

    Read the article

  • In developing a SOAP client proxy, which return structure is easier to use and more sensible?

    - by cori
    I'm writing (in PHP) a client/proxy for a SOAP web service. The return types are consistently wrapped in response objects that contain the return values. In many cases this makes a lot of sense - for instance when multiple values are being returned:

      GetDetailsResponse Object
      (
          Results Object
          (
              [TotalResults] => 10
              [NextPage] => 2
          )
          [Details] => Array
          (
              [0] => Detail Object
              (
                  [Id] => 1
              )
          )
      )

    But some of the methods return a single scalar value, or a single object or array, wrapped in a response object:

      GetThingummyIdResponse Object
      (
          [ThingummyId] => 42
      )

    In some cases these objects might be pretty deep, so getting at properties within requires drilling down several layers:

      $response->Details->Detail[0]->Contents->Item[5]->Id

    And if I unwrap them before passing them back I can strip out a layer from consumers' code. I know I'm probably being a little bit of an Architecture Astronaut here, but the latter style really bugs me, so I've been working through my code to have my proxy methods just return the scalar value to the client code where there's no absolute need for a wrapper object. My question is: am I actually making things more difficult for the consumers of my code? Would I be better off just leaving the return values wrapped in response objects so that everything is consistent, or is removing unnecessary layers of indirection/abstraction worthwhile?

    Read the article

  • Hosting solution for sensitive client data

    - by Mark
    Hello, we are developing a web application that will deal with highly sensitive (financial) data of clients (the audience is medium to large sized businesses). Clients will be under scrutiny from regulators and auditors and, as such, we will be too. More importantly, to give clients a level of comfort, our application and the related hosting arrangement should instill a lot of confidence in them. We are looking into using a cloud-based service like Linode, Amazon EC2, etc. To allow for maximum flexibility we are keen on putting everything on virtual servers and avoiding having to buy our own hardware. Does a cloud-based service make sense for our particular scenario? If not, what type of hosting should we consider? If so, what should we look out for? Thanks!

    Read the article

< Previous Page | 472 473 474 475 476 477 478 479 480 481 482 483  | Next Page >