Search Results

Search found 58440 results on 2338 pages for 'data cleansing'.

Page 582/2338

  • What is wrong with this code? (OpenAL in VC++)

    - by maiajam
    hi how are you all? i need your help i have this code #include <conio.h> #include <stdlib.h> #include <stdio.h> #include <al.h> #include <alc.h> #include <alut.h> #pragma comment(lib, "openal32.lib") #pragma comment(lib, "alut.lib") /* * These are OpenAL "names" (or "objects"). They store and id of a buffer * or a source object. Generally you would expect to see the implementation * use values that scale up from '1', but don't count on it. The spec does * not make this mandatory (as it is OpenGL). The id's can easily be memory * pointers as well. It will depend on the implementation. */ // Buffers to hold sound data. ALuint Buffer; // Sources are points of emitting sound. ALuint Source; /* * These are 3D cartesian vector coordinates. A structure or class would be * a more flexible of handling these, but for the sake of simplicity we will * just leave it as is. */ // Position of the source sound. ALfloat SourcePos[] = { 0.0, 0.0, 0.0 }; // Velocity of the source sound. ALfloat SourceVel[] = { 0.0, 0.0, 0.0 }; // Position of the Listener. ALfloat ListenerPos[] = { 0.0, 0.0, 0.0 }; // Velocity of the Listener. ALfloat ListenerVel[] = { 0.0, 0.0, 0.0 }; // Orientation of the Listener. (first 3 elements are "at", second 3 are "up") // Also note that these should be units of '1'. ALfloat ListenerOri[] = { 0.0, 0.0, -1.0, 0.0, 1.0, 0.0 }; /* * ALboolean LoadALData() * * This function will load our sample data from the disk using the Alut * utility and send the data into OpenAL as a buffer. A source is then * also created to play that buffer. */ ALboolean LoadALData() { // Variables to load into. ALenum format; ALsizei size; ALvoid* data; ALsizei freq; ALboolean loop; // Load wav data into a buffer. alGenBuffers(1, &Buffer); if(alGetError() != AL_NO_ERROR) return AL_FALSE; alutLoadWAVFile((ALbyte *)"C:\Users\Toshiba\Desktop\Graduation Project\OpenAL\open AL test\wavdata\FancyPants.wav", &format, &data, &size, &freq, &loop); alBufferData(Buffer, format, data, size, freq); alutUnloadWAV(format, data, size, freq); // Bind the buffer with the source. alGenSources(1, &Source); if(alGetError() != AL_NO_ERROR) return AL_FALSE; alSourcei (Source, AL_BUFFER, Buffer ); alSourcef (Source, AL_PITCH, 1.0 ); alSourcef (Source, AL_GAIN, 1.0 ); alSourcefv(Source, AL_POSITION, SourcePos); alSourcefv(Source, AL_VELOCITY, SourceVel); alSourcei (Source, AL_LOOPING, loop ); // Do another error check and return. if(alGetError() == AL_NO_ERROR) return AL_TRUE; return AL_FALSE; } /* * void SetListenerValues() * * We already defined certain values for the Listener, but we need * to tell OpenAL to use that data. This function does just that. */ void SetListenerValues() { alListenerfv(AL_POSITION, ListenerPos); alListenerfv(AL_VELOCITY, ListenerVel); alListenerfv(AL_ORIENTATION, ListenerOri); } /* * void KillALData() * * We have allocated memory for our buffers and sources which needs * to be returned to the system. This function frees that memory. */ void KillALData() { alDeleteBuffers(1, &Buffer); alDeleteSources(1, &Source); alutExit(); } int main(int argc, char *argv[]) { printf("MindCode's OpenAL Lesson 1: Single Static Source\n\n"); printf("Controls:\n"); printf("p) Play\n"); printf("s) Stop\n"); printf("h) Hold (pause)\n"); printf("q) Quit\n\n"); // Initialize OpenAL and clear the error bit. alutInit(NULL, 0); alGetError(); // Load the wav data. if(LoadALData() == AL_FALSE) { printf("Error loading data."); return 0; } SetListenerValues(); // Setup an exit procedure. atexit(KillALData); // Loop. 
ALubyte c = ' '; while(c != 'q') { c = getche(); switch(c) { // Pressing 'p' will begin playing the sample. case 'p': alSourcePlay(Source); break; // Pressing 's' will stop the sample from playing. case 's': alSourceStop(Source); break; // Pressing 'h' will pause the sample. case 'h': alSourcePause(Source); break; }; } return 0; } It runs, but I can't hear anything. I am also new to programming and want to build virtual-reality sound for my graduation project, so I have started learning OpenAL and VC++, but I don't know where to begin. Do I also need to learn the Windows API, and if so, how can I learn it? Thank you a lot, and I'm sorry for my English.

    Read the article

  • Passing multiple queries to a view with CodeIgniter

    - by LvS
    I am trying to build a forum with Codeigniter. So far i have the forums themselves displayed and the threads displayed, based on the creating dynamic news tutorial. But that is 2 different pages, i need to obviously display them into one page, like this: Forum 1 - thread 1 - thread 2 - thread 3 Forum 2 - thread 1 - thread 2 etc. And then the next step is obviously to display all the posts in a thread. Most likely with some pagination going on. But that is for later. For now i have the forum controller (slimmed version): <?php class Forum extends CI_Controller { public function __construct() { parent::__construct(); $this->load->model('forum_model'); $this->lang->load('forum'); $this->lang->load('dutch'); } public function index() { $data['forums'] = $this->forum_model->get_forums(); $data['title'] = $this->lang->line('title'); $data['view'] = $this->lang->line('view'); $this->load->view('templates/header', $data); $this->load->view('forum/index', $data); $this->load->view('templates/footer'); } public function view($slug) { $data['forum_item'] = $this->forum_model->get_forums($slug); if (empty($data['forum_item'])) { show_404(); } $data['title'] = $data['forum_item']['title']; $this->load->view('templates/header', $data); $this->load->view('forum/view', $data); $this->load->view('templates/footer'); } } ?> And the forum_model (also slimmed down) <?php class Forum_model extends CI_Model { public function __construct() { $this->load->database(); } public function get_forums($slug = FALSE) { if ($slug === FALSE) { $query= $this->db->get('forum'); return $query->result_array(); } $query = $this->db->get_where('forum', array('slug' => $slug)); return $query->row_array(); } public function get_threads($forumid, $limit, $offset) { $query = $this->db->get_where('thread', array('forumid', $forumid), $limit, $offset); return $query->result_array(); } } ?> And the view file <?php foreach ($forums as $forum_item): ?> <h2><?=$forum_item['title']?></h2> <div id="main"> <?=$forum_item['description']?> </div> <p><a href="forum/<?php echo $forum_item['slug'] ?>"><?=$view?></a></p> <?php endforeach ?> Now that last one, i would like to have something like this: <?php foreach ($forums as $forum_item): ?> <h2><?=$forum_item['title']?></h2> <div id="main"> <?=$forum_item['description']?> </div> <?php foreach ($threads as $thread_item): ?> <h2><?php echo $thread_item['title'] ?></h2> <p><a href="thread/<?php echo $thread_item['slug'] ?>"><?=$view?></a></p> <?php endforeach ?> <?php endforeach ?> But the question is, how do i get the model to return like a double query to the view, so that it contains both the forums and the threads within each forum. I tried to make a foreach loop in the get_forum function, but when i do this: public function get_forums($slug = FALSE) { if ($slug === FALSE) { $query= $this->db->get('forum'); foreach ($query->row_array() as $forum_item) { $thread_query=$this->get_threads($forum_item->forumid, 50, 0); } return $query->result_array(); } $query = $this->db->get_where('forum', array('slug' => $slug)); return $query->row_array(); } i get the error A PHP Error was encountered Severity: Notice Message: Trying to get property of non-object Filename: models/forum_model.php Line Number: 16 I hope anyone has some good tips, thanks! Lenny *EDIT*** Thanks for the feedback. 
I have been puzzling and this seems to work now :) $query= $this->db->get('forum'); foreach ($query->result() as $forum_item) { $forum[$forum_item->forumid]['title']=$forum_item->title; $thread_query=$this->db->get_where('thread', array('forumid' => $forum_item->forumid), 20, 0); foreach ($thread_query->result() as $thread_item) { $forum[$forum_item->forumid]['thread'][]=$thread_item->title; } } return $forum; } The next step is figuring out how to display this multidimensional array in the view with foreach statements. Any suggestions? Thanks
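    For what it's worth, once the model returns that array, the view can walk it with a nested foreach. A minimal sketch, assuming the array is passed to the view as $forums; only the titles are stored in the array above, so the slugs would have to be added to it before the original links can be rebuilt:

        <?php foreach ($forums as $forum_item): ?>
          <h2><?php echo $forum_item['title']; ?></h2>
          <?php if (!empty($forum_item['thread'])): ?>
            <?php foreach ($forum_item['thread'] as $thread_title): ?>
              <p><?php echo $thread_title; ?></p>
            <?php endforeach; ?>
          <?php endif; ?>
        <?php endforeach; ?>

    The outer loop prints one heading per forum and the inner loop prints the thread titles collected under that forum's 'thread' key.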

    Read the article

  • How to properly override Drupal imagecache presets

    - by volocuga
    Say I need to override defaul presets, provided by Ubercart module. This is what I wrote: function config_imagecache() { $presets = array( array( 'presetname' => 'product', 'actions' => array( array( 'action' => 'imagecache_crop', 'data' => array('width' => 300, 'height' => ''), 'weight' => 0, 'module' => 'imagecache', ), array( 'action' => 'canvasactions_canvas2file', 'data' => array('xpos' => 'center', 'ypos' => 'center', 'path' => 'actions/pad_300_300.gif', 'dimensions' => 'background',), 'weight' => 1, 'module' => 'imagecache_canvasactions', ), ), ), array( 'presetname' => 'uc_thumbnail', 'actions' => array( array( 'action' => 'imagecache_scale', 'data' => array('width' => 55, 'height' => 55, 'upscale' => 0), 'weight' => 0, 'module' => 'imagecache', ), array( 'action' => 'canvasactions_canvas2file', 'data' => array('xpos' => 'center','ypos' => 'center', 'path' => 'actions/pad_60_60.gif','dimensions' => 'background'), 'weight' => 1, 'module' => 'imagecache_canvasactions', ), ), ), array( 'presetname' => 'product_full', 'actions' => array( array( 'action' => 'imagecache_scale', 'data' => array('width' => 600, 'height' => 600, 'upscale' => 0), 'weight' => 0, 'module' => 'imagecache', ), ), ), array( 'presetname' => 'product_list', 'actions' => array( array( 'action' => 'imagecache_scale', 'data' => array('width' => 100, 'height' => 100, 'upscale' => 0), 'weight' => 0, 'module' => 'imagecache', ), array( 'action' => 'canvasactions_canvas2file', 'data' => array('xpos' => 'center', 'ypos' => 'center', 'path' => 'actions/pad_100_100.jpg','dimensions' => 'background',), 'weight' => 1, 'module' => 'imagecache_canvasactions', ), ), ), array( 'presetname' => 'uc_category', 'actions' => array( array( 'action' => 'imagecache_scale', 'data' => array('width' => 100, 'height' => 100, 'upscale' => 0), 'weight' => 0, 'module' => 'imagecache', ), array( 'action' => 'canvasactions_canvas2file', 'data' => array('xpos' => 'center', 'ypos' => 'center', 'path' => 'actions/pad_100_100.gif', 'dimensions' => 'background',), 'weight' => 1, 'module' => 'imagecache_canvasactions', ), ), ), array( 'presetname' => 'cart', 'actions' => array( array( 'action' => 'imagecache_scale', 'data' => array('width' => 50, 'height' => 50, 'upscale' => 0), 'weight' => 0, 'module' => 'imagecache', ), array( 'action' => 'canvasactions_canvas2file', 'data' => array('xpos' => 'center', 'ypos' => 'center', 'path' => 'actions/pad_60_60.gif', 'dimensions' => 'background',), 'weight' => 1, 'module' => 'imagecache_canvasactions', ), ), ), ); foreach ($presets as $preset) { drupal_write_record('imagecache_preset', $preset); foreach ($preset['actions'] as $action) { $action['presetid'] = $preset['presetid']; drupal_write_record('imagecache_action', $action); } } imagecache_presets(true); cache_clear_all('imagecache:presets', 'cache'); } I can see in imagecache UI the settings was applied, but really images disappeared at all. Nothing works now. Where is the mistake?
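    For reference, a common alternative to writing preset rows into the database by hand is to declare the overriding presets in code. A hedged sketch follows, assuming the installed imagecache release supports presets-in-code via hook_imagecache_default_presets() and that the hook lives in a custom module named mymodule (both are assumptions, not taken from the question):

        <?php
        // Sketch only: declare an overriding preset in code rather than calling
        // drupal_write_record() directly. The hook name and the returned structure
        // are assumptions based on imagecache 2.x; verify against the installed version.
        function mymodule_imagecache_default_presets() {
          $presets = array();
          $presets['product'] = array(
            'presetname' => 'product',
            'actions' => array(
              array(
                'weight' => 0,
                'module' => 'imagecache',
                'action' => 'imagecache_crop',
                'data' => array('width' => 300, 'height' => ''),
              ),
            ),
          );
          // ...repeat for uc_thumbnail, product_full, product_list, and so on.
          return $presets;
        }

    After adding such a hook, the preset caches would still need to be flushed (imagecache_presets(TRUE) and cache_clear_all('imagecache:presets', 'cache'), as in the snippet above) so the new definitions are picked up.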

    Read the article

  • Pagination, next page doesn't display

    - by user1738013
    If I click page 2 I get this error: Not Found The requested URL /rank/GetAll/30 was not found on this server. My link is: http://localhost/rank/GetAll/30 Model: Rank_Model <?php Class Rank_Model extends CI_Model { public function __construct() { parent::__construct(); } public function record_count() { return $this->db->count_all("ranking"); } public function fifa_rank($limit, $start) { $this->db->limit($limit, $start); $query = $this->db->get("ranking"); if ($query->num_rows() > 0) { foreach ($query->result() as $row) { $data[] = $row; } return $data; } return false; } } ?> Controller: Rank <?php if ( ! defined('BASEPATH')) exit('No direct script access allowed'); class rank extends CI_Controller { function __construct() { parent::__construct(); $this->load->helper("url"); $this->load->helper(array('form', 'url')); $this->load->model('Rank_Model','',TRUE); $this->load->library("pagination"); } function GetAll() { $config = array(); $config["base_url"] = base_url() . "rank/GetAll"; $config["total_rows"] = $this->Rank_Model->record_count(); $config["per_page"] = 30; $config["uri_segment"] = 3; $this->pagination->initialize($config); $page = ($this->uri->segment(3)) ? $this->uri->segment(3) : 0; $data["results"] = $this->Rank_Model->fifa_rank($config["per_page"], $page); $data['errors_login'] = array(); $data["links"] = $this->pagination->create_links(); $this->load->view('left_column/open_fifa_rank',$data); } } View Open: open_fifa_rank <?php $this->load->view('mains/header'); $this->load->view('login/loggin'); $this->load->view('mains/menu'); $this->load->view('left_column/left_column_before'); $this->load->view('left_column/menu_left'); $this->load->view('left_column/left_column'); $this->load->view('center/center_column_before'); $this->load->view('left_column/fifa_rank'); $this->load->view('center/center_column'); $this->load->view('right_column/right_column_before'); $this->load->view('login/zaloguj'); $this->load->view('right_column/right_column'); $this->load->view('mains/footer'); ?> and View: fifa_rank <table> <thead> <tr> <td>Pozycja</td> <td>Kraj</td> <td>Punkty</td> <td>Zmiana</td> </tr> </thead> <?php foreach($results as $data) { ?> <tbody> <tr> <td><?php print $data->pozycja;?></td> <td><?php print $data->kraj;?></td> <td><?php print $data->punkty;?></td> <td><?php print $data->zmiana;?></td> </tr> <?php } ?> </tbody> </table> <p><?php echo $links; ?></p> Maybe you know where my problem is? Now I know where my problem is. On the first page I have the link: http://localhost/index.php/rank/GetAll But on the next: http://localhost/rank/GetAll/30 In the second link, I don't have index.php. How can I fix it?
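    One common cause of the pagination links losing index.php is building the base URL by hand with base_url(), which does not include the index page. A minimal sketch of the controller change, assuming CodeIgniter's URL helper (already loaded in the constructor above):

        // Sketch only: let site_url() build the pagination base URL so it carries
        // index.php (or whatever index_page is set to in config.php), matching the
        // URL form of the first page.
        $config = array();
        $config["base_url"] = site_url("rank/GetAll"); // e.g. http://localhost/index.php/rank/GetAll
        $config["total_rows"] = $this->Rank_Model->record_count();
        $config["per_page"] = 30;
        $config["uri_segment"] = 3;
        $this->pagination->initialize($config);

    The other route is to keep base_url() and remove index.php site-wide with a mod_rewrite rule plus an empty $config['index_page']; either way, the first page and the numbered pages then share the same URL form.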

    Read the article

  • Windows Server 2008 R2 Error CAPI2 Event ID 4107

    - by umar bhatti
    I am getting following error on couple of my 2008 R2 servers. I have tried couple of fixes which didn't fix the issues. Log Name: Application Source: Microsoft-Windows-CAPI2 Date: 18/03/2013 09:48:40 Event ID: 4107 Task Category: None Level: Error Keywords: Classic User: N/A Computer: ServerName Description: Failed extract of third-party root list from auto update cab at: <http://ctldl.windowsupdate.com/msdownload/update/v3/static/trustedr/en/authrootstl.cab> with error: The data is invalid. . Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Microsoft-Windows-CAPI2" Guid="{5bbca4a8-b209-48dc-a8c7-b23d3e5216fb}" EventSourceName="Microsoft-Windows-CAPI2" /> <EventID Qualifiers="0">4107</EventID> <Version>0</Version> <Level>2</Level> <Task>0</Task> <Opcode>0</Opcode> <Keywords>0x8080000000000000</Keywords> <TimeCreated SystemTime="2013-03-18T09:48:40.169581600Z" /> <EventRecordID>8713</EventRecordID> <Correlation /> <Execution ProcessID="412" ThreadID="5288" /> <Channel>Application</Channel> <Computer>ServerName <Security /> </System> <EventData> <Data>http://ctldl.windowsupdate.com/msdownload/update/v3/static/trustedr/en/authrootstl.cab</Data> <Data>The data is invalid. </Data> </EventData> </Event>

    Read the article

  • Samba: share home directories when home directories are symbolic links

    - by Owen
    I have set up a new Ubuntu 9.10 system for five users. In the system is a large LVM volume where all the data is to be kept. The main system disk is not for this purpose, so I attempted to move the home directories using usermod -d /var/data/username -m And started creating my shares for these new home locations. But then I thought: hey, Samba has built-in home directory sharing! So I enabled that, and it didn't work. The shares were not published to the network. Only the share for user 'owen' was published; his folder hadn't been moved. So I thought: maybe Samba home sharing only works for default home locations, so how about I move the home directories back to where they were, and then make them symlinks. root@boxenmkiv:/home# ls -l total 4 lrwxrwxrwx 1 brett brett 25 2010-04-03 08:48 brett -> /var/data/brett/ lrwxrwxrwx 1 carly carly 23 2010-04-03 08:48 carly -> /var/data/carly/ lrwxrwxrwx 1 dave dave 21 2010-04-03 08:48 dave -> /var/data/dave/ lrwxrwxrwx 1 kate kate 23 2010-04-03 08:47 kate -> /var/data/kate/ drwxr-xr-x 4 owen owen 4096 2010-04-03 08:44 owen Like so. Still no go. The only users share which is published to the network is 'owen' who as you can see above has not had his home directory moved. I have also added the following to my smb.conf [global] follow symlinks = yes wide symlinks = yes unix extensions = no With no luck. Am I going about doing this the entirely wrong way? Should I just give up and manually create shares for the users? Thanks in advance.

    Read the article

  • Snort/Barnyard2 Logging

    - by Eric
    I need some help with my Snort/Barnyard2 setup. My goal is to have Snort send unified2 logs to Barnyard2 and then have Barnyard2 send the data to other locations. Here is my currrent setup. OS Scientific Linux 6 Snort Version 2.9.2.3 Barnyard2 Version 2.1.9 Snort command snort -c /etc/snort/snort.conf -i eth2 & Barnyard2 command /usr/local/bin/barnyard2 -c /etc/snort/barnyard2.conf -d /var/log/snort -f snort.log -w /var/log/snort/barnyard.waldo & snort.conf output unified2: filename snort.log, limit 128 barnyard2.conf output alert_syslog: host=127.0.0.1 output database: log, mysql, user=snort dbname=snort password=password host=localhost With this setup, barnyard2 is showing all of the correct information in the database and I'm using BASE to view it on the web GUI. I was hoping to be able to send the full packet data to syslog with barnyard2 but after reading around, it seems that it is impossible to do that. So I then started trying to modify the snort.conf file and add lines like "output alert_full: alert.full". This definitely gave me a lot more information but still not the full packet data like I want. So my question is, is there anyway I can use barnyard2 to send the full packet data of alerts to a human readable file? Since I can't send it directly to syslog, I can create another process to take the data from that file and ship it off to another server. If not, what flags and/or snort.conf configuration would you recommend to get the most data possible but still be able to handle quite a bit of traffic? In the end of it all, these alerts will be shipped to a central server via a SSH tunnel. I'm trying to stay away from databases.

    Read the article

  • Apache 2.4, Ubuntu 12.04 Forbidden Errors

    - by tubaguy50035
    I just installed Apache 2.4 today, and I'm having some issues getting vhost configuration to work correctly. Below is the vhost conf <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot /hosting/Client/site.com/www ServerName site.com ServerAlias www.site.com <Directory "/hosting/Client/site.com/www"> Options +Indexes +FollowSymLinks Order allow,deny Allow from all </Directory> DirectoryIndex index.html </VirtualHost> There is an index.html file in /hosting/Client/site.com/www. When I go to the site, I receive a 403 forbidden error. The www-data group is the group on the www folder, which I've already given all permissions (r/w/x). I'm really at a loss as to why this is happening. Any thoughts? If I remove the vhost and go straight to the IP address, I get the default, "It works!" page. So I know that it's working. The error log says "client denied by server configuration". apache2ctl -S dump: nick@server:~$ apache2ctl -S /usr/sbin/apache2ctl: 87: ulimit: error setting limit (Operation not permitted) VirtualHost configuration: *:80 is a NameVirtualHost default server site.com (/etc/apache2/sites-enabled/site.com.conf:1) port 80 namevhost site.com (/etc/apache2/sites-enabled/site.com.conf:1) alias www.site.com port 80 namevhost site.com (/etc/apache2/sites-enabled/site.com.conf:1) alias www.site.com ServerRoot: "/etc/apache2" Main DocumentRoot: "/var/www" Main ErrorLog: "/var/log/apache2/error.log" Mutex watchdog-callback: using_defaults Mutex default: dir="/var/lock/apache2" mechanism=fcntl Mutex mpm-accept: using_defaults PidFile: "/var/run/apache2.pid" Define: DUMP_VHOSTS Define: DUMP_RUN_CFG Define: ENALBLE_USR_LIB_CGI_BIN User: name="www-data" id=33 not_used Group: name="www-data" id=33 not_used Ouput of namei -mo /hosting/Client/site/www/index.html f: /hosting/Client/site.com/www/index.html drwxr-xr-x root root / drwxr-xr-x root root hosting drwxr-xr-x root root Client drwxr-xr-x nick www-data site.com drwxr-xr-x nick www-data www -rw-rwxr-x nick www-data index.html

    Read the article

  • Android failure to boot on LG [migrated]

    - by Ukavi
    I need to recover data from my AT&T LG Thrill Android Phone Background: My AT&T LG Thrill phone's battery died a couple of days ago because I forgot to charge it. When I charged the phone and tried to turn it on, it showed the LG logo followed by the dropping balls and the AT&T "Rethink Possible" screen. I then get a mesage that the Application Google Services Framework has crashed and the phone goes into a loop with the dropping balls showing again followed by "Rethink Possible" screen. This sequence repeats itself over and over and the phone does not get out of this loop. I have been able to go into the recovery screen (both Safe Mode and the Android Recovery Service) and have cleared cache, etc. However, I DO NOT want to wipe user data and restore to factory settings as this will wipe all of my data (pictures, application data, etc). Solution Needed: I need a suggestion to a way of accessing my data so that I can back it up onto an SD card/computer. I DO NOT want to root the phone as this may void the warranty. What I'm looking for is a way of perhaps putting the original flash image on the micro SD card and then have the phone read that image. Or some other similar solution that will get the phone out of this loop and allow me to get to the data.

    Read the article

  • SSD cache to minimize HDD spin-up time?

    - by sirprize
    short version first: I'm looking for Linux compatible software which is able to transparently cache HDD writes using an SSD. However, I only want to spin up the HDD once or twice a day (to write the cached data to the HDD). The rest of the time, the HDD should not be spinning due to noise concerns. Now the longer version: I have built a completely silent computer running Xubuntu. It has a A10-6700T APU, huge fanless cooler, fanless PSU, SSD. The problem is: it also has (and needs) a noisy HDD and I want to forbid spinning it up during the night. All writes should be cached on the SSD, reads are not needed in the night. Throughout every day, this computer will automatically download about 5 GB of data which will be retained for about a year, giving a total needed disk capacity of slightly less than 2 TB. This data is currently stored on a 3 TB noisy hard disk drive which is spinning day and night. Sometimes, I'll need to access some data from several months ago. However, most times I'll only need data from the last 14 days, which would fit on the SSD. Ideally, I'd like a transparent solution (all data on one filesystem) which caches all writes to the SSD, writing to the HDD only once a day. Reads would be served by the cache if they were still on the SDD, else the HDD would have to spin up. I have tried bcache without much success (using cache_mode=writeback, writeback_running=0, writeback_delay=86400, sequential_cutoff=0, congested_write_threshold_us=0 - anything missing?) and I read about ZFS ZIL/L2ARC but I'm not sure I can achieve my goal with ZFS. Any pointers? If all else fails, I will simply use some scripts to automatically copy files over to the big drive while deleting the oldest files from the SSD.

    Read the article

  • Architecture for highly available MySQL with automatic failover in physically diverse locations

    - by Warner
    I have been researching high availability (HA) solutions for MySQL between data centers. For servers located in the same physical environment, I have preferred dual master with heartbeat (floating VIP) using an active passive approach. The heartbeat is over both a serial connection as well as an ethernet connection. Ultimately, my goal is to maintain this same level of availability but between data centers. I want to dynamically failover between both data centers without manual intervention and still maintain data integrity. There would be BGP on top. Web clusters in both locations, which would have the potential to route to the databases between both sides. If the Internet connection went down on site 1, clients would route through site 2, to the Web cluster, and then to the database in site 1 if the link between both sites is still up. With this scenario, due to the lack of physical link (serial) there is a more likely chance of split brain. If the WAN went down between both sites, the VIP would end up on both sites, where a variety of unpleasant scenarios could introduce desync. Another potential issue I see is difficulty scaling this infrastructure to a third data center in the future. The network layer is not a focus. The architecture is flexible at this stage. Again, my focus is a solution for maintaining data integrity as well as automatic failover with the MySQL databases. I would likely design the rest around this. Can you recommend a proven solution for MySQL HA between two physically diverse sites? Thank you for taking the time to read this. I look forward to reading your recommendations.

    Read the article

  • Effective backup and archive strategy for database and linked files

    - by busyspin
    I am using Postgres to store a variety of application data for a webapp. Part of the application involves storing and retrieving user uploaded files. I am storing the files in the filesystem with some associated metadata in the database. I am trying to come up with a backup and archive strategy so that I can effectively backup and archive/restore the database and the linked files. Here are the things I want to accomplish. Perform routine backups that can be used for recovery from failures and which include all DB data and the linked files. Ideally, this backup would be done while the app is running. Live backup is certainly possible with a DB but I am not sure how to keep the linked files consistent with the database during the backup process Archive chunks of data as they become "old". These chunks must includes the database data plus any linked files. It should be possible to put the archived data back into production again. It would be ideal if it were easy to determine which ranges of objects were stored in each chunk. Do you have any advice for how to accomplish these goals? If the files were in the database as BLOBS these tasks would be much easier since normal database backup and restore functionality would handle this. I am not sure how to accomplish the same thing when file data is linked to database rows.

    Read the article

  • Mac OS X Lion (10.7) Drive Encryption

    - by Skoota
    My iMac has two drives (a 256 GB solid-state drive, and regular 2 TB hard drive). The Mac OS X Lion system is installed on the solid-state drive and, like many other users, I have moved my user profile folder onto the secondary 2 TB drive. However, as you may be aware, FileVault 2 on Mac OS X Lion (10.7) only encrypts the system drive. This leaves my data drive (containing my user profile folder, with all of my data) unencrypted. I am aware that work arounds for this issue exist (such as https://github.com/jridgewell/Unlock) but I am not happy with the results since they involve decrypting the data drive on startup using a LaunchDaemon (before any users have logged into the computer) essentially meaning that any user who logs onto the computer will see the unencrypted drive. I would like a method which will only unencrypted the data when an authorised user logs into the computer. As such, is there a way to do one of the following? Encrypt the entire data drive and only decrypt the drive when an authorised user logs into the computer. This would be equivalent behaviour to the Lion FileVault 2 feature, but on a secondary drive rather than the system drive. Encrypt only the user profile folder on the data drive, and only decrypt the folder when the user logs into the computer. This would be equivalent to the behaviour of FileVault 1 on previous versions of Mac OS X? I am happy to pay for a commercial third-party product that provides the required feature(s), but I have not yet been able to find one. Thanks in advance for any assistance.

    Read the article

  • IIS permission configuration issue

    - by Dan
    Sorry the title of this question is a little ambiguous but I don't really have any idea where the issue lies - I'm seeking some clarification of the server error logs. Basically, I had a dedicated server running Windows 2003 and Plesk (v8 I think). Last week the server hardware failed and the entire thing had to be rebuilt from scratch. New hardware was put in, new operating system (Win2008), new Plesk installation (v9.5), new software (MSSQL etc) then all data ported over manually from old C and D drives to restore all 30 client sites. It was hell! All has been okay for a couple of days now but about an hour ago POP! Suddenly all sites went down giving a 500 error. Restarting all services eventually brought everything back online, but I'm now living in total fear. It can - and probably will - happen again. The guys on support gave me the following errors from the server log: The Template Persistent Cache initialization failed for Application Pool 'ASP.NET v4.0 Classic' because of the following error: Could not create a Disk Cache Sub-directory for the Application Pool. The data may have additional error codes.. The worker process for application pool 'domain1.com(domain)(2.0)(pool)' encountered an error 'Cannot read configuration file ' trying to read configuration data from file '\\?\C:\inetpub\temp\apppools\domain1.com(domain)(2.0)(pool).config', line number '0'. The data field contains the error code. The worker process for application pool 'PleskControlPanel' encountered an error 'Cannot read configuration file ' trying to read configuration data from file '\\?\C:\inetpub\temp\apppools\PleskControlPanel.config', line number '0'. The data field contains the error code. The support guys are so ambiguous about this and it scares me horribly. Can anyone positively identify the cause of this error which lead to all client website going offline? What can be done to prevent it from happening again? Any pointers would be very much appreciated! Thanks folks...

    Read the article

  • Kickstart: Serve dynamic kickstart images via a CGI or PHP script?

    - by Stefan Lasiewski
    I'd like to kickstart a couple dozen RHEL6/SL6 servers. However, some of these servers are different and I don't want to create a new ks.cfg file for each class of server. Are there any products which can generate a Kickstart file dynamically on the fly, from a template? For example, if I append a line like this to the KERNEL: APPEND ks=http://192.168.1.100/cgi-bin/ks.cgi Then the script ks.cgi can determine what host this is (Via the MAC address), and print out Kickstart options which are appropriate for that host. I could optionally override some options by passing parameters to the script, like this: APPEND ks=http://192.168.1.100/cgi-bin/ks.cgi?NODETYPE=production&IP=192.168.2.80 After we kickstart the server, we activate Cfengine/Puppet on this system and manage the system using our favorite Configuration Management product. We're experimenting with xCAT but it is proving too cumbersome. I've looked into Cobbler, but I'm not sure it does this. Update: A roll-your-own solution is discussed in the O'Reilly book: Managing RPM-Based Systems with Kickstart and Yum, Chapter 3. Customizing Your Kickstart Install Dynamic ks.cfg, which echos some of the comments in this thread: To implement such a tool is beyond the scope of this Short Cut, but I can walk through the high-level design. Any such solution would mix a data store (the things that change) with a templating solution (the things that don’t change). The data store would hold the per-machine data, such as the IP address and hostname. You would also need a unique identifier, perhaps the hostname, such that you could pick up a given machine’s data. The data store could be a flat file, XML data, or a relational database such as PostgreSQL or MySQL. In turn, to invoke the system, you pass a machine’s unique identifier as a URL parameter. For example: boot: linux ks=http://your.kickstart.server/gen_config?host-server25 In this example, the CGI (or servlet, or whatever) generates a ks.cfg for the machine server25. But where, oh where, is the code for ks.cgi?
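    For what it's worth, the ks.cgi piece can be very small. A hedged sketch in PHP (the profile table, IP handling, and the kickstart directives shown are placeholders rather than a tested configuration; a real deployment would key the lookup on the MAC address or hostname in a proper data store):

        <?php
        // Sketch of a dynamic kickstart generator, e.g. served as
        // http://192.168.1.100/cgi-bin/ks.php?NODETYPE=production&IP=192.168.2.80
        header('Content-Type: text/plain');

        // Placeholder per-class data; a real setup would query a database.
        $profiles = array(
          'production' => array('tz' => 'America/Los_Angeles'),
          'test'       => array('tz' => 'UTC'),
        );

        $type = isset($_GET['NODETYPE']) ? $_GET['NODETYPE'] : 'production';
        $ip   = isset($_GET['IP']) ? $_GET['IP'] : 'dhcp';
        $p    = isset($profiles[$type]) ? $profiles[$type] : $profiles['production'];

        echo "install\n";
        echo "url --url=http://192.168.1.100/sl6/x86_64/os\n";
        echo "lang en_US.UTF-8\n";
        echo "timezone {$p['tz']}\n";
        echo $ip === 'dhcp'
          ? "network --bootproto=dhcp\n"
          : "network --bootproto=static --ip={$ip} --netmask=255.255.255.0\n";
        echo "%packages\n@core\n%end\n";

    The same template-plus-data-store idea works with a CGI in any language; the only contract is that the URL in the APPEND line returns a plain-text ks.cfg.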

    Read the article

  • How to allow writing to a mounted NFS partition

    - by Cerin
    How do you allow a specific user permission to write to an NFS partition? I've mounted an NFS share on my localhost (a Fedora install), and I can read and write as root, but I'm unable to write as the apache user, even though all the files and directories in the share on my localhost and remote host are owned by apache. For example, I've mounted it via this line in my /etc/fstab: remotehost:/data/media /data/media nfs _netdev,soft,intr,rw,bg 0 0 And both locations are owned by apache: [root@remotehost ~]# ls -la /data total 24 drwxr-xr-x. 6 root root 4096 Jan 6 2011 . dr-xr-xr-x. 28 root root 4096 Oct 31 2011 .. drwxr-xr-x 4 apache apache 4096 Jan 14 2011 media [root@localhost ~]# ls -la /data total 16 drwxr-xr-x 4 apache apache 4096 Dec 7 2011 . dr-xr-xr-x. 27 root root 4096 Jun 11 15:51 .. drwxrwxrwx 5 apache apache 4096 Jan 31 2011 media However, when I try and write as the apache user, I get a "Permission denied" error. [root@localhost ~]# sudo -u apache touch /data/media/test.txt' touch: cannot touch `/data/media/test.txt': Permission denied But of course it works fine as root. What am I doing wrong?

    Read the article

  • Recover original email from Sendmail log

    - by Xavi Colomer
    I have a website the contact form has been failing silently for two weeks (Wordpress + Contact form 7). Apparently updating to Contact Form 7 made the assigned email to fail [email protected], I also tested with [email protected] and it also failed until I tried with gmail and it finally worked. Apparently @telefonica.net and @me.com domains are not working with this version of the plugin, but I have to investigate the cause. I found the logs of the lost emails, but I would like to know If I can recover the sender or the content of the original messages. May 24 23:41:11 localhost sendmail[27653]: s4P3fBc3027653: from=www-data, size=3250, class=0, nrcpts=1, msgid=<[email protected]_web.com>, relay=www-data@localhost May 24 23:41:11 localhost sm-mta[27655]: s4P3fBdA027655: from=<[email protected]>, size=3359, class=0, nrcpts=1, msgid=<[email protected]_web.com>, proto=ESMTP, daemon=MTA-v4, relay=localhost.localdomain [127.0.0.1] May 24 23:41:11 localhost sendmail[27653]: s4P3fBc3027653: [email protected], ctladdr=www-data (33/33), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=33250, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s4P3fBdA027655 Message accepted for delivery) May 24 23:41:12 localhost sm-mta[27657]: s4P3fBdA027655: to=<[email protected]>, ctladdr=<[email protected]> (33/33), delay=00:00:01, xdelay=00:00:01, mailer=esmtp, pri=123359, relay=tnetmx.telefonica.net. [86.109.99.69], dsn=5.0.0, stat=Service unavailable May 24 23:41:12 localhost sm-mta[27657]: s4P3fBdA027655: s4P3fCdA027657: DSN: Service unavailable May 24 23:41:12 localhost sm-mta[27657]: s4P3fCdA027657: to=<[email protected]>, delay=00:00:00, xdelay=00:00:00, mailer=local, pri=30000, dsn=2.0.0, stat=Sent Thanks

    Read the article

  • What are valid 'ack' values?

    - by WileECanisLatrans
    Having an issue with a vendor who claims the cause of a problem is an invalid 'ack' value in the tcp data. I'm using java so I didn't write this layer. I used snoop to capture the traffic on the wire and am using wireshark to display the data. Here is what is happening. After receiving a multi-packet (5) message I see a multi-packet (3) response. The first packet in the response has a value for 'ack' that is different than the 'ack' value in the other two packets. The vendor claims this data is suspect. I've provided sample data below. I'm not a tcp expert so I don't know if this is a problem or not. I've tried to find something on valid ack values and it seems to me the value should be 80018 but that doesn't mean the 78345 is wrong. I found this on the web and it seems to apply but I'm not sure: "the ack value of any data segment is considered valid as long as it does not acknowledge data ahead of the next segment to send". Thanks for your help. My understanding is the vendor has written their own tcp layer.

    source   seq     ack     len
    vendor   75465   10924   0
    vendor   75465   10924   1440
    vendor   76905   10924   1440
    vendor   78345   10924   1440
    vendor   79785   10924   233
    me       10924   78345   0
    me       10924   80018   0
    me       10924   80018   197

    Read the article

  • USB transfer speed for Windows 7 is incredibly slow to my external drive

    - by Wolfram
    I'm running Windows 7 Pro and am try to backup 116 GB of data to my external 1 TB hard drive. My laptop has only USB 2.0 ports and my hard drive is USB 3.0 compatible, as is the cable I'm using. I understand that the transfer speed should still be in accordance with USB 2.0 speeds. However, right now I'm getting 135 KB/s and it's been gradually dropping. For an earlier transfer, I would get between 4 MB/s to 8 MB/s. So, I'm really just wondering what's going on with my transfer rate and what I can do to improve it. I'm currently about 35 GB into the 116 GB transfer. Another strange thing is that the window which shows the transfer status decided to max out at 835 MB, and therefore shows items remaining as 0. However, it is still performing the rest of the transfer, and I can see it still cycling through files. Now that I think about it, it seems plausible that the speed being shown by the window is calculated merely as total data transferred / time elapsed. Since the "counter" of data, as far as what is being displayed in the window, maxed out at 835 MB, as time increases, the speed shown is going to keep decreasing because the 'total data transferred' value isn't being incremented. So with that in mind, I suppose I don't actually know at what rate the data is being transferred currently. Nonetheless, my best speed earlier was only around 8 MB/s. Shouldn't USB 2.0 deliver closer to 35 MB/s? Also, if someone can tell me why the transfer status window is displaying the incorrect data information and how to fix this, that would also be appreciated.

    Read the article

  • MySQL ADO.NET Connector & MSSQL Integration Services

    - by user1114330
    Here I am, day three... attempting to sync a data view on a Windows Vista box (64 bit) running MSSQL 2012 and Visual Studio 2010. Sanity is slipping and hunger for progress fills my attention. I went through hell trying to get the MySQL ODBC drivers to get the job but to no avail...everyone seems to be lost and all the threads I can find are solutions that do not work for me. The problem: System DSN's not being seen by SSIS. SSIS DSN Not Showing as ODBC Data Source I make the decision to try out the ADO.NET connector...and to my surprise it is actually in the selection list in data sources in SSIS. So I take off running to create a Data Flow Task, create an ADO.NET Source (a local MSSQL DB)...all is good as usual. Then I move swiftly to creating a ADO.NET Destination, enter my credentials...wow, I am selecting a database finally on my linux server! Happy thinking that I finally have figured a way to get the job done. Then I move to mappings...nope, something is wrong...I am getting an error that hurts my eyes: Pipeline component has returned HRESULT error code 0xC0208457 from a a method call. Error at Data Flow Task [ADO NET Destination [81]]: Failed to get properties of external columns. The table name you entered may not exist or you do not have SELECT permission on the table object and an alternative attempt to get column properties through connection has failed. Detailed error messages are" You have an error in your SQL syntax check the manual that corresponds to your MySQL server version for the right syntax to use near "database".tablename" at line 1. The descriptor files on path C:\Program Files (x86)\Microsoft SQL Server\110\DTS\ProviderDescriptors\ does not contain schema information for connection of type MySQL.Data.MySqlClient.MySqlConnection. So it looks like it can't the information and therefore I cannot map the tables properly. Any ideas on this would be ultra helpful...thanks in advance to All!

    Read the article

  • Some SVN Repositories not working - 405

    - by Webnet
    I have 2 groups of repositories, web and engineering. I setup web about 3 months ago and it works great, I'm trying to move engineering over to this same SVN server and I'm getting a PROPFIND of /svn/engineering/main: 405 Method Not Allowed error when I try to do a checkout. I can checkout/commit for /svn/web just fine dav_svn.conf This is the only thing uncommented in this file.... <Location /svn/web> DAV svn SVNParentPath /var/svn-repos/web AuthType Basic AuthName "SVN Repository" AuthUserFile /etc/svn-auth-file Require valid-user </Location> <Location /svn/engineering> DAV svn SVNParentPath /var/svn-repos/engineering AuthType Basic AuthName "SVN Repository" AuthUserFile /etc/svn-auth-file Require valid-user </Location> /var/svn-repos/ drwxrwx--- 3 www-data subversion 4096 2010-06-11 11:57 engineering drwxrwx--- 5 www-data subversion 4096 2010-04-07 15:41 web /var/svn-repos/web - WORKING drwxrwx--- 7 www-data subversion 4096 2010-04-07 16:50 site1.com drwxrwx--- 7 www-data subversion 4096 2010-03-29 16:42 site2.com drwxrwx--- 7 www-data subversion 4096 2010-03-31 12:52 site3.com /var/svn-repos/engineering - NOT WORKING drwxrwx--- 6 www-data subversion 4096 2010-06-11 11:56 main I get to the bottom and now realize that there's a 6 on that last one not a 7..... what does that number mean?

    Read the article

  • How to provide users with isolated drive letters in Windows 2008 R2 (Terminal Server)

    - by Pierre
    I need to be able to host several RDP sessions on a Terminal Server, where users of group A see a drive X: mapped to a given folder of the server and another group B see the same drive letter X: mapped to another folder. For instance : User 1, Group A X: --> C:\data\A User 2, Group A X: --> C:\data\A User 3, Group B X: --> C:\data\B User 4, Group C X: --> C:\data\C Is this possible. If so, how do I configure the virtual drive mapping so that the user has nothing special to do; i.e. I want the letter X: to be available to Remote Apps launched by the user, or if the user logs in to the remote desktop. Can I somehow use subst to get this to work? I would like to avoid, if possible, mounting drive letters on local shares (i.e. I don't like the idea of having to go through \\localhost\data-A to reach the user's data).

    Read the article

  • How to setup RAID 1 with Intel RST on an existing Windows 7 system?

    - by instcode
    I'd like to setup RAID-1 using Intel Rapid Storage Technology on my Windows 7 64-bit system. I have an 1TB SATA HDD with Windows 7 system installed on the first primary partition (leftmost, ~200GB). The rest of this HDD is unallocated (~800GB). I bought another 2TB SATA, then created a primary partition (leftmost, ~500GB) and filled my data in. The rest of this HDD is unallocated (~1.5TB). A quick disk layout (XXX is the unallocated region): HDD1 (1TB): [ 200GB C:\ SYSTEM | XXXXXXXXXXXX ] HDD2 (2TB): [ 500GB Z:\ PROGRAM | XXXXXXXXXXXXXXXXXXXXXX ] Now, I want to create a 500GB RAID-1 partition (I'm not sure if using "partition" is correct here) on the rightmost of the 2 HDDs above without losing any existing data from both disks. Here is the expected layout: HDD1 (1TB): [ 200GB C:\ SYSTEM | XXXXXX | 500GB D:\ DATA - RAID-1 ] HDD2 (2TB): [ 500GB Z:\ PROGRAM | XXXXXXXXXXXXXXXX | 500GB D:\ DATA RAID-1] Let's not concern about data lost, is it possible to have that final layout using Intel RST? Previously, I tried this layout using dynamic disk & software RAID from Windows and it worked as expected, however, it's quite ugly in resynching after an OS failure that I don't want. If yes, is there a way to keep the data on existing partitions untouched or, at least, it should keep the SYSTEM partition safe (I'm okay if the PROGRAM partition has to be gone.)? Well, are there any strict/special steps I should follow when using the Intel RST manager in order to achieve that? If none of those questions above are "Yes", could you please suggest some other possible layouts that leave the C:SYSTEM partition untouched?

    Read the article

  • Combine multiple rows into one

    - by Jim
    I am trying to combine multiple rows of data into one. Column A contains the value on which the groupings will be based -- rows whose Column A values match will be combined into one row. My range extends from column A through X so I need a matching row of data to start in column Y. Example:

    +--------------+
    ¦ 1001 ¦ A ¦ C ¦
    ¦ 1001 ¦ B ¦ D ¦
    ¦ 1002 ¦ A ¦ E ¦
    ¦ 1002 ¦ B ¦ F ¦
    ¦ 1002 ¦ C ¦ G ¦
    +--------------+

    Desired Result:

    +------------------------------+
    ¦ 1001 ¦ A ¦ C ¦ B ¦ D ¦   ¦   ¦
    ¦ 1002 ¦ A ¦ E ¦ B ¦ F ¦ C ¦ G ¦
    +------------------------------+

    The VBA code I am currently using is not taking the entire contents of the matched row. It is only taking the data in the 2nd column and moving it up. VBA Code: Sub Mergeitems() Dim cl As Range Dim rw As Range Set rw = ActiveCell Do While rw <> "" ' for each row in data set ' find first empty cell on row Set cl = rw.Offset(0, 1) Do While cl <> "" Set cl = cl.Offset(0, 1) Loop ' if next row needs to be processed... Do While rw = rw.Offset(1, 0) cl = rw.Offset(1, 1) ' move the data Set cl = cl.Offset(0, 1) ' update pointer to next blank cell rw.Offset(1, 0).EntireRow.Delete xlShiftUp ' delete old data Loop ' next row Set rw = rw.Offset(1, 0) Loop End Sub

    Read the article

  • Strategy for Incremental Datasource Fetches in Excel

    - by user1352530
    I am in a scenario with a table that is refreshed by a third-party app every week. I need to keep accumulating all the data in Excel, using an ODBC connection to the database. I am wondering: Approach 1: Is there a way to force Excel to append results on every update (the update would be triggered according to a parameter that indicates the week)? I tried to define the table that the connection loads into using a dynamic reference, but once it is anchored the first time, the table position is never redefined. Approach 2: Use an ETL to accumulate all weekly results into a staging table and then connect Excel to it in real time. But I would need a mechanism for caching old data, as I cannot let Excel's opening time grow without bound. Imagine that after 10 years, Excel would need to load 10 years of data at opening before showing anything. Is there a way to store already-fetched data and increment it in real time (when the workbook is opened) by selecting only the new data (with a query/filter or something)? Thanks EDIT: Maybe it's better to ask it this way: What is the optimal strategy for a table that keeps growing and needs to be read in real time by Excel? I just don't want to fetch absolutely all the data after some months...

    Read the article
