Search Results

Search found 10595 results on 424 pages for 'job definition'.

Page 72/424

  • Safe to cast pointer to a forward-declared class to its true base class in C++?

    - by Matt DiMeo
    In one header file I have:

        #include "BaseClass.h"

        // a forward declaration of DerivedClass, which extends class BaseClass.
        class DerivedClass;

        class Foo {
            DerivedClass *derived;

            void someMethod() {
                // this is the cast I'm worried about.
                ((BaseClass*)derived)->baseClassMethod();
            }
        };

    Now, DerivedClass is (in its own header file) derived from BaseClass, but the compiler doesn't know that at the time it's reading the definition above for class Foo. However, Foo refers to DerivedClass pointers and DerivedClass refers to Foo pointers, so they can't both see each other's full definition. The first question is whether it's safe (according to the C++ spec, not in any given compiler) to cast a derived-class pointer to its base-class pointer type in the absence of a full definition of the derived class. The second question is whether there's a better approach. I'm aware I could move someMethod()'s body out of the class definition, but in this case it's important that it be inlined (it's part of an actual, measured hotspot; I'm not guessing).
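    One pattern that is often suggested for this situation, offered as a sketch rather than the definitive answer (the file names are assumptions): keep someMethod() declared inside the class, and put its inline definition somewhere the full DerivedClass definition is visible, so the derived-to-base conversion is implicit and no cast is needed at all.

        // Foo.h
        #include "BaseClass.h"
        class DerivedClass;              // still only a forward declaration here

        class Foo {
            DerivedClass *derived;
        public:
            void someMethod();           // declared here, defined where DerivedClass is complete
        };

        // Foo.inl -- include this wherever someMethod() must be visible for inlining
        #include "Foo.h"
        #include "DerivedClass.h"        // full definition: the conversion to BaseClass* is implicit

        inline void Foo::someMethod() {
            derived->baseClassMethod();  // no cast, and any pointer adjustment is handled correctly
        }

    This keeps the call inlinable while avoiding a cast whose meaning depends on a class definition the compiler cannot see at that point.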

    Read the article

  • C errors - Cannot combine with previous 'struct' declaration specifier && Redefinition of 'MyMIDINotifyProc' as different kind of symbol

    - by user1905634
    I'm still new to C but trying to understand it better by working my way through a small MIDI audio unit (in Xcode 4.3.3). I've been searching for an answer to this all day and still don't understand exactly what the problem is. Here's the code in question:

        //MyMIDINotifyProc.h
        #ifndef MIDIInstrumentUnit_CallbackProcs_h
        #define MIDIInstrumentUnit_CallbackProcs_h

        void MyMIDINotifyProc (const MIDINotification *message, void *refCon);

        #endif

        //MyMIDINotifyProc.c
        #include <CoreMIDI/CoreMIDI.h>
        #include "MyMIDINotifyProc.h"

        void MyMIDINotifyProc (const MIDINotification *message, void *refCon) {
            //manage notification
        }

    On the declaration in the header I get:

        ! Cannot combine with previous 'struct' declaration specifier

    I've made sure the declarations match, and even after renaming them I still get this in my .c file:

        ! Redefinition of 'MyMIDINotifyProc' as different kind of symbol

    which points to the .h declaration as the "previous definition". I know that MIDIServices.h in the CoreMIDI framework defines:

        typedef void (*MIDINotifyProc)(const MIDINotification *message, void *refCon);

    but I don't understand if/why that would cause an error. I would be grateful if anyone could offer some help.
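    One likely explanation, offered as an assumption rather than a confirmed diagnosis: the header is not self-contained. Any file that includes MyMIDINotifyProc.h without first including CoreMIDI sees MIDINotification as an unknown identifier, and the compiler's recovery from that can surface as exactly these "cannot combine" / "redefinition as a different kind of symbol" messages. A minimal sketch of a self-contained header:

        //MyMIDINotifyProc.h
        #ifndef MIDIInstrumentUnit_CallbackProcs_h
        #define MIDIInstrumentUnit_CallbackProcs_h

        #include <CoreMIDI/CoreMIDI.h>   // makes MIDINotification known to every includer

        void MyMIDINotifyProc(const MIDINotification *message, void *refCon);

        #endif

    If some other file in the project was including the header before CoreMIDI, this change would remove the error cascade without touching the .c file.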

    Read the article

  • HttpWebRequest POST and retrieve data from php script after login

    - by KenaGT
    Hello, I'm a newbie at this stuff, so I'll try to explain my problem. I am building an application that retrieves data after logging in to a PHP script: https://zamger.etf.unsa.ba/getrssid.php (see the page source for the form definition). The login form has two fields, Korisnicko ime (UID) [username] and Šifra [password]. After I log in, the page shows the data I must collect, like this:

        RSSID: 1321B312

    (this is the only data it shows, nothing else). I must do this with HttpWebRequest but don't know how. I tried doing it with a POST of the data, but it always gives me the login form itself as the response. I need the response to be "RSSID: 1321B312", not the form definition mentioned above. Please help.
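    A minimal C# sketch of the usual HttpWebRequest POST-with-cookies pattern, not the site's actual protocol: the form field names ("uid", "sifra") are guesses taken from the page labels and must be checked against the real page source, and the credentials are placeholders.

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        class RssidFetch
        {
            static void Main()
            {
                var cookies = new CookieContainer();

                var request = (HttpWebRequest)WebRequest.Create("https://zamger.etf.unsa.ba/getrssid.php");
                request.Method = "POST";
                request.ContentType = "application/x-www-form-urlencoded";
                request.CookieContainer = cookies;   // keep any session cookie the script sets

                string postData = "uid=" + Uri.EscapeDataString("myUser")
                                + "&sifra=" + Uri.EscapeDataString("myPassword");
                byte[] body = Encoding.UTF8.GetBytes(postData);
                request.ContentLength = body.Length;
                using (Stream s = request.GetRequestStream())
                    s.Write(body, 0, body.Length);

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                    Console.WriteLine(reader.ReadToEnd());   // should contain "RSSID: ..." on success
            }
        }

    If the script still returns the login form, either the field names are wrong or the script expects the login and the data request as two separate requests sharing the same CookieContainer.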

    Read the article

  • Why is there "data" and "newtype" in Haskell?

    - by martingw
    To me it seems that a newtype definition is just a data definition that obeys some restrictions (only one constructor and such), and that due to these restrictions the runtime system can handle newtypes more efficiently. OK, and the handling of pattern matching for undefined values is slightly different. But suppose Haskell only knew data definitions, no newtypes: couldn't the compiler find out for itself whether a given data definition obeys these restrictions, and automatically treat it more efficiently? I'm sure I'm missing out on something; these Haskell designers are so clever, there must be some deeper reason for this...
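    A small sketch of the semantic (not just performance) difference the question alludes to: a newtype constructor is erased at runtime, so matching on it forces nothing, while matching on a data constructor forces the value. That difference is observable, which is why a compiler could not silently treat a single-constructor data declaration as a newtype without changing program behaviour.

        data    D = D Int
        newtype N = N Int

        fromD :: D -> Int
        fromD (D _) = 0     -- must evaluate far enough to see the D constructor

        fromN :: N -> Int
        fromN (N _) = 0     -- the pattern is a no-op after compilation

        main :: IO ()
        main = do
            print (fromN undefined)   -- prints 0
            print (fromD undefined)   -- throws Prelude.undefined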

    Read the article

  • If I don't odr-use a variable, can I have multiple definitions of it across translation units?

    - by sftrabbit
    The standard seems to imply that there is no restriction on the number of definitions of a variable if it is not odr-used (§3.2/3):

        Every program shall contain exactly one definition of every non-inline function or variable that is odr-used in that program; no diagnostic required.

    It does say that any variable can't be defined multiple times within a translation unit (§3.2/1):

        No translation unit shall contain more than one definition of any variable, function, class type, enumeration type, or template.

    But I can't find a restriction for non-odr-used variables across the entire program. So why can't I compile something like the following?

        // other.cpp
        int x;

        // main.cpp
        int x;
        int main() {}

    Compiling and linking these files with g++ 4.6.3, I get a linker error for multiple definition of 'x'. To be honest, I expect this, but since x is not odr-used anywhere (as far as I can tell), I can't see how the standard restricts this. Or is it undefined behaviour?
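    Whatever the standardese answer turns out to be, the duplicate-symbol error is conventionally avoided in one of two ways. A sketch of both (this is the usual practice, not a resolution of the question above):

        // Option 1: one definition, everyone else sees only a declaration.
        // globals.h
        extern int x;        // declaration, safe to include everywhere
        // globals.cpp
        int x = 0;           // the single definition in the whole program

        // Option 2: give each translation unit its own private variable.
        // other.cpp and main.cpp
        static int x = 0;    // internal linkage, so the linker never sees a clash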

    Read the article

  • Testing IPP Printing with ipptool

    - by senloe
    I'm trying to send an IPP print job using ipptool. Using the sample .test files, I can send commands to the printer, but I am unable to successfully use the print-job.test file. Here's an example using ipptool:

        c:\...>ipptool -v ipp://name.local.:631/ipp/printer print-job.test
        ipptool: Filename "$filename" on line 21 cannot be read.
        ipptool: Filename mapped to "".

    It looks like it's failing to resolve the variable $filename within the test file, so I attempted to hardcode that value in the test file. In that case I get no error, but still no print. Does anybody have any experience using ipptool to test IPP printing?
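    One thing worth trying, offered as a sketch that assumes the stock CUPS ipptool: print-job.test expects the document to print to be supplied through the $filename variable, and ipptool's -f option sets that default request filename on the command line instead of hardcoding it in the .test file.

        c:\...>ipptool -v -f testpage.pdf ipp://name.local.:631/ipp/printer print-job.test

    Here testpage.pdf is a placeholder for any file in a format the printer can accept; if the command runs cleanly but nothing prints, the printer's supported document formats (reported by the get-printer-attributes sample test) would be the next thing to check.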

    Read the article

  • ulimit not reflected for jenkins slave

    - by techastute
    Problem: I got java.io.IOException: Too many open files during Solr indexing run through Jenkins. Some googling suggested setting the ulimit on the box where the job runs, so on a Linux x86_64 GNU/Linux box I set it in both of the following ways:

        ulimit -n 1000000

        # /etc/security/limits.conf
        userx soft nofile 1000000
        userx hard nofile 1000000

    where userx is the user the Jenkins job executes as. When I ssh to the box as userx manually through a terminal and check ulimit -n, I get 10000000.

    Question: when the same ulimit -n is executed through a Jenkins job, it reports only 1024, which is the default. Any advice would be much appreciated.
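    A sketch of the usual explanation and workaround, with the launch command below treated as an assumption about how this slave is started: limits.conf is applied by pam_limits to login sessions, so a Jenkins slave started as a daemon (or over a non-PAM channel) keeps the limits of the process that launched it, no matter what an interactive ssh session shows. Raising the limit in the same shell that starts the agent, and then restarting the agent, usually fixes it:

        # in the script or service that launches the Jenkins slave, before the java command
        ulimit -n 1000000
        java -jar slave.jar ...    # the JVM and every build it forks inherit the raised limit

    A job that then runs ulimit -n should report the new value; if it still shows 1024, the agent process was not restarted under the new limit.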

    Read the article

  • Deploy to JBoss 7 using Hudson Deploy plugin

    - by Uluk Biy
    I have 2 machines, one running Hudson CI and the other JBoss 7 AS. In Hudson, I have installed the "Deploy Plugin", created a new job and filled in the required JBoss manager user connection fields. When I run the job, the project builds successfully, however the deployment to the remote JBoss AS is never triggered. There are no errors or messages about the deployment in the log. What should I do?

    EDIT: The deployment is configured (or at least expected) as a "Post-build Action" with these parameters; it is not a separate job:

        [x] Deploy war/ear to a container
        WAR/EAR files: **/*.war
        Container: JBoss 7.x
        Manager user name: test
        Manager password: ****
        JBoss URL: http://192.168.1.2
        JBoss JMX Management port: 9990

    Read the article

  • Server Administration

    - by Kassem
    Hi everyone,

    My client asked me for a job description for a system administrator position, because I might be assigned this role along with the other guy I'm working with. To be honest, I do not know much about a system administrator's job, but I'm willing to learn.

    Questions:

        What are the security requirements of a server? (*)
        What are the key responsibilities in a system admin's job description?
        What are some of the day-to-day tasks of a system admin?
        What is the average monthly salary of a system admin?

    Note: I will be working inside a Windows environment, but your replies do not need to be restricted to Windows.

    (*) Other software I know will be required: Windows Server 2008, IIS 7.0, MS SQL Server, .NET 4.0 Runtime.

    Let me know if there are other things I should be aware of as well. Thanks!

    Read the article

  • Running "Rebuild Index" maintenance plan with "Online indexing"

    - by Bharanidharan
    Hi, I am using Windows Server 2003 SP2 and SQL Server 2005 Enterprise Edition. I am creating a "Rebuild Index" job for a particular database and I am able to run the job successfully. But when I enable the "Keep index online while rebuilding" option, the job fails and throws errors. I have attached the screenshots:

        http://img535.imageshack.us/gal.php?g=error1r.png

    PS: I am not able to attach the images here since I do not have 10 points yet! Any help would be appreciated. Thanks.
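    One cause worth checking, offered as an assumption rather than a confirmed reading of the screenshots: in SQL Server 2005, an online rebuild is not allowed for indexes that involve LOB columns (text, ntext, image, varchar(max) and the like), so a maintenance plan that rebuilds every index WITH (ONLINE = ON) fails as soon as it reaches such a table. The equivalent T-SQL for a single table looks like this (table names are placeholders):

        -- succeeds online only when no LOB columns are involved in the index
        ALTER INDEX ALL ON dbo.MyTable
        REBUILD WITH (ONLINE = ON);

        -- tables with LOB columns must be rebuilt offline (or excluded from the online plan)
        ALTER INDEX ALL ON dbo.MyLobTable
        REBUILD WITH (ONLINE = OFF);

    If that is the cause, the errors in the job history should name the specific index or data type that cannot be rebuilt online.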

    Read the article

  • SQL Server 2005 Default Backup Plan

    - by tylerl
    I noticed that a newly imported database on SQL Server 2005 had configured itself (without my knowledge) to perform daily backups, but it's not deleting old files and is quickly filling up the disk. I don't know how the backup job got configured (maybe that's something that gets transferred when you move a database?) but I'm having trouble modifying it. The backup runs as part of a SQL Server Agent job called "Daily Backups". This job runs a package called "(SSIS Packages)\Maintenance Plans\Backup Plan" -- which I can't find. The "Management\Maintenance Plans" area for my server is empty. I imagine I could delete the existing plan and re-create it manually, but I was hoping to just modify what was already there, since all that's missing is deleting old files.

    Read the article

  • Which Message Queue should I choose (must run on Linux)

    - by MHS
    There are many open source message queues for Linux, and I need some help deciding which one to go for. My problem is simple: I get sent a list of files that need to be processed. Each job can't be split up, but the jobs are self-contained and can be spread across multiple computers. I'm thinking of solving this with a message queue: multiple clients send messages to a central queue, and each queue has a number of subscribers that take jobs from it when they have finished processing their current job. Ideally it should have the following qualities:

        The message queue must be able to store unprocessed messages in case of a shutdown/reboot.
        A job can only be processed by a single subscriber (I don't want duplicate jobs).
        Subscribers should be able to send jobs of their own, to be processed by a different set of subscribers.

    Can anyone suggest a simple-to-use message queue?

    Read the article

  • Help with running crontab from root

    - by user242065
    I'm using OS X and having trouble getting a cron job to run. I type the following:

        $ sudo -i
        $ crontab -e

    I then enter:

        * * * * * root ifconfig en0 down > /dev/null
        0 19 * * * root ifconfig en0 down > /dev/null
        0 7 * * * root ifconfig en0 up > /dev/null

    and have no success. The first line is just for testing; I want it to shut off my internet, and the next two lines I plan to leave in once I get this working. If I type this into the terminal, the internet goes off:

        ifconfig en0 down

    Why is my cron job not shutting down the internet?

    FYI: This is a follow-up question to http://stackoverflow.com/questions/3027362/how-can-i-write-a-cron-job-that-will-block-my-internet-from-7pm-to-7am-so-i-can (most of the comments there are people making fun of me, plus a few attempts to solve the problem without cron jobs).
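    A sketch of the most likely fix, assuming the lines above were entered into the per-user crontab opened by crontab -e: user crontabs have no user column, so the literal word "root" is being parsed as the command to run. Dropping that field, and using the full path to ifconfig since cron's PATH is minimal, gives:

        0 19 * * * /sbin/ifconfig en0 down > /dev/null 2>&1
        0 7 * * * /sbin/ifconfig en0 up > /dev/null 2>&1

    The six-field form that includes a user name belongs in /etc/crontab, not in the file edited with crontab -e.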

    Read the article

  • How to configure/save layout of SQL Server's Log File Viewer?

    - by gernblandston
    When I'm viewing the job history of a particular SQL Agent Job, I typically want to see whether it succeeded, its duration and maybe the duration of the individual steps of the job. When I open the history in the Log File Viewer, I always need to scroll over and shrink the 'Message' column and drag the 'Duration' column over next to the 'Step Name' column. Is there a way to configure the layout of the Log File Viewer (e.g. reposition columns, resize columns) and save it for future sessions? Thanks!
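    A sketch of a workaround rather than an answer to the layout question: the same success/duration information can be pulled straight out of msdb with a query, which sidesteps the Log File Viewer columns entirely (the job name below is a placeholder).

        SELECT  j.name   AS job_name,
                h.step_id,
                h.step_name,
                h.run_status,           -- 1 = succeeded, 0 = failed
                h.run_date,
                h.run_duration          -- HHMMSS packed into an integer
        FROM    msdb.dbo.sysjobhistory AS h
        JOIN    msdb.dbo.sysjobs       AS j ON j.job_id = h.job_id
        WHERE   j.name = 'My Agent Job'
        ORDER BY h.run_date DESC, h.step_id;

    Saved as a snippet, this shows step durations next to step names without any column dragging.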

    Read the article

  • Printer options follow Office documents

    - by tkalve
    One person (John) creates an Office document and prints it to his HP printer, which uses the HP Universal Printing PS (v4.7) driver. He has Job Storage (Personal Job) enabled for this printer, with a custom username and a personal PIN. He later sends this document in an e-mail to his colleagues. Another person (Anne) opens the document and tries to print it to her HP printer (also using the HP Universal Printing driver), but is not able to fetch it at the printer. The Job Storage options from John's computer follow the Office Excel document, so Anne has to change them manually to her username and her PIN before she can print. What on earth is causing this, and how do we fix it?

    Read the article

  • SQL 2005 Log Shipping - Was working, now isn't!

    - by Jim
    Hello, I had log shipping working fine between two SQL 2005 servers. I suspect that a job was added to the source server which backed up the transaction log to disk (nothing to do with the existing log shipping job). As I understand it, if you do this then log shipping will stop working. Sure enough, it no longer works. I've deleted the job which had just been created. Log shipping still does not work. I've rebooted both servers and, again, log shipping does not work. I'm at a loss now... all I get is the following error:

        The log shipping secondary database XXXXXXXXXX has restore threshold of 45 minutes and is out of sync. No restore was performed for 5882 minutes. Restored latency is 15 minutes. Check agent log and logshipping monitor information.

    Any help appreciated! Thanks in advance.
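    A sketch of one way to confirm the diagnosis, assuming access to msdb on the primary: every transaction log backup is recorded there, so a stray log-backup job shows up as a backup written to an unexpected location, and that is the file missing from the secondary's restore sequence (the database name is a placeholder).

        SELECT  bs.database_name,
                bs.backup_start_date,
                bmf.physical_device_name        -- where that log backup was written
        FROM    msdb.dbo.backupset         AS bs
        JOIN    msdb.dbo.backupmediafamily AS bmf ON bmf.media_set_id = bs.media_set_id
        WHERE   bs.database_name = 'MyShippedDB'
          AND   bs.type = 'L'                   -- log backups only
        ORDER BY bs.backup_start_date DESC;

    If a log backup outside the log shipping folder appears in the list, the usual options are to restore that file (and any later ones) on the secondary WITH NORECOVERY in sequence, or to reinitialize log shipping from a fresh full backup.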

    Read the article

  • SQL Server not releasing Memory

    - by noob2487
    I am using SQL Server 2005. I am running a job which processes around 100K records. The job runs fine and takes about 45 minutes to execute, which is good. But after the job has finished, I can see the SQL Server 2005 instance still holding around 900 MB of memory. I waited around 2 hours but that memory was not released. Is there any process which takes care of memory here, something like GC (unpredictable)? Or am I doing something wrong?
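    For context, and as a sketch rather than tuning advice: SQL Server holds on to the memory it has committed for its buffer pool by design and only gives it back under operating-system memory pressure, so 900 MB staying allocated after a job finishes is expected behaviour rather than a leak. If the instance must stay below a ceiling, the usual control is max server memory (the 512 MB value below is only an example):

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 512;
        RECONFIGURE;

    This caps the buffer pool rather than forcing a release after each job, which is the intended way to share the box with other workloads.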

    Read the article

  • Backup SQL server db issue: delete old backup files

    - by David.Chu.ca
    I tried to use the sqlmaint.exe tool to back up a database on a remote PC. Here is an example of the backup command:

        sqlmaint.exe -S remoteSQLServer\SQLInstance -U username -P pwdxxx -D myDB -BkUpMedia DISK -BkUpDB C:\MSSQL_Backups -DelBkUps 3days ...

    Here I specified deleting backups older than 3 days. However, the job does not seem to be deleting old .bak files on the remote PC (where the SQL Server sits). The remote PC runs Windows Server 2008. I also set C:\MSSQL_Backups as a shared network drive with Everyone as owner. My understanding is that the job will delete any .bak files older than 3 days. Not sure what I missed? By the way, the job runs on a box with SQL Server 2005 installed.

    Read the article

  • Organizing files relationally in Windows 7?

    - by Cayetano Gonçalves
    I just took a new job as a policy analyst, and after even one week, keeping track of hundreds of files (lawsuits, legislation, letters, etc.) in Windows 7 is proving difficult. In my last job I was a database architect and I helped build Linux-based servers to track files for an entire department, but there is no way for me to do that at this time in this job. Is there any way to track files/indices/locations/tags/themes and store them in some kind of RDBMS, instead of storing the files in folders that only allow flat and fixed storage? For example, if I have a file that deals with:

        ELID organization
        Appeals court
        John Smith

    it really is inconvenient to have to decide which one of these tags to turn into a folder and place the file into it, when it falls under all the categories. Even if I could tag files the way you can tag posts on Stack Exchange, it would solve a lot of heartache.

    Read the article

  • Printer offline until spooler service is restarted multiple times

    - by Zian Choy
    When I try to print from my ThinkPad to a printer shared through a Windows 7 Homegroup hosted by a desktop computer, I often have to restart the Print Spooler service several times before the job will go through. In particular, this problem occurs when the desktop is in sleep mode when the print job is started and then brought out of sleep mode after the print job has been kicked off. Both computers are running Windows 7 32-bit edition with the latest patches. I have tried the following with no improvement:

        The SNMP registry hack (see MS KB for details)
        Following the instructions in a blog post entitled "Sharing Printers on Vista 64-bit"
        Looking at "Printer offline until spooler service is restarted"

    Read the article

  • Crontab - stop sending mail, special case ||

    - by 2ge
    Hi all, I need to put a small command into my crontab which checks whether the lighttpd web server is running, because for some reason it hangs sometimes. So I have this:

        * * * * * root /bin/pgrep lighttpd || /usr/local/etc/rc.d/lighttpd restart >/dev/null 2>&1

    The problem is that this sends me mail every minute; the mail contains the PID of the running lighttpd. For my other crontab jobs the redirection works, so I assume the "||" is what causes the problem. Maybe it would be better to rewrite the crontab job to use the exit status of pgrep so I can avoid the "||". I am using FreeBSD. Thanks for any help; for now I have disabled this job.
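    A sketch of the likely fix: the redirection only applies to the restart command on the right of "||", so pgrep's output (the PID) still goes to stdout and gets mailed. Redirecting pgrep as well keeps the "||" logic and silences the mail:

        * * * * * root /bin/pgrep lighttpd > /dev/null 2>&1 || /usr/local/etc/rc.d/lighttpd restart > /dev/null 2>&1

    The exit status of pgrep is unchanged by the redirection, so the restart still only runs when lighttpd is not found.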

    Read the article

  • Where are the SQLServer jobs stored?

    - by Saiyine
    I'd like to know what a SQL Server job is actually executing, but all I can find is that it calls DTSRun with an encrypted string. After decoding the string, the result is just the name of the job with the user and the password. How can I find out what this job is really calling?

    Edit: I've found a candidate; they could be in msdb.sysdtspackages, but again I can't read them, as SQL Server says the data is binary. How can I read them to confirm they are the jobs?
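    A sketch of where to look first (the column choices are just one convenient view): the job definitions live in msdb, and the full command text each step runs, including the DTSRun call, is in sysjobsteps.

        SELECT  j.name     AS job_name,
                s.step_id,
                s.step_name,
                s.subsystem,            -- e.g. TSQL, CmdExec
                s.command               -- the text the step executes (the DTSRun string)
        FROM    msdb.dbo.sysjobs     AS j
        JOIN    msdb.dbo.sysjobsteps AS s ON s.job_id = j.job_id
        ORDER BY j.name, s.step_id;

    The DTS packages that DTSRun refers to are stored separately in msdb.dbo.sysdtspackages, where the packagedata column holds the serialized package; it is meant to be opened in the DTS designer rather than read as text, which is why it looks like opaque binary in a query.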

    Read the article

  • How to allow Hudson build URL through Nginx auth_basic?

    - by rodreegez
    Hi, I have Hudson running and made available to the world via nginx. I have protected Hudson with nginx's auth_basic and that works great. The trouble is, I want to allow unauthenticated requests to the build URL, i.e. /job/<job_name>/build. Currently I have this in my nginx conf:

        upstream hudson {
            server 127.0.0.1:8888;
        }

        server {
            server_name ci.myurl.com;
            root /var/lib/hudson;

            location / {
                proxy_pass http://hudson/;
                auth_basic "Super secret stuff";
                auth_basic_user_file /var/opt/hudson/htpasswd;
            }

            location ~ \/build {
                auth_basic off;
            }
        }

    I can't get that second location to allow unauthenticated requests. I have tried various combinations of

        location ~ /job/(.*)/biuld { }
        location ^~ \/build { }
        location ~ \/job\/(.*)\/build { }

    etc... Maddening! Can anyone point me in the right direction?

    Thanks, Ad.
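    A sketch of a configuration commonly suggested for this kind of layout, untested against this exact setup: the unauthenticated location needs its own proxy_pass, otherwise a request that matches it is served from the static root instead of being forwarded to Hudson at all.

        location ~ ^/job/[^/]+/build$ {
            auth_basic off;
            proxy_pass http://hudson;    # no URI part, so the original request URI is passed through unchanged
        }

    Placed in the same server block, this regex location wins over the prefix "location /" for requests to /job/<job_name>/build, so those bypass basic auth while everything else still requires credentials.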

    Read the article

  • Specific cron at time point [closed]

    - by ARTI
    I have a very specific task, but can't handle it. I am not a programmer and am a total noob at bash scripts. So the question is: how do I create a cron job like this? Script A.sh can be called by hand at any time, and it should create a cron job to run script B.sh once at the nearest upcoming time point. For example, I have 4 time points: 10.00pm, 10.15pm, 10.30pm, 10.45pm. So if I trigger A.sh at 10.07pm, it should create a job to run script B.sh ONCE at 10.15pm, because 10.15pm is the nearest time point in the future. Is this possible? How can I write such an A.sh script? I use CentOS 6. It is very important and urgent for me. Thank you very much.
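    A sketch of one way to do it, using at(1) for the one-shot run instead of editing the crontab (at is the natural tool for "run once at time X" and is available on CentOS 6; the path to B.sh is a placeholder):

        #!/bin/bash
        # A.sh - schedule B.sh to run once at the nearest upcoming time point
        now=$(date +%s)
        for point in "22:00" "22:15" "22:30" "22:45"; do
            when=$(date -d "$point" +%s)              # today's date at that time (GNU date)
            if [ "$when" -gt "$now" ]; then
                echo "/path/to/B.sh" | at "$point"    # one-shot job handled by atd
                exit 0
            fi
        done
        echo "no remaining time point today" >&2
        exit 1

    The atd service has to be running (service atd start). If it really must be cron, the same loop could instead append a temporary crontab line and have B.sh remove it when it finishes.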

    Read the article

  • What's the advantage of using a bash script for cron jobs?

    - by AlxVallejo
    From my understanding, you write your crons by editing crontab -e. I've found several sources that instead point the cron job at a bash script, rather than writing the job out line for line. Is the only benefit that you can consolidate many tasks into one cron job by using a bash script?

    An additional question from a newbie: editing with crontab -e refers to one file, correct? I've noticed that if I open crontab -e and close it without editing, when I open the file again it has a different numerical extension, such as:

        "/tmp/crontab.XXXXk1DEaM" 0L, 0C

    I thought the crontab was stored in /var/spool/cron or /etc/crontab. Why would it store the cron file in the tmp folder?
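    A small sketch of the usual pattern (the script path and its contents are hypothetical): one crontab entry drives a script, so related steps share one schedule and one log, and the script can use shell constructs (error handling, variables, multi-line logic) that a single crontab line cannot.

        # crontab entry
        0 2 * * * /usr/local/bin/nightly-maintenance.sh >> /var/log/nightly.log 2>&1

        # /usr/local/bin/nightly-maintenance.sh
        #!/bin/bash
        set -e                                              # stop at the first failing step
        /usr/bin/rsync -a /srv/data/ /backup/data/          # step 1: copy data
        /usr/bin/find /tmp/app-cache -mtime +7 -delete      # step 2: prune old cache files
        echo "maintenance finished at $(date)"

    As for the temp file: crontab -e copies your installed crontab to a file under /tmp, opens that copy in your editor, and installs it back into /var/spool/cron when you save, which is why the name under /tmp changes each time.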

    Read the article
