Search Results

Search found 10480 results on 420 pages for 'anonymous functions'.

  • The sign of a true manager is delegation (C# style)

    - by MarkPearl
    Today I thought I would write a bit about delegates in C#. Up till recently I have managed to side step any real understanding of what delegates do and why they are useful – I mean, I know roughly what they do and have used them a lot, but I have never really got down dirty with them and mucked about. Recently however with my renewed interest in Silverlight delegates came up again as a possible solution to a particular problem, and suddenly I found myself opening a bland little console application to just see exactly how far I could take delegates with my limited knowledge. So, let’s first look at the MSDN definition of delegates… A delegate declaration defines a reference type that can be used to encapsulate a method with a specific signature. A delegate instance encapsulates a static or an instance method. Delegates are roughly similar to function pointers in C++; however, delegates are type-safe and secure. Well, don’t you love MSDN for such a useful definition. I must give it credit though… later on it really explains it a bit better by saying “A delegate lets you pass a function as a parameter. The type safety of delegates requires the function you pass as a delegate to have the same signature as the delegate declaration.” A little more reading up on delegates mentions that delegates are similar to interfaces in that they enable the separation of specification and implementation. A delegate declares a single method, while an interface declares a group of methods. So enough reading - lets look at some code and see a basic example of a delegate… Let’s assume we have a console application with a simple delegate declared called AdjustValue like below… class Program { private delegate int AdjustValue(int val); static void Main(string[] args) { } } In a sense, all we have said is that we will be creating one or more methods that follow the same pattern as AdjustValue – i.e. they will take one input value of type int and return an integer. We could then expand our code to have various methods that match the structure of our delegate AdjustValue (remember the structure is int xxx (int xxx)) class Program { private delegate int AdjustValue(int val); private static int Dbl(int val) { return val * 2; } private static int AlwaysOne(int val) { return 1; } static void Main(string[] args) { } }  Above I have expanded my project to have two methods, one called Dbl and the other AlwaysOne. Dbl always returns double the input val and AlwaysOne always returns 1. I could now declare a variable and assign it to be one of those functions, like the following… class Program { private delegate int AdjustValue(int val); private static int Dbl(int val) { return val * 2; } private static int AlwaysOne(int val) { return 1; } static void Main(string[] args) { AdjustValue myDelegate; myDelegate = Dbl; Console.WriteLine(myDelegate(1).ToString()); Console.ReadLine(); } } In this instance I have declared an instance of the AdjustValue delegate called myDelegate; I have then told myDelegate to point to the method Dbl, and then called myDelegate(1). What would the result be? Yes, in this instance it would be exactly the same as me calling the following code… static void Main(string[] args) { Console.WriteLine(Dbl(1).ToString()); Console.ReadLine(); }   So why all the extra work for delegates when we could just do what we did above and call the method directly? Well… that separation of specification to implementation comes to mind. So, this all seems pretty simple. 
Let’s take a slightly more complicated variation of the console application. Assume that my project is the same as the previous one except that my main method is adjusted as follows… static void Main(string[] args) { AdjustValue myDelegate; myDelegate = Dbl; myDelegate = AlwaysOne; Console.WriteLine(myDelegate(1).ToString()); Console.ReadLine(); } What would happen in this scenario? Quite simply “1” would be written to the console, the reason being that myDelegate was last pointing to the AlwaysOne method before it was called. Make sense? In a way, myDelegate is a variable method that can be swapped and changed when needed. Let’s make the code a little more confusing by using a delegate in the declaration of another delegate as shown below… class Program { private delegate int AdjustValue(InputValue val); private delegate int InputValue(); private static int Dbl(InputValue val) { return val()*2; } private static int GetInputVal() { Console.WriteLine("Enter a whole number : "); return Convert.ToInt32(Console.ReadLine()); } static void Main(string[] args) { AdjustValue myDelegate; myDelegate = Dbl; Console.WriteLine(myDelegate(GetInputVal).ToString()); Console.ReadLine(); } } Now it gets really interesting because it looks like we have passed a method into a function in the main method by declaring… Console.WriteLine(myDelegate(GetInputVal).ToString()); So, what is the output? Well, try to take a guess at what will happen, then copy the code and see if you got it right. Well, that brings me to the end of this short explanation of Delegates. Hopefully it made sense!
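Since the MSDN definition above compares delegates to C++ function pointers, here is a rough C++ analogue of the Dbl/AlwaysOne example (just an illustrative sketch using std::function and a lambda, not part of the original post):

    #include <functional>
    #include <iostream>

    // Rough analogue of the AdjustValue delegate: any callable that
    // takes an int and returns an int.
    using AdjustValue = std::function<int(int)>;

    int Dbl(int val)   { return val * 2; } // doubles the input
    int AlwaysOne(int) { return 1; }       // ignores the input

    int main() {
        AdjustValue myDelegate = Dbl;          // "point" the delegate at Dbl
        std::cout << myDelegate(1) << "\n";    // prints 2
        myDelegate = AlwaysOne;                // re-point it, as in the second example
        std::cout << myDelegate(1) << "\n";    // prints 1
        myDelegate = [](int val) { return val + 1; }; // anonymous function (lambda)
        std::cout << myDelegate(1) << "\n";    // prints 2
        return 0;
    }

The last assignment is the C++ spelling of an anonymous function; in C# the equivalent would be assigning a lambda or anonymous method to the AdjustValue delegate.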

    Read the article

  • Building a Repository Pattern against an EF 5 EDMX Model - Part 1

    - by Juan
    I am part of a year-long-plus project that is re-writing an existing application for a client. We have decided to develop the project using Visual Studio 2012 and .NET 4.5. The project will be using a number of technologies and patterns, including Entity Framework 5, WCF Services, and WPF for the client UI. This is my attempt at documenting some of the successes and failures that I will be coming across in the development of the application. In building the data access layer we have to access a database that has already been designed by a dedicated DBA. The DBA insists on using Stored Procedures, which has made the use of EF a little more difficult. He will not allow direct table access, but we did manage to get him to allow us to use Views. Since EF 5 does not have good support to do Code First with Stored Procedures, my option was to create a model (EDMX) against the existing database views. I then had to go select each entity and map the Insert/Update/Delete functions to their respective stored procedure. The next step after I had completed mapping the stored procedures to the entities in the EDMX model was to figure out how to build a generic repository that would work well with Entity Framework 5. After reading the blog posts below, I adopted much of their code with some changes to allow for the use of Ninject for dependency injection. http://www.tcscblog.com/2012/06/22/entity-framework-generic-repository/ http://www.tugberkugurlu.com/archive/generic-repository-pattern-entity-framework-asp-net-mvc-and-unit-testing-triangle IRepository.cs public interface IRepository<T> : IDisposable where T : class { void Add(T entity); void Update(T entity, int id); T GetById(object key); IQueryable<T> Query(Expression<Func<T, bool>> predicate); IQueryable<T> GetAll(); int SaveChanges(); int SaveChanges(bool validateEntities); } GenericRepository.cs public abstract class GenericRepository<T> : IRepository<T> where T : class { public abstract void Add(T entity); public abstract void Update(T entity, int id); public abstract T GetById(object key); public abstract IQueryable<T> Query(Expression<Func<T, bool>> predicate); public abstract IQueryable<T> GetAll(); public int SaveChanges() { return SaveChanges(true); } public abstract int SaveChanges(bool validateEntities); public abstract void Dispose(); } One of the issues I ran into was trying to do an update. I kept receiving errors, so I posted a question on Stack Overflow http://stackoverflow.com/questions/12585664/an-object-with-the-same-key-already-exists-in-the-objectstatemanager-the-object and came up with the following hack. If someone has a better way, please let me know.
DbContextRepository.cs public class DbContextRepository<T> : GenericRepository<T> where T : class { protected DbContext Context; protected DbSet<T> DbSet; public DbContextRepository(DbContext context) { if (context == null) throw new ArgumentException("context"); Context = context; DbSet = Context.Set<T>(); } public override void Add(T entity) { if (entity == null) throw new ArgumentException("Cannot add a null entity."); DbSet.Add(entity); } public override void Update(T entity, int id) { if (entity == null) throw new ArgumentException("Cannot update a null entity."); var entry = Context.Entry(entity); if (entry.State == EntityState.Detached) { var attachedEntity = DbSet.Find(id); // Need to have access to key if (attachedEntity != null) { var attachedEntry = Context.Entry(attachedEntity); attachedEntry.CurrentValues.SetValues(entity); } else { entry.State = EntityState.Modified; // This should attach entity } } } public override T GetById(object key) { return DbSet.Find(key); } public override IQueryable<T> Query(Expression<Func<T, bool>> predicate) { return DbSet.Where(predicate); } public override IQueryable<T> GetAll() { return Context.Set<T>(); } public override int SaveChanges(bool validateEntities) { Context.Configuration.ValidateOnSaveEnabled = validateEntities; return Context.SaveChanges(); } #region IDisposable implementation public override void Dispose() { if (Context != null) { Context.Dispose(); GC.SuppressFinalize(this); } } #endregion IDisposable implementation } At this point I am able to start creating the individual repositories that are needed and add a Unit of Work. Stay tuned for the next installment in my path to creating a Repository Pattern against EF5.

    Read the article

  • How do I make maximum, minimum, and average score statistics in this code? [on hold]

    - by goldensun player
    i wanna out put the maximum minimum and average score as a statistic category under the student ids and grades in the output file. how do i do that? here is the code: #include "stdafx.h" #include <iostream> #include <string> #include <fstream> #include <assert.h> using namespace std; int openfiles(ifstream& infile, ofstream& outfile); void Size(ofstream&, int, string); int main() { int num_student = 4, count, length, score2, w[6]; ifstream infile, curvingfile; char x; ofstream outfile; float score; string key, answer, id; do { openfiles(infile, outfile); // function calling infile >> key; // answer key for (int i = 0; i < num_student; i++) // loop over each student { infile >> id; infile >> answer; count = 0; length = key.size(); // length represents number of questions in exam from exam1.dat // size is a string function.... Size (outfile, length, answer); for (int j = 0; j < length; j++) // loop over each question { if (key[j] == answer[j]) count++; } score = (float) count / length; score2 = (int)(score * 100); outfile << id << " " << score2 << "%"; if (score2 >= 90)//<-----w[0] outfile << "A" << endl; else if (score2 >= 80)//<-----w[1] outfile << "B" << endl; else if (score2 >= 70)//<-----w[2] outfile << "C" << endl; else if (score2 >= 60)//<-----w[3] outfile << "D" << endl; else if (score2 >= 50)//<-----w[4] outfile << "E" << endl; else if (score2 < 50)//<-----w[5] outfile << "F" << endl; } cout << "Would you like to attempt a new trial? (y/n): "; cin >> x; } while (x == 'y' || x == 'Y'); return 0; } int openfiles(ifstream& infile, ofstream& outfile) { string name1, name2, name3, answerstring, curvedata; cin >> name1; name2; name3; if (name1 == "exit" || name2 == "exit" || name3 == "exit" ) return false; cout << "Input the name for the exam file: "; cin >> name1; infile.open(name1.c_str()); infile >> answerstring; cout << "Input the name for the curving file: "; cin >> name2; infile.open(name2.c_str()); infile >> curvedata; cout << "Input the name for the output: "; cin >> name3; outfile.open(name3.c_str()); return true; } void Size(ofstream& outfile, int length, string answer) { bool check;// extra answers, lesser answers... if (answer.size() > length) { outfile << "Unnecessary extra answers"; } else if (answer.size() < length) { outfile << "The remaining answers are incorrect"; } else { check = false; }; } and how do i use assert for preconditions and post conditional functions? i dont understand this that well...
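    One way to get those statistics is to track a running maximum, minimum, and total while looping over the students, then write the three values after the loop. The sketch below is only an illustration: it assumes the percentage scores have been collected into an array, and the names (WriteStatistics, scores, count) are made up rather than taken from the posted program.

        #include <fstream>
        using namespace std;

        // Sketch: write max / min / average for an array of percentage scores.
        void WriteStatistics(ofstream& outfile, const int scores[], int count)
        {
            if (count == 0) return;            // nothing to report
            int maxScore = scores[0];
            int minScore = scores[0];
            int total = 0;
            for (int i = 0; i < count; i++)
            {
                if (scores[i] > maxScore) maxScore = scores[i];
                if (scores[i] < minScore) minScore = scores[i];
                total += scores[i];
            }
            outfile << "Maximum score: " << maxScore << "%" << endl;
            outfile << "Minimum score: " << minScore << "%" << endl;
            outfile << "Average score: " << (float)total / count << "%" << endl;
        }

    In the posted code that would mean saving each student's score2 into an array inside the grading loop and calling something like this once the loop finishes; assert could then be used to check preconditions such as count > 0 before computing the average.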

    Read the article

  • BIP 11g Dynamic SQL

    - by Tim Dexter
    Back in the 10g release, if you wanted something beyond the standard query for your report extract, you needed to break out your favorite text editor. You gotta love 'vi' and hate emacs, am I right? And get to building a data template; they were/are lovely to write, such fun ... not! It's not fun writing them by hand, but you do get to do some cool stuff around the data extract, including dynamic SQL. By that I mean the ability to add content dynamically to your query at runtime. With 11g, we spoiled you with a visual builder: no more vi or notepad sessions, a friendly drag and drop interface allowing you to build hierarchical data sets, calculated columns, summary columns, etc. You can still create the dynamic SQL statements; it's just not so well documented right now, so in lieu of doc updates here's the skinny. If you check out the 10g process to create dynamic SQL in the docs, you'll see you need to create a data trigger function where you assign the dynamic SQL to a global variable that's matched in your report SQL. In 11g, the process is really the same; BI Publisher just provides a bit more help to define what trigger code needs to be called. You still need to create the function and place it inside a package in the db. Here's a simple plsql package with the 'beforedata' function trigger. Spec create or replace PACKAGE BIREPORTS AS whereCols varchar2(2000); FUNCTION beforeReportTrig return boolean; end BIREPORTS; Body create or replace PACKAGE BODY BIREPORTS AS FUNCTION beforeReportTrig return boolean AS BEGIN whereCols := ' and d.department_id = 100'; RETURN true; END beforeReportTrig; END BIREPORTS; You'll notice the additional where clause (whereCols - declared as a public variable) is hard coded. I'll cover parameterizing that in my next post. If you cannot wait, check the 10g docs for an example. I have my package compiling successfully in the db. Now, onto the BIP data model definition. 1. Create a new data model and go ahead and create your query(s) as you would normally. 2. In the query dialog box, add in the variables you want replaced at runtime using an ampersand rather than a colon e.g. &whereCols. select d.DEPARTMENT_NAME, ... from "OE"."EMPLOYEES" e, "OE"."DEPARTMENTS" d where d."DEPARTMENT_ID"= e."DEPARTMENT_ID" &whereCols Note that 'whereCols' matches the global variable name in our package. When you click OK to clear the dialog, you'll be asked for a default value for the variable; just use ' and 1=1'. That leading space is important to keep the SQL valid, i.e. required whitespace. This value will be used for the where clause in case it's not set by the function code. 3. Now click on the Event Triggers tree node and create a new trigger of the type Before Data. Type in the default package name, in my example, 'BIREPORTS'. Then hit the update button to get BIP to fetch the valid functions. In my case I get to see the following: Select the BEFOREREPORTTRIG function (or your name) and shuttle it across. 4. Save your data model and now test it. For now, you can update the where clause via the plsql package. Next time ... parameterizing the dynamic clause.

    Read the article

  • SQLite, python, unicode, and non-utf data

    - by Nathan Spears
    I started by trying to store strings in sqlite using python, and got the message: sqlite3.ProgrammingError: You must not use 8-bit bytestrings unless you use a text_factory that can interpret 8-bit bytestrings (like text_factory = str). It is highly recommended that you instead just switch your application to Unicode strings. Ok, I switched to Unicode strings. Then I started getting the message: sqlite3.OperationalError: Could not decode to UTF-8 column 'tag_artist' with text 'Sigur Rós' when trying to retrieve data from the db. More research and I started encoding it in utf8, but then 'Sigur Rós' starts looking like 'Sigur Rós' note: My console was set to display in 'latin_1' as @John Machin pointed out. What gives? After reading this, describing exactly the same situation I'm in, it seems as if the advice is to ignore the other advice and use 8-bit bytestrings after all. I didn't know much about unicode and utf before I started this process. I've learned quite a bit in the last couple hours, but I'm still ignorant of whether there is a way to correctly convert 'ó' from latin-1 to utf-8 and not mangle it. If there isn't, why would sqlite 'highly recommend' I switch my application to unicode strings? I'm going to update this question with a summary and some example code of everything I've learned in the last 24 hours so that someone in my shoes can have an easy(er) guide. If the information I post is wrong or misleading in any way please tell me and I'll update, or one of you senior guys can update. Summary of answers Let me first state the goal as I understand it. The goal in processing various encodings, if you are trying to convert between them, is to understand what your source encoding is, then convert it to unicode using that source encoding, then convert it to your desired encoding. Unicode is a base and encodings are mappings of subsets of that base. utf_8 has room for every character in unicode, but because they aren't in the same place as, for instance, latin_1, a string encoded in utf_8 and sent to a latin_1 console will not look the way you expect. In python the process of getting to unicode and into another encoding looks like: str.decode('source_encoding').encode('desired_encoding') or if the str is already in unicode str.encode('desired_encoding') For sqlite I didn't actually want to encode it again, I wanted to decode it and leave it in unicode format. Here are four things you might need to be aware of as you try to work with unicode and encodings in python. The encoding of the string you want to work with, and the encoding you want to get it to. The system encoding. The console encoding. The encoding of the source file Elaboration: (1) When you read a string from a source, it must have some encoding, like latin_1 or utf_8. In my case, I'm getting strings from filenames, so unfortunately, I could be getting any kind of encoding. Windows XP uses UCS-2 (a Unicode system) as its native string type, which seems like cheating to me. Fortunately for me, the characters in most filenames are not going to be made up of more than one source encoding type, and I think all of mine were either completely latin_1, completely utf_8, or just plain ascii (which is a subset of both of those). So I just read them and decoded them as if they were still in latin_1 or utf_8. It's possible, though, that you could have latin_1 and utf_8 and whatever other characters mixed together in a filename on Windows. 
Sometimes those characters can show up as boxes, other times they just look mangled, and other times they look correct (accented characters and whatnot). Moving on. (2) Python has a default system encoding that gets set when python starts and can't be changed during runtime. See here for details. Dirty summary ... well here's the file I added: \# sitecustomize.py \# this file can be anywhere in your Python path, \# but it usually goes in ${pythondir}/lib/site-packages/ import sys sys.setdefaultencoding('utf_8') This system encoding is the one that gets used when you use the unicode("str") function without any other encoding parameters. To say that another way, python tries to decode "str" to unicode based on the default system encoding. (3) If you're using IDLE or the command-line python, I think that your console will display according to the default system encoding. I am using pydev with eclipse for some reason, so I had to go into my project settings, edit the launch configuration properties of my test script, go to the Common tab, and change the console from latin-1 to utf-8 so that I could visually confirm what I was doing was working. (4) If you want to have some test strings, eg test_str = "ó" in your source code, then you will have to tell python what kind of encoding you are using in that file. (FYI: when I mistyped an encoding I had to ctrl-Z because my file became unreadable.) This is easily accomplished by putting a line like so at the top of your source code file: # -*- coding: utf_8 -*- If you don't have this information, python attempts to parse your code as ascii by default, and so: SyntaxError: Non-ASCII character '\xf3' in file _redacted_ on line 81, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details Once your program is working correctly, or, if you aren't using python's console or any other console to look at output, then you will probably really only care about #1 on the list. System default and console encoding are not that important unless you need to look at output and/or you are using the builtin unicode() function (without any encoding parameters) instead of the string.decode() function. I wrote a demo function I will paste into the bottom of this gigantic mess that I hope correctly demonstrates the items in my list. Here is some of the output when I run the character 'ó' through the demo function, showing how various methods react to the character as input. My system encoding and console output are both set to utf_8 for this run: '?' = original char <type 'str'> repr(char)='\xf3' '?' = unicode(char) ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data 'ó' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3' '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data Now I will change the system and console encoding to latin_1, and I get this output for the same input: 'ó' = original char <type 'str'> repr(char)='\xf3' 'ó' = unicode(char) <type 'unicode'> repr(unicode(char))=u'\xf3' 'ó' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3' '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data Notice that the 'original' character displays correctly and the builtin unicode() function works now. Now I change my console output back to utf_8. '?' = original char <type 'str'> repr(char)='\xf3' '?' = unicode(char) <type 'unicode'> repr(unicode(char))=u'\xf3' '?' 
= char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3' '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data Here everything still works the same as last time but the console can't display the output correctly. Etc. The function below also displays more information that this and hopefully would help someone figure out where the gap in their understanding is. I know all this information is in other places and more thoroughly dealt with there, but I hope that this would be a good kickoff point for someone trying to get coding with python and/or sqlite. Ideas are great but sometimes source code can save you a day or two of trying to figure out what functions do what. Disclaimers: I'm no encoding expert, I put this together to help my own understanding. I kept building on it when I should have probably started passing functions as arguments to avoid so much redundant code, so if I can I'll make it more concise. Also, utf_8 and latin_1 are by no means the only encoding schemes, they are just the two I was playing around with because I think they handle everything I need. Add your own encoding schemes to the demo function and test your own input. One more thing: there are apparently crazy application developers making life difficult in Windows. #!/usr/bin/env python # -*- coding: utf_8 -*- import os import sys def encodingDemo(str): validStrings = () try: print "str =",str,"{0} repr(str) = {1}".format(type(str), repr(str)) validStrings += ((str,""),) except UnicodeEncodeError as ude: print "Couldn't print the str itself because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", print ude try: x = unicode(str) print "unicode(str) = ",x validStrings+= ((x, " decoded into unicode by the default system encoding"),) except UnicodeDecodeError as ude: print "ERROR. unicode(str) couldn't decode the string because the system encoding is set to an encoding that doesn't understand some character in the string." print "\tThe system encoding is set to {0}. See error:\n\t".format(sys.getdefaultencoding()), print ude except UnicodeEncodeError as uee: print "ERROR. Couldn't print the unicode(str) because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", print uee try: x = str.decode('latin_1') print "str.decode('latin_1') =",x validStrings+= ((x, " decoded with latin_1 into unicode"),) try: print "str.decode('latin_1').encode('utf_8') =",str.decode('latin_1').encode('utf_8') validStrings+= ((x, " decoded with latin_1 into unicode and encoded into utf_8"),) except UnicodeDecodeError as ude: print "The string was decoded into unicode using the latin_1 encoding, but couldn't be encoded into utf_8. See error:\n\t", print ude except UnicodeDecodeError as ude: print "Something didn't work, probably because the string wasn't latin_1 encoded. See error:\n\t", print ude except UnicodeEncodeError as uee: print "ERROR. Couldn't print the str.decode('latin_1') because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", print uee try: x = str.decode('utf_8') print "str.decode('utf_8') =",x validStrings+= ((x, " decoded with utf_8 into unicode"),) try: print "str.decode('utf_8').encode('latin_1') =",str.decode('utf_8').encode('latin_1') except UnicodeDecodeError as ude: print "str.decode('utf_8').encode('latin_1') didn't work. 
The string was decoded into unicode using the utf_8 encoding, but couldn't be encoded into latin_1. See error:\n\t", validStrings+= ((x, " decoded with utf_8 into unicode and encoded into latin_1"),) print ude except UnicodeDecodeError as ude: print "str.decode('utf_8') didn't work, probably because the string wasn't utf_8 encoded. See error:\n\t", print ude except UnicodeEncodeError as uee: print "ERROR. Couldn't print the str.decode('utf_8') because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t",uee print print "Printing information about each character in the original string." for char in str: try: print "\t'" + char + "' = original char {0} repr(char)={1}".format(type(char), repr(char)) except UnicodeDecodeError as ude: print "\t'?' = original char {0} repr(char)={1} ERROR PRINTING: {2}".format(type(char), repr(char), ude) except UnicodeEncodeError as uee: print "\t'?' = original char {0} repr(char)={1} ERROR PRINTING: {2}".format(type(char), repr(char), uee) print uee try: x = unicode(char) print "\t'" + x + "' = unicode(char) {1} repr(unicode(char))={2}".format(x, type(x), repr(x)) except UnicodeDecodeError as ude: print "\t'?' = unicode(char) ERROR: {0}".format(ude) except UnicodeEncodeError as uee: print "\t'?' = unicode(char) {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee) try: x = char.decode('latin_1') print "\t'" + x + "' = char.decode('latin_1') {1} repr(char.decode('latin_1'))={2}".format(x, type(x), repr(x)) except UnicodeDecodeError as ude: print "\t'?' = char.decode('latin_1') ERROR: {0}".format(ude) except UnicodeEncodeError as uee: print "\t'?' = char.decode('latin_1') {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee) try: x = char.decode('utf_8') print "\t'" + x + "' = char.decode('utf_8') {1} repr(char.decode('utf_8'))={2}".format(x, type(x), repr(x)) except UnicodeDecodeError as ude: print "\t'?' = char.decode('utf_8') ERROR: {0}".format(ude) except UnicodeEncodeError as uee: print "\t'?' = char.decode('utf_8') {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee) print x = 'ó' encodingDemo(x) Much thanks for the answers below and especially to @John Machin for answering so thoroughly.

    Read the article

  • Error codes for C++

    - by billy
    #include <iostream> #include <iomanip> using namespace std; //Global constant variable declaration const int MaxRows = 8, MaxCols = 10, SEED = 10325; //Functions Declaration void PrintNameHeader(ostream& out); void Fill2DArray(double ary[][MaxCols]); void Print2DArray(const double ary[][MaxCols]); double GetTotal(const double ary[][MaxCols]); double GetAverage(const double ary[][MaxCols]); double GetRowTotal(const double ary[][MaxCols], int theRow); double GetColumnTotal(const double ary[][MaxCols], int theRow); double GetHighestInRow(const double ary[][MaxCols], int theRow); double GetLowestInRow(const double ary[][MaxCols], int theRow); double GetHighestInCol(const double ary[][MaxCols], int theCol); double GetLowestInCol(const double ary[][MaxCols], int theCol); double GetHighest(const double ary[][MaxCols], int& theRow, int& theCol); double GetLowest(const double ary[][MaxCols], int& theRow, int& theCol); int main() { int theRow; int theCol; PrintNameHeader(cout); cout << fixed << showpoint << setprecision(1); srand(static_cast<unsigned int>(SEED)); double ary[MaxRows][MaxCols]; cout << "The seed value for random number generator is: " << SEED << endl; cout << endl; Fill2DArray(ary); Print2DArray(ary); cout << " The Total for all the elements in this array is: " << setw(7) << GetTotal(ary) << endl; cout << "The Average of all the elements in this array is: " << setw(7) << GetAverage(ary) << endl; cout << endl; cout << "The sum of each row is:" << endl; for(int index = 0; index < MaxRows; index++) { cout << "Row " << (index + 1) << ": " << GetRowTotal(ary, theRow) << endl; } cout << "The highest and lowest of each row is: " << endl; for(int index = 0; index < MaxCols; index++) { cout << "Row " << (index + 1) << ": " << GetHighestInRow(ary, theRow) << " " << GetLowestInRow(ary, theRow) << endl; } cout << "The highest and lowest of each column is: " << endl; for(int index = 0; index < MaxCols; index++) { cout << "Col " << (index + 1) << ": " << GetHighestInCol(ary, theRow) << " " << GetLowestInCol(ary, theRow) << endl; } cout << "The highest value in all the elements in this array is: " << endl; cout << GetHighest(ary, theRow, theCol) << "[" << theRow << "]" << "[" << theCol << "]" << endl; cout << "The lowest value in all the elements in this array is: " << endl; cout << GetLowest(ary, theRow, theCol) << "[" << theRow << "]" << "[" << theCol << "]" << endl; return 0; } //Define Functions void PrintNameHeader(ostream& out) { out << "*******************************" << endl; out << "* *" << endl; out << "* C.S M10A Spring 2010 *" << endl; out << "* Programming Assignment 10 *" << endl; out << "* Due Date: Thurs. Mar. 
25 *" << endl; out << "*******************************" << endl; out << endl; } void Fill2DArray(double ary[][MaxCols]) { for(int index1 = 0; index1 < MaxRows; index1++) { for(int index2= 0; index2 < MaxCols; index2++) { ary[index1][index2] = (rand()%1000)/10; } } } void Print2DArray(const double ary[][MaxCols]) { cout << " Column "; for(int index = 0; index < MaxCols; index++) { int column = index + 1; cout << " " << column << " "; } cout << endl; cout << " "; for(int index = 0; index < MaxCols; index++) { int column = index +1; cout << "----- "; } cout << endl; for(int index1 = 0; index1 < MaxRows; index1++) { cout << "Row " << (index1 + 1) << ":"; for(int index2= 0; index2 < MaxCols; index2++) { cout << setw(6) << ary[index1][index2]; } } } double GetTotal(const double ary[][MaxCols]) { double total = 0; for(int theRow = 0; theRow < MaxRows; theRow++) { total = total + GetRowTotal(ary, theRow); } return total; } double GetAverage(const double ary[][MaxCols]) { double total = 0, average = 0; total = GetTotal(ary); average = total / (MaxRows * MaxCols); return average; } double GetRowTotal(const double ary[][MaxCols], int theRow) { double sum = 0; for(int index = 0; index < MaxCols; index++) { sum = sum + ary[theRow][index]; } return sum; } double GetColumTotal(const double ary[][MaxCols], int theCol) { double sum = 0; for(int index = 0; index < theCol; index++) { sum = sum + ary[index][theCol]; } return sum; } double GetHighestInRow(const double ary[][MaxCols], int theRow) { double highest = 0; for(int index = 0; index < MaxCols; index++) { if(ary[theRow][index] > highest) highest = ary[theRow][index]; } return highest; } double GetLowestInRow(const double ary[][MaxCols], int theRow) { double lowest = 0; for(int index = 0; index < MaxCols; index++) { if(ary[theRow][index] < lowest) lowest = ary[theRow][index]; } return lowest; } double GetHighestInCol(const double ary[][MaxCols], int theCol) { double highest = 0; for(int index = 0; index < MaxRows; index++) { if(ary[index][theCol] > highest) highest = ary[index][theCol]; } return highest; } double GetLowestInCol(const double ary[][MaxCols], int theCol) { double lowest = 0; for(int index = 0; index < MaxRows; index++) { if(ary[index][theCol] < lowest) lowest = ary[index][theCol]; } return lowest; } double GetHighest(const double ary[][MaxCols], int& theRow, int& theCol) { theRow = 0; theCol = 0; double highest = ary[theRow][theCol]; for(int index = 0; index < MaxRows; index++) { for(int index1 = 0; index1 < MaxCols; index1++) { double highest = 0; if(ary[index1][theCol] > highest) { highest = ary[index][index1]; theRow = index; theCol = index1; } } } return highest; } double Getlowest(const double ary[][MaxCols], int& theRow, int& theCol) { theRow = 0; theCol = 0; double lowest = ary[theRow][theCol]; for(int index = 0; index < MaxRows; index++) { for(int index1 = 0; index1 < MaxCols; index1++) { double lowest = 0; if(ary[index1][theCol] < lowest) { lowest = ary[index][index1]; theRow = index; theCol = index1; } } } return lowest; } . 1>------ Build started: Project: teddy lab 10, Configuration: Debug Win32 ------ 1>Compiling... 1>lab 10.cpp 1>c:\users\owner\documents\visual studio 2008\projects\teddy lab 10\teddy lab 10\ lab 10.cpp(46) : warning C4700: uninitialized local variable 'theRow' used 1>c:\users\owner\documents\visual studio 2008\projects\teddy lab 10\teddy lab 10\ lab 10.cpp(62) : warning C4700: uninitialized local variable 'theCol' used 1>Linking... 
1> lab 10.obj : error LNK2028: unresolved token (0A0002E0) "double __cdecl GetLowest(double const (* const)[10],int &,int &)" (?GetLowest@@$$FYANQAY09$$CBNAAH1@Z) referenced in function "int __cdecl main(void)" (?main@@$$HYAHXZ) 1> lab 10.obj : error LNK2019: unresolved external symbol "double __cdecl GetLowest(double const (* const)[10],int &,int &)" (?GetLowest@@$$FYANQAY09$$CBNAAH1@Z) referenced in function "int __cdecl main(void)" (?main@@$$HYAHXZ) 1>C:\Users\owner\Documents\Visual Studio 2008\Projects\ lab 10\Debug\ lab 10.exe : fatal error LNK1120: 2 unresolved externals 1>Build log was saved at "file://c:\Users\owner\Documents\Visual Studio 2008\Projects\ lab 10\teddy lab 10\Debug\BuildLog.htm" 1>teddy lab 10 - 3 error(s), 2 warning(s) ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
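For what it's worth, the two unresolved-external errors above look consistent with a simple naming mismatch in the listing: main calls GetLowest, which is declared at the top, but the definition near the bottom is spelled Getlowest with a lowercase l, so the linker never finds a body for GetLowest. (GetColumnTotal has the same pattern, declared with an n but defined as GetColumTotal, though main never calls it, so it produces no link error; the C4700 warnings are a separate issue, pointing out that theRow and theCol are used in main before they are given values.) A minimal sketch of the fix is just renaming the definition to match the declaration:

    // Rename the definition so it matches the declaration that main() calls:
    double GetLowest(const double ary[][MaxCols], int& theRow, int& theCol)
    {
        // ... body unchanged from the posted Getlowest implementation ...
    }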

    Read the article

  • Threading across multiple files

    - by Zach M.
    My program is reading in files and using thread to compute the highest prime number, when I put a print statement into the getNum() function my numbers are printing out. However, it seems to just lag no matter how many threads I input. Each file has 1 million integers in it. Does anyone see something apparently wrong with my code? Basically the code is giving each thread 1000 integers to check before assigning a new thread. I am still a C noobie and am just learning the ropes of threading. My code is a mess right now because I have been switching things around constantly. #include <stdio.h> #include <stdlib.h> #include <time.h> #include <string.h> #include <pthread.h> #include <math.h> #include <semaphore.h> //Global variable declaration char *file1 = "primes1.txt"; char *file2 = "primes2.txt"; char *file3 = "primes3.txt"; char *file4 = "primes4.txt"; char *file5 = "primes5.txt"; char *file6 = "primes6.txt"; char *file7 = "primes7.txt"; char *file8 = "primes8.txt"; char *file9 = "primes9.txt"; char *file10 = "primes10.txt"; char **fn; //file name variable int numberOfThreads; int *highestPrime = NULL; int fileArrayNum = 0; int loop = 0; int currentFile = 0; sem_t semAccess; sem_t semAssign; int prime(int n)//check for prime number, return 1 for prime 0 for nonprime { int i; for(i = 2; i <= sqrt(n); i++) if(n % i == 0) return(0); return(1); } int getNum(FILE* file) { int number; char* tempS = malloc(20 *sizeof(char)); fgets(tempS, 20, file); tempS[strlen(tempS)-1] = '\0'; number = atoi(tempS); free(tempS);//free memory for later call return(number); } void* findPrimality(void *threadnum) //main thread function to find primes { int tNum = (int)threadnum; int checkNum; char *inUseFile = NULL; int x=1; FILE* file; while(currentFile < 10){ if(inUseFile == NULL){//inUseFIle being used to check if a file is still being read sem_wait(&semAccess);//critical section inUseFile = fn[currentFile]; sem_post(&semAssign); file = fopen(inUseFile, "r"); while(!feof(file)){ if(x % 1000 == 0 && tNum !=1){ //go for 1000 integers and then wait sem_wait(&semAssign); } checkNum = getNum(file); /* * * * * I think the issue is here * * * */ if(checkNum > highestPrime[tNum]){ if(prime(checkNum)){ highestPrime[tNum] = checkNum; } } x++; } fclose(file); inUseFile = NULL; } currentFile++; } } int main(int argc, char* argv[]) { if(argc != 2){ //checks for number of arguements being passed printf("To many ARGS\n"); return(-1); } else{//Sets thread cound to user input checking for correct number of threads numberOfThreads = atoi(argv[1]); if(numberOfThreads < 1 || numberOfThreads > 10){ printf("To many threads entered\n"); return(-1); } time_t preTime, postTime; //creating time variables int i; fn = malloc(10 * sizeof(char*)); //create file array and initialize fn[0] = file1; fn[1] = file2; fn[2] = file3; fn[3] = file4; fn[4] = file5; fn[5] = file6; fn[6] = file7; fn[7] = file8; fn[8] = file9; fn[9] = file10; sem_init(&semAccess, 0, 1); //initialize semaphores sem_init(&semAssign, 0, numberOfThreads); highestPrime = malloc(numberOfThreads * sizeof(int)); //create an array to store each threads highest number for(loop = 0; loop < numberOfThreads; loop++){//set initial values to 0 highestPrime[loop] = 0; } pthread_t calculationThread[numberOfThreads]; //thread to do the work preTime = time(NULL); //start the clock for(i = 0; i < numberOfThreads; i++){ pthread_create(&calculationThread[i], NULL, findPrimality, (void *)i); } for(i = 0; i < numberOfThreads; i++){ pthread_join(calculationThread[i], NULL); } for(i = 0; i < 
numberOfThreads; i++){ printf("this is a prime number: %d \n", highestPrime[i]); } postTime= time(NULL); printf("Wall time: %ld seconds\n", (long)(postTime - preTime)); } } Yes I am trying to find the highest number over all. So I have made some head way the last few hours, rescucturing the program as spudd said, currently I am getting a segmentation fault due to my use of structures, I am trying to save the largest individual primes in the struct while giving them the right indices. This is the revised code. So in short what the first thread is doing is creating all the threads and giving them access points to a very large integer array which they will go through and find prime numbers, I want to implement semaphores around the while loop so that while they are executing every 2000 lines or the end they update a global prime number. #include <stdio.h> #include <stdlib.h> #include <time.h> #include <string.h> #include <pthread.h> #include <math.h> #include <semaphore.h> //Global variable declaration char *file1 = "primes1.txt"; char *file2 = "primes2.txt"; char *file3 = "primes3.txt"; char *file4 = "primes4.txt"; char *file5 = "primes5.txt"; char *file6 = "primes6.txt"; char *file7 = "primes7.txt"; char *file8 = "primes8.txt"; char *file9 = "primes9.txt"; char *file10 = "primes10.txt"; int numberOfThreads; int entries[10000000]; int entryIndex = 0; int fileCount = 0; char** fileName; int largestPrimeNumber = 0; //Register functions int prime(int n); int getNum(FILE* file); void* findPrimality(void *threadNum); void* assign(void *num); typedef struct package{ int largestPrime; int startingIndex; int numberCount; }pack; //Beging main code block int main(int argc, char* argv[]) { if(argc != 2){ //checks for number of arguements being passed printf("To many threads!!\n"); return(-1); } else{ //Sets thread cound to user input checking for correct number of threads numberOfThreads = atoi(argv[1]); if(numberOfThreads < 1 || numberOfThreads > 10){ printf("To many threads entered\n"); return(-1); } int threadPointer[numberOfThreads]; //Pointer array to point to entries time_t preTime, postTime; //creating time variables int i; fileName = malloc(10 * sizeof(char*)); //create file array and initialize fileName[0] = file1; fileName[1] = file2; fileName[2] = file3; fileName[3] = file4; fileName[4] = file5; fileName[5] = file6; fileName[6] = file7; fileName[7] = file8; fileName[8] = file9; fileName[9] = file10; FILE* filereader; int currentNum; for(i = 0; i < 10; i++){ filereader = fopen(fileName[i], "r"); while(!feof(filereader)){ char* tempString = malloc(20 *sizeof(char)); fgets(tempString, 20, filereader); tempString[strlen(tempString)-1] = '\0'; entries[entryIndex] = atoi(tempString); entryIndex++; free(tempString); } } //sem_init(&semAccess, 0, 1); //initialize semaphores //sem_init(&semAssign, 0, numberOfThreads); time_t tPre, tPost; pthread_t coordinate; tPre = time(NULL); pthread_create(&coordinate, NULL, assign, (void**)numberOfThreads); pthread_join(coordinate, NULL); tPost = time(NULL); } } void* findPrime(void* pack_array) { pack* currentPack= pack_array; int lp = currentPack->largestPrime; int si = currentPack->startingIndex; int nc = currentPack->numberCount; int i; int j = 0; for(i = si; i < nc; i++){ while(j < 2000 || i == (nc-1)){ if(prime(entries[i])){ if(entries[i] > lp) lp = entries[i]; } j++; } } return (void*)currentPack; } void* assign(void* num) { int y = (int)num; int i; int count = 10000000/y; int finalCount = count + (10000000%y); int sIndex = 0; pack pack_array[(int)num]; 
pthread_t workers[numberOfThreads]; //thread to do the workers for(i = 0; i < y; i++){ if(i == (y-1)){ pack_array[i].largestPrime = 0; pack_array[i].startingIndex = sIndex; pack_array[i].numberCount = finalCount; } pack_array[i].largestPrime = 0; pack_array[i].startingIndex = sIndex; pack_array[i].numberCount = count; pthread_create(&workers[i], NULL, findPrime, (void *)&pack_array[i]); sIndex += count; } for(i = 0; i< y; i++) pthread_join(workers[i], NULL); } //Functions int prime(int n)//check for prime number, return 1 for prime 0 for nonprime { int i; for(i = 2; i <= sqrt(n); i++) if(n % i == 0) return(0); return(1); }
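Since the stated goal is to have each worker fold its result into a shared largest-prime value, here is a minimal sketch of that pattern, written with C++11 std::thread and std::mutex purely as an illustration rather than with the pthreads and semaphores used in the post; the data and slicing are stand-ins, not the poster's file-reading logic:

    #include <iostream>
    #include <mutex>
    #include <thread>
    #include <vector>

    std::mutex resultLock;       // guards largestPrimeSoFar
    int largestPrimeSoFar = 0;   // shared result every worker merges into

    bool isPrime(int n) {                       // primality test, equivalent to the post's prime()
        if (n < 2) return false;
        for (int i = 2; i * i <= n; ++i)
            if (n % i == 0) return false;
        return true;
    }

    // Each worker scans its own slice, keeps a local maximum, and only
    // touches the shared result once, inside a short critical section.
    void worker(const std::vector<int>& entries, size_t begin, size_t end) {
        int localMax = 0;
        for (size_t i = begin; i < end; ++i)
            if (entries[i] > localMax && isPrime(entries[i]))
                localMax = entries[i];
        std::lock_guard<std::mutex> guard(resultLock);
        if (localMax > largestPrimeSoFar) largestPrimeSoFar = localMax;
    }

    int main() {
        std::vector<int> entries = {15, 7919, 104730, 104729, 99991}; // stand-in data
        size_t slice = entries.size() / 2;
        std::thread t1(worker, std::cref(entries), 0, slice);
        std::thread t2(worker, std::cref(entries), slice, entries.size());
        t1.join();
        t2.join();
        std::cout << "largest prime: " << largestPrimeSoFar << "\n"; // 104729
        return 0;
    }

The same shape works with the posted pthread code: compute locally, then take the semaphore only for the brief moment the shared largest value is compared and updated.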

    Read the article

  • Trac vs. Redmine vs. JIRA vs. FogBugz for one-man shop?

    - by kizzx2
    Background: I am a one-man freelancer looking for project management software that can meet the following requirements. I have used Trac for about a year now, tried Redmine and FogBugz on Demand for a couple of weeks, and never tried JIRA before. Basically, I'm looking for a piece of software that facilitates developer-client communication/collaboration and does time tracking. Requirements: record time estimates/time tracking; clients must be able to create/edit their own tickets/cases; clients must not see developer-created tickets/cases (internal); affordable (price) with multiple clients. Nice-to-haves: supports multiple projects in one installation; free Eclipse integration (Mylyn); easy time-tracking without using the Web UI (Trac's post-commit hook or Redmine's commit message scanning); clients can access the wiki; export the data to standard formats. My evaluation: Trac can basically fulfill most of the above requirements, but with so many customizations and plug-ins that it doesn't feel very clean. One downside is that the main trunk (0.11) has been around for a year or more and I still haven't seen much sign of any upgrades coming. Redmine has the cleanest Web UI. Its design philosophy seems to be the most elegant, with its innovative commit message scanning and stuff. However, the current version doesn't seem to be very mature and stable yet. It doesn't support internal (private) tickets, and the time-tracking commit message patch doesn't support the trunk version. The good thing about it is that the main trunk still seems to be actively developed. FogBugz is actually a very well written piece of software. However, the idea of paying $25/month for the client to be able to log in to the system seems a little bit too far off for an individual developer. The free version supports letting clients create/view their own cases using email, which is a sub-optimal alternative to having a full-fledged list of the user's own cases. That also means clients can't read/write wiki pages. Its time-tracking approach is innovative and good, though. However, all the Eclipse integrations (Bugclipse, Foglyn) are commercial: yet more investments before I can use my bug-tracker! And if I revert to the Web UI, it's not really a fast-rendering Web service. Also, the built-in report functions are excellent (e.g. evidence-based scheduling). JIRA is something I have zero experience with. Can someone with JIRA experience recommend why it might be a good fit for this particular situation? Question: Can we share experience on this? Any specific plugins/customizations that would best suit the requirements for this case?

    Read the article

  • Using FFmpeg in c#?

    - by daniel
    So I know it's a fairly big challenge, but I want to write a basic movie player/converter in C# using the FFmpeg library. However, the first obstacle I need to overcome is wrapping the FFmpeg library in C#. I downloaded ffmpeg but couldn't compile it on Windows, so I downloaded a precompiled version instead. Ok awesome. Then I started looking for C# wrappers. I have looked around and have found a few wrappers such as SharpFFmpeg (http://sourceforge.net/projects/sharpffmpeg/) and ffmpeg-sharp (http://code.google.com/p/ffmpeg-sharp/). First of all I wanted to use ffmpeg-sharp as it's LGPL and SharpFFmpeg is GPL. However it had quite a few compile errors. Turns out it was written for the Mono compiler; I tried compiling it with Mono but couldn't figure out how. I then started to manually fix the compiler errors myself, but came across a few scary ones and thought I'd better leave those alone. So I gave up on ffmpeg-sharp. Then I looked at SharpFFmpeg and it looks like what I want, all the functions P/Invoked for me. However it's GPL? Both the AVCodec.cs and AVFormat.cs files look like ports of avcodec.c and avformat.c, which I reckon I could port myself? Then I wouldn't have to worry about licensing. But I want to get this right before I go ahead and start coding. Should I: Write my own C++ library for interacting with ffmpeg, then have my C# program talk to the C++ library in order to play/convert videos etc. OR Port avcodec.h and avformat.h (is that all I need?) to C# by using a whole lot of DllImports and write it entirely in C#? First of all consider that I'm not great at C++ as I rarely use it, but I know enough to get around. The reason I'm thinking #1 might be the better option is that most FFmpeg tutorials are in C++ and I'd also have more control over memory management than if I were to do it in C#. What do you think? Also, would you happen to have any useful links (perhaps a tutorial) for using FFmpeg? EDIT: spelling mistakes
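    For the converter half of the project there is also a third route that sidesteps the wrapper question entirely: drive the prebuilt ffmpeg.exe from C# with System.Diagnostics.Process, and only fall back to P/Invoke (or a thin C++ wrapper) once playback or frame-level access is actually needed. A minimal sketch, assuming ffmpeg.exe is on the PATH and with placeholder file names:

        using System;
        using System.Diagnostics;

        class FfmpegCliSketch
        {
            static void Main()
            {
                // Placeholder file names; assumes ffmpeg.exe is on the PATH.
                var psi = new ProcessStartInfo
                {
                    FileName = "ffmpeg",
                    Arguments = "-i input.avi output.mp4",
                    UseShellExecute = false,
                    RedirectStandardError = true // ffmpeg reports progress on stderr
                };

                using (var ffmpeg = Process.Start(psi))
                {
                    string log = ffmpeg.StandardError.ReadToEnd();
                    ffmpeg.WaitForExit();
                    Console.WriteLine("ffmpeg exited with code {0}", ffmpeg.ExitCode);
                }
            }
        }

    The obvious trade-off is that a separate process gives no frame-level access, so a real player would still need one of the two wrapping approaches from the question.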

    Read the article

  • Why SetUnhandledExceptionFilter cannot capture some exceptions but AddVectoredExceptionHandler can

    - by wrongite
    I have experienced a problem that the function I passed to the SetUnhandledExceptionFilter didn't get called when the exception code c0000374 raising. But it works fine with the exception code c0000005. Then I tried to use the AddVectoredExceptionHandler instead, and it didn't have the problem, the handler function get called correctly. Is it the API bug? Can I use AddVectoredExceptionHandler instead of SetUnhandledExceptionFilter everywhere? The both functions work correctly with // Exception code c0000005 int* p1 = NULL; *p1 = 99; Only AddVectoredExceptionHandler can capture this exception. // Exception code c0000374 int* p2 = new int; delete p2; delete p2; Test program. #include <tchar.h> #include <fstream> #include <Windows.h> LONG WINAPI VectoredExceptionHandler(PEXCEPTION_POINTERS pExceptionInfo) { std::ofstream f; f.open("VectoredExceptionHandler.txt", std::ios::out | std::ios::trunc); f << std::hex << pExceptionInfo->ExceptionRecord->ExceptionCode << std::endl; f.close(); return EXCEPTION_CONTINUE_SEARCH; } LONG WINAPI TopLevelExceptionHandler(PEXCEPTION_POINTERS pExceptionInfo) { std::ofstream f; f.open("TopLevelExceptionHandler.txt", std::ios::out | std::ios::trunc); f << std::hex << pExceptionInfo->ExceptionRecord->ExceptionCode << std::endl; f.close(); return EXCEPTION_CONTINUE_SEARCH; } int _tmain(int argc, _TCHAR* argv[]) { AddVectoredExceptionHandler(1, VectoredExceptionHandler); SetUnhandledExceptionFilter(TopLevelExceptionHandler); // Exception code c0000374 int* p2 = new int; delete p2; delete p2; // Exception code c0000005 int* p1 = NULL; *p1 = 99; return 0; }

    Read the article

  • jqGrid multi-checkbox custom edittype solution

    - by gsiler
    For those of you trying to understand jqGrid custom edit types ... I created a multi-checkbox form element, and thought I'd share. This was built using version 3.6.4. If anyone has a more efficient solution, please pass it on. Within the colModel, the appropriate edit fields look like this: edittype:'custom' editoptions:{ custom_element:MultiCheckElem, custom_value:MultiCheckVal, list:'Check1,Check2,Check3,Check4' } Here are the javascript functions (BTW, It also works – with some modifications – when the list of checkboxes is in a DIV block): //———————————————————— // Description: // MultiCheckElem is the "custom_element" function that builds the custom multiple check box input // element. From what I have gathered, jqGrid calls this the first time the form is launched. After // that, only the "custom_value" function is called. // // The full list of checkboxes is in the jqGrid "editoptions" section "list" tag (in the options // parameter). //———————————————————— function MultiCheckElem( value, options ) { //———- // for each checkbox in the list // build the input element // set the initial "checked" status // endfor //———- var ctl = ''; var ckboxAry = options.list.split(','); for ( var i in ckboxAry ) { var item = ckboxAry[i]; ctl += '<input type="checkbox" '; if ( value.indexOf(item + '|') != -1 ) ctl += 'checked="checked" '; ctl += 'value="' + item + '"> ' + item + '</input><br />&nbsp;'; } ctl = ctl.replace( /<br />&nbsp;$/, '' ); return ctl; } //———————————————————— // Description: // MultiCheckVal is the "custom_value" function for the custom multiple check box input element. It // appears that jqGrid invokes this function the first time the form is submitted and, the rest of // the time, when the form is launched (action = set) and when it is submitted (action = 'get'). //———————————————————— function MultiCheckVal(elem, action, val) { var items = ''; if (action == 'get') // the form has been submitted { //———- // for each input element // if it's checked, add it to the list of items // endfor //———- for (var i in elem) { if (elem[i].tagName == 'INPUT' && elem[i].checked ) items += elem[i].value + ','; } // items contains a comma delimited list that is returned as the result of the element items = items.replace(/,$/, ''); } else // the form is launched { //———- // for each input element // based on the input value, set the checked status // endfor //———- for (var i in elem) { if (elem[i].tagName == 'INPUT') { if (val.indexOf(elem[i].value + '|') == -1) elem[i].checked = false; else elem[i].checked = true; } } // endfor } return items; }

    Read the article

  • DirectoryServicesCOMException when working with System.DirectoryServices.AccountManagement

    - by antik
    I'm attempting to determine whether a user is a member of a given group using System.DirectoryServices.AccountManagment. I'm doing this inside a SharePoint WebPart in SharePoint 2007 on a 64-bit system. Project targets .NET 3.5 Impersonation is enabled in the web.config. The IIS Site in question is using an IIS App Pool with a domain user configured as the identity. I am able to instantiate a PrincipalContext as such: PrincipalContext pc = new PrincipalContext(ContextType.Domain) Next, I try to grab a principal: using (PrincipalContext pc = new PrincipalContext(ContextType.Domain)) { GroupPrincipal group = GroupPrincipal.FindByIdentity(pc, "MYDOMAIN\somegroup"); // snip: exception thrown by line above. } Both the above and UserPrincipal.FindByIdentity with a user SAM throw a DirectoryServicesCOMException: "Logon failure: Unknown user name or bad password" I've tried passing in a complete SAMAccountName to either FindByIdentity (in the form of MYDOMAIN\username) or just the username with no change in behavior. I've tried executing the code with other credentials using both the HostingEnvironment.Impersonate and SPSecurity.RunWithElevatedPrivileges approaches and also experience the same result. I've also tried instantiating my context with the domain name in place: Principal Context pc = new PrincipalContext(ContextType.Domain, "MYDOMAIN"); This throws a PrincipalServerDownException: "The server could not be contacted." I'm working on a reasonably hardened server. I did not lock the system down so I am unsure exactly what has been done to it. If there are credentials I need to allocate to my pool identity's user or in the domain security policy in order for these to work, I can configure the domain accordingly. Are there any settings that would be preventing my code from running? Am I missing something in the code itself? Is this just not possible in a SharePoint web? EDIT: Given further testing, my code functions correctly when tested in a Console application targeting .NET 4.0. I targeted a different framework because I didn't have AccountManagement available to me in the console app when targeting .NET 3.5 for some reason. using (PrincipalContext pc = new PrincipalContext(ContextType.Domain)) using (UserPrincipal adUser = UserPrincipal.FindByIdentity(pc, "MYDOMAIN\joe.user")) using (GroupPrincipal adGroup = GroupPrincipal.FindByIdentity(pc, "MYDOMAIN\user group")) { if (adUser.IsMemberOf(adGroup)) { Console.WriteLine("User is a member!"); } else { Console.WriteLine("User is NOT a member."); } } What varies in my SharePoint environment that might prohibit this function from executing?
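    One way to take the impersonation question out of the equation is to bind with explicit credentials, using the PrincipalContext overload that accepts a user name and password. A minimal sketch under that assumption — the domain, service account, password and group name below are placeholders, not values from the question:

        using System;
        using System.DirectoryServices.AccountManagement;

        class GroupMembershipSketch
        {
            static void Main()
            {
                // Placeholder domain, service account and password: the point is only
                // that the bind no longer depends on the impersonated SharePoint token.
                using (var pc = new PrincipalContext(ContextType.Domain, "MYDOMAIN",
                                                     "svc_lookup", "svc_password"))
                using (var user = UserPrincipal.FindByIdentity(pc, IdentityType.SamAccountName, "joe.user"))
                using (var group = GroupPrincipal.FindByIdentity(pc, "user group"))
                {
                    if (user != null && group != null && user.IsMemberOf(group))
                        Console.WriteLine("User is a member.");
                    else
                        Console.WriteLine("User is not a member (or was not found).");
                }
            }
        }

    If this succeeds where the parameterless bind fails, that would point at the token the app pool / impersonation hands to AccountManagement rather than at the code itself.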

    Read the article

  • Java Process "The pipe has been ended" problem

    - by Amit Kumar
    I am using Java Process API to write a class that receives binary input from the network (say via TCP port A), processes it and writes binary output to the network (say via TCP port B). I am using Windows XP. The code looks like this. There are two functions called run() and receive(): run is called once at the start, while receive is called whenever there is a new input received via the network. Run and receive are called from different threads. The run process starts an exe and receives the input and output stream of the exe. Run also starts a new thread to write output from the exe on to the port B. public void run() { try { Process prc = // some exe is `start`ed using ProcessBuilder OutputStream procStdIn = new BufferedOutputStream(prc.getOutputStream()); InputStream procStdOut = new BufferedInputStream(prc.getInputStream()); Thread t = new Thread(new ProcStdOutputToPort(procStdOut)); t.start(); prc.waitFor(); t.join(); procStdIn.close(); procStdOut.close(); } catch (Exception e) { e.printStackTrace(); printError("Error : " + e.getMessage()); } } The receive forwards the received input from the port A to the exe. public void receive(byte[] b) throws Exception { procStdIn.write(b); } class ProcStdOutputToPort implements Runnable { private BufferedInputStream bis; public ProcStdOutputToPort(BufferedInputStream bis) { this.bis = bis; } public void run() { try { int bytesRead; int bufLen = 1024; byte[] buffer = new byte[bufLen]; while ((bytesRead = bis.read(buffer)) != -1) { // write output to the network } } catch (IOException ex) { Logger.getLogger().log(Level.SEVERE, null, ex); } } } The problem is that I am getting the following stack inside receive() and the prc.waitfor() returns immediately afterwards. The line number shows that the stack is while writing to the exe. The pipe has been ended java.io.IOException: The pipe has been ended at java.io.FileOutputStream.writeBytes(Native Method) at java.io.FileOutputStream.write(FileOutputStream.java:260) at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65) at java.io.BufferedOutputStream.write(BufferedOutputStream.java:109) at java.io.FilterOutputStream.write(FilterOutputStream.java:80) at xxx.receive(xxx.java:86) Any advice about this will be appreciated.

    Read the article

  • WPF Focus In Tab Control Content When New Tab is Created

    - by Phil Sandler
    I've done a lot of searching on SO and google around this problem, but can't seem to find anything else to try. I have a MainView (window) that contains a tab control. The tab control binds to an ObservableCollection of ChildViews (user controls). The MainView's ViewModel has a method that allows adding to the collection of ChildViews, which then creates a new tab. When a new tab is created, it becomes the active tab, and this works fine. This method on the MainView is called from another ViewModel (OtherViewModel). What I am trying to do is set the keyboard focus to the first control on the tab (an AutoCompleteBox from WPFToolkit*) when a new tab is created. I also need to set the focus the same way, but WITHOUT creating a new tab (so set the focus on the currently active tab). (*Note that there seem to be some focus problems with the AutoCompleteBox--even if it does have focus you need to send a MoveNext() to it to get the cursor in its window. I have worked around this already). So here's the problem. The focusing works when I don't create a new tab, but it doesn't work when I do create a new tab. Both functions use the same method to set focus, but the create logic first calls the method that creates a new tab and sets it to active. Code that sets the focus (in the ChildView's Codebehind): IInputElement element1 = Keyboard.Focus(autoCompleteBox); //plus code to deal with AutoCompleteBox as noted. In either case, the Keyboard.FocusedElement starts out as the MainView. After a create, calling Keyboard.Focus seems to do nothing (focused element is still the MainView). Calling this without creating a tab correctly sets the keyboard focus to autoCompleteBox. Any ideas? Update: Bender's suggestion half-worked. So now in both cases, the focused element is correctly the AutoCompleteBox. What I then do is MoveNext(), which sets the focus to a TextBox. I have been assuming that this Textbox is internal to the AutoCompleteBox, as the focus was correctly set on screen when this happened. Now I'm not so sure. This is still the behavior I see when this code gets hit when NOT doing a create. After a create, MoveNext() sets the focus to an element back in my MainView. The problem must still be along the lines of Bender's answer, where the state of the controls is not the same depending on whether a new tab was created or not. Any other thoughts?
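    A pattern that often helps in the create-new-tab case is to defer the focus call through the Dispatcher, so it runs only after the TabControl has finished building and rendering the new item's visual tree; focusing synchronously right after adding the tab tends to be overridden by the TabControl's own selection handling. A hedged sketch of that idea (it does not address the AutoCompleteBox quirk, which still needs the MoveNext() workaround described above):

        using System;
        using System.Windows;
        using System.Windows.Input;
        using System.Windows.Threading;

        static class FocusHelper
        {
            // Defers Keyboard.Focus until the Loaded dispatcher priority, i.e. after
            // layout and render of the newly added tab content have completed.
            public static void FocusWhenLoaded(UIElement element)
            {
                element.Dispatcher.BeginInvoke(DispatcherPriority.Loaded,
                    new Action(() => Keyboard.Focus(element)));
            }
        }

    The ChildView code-behind would then call FocusHelper.FocusWhenLoaded(autoCompleteBox) instead of calling Keyboard.Focus directly.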

    Read the article

  • GLFW 3 initialized, yet not?

    - by mSkull
    I'm struggling with creating a window with the GLFW 3 function, glfwCreateWindow. I have set an error callback function, that pretty much just prints out the error number and description, and according to that the GLFW library have not been initialized, even though the glfwInit function just returned success? Here's an outtake from my code // Error callback function prints out any errors from GFLW to the console static void error_callback( int error, const char *description ) { cout << error << '\t' << description << endl; } bool Base::Init() { // Set error callback /*! * According to the documentation this can be use before glfwInit, * and removing won't change anything anyway */ glfwSetErrorCallback( error_callback ); // Initialize GLFW /*! * This return succesfull, but... */ if( !glfwInit() ) { cout << "INITIALIZER: Failed to initialize GLFW!" << endl; return false; } else { cout << "INITIALIZER: GLFW Initialized successfully!" << endl; } // Create window /*! * When this is called, or any other glfw functions, I get a * "65537 The GLFW library is not initialized" in the console, through * the error_callback function */ window = glfwCreateWindow( 800, 600, "GLFW Window", NULL, NULL ); if( !window ) { cout << "INITIALIZER: Failed to create window!" << endl; glfwTerminate(); return false; } // Set window to current context glfwMakeContextCurrent( window ); ... return true; } And here's what's printed out in the console INITIALIZER: GLFW Initialized succesfully! 65537 The GLFW library is not initialized INITIALIZER: Failed to create window! I think I'm getting the error because of the setup isn't entirely correct, but I've done the best I can with what I could find around the place I downloaded the windows 32 from glfw.org and stuck the 2 includes files from it into minGW/include/GLFW, the 2 .a files (from the lib-mingw folder) into minGW/lib and the dll, also from the lib-mingw folder, into Windows/System32 In code::blocks I have, from build options - linker settings, linked the 2 .a files from the download. I believe I need to link more things, but I can figure out what, or where I should get those things from.

    Read the article

  • Overlay bitmap on live video

    - by sijith
    Hi i want to Overlay bitmap on live video. Iam trying to do this with the directshow sample. I edited PlayCapMonker sample and added some functions to enable this. i did this with the procedure explained in below link http://www.ureader.com/msg/1471251.aspx Now i am gettting errors Error 2 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int Error 3 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int Error 5 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int Error 6 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int Error 8 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int Error 9 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int Error 21 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int Error 22 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int Error 26 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int Error 27 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int Error 36 error C2228: left of '.m_alpha' must have class/struct/union Error 38 error C2227: left of '-SetAlphaBitmap' must point to class/struct/union/generic type Error 7 error C2146: syntax error : missing ';' before identifier 'Pool' Error 4 error C2146: syntax error : missing ';' before identifier 'Format' c:\Program Files\Microsoft Platform SDK\include\Vmr9.h 368 PlayCapMoniker Error 1 error C2143: syntax error : missing ';' before '' Error 20 error C2143: syntax error : missing ';' before '' Error 25 error C2143: syntax error : missing ';' before '*' Error 30 error C2065: 'g_pMixerBitmap' : undeclared identifier Error 33 error C2065: 'g_pMixerBitmap' : undeclared identifier Error 37 error C2065: 'g_pMixerBitmap' : undeclared identifier Error 31 error C2065: 'g_hbm' : undeclared identifier Error 32 error C2065: 'g_hbm' : undeclared identifier Error 35 error C2065: 'config' : undeclared identifier Error 10 error C2061: syntax error : identifier 'IDirect3DSurface9' Error 11 error C2061: syntax error : identifier 'IDirect3DSurface9' Error 12 error C2061: syntax error : identifier 'IDirect3DSurface9' Error 13 error C2061: syntax error : identifier 'IDirect3DSurface9' Error 16 error C2061: syntax error : identifier 'IDirect3DSurface9' Error 19 error C2061: syntax error : identifier 'IDirect3DSurface9' Error 23 error C2061: syntax error : identifier 'IDirect3DSurface9' Error 24 error C2061: syntax error : identifier 'IDirect3DSurface9' Error 28 error C2061: syntax error : identifier 'IDirect3DSurface9' Error 29 error C2061: syntax error : identifier 'IDirect3DSurface9' Error 14 error C2061: syntax error : identifier 'IDirect3DDevice9' Error 15 error C2061: syntax error : identifier 'IDirect3DDevice9' Error 17 error C2061: syntax error : identifier 'IDirect3DDevice9' Error 18 error C2061: syntax error : identifier 'IDirect3DDevice9' Error 34 error C2039: 'pDDS' : is not a member of '_VMR9AlphaBitmap' SDK\Samples\Multimedia\DirectShow\Capture\PlayCapMoniker\PlayCapMoniker.cpp 263 PlayCapMoniker

    Read the article

  • How to send messages between c++ .dll and C# app using named pipe?

    - by Gal
    I'm making an injected .dll written in C++, and I want to communicate with a C# app using named pipes. Now, I am using the built in System.IO.Pipe .net classes in the C# app, and I'm using the regular functions in C++. I don't have much experience in C++ (Read: This is my first C++ code..), tho I'm experienced in C#. It seems that the connection with the server and the client is working, the only problem is the messaged aren't being send. I tried making the .dll the server, the C# app the server, making the pipe direction InOut (duplex) but none seems to work. When I tried to make the .dll the server, which sends messages to the C# app, the code I used was like this: DWORD ServerCreate() // function to create the server and wait till it successfully creates it to return. { hPipe = CreateNamedPipe(pipename,//The unique pipe name. This string must have the following form: \\.\pipe\pipename PIPE_ACCESS_DUPLEX, PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_NOWAIT, //write and read and return right away PIPE_UNLIMITED_INSTANCES,//The maximum number of instances that can be created for this pipe 4096 , // output time-out 4096 , // input time-out 0,//client time-out NULL); if(hPipe== INVALID_HANDLE_VALUE) { return 1;//failed } else return 0;//success } void SendMsg(string msg) { DWORD cbWritten; WriteFile(hPipe,msg.c_str(), msg.length()+1, &cbWritten,NULL); } void ProccesingPipeInstance() { while(ServerCreate() == 1)//if failed { Sleep(1000); } //if created success, wait to connect ConnectNamedPipe(hPipe, NULL); for(;;) { SendMsg("HI!"); if( ConnectNamedPipe(hPipe, NULL)==0) if(GetLastError()==ERROR_NO_DATA) { DebugPrintA("previous closed,ERROR_NO_DATA"); DisconnectNamedPipe(hPipe); ConnectNamedPipe(hPipe, NULL); } Sleep(1000); } And the C# cliend like this: static void Main(string[] args) { Console.WriteLine("Hello!"); using (var pipe = new NamedPipeClientStream(".", "HyprPipe", PipeDirection.In)) { Console.WriteLine("Created Client!"); Console.Write("Connecting to pipe server in the .dll ..."); pipe.Connect(); Console.WriteLine("DONE!"); using (var pr = new StreamReader(pipe)) { string t; while ((t = pr.ReadLine()) != null) { Console.WriteLine("Message: {0}",t); } } } } I see that the C# client connected to the .dll, but it won't receive any message.. I tried doing it the other way around, as I said before, and trying to make the C# send messages to the .dll, which would show them in a message box. The .dll was injected and connected to the C# server, but when It received a message it just crashed the application it was injected to. Please help me, or guide me on how to use named pipes between C++ and C# app
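    One thing that stands out before any interop question: the C++ side writes "HI!" plus a terminating NUL but no newline, while the C# client reads with StreamReader.ReadLine, which only returns once it sees a line break (or the pipe closes) — so even a working connection would look silent. It can also help to prove the managed half in isolation first with a small C# pipe server; a minimal sketch, reusing the pipe name from the question:

        using System;
        using System.IO;
        using System.IO.Pipes;

        class PipeServerSketch
        {
            static void Main()
            {
                // Plain managed server for the "HyprPipe" pipe. The client from the
                // question should print each line written here; if it does, the C#
                // side is fine and the problem is isolated to the injected DLL.
                using (var server = new NamedPipeServerStream("HyprPipe", PipeDirection.Out))
                {
                    Console.Write("Waiting for client...");
                    server.WaitForConnection();
                    Console.WriteLine(" connected.");

                    using (var writer = new StreamWriter(server) { AutoFlush = true })
                    {
                        for (int i = 0; i < 5; i++)
                            writer.WriteLine("HI! ({0})", i);
                    }
                }
            }
        }

    On the C++ side the equivalent fix is to terminate each message with '\n' (and drop the trailing NUL) before calling WriteFile, or to switch the C# client from ReadLine to raw Read calls.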

    Read the article

  • C# - Alternative to System.Timers.Timer, to call a function at a specific time.

    - by Fábio Antunes
    Hello everybody. I want to call a specific function in my C# application at a specific time. At first I thought about using a Timer (System.Timers.Timer), but that soon became impossible to use. Why? Simple. The Timer class wants an Interval in milliseconds, but consider that I might want the function to be executed, let's say, in a week: 7 days = 168 hours; 168 hours = 10,080 minutes; 10,080 minutes = 604,800 seconds; 604,800 seconds = 604,800,000 milliseconds. That particular value still fits, but the Interval property refuses anything above Int32.MaxValue — 2,147,483,647 milliseconds, roughly 24.8 days (see the EDIT below) — so Timer cannot cover an arbitrary future date. So I need a solution where I can specify when the function should be called. Something like this: solution.ExecuteAt = "30-04-2010 15:10:00"; solution.Function = "functionName"; solution.Start(); So when the system time reaches "30-04-2010 15:10:00" the function would be executed in the application. How can this problem be solved? Thanks just for taking the time to read my question, but if you could provide me with some help I would be most grateful. Additional info: What will these functions do? Getting climate information and, based on that info: starting / shutting down other applications (most of them console based); sending custom commands to those console applications; powering down, rebooting, sleeping or hibernating the computer; and, if possible, scheduling the BIOS to power up the computer. EDIT: It would seem that the Interval property actually accepts a double; however, if you set a value bigger than an int and call Start(), it throws an exception [0, Int32.MaxValue]. EDIT 2: Jørn Schou-Rode suggested using NCron to handle the scheduling tasks, and at first look this seems a good solution, but I would like to hear from someone who has worked with it.
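    One approach that needs nothing beyond the BCL is to compute the remaining delay from DateTime and re-arm a System.Threading.Timer in bounded chunks, so no single due time ever has to approach the Int32.MaxValue ceiling. A minimal sketch of that idea (the target date and the keep-alive Console.ReadLine are only for the demo); a library such as NCron, or the Windows Task Scheduler, is still the more robust route if the machine may reboot in between:

        using System;
        using System.Threading;

        class RunAtSketch
        {
            static Timer _timer; // keep a reference so the timer is not collected

            // Runs "action" when "runAt" is reached, waking at most once a day to
            // re-check, so no single due time comes anywhere near Int32.MaxValue.
            static void ScheduleAt(DateTime runAt, Action action)
            {
                TimeSpan remaining = runAt - DateTime.Now;
                if (remaining <= TimeSpan.Zero)
                {
                    action();
                    return;
                }

                TimeSpan chunk = remaining > TimeSpan.FromDays(1) ? TimeSpan.FromDays(1) : remaining;
                _timer = new Timer(_ => ScheduleAt(runAt, action), null,
                                   (long)chunk.TotalMilliseconds, Timeout.Infinite);
            }

            static void Main()
            {
                ScheduleAt(new DateTime(2010, 4, 30, 15, 10, 0),
                           () => Console.WriteLine("It is time."));
                Console.ReadLine(); // keep the process alive for the demo
            }
        }

    Re-checking the wall clock on every wake-up also keeps the schedule roughly correct across daylight-saving changes or a machine that was asleep for part of the wait.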

    Read the article

  • Cannot get UISearchBar Scope Bar to appear in Toolbar on iPad

    - by Jann
    This is really causing me fits. I put a toolbar on the IUView on the iPad. I added the following: Search Bar (not Search Bar and Search Display) to the toolbar. I set the options to be as follows: Show Cancel Button, Show Scope Bar, Scope Button Titles are: "Title1" and "Title2" (with Title2's radio button selected). Opaque, Clear Context and Auto Resize are checked. I hooked up the delegate of Search Bar to the "File's Owner" and linked it to IBOutlet theSearchBar. In my viewWillAppear I have the following: [theSearchBar setScopeButtonTitles:[NSArray arrayWithObjects:@"Near Me",@"Everywhere",nil]]; //Just in case: [theSearchBar setShowsScopeBar:YES]; //doesn't seem to do anything: //[theSearchBar sizeToFit]; searchDisplayController = [[UISearchDisplayController alloc] initWithSearchBar:theSearchBar contentsController:self]; [self setSearchDisplayController:searchDisplayController]; [searchDisplayController setDelegate:self]; [searchDisplayController setSearchResultsDataSource:self]; //again--does not seem to do anything..but people have suggested it: [theSearchBar sizeToFit]; Okay, so far, I thought, so good. So, I made the File's Owner .m file to be a delegate for: UISearchBarDelegate, UISearchDisplayDelegate. My issue: I have yet to implement the delegates necessary to do the search but still... shouldn't I be seeing the scopeBar next to the search field when I click into the search field? Just so you know I DO see the log of the characters I type, so the delegate is working. I have the following dummy functions in the .m file (just in case) // called when keyboard search button pressed - (void)searchBarSearchButtonClicked:(UISearchBar *)searchBar { NSLog(@"Search Button Clicked\n"); [theSearchBar resignFirstResponder]; } // called when cancel button pressed - (void)searchBarCancelButtonClicked:(UISearchBar *)searchBar { NSLog(@"Cancel Button Clicked\n"); [theSearchBar resignFirstResponder]; } - (void)searchBar:(UISearchBar *)searchBar textDidChange:(NSString *)searchText { NSLog(@"Search Text So Far: '%@'\n",searchText); } - (BOOL)searchBarShouldBeginEditing:(UISearchBar *)searchBar { return YES; } - (BOOL)searchBarShouldEndEditing:(UISearchBar *)searchBar { return YES; } Why doesn't the Scope Bar appear? A results UIPopoverController appears with the title "Results" and "No results found" (of course) when i type the first character in my search...but no scope bar. (not that i expect anything other than "No Results Found". I am wondering where the scope bar is supposed to appear...in the titleView of the UIPopover? In the toolbar to the right of the search area? Where?

    Read the article

  • WIX installer with Custom Actions: "built by a runtime newer than the currently loaded runtime and cannot be loaded."

    - by Rimer
    I have a WIX installer that executes Custom Actions over the course of install. When I run the WIX installer, and it encounters its first Custom Action, the installer fails out, and I receive an error in the MSI log as follows: Action start 12:03:53: LoadBCAConfigDefaults. SFXCA: Extracting custom action to temporary directory: C:\DOCUME~1\ELOY06~1\LOCALS~1\Temp\MSI10C.tmp-\ SFXCA: Binding to CLR version v2.0.50727 Calling custom action WIXCustomActions!WIXCustomActions.CustomActions.LoadBCAConfigDefaults Error: could not load custom action class WIXCustomActions.CustomActions from assembly: WIXCustomActions System.BadImageFormatException: Could not load file or assembly 'WIXCustomActions' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded. File name: 'WIXCustomActions' at System.Reflection.Assembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) at System.Reflection.Assembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) at System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) at System.Reflection.Assembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) at System.AppDomain.Load(String assemblyString) at Microsoft.Deployment.WindowsInstaller.CustomActionProxy.GetCustomActionMethod(Session session, String assemblyName, String className, String methodName) ... the specific problem from above is "System.BadImageFormatException: Could not load file or assembly 'WIXCustomActions' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded." The wordage of that error seems to indicate something like an incorrectly referenced .NET framework or something (I'm targeting 3.5 in both my custom actions and its dependencies), but I can't figure out where to make a change to address this problem. Any ideas? .... 
Not sure if this will help but it's the CustomActions package batch file I run to create the .dll package containing the custom action functions: =============== call "C:\Program Files\Microsoft Visual Studio 10.0\VC\vcvarsall.bat" @echo on cd "C:\development\trunk\PortalsDev\csharp\production\Installers\WIX\customactions\PAServicesWIXCustomActions" csc /target:library /r:"C:\program files\windows installer xml v3.6\sdk\microsoft.deployment.windowsinstaller.dll" /r:"C:\development\trunk\PortalsDev\csharp\production\Installers\WIX\customactions\PAServicesWIXCustomActions\bin\Debug\eLoyalty.PortalLib.dll" /out:"C:\development\trunk\PortalsDev\csharp\production\Installers\WIX\customactions\PAServicesWIXCustomActions\bin\Debug\WIXCustomActions.dll" CustomActions.cs cd "C:\Program Files\Windows Installer XML v3.6\SDK" makesfxca "C:\development\trunk\PortalsDev\csharp\production\Installers\WIX\customactions\PAServicesWIXCustomActions\bin\Debug\BatchCustomerAnalysisWIXCustomActionsPackage.dll" "c:\program files\windows installer xml v3.6\sdk\x86\sfxca.dll" "C:\development\trunk\PortalsDev\csharp\production\Installers\WIX\customactions\PAServicesWIXCustomActions\bin\Debug\WIXCustomActions.dll" customaction.config Microsoft.Deployment.WindowsInstaller.dll

    Read the article

  • Selenium Webdriver (FirefoxDriver)

    - by Samareri
    Hey all, I'm using Selenium Webdriver to do some robottesting. Since some functions appear to only work in Firefox, I'm obligated to use Firefoxdriver. Now and then, something weird happens. Starting up te driver driver = new FirefoxDriver(); driver.get(URL); gets firefox to startup but not to go to the specified url. The strange thing is that it works on another computer with the same preferences set in Firefox. I solved this problem once by changing to another version of firefox, but this time this doesn't do the trick for me, it did however worked for the other developers. Yes, the error started for all developers on the same time, same day... My first question is: is it a firefox problem or Webdriver problem. Second question: how is it possible that it works on other pc's? Any help would be very appreciated Thanks Error: org.openqa.selenium.WebDriverException: Could not parse "". System info: os.name: 'Windows XP', os.arch: 'x86', os.version: '5.1', java.version: '1.6.0_18' Driver info: driver.version: firefox at org.openqa.selenium.firefox.Response.<init>(Response.java:53) at org.openqa.selenium.firefox.internal.AbstractExtensionConnection.nextResponse(AbstractExtensionConnection.java:258) at org.openqa.selenium.firefox.internal.AbstractExtensionConnection.readLoop(AbstractExtensionConnection.java:220) at org.openqa.selenium.firefox.internal.AbstractExtensionConnection.waitForResponseFor(AbstractExtensionConnection.java:213) at org.openqa.selenium.firefox.internal.AbstractExtensionConnection.sendMessageAndWaitForResponse(AbstractExtensionConnection.java:162) at org.openqa.selenium.firefox.FirefoxDriver.executeCommand(FirefoxDriver.java:329) at org.openqa.selenium.firefox.FirefoxDriver.sendMessage(FirefoxDriver.java:312) at org.openqa.selenium.firefox.FirefoxDriver.sendMessage(FirefoxDriver.java:308) at org.openqa.selenium.firefox.FirefoxDriver.fixId(FirefoxDriver.java:350) at org.openqa.selenium.firefox.FirefoxDriver.<init>(FirefoxDriver.java:130) at org.openqa.selenium.firefox.FirefoxDriver.<init>(FirefoxDriver.java:109) at be.....MMCRobotTest.login(MMCRobotTest.java:98) at be.....MMCRobotTestAttribute.testNewAttribute(MMCRobotTestAttribute.java:12) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at junit.framework.TestCase.runTest(TestCase.java:164) at junit.framework.TestCase.runBare(TestCase.java:130) at junit.framework.TestResult$1.protect(TestResult.java:106) at junit.framework.TestResult.runProtected(TestResult.java:124) at junit.framework.TestResult.run(TestResult.java:109) at junit.framework.TestCase.run(TestCase.java:120) at junit.framework.TestSuite.runTest(TestSuite.java:230) at junit.framework.TestSuite.run(TestSuite.java:225) at org.eclipse.jdt.internal.junit.runner.junit3.JUnit3TestReference.run(JUnit3TestReference.java:130) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197) Caused by: org.json.JSONException: A JSONObject text must begin with '{' at character 0 at 
org.json.JSONTokener.syntaxError(JSONTokener.java:496) at org.json.JSONObject.<init>(JSONObject.java:180) at org.json.JSONObject.<init>(JSONObject.java:403) at org.openqa.selenium.firefox.Response.<init>(Response.java:41) ... 30 more

    Read the article

  • WebLogic job scheduling

    - by XpiritO
    Hello, overflowers :) I'm trying to implement a WebLogic job scheduling example, to test my cluster capabilities of fail-over on scheduled tasks (to ensure that these tasks are executed on fail over scenario). With this in mind, I've been following this example and trying to configure everything accordingly. Here are the steps I've done so far: Configured a cluster with 1 admin server (AdminServer) and 2 managed instances (Noddy and Snoopy); Set up database tables (using Oracle XE): ACTIVE and WEBLOGIC_TIMERS; Set up data source to access DB and associated it to the scheduling tasks under "Settings for cluster" "Scheduling"; Implemented a job (TimerListener) and a servlet to initialize the job scheduling, as follows: . package timedexecution; import java.io.IOException; import java.io.PrintWriter; import java.io.Serializable; import java.text.SimpleDateFormat; import java.util.Date; import javax.naming.InitialContext; import javax.naming.NamingException; import javax.servlet.ServletException; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import commonj.timers.Timer; import commonj.timers.TimerListener; import commonj.timers.TimerManager; public class TimerServlet extends HttpServlet { private static final long serialVersionUID = 1L; protected static void logMessage(String message, PrintWriter out){ out.write("<p>"+ message +"</p>"); System.out.println(message); } @Override public void service(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { PrintWriter out = response.getWriter(); // out.println("<html>"); out.println("<head><title>TimerServlet</title></head>"); // try { // logMessage("service() entering try block to intialize the timer from JNDI", out); // InitialContext ic = new InitialContext(); TimerManager jobScheduler = (TimerManager)ic.lookup("weblogic.JobScheduler"); // logMessage("jobScheduler reference " + jobScheduler, out); // jobScheduler.schedule(new ExampleTimerListener(), 0, 30*1000); // logMessage("Timer scheduled!", out); // //execute this job every 30 seconds logMessage("service() started the timer", out); // logMessage("Started the timer - status:", out); // } catch (NamingException ne) { String msg = ne.getMessage(); logMessage("Timer schedule failed!", out); logMessage(msg, out); } catch (Throwable t) { logMessage("service() error initializing timer manager with JNDI name weblogic.JobScheduler " + t,out); } // out.println("</body></html>"); out.close(); } private static class ExampleTimerListener implements Serializable, TimerListener { private static final long serialVersionUID = 8313912206357147939L; public void timerExpired(Timer timer) { SimpleDateFormat sdf = new SimpleDateFormat(); System.out.println( "timerExpired() called at " + sdf.format( new Date() ) ); } } } Then I executed the servlet to start the scheduling on the first managed instance (Noddy server), which returned as expected: (Servlet execution output) service() entering try block to intialize the timer from JNDI jobScheduler reference weblogic.scheduler.TimerServiceImpl@43b4c7 Timer scheduled! 
service() started the timer Started the timer - status: Which resulted in the creation of 2 rows in my DB tables: WEBLOGIC_TIMERS table state after servlet execution: "EDIT"; "TIMER_ID"; "LISTENER"; "START_TIME"; "INTERVAL"; "TIMER_MANAGER_NAME"; "DOMAIN_NAME"; "CLUSTER_NAME"; ""; "Noddy_1268653040156"; "[datatype]"; "1268653040156"; "30000"; "weblogic.JobScheduler"; "myCluster"; "Cluster" ACTIVE table state after servlet execution: "EDIT"; "SERVER"; "INSTANCE"; "DOMAINNAME"; "CLUSTERNAME"; "TIMEOUT"; ""; "service.SINGLETON_MASTER"; "6382071947583985002/Noddy"; "QRENcluster"; "Cluster"; "10.03.15" Although, the job is not executed as scheduled. It should print a message on the server's log output (Noddy.out file) with a timestamp, saying that the timer had expired. It doesn't. My log files state as follows: Admin server log (myCluster.log file): ####<15/Mar/2010 10H45m GMT> <Warning> <Cluster> <test-ad> <Noddy> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1268649925727> <BEA-000192> <No currently living server was found that could host TimerMaster. The server will retry in a few seconds.> Noddy server log (Noddy.out file): service() entering try block to intialize the timer from JNDI jobScheduler reference weblogic.scheduler.TimerServiceImpl@43b4c7 Timer scheduled! service() started the timer Started the timer - status: <15/Mar/2010 10H45m GMT> <Warning> <Cluster> <BEA-000192> <No currently living server was found that could host TimerMaster. The server will retry in a few seconds.> (Noddy.log file): ####<15/Mar/2010 11H24m GMT> <Info> <Common> <test-ad> <Noddy> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1268652270128> <BEA-000628> <Created "1" resources for pool "TxDataSourceOracle", out of which "1" are available and "0" are unavailable.> ####<15/Mar/2010 11H37m GMT> <Info> <Cluster> <test-ad> <Noddy> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1268653040226> <BEA-000182> <Job Scheduler created a job with ID Noddy_1268653040156 for TimerListener with description timedexecution.TimerServlet$ExampleTimerListener@2ce79a> ####<15/Mar/2010 11H39m GMT> <Info> <JDBC> <test-ad> <Noddy> <[ACTIVE] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1268653166307> <BEA-001128> <Connection for pool "TxDataSourceOracle" closed.> Can anyone help me out discovering what's wrong with my configuration? Thanks in advance for your help!

    Read the article

  • OpenLDAP and SSL

    - by Stormshadow
    I am having trouble trying to connect to a secure OpenLDAP server which I have set up. On running my LDAP client code java -Djavax.net.debug=ssl LDAPConnector I get the following exception trace (java version 1.6.0_17) trigger seeding of SecureRandom done seeding SecureRandom %% No cached client session *** ClientHello, TLSv1 RandomCookie: GMT: 1256110124 bytes = { 224, 19, 193, 148, 45, 205, 108, 37, 101, 247, 112, 24, 157, 39, 111, 177, 43, 53, 206, 224, 68, 165, 55, 185, 54, 203, 43, 91 } Session ID: {} Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_W ITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SH A, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA] Compression Methods: { 0 } *** Thread-0, WRITE: TLSv1 Handshake, length = 73 Thread-0, WRITE: SSLv2 client hello message, length = 98 Thread-0, received EOFException: error Thread-0, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake Thread-0, SEND TLSv1 ALERT: fatal, description = handshake_failure Thread-0, WRITE: TLSv1 Alert, length = 2 Thread-0, called closeSocket() main, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake javax.naming.CommunicationException: simple bind failed: ldap.natraj.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: Remote host closed connection during hands hake] at com.sun.jndi.ldap.LdapClient.authenticate(Unknown Source) at com.sun.jndi.ldap.LdapCtx.connect(Unknown Source) at com.sun.jndi.ldap.LdapCtx.<init>(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(Unknown Source) at javax.naming.spi.NamingManager.getInitialContext(Unknown Source) at javax.naming.InitialContext.getDefaultInitCtx(Unknown Source) at javax.naming.InitialContext.init(Unknown Source) at javax.naming.InitialContext.<init>(Unknown Source) at javax.naming.directory.InitialDirContext.<init>(Unknown Source) at LDAPConnector.CallSecureLDAPServer(LDAPConnector.java:43) at LDAPConnector.main(LDAPConnector.java:237) Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.AppInputStream.read(Unknown Source) at java.io.BufferedInputStream.fill(Unknown Source) at java.io.BufferedInputStream.read1(Unknown Source) at java.io.BufferedInputStream.read(Unknown Source) at com.sun.jndi.ldap.Connection.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: java.io.EOFException: SSL peer shut down incorrectly at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source) ... 
9 more I am able to connect to the same secure LDAP server however if I use another version of java (1.6.0_14) I have created and installed the server certificates in the cacerts of both the JRE's as mentioned in this guide -- OpenLDAP with SSL When I run ldapsearch -x on the server I get # extended LDIF # # LDAPv3 # base <dc=localdomain> (default) with scope subtree # filter: (objectclass=*) # requesting: ALL # # localdomain dn: dc=localdomain objectClass: top objectClass: dcObject objectClass: organization o: localdomain dc: localdomain # admin, localdomain dn: cn=admin,dc=localdomain objectClass: simpleSecurityObject objectClass: organizationalRole cn: admin description: LDAP administrator # search result search: 2 result: 0 Success # numResponses: 3 # numEntries: 2 On running openssl s_client -connect ldap.natraj.com:636 -showcerts , I obtain the self signed certificate. My slapd.conf file is as follows ####################################################################### # Global Directives: # Features to permit #allow bind_v2 # Schema and objectClass definitions include /etc/ldap/schema/core.schema include /etc/ldap/schema/cosine.schema include /etc/ldap/schema/nis.schema include /etc/ldap/schema/inetorgperson.schema # Where the pid file is put. The init.d script # will not stop the server if you change this. pidfile /var/run/slapd/slapd.pid # List of arguments that were passed to the server argsfile /var/run/slapd/slapd.args # Read slapd.conf(5) for possible values loglevel none # Where the dynamically loaded modules are stored modulepath /usr/lib/ldap moduleload back_hdb # The maximum number of entries that is returned for a search operation sizelimit 500 # The tool-threads parameter sets the actual amount of cpu's that is used # for indexing. tool-threads 1 ####################################################################### # Specific Backend Directives for hdb: # Backend specific directives apply to this backend until another # 'backend' directive occurs backend hdb ####################################################################### # Specific Backend Directives for 'other': # Backend specific directives apply to this backend until another # 'backend' directive occurs #backend <other> ####################################################################### # Specific Directives for database #1, of type hdb: # Database specific directives apply to this databasse until another # 'database' directive occurs database hdb # The base of your directory in database #1 suffix "dc=localdomain" # rootdn directive for specifying a superuser on the database. This is needed # for syncrepl. rootdn "cn=admin,dc=localdomain" # Where the database file are physically stored for database #1 directory "/var/lib/ldap" # The dbconfig settings are used to generate a DB_CONFIG file the first # time slapd starts. They do NOT override existing an existing DB_CONFIG # file. You should therefore change these settings in DB_CONFIG directly # or remove DB_CONFIG and restart slapd for changes to take effect. # For the Debian package we use 2MB as default but be sure to update this # value if you have plenty of RAM dbconfig set_cachesize 0 2097152 0 # Sven Hartge reported that he had to set this value incredibly high # to get slapd running at all. See http://bugs.debian.org/303057 for more # information. # Number of objects that can be locked at the same time. 
dbconfig set_lk_max_objects 1500 # Number of locks (both requested and granted) dbconfig set_lk_max_locks 1500 # Number of lockers dbconfig set_lk_max_lockers 1500 # Indexing options for database #1 index objectClass eq # Save the time that the entry gets modified, for database #1 lastmod on # Checkpoint the BerkeleyDB database periodically in case of system # failure and to speed slapd shutdown. checkpoint 512 30 # Where to store the replica logs for database #1 # replogfile /var/lib/ldap/replog # The userPassword by default can be changed # by the entry owning it if they are authenticated. # Others should not be able to see it, except the # admin entry below # These access lines apply to database #1 only access to attrs=userPassword,shadowLastChange by dn="cn=admin,dc=localdomain" write by anonymous auth by self write by * none # Ensure read access to the base for things like # supportedSASLMechanisms. Without this you may # have problems with SASL not knowing what # mechanisms are available and the like. # Note that this is covered by the 'access to *' # ACL below too but if you change that as people # are wont to do you'll still need this if you # want SASL (and possible other things) to work # happily. access to dn.base="" by * read # The admin dn has full write access, everyone else # can read everything. access to * by dn="cn=admin,dc=localdomain" write by * read # For Netscape Roaming support, each user gets a roaming # profile for which they have write access to #access to dn=".*,ou=Roaming,o=morsnet" # by dn="cn=admin,dc=localdomain" write # by dnattr=owner write ####################################################################### # Specific Directives for database #2, of type 'other' (can be hdb too): # Database specific directives apply to this databasse until another # 'database' directive occurs #database <other> # The base of your directory for database #2 #suffix "dc=debian,dc=org" ####################################################################### # SSL: # Uncomment the following lines to enable SSL and use the default # snakeoil certificates. #TLSCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem #TLSCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key TLSCipherSuite TLS_RSA_AES_256_CBC_SHA TLSCACertificateFile /etc/ldap/ssl/server.pem TLSCertificateFile /etc/ldap/ssl/server.pem TLSCertificateKeyFile /etc/ldap/ssl/server.pem My ldap.conf file is # # LDAP Defaults # # See ldap.conf(5) for details # This file should be world readable but not world writable. HOST ldap.natraj.com PORT 636 BASE dc=localdomain URI ldaps://ldap.natraj.com TLS_CACERT /etc/ldap/ssl/server.pem TLS_REQCERT allow #SIZELIMIT 12 #TIMELIMIT 15 #DEREF never

    Read the article

  • Issues with MIPS interrupt for tv remote simulator

    - by pred2040
    Hello I am writing a program for class to simulate a tv remote in a MIPS/SPIM enviroment. The functions of the program itself are unimportant as they worked fine before the interrupt so I left them all out. The gaol is basically to get a input from the keyboard by means of interupt, store it in $s7 and process it. The interrupt is causing my program to repeatedly spam the errors: Exception occurred at PC=0x00400068 Bad address in data/stack read: 0x00000004 Exception occurred at PC=0x00400358 Bad address in data/stack read: 0x00000000 program starts here .data msg_tvworking: .asciiz "tv is working\n" msg_sec: .asciiz "sec -- " msg_on: .asciiz "Power On" msg_off: .asciiz "Power Off" msg_channel: .asciiz " Channel " msg_volume: .asciiz " Volume " msg_sleep: .asciiz " Sleep Timer: " msg_dash: .asciiz "-\n" msg_newline: .asciiz "\n" msg_comma: .asciiz ", " array1: .space 400 # 400 bytes of storage for 100 channels array2: .space 400 # copy of above for sorting var1: .word 0 # 1 if 0-9 is pressed, 0 if not var2: .word 0 # stores number of channel (ex. 2-) var3: .word 0 # channel timer var4: .word 0 # 1 if s pressed once, 2 if twice, 0 if not var5: .word 0 # sleep wait timer var6: .word 0 # program timer var9: .float 0.01 # for channel timings .kdata var7: .word 10 var8: .word 11 .text .globl main main: li $s0, 300 li $s1, 0 # channel li $s2, 50 # volume li $s3, 1 # power - 1:on 0:off li $s4, 0 # sleep timer - 0:off li $s5, 0 # temporary li $s6, 0 # length of sleep period li $s7, 10000 # current key press li $t2, 0 # temp value not needed across calls li $t4, 0 interrupt data here mfc0 $a0, $12 ori $a0, 0xff11 mtc0 $a0, $12 lui $t0, 0xFFFF ori $a0, $0, 2 sw $a0, 0($t0) mainloop: # 1. get external input, and process it # input from interupt is taken from $a2 and placed in $s7 #for processing beq $a2, $0, next lw $s7, 4($a2) li $a2, 0 # call the process_input function here # jal process_input next: # 2. check sleep timer mainloopnext1: # 3. delay for 10ms jal delay_10ms jal check_timers jal channel_time # 4. 
print status lw $s5, var6 addi $s5, $s5, 1 sw $s5, var6 addi $s0, $s0, -1 bne $s0, $0, mainloopnext4 li $s0, 300 jal status_print mainloopnext4: j mainloop li $v0,10 # exit syscall -------------------------------------------------- status_print: seconds_stat: power_stat: on_stat: off_stat: channel_stat: volume_stat: sleep_stat: j $ra -------------------------------------------------- delay_10ms: li $t0, 6000 delay_10ms_loop: addi $t0, $t0, -1 bne $t0, $0, delay_10ms_loop jr $ra -------------------------------------------------- check_timers: channel_press: sleep_press: go_back_press: channel_check: channel_ignore: sleep_check: sleep_ignore: j $ra ------------------------------------------------ process_input: beq $s7, 112, power beq $s7, 117, channel_up beq $s7, 100, channel_down beq $s7, 108, volume_up beq $s7, 107, volume_down beq $s7, 115, sleep_init beq $s7, 118, history bgt $s7, 47, end_range jr $ra end_range: power: on: off: channel_up: over: channel_down: under: channel_message: channel_time: volume_up: volume_down: volume_message: sleep_init: sleep_incr: sleep: sleep_reset: history: digit_pad_init: digit_pad: jr $ra -------------------------------------------- interupt data here, followed closely from class .ktext 0x80000180 .set noat move $k1, $at .set at sw $v0, var7 sw $a0, var8 mfc0 $k0, $13 srl $a0, $k0, 2 andi $a0, $a0, 0x1f bne $a0, $zero, no_io lui $v0, 0xFFFF lw $a2, 4($v0) # keyboard data placed in $a2 no_io: mtc0 $0, $13 mfc0 $k0, $12 andi $k0, 0xfffd ori $k0, 0x11 mtc0 $k0, $12 lw $v0, var7 lw $a0, var8 .set noat move $at, $k1 .set at eret Thanks in advance.

    Read the article
