Search Results

Search found 8028 results on 322 pages for 'strategy pattern'.


  • Null-free "maps": Is a callback solution slower than tryGet()?

    - by David Moles
    In comments to "How to implement List, Set, and Map in null free design?", Steven Sudit and I got into a discussion about using a callback, with handlers for "found" and "not found" situations, vs. a tryGet() method, taking an out parameter and returning a boolean indicating whether the out parameter had been populated. Steven maintained that the callback approach was more complex and almost certain to be slower; I maintained that the complexity was no greater and the performance at worst the same. But code speaks louder than words, so I thought I'd implement both and see what I got. The original question was fairly theoretical with regard to language ("And for argument sake, let's say this language don't even have null") -- I've used Java here because that's what I've got handy. Java doesn't have out parameters, but it doesn't have first-class functions either, so style-wise, it should suck equally for both approaches. (Digression: As far as complexity goes: I like the callback design because it inherently forces the user of the API to handle both cases, whereas the tryGet() design requires callers to perform their own boilerplate conditional check, which they could forget or get wrong. But having now implemented both, I can see why the tryGet() design looks simpler, at least in the short term.) First, the callback example: class CallbackMap<K, V> { private final Map<K, V> backingMap; public CallbackMap(Map<K, V> backingMap) { this.backingMap = backingMap; } void lookup(K key, Callback<K, V> handler) { V val = backingMap.get(key); if (val == null) { handler.handleMissing(key); } else { handler.handleFound(key, val); } } } interface Callback<K, V> { void handleFound(K key, V value); void handleMissing(K key); } class CallbackExample { private final Map<String, String> map; private final List<String> found; private final List<String> missing; private Callback<String, String> handler; public CallbackExample(Map<String, String> map) { this.map = map; found = new ArrayList<String>(map.size()); missing = new ArrayList<String>(map.size()); handler = new Callback<String, String>() { public void handleFound(String key, String value) { found.add(key + ": " + value); } public void handleMissing(String key) { missing.add(key); } }; } void test() { CallbackMap<String, String> cbMap = new CallbackMap<String, String>(map); for (int i = 0, count = map.size(); i < count; i++) { String key = "key" + i; cbMap.lookup(key, handler); } System.out.println(found.size() + " found"); System.out.println(missing.size() + " missing"); } } Now, the tryGet() example -- as best I understand the pattern (and I might well be wrong): class TryGetMap<K, V> { private final Map<K, V> backingMap; public TryGetMap(Map<K, V> backingMap) { this.backingMap = backingMap; } boolean tryGet(K key, OutParameter<V> valueParam) { V val = backingMap.get(key); if (val == null) { return false; } valueParam.value = val; return true; } } class OutParameter<V> { V value; } class TryGetExample { private final Map<String, String> map; private final List<String> found; private final List<String> missing; public TryGetExample(Map<String, String> map) { this.map = map; found = new ArrayList<String>(map.size()); missing = new ArrayList<String>(map.size()); } void test() { TryGetMap<String, String> tgMap = new TryGetMap<String, String>(map); for (int i = 0, count = map.size(); i < count; i++) { String key = "key" + i; OutParameter<String> out = new OutParameter<String>(); if (tgMap.tryGet(key, out)) { found.add(key + ": " + out.value); } else { 
missing.add(key); } } System.out.println(found.size() + " found"); System.out.println(missing.size() + " missing"); } } And finally, the performance test code: public static void main(String[] args) { int size = 200000; Map<String, String> map = new HashMap<String, String>(); for (int i = 0; i < size; i++) { String val = (i % 5 == 0) ? null : "value" + i; map.put("key" + i, val); } long totalCallback = 0; long totalTryGet = 0; int iterations = 20; for (int i = 0; i < iterations; i++) { { TryGetExample tryGet = new TryGetExample(map); long tryGetStart = System.currentTimeMillis(); tryGet.test(); totalTryGet += (System.currentTimeMillis() - tryGetStart); } System.gc(); { CallbackExample callback = new CallbackExample(map); long callbackStart = System.currentTimeMillis(); callback.test(); totalCallback += (System.currentTimeMillis() - callbackStart); } System.gc(); } System.out.println("Avg. callback: " + (totalCallback / iterations)); System.out.println("Avg. tryGet(): " + (totalTryGet / iterations)); } On my first attempt, I got 50% worse performance for callback than for tryGet(), which really surprised me. But, on a hunch, I added some garbage collection, and the performance penalty vanished. This fits with my instinct, which is that we're basically talking about taking the same number of method calls, conditional checks, etc. and rearranging them. But then, I wrote the code, so I might well have written a suboptimal or subconsicously penalized tryGet() implementation. Thoughts?
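    For later readers: on Java 8 and newer, the same null-free contract can be expressed with java.util.Optional, which bakes the "caller must handle absence" guarantee into the return type without either a callback interface or a simulated out parameter. A minimal illustrative sketch, not part of the original post (OptionalMap is an invented name; ifPresentOrElse requires Java 9):

        import java.util.HashMap;
        import java.util.Map;
        import java.util.Optional;

        class OptionalMap<K, V> {
            private final Map<K, V> backingMap;

            OptionalMap(Map<K, V> backingMap) {
                this.backingMap = backingMap;
            }

            // Absence is encoded in the return type, so callers cannot forget to handle it.
            Optional<V> lookup(K key) {
                return Optional.ofNullable(backingMap.get(key));
            }

            public static void main(String[] args) {
                Map<String, String> m = new HashMap<>();
                m.put("key1", "value1");
                OptionalMap<String, String> om = new OptionalMap<>(m);
                // ifPresentOrElse plays the role of the two callback handlers.
                om.lookup("key1").ifPresentOrElse(
                        v -> System.out.println("found: " + v),
                        () -> System.out.println("missing"));
                om.lookup("key2").ifPresentOrElse(
                        v -> System.out.println("found: " + v),
                        () -> System.out.println("missing"));
            }
        }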


  • On screen orientation change, AsyncTask loads the data again

    - by Zookey
    I'm building an Android application with the master/detail pattern, so I have a ListActivity class, which is a FragmentActivity, and a ListFragment class, which is a Fragment. It all works perfectly, but when I change the screen orientation the AsyncTask is called again and reloads all the data. Here is the code for the ListActivity class, where I handle all the logic:

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_list);
            getActionBar().setDisplayHomeAsUpEnabled(true);
            getActionBar().setHomeButtonEnabled(true);
            getActionBar().setTitle("Dnevni horoskop");

            if (findViewById(R.id.details_container) != null) {
                // Tablet
                mTwoPane = true;
                // Fragment stuff
                FragmentManager fm = getSupportFragmentManager();
                FragmentTransaction ft = fm.beginTransaction();
                DetailsFragment df = new DetailsFragment();
                ft.add(R.id.details_container, df);
                ft.commit();
            }

            pb = (ProgressBar) findViewById(R.id.pb_list);
            tvNoConnection = (TextView) findViewById(R.id.tv_no_internet);
            ivNoConnection = (ImageView) findViewById(R.id.iv_no_connection);
            list = (GridView) findViewById(R.id.gv_list);

            if (mTwoPane == true) {
                list.setNumColumns(1);
                //list.setPadding(16,16,16,16);
            }

            adapter = new CustomListAdapter();

            list.setOnItemClickListener(new OnItemClickListener() {
                @Override
                public void onItemClick(AdapterView<?> arg0, View arg1, int position, long arg3) {
                    pos = position;
                    if (mTwoPane == false) {
                        Bundle bundle = new Bundle();
                        bundle.putSerializable("zodiac", zodiacFeed);
                        Intent i = new Intent(getApplicationContext(), DetailsActivity.class);
                        i.putExtra("position", position);
                        i.putExtras(bundle);
                        startActivity(i);
                        overridePendingTransition(R.anim.right_in, R.anim.right_out);
                    } else if (mTwoPane == true) {
                        DetailsFragment fragment = (DetailsFragment)
                                getSupportFragmentManager().findFragmentById(R.id.details_container);
                        fragment.setHoroscopeText(zodiacFeed.getItem(position).getText());
                        fragment.setLargeImage(zodiacFeed.getItem(position).getLargeImage());
                        fragment.setSign("Dnevni horoskop - " + zodiacFeed.getItem(position).getName());
                        fragment.setSignDuration(zodiacFeed.getItem(position).getDuration());
                        // inflate menu from xml
                        /*if (menu != null) {
                            MenuItem item = menu.findItem(R.id.share);
                            Toast.makeText(getApplicationContext(), item.getTitle().toString(),
                                    Toast.LENGTH_SHORT).show();
                        }*/
                    }
                }
            });

            if (!Utils.isConnected(getApplicationContext())) {
                pb.setVisibility(View.GONE);
                tvNoConnection.setVisibility(View.VISIBLE);
                ivNoConnection.setVisibility(View.VISIBLE);
            }

            // Calling AsyncTask to load data
            Log.d("TAG", "loading");
            HoroscopeAsyncTask task = new HoroscopeAsyncTask(pb);
            task.execute();
        }

        @Override
        public void onConfigurationChanged(Configuration newConfig) {
            super.onConfigurationChanged(newConfig);
        }

        class CustomListAdapter extends BaseAdapter {
            private LayoutInflater layoutInflater;

            public CustomListAdapter() {
                layoutInflater = (LayoutInflater) getBaseContext()
                        .getSystemService(Context.LAYOUT_INFLATER_SERVICE);
            }

            public int getCount() {
                // Set the total list item count
                return names.length;
            }

            public Object getItem(int arg0) { return null; }

            public long getItemId(int arg0) { return 0; }

            public View getView(int position, View convertView, ViewGroup parent) {
                // Inflate the item layout and set the views
                View listItem = convertView;
                int pos = position;
                zodiacItem = zodiacList.get(pos);
                if (listItem == null && mTwoPane == false) {
                    listItem = layoutInflater.inflate(R.layout.list_item, null);
                } else if (mTwoPane == true) {
                    listItem = layoutInflater.inflate(R.layout.tablet_list_item, null);
                }
                // Initialize the views in the layout
                ImageView iv = (ImageView) listItem.findViewById(R.id.iv_horoscope);
                iv.setScaleType(ScaleType.CENTER_CROP);
                TextView tvName = (TextView) listItem.findViewById(R.id.tv_zodiac_name);
                TextView tvDuration = (TextView) listItem.findViewById(R.id.tv_duration);
                iv.setImageResource(zodiacItem.getImage());
                tvName.setText(zodiacItem.getName());
                tvDuration.setText(zodiacItem.getDuration());
                Animation animation = AnimationUtils.loadAnimation(getBaseContext(), R.anim.push_up);
                listItem.startAnimation(animation);
                animation = null;
                return listItem;
            }
        }

        private void getHoroscope() {
            String urlString = "http://balkanandroid.com/download/horoskop/examples/dnevnihoroskop.php";
            try {
                HttpClient client = new DefaultHttpClient();
                HttpPost post = new HttpPost(urlString);
                HttpResponse response = client.execute(post);
                resEntity = response.getEntity();
                response_str = EntityUtils.toString(resEntity);
                if (resEntity != null) {
                    Log.i("RESPONSE", response_str);
                    runOnUiThread(new Runnable() {
                        public void run() {
                            try {
                                Log.d("TAG", "Response from server : n " + response_str);
                            } catch (Exception e) {
                                e.printStackTrace();
                            }
                        }
                    });
                }
            } catch (Exception ex) {
                Log.e("TAG", "error: " + ex.getMessage(), ex);
            }
        }

        private class HoroscopeAsyncTask extends AsyncTask<String, Void, Void> {
            public HoroscopeAsyncTask(ProgressBar pb1) {
                pb = pb1;
            }

            @Override
            protected void onPreExecute() {
                pb.setVisibility(View.VISIBLE);
                super.onPreExecute();
            }

            @Override
            protected Void doInBackground(String... params) {
                getHoroscope();
                try {
                    Log.d("TAG", "test u try");
                    JSONObject jsonObject = new JSONObject(response_str);
                    JSONArray jsonArray = jsonObject.getJSONArray("horoscope");
                    for (int i = 0; i < jsonArray.length(); i++) {
                        Log.d("TAG", "test u for");
                        JSONObject horoscopeObj = jsonArray.getJSONObject(i);
                        String horoscopeSign = horoscopeObj.getString("name_sign");
                        String horoscopeText = horoscopeObj.getString("txt_hrs");
                        zodiacItem = new ZodiacItem(horoscopeSign, horoscopeText,
                                duration[i], images[i], largeImages[i]);
                        zodiacList.add(zodiacItem);
                        zodiacFeed.addItem(zodiacItem);
                        // Everything should be moved into the POJO class.
                        Log.d("TAG", "ZNAK: " + zodiacItem.getName() + " HOROSKOP: " + zodiacItem.getText());
                    }
                } catch (JSONException e) {
                    e.printStackTrace();
                    Log.e("TAG", "error: " + e.getMessage(), e);
                }
                return null;
            }

            @Override
            protected void onPostExecute(Void result) {
                pb.setVisibility(View.GONE);
                list.setAdapter(adapter);
                adapter.notifyDataSetChanged();
                super.onPostExecute(result);
            }
        }

    Here is the code for the ListFragment class:

        public class ListFragment extends Fragment {
            @Override
            public void onCreate(Bundle savedInstanceState) {
                // Retain this fragment across configuration changes.
                setRetainInstance(true);
                super.onCreate(savedInstanceState);
            }

            @Override
            public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
                View view = inflater.inflate(R.layout.fragment_list, container, false);
                return view;
            }
        }
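    A common fix for this class of problem (a sketch of the usual approach, not an answer from the original page): the AsyncTask runs again because onCreate() starts it unconditionally, and rotation recreates the activity. Keeping the downloaded list in a fragment with setRetainInstance(true), and starting the task only when that cache is empty, lets a rotation reuse the data instead of refetching it. DataHolderFragment below is an invented name; on current Android you would reach for a ViewModel instead, but this matches the support-library APIs used in the question.

        import java.util.List;
        import android.os.Bundle;
        import android.support.v4.app.Fragment;
        import android.support.v4.app.FragmentManager;

        // Sketch: a UI-less retained fragment that caches the loaded items across rotation.
        public class DataHolderFragment extends Fragment {
            private List<ZodiacItem> cached; // survives configuration changes

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setRetainInstance(true); // keep this instance when the activity is recreated
            }

            public List<ZodiacItem> getCached() { return cached; }
            public void setCached(List<ZodiacItem> data) { cached = data; }
        }

    The activity's onCreate() would then load only when there is no cached data:

        FragmentManager fm = getSupportFragmentManager();
        DataHolderFragment holder = (DataHolderFragment) fm.findFragmentByTag("data");
        if (holder == null) {
            holder = new DataHolderFragment();
            fm.beginTransaction().add(holder, "data").commit();
        }
        if (holder.getCached() == null) {
            new HoroscopeAsyncTask(pb).execute(); // first launch: actually fetch,
            // and in onPostExecute() call holder.setCached(zodiacList)
        } else {
            zodiacList = holder.getCached();      // rotation: reuse, no network call
            list.setAdapter(adapter);
        }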


  • HttpSession with Spring 3 MVC

    - by vipul12389
    I want to use HttpSession with Spring 3 MVC. I searched the web and found this solution: http://forum.springsource.org/showthread.php?98850-Adding-to-stuff-to-the-session-while-using-ResponseBody

    Basically, my application auto-authenticates the user by getting the Windows ID and authorizing through LDAP (it's an intranet site). Here is the flow of the application:

    1. The user enters the application URL (http://localhost:8082/eIA_Mock_5), which has a welcome page (index.jsp).
    2. index.jsp gets the Windows ID through jQuery and hits login.html (through Ajax), passing the Windows ID.
    3. login.html (a controller) authenticates through LDAP and gives back a 'Valid' string as a response.
    4. The JavaScript, upon getting the correct response, redirects to the welcome page, i.e. goes to localhost:8082/eIA_Mock_5/welcome.html.

    Now, I have a filter associated with it which checks, for each incoming request, whether the session is valid. The problem is that even though I set data on the HttpSession, the filter (and any other controller) fails to get the data through the session, and as a result it doesn't proceed further. Here is the code; could you suggest what is actually wrong?

    Home_Controller.java:

        @Controller
        public class Home_Controller {

            public static Log logger = LogFactory.getLog(Home_Controller.class);

            @RequestMapping(value = {"/welcome"})
            public ModelAndView loadWelcomePage(HttpServletRequest request, HttpServletResponse response) {
                ModelAndView mdv = new ModelAndView();
                try {
                    /*HttpSession session = request.getSession();
                    UserMasterBean userBean = (UserMasterBean) session.getAttribute("userBean");
                    String userName = userBean.getWindowsId();
                    if (userName == null || userName.equalsIgnoreCase("")) {
                        mdv.setViewName("homePage");
                        System.out.println("Unable to authenticate user ");
                        logger.debug("Unable to authenticate user ");
                    } else {
                        System.out.println("Welcome User " + userName);
                        logger.debug("Welcome User " + userName);
                    */
                    mdv.setViewName("homePage");
                    /*}*/
                } catch (Exception e) {
                    logger.debug("inside authenticateUser ", e);
                    e.printStackTrace();
                }
                return mdv;
            }

            @RequestMapping(value = "/login", method = RequestMethod.GET)
            public @ResponseBody String authenticateUser(@RequestParam String userName, HttpSession session) {
                logger.debug("inside authenticateUser");
                String returnResponse = new String();
                try {
                    logger.debug("userName for Authentication " + userName);
                    System.out.println("userName for Authentication " + userName);
                    //HttpSession session = request.getSession();
                    if (userName == null || userName.trim().equalsIgnoreCase(""))
                        returnResponse = "Invalid";
                    else {
                        System.out.println("uname " + userName);
                        String ldapResponse = LDAPConnectUtil.isValidActiveDirectoryUser(userName, "");
                        if (ldapResponse.equalsIgnoreCase("true")) {
                            returnResponse = "Valid";
                            System.out.println(userName + " Authenticated");
                            logger.debug(userName + " Authenticated");
                            UserMasterBean userBean = new UserMasterBean();
                            userBean.setWindowsId(userName);
                            //if (session.getAttribute("userBean") == null)
                            session.setAttribute("userBean", userBean);
                        } else {
                            returnResponse = "Invalid";
                            //session.setAttribute("userBean", null);
                            System.out.println("Unable to Authenticate the user through Ldap");
                            logger.debug("Unable to Authenticate the user through Ldap");
                        }
                        System.out.println("ldapResponse " + ldapResponse);
                        logger.debug("ldapResponse " + ldapResponse);
                        System.out.println("returnResponse " + returnResponse);
                    }
                    UserMasterBean u = (UserMasterBean) session.getAttribute("userBean");
                    System.out.println("winId " + u.getWindowsId());
                } catch (Exception e) {
                    e.printStackTrace();
                    logger.debug("Exception in authenticateUser ", e);
                }
                return returnResponse;
            }
        }

    Filter:

        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) {
            System.out.println("in PageFilter");
            boolean flag = false;
            HttpServletRequest objHttpServletRequest = (HttpServletRequest) request;
            HttpServletResponse objHttpServletResponse = (HttpServletResponse) response;
            HttpSession session = objHttpServletRequest.getSession();
            String contextPath = objHttpServletRequest.getContextPath();
            String servletPath = objHttpServletRequest.getSession().getServletContext()
                    .getRealPath(objHttpServletRequest.getServletPath());
            logger.debug("contextPath :" + contextPath);
            logger.debug("servletPath :" + servletPath);
            System.out.println("in PageFilter, contextPath :" + contextPath);
            System.out.println("in PageFilter, servletPath :" + servletPath);
            if (servletPath.endsWith("\\") || servletPath.endsWith("/")
                    || servletPath.indexOf("css") > 0 || servletPath.indexOf("jsp") > 0
                    || servletPath.indexOf("images") > 0 || servletPath.indexOf("js") > 0
                    || servletPath.endsWith("index.jsp") || servletPath.indexOf("xls") > 0
                    || servletPath.indexOf("ini") > 0 || servletPath.indexOf("login.html") > 0
                    || /*servletPath.endsWith("welcome.html") ||*/ servletPath.endsWith("logout.do")) {
                System.out.println("User is trying to access allowed pages like Login.jsp, errorPage.jsp, js, images, css");
                logger.debug("User is trying to access allowed pages like Login.jsp, errorPage.jsp, js, images, css");
                flag = true;
            }
            if (flag == false) {
                System.out.println("flag = false");
                if (session.getAttribute("userBean") == null)
                    System.out.println("yes session.userbean is null");
                if ((session != null) && (session.getAttribute("userBean") != null)) {
                    System.out.println("session!=null && session.getAttribute(userId)!=null");
                    logger.debug("IF Part");
                    UserMasterBean userBean = (UserMasterBean) session.getAttribute("userBean");
                    String windowsId = userBean.getWindowsId();
                    logger.debug("User Id " + windowsId + " allowed access");
                    System.out.println("User Id " + windowsId + " allowed access");
                    flag = true;
                } else {
                    System.out.println("else .....session!=null && session.getAttribute(userId)!=null");
                    logger.debug("Else Part");
                    flag = false;
                }
            }
            if (flag == true) {
                try {
                    System.out.println("before chain.doFilter(request, response)");
                    chain.doFilter(request, response);
                } catch (Exception e) {
                    e.printStackTrace();
                    try {
                        objHttpServletResponse.sendRedirect(contextPath + "/logout.do");
                    } catch (Exception ex) {
                        ex.printStackTrace();
                    }
                }
            } else {
                try {
                    System.out.println("before sendRedirect");
                    objHttpServletResponse.sendRedirect(contextPath + "/jsp/errorPage.jsp");
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
            }
            System.out.println("end of PageFilter");
        }

    Index.jsp:

        <script type="text/javascript">
        //alert("inside s13");
        var WinNetwork = new ActiveXObject("WScript.Network");
        var userName = WinNetwork.UserName;
        alert(userName);
        $.ajax({
            url : "login.html",
            data : "userName=" + userName,
            success : function(result) {
                alert("result == " + result);
                if (result == "Valid")
                    window.location = "http://10.160.118.200:8082/eIA_Mock_5/welcome.html";
            }
        });
        </script>

    web.xml has a filter entry with the URL pattern *. I am using Spring 3 MVC.
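    One thing worth checking here (an observation, not a confirmed diagnosis from the thread): the Ajax call that creates the session goes to localhost:8082, but the success handler then navigates to http://10.160.118.200:8082/..., a different host name. Browsers scope the JSESSIONID cookie to the host, so the request to welcome.html arrives with a fresh, empty session and the filter never sees userBean. Keeping the post-login navigation relative, or issuing it server-side, keeps both requests on one session. A hedged Java sketch of the server-side variant (ldapOk() is an invented stand-in for the LDAPConnectUtil check above):

        import javax.servlet.http.HttpSession;
        import org.springframework.stereotype.Controller;
        import org.springframework.web.bind.annotation.RequestMapping;
        import org.springframework.web.bind.annotation.RequestMethod;
        import org.springframework.web.bind.annotation.RequestParam;

        @Controller
        public class LoginRedirectSketch {

            // Stand-in for LDAPConnectUtil.isValidActiveDirectoryUser(...).
            private boolean ldapOk(String userName) {
                return userName != null && !userName.trim().isEmpty();
            }

            @RequestMapping(value = "/login", method = RequestMethod.GET)
            public String authenticateAndGo(@RequestParam String userName, HttpSession session) {
                if (ldapOk(userName)) {
                    UserMasterBean userBean = new UserMasterBean();
                    userBean.setWindowsId(userName);
                    session.setAttribute("userBean", userBean);
                    // Relative redirect: same host, so the same JSESSIONID cookie is sent.
                    return "redirect:/welcome.html";
                }
                return "redirect:/jsp/errorPage.jsp";
            }
        }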


  • Bridge or Factory and How

    - by Chris
    I'm trying to learn patterns, and I've got a job that is screaming for a pattern; I just know it, but I can't figure it out. I know the filter type is something that can be abstracted and possibly bridged. I'M NOT LOOKING FOR A CODE REWRITE, JUST SUGGESTIONS. I'm not looking for someone to do my job. I would like to know how patterns could be applied to this example.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Data;
        using System.IO;
        using System.Xml;
        using System.Text.RegularExpressions;

        namespace CopyTool
        {
            class CopyJob
            {
                public enum FilterType { TextFilter, RegExFilter, NoFilter }

                public FilterType JobFilterType { get; set; }

                private string _jobName;
                public string JobName
                {
                    get { return _jobName; }
                    set { _jobName = value; }
                }

                private int currentIndex;
                public int CurrentIndex
                {
                    get { return currentIndex; }
                }

                private DataSet ds;
                public int MaxJobs
                {
                    get { return ds.Tables["Job"].Rows.Count; }
                }

                private string _filter;
                public string Filter
                {
                    get { return _filter; }
                    set { _filter = value; }
                }

                private string _fromFolder;
                public string FromFolder
                {
                    get { return _fromFolder; }
                    set
                    {
                        if (Directory.Exists(value))
                        {
                            _fromFolder = value;
                        }
                        else
                        {
                            throw new DirectoryNotFoundException(String.Format("Folder not found: {0}", value));
                        }
                    }
                }

                private List<string> _toFolders;
                public List<string> ToFolders
                {
                    get { return _toFolders; }
                }

                public CopyJob()
                {
                    Initialize();
                }

                private void Initialize()
                {
                    if (ds == null)
                    {
                        ds = new DataSet();
                    }
                    ds.ReadXml(Properties.Settings.Default.ConfigLocation);
                    LoadValues(0);
                }

                public void Execute()
                {
                    ExecuteJob(FromFolder, _toFolders, Filter, JobFilterType);
                }

                public void ExecuteAll()
                {
                    string OrigPath;
                    List<string> DestPaths;
                    string FilterText;
                    FilterType FilterWay;
                    foreach (DataRow rw in ds.Tables["Job"].Rows)
                    {
                        OrigPath = rw["FromFolder"].ToString();
                        FilterText = rw["FilterText"].ToString();
                        switch (rw["FilterType"].ToString())
                        {
                            case "TextFilter": FilterWay = FilterType.TextFilter; break;
                            case "RegExFilter": FilterWay = FilterType.RegExFilter; break;
                            default: FilterWay = FilterType.NoFilter; break;
                        }
                        DestPaths = new List<string>();
                        foreach (DataRow crw in rw.GetChildRows("Job_ToFolder"))
                        {
                            DestPaths.Add(crw["FolderPath"].ToString());
                        }
                        ExecuteJob(OrigPath, DestPaths, FilterText, FilterWay);
                    }
                }

                private void ExecuteJob(string OrigPath, List<string> DestPaths, string FilterText, FilterType FilterWay)
                {
                    FileInfo[] files;
                    switch (FilterWay)
                    {
                        case FilterType.RegExFilter: files = GetFilesByRegEx(new Regex(FilterText), OrigPath); break;
                        case FilterType.TextFilter: files = GetFilesByFilter(FilterText, OrigPath); break;
                        default: files = new DirectoryInfo(OrigPath).GetFiles(); break;
                    }
                    foreach (string fld in DestPaths)
                    {
                        CopyFiles(files, fld);
                    }
                }

                public void MoveToJob(int RecordNumber)
                {
                    Save();
                    LoadValues(RecordNumber - 1);
                }

                public void AddToFolder(string folderPath)
                {
                    if (Directory.Exists(folderPath))
                    {
                        _toFolders.Add(folderPath);
                    }
                    else
                    {
                        throw new DirectoryNotFoundException(String.Format("Folder not found: {0}", folderPath));
                    }
                }

                public void DeleteToFolder(int index)
                {
                    _toFolders.RemoveAt(index);
                }

                public void Save()
                {
                    DataRow rw = ds.Tables["Job"].Rows[currentIndex];
                    rw["JobName"] = _jobName;
                    rw["FromFolder"] = _fromFolder;
                    rw["FilterText"] = _filter;
                    switch (JobFilterType)
                    {
                        case FilterType.RegExFilter: rw["FilterType"] = "RegExFilter"; break;
                        case FilterType.TextFilter: rw["FilterType"] = "TextFilter"; break;
                        default: rw["FilterType"] = "NoFilter"; break;
                    }
                    DataRow[] ToFolderRows = ds.Tables["Job"].Rows[currentIndex].GetChildRows("Job_ToFolder");
                    for (int i = 0; i <= ToFolderRows.GetUpperBound(0); i++)
                    {
                        ToFolderRows[i].Delete();
                    }
                    foreach (string fld in _toFolders)
                    {
                        DataRow ToFolderRow = ds.Tables["ToFolder"].NewRow();
                        ToFolderRow["JobId"] = ds.Tables["Job"].Rows[currentIndex]["JobId"];
                        ToFolderRow["Job_Id"] = ds.Tables["Job"].Rows[currentIndex]["Job_Id"];
                        ToFolderRow["FolderPath"] = fld;
                        ds.Tables["ToFolder"].Rows.Add(ToFolderRow);
                    }
                }

                public void Delete()
                {
                    ds.Tables["Job"].Rows.RemoveAt(currentIndex);
                    LoadValues(currentIndex++);
                }

                public void MoveNext()
                {
                    Save();
                    currentIndex++;
                    LoadValues(currentIndex);
                }

                public void MovePrevious()
                {
                    Save();
                    currentIndex--;
                    LoadValues(currentIndex);
                }

                public void MoveFirst()
                {
                    Save();
                    LoadValues(0);
                }

                public void MoveLast()
                {
                    Save();
                    LoadValues(ds.Tables["Job"].Rows.Count - 1);
                }

                public void CreateNew()
                {
                    Save();
                    int MaxJobId = 0;
                    Int32.TryParse(ds.Tables["Job"].Compute("Max(JobId)", "").ToString(), out MaxJobId);
                    DataRow rw = ds.Tables["Job"].NewRow();
                    rw["JobId"] = MaxJobId + 1;
                    ds.Tables["Job"].Rows.Add(rw);
                    LoadValues(ds.Tables["Job"].Rows.IndexOf(rw));
                }

                public void Commit()
                {
                    Save();
                    ds.WriteXml(Properties.Settings.Default.ConfigLocation);
                }

                private void LoadValues(int index)
                {
                    if (index > ds.Tables["Job"].Rows.Count - 1)
                    {
                        currentIndex = ds.Tables["Job"].Rows.Count - 1;
                    }
                    else if (index < 0)
                    {
                        currentIndex = 0;
                    }
                    else
                    {
                        currentIndex = index;
                    }
                    DataRow rw = ds.Tables["Job"].Rows[currentIndex];
                    _jobName = rw["JobName"].ToString();
                    _fromFolder = rw["FromFolder"].ToString();
                    _filter = rw["FilterText"].ToString();
                    switch (rw["FilterType"].ToString())
                    {
                        case "TextFilter": JobFilterType = FilterType.TextFilter; break;
                        case "RegExFilter": JobFilterType = FilterType.RegExFilter; break;
                        default: JobFilterType = FilterType.NoFilter; break;
                    }
                    if (_toFolders == null) _toFolders = new List<string>();
                    _toFolders.Clear();
                    foreach (DataRow crw in rw.GetChildRows("Job_ToFolder"))
                    {
                        AddToFolder(crw["FolderPath"].ToString());
                    }
                }

                private static FileInfo[] GetFilesByRegEx(Regex rgx, string locPath)
                {
                    DirectoryInfo d = new DirectoryInfo(locPath);
                    FileInfo[] fullFileList = d.GetFiles();
                    List<FileInfo> filteredList = new List<FileInfo>();
                    foreach (FileInfo fi in fullFileList)
                    {
                        if (rgx.IsMatch(fi.Name))
                        {
                            filteredList.Add(fi);
                        }
                    }
                    return filteredList.ToArray();
                }

                private static FileInfo[] GetFilesByFilter(string filter, string locPath)
                {
                    DirectoryInfo d = new DirectoryInfo(locPath);
                    FileInfo[] fi = d.GetFiles(filter);
                    return fi;
                }

                private void CopyFiles(FileInfo[] files, string destPath)
                {
                    foreach (FileInfo fi in files)
                    {
                        bool success = false;
                        int i = 0;
                        string copyToName = fi.Name;
                        string copyToExt = fi.Extension;
                        string copyToNameWithoutExt = Path.GetFileNameWithoutExtension(fi.FullName);
                        while (!success && i < 100)
                        {
                            i++;
                            try
                            {
                                if (File.Exists(Path.Combine(destPath, copyToName)))
                                    throw new CopyFileExistsException();
                                File.Copy(fi.FullName, Path.Combine(destPath, copyToName));
                                success = true;
                            }
                            catch (CopyFileExistsException ex)
                            {
                                copyToName = String.Format("{0} ({1}){2}", copyToNameWithoutExt, i, copyToExt);
                            }
                        }
                    }
                }
            }

            public class CopyFileExistsException : Exception
            {
                public string Message;
            }
        }


  • Numpy zero rank array indexing/broadcasting

    - by Lemming
    I'm trying to write a function that supports broadcasting and is fast at the same time. However, numpy's zero-rank arrays are causing trouble as usual. I couldn't find anything useful on Google, or by searching here. So, I'm asking you: how should I implement broadcasting efficiently and handle zero-rank arrays at the same time? This whole post became larger than anticipated, sorry.

    Details: To clarify what I'm talking about, I'll give a simple example. Say I want to implement a Heaviside step-function, i.e. a function that acts on the real axis, which is 0 on the negative side, 1 on the positive side, and, depending on the case, either 0, 0.5, or 1 at the point 0.

    Implementation

    Masking

    The most efficient way I found so far is the following. It uses boolean arrays as masks to assign the correct values to the corresponding slots in the output vector.

        from numpy import *

        def step_mask(x, limit=+1):
            """Heaviside step-function.

            y = 0 if x < 0
            y = 1 if x > 0
            See below for x == 0.

            Arguments:
            x       Evaluate the function at these points.
            limit   Which limit at x == 0?
                    limit > 0:  y = 1
                    limit == 0: y = 0.5
                    limit < 0:  y = 0

            Return:
            The values corresponding to x.
            """
            b = broadcast(x, limit)
            out = zeros(b.shape)
            out[x > 0] = 1
            mask = (limit > 0) & (x == 0)
            out[mask] = 1
            mask = (limit == 0) & (x == 0)
            out[mask] = 0.5
            mask = (limit < 0) & (x == 0)
            out[mask] = 0
            return out

    List Comprehension

    The following-the-numpy-docs way is to use a list comprehension on the flat iterator of the broadcast object. However, list comprehensions become absolutely unreadable for such complicated functions.

        def step_comprehension(x, limit=+1):
            b = broadcast(x, limit)
            out = empty(b.shape)
            out.flat = [(1 if x_ > 0 else
                        (0 if x_ < 0 else
                        (1 if l_ > 0 else
                        (0.5 if l_ == 0 else
                         0))))
                        for x_, l_ in b]
            return out

    For Loop

    And finally, the most naive way is a for loop. It's probably the most readable option, but Python for-loops are anything but fast, and hence a really bad idea in numerics.

        def step_for(x, limit=+1):
            b = broadcast(x, limit)
            out = empty(b.shape)
            for i, (x_, l_) in enumerate(b):
                if x_ > 0:
                    out[i] = 1
                elif x_ < 0:
                    out[i] = 0
                elif l_ > 0:
                    out[i] = 1
                elif l_ < 0:
                    out[i] = 0
                else:
                    out[i] = 0.5
            return out

    Test

    First of all, a brief test to see if the output is correct.

        >>> x = array([-1, -0.1, 0, 0.1, 1])
        >>> step_mask(x, +1)
        array([ 0.,  0.,  1.,  1.,  1.])
        >>> step_mask(x, 0)
        array([ 0. ,  0. ,  0.5,  1. ,  1. ])
        >>> step_mask(x, -1)
        array([ 0.,  0.,  0.,  1.,  1.])

    It is correct, and the other two functions give the same output.

    Performance

    How about efficiency? These are the timings:

        In [45]: xl = linspace(-2, 2, 500001)
        In [46]: %timeit step_mask(xl)
        10 loops, best of 3: 19.5 ms per loop
        In [47]: %timeit step_comprehension(xl)
        1 loops, best of 3: 1.17 s per loop
        In [48]: %timeit step_for(xl)
        1 loops, best of 3: 1.15 s per loop

    The masked version performs best, as expected. However, I'm surprised that the comprehension is on the same level as the for loop.

    Zero Rank Arrays

    But 0-rank arrays pose a problem. Sometimes you want to call a function with scalar input, and preferably not have to worry about wrapping all scalars in at-least-1-D arrays.

        >>> step_mask(1)
        Traceback (most recent call last):
          File "<ipython-input-50-91c06aa4487b>", line 1, in <module>
            step_mask(1)
          File "script.py", line 22, in step_mask
            out[x>0] = 1
        IndexError: 0-d arrays can't be indexed.

        >>> step_for(1)
        Traceback (most recent call last):
          File "<ipython-input-51-4e0de4fcb197>", line 1, in <module>
            step_for(1)
          File "script.py", line 55, in step_for
            out[i] = 1
        IndexError: 0-d arrays can't be indexed.

        >>> step_comprehension(1)
        array(1.0)

    Only the list comprehension can handle 0-rank arrays. The other two versions would need special-case handling for 0-rank arrays. Numpy gets a bit messy when you want to use the same code for arrays and scalars. However, I really like to have functions that work on input as arbitrary as possible; who knows which parameters I'll want to iterate over at some point.

    Question: What is the best way to implement a function like the one above? Is there a way to avoid "if scalar, then ..." special cases? I'm not looking for a built-in Heaviside; it's just a simplified example. In my code the above pattern appears in many places to make parameter iteration as simple as possible without littering the client code with for loops or comprehensions. Furthermore, I'm aware of Cython, or weave & Co., or implementing directly in C. However, the performance of the masked version above is sufficient for the moment, and for the moment I would like to keep things as simple as possible.


  • Tips on how to refactor this unwieldy upvote/downvote code

    - by bob_cobb
    This code is for an upvote/downvote system. Basically I'm:

    - Incrementing the count by 1 when voting up
    - Decrementing the count by 1 when voting down
    - Assuming a negative score when the number of downvotes exceeds the upvotes, so the count stays 0
    - Reverting the count back to what it originally was when clicking upvote twice or downvote twice
    - Never going below 0 (i.e., never showing negative numbers)

    Basically it's the same scoring scheme reddit uses, and I tried to get some ideas from the source, which was minified and kind of hard to grok:

        a.fn.vote = function(b, c, e, j) {
            if (reddit.logged && a(this).hasClass("arrow")) {
                var k = a(this).hasClass("up") ? 1 : a(this).hasClass("down") ? -1 : 0,
                    v = a(this).all_things_by_id(),
                    p = v.children().not(".child").find(".arrow"),
                    q = k == 1 ? "up" : "upmod";
                p.filter("." + q).removeClass(q).addClass(k == 1 ? "upmod" : "up");
                q = k == -1 ? "down" : "downmod";
                p.filter("." + q).removeClass(q).addClass(k == -1 ? "downmod" : "down");
                reddit.logged && (v.each(function() {
                    var b = a(this).find(".entry:first, .midcol:first");
                    k > 0 ? b.addClass("likes").removeClass("dislikes unvoted")
                          : k < 0 ? b.addClass("dislikes").removeClass("likes unvoted")
                                  : b.addClass("unvoted").removeClass("likes dislikes")
                }), a.defined(j) || (j = v.filter(":first").thing_id(), b += e ? "" : "-" + j,
                    a.request("vote", {id: j, dir: k, vh: b})));
                c && c(v, k)
            }
        };

    I'm trying to look for a pattern, but there are a bunch of edge cases that I've been adding in, and it's still a little off. My code (and fiddle):

        $(function() {
            var down = $('.vote-down');
            var up = $('.vote-up');
            var direction = up.add(down);
            var largeCount = $('#js-large-count');
            var totalUp = $('#js-total-up');
            var totalDown = $('#js-total-down');
            var totalUpCount = parseInt(totalUp.text(), 10);
            var totalDownCount = parseInt(totalDown.text(), 10);

            var castVote = function(submissionId, voteType) {
                /*
                var postURL = '/vote';
                $.post(postURL, {
                    submissionId : submissionId,
                    voteType : voteType
                }, function(data) {
                    if (data.response === 'success') {
                        totalDown.text(data.downvotes);
                        totalUp.text(data.upvotes);
                    }
                }, 'json');
                */
                alert('voted!');
            };

            $(direction).on('click', direction, function() {
                // The submission ID
                var $that = $(this),
                    submissionId = $that.attr('id'),
                    voteType = $that.attr('dir'), // what direction was voted? [up or down]
                    isDown = $that.hasClass('down'),
                    isUp = $that.hasClass('up'),
                    curVotes = parseInt($that.parent().find('div.count').text(), 10); // current vote

                castVote(submissionId, voteType);

                // Voted up on submission
                if (voteType === 'up') {
                    var alreadyVotedUp = $that.hasClass('likes'),
                        upCount = $that.next('div.count'),
                        dislikes = $that.nextAll('a').first(); // next anchor attr

                    if (alreadyVotedUp) {
                        // Clicked the up arrow and previously voted up
                        $that.toggleClass('likes up');
                        if (totalUpCount > totalDownCount) {
                            upCount.text(curVotes - 1);
                            largeCount.text(curVotes - 1);
                        } else {
                            upCount.text(0);
                            largeCount.text(0);
                        }
                        upCount.css('color', '#555').hide().fadeIn();
                        largeCount.hide().fadeIn();
                    } else if (dislikes.hasClass('dislikes')) {
                        // Voted down, now voting up
                        if (totalDownCount > totalUpCount) {
                            upCount.text(0);
                            largeCount.text(0);
                        } else if (totalUpCount > totalDownCount) {
                            console.log(totalDownCount);
                            console.log(totalUpCount);
                            if (totalDownCount === 0) {
                                upCount.text(curVotes + 1);
                                largeCount.text(curVotes + 1);
                            } else {
                                upCount.text(curVotes + 2);
                                largeCount.text(curVotes + 2);
                            }
                        } else {
                            upCount.text(curVotes + 1);
                            largeCount.text(curVotes + 1);
                        }
                        dislikes.toggleClass('down dislikes');
                        upCount.css('color', '#296394').hide().fadeIn(200);
                        largeCount.hide().fadeIn();
                    } else {
                        if (totalDownCount > totalUpCount) {
                            upCount.text(0);
                            largeCount.text(0);
                        } else {
                            // They clicked the up arrow and haven't voted up yet
                            upCount.text(curVotes + 1);
                            largeCount.text(curVotes + 1).hide().fadeIn(200);
                            upCount.css('color', '#296394').hide().fadeIn(200);
                        }
                    }
                    // Change arrow to dark blue
                    if (isUp) {
                        $that.toggleClass('up likes');
                    }
                }

                // Voted down on submission
                if (voteType === 'down') {
                    var alreadyVotedDown = $that.hasClass('dislikes'),
                        downCount = $that.prev('div.count'), // Get previous anchor attribute
                        likes = $that.prevAll('a').first();

                    if (alreadyVotedDown) {
                        if (curVotes === 0) {
                            if (totalDownCount > totalUp) {
                                downCount.text(curVotes);
                                largeCount.text(curVotes);
                            } else {
                                if (totalUpCount < totalDownCount || totalUpCount == totalDownCount) {
                                    downCount.text(0);
                                    largeCount.text(0);
                                } else {
                                    downCount.text((totalUpCount - totalUpCount) + 1);
                                    largeCount.text((totalUpCount - totalUpCount) + 1);
                                }
                            }
                        } else {
                            downCount.text(curVotes + 1);
                            largeCount.text(curVotes + 1);
                        }
                        $that.toggleClass('down dislikes');
                        downCount.css('color', '#555').hide().fadeIn(200);
                        largeCount.hide().fadeIn();
                    } else if (likes.hasClass('likes')) {
                        // They voted up from 0, and now are voting down
                        if (curVotes <= 1) {
                            downCount.text(0);
                            largeCount.text(0);
                        } else {
                            // They voted up, now they are voting down (from a number > 0)
                            downCount.text(curVotes - 2);
                            largeCount.text(curVotes - 2);
                        }
                        likes.toggleClass('up likes');
                        downCount.css('color', '#ba2a2a').hide().fadeIn(200);
                        largeCount.hide().fadeIn(200);
                    } else {
                        if (curVotes > 0) {
                            downCount.text(curVotes - 1);
                            largeCount.text(curVotes - 1);
                        } else {
                            downCount.text(curVotes);
                            largeCount.text(curVotes);
                        }
                        downCount.css('color', '#ba2a2a').hide().fadeIn(200);
                        largeCount.hide().fadeIn(200);
                    }
                    // Change the arrow to red
                    if (isDown) {
                        $that.toggleClass('down dislikes');
                    }
                }
                return false;
            });
        });

    Pretty convoluted, right? Is there a way to do something similar but in about 1/3 of the code I've written? After attempting to re-write it, I found myself doing the same thing, so I just gave up halfway through and decided to ask for some help (fiddle of most recent).


  • What's New in ASP.NET 4

    - by Navaneeth
    The .NET Framework version 4 includes enhancements for ASP.NET 4 in targeted areas. Visual Studio 2010 and Microsoft Visual Web Developer Express also include enhancements and new features for improved Web development. This document provides an overview of many of the new features that are included in the upcoming release.

    This topic contains the following sections:

    - ASP.NET Core Services
    - ASP.NET Web Forms
    - ASP.NET MVC
    - Dynamic Data
    - ASP.NET Chart Control
    - Visual Web Developer Enhancements
    - Web Application Deployment with Visual Studio 2010
    - Enhancements to ASP.NET Multi-Targeting

    ASP.NET Core Services

    ASP.NET 4 introduces many features that improve core ASP.NET services such as output caching and session state storage.

    Extensible Output Caching

    Since the time that ASP.NET 1.0 was released, output caching has enabled developers to store the generated output of pages, controls, and HTTP responses in memory. On subsequent Web requests, ASP.NET can serve content more quickly by retrieving the generated output from memory instead of regenerating the output from scratch. However, this approach has a limitation: generated content always has to be stored in memory. On servers that experience heavy traffic, the memory requirements for output caching can compete with the memory requirements for other parts of a Web application.

    ASP.NET 4 adds extensibility to output caching that enables you to configure one or more custom output-cache providers. Output-cache providers can use any storage mechanism to persist HTML content. These storage options can include local or remote disks, cloud storage, and distributed cache engines. Output-cache provider extensibility in ASP.NET 4 lets you design more aggressive and more intelligent output-caching strategies for Web sites. For example, you can create an output-cache provider that caches the "Top 10" pages of a site in memory, while caching pages that get lower traffic on disk. Alternatively, you can cache every vary-by combination for a rendered page, but use a distributed cache so that the memory consumption is offloaded from front-end Web servers.

    You create a custom output-cache provider as a class that derives from the OutputCacheProvider type. You can then configure the provider in the Web.config file by using the new providers subsection of the outputCache element. For more information and for examples that show how to configure the output cache, see outputCache Element for caching (ASP.NET Settings Schema). For more information about the classes that support caching, see the documentation for the OutputCache and OutputCacheProvider classes.

    By default, in ASP.NET 4, all HTTP responses, rendered pages, and controls use the in-memory output cache. The defaultProvider attribute for ASP.NET is AspNetInternalProvider. You can change the default output-cache provider used for a Web application by specifying a different provider name for the defaultProvider attribute. In addition, you can select different output-cache providers for individual controls and for individual requests, and programmatically specify which provider to use. For more information, see the HttpApplication.GetOutputCacheProviderName(HttpContext) method.
    The easiest way to choose a different output-cache provider for different Web user controls is to do so declaratively by using the new providerName attribute in a page or control directive, as shown in the following example:

        <%@ OutputCache Duration="60" VaryByParam="None" providerName="DiskCache" %>

    Preloading Web Applications

    Some Web applications must load large amounts of data or must perform expensive initialization processing before serving the first request. In earlier versions of ASP.NET, for these situations you had to devise custom approaches to "wake up" an ASP.NET application and then run initialization code during the Application_Load method in the Global.asax file.

    To address this scenario, a new application preload manager (autostart feature) is available when ASP.NET 4 runs on IIS 7.5 on Windows Server 2008 R2. The preload feature provides a controlled approach for starting up an application pool, initializing an ASP.NET application, and then accepting HTTP requests. It lets you perform expensive application initialization prior to processing the first HTTP request. For example, you can use the application preload manager to initialize an application and then signal a load-balancer that the application was initialized and is ready to accept HTTP traffic.

    To use the application preload manager, an IIS administrator sets an application pool in IIS 7.5 to be automatically started by using the following configuration in the applicationHost.config file:

        <applicationPools>
            <add name="MyApplicationPool" startMode="AlwaysRunning" />
        </applicationPools>

    Because a single application pool can contain multiple applications, you specify individual applications to be automatically started by using the following configuration in the applicationHost.config file:

        <sites>
            <site name="MySite" id="1">
                <application path="/"
                    serviceAutoStartEnabled="true"
                    serviceAutoStartProvider="PrewarmMyCache">
                    <!-- Additional content -->
                </application>
            </site>
        </sites>
        <!-- Additional content -->
        <serviceAutoStartProviders>
            <add name="PrewarmMyCache" type="MyNamespace.CustomInitialization, MyLibrary" />
        </serviceAutoStartProviders>

    When an IIS 7.5 server is cold-started or when an individual application pool is recycled, IIS 7.5 uses the information in the applicationHost.config file to determine which Web applications have to be automatically started. For each application that is marked for preload, IIS 7.5 sends a request to ASP.NET 4 to start the application in a state during which the application temporarily does not accept HTTP requests. When it is in this state, ASP.NET instantiates the type defined by the serviceAutoStartProvider attribute (as shown in the previous example) and calls into its public entry point. You create a managed preload type that has the required entry point by implementing the IProcessHostPreloadClient interface, as shown in the following example:

        public class CustomInitialization : System.Web.Hosting.IProcessHostPreloadClient
        {
            public void Preload(string[] parameters)
            {
                // Perform initialization.
            }
        }

    After your initialization code runs in the Preload method and after the method returns, the ASP.NET application is ready to process requests.

    Permanently Redirecting a Page

    Content in Web applications is often moved over the lifetime of the application. This can cause links to be out of date, such as the links that are returned by search engines. In ASP.NET, developers have traditionally handled requests to old URLs by using the Redirect method to forward a request to the new URL.
    However, the Redirect method issues an HTTP 302 (Found) response (which is used for a temporary redirect). This results in an extra HTTP round trip. ASP.NET 4 adds a RedirectPermanent helper method that makes it easy to issue HTTP 301 (Moved Permanently) responses, as in the following example:

        RedirectPermanent("/newpath/foroldcontent.aspx");

    Search engines and other user agents that recognize permanent redirects will store the new URL that is associated with the content, which eliminates the unnecessary round trip made by the browser for temporary redirects.

    Session State Compression

    By default, ASP.NET provides two options for storing session state across a Web farm. The first option is a session state provider that invokes an out-of-process session state server. The second option is a session state provider that stores data in a Microsoft SQL Server database. Because both options store state information outside a Web application's worker process, session state has to be serialized before it is sent to remote storage. If a large amount of data is saved in session state, the size of the serialized data can become very large.

    ASP.NET 4 introduces a new compression option for both kinds of out-of-process session state providers. By using this option, applications that have spare CPU cycles on Web servers can achieve substantial reductions in the size of serialized session state data. You can set this option using the new compressionEnabled attribute of the sessionState element in the configuration file. When the compressionEnabled configuration option is set to true, ASP.NET compresses (and decompresses) serialized session state by using the .NET Framework GZipStream class. The following example shows how to set this attribute:

        <sessionState
            mode="SqlServer"
            sqlConnectionString="data source=dbserver;Initial Catalog=aspnetstate"
            allowCustomSqlDatabase="true"
            compressionEnabled="true"
        />

    ASP.NET Web Forms

    Web Forms has been a core feature in ASP.NET since the release of ASP.NET 1.0. Many enhancements have been made in this area for ASP.NET 4, such as the following:

    - The ability to set meta tags.
    - More control over view state.
    - Support for recently introduced browsers and devices.
    - Easier ways to work with browser capabilities.
    - Support for using ASP.NET routing with Web Forms.
    - More control over generated IDs.
    - The ability to persist selected rows in data controls.
    - More control over rendered HTML in the FormView and ListView controls.
    - Filtering support for data source controls.
    - Enhanced support for Web standards and accessibility.

    Setting Meta Tags with the Page.MetaKeywords and Page.MetaDescription Properties

    Two properties have been added to the Page class: MetaKeywords and MetaDescription. These two properties represent corresponding meta tags in the HTML rendered for a page, as shown in the following example:

        <head id="Head1" runat="server">
            <title>Untitled Page</title>
            <meta name="keywords" content="keyword1, keyword2" />
            <meta name="description" content="Description of my page" />
        </head>

    These two properties work like the Title property does, and they can be set in the @ Page directive. For more information, see Page.MetaKeywords and Page.MetaDescription.

    Enabling View State for Individual Controls

    A new property has been added to the Control class: ViewStateMode. You can use this property to disable view state for all controls on a page except those for which you explicitly enable view state.
    View state data is included in a page's HTML and increases the amount of time it takes to send a page to the client and post it back. Storing more view state than is necessary can cause a significant decrease in performance. In earlier versions of ASP.NET, you could reduce the impact of view state on a page's performance by disabling view state for specific controls. But sometimes it is easier to enable view state for a few controls that need it instead of disabling it for many that do not need it. For more information, see Control.ViewStateMode.

    Support for Recently Introduced Browsers and Devices

    ASP.NET includes a feature named browser capabilities that lets you determine the capabilities of the browser that a user is using. Browser capabilities are represented by the HttpBrowserCapabilities object, which is stored in the HttpRequest.Browser property. Information about a particular browser's capabilities is defined by a browser definition file. In ASP.NET 4, these browser definition files have been updated to contain information about recently introduced browsers and devices such as Google Chrome, Research in Motion BlackBerry smart phones, and the Apple iPhone. Existing browser definition files have also been updated. For more information, see How to: Upgrade an ASP.NET Web Application to ASP.NET 4 and ASP.NET Web Server Controls and Browser Capabilities. The browser definition files that are included with ASP.NET 4 are shown in the following list:

    - blackberry.browser
    - chrome.browser
    - Default.browser
    - firefox.browser
    - gateway.browser
    - generic.browser
    - ie.browser
    - iemobile.browser
    - iphone.browser
    - opera.browser
    - safari.browser

    A New Way to Define Browser Capabilities

    ASP.NET 4 includes a new feature referred to as browser capabilities providers. As the name suggests, this lets you build a provider that in turn lets you write custom code to determine browser capabilities. In ASP.NET version 3.5 Service Pack 1, you define browser capabilities in an XML file. This file resides in a machine-level folder or an application-level folder. Most developers do not need to customize these files, but for those who do, the provider approach can be easier than dealing with complex XML syntax. The provider approach makes it possible to simplify the process by implementing a common browser definition syntax, or a database that contains up-to-date browser definitions, or even a Web service for such a database. For more information about the new browser capabilities provider, see the What's New for ASP.NET 4 White Paper.

    Routing in ASP.NET 4

    ASP.NET 4 adds built-in support for routing with Web Forms. Routing is a feature that was introduced with ASP.NET 3.5 SP1 and lets you configure an application to use URLs that are meaningful to users and to search engines because they do not have to specify physical file names. This can make your site more user-friendly and your site content more discoverable by search engines. For example, the URL for a page that displays product categories in your application might look like the following example:

        http://website/products.aspx?categoryid=12

    By using routing, you can use the following URL to render the same information:

        http://website/products/software

    The second URL lets the user know what to expect and can result in significantly improved rankings in search engine results. The new features include the following:

    - The PageRouteHandler class is a simple HTTP handler that you use when you define routes. You no longer have to write a custom route handler.
    - The HttpRequest.RequestContext and Page.RouteData properties make it easier to access information that is passed in URL parameters.
    - The RouteUrl expression provides a simple way to create a routed URL in markup.
    - The RouteValue expression provides a simple way to extract URL parameter values in markup.
    - The RouteParameter class makes it easier to pass URL parameter values to a query for a data source control (similar to FormParameter).
    - You no longer have to change the Web.config file to enable routing.

    For more information about routing, see the following topics:

    - ASP.NET Routing
    - Walkthrough: Using ASP.NET Routing in a Web Forms Application
    - How to: Define Routes for Web Forms Applications
    - How to: Construct URLs from Routes
    - How to: Access URL Parameters in a Routed Page

    Setting Client IDs

    The new ClientIDMode property makes it easier to write client script that references HTML elements rendered for server controls. The increasing use of Microsoft Ajax makes the need to do this more common. For example, you may have a data control that renders a long list of products with prices, and you want to use client script to make a Web service call and update individual prices in the list as they change without refreshing the entire page.

    Typically you get a reference to an HTML element in client script by using the document.getElementById method. You pass to this method the value of the id attribute of the HTML element you want to reference. In the case of elements that are rendered for ASP.NET server controls, earlier versions of ASP.NET could make this difficult or impossible: you were not always able to predict what id values ASP.NET would generate, or ASP.NET could generate very long id values. The problem was especially difficult for data controls that would generate multiple rows for a single instance of the control in your markup.

    ASP.NET 4 adds two new algorithms for generating id attributes. These algorithms can generate id attributes that are easier to work with in client script because they are more predictable and simpler. For more information about how to use the new algorithms, see the following topics:

    - ASP.NET Web Server Control Identification
    - Walkthrough: Making Data-Bound Controls Easier to Access from JavaScript
    - Walkthrough: Making Controls Located in Web User Controls Easier to Access from JavaScript
    - How to: Access Controls from JavaScript by ID

    Persisting Row Selection in Data Controls

    The GridView and ListView controls enable users to select a row. In previous versions of ASP.NET, row selection was based on the row index on the page. For example, if you select the third item on page 1 and then move to page 2, the third item on page 2 is selected. In most cases, it is more desirable not to select any rows on page 2. ASP.NET 4 supports Persisted Selection, a new feature that was initially supported only in Dynamic Data projects in the .NET Framework 3.5 SP1. When this feature is enabled, the selected item is based on the row data key. This means that if you select the third row on page 1 and move to page 2, nothing is selected on page 2. When you move back to page 1, the third row is still selected. This is a much more natural behavior than the behavior in earlier versions of ASP.NET. Persisted selection is now supported for the GridView and ListView controls in all projects.
    You can enable this feature in the GridView control, for example, by setting the EnablePersistedSelection property, as shown in the following example:

        <asp:GridView id="GridView2" runat="server" EnablePersistedSelection="true">
        </asp:GridView>

    FormView Control Enhancements

    The FormView control is enhanced to make it easier to style the content of the control with CSS. In previous versions of ASP.NET, the FormView control rendered its contents using an item template. This made styling more difficult in the markup because unexpected table row and table cell tags were rendered by the control. The FormView control supports RenderOuterTable, a new property in ASP.NET 4. When this property is set to false, as shown in the following example, the table tags are not rendered. This makes it easier to apply CSS styles to the contents of the control.

        <asp:FormView ID="FormView1" runat="server" RenderOuterTable="false">

    For more information, see FormView Web Server Control Overview.

    ListView Control Enhancements

    The ListView control, which was introduced in ASP.NET 3.5, has all the functionality of the GridView control while giving you complete control over the output. This control has been made easier to use in ASP.NET 4. The earlier version of the control required that you specify a layout template that contained a server control with a known ID. The following markup shows a typical example of how to use the ListView control in ASP.NET 3.5:

        <asp:ListView ID="ListView1" runat="server">
            <LayoutTemplate>
                <asp:PlaceHolder ID="ItemPlaceHolder" runat="server"></asp:PlaceHolder>
            </LayoutTemplate>
            <ItemTemplate>
                <%# Eval("LastName") %>
            </ItemTemplate>
        </asp:ListView>

    In ASP.NET 4, the ListView control does not require a layout template. The markup shown in the previous example can be replaced with the following markup:

        <asp:ListView ID="ListView1" runat="server">
            <ItemTemplate>
                <%# Eval("LastName") %>
            </ItemTemplate>
        </asp:ListView>

    For more information, see ListView Web Server Control Overview.

    Filtering Data with the QueryExtender Control

    A very common task for developers who create data-driven Web pages is to filter data. This traditionally has been performed by building Where clauses in data source controls. This approach can be complicated, and in some cases the Where syntax does not let you take advantage of the full functionality of the underlying database. To make filtering easier, a new QueryExtender control has been added in ASP.NET 4. This control can be added to EntityDataSource or LinqDataSource controls in order to filter the data returned by these controls. The QueryExtender control relies on LINQ, but you do not need to know how to write LINQ queries to use it. The QueryExtender control supports a variety of filter options, as shown in the following list:

    - SearchExpression: Searches a field or fields for string values and compares them to a specified string value.
    - RangeExpression: Searches a field or fields for values in a range specified by a pair of values.
    - PropertyExpression: Compares a specified value to a property value in a field. If the expression evaluates to true, the data that is being examined is returned.
    - OrderByExpression: Sorts data by a specified column and sort direction.
    - CustomExpression: Calls a function that defines a custom filter in the page.

    For more information, see QueryExtender Web Server Control Overview.
    Enhanced Support for Web Standards and Accessibility

    Earlier versions of ASP.NET controls sometimes render markup that does not conform to HTML, XHTML, or accessibility standards. ASP.NET 4 eliminates most of these exceptions. For details about how the HTML that is rendered by each control meets accessibility standards, see ASP.NET Controls and Accessibility.

    CSS for Controls that Can be Disabled

    In ASP.NET 3.5, when a control is disabled (see WebControl.Enabled), a disabled attribute is added to the rendered HTML element. For example, the following markup creates a Label control that is disabled:

        <asp:Label id="Label1" runat="server" Text="Test" Enabled="false" />

    In ASP.NET 3.5, the previous control settings generate the following HTML:

        <span id="Label1" disabled="disabled">Test</span>

    In HTML 4.01, the disabled attribute is not considered valid on span elements. It is valid only on input elements because it specifies that they cannot be accessed. On display-only elements such as span elements, browsers typically support rendering for a disabled appearance, but a Web page that relies on this non-standard behavior is not robust according to accessibility standards. For display-only elements, you should use CSS to indicate a disabled visual appearance. Therefore, by default ASP.NET 4 generates the following HTML for the control settings shown previously:

        <span id="Label1" class="aspNetDisabled">Test</span>

    You can change the value of the class attribute that is rendered by default when a control is disabled by setting the DisabledCssClass property.

    CSS for Validation Controls

    In ASP.NET 3.5, validation controls render a default color of red as an inline style. For example, the following markup creates a RequiredFieldValidator control:

        <asp:RequiredFieldValidator ID="RequiredFieldValidator1" runat="server"
            ErrorMessage="Required Field" ControlToValidate="RadioButtonList1" />

    ASP.NET 3.5 renders the following HTML for the validator control:

        <span id="RequiredFieldValidator1" style="color:Red;visibility:hidden;">RequiredFieldValidator</span>

    By default, ASP.NET 4 does not render an inline style to set the color to red. An inline style is used only to hide or show the validator, as shown in the following example:

        <span id="RequiredFieldValidator1" style="visibility:hidden;">RequiredFieldValidator</span>

    Therefore, ASP.NET 4 does not automatically show error messages in red. For information about how to use CSS to specify a visual style for a validation control, see Validating User Input in ASP.NET Web Pages.

    CSS for the Hidden Fields Div Element

    ASP.NET uses hidden fields to store state information such as view state and control state. These hidden fields are contained by a div element. In ASP.NET 3.5, this div element does not have a class attribute or an id attribute. Therefore, CSS rules that affect all div elements could unintentionally cause this div to be visible. To avoid this problem, ASP.NET 4 renders the div element for hidden fields with a CSS class that you can use to differentiate the hidden fields div from others. The new class value is shown in the following example:

        <div class="aspNetHidden">

    CSS for the Table, Image, and ImageButton Controls

    By default, in ASP.NET 3.5, some controls set the border attribute of rendered HTML to zero (0). The following example shows HTML that is generated by the Table control in ASP.NET 3.5:

        <table id="Table2" border="0">

    The Image control and the ImageButton control also do this.
CSS for the Hidden Fields Div Element

ASP.NET uses hidden fields to store state information such as view state and control state. These hidden fields are contained by a div element. In ASP.NET 3.5, this div element does not have a class attribute or an id attribute. Therefore, CSS rules that affect all div elements could unintentionally cause this div to be visible. To avoid this problem, ASP.NET 4 renders the div element for hidden fields with a CSS class that you can use to differentiate the hidden fields div from others. The new class value is shown in the following example:

<div class="aspNetHidden">

CSS for the Table, Image, and ImageButton Controls

By default, in ASP.NET 3.5, some controls set the border attribute of rendered HTML to zero (0). The following example shows HTML that is generated by the Table control in ASP.NET 3.5:

<table id="Table2" border="0">

The Image control and the ImageButton control also do this. Because this is not necessary and provides visual formatting information that should be provided by using CSS, the attribute is not generated in ASP.NET 4.

CSS for the UpdatePanel and UpdateProgress Controls

In ASP.NET 3.5, the UpdatePanel and UpdateProgress controls do not support expando attributes. This makes it impossible to set a CSS class on the HTML elements that they render. In ASP.NET 4 these controls have been changed to accept expando attributes, as shown in the following example:

<asp:UpdatePanel runat="server" class="myStyle">
</asp:UpdatePanel>

The following HTML is rendered for this markup:

<div id="ctl00_MainContent_UpdatePanel1" class="myStyle">
</div>

Eliminating Unnecessary Outer Tables

In ASP.NET 3.5, the HTML that is rendered for the following controls is wrapped in a table element whose purpose is to apply inline styles to the entire control:

FormView
Login
PasswordRecovery
ChangePassword

If you use templates to customize the appearance of these controls, you can specify CSS styles in the markup that you provide in the templates. In that case, no extra outer table is required. In ASP.NET 4, you can prevent the table from being rendered by setting the new RenderOuterTable property to false.

Layout Templates for Wizard Controls

In ASP.NET 3.5, the Wizard and CreateUserWizard controls generate an HTML table element that is used for visual formatting. In ASP.NET 4 you can use a LayoutTemplate element to specify the layout. If you do this, the HTML table element is not generated. In the template, you create placeholder controls to indicate where items should be dynamically inserted into the control. (This is similar to how the template model for the ListView control works.) For more information, see the Wizard.LayoutTemplate property.

New HTML Formatting Options for the CheckBoxList and RadioButtonList Controls

ASP.NET 3.5 uses HTML table elements to format the output for the CheckBoxList and RadioButtonList controls. To provide an alternative that does not use tables for visual formatting, ASP.NET 4 adds two new options to the RepeatLayout enumeration:

UnorderedList. This option causes the HTML output to be formatted by using ul and li elements instead of a table.
OrderedList. This option causes the HTML output to be formatted by using ol and li elements instead of a table.

For examples of HTML that is rendered for the new options, see the RepeatLayout enumeration.
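For example, a quick sketch of the new list-based rendering (the list items are placeholders):

<asp:CheckBoxList ID="CheckBoxList1" runat="server" RepeatLayout="UnorderedList">
  <asp:ListItem Text="Standard shipping" />
  <asp:ListItem Text="Express shipping" />
</asp:CheckBoxList>

This markup renders a ul element with one li per check box, which you can then style entirely with CSS.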
Header and Footer Elements for the Table Control

In ASP.NET 3.5, the Table control can be configured to render thead and tfoot elements by setting the TableSection property of the TableHeaderRow class and the TableFooterRow class. In ASP.NET 4 these properties are set to the appropriate values by default.

CSS and ARIA Support for the Menu Control

In ASP.NET 3.5, the Menu control uses HTML table elements for visual formatting, and in some configurations it is not keyboard-accessible. ASP.NET 4 addresses these problems and improves accessibility in the following ways:

The generated HTML is structured as an unordered list (ul and li elements).
CSS is used for visual formatting.
The menu behaves in accordance with ARIA standards for keyboard access. You can use arrow keys to navigate menu items. (For information about ARIA, see Accessibility in Visual Studio and ASP.NET.)
ARIA role and property attributes are added to the generated HTML. (Attributes are added by using JavaScript instead of being included in the HTML, to avoid generating HTML that would cause markup validation errors.)

Styles for the Menu control are rendered in a style block at the top of the page, instead of inline with the rendered HTML elements. If you want to use a separate CSS file so that you can modify the menu styles, you can set the Menu control's new IncludeStyleBlock property to false, in which case the style block is not generated.

Valid XHTML for the HtmlForm Control

In ASP.NET 3.5, the HtmlForm control (which is created implicitly by the <form runat="server"> tag) renders an HTML form element that has both name and id attributes. The name attribute is deprecated in XHTML 1.1. Therefore, this control does not render the name attribute in ASP.NET 4.

Maintaining Backward Compatibility in Control Rendering

An existing ASP.NET Web site might have code in it that assumes that controls are rendering HTML the way they do in ASP.NET 3.5. To avoid causing backward compatibility problems when you upgrade the site to ASP.NET 4, you can have ASP.NET continue to generate HTML the way it does in ASP.NET 3.5 after you upgrade the site. To do so, you can set the controlRenderingCompatibilityVersion attribute of the pages element to "3.5" in the Web.config file of an ASP.NET 4 Web site, as shown in the following example:

<system.web>
  <pages controlRenderingCompatibilityVersion="3.5"/>
</system.web>

If this setting is omitted, the default value is the same as the version of ASP.NET that the Web site targets. (For information about multi-targeting in ASP.NET, see .NET Framework Multi-Targeting for ASP.NET Web Projects.)

ASP.NET MVC

ASP.NET MVC helps Web developers build compelling standards-based Web sites that are easy to maintain because it decreases the dependency among application layers by using the Model-View-Controller (MVC) pattern. MVC provides complete control over the page markup. It also improves testability by inherently supporting Test Driven Development (TDD). Web sites created using ASP.NET MVC have a modular architecture. This allows members of a team to work independently on the various modules and can be used to improve collaboration. For example, developers can work on the model and controller layers (data and logic), while designers work on the view (presentation). For tutorials, walkthroughs, conceptual content, code samples, and a complete API reference, see ASP.NET MVC 2.

Dynamic Data

Dynamic Data was introduced in the .NET Framework 3.5 SP1 release in mid-2008. This feature provides many enhancements for creating data-driven applications, such as the following:

A RAD experience for quickly building a data-driven Web site.
Automatic validation that is based on constraints defined in the data model.
The ability to easily change the markup that is generated for fields in the GridView and DetailsView controls by using field templates that are part of your Dynamic Data project.

For ASP.NET 4, Dynamic Data has been enhanced to give developers even more power for quickly building data-driven Web sites. For more information, see ASP.NET Dynamic Data Content Map.

Enabling Dynamic Data for Individual Data-Bound Controls in Existing Web Applications

You can use Dynamic Data features in existing ASP.NET Web applications that do not use scaffolding by enabling Dynamic Data for individual data-bound controls. Dynamic Data provides the presentation and data layer support for rendering these controls. When you enable Dynamic Data for data-bound controls, you get the following benefits:

Setting default values for data fields. Dynamic Data enables you to provide default values at run time for fields in a data control.
Interacting with the database without creating and registering a data model.
Automatically validating the data that is entered by the user without writing any code.

For more information, see Walkthrough: Enabling Dynamic Data in ASP.NET Data-Bound Controls.
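As a sketch of what this looks like in code, enabling Dynamic Data for a single control is one call during page initialization; GridView1 and the Product entity type are hypothetical names used only for illustration:

protected void Page_Init(object sender, EventArgs e)
{
    // turn on Dynamic Data support (field templates, validation) for one control;
    // GridView1 and Product are placeholder names for this sketch
    GridView1.EnableDynamicData(typeof(Product));
}

An overload of EnableDynamicData also accepts default field values, which is how the first benefit in the list above is wired up.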
New Field Templates for URLs and E-mail Addresses

ASP.NET 4 introduces two new built-in field templates, EmailAddress.ascx and Url.ascx. These templates are used for fields that are marked as EmailAddress or Url using the DataTypeAttribute attribute. For EmailAddress objects, the field is displayed as a hyperlink that is created by using the mailto: protocol. When users click the link, it opens the user's e-mail client and creates a skeleton message. Objects typed as Url are displayed as ordinary hyperlinks. The following example shows how to mark fields:

[DataType(DataType.EmailAddress)]
public object HomeEmail { get; set; }

[DataType(DataType.Url)]
public object Website { get; set; }

Creating Links with the DynamicHyperLink Control

Dynamic Data uses the new routing feature that was added in the .NET Framework 3.5 SP1 to control the URLs that users see when they access the Web site. The new DynamicHyperLink control makes it easy to build links to pages in a Dynamic Data site. For information, see How to: Create Table Action Links in Dynamic Data.

Support for Inheritance in the Data Model

Both the ADO.NET Entity Framework and LINQ to SQL support inheritance in their data models. An example of this might be a database that has an InsurancePolicy table. It might also contain CarPolicy and HousePolicy tables that have the same fields as InsurancePolicy and then add more fields. Dynamic Data has been modified to understand inherited objects in the data model and to support scaffolding for the inherited tables. For more information, see Walkthrough: Mapping Table-per-Hierarchy Inheritance in Dynamic Data.

Support for Many-to-Many Relationships (Entity Framework Only)

The Entity Framework has rich support for many-to-many relationships between tables, which is implemented by exposing the relationship as a collection on an Entity object. New field templates (ManyToMany.ascx and ManyToMany_Edit.ascx) have been added to provide support for displaying and editing data that is involved in many-to-many relationships. For more information, see Working with Many-to-Many Data Relationships in Dynamic Data.

New Attributes to Control Display and Support Enumerations

The DisplayAttribute has been added to give you additional control over how fields are displayed. The DisplayNameAttribute attribute in earlier versions of Dynamic Data enabled you to change the name that is used as a caption for a field. The new DisplayAttribute class lets you specify more options for displaying a field, such as the order in which a field is displayed and whether a field will be used as a filter. The attribute also provides independent control of the name that is used for the labels in a GridView control, the name that is used in a DetailsView control, the help text for the field, and the watermark used for the field (if the field accepts text input). The EnumDataTypeAttribute class has been added to let you map fields to enumerations. When you apply this attribute to a field, you specify an enumeration type. Dynamic Data uses the new Enumeration.ascx field template to create UI for displaying and editing enumeration values. The template maps the values from the database to the names in the enumeration.
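A short sketch of both attributes together; the Status field and the OrderStatus enumeration are hypothetical names used only for illustration:

// display the field second, caption it "Order status", and offer it as a filter
[Display(Name = "Order status", Order = 2, AutoGenerateFilter = true)]
// render the underlying database value using the OrderStatus enumeration
[EnumDataType(typeof(OrderStatus))]
public object Status { get; set; }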
Enhanced Support for Filters

Dynamic Data 1.0 had built-in filters for Boolean columns and foreign-key columns. The filters did not let you specify the order in which they were displayed. The new DisplayAttribute attribute addresses this by giving you control over whether a column appears as a filter and in what order it will be displayed. An additional enhancement is that filtering support has been rewritten to use the new QueryExtender feature of Web Forms. This lets you create filters without requiring knowledge of the data source control that the filters will be used with. Along with these extensions, filters have also been turned into template controls, which lets you add new ones. Finally, the DisplayAttribute class mentioned earlier allows the default filter to be overridden, in the same way that UIHint allows the default field template for a column to be overridden. For more information, see Walkthrough: Filtering Rows in Tables That Have a Parent-Child Relationship and QueryableFilterRepeater.

ASP.NET Chart Control

The ASP.NET chart server control enables you to create ASP.NET pages that have simple, intuitive charts for complex statistical or financial analysis. The chart control supports the following features:

Data series, chart areas, axes, legends, labels, titles, and more.
Data binding.
Data manipulation, such as copying, splitting, merging, alignment, grouping, sorting, searching, and filtering.
Statistical formulas and financial formulas.
Advanced chart appearance, such as 3-D, anti-aliasing, lighting, and perspective.
Events and customizations.
Interactivity and Microsoft Ajax.
Support for the Ajax Content Delivery Network (CDN), which provides an optimized way for you to add Microsoft Ajax Library and jQuery scripts to your Web applications.

For more information, see Chart Web Server Control Overview.
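As a flavor of the markup, here is a minimal column-chart sketch; the series name, chart area name, and data points are placeholders, and the page must reference the System.Web.DataVisualization assembly:

<asp:Chart ID="Chart1" runat="server">
  <Series>
    <%-- one column series with two hard-coded points, purely for illustration --%>
    <asp:Series Name="Sales" ChartType="Column">
      <Points>
        <asp:DataPoint AxisLabel="Q1" YValues="120" />
        <asp:DataPoint AxisLabel="Q2" YValues="170" />
      </Points>
    </asp:Series>
  </Series>
  <ChartAreas>
    <asp:ChartArea Name="MainArea" />
  </ChartAreas>
</asp:Chart>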
Visual Web Developer Enhancements

The following sections provide information about enhancements and new features in Visual Studio 2010 and Visual Web Developer Express. The Web page designer in Visual Studio 2010 has been enhanced for better CSS compatibility, includes additional support for HTML and ASP.NET markup snippets, and features a redesigned version of IntelliSense for JScript.

Improved CSS Compatibility

The Visual Web Developer designer in Visual Studio 2010 has been updated to improve CSS 2.1 standards compliance. The designer better preserves HTML source code and is more robust than in previous versions of Visual Studio.

HTML and JScript Snippets

In the HTML editor, IntelliSense auto-completes tag names. The IntelliSense Snippets feature auto-completes whole tags and more. In Visual Studio 2010, IntelliSense snippets are supported for JScript, alongside C# and Visual Basic, which were supported in earlier versions of Visual Studio. Visual Studio 2010 includes over 200 snippets that help you auto-complete common ASP.NET and HTML tags, including required attributes (such as runat="server") and common attributes specific to a tag (such as ID, DataSourceID, ControlToValidate, and Text). You can download additional snippets, or you can write your own snippets that encapsulate the blocks of markup that you or your team use for common tasks. For more information on HTML snippets, see Walkthrough: Using HTML Snippets.

JScript IntelliSense Enhancements

In Visual Studio 2010, JScript IntelliSense has been redesigned to provide an even richer editing experience. IntelliSense now recognizes objects that have been dynamically generated by methods such as registerNamespace and by similar techniques used by other JavaScript frameworks. Performance has been improved so that large script libraries are analyzed and IntelliSense is displayed with little or no processing delay. Compatibility has been significantly increased to support almost all third-party libraries and to support diverse coding styles. Documentation comments are now parsed as you type and are immediately leveraged by IntelliSense.

Web Application Deployment with Visual Studio 2010

For Web application projects, Visual Studio now provides tools that work with the IIS Web Deployment Tool (Web Deploy) to automate many processes that had to be done manually in earlier versions of ASP.NET. For example, the following tasks can now be automated:

Creating an IIS application on the destination computer and configuring IIS settings.
Copying files to the destination computer.
Changing Web.config settings that must be different in the destination environment.
Propagating changes to data or data structures in SQL Server databases that are used by the Web application.

For more information about Web application deployment, see ASP.NET Deployment Content Map.

Enhancements to ASP.NET Multi-Targeting

ASP.NET 4 adds new features to multi-targeting that make it easier to work with projects that target earlier versions of the .NET Framework. Multi-targeting was introduced in ASP.NET 3.5 to enable you to use the latest version of Visual Studio without having to upgrade existing Web sites or Web services to the latest version of the .NET Framework. In Visual Studio 2008, when you work with a project targeted for an earlier version of the .NET Framework, most features of the development environment adapt to the targeted version. However, IntelliSense displays language features that are available in the current version, and property windows display properties available in the current version. In Visual Studio 2010, only language features and properties available in the targeted version of the .NET Framework are shown. For more information about multi-targeting, see the following topics:

.NET Framework Multi-Targeting for ASP.NET Web Projects
ASP.NET Side-by-Side Execution Overview
How to: Host Web Applications That Use Different Versions of the .NET Framework on the Same Server
How to: Deploy Web Site Projects Targeted for Earlier Versions of the .NET Framework
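For reference, a minimal Web.config sketch for a site that targets the .NET Framework 4; the surrounding configuration is trimmed, and this can be combined with the controlRenderingCompatibilityVersion setting shown earlier if 3.5-style rendering is needed:

<configuration>
  <system.web>
    <!-- the site targets .NET Framework 4; Visual Studio 2010 scopes
         IntelliSense and the Properties window to this version -->
    <compilation targetFramework="4.0" />
  </system.web>
</configuration>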


  • Zen and the Art of File and Folder Organization

    - by Mark Virtue
Is your desk a paragon of neatness, or does it look like a paper-bomb has gone off? If you’ve been putting off getting organized because the task is too huge or daunting, or you don’t know where to start, we’ve got 40 tips to get you on the path to zen mastery of your filing system. For all those readers who would like to get their files and folders organized, or, if they’re already organized, better organized—we have compiled a complete guide to getting organized and staying organized, a comprehensive article that will hopefully cover every possible tip you could want.

Signs that Your Computer is Poorly Organized

If your computer is a mess, you’re probably already aware of it.  But just in case you’re not, here are some tell-tale signs:

Your Desktop has over 40 icons on it
“My Documents” contains over 300 files and 60 folders, including MP3s and digital photos
You use Windows’ built-in search facility whenever you need to find a file
You can’t find programs in the out-of-control list of programs in your Start Menu
You save all your Word documents in one folder, all your spreadsheets in a second folder, etc
Any given file that you’re looking for may be in any one of four different sets of folders

But before we start, here are some quick notes:

We’re going to assume you know what files and folders are, and how to create, save, rename, copy and delete them
The organization principles described in this article apply equally to all computer systems.  However, the screenshots here will reflect how things look on Windows (usually Windows 7).  We will also mention some useful features of Windows that can help you get organized.
Everyone has their own favorite methodology of organizing and filing, and it’s all too easy to get into “My Way is Better than Your Way” arguments.  The reality is that there is no perfect way of getting things organized.  When I wrote this article, I tried to keep a generalist and objective viewpoint.  I consider myself to be unusually well organized (to the point of obsession, truth be told), and I’ve had 25 years’ experience in collecting and organizing files on computers.  So I’ve got a lot to say on the subject.  But the tips I have described here are only one way of doing it.  Hopefully some of these tips will work for you too, but please don’t read this as any sort of “right” way to do it.

At the end of the article we’ll be asking you, the reader, for your own organization tips.

Why Bother Organizing At All?

For some, the answer to this question is self-evident. And yet, in this era of powerful desktop search software (the search capabilities built into the Windows Vista and Windows 7 Start Menus, and third-party programs like Google Desktop Search), the question does need to be asked, and answered. I have a friend who puts every file he ever creates, receives or downloads into his My Documents folder and doesn’t bother filing them into subfolders at all.  He relies on the search functionality built into his Windows operating system to help him find whatever he’s looking for.  And he always finds it.  He’s a Search Samurai.  For him, filing is a waste of valuable time that could be spent enjoying life! It’s tempting to follow suit.  On the face of it, why would anyone bother to take the time to organize their hard disk when such excellent search software is available?
Well, if all you ever want to do with the files you own is to locate and open them individually (for listening, editing, etc), then there’s no reason to ever bother doing one scrap of organization.  But consider these common tasks that are not achievable with desktop search software:

Find files manually.  Often it’s not convenient, speedy or even possible to utilize your desktop search software to find what you want.  It doesn’t work 100% of the time, or you may not even have it installed.  Sometimes it’s just plain faster to go straight to the file you want, if you know it’s in a particular sub-folder, rather than trawling through hundreds of search results.

Find groups of similar files (e.g. all your “work” files, all the photos of your Europe holiday in 2008, all your music videos, all the MP3s from Dark Side of the Moon, all the letters you wrote to your wife, all your tax returns).  Clever naming of the files will only get you so far.  Sometimes it’s the date the file was created that’s important, other times it’s the file format, and other times it’s the purpose of the file.  How do you name a collection of files so that they’re easy to isolate based on any of the above criteria?  Short answer, you can’t.

Move files to a new computer.  It’s time to upgrade your computer.  How do you quickly grab all the files that are important to you?  Or you decide to have two computers now – one for home and one for work.  How do you quickly isolate only the work-related files to move them to the work computer?

Synchronize files to other computers.  If you have more than one computer, and you need to mirror some of your files onto the other computer (e.g. your music collection), then you need a way to quickly determine which files are to be synced and which are not.  Surely you don’t want to synchronize everything?

Choose which files to back up.  If your backup regime calls for multiple backups, or requires speedy backups, then you’ll need to be able to specify which files are to be backed up, and which are not.  This is not possible if they’re all in the same folder.

Finally, if you’re simply someone who takes pleasure in being organized, tidy and ordered (me! me!), then you don’t even need a reason.  Being disorganized is simply unthinkable.

Tips on Getting Organized

Here we present our 40 best tips on how to get organized.  Or, if you’re already organized, to get better organized.

Tip #1.  Choose Your Organization System Carefully

The reason that most people are not organized is that it takes time.  And the first thing that takes time is deciding upon a system of organization.  This is always a matter of personal preference, and is not something that a geek on a website can tell you.  You should always choose your own system, based on how your own brain is organized (which makes the assumption that your brain is, in fact, organized). We can’t instruct you, but we can make suggestions:

You may want to start off with a system based on the users of the computer.  i.e. “My Files”, “My Wife’s Files”, “My Son’s Files”, etc.  Inside “My Files”, you might then break it down into “Personal” and “Business”.  You may then realize that there are overlaps.  For example, everyone may want to share access to the music library, or the photos from the school play.  So you may create another folder called “Family”, for the “common” files.

You may decide that the highest-level breakdown of your files is based on the “source” of each file.  In other words, who created the files.
You could have “Files created by ME (business or personal)”, “Files created by people I know (family, friends, etc)”, and finally “Files created by the rest of the world (MP3 music files, downloaded or ripped movies or TV shows, software installation files, gorgeous desktop wallpaper images you’ve collected, etc).”  This system happens to be the one I use myself.  See below:

Mark is for files created by me
VC is for files created by my company (Virtual Creations)
Others is for files created by my friends and family
Data is the rest of the world
Also, Settings is where I store the configuration files and other program data files for my installed software (more on this in tip #34, below).

Each folder will present its own particular set of requirements for further sub-organization.  For example, you may decide to organize your music collection into sub-folders based on the artist’s name, while your digital photos might get organized based on the date they were taken.  It can be different for every sub-folder!

Another strategy would be based on “currentness”.  Files you have yet to open and look at live in one folder.  Ones that have been looked at but not yet filed live in another place.  Current, active projects live in yet another place.  All other files (your “archive”, if you like) would live in a fourth folder. (And of course, within that last folder you’d need to create a further sub-system based on one of the previous bullet points).

Put some thought into this – changing it when it proves incomplete can be a big hassle!  Before you go to the trouble of implementing any system you come up with, examine a wide cross-section of the files you own and see if they will all be able to find a nice logical place to sit within your system.

Tip #2.  When You Decide on Your System, Stick to It!

There’s nothing more pointless than going to all the trouble of creating a system and filing all your files, and then whenever you create, receive or download a new file, you simply dump it onto your Desktop.  You need to be disciplined – forever!  Every new file you get, spend those extra few seconds to file it where it belongs!  Otherwise, in just a month or two, you’ll be worse off than before – half your files will be organized and half will be disorganized – and you won’t know which is which!

Tip #3.  Choose the Root Folder of Your Structure Carefully

Every data file (document, photo, music file, etc) that you create, own or is important to you, no matter where it came from, should be found within one single folder, and that one single folder should be located at the root of your C: drive (as a sub-folder of C:\).  In other words, do not base your folder structure in standard folders like “My Documents”.  If you do, then you’re leaving it up to the operating system engineers to decide what folder structure is best for you.  And every operating system has a different system!  In Windows 7 your files are found in C:\Users\YourName, whilst on Windows XP it was C:\Documents and Settings\YourName\My Documents.  In UNIX systems it’s often /home/YourName. These standard default folders tend to fill up with junk files and folders that are not at all important to you.  “My Documents” is the worst offender.  Every second piece of software you install, it seems, likes to create its own folder in the “My Documents” folder.  These folders usually don’t fit within your organizational structure, so don’t use them!  In fact, don’t even use the “My Documents” folder at all.
Allow it to fill up with junk, and then simply ignore it.  It sounds heretical, but: Don’t ever visit your “My Documents” folder!  Remove your icons/links to “My Documents” and replace them with links to the folders you created and you care about!

Create your own file system from scratch!  Probably the best place to put it would be on your D: drive – if you have one.  This way, all your files live on one drive, while all the operating system and software component files live on the C: drive – simply and elegantly separated.  The benefits of that are profound.  Not only are there obvious organizational benefits (see tip #10, below), but when it comes time to migrate your data to a new computer, you can (sometimes) simply unplug your D: drive and plug it in as the D: drive of your new computer (this implies that the D: drive is actually a separate physical disk, and not a partition on the same disk as C:).  You also get a slight speed improvement (again, only if your C: and D: drives are on separate physical disks).

Warning:  From tip #12, below, you will see that it’s actually a good idea to have exactly the same file system structure – including the drive it’s filed on – on all of the computers you own.  So if you decide to use the D: drive as the storage system for your own files, make sure you are able to use the D: drive on all the computers you own.  If you can’t ensure that, then you can still use a clever geeky trick to store your files on the D: drive, but still access them all via the C: drive (see tip #17, below).

If you only have one hard disk (C:), then create a dedicated folder that will contain all your files – something like C:\Files.  The name of the folder is not important, but make it a single, brief word. There are several reasons for this:

When creating a backup regime, it’s easy to decide what files should be backed up – they’re all in the one folder!
If you ever decide to trade in your computer for a new one, you know exactly which files to migrate
You will always know where to begin a search for any file
If you synchronize files with other computers, it makes your synchronization routines very simple.  It also causes all your shortcuts to continue to work on the other machines (more about this in tip #24, below).

Once you’ve decided where your files should go, then put all your files in there – Everything!  Completely disregard the standard, default folders that are created for you by the operating system (“My Music”, “My Pictures”, etc).  In fact, you can actually relocate many of those folders into your own structure (more about that below, in tip #6). The more completely you get all your data files (documents, photos, music, etc) and all your configuration settings into that one folder, the easier it will be to perform all of the above tasks. Once this has been done, and all your files live in one folder, all the other folders in C:\ can be thought of as “operating system” folders, and therefore of little day-to-day interest for us. Here’s a screenshot of a nicely organized C: drive, where all user files are located within the \Files folder:
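If you like working at a command prompt, the skeleton of such a structure can be created in seconds – a sketch only, with folder names borrowed loosely from the structure suggested in tip #1 that you would of course adapt to your own system:

rem hypothetical top-level structure – rename to suit your own brain
mkdir "C:\Files"
mkdir "C:\Files\Me"
mkdir "C:\Files\Others"
mkdir "C:\Files\Data"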
Tip #4.  Use Sub-Folders

This would be our simplest and most obvious tip.  It almost goes without saying.  Any organizational system you decide upon (see tip #1) will require that you create sub-folders for your files.  Get used to creating folders on a regular basis.

Tip #5.  Don’t be Shy About Depth

Create as many levels of sub-folders as you need.  Don’t be scared to do so.  Every time you notice an opportunity to group a set of related files into a sub-folder, do so.  Examples might include:  All the MP3s from one music CD, all the photos from one holiday, or all the documents from one client. It’s perfectly okay to put files into a folder called C:\Files\Me\From Others\Services\WestCo Bank\Statements\2009.  That’s only seven levels deep.  Ten levels is not uncommon.  Of course, it’s possible to take this too far.  If you notice yourself creating a sub-folder to hold only one file, then you’ve probably become a little over-zealous.  On the other hand, if you simply create a structure with only two levels (for example C:\Files\Work) then you really haven’t achieved any level of organization at all (unless you own only six files!).  Your “Work” folder will have become a dumping ground, just like your Desktop was, with most likely hundreds of files in it.

Tip #6.  Move the Standard User Folders into Your Own Folder Structure

Most operating systems, including Windows, create a set of standard folders for each of its users.  These folders then become the default location for files such as documents, music files, digital photos and downloaded Internet files.  In Windows 7, the full list is shown below: Some of these folders you may never use nor care about (for example, the Favorites folder, if you’re not using Internet Explorer as your browser).  Those ones you can leave where they are.  But you may be using some of the other folders to store files that are important to you.  Even if you’re not using them, Windows will still often treat them as the default storage location for many types of files.  When you go to save a standard file type, it can become annoying to be automatically prompted to save it in a folder that’s not part of your own file structure. But there’s a simple solution:  Move the folders you care about into your own folder structure!  If you do, then the next time you go to save a file of the corresponding type, Windows will prompt you to save it in the new, moved location. Moving the folders is easy.  Simply drag-and-drop them to the new location.  Here’s a screenshot of the default My Music folder being moved to my custom personal folder (Mark):

Tip #7.  Name Files and Folders Intelligently

This is another one that almost goes without saying, but we’ll say it anyway:  Do not allow files to be created that have meaningless names like Document1.doc, or folders called New Folder (2).  Take that extra 20 seconds and come up with a meaningful name for the file/folder – one that accurately divulges its contents without repeating the entire contents in the name.

Tip #8.  Watch Out for Long Filenames

Another way to tell if you have not yet created enough depth to your folder hierarchy is that your files often require really long names.  If you need to call a file Johnson Sales Figures March 2009.xls (which might happen to live in the same folder as Abercrombie Budget Report 2008.xls), then you might want to create some sub-folders so that the first file could be simply called March.xls, and live in the Clients\Johnson\Sales Figures\2009 folder. A well-placed file needs only a brief filename!

Tip #9.  Use Shortcuts!  Everywhere!

This is probably the single most useful and important tip we can offer.  A shortcut allows a file to be in two places at once. Why would you want that?  Well, the file and folder structure of every popular operating system on the market today is hierarchical.
This means that all objects (files and folders) always live within exactly one parent folder.  It’s a bit like a tree.  A tree has branches (folders) and leaves (files).  Each leaf, and each branch, is supported by exactly one parent branch, all the way back to the root of the tree (which, incidentally, is exactly why C:\ is called the “root folder” of the C: drive). That hard disks are structured this way may seem obvious and even necessary, but it’s only one way of organizing data.  There are others:  Relational databases, for example, organize structured data entirely differently.  The main limitation of hierarchical filing structures is that a file can only ever be in one branch of the tree – in only one folder – at a time.  Why is this a problem?  Well, there are two main reasons why this limitation is a problem for computer users:

The “correct” place for a file, according to our organizational rationale, is very often a very inconvenient place for that file to be located.  Just because it’s correctly filed doesn’t mean it’s easy to get to.  Your file may be “correctly” buried six levels deep in your sub-folder structure, but you may need regular and speedy access to this file every day.  You could always move it to a more convenient location, but that would mean that you would need to re-file it back to its “correct” location every time you finished working on it.  Most unsatisfactory.

A file may simply “belong” in two or more different locations within your file structure.  For example, say you’re an accountant and you have just completed the 2009 tax return for John Smith.  It might make sense to you to call this file 2009 Tax Return.doc and file it under Clients\John Smith.  But it may also be important to you to have the 2009 tax returns from all your clients together in the one place.  So you might also want to call the file John Smith.doc and file it under Tax Returns\2009.  The problem is, in a purely hierarchical filing system, you can’t put it in both places.  Grrrrr!

Fortunately, Windows (and most other operating systems) offers a way for you to do exactly that:  It’s called a “shortcut” (also known as an “alias” on Macs and a “symbolic link” on UNIX systems).  Shortcuts allow a file to exist in one place, and an icon that represents the file to be created and put anywhere else you please.  In fact, you can create a dozen such icons and scatter them all over your hard disk.  Double-clicking on one of these icons/shortcuts opens up the original file, just as if you had double-clicked on the original file itself. Consider the following two icons: The one on the left is the actual Word document, while the one on the right is a shortcut that represents the Word document.  Double-clicking on either icon will open the same file.  There are two main visual differences between the icons:

The shortcut will have a small arrow in the lower-left-hand corner (on Windows, anyway)
The shortcut is allowed to have a name that does not include the file extension (the “.docx” part, in this case)

You can delete the shortcut at any time without losing any actual data.  The original is still intact.  All you lose is the ability to get to that data from wherever the shortcut was. So why are shortcuts so great?  Because they allow us to easily overcome the main limitation of hierarchical file systems, and put a file in two (or more) places at the same time.  You will always have files that don’t play nice with your organizational rationale, and can’t be filed in only one place.
They demand to exist in two places.  Shortcuts allow this!  Furthermore, they allow you to collect your most often-opened files and folders together in one spot for convenient access.  The cool part is that the original files stay where they are, safe forever in their perfectly organized location. So your collection of most often-opened files can – and should – become a collection of shortcuts! If you’re still not convinced of the utility of shortcuts, consider the following well-known areas of a typical Windows computer:

The Start Menu (and all the programs that live within it)
The Quick Launch bar (or the Superbar in Windows 7)
The “Favorite folders” area in the top-left corner of the Windows Explorer window (in Windows Vista or Windows 7)
Your Internet Explorer Favorites or Firefox Bookmarks

Each item in each of these areas is a shortcut!  Each of those areas exists for one purpose only:  For convenience – to provide you with a collection of the files and folders you access most often. It should be easy to see by now that shortcuts are designed for one single purpose:  To make accessing your files more convenient.  Each time you double-click on a shortcut, you are saved the hassle of locating the file (or folder, or program, or drive, or control panel icon) that it represents. Shortcuts allow us to invent a golden rule of file and folder organization: “Only ever have one copy of a file – never have two copies of the same file.  Use a shortcut instead” (this rule doesn’t apply to copies created for backup purposes, of course!) There are also lesser rules, like “don’t move a file into your work area – create a shortcut there instead”, and “any time you find yourself frustrated with how long it takes to locate a file, create a shortcut to it and place that shortcut in a convenient location.” So how do we create these massively useful shortcuts?  There are two main ways:

“Copy” the original file or folder (click on it and type Ctrl-C, or right-click on it and select Copy):  Then right-click in an empty area of the destination folder (the place where you want the shortcut to go) and select Paste shortcut.
Right-drag (drag with the right mouse button) the file from the source folder to the destination folder.  When you let go of the mouse button at the destination folder, a menu pops up:  Select Create shortcuts here.

Note that when shortcuts are created, they are often named something like Shortcut to Budget Detail.doc (Windows XP) or Budget Detail – Shortcut.doc (Windows 7).  If you don’t like those extra words, you can easily rename the shortcuts after they’re created, or you can configure Windows to never insert the extra words in the first place (see our article on how to do this). And of course, you can create shortcuts to folders too, not just to files! Bottom line: Whenever you have a file that you’d like to access from somewhere else (whether it’s convenience you’re after, or because the file simply belongs in two places), create a shortcut to the original file in the new location.

Tip #10.  Separate Application Files from Data Files

Any digital organization guru will drum this rule into you.  Application files are the components of the software you’ve installed (e.g. Microsoft Word, Adobe Photoshop or Internet Explorer).  Data files are the files that you’ve created for yourself using that software (e.g. Word Documents, digital photos, emails or playlists). Software gets installed, uninstalled and upgraded all the time.
Hopefully you always have the original installation media (or downloaded set-up file) kept somewhere safe, and can thus reinstall your software at any time.  This means that the software component files are of little importance.  The files you have created with that software, on the other hand, are, by definition, important.  It’s a good rule to always separate unimportant files from important files. So when your software prompts you to save a file you’ve just created, take a moment and check out where it’s suggesting that you save the file.  If it’s suggesting that you save the file into the same folder as the software itself, then definitely don’t follow that suggestion.  File it in your own folder!  In fact, see if you can find the program’s configuration option that determines where files are saved by default (if it has one), and change it.

Tip #11.  Organize Files Based on Purpose, Not on File Type

If you have, for example, a folder called Work\Clients\Johnson, and within that folder you have two sub-folders, Word Documents and Spreadsheets (in other words, you’re separating “.doc” files from “.xls” files), then chances are that you’re not optimally organized.  It makes little sense to organize your files based on the program that created them.  Instead, create your sub-folders based on the purpose of the file.  For example, it would make more sense to create sub-folders called Correspondence and Financials.  It may well be that all the files in a given sub-folder are of the same file-type, but this should be more of a coincidence and less of a design feature of your organization system.

Tip #12.  Maintain the Same Folder Structure on All Your Computers

In other words, whatever organizational system you create, apply it to every computer that you can.  There are several benefits to this:

There’s less to remember.  No matter where you are, you always know where to look for your files
If you copy or synchronize files from one computer to another, then setting up the synchronization job becomes very simple
Shortcuts can be copied or moved from one computer to another with ease (assuming the original files are also copied/moved).  There’s no need to find the target of the shortcut all over again on the second computer
Ditto for linked files (e.g. Word documents that link to data in a separate Excel file), playlists, and any files that reference the exact file locations of other files.

This applies even to the drive that your files are stored on.  If your files are stored on C: on one computer, make sure they’re stored on C: on all your computers.  Otherwise all your shortcuts, playlists and linked files will stop working!

Tip #13.  Create an “Inbox” Folder

Create yourself a folder where you store all files that you’re currently working on, or that you haven’t gotten around to filing yet.  You can think of this folder as your “to-do” list.  You can call it “Inbox” (making it the same metaphor as your email system), or “Work”, or “To-Do”, or “Scratch”, or whatever name makes sense to you.  It doesn’t matter what you call it – just make sure you have one! Once you have finished working on a file, you then move it from the “Inbox” to its correct location within your organizational structure. You may want to use your Desktop as this “Inbox” folder.  Rightly or wrongly, most people do.  It’s not a bad place to put such files, but be careful:  If you do decide that your Desktop represents your “to-do” list, then make sure that no other files find their way there.
In other words, make sure that your “Inbox”, wherever it is, Desktop or otherwise, is kept free of junk – stray files that don’t belong there. So where should you put this folder, which, almost by definition, lives outside the structure of the rest of your filing system?  Well, first and foremost, it has to be somewhere handy.  This will be one of your most-visited folders, so convenience is key.  Putting it on the Desktop is a great option – especially if you don’t have any other folders on your Desktop:  the folder then becomes supremely easy to find in Windows Explorer. You would then create shortcuts to this folder in convenient spots all over your computer (“Favorite Links”, “Quick Launch”, etc).

Tip #14.  Ensure You have Only One “Inbox” Folder

Once you’ve created your “Inbox” folder, don’t use any other folder location as your “to-do list”.  Throw every incoming or created file into the Inbox folder as you create/receive it.  This keeps the rest of your computer pristine and free of randomly created or downloaded junk.  The last thing you want to be doing is checking multiple folders to see all your current tasks and projects.  Gather them all together into one folder. Here are some tips to help ensure you only have one Inbox:

Set the default “save” location of all your programs to this folder.
Set the default “download” location for your browser to this folder.
If this folder is not your desktop (recommended) then also see if you can make a point of not putting “to-do” files on your desktop.  This keeps your desktop uncluttered and Zen-like: (the Inbox folder is in the bottom-right corner)

Tip #15.  Be Vigilant about Clearing Your “Inbox” Folder

This is one of the keys to staying organized.  If you let your “Inbox” overflow (i.e. allow there to be more than, say, 30 files or folders in there), then you’re probably going to start feeling like you’re overwhelmed:  You’re not keeping up with your to-do list.  Once your Inbox gets beyond a certain point (around 30 files, studies have shown), then you’ll simply start to avoid it.  You may continue to put files in there, but you’ll be scared to look at it, fearing the “out of control” feeling that all overworked, chaotic or just plain disorganized people regularly feel. So, here’s what you can do:

Visit your Inbox/to-do folder regularly (at least five times per day).
Scan the folder regularly for files that you have completed working on and are ready for filing.  File them immediately.
Make it a source of pride to keep the number of files in this folder as small as possible.  If you value peace of mind, then make the emptiness of this folder one of your highest (computer) priorities.
If you know that a particular file has been in the folder for more than, say, six weeks, then admit that you’re not actually going to get around to processing it, and move it to its final resting place.

Tip #16.  File Everything Immediately, and Use Shortcuts for Your Active Projects

As soon as you create, receive or download a new file, store it away in its “correct” folder immediately.  Then, whenever you need to work on it (possibly straight away), create a shortcut to it in your “Inbox” (“to-do”) folder or your desktop.  That way, all your files are always in their “correct” locations, yet you still have immediate, convenient access to your current, active files.  When you finish working on a file, simply delete the shortcut. Ideally, your “Inbox” folder – and your Desktop – should contain no actual files or folders.  They should simply contain shortcuts.
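If you want a quick way to audit that, a one-line sketch for the command prompt lists everything in your Inbox that is not a shortcut – in other words, real files that still need filing.  The C:\Files\Inbox path is hypothetical; substitute your own:

rem anything listed here is a real file, not a .lnk shortcut, and should be filed away
dir /b "C:\Files\Inbox" | findstr /v /i "\.lnk$"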
Tip #17.  Use Directory Symbolic Links (or Junctions) to Maintain One Unified Folder Structure

Using this tip, we can get around a potential hiccup that we can run into when creating our organizational structure – the issue of having more than one drive on our computer (C:, D:, etc).  We might have files we need to store on the D: drive for space reasons, and yet want to base our organized folder structure on the C: drive (or vice-versa). Your chosen organizational structure may dictate that all your files must be accessed from the C: drive (for example, the root folder of all your files may be something like C:\Files).  And yet you may still have a D: drive and wish to take advantage of the hundreds of spare Gigabytes that it offers.  Did you know that it’s actually possible to store your files on the D: drive and yet access them as if they were on the C: drive?  And no, we’re not talking about shortcuts here (although the concept is very similar). By using the shell command mklink, you can essentially take a folder that lives on one drive and create an alias for it on a different drive (you can do lots more than that with mklink – for a full rundown on this program’s capabilities, see our dedicated article).  These aliases are called directory symbolic links (and used to be known as junctions).  You can think of them as “virtual” folders.  They function exactly like regular folders, except they’re physically located somewhere else. For example, you may decide that your entire D: drive contains your complete organizational file structure, but that you need to reference all those files as if they were on the C: drive, under C:\Files.  If that was the case you could create C:\Files as a directory symbolic link – a link to D:, as follows:

mklink /d c:\files d:\

Or it may be that the only files you wish to store on the D: drive are your movie collection.  You could locate all your movie files in the root of your D: drive, and then link it to C:\Files\Media\Movies, as follows:

mklink /d c:\files\media\movies d:\

(Needless to say, you must run these commands from a command prompt – click the Start button, type cmd and press Enter)

Tip #18.  Customize Your Folder Icons

This is not strictly speaking an organizational tip, but having unique icons for each folder does allow you to more quickly visually identify which folder is which, and thus saves you time when you’re finding files.  An example is below (from my folder that contains all files downloaded from the Internet): To learn how to change your folder icons, please refer to our dedicated article on the subject.

Tip #19.  Tidy Your Start Menu

The Windows Start Menu is usually one of the messiest parts of any Windows computer.  Every program you install seems to adopt a completely different approach to placing icons in this menu.  Some simply put a single program icon.  Others create a folder based on the name of the software.  And others create a folder based on the name of the software manufacturer.  It’s chaos, and can make it hard to find the software you want to run. Thankfully we can avoid this chaos with useful operating system features like Quick Launch, the Superbar or pinned start menu items. Even so, it would make a lot of sense to get into the guts of the Start Menu itself and give it a good once-over.  All you really need to decide is how you’re going to organize your applications.  A structure based on the purpose of the application is an obvious candidate.
Below is an example of one such structure: In this structure, Utilities means software whose job it is to keep the computer itself running smoothly (configuration tools, backup software, Zip programs, etc).  Applications refers to any productivity software that doesn’t fit under the headings Multimedia, Graphics, Internet, etc. In case you’re not aware, every icon in your Start Menu is a shortcut and can be manipulated like any other shortcut (copied, moved, deleted, etc). With the Windows Start Menu (all versions of Windows), Microsoft has decided that there should be two parallel folder structures to store your Start Menu shortcuts:  one for you (the logged-in user of the computer) and one for all users of the computer.  Having two parallel structures can often be redundant:  if you are the only user of the computer, the duplication serves no purpose at all.  Even if you have several users that regularly log into the computer, most of your installed software will need to be made available to all users, and should thus be moved out of the “just you” version of the Start Menu and into the “all users” area. To take control of your Start Menu, so you can start organizing it, you’ll need to know how to access the actual folders and shortcut files that make up the Start Menu (both versions of it).  To find these folders and files, click the Start button and then right-click on the All Programs text (Windows XP users should right-click on the Start button itself): The Open option refers to the “just you” version of the Start Menu, while the Open All Users option refers to the “all users” version.  Click on the one you want to organize. A Windows Explorer window then opens with your chosen version of the Start Menu selected.  From there it’s easy.  Double-click on the Programs folder and you’ll see all your folders and shortcuts.  Now you can delete/rename/move until it’s just the way you want it. Note:  When you’re reorganizing your Start Menu, you may want to have two Explorer windows open at the same time – one showing the “just you” version and one showing the “all users” version.  You can drag-and-drop between the windows.
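If you prefer typing to right-clicking, the two folders can also be opened directly from a command prompt – a sketch for Windows 7/Vista (Windows XP keeps these folders under Documents and Settings instead):

rem open the "just you" and the "all users" versions of the Start Menu in Explorer
start "" "%AppData%\Microsoft\Windows\Start Menu\Programs"
start "" "%ProgramData%\Microsoft\Windows\Start Menu\Programs"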
Tip #20.  Keep Your Start Menu Tidy

Once you have a perfectly organized Start Menu, try to be a little vigilant about keeping it that way.  Every time you install a new piece of software, the icons that get created will almost certainly violate your organizational structure. So to keep your Start Menu pristine and organized, make sure you do the following whenever you install a new piece of software:

Check whether the software was installed into the “just you” area of the Start Menu, or the “all users” area, and then move it to the correct area.
Remove all the unnecessary icons (like the “Read me” icon, the “Help” icon (you can always open the help from within the software itself when it’s running), the “Uninstall” icon, the link(s) to the manufacturer’s website, etc)
Rename the main icon(s) of the software to something brief that makes sense to you.  For example, you might like to rename Microsoft Office Word 2010 to simply Word
Move the icon(s) into the correct folder based on your Start Menu organizational structure

And don’t forget:  when you uninstall a piece of software, the software’s uninstall routine is no longer going to be able to remove the software’s icon from the Start Menu (because you moved and/or renamed it), so you’ll need to remove that icon manually.

Tip #21.  Tidy C:\

The root of your C: drive (C:\) is a common dumping ground for files and folders – both by the users of your computer and by the software that you install on your computer.  It can become a mess. There’s almost no software these days that requires itself to be installed in C:\.  99% of the time it can and should be installed into C:\Program Files.  And as for your own files, well, it’s clear that they can (and almost always should) be stored somewhere else. In an ideal world, your C:\ folder should look like this (on Windows 7): Note that there are some system files and folders in C:\ that are usually and deliberately “hidden” (such as the Windows virtual memory file pagefile.sys, the boot loader file bootmgr, and the System Volume Information folder).  Hiding these files and folders is a good idea, as they need to stay where they are and are almost never needed to be opened or even seen by you, the user.  Hiding them prevents you from accidentally messing with them, and enhances your sense of order and well-being when you look at your C: drive folder.

Tip #22.  Tidy Your Desktop

The Desktop is probably the most abused part of a Windows computer (from an organization point of view).  It usually serves as a dumping ground for all incoming files, as well as holding icons to oft-used applications, plus some regularly opened files and folders.  It often ends up becoming an uncontrolled mess.  See if you can avoid this.  Here’s why… Application icons (Word, Internet Explorer, etc) are often found on the Desktop, but it’s unlikely that this is the optimum place for them.  The “Quick Launch” bar (or the Superbar in Windows 7) is always visible and so represents a perfect location to put your icons.  You’ll only be able to see the icons on your Desktop when all your programs are minimized.  It might be time to get your application icons off your desktop… You may have decided that the Inbox/To-do folder on your computer (see tip #13, above) should be your Desktop.  If so, then enough said.  Simply be vigilant about clearing it and preventing it from being polluted by junk files (see tip #15, above).  On the other hand, if your Desktop is not acting as your “Inbox” folder, then there’s no reason for it to have any data files or folders on it at all, except perhaps a couple of shortcuts to often-opened files and folders (either ongoing or current projects).  Everything else should be moved to your “Inbox” folder. In an ideal world, it might look like this:

Tip #23.  Move Permanent Items on Your Desktop Away from the Top-Left Corner

When files/folders are dragged onto your desktop in a Windows Explorer window, or when shortcuts are created on your Desktop from Internet Explorer, those icons are always placed in the top-left corner – or as close as they can get.  If you have other files, folders or shortcuts that you keep on the Desktop permanently, then it’s a good idea to separate these permanent icons from the transient ones, so that you can quickly identify which ones the transients are.  An easy way to do this is to move all your permanent icons to the right-hand side of your Desktop.  That should keep them separated from incoming items.

Tip #24.  Synchronize

If you have more than one computer, you’ll almost certainly want to share files between them.  If the computers are permanently attached to the same local network, then there’s no need to store multiple copies of any one file or folder – shortcuts will suffice.
However, if the computers are not always on the same network, then you will at some point need to copy files between them. For files that need to live permanently on both computers, the ideal way to do this is to synchronize the files, as opposed to simply copying them. We only have room here for a brief summary of synchronization, not a full article. In short, there are several different types of synchronization:

Where the contents of one folder are accessible anywhere, such as with Dropbox
Where the contents of any number of folders are accessible anywhere, such as with Windows Live Mesh
Where any files or folders from anywhere on your computer are synchronized with exactly one other computer, such as with the Windows “Briefcase”, Microsoft SyncToy, or (much more powerful, yet still free) SyncBack from 2BrightSparks. This only works when both computers are on the same local network, at least temporarily.

A great advantage of synchronization solutions is that once you’ve got one configured the way you want it, the sync process happens automatically, every time. Click a button (or schedule it to happen automatically) and all your files are automagically put where they’re supposed to be. And if you maintain the same file and folder structure on both computers, then files that depend upon the correct location of other files – like shortcuts, playlists, and office documents that link to other office documents – will still work after being synchronized to the other computer!

Tip #25.  Hide Files You Never Need to See

If you have your files well organized, you will often be able to tell if a file is out of place just by glancing at the contents of a folder (for example, it should be pretty obvious if you look in a folder that contains all the MP3s from one music CD and see a Word document in there). This is a good thing – it allows you to determine if there are files out of place with a quick glance. Yet sometimes there are files in a folder that seem out of place but actually need to be there, such as the “folder art” JPEGs in music folders, and various files in the root of the C: drive. If such files never need to be opened by you, then a good idea is to simply hide them. Then, the next time you glance at the folder, you won’t have to remember whether that file was supposed to be there or not, because you won’t see it at all!

To hide a file, simply right-click on it and choose Properties, then simply tick the Hidden tick-box:
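If you have a lot of files to hide (one “folder art” JPEG per album adds up quickly), doing it by hand gets tedious. Here is a minimal sketch – not from the original article, and the file path shown is just an example – of how the same Hidden attribute can be set programmatically in Java via the "dos:hidden" file attribute that java.nio exposes on Windows:

import java.io.IOException;
import java.nio.file.*;

public class HideFile {
    public static void main(String[] args) throws IOException {
        // Example path only - point this at your own "folder art" file.
        Path folderArt = Paths.get("C:\\Files\\Data\\Music\\MP3\\SomeAlbum\\folder.jpg");
        // "dos:hidden" is the FAT/NTFS hidden attribute, the same flag as the
        // Hidden tick-box in the file's Properties dialog.
        Files.setAttribute(folderArt, "dos:hidden", true);
        System.out.println("Hidden? " + Files.getAttribute(folderArt, "dos:hidden"));
    }
}

Wrapped in a loop over Files.walk, a few lines like this could hide every folder.jpg in your entire music collection in one go.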
Tip #26.  Keep Every Setup File

These days most software is downloaded from the Internet. Whenever you download a piece of software, keep it – you never know when you’ll need to reinstall it. Further, keep with it an Internet shortcut that links back to the website where you originally downloaded it, in case you ever need to check for updates. See tip #33 below for a full description of the excellence of organizing your setup files.

Tip #27.  Try to Minimize the Number of Folders that Contain Both Files and Sub-folders

Some of the folders in your organizational structure will contain only files. Others will contain only sub-folders. And you will also have some folders that contain both files and sub-folders. You will notice slight improvements in how long it takes you to locate a file if you try to avoid this third type of folder. It’s not always possible, of course – you’ll always have some of these folders – but see if you can avoid it. One way of doing this is to take all the leftover files that didn’t end up getting stored in a sub-folder and create a special “Miscellaneous” or “Other” folder for them.

Tip #28.  Starting a Filename with an Underscore Brings it to the Top of a List

Further to the previous tip, if you name that “Miscellaneous” or “Other” folder in such a way that its name begins with an underscore “_”, then it will appear at the top of the list of files/folders. The screenshot below is an example of this. Each folder in the list contains a set of digital photos. The folder at the top of the list, _Misc, contains random photos that didn’t deserve their own dedicated folder:

Tip #29.  Clean Up those CD-ROMs and (shudder!) Floppy Disks

Have you got a pile of CD-ROMs stacked on a shelf in your office? Old photos, or files you archived off onto CD-ROM (or even worse, floppy disks!) because you didn’t have enough disk space at the time? In the meantime, have you upgraded your computer and now have 500 Gigabytes of space you don’t know what to do with? If so, isn’t it time you tidied up that stack of disks and filed them into your gorgeous new folder structure? So what are you waiting for? Bite the bullet, copy them all back onto your computer, file them in their appropriate folders, and then back the whole lot up onto a shiny new 1000Gig external hard drive!

Useful Folders to Create

This next section suggests some useful folders that you might want to create within your folder structure. I’ve personally found them to be indispensable. The first three are all about convenience – handy folders to create and then put somewhere that you can always access instantly. For each one, it’s not so important where the actual folder is located, but it’s very important where you put the shortcut(s) to the folder. You might want to locate the shortcuts:

On your Desktop
In your “Quick Launch” area (or pinned to your Windows 7 Superbar)
In your Windows Explorer “Favorite Links” area

Tip #30.  Create an “Inbox” (“To-Do”) Folder

This has already been mentioned in depth (see tip #13), but we wanted to reiterate its importance here. This folder contains all the recently created, received or downloaded files that you have not yet had a chance to file away properly, and it may also contain files that you have yet to process. In effect, it becomes a sort of “to-do list”. It doesn’t have to be called “Inbox” – you can call it whatever you want.

Tip #31.  Create a Folder where Your Current Projects are Collected

Rather than going hunting for them all the time, or dumping them all on your desktop, create a special folder where you put links (or work folders) for each of the projects you’re currently working on. You can locate this folder in your “Inbox” folder, on your desktop, or anywhere at all – just so long as there’s a way of getting to it quickly, such as putting a link to it in Windows Explorer’s “Favorite Links” area:

Tip #32.  Create a Folder for Files and Folders that You Regularly Open

You will always have a few files that you open regularly, whether it be a spreadsheet of your current accounts or a favorite playlist. These are not necessarily “current projects”; rather, they’re simply files that you always find yourself opening. Typically such files would be located on your desktop (or even better, shortcuts to those files would be). Why not collect all such shortcuts together and put them in their own special folder?
As with the “Current Projects” folder (above), you would want to locate that folder somewhere convenient. Below is an example of a folder called “Quick links”, with about seven files (shortcuts) in it, that is accessible through the Windows Quick Launch bar. See tip #37 below for a full explanation of the power of the Quick Launch bar.

Tip #33.  Create a “Set-ups” Folder

A typical computer has dozens of applications installed on it. For each piece of software, there are often many different pieces of information you need to keep track of, including:

The original installation setup file(s). This can be anything from a simple 100Kb setup.exe file you downloaded from a website, all the way up to a 4Gig ISO file that you copied from a DVD-ROM that you purchased.
The home page of the software manufacturer (in case you need to look up something on their support pages, their forum or their online help)
The page containing the download link for your actual file (in case you need to re-download it, or download an upgraded version)
The serial number
Your proof-of-purchase documentation
Any other template files, plug-ins, themes, etc that also need to get installed

For each piece of software, it’s a great idea to gather all of these files together and put them in a single folder. The folder can be the name of the software (plus possibly a very brief description of what it’s for – in case you can’t remember what the software does based on its name). Then you would gather all of these folders together into one place, and call it something like “Software” or “Setups”.

If you have enough of these folders (I have several hundred, being a geek, collected over 20 years), then you may want to further categorize them. My own categorization structure is based on “platform” (operating system). The last seven folders each represent one platform/operating system, while _Operating Systems contains set-up files for installing the operating systems themselves. _Hardware contains ROMs for hardware I own, such as routers. Within the Windows folder (above), you can see the beginnings of the vast library of software I’ve compiled over the years. An example of a typical application folder looks like this:

Tip #34.  Have a “Settings” Folder

We all know that our documents are important. So are our photos and music files. We save all of these files into folders, and then locate them afterwards and double-click on them to open them. But there are many files that are important to us that can’t be saved into folders, and then searched for and double-clicked later on. These files certainly contain important information that we need, but are often created internally by an application, and saved wherever that application feels is appropriate.

A good example of this is the “PST” file that Outlook creates for us and uses to store all our emails, contacts, appointments and so forth. Another example would be the collection of Bookmarks that Firefox stores on your behalf. And yet another example would be the customized settings and configuration files of all our software. Granted, most Windows programs store their configuration in the Registry, but there are still many programs that use configuration files to store their settings.

Imagine if you lost all of the above files! And yet, when people are backing up their computers, they typically only back up the files they know about – those that are stored in the “My Documents” folder, etc.
If they had a hard disk failure, or their computer was lost or stolen, their backup files would not include some of the most vital files they owned. Also, when migrating to a new computer, it’s vital to ensure that these files make the journey. It can be a very useful idea to create yourself a folder to store all your “settings” – files that are important to you but which you never actually search for by name and double-click on to open. Otherwise, next time you go to set up a new computer just the way you want it, you’ll need to spend hours recreating the configuration of your previous computer!

So how do we get our important files into this folder? Well, we have a few options:

Some programs (such as Outlook and its PST files) allow you to place these files wherever you want. If you delve into the program’s options, you will find a setting somewhere that controls the location of the important settings files (or “personal storage” – PST – when it comes to Outlook).
Some programs do not allow you to change such locations in any easy way, but if you get into the Registry, you can sometimes find a registry key that refers to the location of the file(s). Simply move the file into your Settings folder and adjust the registry key to refer to the new location.
Some programs stubbornly refuse to allow their settings files to be placed anywhere other than where they stipulate. When faced with programs like these, you have three choices: (1) you can ignore those files, (2) you can copy the files into your Settings folder (let’s face it – settings don’t change very often), or (3) you can use synchronization software, such as the Windows Briefcase, to make synchronized copies of all your files in your Settings folder. All you then have to do is remember to run your sync software periodically (perhaps just before you run your backup software!).

There are some other things you may decide to locate inside this new “Settings” folder:

Exports of registry keys (from the many applications that store their configurations in the Registry). This is useful for backup purposes or for migrating to a new computer.
Notes you’ve made about all the specific customizations you have made to a particular piece of software (so that you’ll know how to do it all again on your next computer).
Shortcuts to webpages that detail how to tweak certain aspects of your operating system or applications so they are just the way you like them (such as how to remove the words “Shortcut to” from the beginning of newly created shortcuts). In other words, you’d want to create shortcuts to half the pages on the How-To Geek website!

Here’s an example of a “Settings” folder:

Windows Features that Help with Organization

This section details some of the features of Microsoft Windows that are a boon to anyone hoping to stay optimally organized.

Tip #35.  Use the “Favorite Links” Area to Access Oft-Used Folders

Once you’ve created your great new filing system, work out which folders you access most regularly, or which serve as great starting points for locating the rest of the files in your folder structure, and then put links to those folders in the “Favorite Links” area on the left-hand side of the Windows Explorer window (simply called “Favorites” in Windows 7). Some ideas for folders you might want to add there include:

Your “Inbox” folder (or whatever you’ve called it) – most important!
The base of your filing structure (e.g. C:\Files)
A folder containing shortcuts to often-accessed folders on other computers around the network (shown above as Network Folders)
A folder containing shortcuts to your current projects (unless that folder is in your “Inbox” folder)

Getting folders into this area is very simple – just locate the folder you’re interested in and drag it there!

Tip #36.  Customize the Places Bar in the File/Open and File/Save Boxes

Consider the screenshot below: the highlighted icons (collectively known as the “Places Bar”) can be customized to refer to any folder location you want, allowing instant access to any part of your organizational structure.

Note: these File/Open and File/Save boxes have been superseded by new versions that use the Windows Vista/Windows 7 “Favorite Links”, but the older versions (shown above) are still used by a surprisingly large number of applications.

The easiest way to customize these icons is to use the Group Policy Editor, but not everyone has access to this program. If you do, open it up and navigate to:

User Configuration > Administrative Templates > Windows Components > Windows Explorer > Common Open File Dialog

If you don’t have access to the Group Policy Editor, then you’ll need to get into the Registry. Navigate to:

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\comdlg32\Placesbar

It should then be easy to make the desired changes (a sketch of what the result might look like follows below). Log off and log on again to allow the changes to take effect.
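By way of illustration only (this sketch is not from the original article), the registry approach amounts to creating string values named Place0 through Place4 under that key, one per icon slot. The folder paths below are made-up examples – substitute your own structure before importing anything like this:

Windows Registry Editor Version 5.00

; Hedged sketch of a Places Bar customization - example paths only.
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\comdlg32\Placesbar]
"Place0"="C:\\Files"
"Place1"="C:\\Files\\Inbox"
"Place2"="C:\\Files\\Data"
"Place3"="C:\\Files\\Settings"
"Place4"="C:\\Files\\Mark"

As always with the Registry, export the key first so you can restore it if you don’t like the result.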
Tip #37.  Use the Quick Launch Bar as an Application and File Launcher

That Quick Launch bar (to the right of the Start button) is a lot more useful than people give it credit for. Most people simply have half a dozen icons in it, and use it to start just those programs. But it can actually be used to instantly access just about anything in your filing system. For complete instructions on how to set this up, visit our dedicated article on this topic.

Tip #38.  Put a Shortcut to Windows Explorer into Your Quick Launch Bar

This is only necessary in Windows Vista and Windows XP. The Microsoft boffins finally got wise and added it to the Windows 7 Superbar by default. Windows Explorer – the program used for managing your files and folders – is one of the most useful programs in Windows. Anyone who considers themselves serious about being organized needs instant access to this program at any time. A great place to create a shortcut to this program is in the Windows XP and Windows Vista “Quick Launch” bar. To get it there, locate it in your Start Menu (usually under “Accessories”) and then right-drag it down into your Quick Launch bar (and create a copy).

Tip #39.  Customize the Starting Folder for Your Windows 7 Explorer Superbar Icon

If you’re on Windows 7, your Superbar will include a Windows Explorer icon. Clicking on the icon will launch Windows Explorer (of course), and will start you off in your “Libraries” folder. Libraries may be fine as a starting point, but if you have created yourself an “Inbox” folder, then it would probably make more sense to start off in this folder every time you launch Windows Explorer.

To change this default/starting folder location, first right-click the Explorer icon in the Superbar, then right-click the Windows Explorer item that appears and choose Properties. Then, in the Target field of the Windows Explorer Properties box that appears, type %windir%\explorer.exe followed by the path of the folder you wish to start in. For example:

%windir%\explorer.exe C:\Files

If that folder happened to be on the Desktop (and called, say, “Inbox”), then you would use the following cleverness:

%windir%\explorer.exe shell:desktop\Inbox

Then click OK and test it out.

Tip #40.  Ummmmm….

No, that’s it. I can’t think of another one. That’s all of the tips I can come up with. I only created this one because 40 is such a nice round number…

Case Study – An Organized PC

To finish off the article, I have included a few screenshots of my (main) computer (running Vista). The aim here is twofold: to give you a sense of what it looks like when the above, sometimes abstract, tips are applied to a real-life computer, and to offer some ideas about folders and structure that you may want to steal for use on your own PC.

Let’s start with the C: drive itself. Very minimal. All my files are contained within C:\Files. I’ll confine the rest of the case study to this folder, which contains the following:

Mark:  My personal files
VC:  My business (Virtual Creations, Australia)
Others:  Files created by friends and family
Data:  Files from the rest of the world (can be thought of as “public” files, usually downloaded from the Net)
Settings:  Described above in tip #34

The Data folder contains the following sub-folders:

Audio:  Radio plays, audio books, podcasts, etc
Development:  Programmer and developer resources, sample source code, etc (see below)
Humour:  Jokes, funnies (those emails that we all receive)
Movies:  Downloaded and ripped movies (all legal, of course!), their scripts, DVD covers, etc
Music:  (see below)
Setups:  Installation files for software (explained in full in tip #33)
System:  (see below)
TV:  Downloaded TV shows
Writings:  Books, instruction manuals, etc (see below)

The Music folder contains the following sub-folders:

Album covers:  JPEG scans
Guitar tabs:  Text files of guitar sheet music
Lists:  e.g. “Top 1000 songs of all time”
Lyrics:  Text files
MIDI:  Electronic music files
MP3 (representing 99% of the Music folder):  MP3s, either ripped from CDs or downloaded, sorted by artist/album name
Music Video:  Video clips
Sheet Music:  usually PDFs

The Data\Writings folder contains sub-folders that are all pretty self-explanatory, as does the Data\Development folder (if you’re a geek). The Data\System folder contains sub-folders that are usually themes, plug-ins and other downloadable program-specific resources.

The Mark folder contains the following sub-folders:

From Others:  Usually letters that other people (friends, family, etc) have written to me
For Others:  Letters and other things I have created for other people
Green Book:  None of your business
Playlists:  M3U files that I have compiled of my favorite songs (plus one M3U playlist file for every album I own)
Writing:  Fiction, philosophy and other musings of mine
Mark Docs:  Shortcut to C:\Users\Mark
Settings:  Shortcut to C:\Files\Settings\Mark

The Others folder and the VC folder (Virtual Creations, my business – I develop websites) contain sub-folders that are, once again, pretty self-explanatory.

Conclusion

These tips have saved my sanity and helped keep me a productive geek, but what about you? What tips and tricks do you have to keep your files organized? Please share them with us in the comments.
Come on, don’t be shy…


  • Is there a Telecommunications Reference Architecture?

    - by raul.goycoolea
Abstract

Reference architecture provides needed architectural information that can be supplied in advance to an enterprise to enable consistent architectural best practices. Enterprise Reference Architecture helps business owners to actualize their strategies, vision, objectives, and principles. It evaluates the IT systems based on Reference Architecture goals, principles, and standards. It helps to reduce IT costs by increasing functionality, availability, scalability, etc. Telecom Reference Architecture provides customers with the flexibility to view bundled service bills online with the provision of multiple services. It provides real-time, flexible billing and charging systems to handle complex promotions, discounts, and settlements with multiple parties.

This paper attempts to describe the Reference Architecture for Telecom Enterprises. It lays the foundation for a Telecom Reference Architecture by articulating the requirements, drivers, and pitfalls for telecom service providers. It describes a generic reference architecture for telecom enterprises and moves on to explain how to achieve an Enterprise Reference Architecture by using SOA.

Introduction

A Reference Architecture provides a methodology, a set of practices, templates, and standards based on a set of successful solutions implemented earlier. These solutions have been generalized and structured for the depiction of both a logical and a physical architecture, based on the harvesting of a set of patterns observed in a number of successful implementations. It serves as a reference for the various architectures that an enterprise can implement to solve various problems. It can be used as the starting point, or the point of comparison, for the various departments/business entities of a company, or for the various companies within an enterprise. It provides multiple views for multiple stakeholders. Major artifacts of the Enterprise Reference Architecture are methodologies, standards, metadata, documents, design patterns, etc.

Purpose of Reference Architecture

In most cases, architects spend a lot of time researching, investigating, defining, and re-arguing architectural decisions. It is like reinventing the wheel, as their peers in other organizations – or even the same organization – have already spent a lot of time and effort defining their own architectural practices. This prevents an organization from learning from its own experiences and applying that knowledge for increased effectiveness.
Reference architecture provides missing architectural information that can be supplied in advance to project team members to enable consistent architectural best practices.

Enterprise Reference Architecture helps an enterprise to achieve the following at the abstract level:

·       Acts as a communication channel to the enterprise
·       Helps business owners to actualize their strategies, vision, objectives, and principles
·       Evaluates the IT systems based on Reference Architecture principles
·       Reduces IT spending through increased functionality, availability, scalability, etc
·       Provides a real-time integration model that helps to reduce the latency of data updates
·       Is used to define a single source of information
·       Provides a clear view on how to manage information and security
·       Defines the policy around data ownership, product boundaries, etc
·       Helps with cost optimization across project and solution portfolios by eliminating unused or duplicate investments and assets
·       Shortens implementation time and cost

Once the reference architecture is in place, the set of architectural principles, standards, reference models, and best practices ensure that the aligned investments have the greatest possible likelihood of success in both the near term and the long term (TCO).

Common Pitfalls for Telecom Service Providers

Telecom Reference Architecture serves as the first step towards maturity for a telecom service provider. During the course of our assignments/experiences with telecom players, we have come across the following observations, some of which indicate a lack of maturity on the part of the telecom service provider:

·       In markets that are growing and not so mature, telcos tend to have a significant amount of in-house or home-grown applications. In some of these markets the growth has been so rapid that IT has been unable to cope with business demands, and telcos have shown a tendency to come up with workarounds in their IT applications so as to meet business needs.
·       Even for core functions like provisioning or mediation, some telcos have tried to manage with home-grown applications.
·       Most of the applications do not have the required scalability or maintainability to sustain growth in volumes or functionality.
·       Applications face interoperability issues with other applications in the operator's landscape. Integrating a new application or network element requires considerable effort on the part of the other applications.
·       Application boundaries are not clear, and functionality that is not in the initial scope of an application gets pushed onto it. This results in the development of multiple small applications without proper boundaries.
·       Usage of legacy OSS/BSS systems, and poor integration across multiple COTS products and internal systems. Most of the integrations are developed on an ad-hoc, point-to-point basis.
·       Redundancy of business functions across different applications.
·       Fragmented data across the different applications, with no integrated view of the strategic data.
·       Performance issues due to the complex integration across OSS and BSS systems.

However, this is where the maturity of the telecom industry as a whole can be of help. The collaborative efforts of telcos to overcome some of these problems have resulted in bodies like the TM Forum.
They have come up with frameworks for business processes, data, applications, and technology for telecom service providers. These could be a good starting point for telcos to clean up their enterprise landscape.

Industry Trends in Telecom Reference Architecture

Telecom reference architectures are evolving rapidly because telcos are facing business and IT challenges.

“The reality is that there probably is no killer application, no silver bullet that the telcos can latch onto to carry them into a 21st Century.... Instead, there are probably hundreds – perhaps thousands – of niche applications.... And the only way to find which of these works for you is to try out lots of them, ramp up the ones that work, and discontinue the ones that fail.” – Martin Creaner, President & CTO, TM Forum.

The following trends have been observed in telecom reference architecture:

·       Transformation of business structures to align with customer requirements
·       Adoption of more Internet-like technical architectures; the Web 2.0 concept is increasingly being used
·       Virtualization of the traditional operations support system (OSS)
·       Adoption of SOA to support the development of IP-based services
·       Adoption of frameworks like Service Delivery Platforms (SDPs) and the IP Multimedia Subsystem (IMS) to enable seamless deployment of various services over fixed and mobile networks
·       Replacement of in-house, customized, and stove-piped OSS/BSS with standards-based COTS products
·       Compliance with industry standards and frameworks like eTOM, SID, and TAM to enable seamless integration with other standards-based products

Drivers of Reference Architecture

The drivers of the Reference Architecture are Reference Architecture Goals, Principles, Enterprise Vision, and Telecom Transformation. The details are depicted in the diagram below.

Figure 1. Drivers for Reference Architecture

Today’s telecom reference architectures should seamlessly integrate traditional legacy-based applications and transition to next-generation network technologies (e.g., IP multimedia subsystems).
This has resulted in new requirements for flexible, real-time billing and OSS/BSS systems, with implications for the service provider’s organizational requirements and structure.

Telecom reference architectures are today expected to:

·       Integrate voice, messaging, email and other VAS over fixed and mobile networks and back-end systems
·       Be able to provision multiple services and service bundles
·       Deliver converged voice, video and data services
·       Leverage the existing network infrastructure
·       Provide real-time, flexible billing and charging systems to handle complex promotions, discounts, and settlements with multiple parties
·       Support charging of advanced data services such as VoIP, On-Demand Services (e.g., Video), IMS/SIP Services, Mobile Money, Content Services and IPTV
·       Help in faster deployment of new services
·       Serve as an effective platform for collaboration between network, IT, and business organizations
·       Harness the potential of converging technology, networks, devices and content to develop multimedia services and solutions of ever-increasing sophistication on a single Internet Protocol (IP) network
·       Ensure better service delivery and zero revenue leakage through real-time balance and credit management
·       Lower operating costs to drive profitability

Enterprise Reference Architecture

The Enterprise Reference Architecture (RA) fills the gap between the concepts and vocabulary defined by the reference model and the implementation. Reference architecture provides detailed architectural information in a common format such that solutions can be repeatedly designed and deployed in a consistent, high-quality, supportable fashion. This paper attempts to describe the Reference Architecture for Telecom Application Usage and how to achieve an enterprise-level Reference Architecture using SOA, in two parts:

·       Telecom Reference Architecture
·       Enterprise SOA-based Reference Architecture

Telecom Reference Architecture

TeleManagement Forum’s New Generation Operations Systems and Software (NGOSS) is an architectural framework for organizing, integrating, and implementing telecom systems. NGOSS is a component-based framework consisting of the following elements:

·       The enhanced Telecom Operations Map (eTOM), a business process framework.
·       The Shared Information/Data (SID) model, which provides a comprehensive information framework that may be specialized for the needs of a particular organization.
·       The Telecom Application Map (TAM), an application framework that depicts the functional footprint of applications relative to the horizontal processes within eTOM.
·       The Technology Neutral Architecture (TNA), an integrated framework that is sustainable through technology changes.

The NGOSS architecture standards are:

·       Centralized data
·       Loosely coupled distributed systems
·       Application components/re-use
·       A technology-neutral system framework with technology-specific implementations
·       Interoperability with service provider data/processes
·       More re-use of business components across multiple business scenarios
·       Workflow automation

The traditional operator systems architecture consists of four layers:

·       Business Support System (BSS) layer, with focus toward customers and business partners. Manages order, subscriber, pricing, rating, and billing information.
·       Operations Support System (OSS) layer, built around product, service, and resource inventories.
·       Networks layer, consisting of network elements and 3rd party systems.
·       Integration layer, to maximize application communication and overall solution flexibility.

Reference architecture for telecom enterprises is depicted below.

Figure 2. Telecom Reference Architecture

The major building blocks of any Telecom Service Provider architecture are as follows:

1. Customer Relationship Management

CRM encompasses the end-to-end lifecycle of the customer: customer initiation/acquisition, sales, ordering and service activation, customer care and support, proactive campaigns, cross sell/up sell, and retention/loyalty. CRM also includes the collection of customer information and its application to personalize, customize, and integrate the delivery of service to a customer, as well as to identify opportunities for increasing the value of the customer to the enterprise.

The key functionalities related to Customer Relationship Management are:

·       Manage the end-to-end lifecycle of a customer request for products.
·       Create and manage customer profiles.
·       Manage all interactions with customers – inquiries, requests, and responses.
·       Provide updates to Billing and other southbound systems on customer/account-related changes – such as customer/account creation, deletion, modification, bill requests, final bills, duplicate bills, and credit limits – through Middleware.
·       Work with the Order Management System and the Product and Service Management components within CRM.
·       Manage customer preferences – involve all the touch points and channels to the customer, including contact center, retail stores, dealers, self service, and field service, as well as any media (phone, face to face, web, mobile device, chat, email, SMS, mail, the customer's bill, etc.).
·       Support a single interface for customer contact details, preferences, account details, offers, customer premise equipment, bill details, bill cycle details, and customer interactions.

CRM applications interact with customers through customer touch points like portals, point-of-sale terminals, interactive voice response systems, etc. The requests by customers are sent via fulfillment/provisioning to the billing system for order processing.
2. Billing and Revenue Management

Billing and Revenue Management handles the collection of appropriate usage records and the production of timely and accurate bills – providing pre-bill usage information and billing to customers, processing their payments, and performing payment collections. In addition, it handles customer inquiries about bills, provides billing inquiry status, and is responsible for resolving billing problems to the customer's satisfaction in a timely manner. This process grouping also supports prepayment for services.

The key functionalities provided by these applications are:

·       Ensure that enterprise revenue is billed and invoices are delivered appropriately to customers.
·       Manage customers’ billing accounts, process their payments, perform payment collections, and monitor the status of the account balance.
·       Ensure the timely and effective fulfillment of all customer bill inquiries and complaints.
·       Collect the usage records from mediation and ensure appropriate rating and discounting of all usage and pricing.
·       Support revenue sharing, and split charging, where usage is billed to an account different from that of the service consumer.
·       Support prepaid and post-paid rating.
·       Send notifications on approaching or exceeding the usage thresholds enforced by the subscribed offer, and/or as set up by the customer.
·       Support prepaid, post-paid, and hybrid customers (where some services are prepaid and the rest post-paid), and conversion from post-paid to prepaid and vice versa.
·       Support different billing function requirements like charge prorating, promotions, discounts, adjustments, waivers, write-offs, accounts receivable, GL interface, late payment fees, credit control, dunning, account or service suspension, re-activation, expiry, termination, contract violation penalties, etc.
·       Initiate direct debit to collect payment against an outstanding invoice.
·       Send notifications to Middleware on different events, for example payment receipt, pre-suspension, threshold exceeded, etc.

Billing systems typically get usage data from mediation systems for rating and billing. They get provisioning requests from order management systems and inquiries from CRM systems. Convergent and real-time billing systems can get usage details directly from network elements.

3. Mediation

Mediation systems transform/translate the raw or native Usage Data Records into a general format that is acceptable to billing for rating purposes. The following lists the high-level roles and responsibilities executed by the Mediation system in the end-to-end solution (a small sketch of the core normalization step follows this list):

·       Collect Usage Data Records from different data sources – like network elements, routers, and servers – via different protocols and interfaces.
·       Process Usage Data Records according to each source format.
·       Validate Usage Data Records from each source.
·       Segregate Usage Data Records coming from each source into multiple streams, based on the segregation requirements of the end applications.
·       Aggregate Usage Data Records from different sources based on the aggregation rules, if any.
·       Consolidate multiple Usage Data Records from each source.
·       Deliver formatted Usage Data Records to the different end applications, like Billing, Interconnect, Fraud Management, etc.
·       Generate an audit trail for incoming Usage Data Records and keep track of all Usage Data Records at the various stages of the mediation process.
·       Check for duplicate Usage Data Records across files for a given time window.
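To make the transformation step concrete, here is a minimal sketch in Java (not from the original paper, and requiring Java 16+ for records) of the core mediation operation: parsing a raw Usage Data Record into the kind of general-format record a billing system could rate. The pipe-delimited input format is invented purely for illustration; real network elements each emit their own native formats:

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class MediationSketch {
    // The "general format" handed to billing: subscriber, service, start, duration.
    record UsageRecord(String msisdn, String service, LocalDateTime start, int seconds) {}

    static UsageRecord normalize(String rawCdr) {
        // Hypothetical native format: "491701234567|VOICE|20100501T101500|183"
        String[] f = rawCdr.split("\\|");
        if (f.length != 4) throw new IllegalArgumentException("Invalid UDR: " + rawCdr);
        LocalDateTime start = LocalDateTime.parse(f[2],
                DateTimeFormatter.ofPattern("yyyyMMdd'T'HHmmss"));
        return new UsageRecord(f[0], f[1], start, Integer.parseInt(f[3]));
    }

    public static void main(String[] args) {
        // Validation and duplicate checks would sit around this call in a real pipeline.
        System.out.println(normalize("491701234567|VOICE|20100501T101500|183"));
    }
}

A production mediation system wraps this step with the collection, validation, deduplication, aggregation, and delivery responsibilities listed above, but the source-format-to-common-format translation is the heart of it.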
4. Fulfillment

This area is responsible for providing customers with their requested products in a timely and correct manner. It translates the customer's business or personal need into a solution that can be delivered using the specific products in the enterprise's portfolio. This process informs the customers of the status of their purchase order and ensures completion on time, as well as a delighted customer. These processes are responsible for accepting and issuing orders. They deal with pre-order feasibility determination, credit authorization, order issuance, order status and tracking, customer updates on order activities, and customer notification on order completion. Order management and provisioning applications fall into this category.

The key functionalities provided by these applications are:

·       Issuing new customer orders, modifying open customer orders, or canceling open customer orders;
·       Verifying whether specific non-standard offerings sought by customers are feasible and supportable;
·       Checking the creditworthiness of customers as part of the customer order process;
·       Testing the completed offering to ensure it is working correctly;
·       Updating the Customer Inventory Database to reflect that the specific product offering has been allocated, modified, or cancelled;
·       Assigning and tracking customer provisioning activities;
·       Managing customer provisioning jeopardy conditions; and
·       Reporting progress on customer orders and other processes to the customer.

These applications typically get orders from CRM systems. They interact with network elements and billing systems for the fulfillment of orders.

5. Enterprise Management

This process area includes those processes that manage enterprise-wide activities and needs, or that have application within the enterprise as a whole. They encompass all business management processes that:

·       Are necessary to support the whole of the enterprise, including processes for financial management, legal management, regulatory management, process, cost, and quality management, etc.;
·       Are responsible for setting corporate policies, strategies, and directions, and for providing guidelines and targets for the whole of the business, including strategy development and planning for areas, such as Enterprise Architecture, that are integral to the direction and development of the business;
·       Occur throughout the enterprise, including processes for project management, performance assessments, cost assessments, etc.

(i) Enterprise Risk Management

Enterprise Risk Management focuses on assuring that risks and threats to the enterprise's value and/or reputation are identified, and that appropriate controls are in place to minimize or eliminate the identified risks. The identified risks may be physical or logical/virtual. Successful risk management ensures that the enterprise can support its mission-critical operations, processes, applications, and communications in the face of serious incidents such as security threats/violations and fraud attempts.
Two key areas covered in Risk Management by telecom operators are:

·       Revenue Assurance: The revenue assurance system is responsible for identifying revenue loss scenarios across components/systems and helps in rectifying the problems. The following lists the high-level roles and responsibilities executed by the Revenue Assurance system in the end-to-end solution:

o   Identify usage information dropped when networks are being upgraded.
o   Verify interconnect bills.
o   Identify where services are routinely provisioned but never billed.
o   Identify poor sales policies that are intensifying collections problems.
o   Find leakage where usage is sent to an error bucket and never billed for.
o   Find leakage where field service, CRM, and network build-out are not optimized.

·       Fraud Management: Involves collecting data from different systems to identify abnormalities in traffic patterns, usage patterns, and subscription patterns, and to report suspicious activity that might suggest fraudulent usage of resources resulting in revenue losses to the operator. The key roles and responsibilities of this system component are as follows:

o   The fraud management system will capture and monitor high usage (over a certain threshold) in terms of duration, value, and number of calls for each subscriber. The threshold for each subscriber is decided by the system and fixed automatically.
o   Fraud management will be able to detect unauthorized access to services for certain subscribers – for example, subscribers who have been provided unauthorized services by employees. The component will raise an alert to the operator the very first time such illegal or unbilled calls occur.
o   The solution will include an alarm management system that delivers alarms to the operator/provider whenever it detects a fraud, thus minimizing fraud by catching it the first time it occurs.
o   The fraud management system will be capable of interfacing with switches, mediation systems, and billing systems.

(ii) Knowledge Management

This process focuses on knowledge management, technology research within the enterprise, and the evaluation of potential technology acquisitions. Key responsibilities of knowledge base management are to:

·       Maintain the knowledge base – creation and updating of the knowledge base on an ongoing basis.
·       Search the knowledge base – search on keywords or by category browsing.
·       Maintain metadata – management of metadata on the knowledge base to ensure effective management and search.
·       Run the report generator.
·       Provide content – add content to the knowledge base, e.g., user guides, operational manuals, etc.

(iii) Document Management

This focuses on maintaining a repository of all electronic documents, or images of paper documents, relevant to the enterprise.

(iv) Data Management

This manages data as a valuable resource for the enterprise. For telecom enterprises, the typical areas covered are Master Data Management, Data Warehousing, and Business Intelligence. It is also responsible for data governance, security, quality, and database management. Key responsibilities of Data Management are:

·       Using ETL, extract data from CRM, Billing, web content, ERP, campaign management, financial and network operations systems, asset management info, customer contact data, customer measures, benchmarks, and process data (e.g., process inputs, outputs, and measures) into the Enterprise Data Warehouse.
·       Manage data traceability (source, data-related business rules/decisions, data quality, data cleansing, data reconciliation, competitor data), providing storage for all the enterprise data (customer profiles, products, offers, revenues, etc.).
·       Get online updates through night-time replication or a physical backup process at regular frequency.
·       Provide data access to business intelligence and other systems for their analysis, report generation, and use.

(v) Business Intelligence

This uses the enterprise data to provide the various analyses and reports that contain prospects and analytics for customer retention, and for the acquisition of new customers driven by offers and SLAs. It will generate the right, optimized plans and bolt-ons for customers. The following lists the high-level roles and responsibilities executed by the Business Intelligence system at the enterprise level:

·       It will do pattern analysis and report problems.
·       It will do data analysis – statistical analysis, data profiling, affinity analysis of data, customer-segment-wise usage patterns on offers, products, and services, and revenue generation against services and customer segments.
·       It will do performance (business, system, and forecast) analysis, churn propensity, response time, and SLA analysis.
·       It will support online and offline analysis, with report drill-down capability.
·       It will collect, store, and report various SLA data.
·       It will provide the necessary intelligence for marketing and for working on campaigns, etc., with cost-benefit analysis and predictions.
·       It will advise on customer promotions, with additional services based on the loyalty and credit history of the customer.
·       It will interface with the Enterprise Data Management system for the data needed to run reports and analysis tasks, and with campaign schedules, based on historical success evidence.

(vi) Stakeholder and External Relations Management

This manages the enterprise's relationships with stakeholders and outside entities. Stakeholders include shareholders, employee organizations, etc. Outside entities include regulators, the local community, and unions. Some of the processes within this grouping are Shareholder Relations, External Affairs, Labor Relations, and Public Relations.

(vii) Enterprise Resource Planning

ERP is used to manage internal and external resources, including tangible assets, financial resources, materials, and human resources. Its purpose is to facilitate the flow of information between all business functions inside the boundaries of the enterprise and to manage the connections to outside stakeholders. ERP systems consolidate all business operations into a uniform, enterprise-wide system environment. The key roles and responsibilities of the Enterprise System are given below:

·       It will handle responsibilities such as core accounting, financial, and management reporting.
·       It will interface with CRM to capture customer accounts and details.
·       It will interface with billing to capture billing revenue and other financial data.
·       It will be responsible for executing the dunning process; billing will send the required feed to ERP for execution of dunning.
·       It will interface with CRM and Billing through batch interfaces.

Enterprise management systems are like horizontals in the enterprise and typically interact with all major telecom systems.
For example, an ERP system interacts with CRM, Fulfillment, and Billing systems for different kinds of data exchange.

6. External Interfaces/Touch Points

The typical external parties are customers, suppliers/partners, employees, shareholders, and other stakeholders. External interactions from/to a Service Provider to other parties can be achieved by a variety of mechanisms, including:

·       Exchange of emails or faxes
·       Call Centers
·       Web Portals
·       Business-to-Business (B2B) automated transactions

These applications provide an Internet-technology-driven interface for external parties to undertake a variety of business functions directly for themselves. They can provide fully or partially automated service to external parties through various touch points. Typical characteristics of these touch points are:

·       A pre-integrated self-service system, including a stand-alone web framework or an integration front end with a portal engine
·       A self-service layer exposing atomic web services/APIs for reuse by multiple systems across the architectural environment
·       Portlet-driven connectivity exposing data and services interoperability through a portal engine or web application

These touch points mostly interact with the CRM systems for requests, inquiries, and responses.

7. Middleware

This component is primarily responsible for integrating the different system components under a common platform. It should provide a standards-based platform for building Service Oriented Architecture and composite applications. The following lists the high-level roles and responsibilities executed by the Middleware component in the end-to-end solution:

·       Act as an integration framework, covering interfaces in both directions
·       Provide a web service framework with a service registry
·       Support an SOA framework with an SOA service registry
·       Handle data transformation, translation, and mapping of data points for each of the interfaces from/to Middleware and the other components
·       Receive data from the caller/activator and/or forward the data to the recipient system in XML format
·       Use standard XML for data exchange
·       Provide the response back to the service/call initiator
·       Provide tracking until the response completes
·       Keep a store of transitional data against each call/transaction
·       Provide any information that is possible and allowed from the existing systems to the enterprise systems, e.g., customer profile and customer history
·       Provide the data in a common, unified format to the SOA calls across systems, following the Enterprise Architecture directive
·       Provide an audit trail for all transactions being handled by the component

8. Network Elements

The term Network Element means a facility or equipment used in the provision of a telecommunications service. The term also includes features, functions, and capabilities that are provided by means of such facility or equipment, including subscriber numbers, databases, signaling systems, and information sufficient for billing and collection, or used in the transmission, routing, or other provision of a telecommunications service. Typical network elements in a GSM network are the Home Location Register (HLR), Intelligent Network (IN), Mobile Switching Center (MSC), SMS Center (SMSC), and network elements for other value added services like Push-to-talk (PTT), Ring Back Tone (RBT), etc.
Network elements are invoked when subscribers use their telecom devices for any kind of usage. These elements generate usage data and pass it on to downstream systems like mediation and billing for rating and billing. They also integrate with provisioning systems for order/service fulfillment.

9. 3rd Party Applications

3rd party systems are applications like content providers, payment gateways, point-of-sale terminals, and databases/applications maintained by the government. Depending on applicability and the type of functionality provided by a 3rd party application, integration with different telecom systems like CRM, provisioning, and billing will be done.

10. Service Delivery Platform

A service delivery platform (SDP) provides the architecture for the rapid deployment, provisioning, execution, management, and billing of value added telecom services. SDPs are based on the concept of SOA and layered architecture. They support the delivery of voice, data services, and content in a network- and device-independent fashion. They allow application developers to aggregate network capabilities, services, and sources of content. SDPs typically contain layers for web services exposure, service application development, and network abstraction.

SOA Reference Architecture

The SOA concept is based on the principle of developing reusable business services and building applications by composing those services, instead of building monolithic applications in silos. It is about bridging the gap between business and IT through a set of business-aligned IT services, using a set of design principles, patterns, and techniques.

In an SOA, resources are made available to participants in a value net, enterprise, or line of business (typically spanning multiple applications within an enterprise or across multiple enterprises). It consists of a set of business-aligned IT services that collectively fulfill an organization’s business processes and goals. We can choreograph these services into composite applications and invoke them through standard protocols. SOA, apart from agility and reusability, enables:

·       The business to specify processes as orchestrations of reusable services
·       Technology-agnostic business design, with technology hidden behind the service interface
·       A contract-like interaction between business and IT, based on service SLAs
·       Accountability and governance, better aligned to business services
·       Untangling of application interconnections by allowing access only through service interfaces, reducing the daunting side effects of change
·       Reduced pressure to replace legacy applications, and extended lifetimes for them, through encapsulation in services
·       A Cloud Computing paradigm, using web services technologies, that makes service outsourcing possible on an on-demand, utility-like, pay-per-usage basis

The following section presents the Reference Architecture of the logical view for the Telecom Solution. New custom-built applications need to align with this logical architecture in the long run to achieve EA benefits. Packaged implementation applications, such as ERP and billing applications, need to expose their functions as service providers (which other applications consume) and interact with other applications as service consumers. COTS applications need to expose services through wrappers such as adapters, to utilize existing resources and at the same time achieve the Enterprise Architecture goals and objectives (a minimal sketch of such a wrapper follows below).
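To illustrate the wrapper/adapter idea in the smallest possible form, here is a hedged Java sketch – not from the original paper, and with every name invented for illustration – in which a legacy billing function is hidden behind a technology-neutral service contract, so that consumers depend only on the contract:

public class ServiceWrapperSketch {
    // The business-aligned service contract that consumers program against.
    interface BalanceService {
        double currentBalance(String accountId);
    }

    // Stand-in for a packaged/COTS billing API that we cannot change.
    static class LegacyBillingSystem {
        long getBalInCents(int legacyAccountNo) { return 12345; }
    }

    // The adapter translates between the service contract and the legacy API,
    // encapsulating the legacy system behind the service interface.
    static class LegacyBillingAdapter implements BalanceService {
        private final LegacyBillingSystem legacy = new LegacyBillingSystem();
        public double currentBalance(String accountId) {
            return legacy.getBalInCents(Integer.parseInt(accountId)) / 100.0;
        }
    }

    public static void main(String[] args) {
        BalanceService service = new LegacyBillingAdapter();
        System.out.println(service.currentBalance("1001")); // prints 123.45
    }
}

In a real deployment the contract would be published as a web service (WSDL in the registry) rather than a Java interface, but the principle is the same: the legacy system can later be replaced without touching its consumers.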
The following are the various layers for Enterprise-level deployment of SOA. This diagram captures the abstract view of the Enterprise SOA layers and the important components of each layer. Layered architecture means decomposition of services such that most interactions occur between adjacent layers. However, there is no strict rule that top layers should not directly communicate with bottom layers.

The diagram below represents the important logical pieces that would result from the overall SOA transformation.

Figure 3. Enterprise SOA Reference Architecture

1. Operational System Layer: This layer consists of all packaged applications like CRM and ERP, custom-built applications, COTS-based applications like Billing, Revenue Management, and Fulfillment, and the enterprise databases that are essential and contribute directly or indirectly to the Enterprise OSS/BSS Transformation.

ERP holds the data of Asset Lifecycle Management, Supply Chain, Advanced Procurement, Human Capital Management, etc.

CRM holds the data related to Order, Sales and Marketing, Customer Care, Partner Relationship Management, Loyalty, etc.

Content Management handles Enterprise Search and Query.

The Billing application consists of the following components:

· Collections Management, Customer Billing Management, Invoices, Real-Time Rating, Discounting, and Applying of Charges

Enterprise databases will hold both the application and service data, whether structured or unstructured.

MDM: Master data mainly consists of Customer, Order, Product, and Service data.

2. Enterprise Component Layer: This layer consists of the Application Services and Common Services that are responsible for realizing the functionality and maintaining the QoS of the exposed services. This layer uses container-based technologies such as application servers to implement the components, workload management, high availability, and load balancing.

Application Services: This service layer enables application, technology, and database abstraction so that the complex access logic is hidden from the other service layers. This is a basic service layer, which exposes application functionalities and data as reusable services.
The three types of Application Access Services are:

· Application Access Service: This service layer exposes application-level functionality as reusable services for BSS-to-BSS and BSS-to-OSS integration. This layer is enabled using disparate technologies such as web services, integration servers, adapters, etc.

· Data Access Service: This service layer exposes application data as reusable reference data services. This is done via direct interaction with the application data, and it provides federated query.

· Network Access Service: This service layer exposes the provisioning layer as a reusable service for OSS-to-OSS integration. This integration service emphasizes the need for high performance, stateless process flows, and distributed design.

Common Services encompass the management of structured, semi-structured, and unstructured data, such as information services, portal services, interaction services, infrastructure services, security services, etc.

3. Integration Layer: This consists of service infrastructure components like the service bus, a service gateway for partner integration, a service registry, a service repository, and a BPEL processor. The service bus carries the service invocation payloads/messages between consumers and providers. The other important functions expected from it are itinerary-based routing, distributed caching of routing information, transformations, and all messaging qualities of service like reliability, scalability, availability, etc. The service registry holds all contracts (WSDL) of the services, and it helps developers locate or discover services at design time or runtime.

· The BPEL processor is useful in orchestrating services to compose a complex business scenario or process.
· Workflow and business rules management are also required to support manual triggering of certain activities within a business process, based on the rules setup and the state machine information.

The application, data, and service mediation layers typically form the overall composite application development framework, or SOA Framework.

4. Business Process Layer: These are typically the intermediate services layer and represent Shared Business Process Services. At the Enterprise level, these services come from the Customer Management, Order Management, Billing, Finance, and Asset Management application domains.

5. Access Layer: This layer consists of portals for the Enterprise and provides a single view of Enterprise information management and dashboard services.

6. Channel Layer: This consists of the various devices and applications that form part of the extended enterprise, and the browsers through which users access the applications.

7. Client Layer: This designates the different types of users accessing the enterprise applications. The type of user is typically an important factor in determining the level of access to applications.

8. Vertical pieces like management, monitoring, security, and development cut across all horizontal layers. Management and monitoring involve all aspects of SOA, like services, SLAs, and other QoS lifecycle processes for both applications and services, surrounding SOA governance.

9. EA Governance, Reference Architecture, Roadmap, Principles, and Best Practices: EA Governance is important in terms of providing the overall direction to SOA implementation within the enterprise.
This involves board-level involvement, in addition to that of business and IT executives. At a high level, it involves managing the implementation of SOA projects, managing the SOA infrastructure, and controlling the entire effort through fine-tuned IT processes in accordance with COBIT (Control Objectives for Information and Related Technology).

Devising tools and techniques to promote a reuse culture and the SOA way of doing things requires competency centers to be established, in addition to training the workforce to take up new roles suited to the SOA journey.

Conclusions

Reference Architectures can serve as the basis for disparate architecture efforts throughout the organization, even if they use different tools and technologies. Reference Architectures provide best practices and approaches in a way that is independent of how any one vendor deals with technology and standards. They model the abstract architectural elements for an enterprise independently of the technologies, protocols, and products that are used to implement an SOA.

Telecom enterprises today are facing significant business and technology challenges due to growing competition, a multitude of services, and convergence. Adopting architectural best practices can go a long way toward meeting these challenges. The use of SOA-based architecture for communication with each of the external systems, like Billing, CRM, etc., makes the OSS/BSS architecture very loosely coupled, with greater flexibility. Any change in the external systems is absorbed at the Integration Layer without affecting the rest of the ecosystem. The use of a Business Process Management (BPM) tool makes the management and maintenance of the business processes easy, with better performance in terms of lead time, quality, and cost. Since the architecture is based on standards, it will lower the cost of deploying and managing OSS/BSS applications over their lifecycles.


  • IIS 7.5, Tomcat 7 - Isapi redirector - Fail Over - sticky sessions

    - by Jose Matias
    I have two instances of Tomcat 7.0.8 running on the same machine (Tomcat7A and Tomcat7B) and IIS 7.5 acting as a front-end load-balancer with isapi-redirector 1.2.31, running on Windows 2008 R2. When I disconnect the instance which is handling a request I can see a new instance being assigned with the same sessionid, but then the user is redirected to the login page. server.xml configuration file:

<Engine name="Catalina" defaultHost="localhost" jvmRoute="Tomcat7A">
  <Realm className="org.apache.catalina.realm.LockOutRealm">
    <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
  </Realm>
  <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
    <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
      <Manager className="org.apache.catalina.ha.session.DeltaManager" expireSessionsOnShutdown="false" notifyListenersOnReplication="true"/>
      <Channel className="org.apache.catalina.tribes.group.GroupChannel">
        <Membership className="org.apache.catalina.tribes.membership.McastService" address="228.0.0.8" bind="7.3.1.22" port="45564" frequency="500" dropTime="3000"/>
        <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver" address="auto" port="4200" autoBind="100" selectorTimeout="5000" maxThreads="6"/>
        <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
          <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
        </Sender>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
      </Channel>
      <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=".*\.gif;.*\.js;.*\.jpg;.*\.htm;.*\.html;.*\.txt"/>
      <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
      <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
      <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
    </Cluster>
    <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="%h %l %u %t &quot;%r&quot; %s %b" resolveHosts="false"/>
  </Host>
</Engine>

worker_mount_file=C:\tomcat\iis\conf\uriworkermap_prod.properties
worker.list = balancer,status
worker.Tomcat7B.host = 7.3.1.22
worker.Tomcat7B.type = ajp13
worker.Tomcat7B.port = 8010
worker.Tomcat7B.lbfactor = 10
worker.Tomcat7A.host = 7.3.1.22
worker.Tomcat7A.type = ajp13
worker.Tomcat7A.port = 8009
worker.Tomcat7A.lbfactor = 10
worker.balancer.type = lb
worker.balancer.sticky_session = 1
worker.balancer.balance_workers = Tomcat7B, Tomcat7A
worker.status.type = status

isapi_redirect log:

[debug] wc_get_worker_for_name::jk_worker.c (116): found a worker balancer [debug] HttpExtensionProc::jk_isapi_plugin.c (2188): got a worker for name balancer [debug] service::jk_lb_worker.c (1118): service sticky_session=1 id='89569C584CC4F58740D649C4BE655D36.Tomcat7B' [debug] get_most_suitable_worker::jk_lb_worker.c (946): searching worker for partial sessionid 89569C584CC4F58740D649C4BE655D36.Tomcat7B [debug] get_most_suitable_worker::jk_lb_worker.c (954): searching worker for session route Tomcat7B [debug] get_most_suitable_worker::jk_lb_worker.c (968): found worker Tomcat7B (Tomcat7B) for route Tomcat7B and partial sessionid 89569C584CC4F58740D649C4BE655D36.Tomcat7B [debug] service::jk_lb_worker.c (1161): service worker=Tomcat7B route=Tomcat7B [debug] ajp_get_endpoint::jk_ajp_common.c (3096): acquired connection pool slot=0 after 0 retries [debug] ajp_marshal_into_msgb::jk_ajp_common.c (605): ajp marshaling done [debug] ajp_service::jk_ajp_common.c (2379): processing Tomcat7B with 2 retries [debug] jk_shutdown_socket::jk_connect.c (726): About to shutdown socket 820 [7.3.1.22:24482 -> 7.3.1.22:8010] [debug] jk_shutdown_socket::jk_connect.c (797): shutting down the read side of socket 820 [7.3.1.22:24482 -> 7.3.1.22:8010] [debug] jk_shutdown_socket::jk_connect.c (808): Shutdown socket 820 [7.3.1.22:24482 -> 7.3.1.22:8010] and read 0 lingering bytes in 0 sec. [debug] ajp_send_request::jk_ajp_common.c (1496): (Tomcat7B) failed sending request, socket 820 is not connected any more (errno=-10000) [debug] ajp_next_connection::jk_ajp_common.c (823): (Tomcat7B) Will try pooled connection socket 896 from slot 1 [debug] jk_shutdown_socket::jk_connect.c (726): About to shutdown socket 896 [7.3.1.22:24488 -> 7.3.1.22:8010] [debug] jk_shutdown_socket::jk_connect.c (797): shutting down the read side of socket 896 [7.3.1.22:24488 -> 7.3.1.22:8010] [debug] jk_shutdown_socket::jk_connect.c (808): Shutdown socket 896 [7.3.1.22:24488 -> 7.3.1.22:8010] and read 0 lingering bytes in 0 sec. [debug] ajp_send_request::jk_ajp_common.c (1496): (Tomcat7B) failed sending request, socket 896 is not connected any more (errno=-10000) [info] ajp_send_request::jk_ajp_common.c (1567): (Tomcat7B) all endpoints are disconnected, detected by connect check (2), cping (0), send (0) [debug] jk_open_socket::jk_connect.c (484): socket TCP_NODELAY set to On [debug] jk_open_socket::jk_connect.c (608): trying to connect socket 896 to 7.3.1.22:8010 [info] jk_open_socket::jk_connect.c (626): connect to 7.3.1.22:8010 failed (errno=61) [info] ajp_connect_to_endpoint::jk_ajp_common.c (959): Failed opening socket to (7.3.1.22:8010) (errno=61) [error] ajp_send_request::jk_ajp_common.c (1578): (Tomcat7B) connecting to backend failed. 
Tomcat is probably not started or is listening on the wrong port (errno=61) [info] ajp_service::jk_ajp_common.c (2543): (Tomcat7B) sending request to tomcat failed (recoverable), because of error during request sending (attempt=1) [debug] ajp_service::jk_ajp_common.c (2400): retry 1, sleeping for 100 ms before retrying [debug] ajp_send_request::jk_ajp_common.c (1572): (Tomcat7B) all endpoints are disconnected. [debug] jk_open_socket::jk_connect.c (484): socket TCP_NODELAY set to On [debug] jk_open_socket::jk_connect.c (608): trying to connect socket 896 to 7.3.1.22:8010 [info] jk_open_socket::jk_connect.c (626): connect to 7.3.1.22:8010 failed (errno=61) [info] ajp_connect_to_endpoint::jk_ajp_common.c (959): Failed opening socket to (7.3.1.22:8010) (errno=61) [error] ajp_send_request::jk_ajp_common.c (1578): (Tomcat7B) connecting to backend failed. Tomcat is probably not started or is listening on the wrong port (errno=61) [info] ajp_service::jk_ajp_common.c (2543): (Tomcat7B) sending request to tomcat failed (recoverable), because of error during request sending (attempt=2) [error] ajp_service::jk_ajp_common.c (2562): (Tomcat7B) connecting to tomcat failed. [debug] ajp_reset_endpoint::jk_ajp_common.c (757): (Tomcat7B) resetting endpoint with socket -1 (socket shutdown) [debug] ajp_done::jk_ajp_common.c (3013): recycling connection pool slot=0 for worker Tomcat7B [debug] service::jk_lb_worker.c (1374): worker Tomcat7B escalating local error to global error [info] service::jk_lb_worker.c (1388): service failed, worker Tomcat7B is in error state [debug] service::jk_lb_worker.c (1399): recoverable error... will try to recover on other worker [debug] get_most_suitable_worker::jk_lb_worker.c (946): searching worker for partial sessionid 89569C584CC4F58740D649C4BE655D36.Tomcat7B [debug] get_most_suitable_worker::jk_lb_worker.c (954): searching worker for session route Tomcat7B [debug] get_most_suitable_worker::jk_lb_worker.c (1001): found best worker Tomcat7A (Tomcat7A) using method 'Request' [debug] service::jk_lb_worker.c (1161): service worker=Tomcat7A route=Tomcat7B [debug] ajp_get_endpoint::jk_ajp_common.c (3096): acquired connection pool slot=0 after 0 retries [debug] ajp_marshal_into_msgb::jk_ajp_common.c (605): ajp marshaling done [debug] ajp_service::jk_ajp_common.c (2379): processing Tomcat7A with 2 retries [debug] ajp_send_request::jk_ajp_common.c (1572): (Tomcat7A) all endpoints are disconnected. [debug] jk_open_socket::jk_connect.c (484): socket TCP_NODELAY set to On [debug] jk_open_socket::jk_connect.c (608): trying to connect socket 896 to 7.3.1.22:8009 [debug] jk_open_socket::jk_connect.c (634): socket 896 [7.3.1.22:24496 -> 7.3.1.22:8009] connected [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): sending to ajp13 pos=4 len=615 max=8192 [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0000 .4.c....HTTP/1.1 [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0010 .../Accounter/pr [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0020 intFrameSet.jhtm [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0030 l...::1...::1... [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0040 localhost..P.... [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0050 ...Keep-Alive... 
[debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0060 ..0....rimage/jp [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0070 eg,.image/gif,.i [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0080 mage/pjpeg,.appl [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0090 ication/x-ms-app [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 00a0 lication,.applic [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 00b0 ation/xaml+xml,. [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 00c0 application/x-ms [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 00d0 -xbap,.*/*...Acc [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 00e0 ept-Encoding...g [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 00f0 zip,.deflate...A [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0100 ccept-Language.. [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0110 .nb-NO....]Usern [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0120 ame=NA_jose.mati [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0130 as_AT_addenergy. [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0140 no;.JSESSIONID=8 [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0150 9569C584CC4F5874 [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0160 0D649C4BE655D36. [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0170 Tomcat7B.....loc [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0180 alhost.....http: [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0190 //localhost/Acco [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 01a0 unter/NemsAccoun [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 01b0 ter.jhtml....uMo [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 01c0 zilla/4.0.(compa [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 01d0 tible;.MSIE.8.0; [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 01e0 .Windows.NT.6.1; [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 01f0 .WOW64;.Trident/ [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0200 4.0;.SLCC2;..NET [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0210 .CLR.2.0.50727;. [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0220 .NET4.0C;..NET4. [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0230 0E)............F [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0240 rameName=Reports [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0250 _CS_EUETS....Tom [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0260 cat7B........... 
[debug] ajp_send_request::jk_ajp_common.c (1632): (Tomcat7A) request body to send 0 - request body to resend 0 [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): received from ajp13 pos=0 len=238 max=8192 [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0000 .....Moved.Tempo [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0010 rarily......OJSE [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0020 SSIONID=6A2507A4 [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0030 626F698EC74A733C [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0040 DBA7D9FE.Tomcat7 [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0050 A;.Path=/Account [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0060 er;.HttpOnly...P [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0070 ragma...no-cache [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0080 ...Cache-Control [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0090 ...no-cache....& [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 00a0 http://localhost [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 00b0 /Accounter/login [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 00c0 .jhtml.....text/ [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 00d0 html;charset=ISO [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 00e0 -8859-1.....0... [debug] ajp_unmarshal_response::jk_ajp_common.c (660): status = 302 [debug] ajp_unmarshal_response::jk_ajp_common.c (667): Number of headers is = 6 [debug] ajp_unmarshal_response::jk_ajp_common.c (723): Header[0] [Set-Cookie] = [JSESSIONID=6A2507A4626F698EC74A733CDBA7D9FE.Tomcat7A; Path=/Accounter; HttpOnly] [debug] ajp_unmarshal_response::jk_ajp_common.c (723): Header[1] [Pragma] = [no-cache] [debug] ajp_unmarshal_response::jk_ajp_common.c (723): Header[2] [Cache-Control] = [no-cache] [debug] ajp_unmarshal_response::jk_ajp_common.c (723): Header[3] [Location] = [http://localhost/Accounter/login.jhtml] [debug] ajp_unmarshal_response::jk_ajp_common.c (723): Header[4] [Content-Type] = [text/html;charset=ISO-8859-1] [debug] ajp_unmarshal_response::jk_ajp_common.c (723): Header[5] [Content-Length] = [0] [debug] start_response::jk_isapi_plugin.c (963): Starting response for URI '/Accounter/printFrameSet.jhtml' (protocol HTTP/1.1) [debug] start_response::jk_isapi_plugin.c (1063): Not using Keep-Alive [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): received from ajp13 pos=0 len=2 max=8192 [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0000 ................ 
[debug] ajp_process_callback::jk_ajp_common.c (1943): AJP13 protocol: Reuse is OK [debug] ajp_reset_endpoint::jk_ajp_common.c (757): (Tomcat7A) resetting endpoint with socket 896 [debug] ajp_done::jk_ajp_common.c (3013): recycling connection pool slot=0 for worker Tomcat7A [debug] HttpExtensionProc::jk_isapi_plugin.c (2211): service() returned OK [debug] HttpFilterProc::jk_isapi_plugin.c (1851): Filter started [debug] map_uri_to_worker_ext::jk_uri_worker_map.c (1036): Attempting to map URI '/localhost/Accounter/login.jhtml' from 8 maps [debug] find_match::jk_uri_worker_map.c (850): Attempting to map context URI '/Accounter/servlet/*=balancer' source 'uriworkermap' [debug] find_match::jk_uri_worker_map.c (850): Attempting to map context URI '/Accounter/ws/*=balancer' source 'uriworkermap' [debug] find_match::jk_uri_worker_map.c (850): Attempting to map context URI '/Accounter/nems*.pdf=balancer' source 'uriworkermap' [debug] find_match::jk_uri_worker_map.c (850): Attempting to map context URI '/Accounter/*.service=balancer' source 'uriworkermap' [debug] find_match::jk_uri_worker_map.c (850): Attempting to map context URI '/Accounter/*.jhtml=balancer' source 'uriworkermap' [debug] find_match::jk_uri_worker_map.c (850): Attempting to map context URI '/Accounter/*.json=balancer' source 'uriworkermap' [debug] find_match::jk_uri_worker_map.c (850): Attempting to map context URI '/jkmanager=status' source 'uriworkermap' [debug] find_match::jk_uri_worker_map.c (850): Attempting to map context URI '/Accounter/servlet/*=balancer' source 'uriworkermap' [debug] find_match::jk_uri_worker_map.c (850): Attempting to map context URI '/Accounter/ws/*=balancer' source 'uriworkermap' [debug] find_match::jk_uri_worker_map.c (850): Attempting to map context URI '/Accounter/nems*.pdf=balancer' source 'uriworkermap' [debug] find_match::jk_uri_worker_map.c (850): Attempting to map context URI '/Accounter/*.service=balancer' source 'uriworkermap' [debug] find_match::jk_uri_worker_map.c (850): Attempting to map context URI '/Accounter/*.jhtml=balancer' source 'uriworkermap' [debug] find_match::jk_uri_worker_map.c (863): Found a wildchar match '/Accounter/*.jhtml=balancer' [debug] HttpFilterProc::jk_isapi_plugin.c (1938): check if [/Accounter/login.jhtml] points to the web-inf directory [debug] HttpFilterProc::jk_isapi_plugin.c (1954): [/Accounter/login.jhtml] is a servlet url - should redirect to balancer [debug] HttpFilterProc::jk_isapi_plugin.c (1994): fowarding escaped URI [/Accounter/login.jhtml] [debug] init_ws_service::jk_isapi_plugin.c (2982): Reading extension header HTTP_TOMCATWORKER0000000180000000: balancer [debug] init_ws_service::jk_isapi_plugin.c (2983): Reading extension header HTTP_TOMCATWORKERIDX0000000180000000: 5 [debug] init_ws_service::jk_isapi_plugin.c (2984): Reading extension header HTTP_TOMCATURI0000000180000000: /Accounter/login.jhtml [debug] init_ws_service::jk_isapi_plugin.c (2985): Reading extension header HTTP_TOMCATQUERY0000000180000000: (null) [debug] init_ws_service::jk_isapi_plugin.c (3040): Applying service extensions [debug] init_ws_service::jk_isapi_plugin.c (3298): Service protocol=HTTP/1.1 method=GET host=::1 addr=::1 name=localhost port=80 auth= user= uri=/Accounter/login.jhtml [debug] init_ws_service::jk_isapi_plugin.c (3310): Service request headers=9 attributes=0 chunked=no content-length=0 available=0 [debug] wc_get_worker_for_name::jk_worker.c (116): found a worker balancer [debug] HttpExtensionProc::jk_isapi_plugin.c (2188): got a worker for name balancer [debug] 
service::jk_lb_worker.c (1118): service sticky_session=1 id='6A2507A4626F698EC74A733CDBA7D9FE.Tomcat7A' [debug] get_most_suitable_worker::jk_lb_worker.c (946): searching worker for partial sessionid 6A2507A4626F698EC74A733CDBA7D9FE.Tomcat7A [debug] get_most_suitable_worker::jk_lb_worker.c (954): searching worker for session route Tomcat7A [debug] get_most_suitable_worker::jk_lb_worker.c (968): found worker Tomcat7A (Tomcat7A) for route Tomcat7A and partial sessionid 6A2507A4626F698EC74A733CDBA7D9FE.Tomcat7A [debug] service::jk_lb_worker.c (1161): service worker=Tomcat7A route=Tomcat7A [debug] ajp_get_endpoint::jk_ajp_common.c (3096): acquired connection pool slot=0 after 0 retries [debug] ajp_marshal_into_msgb::jk_ajp_common.c (605): ajp marshaling done [debug] ajp_service::jk_ajp_common.c (2379): processing Tomcat7A with 2 retries [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): sending to ajp13 pos=4 len=577 max=8192 [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0000 .4.=....HTTP/1.1 [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0010 .../Accounter/lo [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0020 gin.jhtml...::1. [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0030 ..::1...localhos [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0040 t..P.......Keep- [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0050 Alive.....0....r [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0060 image/jpeg,.imag [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0070 e/gif,.image/pjp [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0080 eg,.application/ [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0090 x-ms-application [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 00a0 ,.application/xa [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 00b0 ml+xml,.applicat [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 00c0 ion/x-ms-xbap,.* [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 00d0 /*...Accept-Enco [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 00e0 ding...gzip,.def [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 00f0 late...Accept-La [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0100 nguage...nb-NO.. [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0110 ..]Username=NA_j [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0120 ose.matias_AT_ad [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0130 denergy.no;.JSES [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0140 SIONID=6A2507A46 [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0150 26F698EC74A733CD [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0160 BA7D9FE.Tomcat7A [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0170 .....localhost.. [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0180 ...http://localh [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0190 ost/Accounter/Ne [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 01a0 msAccounter.jhtm [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 01b0 l....uMozilla/4. 
[debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 01c0 0.(compatible;.M [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 01d0 SIE.8.0;.Windows [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 01f0 Trident/4.0;.SLC [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 01e0 .NT.6.1;.WOW64;. [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0200 C2;..NET.CLR.2.0 [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0210 .50727;..NET4.0C [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0220 ;..NET4.0E)..... [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0230 .......Tomcat7A. [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0240 ................ [debug] ajp_send_request::jk_ajp_common.c (1621): (Tomcat7A) Statistics about invalid connections: connect check (0), cping (0), send (0) [debug] ajp_send_request::jk_ajp_common.c (1632): (Tomcat7A) request body to send 0 - request body to resend 0 [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): received from ajp13 pos=0 len=135 max=8192 [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0000 .....OK.....Prag [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0010 ma...no-cache... [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0020 Expires...Thu,.0 [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0030 1.Jan.1970.00:00 [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0040 :00.GMT...Cache- [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0050 Control...no-cac [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0060 he...Cache-Contr [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0070 ol...no-store... [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0080 ..2995..........


  • PSTN Trunk TDM400P Install on Asterisk / Trixbox

    - by Jona
    Hey All, I'm trying to get a TDM400P card with an FXO module to connect to our PSTN line. The card is correctly detected by Linux:

[trixbox1.localdomain asterisk]# lspci
00:09.0 Communication controller: Tiger Jet Network Inc. Tiger3XX Modem/ISDN interface

I've run setup-pstn, which produces the following output:

[trixbox1.localdomain ~]# setup-pstn
--------------------------------------------------------------
Detecting PSTN cards and USB PSTN Devices
--------------------------------------------------------------
Hardware present!
STOPPING ASTERISK
Asterisk Stopped
STOPPING FOP SERVER
FOP Server Stopped
Unloading DAHDI hardware modules: done
Loading DAHDI hardware modules:
wct4xxp: [ OK ]
wcte12xp: [ OK ]
wct1xxp: [ OK ]
wcte11xp: [ OK ]
wctdm24xxp: [ OK ]
opvxa1200: [ OK ]
wcfxo: [ OK ]
wctdm: [ OK ]
wcb4xxp: [ OK ]
wctc4xxp: [ OK ]
xpp_usb: [ OK ]
Running dahdi_cfg: [ OK ]
SETTING FILE PERMISSIONS
Permissions OK
STARTING ASTERISK
Asterisk Started
STARTING FOP SERVER
FOP Server Started
Chan Extension Context   Language MOH Interpret Blocked State
pseudo         default   en       default                In Service
1              from-pstn en       default                In Service

dahdi_scan returns:

dahdi_scan
[1]
active=yes
alarms=OK
description=Wildcard TDM400P REV I Board 5
name=WCTDM/4
manufacturer=Digium
devicetype=Wildcard TDM400P REV I
location=PCI Bus 00 Slot 10
basechan=1
totchans=4
irq=209
type=analog
port=1,FXO
port=2,none
port=3,none
port=4,none

And Asterisk can see the channel:

trixbox1*CLI> dahdi show channel 1
Channel: 1
File Descriptor: 14
Span: 1
Extension:
Dialing: no
Context: from-pstn
Caller ID:
Calling TON: 0
Caller ID name:
Mailbox: none
Destroy: 0
InAlarm: 1
Signalling Type: FXS Kewlstart
Radio: 0
Owner: <None>
Real: <None>
Callwait: <None>
Threeway: <None>
Confno: -1
Propagated Conference: -1
Real in conference: 0
DSP: no
Busy Detection: no
TDD: no
Relax DTMF: no
Dialing/CallwaitCAS: 0/0
Default law: ulaw
Fax Handled: no
Pulse phone: no
DND: no
Echo Cancellation: 128 taps (unless TDM bridged), currently OFF
Actual Confinfo: Num/0, Mode/0x0000
Actual Confmute: No
Hookstate (FXS only): Onhook

A cat of /etc/asterisk/dahdi-channels.conf shows:

[trixbox1.localdomain ~]# cat /etc/asterisk/dahdi-channels.conf
; Autogenerated by /usr/sbin/dahdi_genconf on Tue May 25 17:45:13 2010
; If you edit this file and execute /usr/sbin/dahdi_genconf again,
; your manual changes will be LOST.
; Dahdi Channels Configurations (chan_dahdi.conf)
;
; This is not intended to be a complete chan_dahdi.conf. Rather, it is intended
; to be #include-d by /etc/chan_dahdi.conf that will include the global settings
;
; Span 1: WCTDM/4 "Wildcard TDM400P REV I Board 5" (MASTER)
;;; line="1 WCTDM/4/0 FXSKS (SWEC: MG2)"
signalling=fxs_ks
callerid=asreceived
group=0
context=from-pstn
channel => 1
callerid=
group=
context=default

I have configured a "ZAP Trunk (DAHDI compatibility Mode)" with the ZAP identifier 1 and an outbound route, but whenever I try to make an external call via it I get the "All Circuits are busy now, please try your call again later" message. I have one outbound route, which uses the dial pattern 9|. and the trunk Zap/1, and one Zap trunk, which uses the Zap identifier (trunk name) 1 and has no dial rules. The FXO module is directly connected to our phone line from BT via a BT-RJ11 cable. 
When running tail -f /var/log/asterisk/full and placing a call I get the following output: [May 26 11:10:52] VERBOSE[2723] logger.c: == Using SIP RTP TOS bits 184 [May 26 11:10:52] VERBOSE[2723] logger.c: == Using SIP RTP CoS mark 5 [May 26 11:10:52] VERBOSE[2723] logger.c: == Using SIP VRTP TOS bits 136 [May 26 11:10:52] VERBOSE[2723] logger.c: == Using SIP VRTP CoS mark 6 [May 26 11:10:52] WARNING[2661] pbx.c: FONALITY: This thread has already held the conlock, skip locking [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [901483890915@from-internal:1] Macro("SIP/801-b7ce8c28", "user-callerid,SKIPTTL,") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:1] Set("SIP/801-b7ce8c28", "AMPUSER=801") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:2] GotoIf("SIP/801-b7ce8c28", "0?report") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:3] ExecIf("SIP/801-b7ce8c28", "1?Set(REALCALLERIDNUM=801)") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:4] Set("SIP/801-b7ce8c28", "AMPUSER=801") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:5] Set("SIP/801-b7ce8c28", "AMPUSERCIDNAME=Jona") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:6] GotoIf("SIP/801-b7ce8c28", "0?report") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:7] Set("SIP/801-b7ce8c28", "AMPUSERCID=801") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:8] Set("SIP/801-b7ce8c28", "CALLERID(all)="Jona" <801>") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:9] Set("SIP/801-b7ce8c28", "REALCALLERIDNUM=801") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:10] ExecIf("SIP/801-b7ce8c28", "0?Set(CHANNEL(language)=)") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:11] GotoIf("SIP/801-b7ce8c28", "1?continue") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-user-callerid,s,20) [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:20] NoOp("SIP/801-b7ce8c28", "Using CallerID "Jona" <801>") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [901483890915@from-internal:2] Set("SIP/801-b7ce8c28", "_NODEST=") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [901483890915@from-internal:3] Macro("SIP/801-b7ce8c28", "record-enable,801,OUT,") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-record-enable:1] GotoIf("SIP/801-b7ce8c28", "1?check") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-record-enable,s,4) [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-record-enable:4] AGI("SIP/801-b7ce8c28", "recordingcheck,20100526-111052,1274868652.1") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Launched AGI Script /var/lib/asterisk/agi-bin/recordingcheck [May 26 11:10:52] VERBOSE[2858] logger.c: recordingcheck,20100526-111052,1274868652.1: Outbound recording not enabled [May 26 11:10:52] VERBOSE[2858] logger.c: -- <SIP/801-b7ce8c28>AGI Script recordingcheck completed, returning 0 [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-record-enable:5] MacroExit("SIP/801-b7ce8c28", "") in new stack [May 26 11:10:52] VERBOSE[2858] 
logger.c: -- Executing [901483890915@from-internal:4] Macro("SIP/801-b7ce8c28", "dialout-trunk,1,01483890915,") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:1] Set("SIP/801-b7ce8c28", "DIAL_TRUNK=1") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:2] GosubIf("SIP/801-b7ce8c28", "0?sub-pincheck,s,1") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:3] GotoIf("SIP/801-b7ce8c28", "0?disabletrunk,1") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:4] Set("SIP/801-b7ce8c28", "DIAL_NUMBER=01483890915") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:5] Set("SIP/801-b7ce8c28", "DIAL_TRUNK_OPTIONS=tr") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:6] Set("SIP/801-b7ce8c28", "OUTBOUND_GROUP=OUT_1") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:7] GotoIf("SIP/801-b7ce8c28", "1?nomax") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-dialout-trunk,s,9) [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:9] GotoIf("SIP/801-b7ce8c28", "0?skipoutcid") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:10] Set("SIP/801-b7ce8c28", "DIAL_TRUNK_OPTIONS=") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:11] Macro("SIP/801-b7ce8c28", "outbound-callerid,1") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:1] ExecIf("SIP/801-b7ce8c28", "0?Set(CALLERPRES()=)") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:2] ExecIf("SIP/801-b7ce8c28", "0?Set(REALCALLERIDNUM=801)") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:3] GotoIf("SIP/801-b7ce8c28", "1?normcid") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-outbound-callerid,s,6) [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:6] Set("SIP/801-b7ce8c28", "USEROUTCID=") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:7] Set("SIP/801-b7ce8c28", "EMERGENCYCID=") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:8] Set("SIP/801-b7ce8c28", "TRUNKOUTCID=") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:9] GotoIf("SIP/801-b7ce8c28", "1?trunkcid") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-outbound-callerid,s,12) [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:12] ExecIf("SIP/801-b7ce8c28", "0?Set(CALLERID(all)=)") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:13] ExecIf("SIP/801-b7ce8c28", "0?Set(CALLERID(all)=)") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:14] ExecIf("SIP/801-b7ce8c28", "0?Set(CALLERPRES()=prohib_passed_screen)") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:12] ExecIf("SIP/801-b7ce8c28", "0?AGI(fixlocalprefix)") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:13] Set("SIP/801-b7ce8c28", "OUTNUM=01483890915") in new stack [May 26 11:10:52] VERBOSE[2858] 
logger.c: -- Executing [s@macro-dialout-trunk:14] Set("SIP/801-b7ce8c28", "custom=DAHDI/1") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:15] ExecIf("SIP/801-b7ce8c28", "0?Set(DIAL_TRUNK_OPTIONS=M(setmusic^))") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:16] Macro("SIP/801-b7ce8c28", "dialout-trunk-predial-hook,") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk-predial-hook:1] MacroExit("SIP/801-b7ce8c28", "") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:17] GotoIf("SIP/801-b7ce8c28", "0?bypass,1") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:18] GotoIf("SIP/801-b7ce8c28", "0?customtrunk") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:19] Dial("SIP/801-b7ce8c28", "DAHDI/1/01483890915,300,") in new stack [May 26 11:10:52] WARNING[2858] app_dial.c: Unable to create channel of type 'DAHDI' (cause 0 - Unknown) [May 26 11:10:52] VERBOSE[2858] logger.c: == Everyone is busy/congested at this time (1:0/0/1) [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:20] Goto("SIP/801-b7ce8c28", "s-CHANUNAVAIL,1") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-dialout-trunk,s-CHANUNAVAIL,1) [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s-CHANUNAVAIL@macro-dialout-trunk:1] GotoIf("SIP/801-b7ce8c28", "1?noreport") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-dialout-trunk,s-CHANUNAVAIL,3) [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s-CHANUNAVAIL@macro-dialout-trunk:3] NoOp("SIP/801-b7ce8c28", "TRUNK Dial failed due to CHANUNAVAIL (hangupcause: 0) - failing through to other trunks") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [901483890915@from-internal:5] Macro("SIP/801-b7ce8c28", "outisbusy,") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outisbusy:1] Playback("SIP/801-b7ce8c28", "all-circuits-busy-now,noanswer") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- <SIP/801-b7ce8c28> Playing 'all-circuits-busy-now.ulaw' (language 'en') [May 26 11:10:54] VERBOSE[2858] logger.c: -- Executing [s@macro-outisbusy:2] Playback("SIP/801-b7ce8c28", "pls-try-call-later,noanswer") in new stack [May 26 11:10:54] VERBOSE[2858] logger.c: -- <SIP/801-b7ce8c28> Playing 'pls-try-call-later.ulaw' (language 'en') [May 26 11:10:54] WARNING[2661] pbx.c: FONALITY: This thread has already held the conlock, skip locking [May 26 11:10:54] VERBOSE[2858] logger.c: == Spawn extension (macro-outisbusy, s, 2) exited non-zero on 'SIP/801-b7ce8c28' in macro 'outisbusy' [May 26 11:10:54] VERBOSE[2858] logger.c: == Spawn extension (from-internal, 901483890915, 5) exited non-zero on 'SIP/801-b7ce8c28' [May 26 11:10:54] VERBOSE[2858] logger.c: -- Executing [h@from-internal:1] Macro("SIP/801-b7ce8c28", "hangupcall") in new stack [May 26 11:10:54] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:1] ResetCDR("SIP/801-b7ce8c28", "vw") in new stack [May 26 11:10:54] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:2] NoCDR("SIP/801-b7ce8c28", "") in new stack [May 26 11:10:54] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:3] GotoIf("SIP/801-b7ce8c28", "1?skiprg") in new stack [May 26 11:10:54] VERBOSE[2858] logger.c: -- Goto (macro-hangupcall,s,6) [May 26 11:10:55] VERBOSE[2858] logger.c: -- Executing 
[s@macro-hangupcall:6] GotoIf("SIP/801-b7ce8c28", "1?skipblkvm") in new stack [May 26 11:10:55] VERBOSE[2858] logger.c: -- Goto (macro-hangupcall,s,9) [May 26 11:10:55] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:9] GotoIf("SIP/801-b7ce8c28", "1?theend") in new stack [May 26 11:10:55] VERBOSE[2858] logger.c: -- Goto (macro-hangupcall,s,11) [May 26 11:10:55] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:11] Hangup("SIP/801-b7ce8c28", "") in new stack [May 26 11:10:55] VERBOSE[2858] logger.c: == Spawn extension (macro-hangupcall, s, 11) exited non-zero on 'SIP/801-b7ce8c28' in macro 'hangupcall' [May 26 11:10:55] VERBOSE[2858] logger.c: == Spawn extension (from-internal, h, 1) exited non-zero on 'SIP/801-b7ce8c28' I'm guessing I've missed a configuration step somewhere but no idea where, any help greatly appreciated.


  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days.

In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: the scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like 'Doing real-world TDD in .NET, with massive use of third-party add-ins’.

This is because I feel that there is a more general statement about Test-driven development to make: It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and I myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run; it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spend my time exclusively on stating the obvious…

So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude.

Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather, I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider doing professional work without having this add-in installed...

The three parts of a software component

Before I go into some details, I should first describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:

First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. Either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments.

The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice.

The third part then finally is the production code itself. Its development is entirely driven by the requirements and their executable formulation. This is the delivery; the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how the product is developed; he is only interested in the fact that it is developed as cost-effectively as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article…

An example

To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here…

The requirement

As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow, and using them automatically produces highly componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with:

namespace Calculator
{
    /// <summary>
    /// Defines a very simple calculator component for demo purposes.
    /// </summary>
    public interface ICalculator
    {
        /// <summary>
        /// Gets the result of the last successful operation.
        /// </summary>
        /// <value>The last result.</value>
        /// <remarks>
        /// Will be <see langword="null" /> before the first successful operation.
        /// </remarks>
        double? LastResult { get; }

    } // interface ICalculator

} // namespace Calculator

So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here:

1. Starting this way gives me a method signature, which allows me to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process.

2. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation as about coding. The documentation must completely describe the behavior of the documented element.

3. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason than that it is very simple to use.

The ‘Red’ (pt. 1)

First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template) and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following:

namespace Calculator.Test
{
    [TestFixture]
    public class CalculatorTest
    {
        private readonly ServiceContainer container = new ServiceContainer();

        [Test]
        public void CalculatorLastResultIsInitiallyNull()
        {
            ICalculator calculator = container.GetService<ICalculator>();

            Assert.IsNull(calculator.LastResult);
        }

    } // class CalculatorTest

} // namespace Calculator.Test

This is basically the executable formulation of (part of) what the interface definition states.

Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that; it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more. Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason.

Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output:

So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator-derived type. Next, implement the interface members. And finally, move the new class to its own file.

So far my ‘work’ was six mouse clicks long; the only thing that’s left to do manually here is to add the IoC-specific wiring declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces. This is what my Calculator class looks like as of now:

using System;
using LinFu.IoC.Configuration;

namespace Calculator
{
    [Implements(typeof(ICalculator))]
    internal class Calculator : ICalculator
    {
        public double? LastResult
        {
            get
            {
                throw new NotImplementedException();
            }
        }
    }
}

Back to the test fixture, we have to put our IoC container to work:

[TestFixture]
public class CalculatorTest
{
    #region Fields

    private readonly ServiceContainer container = new ServiceContainer();

    #endregion // Fields

    #region Setup/TearDown

    [FixtureSetUp]
    public void FixtureSetUp()
    {
        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");
    }

    ...

Because I have an R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more…

The ‘Red’ (pt. 2)

Now, the execution of the above test gives the following result:

This time, the test outcome tells me that the method under test is called. 
And this is the point where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part…

The ‘Green’

Making the test green is quite trivial. Just make LastResult an automatic property:

[Implements(typeof(ICalculator))]
internal class Calculator : ICalculator
{
    public double? LastResult { get; private set; }
}

One more round…

Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:

        ...

        /// <summary>
        /// Adds the specified operands.
        /// </summary>
        /// <param name="operand1">The operand1.</param>
        /// <param name="operand2">The operand2.</param>
        /// <returns>The result of the addition.</returns>
        /// <exception cref="ArgumentException">
        /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
        /// -- or --<br/>
        /// Argument <paramref name="operand2"/> is &lt; 0.
        /// </exception>
        double Add(double operand1, double operand2);

    } // interface ICalculator

A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:

Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location. This way, a team always has first-class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding things up and avoiding typos: you have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…)

Back to our Calculator again: Two more R# clicks implement the Add() skeleton:

        ...

        public double Add(double operand1, double operand2)
        {
            throw new NotImplementedException();
        }

    } // class Calculator

As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test:

[Test]
[Row(-0.5, 2)]
public void AddThrowsOnNegativeOperands(double operand1, double operand2)
{
    ICalculator calculator = container.GetService<ICalculator>();

    Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
}

As you can see, I’m using a data-driven unit test method here, mainly for two reasons: First, because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose - I will only have to add another Row attribute to the existing one. Second, from the test report below, you can see that the argument values are explicitly printed out.
This can be a valuable documentation feature even when everything is green: one can quickly review exactly what values were tested - the complete Gallio HTML report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example).

Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle:

// in CalculatorTest:

[Test]
[Row(-0.5, 2)]
[Row(295, -123)]
public void AddThrowsOnNegativeOperands(double operand1, double operand2)
{
    ICalculator calculator = container.GetService<ICalculator>();

    Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
}

// in Calculator:

public double Add(double operand1, double operand2)
{
    if (operand1 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand1");
    }

    if (operand2 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand2");
    }

    throw new NotImplementedException();
}

So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is treated here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that:

[Test]
[Row(1, 1, 2)]
public void TestAdd(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Add(operand1, operand2);

    Assert.AreEqual(expectedResult, result);
}

Again, I’m regularly using row-based test methods for these kinds of unit tests. The pattern shown above has proved extremely helpful in my development work; I call it the Defined-Input/Expected-Output test idiom: you define your input arguments together with the expected method result. There are two major benefits from that way of testing: First, in the course of refining a method, it’s very likely that additional test cases will come up. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Second, remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven.)
So your test method might look something like this in the end:

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 2)]
[Row(0, 999999999, 999999999)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, double.MaxValue)]
[Row(4, double.MaxValue - 2.5, double.MaxValue)]
public void TestAdd(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Add(operand1, operand2);

    Assert.AreEqual(expectedResult, result);
}

And this will produce the following HTML report (with Gallio):

Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review…

The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… (a sketch of what such a test might look like appears at the end of this article).

And finally: Refactor (for the right reasons)

To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production):

// CalculatorTest.cs:

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 0)]
[Row(0, 999999999, -999999999)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, -double.MaxValue)]
[Row(4, double.MaxValue - 2.5, -double.MaxValue)]
public void TestSubtract(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Subtract(operand1, operand2);

    Assert.AreEqual(expectedResult, result);
}

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 0)]
[Row(0, 999999999, -999999999)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, -double.MaxValue)]
[Row(4, double.MaxValue - 2.5, -double.MaxValue)]
public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    calculator.Subtract(operand1, operand2);

    Assert.AreEqual(expectedResult, calculator.LastResult);
}

...

// ICalculator.cs:

/// <summary>
/// Subtracts the specified operands.
/// </summary>
/// <param name="operand1">The operand1.</param>
/// <param name="operand2">The operand2.</param>
/// <returns>The result of the subtraction.</returns>
/// <exception cref="ArgumentException">
/// Argument <paramref name="operand1"/> is &lt; 0.<br/>
/// -- or --<br/>
/// Argument <paramref name="operand2"/> is &lt; 0.
/// </exception>
double Subtract(double operand1, double operand2);

...

// Calculator.cs:

public double Subtract(double operand1, double operand2)
{
    if (operand1 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand1");
    }

    if (operand2 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand2");
    }

    return (this.LastResult = operand1 - operand2).Value;
}

Obviously, the argument validation that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of lines of production code, we do an Extract Method refactoring.
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#. Having done that, our production code finally looks like this:

using System;
using LinFu.IoC.Configuration;

namespace Calculator
{
    [Implements(typeof(ICalculator))]
    internal class Calculator : ICalculator
    {
        #region ICalculator

        public double? LastResult { get; private set; }

        public double Add(double operand1, double operand2)
        {
            ThrowIfOneOperandIsInvalid(operand1, operand2);

            return (this.LastResult = operand1 + operand2).Value;
        }

        public double Subtract(double operand1, double operand2)
        {
            ThrowIfOneOperandIsInvalid(operand1, operand2);

            return (this.LastResult = operand1 - operand2).Value;
        }

        #endregion // ICalculator

        #region Implementation (Helper)

        private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)
        {
            if (operand1 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand1");
            }

            if (operand2 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand2");
            }
        }

        #endregion // Implementation (Helper)

    } // class Calculator

} // namespace Calculator

But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this:

“STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add?”

Derick immediately answers his own question:

“So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it.”

I couldn’t state it more precisely. From my personal perspective, I’d add the following: you have to keep in mind that real-world software systems are usually quite large, and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic about the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code.
So code quality/readability really makes a HUGE difference for me – sometimes it can even be the difference between project success and failure…

Conclusions

The above described development process emerged over the years, and there were mainly two things that guided its evolution (you might call them eternal principles, personal beliefs, or anything in between):

- Test-driven development is the normal, natural way of writing software; code-first is the exception. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…).
- It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to be going in the right direction…) The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is.

With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity – without sacrificing the principles of TDD (more than I’d do either way…). As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable; in other words, roughly 60-80% of my code is test code. (This might sound heavy, but that is mainly due to the fact that software development standards are only beginning to evolve. Historically speaking, the software development profession is very young - still at its very beginning - and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio no longer sounds extraordinary…)

Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all of the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool - to ‘save’ a few hundred bucks - is just not acceptable and a very bad decision in business terms (though I have seen and heard exactly that quite a few times…). Production of high-quality products requires the usage of high-quality tools. This is a platitude that every craftsman knows…

The round-trip described here takes me about five to ten minutes in my real-world development practice. I guess that’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the ‘product’ manufactured this way is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development…

But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The development method described here might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g.
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences it might have…

Some last words

First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussion… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The above described method proved to be very “good enough” in my practical experience. But of course, I’m open to suggestions here…

My rationale for now is: if the test is initially red during the red-green-refactor cycle, the ‘right reason’ is that it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!
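As referenced above, the omitted LastResult test for Add() would presumably follow the same Defined-Input/Expected-Output idiom as TestSubtractGivesExpectedLastResult. A minimal sketch - assuming the same fixture and container wiring shown earlier, and not taken from the original article - might look like this:

// in CalculatorTest (sketch only; mirrors TestSubtractGivesExpectedLastResult):

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 2)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, double.MaxValue)]
public void TestAddGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
{
    // resolve the component via the IoC container, as in all the other tests
    ICalculator calculator = container.GetService<ICalculator>();

    calculator.Add(operand1, operand2);

    // LastResult is expected to hold the result of the last successful operation
    Assert.AreEqual(expectedResult, calculator.LastResult);
}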

    Read the article

  • Windows CE Chat Transcript (March 30, 2010)

    - by Bruce Eitman
    For those of you who missed the chat today, here is the raw transcript.   By raw, I mean that I copied and pasted the discussion without any edits. This is divided into two parts, the top part is the answers from the Microsoft Experts and the bottom part is the questions from the audience. Answers from Microsoft:   Karel Danihelka [MS] (Expert)[2010-3-30 12:2]: Hi everyone, my name is Karel Danihelka and I am developer in partner response team. Sing Wee [MS] (Expert)[2010-3-30 12:2]: Hi, I'm Sing Wee, part of the CoreOS/BSP Test Team. GLanger_MS (Expert)[2010-3-30 12:2]: Hi, I'm Glen Langer, program manager on the Core Team. Karel Danihelka [MS] (Expert)[2010-3-30 12:3]: Q: I need to implement hardware timers on my windows CE 6.0 device to trigger events at microsecond intervals. Where should i start? A: Until you are using CPU with GHz frequency your only chance is use interrupt handler and implement all funcionality there. But it will be really tricky and may reduce system performance. If period will be near to millisecond timeframe you can use normal thread wait for event pattern. Karel Danihelka [MS] (Expert)[2010-3-30 12:5]: Q: I want to partition my NAND Flash device. One partition to use for hive ragistry and the other for the apps and data. The only way to do it is programmatically or setting some registry values ? A: It need to be set in registry - generally you need mark this partition as boot partition. Karel Danihelka [MS] (Expert)[2010-3-30 12:7]: Q: My CPU is Intel celeron M processor 1Ghz. A: In this case you can try use normal approach - in interrupt handler return SYSINTR and start thread in device driver which will spin thread waiting on event attached to this SYSINTR. Karel Danihelka [MS] (Expert)[2010-3-30 12:7]: Q: If i need to implement it using interrupt handlers, What are all the files that I should look at? A: Good quesiton - I would recommend documentation and there was BSP development book to download for free. mikehall_ms (Moderator)[2010-3-30 12:8]: Q: Hi guys, what's the formal way to report bugs back to the core team / product team? The mechanism of calling the support phone number every time is really onerous and time-consuming. Is there another mechanism? A: Using product support is the formal way to report bugs/issues - Product support can then create an issue that can be tracked. Karel Danihelka [MS] (Expert)[2010-3-30 12:9]: Q: But the operation for creating the partitions ? A: This is tricky - if you will make it autopartition & autoformat it will be created by filesystem. But generally it depends on your boot loader. mikehall_ms (Moderator)[2010-3-30 12:10]: Q: Is Windows Phone 7 related to Windows CE? If so, can you tell me what version of Windows CE is the basis? Is it in fact the new version of Windows Mobile? A: At MIX 2010 Charlie Kindel presented a session that described some of the core technologies that make up Windows Phone 7 Series, including the underlying operating system (Windows CE) and the new ISV programming model based on Silverlight and .NET - check out the Mix Online Videos to get more information. davbo-msft (Moderator)[2010-3-30 12:10]: Q: Is Windows Phone 7 related to Windows CE? If so, can you tell me what version of Windows CE is the basis? Is it in fact the new version of Windows Mobile? A: This forum is to discussed released products in the industry. Windows Mobile & Windows CE are based on the same Windows CE Kernel/system. 
Windows CE is focused on deliverying the OS for embedded customers in the market where Windows Mobile is focused on deliverying compelling Windows Phone platform. davbo-msft (Moderator)[2010-3-30 12:11]: Q: Is Windows Phone 7 related to Windows CE? If so, can you tell me what version of Windows CE is the basis? Is it in fact the new version of Windows Mobile? A: http://en.wikipedia.org/wiki/Windows_CE Wikipedia gives a good breakdown of the version history. Travis Hobrla [MS] (Expert)[2010-3-30 12:13]: Q: I created a OS design with KITL and kernel debugger enabled. But I am unable to connect to the target for debugging. I am getting the following error when i try to connect with the device. PB Debugger Cannot initialize the Kernel Debugger. PB Debugger Debugger could not initialize connection. PB Debugger The Kernel Debugger is waiting to connect with target. PB Debugger The Kernel Debugger has been disconnected successfully. A: One possibility is that a rogue cesvchost.exe has co-opted the debugger. I am assuming this is CE 6.0? Can you try exiting visual studio and manually killing the cesvchost.exe process from the Task Manager? davbo-msft (Moderator)[2010-3-30 12:14]: Q: Hi guys, what's the formal way to report bugs back to the core team / product team? The mechanism of calling the support phone number every time is really onerous and time-consuming. Is there another mechanism? A: For info on contacting Microsoft support refer to the support page on the Embedded website: http://msdn.microsoft.com/en-us/windowsembedded/dd897633.aspx Sing Wee [MS] (Expert)[2010-3-30 12:16]: Q: Do u mean ISR/IST implementation? How can i register an interrupt? What kind of interrupt should i register? A: A good introduction to interrupts in WinCE 6.0 can be found here (aside from the documentation on MSDN): http://download.microsoft.com/download/9/c/f/9cffaa58-4000-48d6-a4b2-5fed9e4e6410/Chapter%206%20-%20Developing%20Device%20Drivers.pdf mikehall_ms (Moderator)[2010-3-30 12:16]: Q: What will be different in Windows Compact 7 from CE 6.0? A: Unfortunately we cannot discuss unreleased products on this chat - keep an eye on the Windows Embedded web site and blogs to keep up to date with product announcements. Travis Hobrla [MS] (Expert)[2010-3-30 12:16]: Q: I am using CE 6.0. There is no cesvchost process running in my system. A: What operating system are you using? Karel Danihelka [MS] (Expert)[2010-3-30 12:16]: Q: So...I have to modify file system code to create 2 partition at system startup ?!! I haven't understood.... A: You don't need to modify code, there are registry settings to achive this (look to documentation). But you may need to create partition table in boot loader. Unfortunatelly there isn't simple way how to do it. davbo-msft (Moderator)[2010-3-30 12:18]: Q: I would like to get a handle to a Silverlight screen section, is that possiable? A: Windows Embedded CE 6.0 R3 includes Sliverlight for Windows Embedded. Refer to New Features overview on the embedded web site. http://www.microsoft.com/windowsembedded/en-us/products/windowsce/default.mspx. Silverlight - The power of Silverlight brought to Windows Embedded CE to create rich applications and user interfaces is new part of Windows CE Embedded. mikehall_ms (Moderator)[2010-3-30 12:20]: Q: The link for developing device drivers is not working. can u please check that? 
A: http://msdn.microsoft.com/en-us/library/ms923714.aspx davbo-msft (Moderator)[2010-3-30 12:20]: Q: I would like to get a handle to a Silverlight screen section, is that possiable? A: Sorry misunderstood the question I thought you were asking if embedded CE could handle Silverlight. Please repost so that the question goes back into the active queue because once answered no way to put the status back to open. Travis Hobrla [MS] (Expert)[2010-3-30 12:21]: Q: sorry! Windows XP SP3 A: Can you try exiting VS2005 and confirming cesvchost.exe is not running, then renaming C:\Documents and Settings\USERNAME\Local Settings\Application Data\Microsoft\CoreCon\1.0 to 1.0_backup, then restarting VS2005? Sing Wee [MS] (Expert)[2010-3-30 12:24]: Q: Can I have the book's name please? A: I believe the downloadable version is related to the last link I sent. If you go to the following website, I believe you can download the whole thing: http://msdn.microsoft.com/en-us/windowsembedded/ce/cc294468.aspx davbo-msft (Moderator)[2010-3-30 12:24]: Q: If one has an image on a Silverlight page, it seems to be cached. How would one refresh that cache after changing the underlying image? A: change the URI of the image or use a writeable bitmap if they want to manually toggle the pixels Sing Wee [MS] (Expert)[2010-3-30 12:25]: A: Whoops, hit [ENTER] too early. On the right side, you'll see there an "Exam Preparation Kit" link that can be downloaded in several different languages. Sing Wee [MS] (Expert)[2010-3-30 12:25]: Q: Can I have the book's name please? A: Whoops, hit [ENTER] too early. On the right side, you'll see there an "Exam Preparation Kit" link that can be downloaded in several different languages. Karel Danihelka [MS] (Expert)[2010-3-30 12:26]: Q: I have a NAND Flash on my target device. On this flash I have the hive registry and an application.I have observed that when the NAND flash is fully, the system startup time is longer....is there a degradation of NAND use that influences the startup time ? Why ? A: Yes - on boot flash abstraction library (old one) read metadata from all sectors to rebuild physical - logical mapping table. davbo-msft (Moderator)[2010-3-30 12:27]: Q: I would like to get a handle to a Silverlight screen section, is that possiable? A: Need addition info on this question. Can you provide more details on what you are trying to do in Silverlight? davbo-msft (Moderator)[2010-3-30 12:27]: Q: If one has an image on a Silverlight page, it seems to be cached. How would one refresh that cache after changing the underlying image? A: Additonal Info: if you want to manually touch the pixels use WriteableBitmap if you want to use the underlying HWND then use IXRVisualHost::GetHWND() davbo-msft (Moderator)[2010-3-30 12:28]: Q: Writable bitmap, is there an example of the syntax? A: if you want to manually touch the pixels use WriteableBitmap if you want to use the underlying HWND then use IXRVisualHost::GetHWND() davbo-msft (Moderator)[2010-3-30 12:29]: Q: I would like to get a handle to a Silverlight screen section, is that possiable? A: Can I get more information about this question about what you are trying to accomplish in Silverlight? davbo-msft (Moderator)[2010-3-30 12:31]: Q: IXRVisualHost::GetHWND() exactly what I needed Thanks, A: Your welcome Sing Wee [MS] (Expert)[2010-3-30 12:31]: Q: ok. thanks for the book's link A: No problem. Travis Hobrla [MS] (Expert)[2010-3-30 12:32]: Q: Typically for SoC devices you name your hardware specific libraries in the form "SOCDIRNAME_LIBNAME". 
In our platform "OMAP35XX_TPS659XX_TI_V1" if you do this we cause the catalog parser to die... For example if we have a library "Musbfn_OMAP35XX_TPS659XX_TI_V1.dll" entering this in the catalogs pbcxml file in a <module> section causes the XML parser to fail with : Error 3 The 'urn:Microsoft.PlatformBuilder/Catalog:Module' element is        invalid - The value '012345678901234567890123456789.dll' is invalid         according to its datatype         'urn:Microsoft.PlatformBuilder/Catalog:CatalogFileName' - The actual         length is greater than the MaxLength value. A: There are a couple workarounds I can think of. I believe the Module element is only used when doing SYSGEN parsing to make sure dependent SYSGENs are present when the item is selected, so I believe it is optional to the catalog. The other obvious workaround is to shorten the soc name. I realize neither of these solutions is ideal. This is not something we anticipated when we tested CE6.0, sorry. Travis Hobrla [MS] (Expert)[2010-3-30 12:33]: Q: I am getting this error only when I select the KdStub as the debugger in Target device connectivity. A: Right, but KdStub is the debugger that you should use. Have you tried the steps I suggested? Travis Hobrla [MS] (Expert)[2010-3-30 12:36]: Q: If I select Active KTIL, My OS doesn't boots. It says "loading NK.EXE at 0x<xxxxx> location" after that nothing comes in the debug log. A: Can you look at the serial debug output and see what is happening there? Often it can give you a clue to the KITL driver malfunctioning. Travis Hobrla [MS] (Expert)[2010-3-30 12:38]: Q: I have tried that and I am getting the same error. A: I am assuming you have a device created in Target -> Connectivity Options in Platform Builder. What are the Kernel download / Kernel transport for your device? Travis Hobrla [MS] (Expert)[2010-3-30 12:40]: Q: KITL: *** Device Name CEPC56059 *** WARN: KITL will run in polling mode VBridge:: built on [Jul 10 2009] time [10:20:14] VBridgeInit()...TX = [16384] bytes -- Rx = [16384] bytes Tx buffer [0xA1B84860] to [0xA1B88860]. Rx buffer [0xA1B88880] to [0xA1B8C880]. VBridge:: NK add MAC: [0-60-65-2-DA-FB] Connecting to Desktop KITL: Connected host IP: 1 Port: 1086 .. this is the output of the serial debug A: This looks reasonable and does not give clues as to why boot would halt at that point. If you capture a network trace or turn on KITL debug zones via dpCurSettings in kitl.dll, do you see KITL active after this? Travis Hobrla [MS] (Expert)[2010-3-30 12:41]: Q: Both is happening via Ethernet. A: Only thing I have left to suggest is a Platform Builder installation Repair, then. Karel Danihelka [MS] (Expert)[2010-3-30 12:42]: Q: Hi, I saw that the ATADISK is quite generic and des not have any optimizations. Do you have any advice to consider while tryin to improve the performance of it? A: If I remember correctly sample code has support for some specific hardware controllers (little obsolete now). This should be good start point (if you will not decide take existing driver as sample and write you own). Travis Hobrla [MS] (Expert)[2010-3-30 12:44]: Q: I didn't do that. I have to try. A: I think that's the next valid step. You need to figure out whether KITL is hanging or the device - use instrumented serial debug messages and network trace to determine this. 
Sing Wee [MS] (Expert)[2010-3-30 12:46]: Q: KITL: *** Device Name CEPC56059 *** WARN: KITL will run in polling mode VBridge:: built on [Jul 10 2009] time [10:20:14] VBridgeInit()...TX = [16384] bytes -- Rx = [16384] bytes Tx buffer [0xA1B84860] to [0xA1B88860]. Rx buffer [0xA1B88880] to [0xA1B8C880]. VBridge:: NK add MAC: [0-60-65-2-DA-FB] Connecting to Desktop KITL: Connected host IP: 1 Port: 1086 .. this is the output of the serial debug A: Neo, have you by any chance tried looking into your firewall to see if it might be blocking traffic on any particular ports? Wireshark/netmon might be able to help you here if that's the issue. davbo-msft (Moderator)[2010-3-30 12:48]: Q: I lost spell check, how can i get it back A: Hello - can you give additional details about your question? Is this related to a Windows CE Embedded application? masatos_MSFT (Expert)[2010-3-30 12:51]: Q: When attempting to run the CETK cellcore tests the documentation states the pre-requisites include "stinger.ini", "ltk.ini" but windows CE doesn't provide these or document what they fully need to contain. Implicitly you also need "datatrans.xml" which isn't supplied. If you get around this error and steal these from Windows Mobile instead, when you try and run the CETK tests you get a data abort in radiometricsdll.dll. How should we invoke the cellcore parts of CETK? A: Hi Pev, what version of Windows CE and CETK are you using? I do not have the expertise to answer this question, but can find somebody who can. Travis Hobrla [MS] (Expert)[2010-3-30 12:52]: Q: I don't see a kitlcore.dll in my OS. is my debug image fails to load because of that? A: kitl.dll should be all that's needed, kitlcore.lib is linked into that. Travis Hobrla [MS] (Expert)[2010-3-30 12:55]: Q: I've got a platform (not developed by myself) where I2C bus support has to be provided through the OAL as the kernel needs to talk to devices such as the power management IC and gas gauge so a 'proper' I2C driver hanging off device manager isn't possible. This happens to be a polled driver, so obviously it hits the system hard when either under lots of traffic or an error condition occurs and the driver constantly polls. I originally thought that there was no straightforward way to make such code interrupt driven in the kernel (as it's a cludge) but I realised that that's exactly what ETHDBG drivers do. Is there any reason why I shouldn't have a go at implementing a similar mechanism for our kernel resident I2C driver? If not, are there any obvious pitfalls - I've not seen any other BSP's do this in the past... A: You can make a 'proper' driver that calls down into the OAL to do the actual I2C transactions. Alternatively you can build an interrupt-based version in the OAL where you handle everything in the ISR. There is nothing wrong with that so long as the rest of your drivers and app threads can handle longer times with interrupts off while you are servicing I2C interrupts. Sing Wee [MS] (Expert)[2010-3-30 12:55]: Q: I am having trouble with my mouse, I have the microsoft wireless mobile mouse 3000, when I push the scroll button I am suppose to have autoscroll instead it shows other web pages,Can you help me out tell me what to do!!! A: Sorry, this current chat is about Windows Embedded Compact. Hope you're able to find an answer to your question elsewhere. davbo-msft (Moderator)[2010-3-30 12:56]: Q: Is the Silverlight Animation "Spline" a BezierSpline? 
A: Spline - http://msdn.microsoft.com/en-us/library/ee501495.aspx<BR< a>>   masatos_MSFT (Expert)[2010-3-30 12:57]: Thanks for the info Pev. I will follow up with the CETK experts here and get back to you. davbo-msft (Moderator)[2010-3-30 12:59]: Q: Spline- bad link A: http://msdn.microsoft.com/en-us/library/ee501495.aspx davbo-msft (Moderator)[2010-3-30 13:0]: Q: Sorry, got to tirm the ">" A: No Worries http://msdn.microsoft.com/en-us/library/ee501495.aspx davbo-msft (Moderator)[2010-3-30 13:0]: Hello everyone, we are just about out of time. Thank you for joining us for our Windows Embedded CE 6.0 chat today! <http://www.Microsoft.com/Embedded>; A special thank you to the product group members for coming out. The transcript of today’s chat will be posted online as soon as possible, to <http://msdn.microsoft.com/en-us/chats>;. We’ll see you again for another chat next month. Please check <http://msdn.microsoft.com/en-us/chats>; for the list of upcoming chats. If you still have unanswered questions, let me suggest that you post them on one of our newsgroups on <http://msdn.microsoft.com/en-us/windowsembedded/ce/default.aspx> -Windows Embedded CE 6.0 R3 Now Available! <http://msdn.microsoft.com/windowsembedded/ce/dd630616.aspx>; davbo-msft (Moderator)[2010-3-30 13:1]: Q: hi everybody. I would like to know if there is something know about a bug in RTC API (VOIP), especially when using SIP. According the to the analysis with application verifyier there is a heap link in rtcdllmedia.dll. All of the unreleased chunks seem to have a size of 6560 bytes. A: I will follow up with the Networking Team for a response. davbo-msft (Moderator)[2010-3-30 13:1]: Q: Hi, we've problems with debugging of applications (= breakpoints in Platform Builder will be ignored) over KITL on Windows CE 5.0, if the PDB files are large (over 60MB). Are there any limitations to size of the PDB files? A: I will follow up with the tools team for a response and post with the transcript. Sing Wee [MS] (Expert)[2010-3-30 13:1]: Q: I am unable to use the target control in my development environment. any ideas? A: Make sure you have SYSGEN_SHELL=1 set in your build environment. davbo-msft (Moderator)[2010-3-30 13:3]: Q: what are the main differences between Object Store and RAM disk ? They are both in RAM...are there performance differences ? access differences ? A: I will follow up with the Core Team and get a response posted with the transcript to MSDN  The Questions   [2010-3-30 12:57]: Thanks for the info Pev. I will follow up with the CETK experts here and get back to you. [2010-3-30 12:59]:   [2010-3-30 13:0]:   [2010-3-30 13:0]: Hello everyone, we are just about out of time. Thank you for joining us for our Windows Embedded CE 6.0 chat today! <http://www.Microsoft.com/Embedded>; A special thank you to the product group members for coming out. The transcript of today’s chat will be posted online as soon as possible, to <http://msdn.microsoft.com/en-us/chats>;. We’ll see you again for another chat next month. Please check <http://msdn.microsoft.com/en-us/chats>; for the list of upcoming chats. If you still have unanswered questions, let me suggest that you post them on one of our newsgroups on <http://msdn.microsoft.com/en-us/windowsembedded/ce/default.aspx> -Windows Embedded CE 6.0 R3 Now Available! 
<http://msdn.microsoft.com/windowsembedded/ce/dd630616.aspx>; [2010-3-30 13:1]: [2010-3-30 13:1]: [2010-3-30 13:1]: [2010-3-30 13:3]: neo (Guest)[2010-3-30 11:37]: Hi all KellyG (Guest)[2010-3-30 11:37]: Hi KellyG (Guest)[2010-3-30 11:37]: I have a question unrelated to windows Ce embedded, can you please help me?? neo (Guest)[2010-3-30 11:38]: I need to implement hardware timers on my windows CE 6.0 device to trigger events at microsecond intervals! c neo (Guest)[2010-3-30 11:38]: yes. post it. May be i cud give a try KellyG (Guest)[2010-3-30 11:38]: My Product key listed on my tower is not the product key I need for microsoft office, but that is the only product key listed. neo (Guest)[2010-3-30 11:39]: I hope this is a chat for windows embedded. please post ur queries in office forums KellyG (Guest)[2010-3-30 11:39]: it is but i could not find a forum for office neo (Guest)[2010-3-30 11:40]: I think moderators will help u out. @ davbo-msft: can u help this guy? neo (Guest)[2010-3-30 11:41]: Q: I need to implement hardware timers on my windows CE 6.0 device to trigger events at microsecond intervals. Where should i start? davbo-msft (Moderator)[2010-3-30 11:50]: Our chat today covers the topic of Windows Embedded CE! 1. This chat will last for one hour. During this hour, our Experts will respond to as many questions as they can. Please understand that there may be some questions we cannot respond to due to lack of information or because the information is not yet public. 2. We encourage you to submit questions for our Experts. To do so, type your questions in the send box, select the “ask the Experts” box and click SEND. Questions sent directly to the Guest Chat room will not be answered by the Experts, but we encourage other community members to assist. 3. We ask that you stay on topic for the duration of the chat. This helps the Guests and Experts follow the conversation more easily. We invite you to ask off topic questions after this chat is over, but not during. 4. Please abide by the Chat Code of Conduct. Chat code of conduct: <http://msdn.microsoft.com/chats/chatroom.aspx?ctl=hlp#Conduct>; Pev (Guest)[2010-3-30 11:54]: Evening! davbo-msft (Moderator)[2010-3-30 11:54]: Hello everyone this is Dave Boyce - I worked in the Multimedia area for Windows CE. neo (Guest)[2010-3-30 11:55]: hello dave neo (Guest)[2010-3-30 11:55]: The chat code of conduct link is not working! Pev (Guest)[2010-3-30 11:56]: Best be polite just in case then ;-) neo (Guest)[2010-3-30 11:56]: davbo-msft (Moderator)[2010-3-30 11:57]: I'll check out the issue w/ the link paolopat (Guest)[2010-3-30 12:0]: Hello davbo-msft (Moderator)[2010-3-30 12:0]: We are pleased to welcome our Experts for today’s chat. I will have them introduce themselves now. Chat will begin in a couple of minutes. <http://www.Microsoft.com/Embedded>; paolopat (Guest)[2010-3-30 12:3]: Hello Experts ! neo (Guest)[2010-3-30 12:3]: Welcome all! paolopat (Guest)[2010-3-30 12:3]: Q: I want to partition my NAND Flash device. One partition to use for hive ragistry and the other for the apps and data. The only way to do it is programmatically or setting some registry values ? neo (Guest)[2010-3-30 12:3]: Q: I need to implement hardware timers on my windows CE 6.0 device to trigger events at microsecond intervals. Where should i start? neo (Guest)[2010-3-30 12:5]: Q: My CPU is Intel celeron M processor 1Ghz. 
Pev (Guest)[2010-3-30 12:5]: neo: if your silicon has multiple general purpose timers, pick one that's not in use for the system timer / profiler and set it up to trigger irqs for your purpose. You can't guarantee hard realtime type responses though... GarySwalling (Guest)[2010-3-30 12:5]: Q: Is Windows Phone 7 related to Windows CE? If so, can you tell me what version of Windows CE is the basis? Is it in fact the new version of Windows Mobile? Pev (Guest)[2010-3-30 12:6]: Q: Hi guys, what's the formal way to report bugs back to the core team / product team? The mechanism of calling the support phone number every time is really onerous and time-consuming. Is there another mechanism? paolopat (Guest)[2010-3-30 12:6]: Q: But the operation for creating the partitions ? neo (Guest)[2010-3-30 12:6]: Q: If i need to implement it using interrupt handlers, What are all the files that I should look at? GPM (Guest)[2010-3-30 12:6]: Q: I would like to get a handle to a Silverlight screen section, is that possiable? Jhony (Guest)[2010-3-30 12:7]: Q: I created a OS design with KITL and kernel debugger enabled. But I am unable to connect to the target for debugging. I am getting the following error when i try to connect with the device. PB Debugger Cannot initialize the Kernel Debugger. PB Debugger Debugger could not initialize connection. PB Debugger The Kernel Debugger is waiting to connect with target. PB Debugger The Kernel Debugger has been disconnected successfully. Charles (Guest)[2010-3-30 12:7]: What will be different in Windows Compact 7 from CE 6.0? neo (Guest)[2010-3-30 12:8]: Can I have the book's name please? kiefs_dev (Guest)[2010-3-30 12:8]: Q: hi everybody. I would like to know if there is something know about a bug in RTC API (VOIP), especially when using SIP. According the to the analysis with application verifyier there is a heap link in rtcdllmedia.dll. All of the unreleased chunks seem to have a size of 6560 bytes. paolopat (Guest)[2010-3-30 12:10]: Q: So...I have to modify file system code to create 2 partition at system startup ?!! I haven't understood.... neo (Guest)[2010-3-30 12:10]: Q: Do u mean ISR/IST implementation? How can i register an interrupt? What kind of interrupt should i register? neo (Guest)[2010-3-30 12:11]: Q: Can I have the book's name please? Charles (Guest)[2010-3-30 12:11]: Q: What will be different in Windows Compact 7 from CE 6.0? PaulT (Guest)[2010-3-30 12:11]: neo: I'd say that you really need the docs for YOUR BSP, not generic documents for BSPs in general. Each BSP may be architected differently. If you're using the CEPC BSP, then the documentation that comes with Platform Builder is a reasonable place to look. GPM (Guest)[2010-3-30 12:11]: Q: If one has an image on a Silverlight page, it seems to be cached. How would one refresh that cache after changing the underlying image? Elektrobit (Guest)[2010-3-30 12:12]: Q: Hi, we've problems with debugging of applications (= breakpoints in Platform Builder will be ignored) over KITL on Windows CE 5.0, if the PDB files are large (over 60MB). Are there any limitations to size of the PDB files? Jhony (Guest)[2010-3-30 12:15]: Q: I am using CE 6.0. There is no cesvchost process running in my system. alexquisi (Guest)[2010-3-30 12:15]: Q: Hi, I saw that the ATADISK is quite generic and des not have any optimizations. Do you have any advice to consider while tryin to improve the performance of it? Jhony (Guest)[2010-3-30 12:17]: Windows XP service pack 1 Jhony (Guest)[2010-3-30 12:17]: Q: sorry! 
Windows XP SP3 neo (Guest)[2010-3-30 12:19]: Q: The link for developing device drivers is not working. can u please check that? paolopat (Guest)[2010-3-30 12:20]: Q: I have a NAND Flash on my target device. On this flash I have the hive registry and an application.I have observed that when the NAND flash is fully, the system startup time is longer....is there a degradation of NAND use that influences the startup time ? Why ? Pev (Guest)[2010-3-30 12:20]: Q: When attempting to run the CETK cellcore tests the documentation states the pre-requisites include "stinger.ini", "ltk.ini" but windows CE doesn't provide these or document what they fully need to contain. Implicitly you also need "datatrans.xml" which isn't supplied. If you get around this error and steal these from Windows Mobile instead, when you try and run the CETK tests you get a data abort in radiometricsdll.dll. How should we invoke the cellcore parts of CETK? Pev (Guest)[2010-3-30 12:21]: Hi all, Pev (Guest)[2010-3-30 12:21]: oops Pev (Guest)[2010-3-30 12:21]: :-D Pev (Guest)[2010-3-30 12:24]: Typically for SoC devices you name your hardware specific libraries in the form "SOCDIRNAME_LIBNAME". In our platform "OMAP35XX_TPS659XX_TI_V1" if you do this we cause the catalog parser to die... For example if we have a library "Musbfn_OMAP35XX_TPS659XX_TI_V1.dll" entering this in the catalogs pbcxml file in a <module> section causes the XML parser to fail with : Pev (Guest)[2010-3-30 12:25]: Q: Error 3 The 'urn:Microsoft.PlatformBuilder/Catalog:Module' element is        invalid - The value '012345678901234567890123456789.dll' is invalid         according to its datatype         'urn:Microsoft.PlatformBuilder/Catalog:CatalogFileName' - The actual         length is greater than the MaxLength value. GPM (Guest)[2010-3-30 12:25]: Q: Writable bitmap, is there an example of the syntax? Pev (Guest)[2010-3-30 12:25]: Q: Typically for SoC devices you name your hardware specific libraries in the form "SOCDIRNAME_LIBNAME". In our platform "OMAP35XX_TPS659XX_TI_V1" if you do this we cause the catalog parser to die... For example if we have a library "Musbfn_OMAP35XX_TPS659XX_TI_V1.dll" entering this in the catalogs pbcxml file in a <module> section causes the XML parser to fail with : Error 3 The 'urn:Microsoft.PlatformBuilder/Catalog:Module' element is        invalid - The value '012345678901234567890123456789.dll' is invalid         according to its datatype         'urn:Microsoft.PlatformBuilder/Catalog:CatalogFileName' - The actual         length is greater than the MaxLength value. Pev (Guest)[2010-3-30 12:25]: sorry, messed up submission there! GPM (Guest)[2010-3-30 12:26]: Q: I would like to get a handle to a Silverlight screen section, is that possiable? GPM (Guest)[2010-3-30 12:28]: Q: IXRVisualHost::GetHWND() exactly what I needed Thanks, PaulT (Guest)[2010-3-30 12:29]: GPM: You don't have to keep submitting the questions. The chat experts have an application that they're using to follow the chat and all Ask the Experts questions are logged. Jhony (Guest)[2010-3-30 12:29]: Q: I am getting this error only when I select the KdStub as the debugger in Target device connectivity. neo (Guest)[2010-3-30 12:30]: Q: ok. thanks for the book's link Pev (Guest)[2010-3-30 12:31]: Hm, did those two I submitted get picked up by anyone? neo (Guest)[2010-3-30 12:33]: Q: If I select Active KTIL, My OS doesn't boots. It says "loading NK.EXE at 0x<xxxxx> location" after that nothing comes in the debug log. 
PaulT (Guest)[2010-3-30 12:34]: Pev: PaulT (Guest)[2010-3-30 12:35]: Pev: I'm sure they did. The guys who are actually on the chat may not be experts in that part of things. That's usually the explanation when you don't get an answer in 10 minutes or so. Pev (Guest)[2010-3-30 12:36]: Ah, fair enough Susie (Guest)[2010-3-30 12:36]: My Outlook Express incoming mail is corrput. No ONE has been able to fix the problem, Dell or Norton. I have dial up I'm in a rural area Jhony (Guest)[2010-3-30 12:37]: Q: I have tried that and I am getting the same error. Pev (Guest)[2010-3-30 12:37]: Susie : Use Thunderbird instead :-D PaulT (Guest)[2010-3-30 12:37]: Susie: Sorry, but this chat is not about Windows, but Embedded (like what runs on a phone). Your best chance is to find a local expert or talk to your ISP. paolopat (Guest)[2010-3-30 12:37]: Q: what are the main differences between Object Store and RAM disk ? They are both in RAM...are there performance differences ? access differences ? Susie (Guest)[2010-3-30 12:38]: My computer knowlege is very limited, what is Thunderbird? Pev (Guest)[2010-3-30 12:38]: A different email client :-D neo (Guest)[2010-3-30 12:39]: Q: KITL: *** Device Name CEPC56059 *** WARN: KITL will run in polling mode VBridge:: built on [Jul 10 2009] time [10:20:14] VBridgeInit()...TX = [16384] bytes -- Rx = [16384] bytes Tx buffer [0xA1B84860] to [0xA1B88860]. Rx buffer [0xA1B88880] to [0xA1B8C880]. VBridge:: NK add MAC: [0-60-65-2-DA-FB] Connecting to Desktop KITL: Connected host IP: 1 Port: 1086 .. this is the output of the serial debug Susie (Guest)[2010-3-30 12:39]: Do I need to uninstall Outlook Express youngboyzie (Guest)[2010-3-30 12:39]: I need to start battery calibration for my new battery for my dell inspiron 1525 laptop and should be able to reach the BIOS screen by hitting f2 but this isnt working... help? Pev (Guest)[2010-3-30 12:39]: Nah, you can run it instead - you'll still need help from your ISP to configure it I expect Jhony (Guest)[2010-3-30 12:39]: Q: Both is happening via Ethernet. PaulT (Guest)[2010-3-30 12:40]: youngboyzie: You're off-topic. This is not a general chat for Windows and certainly not for Dell. You'll have to ask Dell how to get to setup; it's their machine. bill (Guest)[2010-3-30 12:41]: I lost spell check, how can i get et back neo (Guest)[2010-3-30 12:42]: Q: I didn't do that. I have to try. Jhony (Guest)[2010-3-30 12:43]: Q: Ok. I will do it then. bill (Guest)[2010-3-30 12:43]: Q: I lost spell check, how can i get it back PaulT (Guest)[2010-3-30 12:44]: bill: This isn't a general Windows chat. There are some Web forums that you might try. GarySwalling (Guest)[2010-3-30 12:45]: Q: Thanks, I found the Phone 7 presentation at http://live.visitmix.com/MIX10/Sessions/CL13 GPM (Guest)[2010-3-30 12:45]: Q: Is the Silverlight Animation "Spline" a BezierSpline? neo (Guest)[2010-3-30 12:46]: Q: ok. I'll do it. thanks Pev (Guest)[2010-3-30 12:46]: Whowever was asking about KITL connection : I've had this loads in the past. I think I started debugging last time by using wireshark to see what was happening on the network then setting up the OAL_ETHER and OAL_FUNC and OAL_VERBOSE as well as OAL_KITL flags to see what was actually happening in the driver.... Pev (Guest)[2010-3-30 12:47]: I'd generally make sure that you're testing though a 10baseT hub (instead of anything faster) and forcing Active KITL in polled mode too... neo (Guest)[2010-3-30 12:48]: Q: I disabled the firewall in my PC. 
Pev (Guest)[2010-3-30 12:50]: Q: I've got a platform (not developed by myself) where I2C bus support has to be provided through the OAL as the kernel needs to talk to devices such as the power management IC and gas gauge so a 'proper' I2C driver hanging off device manager isn't possible. This happens to be a polled driver, so obviously it hits the system hard when either under lots of traffic or an error condition occurs and the driver constantly polls. I originally thought that there was no straightforward way to make such code interrupt driven in the kernel (as it's a cludge) but I realised that that's exactly what ETHDBG drivers do. Is there any reason why I shouldn't have a go at implementing a similar mechanism for our kernel resident I2C driver? If not, are there any obvious pitfalls - I've not seen any other BSP's do this in the past... neo (Guest)[2010-3-30 12:51]: Q: I don't see a kitlcore.dll in my OS. is my debug image fails to load because of that? Pev (Guest)[2010-3-30 12:52]: Q: Hi masatos, I'm using Windows Embedded CE 6.0 with R3 and patched to feb 2010's QFE's (with it's associated CETK version) this is a machine with only CE 6.0 on (no conflicts with earlier CE or WM...) neo (Guest)[2010-3-30 12:54]: ok. Got it neo (Guest)[2010-3-30 12:54]: Q: ok. Got it Roundman (Guest)[2010-3-30 12:55]: Q: I am having trouble with my mouse, I have the microsoft wireless mobile mouse 3000, when I push the scroll button I am suppose to have autoscroll instead it shows other web pages,Can you help me out tell me what to do!!! Pev (Guest)[2010-3-30 12:55]: Hey neo, debugging kitl issues is really frustrating but dont lose heart :-) neo (Guest)[2010-3-30 12:55]: @ pev : u fixed the problem of KITL after that? neo (Guest)[2010-3-30 12:56]: I am getting the same error again and again. I even cleaned my environment and tried in a fresh PC. But didn't succeed yet Pev (Guest)[2010-3-30 12:56]: Well, eventually - my experiences probably won't help you as different platforms have different reasons for doing that neo (Guest)[2010-3-30 12:56]: I think so PaulT (Guest)[2010-3-30 12:58]: neo: Have you searched the old messages in microsoft.public.windowsce.platbuilder? It seems to me that there was a packet size situation where it was possible to have problems with KITL connections based on a setting on the PC. Google Groups, groups.google.com, Advanced Groups Search will allow you to search a single newsgroup or a set of newsgroups easily. GPM (Guest)[2010-3-30 12:58]: Q: Spline- bad link PaulT (Guest)[2010-3-30 12:58]: GPM: without the > at the end does it work? It seems to for me... neo (Guest)[2010-3-30 12:58]: But some times if i try to connect to the device again. The Image information is seen in the serial debug. what does that mean?Download BIN file information: ----------------------------------------------------- [0]: Base Address=0x220000 Length=0x18DAADC Received a broadcast message !CheckUDP: Not UDP (proto = 0x00000001) after this i am getting the old errors. PB debugger cannot initialize ... GPM (Guest)[2010-3-30 12:59]: Q: Sorry, got to tirm the ">" neo (Guest)[2010-3-30 12:59]: ok paul. I ll look into that. davbo-msft (Moderator)[2010-3-30 13:0]: Hello everyone, we are just about out of time. Thank you for joining us for our Windows Embedded CE 6.0 chat today! <http://www.Microsoft.com/Embedded>; A special thank you to the product group members for coming out. The transcript of today’s chat will be posted online as soon as possible, to <http://msdn.microsoft.com/en-us/chats>;. 
We’ll see you again for another chat next month. Please check <http://msdn.microsoft.com/en-us/chats>; for the list of upcoming chats. If you still have unanswered questions, let me suggest that you post them on one of our newsgroups on <http://msdn.microsoft.com/en-us/windowsembedded/ce/default.aspx> -Windows Embedded CE 6.0 R3 Now Available! <http://msdn.microsoft.com/windowsembedded/ce/dd630616.aspx>; neo (Guest)[2010-3-30 13:1]: Q: I am unable to use the target control in my development environment. any ideas? Pev (Guest)[2010-3-30 13:1]: Sure, if KITL isn't connected target control won't work as it runs over kitl... neo (Guest)[2010-3-30 13:1]: ok .thanks pev neo (Guest)[2010-3-30 13:2]: yes. sysgen_shell is set to 1 neo (Guest)[2010-3-30 13:2]: Q: yes. sysgen_shell is set to 1 Marcelovk (Guest)[2010-3-30 13:2]: Q: Is there any way to extract the default command lines of the tests in CETK? I want to have it running unconnected from the desktop.   Copyright © 2010 – Bruce Eitman All Rights Reserved

    Read the article

  • Table sorting & pagination with jQuery and Razor in ASP.NET MVC

    - by hajan
Introduction jQuery fits naturally into pages built on top of the ASP.NET MVC Framework. ASP.NET MVC keeps things organized very well and makes it quite hard to get them dirty, especially because the pattern pushes you toward clean separation (you can still make a mess if you really want to ;) ). We all know how easy it is to build an HTML table with a header row, footer row and table rows showing some data. With ASP.NET MVC we can do this pretty easily, but the result will be a pure HTML table which only shows data and does not include sorting, pagination or the other advanced features that we were used to having in the ASP.NET WebForms GridView. Ok, there is the WebGrid MVC helper, but what if we want to build something from a pure table in our own clean style? In one of my recent projects, I've been using the jQuery tablesorter plugin and the tablesorter.pager plugin that goes along with it. You don't need to know jQuery to make this work… You need to know a little CSS to create a nice design for your table, but of course you can use mine from the demo… So, what you will see in this blog is how to attach these plugins to your pure HTML table and a div for pagination, and give your table advanced sorting and pagination features.
Demo Project Resources The resources I'm using for this demo project are shown in the following Solution Explorer screenshot: Content/images – folder that contains all the up/down arrow images, pagination buttons etc. You can freely replace them with your own, but keep the names the same if you don't want to change anything in the CSS we will build later. Content/Site.css – the main CSS theme, where we will add the theme for our table too. Controllers/HomeController.cs – the controller I'm using for this project. Models/Person.cs – for this demo, I'm using a Person.cs class. Scripts – jquery-1.4.4.min.js, jquery.tablesorter.js, jquery.tablesorter.pager.js – the scripts required to make the magic happen. Views/Home/Index.cshtml – the Index view (Razor view engine). The other items are not important for the demo.
ASP.NET MVC 1. Model In this demo I use only one Person class, which defines a Person entity with several properties. You can use your own model, maybe one which reads data from a database or any other resource. Person.cs
public class Person
{
    public string Name { get; set; }
    public string Surname { get; set; }
    public string Email { get; set; }
    public int? Phone { get; set; }
    public DateTime? DateAdded { get; set; }
    public int? Age { get; set; }

    public Person(string name, string surname, string email,
        int? phone, DateTime? dateadded, int? age)
    {
        Name = name;
        Surname = surname;
        Email = email;
        Phone = phone;
        DateAdded = dateadded;
        Age = age;
    }
}
2. View In our example, we have only one Index.cshtml page, where the Razor view engine is used. Razor is my favorite view engine for ASP.NET MVC because it's very intuitive, fluid and keeps your code clean. 3. Controller Since this is a simple example with one page, we use one HomeController.cs with two methods: one of ActionResult type (Index) and another, GetPeople(), used to create and return a list of people.
HomeController.cs
public class HomeController : Controller
{
    //
    // GET: /Home/
    public ActionResult Index()
    {
        ViewBag.People = GetPeople();
        return View();
    }

    public List<Person> GetPeople()
    {
        List<Person> listPeople = new List<Person>();
        listPeople.Add(new Person("Hajan", "Selmani", "[email protected]", 070070070, DateTime.Now, 25));
        listPeople.Add(new Person("Straight", "Dean", "[email protected]", 123456789, DateTime.Now.AddDays(-5), 35));
        listPeople.Add(new Person("Karsen", "Livia", "[email protected]", 46874651, DateTime.Now.AddDays(-2), 31));
        listPeople.Add(new Person("Ringer", "Anne", "[email protected]", null, DateTime.Now, null));
        listPeople.Add(new Person("O'Leary", "Michael", "[email protected]", 32424344, DateTime.Now, 44));
        listPeople.Add(new Person("Gringlesby", "Anne", "[email protected]", null, DateTime.Now.AddDays(-9), 18));
        listPeople.Add(new Person("Locksley", "Stearns", "[email protected]", 2135345, DateTime.Now, null));
        listPeople.Add(new Person("DeFrance", "Michel", "[email protected]", 235325352, DateTime.Now.AddDays(-18), null));
        listPeople.Add(new Person("White", "Johnson", null, null, DateTime.Now.AddDays(-22), 55));
        listPeople.Add(new Person("Panteley", "Sylvia", null, 23233223, DateTime.Now.AddDays(-1), 32));
        listPeople.Add(new Person("Blotchet-Halls", "Reginald", null, 323243423, DateTime.Now, 26));
        listPeople.Add(new Person("Merr", "South", "[email protected]", 3232442, DateTime.Now.AddDays(-5), 85));
        listPeople.Add(new Person("MacFeather", "Stearns", "[email protected]", null, DateTime.Now, null));
        return listPeople;
    }
}
TABLE CSS/HTML DESIGN Now, let's start with the implementation. First of all, let's create the table structure and the main CSS. 1. HTML Structure
@{
    Layout = null;
}
<!DOCTYPE html>
<html>
<head>
    <title>ASP.NET & jQuery</title>
    <!-- referencing styles, scripts and writing custom js scripts will go here -->
</head>
<body>
    <div>
        <table class="tablesorter">
            <thead>
                <tr>
                    <th> value </th>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td>value</td>
                </tr>
            </tbody>
            <tfoot>
                <tr>
                    <th> value </th>
                </tr>
            </tfoot>
        </table>
        <div id="pager">
        </div>
    </div>
</body>
</html>
So, this is the main structure you need to create for each table where you want to apply this functionality. Of course, the scripts are referenced only once ;). As you can see, our table has the class tablesorter and we also have a div with id pager. In the next steps we will use both of these to create the needed functionality.
The complete Index.cshtml, coded to get the data from the controller and display it in the page, is:
<body>
    <div>
        <table class="tablesorter">
            <thead>
                <tr>
                    <th>Name</th>
                    <th>Surname</th>
                    <th>Email</th>
                    <th>Phone</th>
                    <th>Date Added</th>
                </tr>
            </thead>
            <tbody>
                @{
                    foreach (var p in ViewBag.People)
                    {
                        <tr>
                            <td>@p.Name</td>
                            <td>@p.Surname</td>
                            <td>@p.Email</td>
                            <td>@p.Phone</td>
                            <td>@p.DateAdded</td>
                        </tr>
                    }
                }
            </tbody>
            <tfoot>
                <tr>
                    <th>Name</th>
                    <th>Surname</th>
                    <th>Email</th>
                    <th>Phone</th>
                    <th>Date Added</th>
                </tr>
            </tfoot>
        </table>
        <div id="pager" style="position: none;">
            <form>
                <img src="@Url.Content("~/Content/images/first.png")" class="first" />
                <img src="@Url.Content("~/Content/images/prev.png")" class="prev" />
                <input type="text" class="pagedisplay" />
                <img src="@Url.Content("~/Content/images/next.png")" class="next" />
                <img src="@Url.Content("~/Content/images/last.png")" class="last" />
                <select class="pagesize">
                    <option selected="selected" value="5">5</option>
                    <option value="10">10</option>
                    <option value="20">20</option>
                    <option value="30">30</option>
                    <option value="40">40</option>
                </select>
            </form>
        </div>
    </div>
</body>
So, mainly the structure is the same. I have added Razor code to create the table with data retrieved from ViewBag.People, which was filled with data in the home controller. 2. CSS Design The CSS code I've created is:
/* DEMO TABLE */
body {
    font-size: 75%;
    font-family: Verdana, Tahoma, Arial, "Helvetica Neue", Helvetica, Sans-Serif;
    color: #232323;
    background-color: #fff;
}
table { border-spacing: 0; border: 1px solid gray; }
table.tablesorter thead tr .header {
    background-image: url(images/bg.png);
    background-repeat: no-repeat;
    background-position: center right;
    cursor: pointer;
}
table.tablesorter tbody td {
    color: #3D3D3D;
    padding: 4px;
    background-color: #FFF;
    vertical-align: top;
}
table.tablesorter tbody tr.odd td {
    background-color: #F0F0F6;
}
table.tablesorter thead tr .headerSortUp {
    background-image: url(images/asc.png);
}
table.tablesorter thead tr .headerSortDown {
    background-image: url(images/desc.png);
}
table th {
    width: 150px;
    border: 1px outset gray;
    background-color: #3C78B5;
    color: White;
    cursor: pointer;
}
table thead th:hover { background-color: Yellow; color: Black; }
table td { width: 150px; border: 1px solid gray; }
PAGINATION AND SORTING Now, when everything is ready and we have the data, let's add the pagination and sorting functionality. 1.
jQuery Scripts referencing
<link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
<script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.tablesorter.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.tablesorter.pager.js")" type="text/javascript"></script>
2. jQuery Sorting and Pagination script
<script type="text/javascript">
    $(function () {
        $("table.tablesorter").tablesorter({ widthFixed: true, sortList: [[0, 0]] })
        .tablesorterPager({ container: $("#pager"), size: $(".pagesize option:selected").val() });
    });
</script>
So, with only two lines of code, I'm using both the tablesorter and tablesorterPager plugins and passing some options to each. Options added: tablesorter – widthFixed: true – gives the columns a fixed width. tablesorter – sortList: [[0,0]] – an array of instructions for per-column sorting and direction in the format [[columnIndex, sortDirection], ...], where columnIndex is a zero-based index for your columns left-to-right and sortDirection is 0 for ascending and 1 for descending. A valid argument that sorts ascending first by column 1 and then column 2 looks like [[0,0],[1,0]] (source: http://tablesorter.com/docs/). tablesorterPager – container: $("#pager") – specifies the pager container, in our case the div with id pager. tablesorterPager – size – the default number of rows per page; I read the currently selected value, so if you put selected on any other option in your select list, that number of rows becomes the default page size for the table too. END RESULTS 1. Table once the page is loaded (the default is 5 results per page and the table is automatically sorted by the 1st column, as sortList specifies) 2. Sorted by Phone descending 3. Pagination changed to 10 items per page 4. Sorted by Phone and Name (use SHIFT to sort on multiple columns) 5. Sorted by Date Added 6. Page 3, 5 items per page   ADDITIONAL ENHANCEMENTS We can make additional enhancements to the table, such as a search for each column. I will cover this in one of my next blogs. Stay tuned. DEMO PROJECT You can download the demo project source code from HERE. CONCLUSION Once you finish with the demo, run your page and open the source code. You will be amazed at the purity of your code. Client-side pagination can be very useful. One of its benefits is performance, but if you have thousands of rows in your tables you will get the opposite result, so it is sometimes a good idea to paginate on the back end. The best compromise is to combine both approaches: I use at most 500 rows on the client side, and once the user reaches the last page we can trigger an AJAX request that gets the next 500 rows using server-side pagination of the same data (see the sketch after this post). I would like to recommend the following blog post http://weblogs.asp.net/gunnarpeipman/archive/2010/09/14/returning-paged-results-from-repositories-using-pagedresult-lt-t-gt.aspx, which will help you understand how to return paged results from repositories. I hope this post was helpful. Watch for my next posts ;). Please do let me know your feedback. Best Regards, Hajan
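The combined client/server paging idea in the conclusion needs a server-side contract that returns one page of rows together with the total row count - which is what the PagedResult<T> post linked above describes for C#. Purely as a rough illustration of that contract - sketched here in Java rather than C#, to match the language used elsewhere on this page, and with all names hypothetical - it could look like this:
import java.util.List;

// Hypothetical generic wrapper for one page of results; the repository
// returns this instead of the full row set.
public class PagedResult<T> {
    private final List<T> items;
    private final int pageIndex;   // zero-based page number
    private final int pageSize;
    private final long totalCount; // total rows across all pages

    public PagedResult(List<T> items, int pageIndex, int pageSize, long totalCount) {
        this.items = items;
        this.pageIndex = pageIndex;
        this.pageSize = pageSize;
        this.totalCount = totalCount;
    }

    public List<T> getItems() { return items; }
    public int getPageIndex() { return pageIndex; }
    public int getPageSize() { return pageSize; }
    public long getTotalCount() { return totalCount; }

    // Total number of pages, rounding up for a partially filled last page.
    public int getPageCount() {
        return (int) ((totalCount + pageSize - 1) / pageSize);
    }

    // True when the client has shown the last cached page and should
    // trigger the next server-side fetch (e.g. the next 500 rows).
    public boolean isLastPage() {
        return pageIndex >= getPageCount() - 1;
    }
}
The client keeps its ~500 cached rows in the tablesorter pager as shown above; when isLastPage() reports true for the cached batch, an AJAX call asks the server for the next PagedResult and appends its items to the table.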

    Read the article

  • Using HTML 5 SessionState to save rendered Page Content

    - by Rick Strahl
HTML 5 SessionState and LocalStorage are very useful and super easy to use for managing client-side state. For building rich client-side or SPA-style applications, being able to cache user data as well as HTML content in order to swap pages in and out of the browser's DOM is a vital feature. What might not be so obvious is that you can also use the sessionStorage and localStorage objects even in classic server-rendered HTML applications to provide caching features between pages. These APIs have been around for a long time and are supported by most relatively modern browsers, even all the way back to IE8, so you can use them safely in your Web applications. SessionState and LocalStorage are easy The APIs that make up sessionStorage and localStorage are very simple. Both objects feature the same API interface: a simple, string-based key-value store with getItem, setItem, removeItem, clear and key methods. The objects are also pseudo-array objects and so can be iterated like an array, with a length property and array indexers to set and get values. Basic usage for storage and retrieval looks like this (using sessionStorage, but the syntax is the same for localStorage - just switch the objects):
// set
var lastAccess = new Date().getTime();
if (sessionStorage)
    sessionStorage.setItem("myapp_time", lastAccess.toString());

// retrieve in another page or on a refresh
var time = null;
if (sessionStorage)
    time = sessionStorage.getItem("myapp_time");
if (time)
    time = new Date(time * 1);
else
    time = new Date();
sessionStorage stores data that is browser-session specific and has the lifetime of the active browser session or window. Shut down the browser or tab and the storage goes away. localStorage uses the same API interface, but the data is stored permanently in the browser's storage area until deleted via code or by clearing out browser cookies (not the cache). Both sessionStorage and localStorage space is limited. The spec is ambiguous about this - supposedly sessionStorage should allow for unlimited size, but it appears that most WebKit browsers support only 2.5mb for either object. This means you have to be careful what you store, especially since other applications might be running on the same domain and also use the storage mechanisms. That said, 2.5mb worth of character data is quite a bit and goes a long way. The easiest way to get a feel for how sessionStorage and localStorage work is to look at a simple example. You can check out the following example online in Plunker: http://plnkr.co/edit/0ICotzkoPjHaWa70GlRZ?p=preview which looks like this: Plunker is an online HTML/JavaScript editor that lets you write and run JavaScript code, similar to JsFiddle but a bit cleaner to work in, IMHO (thanks to John Papa for turning me on to it). The sample has two text boxes with counts that update session/local storage every time you click the related button. The counts are 'cached' in session and local storage. The point of these examples is that both counters survive full page reloads, and the LocalStorage counter survives a complete browser shutdown and restart. Go ahead and try it out by clicking the Reload button after updating both counters and then shutting down the browser completely and going back to the same URL (with the same browser). What you should see is that reloads leave both counters intact at the counted values, while a browser restart will leave only the local storage counter intact.
The code to deal with the sessionStorage (localStorage is not shown here) in the example is isolated into a couple of wrapper methods to simplify the code:
function getSessionCount() {
    var count = 0;
    if (sessionStorage) {
        count = sessionStorage.getItem("ss_count");
        count = !count ? 0 : count * 1;
    }
    $("#txtSession").val(count);
    return count;
}
function setSessionCount(count) {
    if (sessionStorage)
        sessionStorage.setItem("ss_count", count.toString());
}
These two functions essentially load and store a session counter value. The two key methods used here are: sessionStorage.getItem(key); sessionStorage.setItem(key, stringVal); Note that the value given to setItem and returned by getItem has to be a string. If you pass another type you get an error. Don't let that limit you though - you can easily store JSON data, so it's quite possible to pass complex objects and store them in a single sessionStorage value:
var user = { name: "Rick", id: "ricks", level: 8 };
sessionStorage.setItem("app_user", JSON.stringify(user));
To retrieve it:
var user = sessionStorage.getItem("app_user");
if (user)
    user = JSON.parse(user);
Simple! If you're using the Chrome Developer Tools (F12) you can also check out the session and local storage state on the Resources tab: You can also use this tool to refresh or remove entries from storage. What we just looked at is a purely client-side implementation where a couple of counters are stored. For rich, client-centric AJAX applications, sessionStorage and localStorage provide a very nice and simple API to store application state while the application is running. But you can also use these storage mechanisms to manage server-centric HTML applications when you combine server rendering with some JavaScript to perform client-side data caching. You can store some state information and data on the client (i.e. store a JSON object and carry it forth between server-rendered HTML requests), or you can use it for good old HTTP-based caching where some rendered HTML is saved and then restored later. Let's look at the latter with a real-life example. Why do I need Client-side Page Caching for Server Rendered HTML? I don't know about you, but in a lot of my existing server-driven applications I have lists that display a fair amount of data. Typically these lists contain links to drill down into more specific data, either for viewing or editing. You can then click on a link and go off to a detail page that provides more concise content. So far so good. But now you're done with the detail page and need to get back to the list, so you click on a 'bread crumbs trail' or an application-level 'back to list' button and… …you end up back at the top of the list - the scroll position, the current selection, in some cases even filter conditions - all gone with the wind. You've left behind the state of the list and are starting your browsing of the list from scratch at the top. Not cool! Sound familiar? This is a pretty common scenario with server-rendered HTML content, where it's so common to display lists to drill into, only to lose state in the process of returning back to the original list. Look at just about any traditional forums application, or even StackOverFlow, to see what I mean here. Scroll down a bit to look at a post or entry, drill in, then use the bread crumbs or tab to go back… In some cases returning to the top of a list is not a big deal.
On StackOverFlow that sort of works because content is turning around so quickly you probably want to actually look at the top posts. Not always though - if you're browsing through a list of search topics you're interested in and drill in, there's no way back to that position. Essentially any time you're actively browsing the items in the list, that's when state becomes important, and if it's not handled the user experience can be really disrupting. Content Caching If you're building client-centric SPA-style applications this is a fairly easy problem to solve - you tend to render the list once and then update the page content to overlay the detail content, only hiding the list temporarily until it's used again later. It's relatively easy to accomplish this simply by hiding content on the page and later making it visible again. But if you use server-rendered content, hanging on to all the detail like filters, selections and scroll position is not quite as easy. Or is it??? This is where sessionStorage comes in handy. What if we just save the rendered content of a previous page, and then restore it when we return to this page based on a special flag that tells us to use the cached version? Let's see how we can do this. A real World Use Case Recently my local ISP asked me to help out with updating an ancient classifieds application. They had a very busy local classifieds app that was originally an ASP classic application. The old app was - wait for it - frames based, and even though I lobbied against it, the decision was made to keep the frames-based layout to allow rapid browsing of the hundreds of posts that are made on a daily basis. The primary reason they wanted this was precisely the ability to quickly browse content item by item. While I personally hate working with frames, I have to admit that the UI actually works well with the frames layout, as long as you're running on a large desktop screen. You can check out the frames-based desktop site here: http://classifieds.gorge.net/ However, when I rebuilt the app I also added a secondary view that doesn't use frames. The main reason for this of course was mobile displays, which work horribly with frames. So there's a somewhat mobile-friendly version of the interface, which ditches the frames and uses some responsive design tweaking for mobile-capable operation: http://classifeds.gorge.net/mobile (or browse the base URL with your browser width under 800px) Here's what the mobile, non-frames view looks like: As you can see, this means that the list of classifieds posts is now a plain list, and there's a separate page for drilling down into each item. And of course… originally we ran into that usability issue I mentioned earlier, where the browse, view detail, go back to the list cycle resulted in lost list state. Originally in mobile mode you scrolled through the list, found an item to look at and drilled in to display the item detail. Then you clicked back to the list and BAM - you've lost your place. Because there are so many items added on a daily basis the full list is never fully loaded; rather, there's a "Load Additional Listings" entry at the bottom. Not only did we originally lose our place when coming back to the list, but any 'additionally loaded' items were no longer there, because the list was now rendering as if it were the first page hit. The additional listings, any filters, the selection of an item - all were lost. Major Suckage!
Using Client SessionStorage to cache Server Rendered Content To work around this problem I decided to cache the rendered page content from the list in sessionStorage. Any time the list renders or is updated with Load Additional Listings, the page HTML is cached and stored in session storage. Any back links from the detail page or the login or write-entry forms then point back to the list page with a back=true query string parameter. If the server side sees this parameter it doesn't render the part of the page that is cached. Instead, the client-side code retrieves the data from the sessionStorage cache and simply inserts it into the page. It sounds pretty simple, and overall the process is really easy, but there are a few gotchas that I'll discuss in a minute. But first let's look at the implementation. Let's start with the server side, because that'll give a quick idea of the doc structure. As I mentioned, the server renders data from an ASP.NET MVC view. The URL used when returning to the list page from the display page (or a host of other pages) looks like this: https://classifieds.gorge.net/list?back=True The query string value is a flag that indicates whether the server should render the HTML. Here's what the top-level MVC Razor view for the list page looks like:
@model MessageListViewModel
@{
    ViewBag.Title = "Classified Listing";
    bool isBack = !string.IsNullOrEmpty(Request.QueryString["back"]);
}
<form method="post" action="@Url.Action("list")">
    <div id="SizingContainer">
        @if (!isBack)
        {
            @Html.Partial("List_CommandBar_Partial", Model)
            <div id="PostItemContainer" class="scrollbox" xstyle="-webkit-overflow-scrolling: touch;">
                @Html.Partial("List_Items_Partial", Model)
                @if (Model.RequireLoadEntry)
                {
                    <div class="postitem loadpostitems" style="padding: 15px;">
                        <div id="LoadProgress" class="smallprogressright"></div>
                        <div class="control-progress">
                            Load additional listings...
                        </div>
                    </div>
                }
            </div>
        }
    </div>
</form>
As you can see, the query string triggers a conditional block that, if set, is simply not rendered. The content inside #SizingContainer basically holds the entire page's HTML sans the headers and scripts, but including the filter options and menu at the top. In this case that makes good sense - in other situations, the fact that the menu or filter options might be dynamically updated might make you cache only the list rather than essentially the entire page. In this particular instance all of the content works and produces the proper result, as both the list and any filter conditions in the form inputs are restored. Ok, let's move on to the client. On the client there are two page-level functions that deal with saving and restoring state. Like the counter example I showed earlier, I like to wrap the logic to save and restore values from sessionStorage into separate functions, because they are almost always used in several places.
page.saveData = function(id) {
    if (!sessionStorage)
        return;
    var data = {
        id: id,
        scroll: $("#PostItemContainer").scrollTop(),
        html: $("#SizingContainer").html()
    };
    sessionStorage.setItem("list_html", JSON.stringify(data));
};
page.restoreData = function() {
    if (!sessionStorage)
        return;
    var data = sessionStorage.getItem("list_html");
    if (!data)
        return null;
    return JSON.parse(data);
};
The data that is saved is an object which contains an id - the selected element when the user clicks - and a scroll position. These two values are used to reset the scroll position when the data is used from the cache.
Finally, the HTML from the #SizingContainer element is stored, which makes up the bulk of the document's HTML. In this application the HTML captured could be a substantial bit of data. If you recall, I mentioned that the server-side code renders a small chunk of data initially and then gets more data if the user reads through the first 50 or so items. The rest of the items retrieved can be rather sizable. Other than the JSON deserialization overhead that's OK - since I'm using sessionStorage, the storage space has no immediate limits. Next is the core logic to handle saving and restoring the page state. At first glance this seems pretty simple, and in some cases it might be, but as the following code demonstrates there are a few gotchas to watch out for. Here's the relevant code I use to save and restore:
$( function() {
    …
    var isBack = getUrlEncodedKey("back", location.href);
    if (isBack) {
        // remove the back key from URL
        setUrlEncodedKey("back", "", location.href);
        var data = page.restoreData();  // restore from sessionState
        if (!data) {
            // no data - force redisplay of the server side default list
            window.location = "list";
            return;
        }
        $("#SizingContainer").html(data.html);
        var el = $(".postitem[data-id=" + data.id + "]");
        $(".postitem").removeClass("highlight");
        el.addClass("highlight");
        $("#PostItemContainer").scrollTop(data.scroll);
        setTimeout(function() {
            el.removeClass("highlight");
        }, 2500);
    }
    else if (window.noFrames)
        page.saveData(null);  // save when page loads

    $("#SizingContainer").on("click", ".postitem", function() {
        var id = $(this).attr("data-id");
        if (!id)
            return true;
        if (window.noFrames)
            page.saveData(id);
        var contentFrame = window.parent.frames["Content"];
        if (contentFrame)
            contentFrame.location.href = "show/" + id;
        else
            window.location.href = "show/" + id;
        return false;
    });
    …
The code starts out by checking for the back query string flag, which triggers restoring from the client cache. If cached, the cached data structure is read from sessionStorage. It's important here to check whether data was returned. If the user had back=true on the query string but there is no cached data, they likely bookmarked this page or otherwise shut down the browser and came back to this URL. In that case the server didn't render any detail and we have no cached data, so all we can do is redirect to the original default list view using window.location. If we continued, the page would render no data - so make sure to always check the cache retrieval result. Always! If there is data, it's loaded and the data.html content is restored back into the document by simply injecting the HTML into the document's #SizingContainer element:
$("#SizingContainer").html(data.html);
It's that simple, and it's quite quick even with a fully loaded list of additional items, even on a phone. The actual HTML data is stored to the cache on every initial page load and then again when the user clicks on an element to navigate to a particular listing. The former ensures that the client cache always has something in it, and the latter updates it with information for the selected element. For the click handling I use a data-id attribute on the list item (.postitem) in the list and retrieve the id from that. That id is then used to navigate to the actual entry, as well as being stored in the cached data. The id is used to reset the selection by searching for the data-id value in the restored elements.
The overall save/restore process is pretty straightforward and doesn't require a bunch of code, yet it yields a huge improvement in the usability of the site on mobile devices (or for anybody who uses the non-frames view). Some things to watch out for As easy as it conceptually seems to simply store and retrieve cached content, you have to be quite aware of what type of content you are caching. The code above is all that's specific to the cache/restore cycle and it works, but it took a few tweaks to the rest of the script code and server code to make it all work. There were a few gotchas that weren't immediately obvious. Here are a few things to pay attention to: event handling logic, the timing of DOM manipulation, inline script code, and bookmarking the cached URL when no cache exists. Do you have inline script code in your HTML? That script code isn't going to run if you restore from cache and simply assign the HTML, or it may not run at the time you'd expect in the normal DOM rendering cycle. JavaScript Event Hookups The biggest issue I ran into with this approach almost immediately is that originally I had various static event handlers hooked up to various UI elements that are now cached. If you have an event handler like:
$("#btnSearch").click( function() {…});
that works fine when the page loads with server-rendered HTML, but that code breaks when you load the HTML from cache. Why? Because the elements you're trying to hook those events to may not actually be there - yet. Luckily there's an easy workaround for this, using deferred events. With jQuery you can use the .on() event handler instead:
$("#SelectionContainer").on("click", "#btnSearch", function() {…});
which monitors a parent element for the events and checks the inner selector for the elements to handle events on. This effectively defers event binding to runtime, so as more items are added to the document the bindings still work. For any cached content, use deferred events. Timing of manipulating DOM Elements Along the same lines, make sure that your DOM manipulation code runs after the code that loads the cached content into the page, so that you don't manipulate DOM elements that don't exist just yet. Ideally you'll want to check for the condition to restore cached content towards the top of your script code, but that can be tricky if you have components or other logic that might not all run in a straight line. Inline Script Code Here's another small problem I ran into: I use a DateTime picker widget I built a while back that relies on the jQuery date time picker. I also created a helper function that adds keyboard date navigation to it using JavaScript logic. Because of MVC's limited 'object model', the only way to embed widget content into the page is through inline script. This code broke when I inserted the cached HTML into the page, because the script code was not available when the component actually got injected into the page. As with the last point - it's a matter of timing. There's no good workaround for this - in my case I pulled out the jQuery date picker and relied on native <input type="date" /> logic instead - a better choice these days anyway, especially since this view is meant primarily to serve mobile devices, which actually support date input through the browser (unlike desktop browsers, of which only WebKit seems to support it). Bookmarking Cached Urls When you cache HTML content you have to make a decision whether to cache on the client and also not render that same content on the server.
In the Classifieds app I didn't render server-side content, so if the user comes to the page with back=True and there is no cached content I have to have a Plan B. Typically this happens when somebody ends up bookmarking the back URL. The easiest and safest solution for this scenario is to ALWAYS check the cache result to make sure it exists and, if not, have a safe URL to go back to - in this case the plain uncached list URL, which amounts to effectively redirecting. This seems really obvious in hindsight, but it's easy to overlook and not see a problem until much later, when it's not obvious at all why the page is not rendering anything. Don't use <body> to replace Content Since we're practically replacing all the HTML in the page, it may seem tempting to simply replace the HTML content of the <body> tag. Don't. The body tag usually contains key things that should stay in the page and be there when it loads - specifically script tags and possibly other embedded content. It's best to create a top-level DOM element specifically as a placeholder container for your cached content, wrapping just the actual content you want to replace. In the app above, #SizingContainer is that container. Other Approaches The approach I've used for this application is kind of specific to the existing server-rendered application we're running, and so it's just one approach you can take with caching. However, for server-rendered content caching this is a pattern I've used in a few apps to retrofit some client caching into list displays. In this application I took the path of least resistance to the existing server rendering logic. Here are a few other ways that come to mind: Using Partial HTML Rendering via AJAX: Instead of rendering the page initially on the server, the page would load empty and the client would render the UI by retrieving the respective HTML from a partial view and embedding it into the page. This effectively makes the initial rendering and the cached rendering logic identical and removes the need for the server to decide whether this request needs to be rendered or not (i.e. no checking for a back=true switch). All the logic related to caching is handled on the client in this case. Using JSON Data and Client Rendering: The hardcore client option is to do the whole UI SPA-style: pull data from the server and render it using templates or client-side data binding with Knockout/Angular et al. As with the partial rendering approach, the advantage is that there's no difference in the logic between pulling the data from cache or rendering from scratch, other than the initial check for the cache request. Of course, if the app is a full-on SPA app then caching may not even be required - the list could just stay in memory and be hidden and reactivated. I'm sure there are a number of other ways this can be handled as well, especially using AJAX. AJAX rendering might simplify the logic, but it also complicates search engine optimization since no content is loaded initially. So there are always tradeoffs, and it's important to look at all angles before deciding on any sort of caching solution. State of the Session SessionState and LocalStorage are easy to use in client code and can be integrated even with server-centric applications to provide nice caching features for content and data.
In this post I've shown a very specific scenario of storing HTML content for the purpose of remembering list view data and state, making the browsing experience for lists a bit more friendly, especially when dynamically loaded content is involved. If you haven't played with sessionStorage or localStorage, I encourage you to give them a try. There's a lot of cool stuff that you can do with this beyond the specific scenario I've covered here… Resources Overview of localStorage (also applies to sessionStorage) Web Storage Compatibility Modernizr Test Suite © Rick Strahl, West Wind Technologies, 2005-2013 Posted in JavaScript  HTML5  ASP.NET  MVC

    Read the article

  • Problem connecting to HSQLDB via Hibernate when running an Eclipse GWT project.

    - by Toby
Hi, I'm trying to run a simple GWT project where I do simple persistence via Hibernate to an HSQLDB database. I have been using this database for at least two years with several OSGi applications without any problems. So all I have done is reuse the same configuration and add a simple object mapping file. The problem I have is that I get a socket creation error whenever I try to persist the object within GWT's Jetty. I know the database is up and running: I can telnet to it and run OSGi projects that use the same config without problems. This is the stack I get when running: 25 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.Environment - Hibernate 3.3.1.GA 30 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.Environment - hibernate.properties not found 34 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.Environment - Bytecode provider name : javassist 42 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.Environment - using JDK 1.4 java.sql.Timestamp handling 162 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.Configuration - configuring from resource: hibernate.cfg.xml 162 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.Configuration - Configuration resource: hibernate.cfg.xml 268 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.Configuration - Reading mappings from resource : hbm-mappings/project.hbm.xml 382 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.HbmBinder - Mapping class: se.kanit.projectmgr.db.ProjectDAO - T_PROJECT 419 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.Configuration - Configured SessionFactory: null 3534 [21704474@qtp-26509496-0] INFO org.hibernate.connection.DriverManagerConnectionProvider - Using Hibernate built-in connection pool (not for production use!) 3534 [21704474@qtp-26509496-0] INFO org.hibernate.connection.DriverManagerConnectionProvider - Hibernate connection pool size: 1 3534 [21704474@qtp-26509496-0] INFO org.hibernate.connection.DriverManagerConnectionProvider - autocommit mode: false 3537 [21704474@qtp-26509496-0] INFO org.hibernate.connection.DriverManagerConnectionProvider - using driver: org.hsqldb.jdbcDriver at URL: jdbc:hsqldb:hsql://localhost:1476/dirtyharry 3537 [21704474@qtp-26509496-0] INFO org.hibernate.connection.DriverManagerConnectionProvider - connection properties: {user=sa, password=****} 3594 [21704474@qtp-26509496-0] WARN org.hibernate.cfg.SettingsFactory - Could not obtain connection metadata java.sql.SQLException: socket creation error at org.hsqldb.jdbc.Util.sqlException(Unknown Source) at org.hsqldb.jdbc.jdbcConnection.(Unknown Source) at org.hsqldb.jdbcDriver.getConnection(Unknown Source) at org.hsqldb.jdbcDriver.connect(Unknown Source) at java.sql.DriverManager.getConnection(DriverManager.java:582) at java.sql.DriverManager.getConnection(DriverManager.java:154) at org.hibernate.connection.DriverManagerConnectionProvider.getConnection(DriverManagerConnectionProvider.java:133) at org.hibernate.cfg.SettingsFactory.buildSettings(SettingsFactory.java:111) at org.hibernate.cfg.Configuration.buildSettings(Configuration.java:2101) at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1325) at se.kanit.projectmgr.db.HibernateUtil.(HibernateUtil.java:24) at se.kanit.web.projectmgr.server.issues.IssuesService.addIssue(IssuesService.java:26) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at
java.lang.reflect.Method.invoke(Method.java:597) at com.google.appengine.tools.development.agent.runtime.Runtime.invoke(Runtime.java:100) at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:562) at com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:188) at com.google.gwt.user.server.rpc.RemoteServiceServlet.processPost(RemoteServiceServlet.java:224) at com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62) at javax.servlet.http.HttpServlet.service(HttpServlet.java:713) at javax.servlet.http.HttpServlet.service(HttpServlet.java:806) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166) at com.google.appengine.api.blobstore.dev.ServeBlobFilter.doFilter(ServeBlobFilter.java:51) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.appengine.tools.development.StaticFileFilter.doFilter(StaticFileFilter.java:122) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418) at com.google.apphosting.utils.jetty.DevAppEngineWebAppContext.handle(DevAppEngineWebAppContext.java:70) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at com.google.appengine.tools.development.JettyContainerService$ApiProxyHandler.handle(JettyContainerService.java:349) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:938) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:755) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) 3626 [21704474@qtp-26509496-0] INFO org.hibernate.dialect.Dialect - Using dialect: org.hibernate.dialect.HSQLDialect 3640 [21704474@qtp-26509496-0] INFO org.hibernate.transaction.TransactionFactoryFactory - Using default transaction strategy (direct JDBC transactions) 3644 [21704474@qtp-26509496-0] INFO org.hibernate.transaction.TransactionManagerLookupFactory - No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended) 3644 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Automatic flush during beforeCompletion(): disabled 3644 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Automatic session close at end of transaction: disabled 3645 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Scrollable result sets: 
disabled 3645 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - JDBC3 getGeneratedKeys(): disabled 3645 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Connection release mode: auto 3646 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Default batch fetch size: 1 3646 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Generate SQL with comments: disabled 3646 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Order SQL updates by primary key: disabled 3646 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Order SQL inserts for batching: disabled 3646 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory 3651 [21704474@qtp-26509496-0] INFO org.hibernate.hql.ast.ASTQueryTranslatorFactory - Using ASTQueryTranslatorFactory 3651 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Query language substitutions: {} 3652 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - JPA-QL strict compliance: disabled 3652 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Second-level cache: enabled 3652 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Query cache: disabled 3664 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Cache region factory : org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge 3665 [21704474@qtp-26509496-0] INFO org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge - Cache provider: org.hibernate.cache.NoCacheProvider 3666 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Optimize cache for minimal puts: disabled 3666 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Structured second-level cache entries: disabled 3678 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Statistics: disabled 3678 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Deleted entity synthetic identifier rollback: disabled 3678 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Default entity-mode: pojo 3679 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Named query checking : enabled 3775 [21704474@qtp-26509496-0] INFO org.hibernate.impl.SessionFactoryImpl - building session factory 4155 [21704474@qtp-26509496-0] INFO org.hibernate.impl.SessionFactoryObjectFactory - Not binding factory to JNDI, no JNDI name configured 4170 [21704474@qtp-26509496-0] INFO org.hibernate.tool.hbm2ddl.SchemaUpdate - Running hbm2ddl schema update 4170 [21704474@qtp-26509496-0] INFO org.hibernate.tool.hbm2ddl.SchemaUpdate - fetching database metadata 4171 [21704474@qtp-26509496-0] ERROR org.hibernate.tool.hbm2ddl.SchemaUpdate - could not get database metadata java.sql.SQLException: socket creation error at org.hsqldb.jdbc.Util.sqlException(Unknown Source) at org.hsqldb.jdbc.jdbcConnection.(Unknown Source) at org.hsqldb.jdbcDriver.getConnection(Unknown Source) at org.hsqldb.jdbcDriver.connect(Unknown Source) at java.sql.DriverManager.getConnection(DriverManager.java:582) at java.sql.DriverManager.getConnection(DriverManager.java:154) at org.hibernate.connection.DriverManagerConnectionProvider.getConnection(DriverManagerConnectionProvider.java:133) at org.hibernate.tool.hbm2ddl.SuppliedConnectionProviderConnectionHelper.prepare(SuppliedConnectionProviderConnectionHelper.java:51) at org.hibernate.tool.hbm2ddl.SchemaUpdate.execute(SchemaUpdate.java:168) 
at org.hibernate.impl.SessionFactoryImpl.(SessionFactoryImpl.java:346) at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1327) at se.kanit.projectmgr.db.HibernateUtil.(HibernateUtil.java:24) at se.kanit.web.projectmgr.server.issues.IssuesService.addIssue(IssuesService.java:26) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.google.appengine.tools.development.agent.runtime.Runtime.invoke(Runtime.java:100) at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:562) at com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:188) at com.google.gwt.user.server.rpc.RemoteServiceServlet.processPost(RemoteServiceServlet.java:224) at com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62) at javax.servlet.http.HttpServlet.service(HttpServlet.java:713) at javax.servlet.http.HttpServlet.service(HttpServlet.java:806) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166) at com.google.appengine.api.blobstore.dev.ServeBlobFilter.doFilter(ServeBlobFilter.java:51) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.appengine.tools.development.StaticFileFilter.doFilter(StaticFileFilter.java:122) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418) at com.google.apphosting.utils.jetty.DevAppEngineWebAppContext.handle(DevAppEngineWebAppContext.java:70) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at com.google.appengine.tools.development.JettyContainerService$ApiProxyHandler.handle(JettyContainerService.java:349) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:938) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:755) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) 4172 [21704474@qtp-26509496-0] ERROR org.hibernate.tool.hbm2ddl.SchemaUpdate - could not complete schema update java.sql.SQLException: socket creation error at org.hsqldb.jdbc.Util.sqlException(Unknown Source) at org.hsqldb.jdbc.jdbcConnection.(Unknown Source) at org.hsqldb.jdbcDriver.getConnection(Unknown 
Source) at org.hsqldb.jdbcDriver.connect(Unknown Source) at java.sql.DriverManager.getConnection(DriverManager.java:582) at java.sql.DriverManager.getConnection(DriverManager.java:154) at org.hibernate.connection.DriverManagerConnectionProvider.getConnection(DriverManagerConnectionProvider.java:133) at org.hibernate.tool.hbm2ddl.SuppliedConnectionProviderConnectionHelper.prepare(SuppliedConnectionProviderConnectionHelper.java:51) at org.hibernate.tool.hbm2ddl.SchemaUpdate.execute(SchemaUpdate.java:168) at org.hibernate.impl.SessionFactoryImpl.(SessionFactoryImpl.java:346) at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1327) at se.kanit.projectmgr.db.HibernateUtil.(HibernateUtil.java:24) at se.kanit.web.projectmgr.server.issues.IssuesService.addIssue(IssuesService.java:26) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.google.appengine.tools.development.agent.runtime.Runtime.invoke(Runtime.java:100) at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:562) at com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:188) at com.google.gwt.user.server.rpc.RemoteServiceServlet.processPost(RemoteServiceServlet.java:224) at com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62) at javax.servlet.http.HttpServlet.service(HttpServlet.java:713) at javax.servlet.http.HttpServlet.service(HttpServlet.java:806) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166) at com.google.appengine.api.blobstore.dev.ServeBlobFilter.doFilter(ServeBlobFilter.java:51) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.appengine.tools.development.StaticFileFilter.doFilter(StaticFileFilter.java:122) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418) at com.google.apphosting.utils.jetty.DevAppEngineWebAppContext.handle(DevAppEngineWebAppContext.java:70) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at com.google.appengine.tools.development.JettyContainerService$ApiProxyHandler.handle(JettyContainerService.java:349) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:938) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:755) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218) at 
org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) 4293 [21704474@qtp-26509496-0] WARN org.hibernate.util.JDBCExceptionReporter - SQL Error: -80, SQLState: 08000 4293 [21704474@qtp-26509496-0] ERROR org.hibernate.util.JDBCExceptionReporter - socket creation error Cannot open connection org.hibernate.exception.JDBCConnectionException: Cannot open connection at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:97) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:52) at org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:449) at org.hibernate.jdbc.ConnectionManager.getConnection(ConnectionManager.java:167) at org.hibernate.jdbc.JDBCContext.connection(JDBCContext.java:142) at org.hibernate.transaction.JDBCTransaction.begin(JDBCTransaction.java:85) at org.hibernate.impl.SessionImpl.beginTransaction(SessionImpl.java:1353) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.google.appengine.tools.development.agent.runtime.Runtime.invoke(Runtime.java:100) at org.hibernate.context.ThreadLocalSessionContext$TransactionProtectionWrapper.invoke(ThreadLocalSessionContext.java:342) at $Proxy7.beginTransaction(Unknown Source) at se.kanit.projectmgr.db.HibernateUtil.saveOrUpdate(HibernateUtil.java:115) at se.kanit.web.projectmgr.server.issues.IssuesService.addIssue(IssuesService.java:26) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.google.appengine.tools.development.agent.runtime.Runtime.invoke(Runtime.java:100) at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:562) at com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:188) at com.google.gwt.user.server.rpc.RemoteServiceServlet.processPost(RemoteServiceServlet.java:224) at com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62) at javax.servlet.http.HttpServlet.service(HttpServlet.java:713) at javax.servlet.http.HttpServlet.service(HttpServlet.java:806) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166) at com.google.appengine.api.blobstore.dev.ServeBlobFilter.doFilter(ServeBlobFilter.java:51) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.appengine.tools.development.StaticFileFilter.doFilter(StaticFileFilter.java:122) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at 
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418) at com.google.apphosting.utils.jetty.DevAppEngineWebAppContext.handle(DevAppEngineWebAppContext.java:70) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at com.google.appengine.tools.development.JettyContainerService$ApiProxyHandler.handle(JettyContainerService.java:349) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:938) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:755) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Caused by: java.sql.SQLException: socket creation error at org.hsqldb.jdbc.Util.sqlException(Unknown Source) at org.hsqldb.jdbc.jdbcConnection.(Unknown Source) at org.hsqldb.jdbcDriver.getConnection(Unknown Source) at org.hsqldb.jdbcDriver.connect(Unknown Source) at java.sql.DriverManager.getConnection(DriverManager.java:582) at java.sql.DriverManager.getConnection(DriverManager.java:154) at org.hibernate.connection.DriverManagerConnectionProvider.getConnection(DriverManagerConnectionProvider.java:133) at org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:446) ... 49 more Any tips and ideas are greatly appreciated. Cheers. Toby.
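
    One commonly suggested direction for this one: the repeated java.sql.SQLException: socket creation error comes from the HSQLDB driver failing to open a TCP connection, which usually means the connection URL points at a server-mode database (jdbc:hsqldb:hsql://...) whose server process is not running — and the App Engine sandbox restricts raw sockets anyway. A minimal sketch of an in-process configuration that avoids the socket entirely; the file path below is a hypothetical placeholder, not taken from the question:

        <!-- hibernate.cfg.xml (sketch): in-process HSQLDB, no TCP connection involved -->
        <property name="hibernate.connection.driver_class">org.hsqldb.jdbcDriver</property>
        <!-- file-based database; the path is an assumption for illustration -->
        <property name="hibernate.connection.url">jdbc:hsqldb:file:data/projectmgr</property>
        <property name="hibernate.connection.username">sa</property>
        <property name="hibernate.connection.password"></property>
        <property name="hibernate.dialect">org.hibernate.dialect.HSQLDialect</property>

    If server mode is actually intended, the server has to be started first (for example, java -cp hsqldb.jar org.hsqldb.Server -database.0 file:projectmgr -dbname.0 projectmgr) and the URL must use the matching alias, e.g. jdbc:hsqldb:hsql://localhost/projectmgr.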

    Read the article

  • SPP Socket createRfcommSocketToServiceRecord will not connect

    - by philDev
    Hello, I want to use Android 2.1 to connect to an external Bluetooth device which offers an SPP port, in this case an external GPS unit. When I try to connect in the "client" role, I can't connect the established socket. If I instead set up a socket in the server role to RECEIVE text from my PC, everything works just fine: the computer can connect as the client to the socket on the phone via SPP, using either the SPP UUID or some random UUID. So the problem is not that I'm using the wrong UUID. But the other way around (i.e. calling connect on the client socket established by createRfcommSocketToServiceRecord(UUID uuid)) just doesn't work. Sadly I don't have the time to inspect the problem further. It would be great if somebody could point me in the right direction. The problem has to be in the following part of the log file. Greets, PhilDev. P.S. I'm going to be present during the office hours. Here is the log file: 03-21 03:10:52.020: DEBUG/BluetoothSocket.cpp(4643): initSocketFromFdNative 03-21 03:10:52.025: DEBUG/BluetoothSocket(4643): connect 03-21 03:10:52.025: DEBUG/BluetoothSocket(4643): doSdp 03-21 03:10:52.050: DEBUG/ADAPTER(2132): create_device(01:00:00:7F:B5:B3) 03-21 03:10:52.050: DEBUG/ADAPTER(2132): adapter_create_device(01:00:00:7F:B5:B3) 03-21 03:10:52.055: DEBUG/DEVICE(2132): Creating device [address = 01:00:00:7F:B5:B3] /org/bluez/2132/hci0/dev_01_00_00_7F_B5_B3 [name = ] 03-21 03:10:52.055: DEBUG/DEVICE(2132): btd_device_ref(0x10c18): ref=1 03-21 03:10:52.065: INFO/BluetoothEventLoop.cpp(1914): event_filter: Received signal org.bluez.Adapter:DeviceCreated from /org/bluez/2132/hci0 03-21 03:10:52.065: INFO/BluetoothService.cpp(1914): ... Object Path = /org/bluez/2132/hci0/dev_01_00_00_7F_B5_B3 03-21 03:10:52.065: INFO/BluetoothService.cpp(1914): ...
Pattern = 00001101-0000-1000-8000-00805f9b34fb, strlen = 36 03-21 03:10:52.070: DEBUG/DEVICE(2132): *************DiscoverServices******** 03-21 03:10:52.070: INFO/DTUN_HCID(2132): dtun_client_get_remote_svc_channel: starting discovery on (uuid16=0x0011) 03-21 03:10:52.070: INFO/DTUN_HCID(2132): bdaddr=01:00:00:7F:B5:B3 03-21 03:10:52.070: INFO/DTUN_CLNT(2132): Client calling DTUN_METHOD_DM_GET_REMOTE_SERVICE_CHANNEL (id 4) 03-21 03:10:52.070: INFO/(2106): DTUN_ReceiveCtrlMsg: [DTUN] Received message [BTLIF_DTUN_METHOD_CALL] 4354 03-21 03:10:52.070: INFO/(2106): handle_method_call: handle_method_call :: received DTUN_METHOD_DM_GET_REMOTE_SERVICE_CHANNEL (id 4), len 134 03-21 03:10:52.075: ERROR/BTLD(2106): ****************search UUID = 1101*********** 03-21 03:10:52.075: INFO//system/bin/btld(2103): btapp_dm_GetRemoteServiceChannel() 03-21 03:10:52.120: DEBUG/BluetoothService(1914): updateDeviceServiceChannelCache(01:00:00:7F:B5:B3) 03-21 03:10:52.120: DEBUG/BluetoothEventLoop(1914): ClassValue: null for remote device: 01:00:00:7F:B5:B3 is null 03-21 03:10:52.120: INFO/BluetoothEventLoop.cpp(1914): event_filter: Received signal org.bluez.Adapter:PropertyChanged from /org/bluez/2132/hci0 03-21 03:10:52.305: WARN/BTLD(2106): bta_dm_check_av:0 03-21 03:10:56.395: DEBUG/WifiService(1914): ACTION_BATTERY_CHANGED pluggedType: 2 03-21 03:10:57.440: WARN/BTLD(2106): SDP - Rcvd conn cnf with error: 0x4 CID 0x43 03-21 03:10:57.440: INFO/BTL-IFS(2106): send_ctrl_msg: [BTL_IFS CTRL] send BTLIF_DTUN_SIGNAL_EVT (CTRL) 13 pbytes (hdl 10) 03-21 03:10:57.445: INFO/DTUN_CLNT(2132): dtun-rx signal [DTUN_SIG_DM_RMT_SERVICE_CHANNEL] (id 42) len 15 03-21 03:10:57.445: INFO/DTUN_HCID(2132): dtun_dm_sig_rmt_service_channel: success=1, service=00000000 03-21 03:10:57.445: ERROR/DTUN_HCID(2132): discovery unsuccessful! package de.phil_dev.android.BT; import java.io.BufferedInputStream; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import java.util.UUID; import android.app.Activity; import android.bluetooth.BluetoothAdapter; import android.bluetooth.BluetoothClass; import android.bluetooth.BluetoothDevice; import android.bluetooth.BluetoothServerSocket; import android.bluetooth.BluetoothSocket; import android.content.Intent; import android.os.Bundle; import android.util.Log; import android.widget.Toast; public class ThinBTClient extends Activity { private static final String TAG = "THINBTCLIENT"; private static final boolean D = true; private BluetoothAdapter mBluetoothAdapter = null; private BluetoothSocket btSocket = null; private BufferedInputStream inStream = null; private BluetoothServerSocket myServerSocket; private ConnectThread myConnection; private ServerThread myServer; // Well known SPP UUID (will *probably* map to // RFCOMM channel 1 (default) if not in use); // see comments in onResume(). private static final UUID MY_UUID = UUID .fromString("00001101-0000-1000-8000-00805F9B34FB"); // .fromString("94f39d29-7d6d-437d-973b-fba39e49d4ee"); // ==> hardcode your slaves MAC address here <== // PC // private static String address = "00:09:DD:50:86:A0"; // GPS private static String address = "00:0B:0D:8E:D4:33"; /** Called when the activity is first created. 
*/ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); if (D) Log.e(TAG, "+++ ON CREATE +++"); mBluetoothAdapter = BluetoothAdapter.getDefaultAdapter(); if (mBluetoothAdapter == null) { Toast.makeText(this, "Bluetooth is not available.", Toast.LENGTH_LONG).show(); finish(); return; } if (!mBluetoothAdapter.isEnabled()) { Toast.makeText(this, "Please enable your BT and re-run this program.", Toast.LENGTH_LONG).show(); finish(); return; } if (D) Log.e(TAG, "+++ DONE IN ON CREATE, GOT LOCAL BT ADAPTER +++"); } @Override public void onStart() { super.onStart(); if (D) Log.e(TAG, "++ ON START ++"); } @Override public void onResume() { super.onResume(); if (D) { Log.e(TAG, "+ ON RESUME +"); Log.e(TAG, "+ ABOUT TO ATTEMPT CLIENT CONNECT +"); } // Make the phone discoverable // When this returns, it will 'know' about the server, // via it's MAC address. // mBluetoothAdapter.startDiscovery(); BluetoothDevice device = mBluetoothAdapter.getRemoteDevice(address); Log.e(TAG, device.getName() + " connected"); // myServer = new ServerThread(); // myServer.start(); myConnection = new ConnectThread(device); myConnection.start(); } @Override public void onPause() { super.onPause(); if (D) Log.e(TAG, "- ON PAUSE -"); try { btSocket.close(); } catch (IOException e2) { Log.e(TAG, "ON PAUSE: Unable to close socket.", e2); } } @Override public void onStop() { super.onStop(); if (D) Log.e(TAG, "-- ON STOP --"); } @Override public void onDestroy() { super.onDestroy(); if (D) Log.e(TAG, "--- ON DESTROY ---"); } private class ServerThread extends Thread { private final BluetoothServerSocket myServSocket; public ServerThread() { BluetoothServerSocket tmp = null; // create listening socket try { tmp = mBluetoothAdapter .listenUsingRfcommWithServiceRecord( "myServer", MY_UUID); } catch (IOException e) { Log.e(TAG, "Server establishing failed"); } myServSocket = tmp; } public void run() { Log.e(TAG, "Beginn waiting for connection"); BluetoothSocket connectSocket = null; InputStream inStream = null; byte[] buffer = new byte[1024]; int bytes; while (true) { try { connectSocket = myServSocket.accept(); } catch (IOException e) { Log.e(TAG, "Connection failed"); break; } Log.e(TAG, "ALL THE WAY AROUND"); try { connectSocket = connectSocket.getRemoteDevice() .createRfcommSocketToServiceRecord(MY_UUID); connectSocket.connect(); } catch (IOException e1) { Log.e(TAG, "DIDNT WORK"); } // handle Connection try { inStream = connectSocket.getInputStream(); while (true) { try { bytes = inStream.read(buffer); Log.e(TAG, "Received: " + buffer.toString()); } catch (IOException e3) { Log.e(TAG, "disconnected"); break; } } } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); break; } } } void cancel() { } } private class ConnectThread extends Thread { private final BluetoothSocket mySocket; private final BluetoothDevice myDevice; public ConnectThread(BluetoothDevice device) { myDevice = device; BluetoothSocket tmp = null; try { tmp = device.createRfcommSocketToServiceRecord(MY_UUID); } catch (IOException e) { Log.e(TAG, "CONNECTION IN THREAD DIDNT WORK"); } mySocket = tmp; } public void run() { Log.e(TAG, "STARTING TO CONNECT THE SOCKET"); setName("My Connection Thread"); InputStream inStream = null; boolean run = false; //mBluetoothAdapter.cancelDiscovery(); try { mySocket.connect(); run = true; } catch (IOException e) { run = false; Log.e(TAG, this.getName() + ": CONN DIDNT WORK, Try closing socket"); try { mySocket.close(); } catch 
(IOException e1) { Log.e(TAG, this.getName() + ": COULD CLOSE SOCKET", e1); this.destroy(); } } synchronized (ThinBTClient.this) { myConnection = null; } byte[] buffer = new byte[1024]; int bytes; // handle Connection try { inStream = mySocket.getInputStream(); while (run) { try { bytes = inStream.read(buffer); Log.e(TAG, "Received: " + buffer.toString()); } catch (IOException e3) { Log.e(TAG, "disconnected"); } } } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } // starting connected thread (handling there in and output } public void cancel() { try { mySocket.close(); } catch (IOException e) { Log.e(TAG, this.getName() + " SOCKET NOT CLOSED"); } } } }
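
    The log above shows the SDP query failing ("discovery unsuccessful!") rather than the socket call itself. Two fixes are commonly reported for SPP client connects on Android 2.x, sketched below: cancel discovery before connect() (the code above leaves cancelDiscovery() commented out, and an active inquiry starves RFCOMM), and fall back to the hidden channel-based socket factory when the SDP-based one fails. The sketch reuses MY_UUID and mBluetoothAdapter from the question's code, needs import java.lang.reflect.Method;, and the channel number 1 is an assumption that happens to match many GPS units, not something taken from the question:

        // Sketch of a hardened client connect; fields come from the question's class.
        private BluetoothSocket openSppSocket(BluetoothDevice device) throws Exception {
            // Discovery must not be running while connect() is in flight.
            mBluetoothAdapter.cancelDiscovery();
            try {
                BluetoothSocket socket = device.createRfcommSocketToServiceRecord(MY_UUID);
                socket.connect();
                return socket;
            } catch (IOException e) {
                // Fallback: createRfcommSocket(int channel) is hidden API on 2.1, so it
                // is reached via reflection; channel 1 is an assumption to verify.
                Method m = device.getClass().getMethod("createRfcommSocket", int.class);
                BluetoothSocket socket = (BluetoothSocket) m.invoke(device, 1);
                socket.connect();
                return socket;
            }
        }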

    Read the article

  • Sonar Analysis crashing with default configuration in Maven

    - by Robert Mandeville
    I'm starting to experiment with Sonar, and having trouble. I'm running everything on the same Red Hat Linux server, against Java 1.6.10. I launched the server with "bin/linux-x86-32" (the JVM is 32-bit). The sonar.log shows no SEVERE or ERROR and one WARNING: that I'm using the default Derby database (I'll fix that once I get things running at all). I am trying to build a Maven project that builds a JAR. I made no Sonar-specific changes (other than one described below). I can run "mvn clean install" with no problem. However, if I then run "mvn -e sonar:sonar", I get the stack trace listed below. The server logs no events. I added the dependency "commons-pool:commons-pool:20030825.183949", but to no avail. Any idea as to what I'm doing wrong? [INFO] Error stacktraces are turned on. [INFO] Scanning for projects... [INFO] [INFO] ------------------------------------------------------------------------ [INFO] Building buildUtil 1.0 [INFO] ------------------------------------------------------------------------ [INFO] [INFO] --- sonar-maven-plugin:2.0:sonar (default-cli) @ buildUtil --- [INFO] Sonar version: 2.14 [WARN] [14:54:17.730] Derby database should be used for evaluation purpose only [INFO] [14:54:17.732] Create JDBC datasource [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 2.130s [INFO] Finished at: Mon Apr 09 14:54:17 EDT 2012 [INFO] Final Memory: 8M/198M [INFO] ------------------------------------------------------------------------ [ERROR] Failed to execute goal org.codehaus.mojo:sonar-maven-plugin:2.0:sonar (default-cli) on project buildUtil: Can not execute Sonar: PicoLifecycleException: method 'public final org.sonar.core.persistence.DefaultDatabase org.sonar.core.persistence.DefaultDatabase.start()', instance 'org.sonar.batch.bootstrap.BatchDatabase@41b635, java.lang.RuntimeException: wrapper: org/apache/commons/pool/impl/GenericObjectPool: org.apache.commons.pool.impl.GenericObjectPool -> [Help 1] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.codehaus.mojo:sonar-maven-plugin:2.0:sonar (default-cli) on project buildUtil: Can not execute Sonar at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:217) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59) at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183) at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:319) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156) at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537) at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196) at org.apache.maven.cli.MavenCli.main(MavenCli.java:141) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290) at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230) at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409) at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352) Caused by: org.apache.maven.plugin.MojoExecutionException: Can not execute Sonar at org.codehaus.mojo.sonar.Bootstraper.executeMojo(Bootstraper.java:118) at org.codehaus.mojo.sonar.Bootstraper.start(Bootstraper.java:65) at org.codehaus.mojo.sonar.SonarMojo.execute(SonarMojo.java:90) at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209) ... 19 more Caused by: org.picocontainer.PicoLifecycleException: PicoLifecycleException: method 'public final org.sonar.core.persistence.DefaultDatabase org.sonar.core.persistence.DefaultDatabase.start()', instance 'org.sonar.batch.bootstrap.BatchDatabase@41b635, java.lang.RuntimeException: wrapper at org.picocontainer.monitors.NullComponentMonitor.lifecycleInvocationFailed(NullComponentMonitor.java:77) at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.monitorAndThrowReflectionLifecycleException(ReflectionLifecycleStrategy.java:132) at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.invokeMethod(ReflectionLifecycleStrategy.java:115) at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.start(ReflectionLifecycleStrategy.java:89) at org.picocontainer.injectors.AbstractInjectionFactory$LifecycleAdapter.start(AbstractInjectionFactory.java:84) at org.picocontainer.behaviors.AbstractBehavior.start(AbstractBehavior.java:169) at org.picocontainer.behaviors.Stored$RealComponentLifecycle.start(Stored.java:132) at org.picocontainer.behaviors.Stored.start(Stored.java:110) at org.picocontainer.DefaultPicoContainer.potentiallyStartAdapter(DefaultPicoContainer.java:1009) at org.picocontainer.DefaultPicoContainer.startAdapters(DefaultPicoContainer.java:1002) at org.picocontainer.DefaultPicoContainer.start(DefaultPicoContainer.java:760) at org.sonar.api.platform.ComponentContainer.startComponents(ComponentContainer.java:70) at org.sonar.batch.bootstrap.Module.start(Module.java:82) at org.sonar.batch.bootstrapper.Batch.startBatch(Batch.java:71) at org.sonar.batch.bootstrapper.Batch.execute(Batch.java:58) at org.sonar.maven3.SonarMojo.execute(SonarMojo.java:143) at org.codehaus.mojo.sonar.Bootstraper.executeMojo(Bootstraper.java:113) ... 23 more Caused by: java.lang.RuntimeException: wrapper at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.monitorAndThrowReflectionLifecycleException(ReflectionLifecycleStrategy.java:130) ... 38 more Caused by: java.lang.NoClassDefFoundError: org/apache/commons/pool/impl/GenericObjectPool at org.apache.commons.dbcp.BasicDataSourceFactory.createDataSource(BasicDataSourceFactory.java:152) at org.sonar.core.persistence.DefaultDatabase.initDatasource(DefaultDatabase.java:114) at org.sonar.core.persistence.DefaultDatabase.start(DefaultDatabase.java:60) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.invokeMethod(ReflectionLifecycleStrategy.java:110) ... 
37 more Caused by: java.lang.ClassNotFoundException: org.apache.commons.pool.impl.GenericObjectPool at org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy.loadClass(SelfFirstStrategy.java:50) at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:244) at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:230) at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320) ... 45 more [ERROR] [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException The POM I'm using is: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.phoenix.build</groupId> <artifactId>buildUtil</artifactId> <version>1.0</version> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> </properties> <build> <sourceDirectory>src/main/java</sourceDirectory> <testSourceDirectory>src/test/java</testSourceDirectory> <plugins> <plugin> <artifactId>maven-compiler-plugin</artifactId> <version>2.3.2</version> <configuration> <source>1.6</source> <target>1.6</target> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <version>2.7.2</version> <configuration> <excludes> <exclude>**/*integrationTest.java</exclude> </excludes> </configuration> <executions> <execution> <id>integration-tests</id> <phase>integration-test</phase> <goals> <goal>test</goal> </goals> <configuration> <skip>false</skip> <excludes> <exclude>none</exclude> </excludes> <includes> <include>**/*integrationTest.java</include> </includes> </configuration> </execution> </executions> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <version>2.2</version> <executions> <execution> <goals> <goal>test-jar</goal> </goals> </execution> </executions> </plugin> </plugins> </build> <dependencies> <dependency> <groupId>commons-cli</groupId> <artifactId>commons-cli</artifactId> <version>1.2</version> <type>jar</type> <optional>false</optional> </dependency> <dependency> <groupId>jaxen</groupId> <artifactId>jaxen</artifactId> <version>1.1.1</version> <type>jar</type> <optional>false</optional> </dependency> <dependency> <groupId>dom4j</groupId> <artifactId>dom4j</artifactId> <version>1.6.1</version> <type>jar</type> <optional>false</optional> </dependency> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.8.2</version> <type>jar</type> <scope>test</scope> <optional>false</optional> </dependency> <dependency> <groupId>org.apache.maven</groupId> <artifactId>maven-artifact</artifactId> <version>2.0</version> <type>jar</type> <optional>false</optional> </dependency> <dependency> <groupId>org.codehaus.plexus</groupId> <artifactId>plexus-classworlds</artifactId> <version>2.2.2</version> <type>jar</type> <optional>false</optional> </dependency> <dependency> <groupId>com.ibm.db2</groupId> <artifactId>db2jcc</artifactId> <version>9.7</version> <type>jar</type> <optional>false</optional> </dependency> <dependency> <groupId>com.ibm.db2</groupId> <artifactId>Common</artifactId> <version>9.7</version> 
<type>jar</type> <optional>false</optional> </dependency> <dependency> <groupId>com.ibm.db2</groupId> <artifactId>db2fs</artifactId> <version>9.7</version> <type>jar</type> <optional>false</optional> </dependency> <dependency> <groupId>com.ibm.db2</groupId> <artifactId>db2java</artifactId> <version>9.7</version> <type>zip</type> <optional>false</optional> </dependency> <dependency> <groupId>com.ibm.db2</groupId> <artifactId>db2jcc_javax</artifactId> <version>9.7</version> <type>jar</type> <optional>false</optional> </dependency> <dependency> <groupId>com.ibm.db2</groupId> <artifactId>db2jcc_license_cisuz</artifactId> <version>9.7</version> <type>jar</type> <optional>false</optional> </dependency> <dependency> <groupId>com.ibm.db2</groupId> <artifactId>db2jcc_license_cu</artifactId> <version>9.7</version> <type>jar</type> <optional>false</optional> </dependency> <dependency> <groupId>com.ibm.db2</groupId> <artifactId>db2policy</artifactId> <version>9.7</version> <type>jar</type> <optional>false</optional> </dependency> <dependency> <groupId>com.ibm.db2</groupId> <artifactId>sqlj</artifactId> <version>9.7</version> <type>zip</type> <optional>false</optional> </dependency> <dependency> <groupId>com.ibm.db2</groupId> <artifactId>db2qgjava</artifactId> <version>9.7</version> <type>jar</type> <optional>false</optional> </dependency> </dependencies>
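
    A note on the root failure: the NoClassDefFoundError for org.apache.commons.pool.impl.GenericObjectPool is raised inside the plugin's class realm (see the SelfFirstStrategy frames), and a project-level commons-pool dependency never reaches a plugin's classloader — which is why the added commons-pool:20030825.183949 dependency had no effect. One hedged sketch is to declare commons-pool as a dependency of the sonar-maven-plugin itself; the 1.5.6 version below is an assumption, not something from the question:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>sonar-maven-plugin</artifactId>
          <version>2.0</version>
          <!-- Plugin-level dependencies are added to the plugin's own classpath,
               which is where the ClassNotFoundException is being raised. -->
          <dependencies>
            <dependency>
              <groupId>commons-pool</groupId>
              <artifactId>commons-pool</artifactId>
              <version>1.5.6</version>
            </dependency>
          </dependencies>
        </plugin>

    If that doesn't take, replacing the timestamp build 20030825.183949 with a real commons-pool release in the project is the other variable worth eliminating.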

    Read the article

  • EntityManager injection works in JBoss 7.1.1 but not WebSphere 7

    - by BikerJared
    I've built an EJB that will manage my database access. I'm building a web app around it that uses Struts 2. The problem I'm having is that when I deploy the EAR, the EntityManager doesn't get injected into my service class (it winds up null, which results in NullPointerExceptions). The weird thing is, it works on JBoss 7.1.1 but not on WebSphere 7. You'll notice that Struts doesn't inject the EJB, so I've got some interceptor code that does that. My current working theory is that the WS7 container can't inject the EntityManager because of Struts, for some unknown reason. My next step is to try Spring, but I'd really like to get this to work if possible. I've spent a few days searching and trying various things and haven't had any luck. I figured I'd give this a shot. Let me know if I can provide additional information. <?xml version="1.0" encoding="UTF-8"?> <persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="1.0" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"> <persistence-unit name="JPATestPU" transaction-type="JTA"> <description>JPATest Persistence Unit</description> <jta-data-source>jdbc/Test-DS</jta-data-source> <class>org.jaredstevens.jpatest.db.entities.User</class> <properties> <property name="hibernate.hbm2ddl.auto" value="update"/> </properties> </persistence-unit> </persistence> package org.jaredstevens.jpatest.db.entities; import java.io.Serializable; import javax.persistence.*; @Entity @Table public class User implements Serializable { private static final long serialVersionUID = -2643583108587251245L; private long id; private String name; private String email; @Id @GeneratedValue(strategy = GenerationType.TABLE) public long getId() { return id; } public void setId(long id) { this.id = id; } @Column(nullable=false) public String getName() { return this.name; } public void setName( String name ) { this.name = name; } @Column(nullable=false) public String getEmail() { return this.email; } @Column(nullable=false) public void setEmail( String email ) { this.email= email; } } package org.jaredstevens.jpatest.db.services; import java.util.List; import javax.ejb.Remote; import javax.ejb.Stateless; import javax.ejb.TransactionAttribute; import javax.ejb.TransactionAttributeType; import javax.persistence.EntityManager; import javax.persistence.PersistenceContext; import javax.persistence.PersistenceContextType; import javax.persistence.Query; import org.jaredstevens.jpatest.db.entities.User; import org.jaredstevens.jpatest.db.interfaces.IUserService; @Stateless(name="UserService",mappedName="UserService") @Remote public class UserService implements IUserService { @PersistenceContext(unitName="JPATestPU",type=PersistenceContextType.TRANSACTION) private EntityManager em; @TransactionAttribute(TransactionAttributeType.REQUIRED) public User getUserById(long userId) { User retVal = null; if(userId > 0) { retVal = (User)this.getEm().find(User.class, userId); } return retVal; } @TransactionAttribute(TransactionAttributeType.REQUIRED) public List<User> getUsers() { List<User> retVal = null; String sql; sql = "SELECT u FROM User u ORDER BY u.id ASC"; Query q = this.getEm().createQuery(sql); retVal = (List<User>)q.getResultList(); return retVal; } @TransactionAttribute(TransactionAttributeType.REQUIRED) public void save(User user) { this.getEm().persist(user); } @TransactionAttribute(TransactionAttributeType.REQUIRED) public boolean remove(long
userId) { boolean retVal = false; if(userId > 0) { User user = null; user = (User)this.getEm().find(User.class, userId); if(user != null) this.getEm().remove(user); if(this.getEm().find(User.class, userId) == null) retVal = true; } return retVal; } public EntityManager getEm() { return em; } public void setEm(EntityManager em) { this.em = em; } } package org.jaredstevens.jpatest.actions.user; import javax.ejb.EJB; import org.jaredstevens.jpatest.db.entities.User; import org.jaredstevens.jpatest.db.interfaces.IUserService; import com.opensymphony.xwork2.ActionSupport; public class UserAction extends ActionSupport { @EJB(mappedName="UserService") private IUserService userService; private static final long serialVersionUID = 1L; private String userId; private String name; private String email; private User user; public String getUserById() { String retVal = ActionSupport.SUCCESS; this.setUser(userService.getUserById(Long.parseLong(this.userId))); return retVal; } public String save() { String retVal = ActionSupport.SUCCESS; User user = new User(); if(this.getUserId() != null && Long.parseLong(this.getUserId()) > 0) user.setId(Long.parseLong(this.getUserId())); user.setName(this.getName()); user.setEmail(this.getEmail()); userService.save(user); this.setUser(user); return retVal; } public String getUserId() { return this.userId; } public void setUserId(String userId) { this.userId = userId; } public String getName() { return this.name; } public void setName( String name ) { this.name = name; } public String getEmail() { return this.email; } public void setEmail( String email ) { this.email = email; } public User getUser() { return this.user; } public void setUser(User user) { this.user = user; } } package org.jaredstevens.jpatest.utils; import com.opensymphony.xwork2.ActionInvocation; import com.opensymphony.xwork2.interceptor.Interceptor; public class EJBAnnotationProcessorInterceptor implements Interceptor { private static final long serialVersionUID = 1L; public void destroy() { } public void init() { } public String intercept(ActionInvocation ai) throws Exception { EJBAnnotationProcessor.process(ai.getAction()); return ai.invoke(); } } package org.jaredstevens.jpatest.utils; import java.lang.reflect.Field; import javax.ejb.EJB; import javax.naming.Context; import javax.naming.InitialContext; import javax.naming.NamingException; public class EJBAnnotationProcessor { public static void process(Object instance)throws Exception{ Field[] fields = instance.getClass().getDeclaredFields(); if(fields != null && fields.length > 0){ EJB ejb; for(Field field : fields){ ejb = field.getAnnotation(EJB.class); if(ejb != null){ field.setAccessible(true); field.set(instance, EJBAnnotationProcessor.getEJB(ejb.mappedName())); } } } } private static Object getEJB(String mappedName) { Object retVal = null; String path = ""; Context cxt = null; String[] paths = {"cell/nodes/virgoNode01/servers/server1/","java:module/"}; for( int i=0; i < paths.length; ++i ) { try { path = paths[i]+mappedName; cxt = new InitialContext(); retVal = cxt.lookup(path); if(retVal != null) break; } catch (NamingException e) { retVal = null; } } return retVal; } } <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE struts PUBLIC "-//Apache Software Foundation//DTD Struts Configuration 2.0//EN" "http://struts.apache.org/dtds/struts-2.0.dtd"> <struts> <constant name="struts.devMode" value="true" /> <package name="basicstruts2" namespace="/diagnostics" extends="struts-default"> <interceptors> <interceptor name="ejbAnnotationProcessor" 
class="org.jaredstevens.jpatest.utils.EJBAnnotationProcessorInterceptor"/> <interceptor-stack name="baseStack"> <interceptor-ref name="defaultStack"/> <interceptor-ref name="ejbAnnotationProcessor"/> </interceptor-stack> </interceptors> <default-interceptor-ref name="baseStack"/> </package> <package name="restAPI" namespace="/conduit" extends="json-default"> <interceptors> <interceptor name="ejbAnnotationProcessor" class="org.jaredstevens.jpatest.utils.EJBAnnotationProcessorInterceptor" /> <interceptor-stack name="baseStack"> <interceptor-ref name="defaultStack" /> <interceptor-ref name="ejbAnnotationProcessor" /> </interceptor-stack> </interceptors> <default-interceptor-ref name="baseStack" /> <action name="UserAction.getUserById" class="org.jaredstevens.jpatest.actions.user.UserAction" method="getUserById"> <result type="json"> <param name="ignoreHierarchy">false</param> <param name="includeProperties"> ^user\.id, ^user\.name, ^user\.email </param> </result> <result name="error" type="json" /> </action> <action name="UserAction.save" class="org.jaredstevens.jpatest.actions.user.UserAction" method="save"> <result type="json"> <param name="ignoreHierarchy">false</param> <param name="includeProperties"> ^user\.id, ^user\.name, ^user\.email </param> </result> <result name="error" type="json" /> </action> </package> </struts> Stack Trace java.lang.NullPointerException org.jaredstevens.jpatest.actions.user.UserAction.save(UserAction.java:38) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60) sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37) java.lang.reflect.Method.invoke(Method.java:611) com.opensymphony.xwork2.DefaultActionInvocation.invokeAction(DefaultActionInvocation.java:453) com.opensymphony.xwork2.DefaultActionInvocation.invokeActionOnly(DefaultActionInvocation.java:292) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:255) org.jaredstevens.jpatest.utils.EJBAnnotationProcessorInterceptor.intercept(EJBAnnotationProcessorInterceptor.java:21) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) org.apache.struts2.interceptor.debugging.DebuggingInterceptor.intercept(DebuggingInterceptor.java:256) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) com.opensymphony.xwork2.interceptor.DefaultWorkflowInterceptor.doIntercept(DefaultWorkflowInterceptor.java:176) com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) com.opensymphony.xwork2.validator.ValidationInterceptor.doIntercept(ValidationInterceptor.java:265) org.apache.struts2.interceptor.validation.AnnotationValidationInterceptor.doIntercept(AnnotationValidationInterceptor.java:68) com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) com.opensymphony.xwork2.interceptor.ConversionErrorInterceptor.intercept(ConversionErrorInterceptor.java:138) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) com.opensymphony.xwork2.interceptor.ParametersInterceptor.doIntercept(ParametersInterceptor.java:211) com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98) 
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) com.opensymphony.xwork2.interceptor.ParametersInterceptor.doIntercept(ParametersInterceptor.java:211) com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) com.opensymphony.xwork2.interceptor.StaticParametersInterceptor.intercept(StaticParametersInterceptor.java:190) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) org.apache.struts2.interceptor.MultiselectInterceptor.intercept(MultiselectInterceptor.java:75) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) org.apache.struts2.interceptor.CheckboxInterceptor.intercept(CheckboxInterceptor.java:90) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) org.apache.struts2.interceptor.FileUploadInterceptor.intercept(FileUploadInterceptor.java:243) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) com.opensymphony.xwork2.interceptor.ModelDrivenInterceptor.intercept(ModelDrivenInterceptor.java:100) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) com.opensymphony.xwork2.interceptor.ScopedModelDrivenInterceptor.intercept(ScopedModelDrivenInterceptor.java:141) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) com.opensymphony.xwork2.interceptor.ChainingInterceptor.intercept(ChainingInterceptor.java:145) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) com.opensymphony.xwork2.interceptor.PrepareInterceptor.doIntercept(PrepareInterceptor.java:171) com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) com.opensymphony.xwork2.interceptor.I18nInterceptor.intercept(I18nInterceptor.java:176) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) org.apache.struts2.interceptor.ServletConfigInterceptor.intercept(ServletConfigInterceptor.java:164) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) com.opensymphony.xwork2.interceptor.AliasInterceptor.intercept(AliasInterceptor.java:192) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) com.opensymphony.xwork2.interceptor.ExceptionMappingInterceptor.intercept(ExceptionMappingInterceptor.java:187) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) org.apache.struts2.impl.StrutsActionProxy.execute(StrutsActionProxy.java:54) org.apache.struts2.dispatcher.Dispatcher.serviceAction(Dispatcher.java:511) org.apache.struts2.dispatcher.ng.ExecuteOperations.executeAction(ExecuteOperations.java:77) org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter.doFilter(StrutsPrepareAndExecuteFilter.java:91) com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:188) com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:116) com.ibm.ws.webcontainer.filter.WebAppFilterChain._doFilter(WebAppFilterChain.java:77) com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:908) 
com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:997) com.ibm.ws.webcontainer.extension.DefaultExtensionProcessor.invokeFilters(DefaultExtensionProcessor.java:1062) com.ibm.ws.webcontainer.extension.DefaultExtensionProcessor.handleRequest(DefaultExtensionProcessor.java:982) com.ibm.ws.webcontainer.webapp.WebApp.handleRequest(WebApp.java:3935) com.ibm.ws.webcontainer.webapp.WebGroup.handleRequest(WebGroup.java:276) com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:931) com.ibm.ws.webcontainer.WSWebContainer.handleRequest(WSWebContainer.java:1583) com.ibm.ws.webcontainer.channel.WCChannelLink.ready(WCChannelLink.java:186) com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:452) com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleNewRequest(HttpInboundLink.java:511) com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.processRequest(HttpInboundLink.java:305) com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.ready(HttpInboundLink.java:276) com.ibm.ws.tcp.channel.impl.NewConnectionInitialReadCallback.sendToDiscriminators(NewConnectionInitialReadCallback.java:214) com.ibm.ws.tcp.channel.impl.NewConnectionInitialReadCallback.complete(NewConnectionInitialReadCallback.java:113) com.ibm.ws.tcp.channel.impl.AioReadCompletionListener.futureCompleted(AioReadCompletionListener.java:165) com.ibm.io.async.AbstractAsyncFuture.invokeCallback(AbstractAsyncFuture.java:217) com.ibm.io.async.AsyncChannelFuture.fireCompletionActions(AsyncChannelFuture.java:161) com.ibm.io.async.AsyncFuture.completed(AsyncFuture.java:138) com.ibm.io.async.ResultHandler.complete(ResultHandler.java:204) com.ibm.io.async.ResultHandler.runEventProcessingLoop(ResultHandler.java:775) com.ibm.io.async.ResultHandler$2.run(ResultHandler.java:905) com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1604)
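
    One detail worth checking before blaming the EntityManager: EJBAnnotationProcessor.getEJB swallows every NamingException and returns null, so a lookup path that only exists on JBoss fails silently on WebSphere — and then userService itself is null at UserAction.java:38, which produces exactly this NullPointerException. WebSphere 7 ignores mappedName and publishes its own default bindings (logged by the container in SystemOut.log at startup). A hedged sketch of a getEJB variant that logs failures and tries WebSphere-style names; both WebSphere entries are assumptions to verify against the server's bind messages, not confirmed bindings:

        private static Object getEJB(String mappedName) {
            // Candidate JNDI names; WebSphere 7 does not honor mappedName, so the
            // first two entries follow its default-binding conventions (assumed).
            String[] paths = {
                "org.jaredstevens.jpatest.db.interfaces.IUserService",          // WS7 remote short form
                "ejblocal:org.jaredstevens.jpatest.db.interfaces.IUserService", // WS7 local namespace
                "java:module/" + mappedName                                     // original JBoss-style lookup
            };
            for (String path : paths) {
                try {
                    return new InitialContext().lookup(path);
                } catch (NamingException e) {
                    // The original code swallowed this; logging shows which names fail.
                    System.err.println("JNDI lookup failed for " + path + ": " + e);
                }
            }
            return null;
        }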

    Read the article

  • Why is XML deserialization not throwing exceptions when it should?

    - by chobo2
    Hi Here is some dummy xml and dummy xml schema I made. schema <?xml version="1.0" encoding="utf-8"?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="http://www.domain.com" xmlns="http://www.domain.com" elementFormDefault="qualified"> <xs:element name="vehicles"> <xs:complexType> <xs:sequence> <xs:element name="owner" minOccurs="1" maxOccurs="1"> <xs:simpleType> <xs:restriction base="xs:string"> <xs:minLength value="2" /> <xs:maxLength value="8" /> </xs:restriction> </xs:simpleType> </xs:element> <xs:element name="Car" minOccurs="1" maxOccurs="1"> <xs:complexType> <xs:sequence> <xs:element name="Information" type="CarInfo" minOccurs="0" maxOccurs="unbounded" /> </xs:sequence> </xs:complexType> </xs:element> <xs:element name="Truck"> <xs:complexType> <xs:sequence> <xs:element name="Information" type="CarInfo" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> </xs:element> <xs:element name="SUV"> <xs:complexType> <xs:sequence> <xs:element name="Information" type="CarInfo" minOccurs="0" maxOccurs="unbounded" /> </xs:sequence> </xs:complexType> </xs:element> </xs:sequence> </xs:complexType> </xs:element> <xs:complexType name="CarInfo"> <xs:sequence> <xs:element name="CarName"> <xs:simpleType> <xs:restriction base="xs:string"> <xs:minLength value="1"/> <xs:maxLength value="50"/> </xs:restriction> </xs:simpleType> </xs:element> <xs:element name="CarPassword"> <xs:simpleType> <xs:restriction base="xs:string"> <xs:minLength value="6"/> <xs:maxLength value="50"/> </xs:restriction> </xs:simpleType> </xs:element> <xs:element name="CarEmail"> <xs:simpleType> <xs:restriction base="xs:string"> <xs:pattern value="\w+([-+.']\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*"/> </xs:restriction> </xs:simpleType> </xs:element> </xs:sequence> </xs:complexType> </xs:schema> xml sample <?xml version="1.0" encoding="utf-8" ?> <vehicles> <owner>Car</owner> <Car> <Information> <CarName>Bob</CarName> <CarPassword>123456</CarPassword> <CarEmail>[email protected]</CarEmail> </Information> <Information> <CarName>Bob2</CarName> <CarPassword>123456</CarPassword> <CarEmail>[email protected]</CarEmail> </Information> </Car> <Truck> <Information> <CarName>Jim</CarName> <CarPassword>123456</CarPassword> <CarEmail>[email protected]</CarEmail> </Information> <Information> <CarName>Jim2</CarName> <CarPassword>123456</CarPassword> <CarEmail>[email protected]</CarEmail> </Information> </Truck> <SUV> <Information> <CarName>Jane</CarName> <CarPassword>123456</CarPassword> <CarEmail>[email protected]</CarEmail> </Information> <Information> <CarName>Jane</CarName> <CarPassword>123456</CarPassword> <CarEmail>[email protected]</CarEmail> </Information> </SUV> </vehicles> Serialization Class using System; using System.Collections.Generic; using System.Xml; using System.Xml.Serialization; [XmlRoot("vehicles")] public class MyClass { public MyClass() { Cars = new List<Information>(); Trucks = new List<Information>(); SUVs = new List<Information>(); } [XmlElement(ElementName = "owner")] public string Owner { get; set; } [XmlElement("Car")] public List<Information> Cars { get; set; } [XmlElement("Truck")] public List<Information> Trucks { get; set; } [XmlElement("SUV")] public List<Information> SUVs { get; set; } } public class CarInfo { public CarInfo() { Info = new List<Information>(); } [XmlElement("Information")] public List<Information> Info { get; set; } } public class Information { [XmlElement(ElementName = "CarName")] public string CarName { get; set; } [XmlElement("CarPassword")] public string 
CarPassword { get; set; } [XmlElement("CarEmail")] public string CarEmail { get; set; } } Now, I think this should all validate. If not, assume it is right, as my real file does work and this dummy one is based on it. My problem is this: I want to enforce as much as I can from my schema, such as that the "owner" tag must be the first element and must show up exactly once (minOccurs="1" maxOccurs="1"). Right now I can remove the owner element from my dummy XML file and deserialize it, and it will go on its happy way, convert it to an object, and just leave that property null. I don't want that; I want it to throw an exception or something saying this doesn't match what was expected. I don't want to have to validate things like that after deserialization. The same goes for the <car></car> tag: I want it to always appear, even if there is no information, yet I can remove it too and the deserializer is happy with that. So what attributes do I have to add to make my serialization class know that these things are required, and to throw an exception if they are not found?
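
    What this question is running into is that XmlSerializer never consults the schema; minOccurs/maxOccurs only matter to a validating reader. A minimal sketch, assuming the schema and sample are saved as vehicles.xsd and vehicles.xml (both file names are placeholders): wrap the input in a validating XmlReader so schema violations throw instead of deserializing to null. Note the sample XML also omits the schema's target namespace, so validation would flag that first.

        using System.Xml;
        using System.Xml.Schema;
        using System.Xml.Serialization;

        var settings = new XmlReaderSettings();
        settings.ValidationType = ValidationType.Schema;
        settings.Schemas.Add("http://www.domain.com", "vehicles.xsd");
        // With no ValidationEventHandler attached, a violation such as a missing
        // <owner> element surfaces as XmlSchemaValidationException during Deserialize.
        using (XmlReader reader = XmlReader.Create("vehicles.xml", settings))
        {
            var serializer = new XmlSerializer(typeof(MyClass));
            MyClass result = (MyClass)serializer.Deserialize(reader);
        }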

    Read the article

  • Multiple errors while adding searching to app

    - by Thijs
    Hi, I'm fairly new at iOS programming, but I managed to make a (in my opinion quite nice) app for the app store. The main function for my next update will be a search option for the app. I followed a tutorial I found on the internet and adapted it to fit my app. I got back quite some errors, most of which I managed to fix. But now I'm completely stuck and don't know what to do next. I know it's a lot to ask, but if anyone could take a look at the code underneath, it would be greatly appreciated. Thanks! // // RootViewController.m // GGZ info // // Created by Thijs Beckers on 29-12-10. // Copyright 2010 __MyCompanyName__. All rights reserved. // #import "RootViewController.h" // Always import the next level view controller header(s) #import "CourseCodes.h" @implementation RootViewController @synthesize dataForCurrentLevel, tableViewData; #pragma mark - #pragma mark View lifecycle // OVERRIDE METHOD - (void)viewDidLoad { [super viewDidLoad]; // Go get the data for the app... // Create a custom string that points to the right location in the app bundle NSString *pathToPlist = [[NSBundle mainBundle] pathForResource:@"SCSCurriculum" ofType:@"plist"]; // Now, place the result into the dictionary property // Note that we must retain it to keep it around dataForCurrentLevel = [[NSDictionary dictionaryWithContentsOfFile:pathToPlist] retain]; // Place the top level keys (the program codes) in an array for the table view // Note that we must retain it to keep it around // NSDictionary has a really useful instance method - allKeys // The allKeys method returns an array with all of the keys found in (this level of) a dictionary tableViewData = [[[dataForCurrentLevel allKeys] sortedArrayUsingSelector:@selector(caseInsensitiveCompare:)] retain]; //Initialize the copy array. copyListOfItems = [[NSMutableArray alloc] init]; // Set the nav bar title self.title = @"GGZ info"; //Add the search bar self.tableView.tableHeaderView = searchBar; searchBar.autocorrectionType = UITextAutocorrectionTypeNo; searching = NO; letUserSelectRow = YES; } /* - (void)viewWillAppear:(BOOL)animated { [super viewWillAppear:animated]; } */ /* - (void)viewDidAppear:(BOOL)animated { [super viewDidAppear:animated]; } */ /* - (void)viewWillDisappear:(BOOL)animated { [super viewWillDisappear:animated]; } */ /* - (void)viewDidDisappear:(BOOL)animated { [super viewDidDisappear:animated]; } */ //RootViewController.m - (void) searchBarTextDidBeginEditing:(UISearchBar *)theSearchBar { searching = YES; letUserSelectRow = NO; self.tableView.scrollEnabled = NO; //Add the done button. self.navigationItem.rightBarButtonItem = [[[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemDone target:self action:@selector(doneSearching_Clicked:)] autorelease]; } - (NSIndexPath *)tableView :(UITableView *)theTableView willSelectRowAtIndexPath:(NSIndexPath *)indexPath { if(letUserSelectRow) return indexPath; else return nil; } //RootViewController.m - (void)searchBar:(UISearchBar *)theSearchBar textDidChange:(NSString *)searchText { //Remove all objects first. 
[copyListOfItems removeAllObjects]; if([searchText length] > 0) { searching = YES; letUserSelectRow = YES; self.tableView.scrollEnabled = YES; [self searchTableView]; } else { searching = NO; letUserSelectRow = NO; self.tableView.scrollEnabled = NO; } [self.tableView reloadData]; } //RootViewController.m - (void) searchBarSearchButtonClicked:(UISearchBar *)theSearchBar { [self searchTableView]; } - (void) searchTableView { NSString *searchText = searchBar.text; NSMutableArray *searchArray = [[NSMutableArray alloc] init]; for (NSDictionary *dictionary in listOfItems) { NSArray *array = [dictionary objectForKey:@"Countries"]; [searchArray addObjectsFromArray:array]; } for (NSString *sTemp in searchArray) { NSRange titleResultsRange = [sTemp rangeOfString:searchText options:NSCaseInsensitiveSearch]; if (titleResultsRange.length > 0) [copyListOfItems addObject:sTemp]; } [searchArray release]; searchArray = nil; } //RootViewController.m - (void) doneSearching_Clicked:(id)sender { searchBar.text = @""; [searchBar resignFirstResponder]; letUserSelectRow = YES; searching = NO; self.navigationItem.rightBarButtonItem = nil; self.tableView.scrollEnabled = YES; [self.tableView reloadData]; } //RootViewController.m - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { if (searching) return 1; else return [listOfItems count]; } // Customize the number of rows in the table view. - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { if (searching) return [copyListOfItems count]; else { //Number of rows it should expect should be based on the section NSDictionary *dictionary = [listOfItems objectAtIndex:section]; NSArray *array = [dictionary objectForKey:@"Countries"]; return [array count]; } } - (NSString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)section { if(searching) return @""; if(section == 0) return @"Countries to visit"; else return @"Countries visited"; } // Customize the appearance of table view cells. - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:CellIdentifier] autorelease]; } // Set up the cell... if(searching) cell.text = [copyListOfItems objectAtIndex:indexPath.row]; else { //First get the dictionary object NSDictionary *dictionary = [listOfItems objectAtIndex:indexPath.section]; NSArray *array = [dictionary objectForKey:@"Countries"]; NSString *cellValue = [array objectAtIndex:indexPath.row]; cell.text = cellValue; } return cell; } - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { //Get the selected country NSString *selectedCountry = nil; if(searching) selectedCountry = [copyListOfItems objectAtIndex:indexPath.row]; else { NSDictionary *dictionary = [listOfItems objectAtIndex:indexPath.section]; NSArray *array = [dictionary objectForKey:@"Countries"]; selectedCountry = [array objectAtIndex:indexPath.row]; } //Initialize the detail view controller and display it.
DetailViewController *dvController = [[DetailViewController alloc] initWithNibName:@"DetailView" bundle:[NSBundle mainBundle]]; dvController.selectedCountry = selectedCountry; [self.navigationController pushViewController:dvController animated:YES]; [dvController release]; dvController = nil; } //RootViewController.m - (void) searchBarTextDidBeginEditing:(UISearchBar *)theSearchBar { //Add the overlay view. if(ovController == nil) ovController = [[OverlayViewController alloc] initWithNibName:@"OverlayView" bundle:[NSBundle mainBundle]]; CGFloat yaxis = self.navigationController.navigationBar.frame.size.height; CGFloat width = self.view.frame.size.width; CGFloat height = self.view.frame.size.height; //Parameters x = origion on x-axis, y = origon on y-axis. CGRect frame = CGRectMake(0, yaxis, width, height); ovController.view.frame = frame; ovController.view.backgroundColor = [UIColor grayColor]; ovController.view.alpha = 0.5; ovController.rvController = self; [self.tableView insertSubview:ovController.view aboveSubview:self.parentViewController.view]; searching = YES; letUserSelectRow = NO; self.tableView.scrollEnabled = NO; //Add the done button. self.navigationItem.rightBarButtonItem = [[[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemDone target:self action:@selector(doneSearching_Clicked:)] autorelease]; } // Override to allow orientations other than the default portrait orientation. - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { // Return YES for supported orientations. return YES; } - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Relinquish ownership any cached data, images, etc that aren't in use. } - (void)viewDidUnload { // Relinquish ownership of anything that can be recreated in viewDidLoad or on demand. // For example: self.myOutlet = nil; } - (void)dealloc { [dataForCurrentLevel release]; [tableViewData release]; [super dealloc]; } #pragma mark - #pragma mark Table view methods // DATA SOURCE METHOD - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { return 1; } // DATA SOURCE METHOD - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { // How many rows should be displayed? return [tableViewData count]; } // DELEGATE METHOD - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { // Cell reuse block static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; } // Configure the cell's contents - we want the program code, and a disclosure indicator cell.textLabel.text = [tableViewData objectAtIndex:indexPath.row]; cell.accessoryType = UITableViewCellAccessoryDisclosureIndicator; return cell; } //RootViewController.m - (void)searchBar:(UISearchBar *)theSearchBar textDidChange:(NSString *)searchText { //Remove all objects first. 
[copyListOfItems removeAllObjects]; if([searchText length] > 0) { [ovController.view removeFromSuperview]; searching = YES; letUserSelectRow = YES; self.tableView.scrollEnabled = YES; [self searchTableView]; } else { [self.tableView insertSubview:ovController.view aboveSubview:self.parentViewController.view]; searching = NO; letUserSelectRow = NO; self.tableView.scrollEnabled = NO; } [self.tableView reloadData]; } //RootViewController.m - (void) doneSearching_Clicked:(id)sender { searchBar.text = @""; [searchBar resignFirstResponder]; letUserSelectRow = YES; searching = NO; self.navigationItem.rightBarButtonItem = nil; self.tableView.scrollEnabled = YES; [ovController.view removeFromSuperview]; [ovController release]; ovController = nil; [self.tableView reloadData]; } // DELEGATE METHOD - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { // In any navigation-based application, you follow the same pattern: // 1. Create an instance of the next-level view controller // 2. Configure that instance, with settings and data if necessary // 3. Push it on to the navigation stack // In this situation, the next level view controller is another table view // Therefore, we really don't need a nib file (do you see a CourseCodes.xib? no, there isn't one) // So, a UITableViewController offers an initializer that programmatically creates a view // 1. Create the next level view controller // ======================================== CourseCodes *nextVC = [[CourseCodes alloc] initWithStyle:UITableViewStylePlain]; // 2. Configure it... // ================== // It needs data from the dictionary - the "value" for the current "key" (that was tapped) NSDictionary *nextLevelDictionary = [dataForCurrentLevel objectForKey:[tableViewData objectAtIndex:indexPath.row]]; nextVC.dataForCurrentLevel = nextLevelDictionary; // Set the view title nextVC.title = [tableViewData objectAtIndex:indexPath.row]; // 3. Push it on to the navigation stack // ===================================== [self.navigationController pushViewController:nextVC animated:YES]; // Memory manage it [nextVC release]; } /* // Override to support conditional editing of the table view. - (BOOL)tableView:(UITableView *)tableView canEditRowAtIndexPath:(NSIndexPath *)indexPath { // Return NO if you do not want the specified item to be editable. return YES; } */ /* // Override to support editing the table view. - (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath { if (editingStyle == UITableViewCellEditingStyleDelete) { // Delete the row from the data source. [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade]; } else if (editingStyle == UITableViewCellEditingStyleInsert) { // Create a new instance of the appropriate class, insert it into the array, and add a new row to the table view. } } */ /* // Override to support rearranging the table view. - (void)tableView:(UITableView *)tableView moveRowAtIndexPath:(NSIndexPath *)fromIndexPath toIndexPath:(NSIndexPath *)toIndexPath { } */ /* // Override to support conditional rearranging of the table view. - (BOOL)tableView:(UITableView *)tableView canMoveRowAtIndexPath:(NSIndexPath *)indexPath { // Return NO if you do not want the item to be re-orderable. return YES; } */ @end
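
    The compile errors here look largely self-inflicted by merging tutorial code into the template: several methods are defined twice in the same @implementation (searchBarTextDidBeginEditing:, searchBar:textDidChange:, doneSearching_Clicked:, numberOfSectionsInTableView:, tableView:numberOfRowsInSection:, cellForRowAtIndexPath:, and didSelectRowAtIndexPath: all appear two times), which Objective-C does not allow, and the tutorial's listOfItems array is apparently never declared in this controller, whose data actually lives in tableViewData. Each duplicated method needs to be collapsed into a single definition; a sketch of searchTableView rewritten against tableViewData under that assumption:

        // Sketch: filter the flat tableViewData array instead of the tutorial's
        // (undeclared) listOfItems dictionary structure.
        - (void)searchTableView {
            NSString *searchText = searchBar.text;
            [copyListOfItems removeAllObjects];
            for (NSString *item in tableViewData) {
                NSRange range = [item rangeOfString:searchText options:NSCaseInsensitiveSearch];
                if (range.length > 0)
                    [copyListOfItems addObject:item];
            }
        }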


  • Java MVC and Hibernate

    - by GigaPr
    Hi, I am trying to learn Java, Hibernate and the MVC pattern. Following various tutorials online I managed to map my database, and I have created a few Main methods to test it; it works. Furthermore, I have created a few pages using the MVC pattern, and I am able to display some mock data in a view. The problem is that I cannot connect the two. This is what I have. My view looks like this:

<%@ include file="/WEB-INF/jsp/include.jsp" %>
<html>
<head>
    <title>Users</title>
    <%@ include file="/WEB-INF/jsp/head.jsp" %>
</head>
<body>
    <%@ include file="/WEB-INF/jsp/header.jsp" %>
    <img src="images/rss.png" alt="Rss Feed"/>
    <%@ include file="/WEB-INF/jsp/menu.jsp" %>
    <div class="ContainerIntroText">
        <img src="images/usersList.png" class="marginL150px" alt="Add New User"/>
        <br/>
        <br/>
        <div class="usersList">
            <div class="listHeaders">
                <div class="headerBox">
                    <strong>FirstName</strong>
                </div>
                <div class="headerBox">
                    <strong>LastName</strong>
                </div>
                <div class="headerBox">
                    <strong>Username</strong>
                </div>
                <div class="headerAction">
                    <strong>Edit</strong>
                </div>
                <div class="headerAction">
                    <strong>Delete</strong>
                </div>
            </div>
            <br><br>
            <c:forEach items="${users}" var="user">
                <div class="listElement">
                    <c:out value="${user.firstName}"/>
                </div>
                <div class="listElement">
                    <c:out value="${user.lastName}"/>
                </div>
                <div class="listElement">
                    <c:out value="${user.username}"/>
                </div>
                <div class="listElementAction">
                    <input type="button" name="Edit" title="Edit" value="Edit"/>
                </div>
                <div class="listElementAction">
                    <input type="image" src="images/delete.png" name="image" alt="Delete" >
                </div>
                <br />
            </c:forEach>
        </div>
    </div>
    <a id="addUser" href="addUser.htm" title="Click to add a new user">&nbsp;</a>
</body>
</html>

My controller:

public class UsersController implements Controller {

    private UserServiceImplementation userServiceImplementation;

    public ModelAndView handleRequest(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        ModelAndView modelAndView = new ModelAndView("users");
        List<User> users = this.userServiceImplementation.get();
        modelAndView.addObject("users", users);
        return modelAndView;
    }

    public UserServiceImplementation getUserServiceImplementation() {
        return userServiceImplementation;
    }

    public void setUserServiceImplementation(UserServiceImplementation userServiceImplementation) {
        this.userServiceImplementation = userServiceImplementation;
    }
}

My servlet definitions:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd">

    <!-- the application context definition for the springapp DispatcherServlet -->
    <bean name="/home.htm" class="com.rssFeed.mvc.HomeController"/>
    <bean name="/rssFeeds.htm" class="com.rssFeed.mvc.RssFeedsController"/>
    <bean name="/addUser.htm" class="com.rssFeed.mvc.AddUserController"/>
    <bean name="/users.htm" class="com.rssFeed.mvc.UsersController">
        <property name="userServiceImplementation" ref="userServiceImplementation"/>
    </bean>

    <bean id="userServiceImplementation" class="com.rssFeed.ServiceImplementation.UserServiceImplementation">
        <property name="users">
            <list>
                <ref bean="user1"/>
                <ref bean="user2"/>
            </list>
        </property>
    </bean>

    <bean id="user1" class="com.rssFeed.domain.User">
        <property name="firstName" value="firstName1"/>
        <property name="lastName" value="lastName1"/>
        <property name="username" value="username1"/>
<property name="password" value="password1"/> </bean> <bean id="user2" class="com.rssFeed.domain.User"> <property name="firstName" value="firstName2"/> <property name="lastName" value="lastName2"/> <property name="username" value="username2"/> <property name="password" value="password2"/> </bean> <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"></property> <property name="prefix" value="/WEB-INF/jsp/"></property> <property name="suffix" value=".jsp"></property> </bean> </beans> and finally this class to access the database public class HibernateUserDao extends HibernateDaoSupport implements UserDao { public void addUser(User user) { getHibernateTemplate().saveOrUpdate(user); } public List<User> get() { User user1 = new User(); user1.setFirstName("FirstName"); user1.setLastName("LastName"); user1.setUsername("Username"); user1.setPassword("Password"); List<User> users = new LinkedList<User>(); users.add(user1); return users; } public User get(int id) { throw new UnsupportedOperationException("Not supported yet."); } public User get(String username) { return null; } } the database connection occurs in this file <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd"> <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource"> <property name="driverClassName" value="org.hsqldb.jdbcDriver"/> <property name="url" value="jdbc:hsqldb:hsql://localhost/rss"/> <property name="username" value="sa"/> <property name="password" value=""/> </bean> <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean" > <property name="dataSource" ref="dataSource" /> <property name="mappingResources"> <list> <value>com/rssFeed/domain/User.hbm.xml</value> </list> </property> <property name="hibernateProperties" > <props> <prop key="hibernate.dialect">org.hibernate.dialect.HSQLDialect</prop> </props> </property> </bean> <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager"> <property name="sessionFactory" ref="sessionFactory"/> </bean> <bean id="userDao" class="com.rssFeed.dao.hibernate.HibernateUserDao"> <property name="sessionFactory" ref="sessionFactory"/> </bean> </beans> Could you help me to solve this problem i spent the last 4 days and nights on this issue without any success Thanks


  • Pure Server-Side Filtering with RadGridView and WCF RIA Services

    Those of you who are familiar with WCF RIA Services know that the DomainDataSource control provides a FilterDescriptors collection that enables you to filter the data returned by the query on the server. We have been using this DomainDataSource feature in our RIA Services with DomainDataSource online example for almost a year now. In the example, we are listening for RadGridView's Filtering event in order to intercept any filtering that is performed on the client and translate it to something that the DomainDataSource will understand, in this case a System.Windows.Data.FilterDescriptor being added to or removed from its FilterDescriptors collection. Think of RadGridView.FilterDescriptors as client-side filtering and of DomainDataSource.FilterDescriptors as server-side filtering. We no longer need the client-side one.

With the introduction of the Custom Filtering Controls feature, many new possibilities have opened. With these custom controls we no longer need to do any filtering on the client. I have prepared a very small project that demonstrates how to filter solely on the server by using a custom filtering control.

As I have already mentioned, filtering on the server is done through the FilterDescriptors collection of the DomainDataSource control. This collection holds instances of type System.Windows.Data.FilterDescriptor. The FilterDescriptor has three important properties:

PropertyPath: Specifies the name of the property that we want to filter on (the left operand).
Operator: Specifies the type of comparison to use when filtering. An instance of the FilterOperator enumeration.
Value: The value to compare with (the right operand). An instance of the Parameter class.

By adding filters, you can specify that only entities which meet the condition in the filter are loaded from the domain context (a bare-bones sketch of this API in isolation appears just before the control's source below). In case you are not familiar with these concepts, you might find Brad Abrams' blog interesting.

Now, our requirement is to create some kind of UI that will manipulate the DomainDataSource.FilterDescriptors collection. When it comes to collections, my first choice of course would be RadGridView. If you are not familiar with the Custom Filtering Controls concept, I would strongly recommend getting acquainted with my step-by-step tutorial Custom Filtering with RadGridView for Silverlight and checking the online example out.

I have created a simple custom filtering control that contains a RadGridView and several buttons. This control is aware of the DomainDataSource instance, since it is operating on its FilterDescriptors collection. In fact, the RadGridView that is inside it is bound to this collection. In order to display filters that are relevant for the current column only, I have applied a filter to the grid. This filter is a Telerik.Windows.Data.FilterDescriptor and is used to filter the little grid inside the custom control. It should not be confused with the DomainDataSource.FilterDescriptors collection that RadGridView is actually bound to; those are the RIA filters.

Additionally, I have added several other features. For example, if you have specified a DataFormatString on your original column, the Value column inside the custom control will pick it up and format the filter values accordingly. Also, I have transferred the data type of the column that you are filtering to the Value column of the custom control. This will help the little RadGridView determine what kind of editor to show when you begin editing, for example a date picker for DateTime columns.
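Before the control itself, here is the bare-bones shape of the server-side API the control drives; a hedged sketch, with employeesDataSource assumed to be a DomainDataSource declared in XAML (the name is borrowed from the sample at the end of this post):

// Load only employees hired after 1999-01-01; nothing is queried until Load() is called.
var hireDateFilter = new System.Windows.Data.FilterDescriptor(
    "HireDate",                                         // PropertyPath: the left operand
    System.Windows.Data.FilterOperator.IsGreaterThan,   // Operator: the comparison
    new DateTime(1999, 1, 1));                          // Value: the right operand

this.employeesDataSource.FilterDescriptors.Add(hireDateFilter);

// With AutoLoad off, the query (and the filter) hits the server only on demand:
this.employeesDataSource.Load();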
Finally, I have added four buttons two of them can be used to add or remove filters and the other two will communicate the changes you have made to the server. Here is the full source code of the DomainDataSourceFilteringControl. The XAML: <UserControl x:Class="PureServerSideFiltering.DomainDataSourceFilteringControl"    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:telerikGrid="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls.GridView"     xmlns:telerik="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls"     Width="300">     <Border x:Name="LayoutRoot"             BorderThickness="1"             BorderBrush="#FF8A929E"             Padding="5"             Background="#FFDFE2E5">           <Grid>             <Grid.RowDefinitions>                 <RowDefinition Height="Auto"/>                 <RowDefinition Height="150"/>                 <RowDefinition Height="Auto"/>             </Grid.RowDefinitions>               <StackPanel Grid.Row="0"                         Margin="2"                         Orientation="Horizontal"                         HorizontalAlignment="Center">                 <telerik:RadButton Name="addFilterButton"                                   Click="OnAddFilterButtonClick"                                   Content="Add Filter"                                   Margin="2"                                   Width="96"/>                 <telerik:RadButton Name="removeFilterButton"                                   Click="OnRemoveFilterButtonClick"                                   Content="Remove Filter"                                   Margin="2"                                   Width="96"/>             </StackPanel>               <telerikGrid:RadGridView Name="filtersGrid"                                     Grid.Row="1"                                     Margin="2"                                     ItemsSource="{Binding FilterDescriptors}"                                     AddingNewDataItem="OnFilterGridAddingNewDataItem"                                     ColumnWidth="*"                                     ShowGroupPanel="False"                                     AutoGenerateColumns="False"                                     CanUserResizeColumns="False"                                     CanUserReorderColumns="False"                                     CanUserFreezeColumns="False"                                     RowIndicatorVisibility="Collapsed"                                     IsFilteringAllowed="False"                                     CanUserSortColumns="False">                 <telerikGrid:RadGridView.Columns>                     <telerikGrid:GridViewComboBoxColumn DataMemberBinding="{Binding Operator}"                                                         UniqueName="Operator"/>                     <telerikGrid:GridViewDataColumn Header="Value"                                                     DataMemberBinding="{Binding Value.Value}"                                                     UniqueName="Value"/>                 </telerikGrid:RadGridView.Columns>             </telerikGrid:RadGridView>               <StackPanel Grid.Row="2"                         Margin="2"                         Orientation="Horizontal"                         HorizontalAlignment="Center">                 <telerik:RadButton Name="filterButton"                                   Click="OnApplyFiltersButtonClick"                   
                Content="Apply Filters"                                   Margin="2"                                   Width="96"/>                 <telerik:RadButton Name="clearButton"                                   Click="OnClearFiltersButtonClick"                                   Content="Clear Filters"                                   Margin="2"                                   Width="96"/>             </StackPanel>           </Grid>       </Border> </UserControl>   And the code-behind: using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Shapes; using Telerik.Windows.Controls.GridView; using System.Windows.Data; using Telerik.Windows.Controls; using Telerik.Windows.Data;   namespace PureServerSideFiltering {     /// <summary>     /// A custom filtering control capable of filtering purely server-side.     /// </summary>     public partial class DomainDataSourceFilteringControl : UserControl, IFilteringControl     {         // The main player here.         DomainDataSource domainDataSource;           // This is the name of the property that this column displays.         private string dataMemberName;           // This is the type of the property that this column displays.         private Type dataMemberType;           /// <summary>         /// Identifies the <see cref="IsActive"/> dependency property.         /// </summary>         /// <remarks>         /// The state of the filtering funnel (i.e. full or empty) is bound to this property.         /// </remarks>         public static readonly DependencyProperty IsActiveProperty =             DependencyProperty.Register(                 "IsActive",                 typeof(bool),                 typeof(DomainDataSourceFilteringControl),                 new PropertyMetadata(false));           /// <summary>         /// Gets or sets a value indicating whether the filtering is active.         /// </summary>         /// <remarks>         /// Set this to true if you want to lit-up the filtering funnel.         /// </remarks>         public bool IsActive         {             get { return (bool)GetValue(IsActiveProperty); }             set { SetValue(IsActiveProperty, value); }         }           /// <summary>         /// Gets or sets the domain data source.         /// We need this in order to work on its FilterDescriptors collection.         /// </summary>         /// <value>The domain data source.</value>         public DomainDataSource DomainDataSource         {             get { return this.domainDataSource; }             set { this.domainDataSource = value; }         }           public System.Windows.Data.FilterDescriptorCollection FilterDescriptors         {             get { return this.DomainDataSource.FilterDescriptors; }         }           public DomainDataSourceFilteringControl()         {             InitializeComponent();         }           public void Prepare(GridViewBoundColumnBase column)         {             this.LayoutRoot.DataContext = this;               if (this.DomainDataSource == null)             {                 // Sorry, but we need a DomainDataSource. Can't do anything without it.                 return;             }               // This is the name of the property that this column displays.             
this.dataMemberName = column.GetDataMemberName();               // This is the type of the property that this column displays.             // We need this in order to see which FilterOperators to feed to the combo-box column.             this.dataMemberType = column.DataType;               // We will use our magic Type extension method to see which operators are applicable for             // this data type. You can go to the extension method body and see what it does.             ((GridViewComboBoxColumn)this.filtersGrid.Columns["Operator"]).ItemsSource                 = this.dataMemberType.ApplicableFilterOperators();               // This is very nice as well. We will tell the Value column its data type. In this way             // RadGridView will pick up the best editor according to the data type. For example,             // if the data type of the value is DateTime, you will be editing it with a DatePicker.             // Nice!             ((GridViewDataColumn)this.filtersGrid.Columns["Value"]).DataType = this.dataMemberType;               // Yet another nice feature. We will transfer the original DataFormatString (if any) to             // the Value column. In this way if you have specified a DataFormatString for the original             // column, you will see all filter values formatted accordingly.             ((GridViewDataColumn)this.filtersGrid.Columns["Value"]).DataFormatString = column.DataFormatString;               // This is important. Since our little filtersGrid will be bound to the entire collection             // of this.domainDataSource.FilterDescriptors, we need to set a Telerik filter on the             // grid so that it will display FilterDescriptor which are relevane to this column ONLY!             Telerik.Windows.Data.FilterDescriptor columnFilter = new Telerik.Windows.Data.FilterDescriptor("PropertyPath"                 , Telerik.Windows.Data.FilterOperator.IsEqualTo                 , this.dataMemberName);             this.filtersGrid.FilterDescriptors.Add(columnFilter);               // We want to listen for this in order to activate and de-activate the UI funnel.             this.filtersGrid.Items.CollectionChanged += this.OnFilterGridItemsCollectionChanged;         }           /// <summary>         // Since the DomainDataSource is a little bit picky about adding uninitialized FilterDescriptors         // to its collection, we will prepare each new instance with some default values and then         // the user can change them later. Go to the event handler to see how we do this.         /// </summary>         void OnFilterGridAddingNewDataItem(object sender, GridViewAddingNewEventArgs e)         {             // We need to initialize the new instance with some values and let the user go on from here.             System.Windows.Data.FilterDescriptor newFilter = new System.Windows.Data.FilterDescriptor();               // This is a must. It should know what member it is filtering on.             newFilter.PropertyPath = this.dataMemberName;               // Initialize it with one of the allowed operators.             // TypeExtensions.ApplicableFilterOperators method for more info.             
newFilter.Operator = this.dataMemberType.ApplicableFilterOperators().First();               if (this.dataMemberType == typeof(DateTime))             {                 newFilter.Value.Value = DateTime.Now;             }             else if (this.dataMemberType == typeof(string))             {                 newFilter.Value.Value = "<enter text>";             }             else if (this.dataMemberType.IsValueType)             {                 // We need something non-null for all value types.                 newFilter.Value.Value = Activator.CreateInstance(this.dataMemberType);             }               // Let the user edit the new filter any way he/she likes.             e.NewObject = newFilter;         }           void OnFilterGridItemsCollectionChanged(object sender, System.Collections.Specialized.NotifyCollectionChangedEventArgs e)         {             // We are active only if we have any filters define. In this case the filtering funnel will lit-up.             this.IsActive = this.filtersGrid.Items.Count > 0;         }           private void OnApplyFiltersButtonClick(object sender, RoutedEventArgs e)         {             if (this.DomainDataSource.IsLoadingData)             {                 return;             }               // Comment this if you want the popup to stay open after the button is clicked.             this.ClosePopup();               // Since this.domainDataSource.AutoLoad is false, this will take into             // account all filtering changes that the user has made since the last             // Load() and pull the new data to the client.             this.DomainDataSource.Load();         }           private void OnClearFiltersButtonClick(object sender, RoutedEventArgs e)         {             if (this.DomainDataSource.IsLoadingData)             {                 return;             }               // We want to remove ONLY those filters from the DomainDataSource             // that this control is responsible for.             this.DomainDataSource.FilterDescriptors                 .Where(fd => fd.PropertyPath == this.dataMemberName) // Only "our" filters.                 .ToList()                 .ForEach(fd => this.DomainDataSource.FilterDescriptors.Remove(fd)); // Bye-bye!               // Comment this if you want the popup to stay open after the button is clicked.             this.ClosePopup();               // After we did our housekeeping, get the new data to the client.             this.DomainDataSource.Load();         }           private void OnAddFilterButtonClick(object sender, RoutedEventArgs e)         {             if (this.DomainDataSource.IsLoadingData)             {                 return;             }               // Let the user enter his/or her requirements for a new filter.             this.filtersGrid.BeginInsert();             this.filtersGrid.UpdateLayout();         }           private void OnRemoveFilterButtonClick(object sender, RoutedEventArgs e)         {             if (this.DomainDataSource.IsLoadingData)             {                 return;             }               // Find the currently selected filter and destroy it.             
System.Windows.Data.FilterDescriptor filterToRemove = this.filtersGrid.SelectedItem as System.Windows.Data.FilterDescriptor;             if (filterToRemove != null                 && this.DomainDataSource.FilterDescriptors.Contains(filterToRemove))             {                 this.DomainDataSource.FilterDescriptors.Remove(filterToRemove);             }         }           private void ClosePopup()         {             System.Windows.Controls.Primitives.Popup popup = this.ParentOfType<System.Windows.Controls.Primitives.Popup>();             if (popup != null)             {                 popup.IsOpen = false;             }         }     } }   Finally, we need to tell RadGridViews Columns to use this custom control instead of the default one. Here is how to do it: using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Shapes; using System.Windows.Data; using Telerik.Windows.Data; using Telerik.Windows.Controls; using Telerik.Windows.Controls.GridView;   namespace PureServerSideFiltering {     public partial class MainPage : UserControl     {         public MainPage()         {             InitializeComponent();             this.grid.AutoGeneratingColumn += this.OnGridAutoGeneratingColumn;               // Uncomment this if you want the DomainDataSource to start pre-filtered.             // You will notice how our custom filtering controls will correctly read this information,             // populate their UI with the respective filters and lit-up the funnel to indicate that             // filtering is active. Go ahead and try it.             this.employeesDataSource.FilterDescriptors.Add(new System.Windows.Data.FilterDescriptor("Title", System.Windows.Data.FilterOperator.Contains, "Assistant"));             this.employeesDataSource.FilterDescriptors.Add(new System.Windows.Data.FilterDescriptor("HireDate", System.Windows.Data.FilterOperator.IsGreaterThan, new DateTime(1998, 12, 31)));             this.employeesDataSource.FilterDescriptors.Add(new System.Windows.Data.FilterDescriptor("HireDate", System.Windows.Data.FilterOperator.IsLessThanOrEqualTo, new DateTime(1999, 12, 31)));               this.employeesDataSource.Load();         }           /// <summary>         /// First of all, we will need to replace the default filtering control         /// of each column with out custom filtering control DomainDataSourceFilteringControl         /// </summary>         private void OnGridAutoGeneratingColumn(object sender, GridViewAutoGeneratingColumnEventArgs e)         {             GridViewBoundColumnBase dataColumn = e.Column as GridViewBoundColumnBase;             if (dataColumn != null)             {                 // We do not like ugly dates.                 if (dataColumn.DataType == typeof(DateTime))                 {                     dataColumn.DataFormatString = "{0:d}"; // Short date pattern.                       // Notice how this format will be later transferred to the Value column                     // of the grid that we have inside the DomainDataSourceFilteringControl.                 }                   // Replace the default filtering control with our.                 
                dataColumn.FilteringControl = new DomainDataSourceFilteringControl()
                {
                    // Let the control know about the DDS; after all, it will work directly on it.
                    DomainDataSource = this.employeesDataSource
                };

                // Finally, light up the filtering funnel through the IsActive dependency property
                // in case there are some filters on the DDS that match our column member.
                string dataMemberName = dataColumn.GetDataMemberName();
                dataColumn.FilteringControl.IsActive =
                    this.employeesDataSource.FilterDescriptors
                    .Where(fd => fd.PropertyPath == dataMemberName)
                    .Count() > 0;
            }
        }
    }
}

The best part is that we are not only writing filters to the DomainDataSource; we can read and load them as well. If the DomainDataSource has some pre-existing filters (like the ones I have created in the code above), our control will read them and populate its UI accordingly. Even the filtering funnel will light up! Remember, the funnel is controlled by the IsActive property of our control. While this is just a basic implementation, the source code is absolutely yours and you can take it from here and extend it to match your specific business requirements. Below the main grid there is another debug grid. With its help you can monitor which filter descriptors are added to and removed from the domain data source. Download Source Code. (You will have to have the AdventureWorks sample database installed on the default SQLExpress instance in order to run it.) Enjoy!


  • ruby regex, parsing html

    - by danwoods
    Hello all, I'm trying to parse some returned html to look for currently playing movies. The pattern I'm trying to match looks like: <span dir=ltr>Clash of the Titans</span> Of which there are several in the returned html. (the html is huge, I've posted a sample at the bottom) I'm trying get an array of the movie titles with the following command: titles = listings_html.split(/(<span dir=ltr>).*(<\/span>)/) But I'm not getting the results I'm expecting. Can anyone see a problem with my approach or regex? Returned html (I believe the 'markdown'formating will render the some of the html, but this is just an example): <script>window.gbar={};(function(){function h(a,b,d){var c="on"+b;if(a.addEventListener)a.addEventListener(b,d,false);else if(a.attachEvent)a.attachEvent(c,d);else{var f=a[c];a[c]=function(){var e=f.apply(this,arguments),g=d.apply(this,arguments);return e==undefined?g:g==undefined?e:g&&e}}};var i=window.gbar,k,l,m;function n(a){var b=window.encodeURIComponent&&(document.forms[0].q||"").value;if(b)a.href=a.href.replace(/([?&])q=[^&]*|$/,function(d,c){return(c||"&")+"q="+encodeURIComponent(b)})}i.qs=n;function o(a,b,d,c,f,e){var g=document.getElementById(a);if(g){var j=g.style;j.left=c?"auto":b+"px";j.right=c?b+"px":"auto";j.top=d+"px";j.visibility=l?"hidden":"visible";if(f&&e){j.width=f+"px";j.height=e+"px"}else{o(k,b,d,c,g.offsetWidth,g.offsetHeight);l=l?"":a}}}i.tg=function(a){a=a||window.event;var b,d=a.target||a.srcElement;a.cancelBubble=true;if(k!=null)p(d);else{b=document.createElement(Array.every||window.createPopup?"iframe":"div");b.frameBorder="0";k=b.id="gbs";b.src="javascript:''";d.parentNode.appendChild(b);h(document,"click",i.close);p(d);i.alld&&i.alld(function(){var c=document.getElementById("gbli");if(c){var f=c.parentNode;q(f,c);var e=c.prevSibling;f.removeChild(c);i.removeExtraDelimiters(f,e);b.style.height=f.offsetHeight+"px"}})}};function r(a){var b,d=document.defaultView;if(d&&d.getComputedStyle){if(a=d.getComputedStyle(a,""))b=a.direction}else b=a.currentStyle?a.currentStyle.direction:a.style.direction;return b=="rtl"}function p(a){var b=0;if(a.className!="gb3")a=a.parentNode;var d=a.getAttribute("aria-owns")||"gbi",c=a.offsetWidth,f=a.offsetTop>20?46:24,e=false;do b+=a.offsetLeft||0;while(a=a.offsetParent);a=(document.documentElement.clientWidth||document.body.clientWidth)-b-c;c=r(document.body);if(d=="gbi"){var g=document.getElementById("gbi");q(g,document.getElementById("gbli")||g.firstChild);if(c){b=a;e=true}}else if(!c){b=a;e=true}l!=d&&i.close();o(d,b,f,e)}i.close=function(){l&&o(l,0,0)};function s(a,b,d){if(!m){m="gb2";if(i.alld){var c=i.findClassName(a);if(c)m=c}}a.insertBefore(b,d).className=m}function q(a,b){for(var d,c=window.navExtra;c&&(d=c.pop());)s(a,d,b)}i.addLink=function(a,b,d){if((b=document.getElementById(b))&&a){a.className="gb4";var c=document.createElement("span");c.appendChild(a);c.appendChild(document.createTextNode(" | "));c.id=d;b.appendChild(c)}}})();if(!window.google)window.google={};if(!window.google.movies)window.google.movies={};window.google.movies.registerFixdir=function(){var c="[\u0000- !-@[-{-\u00bf\u00d7\u00f7\u02b9-\u02ff\u2000-\u2bff]",g=new RegExp("^"+c+"([0-9]"+c+"$|[A-Za-z\u00c0-\u00d6\u00d8-\u00f6\u00f8-\u02b8\u0300-\u0590\u0800-\u1fff\u2c00-\ufb1c\ufdfe-\ufe6f\ufefd-\uffff])"),h=new RegExp("^"+c+"$");function e(d,a){if(!a)a=d&&d.target?d.target:window.event.srcElement;a.dir=g.test(a.value)?"ltr":(h.test(a.value)?"":"rtl")} var i=[document.getElementsByName("q")[0],document.getElementById("mtq")];for(var 
f=0,b;b=i[f];f++)if(b){b.onkeyup=e;e(null,b)}}; Movie Showtimes - Google Search.fl:link{}a:link,.w,a.w:link,.w a:link{color:#00c}a:visited{color:#551a8b}a:active{color:red}.t a:link,.t a:active,.t a:visited,.t{color:#000}.left{width:12em}.box{background:#fff}.nopadding{padding:0}.k{background:#36c}.z{display:none}.x{width:3em}.y{width:23em}.b{color:#00c;font-size:12pt;font-weight:bold}.i,.i:link{color:#a90a08}.n a{color:#000;font-size:10pt}.n .b a{color:#00c}.n .i{font-size:10pt;font-weight:bold}.h{cursor:pointer}body{background:#fff;font:82% Arial,Helvetica,sans-serif;margin:3px 0 0;padding:0}table{border-collapse:collapse;border-spacing:0}img{border:0}td,th{vertical-align:top}h1,h2{font-size:100%;margin:0}a{color:#00c}/ CSS for page */#title_bar{background:#f0f7f9;border-top:1px solid #6b90da;padding-bottom:4px;padding-left:8px;padding-top:4px}#google_bar{margin:3px 10px}#search_form{margin:3px 10px}#left_nav{border-right:1px solid #c9d7f1;margin-top:11px;position:absolute;left:9px;width:13.4em}#left_nav .section{margin-bottom:1.2em}.hidden{visibility:hidden}#results{height:auto !important;height:350px;margin-left:15em;min-height:350px;min-width:800px;width:expression(document.body.clientWidth<1000?"800px":"99.9%")}.name{font-size:124%;margin:0}.times{clear:both;margin:0}.address{margin:0}#movie_results{overflow:auto}.movie_results{margin-top:11px}.movie{clear:both;margin-bottom:40px}.movie .header{padding-left:8px}.movie .img{border:1px solid #ccc;float:left;margin-bottom:10px}.movie .desc{margin-bottom:15px;max-width:42em}.movie h2{font-size:124%;margin-bottom:2px}.movie .info{margin-bottom:10px}.movie .syn{margin-bottom:10px}.movie .section_title{background:#f0f7f9;clear:both;font-size:108%;margin-bottom:11px;margin-top:11px;padding-bottom:4px;padding-left:8px;padding-top:5px}.movie .showtimes{margin-bottom:8px;padding-left:8px}.movie .show_left{width:49%}.movie .show_right{width:49%}.movie .theater{padding-bottom:15px}.theater{clear:both;padding-bottom:1px}.theater_after_icon{padding-left:25px}.theater .show_left{width:49%}.theater .show_right{width:49%}.theater h2{font-size:124%;margin-bottom:2px}.theater .icon{float:left;height:3em;margin-right:5px}.theater .closure{font-size:100%}.theater .info{font-size:100%;padding-bottom:5px;padding-top:5px}.theater .movie{margin-bottom:8px;margin-right:8px;max-width:42em}.theater .movie .desc{margin-bottom:5px;margin-left:0}.theater .movie .info{margin-top:0}.theater .showtimes{margin-bottom:40px;margin-top:8px}#theater_map{right:0;left:0;position:relative;top:0}#theater_static_map{border:1px solid #c9d7f1;margin:10px}.map_marker .name{margin-top:10px}.photo{border:1px solid #ccc;margin-bottom:20px;margin-left:8px}.show_left{float:left;margin:0;width:49.999%}.show_right{float:right;margin:0;width:50%}.show_more{clear:both;font-size:124%;margin:0}.show_more a{color:#77c}.reviews{margin-bottom:8px;padding-left:8px}.review{margin-bottom:5px}.review .publisher{color:green}.review .date{color:#6f6f6f}.trailer{margin-bottom:8px;padding-left:8px}.clear{clear:both}.iconA{background:url(http://maps.gstatic.com/mapfiles/red_icons_A_J.png) repeat 0 0}.iconB{background:url(http://maps.gstatic.com/mapfiles/red_icons_A_J.png) repeat 0 -38px}.iconC{background:url(http://maps.gstatic.com/mapfiles/red_icons_A_J.png) repeat 0 -76px}.iconD{background:url(http://maps.gstatic.com/mapfiles/red_icons_A_J.png) repeat 0 -114px}.iconE{background:url(http://maps.gstatic.com/mapfiles/red_icons_A_J.png) repeat 0 
-152px}.iconF{background:url(http://maps.gstatic.com/mapfiles/red_icons_A_J.png) repeat 0 -190px}.iconG{background:url(http://maps.gstatic.com/mapfiles/red_icons_A_J.png) repeat 0 -228px}.iconH{background:url(http://maps.gstatic.com/mapfiles/red_icons_A_J.png) repeat 0 -266px}.iconI{background:url(http://maps.gstatic.com/mapfiles/red_icons_A_J.png) repeat 0 -304px}.iconJ{background:url(http://maps.gstatic.com/mapfiles/red_icons_A_J.png) repeat 0 -342px}.iconK{background:url(http://maps.gstatic.com/mapfiles/red_icons_K_Z.png) repeat 0 0}.iconL{background:url(http://maps.gstatic.com/mapfiles/red_icons_K_Z.png) repeat 0 -38px}.iconM{background:url(http://maps.gstatic.com/mapfiles/red_icons_K_Z.png) repeat 0 -76px}.iconN{background:url(http://maps.gstatic.com/mapfiles/red_icons_K_Z.png) repeat 0 -114px}.iconO{background:url(http://maps.gstatic.com/mapfiles/red_icons_K_Z.png) repeat 0 -152px}.iconP{background:url(http://maps.gstatic.com/mapfiles/red_icons_K_Z.png) repeat 0 -190px}.iconQ{background:url(http://maps.gstatic.com/mapfiles/red_icons_K_Z.png) repeat 0 -228px}.iconR{background:url(http://maps.gstatic.com/mapfiles/red_icons_K_Z.png) repeat 0 -266px}.iconS{background:url(http://maps.gstatic.com/mapfiles/red_icons_K_Z.png) repeat 0 -304px}.iconT{background:url(http://maps.gstatic.com/mapfiles/red_icons_K_Z.png) repeat 0 -342px}.iconU{background:url(http://maps.gstatic.com/mapfiles/red_icons_K_Z.png) repeat 0 -380px}.iconV{background:url(http://maps.gstatic.com/mapfiles/red_icons_K_Z.png) repeat 0 -418px}.iconW{background:url(http://maps.gstatic.com/mapfiles/red_icons_K_Z.png) repeat 0 -456px}.iconX{background:url(http://maps.gstatic.com/mapfiles/red_icons_K_Z.png) repeat 0 -494px}.iconY{background:url(http://maps.gstatic.com/mapfiles/red_icons_K_Z.png) repeat 0 -532px}.iconZ{background:url(http://maps.gstatic.com/mapfiles/red_icons_K_Z.png) repeat 0 -570px}#gbar,#guser{font-size:13px;padding-top:1px !important}#gbar{float:left;height:22px}#guser{padding-bottom:7px !important;text-align:right}.gbh,.gbd{border-top:1px solid #c9d7f1;font-size:1px}.gbh{height:0;position:absolute;top:24px;width:100%}#gbs,.gbm{background:#fff;left:0;position:absolute;text-align:left;visibility:hidden;z-index:1000}.gbm{border:1px solid;border-color:#c9d7f1 #36c #36c #a2bae7;z-index:1001}.gb1{margin-right:.5em}.gb1,.gb3{zoom:1}.gb2{display:block;padding:.2em .5em;}.gb2,.gb3{text-decoration:none;border-bottom:none}a.gb1,a.gb2,a.gb3,a.gb4{color:#00c !important}.gbi .gb3,.gbi .gb2,.gbi .gb4{color:#dd8e27 !important}.gbf .gb3,.gbf .gb2,.gbf .gb4{color:#900 !important}a.gb2:hover{background:#36c;color:#fff !important}Web Images Videos Maps News Shopping Gmail more ▼Books Finance Translate Scholar Blogs YouTube Calendar Photos Documents Reader Sites Groups even more » [email protected] | Google Account settings | Sign out     Advanced Search  PreferencesShowtimes for Murfreesboro, TN 37130Change Location› Today › Tomorrow › Monday › Tuesday› Theaters › Movies› Show list view › Show map viewPremiere 6 Theater810 Northwest Broad Street, Murfreesboro, TN - (615) 896-4100Clash of the Titans? - 1hr 50min?? - Rated PG-13?? - Action/Adventure? - Trailer - IMDb2:10  4:15  6:15  8:20  10:25pmDiary of a Wimpy Kid? - 1hr 33min?? - Rated PG?? - Comedy/Drama? - Trailer - IMDb2:00  3:50  6:00  7:50  9:40pmHow to Train Your Dragon?1hr 38min?? - Rated PG?? - Family/Animation? - IMDb2:00  3:55  6:00  7:55  9:50pmThe Bounty Hunter? - 1hr 46min?? - Rated PG-13?? - Action/Adventure/Comedy/Romance? 
- Trailer - IMDb2:15  4:15  6:25  8:25  10:30pmThe Last Song? - 1hr 47min?? - Rated PG?? - Drama? - Trailer - IMDb2:20  4:15  6:30  8:35  10:35pmTyler Perry's Why Did I Get Married Too?2hr 1min?? - Rated PG-13?? - Comedy?2:20  4:35  7:30  9:45pmContinental Cinema 5450 US Highway 231 N, Troy, AL - (334) 808-4225Clash of the Titans 3D? - 1hr 50min?? - Rated PG-13?? - Action/Adventure? - IMDb1:00  4:00  7:00  9:30pmHow to Train Your Dragon 3D? - 1hr 38min?? - Rated PG?? - Family/Animation? - IMDb1:05  4:05  7:05  9:25pmThe Bounty Hunter? - 1hr 46min?? - Rated PG-13?? - Action/Adventure/Comedy/Romance? - Trailer - IMDb1:00  4:00  7:00  9:30pmThe Last Song? - 1hr 47min?? - Rated PG?? - Drama? - Trailer - IMDb1:05  4:05  7:05  9:25pmTyler Perry's Why Did I Get Married Too?2hr 1min?? - Rated PG-13?? - Comedy?12:55  3:55  6:55  9:35pmMall Cinema - Hartford KYUS Hwy 231 South 62 East, Hartford, KY - (270) 298-3315Clash of the Titans? - 1hr 50min?? - Rated PG-13?? - Action/Adventure? - Trailer - IMDb5:00  7:00  9:00pmHow to Train Your Dragon?1hr 38min?? - Rated PG?? - Family/Animation? - IMDb5:00  7:00  9:00pmCarmike Wynnsong 16 - Murfreesboro2626 Cason Square Boulevard, Murfreesboro, TN - (615) 893-2253The Last Song? - 1hr 47min?? - Rated PG?? - Drama? - Trailer - IMDb12:15  1:00  2:45  4:00  5:15  7:00  7:45 
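For what it's worth, String#split with capture groups returns the text between matches with the captured delimiters interleaved, and the greedy .* spans from the first <span dir=ltr> to the last </span>, so the split call above cannot yield a clean title list. A hedged sketch of one alternative, using the same listings_html variable as in the question:

# Each match contributes one single-element array (the capture group);
# the non-greedy .*? stops at the first closing tag instead of the last.
titles = listings_html.scan(/<span dir=ltr>(.*?)<\/span>/).flatten
# => ["Clash of the Titans", "Diary of a Wimpy Kid", ...]

For markup of this size, an HTML parser such as Nokogiri would be sturdier than a regex, but scan is the minimal change.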


  • Using a mounted NTFS share with nginx

    - by Hoff
    I have set up a local testing VM with Ubuntu Server 12.04 LTS and the LEMP stack. It's kind of an unconventional setup: instead of having all my PHP scripts on the local machine, I've mounted an NTFS share as the document root, because I do my development on Windows. I had everything working perfectly up until this morning; now I keep getting a dreaded 'File not found.' error. I am almost certain this must be somehow permission related, because if I copy my site over to /var/www, nginx and php-fpm have no problems serving my PHP scripts. What I can't figure out is why, all of a sudden (after a reboot of the server), no PHP files will be served, just the 'File not found.' error. Static files work fine, so I think it's PHP that is causing the headache. Both nginx and php-fpm are configured to run as the user www-data:

root@ubuntu-server:~# ps aux | grep 'nginx\|php-fpm'
root 1095 0.0 0.0 5816 792 ? Ss 11:11 0:00 nginx: master process /opt/nginx/sbin/nginx -c /etc/nginx/nginx.conf
www-data 1096 0.0 0.1 6016 1172 ? S 11:11 0:00 nginx: worker process
www-data 1098 0.0 0.1 6016 1172 ? S 11:11 0:00 nginx: worker process
root 1130 0.0 0.4 175560 4212 ? Ss 11:11 0:00 php-fpm: master process (/etc/php5/php-fpm.conf)
www-data 1131 0.0 0.3 175560 3216 ? S 11:11 0:00 php-fpm: pool www
www-data 1132 0.0 0.3 175560 3216 ? S 11:11 0:00 php-fpm: pool www
www-data 1133 0.0 0.3 175560 3216 ? S 11:11 0:00 php-fpm: pool www
root 1686 0.0 0.0 4368 816 pts/1 S+ 11:11 0:00 grep --color=auto nginx\|php-fpm

I have mounted the NTFS share at /mnt/webfiles by editing /etc/fstab and adding the following line:

//192.168.0.199/c$/Websites/ /mnt/webfiles cifs username=Jordan,password=mypasswordhere,gid=33,uid=33 0 0

where gid 33 is the www-data group and uid 33 is the user www-data. If I list the contents of one of my sites, you can in fact see that they belong to the user www-data:

root@ubuntu-server:~# ls -l /mnt/webfiles/nTv5-2.0
total 8
drwxr-xr-x 0 www-data www-data 0 Jun 6 19:12 app
drwxr-xr-x 0 www-data www-data 0 Aug 22 19:00 assets
-rwxr-xr-x 0 www-data www-data 1150 Jan 4 2012 favicon.ico
-rwxr-xr-x 0 www-data www-data 1412 Dec 28 2011 index.php
drwxr-xr-x 0 www-data www-data 0 Jun 3 16:44 lib
drwxr-xr-x 0 www-data www-data 0 Jan 3 2012 plugins
drwxr-xr-x 0 www-data www-data 0 Jun 3 16:45 vendors

If I switch to the www-data user, I have no problem creating a new file on the share:

root@ubuntu-server:~# su www-data
$ > /mnt/webfiles/test.txt
$ ls -l /mnt/webfiles | grep test\.txt
-rwxr-xr-x 0 www-data www-data 0 Sep 8 11:19 test.txt

There should be no problem reading or writing to the share with php-fpm running as the user www-data. When I examine the error log of nginx, it's filled with a bunch of lines that look like the following:

2012/09/08 11:22:36 [error] 1096#0: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.0.199, server: , request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.0.123"
2012/09/08 11:22:39 [error] 1096#0: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.0.199, server: , request: "GET /apc.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.0.123"

It's bizarre that this was working previously, and now all of a sudden PHP is complaining that it can't "find" the scripts on the share. Does anybody know why this is happening?
EDIT: I tried editing php-fpm.conf and changing chdir to the following:

chdir = /mnt/webfiles

When I try to restart the php-fpm service, I get the error:

Starting php-fpm [08-Sep-2012 14:20:55] ERROR: [pool www] the chdir path '/mnt/webfiles' does not exist or is not a directory

This is a total load of bullshit because this directory DOES exist and is mounted! Any ls commands to list that directory work perfectly. Why the hell can't PHP-FPM see this directory?!

Here are my configuration files for reference:

nginx.conf

user www-data;
worker_processes 2;
error_log /var/log/nginx/nginx.log info;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    multi_accept on;
}

http {
    include fastcgi.conf;
    include mime.types;
    default_type application/octet-stream;
    set_real_ip_from 127.0.0.1;
    real_ip_header X-Forwarded-For;

    ## Proxy
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size 32m;
    client_body_buffer_size 128k;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffers 32 4k;

    ## Compression
    gzip on;
    gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    ### TCP options
    tcp_nodelay on;
    tcp_nopush on;
    keepalive_timeout 65;
    sendfile on;

    include /etc/nginx/sites-enabled/*;
}

my site config

server {
    listen 80;
    access_log /var/log/nginx/$host.access.log;
    error_log /var/log/nginx/error.log;
    root /mnt/webfiles/nTv5-2.0/app/webroot;
    index index.php;

    ## Block bad bots
    if ($http_user_agent ~* (HTTrack|HTMLParser|libcurl|discobot|Exabot|Casper|kmccrew|plaNETWORK|RPT-HTTPClient)) {
        return 444;
    }

    ## Block certain Referers (case insensitive)
    if ($http_referer ~* (sex|vigra|viagra) ) {
        return 444;
    }

    ## Deny dot files:
    location ~ /\. {
        deny all;
    }

    ## Favicon Not Found
    location = /favicon.ico {
        access_log off;
        log_not_found off;
    }

    ## Robots.txt Not Found
    location = /robots.txt {
        access_log off;
        log_not_found off;
    }

    if (-f $document_root/maintenance.html) {
        rewrite ^(.*)$ /maintenance.html last;
    }

    location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
        # Some basic cache-control for static files to be sent to the browser
        expires max;
        add_header Pragma public;
        add_header Cache-Control "max-age=2678400, public, must-revalidate";
    }

    location / {
        try_files $uri $uri/ index.php;
        if (-f $request_filename) {
            break;
        }
        rewrite ^(.+)$ /index.php?url=$1 last;
    }

    location ~ \.php$ {
        include /etc/nginx/fastcgi.conf;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
}

php-fpm.conf

;;;;;;;;;;;;;;;;;;;;;
; FPM Configuration ;
;;;;;;;;;;;;;;;;;;;;;

; All relative paths in this configuration file are relative to PHP's install
; prefix (/opt/php5). This prefix can be dynamicaly changed by using the
; '-p' argument from the command line.

; Include one or more files. If glob(3) exists, it is used to include a bunch of
; files from a glob(3) pattern. This directive can be used everywhere in the
; file.
; Relative path can also be used.
They will be prefixed by: ; - the global prefix if it's been set (-p arguement) ; - /opt/php5 otherwise ;include=etc/fpm.d/*.conf ;;;;;;;;;;;;;;;;;; ; Global Options ; ;;;;;;;;;;;;;;;;;; [global] ; Pid file ; Note: the default prefix is /opt/php5/var ; Default Value: none pid = /var/run/php-fpm.pid ; Error log file ; Note: the default prefix is /opt/php5/var ; Default Value: log/php-fpm.log error_log = /var/log/php5-fpm/php-fpm.log ; Log level ; Possible Values: alert, error, warning, notice, debug ; Default Value: notice ;log_level = notice ; If this number of child processes exit with SIGSEGV or SIGBUS within the time ; interval set by emergency_restart_interval then FPM will restart. A value ; of '0' means 'Off'. ; Default Value: 0 ;emergency_restart_threshold = 0 ; Interval of time used by emergency_restart_interval to determine when ; a graceful restart will be initiated. This can be useful to work around ; accidental corruptions in an accelerator's shared memory. ; Available Units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;emergency_restart_interval = 0 ; Time limit for child processes to wait for a reaction on signals from master. ; Available units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;process_control_timeout = 0 ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging. ; Default Value: yes ;daemonize = yes ;;;;;;;;;;;;;;;;;;;; ; Pool Definitions ; ;;;;;;;;;;;;;;;;;;;; ; Multiple pools of child processes may be started with different listening ; ports and different management options. The name of the pool will be ; used in logs and stats. There is no limitation on the number of pools which ; FPM can handle. Your system will tell you anyway :) ; Start a new pool named 'www'. ; the variable $pool can we used in any directive and will be replaced by the ; pool name ('www' here) [www] ; Per pool prefix ; It only applies on the following directives: ; - 'slowlog' ; - 'listen' (unixsocket) ; - 'chroot' ; - 'chdir' ; - 'php_values' ; - 'php_admin_values' ; When not set, the global prefix (or /opt/php5) applies instead. ; Note: This directive can also be relative to the global prefix. ; Default Value: none ;prefix = /path/to/pools/$pool ; The address on which to accept FastCGI requests. ; Valid syntaxes are: ; 'ip.add.re.ss:port' - to listen on a TCP socket to a specific address on ; a specific port; ; 'port' - to listen on a TCP socket to all addresses on a ; specific port; ; '/path/to/unix/socket' - to listen on a unix socket. ; Note: This value is mandatory. ;listen = 127.0.0.1:9000 listen = /var/run/php5-fpm.sock ; Set listen(2) backlog. A value of '-1' means unlimited. ; Default Value: 128 (-1 on FreeBSD and OpenBSD) ;listen.backlog = -1 ; List of ipv4 addresses of FastCGI clients which are allowed to connect. ; Equivalent to the FCGI_WEB_SERVER_ADDRS environment variable in the original ; PHP FCGI (5.2.2+). Makes sense only with a tcp listening socket. Each address ; must be separated by a comma. If this value is left blank, connections will be ; accepted from any ip address. ; Default Value: any ;listen.allowed_clients = 127.0.0.1 ; Set permissions for unix socket, if one is used. In Linux, read/write ; permissions must be set in order to allow connections from a web server. Many ; BSD-derived systems allow connections regardless of permissions. 
; Default Values: user and group are set as the running user ; mode is set to 0666 ;listen.owner = www-data ;listen.group = www-data ;listen.mode = 0666 ; Unix user/group of processes ; Note: The user is mandatory. If the group is not set, the default user's group ; will be used. user = www-data group = www-data ; Choose how the process manager will control the number of child processes. ; Possible Values: ; static - a fixed number (pm.max_children) of child processes; ; dynamic - the number of child processes are set dynamically based on the ; following directives: ; pm.max_children - the maximum number of children that can ; be alive at the same time. ; pm.start_servers - the number of children created on startup. ; pm.min_spare_servers - the minimum number of children in 'idle' ; state (waiting to process). If the number ; of 'idle' processes is less than this ; number then some children will be created. ; pm.max_spare_servers - the maximum number of children in 'idle' ; state (waiting to process). If the number ; of 'idle' processes is greater than this ; number then some children will be killed. ; Note: This value is mandatory. pm = dynamic ; The number of child processes to be created when pm is set to 'static' and the ; maximum number of child processes to be created when pm is set to 'dynamic'. ; This value sets the limit on the number of simultaneous requests that will be ; served. Equivalent to the ApacheMaxClients directive with mpm_prefork. ; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP ; CGI. ; Note: Used when pm is set to either 'static' or 'dynamic' ; Note: This value is mandatory. pm.max_children = 50 ; The number of child processes created on startup. ; Note: Used only when pm is set to 'dynamic' ; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2 pm.start_servers = 20 ; The desired minimum number of idle server processes. ; Note: Used only when pm is set to 'dynamic' ; Note: Mandatory when pm is set to 'dynamic' pm.min_spare_servers = 5 ; The desired maximum number of idle server processes. ; Note: Used only when pm is set to 'dynamic' ; Note: Mandatory when pm is set to 'dynamic' pm.max_spare_servers = 35 ; The number of requests each child process should execute before respawning. ; This can be useful to work around memory leaks in 3rd party libraries. For ; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS. ; Default Value: 0 pm.max_requests = 500 ; The URI to view the FPM status page. If this value is not set, no URI will be ; recognized as a status page. By default, the status page shows the following ; information: ; accepted conn - the number of request accepted by the pool; ; pool - the name of the pool; ; process manager - static or dynamic; ; idle processes - the number of idle processes; ; active processes - the number of active processes; ; total processes - the number of idle + active processes. ; max children reached - number of times, the process limit has been reached, ; when pm tries to start more children (works only for ; pm 'dynamic') ; The values of 'idle processes', 'active processes' and 'total processes' are ; updated each second. The value of 'accepted conn' is updated in real time. ; Example output: ; accepted conn: 12073 ; pool: www ; process manager: static ; idle processes: 35 ; active processes: 65 ; total processes: 100 ; max children reached: 1 ; By default the status page output is formatted as text/plain. 
Passing either ; 'html' or 'json' as a query string will return the corresponding output ; syntax. Example: ; http://www.foo.bar/status ; http://www.foo.bar/status?json ; http://www.foo.bar/status?html ; Note: The value must start with a leading slash (/). The value can be ; anything, but it may not be a good idea to use the .php extension or it ; may conflict with a real PHP file. ; Default Value: not set pm.status_path = /status ; The ping URI to call the monitoring page of FPM. If this value is not set, no ; URI will be recognized as a ping page. This could be used to test from outside ; that FPM is alive and responding, or to ; - create a graph of FPM availability (rrd or such); ; - remove a server from a group if it is not responding (load balancing); ; - trigger alerts for the operating team (24/7). ; Note: The value must start with a leading slash (/). The value can be ; anything, but it may not be a good idea to use the .php extension or it ; may conflict with a real PHP file. ; Default Value: not set ping.path = /ping ; This directive may be used to customize the response of a ping request. The ; response is formatted as text/plain with a 200 response code. ; Default Value: pong ping.response = pong ; The timeout for serving a single request after which the worker process will ; be killed. This option should be used when the 'max_execution_time' ini option ; does not stop script execution for some reason. A value of '0' means 'off'. ; Available units: s(econds)(default), m(inutes), h(ours), or d(ays) ; Default Value: 0 ;request_terminate_timeout = 0 ; The timeout for serving a single request after which a PHP backtrace will be ; dumped to the 'slowlog' file. A value of '0s' means 'off'. ; Available units: s(econds)(default), m(inutes), h(ours), or d(ays) ; Default Value: 0 ;request_slowlog_timeout = 0 ; The log file for slow requests ; Default Value: not set ; Note: slowlog is mandatory if request_slowlog_timeout is set ;slowlog = log/$pool.log.slow ; Set open file descriptor rlimit. ; Default Value: system defined value ;rlimit_files = 1024 ; Set max core size rlimit. ; Possible Values: 'unlimited' or an integer greater or equal to 0 ; Default Value: system defined value ;rlimit_core = 0 ; Chroot to this directory at the start. This value must be defined as an ; absolute path. When this value is not set, chroot is not used. ; Note: you can prefix with '$prefix' to chroot to the pool prefix or one ; of its subdirectories. If the pool prefix is not set, the global prefix ; will be used instead. ; Note: chrooting is a great security feature and should be used whenever ; possible. However, all PHP paths will be relative to the chroot ; (error_log, sessions.save_path, ...). ; Default Value: not set ;chroot = ; Chdir to this directory at the start. ; Note: relative path can be used. ; Default Value: current directory or / when chroot ;chdir = /var/www ; Redirect worker stdout and stderr into main error log. If not set, stdout and ; stderr will be redirected to /dev/null according to FastCGI specs. ; Note: on highloaded environement, this can cause some delay in the page ; process time (several ms). ; Default Value: no ;catch_workers_output = yes ; Pass environment variables like LD_LIBRARY_PATH. All $VARIABLEs are taken from ; the current environment. ; Default Value: clean env ;env[HOSTNAME] = $HOSTNAME ;env[PATH] = /usr/local/bin:/usr/bin:/bin ;env[TMP] = /tmp ;env[TMPDIR] = /tmp ;env[TEMP] = /tmp ; Additional php.ini defines, specific to this pool of workers. 
These settings ; overwrite the values previously defined in the php.ini. The directives are the ; same as the PHP SAPI: ; php_value/php_flag - you can set classic ini defines which can ; be overwritten from PHP call 'ini_set'. ; php_admin_value/php_admin_flag - these directives won't be overwritten by ; PHP call 'ini_set' ; For php_*flag, valid values are on, off, 1, 0, true, false, yes or no. ; Defining 'extension' will load the corresponding shared extension from ; extension_dir. Defining 'disable_functions' or 'disable_classes' will not ; overwrite previously defined php.ini values, but will append the new value ; instead. ; Note: path INI options can be relative and will be expanded with the prefix ; (pool, global or /opt/php5) ; Default Value: nothing is defined by default except the values in php.ini and ; specified at startup with the -d argument ;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f [email protected] ;php_flag[display_errors] = off ;php_admin_value[error_log] = /var/log/fpm-php.www.log ;php_admin_flag[log_errors] = on ;php_admin_value[memory_limit] = 32M php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i
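One avenue worth checking, offered as an assumption rather than a confirmed fix: everything broke after a reboot, and both symptoms ("Primary script unknown" from FastCGI and php-fpm's chdir complaint at startup) are consistent with php-fpm coming up before the CIFS share is actually mounted. A sketch of the ordering workaround, with the same credentials as the fstab line above; _netdev marks the filesystem as network-dependent so it is mounted only once the network is up:

# /etc/fstab - sketch; _netdev defers the mount until networking is available
//192.168.0.199/c$/Websites/ /mnt/webfiles cifs username=Jordan,password=mypasswordhere,gid=33,uid=33,_netdev 0 0

# Quick way to test the theory after a reboot, before changing anything:
mount /mnt/webfiles
service php-fpm restart

If PHP serves the scripts again after the manual mount and restart, the problem is start order, not permissions.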

